Software patents have long been a concern in the free software development
community. For many years, though, that concern was of a theoretical
nature; few patents had actually been used (in a public way, at least) to
attack projects of interest. The mobile patent wars have changed that
situation; now systems based on free software are on the front line in a
highly visible legal battle. As a result, we are starting to feel the
sting of software patents; the situation is likely to get worse before it
In late June, a US District Court granted a request by Apple to ban the
sale of the Galaxy Nexus smartphone in the US due to the phone's alleged
infringements of Apple's patents; the phone was then duly pulled from the
Google store. That ban has since been lifted, but that should not be seen
as a victory against software patents; indeed, the contrary is true. The
only reason the Galaxy Nexus is available again is Google's short-term
capitulation; the company has simply removed the offending features from
the "Jelly Bean" Android release. Google's claim that the patents were no
longer at issue was enough to get the handset back on the market—for now.
What are those features? The biggest fight seems to be over patent #6,847,959, the
so-called "Siri patent." This patent, filed in 2000, has the following as
its first independent claim:
A method for locating information in a computer system, comprising
the steps of:
inputting an information identifier;
providing said information identifier to a plurality of plug-in
modules each using a different heuristic to locate information
which matches said identifier;
providing at least one candidate item of information from said modules; and
displaying a representation of said candidate item of information.
There are 17 dependent claims specifying that the "information identifier"
may come from a dialog box or through voice input; the "heuristics" can
involve searches on file names, file contents, local files, web pages, and
so on. They narrow the scope of the patent, but do not change its
essential nature.

Even thinking back to the year 2000, it is hard to find a great deal of
novelty in this concept. If one wants to search for something, one likely
wants to search all of the
available resources. If one wants to search multiple locations or with
multiple algorithms, one creates an API by which independent search modules can be
invoked. The method described here is obvious; it should come to the mind
of any developer skilled in the art of software development. But this is
the valuable innovation that allowed Apple to block the sales of a
competing product in one of the largest markets on the planet.
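To see why, consider a minimal sketch of the technique claim 1 describes: an identifier is handed to several plug-in modules, each applying a different heuristic, and the candidates are collected for display. The module names and data below are entirely hypothetical stand-ins, not anything from the patent or from Android.

```python
# Hypothetical sketch of the claimed method: plug-in modules, each with a
# different search heuristic, are invoked through a common interface.

def search_file_names(identifier):
    # Heuristic 1: match the identifier against (stubbed) file names.
    files = ["notes.txt", "report.pdf", "todo.txt"]
    return [f for f in files if identifier in f]

def search_web_pages(identifier):
    # Heuristic 2: match the identifier against (stubbed) web page titles.
    pages = ["Python notes", "Travel report"]
    return [p for p in pages if identifier.lower() in p.lower()]

def locate(identifier, modules):
    # Provide the identifier to every plug-in module; gather all candidates
    # so that a representation of each can be displayed.
    candidates = []
    for module in modules:
        candidates.extend(module(identifier))
    return candidates

print(locate("notes", [search_file_names, search_web_pages]))
# ['notes.txt', 'Python notes']
```

Any developer asked to search several sources at once would arrive at something of this shape; that is the crux of the obviousness argument.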
Google's response has been to cripple the functionality of its Android
search bar, which will be restricted to searching the net only. Anybody
running the Jelly Bean release will see that restricted functionality; it
will also be pushed out to 4.0-based ("Ice Cream Sandwich") devices as an
"update." And that is how things are likely to stand until the case runs
its course, a process that could take years.
So, to put it bluntly: software patents have allowed a manufacturer of
highly closed devices to hold
one of the most open handsets available hostage and to block it from the
market entirely. They have allowed said corporation to force the removal
of obvious functionality from a device (mostly) based on free software. To
think that this kind of thing won't happen again, or that it won't strike
code that is more interesting to the free software community, is to be
optimistic indeed. That does not seem to be the way the wind is blowing.
It would be nice to think that, somehow, the software patent problem will
be solved in the near future. There are occasional encouraging signs, such
as US appeals court judge Richard
Posner tossing out another Apple case and speaking out against software
patents. But actual attempts to reform the patent system never seem to get
anywhere.

What seems more likely is that the major players in the mobile industry
will eventually come together around some sort of patent pool that lets
them get on with their businesses. Perhaps this will be a voluntary
action, or perhaps there will be a certain amount of governmental pressure
applied first. Either way, the end result is likely to be a regime in
which the established players are free to get on with the process of making
money while new companies, like the just-announced Jolla Ltd, face
additional barriers to entry. Such a situation is not likely to be good
for the industry or for free software.
But, then, one never knows. As bogus software patents threaten to take
down products and services that people actually care about, we may yet see
an increase in support for reforms. Perhaps the best strategy against
software patents is the one we are already executing: make the best free
system we can and ensure that it is widely diffused into systems that the
world depends on. As patent litigation increasingly turns into a general
denial of service attack against the economy as a whole, tolerance for the
system may wane. One can always hope, anyway.
Mozilla surprised Thunderbird fans on July 6 when it announced that it
was pulling developers from the project. Mozilla says it will
continue to test, patch, and maintain future releases — including
stability and security fixes — while letting community members
guide development of new features. But that promise did not prevent
a slew of headlines reporting that the email client was being put out
to pasture. A number of Mozilla developers have subsequently commented
on the decision, helping to clarify the outlook for the future
somewhat, if not completely.
Mozilla chief Mitchell Baker posted the announcement
on her blog, starting with the question "is Thunderbird a likely
source of innovation and of leadership in today’s Internet life? Or
is Thunderbird already pretty much what its users want and mostly
needs some on-going maintenance?" The answer from Mozilla's
upper echelons, evidently, is that the desktop email client is
essentially feature-complete, and not likely to experience further
innovations. Consequently, Mozilla as a whole is better off directing
its engineering resources to its current "priority" projects.
Baker's post was interpreted by many to mean that Mozilla was halting
development on Thunderbird, perhaps offloading control of the project
to the open source community or otherwise attempting to get rid of the
project without saying that it was getting rid of the
project. Thunderbird would hardly be the first open source project to
suffer such a fate, so a pessimistic take on the announcement is
understandable. But the details that have emerged since the
announcement paint a different picture.
On July 7, Jb Piacentino posted
an announcement to the tb-planning mailing list which covered the same
ground as Baker's post. In it, he assured readers that the move was
not the cessation of Thunderbird development:
We're not "stopping" Thunderbird, but proposing we adapt
the Thunderbird release and governance model in a way that allows both
ongoing security and stability maintenance, as well as community-driven
innovation and development for the product.
Thunderbird developer Ludovic Hirlimann said
on his blog that Thunderbird 14, 15, and 16 would all be released
before the new plan takes effect, and that the new model's practical
effect would be that "we won’t have the time to work on
specking, developing and testing new features," although the
team would still participate in the development process.
Details about the plan are described
on the Mozilla wiki. The plan draws a distinction between the normal
Thunderbird and the extended support release (ESR) version. Mozilla
will focus on the Thunderbird ESR releases and associated security
updates, while allowing other contributors to work on the standard
Thunderbird trunk. Mozilla will continue to provide the testing and
release infrastructure, and Mozilla staffers will serve as the release
team. But the Mozilla staffers will not be tasked with
introducing new features. ESR releases are guaranteed to receive
security updates for one year, rolled out on the same schedule as Firefox ESR.

Despite Piacentino's reassurances and the wiki's lengthier
explanation, some on the list still interpreted the news in starkly
different terms. For example, while Ben Bucksch took
it to mean an end-of-life announcement, Charles Tanstaafl read
the announcement to mean that Mozilla employees would "focus on
stability and fixing many of the long standing bugs".
Others wanted more specifics on the new process. Kai Engert asked
whether the arrangement meant that Thunderbird releases would be kept
in sync with Firefox on shared components (including Gecko):
The one thing I'm worried about is regressions.
Firefox and Thunderbird share application level code that is responsible
for the correct functioning of security protocols.
If a change is made because it's needed by Firefox, it's easy to forget
that Thunderbird may rely on the previous behaviour, and the change
might cause a regression in
functionality/usability/correctness/completeness for Thunderbird.
This has happened in the past. If Thunderbird becomes even less of a
priority for the Mozilla project, with even fewer people available to
work on cleanup and adjustments to newer Gecko core, then there's the
risk that such regressions might occur more frequently in the future.
Concerns raised also included the fate of in-progress development work
(such as the long awaited rewrite of Thunderbird's address book) and
whether or not the outside community would be able to mentor Google
Summer of Code (GSoC) projects, which have been a dependable source of
new code in the past. The community has indeed played a major part in recent innovations, including the new "conversations" view
extension, MIME handling, and the recent removal
of RDF as a dependency. Mozilla's Mark Banner replied
that Thunderbird's annual ESR releases would synchronize with the
then-current Firefox release (including any Gecko updates), but that
the intervening six-week security update releases would not roll in
recent changes. The bulk of in-progress projects are slated to be
completed before the new process begins, he added. Finally, he pointed
out that Thunderbird community members had mentored past GSoC
projects, so the process change should not interfere.
Email versus the web
Several Mozilla staffers commented about the announcement in blog posts
of their own. Thunderbird developer Joshua Cranmer observed:
Thunderbird has not been a priority for Mozilla since before I started
working on it. There really isn't any coordination in mozilla-central
to make sure that any planned "featurectomies" don't impact
Thunderbird—we typically get the same notice that add-on authors get,
despite being arguably the largest binary user of the codebase outside
of mozilla-central. Given also that the Fennec and B2G codebases were
subsequently merged into mozilla-central (one of the arguments I heard
about the Fennec merge was that "it's too difficult to maintain the
project outside of mozilla-central") and that comm-central remains
separate, it should be quickly clear how much apathy for Thunderbird
existed prior to this announcement.
Cranmer did not bemoan this situation, however. He saw it as
natural considering the growth of mobile email, and because
"Mozilla's primary goal is to promote the Open Web." The
assertion that the web — but not email — is
Mozilla's central mission was also touched on in official channels.
The wiki page states that the priority projects getting Mozilla's
attention are "important web and mobile" efforts,
"while Thunderbird remains a pure desktop only email
client." Baker's blog post similarly noted that the project
has "seen the rising popularity of Web-based forms of
communications representing email alternatives to a desktop solution."
But Bucksch took
issue with that notion in considerable detail, observing that if
Thunderbird is losing out to web-based email, that constitutes a loss,
because "Webmail is definitely not open. You're totally
dependent on the features and limitations the provider offers [...]
Privacy goes out the door with webmail. Even integrity: The ISP can
even alter the message contents years after the fact, and I have no
way to verify or prove this."
Mozilla's stated mission is "to
promote openness, innovation and opportunity on the web,"
but Bucksch points out that its manifesto
stakes out considerably broader principles about the openness of the
Internet as a whole. Side-stepping for the moment why the
organization has a separate "mission" statement and "manifesto" at all
(much less inconsistent ones), the point is well-taken. If Thunderbird
has failed to grab a majority of the world's email client share, what
users are left with are proprietary OS-vendor clients on the desktop,
or proprietary software services on the web. Mozilla Labs briefly toyed with a webmail client called Raindrop, but shuttered it before it left the experimental phase.
Perhaps competition from webmail clients is a side issue, and
Mozilla is primarily readying itself to make a greater play for what
it sees as the new email battleground on mobile devices, with its
Boot-to-Gecko effort (which was recently renamed
Firefox OS). Andrew Sutherland, a developer on Mozilla's forthcoming
Firefox OS email client, told
the tb-planning list that he and other team members were list
subscribers, and were at least open to the possibility of
collaborating with the Thunderbird community on compatibility features.
Despite the doomsday predictions that leaked out following the initial
announcement, Mozilla's plans indicate that it is committed to testing
and releasing Thunderbird for at least the next year or so (depending
on the final release date of Thunderbird ESR 17). The distant future is
less clear, but that could be said of many other projects. Anyone who
doubts the ability of the Mozilla volunteer community to maintain a
product needs only to look at SeaMonkey, which
continues to live on long after Mozilla lost interest. Still,
Mozilla's second-class treatment of its email client is troubling for
other reasons. Email itself may be relatively static, but IM, VOIP,
and other communication methods are coming and going all the time, and
Mozilla has not offered a consistent client story for them. If
Firefox is Mozilla's only product, users' hope for an open web boils
down to "hopefully the service providers will write open source web
apps for foo" — which seems like a long shot.
The Qt Project was launched in October
2011 to foster the open development of the Qt toolkit. Qt is the underlying
framework used by KDE, of course, so Akademy attendees are understandably
interested in the status and progress of the Qt Project. Thiago Macieira
provided that update in a surprisingly well-attended keynote—surprising
because it was early on
the day after a social event that stretched into the wee hours.
The Qt Project
The Qt Project is based on four principles, Macieira said: fairness,
inclusiveness, transparency, and meritocracy. Fairness means that the
project is open to
everyone, while inclusiveness means that people can just start
participating, as there are no barriers in place. Transparency covers the
decision-making process, which is
completely open. Discussions happen on the mailing list, whose
participants ultimately make any decisions. When discussion takes place
elsewhere, it needs to be posted to the list for others to review and
comment on. Finally, a meritocracy means that contributors who have "shown
their skills and dedication" are given commit rights, and are the ones who
get to make the decisions for the project. That way, the most experienced
people get the most deciding power, he said.
The project has seen quite a bit of activity in the eight months it has
existed. Over that period, there have been 18,000 commits to the code
base, with an average of 412 per week. There were some dips in the graph
that he showed, for Christmas, Easter, Norwegian national day, and one that
he called "Elop". The latter correlated with Nokia CEO Stephen Elop's
statements that have caused concern about the future of Qt within the
company. When an
audience member suggested banning Christmas to increase productivity,
Macieira chuckled and said that banning Elop would be more effective.
Commit numbers do not show the whole story, though. To get a sense for the
community that is coming together around the project, Macieira also looked
at how many different people are contributing. Over the eight months, 481
different email addresses were used for contributions—averaging 140
different email addresses per week. There are some people who use
more than one address, of course, but those numbers give an idea of the
number of contributors to the project.
Macieira put up a flowchart describing how to contribute to
the project. It looked relatively complex, with lots of "ladders and
chutes", but contributing is actually fairly straightforward, he said. He
pointed to the project wiki for
information on what is needed.
Code contributions are managed with the project's Gerrit code review instance.
Using the dashboard in that tool, one can see the status of current code
reviews, look at comments that have been made on the code, see the diffs
for the changes, and so on.
Code that has passed review from both humans and a bot that checks for
problems can then be staged from the Gerrit system. That moves the code
into an integration phase where it is merged into the mainline, compiled,
and tested. Two and a half hours later, contributors will get an email
with the results of the integration. It is "very simple", he said, and all
of the "eleven steps" from the flowchart "boil down" to this process.
KDE and Qt
Qt and KDE are "greater together", Macieira said, and he would like to see
the two communities merge into one large community. Qt provides the
libraries and framework, while KDE is building applications on top of
that. If KDE needs new features, they can be put into Qt as there is now a
"really nice way to make that happen". In the past, there had been
obstacles to moving functionality into Qt, but those are gone.
He gave several examples of people working on moving KDE functionality into
Qt. The KMimeType class has recently been added to Qt as QMimeType, which
was essentially just moving and renaming the code. Other KDE classes have
required adaptations prior to moving into Qt, including KStandardDirs and
KTempDir. David Faure has been doing much of that
work, but he is not alone as Jon Layt has been working on moving the KDE
printing subsystem into Qt, while Richard Moore has been adding some of the
KDE encryption code (e.g. SSL sockets) to Qt.
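For a sense of what a MIME-type class provides, Python's standard mimetypes module does a similar job: mapping a file name to a content type. It is used here purely as an illustration and has no connection to the Qt or KDE code.

```python
# Illustration only: a MIME-type lookup API, as provided by Python's
# standard library, maps a file name to its likely content type.
import mimetypes

mime, encoding = mimetypes.guess_type("photo.png")
print(mime)  # image/png
```

QMimeType goes further (it can also inspect file contents), but the core service is the same kind of lookup.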
Those are just three KDE developers who have started working in the
Qt upstream, Macieira said. There is a lot more code that could make the
move, including things like KIO (KDE I/O), the KDE command-line parser, and
more.

Beyond just code contributions, the project can use help in lots of areas.
One can be an advocate and help spread the word about the Qt Project.
Reporting bugs (and helping to fix some of them) is another area.
Documentation, translations, artwork, and so on also need people to work on
them; there is "a lot that you can do", Macieira said.
Developers can also start reviewing the code that is being proposed. It is
easy to get started with the code review system after creating an account.
What is really needed is "more people" and KDE is an obvious
source for some of those people. The two projects should "work more
closely to solve our objectives together", he concluded.
Asked about the filtering capabilities in the code review system to find
patches of interest for review, Macieira admitted that the search and
filtering functionality could use some work. There are ways to watch
specific projects by a regular expression match, but overall it lacks some
features that would be useful.
Another audience member asked about the statistics and, in particular,
whether they could be quantified based on whether the person considers
themselves a KDE developer. That is difficult to do, Macieira said,
because people wear multiple hats, but there is definitely value in doing so.
The last suggestion was for a joint KDE/Qt conference. Macieira
agreed that there would be "a lot of value in getting the two
communities together", but that it wouldn't work for this year. He would
like to see that happen next year, perhaps as part of the Qt Contributors
Summit. The summit is a hands-on working conference, without presentations
like Akademy has, he said, so a separate event might be the right
way to go.
After years of trying to turn Qt into a more open project with a
community orientation, it is nice to see that effort start to come to fruition.
Given the uncertainty in the future of Qt at Nokia, having an independent
project, with contributions from others outside of the company, will help
to ensure the future of the toolkit. Since KDE is one of the bigger users
of Qt, it only makes sense for the two projects to work closely
together—exactly what Macieira was advocating.
[ The author would like to thank KDE e.V. for travel assistance to Tallinn
for Akademy. ]
Page editor: Jonathan Corbet