The 2009 linux.conf.au was held in Hobart, on the island of Tasmania. The
setting for LCA - typically on a university campus - is always nice, but it
is hard to imagine a more beautiful place to meet than Hobart. As an added
bonus, the mild temperatures offered a nice complement to both (1) the
brutally high temperatures being felt on the Australian mainland, and
(2) the rather severe winter conditions awaiting your editor on his
return. A number of talks from LCA 2009 have been covered in separate
articles; here your editor will summarize a few other things worth mentioning.
Prior to the event, your editor heard a few people express disappointment
over the choice of keynote speakers this time around. As it happens, at
least some of that disappointment was premature. It is true that things got off to a bit
of a slow start on the first day, when Thomas Limoncelli delivered a
hand-waving talk about "scarcity and abundance." Thomas is a good and
entertaining speaker, but he seemed to think that he was addressing a
gathering of system administrators, so his talk missed the mark.
Unfortunately, your editor got waylaid and missed Angela Beesley's keynote
on the second day.
The speaker for the final day was Sun's Simon Phipps. Your editor entered
this talk with low expectations, but was pleasantly surprised. Simon is an
engaging speaker, and he would appear to understand our community well. As
might be expected, he glossed over some of Sun's more difficult community
interactions, choosing instead to focus on more positive things and the
interaction between the community and companies in general.
Simon's thesis is that we're heading into a "third wave" of free software.
The first wave started, perhaps, before Richard Stallman wrote the GNU
Manifesto; Simon notes that IBM's unbundling of the software for its
nascent PC offering (in response to antitrust problems) played a huge role
in defining the software market of the 1980s. But the Free Software
Foundation brought a lot of things into focus and started the ball rolling
for real. The second wave came about roughly with the founding of the
Apache Software Foundation; that was when the world came to understand that
free software developers can produce high-quality code. He gave Ubuntu as
an example, and noted that even the Gartner Group has come to see some
value in free software.
The third wave is coming as businesses really figure out how to work with
free software. In his view, the right way is to do everything possible to
drive adoption of the software; again, Canonical was held up as an example
of how to do it right. One should only sell licenses, he says, to
businesses that haven't figured out the true value of free software. Why,
he asks, should a company which understands things buy RHEL or SLES
licenses? (It's worth noting that a Red Hat representative took issue with
that comment, not without reason).
"Third wave" businesses should work with something like a subscription
model, selling support services as needed. Services like defect resolution
are best provided by people who have commit privileges in the project
involved. Businesses can make upgrades easier, provide production support
tools, or, if really needed, sell indemnity guarantees.
Some concerns were raised, the first of which was licenses. While noting
wryly that his company "has done lots of experimentation" with software
licenses, Simon identified license proliferation as a big problem. In the
future, he thinks, the problems associated with proliferation will tend to
drive projects toward a single license - most likely the GPL.
Another problem is, of course, software patents. Simon says we shouldn't
worry too much about patent trolls, though; there is not much we can do
about them in any case. A much bigger concern is companies (unnamed) which
are working as members of the community but which are, simultaneously,
filing a stream of "parallel patents" covering the work they do. Should
one of these companies turn against the community, it could create all
kinds of problems. For this reason, Simon is a big fan of licenses like
GPLv3 or the Apache license which include patent covenants. Every company
which engages the community under the terms of such a license gives up some
of its patent weaponry in the process. The more companies we can bring
into this sort of "patent peace," the better off we will be.
Even so, he says, the day may come when the community needs a strong patron
to defend it against a determined patent attack.
Simon then asked the audience to consider what it is that makes a company a
true friend of free software. Is it just a matter of strapping on a
penguin beak, as the Tasmanian devil has done to become the LCA2009 mascot?
The real measure of friendship is contributions to the community; Sun, he
pointed out, has done a lot in that regard. In closing, Simon's message to
"third wave" businesses was to keep freedom in mind. There is a place, he
says, for both pragmatism and radical idealism. The biggest enemy of
freedom is a happy slave; he held up his Apple notebook as an example.
In response to questions, Simon noted that the license problems with the
sunrpc code will hopefully be fixed soon. The problem is that this code is 25
years old and there's nobody around who worked on it at the time, so
determining its origins is hard. He also said that "pressure is mounting"
to release the ZFS filesystem under a GPL-compatible license. And he
suggested that, eventually, Red Hat will have to start selling support
services for Fedora, since that is the distribution that people are
actually using.
Freedom was also at the top of the agenda during Rob Savoye's talk. He
discussed the launch of the Open Media
Now! Foundation, which has been formed to address the problem of codec
patents head-on. As Rob puts it, we all create content, and we should be able
to give copies of our own content to anyone. In addition, the data we
create never goes away, but our ability to read that data just might.
Plus, he's simply fed up with hearing complaints that gnash does not work
with YouTube videos; it works just fine, but they cannot distribute gnash
with the requisite codecs.
To deal with this problem, the Foundation is starting a determined effort
to gather prior art which can apply to existing codec patents. With any
luck, some of the worst of them can be invalidated. But just as much
effort is going into figuring out ways to work around codec patents. Most
patents are tightly written; it's often possible to find a way to code an
algorithm which falls outside of a given patent's claims. When a proper
workaround is found (and determining "proper" is a job for a lawyer), the
relevant patent can thereafter be ignored. It is a far easier, more
certain, and more cost-effective way of dealing with software patents, so
Rob thinks the community should be putting much more effort into finding
workarounds. He hopes that people will join up with Open Media Now and
help to make that happen.
Matthew Wilcox managed to fill a room with a standing-room-only crowd (and
not much standing room, at that) despite being scheduled at the same time
as Andrew Tridgell. His topic - solid-state drives - is clearly
interesting to a lot of people. Matthew discussed some of the issues with
these drives, many of which have been covered here in the past. Those
problems are being slowly resolved by the manufacturers, but there is a
different class of problems which is now coming to the fore. There are
certain kinds of kernel overhead which one doesn't notice when an I/O
operation takes milliseconds to complete. When that operation completes in
microseconds, though, that kernel overhead can become a bottleneck. So he
has been working on finding these problems and fixing them, but it is going
to take a while. He made the interesting observation that, at SSD speeds,
block I/O starts to look more like network traffic, and the kernel needs to
adopt some of the same techniques to be able to keep up with the hardware.
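The arithmetic behind that observation is easy to sketch. The figures below are illustrative assumptions for the sake of the example (they were not given in the talk): roughly 20 microseconds of per-request kernel overhead, a ~10ms rotating-disk access, and a ~100µs SSD access.

```python
def overhead_fraction(device_us: float, kernel_us: float) -> float:
    """Fraction of total I/O completion time spent in kernel overhead."""
    return kernel_us / (device_us + kernel_us)

# Assumed, illustrative numbers -- not measurements.
KERNEL_US = 20.0

hdd = overhead_fraction(10_000.0, KERNEL_US)  # overhead lost in the noise
ssd = overhead_fraction(100.0, KERNEL_US)     # overhead becomes a bottleneck

print(f"HDD: {hdd:.1%} of completion time is kernel overhead")
print(f"SSD: {ssd:.1%} of completion time is kernel overhead")
```

With these assumed numbers, overhead that was a fraction of a percent on a rotating disk eats roughly a sixth of every SSD operation, which is why work that was invisible before is suddenly worth optimizing.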
The Penguin Dinner auction was back this year, after having been dropped
from the schedule in 2008. The auction is always an interesting event,
often involving people deciding to spend a few thousand dollars on a
T-shirt after having consumed enough alcohol to make any such decision
especially unwise. This year's auction beneficiary was the Save The Tasmanian Devil
organization, which came away from the event somewhat richer than it
had hoped. After a long series of bids, matching offers, and simple
passing-the-hat in the crowd, a large consortium of bidders managed to get
a total of nearly AU$40,000 pledged to this cause. There was one
condition, though: Bdale Garbee not only had to lose his beard, but it had
to be done at the hands of Linus Torvalds.
The "free as in beard" event happened on the last day of the conference.
As was noted in the live Twitter feed being projected in the room, it was
most surreal to sit in a room of 500 people all quietly watching a man
shave. Bdale's wife, who took the picture which was nominally the object
being auctioned, has made it clear that he will not be allowed to attend
LCA unaccompanied again.
In 2010, linux.conf.au will, for the second time ever, not be held in
Australia. The winning bid for next year came from Wellington, New
Zealand - a setting which rivals Hobart in beauty. Mark your calendars for
January 18 to 23; it should be a good time.
For years, linux.conf.au has been one of the best places to go to catch up
with the state of the X Window System; the 2009 event was no exception.
There was a big difference this time around, though. X talks have
typically been all about the great changes which are coming in the near
future. This time, the X developers had a different story: most of
those great changes are done and will soon be heading toward a distribution
near you.
Keith Packard's talk started with that theme. When he spoke at
LCA2008, there were a few missing features in X.org. Small things like
composited three-dimensional graphics, monitor hotplugging, shared
graphical objects, kernel-based mode setting, and kernel-based
two-dimensional drawing. One of the main things holding all of that work
back was the lack of a memory manager which could work with the graphics
processor (GPU). It was, Keith said, much like programming everything in
early Fortran; doing things with memory was painful.
That problem is history;
X now has a kernel-based memory management system. It can be used
to allocate persistent objects which are shared between the CPU and the
GPU. Since graphical objects are persistent, applications no longer need to make backup
copies of everything; these objects will not disappear. Objects have
globally-visible names, which, among other things, allows them to be shared
between applications. They can even be shared between different APIs, with
objects being transformed between various types (image, texture, etc.) as
needed. It looks, in fact, an awful lot like a filesystem; there may
eventually be a virtual filesystem interface to these objects.
This memory manager is, of course, the graphics execution manager,
or GEM. It is new code; the developers first started talking about the
need to start over with a new memory manager in March 2008. The first
implementation was posted in April, and the code was merged for the 2.6.28
kernel, released in December. In the process, the GEM developers dropped a
lot of generality; they essentially abandoned the task of supporting BSD
systems, for example ("sorry about that," says Keith). They also limit
support to some Intel hardware at this point. After seeing attempts at large,
general solutions fail, the GEM developers decided to focus on getting one
thing working, and to generalize thereafter. There is work in progress to
get GEM working with ATI chipsets, but that project will not be done for a
little while yet.
GEM is built around the shmfs filesystem code; much of the fundamental
object allocation is done there. That part is easy; the biggest hassle
turns out to be in the area of cache management. Even on Intel hardware,
which is alleged to be fully cache-coherent, there are caching issues which
arise when dealing with the GPU. Moving data between caches is very
expensive, so caching must be managed with great care. This is a task they
had assumed would be hard. "Unfortunately," says Keith, "we were right."
One fundamental design feature of GEM is the use of global names for
graphical objects. Unlike previous APIs, GEM does not deal with physical
addresses of objects in its API. That allows the kernel to move things
around as needed; as a result, every application can work with the
assumption that it has access to the full GPU memory aperture. Graphical
objects, in turn, are referenced by "batch buffers," which contain
sequences of operations for the GPU. The batch buffer is the fundamental
scheduling unit used by GEM; by allowing multiple applications to schedule
batch buffers for execution, the GEM developers hope to be able to take
advantage of the parallelism of the GPU.
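The global-name mechanism is easiest to see with a toy model. The class below is purely conceptual Python, not the real interface (in the actual kernel, these operations are DRM ioctls, and objects are first created as process-local handles before being given a global name); it only illustrates why global names make cross-process sharing possible.

```python
# Toy model of GEM-style global object naming (conceptual only; the
# real interface is a set of kernel ioctls, not this Python class).

class ToyGem:
    """Globally named buffer objects shared between 'processes'."""

    def __init__(self):
        self._objects = {}    # global name -> backing storage
        self._next_name = 1

    def create(self, size: int) -> int:
        """Allocate an object and return its global name."""
        name = self._next_name
        self._next_name += 1
        self._objects[name] = bytearray(size)
        return name

    def open(self, name: int) -> bytearray:
        """Look up an object by global name, as a second process
        sharing the buffer would."""
        return self._objects[name]

gem = ToyGem()
name = gem.create(4096)   # "process A" allocates a buffer object
buf_a = gem.open(name)
buf_a[0] = 0xFF           # A renders into it

buf_b = gem.open(name)    # "process B" opens the same global name
print(buf_b[0] == buf_a[0])  # both reference the same storage
```

Because applications hold names and handles rather than physical addresses, the kernel remains free to relocate the underlying storage at any time, which is exactly the property described above.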
GEM replaces the "balkanized" memory management found in earlier APIs.
Persistent objects eliminate a number of annoyances, such as the dumping of
textures at every task switch. What is also gone is the allocation of the
entire memory aperture at startup time; memory is now allocated as needed.
And lots of data copying has been taken out. All told, it is a much
cleaner and better-performing solution than its predecessors.
Getting this code into the kernel was a classic example of working well
with the community. The developers took pains to post their code early,
then they listened to the comments which came back. In the process of
responding to reviews, they were able to make some internal kernel API
changes which made life easier. In general, they found, when you actively
engage the kernel community, making changes is easy.
The next step was the new DRI2 X extension, intended to replace the (now
legacy) DRI extension. It has only three requests, which handle connecting
to the hardware and allocating buffers. The DRI shared memory area (and its
associated lock) have been removed, eliminating a whole class of problems.
Buffer management is all done in the X server; that makes life a lot easier.
Then, there is the kernel mode-setting (KMS) API - the other big missing
piece. KMS gets user-space applications out of the business of programming
the adapter directly, putting the kernel in control. The KMS code (merged
for 2.6.29) also implements the fbdev interface, meaning that graphics and
the console now share the same driver. Among other things, that will let
the kernel present a traceback when the system panics, even if X is
running. Fast user switching is another nice feature which falls out of
the KMS merge.
KMS also eliminates the need for the X server to run with root privileges,
which should help security-conscious Linux users sleep better at night.
The X server is a huge body of code which, as a rule, has never been
through a serious security audit. It's a lot better if that code can be
run in an unprivileged mode.
Finally, KMS holds out the promise of someday supporting non-graphical uses
of the GPU. See the GPGPU site for
information on the kinds of things people try to do once they see the GPU
as a more general-purpose coprocessor.
All is not yet perfect, naturally. Beyond its limited hardware support,
the new code also does not yet solve the longstanding "tearing" problem.
Tearing happens when an update is not coordinated with the monitor's
vertical refresh, causing half-updated screens. It is hard to solve
without stalling the GPU to wait for vertical refresh, an operation which
kills performance. So the X developers are looking at ways to
context-switch the GPU. Then buffer copies can be queued in the kernel and
caused to happen after the vertical refresh interrupt. It's a somewhat
hard problem, but, says Keith, it will be fixed soon.
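The approach Keith described can be sketched as deferred buffer swaps. The toy event loop below is illustrative Python only, not the actual X or kernel code: swap requests made mid-frame are queued and carried out at the vertical-refresh "interrupt", so the application never stalls waiting for the refresh.

```python
# Toy sketch of vblank-synchronized buffer swaps: requests made
# mid-frame are queued and executed only at the next vertical
# refresh, avoiding both tearing and a GPU stall.

class ToyDisplay:
    def __init__(self):
        self.front = "A"      # buffer currently being scanned out
        self.pending = []     # swaps queued until the next vblank

    def request_swap(self, new_front: str) -> None:
        """Queue a swap; returns immediately rather than stalling."""
        self.pending.append(new_front)

    def vblank_irq(self) -> None:
        """At vertical refresh, perform the most recent queued swap."""
        if self.pending:
            self.front = self.pending[-1]
            self.pending.clear()

d = ToyDisplay()
d.request_swap("B")
print(d.front)    # still "A": mid-frame, nothing half-updated
d.vblank_irq()
print(d.front)    # now "B": the swap landed on the refresh boundary
```

The design point is that the swap is asynchronous from the application's perspective; the cost of waiting for the refresh is paid by the queue, not by a stalled GPU.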
There is reason to believe this promise. The X developers have managed to
create and merge a great deal of code over the course of the last year.
Keith's talk was a sort of a celebration; the multi-year process of
bringing X out of years of stagnation and into the 21st century is coming
to a close. That is certainly an achievement worth celebrating.
Postscript: Keith's talk concerned the video output aspect of the X
Window System, but an output-only system is not particularly interesting. The other
side of the equation - input - was addressed by Peter Hutterer in a
separate session. Much of the talk was dedicated to describing the current
state of affairs on the input side of X. Suffice to say that it is a
complex collection of software modules which have been bolted on over the
years; see the diagram in the background of the accompanying picture.
What is more interesting is where things are going from here. A lot of
work is being done in this area, though, according to Peter, only a couple
of developers are doing it. Much of the classic
configuration-file magic has been superseded by HAL-based autoconfiguration
code. The complex sequence of events which follows the attachment of a
keyboard is being simplified. Various limits - on the number of buttons on
a device, for example - are being lifted. And, of course, the
multi-pointer X work (discussed
at LCA2008) is finding its way into the mainline X server.
The problems in the input side of X have received less attention, but it is
still an area which has been crying out for work for some time. Now that
work, too, is heading toward completion. For users of X (and that is
almost all of us), life is indeed getting better.
As described in "Plugging into GCC" last October, the runtime library code
used by the GCC compiler (which implements much of the basic functionality
that individual languages need for most programs) has long carried a
license exemption allowing it to be combined with proprietary software. In
response to the introduction of version 3 of the GPL and the desire to add
a plugin infrastructure to GCC, the FSF has now changed the licensing of
the GCC runtime code. The FSF wishes to modernize this bit of licensing
while, simultaneously, using it as a defense against the distribution of
proprietary GCC plugins.
Section 7 of GPLv3
explicitly allows copyright holders to exempt recipients of the software
from specific terms of the license. Interestingly, people who redistribute
the software have the option of removing those added permissions. The new
GCC runtime library license is GPLv3, but with an additional permission of
the sort described in Section 7. That permission reads:
You have permission to propagate a work of Target Code formed by
combining the Runtime Library with Independent Modules, even if
such propagation would otherwise violate the terms of GPLv3,
provided that all Target Code was generated by Eligible Compilation
Processes. You may then convey such a combination under terms of
your choice, consistent with the licensing of the Independent Modules.
Anybody who distributes a program which uses the GCC runtime, and
which is not licensed under GPLv3, will depend on this exemption, so
it is good to understand what it says. In short, it allows the runtime to
be combined with code under any license as long as that code has been built
with an "Eligible Compilation Process."
The license defines a "Compilation Process" as the series of steps which
transforms high-level code into target code. It does not include anything
which happens before the high-level code hits the compiler. So
preprocessors and code generation systems are explicitly not a part
of the compilation process. As for what makes an "Eligible Compilation
Process," the license reads:
A Compilation Process is "Eligible" if it is done using GCC, alone
or with other GPL-compatible software, or if it is done without
using any work based on GCC. For example, using non-GPL-compatible
Software to optimize the GCC intermediate representation would not
qualify as an Eligible Compilation Process.
This is where the license bites users of proprietary GCC plugins. Since
those plugins are not GPL-compatible, they render the compilation process
"ineligible" and the resulting code cannot be distributed in combination
with the GCC runtime libraries. This approach has some interesting
implications:
- "GPL-compatible" is defined as allowing combination with GCC. So a
compilation process which employs a GPLv2-licensed module loses the
exemption.
- This must be the first free software license which discriminates on
the basis of how other code was processed. Combining with proprietary
code is just fine, but combining with free software that happens to
have been run through a proprietary optimizing module is not allowed.
It is an interesting extension of free software licensing conditions
that could well prove to have unexpected results.
- While the use of a proprietary GCC module removes the license
exemption, using a 100% proprietary compiler does not. As long as the
compiler is not derived from GCC somehow, linking to the GCC runtime
library is allowed.
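The eligibility rule and its implications can be encoded as a small predicate. This is a toy illustration of one reading of the license text, written for clarity; it is emphatically not legal advice, and the tool names are made up.

```python
# Toy predicate capturing one reading of the "Eligible Compilation
# Process" rule. Each step is (tool_name, based_on_gcc,
# gplv3_compatible) describing everything that touches the code
# between high-level source and target code.

def is_eligible(steps) -> bool:
    uses_gcc = any(based_on_gcc for _, based_on_gcc, _ in steps)
    if not uses_gcc:
        # Done entirely "without using any work based on GCC": eligible,
        # even if every tool involved is proprietary.
        return True
    # If GCC is involved, every step must be GPL(v3)-compatible.
    return all(compat for _, _, compat in steps)

# GCC alone: eligible.
print(is_eligible([("gcc", True, True)]))                              # True
# GCC plus a proprietary optimizer on the IR: ineligible.
print(is_eligible([("gcc", True, True), ("secret-opt", False, False)]))  # False
# GCC plus a GPLv2-only pass: also ineligible, since GPLv2 is not
# GPLv3-compatible.
print(is_eligible([("gcc", True, True), ("gplv2-pass", False, False)]))  # False
# A fully proprietary compiler not derived from GCC: eligible.
print(is_eligible([("closedcc", False, False)]))                       # True
```

The asymmetry in the last two cases is exactly the oddity noted above: the license cares about how the code was processed, not about how free the toolchain as a whole is.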
The explanatory material released with the license change includes this
text:
However, the FSF decided long ago to allow developers to use GCC's
libraries to compile any program, regardless of its license.
Developing nonfree software is not good for society, and we have no
obligation to make it easier. We decided to permit this because
forbidding it seemed likely to backfire, and because using small
libraries to limit the use of GCC seemed like the tail wagging the dog.
With this change, though, the FSF is doing exactly that: using its "small
libraries" to control how the to-be-developed GCC plugin mechanism will be
used. It will be interesting to see how well this works; if a vendor is
truly determined to become a purveyor of proprietary GCC modules, the
implementation of some replacement "small libraries" might not appear to be
much of an obstacle. In that sense, this licensing truly could backfire:
it could result in the distribution of binaries built with both proprietary
GCC modules and a proprietary runtime library.
But, then, that depends on the existence of vendors wanting to distribute
proprietary compiler plugins in the first place. It is not entirely clear
that such vendors exist at this point. So it may well end up that the
runtime exemption will not bring about any changes noticeable by users or
developers, most of whom never thought about the runtime exemption in its
previous form either.
Buried deep inside a recent interview
with Linus Torvalds was the revelation that he had moved away from KDE
and back to GNOME—which he famously abandoned in 2005. The
cause of that switch was the problems he had with KDE 4.0, a reaction that
seems to be widely shared. Various media outlets,
Slashdot in particular, elevated Torvalds's switch to the headline of the
interview. That led, of course, to some loud complaints from the KDE community,
but also a much more measured response
from KDE project lead Aaron Seigo. While it is somewhat interesting to
know Torvalds's choice for his desktop, there are other, more important issues
that stem from the controversy.
Never one to mince words, Torvalds is clear in his unhappiness: "I
used to be a KDE user. I thought KDE 4.0 was such a disaster, I switched to
GNOME." But, he does go on to acknowledge that he understands,
perhaps even partially agrees with, the reasons behind it:
[...] but I think they did it
badly. They did so [many] changes, it was a half-baked release. It may turn
out to be the right decision in the end, and I will retry KDE, but I
suspect I'm not the only person they lost.
There has been a regular stream of reports of unhappy KDE users, with many
folks switching to GNOME due to KDE 4.0 not living up to their
expectations—or even being usable at all. Part of the problem stems
from Fedora's decision to move to KDE 4 in Fedora 9, but not give users a
way to fall back to KDE 3.5. When Torvalds upgraded to Fedora 9, he got a
desktop that "was not as functional", leading him to go back
to GNOME—though, he hates "the fact that my right button
doesn't do what I want it to do", which was
one of the reasons
he moved to KDE in the first place.
One facet of the problem, as Seigo points out, is the race between distributions
to incorporate the most leading—perhaps bleeding—edge software
versions. It is clear that KDE did not do enough to communicate what it
thought 4.0 was: "KDE 4.0.0 is our 'will eat your children' release
of KDE4, not the next release of KDE 3.5" is how Seigo described
it when it was released. That message, along with the idea that KDE 4
would not be ready to replace 3.5 until 4.1 was released, didn't really
propagate, though. It was hard for users, distributions, and the press to
separate the KDE vision of the future from the actual reality of what was
delivered.
There clearly were users, perhaps less vocal or with fewer requirements,
who stuck with KDE through the transition.
The author notes that he went through the same upgrade path in Fedora without
suffering any major problems. Reduced functionality and some annoyances
were certainly present, but it was not enough to cause a switch to a different
desktop environment. It is impossible to get any real numbers for users
who switched, had a distribution that allowed them to stick with 3.5, or
just muddled through until KDE 4 became more usable. But,
without a doubt, the handling of the KDE 4.0 release gave the project a
rather nasty black eye.
Seigo also minces few words when calling on the distributions to take a
large part of that blame:
I have to admit that it's really hard to stay positive about the efforts of
downstreams when they wander around feeling they should be above reproach
while simultaneously hurting our (theirs and ours) users in a rush to be
more bad ass bleeding edge than any other cool dude distro in town. I hope
this time instead of handing out spankings, the distros can sit back and
think about things and try and figure out how they played an unfortunate
part in the 4.0 fiasco.
There is no real substitute to distributions and projects like KDE working
together to determine what should be packaged up in the next distribution
release. It is unclear where exactly that process broke down for Fedora 9,
but it certainly led to much of the outcry about KDE 4. But, if they
had it to do all over again, how would KDE have handled things differently?
Projects want to make their latest releases available to users, so that
testing, bug reporting, and fixing can happen. That is the service that
distributions provide. But users rightly expect a certain base level of
functionality in the tools that get released.
To some extent, it is a classic chicken-and-egg problem. In his defense
of the 4.0 release process, Seigo notes that
releases, as opposed to alphas or betas, are the only way to get attention
from users and testers:
Between the rc's and the tagging of 4.0.0 the number of reports from
testing skyrocketed. This is great, and shows that when I assert "people
don't test when it's alpha or even beta" I'm absolutely correct. This is
not about tricking people either: people seem to forget that the open
source method is based on participation not consumption. So testers look
for a cue to start testing; that is their form of participation. "alpha"
and even "beta" is often not enough of a cue, especially today when so many
of our testing users are not nearly as technically skilled with the
compiler, debuggers, etc. as the typical Free software user was 10 years
ago.
It would be easy to just fault KDE for releasing too early, but Seigo does
have a point about "participation". Likely due to their exuberance at what
they had accomplished for KDE 4, the developers were blinded to the
inadequacies of the release for day-to-day use—at least for some
users. The project needed to clearly get the message out that it might not
be usable by all and it failed to do that. It's a fine line, but for
something as integral as a desktop environment, it would have been better
to find a way to release with more things working. The flip side, of
course, is that it takes testing to figure out what isn't
working—which is part of the service users provide back to the project.
This is not the first time we have seen this kind of thing.
Red Hat, and now Fedora, have always been rather—some
would say overly—aggressive about including new software into
releases. Some readers
will likely remember the problems with the switch to glibc-2.0 in Red Hat
5. Others may fondly recall Red Hat 7, which shipped an unreleased GCC
that didn't build the kernel correctly.
We may be seeing something similar play out with the recently announced plans
to include btrfs in Fedora 11. While it has been merged into the
mainline kernel for 2.6.29 (due in March), it is most definitely
not in its final form. There are likely to be stability issues as well as
possible changes to the
user-space API. There is even the possibility of an on-disk format
change, though Chris Mason and the btrfs developers are hoping to avoid it.
Much like with KDE 4, btrfs will likely benefit from
more users, but there is the risk that some will either miss or ignore the
warnings and lose critical data in a btrfs volume. Should that turn
out to be some high-profile developer who declares the filesystem to be a
"disaster", it could be a setback to the adoption of btrfs.
KDE 4.2 has just been released, and early
reports would indicate that it is very functional. With the
problems from the KDE 4.0 release—now a year old—fading in the
memory of many, a rekindling of those flames is probably less than completely
welcomed by the project. But the lessons they learned, even if solutions
are not obvious, are important for KDE as well as other
projects. Because free software is developed and released in the open,
much can be learned from other projects' mistakes. It is yet another
benefit that openness provides.
Page editor: Jonathan Corbet