linux.conf.au might appear, at first glance, to be an event condemned to amateurish
disorganization. This conference moves to a different city every year, where
it is organized by a fresh crowd of volunteers with little or no previous
experience in putting together this sort of event. Even so, a working formula
appears to have been found. By bringing in previous years' organizers to
give advice and oversight to the current event's team, linux.conf.au
manages to benefit from its past experience while simultaneously giving
each set of organizers an opportunity to experiment and bring in new
ideas. The result is, arguably, the best set of Linux conferences offered
anywhere on the planet.
linux.conf.au 2005 was no exception. A few weblog entries hint at a bit of
behind-the-scenes turbulence, but, to an attendee (or a speaker), this
conference was flawlessly organized. The facility worked well, the talks
were (mostly) great, the wireless network was ubiquitous and highly
reliable, and, yes, the coffee was good. The technical content was solid,
but the event was also filled with a uniquely Australian sense of humor and
fun. This year's organizers, led by Steven Hanley, did an outstanding job.
Some of the talks have been covered in other LWN articles. Here are some
quick notes on a few other talks that your editor was able to attend.
The GNOME miniconf covered many themes, but seemed to be dominated
by two in particular: marketing the project and future development
directions. The GNOME developers look, with a certain degree of envy, at
the amount of publicity that Firefox has received, and wonder how they can
get some of it for themselves. Part of the problem, as they see it, is
that GNOME is not a nice, simple download like Firefox; it's more like a
big, sprawling mess. The GNOME live CD project could help in this regard;
it got some attention at
LinuxWorld, but it needs some work and nobody has taken it on.
The other issue on the GNOME developers' minds is the GNOME 3.0 project. A
3.0 release gives the project the opportunity to break API compatibility,
something it has carefully avoided doing across 2.x. The only problem is
that the project does not really seem to have any idea of what it wants to
accomplish in 3.0. The developers had a clear vision of usability which
(whether you like their approach or not) carried them through a successful
set of 2.x releases. An upgraded vision for 3.0 does not yet exist.
Perhaps the most interesting idea came from Jeff Waugh. There is much
potential for network-enabled collaborative technologies - especially if
you resist the temptation to call them "groupware." Some cool ideas are
likely to see implementations in the next few months. The massive nature of
OpenOffice.org makes it a difficult platform for this sort of
experimentation, however, so much of the interesting work is happening with
tools like AbiWord and gnumeric. We may soon see a time when
OpenOffice.org, while remaining good at what it does, has been surpassed by
its competitors, which make better platforms for playing with new ideas.
Andrew Tridgell's keynote covered more than the simple cloning of
BitKeeper; the bulk of it related, instead, to the increasing use of
advanced software development techniques in the free software community.
The community is now at the forefront in many areas.
One example is the increased use of static analysis tools. For years,
lint was the state of the art; now the gcc suite itself
incorporates a wide variety of static checks beyond the standard warnings.
Tools like "sparse" have helped the kernel developers to find many problems
before users are bitten by them. The most notable thing, though, is that
the development projects are actually using these tools. Runtime
analysis has also come a long way; Tridge singled out valgrind as being one of the most important
advances in a long time.
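The flavor of such static checks can be sketched in miniature. The following toy checker (entirely illustrative, far simpler than sparse or gcc's analyses) walks a Python syntax tree looking for one narrow class of bug: a name read before any local assignment. Nothing is executed; only the tree is inspected, which is the defining property of static analysis:

```python
import ast

def reads_before_assignment(source):
    """A miniature lint-style static check: for each flat function body,
    report names read before any local assignment or parameter binding.
    Builtins and globals are not modeled; this is only a sketch."""
    problems = []
    for func in ast.walk(ast.parse(source)):
        if not isinstance(func, ast.FunctionDef):
            continue
        assigned = {arg.arg for arg in func.args.args}  # parameters count
        for stmt in func.body:                  # statements in source order
            # First pass: flag loads of names not yet assigned.
            for node in ast.walk(stmt):
                if (isinstance(node, ast.Name)
                        and isinstance(node.ctx, ast.Load)
                        and node.id not in assigned):
                    problems.append((func.name, node.id))
            # Second pass: record names this statement assigns.
            for node in ast.walk(stmt):
                if isinstance(node, ast.Name) and isinstance(node.ctx, ast.Store):
                    assigned.add(node.id)
    return problems

buggy = """
def f(a):
    total = a + count
    count = 0
    return total
"""
print(reads_before_assignment(buggy))   # -> [('f', 'count')]
```

Real tools track control flow, aliasing, and type annotations; the point of the sketch is only that the whole check happens without running the program.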
Automatic code generation is coming into its own; something like half of
the Samba 4 code is created in this way. The trouble here is that it
is difficult to create general-purpose code generation tools which produce
what various projects really need. Samba ended up creating its own IDL
compiler to generate much of its protocol code, and other projects may well
end up doing the same. The effort paid off quickly: the resulting code is
more robust, more correct, easier to instrument and debug, and easier to maintain.
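Samba's IDL compiler is far more elaborate, but the underlying idea - deriving marshalling code mechanically from a declarative description instead of writing it by hand - can be sketched briefly. The field names and formats below are invented for illustration:

```python
import struct

# A toy interface description: (field name, struct format) pairs.
# Real IDL compilers work from a much richer declarative language;
# this only shows the shape of the idea.
RECORD_SPEC = [("version", "I"), ("flags", "H"), ("length", "H")]

def make_codec(spec):
    """Generate matched pack/unpack functions from a field specification,
    so the two directions of the marshalling code can never disagree."""
    fmt = "<" + "".join(f for _, f in spec)   # little-endian wire format
    names = [n for n, _ in spec]

    def pack(record):
        return struct.pack(fmt, *(record[n] for n in names))

    def unpack(data):
        return dict(zip(names, struct.unpack(fmt, data)))

    return pack, unpack

pack, unpack = make_codec(RECORD_SPEC)
wire = pack({"version": 1, "flags": 0, "length": 42})
assert unpack(wire) == {"version": 1, "flags": 0, "length": 42}
```

Because both directions come from one specification, a change to the protocol description updates the generated code everywhere at once - the robustness argument made in the talk.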
Some time went into the "asynchronous server" problem: how does one write a
server which deals with asynchronous requests from the outside world? None
of the alternatives appeal: threads are evil, processes are ugly, and state
machines "send you mad." For Samba 4, all of these techniques have
been combined in a user-configurable way. Embedded users can collapse
the whole system into a single process, while a multi-process, multi-thread
configuration can be used on monster servers. The Samba hackers have
managed to reduce the single user connection overhead to less than 20KB, a
massive improvement from previous versions. State machines have been tamed
with "composite functions," which take much of the hard-to-debug
indirection out of the code.
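The "composite function" idea - packaging a multi-step asynchronous exchange behind a single call, so the state machine's transitions are no longer scattered across callback handlers - can be loosely modeled with Python coroutines. Samba 4 does this in C with explicit completion callbacks; the step names here are hypothetical stand-ins for protocol requests:

```python
import asyncio

async def send_request(name):
    """Hypothetical low-level async step, standing in for a protocol request."""
    await asyncio.sleep(0)          # pretend network latency
    return f"{name}-reply"

async def open_session(server):
    """A 'composite function': three dependent async steps expressed as one
    linear sequence, rather than an explicit state machine whose transitions
    are spread across separate handlers."""
    negotiated = await send_request(f"negotiate:{server}")
    session = await send_request(f"setup:{negotiated}")
    tree = await send_request(f"tconn:{session}")
    return tree

result = asyncio.run(open_session("fileserver"))
print(result)   # -> tconn:setup:negotiate:fileserver-reply-reply-reply
```

The caller sees one awaitable operation; internally the steps still run asynchronously, which is exactly the debugging win claimed for composite functions.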
Memory management is another area which has seen improvements; Tridge was
especially pleased with the version of talloc() used in
Samba 4. This memory allocation library allows dynamic memory
allocations to be organized in a hierarchy; an entire subtree of the
hierarchy can be freed (calling optional destructors) with one call. This
scheme gives most of the advantages of a fully garbage-collected language
without the associated overhead.
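talloc itself is a C library (allocations are made against a context, and talloc_free() on a context releases everything beneath it, running any attached destructors). The hierarchy-with-destructors idea can be modeled briefly in Python; this is a conceptual sketch, not talloc's API:

```python
class Context:
    """A toy model of talloc's hierarchical allocation contexts: every
    allocation hangs off a parent, and freeing a context frees its whole
    subtree, running optional destructors along the way."""
    def __init__(self, name, parent=None, destructor=None):
        self.name = name
        self.children = []
        self.destructor = destructor
        if parent is not None:
            parent.children.append(self)

    def free(self, log):
        for child in self.children:     # release the subtree depth-first
            child.free(log)
        if self.destructor:
            self.destructor(self)
        log.append(self.name)

freed = []
connection = Context("connection")
request = Context("request", parent=connection)
Context("reply-buffer", parent=request,
        destructor=lambda c: freed.append("destructor:" + c.name))
connection.free(freed)   # one call releases the whole tree
```

Tearing down a connection, and everything ever allocated on its behalf, becomes a single call - the "most of garbage collection, without the overhead" property described in the talk.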
Finally, Tridge noted that projects are actually starting to use test
coverage tools. The combination of static analysis, runtime analysis, and
test coverage can be very effective in completely eliminating certain
classes of errors (such as leaking data by writing uninitialized data to the network).
Keith Packard and Carl Worth talked about work in desktop graphics.
Keith's discussion of the reworking of the X Window system has been
covered on LWN before. Carl gave a good overview of the Cairo vector graphics
library. Cairo, he notes, is being used in upcoming or test versions of
dia, evince, gtk+, mozilla, scribus, and more. Most of these projects are
still not using Cairo by default; it's too slow, still, for comfortable
use. Cairo is headed toward a 1.0 release with a final API shakeup and the
beginnings of the necessary performance work.
What audiences will likely remember from these talks, however, are the
demonstrations. This year's eye candy is the rubbery window which distorts
realistically when dragged across the screen. These windows can also be
spun around and literally thrown three virtual desktops away. Anybody who
has seen one of Keith's talks can imagine how much fun he was having
flinging windows around. The funnest Cairo demonstration may well be roadster, a free mapping application.
Elizabeth Garbee discussed her experiences in avoiding homework by
designing tuxracer courses; she then proceeded to create a brutal new
course in front of the audience. Not everybody can get away with creating
a talk around playing games in front of a crowd.
Her talk complemented an issue
raised by Rusty Russell: he has apparently lost much time recently playing
The Battle For Wesnoth, and was well impressed by the accompanying artwork
and music. To continue to progress, our community will have to do better
at attracting other sorts of contributors: artists, musicians, and so on.
That means we will need to think about how we can create good tools for
these contributors, and help them gently when they run into trouble.
Other stuff. Two other themes resonated through the conference.
One is that everybody is concerned about the BitKeeper episode, and amused
to learn how little was involved in the infamous "reverse engineering" of
its network protocol. The other is that a large number of attendees were running
Ubuntu. Even when the Canonical employees are factored out (the company
seems to have moved its offices to Canberra for the conference), Ubuntu has
clearly claimed a significant part of the distribution "market" among Linux users.
Your editor gave two talks at the conference; the slides are available
online for both: A Linux
Kernel Roadmap and Kobjects, ksets, and
ktypes. The kernel talk was covered in ComputerWorld,
and, subsequently, The
Inquirer. It is interesting to compare what was reported against the slides.
linux.conf.au 2006 will be held in Dunedin, New Zealand, starting
January 23, 2006. Your editor hopes to be there.
The final linux.conf.au keynote was delivered by FSF attorney Eben Moglen.
It was, it must be said, one of the best
talks your editor has seen in some time. Mr. Moglen can take an absolutely
uncompromising approach to software freedom just as well as, say, Richard
Stallman, but he can deliver the message in a way that is vital and
effective for a far wider audience. While one would not want to distract
him from his important legal work, it would be a good thing if Eben Moglen
spoke a little more often.
The following is a poor attempt to summarize the talk.
The "legal state of the free world" is strong. In particular, attacks on
the General Public License have abated. One year ago, the SCO group was
claiming that the GPL was invalid and in violation of the U.S. constitution.
That kind of talk is not happening any more. SCO "has not
completely flatlined," but it is almost there.
What were the legal consequences of the SCO attack? Certainly the
invalidation of the GPL was not one of them. There were two outcomes, one
positive, and one less so.
On the positive side, the industry (as composed of large vendors who make
money from free software) has decided that the community needs better
lawyers. In particular, the industry has concluded that financing good
legal advice for the community early in the game, before problems develop,
is a good investment. The result was the creation of the Software Freedom Law Center, with
almost $5 million in funding. That figure can be expected to triple
in the near future. There should be, soon, abundant legal help available
for nonprofit organizations and developers working in the free software community.
In this sense, the dotcom bust was a fortuitous event as well. As
technology jobs went away, numerous technical people found their way into law
school. Many of them were not too happy about it, but these were the
students Eben had been waiting for over the last fifteen years. Soon, there
will be a new crop of lawyers who understand technology and who can read
code - and they will be funded to work for the community. This is a very
good outcome, and we owe thanks to Darl McBride for helping to bring it about.
The other outcome from the SCO attack is the general realization, in the
boardrooms of companies threatened by free software, that copyright attacks
are of limited value. SCO and its backers brought a heavily funded attack
against a project set up fifteen years ago by a student in Helsinki who
didn't think he had any need for lawyers - and that project sustained the
attack easily. Copyright does not appear, any more, to be a legal tool
which can be used to impede the spread of free software.
Patent attacks are a different matter, and "we are going to face serious
challenges" in that area. There will probably not be much in the way of
patent infringement suits against individual developers; those developers
simply do not have the deep pockets which might attract such a suit.
Instead, the attacks will come in the form of threats to users.
This is happening now: corporate officers will get a visit from "the
monopoly" or others and be told about the sort of trouble awaiting their company as
a result of its use of patent-infringing free software. That trouble can
be avoided by quietly paying royalties to the patent holder. This is
happening "more than we would believe" currently - companies are paying
royalties for their use of free software. It remains quiet because it is
in nobody's interest to make this sort of shakedown public. The victims
will not come forward; they will not even tell their suppliers.
Defending against patents is a complicated task. An important part is
destroying patents - getting the (U.S., mainly) patent office to reevaluate
and (hopefully) invalidate a threatening patent. This is what was done
with Microsoft's FAT patent, for example. When it works, it is by far the
most cost-effective way of dealing with patent problems; it is far cheaper
than trying to litigate a patent case later on.
This process is tricky. Typically, a group wishing to invalidate a patent
gets a single shot, in the form of its initial request to the patent
office. After that, the process becomes confidential, and involves
communications with the patent holder. So that first shot has to be a very
good one. Those crafting these requests are getting better at it.
Killing patents makes people in the industry nervous - they have their
arsenal of patents too, after all. There is, however, an "agonizing
reappraisal" of the patent system going on within the industry.
Some companies in the technology industry are starting to
get a sense that the patent system does not work in their favor. It will
be interesting to see what happens within IBM, in particular. In general,
patent reform is going to be a big issue over the next couple of years.
Some parts of industry will favor reform, others (such as the
pharmaceutical industry) are happy with the system as it stands now.
There will be groups trying to redirect the reform process to favor their
own interests, and many "false
friends" appearing out of the woodwork. There will be opportunities for
serious reform, but the community will have to step carefully.
Meanwhile, Samba 4, in particular, may not be safe; there are likely to be
patents out there. "Expect trouble."
[In a separate session, Eben encouraged free software developers to record
their novel inventions and to obtain patents on the best of them. Free
legal help can be made available to obtain patents on the best ideas.
Until the rules of the game can be changed, we must play the game, and
having the right patents available may make all the difference in defending
against an attack.]
Back to the GPL: the work done by Harald Welte getting the German courts to
recognize and enforce the GPL has been a very good thing. Eben, however,
is also pleased by the fact that, over the last decade or so, he has not
had to take the GPL to court. Threats to enforce the GPL are entirely
credible - there are few volunteers to be the first defendant in a GPL
infringement suit in the U.S. It also helps that the Free Software
Foundation, in enforcing the GPL, seeks neither money nor publicity.
Instead, what they want is compliance with the license. "I get compliance
every single time."
Enforcement against embedded manufacturers ("appliances") has been
problematic in the past. These manufacturers have less motivation
to comply with the GPL, and the costs of compliance (especially after a
product has been released) are higher. The working strategy in this case
recognizes that the company actually guilty of the infringement (usually a
relatively anonymous manufacturer in the Far East) is highly receptive to
pressure from its real customers: the companies who put their nameplates on
the hardware and sell it to the end users. If you go to a company with a
big brand and get that company to pressure the initial supplier, that
supplier will listen.
Meanwhile, the appliance manufacturers have started to figure out that
posting their source is not just something they have to do to comply with
the GPL - it can be good business in its own right. When the source is out
there, their customers will do some of their quality assurance and product
improvement work for them - and remain happier customers.
In summary, the problems with GPL compliance by appliance manufacturers
will go away in the near future.
There is not much to be said, at this point, about what will be in
version 3 of the GPL. Much, however, can be said about the process.
The GPL currently serves four different, and sometimes conflicting goals.
Any attempt to update the GPL must preserve its ability to serve all of
those goals. The components of the GPL are:
- A worldwide copyright license. Worldwide licenses are exceedingly
rare; they are typically tuned to each legal system in which they
operate. The GPL cannot be issued in various national versions,
however; it must work everywhere.
- A code of industry conduct - how players in the free software world
will interact with each other. Any new code of conduct must be
negotiated with the industry; it cannot just be imposed by fiat.
- The GPL is a political document; it forms, in a sense, the
constitution of the free software movement.
- It is the codification of the thought of Richard Stallman, and must
continue to adhere to his beliefs.
Updating the GPL will be a long process. Eben will be putting together an
international gathering of copyright lawyers to help with the crafting of
the copyright license portion of the GPL. A separate gathering of industry
representatives will be needed to hammer out the necessary compromises on
the code of conduct; this is a part of the process which may not sit well
with Richard Stallman, but it must happen anyway. The constitutional part
of the GPL, instead, should see minimal changes - there has been no
fundamental change in the wider world to motivate the creation of a new
constitution. On the last point, there will be no revision of the GPL
which does not meet with the approval of Richard Stallman and the Free Software Foundation.
When a new license nears readiness, it will be posted with a long
explanation of why each decision was made. Then will come the comment
period, as the FSF tries to build a consensus around the new license. The
revision of the GPL is, perhaps, the most difficult task Eben has ever
taken on, and he is not sure that he is up to it. The job must be done regardless.
As for when: "soon." He did not want to undertake revisions of the GPL
while it was under attack - updating the GPL should not be seen as a
defensive maneuver. Now, however, the GPL is not under attack, and "the
monopoly" is distracted for the next couple of years trying to get its next
big software release out. This is the time to get the work done, so
something is going to happen.
In response to a question about software-controlled radios: that is a
global problem, not just limited to the United States.
Japan, it seems, is the worst
jurisdiction in this regard; there have been threats to arrest foreign
software radio developers should they set foot there. Fixing the software
radio problem is a key part of ensuring freedom of communication in the
future, and it is currently Eben's most pressing problem. There has been
little progress so far, however, and new strategies will be required.
In general, freedom is under threat worldwide. The events since 9/11, in
particular, have accelerated trends toward a repressive,
surveillance-oriented world. If we want to ensure our political freedoms
in this environment, we must work for technological freedom. Without the ability
to control our own systems, to communicate freely in privacy, and to
interact with others, we will not have the wider freedoms we hope for. The
free software movement is the heir to the free-speech movements which
started in Europe centuries ago; we are at the forefront of what has been a
very long and difficult fight for freedom. The difference is that "this
time we win."
Standing ovations for speakers at Linux conferences are a rare thing; Eben
Moglen received two of them.
One of the big questions surrounding the release of Debian "Sarge" (aside
from "when?") is why the amd64 architecture is not making the cut. It's not
as if the amd64 port is unready, as indicated by this status
report from Andreas Jochens
of the amd64 porters team.
Inclusion of amd64 in Sarge has been the subject of some heated
exchanges on the Debian-devel list, as far back as July of 2004. To the
average user, it probably seems logical that the amd64 port should be
included, since the work seems to be done, and other packages like GNOME 2.8 and
KDE 3.3 have found their way in. To get clarification, we invited comment from
Jochens and Debian Release Manager Steve Langasek.
According to Langasek, the decision not to include amd64 in Sarge is
strictly due to mirror space.
When sarge is released, the size of the Debian archive is going to balloon,
as full mirrors are asked to carry all of woody, sarge, etch (the new
testing), and sid. While it's true that there are many Debian mirrors that
will be glad to make room for amd64 -- unofficial or not -- we also know
that there are plenty of other mirrors that have limited space available
for Debian, and some of them may have to drop us after sarge is released
because of this size increase. Making the archive even larger by adding
amd64 to sarge means more mirrors that will have to drop Debian.
After the release, Langasek said that the FTP team plans to put a solution
in place that will allow "partial by-architecture mirroring for etch
using the limited toolkit demanded by our mirror operators... At that
point, we will be much better able to accommodate amd64 without penalizing
the existing architectures."
However, some disagree that adding amd64 to the mirrors would be an
unreasonable burden. Branden J. Moore, for example, says
that the Debian archive is not that large compared to other archives.
These are the numbers from a df -h on the mirror I admin:
While other mirrors may very well be suffering from space
constraints... they do have the ability to use proper --exclude lines in
rsync to avoid mirroring the debs from the archs that they don't want. I
know it's not the best solution, as their Packages.gz file becomes bad, but...
Jochens is not offended by the decision to keep amd64 out of Sarge, and
says it's a "good thing" that the release will be supported
separately by the amd64 porting team.
This could even be an example of how other Debian ports could be handled in
the future. I view the Debian archive mainly as a source archive which can
be compiled for a large set of different architectures. The most important
thing is that fixes for architecture-specific problems will be applied to
the package sources. Debian package maintainers usually do a very good job...
We were also curious about the criteria used by the release team to decide
what goes in. For example, why were GNOME and KDE updated, but X.org will
not be included until Etch? Langasek says that the decisions have to do
with making sure that someone will continue to do updates for the software,
and that it would not derail the Sarge release process:
So the KDE and GNOME updates have happened because the KDE and GNOME teams
have worked with the release team to make them come about in a
non-disruptive way. For X, which is very near the bottom of the dependency
tree and one of the more hardware-dependent components of the system, I'm
not sure any transition to X.org could have been non-disruptive; and the X
Strike Force, our X maintenance team, opted not to push for it. We all
know that a stable release is going to be perceived as "old" by the end of
its life cycle whether or not we succeed in establishing a predictable
release cycle for etch, so the difference between shipping an X server
that's three, six, or nine months behind upstream is small when weighed
against, say, causing a one, two, or three month delay in a release that's...
As for amd64, this was never the release team's decision to make; we work
closely with the FTP team in preparation of a release, but it's the FTP
team who has to make the judgment calls about how our infrastructure will
or won't scale to handle new projects... All the reasons for keeping it out
are logistical ones that people are intent on addressing soon after the
sarge release, and I have every confidence that this will happen in the
timeframe for etch.
Indeed, even the GNOME and KDE releases now in Sarge are somewhat
outdated. While Sarge (including amd64) looks poised to ship with GNOME
2.8, KDE 3.3 and XFree86, Ubuntu is shipping with GNOME 2.10, KDE 3.4 and a
fresh release of X.org. However, not all packages in Ubuntu are newer than
Sarge. Vim shipped with Ubuntu for x86_64 is version 6.3.46, while Vim is
at 6.3.68 in the Alioth repository.
Even though amd64 will not be released to mirrors as part of Sarge, Jochens
said that the release "is not 'unofficial' anymore."
It is supported by the Debian release team, the Debian kernel team, the
Debian installer team and others. The only difference to other ports is
that the binary package archive for amd64 is maintained by the porting team
instead of the ftp-master team. Again, I consider this a good way to share
responsibilities and an example for other ports.
Jochens also assured us that the amd64 team will be able to maintain the
amd64 release throughout the Sarge lifecycle, saying that it is
"mostly a matter of compiling the updated Debian sources when they
become available...amd64 specific security issues will be coordinated with
the Debian security team."
For all intents and purposes, it would seem that the discussion is purely
academic at this point. Debian users who want Sarge on amd64 will be able
to get it, though perhaps not from official Debian mirrors. For those who
are interested in trying out the amd64 port, the project is currently hosted
on Alioth with a Debian on AMD64 HOWTO.
Page editor: Jonathan Corbet