The seventh annual Ottawa Linux Symposium has come to an end. Your editor,
who has attended six of the seven OLS events, finds the conference in good
health. OLS was larger this year - some 700 people - but it has handled
its growth well. OLS remains one of the premier Linux development events.
A look at the
schedule reveals some clear themes for this event. Virtualization is
obviously at the top of the list for many OLS attendees; the largest room
was dedicated to the topic for a full day. This was perhaps the most
kernel-oriented schedule yet from an already kernel-dominated event; there
was hardly enough non-kernel content to fill even a single track. Those
who are interested in the user space side of free software may find
themselves drifting toward other events; but kernel people will find plenty
of interest at OLS.
OLS is an increasingly professional event; the proportion of students and
part-time hackers attending the event appears to have dropped over the
years. Registration fees can be as high as C$750. A surprising number of
the attendees are mostly concerned with what their customers want from
Linux; these are people who are making their living in a way which at least
involves Linux and free software.
As always, there was no trade show floor at OLS; nobody is trying to sell
anything to the attendees. OLS is very much about technology and
development communities, and little about hype.
Your editor, rather than trying to provide exhaustive coverage of the
event, attended some of the more interesting sessions. The resulting
articles have been posted over the last week; for convenience, they are:
- A challenge for
developers. Jim Gettys thinks that free software developers have
to get past the "mantra of one," build the multiuser, cooperative
systems of the future, and take the lead for the next generation of
computing.
- Linux and trusted
computing. IBM engineers Emily Ratliff and Tom Lendacky discuss
the current state of Linux support for the "trusted platform module"
(TPM) chip and some of the good things that it can do for us. Trusted
computing does not have to be an evil thing.
- Xen and UML. Lead
developers from the two most prominent Linux paravirtualization
projects discuss where those projects are and what's coming next.
There was much more than the above at OLS this year; your editor, in
particular, appreciated Keith Packard's discussion of the TWIN window
system (designed for very small devices), Michael Austin Halcrow's
description of the eCryptfs filesystem (hopefully to be written up in the
future), Rusty Russell's discussion of nfsim, and Pat Mochel's
sysfs talk. The Wednesday reception featured
talks by Doug Fisher of Intel (who nearly got booed off the stage when it
became clear that his talk was being run from a Windows system) and Art
Cannon from IBM. Art's talk, a buzzword-loaded presentation on how to talk
to business people about open source, was well received but hard to follow
due to the poor acoustics and high noise level in the room. If you gather
several hundred people (many of whom have not seen each other over the past
year) into a room and give them all the beer they want, it can be hard to
get them to sit down, be quiet, and listen to somebody talk about business.
Dave Jones's closing keynote, by contrast, got everybody's full attention.
Dave, who, among other things, is the current maintainer of Red Hat's
kernels, is concerned with the number of regressions and other bugs seen in
recent kernels. The quality of our kernels, says Dave, is going down as a
result of regressions, and driver regressions in particular.
There are a number of reasons for the problems. They date back, perhaps, to the
adoption of BitKeeper. With BK, Linus could quickly pull in a large set of
patches from a subsystem maintainer without really looking at them all. So
BitKeeper increased the velocity of patches through the system, at some
cost in quality. The real problem, however, is one of testing. The
only way to really find kernel bugs is to have the kernel tested by a wide
variety of users. This is particularly true for driver bugs; nobody, not
even the driver maintainer, can possibly have all of the hardware needed to
perform even remotely comprehensive testing. It takes a large community of
users to do that.
When testing does happen, we need to make it easier for users to report
bugs. Requiring a user to create a BugZilla account and fill in vast
amounts of information for a (possibly) tiny bug is counterproductive; many
bug reporters will simply give up and go away. Bug reporting should be a
simple and quick operation.
There are, in any case, quite a few challenges involved in dealing with bug
reporters; this was Dave's opportunity to complain a little about the
frustrations of his job. Bug reporters tend to always see their bug as the
most important one (so, he says, bug reporting systems should not allow
reporters to set the severity of the bug); they will continue to mess with
the system while others are trying to fix the bug, making confirmation of
fixes difficult; some of them file a bug and disappear, not responding to
requests for important information; they will lie about the configuration
of their systems (and the presence of binary-only modules in particular);
and so on. The receiving end of a major distribution's bug tracking system
can be a difficult place to be.
The question of the proper place to report bugs came up. Many bugs seen by
end users are really bugs in the upstream package, not in a particular
distribution's version of it. Those bugs should be reported to the
real, upstream maintainer. Some distributions (Debian, for example) see
this reporting as their responsibility; others would like bug reporters to
go directly upstream. Dave, in particular, notes that quite a few kernel
bugs show up only in the Red Hat BugZilla system; they never make it to the
(not universally used) kernel BugZilla. How many other distributors, he
wonders, have kernel bugs sitting in their bug trackers which should really
be reported to the community? In the future, it would be nice if BugZilla
installations could talk to each other so that bugs could be forwarded to
the right place; however, each BugZilla evidently has its own schema,
making that sort of communication difficult.
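The schema mismatch Dave describes can be pictured with a small sketch. The field names and the mapping below are invented for illustration; they are not taken from any real Bugzilla installation:

```python
# Hypothetical field mappings between two trackers' schemas. Real Bugzilla
# installations differ in field names, enumerations, and required values,
# which is what makes automatic forwarding of bugs difficult.
DISTRO_TO_UPSTREAM = {
    "component": "product",
    "release":   "version",
    "hw_arch":   "platform",
}

def forward_bug(report, mapping):
    """Translate a bug report dict from one schema to another, collecting
    any fields that have no counterpart on the receiving side."""
    translated, untranslatable = {}, {}
    for field, value in report.items():
        if field in mapping:
            translated[mapping[field]] = value
        else:
            untranslatable[field] = value
    return translated, untranslatable

bug = {"component": "kernel", "release": "FC4", "priority": "high"}
translated, leftover = forward_bug(bug, DISTRO_TO_UPSTREAM)
# "priority" has no counterpart in the mapping, so forwarding cannot be
# fully automatic; a human still has to intervene.
```

Even this toy version shows the problem: any field without an agreed-upon counterpart on the other side must be dropped or handled by hand.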
Dave noted that the kernel has gotten significantly more complicated over
the time he has been working on it. Coming up to speed and really
understanding what is happening inside the kernel is a challenging task.
Kernel developers need to recognize this and take advantage of all the
techniques and tools which are available to them to produce better code.
Next year's keynote speaker will be Greg Kroah-Hartman.
The final event of OLS is the infamous Black Thorn party; it is the ideal
way to unwind after an intense week of conferencing. The Black Thorn is
getting a little small, however; one of the OLS organizers was asking
people to put their backpacks aside so there would be room for everybody to
stand. If OLS continues to grow, the final event may have to happen
elsewhere.
On April 5, 2005, it was announced that BitMover would "focus exclusively"
on its commercial BitKeeper offering and withdraw the free-beer client used
by a number of free software developers. This was a nervous moment;
BitKeeper had become an integral part of the Linux kernel development
process. Nobody wanted to go back to the old days - when no source code
management system was used at all - but there was no clear successor to
BitKeeper on offer.
And where might such a successor have been expected to come from? We had been
told many times that the development of BitKeeper required numerous
person-years of work and millions of dollars of funding. The free software
community was simply not up to the task of creating a tool with that sort
of capabilities - especially not in a hurry. The kernel development
community, having lost a tool it relied upon heavily, appeared doomed to a
long painful period of adjustment.
Two full days later, Linus announced the
first release of a tool called "git." It was, he said, "_really_ nasty,"
but it was a starting point. On April 20, fifteen days after the
withdrawal of BitKeeper, the 2.6.12-rc3 kernel prepatch, done entirely with
git, was released. The git tool, in those days, was clearly suitable only
for early adopters, but, even then, it was also clearly going somewhere.
Git brings with it some truly innovative concepts; it is not a clone of any
other source code management system. Indeed, at its core, it is not really
an SCM at all. What git offers is a content-addressable object
filesystem. If you store a file in git, it does not really have a name;
instead, it can be looked up using its contents (as represented by an SHA-1
hash). A hierarchical grouping of files - a particular kernel release, for
example - is represented by a separate "tree" object listing which
files are part of the group and where they are to be found. Files do not
have any history - they simply exist or not, and two versions of the same
file are only linked by virtue of being in the same place in two different
trees.
This way of organizing things is hard to grasp, initially, but it makes
some interesting things possible. One of the harder problems in many SCM
systems - handling the renaming of files - requires no special care with
git. A single git repository can hold any number of branches or parallel
trees without confusion. File integrity checking is built into the basic
lookup mechanism, so that corruption will be detected automatically, and,
if desired, kernel releases can be cryptographically signed easily.
Perhaps most importantly, however: git made certain operations, such as the
merging of patches, very fast.
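The object model described above can be sketched in a few lines. This is a toy illustration of content-addressable storage, not git's actual on-disk format (git, for example, hashes a type-and-length header along with the content, and compresses objects):

```python
import hashlib

class ObjectStore:
    """A toy content-addressable store in the spirit of git's object database."""
    def __init__(self):
        self.objects = {}  # hash -> content

    def put(self, data: bytes) -> str:
        # The object's name *is* the SHA-1 hash of its content;
        # storing identical content twice yields the same object.
        oid = hashlib.sha1(data).hexdigest()
        self.objects[oid] = data
        return oid

    def get(self, oid: str) -> bytes:
        data = self.objects[oid]
        # Integrity checking falls out of the lookup mechanism:
        # recompute the hash and compare it to the name.
        if hashlib.sha1(data).hexdigest() != oid:
            raise ValueError("object %s is corrupt" % oid)
        return data

def make_tree(store, entries):
    """A 'tree' object: a listing mapping names to object hashes,
    itself stored by content like any other object."""
    listing = "\n".join("%s %s" % (oid, name)
                        for name, oid in sorted(entries.items()))
    return store.put(listing.encode())

store = ObjectStore()
v1 = store.put(b"int main(void) { return 0; }\n")
tree1 = make_tree(store, {"main.c": v1})
# Renaming a file needs no special handling: the blob is untouched,
# only the tree entry's name changes.
tree2 = make_tree(store, {"hello.c": v1})
```

Note how the rename from `main.c` to `hello.c` creates a new tree object but reuses the same blob; the file itself has no name and no history of its own.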
It's worth noting that git is not a clone of BitKeeper, or of any other
SCM. Certainly it incorporates lessons learned from years of use of
BitKeeper and other tools; it supports changesets, for example, and is
designed to be used in a distributed mode. But git is something new; it
brings a unique approach to the problem.
Watching the git development process snowball over the last few months has
been fascinating. A large and active development community coalesced
around git in short order; interestingly, relatively few of the core git
developers were significant kernel contributors. In a short period of
time, git has acquired most of the features expected from an SCM, its rough
edges have been smoothed, it has picked up a variety of graphical interfaces,
and it is widely used in the kernel development community. Git is clearly
here to stay.
The git developers are now working
toward a 1.0 release. As part of that process, Linus has now handed git over to a new
maintainer: Junio Hamano. Junio has been an active git developer for some
time; he will now attempt to take
the project forward as its leader. He will have plenty of work ahead
of him as git moves into a more stable (though still fast-moving) phase.
Git is an example of how well the free software process can work. Linus
has shown us, once again, that he knows how to get a successful free
software project started: put out a minimal (but well thought out) core
that begins to solve a problem, then let the community run with it. The
result is a vibrant, living project which incorporates the best of what has
been learned before while simultaneously breaking new ground. The creator
of the Linux kernel appears to have launched another winner.
But, then, some things still seem to surprise even Linus:
August 25, 1991: "I'm doing a (free) operating system (just a hobby, won't
be big and professional like gnu) for 386(486) AT clones."
July 26, 2005: "...this thing ended up being a bit bigger and more
professional than I originally even envisioned."
Let this be a lesson to all free software developers out there: the
humblest of projects can, with the right ideas and participation, become
far more "big and professional" than one might ever imagine.
The Mozilla Foundation is shaking up
its roadmap a little -- though not "scrapping" the
1.1 release as had been reported in some outlets.
The 1.1 release was originally planned for this month, but that has been changed
to a 1.5 release planned for September. Chris Hofmann, Mozilla's director
of engineering, talked to us about the change in the roadmap, and what's
ahead for Firefox and Thunderbird.
Hofmann said that the version number change was made for a number of
reasons:
[The change] is partly technical, one of the features that is going into
this next release is a software updating feature, so we were able to do a
better job of testing incremental updates with this software update
feature. As we move up the numbering scale, and make sure that all of that
detection and ability to deal with numbering changes works with part of the
software update system and more importantly, recognizes the progress that
we've made in the last six months getting a number of features into the
product that we hadn't expected to be there and this far along.
Firefox developer Asa Dotzler also wrote about the decision:
One major consideration in this decision was that the sheer volume of changes
in the Firefox core (Gecko) made a minor .1 increment seem misleading. While
it may not be obvious by looking simply at release dates, today's Gecko
core of Firefox has seen nearly 16 months worth of changes compared to what
shipped in Firefox 1.0. This is because we created our Gecko 1.7 branch
(the branch from which Firefox 1.0 shipped) back in April of 2004. At that
time, Gecko development on the trunk continued and very little of that work
was carried over to the 1.7 branch to be included in Firefox 1.0.
Indeed, there are quite a few new
features and other changes in Firefox 1.5, many of which we covered on
LWN with the first Deer Park
Alpha release. The 1.5 release should have improvements in pop-up
blocking, tab reordering, Scalable Vector
Graphics (SVG) support and ECMAScript
for XML (E4X) support.
One of the improvements that Hofmann highlighted for 1.5 is Firefox's extensions
system. According to Hofmann, the 1.5 release will handle extension
versioning information and provide the "ability for the browser to recognize
extensions that might be incompatible with specific releases."
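Compatibility checking of this sort boils down to comparing the application's version against the range an extension declares. The sketch below uses simplified version logic and invented function names; it is not Mozilla's actual API (the real comparator also understands suffixes like "a1" and "+"):

```python
def parse_version(v):
    """Split a dotted version string into a comparable tuple.
    Toy logic: handles only numeric dotted versions like '1.5' or '1.0.7'."""
    return tuple(int(part) for part in v.split("."))

def is_compatible(app_version, min_version, max_version):
    """True if app_version falls within the range the extension declares."""
    return (parse_version(min_version)
            <= parse_version(app_version)
            <= parse_version(max_version))

# An extension declaring compatibility with releases 1.0 through 1.4
# would be flagged as incompatible with a 1.5 browser and disabled:
is_compatible("1.5", "1.0", "1.4")   # False
is_compatible("1.5", "1.0", "1.5")   # True
```

The key design point is that the range is declared by the extension, so the browser can make the decision at install or upgrade time without running any extension code.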
Hofmann also said that this release would allow the user to turn extensions
on and off, something that Firefox 1.0 does not allow -- though some
extensions, like Greasemonkey
do provide that feature directly.
The 1.0 to 1.5 jump will also bring about some changes to the Firefox API,
which may affect
extensions that work with the current interface.
There's a pretty big shift in the API set for applications and extensions
that are moving from 1.0 to 1.5, most of the extension authors have taken
the work to make extensions that are going to be compatible with 1.5. There
might be a few more changes we make in the next few weeks of the
development cycle, but by the time we get to 1.5 release, the goal is to
have a very large percentage of the extensions available be compatible with
the new release.
Thunderbird is also being shifted from a 1.1 release to a 1.5 release
around the same time frame as Firefox. Hofmann said that the version bump
for Thunderbird was, in part, because development had been moving along
nicely for Thunderbird as well -- but also because the Mozilla Foundation
is trying to keep version numbers for both applications in sync. He noted
that Thunderbird 1.5 would have improvements in spam detection and for
detecting phishing attacks, in-line spell checking and improved RSS
features. Thunderbird 1.5 will also feature improvements for updates, and
users should be able to do updates from Thunderbird directly.
Though the feature sets are sketchy at this point, the Mozilla Foundation's
roadmap calls for a Firefox 2.0 release in early 2006 and a Firefox 3.0 by
the end of 2006. One feature that Hofmann talked about for future releases
is Xul Runner. According to Hofmann, Xul Runner will allow Firefox,
Thunderbird and other applications "to share core components of
technology." According to Hofmann, any one of the Mozilla
applications would include the core features, and then users would only
need to download "a thin layer" for additional applications.
Hofmann said that the first instance of Xul Runner would be available
"around the time we ship Firefox 1.5," and that the next
versions of Firefox and Thunderbird would be built on top of Xul Runner and
"allow sharing of common code" that both applications use.
Given the amount of time 1.5 has been in development (Firefox 1.0 was
released in November 2004) it seems a bit ambitious to plan the 2.0 and
3.0 releases in 2006. However, anything is possible.
Meanwhile, the Firefox 1.5 Beta is scheduled for August, and a second
alpha release is available now for brave souls who can't wait for new
features, or who are eager to help in testing.
Page editor: Jonathan Corbet