The 2003 Ottawa Linux Symposium has run its course. Once again, OLS has
established itself as the premier North American Linux developers'
conference. A solid roster of speakers delivered four days' worth of
intensely technical talks on where Linux development (and kernel
development in particular) is headed. It is always nice to attend an event
where the talk is technical and nobody is trying to sell you anything.
Your editor was not able to attend all of the presentations, of course, and
did not write up every one he attended. Below, however, you'll find quick
summaries of several of the more interesting talks given at OLS this year.
Those looking for all the details can find them in the OLS 2003 proceedings.
In response to popular demand (i.e. somebody actually asked), I
have also put up the slides for my talk on
porting drivers to the new kernel. Giving this sort of talk at OLS is a
unique challenge, given that, for any topic, there's certain to be at least
one person in the audience who knows way more than the speaker. Happily,
the hecklers were kind...
Thanks are due to the OLS organizers for putting together another
high-quality event, to the event's sponsors for helping to make it
possible, and to all the speakers for presenting their work.
Ugly ducklings - resurrecting unmaintained code
Dave Jones's talk covered the work he has done in 2.5 to fix up the MTRR and
AGPGART drivers. Dave has observed a common sort of lifecycle for
drivers. A driver is initially written for a specific vendor's widget.
Over time, it is extended to support compatible widgets from other vendors,
then slightly different widgets from yet other vendors. The number of
special cases increases. Meanwhile, the maintainer gets bored and moves
on. Eventually you end up with thousands of lines of spaghetti code which
nobody understands or wants to maintain.
Dave's approach to such drivers includes splitting code into separate files
by vendor (usually) and separating code which should never have been run
together in the first place. "Useless abstractions" can be cleaned out.
Eventually you end up with a code body which is sufficiently clean and
understandable that it can be updated for modern hardware, new features,
etc. But one should not underestimate the amount of work it can take to get
there.
Large projects and bugzilla
Luis Villa discussed his experience working as GNOME's quality
assurance person. He has, he estimates, read some 30,000 bug reports over
the last few years. The experience appears to not have warped him
badly, though such things can take years to show.
He is, as one might expect, a strong proponent of organized bug tracking.
A good QA system, he says, makes writing software easier (through
reductions in mailing list traffic, among other things), eases the release
process, makes the software better, and, importantly, makes writing software
more fun.
The key point of the talk, perhaps, was that QA people have less power in
free software projects than they do in the proprietary world. That makes
it even more imperative that they not forget that they are providing a
service to the developers, and that they have to understand what the
developers need from them. Filtering ("triage") is especially important;
developers should not have to deal directly with the full flow of bug
reports. If the bug trackers are providing the sort of bug filtering and
categorization that the hackers need, all will be well. Otherwise the bug
tracking system will degenerate into an unused pile of old information.
Interactive kernel performance
Robert Love's talk covered work done in the 2.5 kernel to improve
interactive performance. What's interactive? Robert takes a wide view;
interactive applications are "everything except Oracle." The topics
covered will be familiar to LWN Kernel Page readers; they include the
anticipatory disk scheduler, the O(1) process scheduler, the preemptible
kernel and other low-latency work, etc. In his opinion, the single most
important bit of work to go in this time around (with regard to interactive
performance) is the anticipatory scheduler.
udev - devfs done right
Greg Kroah-Hartman described udev, his user-space devfs replacement
(covered here last April) in a standing-room-only session. Progress on udev
has been slow since April
(Greg has been busy with other stuff), but some things have happened.
There is now a set of configuration files to allow the user to specify how
device naming and permissions should be handled; it uses various attributes
of a device (its serial number, label, position in the bus topology,
etc.) to figure out what the system administrator would like it to be
called. Future versions will use the "tdb" database to track devices and
handle persistent naming.
Future work includes changing udev to run as a daemon process; this change
is required to properly handle out-of-order hotplug events. For those
wanting to experiment with it, the udev code
can be found on kernel.org in /pub/linux/utils/kernel/hotplug/.
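The naming idea can be illustrated with a hypothetical rules file. The syntax below is invented for illustration only and is not udev's actual configuration format; the point is simply that stable device attributes, rather than probe order, determine the name:

```
# Hypothetical illustration only -- not udev's real syntax.
# Name a USB printer by its serial number, whichever port it lands on:
BUS="usb", SERIAL="HX12345678", NAME="lp_color"
# Name a disk by its position in the bus topology:
BUS="scsi", TOPOLOGY="0:0:1:0", NAME="boot_disk"
```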
Why doesn't my laptop suspend?
Pat Mochel's talk was on power management, or "why doesn't my laptop
suspend?" He asked for a show of hands: how many in the audience have
laptops? Well, this is OLS, so most of the attendees raised their hands.
How many of those suspend correctly? Most hands went down.
Older, APM-based machines would handle suspend operations entirely in the
firmware; it "just worked" for most people. Newer ACPI systems, however,
push much of the suspend task into the software; this is evidently an
improvement. And
Linux software has not yet caught up
with that. ACPI support is pretty much in place, but that is the easy
part. The harder part is working power management support into all the
drivers, coming up with a reliable way of suspending the system, and
implementing a reasonable user-level interface to it all.
Much of this work has been done for 2.5; it still languishes in Pat's tree,
however, and has not been merged into the mainline. The changes include a
new set of driver power management methods; there is also a cleaned up software
suspend subsystem with a safer snapshot mechanism and the ability to write
the system image to any persistent medium.
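The shape of such driver methods can be sketched with a toy model. The names and structure below are invented for illustration; they are not the actual 2.5 driver-model API. The sketch shows the one property a system-wide suspend really needs: every driver quiesces in order, and a failure unwinds the drivers already suspended.

```c
#include <stdio.h>

/* Toy model of per-driver power management methods; the names here
 * are invented for illustration, not taken from the 2.5 kernel. */
struct toy_driver {
    const char *name;
    int (*suspend)(void);   /* quiesce the device, save its state */
    int (*resume)(void);    /* restore state, restart the device */
};

static int disk_suspend(void) { printf("disk: flush, spin down\n"); return 0; }
static int disk_resume(void)  { printf("disk: spin up\n"); return 0; }
static int net_suspend(void)  { printf("net: stop queue\n"); return 0; }
static int net_resume(void)   { printf("net: restart queue\n"); return 0; }

static struct toy_driver drivers[] = {
    { "net",  net_suspend,  net_resume  },
    { "disk", disk_suspend, disk_resume },
};

/* Suspend every driver in order; on failure, resume the ones already
 * suspended so the system comes back up in a sane state. */
int suspend_system(void)
{
    int n = (int)(sizeof(drivers) / sizeof(drivers[0]));
    for (int i = 0; i < n; i++) {
        if (drivers[i].suspend() != 0) {
            while (i-- > 0)
                drivers[i].resume();
            return -1;
        }
    }
    return 0;
}
```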
Pat has said that he will finish this work, though it was clear that he
would appreciate some help from other developers as well. His hope is to
get the work merged by August 20. Should he be successful,
appreciative users should send him a birthday present ("small, unmarked
bills") on that day.
Toward an O(1) VM
Rik van Riel discussed recent work with virtual memory management; the talk
covered page replacement strategies, the reverse mapping VM, etc. The key
point of his talk, however, was this: by many metrics relevant to VM, our
newer, "faster" machines are actually slower. Over the years, the time
required to perform tasks like reading an entire disk, or writing a system's
entire RAM to disk, has gone up by a couple of orders of magnitude or more.
Much of the previous century's research into VM is losing its relevance as
memory and disk sizes increase faster than the transfer speeds between
them. VM hackers increasingly find themselves having to make things up as
they go.
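The trend can be made concrete with some rough arithmetic. The figures below are illustrative ballpark numbers, not figures from the talk:

```c
/* Time, in seconds, to stream an entire disk: capacity / bandwidth. */
double full_disk_read_seconds(double capacity_mb, double bandwidth_mb_s)
{
    return capacity_mb / bandwidth_mb_s;
}

/* Illustrative ballpark numbers, not figures from the talk:
 * circa 1990: a 100 MB disk at 1 MB/s reads out in 100 seconds.
 * circa 2003: a 120 GB disk at 50 MB/s takes about 2458 seconds --
 * roughly 25 times longer, despite the "faster" hardware. */
```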
Integrating DMA into the device model
James Bottomley discussed the new generic DMA layer; these functions have
been documented in this article, which is part of the LWN driver porting
series. What was new in this talk is
James's discussion of where the DMA API needs to go in the future. The
current version has no way of returning a failure code from the DMA mapping
routines. But failure can happen: a system can run out of I/O memory
management unit space, a problem which will be exacerbated in the future as
GART hardware is used as a poor man's IOMMU. At some point, that sort of
failure must be communicated back to the caller.
Device drivers can provide a DMA mask describing the range of addresses
that their devices can handle. But there is no way for the system to pass
back a mask saying which addresses the device needs to handle.
Better performance can often be maintained when devices are operated in
their smaller-memory modes; the system should provide the information that
allows those modes to be used when they are applicable. Finally, the
current approach to cache coherency needs some work; drivers should be able
to find out just how coherency works on the host system. The means by
which the CPU and peripherals share DMA buffers needs to be reorganized
into a straightforward ownership model; in the current system, it is not
always clear who has the right to change a buffer, and that can lead to
subtle bugs.
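The two ideas James raised can be sketched together in a toy model: mapping calls that can report failure to the caller, and an explicit ownership handoff for a shared buffer. The types and functions below are stand-ins invented for illustration; they are not the kernel's DMA API.

```c
#include <stddef.h>

/* Stand-in types; not the kernel's DMA API. */
typedef unsigned long toy_dma_addr_t;
#define TOY_DMA_ERROR ((toy_dma_addr_t)-1)

enum buffer_owner { OWNER_CPU, OWNER_DEVICE };

struct toy_dma_buffer {
    void *cpu_addr;
    toy_dma_addr_t dma_addr;
    enum buffer_owner owner;
};

/* Stand-in for a mapping routine; a real one could fail when IOMMU
 * (or GART) space runs out.  Here we pretend an identity mapping. */
static toy_dma_addr_t toy_map(void *cpu_addr, size_t len)
{
    (void)len;
    return (toy_dma_addr_t)cpu_addr;
}

/* Map a buffer, reporting failure to the caller instead of hiding it;
 * on success the device owns the buffer until the driver takes it back. */
int toy_dma_map(struct toy_dma_buffer *buf, void *cpu_addr, size_t len)
{
    buf->dma_addr = toy_map(cpu_addr, len);
    if (buf->dma_addr == TOY_DMA_ERROR)
        return -1;              /* out of mapping space; caller must cope */
    buf->cpu_addr = cpu_addr;
    buf->owner = OWNER_DEVICE;  /* device owns the buffer after mapping */
    return 0;
}

/* The CPU may only touch the buffer after taking ownership back. */
void toy_dma_complete(struct toy_dma_buffer *buf)
{
    buf->owner = OWNER_CPU;
}
```

With an explicit owner field, "who may touch this buffer right now?" has a single unambiguous answer, which is the ownership model the talk argued for.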
Your reporter was going to write up the legendary OLS closing party at the
Black Thorn, but the whole event has become somewhat fuzzy and difficult to
recall. Suffice to say that a lot of fun was had.