As has been widely reported, Andrew Morton recently told an audience at
LinuxTag about his fears that the Linux kernel is getting buggier over
time. That worry resonates with a number of users and developers, many of
whom have never gotten entirely used to the 2.6 development model. The
result of this discussion may be a long look at how the kernel is
developed, culminating in a discussion at the annual Kernel Summit in
Ottawa this July. Easy answers may be difficult to come by, however.
Even the core question - are more bugs being added to the kernel than are
being fixed? - is not straightforward. Many developers have a sort of gut
sense that the answer is "yes," but the issue is hard to quantify. There
is no mechanism in place to track the number of kernel users, the number of
known bugs, and when those bugs are fixed. Some information can be found
in the kernel bug tracker run by OSDL,
but acceptance of this tracker by kernel developers is far from universal,
and only a subset of bugs are reported there. Distributors have their own
bug trackers, but there is little flow of information between those
trackers and the OSDL one; distributor trackers will also reflect problems
(and fixes) in distributor patches which are not in the mainline kernel.
Dave Jones publishes statistics from
the Fedora tracker, but it is hard to know what to make of them.
Part of the problem is that an increasing bug count does not, in itself,
indicate that the kernel is getting worse. A kernel which is larger and
more complex may have more bugs, even if the density of those bugs is going
down - and the 2.6 kernel is growing quickly. Increased scrutiny will
result in a higher level of reported bugs, but a lot of those bugs could be
quite old. The recent Coverity scans, for example, revealed some
longstanding bugs. If the user base is growing and becoming more diverse,
more bugs will be reported in the same code, even if that code has not changed.
Dustin Kirkland has taken a different approach. For each 2.6 kernel
version, he performed a search for "linux 2.6.x", followed by searches for
strings like "linux 2.6.x panic". The trouble reports were then normalized
by the total number of search results, and graphed. Dustin's results
show a relatively stable level of problem reports, with the number of
problems dropping for the most recent kernel releases.
Clearly, there are limits to the conclusions which can be drawn from these
sorts of statistics. The results which show up in Google may not be
representative of the real troubles afflicting Linux users, and the lower
levels for recent kernels may simply reflect the fact that fewer people are
using those kernels. But the fact that these results are as good as
anything else on offer shows how little hard information is available.
Some other efforts are in the works to attempt to quantify the problem -
stay tuned to LWN for information as it becomes available.
In a way, however, whether the problem is getting worse is an irrelevant
question. The simple fact is that there are more kernel bugs than anybody
would like to see, and, importantly, many of these bugs are remaining
unfixed for very long periods of time. So, regardless of whether the
situation is getting worse, it seems worth asking (1) where the bugs are
coming from, and (2) why they are not getting fixed.
The first question has no easy answer. It would be nice if somebody would
look at bug fixes entering the kernel with an eye toward figuring out when
the fixed bug was first introduced - and whether similar bugs might exist
elsewhere. That would be a long and labor-intensive task, however, and
nobody is doing it. In general, the kernel lacks a person whose
time is dedicated to tracking (and understanding) bugs. At the 2005 Kernel
Summit, Andrew Morton indicated that he would like to have a full-time
bugmaster, but this person does not yet exist. If, somehow, such a
position could be funded (it is hard to see it working as a long-term volunteer job),
it could help with the tracking and understanding of bugs - and with
ensuring that those bugs get fixed.
Why bugs do not get fixed might be a little easier to understand.
Certainly part of the problem must be that it is more fun to develop cool
new features than to track down obscure problems. The older development
process - where, at times, new features would not even be merged into a
development kernel for a year at a time - might have provided more
motivation for bug fixing than the 2.6 process, where the merge window
opens every month or two. But feature development cannot be the entire
problem; most developers have enough pride and care about their work to
want their code to work properly.
The kernel is a highly modular body of code with a large development
community. Many (or even most) developers only understand a relatively
small part of it. So it is easy for kernel developers to feel
that the bulk of the outstanding bugs are "not their department" - somebody
else's problem. But the person nominally responsible for a particular part
of the code may be overwhelmed with other issues, unresponsive and
difficult to deal with, or missing in action. Many parts of the kernel
have no active maintainer at all. So problems in many kernel subsystems
tend to get fixed slowly, if at all - especially in the absence of an irate
and paying customer. For this reason, Andrew has encouraged kernel
developers to branch out and address bugs outside of their normal areas.
That is a hard sell, however.
Kernel bugs can be seriously hard to find and fix. The kernel must operate
- on very intimate terms - with an unbelievable variety of hardware and
software configurations. Many users stumble across problems that no
developer or tester has ever encountered. Reproducing these problems can be
impossible, especially if nobody with an interest in the area has the
affected hardware. Tracking down many of these bugs can require long
conversations where the developer asks the reporter to try different things
and come back with the results. Developers often lack the patience for
these exchanges, but, crucially, users often do as well. So a lot of these
problems just fall by the wayside and are not fixed for a long time, if ever.
Bug prevention is an area with ongoing promise. Many of the most
error-prone kernel interfaces have been fixed over the years, eliminating
whole classes of problems, but more can be done. More formal regression
tests could be a good thing, but (1) the kernel developers have, so
far, not found a huge amount of value in the results from efforts like the
Linux Test Project, and
(2) no amount of regression testing can realistically be expected to
find the hardware-related problems which are the root of so many kernel
bugs. Static analysis offers a great deal of promise, but free tools like
sparse still need quite a bit of work to realize that promise.
The end result is that, while there are ways in which the kernel process
can be improved, there is a distinct lack of quick fixes in sight. Fixing
kernel bugs is hard work, and the kernel maintainers lack the ability to
order anybody to do that work. So, while the kernel community can be
expected to come to grips with the problem - to the extent that there is a
problem - the process of getting to a higher-quality kernel could take some time.
Your editor is fortunate enough to live in a town with an excellent radio
station. It is a public station, funded (mostly) by its listeners and
operated (mostly) by volunteers. It is a nearly 30-year-old application of
many free software concepts to the airwaves; appropriately, its name is KGNU. For those who do not live in the area,
or who find the reception problematic here on the edge of the mountains,
KGNU makes a set of streams available over the net; there is even an Ogg stream.
KGNU airs an incredible variety of music and public affairs programming;
much of what is heard there is available nowhere else in the area.
Unfortunately, some of the most interesting programs are not broadcast at
times when it is convenient for your editor to listen to them. Some of the
best music is late at night, and the public affairs programming broadcast during the day tends to be incompatible with the need to write.
As a result, your editor has a strong desire to record shows of interest
and listen to them at a later time. This is, of course, a classic, legal
exercise of fair use rights. For years, this activity has been performed
using a DAT deck, which will happily record a three-hour show without
breaks. Unfortunately, this solution (1) requires somebody to push
the "record" button at the right time, and (2) depends on the continued
operation of an aging piece of audio equipment whose reliability was not
the greatest even when it was new. It would make a lot of sense to,
instead, simply record the audio stream from the net. Recording could be
automated, and the result could be moved to a portable player for later listening.
It is not surprising that proprietary players for streaming media lack a
"record" option. But, one would think, free players would provide such an
obvious bit of functionality. As it turns out, however, most of the free
players which can tune in network streams also lack recording capability.
Whether this omission is simply a matter of other development priorities
coming first or is, instead, a capitulation to the entertainment industry
is not clear. Regardless of why, a Linux user who has fired up totem,
amarok, or xmms to play an audio stream will not readily find a "record" button.
There are, however, a number of options available for those who would
record audio streams on a Linux system. Here are a few that your editor has looked into.
Recording through the sound system
Audio streams passing through the ALSA sound system are generally available
to applications via a capture interface. So, in fact, almost any free
recording application can be used to grab the stream as it passes
through the kernel. A simple example can be made with arecord:
arecord -f cd -d 7200 stream.wav
This command will record a stream in WAV format, automatically stopping
after two hours. Other recording applications (ecasound, ardour, etc.) can
also be used.
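Automating such a recording is then just a matter of a cron job. The schedule, duration, and path in this sketch are purely hypothetical:

0 22 * * 5 arecord -f cd -d 7200 /home/user/shows/friday.wav

This hypothetical crontab entry would capture a two-hour show starting at 10:00pm every Friday.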
There are some downsides to this approach. Recording in this way occupies
the sound system, making it impossible to listen to anything else. Changes
to mixer settings can affect the recording. Depending on the sound
hardware in use, the system might have trouble simultaneously playing an
audio stream and recording it. And, regardless of other problems, this
solution involves several transformations to the audio stream between the
network interface and its eventual resting place on the disk. Your editor
would rather store the stream as it was received from the source.
If the stream of interest is in the Ogg Vorbis format, the ogg123 tool
can be used to capture it. A command like this will do:
ogg123 -d wav -f stream.wav http://stream-url
With a second option (-d oss), ogg123 can simultaneously
play the stream and record it to the disk file. There is an option for
specifying the duration of the recording (useful for grabbing shows via a
cron job), but it did not work properly on your editor's system.
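For the curious, the play-and-record invocation might look like this; it assumes (as the ogg123 documentation suggests) that -f applies to the most recently specified device, so it follows -d wav:

ogg123 -d wav -f stream.wav -d oss http://stream-url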
For whatever reason, ogg123 lacks the ability to save an Ogg
stream directly to disk - it must convert it to the uncompressed WAV format
first. One can always re-encode the stream - at recording time using a
pipe, even - but putting an audio stream through a second round of lossy
encoding cannot do it any good. It would be much nicer to just save the
stream directly to disk.
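For completeness, such a recording-time pipeline might look like the following; it assumes that your ogg123 accepts "-" as a filename meaning standard output, and it carries the lossy re-encoding cost just described:

ogg123 -d wav -f - http://stream-url | oggenc -o stream.ogg -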
If something exists on the net, there is a way to tell wget to
fetch it. Audio streams are no exception; running:

wget -O stream.ogg http://stream-url

will do the trick. No transformations will be applied to the stream - it
will be saved as received from the source, which is as it should be. On
the other hand, wget is not really designed with streams in mind.
In particular, it lacks an option for setting the recording period, making
it a bit harder to run in an automated mode - though a couple lines of
shell scripting suffice to take care of that problem.
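A minimal sketch of that scripting - the filename and duration being arbitrary - might look like:

#!/bin/sh
# Start the recording in the background.
wget -q -O show.ogg http://stream-url &
# Let it run for two hours, then stop the backgrounded wget.
sleep 7200
kill $!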
While most streaming media players lack a record option, mplayer is a notable
exception. A stream can be recorded with a command like:
mplayer -dumpstream -dumpfile stream.ogg http://stream-url
Of course, streams in just about any format can be recorded in this
manner; mplayer will save the stream as it receives it.
The list of options understood by mplayer easily qualifies as one
of the longest for any application anywhere on the planet. A definitive
study could require some months, but, as far as your editor can tell, none
of those options tell mplayer how long it should run. As with
wget, that omission makes mplayer a little harder to use
in an automated mode.
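One workaround - assuming a system which has the GNU coreutils timeout utility, which is not part of mplayer itself - is to bound the recording period externally:

timeout 7200 mplayer -dumpstream -dumpfile stream.ogg http://stream-url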
Some distributions are more enthusiastic about including mplayer
than others. Packages for almost any distribution are readily available,
however, to those who search for them.
The definitive tool for capturing streams may well be streamripper. This utility
will grab a stream and store it to disk, possibly splitting it into
separate tracks as it goes. It can function as a relay, making it possible
to listen to a stream as it is being recorded - or to distribute a stream
around an internal network. In its simplest form, streamripper needs nothing more than the URL of the stream:

streamripper http://stream-url
Options exist to limit recording time, control separation into tracks,
establish a stream relay, and automatically discard advertisements. There
are graphical frontends for GNOME (streamtuner) and KDE (KStreamRipper).
There is also an amarok script.
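As an example of the first two of those options, a time-limited, single-file capture might look like the following; the -l (recording length in seconds) and -a (rip to a single file) flags are taken from streamripper's documentation, so check your version's man page:

streamripper http://stream-url -l 7200 -a show.mp3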
From your editor's point of view, streamripper is the right tool
for this job. It is the only one which was designed for the purpose of
capturing audio streams in their original format. In a pinch,
wget will do the job, as will mplayer. Employing a huge
tool like mplayer for this purpose feels somewhat like using a
nail gun to hang a calendar, however.
For now, we are lucky in that there are quite a few high-quality streams
which can be time-shifted and enjoyed in this manner. Unfortunately, the
future looks to be made up of DRM-encrusted streams and no access for users
of free software. No fair use rights. If we want to live in a world where
broadcast streams are accessible with free tools and developers of stream
players are not afraid to add "record" buttons, we need to ensure that the
legal climate does not become more hostile than it is already. Otherwise,
finding a good stream capture tool could become much harder than it is now.
For today's chapter on the ongoing software patent debacle, let us have a
look at Apple's
patent application #981993. This application, filed in November 2004, has to do with providing an audio interface to a computing device.
In particular, claim 1 reads:
A method for providing an audible user interface for a user of a
computing device, the method comprising: receiving a selection of a
user interface control on the computing device; selecting an audio
file associated with the selected user interface control; and
playing the selected audio file at the computing device such that
an audio prompt is audiblized for the user, the audio prompt
describing the selected user interface control or a displayed user
interface item corresponding to the selected user interface control.
The additional, dependent claims make this technology more specific to
media players in particular. There is another independent claim which
reads like this:
A method for creating an audio file at a host computer system, the
method comprising: receiving a text string at a text to speech
conversion engine; creating an audio file based upon the text
string; and associating the audio file to a media file.
Numerous other claims assert ownership over various combinations of the two
above techniques. In summary, what Apple is claiming is the ability to
create voice files for a media player device, load them onto that device,
and have the device play those files in response to user actions.
This patent would appear to cover a relatively obvious technology.
Speaking computers are not particularly new; corporate voice mail systems
have operated in this way for quite some time. Experience shows, however,
that this sort of prior art often carries little weight in the patent
office. Unless something happens, the chances of Apple winning this patent
would appear to be fairly good.
The Rockbox project has produced a
GPL-licensed firmware distribution which runs on a wide variety of media
players from a
number of vendors - including Apple. Rockbox adds a number of interesting
and useful features; see this LWN
review from last January for more information. One feature of
particular interest at the moment, however, is the voice interface
capabilities built into Rockbox. This feature would appear to be well
described by the Apple patent application; it uses voice files generated on
a host system to allow navigation through the menus in an audible manner.
When the voice mode is enabled, Rockbox's prompts are indeed "audiblized"
for the users.
Rockbox has had this feature since early 2004. That is prior to the filing
of this patent (though not the requisite one year prior), but Apple's
application references an earlier one, filed in 2003. So Rockbox cannot
serve as prior art in this case.
One of the most heartening things your editor has seen over
the last year has been the stream of blind users showing up on the Rockbox
mailing lists. By making this feature available, Rockbox has made media
players accessible to a broad community of users who have been ignored by
the manufacturers of these devices. It is a beautiful example of how the
free software community can meet the needs of a user community which is not
seen as being profitable in the proprietary world. Apple may have been
busy filing patents back in 2003, but it was Rockbox which first brought a
voice interface to the iPod.
The voice menu feature in Rockbox has been an empowering addition for a
number of people. The idea that it could be shut down by this patent is
appalling. But Apple will have a clear incentive to do exactly that:
Rockbox turns the competition's players into much nicer devices. Should
Apple's near-monopoly on media players begin to erode (and there is no real
reason why it should last forever), Apple will, beyond doubt, reach for
legal weapons which might inhibit competing offerings. Apple has done that
before, after all.
This particular weapon should be neutralized before it becomes a real
threat. It is a fight which should be winnable - the idea of an audio
interface was not first conceived in 2003. But without some determined
resistance, Apple may well obtain the patent it is asking for. At that
point, the free software community will (in the U.S., at least) be fenced
out of an area which it explored before - and better than - anybody else.
Page editor: Jonathan Corbet
Inside this week's LWN.net Weekly Edition
- Security: X.Org vulnerabilities and responses; New vulnerabilities in busybox, firefox, kernel, mysql, ...
- Kernel: Multi-protection VMAs; Random number safety; The Xen patches.
- Distributions: A kernel for Dapper and Etch; SUSE Linux 10.1; Edubuntu Council Elected; Complete Fedora board named
- Development: What next for the Xfce Project?, new versions of MySQL, DBD::Pg, LAT,
CUPS, Sussen, Infrae Document Library, TwoLAME, PLplot, Open Administration for Schools, XCircuit, lbDMF, MH-E, CLAM Music Annotator, Dino, PiTiVi,
Firefox CCK, Pooter, PHP.
- Press: Everyone Wants to 'Own' Your PC, kernel getting buggier, naming conventions,
Kubuntu and KDE meeting, LinuxWorld Canada, SGI Files for Bankruptcy,
BitTorrent and Warner Bros., Legal analysis of GPLv3 patent provisions,
using strace, Phonon and KDE multimedia, ODF for MS Office.
- Announcements: Novell Open Workgroup Suite, WIPO drops webcasting, Summer of Code
announcements, Java EE 5 spec, PyWeek winners, Stallman at Porto Alegre,
Desktop Architects slides, NetBeans Day, KDE-Artists revamped.