The current stable 2.6 kernel is 2.6.16.16, released on May 10. It
contains yet another security fix; this one is for a denial of service problem
in the filesystem code.
The current 2.6 prepatch is 2.6.17-rc4, released on May 11. It is
almost entirely made up of fixes; Linus says "this is the time to
hunker down for 2.6.17." The long-format
changelog has the details.
Nearly 100 patches have been merged into the mainline git repository since
-rc4 was released; they are almost all fixes.
The current -mm tree is 2.6.17-rc4-mm1. Recent changes
to -mm include CacheFS, a
patch making address space operations constant, the deprecation of smbfs
(see below), the per-task delay
accounting patches, eCryptfs, and klibc, a
lightweight C library for use in initramfs code.
Kernel development news
I could set up a nice business here selling second-hand brown paper
-- Andrew Morton
I think actually we're heading towards needing Linux V2 - the
rewrite. It seems that fixing simple bugs cause[s] other bugs, and
that means we're heading into a maintainability nightmare.
-- Russell King
The venerable smbfs code allows Linux systems to mount filesystems exported
via the SMB protocol. It thus can be used for accessing files exported
from a Windows system. This filesystem has seen a lot of use over the
years, but has, in recent times, been overtaken by the newer CIFS
filesystem. At this point, CIFS receives almost all of the developer
attention, and most users have (or, at least, should have) moved over.
As an example of the difference in how smbfs and CIFS are maintained,
consider the 2.6.16.10 stable
kernel update, which contained a fix for a security problem in the CIFS
code. Though CIFS has its roots in smbfs, nobody was paying enough
attention to realize that smbfs might suffer from the same vulnerability.
Thus, while 2.6.16.10 fixed the CIFS problem on April 24, the matching
smbfs fix (which forced 2.6.16.14) did not appear until
May 4, ten days later. In the meantime, smbfs was vulnerable to a
known bug, for anybody who thought to look for it.
The 2.6.17-rc4-mm1 kernel recognizes the unmaintained nature of smbfs with
a patch marking it as being deprecated and slated for eventual removal.
All remaining users are encouraged to move over to the CIFS implementation
instead. For some users, the end has come sooner - the Fedora Core 5
kernel already does not support smbfs. Since an alternative is in the
kernel and ready to go, this migration should not be a big problem.
It is a nice scenario, but there is one little problem: the CIFS code
cannot work with Windows 95 and Windows 98 systems. Without
smbfs, Linux users will not be able to mount shares exported from hosts
running those old versions of Windows. Some observers have commented that
those versions of Windows are too old to support, but Linus isn't buying it:
But we do _not_ drop features just because they are deemed
"unnecessary". As long as somebody actually _uses_ smbfs, and as
long as those users are willing to test and perhaps send in patches
for when/if it breaks, we should not drop it.
The word from Andrew Morton is that Windows 9x support for CIFS is in the works
and should, with luck, be ready in time to go into 2.6.18. If things
happen that way, then the 2.6.18 kernel might just include a deprecation
notice for smbfs, and smbfs could be marked "broken" by the end of the
year. Anybody still using smbfs should consider themselves warned.
Jeff Garzik has recently let it be known
that he has merged a large set of patches to the serial ATA (SATA)
subsystem. Says Jeff: "If all goes well, this update should improve
error handling, solve several outstanding, difficult-to-solve bugs, and
provide a good foundation for adding some nifty features in the
future." His plans are to get the new code merged into the 2.6.18
kernel, once that cycle begins. The result could be a significantly
different experience for Linux SATA users, some of whom have been fighting
problems for some time.
The patches themselves have been posted to the linux-ide list. They make
for some imposing reading: 122 patches, divided into eleven sets.
This flood of code is primarily the work of Tejun Heo, though Jens Axboe
and Albert Lee have also played a significant part. In brief, the
patch set includes:
- A completely reworked libata error handler. This code makes up about
a third of the total set of patches, and cleans up a lot of things.
It creates a modularized error handling mechanism which allows
low-level drivers to intervene or change the response at various
points in the process. Memory needed for error handling is now
allocated ahead of time, minimizing the possibility for complications
just when things are already going wrong. There is a special circular
buffer set aside for recording errors; this information is used, for
example, within the recovery code to determine that the error rate is
too high and that transmission speed should be lowered.
The result of all this work should be a much more robust SATA
subsystem which can recover from a much wider range of errors.
- A new programmed I/O loop which uses interrupts, rather than the older
method of polling the controller from a kernel thread. In cases where
programmed I/O is needed, the new code should be more efficient.
- Native Command Queuing (NCQ). NCQ is the SATA version of tagged
command queuing - the ability to have several I/O requests to the
same drive outstanding at the same time. NCQ eliminates the idle time
between when one command completes and the next is issued, but the
real advantage is with the ordering of operations. The Linux block
I/O subsystem attempts to issue block I/O requests in an efficient
order, but it must use a certain amount of guessing, since there is no
way to know how the blocks are really organized on the disk. But the
drive itself knows very well where each block lives, so it is well
placed to optimize the ordering of requests. The result can be a
significant improvement in performance.
The Linux NCQ implementation can have up to 32 operations outstanding
at any given time - though both the drive and the host controller can
reduce that number. Your editor is not aware of any relative
performance benchmarks which have been posted.
- Hotplug support is another large piece of the patch set. With these
patches in place, the SATA layer can deal with drives which come and
go - as long as the underlying hardware was designed with hotplugging
in mind. There is also a "warmplug" capability for more limited
hardware, where a system user can request the addition or removal of
drives on a running system.
- A new layer (called "ata_link") has been added to libata; ata_link
handles the physical-layer connection to the drives. The main
motivation for ata_link appears to be making it possible to support SATA
port multipliers, which expand the number of drives which can be
plugged into a system. The current port multiplier code supports the
"frame information structure" switching mode, whereby all connected
drives can be active simultaneously. For now, it only works
with the sil24 driver, but support for others will certainly come.
Most of this code has been under development and discussion for some time.
The sense (among its developers) is that the bulk of it is ready to go into
2.6.18, though the hotplug, ata_link, and port multiplier code may have to wait for another cycle. Andrew
Morton has expressed some concerns about
merging all of this code when a rather long list of SATA-related bugs
remains outstanding; Jeff responded that
this code will fix many of the bugs and make tracking down many of the rest
easier. So, chances are, 2.6.18 will include a much-improved SATA layer.
There are a number of virtualization technologies available for Linux, some
of which have gained a lot of headlines in the last year or two. One of
the oldest and most interesting, however, maintains a lower profile.
User-mode Linux (UML), first implemented by Jeff Dike, takes a unique
approach to virtualization. A UML kernel runs within a process on a normal
Linux host; it is, essentially, a special port of the kernel designed to
run within another Linux system. As a result, a UML system looks like a
series of ordinary processes on the host; it can be managed (and debugged)
like any other process tree.
UML can be somewhat intimidating at first. It brings a new set of acronyms
and a whole set of complex configuration options. As with many parts of
Linux, the documentation available for UML has not always been everything
one might want. So the publication of User Mode Linux,
written by the same Jeff Dike, is a welcome event. This book is part of
the Bruce Perens Open Source Series, meaning that it will be released under
the Open Content License later this year. For now, however, the book must
be obtained the old-fashioned way. For those interested in UML, it should
be a worthwhile investment.
The book adopts a tutorial format, starting with an introduction to UML and
virtualization in general. It provides a walk-through of a simple UML
session, then introduces virtual disks and network interfaces.
The core of the book is a series of chapters on managing UML and connecting
it with the host system (and other UML instances). So there is a chapter
on filesystem management, including details on how to provide restricted
access to filesystems on the host. A detailed chapter on networking has
been provided. UML has several possible network transports which can be
used to create isolated networks for UML systems or to connect those
systems to the wider world; this chapter covers them all and provides
guidance on how to choose between them. Then there is a chapter on the
management interface to UML.
The final set of chapters looks at configuring UML for specific tasks.
Chapter 11 talks about building UML from source. In your editor's opinion,
that chapter comes a little late; everything to that point has simply
assumed that UML is already available on the reader's system. Some
distributions have UML packages, but others do not. So some early guidance
on how to build a UML system and create an initial filesystem for it to
boot from would have been nice. The book finishes with some talk of the
(ambitious) future plans for UML and a couple of reference sections.
There is no clear information on just which version of UML is covered - an
unfortunate omission. The sample boot output in the introductory chapter
shows 2.6.10 and 2.6.11-rc kernels.
Minor quibbles aside, it is hard to find much to complain about in Jeff's
book. It provides a much-needed reference for an important Linux
virtualization mechanism. There are a number of possible uses for UML,
including kernel development, server consolidation, embedded systems
development, experimenting with different distributions, or the simple joy
of running a large cluster on one's laptop. Regardless of their goal, UML
users will find this book to be a worthwhile addition to their shelves.
Page editor: Jonathan Corbet