The current development kernel is 2.6.38-rc6, released
on February 21. Linus said:
Diff-wise, the most noticeable thing here is removal of the /proc
interface from the target code (so that we don't make a release with
deprecated interfaces). But ignoring that (and some arm mach/map.h
cleanup patches), the diffs really are pretty small.
But what is probably actually noticeable is a lot of small fixups,
mostly in drivers. Nothing really exciting, I'm afraid. Or not afraid,
since excitement at this stage in the -rc series is a bad thing.
The short-form changelog is in the announcement, or see the
full changelog for the details.
Stable updates: the 2.6.32.29, 2.6.36.4, and 2.6.37.1 stable kernels were released on
February 17. They contain lots of
fixes all over the tree. Also, this will be the last of the 2.6.36.x
stable kernels: "[...] you should move to the .37 kernel series as this is
the last .36 kernel to be released. It's now 'end of life', 'dead',
'buried', 'pining for the fjords', or whatever term you and your
company uses for things that are no more."
The 2.6.37.2 update is in the review
process as of this writing. It contains 70 fixes, and should be released
on or after February 24.
The structure passed is the structure abused.
-- Al Viro
Distributed systems are tricky. That's one reason I work in
security, it is so much simpler.
-- Casey Schaufler
Incidentally, many Linux filesystem implementations don't have
especially robust error handling for failures during attempts to
mount corrupt filesystems. As an example, I have a deliberately
corrupted btrfs filesystem that triggers a BUG() if you attempt to
mount it. I formatted a USB stick with this filesystem, so now I
have a USB stick that will panic the kernels of distributions that
support auto-mounting, in some cases even when the screen is locked.
-- Dan Rosenberg
Intel's GMA500 graphics chipset has been a source of pain for a few years;
unlike almost everything else from Intel, it lacks a free driver. That
situation appears to be changing, though: Alan Cox has posted a new GMA500 driver
for the staging tree.
Alan says "Currently it's unaccelerated but still pretty snappy even
compositing with the frame buffer X server.
" It seems that quite a
bit of work is needed (the driver is going into staging for a reason), and
it's not clear when (or how) proper 3D support will be added, but it's a
step in the right direction.
The FIEMAP ioctl()
command can be used to learn about how
a file's blocks are laid out on the disk. It's useful for determining
fragmentation, optimizing boot-time readahead order, and a number of other
things. One of those other things, though, has turned up bugs in how a
couple of important filesystems implement FIEMAP.
The cp application, it seems, has recently been taught to use
FIEMAP to find holes in files. The idea is to optimize the
copying of such files by not even reading the holes; that way, the need to
zero-fill pages (in the kernel) and compare them against pages full of
zeros (in user space) can be eliminated. It seems like a better way of
doing things.
Somewhere along the way, Chris Mason got word that cp was
corrupting files on btrfs filesystems. The problem, naturally enough, was
that FIEMAP was reporting holes where none should exist. The root
cause was that FIEMAP was not prepared to deal with regions of a
file which have been written to, but which do not actually have blocks
assigned yet. The delayed allocation mechanism used by most contemporary
filesystems will create exactly that kind of situation, so this is not a
rare occurrence.
Chris fixed the problem for btrfs, then
decided to see how other filesystems handled the same situation. From his report, xfs handled things well, but ext4
had similar bugs in situations where delayed allocation and real holes came
together in the same file. Certain types of bugs, it seems, are likely to
turn up in more than one context.
Chris's fix should get into 2.6.38 before the final release; chances are
good that an ext4 fix will be fast-tracked as well. Expect stable kernel
backports too. In the meantime, be careful when copying recently-written
files with new versions of cp on those filesystems.
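For readers who have not used it, the sketch below shows roughly what a
FIEMAP walk looks like from user space; it is illustrative only (the
32-extent buffer and the output format are arbitrary choices, and this is
not the code cp uses). Any range not covered by a returned extent is
assumed to be a hole - precisely the assumption that went wrong with
delayed allocation. Passing FIEMAP_FLAG_SYNC asks the filesystem to flush
data first, which reduces (but does not eliminate) exposure to unwritten
delalloc regions.

    /* Minimal FIEMAP walker: prints each extent and flags any that are
     * still in the delayed-allocation state.  Illustrative only. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/fs.h>
    #include <linux/fiemap.h>

    int main(int argc, char **argv)
    {
    	if (argc < 2) {
    		fprintf(stderr, "usage: %s <file>\n", argv[0]);
    		return 1;
    	}
    	int fd = open(argv[1], O_RDONLY);
    	if (fd < 0) { perror("open"); return 1; }

    	/* Room for 32 extents; a real tool would loop until it sees
    	 * an extent marked FIEMAP_EXTENT_LAST. */
    	size_t size = sizeof(struct fiemap) +
    		      32 * sizeof(struct fiemap_extent);
    	struct fiemap *fm = calloc(1, size);

    	fm->fm_start = 0;
    	fm->fm_length = ~0ULL;			/* whole file */
    	fm->fm_flags = FIEMAP_FLAG_SYNC;	/* flush dirty data first */
    	fm->fm_extent_count = 32;

    	if (ioctl(fd, FS_IOC_FIEMAP, fm) < 0) {
    		perror("FIEMAP");
    		return 1;
    	}

    	for (unsigned i = 0; i < fm->fm_mapped_extents; i++) {
    		struct fiemap_extent *fe = &fm->fm_extents[i];
    		printf("logical %llu len %llu%s\n",
    		       (unsigned long long)fe->fe_logical,
    		       (unsigned long long)fe->fe_length,
    		       (fe->fe_flags & FIEMAP_EXTENT_DELALLOC) ?
    		       " (delayed allocation)" : "");
    	}
    	/* Any byte range not covered by an extent looks like a hole to
    	 * the caller - the assumption that burned cp here. */
    	free(fm);
    	close(fd);
    	return 0;
    }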
Kernel development news
Over the last week or so a number of interesting topics have come up with
regard to the low-level functioning of the block layer. This article will
survey a few of these topics.
Enforcing read-only: The block layer has a mechanism by which a
driver can mark a specific device (or partition) as being read-only. This
flag may be set if the physical device is write-locked; it can also be set
by higher-level code (the DM or MD layers, for example) when the
administrator creates a read-only device. Tejun Heo discovered an
interesting thing, though: this flag is not enforced within the block
layer. An attempt to open a write-protected device for write access will
succeed, and the block layer will happily issue write operations to a
read-only device. That struck Tejun as wrong, so he put a patch into 2.6.38
which addresses part of the problem: an attempt to open a read-only device
for write access will be blocked.
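For reference, the flag in question is the one user space can query and set
with the BLKROGET and BLKROSET ioctls (or with blockdev --setro); a minimal
sketch, not anything taken from Tejun's patch:

    /* Query and set a block device's read-only flag from user space.
     * Sketch only; point it at a scratch device. */
    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/fs.h>

    int main(int argc, char **argv)
    {
    	if (argc < 2) {
    		fprintf(stderr, "usage: %s <blockdev>\n", argv[0]);
    		return 1;
    	}
    	int fd = open(argv[1], O_RDONLY);
    	if (fd < 0) { perror("open"); return 1; }

    	int ro;
    	if (ioctl(fd, BLKROGET, &ro) == 0)
    		printf("%s is %s\n", argv[1],
    		       ro ? "read-only" : "writable");

    	/* Mark the device read-only; historically this has not kept
    	 * anybody from opening it for write access, which is the
    	 * behavior under discussion. */
    	ro = 1;
    	if (ioctl(fd, BLKROSET, &ro) < 0)
    		perror("BLKROSET");

    	close(fd);
    	return 0;
    }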
It turns out, though, that this check breaks things. Since enforcement of
read-only status has never been done, developers have been careless about
how they open block devices. So, with this patch in place, the loop
device, device mapper, and MD all break when trying to open a read-only
device, even if the ultimate goal is read-only access. Breaking things on
this scale is not one of the stated goals of the 2.6.38 development cycle,
so Chuck Ebbert has posted a patch
reverting the change; some version of this patch is likely to be merged
before the final 2.6.38 release.
In-kernel code which is careless about open permissions can easily be
fixed, but fixing the user-space utilities will take rather longer. So
this check probably cannot be put into the open() path anytime
soon. Beyond that, as Linus pointed out,
it may never really be the right thing to do; there are times when it may
be necessary to open a read-only device for write access. Real enforcement
of read-only status, if it is to be done in the block layer, probably needs
to happen when operations are actually submitted to the device. How many
things that would break remains to be seen.
Stable pages: Linux has had support for block data integrity checking since 2008. In
short, this feature takes advantage of suitably-equipped hardware to ensure
that data is not corrupted between the host and its destination in
persistent storage. Before writing a block to a device, the kernel will
calculate a checksum and send it with the data; if the data, once written
by the device, no longer matches the checksum, the device will signal an
error. This mechanism can increase overall confidence that the system is
storing data without corrupting it.
There is one little problem, though. Imagine a sequence of events where
the kernel calculates a checksum for a specific block, issues a write
operation, then goes on to do more interesting things. Before the block
controller gets around to acting on the request, some process comes along
and changes the contents of the block. At this point, the checksum will no
longer match, and the operation can fail. What is the best way to respond
to (or, better, prevent) this outcome?
Darrick Wong has addressed this problem with a
patch which takes a possibly heavy-handed approach: when integrity
checking is in use, blocks will be copied before the checksum is calculated
and the I/O operation initiated. The rest of the system can then do
anything it wants with the original data; the data as it existed when the
write operation was queued will be written to the device. This approach
will certainly work, but the cost is clear: an extra copy operation is
added to the write path. That is not a cost that sits well with all
developers.
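The race itself is easy to picture with a toy user-space analogy (ordinary
C, not kernel code, with a trivial XOR standing in for the real integrity
CRC): once the checksum has been computed, any further modification of the
live buffer guarantees a mismatch, while checksumming a private copy does
not care what happens to the original afterward.

    /* Toy illustration of the integrity race and the copy-based fix. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    static unsigned char checksum(const unsigned char *buf, size_t len)
    {
    	unsigned char c = 0;
    	while (len--)
    		c ^= *buf++;
    	return c;
    }

    int main(void)
    {
    	unsigned char page[4096];
    	memset(page, 0xaa, sizeof(page));

    	/* Racy version: checksum the live buffer... */
    	unsigned char sum = checksum(page, sizeof(page));
    	/* ...then someone rewrites the page before the device sees it. */
    	page[100] = 0x55;
    	printf("live buffer: %s\n",
    	       sum == checksum(page, sizeof(page)) ? "match" : "MISMATCH");

    	/* Copy-based fix: snapshot the page, then checksum and "write"
    	 * the snapshot instead of the live page. */
    	unsigned char *copy = malloc(sizeof(page));
    	memcpy(copy, page, sizeof(page));
    	sum = checksum(copy, sizeof(page));
    	page[200] = 0x77;	/* later changes no longer matter */
    	printf("snapshotted: %s\n",
    	       sum == checksum(copy, sizeof(page)) ? "match" : "MISMATCH");

    	free(copy);
    	return 0;
    }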
The proper way to solve this, for some value of "proper," is implementing
"stable pages" within the filesystem code. In essence: a page which is
under writeout becomes immutable; any process trying to change that page's
contents will block until the write operation is complete. This solution
is not universally popular either; it is said to have an adverse impact on
at least one
benchmark regardless of whether integrity checking is in use. As Jan Kara
noted, the best-performing approach will
not be the same for everybody:
In fact what is going to be faster depends pretty much on your
system config. If you have enough CPU/RAM bandwidth compared to
storage speed, you're better [off] doing copying. If you can barely
saturate storage with your CPU/RAM, waiting is probably better for you.
Some people also like the fact that the block-copying approach puts the
pain on users of the integrity-checking features while not hurting other
users - assuming that the cost of all those page allocations and copies
doesn't affect anybody else. That said, stable pages look like they will
be the approach taken in the future; as Martin Petersen pointed out, there are a number of filesystem
features - encryption, for example - which depend on it. Work is underway
to add this capability to a number of filesystems; at the moment, only
Btrfs has proper stable page support.
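In rough terms, the stable-page approach comes down to waiting for writeback
to finish before letting a page be modified again. The fragment below is a
hypothetical sketch of what that looks like in a filesystem's page_mkwrite()
handler; the function name is invented, and a real implementation must do a
similar wait in the buffered write path (->write_begin()) as well, since
page_mkwrite() only covers writes made through mmap().

    #include <linux/mm.h>
    #include <linux/pagemap.h>

    /* Sketch: keep a page "stable" by refusing to let it be redirtied
     * while writeback is in progress.  Hypothetical filesystem code. */
    static int myfs_page_mkwrite(struct vm_area_struct *vma,
    			     struct vm_fault *vmf)
    {
    	struct page *page = vmf->page;

    	lock_page(page);
    	/* If this page is currently being written to the device, block
    	 * the would-be writer until the I/O completes. */
    	wait_on_page_writeback(page);

    	/* ... normal page_mkwrite work: reserve space, dirty the page ... */

    	return VM_FAULT_LOCKED;	/* return with the page still locked */
    }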
Comprehensive block I/O throttling coverage. Last week's Kernel
Page featured hierarchical I/O scheduling;
that work fills in an important feature, but the limitations of the (quite
new) bandwidth controller don't stop there. One of its larger shortcomings
is that it only really works with I/O submitted directly from process
context. When I/O is initiated by the kernel (in particular, when the
writeback code flushes dirty pages to disk), the controller is unable to
associate the pages with the process that dirtied them. Since on
many (or most) systems most block I/O writes are generated that way, it is
easy to see that the block I/O controller's coverage is somewhat limited at
best.
Andrea Righi has posted a patch set which
is meant to lift that limitation by tracking the ownership of all dirty
pages in the system. There is code in the kernel now which can do that
ownership tracking; the memory usage controller needs that information to
do its job. So Andrea's patch generalizes the ownership tracking code and
makes it serve the I/O controller's purposes as well. Half of the existing
flags field in struct page_cgroup is taken to hold an
index describing which control group the page belongs to. That differs from
how the memory controller uses this structure - the latter stores a direct
pointer to its mem_cgroup
structure - but it does have the advantage of not increasing the size of
the page_cgroup structure.
That advantage is not to be undervalued: struct page_cgroup
shadows struct page, so one can exist for almost every page in the
system. Even a little bit of overhead adds up quickly when such large
quantities are involved. That overhead will be the biggest disadvantage of
this new feature; anybody who wants to throttle block I/O bandwidth, and
who is not also using the memory controller, will pay a significant cost in
increased kernel memory use. The payback is that block I/O throttling
actually works as intended; without page tracking, it can only cover I/O
submitted directly from process context.
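The flags trick itself is ordinary bit packing; the sketch below illustrates
the general idea in plain C with invented names rather than reproducing
Andrea's actual patch: the low half of the word keeps its existing flag
bits, while the high half holds the control-group index.

    /* Generic illustration: keep page flags in the low half of a word
     * and a control-group index in the high half, so no new field is
     * needed in the shadowing structure. */
    #include <stdio.h>

    #define ID_SHIFT	(sizeof(unsigned long) * 8 / 2)	/* 32 on 64-bit */
    #define FLAGS_MASK	((1UL << ID_SHIFT) - 1)

    static unsigned long set_blkio_id(unsigned long flags, unsigned long id)
    {
    	return (flags & FLAGS_MASK) | (id << ID_SHIFT);
    }

    static unsigned long get_blkio_id(unsigned long flags)
    {
    	return flags >> ID_SHIFT;
    }

    int main(void)
    {
    	unsigned long flags = 0x5;		/* existing flag bits */
    	flags = set_blkio_id(flags, 42);	/* page belongs to group 42 */
    	printf("flags 0x%lx, group %lu\n",
    	       flags & FLAGS_MASK, get_blkio_id(flags));
    	return 0;
    }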
The kernel's debugfs filesystem is meant to be a place where kernel
developers can place any information which seems to be of value to
somebody. Unlike the kernel's other virtual filesystems (/proc, for
example), debugfs has an explicit "no rules" rule. Anything
developers want to put there is fair game, without regard for taste,
(hypothetically) ABI stability, or perceived usefulness. "No rules" does
not extend as far as compromising the security of the system, though,
which has led to an attempt to lock debugfs down.
Eugene Teo recently posted a request for CVE
numbers for 20 separate vulnerabilities involving world-writable files
in debugfs and sysfs. Some of the debugfs vulnerabilities would seemingly
allow any local user to write arbitrary values into device registers - a
situation from which little good can be expected to emerge. Expect yet
another set of kernel updates in the near future as these holes are closed
and fixes are made available to users.
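The mistakes behind most of these holes are as mundane as the mode argument
passed when a debugfs file is created. A hypothetical driver fragment
showing the problem pattern and the obvious alternative:

    /* Hypothetical driver debugfs setup.  The first call creates a
     * world-writable knob - the class of bug behind the CVE requests -
     * while the second restricts writes to root. */
    #include <linux/module.h>
    #include <linux/debugfs.h>

    static u32 reg_value;
    static struct dentry *dir;

    static int __init example_init(void)
    {
    	dir = debugfs_create_dir("example", NULL);

    	/* Bad: any local user can write to this "register". */
    	debugfs_create_u32("reg_unsafe", 0666, dir, &reg_value);

    	/* Better: readable by all, writable only by root. */
    	debugfs_create_u32("reg", 0644, dir, &reg_value);
    	return 0;
    }

    static void __exit example_exit(void)
    {
    	debugfs_remove_recursive(dir);
    }

    module_init(example_init);
    module_exit(example_exit);
    MODULE_LICENSE("GPL");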
In response to these vulnerabilities, Kees Cook posted a patch which would cause debugfs to be
mounted with root-only access permissions. That way, any future mistakes
in debugfs would be inaccessible to nonprivileged users and, thus, would
not be a new vulnerability in need of fixing. The patch was not received
well; it looks suspiciously like a rule in a land where there are supposed
to be no rules. Greg Kroah-Hartman responded:
It's just stupid mistakes being made here, don't try to lock down
the whole filesystem for just a handful of bugs.
Kees suggested that these mistakes could keep on happening, and that "no
rules" might not be the best approach, but Alan Cox responded:
It's a debugging fs, it needs to be "no rules" other than the obvious
"don't mount it on production systems"
There is one little problem with the idea of not mounting debugfs on
production systems, though: there is useful stuff in that filesystem. At
the top of the list must certainly be the control files for perf and
ftrace; most of our nice, new tracing infrastructure will not work without
debugfs. There are also knobs for tweaking scheduler features, interfaces
for the "usbmon" tool, interfaces used by Red Hat's kvm_stat tool, and so
on. There is enough useful stuff in debugfs that it can be found
mounted well outside of kernel debugging environments; it has reached the
point that Greg challenges the idea that
debugfs should not be mounted on production systems:
No, not true at all, the "enterprise" distros all mount debugfs for
good reason on their systems.
"No rules" and "mounted on enterprise systems" seems like a bad
combination; it would be nice to make things more secure. A number of
proposals have been floated to do that, including:
- Teach the checkpatch.pl tool to look for world-writable debugfs
files and complain about them. This step has already been taken; the
version of checkpatch.pl found in 2.6.38 will point out
world-writable files in either debugfs or sysfs.
- Disallow world-writable files in debugfs. A patch has been posted to
this effect; so far, there have been few comments to indicate whether
such a restriction would look too much like a rule for debugfs or not.
- Move generally useful interfaces out of debugfs to a place with a bit
less of a wild-west flavor, then leave debugfs unmounted on most
systems. This is an idea which makes a lot of sense on the face of
it, but it can also run into practical difficulties. Moving
interfaces requires possibly cleaning them up, making a stronger
commitment to ABI compatibility going forward and, importantly,
breaking tools which depend on the current location of those interfaces.
The last concern could be a show stopper; it could force developers to
maintain both the old and new interfaces in parallel for some years. Many
developers, faced with that sort of task, may just decide to leave the
interface where it is. Debugfs is not supposed to have any ABI guarantees,
but, as has become clear in the past, such
a policy does not necessarily prevent the creation of an ABI which must be
maintained going forward.
So debugfs on production systems seems likely to be with us for some time.
Given that, there is no alternative to making it more secure. The
checkpatch.pl change is a good start, but it cannot take the place of
proper code review. Reviewers have a tendency to skip over debugfs code,
but, if that code is to run on important systems, that tendency must be
fought. Debugfs code must uphold the security of the system just like any
other kernel code.
Flash drives are getting larger and cheaper; as a result, they are showing
up in an increasing number of devices. These drives are not the same as
the rotating-media drives which preceded them, and they have different
performance characteristics. If Linux is to make proper use of this class
of hardware, it must drive it in a way which is aware of its advantages and
limitations.
This article will review the properties of typical flash devices and list
some optimizations that should allow Linux to get the most out of low-cost
flash drives. The kernel working group of the Linaro project is currently
researching this topic as an increasing number of embedded designs move
away from raw NAND flash devices to embedded MMC or SD drives that hide the
NAND interface and provide a simplified linear block device. This drives
down system design complexity and cost but also means that regular
block-oriented filesystems are used instead of the Linux MTD layer that
can talk to raw flash.
Most filesystems and the block layer in Linux are highly optimized
for rotating media, in particular by organizing all accesses to
avoid seeks. It has become clear that some of these optimizations
are pointless or even counterproductive with solid-state storage media.
In recent kernels, there is a per-device flag for non-rotational
devices that causes them to be treated slightly differently, by assuming that
all seeks are free, but is that really enough to get good I/O
performance on solid state drives? High-end drives are
getting fast enough to make optimizations for CPU load more interesting
than optimizations for ideal access patterns. In contrast, the
more common SD cards and USB flash drives are very sensitive
to specific access patterns and can show very high latencies for writes
unless they are used with the preformatted FAT32 file layout.
As an example, a desktop machine using a 16 GB, 25 MB/s CompactFlash
card to hold an ext3 root filesystem ended up freezing the user interface
for minutes during phases of intensive block I/O, despite having
gigabytes of free RAM available. Similar problems often happen on
small embedded and mobile machines that rely on SD cards for their file
storage.
To understand why this happens, it is important to find
out how the embedded controllers on these cards work. Since very
little information is publicly documented, most of the following
information had to be gathered using reverse engineering based
on timing data collected from a large number of SD cards and other flash
drives.
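Nothing exotic is needed to gather such data; the idea is simply to time
direct I/O accesses of varying sizes and offsets and look for the points
where latency jumps. A much-simplified sketch follows (the real
measurements were made with a dedicated tool, mentioned at the end of this
article):

    /* Time aligned O_DIRECT reads of one size from a flash device.
     * Simplified sketch; a real survey varies sizes, offsets and write
     * patterns and repeats each measurement many times. */
    #define _GNU_SOURCE
    #include <stdio.h>
    #include <stdlib.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <time.h>

    static double now(void)
    {
    	struct timespec ts;
    	clock_gettime(CLOCK_MONOTONIC, &ts);
    	return ts.tv_sec + ts.tv_nsec / 1e9;
    }

    int main(int argc, char **argv)
    {
    	size_t size = 65536;		/* one access size to test */
    	void *buf;

    	if (argc < 2) {
    		fprintf(stderr, "usage: %s <device>\n", argv[0]);
    		return 1;
    	}
    	int fd = open(argv[1], O_RDONLY | O_DIRECT);
    	if (fd < 0) { perror("open"); return 1; }
    	if (posix_memalign(&buf, 4096, size)) return 1;

    	for (int i = 0; i < 8; i++) {
    		double t = now();
    		if (pread(fd, buf, size, (off_t)i * size) != (ssize_t)size)
    			perror("pread");
    		printf("read %zu bytes at offset %zu: %.3f ms\n",
    		       size, (size_t)i * size, (now() - t) * 1000);
    	}
    	close(fd);
    	return 0;
    }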
Pages, erase blocks and segments
All NAND flash chips are physically organized into "pages" and "erase blocks."
A page is the smallest unit that can be addressed in a single
read or write operation by the embedded microcontroller on a managed
flash device, and it has an effective size between 2KB and 32KB
in current consumer flash drives. This means that while a single
512-byte access is possible on the host interface (USB, ATA, MMC, ...),
it takes almost the same time as a full page access inside of the drive.
Although it is usually possible to write single pages, the data cannot be
overwritten without being erased first, and erasing is only possible
in much larger units, typically between 128KB and 2MB. The controllers
group these erase blocks into even larger segments, called "erase block
groups," "allocation units," or simply "segments." The most common size for
these segments is 4MB for drives in the multi-gigabyte class, and all
operations on the drive happen in these units; in particular, the drive
will never erase any unit smaller than a segment.
The drives have a single lookup table which contains a mapping between
logical and physical segments. On a typical 8GB SD card using 4MB segments,
this table contains a little under 2000 entries, which is small enough to be kept
in the RAM of the card's microcontroller at all times. A small number
of physical segments is set aside in a pool to handle wear leveling,
bad blocks and garbage collection.
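A toy model of that lookup table may make the behavior easier to picture;
the names, sizes, and free-pool handling below are purely illustrative, and
a real controller keeps a partially-written segment open rather than
remapping on every single write:

    /* Toy flash-translation model: logical segments are remapped to
     * physical segments taken from a small spare pool when rewritten. */
    #include <stdio.h>

    #define SEGMENT_SIZE	(4 * 1024 * 1024UL)	/* 4MB segments */
    #define NUM_SEGMENTS	2000			/* roughly an 8GB card */
    #define FREE_POOL	8			/* spare segments */

    static int map[NUM_SEGMENTS];	/* logical -> physical segment */
    static int next_free;		/* trivial stand-in for the pool */

    static void rewrite(unsigned long offset_bytes)
    {
    	int logical = offset_bytes / SEGMENT_SIZE;
    	int old = map[logical];

    	/* Rewriting a segment: take a fresh physical segment, merge the
    	 * data into it, then retire the old one to the free pool. */
    	int fresh = NUM_SEGMENTS + (next_free++ % FREE_POOL);
    	map[logical] = fresh;
    	printf("logical segment %d: physical %d -> %d (old one erased)\n",
    	       logical, old, fresh);
    }

    int main(void)
    {
    	for (int i = 0; i < NUM_SEGMENTS; i++)
    		map[i] = i;			/* identity mapping at first */

    	rewrite(5 * SEGMENT_SIZE + 123);	/* a write into segment 5 */
    	rewrite(9 * SEGMENT_SIZE);		/* and one into segment 9 */
    	return 0;
    }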
Ideally, the drive expects all data to be written in full segments,
which is what happens when recording a live video or storing
a music collection on a FAT32 filesystem.
The way the physical characteristics of the card make themselves felt can
be seen in the plot to the right (click on the thumbnail for the full-size
version), which summarizes the results of a number of tests on an SDHC
memory card. The best-case read throughput is 13.5MB/s, while the linear
write throughput is 11.5MB/s. The results show that the segment size is
4MB; any properly-aligned, 4MB write will be fast. The most
efficient block size for reads and writes is 64KB; all accesses smaller
than that are significantly slower.
Individual pages are 8KB; the costs of extra garbage collection caused by
smaller writes can be seen. The card as a whole has been optimized for
linear write operations; random writes are much slower. Additionally, only
one segment can be open at a time; alternating between two segments will
cause garbage collection at every access, slowing write speeds to a mere
33KB/s. That said, the FAT file table area (from 4MB to 8MB) is managed
differently, enabling small writes to be done efficiently there.
The second image to the right shows a plot of read access times, in page
granularity, on the first 32MB of a Panasonic Class 10 SDHC card. This
illustrates various properties of the card. The segment size of 4MB can
clearly be seen from the various changes in performance at the boundaries
between segments. All closed segments have the same read performance, as do
all erased segments, which are a little faster to read. The FAT area
in the second segment is a bit slower when reading because it uses a block
remapping algorithm. One segment has been opened for writing by writing a
few blocks in the middle before the read test; that segment can be seen to
be a little faster to read on this specific card. Also, an effect of
multi-level-cell (MLC) flash is that it alternates between slightly slower
and faster pages, which the plot shows as two parallel lines for some
segments.
When a segment that already contains data is written to, a new segment is
allocated from the free pool and the drive writes the new data into
that segment. Once the segment has been written to from start to finish,
the lookup table will be updated to point to the new segment, while the old
segment is put into the free pool and erased in the background.
By always allocating a new segment, the drive can avoid wearing out a
single physical segment in cases where the host always writes to the same block
addresses. Instead, all writes are statistically distributed to all
the segments that get written to from time to time. The better memory cards
and SSDs also do static wear leveling, meaning they occasionally move
a logical segment that contains static data to a physical segment that
has been erased many times to even out the wear and increase the expected
lifetime of the card. However, the vast majority of cheap memory cards
do not do this but, instead, rely on the host software to write
to every segment of the drive at some time or other.
The diagram to the right shows how this mapping works in a typical flash
drive; click on it for an animated version.
To improve wear leveling, the host can also issue trim or erase commands
on full segments to increase the size of the free pool. However, file
systems in Linux do not know the segment size and typically issue trim
commands on partial segments, which can improve write performance inside
that segment but not help wear leveling across segments.
In real life, writing 4 MB segments at once is more the exception than
the rule, so drives need to cope with partial updates of segments.
While data gets written to a logical segment, the controller normally
has an old and a new physical segment associated with it. In order to
free up the extra segment, it has to combine all the logical blocks in
that segment into physical blocks on only one segment and discard
all the previously used physical blocks, a process called garbage
collection. A number of garbage collection techniques can be observed
in current drives, including special optimizations using caching in
RAM or NOR flash and dynamically adapting to the access patterns.
Most drives however use a very simple garbage collection method, typically
one of the following three. Each description below is accompanied by a
diagram which, when clicked, will lead to an animated version showing how
the technique works.
Linear-access optimized garbage collection.
Drives that are advertised as being ideal for video storage usually expect
long, contiguous reads and writes. They always write a physical segment
from start to end, so, if the first write into a segment does not address
the first logical block inside it, the drive copies all blocks in front
of it from the old segment before writing the new data. Similarly,
a subsequent write to a block that is not logically contiguous to the
previously written one requires the drive to copy all intermediate blocks.
Garbage collection simply fills the new segment up to the end with copies
of the unchanged blocks from the old segment.
The advantage is optimum performance for all reads and for long writes, but
the disadvantage is that the drive ends up copying almost an entire segment
for each block that gets written in the wrong order, for instance when
the block elevator algorithm writes the blocks in reverse order
attempting to avoid long seeks. Also, writing linear data smaller than
the minimum block size of the drive makes it write the same block twice,
which forces an immediate garbage collection. The minimum block size that
the drive expects here is normally the cluster size of the preformatted
FAT32 filesystem, between 4KB and 32KB, but on SD cards, it can be
even larger than that.
Drives that are hardwired to linear-access optimized segments are basically
useless for ext3 and most other Linux filesystems because those filesystems
keep small data structures like inodes and block bitmaps in front of the
actual data and need to seek back to these in order to write new data.
Fortunately, a significant number of flash drives support random access
within a logical segment, by remapping logical blocks to free physical blocks
as they get written. Since this requires maintaining another lookup
mechanism, both read and write accesses are slightly slower than the
ideal linear-access behavior, and a small amount of out-of-band data
needs to be reserved to store the lookup table.
This method also does not allow efficient writing in any small units
when the manufacturers optimize for larger blocks in order to keep the
size of the lookup table small. Writing the same block repeatedly
still requires a full garbage-collection, which makes this method
unsuitable for storing an ext3 journal or any other data that
frequently gets written to the same area on the drive.
The best random-access behavior is provided by using the same approach
that log-structured filesystems like jffs2, logfs or nilfs2 and
block-remappers like UBI in Linux use. Data that is written anywhere
in the logical segment always goes to the next free block in the
new physical segment, and the drive keeps a log of all the writes
cached. Once the last free block is used up, a garbage collection is
performed using a third physical segment.
In the end, writing this way is slower than the other two approaches
in the best case, because every block is written at least twice, but
the worst case is much better.
This approach is normally used only in the first few segments on the
drive, which contain the file allocation table in FAT32 preformatted
drives. Some drives are also able to use this mode when they detect
access patterns that match writes to a FAT32 style directory entry.
Obviously, any such optimizations don't normally do the right
thing when the drive is used with a filesystem other than the one it
was intended for, but there is some potential for optimization,
e.g. by ensuring that the ext3 journal uses the blocks that are
designed to hold the FAT.
Restrictions on open segments
One major difference between the various manufacturers is how many
segments they can write to at any given time. Starting to write
a segment requires another physical segment, or two in the case of
a data logging algorithm, to be reserved, and requires some RAM
on the embedded microcontroller to maintain the segment. Writing
to a new segment will cause garbage collection on a previously
open segment. That can lead to thrashing as the drive must repeatedly
switch open segments; see the animation behind the diagram to the right for
a visualization of how that works.
On many of the better drives, five or more segments can be open
simultaneously, which is good enough for most use cases, but some
brands can only have one or two segments open at a time, which
causes them to constantly go through garbage collection when used
with most of the common filesystems other than FAT32.
When a drive reserves the segments specifically to hold the FAT,
these will always be open to allow updating it while writing streaming
data to other segments.
When a filesystem wants to optimize its block allocation to
the geometry of a flash drive, it needs to know the position of
the segments on the drive. On partitioned media, this also
implies that each partition is aligned to the start of a segment,
and this is true for all preformatted SD cards and other media
that require special care for segment optimizations.
Unfortunately, the fdisk and sfdisk tools from util-linux make it
particularly hard to do this correctly, because they try to
preserve an archaic geometry of 255 "heads" and 63 "sectors"
and, by default, align partitions to "cylinder" boundaries. None
of these units have any significance on today's hard drives or
flash drives, but they are kept for backwards compatibility with older
software.
The result is that most partitions are as misaligned as possible:
they start on an odd-numbered 512-byte sector, which defeats
all optimizations that a filesystem can do to align its accesses
to logical blocks and segments inside of the partition.
The same problem has been discussed a lot in the light of hard
drives with 4KB sectors, but it is much more significant when
dealing with flash media. Current versions of fdisk ask the kernel
about physical sector (BLKPBSZGET) and optimum I/O size (BLKIOOPT),
but currently these are rarely reported correctly by the kernel
for flash drives, because the kernel itself does not have the
necessary information. SDHC cards report the segment size in
sysfs, but this is not used by any partitioning tools, and all
cards currently seem to report 4MB segments, even those that
actually use 2MB or 8MB segments internally.
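The ioctls in question are real and easy to query directly; whether they
return anything useful for a given card depends, as noted, on what the
kernel itself knows. A minimal sketch:

    /* Print the physical sector size and optimal I/O size that the
     * kernel reports for a block device - the values newer fdisk
     * versions consult when making alignment decisions. */
    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/fs.h>

    int main(int argc, char **argv)
    {
    	unsigned int pbs = 0, ioopt = 0;

    	if (argc < 2) {
    		fprintf(stderr, "usage: %s <blockdev>\n", argv[0]);
    		return 1;
    	}
    	int fd = open(argv[1], O_RDONLY);
    	if (fd < 0) { perror("open"); return 1; }

    	if (ioctl(fd, BLKPBSZGET, &pbs) < 0)
    		perror("BLKPBSZGET");
    	if (ioctl(fd, BLKIOOPT, &ioopt) < 0)
    		perror("BLKIOOPT");

    	printf("physical sector size: %u bytes\n", pbs);
    	printf("optimal I/O size:     %u bytes\n", ioopt);

    	/* For SD cards, the preferred_erase_size attribute under the
    	 * card's sysfs device directory is another place to look. */
    	close(fd);
    	return 0;
    }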
The linaro-media-create tool (from Linaro Image Tools)
has recently been changed to align partitions to 4 MB boundaries
when installing to a bootable SD card, to work around this problem.
There is a huge potential for optimizing Linux to better deal
with the deficiencies of flash media in various places in the
kernel and elsewhere. With the storage and filesystem summit
coming up this April, there is hopefully time to discuss these
and other ideas:
- All partition tools should default to a much larger alignment,
e.g. 4 MB or what the drive itself reports, for flash media
and ignore cylinder boundaries.
- The page cache could benefit from the fact that larger accesses
end up taking less time than accesses shorter than a flash page.
When a drive reads 16KB, the kernel may as well add all of it to the page
cache.
- The elevator and I/O scheduler algorithms can do much better
than they do today for drives that only do linear access.
Ideally, all outstanding writes to one segment should be
submitted in order within a segment before moving to another segment.
- A stacked block device can be used to reorder blocks during
write, creating a copy-on-write log-structured device on
top of drives that can only write to one segment at a time.
A first draft design for such a device is available on the
FlashDeviceMapper page at Linaro.
- The largest potential is probably in the block allocation
algorithm in the filesystem. The filesystem can ensure that
it submits writes in the correct order to avoid garbage
collection most of the time. Btrfs, nilfs2 and logfs get this
right to a certain degree, but could probably get much better.
More information about specific measurements can be found in the
Linaro flash card survey. Readers are welcome to add data about their
memory cards and USB drives to the list.
The tool that was used to do all measurements is also available from Linaro.