Brief items
The current development kernel is 3.9-rc4,
released on March 23. Linus says:
"Another week, another -rc. And things haven't calmed down, meaning
that the nice small and calm -rc2 was definitely the outlier so far.
… While it hasn't been as calm as I'd like things to be, it's not
like things have been hugely exciting either. Most of this really is
pretty trivial. It's all over, with the bulk in drivers (drm, md, net, mtd,
usb, sound), but also some arch updates (powerpc, arm, sparc, x86) and
filesystem work (cifs, ext4)."
Stable updates: 3.2.42 was released
on March 27.
The 3.8.5,
3.4.38,
and 3.0.71 updates are in the review cycle
as of this writing; they can be expected on or after March 28. Also
in review are 3.5.7.9 and
3.6.11.1 (a new, short-term series meant to
support the 3.6-based realtime stable kernels).
Comments (none posted)
Be careful, you've already submitted some kernel patches; keep on
this path and you might just wake up one morning and find yourself
a kernel developer.
—
Paul Moore
You can't just constantly ignore patches, that's reserved for
kernel developers with more experience :)
—
Greg Kroah-Hartman (Thanks to Thomas Petazzoni)
This patch adds new knob "reclaim under proc/<pid>/" so task
manager can reclaim any target process anytime, anywhere. It could
give another method to platform for using memory efficiently.
It can avoid process killing for getting free memory, which was
really terrible experience because I lost my best score of game I
had ever after I switch the phone call while I enjoyed the game.
—
Minchan Kim
Comments (none posted)
Kernel development news
By Michael Kerrisk
March 27, 2013
Linus Torvalds has railed frequently and loudly against kernel
developers breaking user space. But that rule is not ironclad; there
are exceptions. As Linus once noted:
But the "out" to that rule is that "if nobody notices, it's not
broken" […] So breaking user space is a bit like trees falling
in the forest. If there's nobody around to see it, did it really
break?
The story of how a kernel change caused a GlusterFS breakage shows
that there are sometimes unfortunate twists to those exceptions.
The kernel change and its consequences
GlusterFS is a widely-used, free,
scale-out,
distributed filesystem that is available on Linux and a number of other
UNIX-like systems. GlusterFS was initially developed by Gluster, Inc., but
since Red Hat acquired that company in 2011, Red Hat has mainly driven work
on the filesystem.
GlusterFS's problems sprang from an ext4 filesystem patch
by Fan Yong that addressed a long-standing issue in ext4's support for the
readdir() API by widening the "directory offset" values used by
the API from 32 to 64 bits. That change was needed to reliably support
readdir() traversals in large directories; we'll discuss those
changes and the reasons for making them in a
companion article. One point from that discussion is worth making here:
these "offset" values are in truth a kind of cookie, rather than a true
offset within a directory. Thus, for the remainder of this article, we'll
generally refer to them as "cookies". Fan's patch made its way into the
mainline 3.4 kernel (released in May 2012), but appears also to have been
ported into the 3.3.x kernel that was released with Fedora 17 (also
released in May 2012).
Fan's patch solved a problem for ext4, but inadvertently created one
for GlusterFS servers that use ext4 as their underlying storage
mechanism. However, nobody reported problems in time to cause the patch to
be reconsidered. The symptom on affected systems, as noted in a July 2012
Red Hat bug
report, was that using readdir() to scan a directory on a
GlusterFS system would end up in an infinite loop in some cases.
The cause of the problem—as detailed
by Anand Avati in a recent (March 2013) discussion on the ext4 mailing
list—is that GlusterFS makes some assumptions about the "cookies"
used by the readdir() API. In particular, although these values
are 64 bits long, the GlusterFS developers noted that only the lower 32
bits were used, and so decided to encode some additional
information—namely the index of the Gluster server holding the
file—inside their own internal version of the cookie, according to
this formula:
final_d_off = (ext4_d_off * MAX_SERVERS) + server_idx
This GlusterFS internal cookie is exchanged in the 64-bit cookie that
is passed in NFSv3 readdir() requests between GlusterFS clients and
front-end servers. (An ASCII art diagram
posted in the mailing list thread by J. Bruce Fields clarifies the
relationship of the various GlusterFS components.) The GlusterFS internal
cookie allows the server to easily encode the identity of the GlusterFS
storage server that holds a particular directory.
This scheme worked fine as long as only 32 bits were used in the ext4
readdir() cookies (ext4_d_off), but promptly blew up when
the cookies switched to using 64 bits, since the multiplication caused some
bits to be lost from the top end of ext4_d_off.
An August 2012 gluster.org blog
post by Joe Julian pointed out that the problem affected not only
Fedora 17's 3.3 kernel, but also the kernel in Red Hat's Enterprise Linux
distribution, because the kernel change had been backported into the much
older 2.6.32 distribution kernel supplied in RHEL 6.3 and later.
The recommended workaround was either to downgrade
to an earlier kernel version that did not include the patch or
to reformat the GlusterFS bricks (the fundamental storage unit on a
GlusterFS node) to use XFS instead of ext4. (Using XFS rather than ext4 had
already been recommended practice when using GlusterFS.) Needless to say,
neither of these solutions was easily practicable for some GlusterFS users.
Mitigating GlusterFS's problem
In his March 2013 mail, Anand bemoaned the fact that the manual pages
gave no indication that the readdir() API "offsets" were cookies
rather than something like a conventional file offset whose range might
be bounded. Indeed, the manual pages rather hinted at the latter
interpretation. (That, at least, is a problem that is now addressed.)
Anand went on to request a fix to the problem:
You can always say "this is your fault" for interpreting the man
pages differently and punish us by leaving things as they are (and
unfortunately a big chunk of users who want both ext4 and gluster
jeopardized). Or you can be kind, generous and be considerate to
the legacy apps and users (of which gluster is only a subset) and
only provide a mount option to control the large d_off behavior.
But, as the ext4 maintainer, Ted Ts'o, noted, Fan's patch addressed a real problem
that affected well-behaved applications that did not make mistaken
assumptions about the value returned by telldir(). Adding a mount
option that nullified the effect of that patch would affect all programs
using a filesystem and penalize those well-behaved applications by
exposing them to the problem that the patch was designed to fix.
Ted instead proposed another approach: a per-process setting that
allowed an application to request the older readdir() cookie
semantics. The advantage of that approach is that it provides a solution
for applications that misuse the cookie without penalizing applications
that do the right thing. This solution could, he said, take the form of an ext4-specific
ioctl() operation employed immediately after calling
opendir(). Anand thought that
should be a workable solution for GlusterFS. The requisite patch does not
yet seem to have appeared, but one supposes that it will be written and
submitted during the 3.10 merge window, and possibly backported into
earlier stable kernels.
So, a year after the ext4 kernel change broke GlusterFS, it seems that
a (kernel) solution will be found to address GlusterFS's difficulties. In
passing, it's probably fair to mention that one reason that the (proposed)
fix was so long in coming was that the GlusterFS developers initially
thought they might be able to work around the kernel change by making
changes in GlusterFS. However, it ultimately turned
out to be impossible to exchange both a full 64-bit readdir()
cookie and a GlusterFS storage server ID in the NFS readdir()
requests exchanged between GlusterFS clients and front-end servers.
Summary: the meta-problem
In the end, the GlusterFS breakage might have been
avoided. Ted's proposed fix could have been rolled out at the same time
as Fan's patch, so as to minimize any disruptions for GlusterFS
users. Returning to Linus's quote at the beginning of this article puts us
on the trail of a deeper problem.
"If there's nobody around to see it, did it really break?"
was Linus's rhetorical question. The problem is that this is a test whose
results can be rather arbitrary. Sometimes, as was the case in the implementation
of EPOLLWAKEUP, a kernel change that causes a minor breakage
in a user-space application that is doing strange things will be reverted
or modified because it is fortuitously spotted by someone close to the
development scene—namely, a kernel developer who notices a
misbehavior on their desktop system.
However, other users may be so far from the scene of change that it can
be a considerable time before they see a problem. By the time those users
detect a user-space breakage, the corresponding stable kernel may already
be several release cycles in the past. One can easily imagine that few
kernel developers are running a GlusterFS node on their development
systems. Conversely, one can imagine that most users of GlusterFS are
running production environments where stability and uptime are critical,
and testing an -rc kernel is neither practical nor a high priority.
Thus, a rather important user-space breakage was missed—one that,
if it had been detected, would almost certainly have triggered modification
or reversion of the relevant patches, or stern words from Linus in the face
of any resistance to making such changes. And, certainly, this is not a
one-off case. Your editor did not need to look too far to find another
example, where a change in the way that POSIX
message queue limits are enforced in Linux 3.5 led to a report
of breakage in a database engine nine months later.
The "if there's nobody around to see it" metric requires that someone
is looking. That is of course a strong argument that the developers of
user-space applications such as GlusterFS who want to ensure that their
applications keep working on newer kernels must vigilantly and thoroughly
test -rc kernels. Clearly that did not happen.
However, it seems a little unfair to place the blame solely on user
space. The ext4 modifications that affected GlusterFS clearly represented a
change to the kernel-user-space ABI (and for reasons that we describe in
our follow-up article, that change was clearly necessary). In cases such as
this (and the POSIX message queue change), perhaps even more caution was
warranted when making the change. At the very least, a loud announcement in
the commit message that the kernel changes represented a change to the ABI
would have been helpful; that might have jogged some reviewers to think
about the possible implications and resulted in the ext4 changes
being made in a way that minimized problems for GlusterFS. A greater
commitment on both sides to improving the documentation would also be
helpful. It's notable that even after deficiencies in the documentation
were mentioned as a contributing factor to the GlusterFS problem, no-one
sent a patch to improve said documentation. All in all, it seems that parties on
both sides of the ABI could be doing a better job.
Comments (29 posted)
By Michael Kerrisk
March 27, 2013
In a separate article, we explained how
an ext4 change to the kernel-user-space ABI in Linux 3.4 broke the
GlusterFS filesystem; here, we look in detail at the change and why it was
needed. The change in question was a
patch by Fan Yong that widened the readdir() "cookies"
produced by ext4 from 32 to 64 bits. Understanding why Fan's patch was
necessary first requires a bit of background on the readdir() API.
The readdir API consists of a number of functions that allow
an application to walk through the entries in a directory list. The opendir()
function opens a directory stream for a specified directory. The readdir()
function returns the contents of a directory stream, one entry at a
time. The telldir()
and
seekdir() functions provide lseek-style functionality: an
application can remember its current position in a directory stream using
telldir(), scan further entries with readdir(), and then
return to the remembered position using seekdir().
It turns out that supporting the readdir API is a source of
considerable pain for filesystem developers. The API was designed in a
simpler age, when directories were essentially linear tables of filenames
plus inode numbers. The first of the widely used Linux filesystems, ext2,
followed that design. In such filesystems, one can meaningfully talk about
an offset within a directory table.
However, in the interests of improving performance and supporting new
features, modern filesystems (such as ext4) have long since adopted more
complex data structures—typically B-trees (PDF)—for
representing directories. The problem with B-tree structures, from the
point of view of implementing the readdir() API, is that the nodes
in a tree can undergo (sometimes drastic) rearrangements as entries are
added to and removed from the tree. This reordering of the tree renders the
concept of a directory "offset" meaningless. The lack of a stable offset
value is obviously a difficulty when implementing telldir() and
seekdir(). However, it is also a problem for the implementation of
readdir(), which must be done in such a way that a loop using
readdir() to scan an entire directory will return a list of all
files in the directory, without duplicates. Consequently,
readdir() must internally also maintain some kind of stable
representation of a position within the directory stream.
Although there is no notion of an offset inside a B-tree, the
implementers of modern filesystems must still support the
readdir API (albeit
reluctantly); indeed, support for the API is a POSIX
requirement. Therefore, it is necessary to find some means of supporting
"directory position" semantics. This is generally done by fudging the
returned offset value, instead returning an internally understood "cookie"
value. The idea is that the kernel computes a hash value that encodes some
notion of the current position in a directory (tree) and returns that value
(the cookie) to user space. A subsequent readdir() or
seekdir() will pass the cookie back to the kernel, at which point
the kernel decodes the cookie to derive a position within the directory.
Encoding the directory position as a cookie works, more or less, but
has some limitations. The cookie has historically been a 31-bit hash
value, because older NFS implementations could handle only 32-bit
cookies. (The hash is 31-bit because the off_t type used to
represent the information is defined as a signed type, and negative offsets
are not allowed.) In earlier times, a 31-bit hash was not too much of a
problem: filesystem limitations meant that directories were usually small, so
the chance that two directory entries would hash to the same value was
small.
However, modern filesystems allow for large directories—so large
that the chance of two files producing the same 31-bit hash is
significant. For example, in a directory with 2000 entries, the chance of a
collision is around 0.1%. In a directory with 32,768 entries (the
historical limit in ext2), the chance is somewhat more than 20%. (For the
math behind these numbers, see the Wikipedia article
on the Birthday Paradox.) Modern filesystems have much higher limits on
the number of files in a directory, with a corresponding increase in the
chance of hash collisions; in a directory with 100,000
entries, the probability is over 90%.
Two files that hash to the same cookie value can lead to problems when
using readdir(), especially on NFS. Suppose that we want to scan
all of the files in a directory. And suppose that two files, say
abc and xyz, hash to the same value, and that the
directory is ordered such that abc is scanned first. When an NFS
client readdir() later reaches the file xyz, it will
receive a cookie that is exactly the same as for abc. Upon passing
that cookie back to the NFS server, the next readdir() will
commence at the file following abc. The NFS client code has some logic
to detect this situation; that logic causes readdir() to give the
(somewhat counter-intuitive) error ELOOP, "Too many levels of
symbolic links".
This error can be fairly easily reproduced on NFS with older
kernels. One simply has to create an ext4 directory containing enough
files, mount that directory over NFS, and run any program that performs a
readdir() loop over the directory on the NFS client. When working
with a local filesystem (no NFS involved), the same problem exists, but in
a different form. One does not encounter it when using readdir(),
because of the way in which that function is implemented on top of the
getdents() system call. Essentially, opendir() opens a
file descriptor that is used by getdents(); the kernel is able to
internally associate a directory position with that file descriptor, so
cookies play no part in the implementation of readdir(). By
contrast, because NFS is stateless, each
readdir() over NFS requires that the NFS server explicitly locate
the directory position corresponding to the cookie sent by the client.
On the other hand, the problem can be observed with a local ext4
filesystem when using telldir(), because that function explicitly
returns the directory "offset" cookie to the caller. If two directory
entries produce the same "offset" cookie when calling telldir(),
then a call to seekdir() after either of the telldir()
calls will go back to the same location. A user-space loop such as the
following easily reveals the problem, encountering a difficulty analogous
to a readdir() loop over NFS:
    DIR *dirp;
    struct dirent *dirent;

    dirp = opendir("/path/to/ext4/dir");
    while ((dirent = readdir(dirp)) != NULL) {
        ...
        seekdir(dirp, telldir(dirp));
        ...
    }
The seekdir(dirp, telldir(dirp)) call is a seeming no-op,
simply resetting the directory position to its current location. However,
where a directory entry hashes to the same value as an earlier
directory entry, the effect of the call will be to reset the directory
position to the earlier entry with the same hash. An infinite loop thus
results. Real programs would of course not use telldir() and
seekdir() in this manner. However, every now and then programs
that use those calls would obtain a surprising result: a seekdir()
would reposition the directory stream to a completely unexpected location.
Thus, the cookie collision problem needed to be fixed for the benefit
of both ext4 and (especially) NFS. The simplest way of reducing the
likelihood of hash collisions is to increase the size of the hash
space. That was the purpose of Fan's patch, which increased the size of the
hash space for the offset cookies produced by ext4 from 31 bits to 63. (A
similar
change has also been merged for ext3.) With a 63-bit hash space, even a
directory containing one million entries would have less than one chance in
four million of producing a hash collision. Of course, a corresponding
change is required in NFS, so that the NFS server is able to deal with the
larger cookie sizes. That change was provided in a
patch by Bernd Schubert.
Reading this article and the GlusterFS article together, one might
wonder why GlusterFS doesn't have the same problems with XFS that it has
with ext4. The answer, as noted by Dave
Chinner, is that XFS uses a rather different scheme to produce
readdir() cookies. That scheme produces cookies that require only
32 bits, and the cookies are produced in such a way as to guarantee that no
two files can generate the same cookie. XFS is able to produce unique
32-bit cookies due to the virtual mapping it overlays onto the directory
index; adding such a mapping to ext4 (which does not otherwise need it)
would be a large job.
Comments (29 posted)
By Jonathan Corbet
March 26, 2013
The world was a simpler place when the TCP/IP network protocol suite was
first designed. The net was slow and primitive and it was often a triumph
to get a connection to a far-away host at all. The machines at either end
of a TCP session normally did not have to concern themselves with how that
connection was made; such details were left to routers. As a result, TCP
is built around the notion of a (single) connection between two hosts. The
Multipath TCP (MPTCP) project looks
to change that view of networking by adding support for multiple transport
paths to the endpoints; it offers a lot of benefits, but designing a
deployable protocol for today's Internet is surprisingly hard.
Things have gotten rather more complicated in the years since TCP was first
deployed.
Connections to multiple networks, once the province of large server
systems, are now ubiquitous; a smartphone, for example, can have separate,
simultaneous interfaces to a cellular network, a WiFi network, and,
possibly, other networks via Bluetooth or USB ports. Each of those networks
provides a possible way to reach a remote host, but any given
TCP session will use only one of them. That leads to obvious policy
considerations (which interface should be used when) and operational
difficulties: most handset users are familiar with how a WiFi-based TCP
session will be broken if the device moves out of range of the access
point, for example.
What if a TCP session could make use of all of the available paths between
the two endpoints at any given time? There would be performance
improvements, since each of the paths could carry data in parallel, and
congested paths could be avoided in favor of faster paths at any given
time. Sessions could also be more robust. Imagine a video stream that is
established over both WiFi and cellular networks; if the watcher leaves the
house (one hopes somebody else is driving), the stream would shift
transparently to the cellular connection without interruption. Data
centers, where multiple paths between systems and variable congestion are
both common, could also make use of a multipath-capable transport protocol.
The problem is that TCP does not work that way. Enter MPTCP, which
is designed to work that way.
How it works
A TCP session is normally set up by way of a three-way handshake. The
initiating host sends a packet with the SYN flag set; the receiving host,
if it is amenable to the connection, responds with a packet containing both
the SYN and ACK flags. The final ACK packet sent by the initiator puts
the connection into the "established" state; after that, data can be
transferred in either direction.
An MPTCP session starts in the same way, with one change: the initiator
adds the new MP_CAPABLE option to the SYN packet. If the receiving host
supports MPTCP, it will add that option to its SYN-ACK reply; the two hosts
will also include cryptographic keys in these packets for later use. The
final ACK (which must also carry the MP_CAPABLE option) establishes a
multipath session, albeit a session using a single path just like
traditional TCP.
When MPTCP is in use, both sides recognize a distinction between the
session itself and any specific "subflow" used by that session. So, at
any point, either party to the session can initiate another TCP connection
to the other side, with the proviso that the address and/or port at one end or the
other of the connection must differ. So, if a smartphone has initiated an
MPTCP connection to a server using its WiFi interface, it can add another
subflow at any time by connecting to the same server by way of its cellular
interface.
That subflow is added by sending a SYN packet with the MP_JOIN option; it
also includes information on which MPTCP session is to be joined. Needless
to say, the protocol designers are concerned that a hostile party might try
to join somebody else's session; the previously-exchanged cryptographic
keys are used to prevent such attacks from succeeding. If the receiving
server is amenable to adding the subflow, it will allow the establishment
of the new TCP connection and add it to the MPTCP session.
Once a session has more than one subflow, it is up to the systems on each
end to decide how to split traffic between them (though it is possible to
mark a specific subflow for use only when any others no longer work). A
single receive window applies to the session as a whole. Each subflow
looks like a normal TCP connection, with its own sequence numbers, but the
session as a whole has a separate sequence number; there is another TCP
option (DSS, or "Data Sequence Signal") which is used to inform the other
end how data on each subflow fits into the overall stream.
Subflows can come and go over the life of an MPTCP connection. They can be
explicitly closed by either end, or they can simply vanish if one of the
paths becomes unavailable. If the underlying machinery is working well,
applications should not even notice these changes. Just as IP can hide
routing changes, MPTCP can hide the details of which paths it is using at
any given time. It should, from an application's point of view, just work.
Needless to say, there are vast numbers of details that have been glossed
over here. Making a protocol extension like this work requires thinking
about issues like congestion control, how to manage retransmissions over a
different path, how one party can tell the other about additional addresses
(paths) it could use, how to decide when setting up multiple subflows is
worth the expense,
and so on. The MPTCP designers have done much of that thinking; see
RFC 6824 for the details.
The dreaded middlebox
One set of details merits a closer look, though. The designers of MPTCP
are not interested in going through an idle academic exercise; they want to
create a solution to real problems that will be deployed on the existing
Internet. And that means designing something that will function with the
net as it exists now. At one level, that means making things work
transparently for TCP-based applications. But there is an entire section in
the RFC that is concerned with "middleboxes" and how they can sabotage
any attempt to introduce a new protocol.
Middleboxes are routers that impose some sort of constraint or
transformation on network traffic passing through them. Network address
translation (NAT) boxes are one example: they hide an entire network behind
a translation layer that will change the address and port of a connection
on its way through. NAT boxes can also insert data into a stream — adding
commands to make FTP work, for example. Some boxes will acknowledge data
on its way through, well before it arrives at the real destination, in an
attempt to increase pipelining. Some routers will drop packets with
unknown options; that behavior made the rollout of the selective
acknowledgment (SACK) feature much harder than it needed to be. Firewalls
will kill connections with holes in the sequence number stream; they will
also, sometimes, transform sequence numbers on the way through. Splitting
and coalescing of segments can cause options to be dropped or duplicated.
And so on; the list of potential problems is impressive.
On top of that, anybody trying to introduce an entirely new transport-layer
protocol is likely to discover that it will not make it across the Internet
at all.
Much of the routing infrastructure on the net assumes that TCP and UDP are
all there is; anything else has a poor chance of making it through.
Working around these issues drove the design of MPTCP at all levels. TCP
was never designed for multiple subflows; rather than bolting that idea
onto the protocol, it might well have been better to start over. One could
have incorporated the lessons learned from TCP in all ways — including
doing things entirely differently where it made sense. But the resulting
protocol would not work on today's Internet, so the designers had no choice
but to create a protocol that, to almost every middlebox out there, looks
like plain old TCP.
So every subflow is an independent TCP connection in every respect. Since
holes in sequence numbers can cause problems, each subflow has its own
sequence and a mapping layer must be added on top. That mapping layer uses
relative sequence numbers because some middlebox may have changed those
numbers as they passed through. The two sides assign "address identifiers"
to the IP addresses of their interfaces and use those identifiers when
communicating about available paths, since the addresses themselves may be
changed by a NAT box in the middle. Special checks exist for subflows
that corrupt data, insert
preemptive acknowledgments, or strip unknown options; such subflows will
not be used. And the whole thing is designed to fall back gracefully to
ordinary TCP if the interference is too strong to overcome.
It is all a clever bit of design on the part of the MPTCP developers, but
it also highlights an area of concern: the "dumb" Internet with end-to-end
transparent routing of data is a thing of the distant past. What we have
now is inflexible and somewhat hostile to the deployment of new technologies. The
MPTCP developers have been able to work around these limitations, but the
effort required was considerable. In the future, we may find that the net
is broken in fundamental ways and it simply cannot be fixed; some might say
that the difficulties in moving to IPv6 show that this has already
happened.
Future directions
The current MPTCP code can be found at the MPTCP github
repository; it adds a good 10,000 lines to the mainline kernel's
networking subtree. While it has apparently been the subject of
discussions with various networking developers, it has not, yet,
been posted for public review or inclusion into the mainline. It does,
however, seem to work: the MPTCP developers claim to have implemented the fastest TCP
connection ever by transmitting at a rate of 51.8Gb/s over six 10Gb
links.
MPTCP is still relatively young, so there is almost certainly quite a bit
of work yet to be done before it is ready for mainline merging or
production use. There is also some thinking to be done on the application
side; it may be possible for MPTCP-aware applications to make better use of
the available paths. Projects like this are arguably never finished (we are
still refining TCP, after all), but MPTCP does seem to have reached the
point where more users may want to start experimenting with it.
Anybody wanting to play with this code can grab the project's kernel
repository and build a custom kernel. For those who are not up to that
level of effort, the project offers a number of other
options, including a Debian repository, instructions for running MPTCP
on Amazon's EC2, and kernels for a handful of Android-based handsets.
Needless to say, the developers are highly interested in hearing bug
reports or other testing results.
Comments (70 posted)
Patches and updates
Kernel trees
- Thomas Gleixner: 3.8.4-rt1.
(March 23, 2013)
- Sebastian Andrzej Siewior: 3.8.4-rt2.
(March 27, 2013)
Build system
Core kernel code
Device drivers
Filesystems and block I/O
Memory management
Networking
Architecture-specific
Security-related
Virtualization and containers
Miscellaneous
Page editor: Jonathan Corbet