
Kernel development

Brief items

Kernel release status

The current development kernel is 3.4-rc3, released on April 15. "Anyway, the shortlog is appended, but I don't think there really is anything hugely exciting there. It's mostly driver updates, with a smattering of architecture fixes and networking." All told, 332 changesets have been merged since 3.4-rc2.

Stable updates: the 3.0.28, 3.2.15 and 3.3.2 updates were released on April 13; each contains a long list of important fixes.

Comments (none posted)

Quotes of the week

OTOH I've been exposed to source code of some secret sauce drivers. That makes me believe that in a lot of cases the reason for not disclosing the driver code is that publishing might put the innocent and unprepared reader in danger of gastric ulcer and eyecancer.

So I commit this change for health protection reasons.

Thomas Gleixner

If you know physics, think of a release (and the tag is associated with it) as "collapsing the wave function". Before that, the outside world doesn't know the exact state of your tree (they may have good guesses based on the patches they've seen) - after that, it's "done". You cannot change it. And more importantly, other people can see the state, and measure it, and depend on it.
Linus Torvalds

Oh, but I have a sick and twisted mind. And I'm incredibly smart and photogenic too...

Ready? You *will* go blind - blinded by the pure beauty and intellect in this thing:

    #define IS_DEFINED(x) (__stringify(CONFIG_##x)[0]=='1')

That really is a piece of art. I'm expecting that the Guggenheim will contact me any moment now about it.

Linus Torvalds

Comments (8 posted)

Allocating uninitialized file blocks

By Jonathan Corbet
April 17, 2012
The fallocate() system call can be used to increase the size of a file without actually writing to the new blocks. It is useful as a way to encourage the kernel to lay out the new blocks contiguously on disk, or just to ensure that sufficient space is available before beginning a complex operation. Filesystems implementing fallocate() take care to note that the new blocks have not actually been written; attempts to read those uninitialized blocks will normally just return zeroes. To do otherwise would be to risk disclosing information remaining in blocks recently freed from other files.

For most users, fallocate() works just as it should. In some cases, though, the application in question does a lot of random writes scattered throughout the file. Writing to a small part of an uninitialized extent may force the filesystem to initialize a much larger range of blocks, slowing things down. But if the application knows where it has written in the file, and will thus never read from uninitialized parts of that file, it gains no benefit from this extra work.

How much does this initialization cost? Zheng Liu recently implemented a new fallocate() flag (called FALLOC_FL_NO_HIDE_STALE) that marks new blocks as being initialized, even though the filesystem has not actually written them; these blocks will thus contain random old data. A random-write benchmark that took 76 seconds on a mainline kernel ran in 18 seconds when this flag was used. Needless to say, that is a significant performance improvement; for that reason, Zheng has proposed that this flag be merged into the mainline.
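
For illustration, preallocation is a single fallocate() call; the sketch below assumes that FALLOC_FL_NO_HIDE_STALE is defined only by headers carrying Zheng's patch (it is not part of mainline), so this is not portable code:

    #define _GNU_SOURCE
    #include <fcntl.h>          /* fallocate() */
    #include <linux/falloc.h>   /* FALLOC_FL_* flags */

    /* Reserve "len" bytes at the start of the file without writing them.
     * FALLOC_FL_NO_HIDE_STALE exists only with Zheng's patch applied;
     * mainline kernels and headers do not define it. */
    static int preallocate(int fd, off_t len, int allow_stale)
    {
        int mode = 0;   /* default: reads of the new blocks return zeroes */

    #ifdef FALLOC_FL_NO_HIDE_STALE
        if (allow_stale)
            mode |= FALLOC_FL_NO_HIDE_STALE;  /* stale data may be readable */
    #endif
        return fallocate(fd, mode, 0, len);
    }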

Such a feature has obvious security implications; Zheng's patch tries to address them by providing a sysctl knob to enable the new feature and defaulting it to "off." Still, Ric Wheeler didn't like the idea, saying "Sounds like we are proposing the introduction a huge security hole instead of addressing the performance issue head on." Ted Ts'o was a little more positive, especially if access to the feature required a capability like CAP_SYS_RAWIO. But everybody seems to agree that a good first step would be to figure out why performance is so bad in this situation and see if a proper fix can be made. If the performance issue can be made to go away without requiring application changes or possibly exposing sensitive data, everybody will be better off in the end.

Comments (21 posted)

Kernel development news

Toward more reliable logging

By Jonathan Corbet
April 13, 2012
Messages from the kernel are created by humans, usually using one of the many variants of the printk() function. But, increasingly, those messages are read by machines in the form of log file parsers, automated management systems, and so on. The machines have, for some time, struggled to make sense out of those human-created messages which, as often as not, are unpredictable in their organization, lacking important information, and subject to change. So it is not surprising that there has been ongoing interest in adding some structure to kernel log messages; the subject was recently raised by the audience at the Collaboration Summit kernel panel. Even so, almost every attempt to improve kernel logging has failed to make much (if any) headway.

The same fate seemed to be in store for Lennart Poettering and Kay Sievers when they presented their ideas at the 2011 Kernel Summit; in particular, their concept of attaching 128-bit unique ID numbers to each message was met with almost universal disdain. Lennart and Kay have not given up, though. The latest form of their work on the kernel side of the problem can be seen in the structured printk() patch recently posted by Kay.

The patch does a few independent things - a cause for a bit of complaining on the mailing list. The first of these is to change the kernel's internal log buffer from a stream of characters into a series of records. Each message is stored into the buffer with a header containing its length, sequence number, facility number, and priority. In the end, Kay says, the space consumed by messages does not grow; indeed, it may shrink a bit.

The record-oriented internal format has a number of advantages. If messages are being created faster than they can be consumed by user space, it is necessary to overwrite the oldest ones. Current kernels simply write over the character stream, with the result that truncated messages can find their way into the log. The new implementation drops entire messages at once, so the stream, while no longer complete, is not corrupted. The sequence numbers allow any consumer to know that messages have been dropped - and exactly how many it missed. The record-oriented format also enables improved handling of continuation lines at the cost of making the use of the KERN_CONT "priority" mandatory for such lines.
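
Purely as an illustration (Kay's patch defines its own internal layout), a per-record header carrying the metadata described above might look like:

    #include <linux/types.h>

    /* Illustration only - not the structure used in the patch. */
    struct log_record_header {
        u16 len;        /* total length of the record, header included */
        u64 seq;        /* monotonically increasing sequence number */
        u8  facility;   /* facility number (0 for plain printk()) */
        u8  level;      /* message priority */
    };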

The second change is to allow the addition of a facility number and a "dictionary" containing additional information that, most likely, will be of interest to automated parsers. The dictionary contains "KEY=value" pairs, separated by spaces; these pairs will contain, for example, device and subsystem names to unambiguously identify the device that generated the message. Kernel code that wants to attach a facility number and/or dictionary to a message will use the new function printk_emit() to do so:

    int printk_emit(int facility, int level, const char *dict, 
			size_t dictlen, const char *fmt, ...);

Regular printk() turns into a call to printk_emit() with a facility of zero and a null dict.

Creation of the dictionary string itself is left as an exercise for the caller; it is not something one is likely to see done in most places where printk() is called. In fact, the only full user of printk_emit() in the patch is dev_printk(), which uses it to add a dictionary with SUBSYSTEM and DEVICE fields describing the device of interest. If some form of this patch is merged, one can expect this usage pattern to continue; the creation of dictionaries with ancillary information will mostly be done with subsystem-specific print functions.
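
As a sketch of what that pattern might look like (this is not code from the patch), a subsystem helper could build its dictionary and emit a message this way:

    #include <linux/printk.h>   /* printk_emit(), with the patch applied */
    #include <linux/string.h>   /* strlen() */

    /* Sketch only: mirrors the dev_printk() usage described above.  The
     * dictionary is a space-separated list of KEY=value pairs. */
    static void emit_example(void)
    {
        const char *dict = "SUBSYSTEM=scsi DEVICE=+scsi:1:0:0:0";

        printk_emit(0 /* facility */, 6 /* priority: KERN_INFO */,
                    dict, strlen(dict),
                    "sr 1:0:0:0: Attached scsi CD-ROM %s\n", "sr0");
    }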

Finally, the patch changes the appearance of log messages when they reach user space. After some complaints from Linus, the format has evolved to look something like this:

    7,384,6193747;sr 1:0:0:0: Attached scsi CD-ROM sr0
     SUBSYSTEM=scsi
     DEVICE=+scsi:1:0:0:0

The comma-separated fields at the beginning are the message priority (7, in this case), the sequence number (384), and the system time in microseconds since boot. The rest of the line is the message as passed to printk(). If the message includes a dictionary, it appears in the following lines; following the style set in RFC 822, continuation lines begin with white space. The result, it is hoped, is an output format that is simultaneously easy for humans to read and machines to parse.
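
A machine parser needs only a few lines to split such records; this sketch follows the format shown above (and nothing more):

    #include <stdio.h>

    /* Hypothetical consumer: split "prio,seq,usec;message" lines; any line
     * starting with white space is a KEY=value dictionary continuation. */
    void parse_line(const char *line)
    {
        unsigned int prio;
        unsigned long long seq, usecs;
        int consumed;

        if (line[0] == ' ' || line[0] == '\t') {
            printf("dictionary entry: %s\n", line + 1);
            return;
        }
        if (sscanf(line, "%u,%llu,%llu;%n", &prio, &seq, &usecs, &consumed) == 3)
            printf("prio=%u seq=%llu usec=%llu message=%s\n",
                   prio, seq, usecs, line + consumed);
    }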

The behavior of the /dev/kmsg device changes somewhat as well. In current kernels, this device is only useful for injecting messages into the kernel log stream. Kay's patch turns it into a device supporting read() and poll() as well, with multiple concurrent readers supported. If messages are overwritten before a particular reader is able to consume them, the next read() call will return an EPIPE error; subsequent reads will start from the next available message. This device thus becomes a source of kernel log data that is easy to work with and that reliably delivers log messages or ensures that the recipient knows something was lost.
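
A reader of the patched device might look like the following sketch; it relies only on the behavior described above and will not work on unpatched kernels:

    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Sketch: read records from the patched /dev/kmsg, one per read() call. */
    int main(void)
    {
        char buf[8192];
        int fd = open("/dev/kmsg", O_RDONLY);

        if (fd < 0)
            return 1;
        for (;;) {
            ssize_t n = read(fd, buf, sizeof(buf) - 1);

            if (n < 0 && errno == EPIPE) {
                /* Records were overwritten before we consumed them;
                 * the next read() resumes at the next available one. */
                fprintf(stderr, "log records were lost\n");
                continue;
            }
            if (n <= 0)
                break;
            buf[n] = '\0';
            fputs(buf, stdout);
        }
        close(fd);
        return 0;
    }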

Modulo some quibbling over the log format, the response to the patch seems to be mostly positive. The biggest exception is arguably Ingo Molnar, whose suggestion that tracepoints and perf should be used instead does not appear to have received a lot of support. Even Linus seems mostly happy; the absence of the 128-bit unique ID perhaps has a lot to do with that. But, beyond that, a more robust log buffer with sequence numbers has some clear advantages; Linus suggested that, if that part were split out, he might even consider merging it for the 3.4 release. That seems unlikely to happen at this point in the cycle, but it wouldn't be surprising to see something come together for the 3.5 development cycle. If that happens, Linux will still lack a true structured logging mechanism, but it would have something with more structure and reliability than it has now.

Comments (72 posted)

The value of release bureaucracy

By Jonathan Corbet
April 17, 2012
Those who read the linux-kernel mailing list will, over time, develop an ability to recognize certain types of discussions by the pattern of the thread. One of those types must certainly be "lone participant persistently argues that the entire kernel community is doing it wrong." Such discussions can often be a good source for inflammatory quotes, but they often lack much in the way of redeeming value otherwise. This thread on the rules for merging patches into stable releases would seem to fit the pattern, but a couple of the points discussed there may be worthy of highlighting. If nothing else, perhaps a repeat of that discussion can be avoided in the future.

This patch to the ath9k wireless driver was meant to fix a simple bug; it was merged for 3.4. Since it was a bug fix, it was duly marked for the stable updates and shipped in 3.3.1. It turns out to not have been such a good idea, though; some 3.3.1 users have reported that the "fix" can break the driver and sometimes make the system as a whole unusable. That is not the sort of improvement that stable kernel users are generally hoping for. Naturally, they hoped to receive a fix to the fix as soon as possible.

When the 3.3.2 update went into the review process without a revert for the offending commit, some users asked why. The answer was simple: the rules for the stable tree do not allow the inclusion of any patch that has not already been merged, in some form, into the mainline. Since this particular fix had not yet made it to Linus (it was still in the wireless tree), Greg Kroah-Hartman, the stable kernel maintainer, declined to take it for the 3.3.2 cycle. And that is where the trouble started.

Our lone participant (Felipe Contreras) denounced this decision as a triumph of bureaucratic rules over the need to actually deliver working kernels to users. Beyond that, he said, since reverting the broken patch simply restored the relevant code to its form in the 3.3 release, the code was, in effect, already upstream. Accepting the revert, he said, would have the same effect as dropping the bad patch before 3.3.1 was released. In this view, refusing to accept the fix made little sense.

Several kernel developers tried to convince him otherwise using arguments based on the experience gained from many years of kernel maintenance. They do not appear to have succeeded. But they did clearly express a couple of points that are worth repeating; even if one does not agree with them, they explain why certain things are done the way they are.

The first of those was that experience has proved, all too many times, that fixes applied only to stable kernel releases can easily go astray before getting into the mainline. So problems that get fixed in a stable release may not be fixed in current development kernels - which are the base for future stable kernels. As a result, stable kernel users may see a problem addressed, only to have it reappear when they upgrade to a new stable series. Needless to say, that, too, is not the experience stable kernel users are after. On the other hand, people who like to search for security holes can learn a lot by watching for fixes that don't make it into the mainline.

It is true that dropped patches used to be a far bigger problem than they are now. A patch applied to, say, a 2.2 release had no obvious path into the 2.3 development series; such patches often fell on the floor despite the efforts of developers who were specifically trying to prevent such occurrences. In the current development model, a fix that makes it into a subsystem maintainer's tree will almost certainly get all the way into the mainline. But, even now, it's not all that rare for a patch to get stranded in a forgotten repository branch somewhere. When the system is handling tens of thousands of patches every year, the occasional misrouted patch is just not a surprise.

The simple truth of the matter is that many bugs are found by stable kernel users; there are more of them and they try to use their kernels for real work. As this thread has shown, those users also tend to complain if the specific fixes they need don't get into stable releases; they form an effective monitoring solution that ensures that fixes are applied. The "mainline first" rule takes advantage of this network of users to ensure that fixes are applied for the long term and not just for a specific stable series. At the cost of (occasionally) making users wait a short while for a fix, it ensures that they will not need the same fix again in the future and helps to make the kernel less buggy in general.

Developers also took strong exception to the claim that applying a revert is the same as having never applied the incorrect fix in the first place. That can almost never be strictly true, of course; the rest of the kernel will have changed between the fix and the revert, so the end product differs from the initial state and may misbehave in new and interesting ways. But the real issue is that both the fix and the revert contain information beyond the code changes: they document a bug and why a specific attempt to fix that bug failed. The next developer who tries to fix the bug, or who makes other changes to the same code, will have more information to work with and, hopefully, will be able to do a better job. The "mainline first" rule helps to ensure that this information is complete and that it is preserved in the long term.

In other words, some real thought has gone into the creation of the stable kernel rules. The kernel development community, at this point, has accumulated a great deal of experience that will not be pushed aside lightly. So the stable kernel rules are unlikely to be relaxed anytime soon. The one-sided nature of the discussion suggests that most developers understand all of this. That probably won't be enough to avoid the need to discuss it all again sometime in the near future, though.

Comments (7 posted)

LTTng 2.0: Tracing for power users and developers - part 2

April 18, 2012

This article was contributed by Mathieu Desnoyers, Julien Desfossez, and David Goulet

In part 1 of this article, we presented the motivations that led to the creation of LTTng 2.0 and its features, along with an overview of the respective strengths of LTTng 2.0, Perf, and Ftrace. We then presented two LTTng 2.0 usage examples.

In this article, we will start with two more advanced usage examples, and then proceed to a presentation of LTTngTop, a low-overhead, top-like view based on tracing rather than sampling /proc. This article focuses on some of the "cool features" that are made possible with the LTTng 2.0 design: combined tracing of both kernel and user space, use of performance counters to augment trace data, and combining all these together to generate a higher-level view of the system CPU and I/O activity with LTTngTop. But first, we continue with the examples:

3. Combined user space and kernel tracing

This example shows how to gather a trace from both the kernel and a user-space application. Even though the previous examples focused only on kernel tracing, LTTng 2.0 also offers fast user-space tracing support with the "lttng-ust" (LTTng User-space Tracer) library. For more information on how to instrument your application, see the lttng-ust(3) and lttng-gen-tp(1) man pages.
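
As a rough idea of what instrumentation involves, a tracepoint definition and its call site take the general form below. This is a sketch only: the provider boilerplate that lttng-gen-tp generates is omitted, and the names (my_app, my_event) are placeholders; the man pages above document the exact setup.

    /* Sketch: a tracepoint definition, normally placed in a provider
     * header whose boilerplate is omitted here (see lttng-ust(3)). */
    TRACEPOINT_EVENT(my_app, my_event,
        TP_ARGS(int, iteration, const char *, text),
        TP_FIELDS(
            ctf_integer(int, iteration, iteration)
            ctf_string(message, text)
        )
    )

    /* ...and, in the application code: */
    tracepoint(my_app, my_event, i, "hello world");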

The hello.c test program is distributed with the lttng-ust source. It has an example tracepoint that associates various types of data with the tracepoint. The tracepoint data, including all of the different types, can be seen below in the first instance of hitting the tracepoint.

    # (as root, or tracing group)
    $ lttng create
    $ lttng enable-event --kernel --all
    $ lttng enable-event --userspace --all
    $ lttng start
    $ cd lttng-ust/tests/hello
    $ ./hello		# Very, very high-throughput test application
    $ sleep 10          # [ let system generate some activity ]
    $ lttng stop
    $ lttng view
    $ lttng destroy

Output from lttng view:

    [...]
    [18:47:03.263188612] (+0.000018352) softirq_exit: { cpu_id = 1 }, { vec = 4 }
    [18:47:03.263193518] (+0.000004906) exit_syscall: { cpu_id = 1 }, { ret = 0 }
    [18:47:03.263198346] (+0.000004828) ust_tests_hello:tptest: { cpu_id = 3 }, { \
	intfield = 1676, intfield2 = 0x68C, longfield = 1676, \
	netintfield = 1676, netintfieldhex = 0x68C, arrfield1 = [ [0] = 1, [1] = 2, \
	[2] = 3 ], arrfield2 = "test", _seqfield1_length = 4, \
        seqfield1 = [ [0] = 116, [1] = 101, [2] = 115, [3] = 116 ], \
	_seqfield2_length = 4, seqfield2 = "test", stringfield = "test", \
	floatfield = 2222, doublefield = 2 }
    [18:47:03.263199453] (+0.000001107) sys_write: { cpu_id = 3 }, { fd = 18, \
        buf = 0x7F5C935EAD4D, count = 1 } 
    [18:47:03.263200997] (+0.000001544) sys_poll: { cpu_id = 1 }, { ufds = 0x1C9D8A0, \
        nfds = 6, timeout_msecs = -1 }
    [18:47:03.263201067] (+0.000000070) exit_syscall: { cpu_id = 3 }, { ret = 1 }
    [18:47:03.263204813] (+0.000003746) ust_tests_hello:tptest: { cpu_id = 3 }, { \
        intfield = 1677, [...] }
    [18:47:03.263207406] (+0.000002593) ust_tests_hello:tptest: { cpu_id = 3 }, { \
	intfield = 1678, [...] }
    [...]

In short, the output above shows that CPU 1 is executing the end of a softirq handler, while CPU 3 is in user mode within the "hello" test application, writing its high-throughput events to the buffer. This example is taken at the moment the buffer switch occurs within the LTTng-UST tracer, so the application signals the consumer daemon waiting on poll() on CPU 1 that data is ready. The "hello" test application then continues writing into its tracing buffer.

Correlated analysis of events coming from both the kernel and user space, gathered efficiently without round-trips between the kernel and user space, enables debugging systemic problems across execution layers. User-space instrumentation with the LTTng-UST tracepoint event API, and the use of trace log-levels in combination with wildcards, are not covered here for brevity, but you can look at the lttng(1) man page if you are curious.

4. Performance counters and kretprobes

This example shows how to combine kernel instrumentation mechanisms to get information that is otherwise unavailable. In this case, we are interested in the number of LLC (Last Level Cache) misses produced by each invocation of a function in the Linux kernel. We arbitrarily chose the function call_rcu_sched().

First, it is important to measure the overhead produced by kretprobes, reading the performance monitoring unit (PMU), and tracing with LTTng to understand how much of the total count can be attributed to the tracing itself. LTTng has a "calibrate" command to trigger calibration functions which, when instrumented, collect the base cost of the instrumentation.

Here is an example showing the calibration, using an i7 processor with 4 general-purpose PMU registers. The information about PMU registers can be found in the kernel boot messages under "Performance Events"; look for "generic registers". Note that some registers may be reserved by the kernel NMI watchdog.

This sequence of commands gathers a trace from a kretprobe hooked on an empty function, recording the LLC-miss information as event context (see lttng add-context --help to get a list of the available PMU counters).

    $ lttng create calibrate-function
    $ lttng enable-event calibrate --kernel --function lttng_calibrate_kretprobe
    $ lttng add-context --kernel -t perf:LLC-load-misses -t perf:LLC-store-misses \
		    -t perf:LLC-prefetch-misses
    $ lttng start
    $ for a in $(seq 1 10); do \
	    lttng calibrate --kernel --function;
      done
    $ lttng stop
    $ lttng view
    $ lttng destroy

The output from babeltrace can be analyzed to look at the per-PMU counter delta between consecutive "calibrate_entry" and "calibrate_return" events. Note that these counters are per-CPU, so scheduling events need to be present in the trace to account for migration between CPUs. Therefore, for calibration purposes, only events staying on the same CPU should be considered.

The average result, for the i7, on 10 samples:

				 Average     Std.Dev.
    perf_LLC_load_misses:           5.0       0.577
    perf_LLC_store_misses:          1.6       0.516
    perf_LLC_prefetch_misses:       9.0      14.742

As can be seen, the load and store misses are relatively stable across runs (their standard deviation is relatively low) compared to the prefetch misses. We can conclude from this information that LLC load and store misses can be accounted for quite precisely by removing the calibration base-line, but prefetches within a function seem to behave too erratically (there is not much of a causal link between the code executed and the CPU prefetch activity) to be accounted for.

We can then continue with our test, which was performed on a 2.6.38 Linux kernel, on a dual-core i7 SMP CPU, with hyperthreading (the same system that was calibrated above):

    $ lttng create measure-call-rcu-sched
    $ lttng enable-event call_rcu_sched -k --function call_rcu_sched
    $ lttng add-context --kernel -t perf:LLC-load-misses -t perf:LLC-store-misses \
		    -t perf:LLC-prefetch-misses
    $ lttng start
    $ sleep 10        # [ let system generate some activity ]
    $ lttng stop
    $ lttng view
    $ lttng destroy

Here is some sample output using:

    $ lttng view -e 'babeltrace --clock-raw --no-delta'

    timestamp = 37648.652070250,
    name = call_rcu_sched_entry,
    stream.packet.context = { cpu_id = 1 },
    stream.event.context = {
	    perf_LLC_prefetch_misses = 3814,
	    perf_LLC_store_misses = 9866,
	    perf_LLC_load_misses = 16328
    },
    event.fields = {
	    ip = 0xFFFFFFFF81094A5E,
	    parent_ip = 0xFFFFFFFF810654D3
    }

    timestamp = 37648.652073373,
    name = call_rcu_sched_return,
    stream.packet.context = { cpu_id = 1 },
    stream.event.context = {
	    perf_LLC_prefetch_misses = 3815,
	    perf_LLC_store_misses = 9871,
	    perf_LLC_load_misses = 16337
    },
    event.fields = {
	    ip = 0xFFFFFFFF81094A5E,
	    parent_ip = 0xFFFFFFFF810654D3
    }

An analysis of the 1159 entry/return pairs on CPU 1 that did not migrate between processors yields:

				 Average     Std.Dev.
    perf_LLC_load_misses:          4.727      6.371
    perf_LLC_store_misses:         1.280      1.198
    perf_LLC_prefetch_misses:      1.662      4.832

So the numbers we have here are within the range of the empty function calibration. We can therefore say that call_rcu_sched() is doing a good job at staying within the Last Level Cache. We could repeat the experiment with other kernel functions, targeting L1 misses, branch misses, and various other PMU counters.

LTTngTop


LTTngTop is an ncurses-based tool developed to provide system administrators with a convenient way to browse traces and quickly find a problem, or at least a period of time when a problem occurred. That information considerably reduces the number of events we need to analyze manually. It is designed to suit system administrators because it behaves like the popular top CPU activity monitoring program. In addition to the usual behavior of top and similar tools, where the display is refreshed at a fixed interval, LTTngTop allows the user to pause the reading of the trace to take time to look at what is happening, and also to go back and forth in time to easily see the evolution between two states.

In order to properly handle the events without the risk of attributing statistics to the wrong process in case of a lost event, we require that the events be self-describing. For use with LTTngTop, it is required that each event include the process identifier (PID), the process name (procname), the thread identifier (tid), and the parent process identifier (ppid), all of which can be done using the context information. Although adding this data makes the trace bigger, it ensures that every event is handled appropriately, even if LTTng needs to discard some events (which can happen if the trace sub-buffers are too small).

As of now, LTTngTop only works with recorded traces, but work is in progress to support live tracing. The tool displays statistics such as the CPU usage time, the PMU counters (real data per-process, not sampled), and the I/O bandwidth. By default it reads one second of trace data and refreshes the display every second, which gives the feeling of playing back the activity on the system. The intended usage of this tool is to allow non-developers (especially system administrators) to use trace data and to help pinpoint the period of time when a problem occurred on the system.

In the following example, we record a trace suitable for analysis with LTTngTop (with pid, procname, tid, ppid context information associated with each event) and with three PMU counters.

    $ lttng create
    $ lttng enable-event --kernel --all
    $ lttng add-context -k -t pid -t procname -t tid -t ppid \
	    -t perf:cache-misses -t perf:major-faults \
	    -t perf:branch-load-misses
    $ lttng start
    $ sleep 10	# [ let system generate some activity ]
    $ lttng stop
    $ lttng destroy
    $ lttngtop /path/to/the/trace

With this example, you will see exactly the activity that occurred on the system, and can use the left and right arrows on the keyboard to navigate forward and backward in the history. As noted above, work is in progress to support live trace reads. It will be performed through a shared memory map on the local machine, and eventually should support viewing live traces streamed from a remote target. LTTngTop is still under development but is usable; it can be found in the Git tree, and the README explains how to get it up and running.

Upstreaming: The road ahead

We really hope that the LTTng features will gradually make their way into the upstream kernel. Our target with LTTng 2.0 has been to ensure that our user base quickly gets access to its features through their Linux distributions, without having to wait for the upstreaming process to come to completion.

It remains to be discussed whether the LTTng-specific focus on integrated, low-overhead, full-system tracing, and its ability to share tools with various industry players, make strong enough arguments to justify merging its ABI into the Linux kernel. Nevertheless, our goal is to share the common pieces of infrastructure with Perf and Ftrace whenever it is beneficial for all of the projects to do so.

Conclusion

The very low overhead and high scalability of the LTTng tracer make it an excellent choice to tackle issues on high-availability, low-latency, production servers dealing with high-throughput data. The tracer's flexibility allows traces gathered from both kernel and user space to be combined and analyzed on a common timeline.

LTTng has been used to debug performance, behavior, and real-time issues at various client sites. Some examples include using the kernel tracer to identify an abnormally long interrupt handler duration and to pinpoint the cause of delays in a soft real-time system due to a firmware bug. At the I/O level, tracing made it possible to identify bottlenecks caused by combined fsync() calls and large logs being written by timing-sensitive services. Another client experienced slowdowns and timeouts after moving from a local to a distributed filesystem: using the LTTng kernel tracer to identify much longer I/O requests in the distributed setup allowed us to pinpoint an undersized filesystem cache as the root cause of the problem.

We are currently working on several features for LTTng 2.1: integration of flight recorder "snapshots" into lttng-tools, live trace streaming over the network, system-wide LTTng-UST buffers, and filtering of LTTng-UST event fields at trace collection time. With these features and others down the road on top of the existing LTTng 2.0 base, we hope to succeed in our goal to make developers' and system administrators' lives easier.

[ Mathieu Desnoyers is the CEO of EfficiOS Inc., which also employs Julien Desfossez and David Goulet. LTTng was created under the supervision of Professor Michel R. Dagenais at Ecole Polytechnique de Montréal, where all of the authors have done (or are doing) post-graduate studies. ]

Comments (3 posted)


Page editor: Jonathan Corbet


Copyright © 2012, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds