See also: last week's Kernel page.
The current kernel release is 2.4.10, which was released by Linus on September 23. It is a huge (11MB) patch with some far-reaching changes: jffs2 and NTFS updates, a large ACPI update, the latest version of min()/max(), many block device changes (including one that makes block device I/O go through the page cache), a new multipath RAID personality, various architecture updates, a great deal of merging from Alan Cox's "ac" series, and a virtual memory update from Andrea Arcangeli - we'll get to that shortly. The initial user reports on 2.4.10 are almost uniformly positive.
Alan Cox's latest is 2.4.9-ac15, which includes many more fixes, and some virtual memory patches from Rik van Riel.
Note that the finger server at finger.kernel.org now lists the latest "ac" patch along with the Linus releases.
2.0 lives. The 2.0 kernel may be ancient history to many, but David Weinehall is still carrying the torch: he has recently released 2.0.40-pre1, the first prepatch for a 2.0.40 stable release. The patch includes a small number of fixes and a number of code cleanups.
Virtual memory: the plot thickens. Readers of this page know that the Linux kernel hackers have been working to improve virtual memory performance for a long time - since somewhere in the 2.1 series, according to some of the more cynical observers. VM performance has been, perhaps, the largest remaining issue with the 2.4 kernel. Almost everything works very well, but memory exhaustion and massive swapping have been the bane of many 2.4 users.
Quite a bit of incremental work has gone into fixing up 2.4 VM. Andrea Arcangeli, however, came to the conclusion that the incremental approach wasn't going to work; instead, he posted 2.4.10-pre10-aa1, which included a major rewrite of the VM code. This rewrite throws out much of the previous VM algorithm, including things like page aging, and replaces it with something simpler. The 2.4.10 kernel has a completely different virtual memory subsystem than its predecessors.
Even for people who are getting used to seeing large changes slip into the "stable" kernel series, this patch came as a bit of a surprise. Initial reactions were not positive:
But suddenly, the number of people who understand the Linux VM has gone from maybe 10 down to just one-and-a-bit. A large number of comments have been removed, and a year's worth of discussion has been invalidated.
I've never seen as invasive a patch that ran the risk of completely torpedoing stability merged into a STABLE KERNEL SERIES, nor would I ever consider submitting such a patch.
I have nothing against the code itself (the "old" code also had bugs), but a major VM rewrite at this point seems to be dangerous if we want a stable VM.
Linus 2.4.10pre is definitely 2.5 in disguise.
Look, the problem is that Linus is being an asshole and integrating conflicting ideas into both the VM and the VFS, without giving anybody prior notice, and later blaming others.
There is, however, one group whose complaints are notably absent: 2.4.10 users. With an occasional exception, people who have actually installed 2.4.10 seem to be running it happily. A lot of the swap-related problems from earlier 2.4.x kernels appear to have been solved. Wider use of 2.4.10 will doubtless turn up other problems - you can't make such large changes to such a complex and crucial subsystem without them - but the final judgement may well be that this was a good change.
Not everybody has bought into it yet, however. The "ac" kernel series has stayed away from the mainline VM for a while now, and, as of 2.4.9-ac15, Alan was still accepting changes to that code. In other words, the Linus and Alan kernels have diverged in a much more fundamental way than ever before. For the short term, the two kernel trees can function as a laboratory to see which VM approach works better - though one does not normally use stable kernels in this mode. In the longer term, however, one can only hope that some sort of VM consensus is reached.
Should proprietary security modules be allowed? The Linux Security Module project has been working since last April to create a flexible framework that would allow the plugging of arbitrary enhanced security mechanisms into the kernel. To that end, the LSM hackers have created a lengthy series of hooks which will allow a security module to make decisions on just about any operation that a process can perform. Those who are interested in what the security module interface looks like can get a view from the well-documented security.h include file provided with the LSM patch.
The LSM patch is approaching readiness for inclusion into the (2.5) kernel. This proximity caused Greg Kroah-Hartman, perhaps rather belatedly, to submit a patch limiting the use of the security.h file to modules licensed as free software. The effect of this change is to say that all security modules must be free software; no proprietary modules need apply.
The longstanding policy for Linux kernel modules, of course, has been that closed-source modules are allowed, as long as they follow the (not well defined) module interface. Restricting security modules may seem, at first blush, to be a deviation from this policy: if proprietary driver modules may be loaded, why not proprietary security modules? Numerous objections to the restriction have been posted, mostly arguing along these lines. There has also been an argument that the restriction is, itself, a violation of the GPL.
The security module patch, however, is a major change to the module interface. With this new interface, a module can easily hook code into many parts of the kernel; very few operations are left untouched. Thus, security modules can change the functionality of the kernel in ways that, under the current module interface, are not possible. Using this interface, a proprietary module could add much interesting new code, which may have nothing to do with security, to the kernel.
Greg has, for now, removed the restriction as a result of the controversy. In the end, Linus will probably have to make the decision. Given that closed-source security modules will be able to do many things that are currently forbidden to proprietary code, however, there is a good chance that the security module patch will not be accepted without a licensing restriction.
(The latest security module patch is the September 23 version).
A proposal for module initialization changes. Rusty Russell has posted a proposal for changes to the module loading and initialization code in 2.5. These changes have a couple of goals: (1) decreasing even further the differences between linked-in and modular code, and (2) addressing the remaining race conditions associated with loadable modules. The changes also simplify the module loading code, allow the automatic exporting of module parameters to /proc, and provide a "warm fuzzy bleeding edge feel."
If this scheme is adopted, the changes for modular code will be significant, but relatively straightforward. Module initialization, for example, will be split into two phases. The first sets everything up, but does not make the module visible to the rest of the kernel. It can fail, causing the entire module load to fail, without somebody else trying to access it halfway through. The second phase then makes the module visible, and is required to succeed. Unloading works in a similar way; the first phase makes the module invisible to new users in the kernel, while the second actually shuts the module down when no more users exist.
As of this writing, there have been no comments on the proposal; people must either like it, or they don't think 2.5 will ever happen.
Other patches and updates released this week (and the week before - we're catching up) include:
Section Editor: Jonathan Corbet
September 27, 2001