Another look at the new development model
The old process, in use since the 1.0 kernel release, worked with two major forks. The even-numbered fork was the "stable" series, managed in a way which (most of the time) attempted to keep the number of changes to a minimum. The odd-numbered fork, instead, was the development series, where anything goes. The idea was that most users would use the stable kernels, and that those kernels could be expected to be as bug-free as possible.
This mechanism has been made to work, but it has a number of problems which have been noticed over the years. These include:
- The stable and development trees diverge from each other quickly, especially since big API changes have tended to be saved for early in the development series. This divergence makes it hard to port code between the two trees. As a result, backporting new features into the stable series is hard, and forward-porting fixes is also a challenge. 2.6.0 came out with a number of bugs which had long been fixed in 2.4.
- The stable tree, after a short while, lacks fixes, features, and improvements which have been added to the development tree. That code may well have proved itself stable in the development series, but it often does not make it into a stable kernel for years. The kernels that people are told to use can run far behind the state of the art.
- The stable kernels are often very heavily patched by the distributors. These patches include necessary fixes, backports of development kernel features, and more. As a result, stock distribution kernels diverge significantly from the mainline, and from each other. Distributor kernels sometimes are shipped with early implementations of features which evolve significantly before appearing in an official stable kernel, leading to compatibility problems for users.
The focus on keeping changes out of the stable kernel tree is now seen as being a bit misdirected. Well-tested patches can be safely merged, most of the time. Blocking patches, instead, creates an immense "patch pressure" which leads to divergent kernels and a major destabilizing flood whenever the door is opened a little.
So how have things changed? The "new" process is really just an acknowledgment of how things have been done since the 2.6.0 release - or, perhaps, a little before. It looks like this:
- New patches which appear to be nearing prime-time readiness are added to Andrew Morton's -mm tree. This addition can be done by Andrew himself, or by way of a growing number of BitKeeper repositories which are automatically merged into -mm.
- Each patch lives in -mm and is tested, commented on, refined, etc. Eventually, if the patch proves to be both useful and stable, it is forwarded on to Linus for merging into the mainline. If, instead, it causes problems or does not bring significant benefit, the patch will eventually be dropped from -mm.
The -mm tree has proved to be a truly novel addition to the development process. Each patch in this tree continues to be tracked as an independent contribution; it can be changed or removed at any time. The ability to drop patches is the real change; patches merged into the mainline lose their identity and become difficult to revert. The -mm tree provides a sort of proving ground which the kernel process has never quite had before. Alan Cox's -ac trees were similar, but they (1) were less experimental than -mm (distributors often merged -ac almost directly into their stock kernels), and (2) did not track each patch independently the way -mm does.
In essence, -mm has become the new kernel development tree. The old process created a hard fork and was not designed to merge changes back into the "old" stable tree. -mm is much more dynamic; it exists as a set of patches to the mainline, and any individual patch can move over to the mainline at any time. New features get the testing they need, then graduate to the mainline when they are ready. New developments move into the stable kernel quickly, the development kernel benefits from all fixes made to the stable branch, and the whole process moves in a much faster and smoother way.
More than one observer in Ottawa made this ironic observation: it would appear that Andrew Morton is now in charge of the development kernel, while Linus manages the stable kernel. That is not quite how things were expected to turn out, but it seems to be working. Consider some of the changes which have been merged since 2.6.0:
- 4K kernel stacks
- NX page protection and ia32e architecture support
- The NUMA API
- Laptop mode
- The lightweight auditing framework
- The CFQ disk I/O scheduler
- Netpoll
- Cryptoloop, snapshot, and mirroring in the device mapper
- Scheduling domains
- The object-based reverse mapping VM
Some of these changes are truly significant, and things have not stopped there: new patches are going into the kernel at a rate of about 10MB/month. Even so, 2.6.7 was, arguably, the most stable 2.6 kernel yet. It contains many of the latest features, has few performance problems, and the number of bug reports has been quite small. The new process is yielding some good results.
Naturally, there are some issues to resolve. One of those is the deprecation of features, which used to be tied to the timing of the old process. The new plan, it seems, is to give users a one-year notice, including a printk() warning in the kernel. The first features to be removed by this path are likely to be devfs and cryptoloop. There is also the question of changes which are simply too disruptive to merge anytime soon. Page clustering, if it is merged, could be one of those. When such a feature comes along, we may yet see the creation of a 2.7 tree to host it. Even then, however, 2.7 will track 2.6 as closely as possible, and it may go away when the feature which drove its existence becomes ready to go into the mainline.
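To make the deprecation plan concrete, here is a minimal, hypothetical sketch in kernel-style C (not taken from the actual devfs or cryptoloop code) of the kind of one-time printk() notice a feature scheduled for removal might emit; the feature name and function are invented for illustration:

    #include <linux/kernel.h>

    /* Hypothetical example: warn once that this feature is deprecated. */
    static int deprecation_warned;

    static void myfeature_deprecation_notice(void)
    {
            if (!deprecation_warned) {
                    deprecation_warned = 1;
                    printk(KERN_WARNING "myfeature: this interface is deprecated "
                           "and is scheduled for removal in one year; "
                           "please migrate to its replacement.\n");
            }
    }

The real warnings name the specific feature and its replacement; the point is simply that users see the notice in their logs well before the code disappears.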
This change to the development process is significant. It is not particularly new, however. The actual change happened the better part of a year ago; it was simply hidden in plain sight. All that has really happened in Ottawa is that the developers have acknowledged that the process is working well. One can easily argue, in fact, that the kernel development process has never functioned better than it does now. So, rather than break such a successful model, the developers are going to let it run.
Index entries for this article:
Kernel: Development model
Kernel: Releases
Cryptoloop does the hokey-cokey?
Posted Jul 29, 2004 6:58 UTC (Thu)
by tgb (guest, #745)
[Link] (3 responses)
Consider some of the changes which have been merged since 2.6.0: ... The first features to be removed by this path are likely to be devfs and cryptoloop. Pardon my ignorance, but why is cryptoloop, which appears to be a relatively new feature, being pulled already?
Cryptoloop does the hokey-cokey?
Posted Jul 29, 2004 7:30 UTC (Thu)
by nix (subscriber, #2304)
[Link] (2 responses)
Cryptoloop *in the device mapper* is new.
Encrypted loopback devices (implemented by cryptoloop outside the device mapper) are very old: I remember them from the 2.0 days, and they may predate that. One question: if cryptoloop is going away, what's replacing it? Is the CryptoAPI there for no reason, or is there some new magical way to encrypt filesystems that I've overlooked?
Cryptoloop does the hokey-cokey?
Posted Jul 29, 2004 9:37 UTC (Thu)
by james (subscriber, #1325)
[Link]
As I understand it, the replacement is dm-crypt: doing cryptography through DM.
The old cryptoloop support is allegedly "buggy, unmaintained, and reportedly has mutliple [sic] security weaknesses," and the kernel crew feel that vulnerable encrypted filesystem support is worse than no support at all: at least if there's no support, people know their data is vulnerable...
James.
Cryptoloop does the hokey-cokey?
Posted Jul 29, 2004 22:23 UTC (Thu)
by Ross (guest, #4065)
[Link]
You can use DM to encrypt a device and the loopback block driver to create the device from a file. So you end up using two tools instead of one but it works.
Will there be a 2.7 kernel generation
Posted Jul 29, 2004 9:49 UTC (Thu)
by tarvin (guest, #4412)
[Link] (3 responses)
Is the article to be read as there will not be a 2.7 development generation of the kernel?
Will there be a 2.7 kernel generation
Posted Jul 29, 2004 13:37 UTC (Thu)
by elanthis (guest, #6227)
[Link] (2 responses)
No, because if you actually read the article, it explicitly states that when patches which are too disruptive even for -mm start showing up, 2.7 will likely be created.
Will there be a 2.7 kernel generation
Posted Jul 29, 2004 14:13 UTC (Thu)
by allesfresser (guest, #216)
[Link] (1 responses)
I also had the same question; it was engendered by this curious sentence: "Even then, however, 2.7 will track 2.6 as closely as possible, and it may go away when the feature which drove its existence becomes ready to go into the mainline."
Any idea what that means? Will the 2.7 tree just dry up and blow away when it's not needed anymore? It might be better to call it something else than 2.7 if that's the case...
Will there be a 2.7 kernel generation
Posted Jul 29, 2004 14:18 UTC (Thu)
by corbet (editor, #1)
[Link]
Nobody really knows how 2.7 will work, once it's created. But, yes, there is a good chance that the developments which forced the creation of 2.7 will, once they are ready, be pushed downward into 2.6 and 2.7 will fade away. It all really depends on the nature of the changes, however. Someday, something sufficiently disruptive will come along and there will be no choice but to push forward to a 2.8.
Another look at the new development model
Posted Jul 30, 2004 1:16 UTC (Fri)
by simon_kitching (guest, #4874)
[Link]
Is this a move towards a system similar to the BSD development models?
I have seen comments from BSD developers which were very critical of the "periodic release" style development of Linux, and praising their own system (which I know very little about).
My problem with this
Posted Jul 30, 2004 6:52 UTC (Fri)
by pm101 (guest, #3011)
[Link] (3 responses)
I'm not a computer person. I understand computers, but I don't like dealing with computers. I read LWN every week, and am an able programmer, but fundamentally, I want computers to just work. I last installed Debian maybe four years ago, configured it up very nicely, and haven't really reconfigured anything on my computer other than the periodic apt-get update/upgrade, to keep me current for security patches. In addition, kernel security holes force me to upgrade periodically. Right now, I grab a new kernel from kernel.org, copy the .config, check for new options, build, and reboot. Minor pain-in-the-ass, but manageable even if I'm on deadline. The computer does stuff for a while, but I have 5 minutes, so I can keep everything up-to-date.
I haven't had time to migrate to 2.6. I tried once. Too much time. Didn't have time to finish. I'll do it someday, but reconfiguring all the hardware and every subsystem is a royal pain.
What concerns me is the following: I'm on deadline. A security hole is found in the kernel, so I must upgrade. Say I'm running 2.6.12 at the time, since keeping up with 2.6 is too much work, and the current kernel is 2.6.41. With this model, upgrading is really untenable. I won't have time to reconfigure everything, figure out ALSA was removed in favor of some new sound system, iptables works differently, so my firewall breaks, and either way, I have 100 new/changed options in make menuconfig to redo.
I don't/can't run stock distribution kernels, since I did configure up my system with a nice firewall, power management, support for esoteric hardware, etc. Some things in this (I don't recall what) weren't in the stock kernel. I'm just an end-user, but the same applies to corporate installs that want a consistent system. To you, 2.4 may be obsolete, but to me, it's stable and fast/easy to manage. I don't want to be forced to upgrade my kernel to a significantly different one anytime a security hole is found, or even for new hardware (except in extreme examples; maybe for PCI->PCI/X or something). I also don't need features backported; the only thing I might need in the new kernel are the new device drivers.
I'd be much happier if some version of 2.6 was just marked as "stable," and had just drivers and security fixes backported to it. Otherwise, I'd continue the above development model with the mainstream kernel marked 2.7 "semistable," together with 2.7-mm/2.7-ac "unstable"? As with Debian, most users would run kernel/testing, developers would run kernel/unstable, and technologically-backwards old farts like me would have a nice kernel/stable. Stable here wouldn't just mean "won't crash," but the more traditional definition of "won't change much."
My problem with this
Posted Jul 30, 2004 9:02 UTC (Fri)
by NAR (subscriber, #1313)
[Link]
In addition, kernel security holes force me to upgrade periodically. Right now, I grab a new kernel from kernel.org, copy the .config, check for new options, build, and reboot. Minor pain-in-the-ass, but manageable even if I'm on deadline. [...] Say I'm running 2.6.12 at the time, since keeping up with 2.6 is too much work, and the current kernel is 2.6.41.
If you're up-to-date with the 2.4.x kernel, I see no reason why you wouldn't be up-to-date with the 2.6.x kernel too so this later scenario wouldn't happen to you.
Stable branch still needed
Posted Jul 30, 2004 22:02 UTC (Fri)
by giraffedata (guest, #1954)
[Link]
There is still a huge need for a stable/stabilizing branch of the Linux kernel, which is not provided in this system. If Linus doesn't want to provide or endorse one, I expect someone else will. Probably major distributions.
I'm talking about a branch for people who are using a computer to do a certain thing and as long as that thing doesn't change, they don't need new features. They do, however, need bug fixes and other minor adjustments. For these people, no matter how much a certain piece of code has stabilized in mm, it is too big a risk to add it to their system when they aren't even going to use it.
Also, removing features can only hurt these people.
I envision an expanded form of subtrees (which already exist to a small degree), wherein someone distributes 2.6.7.1, 2.6.7.2, etc. Assuming such a distributor starts up the next stabilizing series before 2.6.7 is two years old, we will avoid the pressure to make destabilizing changes to the stable series, the pressure that killed the even/odd system we had going.
My problem with this
Posted Dec 28, 2004 7:19 UTC (Tue)
by smamunr (guest, #26850)
[Link]
Hey, the kind of issues you are talking about are real, but not cheap to solve. You have to hire programmers who will keep track of trivial changes and will incorporate those trivial changes and security fixes. Unfortunately, there are certain types of bugs (which may be security threats) that need quite a large amount of change.
There could be another option: hire some expert who will plan and design a migration to the new kernel. Then I would suggest you upgrade your system every six months; otherwise you take the risk of becoming obsolete. Come on, it is free as in beer; why should the developers take the pain when you are not ready to consider it?
Even proprietary systems are not painless. Look at MS XP SP2 and how many ways it is disruptive. Life is like that!
The only problem left...
Posted Jul 30, 2004 13:04 UTC (Fri)
by fdesloges (guest, #291)
[Link]
The only problem left with this continuum of changes is that developers in general (ISVs come to mind) expect the API not to change within 2.6, with new features and API changes being introduced all together in new major versions. This allows them to say "Works with kernel 2.6". The tracking of incremental changes will make things more difficult for ISVs using cutting-edge features of the kernel (CGI houses, etc.).
Oh, and it will certainly offer some challenge to our beloved editor, as to when he is supposed to send the next edition of Linux Device Drivers to the press. ;-)
Similar problem to Debian, and a similar solution
Posted Jul 31, 2004 21:08 UTC (Sat)
by walles (guest, #954)
[Link]
Debian was suffering from the same set of problems:
* The stable and development trees diverge from each other quickly
* The stable tree, after a short while, lacks fixes, features, and improvements which have been added to the development tree
* The stable distribution is often very heavily patched by re-distributors (Knoppix, Libranet, Lindows, ...).
The solution adopted by Debian was similar, but not the same; it was because of the above reasons that Debian Testing (http://www.debian.org/devel/testing) was invented, as a sibling to Stable and Unstable.
Just like Testing made Debian a lot more accessible to people who found Unstable to be too scary and Stable to be too far behind, here's to hoping that the -mm tree will do the same for the kernel.
In a few words
Posted Aug 5, 2004 6:47 UTC (Thu)
by philips (guest, #937)
[Link]
In a few words: Unleash The Power of BitKeeper.
I cannot find a better explanation for what is happening.