
Linux 3.0?

By Jake Edge
September 3, 2008

The Linux kernel summit is happening this month, so various discussion topics are being tossed around on the Ksummit-2008-discuss mailing list. Alan Cox suggested a Linux release that would "throw out" some accumulated, unmaintained cruft as a topic to be discussed. Cox would like to see that release be well publicized, with a new release number, so that the intention of the release would be clear. While there will be disagreements about which drivers and subsystems can be removed, participants in the thread seem favorably disposed to the idea—at least enough that it should be discussed.

There is already a process in place for deprecating and eventually removing parts of the kernel that need it, but it is somewhat haphazardly used. Cox proposes:

At some point soon we add all the old legacy ISA drivers (barring the odd ones that turn up in embedded chipsets on LPC bus) into the feature-removal list and declare an 'ISA death' flag day which we brand 2.8 or 3.0 or something so everyone knows that we are having a single clean 'throw out' of old junk.

It would also be a chance to throw out a whole pile of other "legacy" things like ipt_tos, bzImage symlinks, ancient SCTP options, ancient lmsensor support, V4L1 only driver stuff etc.

Cox's list sparked immediate protest about some of the items on it, but the general idea was well received. There are certainly sizable portions of the kernel, especially for older hardware, that are unmaintained and probably completely broken. No one seems to have any interest in carrying that stuff forward, but, without a concerted effort to identify and remove crufty code, it is likely to remain. Cox has suggested one way to make that happen; discussion at the kernel summit might refine his idea or come up with something entirely different.

Part of the reason that unmaintained code tends to hang around is that the kernel hackers have gotten much better at fixing all affected code when they make an API change. While that is definitely a change for the better, it does have the effect of sometimes hiding code that might be ready to be removed. In earlier times, dead code would have become unbuildable after an API change or two, leading either to a maintainer stepping up or to the code being removed.

The biggest question the kernel hackers seem to have is whether a "major" kernel release, with a corresponding change to the major or minor release number, is actually needed. Greg Kroah-Hartman asks:

Can't we do all of the above today in our current model? Or is it just a marketing thing to bump to 3.0? If so, should we just pick a release and say, "here, 2.6.31 is the last 2.6 kernel and for the next 3 months we are just going to rip things out and create 3.0"?

There is an element of "marketing" to Cox's proposal. Publicizing a major release, along with the intention to get rid of "legacy" code, will allow interested parties to step up to maintain pieces that they do not want to see removed. As Cox puts it:

I thought it might be useful to actually draw some definite lines so we can actually get around to throwing stuff out rather than letting it rot forever and also if its well telegraphed both give people a chance to fix where the line goes and - yes - as a marketing thing as much as anything else to define the line in a way that non-techies, press etc get.

Plus it appeals to my sense of the open source way of doing things differently - a major release about getting rid of old junk not about adding more new wackiness people don't need 8)

Arjan van de Ven thinks that gathering the list of things to be removed is a good exercise:

I like the idea of at least discussing this, and for a bunch of people making a long list of what would go. Based on that whole list it becomes a value discussion/decision; is there enough of this to make it worth doing.

Once the list has been gathered and discussed, van de Ven notes, it may well be that it can be done under the current development model, without a major release. "But let's at least do the exercise. It's worth validating the model we have once in a while ;)"

This may not be the only discussion of kernel version numbers that takes place at the summit. Back in July, Linus Torvalds mentioned a bikeshed painting project that he planned to bring up. It seems that Torvalds is less than completely happy with how large the minor release number of the kernel is; he would like to see numbers that have more meaning, possibly date-based:

The only thing I do know is that I agree that "big meaningless numbers" are bad. "26" is already pretty big. As you point out, the 2.4.x series has much bigger numbers yet.

And yes, something like "2008" is obviously numerically bigger, but has a direct meaning and as such is possibly better than something arbitrary and non-descriptive like "26".

Version numbers are not important, per se, but having a consistent, well-understood numbering scheme certainly is. The current system has been in place for four years or so without much need to modify it. That may still be the case, but with ideas about altering it coming from multiple directions, there could be changes afoot.

For the kernel hackers themselves, there is little benefit—except, perhaps, preventing the annoyance of ever-increasing numbers—but version numbering does provide a mechanism to communicate with the "outside world". Users have come to expect the occasional major release, with some sizable and visible chunk of changes, but the current incremental kernel releases do not provide that numerically; instead, big changes come with nearly every kernel release. There may be value in raising the visibility of one particular release, either as a means to clean up the kernel or to move to a different versioning scheme—perhaps both at once.



Linux 3.0?

Posted Sep 4, 2008 4:29 UTC (Thu) by BrucePerens (guest, #2510) [Link] (7 responses)

If you kill ISA, you're killing PCMCIA too.

Linux 3.0?

Posted Sep 4, 2008 7:41 UTC (Thu) by johill (subscriber, #25196) [Link]

And PCMCIA exists today, for example in CF form factor.

Linux 3.0?

Posted Sep 4, 2008 9:01 UTC (Thu) by nix (subscriber, #2304) [Link]

Removing all the old legacy ISA drivers is *not* the same thing as
removing ISA. You can't remove ISA: under the name 'Low Pin Count bus' it
still exists even in the latest whizzy systems (and you still need it to
boot, IIRC).

Linux 3.0?

Posted Sep 4, 2008 9:09 UTC (Thu) by gevaerts (subscriber, #21521) [Link] (4 responses)

I read that as "kill support for old ISA devices", not as "kill support for the ISA bus"

Linux 3.0?

Posted Sep 4, 2008 18:26 UTC (Thu) by a9db0 (subscriber, #2181) [Link] (3 responses)

Some of us still use those antiquated ISA bus devices. Like the 3com 3c515 in my firewall machine. It, and the antiquated P90 it is installed in, still do a very nice job of defending my home network from interlopers. And I'd rather not replace either of them, thankyouverymuch.

Linux 3.0?

Posted Sep 4, 2008 20:09 UTC (Thu) by smoogen (subscriber, #97) [Link] (2 responses)

I don't see why you would have to get rid of the hardware. Since you are not adding anything new to the box, there is no reason you could not run 2.6.x until the hardware dies its chip-death sometime in the next 10 years.

Linux 3.0?

Posted Sep 4, 2008 20:16 UTC (Thu) by job (guest, #670) [Link] (1 responses)

To be fair, you still want to run a supported kernel on those old machines.

Staying on 2.6 indefinitely isn't the best move security-wise.

Linux 3.0?

Posted Sep 8, 2008 9:19 UTC (Mon) by rahulsundaram (subscriber, #21946) [Link]

You can use a distribution kernel such as RHEL's, which will get you a supported version for 7 years or so.

Lizzle

Posted Sep 4, 2008 5:54 UTC (Thu) by strcmp (subscriber, #46006) [Link]

If they remove support for all architectures but x86_64 without legacy 32-bit, /proc, sysfs, user ids, interpreted executables and physical filesystems and declare all this could be added later as plugins, they could name it Lizzle.

Linux 3.0?

Posted Sep 4, 2008 9:25 UTC (Thu) by stijn (subscriber, #570) [Link] (4 responses)

For my incredibly little-spread software I use the scheme YY-DDD, so the 08-157 release refers to day 157 in the year 2008. It scales until the year 2099 (and further by then moving to YYY-DDD), and has the not-to-be-underestimated advantage of providing a very easy way to calculate the number of days between releases. The idea of 2008 in a version number does not appeal to me; those first two bytes are really wasted.
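As a toy illustration (the function names below are mine, not from the commenter's software), the YY-DDD scheme and its easy day-count property can be sketched in a few lines of Python:

```python
from datetime import date

def yyddd(d: date) -> str:
    """Format a date under the YY-DDD scheme described above (epoch year 2000)."""
    return f"{d.year - 2000:02d}-{d.timetuple().tm_yday:03d}"

def days_between(tag_a: str, tag_b: str) -> int:
    """Number of days separating two YY-DDD release tags."""
    def to_date(tag: str) -> date:
        yy, ddd = (int(part) for part in tag.split("-"))
        base = date(2000 + yy, 1, 1)          # January 1 of the release year
        return date.fromordinal(base.toordinal() + ddd - 1)
    return abs((to_date(tag_b) - to_date(tag_a)).days)

print(yyddd(date(2008, 6, 5)))           # "08-157": day 157 of 2008
print(days_between("08-360", "09-005"))  # works across year boundaries: 11
```

Reducing each tag to an ordinal date first is what makes the day-difference computation trivial, including across year boundaries.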

Linux 3.0?

Posted Sep 4, 2008 15:15 UTC (Thu) by jengelh (guest, #33263) [Link] (3 responses)

>The idea of 2008 in a version number does not appeal to me, those first two bytes are really wasted.

People thought the same in the 20th century and used two-digit year numbers everywhere (like 24.12.21 to denote 1921-Dec-24), and that backfired as the year 2000 approached. Truncating a year number to two digits is like retrieving the short SHA for a git commit—it only works at this point in time. The next commit may cause the length of the shortest possible unique SHA to increase, which is why SHAs in commits are often not abbreviated at all, just to *keep* them unambiguous for the future.
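The abbreviation problem is easy to demonstrate without git at all. A minimal Python sketch (the hashes are made up for illustration) shows how the shortest unique prefix of a hash can grow the moment one more object appears:

```python
def shortest_unique_prefix(target: str, others: set) -> str:
    """Return the shortest prefix of `target` that no hash in `others` shares."""
    for n in range(1, len(target) + 1):
        prefix = target[:n]
        if not any(h.startswith(prefix) for h in others):
            return prefix
    return target

repo = {"9fa3b3c"}                                # existing object hashes (abbreviated)
print(shortest_unique_prefix("9fceb02", repo))    # "9fc" is enough today

repo.add("9fce123")                               # one new object arrives...
print(shortest_unique_prefix("9fceb02", repo))    # ...and now "9fceb" is needed
```

This is the commenter's point: a short SHA written down in prose can silently stop being unique as the repository grows, while the full hash never does.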

Linux 3.0?

Posted Sep 4, 2008 15:30 UTC (Thu) by stijn (subscriber, #570) [Link] (1 responses)

Whoever said anything about truncating? I've just chosen to arbitrarily set the starting point to 2000 -- it is a new epoch. I am pretty sure I'll never make releases in the year 507. If somehow people decide to stick my version tag in a 6-byte field, that is not my problem. Come the year 2100, I'll happily release (very happily I should say, if still around and able) version 100-152.

Linux 3.0?

Posted Sep 4, 2008 19:33 UTC (Thu) by dlang (guest, #313) [Link]

well, then why not use 1900 as the epoch? (date already does this and returns 108 for this year)

it ends up being confusing.

and you are always free to truncate the version number yourself. Think of vehicle model years: you refer to the 2008 model as the oh-eight model and everyone knows what you are referring to, but if you referred to it as the 8 model, most people would take a few seconds to figure out what you are talking about.

Linux 3.0 - Date-based release numbers

Posted Sep 6, 2008 0:45 UTC (Sat) by giraffedata (guest, #1954) [Link]

I think two-digit years make sense now just like they did in 1980. The cost of carrying those extra digits all that time exceeds the cost of dealing with the century turnover. Think of all the systems that didn't even survive until 2000; 4 digit years would have been a total waste in them. What's the probability that the Linux kernel will still be around, released in the same way as it is today, in 2100?

I wrote programs in 1995 that could not survive the Y2K transition. Some had to be restarted after the turnover and others had to have minor code changes after the turnover, with minor work stoppage until that happened. Many were no longer in use. It was a net win.

How about a driver cleanup on a regular basis?

Posted Sep 4, 2008 10:28 UTC (Thu) by mosfet (guest, #45339) [Link]

Since the kernel releases tend to be on a regular base anyway, why not call every n-th (n=10?) release a "remove unmaintained driver cruft release"? Maybe this would stimulate long-term maintainership ...

(Guess this has been discussed somewhere I haven't looked yet)

Linux 3.0?

Posted Sep 4, 2008 11:48 UTC (Thu) by cde (guest, #46554) [Link] (2 responses)

One cool feature I'd like to see for Linux 3.0 is the return of the 4G/4G user/kernel split. Of course, there is a performance hit on the TLB (up to 30% IIRC on a P4). The nice thing about a full split is that you protect against a whole range of attacks that involve executing user-space code in the context of the kernel.

A good example is the vmsplice exploit, which is quite complicated but basically led to ring0 code execution because lower pages could be manipulated by user space (using MAP_FIXED), and those were mapped into the kernel as well.

Now I understand not everyone would want this feature, but it'd be a plus for security-minded sysadmins. In addition, it'd be nice if Linux could move to a more micro-kernel like design. There's an additional performance hit but once again you improve security (although IPC introduces a new class of potential flaws).

Linux 3.0?

Posted Sep 4, 2008 15:16 UTC (Thu) by jengelh (guest, #33263) [Link]

32-bit x86 will most likely be on the decline, and hence a 4/4 split will get less and less attention as people move to 64-bit machines.

Linux 3.0?

Posted Sep 10, 2008 23:15 UTC (Wed) by PaXTeam (guest, #24616) [Link]

If you're playing with i386 features then you're a whole lot better off using UDEREF in PaX. It has basically no performance impact and properly separates userland/kernel address spaces.

Linux 3.0?

Posted Sep 4, 2008 17:02 UTC (Thu) by iabervon (subscriber, #722) [Link] (7 responses)

I think spending 3 months removing things is a bit excessive. I'd say Linus should queue up a set of deprecated code removal patches in a branch, and, when he releases 2.6.30 (or something), merge that branch and release 3.0 fifteen seconds later. This has the big advantage that there's no non-deprecated difference between 2.6.30 and 3.0; so there's no migration pain going from 2.6.30 to 3.0 (aside from the fact that you'll have to have made sure that 2.6.30 isn't giving you any deprecation complaints), and people who have to roll back to 2.6.x because of finding that they're still using a deprecated feature don't have problems going back.

Then there would be a release cycle in which nothing could depend extensively on the deprecated stuff being gone (because there's zero time to write such a change between the 2.6.30 release and the 3.0.1 merge window), meaning that it would be 3.0.2 which people who needed the deprecated stuff really couldn't use (since it would be what includes the "now that that junk is gone, we can clean this code up and move forward" patches). In the 3.0.1 merge window, there would probably be a lot of dropping the parts of patches/merges that fix removed code for API changes (since -next wouldn't account for the removal), but that's easy enough.

Queuing up the actual removal commits for a just-removal release also means that they can be carefully vetted for only removing things that are producing loud warnings already and where exactly what will be removed has been publicized in patch-level detail.

RFC: adding a 'version' flag process to kernel development in a non-destructive manner

Posted Sep 5, 2008 1:34 UTC (Fri) by kirkengaard (guest, #15022) [Link] (6 responses)

That may just be the prima-facie-smartest thing I've heard all day, for what it's worth. If the flag for 3.0 is decluttering, then why should it wait on new features, which are currently being added in a perfectly functional process already? Why should it have anything to do with the new feature processes at all? (Besides canonical meaning, which is noticeably flexible)

As we have it now, version.major.minor.micro is a functional system, but we currently only increment 'minor' and 'micro'. We deprecated the system whereby 'major' was incremented, and I'm not sure if there was a system by which 'version' incremented, except by feel. Of course, that's only happened twice.

'minor' increments by stable release cycle timing, measured in release-candidate testing phases and the associated regression-fixing cycle. (Not arguing about operation quality, but the system works, however fuzzy.) 'micro' increments based on regression-fixing during the life of a given stable 'minor' release. Canonically, inferior numbers reset when superior numbers increment. We continue to do this with 'micro', but 'minor' hasn't had a reset since 2.6, when we deprecated the 'major' process. Simple fact, not complaint. Seems to be at least part of the bee in Linus' bonnet.

It isn't necessarily wise to recycle the 'major' number within 2.x, for the simple confusion of what that means -- that's quite another bike-shed altogether, and touchy. Making this cleanup into 2.7, and stabilizing it into 2.8 is like the old pattern, and its replacement is a very profitable minor.micro development/incrementation process. More, we're not talking about doing precisely what we used to do for 'major' increments, but about doing something different, which makes it an inappropriate use of 'major' anyways.

If we develop this argument into a process by which 'version' is incremented based on a desired development goal, in this example cruft-deprecation and removal, this will do several things.

0) it resets the clock on figuring out what to do with 'major' in terms of association with a development process goal.
0.1) it doesn't interfere with the touchy issue of what process *should* be associated with 'major' increments.
1) it adds a process and signal to the overall kernel development environment in pursuit of a desirable goal.
2) it does not remove existing process elements of the overall kernel development environment in pursuit of that goal.
2.1) it does not therefore require redefinition of 'minor' and 'micro' signals, retaining existing meaning along with functionality.

As Mr. Barkalow points out, removal can be run as a git-tree or some similar parallel process, which has its own associated costs. These costs can be mitigated by spending little actual time as a parallel git-tree. Pulling the tree and incrementing 'version' does the job, obviously once the patches/changesets associated have been vetted and tested through normal channels. Voici, a new version, 3.0(.0.0). The -stable process and the normal course of 'minor' updates apply to the new 'version' of Linux just like to the old.

Problem 0) what to do with 2.6.y? The canonical process, of which Marcelo Tosatti was the last victim, was to provide for a new chief maintainer of the previous 'major' branch while development continued in the next 'major' branch. We've since tried doing likewise with 'minor' branches, which experiment was abandoned eventually. The new 'minor' process is now even more forward-oriented than it had been under the old 'major.minor' process.

The real problem is that the less time that 2.6 needs to be run, the better. (Of course, that's the same problem that 2.4->2.6 still has.)

Problem 0.1) what about hardware that won't run under 3.0, and requires 2.6?

There are several remedies for this. One is to set a (preferably conservative) threshold above which hardware is deemed to be worth supporting. This is bound to be unpopular. I'm not sure if there is hardware you can't run 2.6 on -- that's why this is styled an RFC -- but I do know that it works the other way. There is newer hardware that you simply can't run 1.x or 0.x on. I suspect that after 3.0 goes on, however it works, that there will be newer hardware that you simply can't run 2.x on. Which raises:

Problem 1) what, exactly, are we defining as deserving of deprecation?

My understanding, consonant with what I know of the people involved, is that the goal is not to deprecate hardware that works; that's regression, pure and simple. Has been for some time now. The goal is to deprecate code that doesn't work. To remove software 'features' that are no longer required by the current feature-set of the kernel for the support of currently-working hardware. "There is no such thing as obsolete hardware, just hardware somebody else doesn't want." And to do it in a transparent, loud-and-clear manner that doesn't invite easy reversal, while retaining the normal amenities of kernel development (cf. the devfs fiasco).

I'm sure there's plenty in benefits and problems I haven't hit, too. I'm not counting the practical details of ironing out bugs across the transition, or the bugs to be worked out in the precise details of interaction between the two existing and one proposed new processes. Those are surmountable if it's worth doing; someone will have enough of an itch to come up with a good solution.

Fire away!

Matt Frost

(Yes, I'm aware this isn't linux-kernel, sorry.)

RFC: adding a 'version' flag process to kernel development in a non-destructive manner

Posted Sep 5, 2008 15:13 UTC (Fri) by iabervon (subscriber, #722) [Link] (2 responses)

I assume that the stable team would maintain the last 2.6.x.y in parallel with 3.0.0.y (particularly if they're identical aside from 3.0.0.y having chunks removed), and drop it along with 3.0.0.y around when there are no known regressions left in 3.0.1.y.

It should be relatively easy to minimize the stuff that requires 2.6, because the criterion for deprecating something is that either it doesn't exist any more (so far as anybody can tell) or it has a working replacement. If anyone actually can't switch from 2.6 to 3.0, then something from 2.6 needs to be brought back. This is different from 2.4->2.6, where there were a number of things that had to be done in combination with the transition because neither 2.4 nor 2.6 was a superset of the other. If 3.0 is exclusively a feature-removal release, then it's a proper subset of the last 2.6, and if you can get to the last 2.6 and have your system work, and you run without getting deprecation warnings, then you can move to 3.0 without any more changes.

RFC: adding a 'version' flag process to kernel development in a non-destructive manner

Posted Sep 5, 2008 21:11 UTC (Fri) by nix (subscriber, #2304) [Link]

Yeah, but some of the comments in this thread have been talking about
dropping old syscalls and breaking userspace compatibility. Personally I
don't think anyone would care if, say, a.out and all syscalls obsoleted
before the ELF transition stopped working (it happens regularly already
for many kernel releases at a time: Alan Cox was the last person I've
heard of who runs stuff on libc2 and libc3, and even he's stopped now),
but breaking anything from the ELF era is probably a mistake.

RFC: adding a 'version' flag process to kernel development in a non-destructive manner

Posted Jul 2, 2009 18:48 UTC (Thu) by duncan1 (guest, #59412) [Link]

>3.0 is exclusively a feature-removal release

I don't think so. If they are worried about obsolescence or bloat, then they should set a ratio of adding new features to removing obsolete features. Then change the ratio as necessary.

Hobby Horse

Posted Sep 5, 2008 15:55 UTC (Fri) by Baylink (guest, #755) [Link] (2 responses)

<sigh>

Version numbers *mean* something to people, as much as Linus would like to assume they don't.

Lots of people have been numbering lots of software for a long, long time, and the conventions that have sprung up around that happened because they were useful, both to people who assign numbers, and because they were useful to people who need to read them.

It's just like the recent trend of renumbering Interstate exits to match the mile markers: changing the convention provides no *new* information (there were mile markers along the side of the road already, thank-you-very-much) and deprives you of *useful* information that there is now no way to get (did I miss the last Sarasota exit, honey? Hell if I know, dear).

At their base, version numbers are a contract between a user who reports them, and a tech support person who has to know what you're running, and for that purpose, yes, anything will do.

But people, including end users as well as release managers for distributions, make other conventional assumptions about release numbers, to help them make decisions about what they should do in upgrade situations, and breaking those assumptions seems fraught with peril.

Not to mention that if RPM can't manage to figure out that 1.0.0rc5 => 1.0.0 is an *upgrade*, and shouldn't require --force... leading a project to have to name its first production release 1.0.1, how in *hell* should we expect it to handle whatever whacky scheme it is that Linus has in his bikeshed?

No, I don't often think Linus is wrong, and I'm willing to be convinced otherwise, but this is one of those times.

Hobby Horse

Posted Sep 7, 2008 21:13 UTC (Sun) by kirkengaard (guest, #15022) [Link]

Please explain the breakage you imply to the natural upgrade assumptions, making reference to your parent post. Removing cruft *is* an upgrade, AFAICT, and I said nothing that should imply backwards progress linked to forwards numbers. 3.0 will mean something, whatever the process that is linked to that number happens to be. Ripping out cruft is simply the discussed process for calling the 3.0 flag this time.

Also, didn't I explicitly reject reusing the major revision flag for this exact reason?

And, you're misusing bike-shed; the canonical usage which has been followed above is as an object, not a container, and the reference is to what color we'll paint it. Absent that allusion, what are you talking about WRT Linus, numbering, and "wacky schemes" that would violate incremental version-numbering?

[OT] mile markers

Posted Sep 8, 2008 19:52 UTC (Mon) by roelofs (guest, #2599) [Link]

>It's just like the recent trend of renumbering Interstate exits to match the mile markers: changing the convention provides no *new* information (there were mile markers along the side of the road already, thank-you-very-much)

Not in California (much to my amazement).

:-/

Greg

Linux 3.0?

Posted Sep 4, 2008 20:17 UTC (Thu) by dambacher (subscriber, #1710) [Link]

Hey, I like this idea

Linux could be the first software to REMOVE unused features and bloat on a major version increase!

This is definitely worth doing a 3.0 for.

Linux 3.0?

Posted Sep 5, 2008 8:23 UTC (Fri) by yodermk (subscriber, #3803) [Link] (1 responses)

Any chance of also cleaning up some parts of the public API that are sub-optimal, breaking user space compatibility?

Seems like that could be a good idea at some point.

In that case, 2.6 would need basic maintenance for at least 10 years.

Linux 3.0?

Posted Sep 6, 2008 14:45 UTC (Sat) by addw (guest, #1771) [Link]

Breaking userland API is bad.

How about changing the ELF magic number in some way to bring in a compatibility layer? OK, this does somewhat defeat the object, but all the deprecated API could be kept in one module (hopefully rarely loaded).
This would allow old binaries to run.

I suspect that most programs would be OK on a recompile, only a few would genuinely break since they rely on really old/removed bits of the kernel API.

What do we do about bits of /proc that change ?

It would be nice if some kind of warning message could be generated in advance so that we would know what would break.

Linux 3.0?

Posted Sep 10, 2008 21:35 UTC (Wed) by MLKahnt (guest, #6642) [Link] (1 responses)

I saw one comment brush in the neighbourhood of my thoughts. I agree there is value, in terms of basic memory management and build time, particularly on smaller footprints, in stripping out older, unsupported, and (with only a few exceptions) unused code. But if that is done, since it reflects a change in the effective functionality of the kernel, a clear break in the kernel numbering practice should be included for those who still rely on the old architectures being retired. The remaining ISA users continue with the 2.6 kernels, while the rest of us move on to the opportunities offered by code that can be planned without having to work around the limited services available from the ISA bus.

Linux 3.0?

Posted Sep 11, 2008 7:15 UTC (Thu) by dlang (guest, #313) [Link]

Almost all of these things can be configured out when you build a kernel; you don't need to remove the code from the source.


Copyright © 2008, Eklektix, Inc.
This article may be redistributed under the terms of the Creative Commons CC BY-SA 4.0 license
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds