New kernels and old distributions
The problem, as it turns out, is caused by some sysfs changes designed to improve power management in the kernel. The immediate problem can be fixed by adding another patch, but that, in turn, only leads to further problems; a number of distributions will break because the version of udev they ship is too old to understand the new sysfs format. Andrew Morton complained that Fedora Core 3 breaks, but the problem is likely to be more widespread than that.
Greg Kroah-Hartman, the developer behind the changes, responded this way:
How long do you expect the kernel to support unsupported, community based distros that thrive on the fact that they are quickly updated? [...]
And yes, I will revert the patch in mainline that causes people to have to upgrade to a udev that is in FC5, and wait till the next release for that to happen (the minimum will be 081, which was released in January, 2006, by the time 2.6.19 is out, that will be about 10 months old.)
Andrew was unimpressed.
Among others, distributions scheduled to break with the 2.6.19 kernel include Ubuntu 6.06 LTS ("dapper") and the not-yet-released Slackware 11. So, unsurprisingly, it's not just Andrew who is displeased by this change; there is a definite chance that the whole set of patches will be withdrawn and rethought.
Greg asks a fundamental question, however:
"How long should the community have to care about a distro after the
creators of it have abandoned it?
" The traditional answer has been
"forever," but the new generation of "kernel in user space" tools is making
that promise harder to keep. Tools like udev are tightly tied to
the sysfs filesystem which, in turn, is a nearly direct representation of internal
kernel data structures. Sysfs functions, in some ways, like an internal
kernel API, but it is, in reality, a user-space interface. Keeping it
stable and avoiding compatibility problems with older user-space tools is a
difficult challenge, aggravated by the fact that the kernel developers are
still well within the process of figuring out how sysfs should really work.
At this year's Kernel Summit, there was some talk of folding tools like udev into the kernel code base and distributing them together. New kernels would always come with a version of udev that worked, and some of these compatibility problems would go away. There are limits, however, to how many tools can be packaged in this way, and, in any case, it can be hard to see this approach as anything other than a hack to avoid the hard problem of keeping such a wide and complex ABI stable.
This particular problem will likely be worked around, one way or another.
But it won't be the last such. If the kernel developers are going to
continue to promise that the user-space ABI will remain stable
indefinitely, they will have to get a handle on all aspects of that ABI -
not just the system calls. It will not be easy: modern systems require
complex communications between the user and kernel realms. But the kernel
developers have solved plenty of "not easy" problems so far; given the
increased attention being paid to ABI regressions, they will probably
figure this one out too.
Posted Aug 3, 2006 2:39 UTC (Thu)
by davecb (subscriber, #1574)
[Link] (2 responses)
A million years ago I was on a project hosted at HI-Multics.ARPA, and had to learn how Multics dealt with API and ABI changes.
To brutally oversimplify, one version-numbers the interfaces (well, structs, actually), and provides updaters and downdaters, so that the producer and consumer can change asynchronously with each other.
I've used this on Unix to avoid flag-days in a project that had a common main producer and a bunch of library-based back-end consumers. The main program author (Hi, Edsel!) could change the interface and add an adaptor function to main, and my consumers would automatically do the right thing. I could then change the consumers when I had time.
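A minimal C sketch of that version-numbered-struct scheme, just to make the idea concrete; the struct names, fields, and adaptor function are invented for illustration, not taken from any real project:

    /* Producer always fills in the newest layout; an "updater"/adaptor
     * converts it for consumers that still speak an older version
     * (a "downdater" would go the other way). */

    struct widget_v1 {
        int version;            /* always 1 */
        int size;
    };

    struct widget_v2 {
        int version;            /* always 2 */
        int size;
        int colour;             /* new field, added in v2 */
    };

    /* Adaptor: give a v1 consumer a view it understands. */
    static void widget_downdate_v2_to_v1(const struct widget_v2 *in,
                                          struct widget_v1 *out)
    {
        out->version = 1;
        out->size = in->size;   /* v1 consumers never see 'colour' */
    }

The producer and each consumer can then move to v2 on their own schedules, which is the flag-day avoidance being described.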
--davecb@spamcop.net
Posted Aug 3, 2006 7:13 UTC (Thu)
by AnswerGuy (guest, #1256)
[Link] (1 responses)
Sure, you can claim to deprecate the things and try to wean the users of each old version off. This can take some edge off the transition, but you're only delaying the inevitable day when you say: all of you have to fix this!
Ideally you can make good decisions about your data structures in advance. In those cases you can add stuff to them, and well-written code can get back an opaque data blob, use the parts it understands, and ignore the rest.
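A hedged sketch of what that can look like in practice: a size-prefixed record where a consumer only touches the fields it knows about. The struct and helper are hypothetical, not from udev or the kernel:

    #include <stddef.h>

    struct record {
        size_t size;    /* total size the producer filled in */
        int    id;      /* understood by every consumer */
        int    flags;   /* added later; old consumers never look this far */
    };

    static int record_flags(const struct record *r)
    {
        /* The field is only present if the producer's record is big enough. */
        if (r->size >= offsetof(struct record, flags) + sizeof(r->flags))
            return r->flags;
        return 0;       /* sensible default when talking to an old producer */
    }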
As a practical matter there are cases where doing that is just too ungainly and the old supported struct needs to be cleaned up or you go down the "ever growing mass of cruft" path.
Personally I think maintaining a corpus of "kernel coupled" user space code (a set of system level libraries, and perhaps some utilities like udev, lspci, the initramfs core, etc.) is a good idea. They can be maintained by the kernel developers (or a group who forms and stays close to kernel development), shipped with the kernel, and packaged up like the kernel and its modules. Distros can then make calls (and dlopen()s) down into /lib/$(uname -r)/lib/... to ensure that they get the versions of these that are coupled to the currently running kernel.
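To illustrate the dlopen() half of that suggestion, here is a minimal sketch that builds a path from the running kernel's release string and loads a matching helper library; the /lib/$(uname -r)/lib layout and the library name are hypothetical:

    #include <dlfcn.h>
    #include <stdio.h>
    #include <sys/utsname.h>

    void *open_kernel_coupled_lib(void)
    {
        struct utsname u;
        char path[256];

        if (uname(&u) != 0)
            return NULL;
        /* e.g. /lib/2.6.19/lib/libkhelper.so for a 2.6.19 kernel */
        snprintf(path, sizeof(path), "/lib/%s/lib/libkhelper.so", u.release);
        return dlopen(path, RTLD_NOW);  /* NULL if no matching version exists */
    }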
The VDSO mechanism might also be appropriate for some additional (though highly limited) purposes.
JimD
Posted Aug 3, 2006 14:34 UTC (Thu)
by davecb (subscriber, #1574)
[Link]
I quite understand: I said I was brutally oversimplifying (;-)).
Paul Stachour had an hour-long lecture on the subject, and that
just touched on the easy examples, in structs and RDBMSs.
In practice, the consumer only had a finite period in which to adopt the new version of the interface, which was reportedly only two change cycles, and only incompatible changes, such as completely rewriting the structures used, caused a forced change.
At that point, one was saying exactly that: all of you have to fix this! The advantage is that they didn't have to fix it on your schedule. Instead they fixed it on theirs. If they didn't want to fix it, of course, they could always go out of business (;-))
This ability to schedule disparate teams was the big value-add: you didn't have to convince everyone in the world to ship a patch next Tuesday.
Compatible changes, like adding a field to the end (normal practice), were commonly ignored by consumers who didn't use the field, but consumers who did could get it added without forcing all the other consumers to change. That was the common case, by the way: I'd ask Edsel for something for one consumer, he'd add it, my code would start using it, and the other consumers wouldn't care.
Finally, I quite agree that there should be a corpus of kernel-coupled sources, maintained either by the kernel community or, in the case of some hardware, by teams funded by those vendors. The latter practice is visible in the Samba team, where committed users of the software have staff working in or with the core team.
--davecb@spamcop.net
Posted Aug 3, 2006 5:10 UTC (Thu)
by xoddam (subscriber, #2322)
[Link] (8 responses)
Posted Aug 3, 2006 10:58 UTC (Thu)
by jschrod (subscriber, #1646)
[Link] (2 responses)
Btw, if the patch really breaks all 10-month-old distributions, it will break currently supported SUSE distributions, too. Yet another class of users who might have problems using (and thus testing!) current kernels.
Cheers, Joachim
Posted Aug 4, 2006 4:32 UTC (Fri)
by xoddam (subscriber, #2322)
[Link] (1 responses)
Posted Aug 4, 2006 8:41 UTC (Fri)
by jschrod (subscriber, #1646)
[Link]
The problem is not that there are ways to handle the change; the problem is that Greg makes the change without those ways being in place. Wasn't it Greg who, in his OLS talk, promoted the high quality of the kernel and wanted more users beyond the kernel developers themselves to test the latest mainline kernels? Well, then he has to care more about user requirements and user problems, as Andrew does.
Cheers, Joachim
Posted Aug 3, 2006 12:57 UTC (Thu)
by smitty_one_each (subscriber, #28989)
[Link] (2 responses)
Calibrate me if I'm off, but udev is Linux-specific, whereas HAL is a Gnome desktop component, and therefore goes anywhere X goes, no?
Posted Aug 3, 2006 14:57 UTC (Thu)
by Sho (subscriber, #8956)
[Link] (1 responses)
(b) HAL is desktop-agnostic. It's not part of the Gnome desktop platform. HAL is hosted on Freedesktop.org. Several of the projects that participate in the Freedesktop.org effort make active use of HAL, notably KDE and Gnome.
(c) HAL does not depend on the X Window System.
Couple of links:
Posted Aug 4, 2006 17:36 UTC (Fri)
by bfeeney (guest, #6855)
[Link]
Posted Aug 3, 2006 21:19 UTC (Thu)
by wilck (guest, #29844)
[Link] (1 responses)
There is a huge difference between sysfs and /dev/kmem.
sysfs is globally visible and has a lot of reasonable uses. For example, there are lots of tunables in sysfs that are outside the scope of udev and HAL but useful for other system applications. /dev/kmem, on the other hand, is usually only accessible to root, and useful only for kernel debugging.
Perhaps the opposite is true - perhaps the 'stable API' is a more complex subject than Greg and his followers pretend? Perhaps it is not such total nonsense, after all? The discussion between Greg and Andrew is interesting in that respect: it appears that Andrew considers user space breakage more 'utterly unacceptable' than the restrictions on development progress caused by the stable sysfs API.
Posted Aug 4, 2006 4:33 UTC (Fri)
by xoddam (subscriber, #2322)
[Link]
Posted Aug 3, 2006 5:22 UTC (Thu)
by jesper (guest, #23316)
[Link] (2 responses)
Just make udev version-dependent, like the kernel modules:
ls /lib/modules/`uname -r`
There could be a:
/lib/udev/`uname -r`
And this problem would be long gone forever.
Jesper
Posted Aug 3, 2006 16:00 UTC (Thu)
by vmole (guest, #111)
[Link] (1 responses)
Steve
Posted Aug 4, 2006 4:39 UTC (Fri)
by xoddam (subscriber, #2322)
[Link]
Posted Aug 3, 2006 11:19 UTC (Thu)
by nim-nim (subscriber, #34454)
[Link] (1 responses)
I'm amazed the lwn editors let this FUD pass - Fedora Core 3 is not an "abandoned" distro; it's still supported by the Fedora Legacy project, and will probably be until Fedora Core 7 Test 2 is released.
"New kernels would always come with a version of udev that worked, and some of these compatibility problems would go away"
That would make multibooting between old safe kernels and new experimental ones incredibly difficult.
Posted Aug 3, 2006 14:52 UTC (Thu)
by incase (guest, #37115)
[Link]
> That would make multibooting between old safe kernels and new experimental ones incredibly difficult.
Why? udev could use a versioned main part, i.e. the "udev" script would call some /lib/udev/`uname -r`/udev-main script/binary. I don't see much of a problem there.
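A minimal C sketch of such a dispatching wrapper (the /lib/udev/<release>/udev-main layout is this comment's suggestion, not an existing convention): it builds the path from the running kernel's release and execs the matching binary.

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/utsname.h>

    int main(int argc, char *argv[])
    {
        struct utsname u;
        char path[256];

        if (uname(&u) != 0)
            return 1;
        /* e.g. /lib/udev/2.6.19/udev-main when running a 2.6.19 kernel */
        snprintf(path, sizeof(path), "/lib/udev/%s/udev-main", u.release);
        execv(path, argv);      /* only returns on failure */
        perror(path);
        return 1;
    }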
However, in one sense, you are right: This would only work for future kernels. Booting - for example - 2.6.8 like this won't work because there won't be any /lib/udev/2.6.8/udev-main to accompany it. Unless someone created some sort of 'backport' to achieve that.
Regards,
Posted Aug 3, 2006 14:48 UTC (Thu)
by yodermk (subscriber, #3803)
[Link] (1 responses)
Seems to me like a distributor or community supporter (or whoever) should, if they want this new kernel to work on an old distro, package it along with any dependencies.
This "problem" doesn't seem like a good reason to hold back progress in the kernel.
Posted Aug 3, 2006 21:24 UTC (Thu)
by wilck (guest, #29844)
[Link]
Posted Aug 3, 2006 16:23 UTC (Thu)
by iabervon (subscriber, #722)
[Link]
Posted Aug 3, 2006 19:29 UTC (Thu)
by jbailey (guest, #16890)
[Link] (3 responses)
If an end-user is using a distro, then they should use what the distro provides. Anything else is just completely unsupportable.
Tks,
Posted Aug 3, 2006 20:54 UTC (Thu)
by wilck (guest, #29844)
[Link] (1 responses)
Posted Aug 7, 2006 12:01 UTC (Mon)
by malor (guest, #2973)
[Link]
They have declared that the distros are the ones that have to make it work. Kernel.org kernels are officially no longer for end-user consumption. So, of course, people just run distro kernels.
I used to roll my own all the time, but I've been forced into that corner along with everyone else.
If they took responsibility for making stable kernels STABLE (which means they need to support them longer than two bloody months), they'd get many more testers.
Their decision to just handwave and expect the distros to actually make the code work means that only the distros do any testing.
Fundamentally, Linux is moving too fast. They are blaming the users for not testing enough, instead of themselves for shoveling in reams of untested code before the last batch has even started to settle.
It's only the fact that they're such brilliant coders that's saving them. And as good as they are, they're still having major problems. I guaran-damn-tee you that we're gonna be digging up severe security flaws for YEARS from this high-speed, low-contemplation environment.
There were 27 releases of 2.6.16. If they found that many problems that fast, just think of how many *subtle* security issues must be lurking.
Posted Aug 5, 2006 1:15 UTC (Sat)
by dlang (guest, #313)
[Link]
It's not always possible to have a working (or at least reasonably performing) system with the 'stock' distro kernel.
Yes, this makes support harder. Charge for the time when you run into such problems; don't just cry 'Unsupported' and scurry away leaving the user in the lurch.
Posted Aug 6, 2006 20:05 UTC (Sun)
by mattmelton (guest, #34842)
[Link]
Embedded Perl, for example, requires you to call perl_alloc() before you do a printf, because printf is cobbled to a perl-happy one.
Nothing stops me from doing init_libraryv1.2() or init_libraryv2.3b() when initializing different library versions that are tightly coupled.
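A hedged illustration of that version-specific-init pattern; the function names, version numbers, and build flag are invented, not any real SDK's API:

    #include <stdio.h>

    static int init_library_v1(void)
    {
        printf("initialised against the v1 interface\n");
        return 0;
    }

    static int init_library_v2(int flags)
    {
        printf("initialised against the v2 interface, flags=%d\n", flags);
        return 0;
    }

    int main(void)
    {
    #ifdef USE_LIBRARY_V2
        return init_library_v2(0);   /* consumer built against the newer SDK */
    #else
        return init_library_v1();    /* older consumers keep working */
    #endif
    }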
From a userspace point of view, of course, there are no simple #define hacks to switch subsystem versions. Userspace does not, and should not, care about versions... but here is the crux of the issue: it should.
sysfs is a tightly coupled subsystem, and tightly coupled systems must either be maintained together or comprehensively split. Why sysfs does not have some kind of versioning system already is something of a worry - maybe the whole focus on exciting exports to userspace blurred the coupling line a little.
I'd like, but I know I won't get, a nice numbered interface - /sysfs/1.2/blah maybe? A link could provide the current one, /sysfs/current/blah etc.
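A small sketch of how a tool might consume such a versioned layout, preferring the interface version it was written for and falling back to the "current" link; the paths and version number are purely hypothetical, since no such layout exists in sysfs today:

    #include <unistd.h>

    static const char *sysfs_root(void)
    {
        /* Prefer the exact interface version this tool was written against. */
        if (access("/sysfs/1.2", F_OK) == 0)
            return "/sysfs/1.2";
        /* Otherwise follow the "current" link and hope nothing moved. */
        return "/sysfs/current";
    }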
Developers are too fixed, almost obsessive, on code functionality, and not enough on legacy. LEGACY IS GOOD. Legacy code is meant to be left to rot. Legacy code does not need to be maintained - the entire point of code being demoted to a legacy level is that it should not be maintained. Unmaintained code is not a problem if it is superseded.
(Legacy code does need some kind of eviction management, however, but that is another topic.)
I don't care if there's an incompatibility in 10 versions of a subsystem for a new product. A new product should be made for the older subsystems, not for a bleeding-edge one. People who write for and use bleeding-edge software tend to really miss that point.
(security fixes aside... of course)
I know my point is very short-sighted, and I could easily side the other way had I not written this. But the truth of the matter is that there is no easy way with something that evolves this fast.
Matt
Posted Aug 9, 2006 2:51 UTC (Wed)
by bluefoxicy (guest, #25366)
[Link] (1 responses)
Who CARES? Dapper is not going to one day suddenly start shipping post-2.6.15 kernels; and Slackware 11 is not yet released so they can upgrade udev before they do that.
The only people who are going to break are those who upgrade their kernel by building their own; and if you can roll your own kernel, you can roll your own udev. If you're a distributor and you want to upgrade the kernel like that, upgrade udev with it. Either way, you're only going to break if you make a change to a major base component of your system (the kernel); if you're making such a change, make the other one (udev) too.
Posted Aug 11, 2006 9:43 UTC (Fri)
by kreutzm (guest, #4700)
[Link]
And remember the *try out* part. Let's say you maintain a cluster of machines where you don't have the throughput you'd like. So you prepare a bleeding-edge kernel, install it on a few nodes, take them offline and reboot. Next you ask a user to test drive this kernel. Once the testing is done (or your boss requires all nodes again) you will have to go back to the "old" kernel for production use. Of course, if the tests were fine, then you could upgrade all nodes, but here you'll have to wait for the maintenance window to come!
Another problem is that sometimes external kernel patches are required (e.g. grsecurity), so the latest udev might be "too new", which makes looking for the right version "fun".
I don't mind upgrading udev, as long as old kernels continue to work. And as this is required (also for other reasons not listed here), I refuse to use udev, however nice it may be. I suppose at some point I will have to reconsider; hopefully these problems will have been solved by then.
I just wonder about the upgrade path from Debian Sarge to Debian Etch for people using 2.6.8 from Sarge. I hope the udev system has enough magic in it by now to make this a smooth upgrade (reading bug logs, it appears that this has been a bumpy road in Testing).
ABI mutation (was New kernels and old distributions)
The problem with that approach (which is sorta similar to the model used by many UNIX vendors, and Microsoft) is that we end up with an ever-growing mass of code that supports every old, deprecated, obsolete, and broken struct you EVER supported.
ABI mutation (was New kernels and old distributions)
Keeping up with the Kroah-Hartmans (who upgrade without notice)
Would Andrew Morton be complaining if a userspace program broke that
depended on the layout of kernel objects in /dev/kmem? The whole point of /sys (as opposed to the now-somewhat-deprecated /proc) is that it reflects internal kernel data structures, which are subject to change without notice, according to a policy which has been stated most vociferously by Greg K-H himself.
This means that Linus' policy against messing with userspace interfaces
cannot meaningfully apply to sysfs, any more than to kmem. But since
sysfs has only a couple of meaningful userspace clients, namely the HAL
library and the udev daemon, which are updated more-or-less in parallel
with the kernel, the stable interface boundary moves to the library API
and the 'far side' of the daemon: its configuration files and the device
nodes created in /dev are what must remain consistent.
Whether or not udev and HAL are maintained as part of the kernel source
tree (as recently proposed), the solution to this problem is simple: if
you want to run a new kernel, update userspace packages that depend on
kernel internals correspondingly. More recent udev daemons ought to
continue to understand older kernels, so users need only keep the latest
one on their systems: this is the right place for translations as
proposed by a sibling post. But the converse obligation, that the latest
kernel should support old versions of udev and HAL, imposes *exactly* the
'stable API nonsense' that the kernel developers have declared utterly
unacceptable.
The same issue has come up before -- eg. with insmod -- and the solution
has often been not only to pull the implementation into the kernel tree
so it can be maintained properly, but actually to do the work in
kernelspace, despite an earlier preference that it be done by a user
process. I suspect there are several other pieces of code which might be
better implemented as userspace utilities maintained alongside the
kernel, rather than *inside* it.
> There are limits, however, to how many tools can be packaged
> in this way
What limit, exactly, is there on userspace packages maintained in sync
with the kernel tree? The kernel tree is already huge and growing
rapidly. The advocated solution for any project (usually an out-of-tree
driver) which begs for a stable kernel API is to bring it in-tree. Why
should that rule be any different for those userspace tools that are
necessarily coupled to kernel internals? They're usually maintained by
kernel developers anyway; wouldn't it smooth the workflow to have it all
in the same place?
Projects such as KDE and GNOME manage to maintain numerous libraries and
applications in a combined source tree and to release them together. I
can't see why AM thinks people running Fedora Core 3 should want to
upgrade to an arbitrarily recent kernel without also updating a couple of
dependencies. Does he likewise think the users should be able to upgrade
GNOME without updating any supporting packages?
The bottom line is that sysfs is a window into the interior of the
kernel, which maintains the right to change without notice. Userspace in
general cannot depend on it. udev and HAL *are* the interface and
therefore must remain current with the kernel.
Keeping up with the Kroah-Hartmans (who upgrade without notice)
Whether or not udev and HAL are maintained as part of the kernel source
tree (as recently proposed), the solution to this problem is simple: if
you want to run a new kernel, update userspace packages that depend on
kernel internals correspondingly.
If you had read the linked full email from Andrew, you would have seen:
This stuff breaks my FC3 test box and there is, afaict, no clear
way for users to upgrade udev to unbreak it.
(emphasis mine).
Thus, your solution doesn't seem to work.
Keeping up with the Kroah-Hartmans (who upgrade without notice)
> If you had read the linked full email from Andrew,
Actually I did read it, but I missed this detail.
> you would have seen:
> > This stuff breaks my FC3 test box and there is, afaict,
> > no clear way for users to upgrade udev to unbreak it.
> (emphasis mine). Thus, your solution doesn't seem to work.
You are correct at the time of writing. I'd consider this a bug in udev
rather than the kernel. A workable solution would be sticking `uname -r`
in the path to the udev daemon binary, as suggested below and as is
currently done for modules. This would be a reasonable change for
updated kernel and udev packages to make for older distributions. People
who build their own kernels are capable of hacking such a change by hand.
I'm with you that a solution that would require a kernel and udev upgrade at the same time would be workable. E.g., I run a self-compiled kernel on my laptop, to get suspend2, and I would surely find it acceptable. But my SUSE 9.2 is more than 10 months old and is therefore supposed to be hit by the change introduced by Greg -- and that I don't find acceptable without a clear upgrade path for udev... :-) (I have to say that I didn't test the patch, but want to wait until the dust of this discussion has settled.)
Keeping up with the Kroah-Hartmans (who upgrade without notice)
> udev and HAL are maintained as part of the kernel source
(a) HAL stands for 'Hardware Abstraction Layer'. According to its website, HAL currently depends on Linux 2.6.15 or later, as well as on udev and D-Bus. While the idea behind an abstraction layer is obviously in line with trying to get it to work on multiple backend platforms, in practice, HAL is presently tied to Linux.
HAL website: http://www.freedesktop.org/wiki/Software_2fhal
HAL 0.5.8 Specification: http://webcvs.freedesktop.org/hal/hal/doc/spec/hal-spec.h...
Keeping up with the Kroah-Hartmans (who upgrade without notice)
Actually there are ongoing efforts to port HAL to FreeBSD and to
OpenSolaris (the latter being headed by the Nexenta people I believe).
It's unlikely to remain a Linux-only thing for very long. In the meantime, KDE 4
will feature the Solid library to handle hot-plugging: it'll use HAL where
available, and system specific code otherwise. I don't think Gnome has any
specific plans as yet.
Keeping up with the Kroah-Hartmans (who upgrade without notice)
This means that Linus' policy against messing with userspace interfaces cannot meaningfully apply to sysfs, any more than to kmem.
But the converse obligation, that the latest
kernel should support old versions of udev and HAL, imposes *exactly* the
'stable API nonsense' that the kernel developers have declared utterly
unacceptable.
Keeping up with the Kroah-Hartmans (who upgrade without notice)
> There is a huge difference between sysfs and /dev/kmem.
> sysfs is globally visible and has a lot of reasonable uses.
/dev/kmem used to have reasonable uses too, like insmod. Now insmod is
done inside the kernel, and there are no reasonable uses left for it
besides, as you say, debugging (but it is very limited even for that
purpose).
> Perhaps the opposite is true - perhaps the 'stable API' is a
> more complex subject than Greg and his followers pretend?
My point exactly -- on the one hand the 'no stable API' party wants to be
able to change internals at will, maintaining a stable interface to
userspace only, and on the other hand the primary spokesman for this
party exposes internals to userspace in such a way that changing them
*will* break userspace.
I'm suggesting that the real boundary of the stable API is on the far
side of userspace utilities like udev and modutils which *must* know
about kernel internals.
It's nice to *claim* a firm userspace/kernel boundary line, but
maintaining that boundary will mean freezing interfaces to userspace
technologies like hotplug in concrete from the very beginning. This is
not reasonable. Hotplug is *required* for suspend; suspend was wrong;
hotplug must change.
The only alternative is to do *all* interesting hardware-related work
in-kernel. Down with daemons!
New kernels and old distributions
I really don't get why this problem isn't fixed the same way as these problems usually are fixed.
And the firmwares:
ls /lib/firmware/`uname -r`
/lib/udev/`uname -r`
One problem that this doesn't solve is that udev configuration is also potentially version-dependent. Spare me from /etc/udev/`uname -r`.
New kernels and old distributions
> Spare me from /etc/udev/`uname -r`
Someone has to draw the line somewhere. It used to be 'userspace', but
that is blurred by kernel-aware userspace utilities and libraries.
I think config file parsers are a great place to maintain backwards
compatibility (sendmail.cf notwithstanding).
"How long should the community have to care about a distro after the creators of it have abandoned it?"New kernels and old distributions
(I must admit being faintly amused by the yet-to-be-released Slackware part)
>> "New kernels would always come with a version of udev that worked, andNew kernels and old distributions
>> some of these compatibility problems would go away"
> ones incredibly difficult.
Sven
Shouldn't this be a distributor problem? Should it even be *supported* or even expected to work when someone compiles a new kernel and sticks it in an old distro? The days of compiling our own kernels are, for almost all users of almost all distros (Gentoo being the notable exception), long gone.
New kernels and old distributions
What's progress worth if nobody uses these kernels? It's already a problem that kernels start to get tested by a broader user base only after distributions start packaging them. Do you want to aggravate this further?
New kernels and old distributions
It's worth mentioning that, IIRC, what old udev lacks is support for certain things being symlinks. So there's a certain extent to which udev needs to be upgraded to a future-proof version, at which point it can be supported well when the kernel changes further.
New kernels and old distributions
While the kernel developers seem to consider it an interesting thing to have users upgrade kernels without touching anything else in the distro, I don't think many distros share their enthusiasm. ;) Certainly in cases where I've done Ubuntu and Debian troubleshooting, a first response of mine is often to ask them to undo local changes like custom kernels.
New kernels and old distributions
Jeff Bailey
If all end-users followed our advice, the number of people deploying (and therefore testing) kernel.org kernels would be even smaller than it currently is. That's one thing Andrew doesn't want to happen.
New kernels and old distributions
The way to get more people to test is by making kernel.org kernels actually stable enough for use.
New kernels and old distributions
Frankly, this would mean that a lot of distros would be completely unusable for many people.
New kernels and old distributions
From a programming point of view, when you link to third-party libraries of different versions but one SDK (Microsoft's DirectX for example), you call a version-specific init function.
New kernels and old distributions
> Among others, distributions scheduled to break with the 2.6.19 kernel
> include Ubuntu 6.06 LTS ("dapper") and the not-yet-released Slackware 11.
If I am a distributor, fine, I can do the QA, roll all things and get going (but see below). But I used to be a system administrator, taking care of several archs (in the 2.2/2.4 days). So when a new computer gets bought, some components might not work (newer drivers required). So I try out a more recent kernel. Emphasis on *try out*. Back in those days, this was no problem. Build the kernel, reboot, test, reboot back if necessary, redo. Now with udev loudly complaining (e.g. if I try to go from 2.6.8 in Debian Sarge to a recent 2.6) this is a hell more complicated, especially the "going back" part.New kernels and old distributions