
real world?

Posted Aug 1, 2010 7:03 UTC (Sun) by mingo (subscriber, #31122)
In reply to: real world? by nix
Parent article: Realtime Linux: academia v. reality

> This depends very much on the position of that bit of userspace in the dependency chain, for me.

It largely depends on how serious the effects of a bad upgrade are and how hard it is to go back to the old component.

The kernel is unique there: there can be multiple kernel packages installed at once, and switching between them is as easy as selecting a different kernel on bootup.
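For instance, on an RPM-based distro the coexistence looks something like this (package names, versions and the grub layout here are illustrative, not from any particular box):

    $ rpm -q kernel                      # several kernels installed side by side
    kernel-2.6.33.5-124.fc13.x86_64
    kernel-2.6.34.1-163.fc13.x86_64
    $ grep ^title /boot/grub/grub.conf   # each one gets its own boot menu entry
    title Fedora (2.6.34.1-163.fc13.x86_64)
    title Fedora (2.6.33.5-124.fc13.x86_64)

If the new kernel misbehaves, you pick the old entry in the boot menu and you are back where you started.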

With glibc (or with any other user-space library) there is no such multi-version capability: if the glibc upgrade went wrong and even /bin/ls is segfaulting then it's game over and you are on to a difficult and non-standard system recovery job.

So yes, I agree with the grandparent, and I too see in the real world that the kernel is one of the easiest components to upgrade and one of the easiest to downgrade. It is also very often dependency-less. (There is a small halo of user-space tools like mkinitrd, but nothing that affects many apps.)

Try to upgrade or downgrade Xorg or glibc from a rescue image - I've yet to see a distro that makes that easy.
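To illustrate the kind of manual surgery a broken glibc demands, here is a rough sketch of the usual rescue-image dance (the device name, mount point and package version are hypothetical):

    # boot the rescue image, then:
    mount /dev/sda2 /mnt/sysimage        # mount the broken root filesystem
    rpm --root /mnt/sysimage -Uvh --force \
        glibc-2.11-1.x86_64.rpm          # push a known-good glibc back in, using
                                         # the rescue system's own rpm binary

Every step of that is outside the package manager's normal workflow, which is exactly the problem.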

(The only inhibitors to kernel upgrades are environments where rebooting is not allowed: large, shared systems. Those are generally difficult, constrained environments, and you cannot do many forms of bleeding-edge development of infrastructure packages in them.)



real world?

Posted Aug 8, 2010 12:33 UTC (Sun) by nix (subscriber, #2304)

> With glibc (or with any other user-space library) there is no such multi-version capability: if the glibc upgrade went wrong and even /bin/ls is segfaulting then it's game over and you are on to a difficult and non-standard system recovery job.
Though a copy of sash helps immensely there.
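(sash is statically linked and carries builtin versions of the core utilities - spelled with a leading dash, if memory serves - so it keeps working when every dynamically linked binary is dead. Something like this, with the file names purely hypothetical:

    $ sash
    > -ls /lib                                # builtin ls: needs no shared libraries
    > -cp /root/libc.so.6.good /lib/libc.so.6

That is usually enough to limp back to a working rpm or dpkg.)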

Xorg is pretty easy to upgrade and downgrade, actually, because its shared-library versioning is so strictly maintained. If you downgrade a really long way you might get burnt by the Xi1 -> Xi2 transition or the loss of the Render extension, but that's about all.

The kernel is particularly easy to upgrade *if you run the system and can reboot it on demand* (which is a good thing, given the number of security holes it has!), but if either of those conditions fails it is nearly impossible to upgrade. (Let's leave aside the huge number of people running RHEL systems who think their support contract forbids them from upgrading the kernel, even though it doesn't...)

real world?

Posted Aug 8, 2010 20:50 UTC (Sun) by mingo (subscriber, #31122)

> Xorg is pretty easy to upgrade and downgrade, actually, because its shared-library versioning is so strictly maintained. If you downgrade a really long way you might get burnt by the Xi1 -> Xi2 transition or the loss of the Render extension, but that's about all.

I guess we are getting wildly off-topic, but my own personal experience is very different: on my main desktop I run bleeding-edge everything (kernel, Xorg, glibc, etc.), and just this year I have been through three difficult Xorg breakages that required identifying some prior version of the Xorg and libdrm packages and unrolling the other dependencies.

One of them was a 'black screen on login' kind of breakage. Xorg breakages typically take several hours to resolve, because the pre-breakage packages have to be identified and downloaded and the dependency chain figured out - all manually.
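The manual fix, once the guilty versions are finally identified, ends up being something like this (assuming a yum-based distro; the package set is illustrative):

    # downgrade the whole interdependent stack in one transaction:
    yum downgrade xorg-x11-server-Xorg libdrm mesa-dri-drivers

The hard part is everything before that command: figuring out which versions were good and where to get them, on a box whose display no longer works.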

Current Linux distributions are utterly incapable of a clean 'go back in time on breakage' - done automatically, and available even on a system that the faulty package has rendered unusable. That is a big handicap for bleeding-edge testers of any multi-package infrastructure component such as Xorg.

OTOH, single-package, multiple-installed-versions components (such as the kernel) are painless: I don't remember when I last had a kernel breakage that prevented me from using my desktop - and even then it took me no more than five minutes to resolve: reboot, select the previous kernel, there you go.

glibc is _mostly_ painless for me, because breakages are rare - it is a very well-tested project. But when glibc does break, it is horrible to resolve, precisely because multiple versions cannot be installed: everything needs glibc. My last such experience was last year, and it took several hours of rescue-image surgery on the box to avoid a full reinstall - all of it entirely outside the regular package management system.

Plain users, and even bleeding-edge developers, generally don't have the experience or the time to resolve such problems, and as a result we have very, very few bleeding-edge testers for any big infrastructure package but the kernel.

Thanks,

Ingo

real world?

Posted Aug 8, 2010 21:35 UTC (Sun) by nix (subscriber, #2304)

Oh, gods, yes - the libdrm/Mesa/driver-version combination nightmare is a tricky one I'd forgotten about. Of course, that itself is sort of kernel-connected, because the only reason most of that stuff is revving so fast is changes on the kernel side :)

The go-back-in-time stuff is probably best supported by the Nix distro (no relation); the downside is that it requires an insane amount of recompilation whenever anything changes, because it has no understanding of ABI-preservation conventions (it assumes that all users of a library need to be rebuilt whenever the library is).
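For the curious, the rollback flow there is roughly the following (from memory, so treat it as a sketch rather than gospel):

    $ nix-env --rollback                 # flip the profile back one generation
    $ nix-env --list-generations         # older generations stay on disk

which is exactly the 'go back in time on breakage' operation lamented above - at the price of that recompilation cost.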

