real world?

Posted Jul 27, 2010 18:33 UTC (Tue) by mingo (subscriber, #31122)
In reply to: real world? by deater
Parent article: Realtime Linux: academia v. reality

ummm... what "several hundred megabytes" statement?

Here's a link to your earlier statement about this topic from not so long ago. Quote:

> It would also be nice to be able to build perf without having to download the entire kernel tree, I often don't have the space or the bandwidth for hundreds of megabytes of kernel source when I just want to quick build perf on a new machine.

(Plus your claim that there is no "perf only" mailing list is wrong as well, there's one at linux-perf-users@vger.kernel.org.)

Thanks,

Ingo



real world?

Posted Jul 28, 2010 2:46 UTC (Wed) by deater (subscriber, #11746)

Ummm, who is being off-topic now? Though I'm glad to hear that some of the issues I raised four months ago have finally been addressed. As you probably noticed, I've moved on to different issues.

I wouldn't mind this article if it phrased things as academia being very different from kernel development. What I do object to is the idea that kernel development is the real-world common case. I'm pretty sure that for most people the real world means being stuck in userspace, often without the ability to do things as root. As a user, building a custom library in my home directory and linking my tools against it is easy; getting the sysadmins to replace the kernel is hard.
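As a concrete sketch of that no-root workflow (the library name "libfoo" and all paths here are placeholders, not anything from the thread):

```shell
# Build and install a library privately under $HOME -- no root required.
./configure --prefix="$HOME/opt/libfoo"
make && make install

# Link a tool against the private copy, and bake the search path into
# the binary with -rpath so it is found at run time without touching
# system-wide paths or asking a sysadmin for anything:
gcc -o mytool mytool.c \
    -I"$HOME/opt/libfoo/include" \
    -L"$HOME/opt/libfoo/lib" -lfoo \
    -Wl,-rpath,"$HOME/opt/libfoo/lib"
```

Because everything lives under the user's home directory, upgrading or rolling back the library affects nobody else on the machine, which is exactly the contrast deater draws with a kernel replacement.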

I brought up the recent perf-events issue as there's a large overlap between the rt-linux developers and the perf developers, and the whole idea of what constitutes real-world to perf developers came up recently in this lkml thread.

real world?

Posted Jul 28, 2010 3:18 UTC (Wed) by foom (subscriber, #14868)

FWIW, in my real world, it's usually easier to push out a new kernel than to upgrade userspace. We've got some Fedora Core 3 boxes running 2.6.26 kernels, for instance.

real world?

Posted Jul 28, 2010 15:52 UTC (Wed) by nix (subscriber, #2304)

This depends very much on the position of that bit of userspace in the dependency chain, for me.

Upgrade glibc? Much more hair-raising, and much more rarely done, than a kernel upgrade. Upgrade some heavily-used library like libpng to an incompatible release? Also uncommon. Upgrade a userspace performance counter library? That isn't going to be the most widely-used thing on earth, so upgrading it should be easy. Plus, as deater points out, random users can do this and keep it entirely out of the way of other users. Random users cannot upgrade kernels, no matter what they do. Only the machine's sysadmins can do that, and on production systems they often refuse unless the need is utterly, horrifically critical.

real world?

Posted Aug 1, 2010 7:03 UTC (Sun) by mingo (subscriber, #31122)

> This depends very much on the position of that bit of userspace in the dependency chain, for me.

It largely depends on how serious the effects of a bad upgrade are and how hard it is to go back to the old component.

The kernel is unique there: there can be multiple kernel packages installed at once, and switching between them is as easy as selecting a different kernel on bootup.
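On RPM-based distributions, for instance, that multi-version property is visible directly in the package manager (the version strings below are made up for illustration):

```shell
# The kernel is one of the few packages allowed to have several versions
# installed side by side; each installed version gets its own boot entry.
rpm -q kernel
# e.g.  kernel-2.6.33.6-147.fc13
#       kernel-2.6.34.1-166.fc13

# Reverting after a bad upgrade is just booting the older entry from the
# boot menu, or preselecting it ahead of time (needs root; the version
# here is hypothetical):
grubby --set-default=/boot/vmlinuz-2.6.33.6-147.fc13
```

No other core component gets this treatment: installing glibc or Xorg replaces the old files in place, which is why rolling those back means leaving the package manager behind.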

With glibc (or with any other user-space library) there is no such multi-version capability: if the glibc upgrade went wrong and even /bin/ls is segfaulting then it's game over and you are on to a difficult and non-standard system recovery job.

So yes, i agree with the grandparent: i too see, in the real world, that the kernel is one of the easiest components to upgrade and one of the easiest to downgrade. It's also very often dependency-less. (There's a small halo of user-space tools like mkinitrd, but nothing that affects many apps.)

Try to upgrade/downgrade Xorg or glibc from a rescue image. I've yet to see a distro that allows that in an easy way.

(The only inhibitor to kernel upgrades is environments where rebooting is not allowed: large, shared systems. Those are generally difficult, constrained environments, and you cannot do many forms of bleeding-edge development of infrastructure packages there.)

real world?

Posted Aug 8, 2010 12:33 UTC (Sun) by nix (subscriber, #2304)

> With glibc (or with any other user-space library) there is no such multi-version capability: if the glibc upgrade went wrong and even /bin/ls is segfaulting then it's game over and you are on to a difficult and non-standard system recovery job.

Though a copy of sash helps immensely there.

xorg is pretty easy to upgrade and downgrade, actually, because its shared library versioning is so strictly maintained. If you downgrade a really long way you might get burnt by the Xi1 -> Xi2 transition or the loss of the Render extension, but that's about all.

The kernel is particularly easy to upgrade *if you run the system and can reboot it on demand* (which is a good thing, given the number of security holes it has!), but if either of those conditions does not hold it is nearly impossible to upgrade. (Let's leave out of this the huge number of people running RHEL systems who think they're forbidden from upgrading the kernel by their support contract, even though they aren't...)

real world?

Posted Aug 8, 2010 20:50 UTC (Sun) by mingo (subscriber, #31122)

> xorg is pretty easy to upgrade and downgrade actually because its shared library versioning is so strictly maintained. If you downgrade a really long way you might get burnt by the Xi1 -> Xi2 transition or the loss of the Render extension, but that's about all.

I guess we are getting wildly off-topic, but my own personal experience is very different: on my main desktop i run bleeding-edge everything (kernel, Xorg, glibc, etc.) and just this year i've been through 3 difficult Xorg breakages, each of which required identifying some prior version of the Xorg and libdrm packages and unrolling other dependencies.

One of them was a 'black screen on login' kind of breakage. Xorg breakages typically take several hours to resolve because pre-breakage packages have to be identified, downloaded and the dependency chain figured out - all manually.

Current Linux distributions are utterly incapable of a clean 'go back in time on breakage, do it automatically, and allow it even on a system that was rendered unusable by the faulty package'. This is a big handicap for bleeding-edge testers of any multi-package infrastructure component such as Xorg.

OTOH single-package, multiple-installed-versions components (such as the kernel) are painless: i don't remember when i last had a kernel breakage that prevented me from using my desktop - and even then it took me no more than 5 minutes to resolve via: 'reboot, select previous kernel, there you go'.

glibc is _mostly_ painless for me, because breakages are rare - it's a very well-tested project. But if glibc breaks it's horrible to resolve due to not having multiple versions installed: everything needs glibc. My last such experience was last year, and it required several hours of rescue image surgery on the box to prevent a full reinstall - and all totally outside the regular package management system.

Plain users and even bleeding-edge developers generally don't have the experience or time to resolve such problems, and as a result we have very, very few bleeding-edge testers for most big infrastructure packages other than the kernel.

Thanks,

Ingo

real world?

Posted Aug 8, 2010 21:35 UTC (Sun) by nix (subscriber, #2304)

Oh, gods, yes, the libdrm/Mesa/driver-version combination nightmare is a tricky one I'd forgotten about. Of course that itself is sort of kernel-connected, because the only reason most of that stuff is revving so fast is because of changes on the kernel side :)

The go-back-in-time stuff is probably best supported by the Nix distro (no relation); the downside is that it requires an insane amount of recompilation whenever anything changes, because it has no understanding of ABI-preservation conventions (it assumes all users of a library must be rebuilt whenever the library is).

real world?

Posted Aug 8, 2010 12:38 UTC (Sun) by nix (subscriber, #2304)

Let me elaborate on 'utterly horrifically critical' here. 'fork() fails because the machine is a 64-bit box with 64GB of RAM running a 32-bit kernel that has run out of low memory' is not sufficiently critical: the database is still running, after a fashion, and that's what matters.

They're scared of going to 64-bit kernels no matter what the benefits because that's not what they currently have installed so 'stuff might break' (as if 'cannot fork()' is not breakage): 32->64, the kernels simply must be completely different, right? Have to retest everything.

This attitude is not rare among people who run big production Oracle systems without really knowing what they're doing, because they're Oracle DBAs at heart, who learnt to handle Solaris and have now been forced onto Linux by the lower costs. Yes, you'd hope that everyone running big-iron databases backing huge financial operations, with billions riding on them, would have a sysadmin who understood the machine a bit as well as DBAs who understood the database: you'd be wrong.

real world?

Posted Jul 28, 2010 20:09 UTC (Wed) by tglx (subscriber, #31301)

> I brought up the recent perf-events issue as there's a large overlap between the rt-linux developers and the perf developers, and the whole idea of what constitutes real-world to perf developers came up recently in this lkml thread.

This LKML thread says in your own words:

"...This is why event selection needs to be in user-space... it could be fixed instantly there, but the way things are done now it will take months to years for this fix to filter down to those trying to use perf counters..."

That's complete and utter bullshit and you know that.

The interface allows you to specify raw event codes, so you can fix your problem entirely in user space without writing a single line of code.
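For context, the raw event codes tglx refers to are perf's rNNN syntax, which bypasses the kernel's symbolic event tables entirely. The specific encoding below is an illustrative Intel example, not something from the thread; the workload name is a placeholder:

```shell
# Count a raw hardware event by its encoding rather than by name.
# On many Intel CPUs 0x412e (umask 0x41, event select 0x2e) encodes
# LONGEST_LAT_CACHE.MISS -- check your CPU's manual for the event
# you actually care about.
perf stat -e r412e -- sleep 1

# Symbolic and raw event selection can be mixed in a single run:
perf stat -e cycles,r412e -- ./myworkload   # ./myworkload is hypothetical
```

Since the raw code goes straight to the hardware, a missing or wrong entry in the kernel's event-name tables does not block measurement, which is the substance of tglx's objection.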

Stop spreading FUD.

Thanks,

tglx


Copyright © 2017, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds