Patch backports

Posted Feb 22, 2019 15:44 UTC (Fri) by nix (subscriber, #2304)
In reply to: Patch backports by Spack
Parent article: The case of the supersized shebang

If that were the way it worked, then even serious bugs wouldn't get fixed more often than once every three months. That would render the stable kernels almost pointless, since you could always just upgrade to the also-stable Linus kernel the fixes came out of instead.



Patch backports

Posted Feb 23, 2019 21:32 UTC (Sat) by NAR (subscriber, #1313) [Link] (3 responses)

I guess people use -stable kernels because they explicitly do not want the latest version (due to fears about regressions, being locked to a version number, etc.). I don't know whether they want a new -stable kernel every 3-4 days, or whether they just take a look at the top of their preferred -stable tree at their convenience (maybe once a quarter) and use that version.

Patch backports

Posted Feb 24, 2019 22:52 UTC (Sun) by nix (subscriber, #2304) [Link] (2 responses)

I use stable kernels because I want serious bugfixes, stability fixes, and security fixes, but I don't want to run -rc kernels on the systems my job depends on, which house all my data, thankyouverymuch. This... does not seem like a terribly unusual requirement, to me. I specifically do not avoid updating because I "don't want the latest version": if I could reliably update without rebooting, I'd do it within minutes of every stable kernel coming out (but ksplice is fiddly for a self-compiled kernel and requires patch-by-patch analysis to determine which changes can be applied, and kgraft is just as bad, AIUI: neither is a thing you can just throw a new stable kernel at and say "magically update me to this without rebooting").

Like most people operating small numbers of machines rather than huge failovered farms, I upgrade at irregular intervals, when a stable kernel with a bugfix seemingly serious enough to make it worth the annoyance-cum-terror of rebooting and flushing all my caches comes along -- though I suspect most people don't routinely read the git log and patch series of everything that hits -stable the way I do. (Rebooting is much less bad for performance than it used to be, thanks to bcache caching all the seeky metadata, but rebooting my core server is *always* terrifying: what if it never comes back up? It always has so far but this is a PC which means it's shit by definition, and I am not confident that all-Intel-mobo-plus-Intel-UEFI-only-one-corp-to-blame means it's reliable before the OS has started, not when I've *seen* the thing lock up once or twice when trying to enumerate its USB ports, exhaust some sort of watchdog timer, and autoreboot again before completing POST. I'm tempted to switch to kexec just to avoid most of that terror, but unfortunately kexec is even *less* routinely tested so the terror quotient would be greater. Yes of course I have backups, and backups of backups, and backups of backups of backups, but terror does not yield to common sense. I keep the ludicrous levels of backups anyway.)
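
For context, kexec loads a replacement kernel from the running system and jumps straight into it, so the firmware and POST stages that cause the terror above are skipped entirely. A minimal sketch of that reboot path, assuming kexec-tools and systemd are available and using hypothetical /boot paths, might look like this:

    #!/usr/bin/env python3
    """Sketch only: stage a newly installed kernel with kexec and jump to it,
    skipping firmware and POST. Assumes kexec-tools and systemd; the /boot
    paths are hypothetical -- use whatever your setup actually installs."""

    import subprocess
    import sys

    def kexec_into(version):
        kernel = f"/boot/vmlinuz-{version}"       # assumed path layout
        initrd = f"/boot/initrd.img-{version}"    # assumed path layout

        # Stage the new kernel; --reuse-cmdline carries over the running
        # kernel's command line so the new one boots with the same options.
        subprocess.run(["kexec", "-l", kernel, f"--initrd={initrd}",
                        "--reuse-cmdline"], check=True)

        # "systemctl kexec" stops services and unmounts filesystems before
        # jumping to the staged kernel; plain "kexec -e" would jump at once.
        subprocess.run(["systemctl", "kexec"], check=True)

    if __name__ == "__main__":
        if len(sys.argv) != 2:
            sys.exit("usage: kexec-into <kernel-version>")
        kexec_into(sys.argv[1])

As the comment notes, though, the kexec path itself is far less routinely tested than a full reboot, so this mostly trades one source of terror for another.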

Patch backports

Posted Mar 1, 2019 1:48 UTC (Fri) by flussence (guest, #85566) [Link] (1 responses)

I'm thinking the whole concept of having a -stable branch might be wrong if this is how it works in practice. The label “stable” is always going to be wishful thinking at best, since the halting problem applies.

One could argue it'd make sense to bless individual kernel versions as “stable” after a grace period passes with no complaints raised, but the existing process needs to be fixed so those complaints are heard before that can happen.
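
To make the proposal concrete, here is a purely hypothetical sketch of such a blessing rule; it is not an existing kernel.org process, and the 14-day window and the inputs are arbitrary illustrations:

    """Hypothetical sketch of the "bless after a quiet grace period" idea
    above; not an existing kernel.org process."""

    from datetime import date, timedelta

    GRACE_PERIOD = timedelta(days=14)    # arbitrary window for illustration

    def is_blessed(release_date, regression_reports, today):
        """A release is blessed once the grace period has passed with no
        regression reports filed against it in that window."""
        if today - release_date < GRACE_PERIOD:
            return False                 # still inside the grace period
        return not any(r >= release_date for r in regression_reports)

    # A release from March 1st with a regression reported on the 5th is not
    # blessed; the same release with no reports is.
    print(is_blessed(date(2019, 3, 1), [date(2019, 3, 5)], date(2019, 3, 20)))  # False
    print(is_blessed(date(2019, 3, 1), [], date(2019, 3, 20)))                  # True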

Patch backports

Posted Mar 1, 2019 16:30 UTC (Fri) by nix (subscriber, #2304) [Link]

That's more or less what I do locally, with most of my systems containing no real persistent state (either all contents are rsynced nightly from the big network-critical server, or they're just outright NFS-mounted from it) and running latest stable and sometimes random rcs with local hacks... and said server running a "stable stable" which survived at least a couple of weeks on the other boxes, with obviously-crucial security fixes cherry-picked back into that if need be. (I tried the distributed systems approach, having N of every distributable crucial service on different machines, and found that unless you spent *ages* analyzing you ended up introducing N single points of failure rather than just one, so I've gone back to the "one great big single point of failure which is at least easy to identify" approach. It is also too heavy to lift and won't fit through the door so is probably fairly theft-proof.)
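
A minimal sketch of that promote-and-cherry-pick routine, assuming a local clone of the stable tree and using a hypothetical branch name, an example tag, and a placeholder commit id (no claim that this is how the setup described above is actually scripted):

    """Sketch of the "stable stable" workflow described above: the server
    branch trails upstream stable by a couple of weeks, and urgent security
    fixes get cherry-picked into it early. The clone path, branch name,
    tag, and commit id are placeholders."""

    import subprocess

    def git(*args, repo="/src/linux"):             # assumed clone location
        subprocess.run(["git", "-C", repo, *args], check=True)

    def promote_tested_stable(tag):
        """Fast-forward the server branch to a stable tag that has already
        survived a week or two on the less critical machines."""
        git("checkout", "server-stable")           # hypothetical branch name
        git("merge", "--ff-only", tag)

    def cherry_pick_security_fix(commit):
        """Pull one urgent fix into the server branch ahead of schedule."""
        git("checkout", "server-stable")
        git("cherry-pick", "-x", commit)           # -x records the source sha

    # promote_tested_stable("v4.20.13")            # example stable tag
    # cherry_pick_security_fix("deadbeef")         # placeholder commit id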

This approach seems to work. I haven't lost any filesystems on the big server for, oh, almost a year now! :P

