
White paper: Vendor Kernels, Bugs and Stability


Posted May 17, 2024 18:48 UTC (Fri) by bluca (subscriber, #118303)
In reply to: White paper: Vendor Kernels, Bugs and Stability by hmh
Parent article: White paper: Vendor Kernels, Bugs and Stability

Warnings that are seen only by users actually running BTRFS, and only by those who actually look for them, are not really enough. I mean, we do have a CI running BTRFS, but it logs twenty million things per run, so who would notice such a warning?

And besides, the main point is a different one: if there are users of a userspace API, do not remove it. Or do, but then stop claiming "we do not break userspace", and don't publish papers wondering why users don't trust new upstream kernel releases and just stay on enterprise stable kernels.



White paper: Vendor Kernels, Bugs and Stability

Posted May 19, 2024 3:59 UTC (Sun) by wtarreau (subscriber, #51152) [Link] (32 responses)

> I mean we do have a CI running BTRFS, but it logs twenty million things per run, so who would notice such a warning?

So that may be what needs fixing in the first place. If new kernel warnings are not detected by the CI, that is a sure recipe for unnoticed breakage upon upgrades.

White paper: Vendor Kernels, Bugs and Stability

Posted May 19, 2024 10:56 UTC (Sun) by bluca (subscriber, #118303) [Link] (20 responses)

Why would it? It's not a kernel CI; we barely have time and resources to test our own stuff, and we are certainly not going to do QA for the kernel

White paper: Vendor Kernels, Bugs and Stability

Posted May 19, 2024 11:17 UTC (Sun) by wtarreau (subscriber, #51152) [Link] (19 responses)

It's not about testing the kernel, it's about testing that what you're relying on still works. That's the same for any project: we all depend on lower layers, and the tests are expected to catch failures. If you know you're relying only on abstraction layers, you probably don't care, but when you start dealing with syscalls or mount options yourself, you know the risk of facing changes over time becomes higher. I know it's difficult, and such changes are rare enough that even having tests doesn't guarantee that this or that change will be caught. But kernel warnings are probably worth at least logging for occasional inspection by an init system.

White paper: Vendor Kernels, Bugs and Stability

Posted May 19, 2024 13:18 UTC (Sun) by bluca (subscriber, #118303) [Link] (18 responses)

That is only the case if "we do not break userspace" is false advertising, and every userspace interface provided by the kernel is subject to breakage and incompatible changes. Is it?

White paper: Vendor Kernels, Bugs and Stability

Posted May 19, 2024 13:59 UTC (Sun) by mb (subscriber, #50428) [Link] (16 responses)

There are different kinds of interfaces.
There are interfaces used by normal programs and there are special interfaces used by special programs like systemd and udev.

Normal interfaces are changed extremely rarely and these obviously are the ones meant by the "do not break userspace" rule.

Yes, it is annoying if systemd/udev are affected by an interface change, especially if the change could have been avoided. But it's not the end of the world.
Every other decades-old application will continue to work. That is what counts.

White paper: Vendor Kernels, Bugs and Stability

Posted May 19, 2024 14:11 UTC (Sun) by bluca (subscriber, #118303) [Link] (15 responses)

Sorry, but this is nonsense. Apart from the fact that a mount option, which can be used by anything or anyone with a simple mount command, seems hardly "special", either the userspace interface is stable and breaking it is bad, or it is not. Kernel maintainers say "we do not break userspace", they don't say "we do not break userspace, apart from programs X, Y and Z - because fuck those people". This is something that you have just made up post-facto to retroactively justify an obvious and clear regression.

Which by the way, neatly explains why vendor kernels are needed and are in fact the only sane choice, despite what the paper cited in this article says. Nobody should run production payloads on upstream kernels at this point, given basic stuff like mount options just breaks left and right.

White paper: Vendor Kernels, Bugs and Stability

Posted May 19, 2024 15:20 UTC (Sun) by mb (subscriber, #50428) [Link] (14 responses)

> Apart from the fact that a mount option, that can be used by anything or anyone with a simple mount
> command, seems hardly any "special"

I didn't say that it is. I did not talk about this specific thing, because I don't know anything about it. I was talking about "do not break userspace" in the general form, not in this specific case.

Whether this mount change is a sane change is up to somebody else to judge.

> they don't say "we do not break userspace, apart from

Well, they pretty much do exactly that.
Sometimes they actually spell out what "apart from" means.
For example trace points are an exception. There are more exceptions.

If you want 100% full "don't break userspace" without exceptions, we must basically stop all kernel development now.
Every change is user visible eventually. Even simple changes like adding a new syscall can break programs, if the program was using the new syscall number and depended on it returning ENOSYS.

Having a "don't break userspace without exceptions" is impossible.

> This is something that you have just made up

No. See my example of trace points.

> this is nonsense
> because fuck those people

I think it would be good to calm down before continuing the discussion.

White paper: Vendor Kernels, Bugs and Stability

Posted May 19, 2024 17:32 UTC (Sun) by bluca (subscriber, #118303) [Link] (13 responses)

> I didn't say that it is. I did not talk about this specific thing, because I don't know anything about it. I was talking about "do not break userspace" in the general form, not in this specific case.

This is very much about "do not break userspace" in the general form. It's the perfect example of why that mantra needs to be put to bed, once and for all, as it's completely disconnected from reality.

> Well, they pretty much do exactly that.

No, they very much do not. Look at all the enthusiastic comments from kernel people pointing to the paper in the article and saying "See? Vendor kernels are BAD, just upgrade to upstream kernels, it's fine really", and when told that new kernel versions break applications and that that is the real reason vendor kernels are used, they shrug it away with "impossible, we do not break userspace"

> No. See my example of trace points.

Yes, it is exactly what you did, and there was no mention anywhere of trace points:

> Yes, it is annoying, if systemd/udev are affected by an interface change. Especially, if this interface change could have been avoided. But it's not the end of the world.
> Every other decades old application will continue to work. That is what counts.

You have made up a new rule according to which it's fine to break systemd or udev (if it's not made up, then just point to where on https://kernel.org/doc/ it is defined), but *unspecified other applications* must continue to work. That is very convenient of course, it's always unspecified other applications that are supported, and the ones that break are never actually supported. That's a very easy way of guaranteeing compatibility - every time something goes wrong just say that case was never actually supposed to continue working and move on.

White paper: Vendor Kernels, Bugs and Stability

Posted May 19, 2024 18:10 UTC (Sun) by mb (subscriber, #50428) [Link] (11 responses)

>You have made up a new rule

There have always been exceptions and I didn't make that up. That's just silly. I even gave you an example (tracepoints).
It's up to you to ignore that. But please stop saying that I made it up.

I respect you for what you do for Linux, Systemd and so on. But you're acting like a child right now.

>it's always unspecified other applications that are supported

Yes. That is exactly like it is.

I understand that you are upset that the kernel apparently frequently breaks systemd/udev. But keep in mind that these applications are tightly coupled to the kernel. It's natural that these see more breakage than other average applications.
Yes, that is unfortunate and could certainly be improved.
But please don't generalize to other applications.

>every time something goes wrong just say that case was never actually supposed to continue working and move on.

That's not how things are done, though.
There have been reverts of ABI changes due to application breakages in the past.
It's done on a case by case basis.

Now you will reply: You have made up yet another rule!

White paper: Vendor Kernels, Bugs and Stability

Posted May 19, 2024 18:32 UTC (Sun) by bluca (subscriber, #118303) [Link] (10 responses)

> There have always been exceptions and I didn't make that up. That's just silly. I even gave you an example (tracepoints).
> It's up to you to ignore that. But please stop saying that I made it up.

Literally nobody has mentioned tracepoints. I mean, I'm not even sure that really qualifies as a userspace interface - maybe it does, it would seem strange, but I am not a tracing expert. But it is completely unrelated to mount options being removed.

> I understand that you are upset that the kernel apparently frequently breaks systemd/udev. But keep in mind that these applications are tightly coupled to the kernel. It's natural that these see more breakage than other average applications.

Says who? That is very much not true. Every interface that I can think of is used by multiple unrelated applications. I have no idea where you get this from. Cgroups and namespaces? Throw a rock in the general direction of a container runtime and you'll hit either or both. Netlink? There are as many network and interface managers as there are Linux vendors. Process management? That's been around since literally forever, and see the point about container management again. Mounting filesystems? fstab is older than me, I am quite sure.

'We do not break userspace, as long as userspace is a statically linked printf("hello world\n") /sbin/init' doesn't sound as catchy, does it now?

> It's done on a case by case basis.

I am well aware. And the triaging of that case by case goes like this: did it affect the machine that Linus happened to boot on that week? If so, it gets reverted and unpleasant emails are shot left and right. Else, nothing to see, move along.

White paper: Vendor Kernels, Bugs and Stability

Posted May 19, 2024 18:48 UTC (Sun) by mb (subscriber, #50428) [Link] (9 responses)

> Literally nobody has mentioned tracepoints.

What the? I did. I mentioned them as an example for a non-stable interface. After you have asked.

> I mean I'm not even sure that really qualifies as a userspace interface

Oh. I get it. *You* want to define what a userspace interface is and what not.
And everybody who disagrees is "making it up" or talking "nonsense".

That is silly.

> Says who?

Me. But I'm not sure why that matters.

> We do not break userspace, as long as userspace is a statically linked printf("hello world\n") /sbin/init

Well. I have never experienced a breakage due to a kernel interface change.
I run a two decades old binary and it still works fine.

That is my experience.

White paper: Vendor Kernels, Bugs and Stability

Posted May 19, 2024 19:31 UTC (Sun) by bluca (subscriber, #118303) [Link] (8 responses)

> I mentioned them as an example for a non-stable interface. After you have asked.

Again, I do not know the first thing about tracepoints and have zero interest in that. Maybe it's a supported interface, maybe it's not, I really cannot say, nor care, and can't see what it has to do with mount options.

> *You* want to define what a userspace interface is and what not.

No, userspace defines what is a userspace interface, as per Hyrum's Law.

> Me. But I'm not sure why that matters.

Because it's just wrong, as explained, there are no "special custom interfaces" being used anywhere, just bog standard stuff used by most components of an operating system.

> I run a two decades old binary and it still works fine.

'We do not break userspace, as long as userspace is mb's statically linked printf("hello world\n") /sbin/init' still not quite as catchy I'm afraid

White paper: Vendor Kernels, Bugs and Stability

Posted May 19, 2024 20:23 UTC (Sun) by mb (subscriber, #50428) [Link] (7 responses)

Ok. Everybody is wrong, except you. I got it.
I'll stop here.

White paper: Vendor Kernels, Bugs and Stability

Posted May 20, 2024 9:30 UTC (Mon) by LtWorf (subscriber, #124958) [Link] (2 responses)

I mean. The not breaking userspace only applies to a subset of things userspace can do.

I've had to fix software because of a kernel update, because some files in /sys were moved. But for some reason that doesn't count.

White paper: Vendor Kernels, Bugs and Stability

Posted May 20, 2024 9:41 UTC (Mon) by mb (subscriber, #50428) [Link]

> I mean. The not breaking userspace only applies to a subset of things userspace can do.

That is exactly what I was saying. Yet, I'm apparently wrong.

White paper: Vendor Kernels, Bugs and Stability

Posted May 20, 2024 9:45 UTC (Mon) by bluca (subscriber, #118303) [Link]

> I mean. The not breaking userspace only applies to a subset of things userspace can do.

Where is that subset defined?

White paper: Vendor Kernels, Bugs and Stability

Posted May 20, 2024 11:09 UTC (Mon) by wtarreau (subscriber, #51152) [Link]

> Ok. Everybody is wrong, except you. I got it.
> I'll stop here.

Welcome to discussions with bluca. Aggressiveness, half-reading of arguments, and accusations often arrive by the second or third message when he disagrees with you. There are such people who constantly criticize Linux and who would probably do the community good by switching to another OS of their choice :-/

White paper: Vendor Kernels, Bugs and Stability

Posted May 20, 2024 11:26 UTC (Mon) by bluca (subscriber, #118303) [Link] (2 responses)

You are literally trying to gaslight me into believing that dropping a mount option (not just making it a no-op, but literally deleting it so that a hard error is returned where it wasn't previously), one that is currently in use by userspace, is not a compatibility breakage. Make of that what you will.

White paper: Vendor Kernels, Bugs and Stability

Posted May 20, 2024 11:54 UTC (Mon) by mb (subscriber, #50428) [Link] (1 responses)

>You are literally trying to gaslight me

Wow. This is a new level.

Stop here please

Posted May 20, 2024 12:57 UTC (Mon) by corbet (editor, #1) [Link]

This clearly is not going anywhere useful, can we all let it go at this point, please?

White paper: Vendor Kernels, Bugs and Stability

Posted May 23, 2024 15:48 UTC (Thu) by anton (subscriber, #25547) [Link]

> This is very much about "do not break userspace" in the general form. It's the perfect example of why that mantra needs to be put to bed, once and for all, as it's completely disconnected from reality.

Is it? When the breakage of existing code is reported as a bug, do the kernel developers declare the bug report as invalid, or do they fix the bug? If it's the latter, they live up to the principle. Sure, one might wish that such bugs would never happen, but apparently they feel that going for that would be too constricting for kernel development.

Whether that means that vendor kernels are needed, or that one can use upstream kernels if one is selective about them is up to the vendors and their customers to decide.

White paper: Vendor Kernels, Bugs and Stability

Posted May 20, 2024 9:28 UTC (Mon) by LtWorf (subscriber, #124958) [Link]

Well the stuff in /sys moves around a lot.

White paper: Vendor Kernels, Bugs and Stability

Posted May 23, 2024 10:12 UTC (Thu) by tlamp (subscriber, #108540) [Link] (10 responses)

IMO what actually needs fixing is how deprecated options, maybe even drivers, are communicated and tracked.

A major improvement here could consist of adding a common infrastructure in the kernel to track deprecation.
It should allow the kernel build system to generate a declarative list (or something more structured, like JSON) that includes info like "driver/module", "option" name, "kernel release it got deprecated in", and "kernel release where removal is planned".

This data should be assembled at kernel build time, and possibly even made available at runtime in one of the virtual filesystems. That would allow distros and projects with a lot of kernel interaction, like systemd, to actually track those options and notice removals for sure, as scanning for arbitrary warnings that can change wording every point release is just an ugly mess with lots of false positives/negatives waiting to happen.

If it were available at runtime, then checks could be added to the pre-/post-installation scripts/hooks of the distro's kernel packages, so that users get a much more noticeable warning printed on upgrade if their system is affected by such an option removal.
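As a sketch of this proposal (the JSON schema, field names, and entries below are hypothetical illustrations, not an existing kernel interface): the build could emit a small machine-readable list, and a packaging hook could check the system's active options against it.

```python
import json

# Hypothetical deprecation list as the kernel build system might emit it.
# The entries are illustrative, not a statement about real kernel deprecations.
DEPRECATION_LIST = json.loads("""
[
  {"module": "btrfs", "option": "mount:inode_cache",
   "deprecated_in": "5.4", "removal_planned": "5.11"},
  {"module": "examplefs", "option": "mount:old_quota",
   "deprecated_in": "6.8", "removal_planned": "6.12"}
]
""")

def check_options(in_use: set[str]) -> list[str]:
    """Return a human-readable warning for every active option on the list."""
    warnings = []
    for entry in DEPRECATION_LIST:
        if entry["option"] in in_use:
            warnings.append(
                f"{entry['module']}: {entry['option']} deprecated in "
                f"{entry['deprecated_in']}, removal planned for "
                f"{entry['removal_planned']}"
            )
    return warnings

# A package post-install hook would gather the options actually in use
# (from fstab, the kernel cmdline, ...) and print any matches prominently.
for w in check_options({"mount:inode_cache"}):
    print("WARNING:", w)
```

The point of the structured format is that this check needs no fragile matching against warning text that may be reworded in every point release.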

White paper: Vendor Kernels, Bugs and Stability

Posted May 23, 2024 12:35 UTC (Thu) by mb (subscriber, #50428) [Link] (6 responses)

> A major improvement here could consist of adding a common infrastructure in the kernel to track deprecation.

We had such a deprecation list under Documentation, but I think it got removed a couple of years ago.
It was not very useful and suffered from major bitrot.

White paper: Vendor Kernels, Bugs and Stability

Posted May 23, 2024 13:20 UTC (Thu) by Wol (subscriber, #4433) [Link] (5 responses)

> We had such a deprecation list under Documentation, but I think it got removed a couple of years ago.
> It was not very useful and suffered from major bitrot.

Far better to do it in the kernel itself. Probably not easy, but move all deprecated stuff into a (or several) modules behind an option "deprecated-6.8" or whatever. Bleeding edge sets all these to "no", and either someone steps up and supports it (removing the deprecated option), or it bitrots until someone says "oh, this broke ages ago, let's delete it".

And then, if there's stuff you really want to get rid of but people need it, every year or so it gets upgraded to "deprecated latest kernel", so hopefully people stop using it and it finally drops out of sight ...

Cheers,
Wol

White paper: Vendor Kernels, Bugs and Stability

Posted May 23, 2024 14:45 UTC (Thu) by mb (subscriber, #50428) [Link] (4 responses)

>but move all deprecated stuff into a (or several) modules behind an option "deprecated-6.8" or whatever.

It would change nothing.
Every distribution and everyone building their kernel will just enable this option, because stuff will break without enabling it.
Just like everybody enabled the - how was it called? - EXPERIMENTAL option.
Such options are useless.

>let's delete it

And that is exactly when people will first start to notice.
And there is not much anybody can do about that *except* to not break/deprecate stuff.

White paper: Vendor Kernels, Bugs and Stability

Posted May 23, 2024 20:56 UTC (Thu) by Wol (subscriber, #4433) [Link] (2 responses)

> It would change nothing.

Except it changes everything

> Every distribution and everyone building their kernel will just enable this option, because stuff will break without enabling it.

You just said it!

The distributions are enabling something that is disabled by default? They're accepting responsibility for keeping it working.

Developers are enabling something that is disabled by default? They're accepting the associated risks.

People are enabling something that is marked "deprecated"? They're being placed on notice that it's being left to bit-rot.

The fact that people have to actively enable something that developers clearly don't want activated means that anybody using it will have three choices: migrate their code away, take over maintenance, or do an ostrich and bury their heads in the sand. Users will still be able to complain "I didn't know", but their upstream won't have that excuse.

Cheers,
Wol

White paper: Vendor Kernels, Bugs and Stability

Posted May 23, 2024 21:01 UTC (Thu) by mb (subscriber, #50428) [Link] (1 responses)

>Developers are enabling something that is disabled by default? They're accepting the associated risks.

Do you realize that most kernel options are disabled by default?

>The fact that people have to actively enable something that developers clearly don't want activated means

It means that developers don't have a clue what people (users!) actually want and need.

Closing your eyes won't make the demand go away, unless you are less than three years old.

White paper: Vendor Kernels, Bugs and Stability

Posted May 23, 2024 22:41 UTC (Thu) by Wol (subscriber, #4433) [Link]

> Do you realize, that most kernel options are disabled by default?

And how many of those options have "deprecated" in their name? Surely that's a massive red flag.

> It means that developers don't have a clue what people (users!) actually want and need.

And how many developers are employed by (therefore are) users? I believe Alphabet employs loads. Meta employs loads. Most of the kernel developers I have contact with are employed by large end users. It's a little difficult to be oblivious of your own needs. (Some people manage, I'm sure ...)

How difficult is it to set a "not enabled" flag that cannot be accessed without some sort of warning that it will enable deprecated functionality? Surely it's not beyond the wit of your typical kernel developer? That's ALL that's required.

Cheers,
Wol

White paper: Vendor Kernels, Bugs and Stability

Posted May 24, 2024 8:15 UTC (Fri) by tlamp (subscriber, #108540) [Link]

> It would change nothing.
> Every distribution and everyone building their kernel will just enable this option, because stuff will break without enabling it.
> Just like everybody enabled the - how was it called? - EXPERIMENTAL option.
> Such options are useless.

I don't think so, mostly because my spitballed proposal was not targeted at solving "distros never get hurt by deprecation", as IMO that cannot be solved, short of not doing any deprecation at all anymore, which is hardly a good solution. Rather, I wanted to target the "how things get communicated and noticed" part, and having an extra compile option with something like "deprecated-6.8-removal-6.12" in the name could actually be quite good for that. The build configs are often tracked and even diffed, and a simple single file can easily be grepped for _DEPRECATED_ options and then diffed for ones that would trigger soon or are new, probably even by a CI like the one systemd uses.

I.e., the status quo is deprecation warnings, which can be brittle and are not easy to digest/parse; having that info in an easier-to-digest form would help a lot, as tools/distros that depend on such options could easily find out when one they use will vanish soon(ish). Then they also have no excuse for being unprepared.
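The config-diffing idea above can be sketched like this (the CONFIG_*_DEPRECATED_* naming scheme is an assumption for illustration, not an existing kernel convention): compare two build configs and flag deprecation options that are newly enabled.

```python
import re

# Hypothetical naming scheme: CONFIG_<SUBSYS>_DEPRECATED_<since>_REMOVAL_<when>
PATTERN = re.compile(r"^(CONFIG_\w*DEPRECATED\w*)=y$", re.MULTILINE)

def deprecation_opts(config_text: str) -> set[str]:
    """Extract all enabled deprecation options from a kernel .config."""
    return set(PATTERN.findall(config_text))

old_config = """CONFIG_EXT4_FS=y
CONFIG_BTRFS_DEPRECATED_5_4_REMOVAL_5_11=y
"""
new_config = """CONFIG_EXT4_FS=y
CONFIG_BTRFS_DEPRECATED_5_4_REMOVAL_5_11=y
CONFIG_EXAMPLEFS_DEPRECATED_6_8_REMOVAL_6_12=y
"""

# Newly appearing entries are the ones a CI should flag for review.
new_entries = deprecation_opts(new_config) - deprecation_opts(old_config)
for opt in sorted(new_entries):
    print("newly deprecated dependency:", opt)
```

Because the option names carry the deprecation and removal releases, a plain config diff already tells a downstream project what is going away and when.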

> And that is exactly when people will first start to notice.

If their distro or tooling did not do its work, then yes, but it wouldn't be the fault of the kernel having a messy deprecation process anymore. IME most bigger distros and big projects like systemd want to avoid that, so if they had the definitive information required to do so in a somewhat digestible way, I really think most would actually act on it.

White paper: Vendor Kernels, Bugs and Stability

Posted May 23, 2024 14:57 UTC (Thu) by smurf (subscriber, #17840) [Link] (2 responses)

I don't need a static declarative list of all deprecations that might or might not exist in my kernel.

I need a list of those deprecated calls/options/whatever that the current system is actually using (or rather has been using since booting).

A data structure that gets added to a list which you can check via /proc/deprecated would be quite sufficient for this.

No JSON fanciness required; a textual table identifying the subsystem or module, source file, first and last use timestamps, and its identifier in linux/Documentation/deprecations.yml [yes I know that file doesn't exist yet] would be quite sufficient.
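A sketch of consuming such a table (the /proc/deprecated file, its column layout, and the sample rows are all hypothetical here, matching the description above rather than any existing kernel interface): a whitespace-separated textual table splits trivially into records.

```python
# Hypothetical contents of /proc/deprecated as described above: subsystem,
# source file, first/last use timestamps, and a documentation identifier.
SAMPLE = """\
btrfs fs/btrfs/super.c 1716470000 1716520000 btrfs-inode-cache
examplefs fs/examplefs/opts.c 1716480000 1716480000 examplefs-old-quota
"""

def parse_deprecated(text: str) -> list[dict]:
    """Split the textual table into one record per non-empty line."""
    fields = ("subsystem", "source", "first_use", "last_use", "doc_id")
    records = []
    for line in text.splitlines():
        if not line.strip():
            continue
        rec = dict(zip(fields, line.split()))
        rec["first_use"] = int(rec["first_use"])
        rec["last_use"] = int(rec["last_use"])
        records.append(rec)
    return records

# An admin tool would read the real file and report what the running
# system has actually touched since boot.
for rec in parse_deprecated(SAMPLE):
    print(f"{rec['subsystem']}: deprecated feature in use, see {rec['doc_id']}")
```

The appeal of this variant is that the kernel only reports what was actually used, so the operator's list is already filtered down to the deprecations that matter on that machine.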

White paper: Vendor Kernels, Bugs and Stability

Posted May 24, 2024 8:01 UTC (Fri) by tlamp (subscriber, #108540) [Link]

> I don't need a static declarative list of all deprecations that might or might not exist in my kernel.

As said, I'd assemble it at build time, so options that are not relevant for a given kernel build config would not be in there (or, if still wanted, could be tracked differently, i.e. with an extra flag or a separate list)

> I need a list of those deprecated calls/options/whatever that the current system is actually using (or rather has been using since booting).

With a declarative list this is trivial to create, as a tool can just scan all module, mount, ... options and compare whether anything explicitly set is in the static list. So if you want this, then a static list is IMO really the best way to achieve it: one first needs the definitive list of information before being able to actually do something with it. Keeping it dumb on the kernel side and including as much as possible allows (userspace or build) tooling to do the smart checks.

> A data structure that gets added to a list which you can check via /proc/deprecated would be quite sufficient for this.

Not sure how this, minus bikeshedding, is any different from what I proposed, but I'm glad we agree in general.

> No JSON fanciness required; a textual table identifying the subsystem or module, source file, first and last use timestamps, and its identifier in linux/Documentation/deprecations.yml [yes I know that file doesn't exist yet] would be quite sufficient.

I named JSON simply as an option, and explicitly stated that a simple list could do. But I named JSON because 1. generating it is trivial (compared to parsing, which isn't hard either, but isn't trivial anymore), 2. it allows more flexible extension for whatever data or use case becomes relevant in the future without having to do a /proc/deprecation2, and 3. in my projects I try to avoid adding another not-invented-here format with its subtleties. But sure, if it's a simple CSV list generated by the common infrastructure (i.e., not under the control of each kernel dev with their own opinion of the day), then fine by me (not that my acknowledgment matters :)

White paper: Vendor Kernels, Bugs and Stability

Posted May 24, 2024 13:45 UTC (Fri) by donald.buczek (subscriber, #112892) [Link]

> I don't need a static declarative list of all deprecations that might or might not exist in my kernel.
>
> I need a list of those deprecated calls/options/whatever that the current system is actually using (or rather has been using since booting).
>
> A data structure that gets added to a list which you can check via /proc/deprecated would be quite sufficient for this.
>
> No JSON fanciness required; a textual table identifying the subsystem or module, source file, first and last use timestamps, and its identifier in linux/Documentation/deprecations.yml [yes I know that file doesn't exist yet] would be quite sufficient.

This would be perfect! We would see what we need to address in our fleet (we are not using a distribution). But distributions would have something to build on, too. They might create a feedback path for this information from their users' systems back to the distribution. The basis for everything is that the information "you are using a mechanism which will go away" is made available in a structured way.

It is important that the information can not only be found by digging through masses of unstructured text in mailing lists, documentation, NEWS files, dmesg or other sources, and then having to analyze in each individual case whether it is relevant to you at all.


Copyright © 2025, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds