A turning point for CVE numbers
CVE numbers can be useful for anybody who is interested in whether a given software release contains a specific vulnerability or not. Over time, though, they have gained other uses that have degraded the value of the system overall. Developers within software distributors, for example, have been known to file CVE numbers in order to be able to ship important patches that would, without such a number, not be accepted into a stable product release. Security researchers (and their companies) like to accumulate CVE numbers as a form of resume padding and skill signaling. CVE numbers have become a target in their own right; following Goodhart's law, they would appear to have lost much of their value as a result.
Specifically, in many cases, the CVE numbers resulting from these activities do not correspond to actual vulnerabilities that users need to be worried about. The assignment of a CVE number, though, imposes obligations on the project responsible for the software; there may be pressure to include a fix, or developers may have to go through the painful and uncertain process of contesting a CVE assignment and getting it nullified. As this problem has worsened, frustration with the CVE system has grown.
One outcome of that has been an increasing number of projects applying to become the CVE Numbering Authority (CNA) for their code. If a CNA exists for a given program, all CVE numbers for that program must be issued by that CNA, which can decline to issue a number for a report that, in its judgment, does not correspond to a real vulnerability. Thus, becoming the CNA gives projects a way to stem the flow of bogus CVE numbers. In recent times, a number of projects, including curl, PostgreSQL, the GNU C Library, OpenNMS, Apache, Docker, the Document Foundation, Kubernetes, Python, and many others have set up their own CNAs. The OpenSSF has provided a guide to becoming a CNA for other projects that might be interested in taking that path.
Corporations, too, can become the CNA for their products. Many companies want that control for the same reasons that free-software projects do; they grow tired of responding to frivolous CVE-number assignments and want to put an end to them. Of course, control over CVE assignments could be abused by a company (or a free-software project) to try to sweep vulnerabilities under the rug. There is an appeal process that can be followed in such cases.
The kernel CNA
The kernel project has, for the most part, declined to participate in the CVE game. Famously, the project (or, at least, some of the most influential developers within it) has long taken the position that all bugs are potentially security issues, so there is no point in making a fuss over the fixes that have been identified by somebody as having security implications. Still, the kernel has proved fertile ground for those who would pad their resumes with CVE credits, and that grates on both developers and distributors.
The situation has now changed, and the kernel will be assigning CVE numbers for itself. If that idea brings to mind a group of grumpy, beer-drinking kernel developers reviewing and rejecting CVE-number requests, though, then a closer look is warranted. The key to how this is going to work can be found in this patch to the kernel's documentation:
As part of the normal stable release process, kernel changes that are potentially security issues are identified by the developers responsible for CVE number assignments and have CVE numbers automatically assigned to them. These assignments are published on the linux-cve-announce mailing list as announcements on a frequent basis.

Note, due to the layer at which the Linux kernel is in a system, almost any bug might be exploitable to compromise the security of the kernel, but the possibility of exploitation is often not evident when the bug is fixed. Because of this, the CVE assignment team is overly cautious and assign CVE numbers to any bugfix that they identify. This explains the seemingly large number of CVEs that are issued by the Linux kernel team.
(Emphasis added). What this text is saying is that anything that looks like a bug fix — meaning many of the changes that find their way into the stable kernel updates — will have a CVE number assigned to it. Bear in mind that, as of 6.1.74, the 6.1 kernel (which has been out for just over one year) has had 12,639 fixes applied to it. The 4.19 kernel, as of 4.19.306, has had 27,952. Not all of these patches will get CVE numbers, but many will. So there are going to be a lot of CVE numbers assigned to the kernel in the coming years.
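At that volume, identifying which stable patches look like bug fixes has to be automated. As a rough illustration only (a hypothetical heuristic, not the kernel CVE team's actual tooling), a script could flag likely fix commits by scanning commit messages for "Fixes:" tags or stable-list cc's, both of which are conventional markers in kernel commits:

```python
import re

# Heuristic: a commit carrying a "Fixes:" tag or a cc to the stable
# list is a likely bugfix, and thus a CVE candidate under the cautious
# policy quoted above. Hypothetical helper, for illustration only.
FIXES_RE = re.compile(r"^Fixes:\s+[0-9a-f]{8,}", re.MULTILINE)
STABLE_RE = re.compile(r"^Cc:\s+stable@vger\.kernel\.org",
                       re.MULTILINE | re.IGNORECASE)

def looks_like_fix(commit_message: str) -> bool:
    """Return True if the message carries a bugfix marker."""
    return bool(FIXES_RE.search(commit_message)
                or STABLE_RE.search(commit_message))

# An invented commit message in the usual kernel style:
msg = """net: foo: avoid use-after-free in teardown

Fixes: 0123456789ab ("net: foo: add teardown path")
Cc: stable@vger.kernel.org
"""
print(looks_like_fix(msg))               # True
print(looks_like_fix("docs: fix typo"))  # False
```

Run over a stable series like 6.1.y, a filter of this sort would match a large fraction of the thousands of backported patches, which is precisely the fire-hose the documentation warns about.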
Back in 2019, LWN covered a talk by Greg Kroah-Hartman about the CVE-number problem. From that article:
Kroah-Hartman put up a slide showing possible "fixes" for CVE numbers. The first, "ignore them", is more-or-less what is happening today. The next option, "burn them down", could be brought about by requesting a CVE number for every patch applied to the kernel.
It would appear that, nearly five years later, a form of the "burn them down" option has been chosen. The flood of CVE numbers is going to play havoc with policies requiring that shipped software contain fixes for all CVE numbers filed against it — and there are plenty of policies like that out there. Nobody who relies on backporting fixes to a non-mainline kernel will be able to keep up with this CVE stream. Any company that is using CVE numbers to select kernel patches is going to have to rethink its processes.
A couple of possible outcomes come to mind. One is that the CVE system will be overwhelmed and eventually abandoned, at least with regard to the kernel. There was not much useful signal in kernel CVE numbers before, but there will be even less now. An alternative is that distributors will simply fall back on shipping the stable kernel updates which, almost by definition, will contain fixes for every known CVE number. That, for example, is the result that Kees Cook seemed to hope for:
I'm excited to see this taking shape! It's going to be quite the fire-hose of identifiers, but I think that'll more accurately represent the number of fixes landing in stable trees and how important it is for end users to stay current on a stable kernel.
It is easy to get the sense, though, that either outcome would be acceptable to the developers in charge of mainline kernel security.
However it plays out, it is going to be interesting to watch; popcorn is recommended. The CVE system has been under increasing stress for years, and it hasn't always seemed like there has been much interest in fixing it. The arrival of the kernel CNA will not provide that fix, but it may reduce the use of kernel CVE numbers as resume padding or ways to work around corporate rules and, perhaps, draw attention to the fact that keeping a secure kernel requires accepting a truly large number of fixes. That might just be a step in the right direction.
Index entries for this article:

Kernel: Security/CVE numbers
Security: Bug reporting/CVE
Security: Linux kernel
Posted Feb 14, 2024 17:22 UTC (Wed)
by corbet (editor, #1)
That's what we have done in the kernel for a very long time, and I predict this fascination of unique identifiers somehow meaning something is going to go away over time as it's obviously not sustainable for anyone involved.
Posted Feb 14, 2024 18:24 UTC (Wed)
by bluca (subscriber, #118303)
"...and breaks backward compatibility in at least twice as many ways, half of which are unknown and you'll only find in production. Good luck!"
^^^ this part was strangely missing from the quote, for some reason /s
Posted Feb 14, 2024 18:33 UTC (Wed)
by pizza (subscriber, #46)
Of course, you're welcome to ask for (and will receive!) a complete refund if you're unhappy with that level of service.
Posted Feb 16, 2024 10:35 UTC (Fri)
by taladar (subscriber, #68407)
Posted Feb 19, 2024 9:10 UTC (Mon)
by gmgod (guest, #143864)
Plus, and that's the whole point: pulling a Google and updating the kernel once in a blue moon, using "critical" CVEs as an indicator for backports, and most ARM boards not even doing that and giving you the finger, are not practices that should be condoned. This kind of backporting gives you an illusion of stability and security (it does improve ABI stability a fair bit, but at the expense of other things).
I know the term has lost quite a fair bit of meaning, but I'd argue the whole point of devop-ing is precisely developing ways to perform updates (and rollbacks) that are as seamless as possible, so that your systems are always up and, if something has to be rolled back, you and your team have the time to sort it out.
If you do so often, it's only little bits and bobs you need to fix here and there. If you don't and use Ubuntu LTS, it's a full migration, with many arguments, tears and pain every time you switch to a new LTS.
The Debian model is catered toward sysadmins who install everything by hand once and would generally be in it very deep if any of their servers actually fell over because of hardware or because a config became invalid after an update.

If kernel updates are all that is really possible to get a secure kernel (because backports require too much work), the Linux ecosystem will evolve. Debian, for instance, will probably track LTS kernels instead, but will also ask the kernel devs to be much more methodical about how changes are made and documented.

The kernel devs' point is that CVEs are rubbish: a lot of them are redundant or not real, and a lot of them downplay the impact they have on the system. My current understanding is that backporting doesn't scale well. But we'll see how it pans out. In the meantime, if an "unfortunate" kernel upgrade has a large business impact, try to think of and implement ways to negate that impact. We are mostly using on-prem resources here, and yet I can redeploy bare-metal servers as I wish, and most of our OSes provide automatic rollback should anything go wrong. That's the way to go, instead of complaining that an update, any update, broke something in prod.
Posted Feb 22, 2024 8:36 UTC (Thu)
by Aissen (subscriber, #59976)
Posted Feb 14, 2024 17:23 UTC (Wed)
by dullfire (guest, #111432)
Does this mean the CVE generation part can basically be fully automated, with possible touch-ups for patches with bad commit messages?
Posted Feb 14, 2024 18:15 UTC (Wed)
by gregkh (subscriber, #8)
Posted Feb 15, 2024 23:00 UTC (Thu)
by Darakian (guest, #96997)
Posted Feb 19, 2024 5:46 UTC (Mon)
by apollock (subscriber, #14629)
The CVE 5 schema has a variety of ways of expressing machine-readable data about the vulnerability
https://cveproject.github.io/cve-schema/schema/v5.0/docs/...
I spent a lot of energy dealing with not-so-machine-readable data to convert non-CVE 5 schema CVE records to OSV for https://osv.dev/blog/posts/introducing-broad-c-c++-support/
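To make "machine-readable" concrete, here is a minimal sketch of consuming version data from a CVE record in the v5 shape. The record below is invented for illustration (the real schema has many more, mostly optional, fields), but the field names used (`cveMetadata`, `containers.cna`, `descriptions`, `affected`, `versions`) follow the published v5 schema:

```python
import json

# A stripped-down, invented CVE record in the v5 shape.
record = json.loads("""
{
  "dataType": "CVE_RECORD",
  "dataVersion": "5.0",
  "cveMetadata": {"cveId": "CVE-2024-00000"},
  "containers": {"cna": {
    "descriptions": [{"lang": "en", "value": "Example flaw."}],
    "affected": [{
      "vendor": "Linux",
      "product": "Linux",
      "versions": [{"version": "6.1", "lessThan": "6.1.74",
                    "status": "affected", "versionType": "semver"}]
    }]
  }}
}
""")

# Walk the CNA container and report the affected version ranges.
cna = record["containers"]["cna"]
for entry in cna["affected"]:
    for v in entry["versions"]:
        if v["status"] == "affected":
            print(entry["product"], v["version"], "->", v.get("lessThan"))
```

Tooling such as the OSV converters mentioned above has to do essentially this walk, which is far easier when records actually populate these structured fields instead of burying version information in free-text descriptions.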
Posted Feb 14, 2024 18:44 UTC (Wed)
by bluca (subscriber, #118303)
It really feels like said 'influential developers' live in a world of their own and have never actually spoken to anybody working on a commercial project anywhere; this take is so naive and disingenuous that that is the most charitable interpretation possible. The point of the CVE system should in theory be (yes, of course there's plenty of misuse and outright abuse, as noted in the article; those are all very real problems with the system) that it lets you quickly decide whether it's worth dropping everything on the floor, paying a large sum of money, and disrupting all your customers to go and do a kernel update, which in most cases results in unrelated stuff breaking left and right, given how regressions are routinely ignored upstream and how backward compatibility is not really a thing that any kernel maintainer cares about, outside of the syscall ABI. A bug that is knowingly exploited in the wild is very, very different from any random bug fix that doesn't affect you in any way, and it's incredibly naive to pretend they are all the same. They are very much not, for anybody running any production system.
> A couple of possible outcomes come to mind. One is that the CVE system will be overwhelmed and eventually abandoned, at least with regard to the kernel. There was not much useful signal in kernel CVE numbers before, but there will be even less now. An alternative is that distributors will simply fall back on shipping the stable kernel updates which, almost by definition, will contain fixes for every known CVE number.
The third possible outcome is that, given that shipping with known security problems (which is synonymous with unfixed CVEs, like it or not) is slowly becoming the target of legislation in the US and the EU, companies will just stop using Linux in their products, starting with anything to do with government contracts, given that it's essentially impossible to continuously update the kernel in production, due to how disruptive it is and also how often new versions break backward compatibility all over the place. Now _that_ would be a hilarious unintended consequence.
Posted Feb 14, 2024 19:21 UTC (Wed)
by jbenc (subscriber, #40051)
Posted Feb 14, 2024 19:28 UTC (Wed)
by DemiMarie (subscriber, #164188)
Posted Feb 15, 2024 4:09 UTC (Thu)
by Darakian (guest, #96997)
Funds for a team to test and curate which bugs actually have security implications
Posted Feb 16, 2024 1:48 UTC (Fri)
by dralley (subscriber, #143766)
Posted Feb 16, 2024 23:30 UTC (Fri)
by Darakian (guest, #96997)
Posted Mar 7, 2024 5:50 UTC (Thu)
by DemiMarie (subscriber, #164188)
Posted Feb 15, 2024 6:35 UTC (Thu)
by marcH (subscriber, #57642)
Companies using Linux "for free" should hire fewer amateurs and more "real" software engineers who actually know how to:
You get what you paid for; if you don't pay for quality, then you don't get quality.
[indefinite "you", not answering anyone in particular]
If stable branches are full of regressions then _prove_ it. Overwhelm them with bug reports and... even more CVEs! The very first step is sharing _evidence_ of the problem, otherwise nothing ever changes.
If nothing changes even after sharing evidence then maybe Linux was too cheap and too good to be true and the wrong choice for you. Either write your own kernel and operating system or buy a better one. Linux has been incredibly successful but many companies still do that.
Whatever you do, before whining remember how much you paid for it.
Posted Feb 15, 2024 15:07 UTC (Thu)
by bferrell (subscriber, #624)
There are simply not enough "qualified" individuals to support the "I want it NOW" world we have. And I don't mean in any given country. So, it's become grab a warm body that comes close, pay the going rate and pray.
If you think the people doing code are underpaid, you likely think they ought to be paid like rock stars... And that too is part of the problem.
Posted Feb 15, 2024 16:11 UTC (Thu)
by marcH (subscriber, #57642)
But still: don't come and complain that some Linux branches are buggy when you got them for free and did barely any QA on them yourself. You got what you paid for.
I think there is a perception problem because quality is even less tangible than lines of code. But good companies making quality products (Linux-based and not) know very well how much it's really worth.
Posted Feb 20, 2024 8:50 UTC (Tue)
by gmgod (guest, #143864)
Posted Feb 14, 2024 19:54 UTC (Wed)
by mokki (subscriber, #33200)
I would hope the criteria will allow cases where companies just need to ensure their product is safe. That can be done by locking it down or by many other means. But if there is a security breach as a result of a known bug that had a fix available which was not provided to the customers, then the company could be held liable.

And I think that will work transitively too: if a company in the chain did not apply the provided upstream fix, then it should itself be liable to its customers.
Posted Feb 15, 2024 11:06 UTC (Thu)
by bluca (subscriber, #118303)
But if the kernel tries to game the system by flooding it with bogus CVEs - one for each commit as it was suggested - then the above process breaks, and suddenly companies shipping products will no longer be able to self-certify that. There will be short term solutions, and then there will be long-term solutions, which might very well involve at least recalculating whether it still makes economic sense to rely on Linux.
Posted Feb 15, 2024 14:30 UTC (Thu)
by pbonzini (subscriber, #60935)
It's not going to be one for each commit according to Greg. https://lwn.net/ml/linux-kernel/2024021447-fastball-twili...
I am cautious about the announcement. If the floodgates open but the result is useful, I hope that whatever tooling distros create to handle kernel CVEs will be public. And also perhaps it will encourage more people to do stable backports of patches that do not apply directly.
If the result is useless, on the other hand, I will just stop suggesting patches for stable. *shrug*
Posted Feb 20, 2024 8:48 UTC (Tue)
by gmgod (guest, #143864)
The message is clear: you want security, you use the latest kernel (LTS is probably fine because fixes are backported there too).
I do agree and I do think that if the plan goes through, the kernel devs will have to up their testing game.
But at any rate, the time of the happy-go-lucky approach of installing Debian stable and believing the system will never go down after an update is over. You talk about "real problems people have"... If you can afford a hardware failure or a borked package after such an upgrade, you can afford a kernel "failure". If you can't, there are already measures in place to handle those, "bad" kernel update included.
And again, this will go both ways. I bet you we'll see a lot of investment in testing in the next couple years.
Posted Feb 20, 2024 13:42 UTC (Tue)
by pizza (subscriber, #46)
What's the standard financial disclaimer... "past performance is no guarantee of future success"?
I once publicly called out someone (who _definitely_ should have known better) for professional incompetence after they went on a "systemd is responsible for everything wrong with society!!!111" rant after something went wrong on a Debian 9 (I think) upgrade on a critical system. A remote, (completely) headless critical system.
...Because you don't do _any_ updates on critical systems without some measure of testing first. Or, at minimum, some sort of reversion/recovery procedure. While even basic (end-user) smoke tests would have caught this particular failure [1] the fact that there wasn't any thought given to recovering from an update failure (not even "remote hands" capable of hooking up and looking at the local console) was inexcusable.
[1] Due to non-Debian-supplied software failing to start properly and systemd actually catching the failure instead of ignoring it.
Posted Feb 20, 2024 14:48 UTC (Tue)
by farnz (subscriber, #17727)
Your anecdote links to a known change we're seeing in the software world: failure is less and less of an option over time. Back In The Day™ (for various values of back in the day), it was fine to depend on user complaints to tell you if a service was running or not. It was fine for anyone who could telnet to a host to be able to log in as root with just a plaintext password to authenticate them. It was fine for a system to have a few days downtime while broken hardware got replaced. It was fine for sysadmins to go digging in people's files just to see if there was something interesting in there.
None of this is OK any more; arguably, much of it was never OK, it was just accepted because doing better cost more than people were willing to pay. But time has moved on, and we expect more for less money, and to some extent, we get it - I can pay someone like Fastmail for better e-mail service than I used to be able to get from an in-house server, backed up by improvements to connectivity (where my LAN might have shared a single dial-up link 30 years ago, I've now got high speed Internet that's faster than the LAN speeds I got 30 years ago, and mail protocols designed to cope with the latency added by going to an outside datacentre instead of to a machine on the 10BASE2 network).
Posted Feb 20, 2024 15:34 UTC (Tue)
by pizza (subscriber, #46)
Note that "for less money" in practice, means an increasing unwillingness to pay _anything at all_, because "something else is paying/subsidizing the cost of service"
(And one of those "something elses" is our service provider snooping on everything we do, including our at-rest data, finding "interesting" things to monetize. But hey, it's not "money", so that's fine!)
Posted Feb 20, 2024 19:41 UTC (Tue)
by bluca (subscriber, #118303)
Debian stable regularly ships upstream kernel stable releases. Which is a problem, as we found out a couple of months ago, as "stable" kernels are not stable at all and can corrupt your disks and require a reinstall and restore from backups.
Posted Feb 14, 2024 21:44 UTC (Wed)
by mfuzzey (subscriber, #57966)
But that does not and cannot work for something that is as wide scoped as the Linux kernel.
A local privilege escalation on a server providing shell accounts is probably a big deal; the same vulnerability on an embedded device that is only running the intended software (maybe already as root), who cares?
The problem is that CVE numbers are conceptually simple and hide a lot of the real complexity and nuances that need to go into their sensible interpretation and you end up with stupid policies that say "you have to fix all CVEs" (of course that's not directly the fault of the CVE numbers themselves but the way people try to use them).
Giving managers something they think they can understand and make decisions on when the realities are much more complicated is generally a bad idea.
Also, in my experience, updates in the same stable kernel series very rarely cause issues, and those that occasionally do slip through can be mitigated with reasonable testing. Updating to a new kernel release does need a bit more care and testing, though. I think the risk of *not* updating is higher than that of updating, provided you do test to some extent.
> companies will just stop using Linux in their products, starting with anything to do with government contracts,
Unlikely, I think. At this point there are few viable alternatives to Linux for vast swathes of applications. The alternatives generally either aren't open source and involve per-instance license fees (making them either insufficiently flexible or too expensive) or lack the breadth of hardware support that Linux enjoys (making them unusable for many, particularly in embedded).
Posted Feb 15, 2024 0:50 UTC (Thu)
by bluca (subscriber, #118303)
Of course it can and does, this is just the usual kernel developers misplaced exceptionalism and sense of grandeur. It's just some piece of software like many others.
> Even assuming the CVE does refer to a real vulnerability the impact is very usecase dependent.
And that's what the impact assessment and other data are used for, you are stating the obvious. "Does this exploit apply to our product" is the standard minimum assessment that everyone does.
> Also in my experience updates in the same stable kernel series very rarely cause issues
They break apart all the time, as soon as they involve anything that is not exercised on a couple dozen kernel developers' laptops or desktops, and sometimes even there, like the disk-corruption bug of a couple of months ago. New major releases are even worse, with userspace interfaces being intentionally broken left and right.
> At this point is there are few viable alternatives to Linux for vast swathes of applications.
I'm sure the developers of all past software that was once widespread and then faded into obscurity thought the same at some point or another. It just needs to stop making economic sense to use it, and that's exactly what will happen: back to being a toy for hobbyists. We live in a capitalist society, and all those companies that are directly or indirectly sponsoring the vast, vast majority of development feel no attachment nor loyalty to anything but their share prices and profit margins.
Posted Feb 15, 2024 0:54 UTC (Thu)
by pizza (subscriber, #46)
[citation needed]
(Especially given this flies against a _very_ longstanding "don't break userspace" rule that's kept all manner of crappy interfaces around)
Posted Feb 15, 2024 1:06 UTC (Thu)
by bluca (subscriber, #118303)
Posted Feb 15, 2024 1:19 UTC (Thu)
by pizza (subscriber, #46)
Then why are you so grumpy about not getting something that was never promised to begin with?
Seriously, write and/or maintain your own kernel/system if that sort of stability matters so much to you.
("But waaah, that's too much work!" you exclaim. So if you're not willing to do it, why do you expect others to do it for you, for free?)
Posted Feb 15, 2024 1:28 UTC (Thu)
by bluca (subscriber, #118303)
Posted Feb 15, 2024 8:15 UTC (Thu)
by Wol (subscriber, #4433)
Depends what you're talking about. As far as I know udev is (developer wise) absolutely nothing to do with the kernel.
"Do not break user space" is the rule Linus applies to the linux kernel. And that is a big part of the reason linux is so successful. Who knows what rules the udev guys apply to udev...
Cheers,
Posted Feb 15, 2024 10:35 UTC (Thu)
by bluca (subscriber, #118303)
And that's a legitimate answer to give of course, it's their kernel after all. The problem is taking that approach _and_ then going around proudly proclaiming "we do not break userspace".
Posted Feb 15, 2024 21:51 UTC (Thu)
by fw (subscriber, #26023)
Posted Feb 16, 2024 0:21 UTC (Fri)
by bluca (subscriber, #118303)
Posted Feb 16, 2024 10:03 UTC (Fri)
by timon (subscriber, #152974)
Posted Feb 16, 2024 12:12 UTC (Fri)
by bluca (subscriber, #118303)
Posted Feb 16, 2024 14:28 UTC (Fri)
by corbet (editor, #1)
Posted Feb 16, 2024 15:36 UTC (Fri)
by bluca (subscriber, #118303)
https://lists.freedesktop.org/archives/systemd-devel/2022...
The other one that hit me personally was when overlayfs was made incompatible with SELinux. And then there's all the times that netlink changed. And all the times uevents changed. And all the times sysfs changed. And so on and so forth. The reality is that "we don't break userspace" is a nice story that kernel developers like to go around telling anybody who's willing to listen, but it's just that, a story. They barely care about syscall ABI stability, and even that gets broken from time to time, as already pointed out by another comment.
Posted Feb 16, 2024 16:01 UTC (Fri)
by mb (subscriber, #50428)
What I and most users care about is whether actual user applications break.
It doesn't affect my application, because the OS as a whole still works as before, after porting systemd/udev to the new interfaces. A combination of updated kernel and incompatible systemd/udev would never hit stable distributions.
Therefore, do you have examples of real user applications breaking, that are not part of the OS?
Posted Feb 16, 2024 16:08 UTC (Fri)
by bluca (subscriber, #118303)
Need some help with all that goal post moving? Must be exhausting, going that far
Posted Feb 16, 2024 16:54 UTC (Fri)
by mb (subscriber, #50428)
I've just set things into perspective. That is no goal post moving.
Posted Feb 16, 2024 16:06 UTC (Fri)
by corbet (editor, #1)
I have to say, Luca, that I would expect a systemd developer to understand how this kind of constant badmouthing from outside can make an environment toxic; systemd has certainly suffered its share of that. Why continue with that pattern? A more constructive approach might work wonders.
Posted Feb 16, 2024 16:49 UTC (Fri)
by bluca (subscriber, #118303)
These things do get reported, and they get ignored/shrugged away if you are lucky, and if not you get taken for a long ride. For this case it's explained in the link above as well. For the overlayfs case Google even went as far as sending 20 revisions of a patchset to try and restore backward compatibility, albeit optionally, and it was stonewalled: https://lore.kernel.org/lkml/20211117015806.2192263-1-dva...
So yeah, trying to dispel this myth that "the kernel doesn't break userspace" is pretty much all that's left. Reading blatantly false statements being made irks me really badly, especially when they are used to justify some potentially damaging process changes, as happened here.
Posted Feb 16, 2024 17:01 UTC (Fri)
by mb (subscriber, #50428)
Quite honestly, systemd and udev also broke lots and lots of things about how the Linux operating system works.
But the correct answer to users complaining often enough is to "ignore it" or "shrug it away".
What should be avoided is breaking changes that don't have positive sides. Changes just for the sake of changing and breaking things. That is bad and must be avoided. And it should always be considered, if a non-breaking change is possible.
But if a change breaks things and at the same time brings big benefits (relative to the breakage)?
Posted Feb 16, 2024 18:05 UTC (Fri)
by bluca (subscriber, #118303)
Or, in other words, different software works differently. Or, yet again, you are moving the goal posts. Because nobody ever said "systemd works in exactly the same way as your 1980s garden-variety collection of shell scripts"; in fact, the idea was very much the opposite. Some compat layers for the main interfaces were provided, which were always clearly documented as sub-optimal and wonky and intended for transition purposes, and after 20 years or so we'll remove them too, with ample advance notice. But nobody ever claimed that every single workflow in existence would continue unchanged after switching.
In fact, we don't even make absolute claims such as "we never break compatibility, period". From time to time we do breaking changes, and we try to announce them in advance, and for really impactful ones we try to get consensus on the mailing list first, and in rare cases we even try to help distributions migrate ahead of time to ensure the impact is nominal only - see when we dropped support for unmerged-usr last year for example - this happened in v255, and nobody noticed.
But what we most certainly don't do is go around claiming "we never break compatibility", and I certainly don't use such a claim to justify firing off a bogus CVE for each commit that I backport to every stable branch I maintain.
See where the difference is?
Posted Feb 16, 2024 18:59 UTC (Fri)
by mb (subscriber, #50428)
>See where the difference is?
Nope.
Posted Feb 16, 2024 19:03 UTC (Fri)
by rahulsundaram (subscriber, #21946)
The difference is that kernel developers have publicly committed to never breaking userspace. Systemd developers haven't. It is the disconnect between the public messaging and reality that's causing the contention. Not the changes themselves necessarily.
Posted Feb 16, 2024 19:10 UTC (Fri)
by mb (subscriber, #50428)
Things like uevents, tracepoints, sysfs files, etc... were pretty much never part of that claim.
> It is the disconnect between the public messaging and reality that's causing the contention.
The disconnect between the expectation and the reality is causing the contention.
Posted Feb 16, 2024 19:21 UTC (Fri)
by bluca (subscriber, #118303)
[Link] (2 responses)
Citation needed. That is very much not evident from any claim anybody has ever made that I have seen.
Posted Feb 16, 2024 19:41 UTC (Fri)
by mb (subscriber, #50428)
[Link] (1 responses)
Even syscalls have been removed in the past, breaking applications.
Citation: Look at the sources.
There has never been a thing like a general stability guarantee.
Posted Feb 16, 2024 20:07 UTC (Fri)
by bluca (subscriber, #118303)
[Link]
> If a change only breaks udev or systemd and nothing else, it might make sense to do it.
I beg to differ
Posted Feb 16, 2024 19:25 UTC (Fri)
by Wol (subscriber, #4433)
[Link] (1 responses)
Where?
Okay, I know Linus says "never break user-space", and he is very strict about it. But at the end of the day, shit happens.
And there's plenty of kernel developers who *haven't* signed up to it. They just know that trying to get it past Linus is not a battle worth fighting most of the time.
There's one big example I can think of, that had a rather nasty fall-out, in the raid world. So bad, in fact, that kernels were modified to have an explicit "fail to boot" config, iirc!
Something to do with the fact that raid layout was accidentally changed. So you have pre-change kernels that will trash post-change arrays, pre-discovery kernels that will trash pre-change arrays, and post-discovery kernels that will refuse to access arrays without a "this is a pre/post-layout flag".
Sometimes that's all you can do :-(
Cheers,
Posted Feb 16, 2024 20:04 UTC (Fri)
by bluca (subscriber, #118303)
[Link]
Posted Feb 18, 2024 5:41 UTC (Sun)
by ras (subscriber, #33059)
[Link]
Posted Feb 15, 2024 1:08 UTC (Thu)
by pizza (subscriber, #46)
[Link] (12 responses)
It's _vastly_ cheaper to keep using it than to replace it with something else. By multiple orders of magnitude.
...Any replacement will necessarily need to be roughly equivalent in features and complexity, and even if you completely discount the initial development costs, you're going to still end up with a similar ongoing maintenance burden.
Posted Feb 15, 2024 10:39 UTC (Thu)
by bluca (subscriber, #118303)
[Link] (6 responses)
Posted Feb 16, 2024 13:17 UTC (Fri)
by hkario (subscriber, #94864)
[Link] (5 responses)
If you have a policy that says you need to ship fixes for all CVEs, then that's a stupid policy. It just conditions vendors to refuse each and every CVE until it goes through arbitration (something proprietary vendors already do).
What consumers of CVEs need to do is be selective: evaluate whether the CVE is relevant, what the effects of exploiting it are, etc., and only then backport the fix to the product they ship that uses the kernel or other affected software. Same for end users: if the bug is in an API that's not used by any software that is running, then no, you don't have to install updates.
The problem is that all of it requires actual work, not blind adherence to the policy, and it's for security, so the business also doesn't want to spend money for it.
It's a complex problem and there are no simple solutions.
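The selective-evaluation workflow described above can be sketched as a simple filter. Everything here is hypothetical for illustration: the record shape, component names, severity numbers, and threshold are made up, and a real CVE feed is far richer and messier than this.

```python
# Hypothetical CVE triage: keep only records relevant to what we actually
# ship, then rank by severity. Field names are invented for this sketch.
from dataclasses import dataclass

@dataclass
class CveRecord:
    cve_id: str
    component: str   # subsystem/driver the fix touches (hypothetical field)
    severity: float  # 0.0-10.0, however your team rates it

def triage(records, shipped_components, min_severity=7.0):
    """Return records worth considering for backport, highest severity first."""
    relevant = [
        r for r in records
        if r.component in shipped_components and r.severity >= min_severity
    ]
    return sorted(relevant, key=lambda r: r.severity, reverse=True)

feed = [
    CveRecord("CVE-2024-0001", "ext4", 7.8),
    CveRecord("CVE-2024-0002", "infiniband", 9.1),  # hardware we don't ship
    CveRecord("CVE-2024-0003", "netfilter", 5.5),   # below our threshold
]
picks = triage(feed, shipped_components={"ext4", "netfilter"})
print([r.cve_id for r in picks])  # → ['CVE-2024-0001']
```

The point of the sketch is the one made in the comment: the filtering itself is trivial, but deciding what goes in `shipped_components` and whether a given `severity` number can be trusted is the actual work, and that work cannot be skipped.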
Posted Feb 16, 2024 13:35 UTC (Fri)
by bluca (subscriber, #118303)
[Link] (4 responses)
Nobody I know of has such a policy, so that sounds like yet another of those made-up strawmen that the kernel people pushing for this have conjured out of thin air.
We rely on CVE metadata et al. to decide whether we need to pick a fix or not. If the metadata is bogus, because the kernel maintainers just flood the system with bogus CVEs, then we can't do that sensibly anymore, and the process is broken.
Posted Feb 16, 2024 13:43 UTC (Fri)
by pizza (subscriber, #46)
[Link] (3 responses)
I worked for a company that had such a policy.
Respectfully, you need to STFU about stuff that is outside your realm of expertise and experience.
Posted Feb 16, 2024 14:09 UTC (Fri)
by bluca (subscriber, #118303)
[Link] (2 responses)
Sounds like a problem in that company then, why should that justify breaking everything for everybody else?
> Respectfully, you need to STFU about stuff that is outside your realm of expertise and experience.
Respectfully, you need to STFU about my expertise and experience, because you have no idea about either (just like I don't about yours)
Posted Feb 16, 2024 15:32 UTC (Fri)
by pizza (subscriber, #46)
[Link]
*shrug* You asserted that such organizations do not exist (because you didn't know of any) and used that to accuse others of making things up or otherwise speaking in bad faith. You were incorrect on both fronts.
You're free to argue that the current status quo has problems (or not). You're free to talk about *your* experiences, and how proposed actions by others will have ill effects on you or third parties.
But you don't get to claim that other people's direct experiences are wrong, incorrect, or irrelevant, and accuse them of bad faith for taking steps to improve the messes they are dealing with, "because you have no idea about either".
Posted Feb 16, 2024 15:49 UTC (Fri)
by pizza (subscriber, #46)
[Link]
Incidently, that company was that way because *EU regulations required them to be*.
(They laid off my research team on the tail end of a major process/policy revamp brought about by new regulations soon to come into effect. I was made to endure many training sessions about how those new/updated regulations affected every part of the overall product lifecycle, from early design to manufacturing to label placement/content to post-sales support to how end-of-life would be handled)
So it's not "that company's problem" so much as "the problem of any company operating in a regulated space"
Posted Feb 15, 2024 11:43 UTC (Thu)
by Wol (subscriber, #4433)
[Link] (2 responses)
I would beg to differ. How much software - that was perfectly good at doing its job - has been replaced by a FAR inferior solution because one bunch of suits with a big marketing budget schmoozed another bunch of suits with a big spending budget? (Or other things like underhand shenanigans etc etc.)
Okay, there's loads of counter-pressures in place certainly as regards linux, but there's nothing stopping massively inferior solutions driving out far better ones.
Cheers,
Posted Feb 15, 2024 14:00 UTC (Thu)
by pizza (subscriber, #46)
[Link] (1 responses)
The problem is that in the real world, _hardware_ (and its configurations, and the expectations of the software running on top of it) is more complex than ever, and that requires ever-more-complex software to sanely manage it. Decry that reality all you want, but at the end of the day, reality doesn't care about feelings.
So I stand by my point. You want "simpler/inferior" operating systems? They already exist [1], and it turns out nobody wants to use them, or invest the (considerable!) effort needed to adapt/maintain them for their own needs.
[1] Or rather, existed, having never grown beyond the "academic toy" status or long since confined to the dustbins of history.
Posted Feb 15, 2024 17:24 UTC (Thu)
by Wol (subscriber, #4433)
[Link]
And how much software, having started out as an "academic toy", is now mainstream despite being unfit for purpose precisely because it's all the CS grads know?
Pretty much all the software I swear BY, was designed and then built. Pretty much all the software I swear AT, was cobbled together and the cracks papered over. Unfortunately, properly designed software is a rarity :-( It's also usually older software which imho is still better in many cases than its modern replacements, which just aren't "fit for purpose".
Even if it's only in the programmer's head, a truth table of all possible options leads to a far better program than a programmer responding "oh I didn't think of that" when faced with an end user pointing out the bleedin' obvious! (And no, I don't expect the first programmer to *implement* all possible options; just the fact that they were considered in the design results in a far better design.)
Cheers,
Posted Mar 9, 2024 0:50 UTC (Sat)
by DanilaBerezin (guest, #168271)
[Link] (1 responses)
This is a pretty large blanket statement that definitely isn't always true. If it were as true as you claim it was, things wouldn't fade into obscurity or ever be replaced. There are plenty of conditions where replacing something is cheaper than continuing to use it. X is a great recent example.
Posted Mar 9, 2024 1:21 UTC (Sat)
by pizza (subscriber, #46)
[Link]
I didn't claim it was true in a general sense; I only claimed it was true for the Linux kernel.
Posted Feb 15, 2024 6:22 UTC (Thu)
by kees (subscriber, #27264)
[Link] (3 responses)
There are currently no plans to assign CVSS scores from cve@kernel.org, so this may happen externally, which kind of puts things back to square one: external entities will call out specific fixes as "important", and the cherry-picking will continue.
Honestly, when I would do security flaw lifetime analysis, I only ever looked at "Critical" and "High" CVEs (as rated by the Ubuntu security and kernel teams), since there was already such a giant long tail of "Medium" and "Low". E.g. see slides 4 & 5:
Posted Feb 15, 2024 10:59 UTC (Thu)
by bluca (subscriber, #118303)
[Link] (2 responses)
Posted Feb 15, 2024 13:44 UTC (Thu)
by farnz (subscriber, #17727)
[Link] (1 responses)
The point is that, because the kernel is a CNA, doing anything with kernel vulnerabilities requires you to either get the kernel team to issue the CVE number, CVSS score etc, or show MITRE that you engaged the kernel team, and they refused to issue a CVE number, attach a CVSS score etc.
A CNA cannot block publication of CVEs in their product, nor can a CNA prevent a CVSS score being attached to a CVE it's responsible for. It just gets "first dibs" on dealing with the issue, but you can still go around them if they're being slow or obstructive. Most of the gain for the kernel is that this blocks the noise, because it blocks the zero-effort CNAs from issuing CVE numbers (or attaching CVSS scores) to a kernel vulnerability. These CNAs are the ones who'd happily issue a CVE for the same vulnerability in the source code once for each supported architecture, or put a 9.8/10.0 CVSS score on a vulnerability because if you have credentials to SSH as root to a host, you can exploit the vulnerability remotely.
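The "9.8 because you can exploit it over SSH" example above falls straight out of the CVSS v3.1 base-score arithmetic: with full impact and no other barriers, the only thing separating a headline-grabbing 9.8 from a tamer score is the Attack Vector metric. A minimal sketch of the v3.1 base-score formula, handling scope-unchanged vectors only, with weights taken from the FIRST specification:

```python
# Metric weights from the CVSS v3.1 specification. PR weights here are the
# scope-UNCHANGED values; this sketch deliberately rejects S:C vectors.
WEIGHTS = {
    "AV": {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.2},
    "AC": {"L": 0.77, "H": 0.44},
    "PR": {"N": 0.85, "L": 0.62, "H": 0.27},
    "UI": {"N": 0.85, "R": 0.62},
    "C": {"H": 0.56, "L": 0.22, "N": 0.0},
    "I": {"H": 0.56, "L": 0.22, "N": 0.0},
    "A": {"H": 0.56, "L": 0.22, "N": 0.0},
}

def roundup(x):
    # Spec-defined "round up to one decimal place", with float-noise handling.
    i = round(x * 100000)
    return i / 100000 if i % 10000 == 0 else (i // 10000 + 1) / 10

def base_score(vector):
    # vector like "AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H"
    m = dict(part.split(":") for part in vector.split("/"))
    assert m["S"] == "U", "sketch handles unchanged scope only"
    w = {k: WEIGHTS[k][m[k]] for k in WEIGHTS}
    iss = 1 - (1 - w["C"]) * (1 - w["I"]) * (1 - w["A"])
    impact = 6.42 * iss
    exploitability = 8.22 * w["AV"] * w["AC"] * w["PR"] * w["UI"]
    if impact <= 0:
        return 0.0
    return roundup(min(impact + exploitability, 10))

# The classic headline score: network-reachable, no privileges, no interaction.
print(base_score("AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H"))  # → 9.8
# The same flaw scored as local-only (you already have a shell on the box):
print(base_score("AV:L/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H"))  # → 8.4
```

Flipping AV:N to AV:L alone drops the score, and adding honest Privileges Required values drops it further; scoring a local bug as "Network" because SSH exists is exactly the kind of inflation a competent CNA can push back on.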
Posted Feb 16, 2024 10:56 UTC (Fri)
by taladar (subscriber, #68407)
[Link]
Posted Feb 15, 2024 17:00 UTC (Thu)
by wtarreau (subscriber, #51152)
[Link] (7 responses)
Maybe that would actually be the best way to get rid of the tiny portion of perpetual complainers who don't want to miss a single security notification, but are upset when the most likely ones get reported because there are too many. Or their level of expectations just means that they want for free something that costs a lot of man power; from the beginning they should have gone with paid distros, whose job is to invest more time in that triage.
Posted Feb 15, 2024 18:55 UTC (Thu)
by farnz (subscriber, #17727)
[Link] (5 responses)
This underlies a lot of people's complaints about open source projects; you get something for free that's anything from 10% to 95% of what you need, and then complain that the remaining parts are expensive to do. They don't account for the savings when compared to licensing a commercial product in that; they just want the open source product to be a free drop-in replacement for the thing they'd otherwise be paying for, and they don't want to spend on it because then it's not "free", and the accounting gets hard.
Posted Feb 16, 2024 0:25 UTC (Fri)
by bluca (subscriber, #118303)
[Link] (4 responses)
Posted Feb 16, 2024 10:12 UTC (Fri)
by farnz (subscriber, #17727)
[Link] (3 responses)
That ship sailed a long time ago; the system is currently being flooded with bogus CVEs by "security" people looking to pad their CVs with a large number of discovered CVEs. At least this way round, the kernel controls the flood, instead of being flooded by other people's demands.
Posted Feb 16, 2024 11:06 UTC (Fri)
by bluca (subscriber, #118303)
[Link] (2 responses)
Posted Feb 16, 2024 12:00 UTC (Fri)
by pizza (subscriber, #46)
[Link] (1 responses)
Look, you may have expertise in some areas (eg knowledge of how EU regs work etc) but that does not automatically make you the domain expert in other areas.
Especially when you're digging in on a position directly contrary to the literal "this is why we're doing this" words coming out of the actual domain experts' mouths.
Posted Feb 16, 2024 12:17 UTC (Fri)
by bluca (subscriber, #118303)
[Link]
Look, every single Linux distribution has systems and teams to deal with CVEs and security updates. Of course there is abuse, of course there are bogus ones being raised. It is not 80%, it is not the majority, it is not flooding. Could things be improved? Sure. Flooding the system with a bogus CVE for every commit is not the way to do that, quite the opposite.
Posted Feb 16, 2024 7:54 UTC (Fri)
by jikos (subscriber, #43140)
[Link]
The problem is that with this new system, paid distros are going to suffer big time (with no benefit to anybody at all).
We'll have to put a lot of productive and creative (upstream) work on hold in order to have enough resources to sort out the unnecessary havoc that the LTS team is apparently going to create by DoSing the world with a truckload of irrelevant CVEs.
Posted Mar 7, 2024 5:58 UTC (Thu)
by DemiMarie (subscriber, #164188)
[Link] (1 responses)
There are obviously many exceptions to this (HPC and embedded systems come to mind), but this seems to be the general rule. As far as stable kernel releases breaking stuff, is that something that should be caught in testing, preferably before the new stable tree is released?
Posted Mar 13, 2024 12:45 UTC (Wed)
by ju3Ceemi (subscriber, #102464)
[Link]
When I need it, I just reboot the boxes (one by one, unless monitoring cries)
"If you cannot handle maintenances, you cannot handle incidents"
Posted Feb 14, 2024 19:52 UTC (Wed)
by pbonzini (subscriber, #60935)
[Link] (10 responses)
Any examples? Is this practice continuing?
Posted Feb 15, 2024 1:16 UTC (Thu)
by sashal (✭ supporter ✭, #81842)
[Link] (9 responses)
Posted Feb 15, 2024 6:04 UTC (Thu)
by pbonzini (subscriber, #60935)
[Link] (8 responses)
But yeah, I can see that it's a nuisance from the upstream point of view and I agree that assigning the CVEs proactively can be an improvement.
Posted Feb 15, 2024 13:47 UTC (Thu)
by sashal (✭ supporter ✭, #81842)
[Link] (7 responses)
"""
RH *explicitly* called this out as something done to backport this patch to older releases.
Posted Feb 15, 2024 15:51 UTC (Thu)
by pbonzini (subscriber, #60935)
[Link] (6 responses)
However that doesn't mean that Red Hat creates CVEs *because otherwise the backport wouldn't be allowed*. For example, a serious bug in 8.6 can be fixed without a CVE, and a low priority vulnerability wouldn't be fixed even with a CVE. (Also, giving an artificially high CVSS would be against Red Hat's interest for multiple reasons—it gets noticed and decreases credibility, forces customers to scramble, and imposes stricter deadlines that everyone would rather avoid).
I do agree that this is one case in which the new process can help, in multiple ways: 1) it makes it easier for distros not using LTS to identify candidate backports 2) it prevents confusion if Red Hat and friends do a late backport, and it gives a heads up to the upstream CVE team if Red Hat decides to assign a security impact to a fix a couple years down the line 3) it *may* provide impetus for manufacturers of embedded Linux products to get their act together and keep the f***ing kernel up to date, through either Linux LTS releases or distro vendors.
So I appreciate the example. However, I think you're reading from it a gaming of Red Hat policies that isn't there.
Posted Feb 16, 2024 0:31 UTC (Fri)
by sashal (✭ supporter ✭, #81842)
[Link] (5 responses)
Given that you feel that we should go for completeness around our CVE reporting, I'm more than happy to personally check the CVEs assigned by kernel.org against RH's kernel trees, and request CVEs for issues that may affect RH's trees explicitly from the RH CNAs.
Posted Feb 16, 2024 3:34 UTC (Fri)
by dgc (subscriber, #6611)
[Link] (1 responses)
That escalated quickly, didn't it?
We've gone from LTS maintainers defending kernel developers against bad CVEs straight to LTS maintainers using their new authority to make extortion threats towards independent downstream CNAs in the space of a few discussion points.
It's no wonder there's a significant amount of distrust of this new power grab by the LTS maintainers. It will do nothing to lighten the CVE-related workload of downstream distros, and they seem to think nothing of using their authority as a weapon against independent, competing stable kernel products.
Posted Feb 19, 2024 13:53 UTC (Mon)
by sashal (✭ supporter ✭, #81842)
[Link]
Posted Feb 16, 2024 4:50 UTC (Fri)
by pbonzini (subscriber, #60935)
[Link] (1 responses)
I've been working with Red Hat kernels for 15 years and I can confidently say that a CVE number is neither necessary nor sufficient to commit to old releases. For example https://git.kernel.org/pub/scm/virt/kvm/kvm.git/commit/?h... will be in 8.6 soon and it does not have a CVE.
This statement was from someone who is clearly not too fluent in English; in fact, the only way I can make *any* sense of the sentence is if it refers to the Bugzilla rather than the CVE. I understand that the grammar makes you read it like that, and I understand how this was annoying to you, so I thanked you for showing it to me. But please concede that I might know Red Hat policies better than you, will you?
Posted Feb 19, 2024 13:59 UTC (Mon)
by error27 (subscriber, #8346)
[Link]
Posted Feb 16, 2024 13:44 UTC (Fri)
by hkario (subscriber, #94864)
[Link]
it clearly states that Urgent Priority Bug Fix Advisories (RHBAs) can be issued in extended support channels.
Posted Feb 15, 2024 7:16 UTC (Thu)
by marcH (subscriber, #57642)
[Link]
Thank you for the Goodhart reference. Like most people I had seen many examples of this law but I could not yet see the forest for the trees and now I'm finally connecting all those dots.
Typical release rules like "no more than X bugs of priority Y" always felt subjective and artificial, but now I understand exactly why. Priorities are by definition _relative_, so how could such rules make sense? If anything these rules should look at some absolute "severity", not a relative "priority", right? But in reality, the use of the word "priority" is an incredibly honest admission of Goodhart's law :-)
From at least that particular "metrics" perspective, security bugs are indeed just like other bugs.
Posted Feb 17, 2024 7:34 UTC (Sat)
by rwmj (subscriber, #5474)
[Link]
Posted Feb 18, 2024 16:00 UTC (Sun)
by mdolan (subscriber, #104340)
[Link]
If you want to keep pretending and hoping, just filter the kernel CNA out. Greg has been telling everyone for a decade+ that the solution is to keep updating with the LTS kernel. If you've done the engineering required to implement that process, nothing changes. If you ignored him and others and are still pretending... government regulation is here, with more coming. Like it or not, I'm sorry to say this isn't optional anymore, and it's not just going to be the kernel changing. I remember the days of happily running my own FTP+SMTP server. Things change.
I hadn't seen this quote from Greg Kroah-Hartman before posting the article, but it kind of reinforces the point:
An additional quote
The "simple solution" for all of this is just to have open source projects say "You must update to our latest version, it fixes all known issues at this time".
A turning point for CVE numbers
- write test code,
- automate their validation
- quickly test stable branches
- bisect regressions
- file good bugs
- [optional] fix regressions themselves
Even assuming the CVE does refer to a real vulnerability, the impact is very use-case dependent.
Never break userspace
Examples? Preferably with pointers to the discussion around the issue? I don't doubt there are places where the kernel community is failing to live up to its goals, but it's hard to make things better without some clarity about where the problem exists.
https://lists.freedesktop.org/archives/systemd-devel/2022...
But in reality, systemd and udev are parts of the operating system.
So I don't care that much, if the OS breaks itself. That will get fixed eventually.
And that extremely rarely happens.
I run decades old binaries that work just fine.
There is no such thing as a general interface stability guarantee.
Ah yes, the BIND/UNBIND thing was a big enough deal that I wrote about it at the time. What I suggested there might still make sense: rather than sniping at the kernel community from the sidelines, work with them to improve the situation. Let Thorsten know about regressions, preferably early enough to keep them from making it into a release. Things can be improved.
>
> I have to say, Luca, that I would expect a systemd developer to understand how this kind of constant badmouthing from outside can make an environment toxic; systemd has certainly suffered its share of that. Why continue with that pattern? A more constructive approach might work wonders.
Apparently breaking userspace can just be waved through, while fixing it requires "building a security model" and other extremely high bars to be met. Meanwhile, anybody using selinux needs to completely open up the security policy to make it work at all, of course, which I guess makes for a very interesting "security model". I could go on, but can't be bothered to look up yet more references.
Administrators had to change tons of scripts, because some things suddenly worked differently after the distribution updated from classic init to systemd.
Things work differently now. Get used to it. That's the correct answer surprisingly often.
That's true for systemd and it's also true for parts of the kernel.
*shrug*
Sometimes things break accidentally, and sometimes they get fixed and sometimes they don't.
It's exactly the same thing. Things change. Deal with it like everybody has to deal with systemd.
Devs try hard to not make unnecessary breakages, but if a sysfs file disappears/changes or an uevent changes, programs have to deal with it.
Has always been like that.
BUT these applications always were very limited in count and usually part of the OS itself.
It always has been a matter of common sense.
If a change only breaks udev or systemd and nothing else, it might make sense to do it.
https://outflux.net/slides/2021/lss/kspp.pdf
This one is for a backport to older versions of Red Hat Linux, because the original request was:
"reported experiencing a UAF in RHEL8.6."
"""
> against RH's kernel trees, and request CVEs for issues that may affect RH's
> trees explicitly from the RH CNAs.