
How kernel CVE numbers are assigned

June 19, 2024

This article was contributed by Lee Jones

It has been four months since Greg Kroah-Hartman and MITRE announced that the Linux kernel project had become its own CVE Numbering Authority (CNA). Since then, the Linux CNA Team has developed workflows and mechanisms to help manage the various tasks associated with this challenge. There does, however, appear to be a lack of understanding among community members of the processes and rules the team has been working within. The principal aim of this article, written by a member of the Linux kernel CNA team, is to clarify how the team works and how kernel CVE numbers are assigned.

Some early CVE announcements raised questions both on the mailing lists and off. The Linux CNA Team has received messages of firm support, particularly from those dedicating significant time to Linux security. Other messages, largely received from distributors and teams that look after enterprise platforms and attempt to remain stable yet secure by taking the fewest changes possible, have reflected some concern. Some of the stronger points raised were about how the rise in the number of CVEs would increase workload and overwhelm security teams attempting to review them all. Others have suggested that consumers of CVEs at the distribution and enterprise level, particularly those charging for this service, should have been reviewing all stable commits for fixes to relevant security flaws all along. One independent, security-related maintainer was particularly taken aback that paid-for distributions were not reviewing additional stable fixes beyond those identified as CVE candidates as they should have been.

Whichever side of the fence contributors sit on, one thing is almost universally agreed upon: for a plethora of reasons, the old CVE process wasn't working well. LWN listed many of the major points in this recent article. An additional point that deserves attention is that many downstream maintainers (myself included to a point, although Android did have the additional safety net of regular merges from the long-term-support stable kernels) were content with the strategy of cherry-picking all relevant CVEs raised and calling it good in terms of ongoing security updates. This practice, of course, would lead to a false sense of security, since it misses hundreds of security-related fixes and ultimately results in less-secure kernels.

The new process is more exhaustive and aims to identify every commit that fixes a potential security issue. Some people have said that they consider this strategy a little overzealous; however, since we started this endeavor back in February, it has resulted in only 863 allocations out of the 16,514 commits between v6.7.1 and v6.8.9. That is a mere 5% hit rate.

Negative opinions have been exacerbated by remarks Kroah-Hartman and others have made in the past, and by a misunderstanding of the current literature. In an article about a 2019 Kernel Recipes talk, Kroah-Hartman is paraphrased as saying: "The next option, 'burn them down', could be brought about by requesting a CVE number for every patch applied to the kernel." In truth, the plan was never to create a flood of CVE numbers to overwhelm the current system such that it would eventually be abandoned, nor has that come close to becoming a reality. The Linux CNA Team is careful to keep CVE assignments to a minimum, only assigning numbers to flaws that pose a possible risk to security.

Unfortunately, some of the phrases in the documentation haven't helped much to quell these fears. For instance, this section is often quoted:

Because of this, the CVE assignment team is overly cautious and assign CVE numbers to any bugfix that they identify. This explains the seemingly large number of CVEs that are issued by the Linux kernel team.

This section is often misunderstood or taken too literally. The part concerning assigning CVE numbers to "any bugfix" should be expanded to say "any bugfix relevant to the kernel's security posture". For instance, a fix repairing a broken LED driver would never be sensibly considered for assignment. That said, prescribing the exact types of issues that are considered is a slippery slope. A recent attempt at doing so was submitted to security@kernel.org for pre-review and was promptly rejected by the group. However, a non-exhaustive list of the issues I look for includes buffer overflows, data corruption, crashes (BUGs, oopses, and panics, including those affected by panic_on_warn), use-after-frees (UAF), lock-ups, double frees, denial of service (DoS), data leaks, and the like.

One question that crops up from time to time can be summarized as: "why are we so overzealous and why can't we only create CVEs for severe security fixes?"; the answer is that quality assessment is an impossible task. Since the kernel is infinitely configurable and adaptable, it's not possible to know all the ways in which it could be deployed and utilized. Evaluating potential vulnerabilities and associating generic levels of bug reachability, severity, and impact is infeasible. An unreachable vulnerability on one platform may be trivial to exploit on another. Even if a particular issue could be proven to be universally low-risk, it might still be used as a stepping stone in a more involved, chained attack. For all these reasons and more, we find the most sensible approach is to assume that "security bugs are just bugs" and to assign a CVE number to any issue with possible security-related ramifications.

The Linux CNA Team does not take the process of allocating CVE numbers lightly and the process is not automated. Over the first full release (6.7.y), the process consisted of all three members of the team (Greg Kroah-Hartman, Sasha Levin, and myself) manually reviewing every single patch hitting stable and voting on each. If a commit obtained three positive votes, it was allocated a CVE number. Commits with two votes were subjected to a second review, followed by discussion.

The team members review candidates in various ways. One utilizes Mutt in exactly the same way as they would review mainline patch submissions, another is in the process of training a machine-learning (ML) model to identify hard-to-spot issues, and I prefer to use Git output piped through a helper tool that highlights telltale words and phrases for easy "yes" votes. "No" votes take more time to review. The current thinking is that, by using different tools and methods, our positive results would be more robust; at least that's the theory.
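
By way of illustration only (the actual helper tool and its keyword list are not public, so everything below is an assumption rather than the team's real tooling), a keyword scan over git log output along those lines might look something like this Python sketch:

    #!/usr/bin/env python3
    # Illustrative sketch only: flag stable-candidate commits whose changelog
    # mentions telltale, possibly security-relevant phrases. The keyword list,
    # revision range, and overall behavior are assumptions for illustration.
    import re
    import subprocess

    TELLTALE = re.compile(
        r"use[- ]after[- ]free|double free|overflow|out[- ]of[- ]bounds|"
        r"memory leak|race condition|null pointer|deadlock|oops|panic",
        re.IGNORECASE,
    )

    def flag_commits(rev_range):
        """Yield (sha, subject) for commits whose message matches a keyword."""
        log = subprocess.run(
            ["git", "log", "--format=%H%x00%s%x00%b%x1e", rev_range],
            capture_output=True, text=True, check=True,
        ).stdout
        for record in log.split("\x1e"):
            sha, subject, body = (record.strip("\n").split("\x00") + ["", ""])[:3]
            if TELLTALE.search(subject) or TELLTALE.search(body):
                yield sha[:12], subject

    if __name__ == "__main__":
        for sha, subject in flag_commits("v6.8.8..v6.8.9"):
            print(sha, subject)

A human still reviews every flagged and unflagged commit; the point of a script like this is only to make the easy "yes" votes stand out quickly.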

Once CVEs are created, they are submitted to the linux-cve-announce mailing list, where interested parties are able to review them at their leisure. The engineers at SUSE deserve a lot of credit here. They have been instrumental in highlighting allocations that ended up being duplicate CVEs raised by previous CNAs in a non-searchable way, or ones that did not merit a CVE assignment. Their input helped shape the way the team now conducts analysis. Anyone is free and even encouraged to review the linux-cve-announce list and respond to CVEs they consider invalid. If the team agrees with the evaluation, the CVE assignment will be promptly rejected. Since the start of this endeavor, 65 such instances have occurred.

Hopefully this helps to clear up some of the current misconceptions in terms of the methods used to review, identify, and process CVE candidates and allays some of those fears people have been communicating to me recently. Specifically, CVE numbers are not assigned in an automated manner, and they are only assigned to bugs that might reasonably be believed to have security implications. The team remains open to constructive feedback and genuine suggestions for improvements; it is committed to its responsibilities and exercises care and due diligence during all phases of the process. If you have any questions or suggestions for us, then you can use the contact details located in the kernel's Documentation/process/cve.rst file. We'd be happy to hear from you.

Index entries for this article
Kernel: Security/CVE numbers
GuestArticles: Jones, Lee



Hmmm

Posted Jun 19, 2024 17:41 UTC (Wed) by lee_duncan (subscriber, #84128) [Link] (7 responses)

Many of these "CVEs" are fixes for things that just can't normally happen. One I worked on recently was for the case where a hardware error, for very rare hardware, would cause a memory leak. So you think bad actors are carrying around bad hardware so they can exhaust the kernel of memory?

In my particular case, the new flood of "CVE"s (call them CVE-lite) are not helpful, and don't seem to be adding security.

Hmmm

Posted Jun 19, 2024 18:22 UTC (Wed) by mb (subscriber, #50428) [Link] (5 responses)

>So you think bad actors are carrying around bad hardware so they can exhaust the kernel of memory?

Yes, intentionally bad hardware is a very real attack vector.

Exhausting kernel memory by itself probably can't be exploited, but it can make entering buggy error paths possible. Exploits are often chained these days.

Marking this bug as a security vulnerability is probably correct. (Disclaimer: I have not seen the code)

Hmmm

Posted Jun 19, 2024 18:31 UTC (Wed) by lee_duncan (subscriber, #84128) [Link] (4 responses)

I respectfully disagree. This seems like worrying about swabbing the deck as the Titanic sinks, and blocks work on real attacks that actually have exploits. Disclaimer: I speak only for myself and will shut up now.

Hmmm

Posted Jun 19, 2024 22:11 UTC (Wed) by flussence (guest, #85566) [Link] (3 responses)

For the low price of convincing an inattentive user to click "Allow" on the free flash drive they just found outside/got at a conference, an attacker can graft onto the system an entire PCI topology they control (thanks USB4), and get the kernel to load almost any driver they want.

It's not even all that expensive to manufacture such a device. DIY PCB from a small-run supplier, some SBC components, a USB PHY, scrounge some pentesting code off Github, 3D print a plastic shell. Whoever wrote that parallel-PCI RAID interface driver 30 years ago probably didn't account for that in their threat model, or error codepaths.

Hmmm

Posted Jun 20, 2024 1:14 UTC (Thu) by khim (subscriber, #9252) [Link]

> thanks USB4

USB4 is still very rare today, but Thunderbolt gave the exact same abilities many years ago. Andrey Konovalov fuzzed Linux drivers and actually made hardware that tries to crash the Linux kernel a few years back, using dozens of exploits his fuzzer had found. For laughs we tried them with a Windows laptop. After two dozen or so, Windows dutifully blue-screened.

I'm not sure whether anyone took that to an actual real-world exploit or not, but that's the issue with CVEs: different people work in different environments; what is "something that should never happen" for one could be "something that is routine" for another.

Hmmm

Posted Jun 20, 2024 7:31 UTC (Thu) by mstsxfx (subscriber, #41804) [Link] (1 responses)

Yes, issues triggered by pluggable devices are a real thing and I suspect Lee didn't have those in mind. I think he was talking about issues like CVE-2021-47329, fixed by b5438f48fdd8 ("scsi: megaraid_sas: Fix resource leak in case of probe failure").

Ease of building a Thunderbolt device that exploits any PCI or PCIe driver in the kernel

Posted Jun 20, 2024 8:34 UTC (Thu) by farnz (subscriber, #17727) [Link]

The issue is that I can (for affordable sums - certainly under $250 in one-off quantities once the design is done, given that I can buy an Intel Thunderbolt peripheral controller for $16 and an FPGA capable of PCIe for $150) build a device that's Thunderbolt 3 or later, including USB4, compatible, and that appears on the PCIe bus as an add-in card driven by megaraid_sas, but where the hardware is, in fact, something running on an FPGA I configured.

This means that any issue with PCI or PCIe drivers is, definitionally, an issue triggered by pluggable devices; an attacker can read the fix for CVE-2021-47329, and design a board that exploits that bug, since as soon as it's far enough through the initialization sequence to have triggered the failure, the fake PCIe device can go through a hotplug cycle and disappear off the bus.

Add some flash, and a USB flash controller, and you now have a board that looks like a USB-C flash drive, but also exploits the megaraid_sas bug; it's not even hard to ensure that the USB controller stays powered down until the beginning of the megaraid_sas probe sequence, thus ensuring that if the user has configured their system to not trust Thunderbolt devices, and has failed to tell it to trust this one, the USB flash drive appears not to work.

Or go a different route - NVMe isn't that complicated to implement badly, and so your FPGA can appear to be a PCIe switch with an NVMe controller on one port, and the megaraid_sas exploit on the other. That way, the user has reason to enable Thunderbolt PCIe tunnelling for this device (since it doesn't work otherwise), and thus also enable the exploit.

Hmmm

Posted Jun 20, 2024 12:22 UTC (Thu) by hkario (subscriber, #94864) [Link]

this sounds dangerously close to "if it doesn't affect me, then it's not a real issue", and that's a slippery slope we don't want to be on

what's important is to establish a metadata standard that will allow easy sorting and rejection for CVEs that don't affect _you_ based on criteria _you_ decide on, not put that work on the Kernel's CNA.

I'm pretty sure that everybody will agree that you don't need to backport a CVE fix to a module you don't compile, especially if you don't plan to ever compile it.

kernel CNA

Posted Jun 19, 2024 18:55 UTC (Wed) by mstsxfx (subscriber, #41804) [Link] (8 responses)

Full disclosure, I am working for SUSE and I am commenting with my CVE consumer hat on.

I have to say that I have a couple of comments on the article.

First and foremost, I am afraid it looks more like self-promotion rather than a serious review of how the process works. This is unfortunate because it doesn't really describe the challenges this process has created for downstreams that are serious about security. I cannot really provide an unbiased view either, but the Linux kernel users and community really deserve a proper analysis.

The biggest problem I can see is that the new process cares much more about fixes than actual security threats. There is a very important distinction between the two. It is quite easy to look for patterns in patch changelogs and assign a CVE for them because they _might_ have security implications. On the other hand, it is really costly to analyse those CVEs for real-life security implications, especially when they are coming in the volumes we can observe. The Kernel CNA refuses to do a real security assessment for the CVEs they are creating [1]. This generates a lot of low-quality and/or dubious-impact CVEs that need to be triaged and assessed (e.g. assigned CVSS scoring), which imposes a big engineering cost on all those downstreams, which are pretty much forced to do that because CVEs are recognized as a security standard in the industry.

I do appreciate the recognition of SUSE's contribution during the evaluation process, but let me make it clear that we have resorted to rejecting only CVEs which were outright _wrong_. Initially we tried to dispute dubious ones as well, but very quickly learned that "We do not assume usecases" is something really hard to have a productive and technical discussion about. I was really hopeful that moving CVEs to the Kernel CNA would lead to more constructive discussions, TBH.

[1] https://lore.kernel.org/all/2024052453-afar-tartly-3721@g...

kernel CNA

Posted Jun 19, 2024 19:16 UTC (Wed) by mb (subscriber, #50428) [Link] (3 responses)

>but the Linux kernel users and community really deserves proper analysis.

You are free to do that analysis and publish it.
Or do you want somebody else to do it for your business for free?

kernel CNA

Posted Jun 20, 2024 7:09 UTC (Thu) by msmeissn (subscriber, #13641) [Link] (2 responses)

The problem is scale / multiplication. Currently all downstreams have to do the same evaluation, and it's dozens of them. And after that come the customers of the distributors, so it turns into hundreds if not thousands.

It would be preferred to have more due diligence at the root of the tree.

Also for SUSE we share our CVSS 3.1 scoring, same as Red Hat and others that can be reused by the community.

kernel CNA

Posted Jun 20, 2024 8:29 UTC (Thu) by kleptog (subscriber, #1183) [Link] (1 responses)

> The problem is scale / multiplication. Currently all downstreams have to do the same evaluation, and it's dozens of them. And after that come the customers of the distributors, so it turns into hundreds if not thousands.

Perhaps these people should be working together instead of duplicating effort? Some kind of CVE forge?

> It would be preferred to have more due diligence at the root of the tree.

Perhaps these downstreams could actually become part of the root instead of expecting someone else to do the work? This sounds like an example of an embarrassingly parallel problem: throwing more people at it actually makes it go faster.

I'm not sure what the effective difference is between getting a CVE rejected and just giving it a low score. It's not like you can actually prove there's no vulnerability, only that you haven't found one yet.

kernel CNA

Posted Jun 20, 2024 17:38 UTC (Thu) by vegard (subscriber, #52330) [Link]

> Perhaps these people should be working together instead of duplicating effort? Some kind of CVE forge?

It's difficult because we don't have objective criteria, we don't have common security models, and we don't have a good way to review/collaborate. It also seems hard to get people to agree -- both on definitions and models, but also on individual bugs. Different distros run with slightly different mitigations enabled which can make a big difference in whether it is even possible to exploit many of the bugs.

That said, I proposed something here, but didn't get very much response from other distros:

https://lore.kernel.org/all/20240311150054.2945210-2-vega...

(Disclaimer: I'm an Oracle employee working on Oracle Linux.)

kernel CNA

Posted Jun 20, 2024 1:41 UTC (Thu) by roc (subscriber, #30627) [Link] (2 responses)

Yes, this is the first time I remember that LWN has published an article by a contributor who is actually a major player in the controversy being reported on.

kernel CNA

Posted Jun 20, 2024 6:43 UTC (Thu) by NYKevin (subscriber, #129325) [Link] (1 responses)

IMHO there's nothing wrong as long as it's properly disclosed (which it clearly was in this case). Lots of news organizations run editorials about controversial issues every day. Just look at the Op-Ed section of the New York Times.

(Also, it would seem that the CNA team is not acknowledging that there is a legitimate "controversy" in the first place, based on the overall tone of this piece, but I could be misreading it. Either way, that's clearly disclosed as their opinion, not LWN's.)

kernel CNA

Posted Jun 20, 2024 7:52 UTC (Thu) by mstsxfx (subscriber, #41804) [Link]

Just to clarify my comment. I, by no means, meant to criticize LWN for publishing this article! I am also not criticizing Lee Jones for aiming to clarify the inner workings of the kernel CNA team. That is actually appreciated.

What I didn't really like is a statement like:
"An additional point that deserves attention is that many downstream maintainers ([...]) were content with the strategy of cherry-picking all relevant CVEs raised and calling it good in terms of ongoing security updates. This practice, of course, would lead to a false sense of security, since it misses hundreds of security-related fixes and ultimately results in less-secure kernels."

which establishes a perception of what is a less or a more secure kernel without any actual data to back those claims. We have already seen "studies" claiming that nothing except stable is safe to use, and IMHO statements like this just add to that narrative.

kernel CNA

Posted Jun 24, 2024 3:30 UTC (Mon) by raven667 (subscriber, #5198) [Link]

> challenges this process has created for downstreams that are serious about security

I think the challenge is probably more that any given Linux kernel probably has hundreds or thousands of exploitable security vulnerabilities, of varying difficulty and popularity to exploit, and a constant stream of fixes. It's fundamentally a firehose, and the CVE process is now documenting how much of a firehose has actually been going on all these years, below the surface of a lot of industry security practice, which was more focused on the much smaller number of visible, popular vulns under active public exploit, not on the entire attack surface of the kernel.

Or at least that's the impression that I have as a lay observer, maybe your perspective closer to the source is different

End-user implications of the new process

Posted Jun 19, 2024 19:13 UTC (Wed) by jikos (subscriber, #43140) [Link] (5 responses)

One thing which is not clear to me is -- does the kernel CNA realize that turning the whole CVE process upside down broke expectations of end users of the project we are all together working on?

Before this revolution, CVE for consumers meant "someone has actually invested a lot of thinking, analysis and review, and here is why this is a real security threat: because if you do X and Y the result will be Z, which is a security compromise".

Even worse, the new process delegated (pretty much by force, without asking) a gigantic responsibility to decide whether something is, or is not, a security issue to *them* (most of whom of course have no means of evaluating that, because they were always fully relying on a CVE being assigned to something that had been properly analyzed in depth and confirmed to really be a security issue).

The real paradigm change comes with change from "CVEs are assigned to _security vulnerabilities_" to "CVEs are assigned to _code fixes_, and go figure".

I don't feel the Linux userbase ecosystem is ready to absorb that. I don't even think it's properly understood by the general public, and it is causing a lot of chaos now. "Wow, is Linux really *all that* insecure all of a sudden, compared to all the other projects?". Of course it's not. But as long as we keep assigning CVEs to e.g. patches against tools/testing ....

End-user implications of the new process

Posted Jun 19, 2024 19:46 UTC (Wed) by pizza (subscriber, #46) [Link] (4 responses)

> Before this revolution, CVE for consumers meant "someone has actually invested a lot of thinking, analysis and review, and here is why this is a real security threat: because if you do X and Y the result will be Z, which is a security compromise".

I think you vastly overstate the amount of "thinking, analysis, and review" that was the norm.

End-user implications of the new process

Posted Jun 19, 2024 19:59 UTC (Wed) by jikos (subscriber, #43140) [Link] (3 responses)

> I think you vastly overstate the amount of "thinking, analysis, and review" that was the norm.

I never viewed the CVE reports as something that'd be having high standards even before these changes were implemented, sure.
Still, somebody had to really invest quite some time focused just on making that one particular issue a justifiable CVE.

These days, it's based on regexp matching in the commit log [1] and subsequently stating "what do we know".

I understand both approaches; I simply just don't think the current one is an improvement.

[1] https://git.kernel.org/pub/scm/linux/security/vulns.git/t...

Effort in getting a CVE for the kernel

Posted Jun 20, 2024 8:32 UTC (Thu) by farnz (subscriber, #17727) [Link] (2 responses)

Still, somebody had to really invest quite some time focused just on making that one particular issue a justifiable CVE

I think you misunderstand the nature of the CVE process quite significantly here; before the kernel became a CNA, the only extra effort in justifying a CVE for a given kernel bug was finding a CNA that would issue you a CVE number. Using the CNA of last resort isn't that hard, either.

So the actual change is that previously, CVEs for security bugs were only issued if a third party happened to look at a Linux security bug (no matter how exploitable), and decided that they wanted a CVE number for it; part of the obligations on a CNA that "owns" a product for CVE purposes is that you put a CVE number on all security bugs, and so now you get CVEs for all bugs.

Effort in getting a CVE for the kernel

Posted Jun 20, 2024 12:58 UTC (Thu) by msmeissn (subscriber, #13641) [Link] (1 responses)

Under the CNA Rules, a CNA only SHOULD, not MUST, assign CVEs to vulnerabilities.

The Kernel CNA assigns CVEs to fixes, without really even looking at whether they are actual vulnerabilities.

A limitation in the scope of assignment would easily be possible (like no "testsuite problems", no "boot problems").

When should a CNA assign a CVE

Posted Jun 20, 2024 13:26 UTC (Thu) by farnz (subscriber, #17727) [Link]

My understanding from talking to people who work on the CVE Project itself (rather than at a CNA) is that the word "SHOULD" in this case is meant to be interpreted as "you should normally assign a CVE for all vulnerabilities, fixed or unfixed, but we understand that there are cases where assigning a CVE for an unfixed vulnerability is problematic, and we'll consider your behaviour on a case by case basis".

The intent, however, is that you assign CVEs to all vulnerabilities you know about in the project, even if you only learn about them as part of getting a fix for the vulnerability.

And since you're saying that the kernel CNA assigns CVEs to things that aren't vulnerabilities, can you give a CVE number assigned by the kernel CNA to something that's not a vulnerability at all? Not "minor", or "too hard to exploit", but "not a vulnerability at all".

Why is the kernel so special?

Posted Jun 19, 2024 19:29 UTC (Wed) by SLi (subscriber, #53131) [Link] (32 responses)

So, I don't really have a big opinion on this entire dispute, but this sounds a bit silly if taken to the extreme (and it sounds like the idea is to take it to the extreme since "we cannot know"):

> One question that crops up from time to time can be summarized as: "why are we so overzealous and why can't we only create CVEs for severe security fixes?"; the answer is that quality assessment is an impossible task. Since the kernel is infinitely configurable and adaptable, it's not possible to know all the ways in which it could be deployed and utilized. Evaluating potential vulnerabilities and associating generic levels of bug reachability, severity, and impact is infeasible.

I have a hard time imagining the kernel is particularly special here compared to any other complex piece of software. Sure, users can make arbitrarily silly decisions, like keeping usernames secret and publishing passwords. I think it's perfectly fine to say that any user that does so gets to keep the breakage.

Why is the kernel so special?

Posted Jun 19, 2024 21:18 UTC (Wed) by flussence (guest, #85566) [Link] (31 responses)

The way I see it, one *or both* of the following must be true:

1. The CVE system is a farce if it completely melts down over a single project releasing a few hundred of them over the course of several months.
2. The CVE system has always been a farce because Debian's security team didn't give two craps about exposing its users and downstreams to libav for years, while ffmpeg received hundreds of CVE fixes.

People who put this much stock in CVEs will apparently fall for a road tunnel painted onto a cliff face, and that kind of sums up most of the security industry. It's all prayer and superstition obfuscated through tech jargon.

Why is the kernel so special?

Posted Jun 19, 2024 21:24 UTC (Wed) by jikos (subscriber, #43140) [Link] (30 responses)

So, if that's, hypothetically, indeed the case (both of them), what's your proposed solution?

Why is the kernel so special?

Posted Jun 19, 2024 22:10 UTC (Wed) by pizza (subscriber, #46) [Link] (29 responses)

> So, if that's, hypothetically, indeed the case (both of them), what's your proposed solution?

The _only_ path forward is for folks that care about this stuff to acknowledge that they don't want _software_, they want an ongoing _service_, with correspondingly different economic properties.

This means they have to, on an ongoing basis, expend the resources to (a) continually evaluate everything in their supply chain themselves, or (b) pay someone else to do it for them. As opposed to the current status quo of (c) expecting someone else to do it for them, for free, indefinitely.

Why is the kernel so special?

Posted Jun 20, 2024 0:53 UTC (Thu) by DemiMarie (subscriber, #164188) [Link] (1 responses)

Isn't this part of the service that Red Hat, Canonical, and SUSE provide?

Why is the kernel so special?

Posted Jun 20, 2024 6:45 UTC (Thu) by NYKevin (subscriber, #129325) [Link]

Yes, but that also means it's their problem. They signed up to deliver the service, now they have to do the work.

Why is the kernel so special?

Posted Jun 20, 2024 9:03 UTC (Thu) by mstsxfx (subscriber, #41804) [Link] (26 responses)

The fact is that the Kernel CNA is the _only_ authority to assign CVEs to the Linux kernel. So this is not relying on somebody else doing the work, but rather being forced to cope with whatever the CNA comes up with. Please note that neither the approach to assigning CVEs nor involvement in the process has been discussed with other downstreams, except for some stable tree maintainers.

Kernel CNA acts as required by CVE rules

Posted Jun 20, 2024 10:03 UTC (Thu) by farnz (subscriber, #17727) [Link] (25 responses)

The issue is that the rules for being a CNA require you to issue a CVE for all vulnerabilities in your product, regardless of severity; the only way for downstreams to reduce the number of CVEs the kernel CNA issues is to find a convincing way to explain that a given bug is not a vulnerability.

A CNA that claims responsibility for a project must issue CVE numbers for all vulnerabilities it knows about in its project, or lose its CNA status. In return for this, the CVE Program grants CNAs the right to block other CNAs from issuing CVE numbers for vulnerabilities in their projects. Previously, no CNA claimed responsibility for the Linux kernel, so while anyone could issue a CVE number for it, nobody was required to; now the kernel CNA claims responsibility for the Linux kernel, and thus has to issue CVE numbers for it, but now a separate security researcher can't get another CNA to issue a CVE number for the kernel (or convince the CNA of last resort that the kernel CNA is refusing to issue a CVE number for bogus reasons).

And note that if the CNA of last resort issues too many CVE numbers for a project, the CNA that claimed it stops being a CNA. Similar applies if the CVE Program discovers that a CNA is not issuing CVE numbers for vulnerabilities in projects it claims, even if those vulnerabilities are low severity.

A lot of the noise around this process comes from people simply not understanding how the CVE Program is meant to work, because the kernel has been outside it for so long.

Kernel CNA acts as required by CVE rules

Posted Jun 20, 2024 12:35 UTC (Thu) by msmeissn (subscriber, #13641) [Link] (24 responses)

Greg, this is incorrect. CVE assignment is not a MUST, it is a SHOULD.

https://www.cve.org/ResourcesSupport/AllResources/CNARules

> 4.2.2.1 CNAs SHOULD assign a CVE ID if:

> the CNA has reasonable evidence to determine the existence of a Vulnerability (4.1), and
> the Vulnerability has been or is expected to be Publicly Disclosed, and
> the CNA has appropriate scope (3.1).

> 4.2.2.2 CNAs SHOULD Publicly Disclose and assign a CVE ID if the Vulnerability:

> has the potential to cause significant harm, or
> requires action or risk assessment by parties other than the CNA or Supplier.

There is also a clause saying that if reporters come to your CNA, you SHOULD take their input seriously, and if you do not assign, they can go to your root CNA tree.

Also likely interesting for you, as you are NOT following it:

> 4.2.7 CNAs SHOULD assign CVE IDs to Vulnerabilities, not Fixes for Vulnerabilities.

> CNAs SHOULD assign CVE IDs whether or not a Fix is available.

So basically you should prove a vulnerability before you assign a CVE.

Kernel CNA acts as required by CVE rules

Posted Jun 20, 2024 12:58 UTC (Thu) by farnz (subscriber, #17727) [Link] (23 responses)

I'm not Greg. Please don't be rude by misnaming me, unless your intention is to offend.

My read of those rules is that the kernel is following them; there is a vulnerability, it's publicly disclosed by the fix, and the kernel is assigning a CVE number to it.

AFAICT, none of the kernel CNA assigned CVE numbers are going to things that are not a vulnerability; the complaint is that the kernel is assigning CVE numbers to "minor" vulnerabilities that are "too hard" to exploit.

And I'm not a CNA at all, so what I do is irrelevant.

Kernel CNA acts as required by CVE rules

Posted Jun 20, 2024 13:18 UTC (Thu) by gioele (subscriber, #61675) [Link] (2 responses)

> AFAICT, none of the kernel CNA assigned CVE numbers are going to things that are not a vulnerability; the complaint is that the kernel is assigning CVE numbers to "minor" vulnerabilities that are "too hard" to exploit.

Proposal: A group of people that dislike the current approach gets together and nominates one "minor" vulnerability that is "too hard to exploit". The Linux Foundation then announces a high-5-digit USD bounty for an exploit for that specific vulnerability.

If the bounty is not claimed in one month the Linux CNA is asked to tune down their approach.

If the bounty is claimed, then the whole discussion about "minor" vulnerabilities is put to rest.

Bounty for an exploit

Posted Jun 20, 2024 13:46 UTC (Thu) by farnz (subscriber, #17727) [Link] (1 responses)

Given the one-off costs of exploiting some vulnerabilities, 5-digit USD (no more than $99,999) isn't enough and one month from nomination to exploit isn't enough, either. Low 7-digit USD would be - at least $1,000,000, and a year would be more practical than a month, to allow motivated parties to find the exploit.

Note that the reason I say that 5-digit USD is not enough is that many vulnerabilities will require you to build hardware to exploit them - for example, CVE-2021-47329 would need you to design, debug, and build a Thunderbolt 3 to FPGA board so that you could exploit it. While the board's BOM price is about $250, you can expect to spend much more than that during the debugging stage unless you already have the tools needed to debug PCIe and Thunderbolt hardware - protocol analyzers to let you monitor the USB4/Thunderbolt 3 traffic and the PCIe traffic to/from your FPGA are going to set you back about $20,000 in total, for example.

Similarly, while debugging you won't want to be buying one-off builds of your board design; you'll want to pay for about 10 populated units, so that you don't wait the full turnaround time (typically a month for cheap manufacturers) for a new board if there's a manufacturing issue caused by a marginal design, but can discard the faulty boards and just use the ones that work well enough for testing.

Bounty for an exploit

Posted Jun 21, 2024 3:15 UTC (Fri) by wtarreau (subscriber, #51152) [Link]

Not just this; it's also that some of those able to write exploits for minor vulns sometimes work for organizations that make revenue from such exploits and pay the developers much more than these bounties. Why would they compete with their employer for a one-time bounty?

Kernel CNA acts as required by CVE rules

Posted Jun 20, 2024 14:38 UTC (Thu) by mstsxfx (subscriber, #41804) [Link] (19 responses)

> AFAICT, none of the kernel CNA assigned CVE numbers are going to things that are not a vulnerability; the complaint is that the kernel is
> assigning CVE numbers to "minor" vulnerabilities that are "too hard" to exploit.

I do not agree with this statement. Let's put aside CVEs that got successfully disputed (that requires engineering time to do, btw). Just to give a couple of examples:
- CVE-2023-52596 - there is no upstream kernel code that could trigger the failure, so we are talking about unknown 3rd-party modules.
- Fixes in tests getting CVEs with a very vague argument that somebody is running them in production environments. I find this a really dubious justification because, even if somebody does that, it is mostly shooting themselves in the foot. I fail to see a security threat. Sure, you can shoot your machine down by running tests in kernel space, but it is not an untrusted entity running those tests if this is a production system.
- The whole WARN_ON story, because somebody might be running panic_on_warn (as we cannot assume usecases). WARN* are defined and used to flag recoverable conditions. panic_on_warn is a debugging tool to trigger kdump early to do analysis! Whoever runs with panic_on_warn deliberately makes the decision to panic the system on recoverable conditions. That doesn't resemble a security vulnerability by far, because the only fix for that is to remove all WARN*. Plugging one at a time doesn't make the system safer.

This is not an exhaustive list, of course. It is possible to assign CVSS 0 to those CVEs but, as already mentioned elsewhere, every downstream has to do that evaluation, which costs a lot of engineering time, all with dubious if any actual value IMHO.

Kernel CNA acts as required by CVE rules

Posted Jun 20, 2024 14:57 UTC (Thu) by farnz (subscriber, #17727) [Link] (18 responses)

I look at the fix for CVE-2023-52596, and it's clearly exploitable by anyone who can load LKMs into the running kernel. And that makes it a vulnerability where the CVE Program rules say it should be given a CVE number, albeit one with a very low severity.

Same applies to fixes in tests; CVE numbers aren't just for "production environments", they're for all vulnerabilities in projects. You're arguing that because the test cases "shouldn't" be run, vulnerabilities in them shouldn't be reported, contrary to CVE Program rules.

And the whole "WARN_ON" story isn't about CVEs, AFAICT - you're pointing to something that's not a CVE number, and using that to argue that the kernel is too eager to assign CVE numbers. All the CVEs I can find that have WARN_ON mentioned in them are cases where there's a security vulnerability separate to the WARN_ON; some are cases where the WARN_ON body is itself wrong, some are cases where code after the WARN_ON assumes that the WARN_ON did not fire.

What you seem to be complaining about is all stuff where the CVE Program rules are intended to assign CVE numbers. And that's not the kernel CNA's fault - that's the CVE Program rules.

Kernel CNA acts as required by CVE rules

Posted Jun 20, 2024 15:33 UTC (Thu) by mstsxfx (subscriber, #41804) [Link] (17 responses)

> I look at the fix for CVE-2023-52596, and it's clearly exploitable by anyone who can load LKMs into the running kernel. And that makes it a
> vulnerability where the CVE Program rules say it should be given a CVE number, albeit one with a very low severity.

Anyone who can load an LKM into a running kernel can do whatever they like to the kernel. Your security just ended there!

[...]

> What you seem to be complaining about is all stuff where the CVE Program rules are intended to assign CVE numbers. And that's not the kernel
> CNA's fault - that's the CVE Program rules.

As has been explained elsewhere this is not really the case. CVE rules do not dictate anything like that. Sure you can argue to an extreme but please keep in mind that CVEs are there to have a practical meaning. The more they are diverging from that the less useful they are.

Kernel CNA acts as required by CVE rules

Posted Jun 20, 2024 15:42 UTC (Thu) by pizza (subscriber, #46) [Link] (2 responses)

> Anyone who can load LKM in to running kernel can do whatever to the kernel. Your security just ended there!

In other words, something should be only considered a vulnerability if there are no other [potential] vulnerabilities, got it.

Module loading

Posted Jun 20, 2024 15:48 UTC (Thu) by corbet (editor, #1) [Link] (1 responses)

I would read that more like "something should only be considered a vulnerability if the system is not already fully compromised".

Module loading

Posted Jun 20, 2024 16:24 UTC (Thu) by mstsxfx (subscriber, #41804) [Link]

Correct, exactly what I meant.

Kernel CNA acts as required by CVE rules

Posted Jun 20, 2024 15:48 UTC (Thu) by farnz (subscriber, #17727) [Link] (13 responses)

My security only ended if someone can load an arbitrary LKM into the kernel; the kernel has a bug where an LKM without a security bug itself can cause a vulnerability to exist in the system, and that's pretty much the definition of a vulnerability that needs a CVE number.

The explanation you've given elsewhere contradicts what the rules say, and also the way CVE Program people have told me the rules are meant to be understood. You are supposed to issue CVE numbers for all known vulnerabilities, because the goal of the CVE Program is to provide a quick shorthand for talking about vulnerabilities.

So, you can quickly say, "this kernel has CVE-2023-52596", which tells the person loading an apparently safe LKM that the kernel has a bug that will turn into a vulnerability when this LKM is loaded, even though the LKM does not have vulnerabilities itself, and if you rebuilt it for a kernel without CVE-2023-52596, loading it would be safe.

And that's all that the CVE Program aims to do; classifying CVEs by how dangerous they are is the CVSS program, which is separate precisely because the CVE Program does not want to be limited to "dangerous" vulnerabilities, but wants to give you a quick way to refer to all vulnerabilities.

Kernel CNA acts as required by CVE rules

Posted Jun 20, 2024 16:50 UTC (Thu) by mstsxfx (subscriber, #41804) [Link] (12 responses)

> My security only ended if someone can load an arbitrary LKM into the kernel; the kernel has a bug where an LKM without a security bug itself can
> cause a vulnerability to exist in the system, and that's pretty much the definition of a vulnerability that needs a CVE number.

Kernel module interfaces have never been designed to be defensive. The core kernel trusts that the module is using them properly. The above-mentioned CVE has potential security implications iff a kernel module or built-in code creates an empty sysctl directory, which no kernel code in the tree does.

Protection against theoretical out-of-tree modules using kernel interfaces incorrectly has never been a concern - out of tree modules are not even recognized as supported by the kernel community in fact. Creating CVEs for issues like that simply doesn't make much sense.

Anyway, it seems that our views of what CVEs are supposed to represent are way too distant to find a common ground. So I leave it at that.

Kernel CNA acts as required by CVE rules

Posted Jun 20, 2024 16:57 UTC (Thu) by pizza (subscriber, #46) [Link] (8 responses)

> Anyway, it seems that our views of what CVEs are supposed to represent are way too distant to find a common ground. So I leave it at that.

I feel the need to point out that *neither* of your views matter here. The only ones whose views actually matter are the ones running the Linux kernel CNA, and like it or not, they have made their policies clear.

Kernel CNA acts as required by CVE rules

Posted Jun 20, 2024 17:10 UTC (Thu) by mstsxfx (subscriber, #41804) [Link]

I am really wondering which of the CVE consumers find the current mode of operation of the Kernel CNA useful.

Kernel CNA acts as required by CVE rules

Posted Jun 21, 2024 6:51 UTC (Fri) by Wol (subscriber, #4433) [Link] (4 responses)

Actually, their views don't matter either!

The only views that matter are the views of the top-level CNA - the people who set the rules.

As others have pointed out, if the linux guys don't follow those rules, their CNA status will be taken away and we'll be back to the status quo ante, where ANY CNA can issue a linux CVE, and nobody in the linux community has any say at all.

We can bikeshed all we like about whether the rules are sensible, but unless you're prepared to try and change the rules, whinging about Greg and co following the rules is a waste of hot air ...

Cheers,
Wol

Kernel CNA acts (or not?) as required by CVE rules

Posted Jun 21, 2024 7:46 UTC (Fri) by SLi (subscriber, #53131) [Link] (3 responses)

I'm not sure. If one wants a change, a lot of affected people complaining that (my paraphrase:) the previous approach was useful to them but the current one is useful to nobody sounds like pretty useful activity. It may be conducive to any combination of rule changes; changes in how the rules are interpreted (there are pretty clear and sufficiently substantiated allegations in these comments that the claim that the CNA rules absolutely mandate the current approach is simply not true); or even parallel approaches to tracking vulnerabilities springing up and potentially eventually being similarly (misguidedly or not) mandated to be considered as parts of some processes.

I think it's clear that there are lots of people who were happier with the previous approach. Whether that was reasonable or not is a very good question; if the reality is, as the kernel devs seem to be arguing, that it basically resulted in paper pushing and checkmarks for a false sense of security, then probably those people should not be enabled. Having said that, I find it a difficult allegation to accept at face value that Linux distribution security teams are essentially either clueless or malicious.

What is unclear to me is whether there are any people who are happy with the current approach. Well, except maybe in the sense that they are happy that those paper pushers are stymied. Instead, the justification seems to be (whether true or not) that "the rules require this".

TBH, I can see, from the kernel security team POV, the justification. I think it's this (correct me if wrong): What the distribution security teams were doing is harmful to security because it leads to the mistaken idea that anything but the latest stable kernel can be secure. Thus, this is a way to pressure them to move to the latest stable kernel, making up-to-date installations of Linux more secure (which is a very reasonable goal).

But at the same time, I do think it's fair to consider that pretty mean. "We think what Linux distributions are doing doesn't work, so we spend some extra effort to sabotage it" doesn't sound so benign to me.

Changing the CNA rules

Posted Jun 21, 2024 8:27 UTC (Fri) by farnz (subscriber, #17727) [Link]

The hard question for any rules change: what rules change would permit the kernel CNA to not issue CVEs for genuine vulnerabilities that only affect a small number of users, while not permitting a motivated vendor to disguise serious issues under the same rubric?

The current rules prohibit that by saying that if it's a vulnerability, even a minor one, even one that only affects a tiny subset of users, it gets a CVE number. That means that once a vendor knows they have a vulnerability, whether or not it's fixed, they're expected to issue a CVE number for it in due course; as a result, every vulnerability in a product claimed by a CNA can be talked about in terms of its CVE number.

If the CVE Project changes the rules to allow you to refuse to issue CVE numbers unless you have a "serious" issue (for whatever definition of "serious" it chooses), then you give vendors an incentive to describe all their vulnerabilities as "not serious"; we already see this with the CVSS scoring vendors give their own vulnerabilities, where it's common for some vendors to downplay the severity of the vulnerability compared to independent scorers, in the hope of disguising the issue.

And that leads to another point; anyone who only cares about "serious" vulnerabilities can already limit themselves by refusing to consider CVEs with low CVSS scores, either overall or in the areas they care about. That limits you to only the vulnerabilities that have been analysed and determined to be a significant risk, leaving the "noise" behind.

Kernel CNA acts (or not?) as required by CVE rules

Posted Jun 21, 2024 8:41 UTC (Fri) by mstsxfx (subscriber, #41804) [Link] (1 responses)

> the previous approach was useful

Let me just clarify on this. The previous approach had many flaws and problems as well. Going all the way back would be just a regression. It is really nice to see a better transparency (https://git.kernel.org/pub/scm/linux/security/vulns.git). It is also much better that the kernel community is involved (although there are improvements possible in that). I think the main improvement to the current process would be to start evaluating actual vulnerabilities rather than just tagging code fixes. That will certainly require more than 3 people in the team!

I think it would also help to agree on who those CVEs are actually created for, because that might help to scope the land. It is really easy to keep arguing to the extreme, but I do not think this is helpful in any way. There is nothing like 100% security. What really matters are the threat models that people actually care about. There are many of them, and it is really great to talk to those people. Assuming we cannot assume anything will very likely help none of them, though, because they will just be constantly flooded with stuff they mostly do not care about.

Sorry if that sounds like wasting a hot air but I really do care about having a useable CVE model.

Next improvement to the process

Posted Jun 21, 2024 9:27 UTC (Fri) by farnz (subscriber, #17727) [Link]

I think the next missing piece is getting people to work together on providing (partial or full) CVSS vectors for all kernel CVEs; then, individual consumers of CVEs can filter on components of the CVSS vector so far to determine whether they spend any time on this (which can be automated).

For example, the sysctl table vulnerability is almost certainly going to have AV:L, PR:H and UI:R in its vector; you need a local account to trigger it, and you need high privileges to load LKMs, which don't just appear on the system themselves, but need user interaction to make available or to load into the kernel.

This then spreads the load of determining which vulnerabilities are serious; as a distro security team, you contribute to CVSS vectors, and then use them to decide what you're going to do - you might, for example, decide that because of the intended use cases for your distro, if a vulnerability has PR:H and UI:R in its vector, it's not considered further, or that AV:L and AV:P vulnerabilities are low priority for detailed assessment.

It's important to note that my suggestion is that people who care contribute parts of CVSS vectors, not scores. It should be perfectly fine for you to say "this is PR:H, UI:R, so I'm stopping here", and for a security team that cares about PR:H vulnerabilities in the kernel to pick up from there and add the rest of the vector.
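
To make the idea concrete, here is a purely illustrative Python sketch of filtering on partial CVSS vectors of the kind described above; the CVE identifiers, vector fragments, and the skip policy are all invented for the example, not real assessments:

    # Illustrative sketch: filter CVEs on partial CVSS 3.1 vector components.
    # The identifiers, vector fragments, and policy below are made up.
    partial_vectors = {
        "CVE-XXXX-00001": {"AV": "L", "PR": "H", "UI": "R"},  # local, high privileges, user interaction
        "CVE-XXXX-00002": {"AV": "N", "PR": "N"},             # network-reachable, no privileges known so far
        "CVE-XXXX-00003": {"AV": "P"},                        # physical access required
    }

    def needs_review(vector):
        """One distro's hypothetical policy: skip physical-access-only issues
        and issues needing both high privileges and user interaction."""
        if vector.get("AV") == "P":
            return False
        if vector.get("PR") == "H" and vector.get("UI") == "R":
            return False
        return True

    for cve, vec in sorted(partial_vectors.items()):
        print(cve, "review" if needs_review(vec) else "skip")

A different team could plug in its own policy function while reusing the same contributed vector fragments.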

Kernel CNA acts as required by CVE rules

Posted Jun 21, 2024 9:53 UTC (Fri) by zdzichu (subscriber, #17118) [Link] (1 responses)

It's a nitpicking indicator.

Kernel CNA acts as required by CVE rules

Posted Jun 21, 2024 14:37 UTC (Fri) by zdzichu (subscriber, #17118) [Link]

Sorry, wrong comment! I wonder why this is happening from time to time.

Kernel CNA acts as required by CVE rules

Posted Jun 20, 2024 17:00 UTC (Thu) by farnz (subscriber, #17727) [Link] (2 responses)

If we're able to exclude vulnerabilities from consideration because "we don't support that use of the product", then virtually all buffer overflows can be excluded as "unsupported", since you sent data to the product outside its supported cases. Unless you're willing to argue that one, saying "Protection against theoretical out-of-tree modules using kernel interfaces incorrectly has never been a concern - out of tree modules are not even recognized as supported by the kernel community in fact" is the same as saying "protection against theoretical clients that send too much data incorrectly has never been a concern - unapproved clients are not even recognised as supported by the project community in fact".

Kernel CNA acts as required by CVE rules

Posted Jun 21, 2024 1:39 UTC (Fri) by foom (subscriber, #14868) [Link] (1 responses)

Those two hypotheticals are the same only if the "unapproved client" can only possibly connect to the service if the fully-trusted administrator of the service has installed that client.

Kernel CNA acts as required by CVE rules

Posted Jun 21, 2024 8:12 UTC (Fri) by farnz (subscriber, #17727) [Link]

I just declare that unapproved clients must not be allowed to connect to the service, so it's unsupported. That then puts me in the same place as the kernel not supporting out-of-tree modules - you're doing something unsupported, ergo it's not a vulnerability if the sysadmin chooses to install an unapproved client.

7.5% of kernel “CVEs” rejected on further examination

Posted Jun 19, 2024 20:38 UTC (Wed) by ewen (subscriber, #4772) [Link] (4 responses)

Reading this article (which seems to be an insider “please understand what we’re doing” post so presumably using the most favourable numbers), I was struck by the contrast between these two quotes:

“since we started this endeavor back in February, it has only resulted in 863 allocations out of the 16,514 commits”

“If the team agrees with the evaluation, the CVE assignment will be promptly rejected. Since the start of this endeavor, 65 such instances have occurred.”

65 / 863 is 7.5%. So even just counting the period where third parties tried to stem the tide of noise “CVEs” just asking “are you sure” in a convincing manner caused the kernel “CVE” team to concede 7.5% of their “CVEs” shouldn’t have been issued :-/ (Up thread some seem to have realised it’s a lot of work for little result to challenge these “CVE”s and stopped trying to do so.)

I get that analysing bugs for security risks is a lot of work. But the switch from “someone else should do this analysis on all bugs” (no kernel originated CVEs) to “someone else should do this analysis on all bugs that could possibly be a security risk” (many kernel originated “CVEs) doesn’t seem all that different to me. It’s still passing most of the difficult work off to “someone else”. (It’s worth noting at least one of the kernel “CVE” team is paid by the Linux Foundation to work on the kernel for the broader community good. And is still choosing to insist “someone else” should do the work of analysing the real risk, while they just keep adding to the “to do” list.)

Ewen

7.5% of kernel “CVEs” rejected on further examination

Posted Jun 20, 2024 7:09 UTC (Thu) by vegard (subscriber, #52330) [Link] (3 responses)

Keep in mind that those 65 include duplicates, i.e. CVEs that were assigned by other CNAs before kernel.org became one. So it's not like they are all false positives/non-issues.

7.5% of kernel “CVEs” rejected on further examination

Posted Jun 20, 2024 8:27 UTC (Thu) by mstsxfx (subscriber, #41804) [Link] (2 responses)

Incorrect, those were rejected as really bogus.

7.5% of kernel “CVEs” rejected on further examination

Posted Jun 20, 2024 10:47 UTC (Thu) by vegard (subscriber, #52330) [Link] (1 responses)

My reading of the penultimate paragraph is that those 65 included the duplicates reported by SUSE.

7.5% of kernel “CVEs” rejected on further examination

Posted Jun 20, 2024 11:01 UTC (Thu) by mstsxfx (subscriber, #41804) [Link]

Rejects do not contain reasoning for the rejections so you will need to follow discussions on the ML for the CVE.

I do not know where the idea of duplicates came from. We have rejected CVEs filed by the kernel CNA, generally falling into several categories - fixes for userspace tools like perf, annotations like data_race which do not affect generated code, build fixes, incorrect fixes reverted later on, etc. They generally seemed to fall into the pattern matching pointed out elsewhere.

Patch author notification

Posted Jun 19, 2024 21:25 UTC (Wed) by paulbarker (subscriber, #95785) [Link] (1 responses)

After reading this article, reading the docs [1] and skimming some recent reports on the linux-cve-announce list [2], one thing is left unclear to me: Does the author of a patch get notified if it is considered to qualify for a CVE? How about the relevant subsystem maintainers?

[1]: https://docs.kernel.org/process/cve.html
[2]: https://lore.kernel.org/linux-cve-announce/

Patch author notification

Posted Jun 19, 2024 22:44 UTC (Wed) by kdave (subscriber, #44472) [Link]

I'm maintaining a subsystem (btrfs) and I learn about CVEs assigned to my code only from external sources, never any CC or a question about CVE eligibility. I've evaluated some reports for rejection in the beginning but it turned out to be quite futile very soon. From what I've seen the bar is set too low for assignment and too high for rejection.

The forest for the trees

Posted Jun 20, 2024 16:51 UTC (Thu) by atnot (subscriber, #124910) [Link] (3 responses)

The discussions around this are always super infuriating to me. Yes, sure, the line between exploitable and non-exploitable memory bugs is extremely subtle and subjective.

But this wouldn't even be a discussion if there weren't an unhandleable torrent of them in the first place!

If you're having non-stop arguments about which trees are burning, you're probably in a forest fire!

The forest for the trees

Posted Jun 20, 2024 18:47 UTC (Thu) by mb (subscriber, #50428) [Link] (2 responses)

> But this wouldn't even be a discussion if there weren't an unhandleable torrent of them in the first place!

Yes. That is true.
But the bugs and possible vulnerabilities are there, no matter whether you close your eyes or not.

> If you're having non-stop arguments about which trees are burning, you're probably in a forest fire!

Correct. Pretty much all software is a forest burning to the ground and many people like it for the warm and fuzzy feeling. Unless somebody assigns a CVE to each tree.

The forest for the trees

Posted Jun 20, 2024 21:08 UTC (Thu) by atnot (subscriber, #124910) [Link] (1 responses)

> But the bugs and possible vulnerabilities are there, no matter whether you close your eyes or not.

Yeah, that's what I'm saying. If you're shocked by the number of CVEs, what you should really be concerned with is that the kernel has so many memory issues that nobody even has time to triage them. That's the real problem. But nobody is talking about that.

It's like that joke about turning off the carbon monoxide detector because the constant beeping and blinking is giving you a headache.

The forest for the trees

Posted Jun 20, 2024 22:50 UTC (Thu) by Wol (subscriber, #4433) [Link]

> It's like that joke about turning off the carbon monoxide detector because the constant beeping and blinking is giving you a headache.

Sadly, that's NOT a joke.

We picked up our new caravan a couple of months back, and apparently it really did happen - a guy brought his van in for a service, and the service bloke's reaction was "where's the CO monitor gone?". "Oh we took it out, it wouldn't stop bleating"

Fortunately for that guy, the service bloke asked, rather than just replaced it. They did a gas check, and discovered a leak ...

Cheers,
Wol

For instance, a fix repairing a broken LED driver would never be sensibly considered for assignment.

Posted Jun 21, 2024 8:32 UTC (Fri) by geert (subscriber, #98403) [Link] (1 responses)

Depends what the purpose of the attached LED is? Safety indicator? Infrared-LED for wireless transmission of commands? Opto-coupler controlling heavy equipment?

For instance, a fix repairing a broken LED driver would never be sensibly considered for assignment.

Posted Jun 21, 2024 14:38 UTC (Fri) by zdzichu (subscriber, #17118) [Link]

It's a nitpicking indicator.

How should CVE allocation fail?

Posted Jun 21, 2024 16:01 UTC (Fri) by madhatter (subscriber, #4665) [Link] (1 responses)

It seems to me that there are two basic ways to err in assigning CVEs: all security bugs will have a CVE (but some non-security bugs might also get a CVE), or all things with a CVE are security bugs (but some security bugs might not get a CVE). It further seems to me that some people (for example, those who make patching decisions based on the existence of CVEs) are implicitly using one scheme, while others (including the kernel CNA guys) are using the other. I'm not suggesting that either scheme is right, but all human processes are fallible, so deciding how you're going to fail is quite important.

Until there is consensus on this, I fear we will see people talking past each other, because they're missing the mismatch in underlying assumptions. I personally feel I can see a fair bit of that mismatch in some of the comment exchanges above: people are making excellent points at each other, but no high-level agreement is possible, because there is no low-level agreement.

How should CVE allocation fail?

Posted Jun 23, 2024 15:33 UTC (Sun) by farnz (subscriber, #17727) [Link]

From what I can gather, the CVE Project would prefer that if you err in assigning CVEs, you do so in the first manner - all security issues have CVE numbers, so that we can use the CVE number as shorthand for discussing a given security issue. If you want to filter out some security issues, that's what the CVSS vector is for - but that's metadata attached to CVEs, not an integral part.

And it's my belief that the next step forward for the kernel is going to be a way for the parties who care about security bugs (distros, security researchers etc) to contribute partial CVSS vectors for kernel CVEs, so that people who depend on not wasting time on "minor" (by their values) CVEs can filter based on partial CVSS vectors, and contribute back the bare minimum CVSS vector pieces that they've determined as part of "nope, not for us".


Copyright © 2024, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds