The state of the kernel Rust experiment
Headlines
Miguel Ojeda, who led the session, started with some headlines. The Nova driver for NVIDIA GPUs is coming, with pieces already merged into the mainline, and the Android binder driver was merged for 6.18. Even bigger news, he said, is that Android 16 systems running the 6.12 kernel are shipping with the Rust-written ashmem module. So there are millions of real devices running kernels with Rust code now.
Meanwhile, the Debian project has, at last, enabled Rust in its kernel builds; that will show up in the upcoming "forky" release. The amount of Rust code in the kernel is "exploding", having grown by a factor of five over the last year. There has been an increase in the amount of cooperation between kernel developers and Rust language developers, giving the kernel project significant influence over the development of the language itself. The Rust community, he said, is committed to helping the kernel project.
The rust_codegen_gcc effort, which grafts the GCC code generator onto the rustc compiler, is progressing. Meanwhile the fully GCC-based gccrs project is making good progress. Gccrs is now able to compile the kernel's Rust code (though, evidently, compiling it to correct runnable code is still being worked on). The gccrs developers see building the kernel as one of their top priorities; Ojeda said to expect some interesting news from that project next year.
With regard to Rust language versions, the current plan is to ensure that the kernel can always be built with the version of Rust that ships in the Debian stable release. The kernel's minimum version would be increased 6-12 months after the corresponding Debian release. The kernel currently specifies a minimum of Rust 1.78, while the current version is (as of the session) 1.92. Debian is shipping 1.85, so Ojeda suggested that the kernel move to that version, which would enable the removal of a number of workarounds.
Jiri Kosina asked how often the minimum language version would be increased; Ojeda repeated that it would happen after every Debian stable release, though that could eventually change to every other Debian release. It is mostly a matter of what developers need, he said. Linus Torvalds said that he would be happy to increase the minimum version relatively aggressively as long as it doesn't result in developers being shut out. Distributors are updating Rust more aggressively than they have traditionally updated GCC, so requiring a newer version should be less of a problem.
Arnd Bergmann said that the kernel could have made GCC 8 the minimum supported version a year earlier than it did, except that SUSE's SLES was running behind. Kosina answered that SUSE is getting better and shipping newer versions of the compiler now. Dave Airlie worried that problems could appear once the enterprise distributors start enabling Rust; they could lock in an ancient version for a long time. Thomas Gleixner noted, though, that even Debian is now shipping GCC 14; the situation in general has gotten better.
Still experimental?
Given all this news, Ojeda asked, is it time to reconsider the "experimental" tag? He has been trying to be conservative about asking for that change, but said that, with Android shipping Rust code, the time has come. Airlie suggested making the announcement on April 1 and saying that the experiment had failed. More seriously, he said, removing the "experimental" tag would help people argue for more resources to be directed toward Rust in their companies.
Bergmann agreed with declaring the experiment over, worrying only that Rust still "doesn't work on architectures that nobody uses". So he thought that Rust code needed to be limited to the well-supported architectures for now. Ojeda said that there is currently good support for x86, Arm, LoongArch, RISC-V, and user-mode Linux, so the main architectures are in good shape. Bergmann asked about PowerPC support; Ojeda answered that the PowerPC developers were among the first to send a pull request adding Rust support for their architecture.
Bergmann persisted, asking about s390 support; Ojeda said that he has looked into it and concluded that it should work, but he doesn't know the current status. Airlie said that IBM would have to solve that problem, and that it will happen. Greg Kroah-Hartman pointed out that Rust upstream supports that architecture. Bergmann asked if problems with big-endian systems were expected; Kroah-Hartman said that some drivers were simply unlikely to run properly on those systems.
With regard to adding core-kernel dependencies on Rust code, Airlie said that it shouldn't happen for another year or two. Kroah-Hartman said that he had worried about interactions between the core kernel and Rust drivers, but had seen far fewer than he had expected. Drivers in Rust, he said, are indeed proving to be far safer than those written in C. Torvalds said that some people are starting to push for CVE numbers to be assigned to Rust code, proving that it is definitely not experimental; Kroah-Hartman said that no such CVE has yet been issued.
The DRM (graphics) subsystem has been an early adopter of the Rust language. It was still perhaps surprising, though, when Airlie (the DRM maintainer) said that the subsystem is only "about a year away" from disallowing new drivers written in C and requiring the use of Rust.
Ojeda returned to his initial question: can the "experimental" status be ended? Torvalds said that, after nearly five years, the time had come. Kroah-Hartman cited the increased support from compiler developers as a strong reason to declare victory. Steve Rostedt asked whether function tracing works; Alice Ryhl was quick to answer that it does indeed work, though "symbol demangling would be nice".
Ojeda concluded that the ability to work in Rust has succeeded in bringing in new developers and new maintainers, which had been one of the original goals of the project. It is also inspiring people to do documentation work. There are a lot of people wanting to review Rust code, he said; he is putting together a list of more experienced developers who can help bring the new folks up to speed.
The session ended with Dan Williams saying that he could not imagine a better person than Ojeda to have led a project like this and offering his congratulations; the room responded with strong applause.
| Index entries for this article | |
|---|---|
| Kernel | Development tools/Rust |
| Conference | Kernel Maintainers Summit/2025 |
Posted Dec 13, 2025 3:02 UTC (Sat)
by Paf (subscriber, #91811)
[Link]
Posted Dec 13, 2025 4:48 UTC (Sat)
by Cyberax (✭ supporter ✭, #52523)
[Link]
It might be something worth investigating for these niche architectures.
Posted Dec 13, 2025 13:57 UTC (Sat)
by patrick_g (subscriber, #44470)
[Link] (13 responses)
Wow. That's much quicker than I expected.
Posted Dec 13, 2025 14:19 UTC (Sat)
by pizza (subscriber, #46)
[Link]
Eh, not really. That kinda was the whole point of this "experiment" [1].
To me the more salient questions are how long before (a) we get Rust in a core subsystem (thus making Rust truly _required_ instead of "optional unless you have hardware foo"), and (b) requiring Rust for _all_ new code.
There is no point in half-measures; those only result in higher costs and fewer benefits.
[1] As the saying goes, "It's only temporary if it doesn't work"
Posted Dec 13, 2025 14:41 UTC (Sat)
by iabervon (subscriber, #722)
[Link] (11 responses)
Posted Dec 14, 2025 3:14 UTC (Sun)
by riking (subscriber, #95706)
[Link] (10 responses)
Posted Dec 14, 2025 22:23 UTC (Sun)
by hailfinger (subscriber, #76962)
[Link] (9 responses)
Posted Dec 15, 2025 9:54 UTC (Mon)
by taladar (subscriber, #68407)
[Link] (4 responses)
Posted Dec 15, 2025 10:23 UTC (Mon)
by farnz (subscriber, #17727)
[Link] (3 responses)
Posted Dec 15, 2025 12:54 UTC (Mon)
by pizza (subscriber, #46)
[Link] (2 responses)
...How does one tell the difference between malicious and merely damaged?
(Because a "DoS" preventing accessing the former is arguably good, but arguably bad for the latter..)
Posted Dec 15, 2025 12:57 UTC (Mon)
by farnz (subscriber, #17727)
[Link]
As a quality of implementation matter, being able to retrieve as much data as possible from a damaged filesystem is nice, but it shouldn't affect availability of the system as a whole, only that one filesystem.
Posted Dec 16, 2025 2:09 UTC (Tue)
by corbet (editor, #1)
[Link]
Posted Dec 15, 2025 10:38 UTC (Mon)
by jengelh (subscriber, #33263)
[Link]
Posted Dec 17, 2025 8:09 UTC (Wed)
by koflerdavid (subscriber, #176408)
[Link] (2 responses)
Writing file systems is already complicated enough without having to worry about the filesystem being malicious. And as we have seen from the security hole in a Rust tar library, merely using Rust is not the solution.
Posted Dec 17, 2025 9:07 UTC (Wed)
by taladar (subscriber, #68407)
[Link] (1 responses)
That does not mean it doesn't help you find those, though, by saving you the effort of dealing with all the other security issues.
Posted Dec 17, 2025 10:37 UTC (Wed)
by koflerdavid (subscriber, #176408)
[Link]
Posted Dec 13, 2025 14:19 UTC (Sat)
by smcv (subscriber, #53363)
[Link]
Debian stable isn't actually as old as some of the "enterprise" distributions: it has a lifetime of about 2 years before being superseded by the next stable release, so even if we pessimistically assume the freeze takes a full year, that's about 3 years from "can have new things" to being superseded. Ubuntu LTS has a similar lifetime, but about a year out of phase with Debian.
The limiting factor for being able to assume new versions are available in the Debian ecosystem is whether you're aiming to support "oldstable" releases that have already been superseded by a newer stable release, but are still supported, like Debian 12 or Ubuntu 22.04 at the time of writing (superseded by Debian 13 and Ubuntu 24.04 respectively, but still supported), or even older LTS releases that have limited or third-party support, like Debian 11 or Ubuntu 20.04. Ubuntu derivatives like Linux Mint and pop!OS are sometimes more of a limiting factor here than Ubuntu itself, because they don't always base their recommended releases on the newest Ubuntu LTS.
Because these older releases have declining-but-nonzero support levels for rather a long time, upstreams will often have to draw a line somewhere, which is a choice based on tradeoffs rather than something with a single correct answer. For example I've found that "newest Debian stable or newest Ubuntu LTS, whichever is older; if your OS is older than this, please upgrade" is a policy that seems to work fairly well in practice.
Posted Dec 15, 2025 10:00 UTC (Mon)
by taladar (subscriber, #68407)
[Link] (35 responses)
Posted Dec 15, 2025 10:44 UTC (Mon)
by hailfinger (subscriber, #76962)
[Link] (1 responses)
Posted Dec 15, 2025 12:35 UTC (Mon)
by taladar (subscriber, #68407)
[Link]
Posted Dec 15, 2025 10:49 UTC (Mon)
by farnz (subscriber, #17727)
[Link] (32 responses)
As a result, the kernel doesn't want to require a newer compiler than the stable distros, because the stable distro wants to stick to a known set of bugs, and not take the risk of new bugs coming in with the new compiler version. You are free to use newer compilers (e.g. while the kernel requires at least GCC 8.1, many kernels are built with newer GCC versions - the one I'm running right now is built with GCC 15.2, for example), but the kernel will not accept patches to C code that require a newer compiler version than GCC 8.1, because there's a significant stable distro still using 8.1.
Posted Dec 15, 2025 12:37 UTC (Mon)
by taladar (subscriber, #68407)
[Link] (29 responses)
They should just stop doing that; it is quite frankly annoying enough with existing software that has a huge version spread between the oldest LTS distro and the latest releases. And I say that as a sysadmin who runs all of those and needs to keep them running for customers.
Posted Dec 15, 2025 12:50 UTC (Mon)
by farnz (subscriber, #17727)
[Link] (9 responses)
The goal is that for any x and y such that x < y, the bug list for RHEL M.y is equal to, or a subset of, the bug list for RHEL M.x, and thus that if I take your service that's running on RHEL M.x, at worst, I will find the same issues on RHEL M.y as you did on M.x, unless you depended on the OS underneath you being buggy.
This has nothing to do with backwards compatibility - your new release may be 100% compatible with the older version, and still fall foul of this policy unless it can guarantee that all bugs in the new release were also present in the older release.
Posted Dec 15, 2025 17:30 UTC (Mon)
by khim (subscriber, #9252)
[Link] (1 responses)
But why is that reasoning not stopping them from bringing new code, which may need a new version of the compiler, into the picture? That is what drives people mad: old, stable code is fine (I tinker with old hardware and software regularly), but then you use old tools to deal with it, too! Why try to mix an old compiler and a new Linux kernel? This really makes no sense to me!
Posted Dec 15, 2025 17:44 UTC (Mon)
by farnz (subscriber, #17727)
[Link]
The kernel, however, wants people who find bugs in the distro kernel to be able to test the latest kernel, with possible fixes (and even patches on top of Linus's latest release), rather than forcing them to wait until the next distro release. And that's what drives keeping old compiler versions for the kernel - not the distros, but the kernel developers wanting to build on distros with older versions of compilers etc.
Posted Dec 16, 2025 9:14 UTC (Tue)
by taladar (subscriber, #68407)
[Link] (6 responses)
RHEL is one of the worst distros in terms of bugs. I have literally had multiple occasions where the core package-management tools had severe bugs that left me with duplicate installed RPMs after a transaction segfaulted right in the middle of installing an update. I have had tools that had worked perfectly fine on my unstable Gentoo through years of continuous updates to the latest upstream releases just break with repeated crashes.
This whole idea that the backporting process guarantees that no new bugs are introduced is just one of those lies people tell to management when they demand that something impossible has to be accomplished.
Posted Dec 16, 2025 14:36 UTC (Tue)
by pj (subscriber, #4506)
[Link] (5 responses)
This. Why does management not understand that changing ('fixing') currently-broken code via a backported change ('fix') is at least as unstable as just upgrading? Except that by upgrading you get the benefits of all the other work that's gone into testing and fixing everything else as well, so you're much more likely to end up with a working system, and if you don't then at least it's a bug you can file against a standard version!
Posted Dec 16, 2025 15:08 UTC (Tue)
by pizza (subscriber, #46)
[Link] (4 responses)
That assertion is trivially disproven by the vast difference in the size of the two diffs.
> Except that by upgrading you get the benefits of all the other work
You're conveniently discounting all of the downsides of all that other work.
Posted Dec 17, 2025 9:04 UTC (Wed)
by taladar (subscriber, #68407)
[Link] (3 responses)
Posted Dec 17, 2025 13:13 UTC (Wed)
by pizza (subscriber, #46)
[Link] (2 responses)
Actually, I am not; I (and others here) are saying that different folks legitimately place different weights on the costs and benefits of each option, leading to different "optimal" outcomes given one's operational constraints (equipment/time/budget/regulations/etc)
...You act as if it is all costs for one option, but all benefit on the other.
Posted Dec 18, 2025 9:09 UTC (Thu)
by taladar (subscriber, #68407)
[Link] (1 responses)
Posted Dec 18, 2025 13:48 UTC (Thu)
by pizza (subscriber, #46)
[Link]
Sure, that's fine.
Except pose this same question to *users* of LTS distros and you'll get a very different answer.
Again, different folks place different weights on the relative importance of each of the costs and benefits.
(Personally I just ignore 'em unless there's $$$ (or a patch) attached to the problem report)
Posted Dec 15, 2025 12:51 UTC (Mon)
by pm215 (subscriber, #98099)
[Link] (3 responses)
The promise of a stable distro release is that it will make minimal cautious changes, to minimise the risk of disrupting stuff you are using and relying on. If you'd prefer a faster updating "rolling" distro, those are also available, but I think there are good reasons why those are not the only kind.
Posted Dec 16, 2025 9:17 UTC (Tue)
by taladar (subscriber, #68407)
[Link] (2 responses)
Posted Dec 16, 2025 16:36 UTC (Tue)
by pm215 (subscriber, #98099)
[Link] (1 responses)
Posted Dec 19, 2025 0:15 UTC (Fri)
by rgmoore (✭ supporter ✭, #75)
[Link]
Some of this may depend on how long the distro promises to stay stable. The longer the stability promise, the more divergence there can be between the "stable" and current versions of the software and the harder backports get. That's going to put a practical limit on the length of any stability guarantee, presumably related to how well key upstream projects support older versions. A distribution that tries to handle its own kernel backports beyond what's supported by official LTS versions is setting itself up for problems.
Posted Dec 15, 2025 13:03 UTC (Mon)
by pizza (subscriber, #46)
[Link] (1 responses)
That only applies if your code (and every crate it pulls in) doesn't use any "unstable" Rust features.
Posted Dec 16, 2025 9:20 UTC (Tue)
by taladar (subscriber, #68407)
[Link]
Posted Dec 15, 2025 13:32 UTC (Mon)
by johill (subscriber, #25196)
[Link] (6 responses)
Posted Dec 16, 2025 9:26 UTC (Tue)
by taladar (subscriber, #68407)
[Link]
Posted Dec 16, 2025 11:17 UTC (Tue)
by farnz (subscriber, #17727)
[Link] (4 responses)
The driving force for wanting "no new bugs" in patch releases is the time it takes from "bug discovered" to "bug fixed" in any piece of software. In the days of physical media distribution, the process of sending a fixed binary by mail meant that new bugs were a serious problem - you'd either have to accept downtime until the bug fix arrived in the mail, or you'd have to find a new workaround while you waited for the bug fix to arrive.
It's not clear to me that this is as important as it used to be - because I can get bug fixes over the network, turnaround has gone from days/weeks to hours/days. The question is whether we need the "no new bugs" promise, or whether a "regressions fixed quickly" promise would be of more value nowadays - higher churn, yes, but also rapid fixes of regressions.
Posted Dec 19, 2025 0:38 UTC (Fri)
by rgmoore (✭ supporter ✭, #75)
[Link] (3 responses)
I think it depends a lot on how easily you can update your application software to match changes in the base system. If you have full control over your applications (they're FOSS or developed in-house), it probably makes sense to stay close to the cutting edge. If you can't easily change your application software, it's a lot more important to make sure your base isn't making incompatible changes.
There are plenty of reasons why someone might not be in complete control of their application. Some people are still using proprietary software provided as binary only, and they have to live with what the vendor provides them. That can cause all kinds of problems. Sometimes the vendor doesn't want to provide updates (or charges exorbitant prices for them) or has even gone out of business, and they have to try to keep the old software running as long as possible. Sometimes the vendor does provide regular updates, but they include incompatible changes that break existing workflows, so users want to keep updates few and far between. Sometimes a regulatory agency will require a costly and time-consuming validation process any time the software is updated, so sticking with an old version saves a lot of labor.
Posted Dec 19, 2025 5:06 UTC (Fri)
by raven667 (subscriber, #5198)
[Link] (1 responses)
So I understand the desires which drive LTS/Enterprise distros to freeze everything: outside of the kernel, glibc, and a few other core utilities, there isn't the kind of ABI discipline which would be required to make updating blindly safe. That is not because all the developers who make the OS great are incompetent; many are volunteers, and it's an unreasonable expectation to say that the upstream for every tool and library which distro-makers scavenge off the Internet should be held to the same standard for ABI breaks. The distro-makers solve this problem by freezing and carefully patching to minimize, but not eliminate (thanks, perl-DBD-MySQL update in RHEL 9.7), disruption. For people who use rolling distros for production, how much do they have to retest and fix when something changes which affects their workload? It's probably not zero, even if they are only using software packaged with the OS and not using it as a platform for their own in-house tools.
Posted Dec 19, 2025 10:17 UTC (Fri)
by farnz (subscriber, #17727)
[Link]
My lived experience in all three cases is that the closer you hew to the upstream platform, the less total time you spend on platform rebasing; if you rebase onto "current" every week, your experience gets better than if you rebase every 3 years, for multiple reasons.
But this is a major attitude shift from the traditional LTS/Enterprise thinking (that you've very clearly described) - and it is only possible because you can directly communicate with upstream developers between releases, testing things that they're doing (so it's also not possible with proprietary dependencies). It's also terrifying if you're expecting each platform rebase to be as costly as the big platform rebases you do today, and it takes time to show the benefits (since at the point where you'd schedule 6 months to rebase, you realise that you've spent an aggregate of 3 months since the last big rebase on rebasing to newer platforms).
Posted Dec 19, 2025 9:23 UTC (Fri)
by farnz (subscriber, #17727)
[Link]
And "fix regressions fast" so that users stay on the latest versions of your software has a serious advantage; by definition, it's easier to locate which change in the last week could have caused a bug than to locate which change in the last 15 years. Users don't stay put forever - someone who was using RHEL6 up until its end of support in 2024, waited (unsupported) for RHEL10, and is now on RHEL10 is going to want everything that (from their point of view) broke between RHEL6 and RHEL10 fixed. But that's a huge amount to deal with in one go - it's "regressions" in everything that's part of the base OS, and it's nearly 14 years worth of changes to look through and determine what's gone wrong.
Worse, because RHEL7, 8 and 9 were in the middle, it may be impossible to fix RHEL10 to suit someone who's just migrated from RHEL6, because there is no way to fix the "regression" between RHEL6 and RHEL10 without causing a regression between RHEL7 (or RHEL8, or RHEL9) and RHEL10. The only way this works is if you accept a time limit on support (as RHEL enforces), and understand that you will have to recertify/buy an update/find a new vendor who's still trading when the support window expires; but I'm not convinced that "big panic, maybe business-ending" every 14 years is better than "small panic every day", and while I've seen "big panic, maybe business-ending" from LTS distros such as Debian (Sarge to Etch broke stuff badly at my then job, and we couldn't have the regression undone since it was already relied upon by other users, but had to update our stuff), I'm also not convinced that frequent updates is actually "small panic every day" - it might well be "small panic every month, as upstream add your use case to their regression tests".
Posted Dec 16, 2025 14:29 UTC (Tue)
by laarmen (subscriber, #63948)
[Link] (5 responses)
Your LTS distro packages a *lot* of crates, mostly outdated, and you can't update them easily. In that setting, bumping the version of the default Rust compiler suddenly becomes much riskier.
Posted Dec 16, 2025 14:34 UTC (Tue)
by farnz (subscriber, #17727)
[Link] (1 responses)
Do you have examples of crates that were written for older versions of rustc, didn't use unstable features, but that don't work with current rustc? I'm curious to see what's actually broken - I know about the changes to std::env::set_var, for example, and I'm wondering what else has broken over the last decade or so.
Posted Dec 16, 2025 14:48 UTC (Tue)
by laarmen (subscriber, #63948)
[Link]
Posted Dec 16, 2025 15:03 UTC (Tue)
by pizza (subscriber, #46)
[Link] (2 responses)
...."hundreds" isn't "that many" ?
(this anecdata is brought to you by every Rust-based thing I have deployed right now)
Posted Dec 16, 2025 15:23 UTC (Tue)
by laarmen (subscriber, #63948)
[Link]
Don't take this as an endorsement of the sprawling dep trees of some Rust projects :-)
To give some hard numbers, Rust 1.85 in Ubuntu vendors 672 crates. Since that directory contains dependencies for the entire toolchain, including cargo and rustdoc, I think they're on the conservative side of the ecosystem.
Posted Dec 17, 2025 9:02 UTC (Wed)
by taladar (subscriber, #68407)
[Link]
Posted Dec 15, 2025 14:48 UTC (Mon)
by iabervon (subscriber, #722)
[Link] (1 responses)
It's probably best not to rebuild python-cryptography and curl with a new Rust compiler release just because it exists, but that doesn't mean you can't build your kernel with a newer Rust compiler when that's what people are doing upstream.
Posted Dec 15, 2025 14:59 UTC (Mon)
by pizza (subscriber, #46)
[Link]
...Unless said kernel relies upon "unstable" Rust features that are treated differently in different compiler revisions.
(And correct me if I'm wrong, but there are still a pile of unstable-isms in mainline R4L right now. In time that number will presumably continue to go down, but it's quite reasonable to expect it to occasionally go back up for new/optional Linux features)
Posted Dec 15, 2025 13:41 UTC (Mon)
by salimma (subscriber, #34460)
[Link] (1 responses)
Not sure how SLES and Ubuntu LTS are affected by this, but CentOS Stream / RHEL rebases Rust every 6 months, and thus so do all their downstreams
Posted Dec 15, 2025 19:11 UTC (Mon)
by salimma (subscriber, #34460)
[Link]
$ fedrq pkgs --src rust -b ubi9
$ fedrq pkgs --src rust -b ubi10
$ fedrq pkgs --src rust -b c9s
$ fedrq pkgs --src rust -b c10s
$ fedrq pkgs --src rust -b f42
$ fedrq pkgs --src rust -b f43
$ fedrq pkgs --src rust -b rawhide
Posted Dec 23, 2025 16:07 UTC (Tue)
by moltonel (subscriber, #45207)
[Link] (3 responses)
Would a CVE filed against an experimental feature get rejected ? Is the CVE assignment process systematic/exhaustive enough to catch all bug fixes ? Is the main reason we have so few CVEs for the rust experiment that experimental code is intentionally excluded ? Is CVE-2025-38033 a counter example ?
Posted Dec 25, 2025 7:57 UTC (Thu)
by gregkh (subscriber, #8)
[Link] (2 responses)
No.
> Is the CVE assignment process systematic/exhaustive enough to catch all bug fixes ?
We hope so, if we miss any, please let the developers at cve@k.o know.
> Is the main reason we have so few CVEs for the rust experiment that experimental code is intentionally excluded ?
No.
And as always, people can just _ask_ the kernel cve maintainers stuff like this if they are curious :)
Posted Dec 26, 2025 19:25 UTC (Fri)
by moltonel (subscriber, #45207)
[Link] (1 responses)
So, what do you think is the reason we had so few RfL CVEs so far ? Is that related to the code being considered "experimental" ? Does CVE-2025-38033 count as a RfL CVE ? It's hard to imagine that no bugs had been found yet, maybe the fixes were not backported to stable kernels ?
Posted Dec 27, 2025 9:04 UTC (Sat)
by gregkh (subscriber, #8)
[Link]
Perhaps because there has not been much rust code that is used in the kernel before now? That's just my guess, could be wrong.
> Is that related to the code being considered "experimental" ?
Again, no, the cve team does not use that as a criterion at all.
> Does CVE-2025-38033 count as a RfL CVE ?
No idea, feel free to count it if you want to :)
> It's hard to imagine that no bugs had been found yet,
Take a look at the code changes over time to determine whether this is true or not, I do not know.
Congratulations indeed!
Translation into C
No resistance to change?
Filesystem drivers in Rust
Depends how far the denial of service extends; if it just restricts you from accessing the malicious filesystem, that's not a problem, but if a malicious filesystem can cause the FS driver to hold locks that prevent you accessing any filesystem at all, that's a denial of service attack.
If the filesystem is damaged, all bets are off anyway - the damage may extend to the point where the data is irrecoverable. Thus, the contents of the damaged filesystem are already not part of "service" in a security sense, since you've already lost them.
Linux has a number of filesystems that can deal properly with a "merely damaged" (corrupted) filesystem. A maliciously corrupted filesystem will have correct metadata checksums, and is much harder to detect. There is a qualitative difference.
"Even" Debian?
Rust compiler support works differently
Sure, you can use a newer Rust compiler to compile code targeted for an older compiler version, but you can't use an older Rust compiler to compile code targeted for a newer compiler version simply because some features may not yet exist in the older version.
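One way the ecosystem makes that asymmetry explicit (a sketch of mine, not something from this thread; the crate name is made up): since Cargo 1.56, a crate manifest can declare its minimum supported Rust version, and a toolchain that understands the field but is older than the declared minimum refuses to build the crate with a clear error, instead of failing midway on syntax it does not recognize.

```toml
# Hypothetical crate manifest illustrating an MSRV declaration.
[package]
name = "example-driver"   # made-up name for this sketch
version = "0.1.0"
edition = "2021"
rust-version = "1.85"     # toolchains older than 1.85 refuse to build this
```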
Because "stable" distros try not to update versions significantly after release. If you use RHEL, for example, the compiler gets locked down when RHEL releases, and is rarely changed in point releases (although there's usually a "toolset" package that lets you grab a newer release of the compiler). This is done so that you get a known set of bugs with a given release version - ideally monotonically decreasing as the stable distro gets fixes.
No: the reason stable distros do that is to minimize the chance that a new bug is added.
> No: the reason stable distros do that is to minimize the chance that a new bug is added.
The distro doesn't bring new code that needs new versions of the compiler into the picture - it locks down kernel version as well as everything else when it releases.
Part of the problem is that we've not seriously re-evaluated the "stable distro" concept since moving from physical media as the primary distribution medium to network links.
On three different occasions, with very different in-house software, and total developers at the company ranging from 3 (about half the employees) to over 10,000 developers, I've pushed us to rebase onto newer platforms faster.
The thing is that "no new bugs" is not the only way to get "no incompatible changes"; a "fix regressions fast" policy also gets to the same point by fixing any changes that cause users problems.
I've dug out some rather old code (written for 1.24, based on the rust-toolchain file), and it still builds quite happily with current nightly.
Wouldn't affect RHEL at least
rust-1.88.0-1.el9.src
rust-1.88.0-1.el10.src
rust-1.91.0-1.el9.src
rust-1.91.0-1.el10.src
rust-1.91.1-1.fc42.src
rust-1.91.1-1.fc43.src
rust-1.92.0-1.fc44.src
CVEs for experimental (rust) code ?
