
The state of the kernel Rust experiment

By Jonathan Corbet
December 13, 2025

Maintainers Summit
The ability to write kernel code in Rust was explicitly added as an experiment — if things did not go well, Rust would be removed again. At the 2025 Maintainers Summit, a session was held to evaluate the state of that experiment, and to decide whether the time had come to declare the result to be a success. The (arguably unsurprising) conclusion was that the experiment is indeed a success, but there were some interesting points made along the way.

Headlines

Miguel Ojeda, who led the session, started with some headlines. The Nova driver for NVIDIA GPUs is coming, with pieces already merged into the mainline, and the Android binder driver was merged for 6.18. Even bigger news, he said, is that Android 16 systems running the 6.12 kernel are shipping with the Rust-written binder module. So there are millions of real devices running kernels with Rust code now.

Meanwhile, the Debian project has, at last, enabled Rust in its kernel builds; that will show up in the upcoming "forky" release. The amount of Rust code in the kernel is "exploding", having grown by a factor of five over the last year. There has been an increase in the amount of cooperation between kernel developers and Rust language developers, giving the kernel project significant influence over the development of the language itself. The Rust community, he said, is committed to helping the kernel project.

The rust_codegen_gcc effort, which grafts the GCC code generator onto the rustc compiler, is progressing. Meanwhile the fully GCC-based gccrs project is making good progress. Gccrs is now able to compile the kernel's Rust code (though, evidently, compiling it to correct runnable code is still being worked on). The gccrs developers see building the kernel as one of their top priorities; Ojeda said to expect some interesting news from that project next year.

With regard to Rust language versions, the current plan is to ensure that the kernel can always be built with the version of Rust that ships in the Debian stable release. The kernel's minimum version would be increased 6-12 months after the corresponding Debian release. The kernel currently specifies a minimum of Rust 1.78, while the current version is (as of the session) 1.92. Debian is shipping 1.85, so Ojeda suggested that the kernel move to that version, which would enable the removal of a number of workarounds.
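The version comparison behind such a policy can be sketched in a few lines of shell. This is an illustration only, with made-up version strings; in an actual kernel tree, the documented way to check the toolchain is `make LLVM=1 rustavailable`.

```shell
# Illustrative sketch: succeed if the installed version ($1) is at least
# the required minimum ($2), using GNU sort's version ordering.
# (Not the kernel's actual script; the real check is `make rustavailable`.)
min_version_ok() {
    installed=$1 required=$2
    [ "$(printf '%s\n%s\n' "$required" "$installed" | sort -V | head -n1)" = "$required" ]
}

if min_version_ok "1.85.0" "1.78.0"; then
    echo "rustc 1.85.0 satisfies the kernel minimum of 1.78.0"
fi
```

The trick is that `sort -V` orders version strings numerically by component, so the required version sorting first (or equal) means the installed version is new enough.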

Jiri Kosina asked how often the minimum language version would be increased; Ojeda repeated that it would happen after every Debian stable release, though that could eventually change to every other Debian release. It is mostly a matter of what developers need, he said. Linus Torvalds said that he would be happy to increase the minimum version relatively aggressively as long as it doesn't result in developers being shut out. Distributors are updating Rust more aggressively than they have traditionally updated GCC, so requiring a newer version should be less of a problem.

Arnd Bergmann said that the kernel could have made GCC 8 the minimum supported version a year earlier than it did, except that SUSE's SLES was running behind. Kosina answered that SUSE is getting better and shipping newer versions of the compiler now. Dave Airlie worried that problems could appear once the enterprise distributors start enabling Rust; they could lock in an ancient version for a long time. Thomas Gleixner noted, though, that even Debian is now shipping GCC 14; the situation in general has gotten better.

Still experimental?

Given all this news, Ojeda asked, is it time to reconsider the "experimental" tag? He has been trying to be conservative about asking for that change, but said that, with Android shipping Rust code, the time has come. Airlie suggested making the announcement on April 1 and saying that the experiment had failed. More seriously, he said, removing the "experimental" tag would help people argue for more resources to be directed toward Rust in their companies.

Bergmann agreed with declaring the experiment over, worrying only that Rust still "doesn't work on architectures that nobody uses". So he thought that Rust code needed to be limited to the well-supported architectures for now. Ojeda said that there is currently good support for x86, Arm, LoongArch, RISC-V, and user-mode Linux, so the main architectures are in good shape. Bergmann asked about PowerPC support; Ojeda answered that the PowerPC developers were among the first to send a pull request adding Rust support for their architecture.

Bergmann persisted, asking about s390 support; Ojeda said that he has looked into it and concluded that it should work, but he doesn't know the current status. Airlie said that IBM would have to solve that problem, and that it will happen. Greg Kroah-Hartman pointed out that Rust upstream supports that architecture. Bergmann asked if problems with big-endian systems were expected; Kroah-Hartman said that some drivers were simply unlikely to run properly on those systems.

With regard to adding core-kernel dependencies on Rust code, Airlie said that it shouldn't happen for another year or two. Kroah-Hartman said that he had worried about interactions between the core kernel and Rust drivers, but had seen far fewer than he had expected. Drivers in Rust, he said, are indeed proving to be far safer than those written in C. Torvalds said that some people are starting to push for CVE numbers to be assigned to Rust code, proving that it is definitely not experimental; Kroah-Hartman said that no such CVE has yet been issued.

The DRM (graphics) subsystem has been an early adopter of the Rust language. It was still perhaps surprising, though, when Airlie (the DRM maintainer) said that the subsystem is only "about a year away" from disallowing new drivers written in C and requiring the use of Rust.

Ojeda returned to his initial question: can the "experimental" status be ended? Torvalds said that, after nearly five years, the time had come. Kroah-Hartman cited the increased support from compiler developers as a strong reason to declare victory. Steve Rostedt asked whether function tracing works; Alice Ryhl was quick to answer that it does indeed work, though "symbol demangling would be nice".

Ojeda concluded that the ability to work in Rust has succeeded in bringing in new developers and new maintainers, which had been one of the original goals of the project. It is also inspiring people to do documentation work. There are a lot of people wanting to review Rust code, he said; he is putting together a list of more experienced developers who can help bring the new folks up to speed.

The session ended with Dan Williams saying that he could not imagine a better person than Ojeda to have led a project like this, and offering his congratulations; the room responded with strong applause.

Index entries for this article
Kernel: Development tools/Rust
Conference: Kernel Maintainers Summit/2025



Congratulations indeed!

Posted Dec 13, 2025 3:02 UTC (Sat) by Paf (subscriber, #91811) [Link]

Congratulations indeed - as a long-time C developer, it's very exciting to see something better getting real use in this space. Now it's just a year or two more before I'll get to use it for something, I think.

Translation into C

Posted Dec 13, 2025 4:48 UTC (Sat) by Cyberax (✭ supporter ✭, #52523) [Link]

BTW, there's a new project to translate Rust into _readable_ C: https://jonathan.protzenko.fr/2025/10/28/eurydice.html

It might be something worth investigating for these niche architectures.

No resistance to change?

Posted Dec 13, 2025 13:57 UTC (Sat) by patrick_g (subscriber, #44470) [Link] (13 responses)

> the DRM (graphics) subsystem is only "about a year away" from disallowing new drivers written in C and requiring the use of Rust.

Wow. That's much quicker than I expected.

No resistance to change?

Posted Dec 13, 2025 14:19 UTC (Sat) by pizza (subscriber, #46) [Link]

> Wow. That's much quicker than I expected.

Eh, not really. That kinda was the whole point of this "experiment" [1].

To me the more salient questions are how long before (a) we get Rust in a core subsystem (thus making Rust truly _required_ instead of "optional unless you have hardware foo"), and (b) requiring Rust for _all_ new code.

There is no point in half-measures; those only result in higher costs and fewer benefits.

[1] As the saying goes, "It's only temporary if it doesn't work"

No resistance to change?

Posted Dec 13, 2025 14:41 UTC (Sat) by iabervon (subscriber, #722) [Link] (11 responses)

It's quicker than I had been expecting, but I was thinking of areas where there are a lot of new small similar drivers. DRM has large drivers that cover all of the similar devices up to the point where they've diverged enough to warrant a new design, so it makes sense to be very forward-looking any time you actually start a new one. The resistance to change here is in not asking existing drivers to be rewritten in Rust in order to add new functionality.

No resistance to change?

Posted Dec 14, 2025 3:14 UTC (Sun) by riking (subscriber, #95706) [Link] (10 responses)

GPU drivers are also famously the most crash-prone part of modern computers, and it looks like Android is getting good results with their "no new memory-unsafe code components" policy. It makes sense to copy.

Filesystem drivers in Rust

Posted Dec 14, 2025 22:23 UTC (Sun) by hailfinger (subscriber, #76962) [Link] (9 responses)

I am looking forward to having more filesystem drivers in Rust. This would solve the "user mounts malicious filesystem" problem to a certain extent (no pun intended). Sure, denial of service would still be possible, but having no code execution bugs in a filesystem driver is a big plus in my book.

Filesystem drivers in Rust

Posted Dec 15, 2025 9:54 UTC (Mon) by taladar (subscriber, #68407) [Link] (4 responses)

Can "denial of service" with a "malicious filesystem" even be considered a problem? I mean wouldn't that essentially always be possible for someone who can craft a malicious filesystem merely by strategically overwriting central filesystem structures?

Filesystem drivers in Rust

Posted Dec 15, 2025 10:23 UTC (Mon) by farnz (subscriber, #17727) [Link] (3 responses)

Depends how far the denial of service extends; if it just restricts you from accessing the malicious filesystem, that's not a problem, but if a malicious filesystem can cause the FS driver to hold locks that prevent you accessing any filesystem at all, that's a denial of service attack.

Filesystem drivers in Rust

Posted Dec 15, 2025 12:54 UTC (Mon) by pizza (subscriber, #46) [Link] (2 responses)

> Depends how far the denial of service extends; if it just restricts you from accessing the malicious filesystem,

...How does one tell the difference between malicious and merely damaged?

(Because a "DoS" preventing accessing the former is arguably good, but arguably bad for the latter..)

Filesystem drivers in Rust

Posted Dec 15, 2025 12:57 UTC (Mon) by farnz (subscriber, #17727) [Link]

If the filesystem is damaged, all bets are off anyway - the damage may extend to the point where the data is irrecoverable. Thus, the contents of the damaged filesystem are already not part of "service" in a security sense, since you've already lost them.

As a quality of implementation matter, being able to retrieve as much data as possible from a damaged filesystem is nice, but it shouldn't affect availability of the system as a whole, only that one filesystem.

Filesystem drivers in Rust

Posted Dec 16, 2025 2:09 UTC (Tue) by corbet (editor, #1) [Link]

Linux has a number of filesystems that can deal properly with a "merely damaged" (corrupted) filesystem. A maliciously corrupted filesystem will have correct metadata checksums, and is much harder to detect. There is a qualitative difference.

Filesystem drivers in Rust

Posted Dec 15, 2025 10:38 UTC (Mon) by jengelh (subscriber, #33263) [Link]

look no further than https://lwn.net/Articles/1044432/

Filesystem drivers in Rust

Posted Dec 17, 2025 8:09 UTC (Wed) by koflerdavid (subscriber, #176408) [Link] (2 responses)

Maliciously corrupted file systems are a significant attack vector that the kernel is only insufficiently protected from. There is a reason why mounting is a privilege reserved for root.

Writing file systems is already complicated enough without having to worry about the filesystem being malicious. And as we have seen from the security hole in a Rust tar library, merely using Rust is not the solution.

Filesystem drivers in Rust

Posted Dec 17, 2025 9:07 UTC (Wed) by taladar (subscriber, #68407) [Link] (1 responses)

Rust implementations can indeed not protect you from bad, ambiguous standards, like the two size fields in the tar case. Nor can they protect you from logic bugs.

That does not mean that it doesn't help you find those by saving you effort on all the other security issues though.

Filesystem drivers in Rust

Posted Dec 17, 2025 10:37 UTC (Wed) by koflerdavid (subscriber, #176408) [Link]

Being safe from malicious file systems comes from conscious design decisions that follow from recognizing file systems as an attack vector. Ambiguity is not magically absent from file systems either, especially those with multiple implementations. Rust can help with these things, of course, but overpromising on that is not a good look.

"Even" Debian?

Posted Dec 13, 2025 14:19 UTC (Sat) by smcv (subscriber, #53363) [Link]

> Thomas Gleixner noted, though, that even Debian is now shipping GCC 14

Debian stable isn't actually as old as some of the "enterprise" distributions: it has a lifetime of about 2 years before being superseded by the next stable release, so even if we pessimistically assume the freeze takes a full year, that's about 3 years from "can have new things" to being superseded. Ubuntu LTS has a similar lifetime, but about a year out of phase with Debian.

The limiting factor for being able to assume new versions are available in the Debian ecosystem is whether you're aiming to support "oldstable" releases that have already been superseded by a newer stable release, but are still supported, like Debian 12 or Ubuntu 22.04 at the time of writing (superseded by Debian 13 and Ubuntu 24.04 respectively, but still supported), or even older LTS releases that have limited or third-party support, like Debian 11 or Ubuntu 20.04. Ubuntu derivatives like Linux Mint and pop!OS are sometimes more of a limiting factor here than Ubuntu itself, because they don't always base their recommended releases on the newest Ubuntu LTS.

Because these older releases have declining-but-nonzero support levels for rather a long time, upstreams will often have to draw a line somewhere, which is a choice based on tradeoffs rather than something with a single correct answer. For example I've found that "newest Debian stable or newest Ubuntu LTS, whichever is older; if your OS is older than this, please upgrade" is a policy that seems to work fairly well in practice.

Rust compiler support works differently

Posted Dec 15, 2025 10:00 UTC (Mon) by taladar (subscriber, #68407) [Link] (35 responses)

I honestly don't understand why so many people completely ignore that the Rust compiler is built in such a way that (barring bugs) the latest version is supposed to compile any code that compiled on older versions, making this whole game of using old versions unnecessary.
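A trivial illustration of the stability promise being described: the function below uses only language features that have been stable since Rust 1.0, so any later stable compiler is expected to accept it unchanged. The code itself is a made-up example, not anything from the kernel.

```rust
// Uses only Rust-1.0-era features; per Rust's backward-compatibility
// promise, newer stable compilers should still accept it unchanged.
fn sum(xs: &[i32]) -> i32 {
    let mut total = 0;
    for &x in xs {
        total += x;
    }
    total
}

fn main() {
    assert_eq!(sum(&[1, 2, 3]), 6);
    println!("ok");
}
```

The asymmetry discussed below still holds, of course: a new compiler accepts old code, but old compilers cannot accept code that uses features stabilized after their release.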

Rust compiler support works differently

Posted Dec 15, 2025 10:44 UTC (Mon) by hailfinger (subscriber, #76962) [Link] (1 responses)

But isn't this a case of raising the minimum required version of the Rust compiler, i.e. the other way round?
Sure, you can use a newer Rust compiler to compile code targeted for an older compiler version, but you can't use an older Rust compiler to compile code targeted for a newer compiler version simply because some features may not yet exist in the older version.

Rust compiler support works differently

Posted Dec 15, 2025 12:35 UTC (Mon) by taladar (subscriber, #68407) [Link]

Yes, but they are only raising it to a version that is already several versions outdated again.

Rust compiler support works differently

Posted Dec 15, 2025 10:49 UTC (Mon) by farnz (subscriber, #17727) [Link] (32 responses)

Because "stable" distros try not to update versions significantly after release. If you use RHEL, for example, the compiler gets locked down when RHEL releases, and is rarely changed in point releases (although there's usually a "toolset" package that lets you grab a newer release of the compiler). This is done so that you get a known set of bugs with a given release version - ideally monotonically decreasing as the stable distro gets fixes.

As a result, the kernel doesn't want to require a newer compiler than the stable distros, because the stable distro wants to stick to a known set of bugs, and not take the risk of new bugs coming in with the new compiler version. You are free to use newer compilers (e.g. while the kernel requires at least GCC 8.1, many kernels are built with newer GCC versions - the one I'm running right now is built with GCC 15.2, for example), but the kernel will not accept patches to C code that require a newer compiler version than GCC 8.1, because there's a significant stable distro still using 8.1.

Rust compiler support works differently

Posted Dec 15, 2025 12:37 UTC (Mon) by taladar (subscriber, #68407) [Link] (29 responses)

But the reason stable distros do that with other software is that most programs are not backwards compatible in the way the Rust compiler is.

They should just stop doing that; it is quite frankly annoying enough with existing software that has a huge version spread between the oldest LTS distro and the latest releases. And I say that as a sysadmin who runs all of those and needs to keep them running for customers.

Rust compiler support works differently

Posted Dec 15, 2025 12:50 UTC (Mon) by farnz (subscriber, #17727) [Link] (9 responses)

No: the reason stable distros do that is to minimize the chance that a new bug is added.

The goal is that for any x and y such that x < y, the bug list for RHEL M.y is equal to, or a subset of, the bug list for RHEL M.x, and thus that if I take your service that's running on RHEL M.x, at worst, I will find the same issues on RHEL M.y as you did on M.x, unless you depended on the OS underneath you being buggy.

This has nothing to do with backwards compatibility - your new release may be 100% compatible with the older version, and still fall foul of this policy unless it can guarantee that all bugs in the new release were also present in the older release.

Rust compiler support works differently

Posted Dec 15, 2025 17:30 UTC (Mon) by khim (subscriber, #9252) [Link] (1 responses)

> No: the reason stable distros do that is to minimize the chance that a new bug is added.

But why does that reasoning not stop them from bringing new code, which may need a new version of the compiler, into the picture?

That is what is driving people mad: old, stable code is fine; I tinker with old hardware and software regularly… but then you use old tools to deal with it, too!

Why try to mix an old compiler and a new Linux kernel? That really makes no sense to me!

Rust compiler support works differently

Posted Dec 15, 2025 17:44 UTC (Mon) by farnz (subscriber, #17727) [Link]

The distro doesn't bring new code that needs new versions of the compiler into the picture - it locks down kernel version as well as everything else when it releases.

The kernel, however, wants people who find bugs in the distro kernel to be able to test the latest kernel, with possible fixes (and even patches on top of Linus's latest release), rather than forcing them to wait until the next distro release. And that's what drives keeping old compiler versions for the kernel - not the distros, but the kernel developers wanting to build on distros with older versions of compilers etc.

Rust compiler support works differently

Posted Dec 16, 2025 9:14 UTC (Tue) by taladar (subscriber, #68407) [Link] (6 responses)

The problem is that stable distros pretend that backports are not just new versions of the code done by people not very familiar with the code base and with a lot less testing than an upstream release though.

RHEL is one of the worst distros in terms of bugs; I have literally had multiple occasions where the core package-management tools had severe bugs that made me end up with duplicate installed RPMs after a transaction segfaulted right in the middle of installing an update. I have had tools that work perfectly fine on my unstable Gentoo through years of continuous updates to the latest upstream releases just break with repeated crashes.

This whole idea that the backporting process guarantees that no new bugs are introduced is just one of those lies people tell to management when they demand that something impossible has to be accomplished.

Rust compiler support works differently

Posted Dec 16, 2025 14:36 UTC (Tue) by pj (subscriber, #4506) [Link] (5 responses)

>This whole idea that the backporting process guarantees that no new bugs are introduced is just one of those lies people tell to management when they demand that something impossible has to be accomplished.

This. Why does management not understand that changing ('fixing') currently-broken code via a backported change ('fix') is at least as unstable as just upgrading? Except that by upgrading you get the benefits of all the other work that's gone into testing and fixing everything else as well, so you're much more likely to end up with a working system, and if you don't then at least it's a bug you can file against a standard version!

Rust compiler support works differently

Posted Dec 16, 2025 15:08 UTC (Tue) by pizza (subscriber, #46) [Link] (4 responses)

> This. Why does management not understand that changing ('fixing') currently-broken code via a backported change ('fix') is at least as unstable as just upgrading?

That assertion is trivially disproven by the vast difference in the size of the two diffs.

> Except that by upgrading you get the benefits of all the other work

You're conveniently discounting all of the downsides of all that other work.

Rust compiler support works differently

Posted Dec 17, 2025 9:04 UTC (Wed) by taladar (subscriber, #68407) [Link] (3 responses)

And you are conveniently discounting the opportunity cost of all that backporting work, which could go toward something more long-term useful, e.g. adding more tests to the project's test suite.

Rust compiler support works differently

Posted Dec 17, 2025 13:13 UTC (Wed) by pizza (subscriber, #46) [Link] (2 responses)

> And you are conveniently discounting the opportunity cost of applying all that back porting work to something more long-term useful, e.g. adding more tests to the test-suite of the project.

Actually, I am not; I (and others here) are saying that different folks legitimately place different weights on the costs and benefits of each option, leading to different "optimal" outcomes given one's operational constraints (equipment/time/budget/regulations/etc)

...You act as if it is all costs for one option, but all benefit for the other.

Rust compiler support works differently

Posted Dec 18, 2025 9:09 UTC (Thu) by taladar (subscriber, #68407) [Link] (1 responses)

As a programmer and sysadmin of many years I am the one paying the costs for long term support distros existing at all, that is why I consider it all costs because I see how bad the LTS distros are at their stability guarantees and how many compromises we have to make by having to delay adoption of solutions for so many issues because the oldest LTS distro still doesn't have support for it.

Rust compiler support works differently

Posted Dec 18, 2025 13:48 UTC (Thu) by pizza (subscriber, #46) [Link]

> As a programmer and sysadmin of many years I am the one paying the costs for long term support distros existing at all, that is why I consider it all costs because I see how bad the LTS distros are at their stability guarantees and how many compromises we have to make by having to delay adoption of solutions for so many issues because the oldest LTS distro still doesn't have support for it.

Sure, that's fine.

Except pose this same question to *users* of LTS distros and you'll get a very different answer.

Again, different folks place different weights on the relative importance of each of the costs and benefits.

(Personally I just ignore 'em unless there's $$$ (or a patch) attached to the problem report)

Rust compiler support works differently

Posted Dec 15, 2025 12:51 UTC (Mon) by pm215 (subscriber, #98099) [Link] (3 responses)

I think stable distros do that with other software not only because they don't want to take intentional new backwards incompatible changes, but also because they don't want to take the unintentional new bugs that inevitably sneak in with new versions of software. Even the best run project with the most careful backwards compatibility promises is still subject to adding new bugs when they write new code. This doesn't apply only to Rust, either -- nobody expects the "ls" interface to change incompatibly, but stable distros still stick with the version they shipped at release.

The promise of a stable distro release is that it will make minimal cautious changes, to minimise the risk of disrupting stuff you are using and relying on. If you'd prefer a faster updating "rolling" distro, those are also available, but I think there are good reasons why those are not the only kind.

Rust compiler support works differently

Posted Dec 16, 2025 9:17 UTC (Tue) by taladar (subscriber, #68407) [Link] (2 responses)

The problem is that this just doesn't work in practice. The LTS distros like RHEL are full of new bugs introduced by people who have to handle backports for dozens or sometimes hundreds of packages at the same time and who are just not very familiar with the way the code base works. Backports are new versions, just badly tested new versions.

Rust compiler support works differently

Posted Dec 16, 2025 16:36 UTC (Tue) by pm215 (subscriber, #98099) [Link] (1 responses)

I guess my experience differs -- I've been using Debian and Ubuntu stable distros for a couple of decades and can't remember running into a bug that was introduced by a stable/security bugfix update.

Rust compiler support works differently

Posted Dec 19, 2025 0:15 UTC (Fri) by rgmoore (✭ supporter ✭, #75) [Link]

Some of this may depend on how long the distro promises to stay stable. The longer the stability promise, the more divergence there can be between the "stable" and current versions of the software and the harder backports get. That's going to put a practical limit on the length of any stability guarantee, presumably related to how well key upstream projects support older versions. A distribution that tries to handle its own kernel backports beyond what's supported by official LTS versions is setting itself up for problems.

Rust compiler support works differently

Posted Dec 15, 2025 13:03 UTC (Mon) by pizza (subscriber, #46) [Link] (1 responses)

> But the reason stable distros do that with other software is that most programs are not backwards compatible in the way the Rust compiler is.

That only applies if your code (and every crate it pulls in) doesn't use any "unstable" Rust features.

Rust compiler support works differently

Posted Dec 16, 2025 9:20 UTC (Tue) by taladar (subscriber, #68407) [Link]

Yes, it only applies if you actually use a released stable version. All the more reason to get the Linux kernel to a Rust version where all the features it needs are stabilized, instead of applying some sort of illusion of stability that will just lead to a huge jump from an old version of that unstable feature to whatever is in the version they update to years later.

Rust compiler support works differently

Posted Dec 15, 2025 13:32 UTC (Mon) by johill (subscriber, #25196) [Link] (6 responses)

"My/this software is so special so stable distros shouldn't bother being stable for it" is probably not an argument you'll ever win.

Rust compiler support works differently

Posted Dec 16, 2025 9:26 UTC (Tue) by taladar (subscriber, #68407) [Link]

You are likely correct; the snake-oil industry selling backports as some sort of panacea that provides new security fixes and sometimes even features without risk of new bugs, despite their being essentially just badly tested forks by people who have very little experience with the code base (because they are responsible for dozens or hundreds of packages), is just too entrenched at this point.

Rust compiler support works differently

Posted Dec 16, 2025 11:17 UTC (Tue) by farnz (subscriber, #17727) [Link] (4 responses)

Part of the problem is that we've not seriously re-evaluated the "stable distro" concept since moving from physical media as the primary distribution medium to network links.

The driving force for wanting "no new bugs" in patch releases is the time it takes from "bug discovered" to "bug fixed" in any piece of software. In the days of physical media distribution, the process of sending a fixed binary by mail meant that new bugs were a serious problem - you'd either have to accept downtime until the bug fix arrived in the mail, or you'd have to find a new workaround while you waited for the bug fix to arrive.

It's not clear to me that this is as important as it used to be - because I can get bug fixes over the network, turnaround has gone from days/weeks to hours/days. The question is whether we need the "no new bugs" promise, or whether a "regressions fixed quickly" promise would be of more value nowadays - higher churn, yes, but also rapid fixes of regressions.

Rust compiler support works differently

Posted Dec 19, 2025 0:38 UTC (Fri) by rgmoore (✭ supporter ✭, #75) [Link] (3 responses)

I think it depends a lot on how easily you can update your application software to match changes in the base system. If you have full control over your applications (they're FOSS or developed in-house), it probably makes sense to stay close to the cutting edge. If you can't easily change your application software, it's a lot more important to make sure your base isn't making incompatible changes.

There are plenty of reasons why someone might not be in complete control of their application. Some people are still using proprietary software provided as binary only, and they have to live with what the vendor provides them. That can cause all kinds of problems. Sometimes the vendor doesn't want to provide updates (or charges exorbitant prices for them) or has even gone out of business, and they have to try to keep the old software running as long as possible. Sometimes the vendor does provide regular updates, but they include incompatible changes that break existing workflows, so users want to keep updates few and far between. Sometimes a regulatory agency will require a costly and time-consuming validation process any time the software is updated, so sticking with an old version saves a lot of labor.

Rust compiler support works differently

Posted Dec 19, 2025 5:06 UTC (Fri) by raven667 (subscriber, #5198) [Link] (1 responses)

In my experience with very small teams or single developers, the maintenance cost of retesting and fixing your own in-house software when the platform changes under it is not insignificant, even without a regulatory body demanding thorough validation, and not every org is willing to take a significant pause on feature development or bugfixes to perform platform rebasing at an irregular and inconvenient schedule. For some, it's better to plan on doing that every 5-10 years, when retiring hardware means migrating to a new OS anyway - while you have two stacks running in parallel for integration testing and the org has time blocked out for the migration - than to constantly be chasing outages because various leaf libraries intentionally or accidentally changed behavior when you perform regular updates. Sure, this is a solvable problem if you have a QA environment and a full test suite, but that's an extravagant time expense for a small shop with a handful of developers and few change-coordination needs; it directly competes for time with other priorities in a way that larger teams can spread out. Larger teams also need more formality in process, CI, testing, etc., because not everyone can understand all the changes going on and the communication overhead is too high, so spending time on automated testing is more useful there than on solo or partner projects.

So I understand the desires which drive LTS/Enterprise distros to freeze everything: outside of the kernel, glibc, and a few other core utilities, there isn't the kind of ABI discipline which would be required to make updating blindly safe. That's not because all the developers who make the OS great are incompetent, but because many are volunteers, and it's an unreasonable expectation to say that the upstream for every tool and library which distro-makers scavenge off the Internet should be held to the same standard for ABI breaks. The distro-makers solve this problem by freezing and carefully patching to minimize, but not eliminate (thanks, perl-DBD-MySQL update in RHEL 9.7), disruption. For people who use rolling distros for production: how much do they have to retest and fix when something changes which affects their workload? It's probably not zero, even if they are only using software packaged with the OS and not using it as a platform for their own in-house tools.

Rust compiler support works differently

Posted Dec 19, 2025 10:17 UTC (Fri) by farnz (subscriber, #17727) [Link]

On three different occasions, with very different in-house software, and total developers at the company ranging from 3 (about half the employees) to over 10,000 developers, I've pushed us to rebase onto newer platforms faster.

My lived experience in all three cases is that the closer you hew to the upstream platform, the less total time you spend on platform rebasing - if you rebase onto "current" every week, your experience gets better than if you rebase every 3 years, for multiple reasons:

  • If you catch a change early enough (e.g. one I hit with NetworkManager looking into switching from kernel PPPoE to userspace PPPoE, breaking my use of baby jumbos on PPPoE), you can avoid reverts of that change being seen as a regression - which means that you don't get stuck in the "can't fix you, because that'll regress this other user" concern, and you're not fighting to find something that fixes the bug for both you and the person who might regress.
  • You don't end up dealing with the same bug twice (or more); if there's something broken in the platform, instead of first building a workaround in your code, then rebasing onto the new platform, then possibly removing the workaround because the bug it works around is fixed and it's now causing new problems, upstream is willing to work with you to directly fix the next platform release - so you spend (IME) about the same effort as you'd have spent building a workaround in your code to instead fix upstream and rebase onto the newer platform with your fix in it.
  • (more intangible) You get "free" improvements in your code, as upstream do their thing; for example, something gets faster because upstream found a place where they were leaving performance on the table, or you're preparing to implement a feature, and you find that upstream has recently added utilities that make that feature easier to implement.
  • You learn what's planned upstream, and you're in a position to raise concerns early - if upstream is making critical changes to your database connector, you can speak up and say "I know that this feature is a pain and you have good reasons to remove it, but this is what I'm using it for" - and upstream might help you stop using that feature, or might grudgingly agree to keep it because it's more useful than they thought.

But this is a major attitude shift from the traditional LTS/Enterprise thinking (that you've very clearly described) - and it is only possible because you can directly communicate with upstream developers between releases, testing things that they're doing (so it's also not possible with proprietary dependencies). It's also terrifying if you're expecting each platform rebase to be as costly as the big platform rebases you do today, and it takes time to show the benefits (since at the point where you'd schedule 6 months to rebase, you realise that you've spent an aggregate of 3 months since the last big rebase on rebasing to newer platforms).

Rust compiler support works differently

Posted Dec 19, 2025 9:23 UTC (Fri) by farnz (subscriber, #17727) [Link]

The thing is that "no new bugs" is not the only way to get "no incompatible changes"; a "fix regressions fast" policy also gets to the same point by fixing any changes that cause users problems.

And "fix regressions fast" so that users stay on the latest versions of your software has a serious advantage; by definition, it's easier to locate which change in the last week could have caused a bug than to locate which change in the last 15 years. Users don't stay put forever - someone who was using RHEL6 up until its end of support in 2024, waited (unsupported) for RHEL10, and is now on RHEL10 is going to want everything that (from their point of view) broke between RHEL6 and RHEL10 fixed. But that's a huge amount to deal with in one go - it's "regressions" in everything that's part of the base OS, and it's nearly 14 years worth of changes to look through and determine what's gone wrong.

Worse, because RHEL7, 8, and 9 were in the middle, it may be impossible to fix RHEL10 to suit someone who has just migrated from RHEL6, because there is no way to fix the "regression" between RHEL6 and RHEL10 without causing a regression between RHEL7 (or RHEL8, or RHEL9) and RHEL10. The only way this works is if you accept a time limit on support (as RHEL enforces), and understand that you will have to recertify, buy an update, or find a new vendor who's still trading when the support window expires. But I'm not convinced that "big panic, maybe business-ending" every 14 years is better than "small panic every day"; and while I've seen "big panic, maybe business-ending" from LTS distros such as Debian (Sarge to Etch broke stuff badly at my then job, and we couldn't have the regression undone since it was already relied upon by other users, but had to update our stuff), I'm also not convinced that frequent updates are actually "small panic every day" - it might well be "small panic every month, as upstream adds your use case to their regression tests".

Rust compiler support works differently

Posted Dec 16, 2025 14:29 UTC (Tue) by laarmen (subscriber, #63948) [Link] (5 responses)

The whole "newer rustc is compatible with older code" claim isn't actually true. In practice, what holds is that, at the time of its release, a new version of rustc will be able to compile the latest version of every crate on crates.io, as demonstrated by their crater runs. That's an impressive feat, to be sure. And from the standard developer's perspective, the two notions are effectively equivalent, since keeping your dependencies up to date is a common practice, and you don't have that many of them anyway.

Your LTS distro packages a *lot* of crates, mostly outdated, and you can't update them easily. In that setting, bumping the version of the default Rust compiler suddenly becomes much riskier.
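One direction of this mismatch is at least declared in metadata: since Cargo 1.56, a crate can state its minimum supported Rust version, so an old compiler will refuse a too-new crate with a clear error rather than a confusing build failure. A minimal sketch (crate name and versions are hypothetical):

```toml
[package]
name = "example-crate"   # hypothetical
version = "0.1.0"
edition = "2021"
# Minimum supported Rust version (MSRV): an older cargo/rustc will
# report this requirement up front instead of failing mid-compile.
rust-version = "1.85"
```

The opposite direction, a new compiler breaking an old crate, has no such declared guard, which is exactly the risk for a distro bumping its default compiler underneath frozen crates.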

Rust compiler support works differently

Posted Dec 16, 2025 14:34 UTC (Tue) by farnz (subscriber, #17727) [Link] (1 responses)

I've dug out some rather old code (written for 1.24, based on the rust-toolchain file), and it still builds quite happily with current nightly.

Do you have examples of crates that were written for older versions of rustc, didn't use unstable features, but that don't work with current rustc? I'm curious to see what's actually broken - I know about the changes to std::env::set_var, for example, and I'm wondering what else has broken over the last decade or so.
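The rust-toolchain file mentioned above is how a project records the exact compiler it was written for; a minimal sketch of the modern TOML form (the version is just an illustration):

```toml
# rust-toolchain.toml, at the project root: rustup will fetch and use
# this toolchain for every cargo/rustc invocation in the project tree.
[toolchain]
channel = "1.24.0"
```

Older projects used a bare `rust-toolchain` file containing just the channel string; either form lets you reproduce a build with the original compiler even while testing against a current one.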

Rust compiler support works differently

Posted Dec 16, 2025 14:48 UTC (Tue) by laarmen (subscriber, #63948) [Link]

Yes, most old code will compile without issues. AFAICT, the Rust compatibility story is one of the best out there. The only specific example that comes to mind right now is Rust 1.80 vs `time`, but that one was special in that it was actually detected and disregarded. However, I do remember reading several times in Rust changelogs things like "We detected XX crates that failed to compile with this change and worked with their authors to get a new, fixed version out".

Rust compiler support works differently

Posted Dec 16, 2025 15:03 UTC (Tue) by pizza (subscriber, #46) [Link] (2 responses)

> And from the standard dev perspective, both notions are fairly identical, as keeping your dependencies up to date is a common practice, and you don't have that many of them anyway.

..."hundreds" isn't "that many"?

(this anecdata is brought to you by every Rust-based thing I have deployed right now)

Rust compiler support works differently

Posted Dec 16, 2025 15:23 UTC (Tue) by laarmen (subscriber, #63948) [Link]

Right, my bad, I should have qualified: not that many *compared to the entirety of crates.io*, meaning the odds of one of them being both out of date and incompatible are fairly low.

Don't take this as an endorsement of the sprawling dep trees of some Rust projects :-)

To give some hard numbers, Rust 1.85 in Ubuntu vendors 672 crates. Since that directory contains dependencies for the entire toolchain, including cargo and rustdoc, I think they're on the conservative side of the ecosystem.
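For comparison with numbers like these, a project's own dependency count can be pulled from `cargo metadata`. The sketch below parses a hypothetical, heavily abbreviated metadata document instead of running cargo; the JSON shape loosely follows `cargo metadata --format-version 1`, and all names are invented for illustration.

```python
import json

# Hypothetical, abbreviated `cargo metadata` output; a real invocation
# would be run inside a Cargo project and produce far more fields.
metadata_json = """
{
  "packages": [
    {"name": "myapp", "version": "0.1.0"},
    {"name": "serde", "version": "1.0.210"},
    {"name": "libc", "version": "0.2.158"}
  ],
  "workspace_members": ["myapp 0.1.0 (path+file:///src/myapp)"]
}
"""

metadata = json.loads(metadata_json)
# Every package in the resolved graph, minus the workspace's own crates,
# approximates the "vendored crates" count discussed above.
workspace_names = {m.split()[0] for m in metadata["workspace_members"]}
deps = [p for p in metadata["packages"] if p["name"] not in workspace_names]
print(len(deps))  # prints 2: serde and libc
```

Run against a real tree, the same subtraction is what turns "672 vendored crates" into a count of external dependencies.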

Rust compiler support works differently

Posted Dec 17, 2025 9:02 UTC (Wed) by taladar (subscriber, #68407) [Link]

Hundreds of crates is probably still significantly less code than e.g. one Qt.

Rust compiler support works differently

Posted Dec 15, 2025 14:48 UTC (Mon) by iabervon (subscriber, #722) [Link] (1 responses)

I think the more plausible argument is that having a newer Rust compiler available but using the older one instead doesn't introduce any new bugs (or any new code that you're running), and building a new kernel with an old Rust compiler isn't any more stable than building a new kernel with a new Rust compiler. In the C userspace world, it's potentially a problem to build some things with a new compiler and other things with an old compiler, so you don't necessarily want to have a bunch of different versions available, but Rust likes to build all of the source that goes into an executable (including the standard library) when you build it, and the kernel does this too, even for C.

It's probably best not to rebuild python-cryptography and curl with a new Rust compiler release just because it exists, but that doesn't mean you can't build your kernel with a newer Rust compiler when that's what people are doing upstream.

Rust compiler support works differently

Posted Dec 15, 2025 14:59 UTC (Mon) by pizza (subscriber, #46) [Link]

> and building a new kernel with an old Rust compiler isn't any more stable than building a new kernel with a new Rust compiler.

...Unless said kernel relies upon "unstable" Rust features that are treated differently in different compiler revisions.

(And correct me if I'm wrong, but there are still a pile of unstable-isms in mainline R4L right now. In time that number will presumably continue to go down, but it's quite reasonable to expect it to occasionally go back up for new/optional Linux features.)

Wouldn't affect RHEL at least

Posted Dec 15, 2025 13:41 UTC (Mon) by salimma (subscriber, #34460) [Link] (1 responses)

> Dave Airlie worried that problems could appear once the enterprise distributors start enabling Rust; they could lock in an ancient version for a long time

Not sure how SLES and Ubuntu LTS are affected by this, but CentOS Stream / RHEL rebases Rust every six months, and thus all their downstreams do too.

Wouldn't affect RHEL at least

Posted Dec 15, 2025 19:11 UTC (Mon) by salimma (subscriber, #34460) [Link]

e.g. as of today

$ fedrq pkgs --src rust -b ubi9
rust-1.88.0-1.el9.src

$ fedrq pkgs --src rust -b ubi10
rust-1.88.0-1.el10.src

$ fedrq pkgs --src rust -b c9s
rust-1.91.0-1.el9.src

$ fedrq pkgs --src rust -b c10s
rust-1.91.0-1.el10.src

$ fedrq pkgs --src rust -b f42
rust-1.91.1-1.fc42.src

$ fedrq pkgs --src rust -b f43
rust-1.91.1-1.fc43.src

$ fedrq pkgs --src rust -b rawhide
rust-1.92.0-1.fc44.src

CVEs for experimental (rust) code ?

Posted Dec 23, 2025 16:07 UTC (Tue) by moltonel (subscriber, #45207) [Link] (3 responses)

I've seen a lot of disagreement on that topic, and no clear "official" statement.

Would a CVE filed against an experimental feature get rejected? Is the CVE assignment process systematic/exhaustive enough to catch all bug fixes? Is the main reason we have so few CVEs for the Rust experiment that experimental code is intentionally excluded? Is CVE-2025-38033 a counterexample?

CVEs for experimental (rust) code ?

Posted Dec 25, 2025 7:57 UTC (Thu) by gregkh (subscriber, #8) [Link] (2 responses)

> Would a CVE filed against an experimental feature get rejected?

No.

> Is the CVE assignment process systematic/exhaustive enough to catch all bug fixes?

We hope so, if we miss any, please let the developers at cve@k.o know.

> Is the main reason we have so few CVEs for the Rust experiment that experimental code is intentionally excluded?

No.

And as always, people can just _ask_ the kernel cve maintainers stuff like this if they are curious :)

CVEs for experimental (rust) code ?

Posted Dec 26, 2025 19:25 UTC (Fri) by moltonel (subscriber, #45207) [Link] (1 responses)

Thanks for these insights. Sorry for lazily asking here instead of on the cve ml.

So, what do you think is the reason we had so few RfL CVEs so far? Is that related to the code being considered "experimental"? Does CVE-2025-38033 count as a RfL CVE? It's hard to imagine that no bugs had been found yet; maybe the fixes were not backported to stable kernels?

CVEs for experimental (rust) code ?

Posted Dec 27, 2025 9:04 UTC (Sat) by gregkh (subscriber, #8) [Link]

> So, what do you think is the reason we had so few RfL CVEs so far?

Perhaps because there has not been much rust code that is used in the kernel before now? That's just my guess, could be wrong.

> Is that related to the code being considered "experimental"?

Again, no, the cve team does not use that as a criterion at all.

> Does CVE-2025-38033 count as a RfL CVE?

No idea, feel free to count it if you want to :)

> It's hard to imagine that no bugs had been found yet,

Take a look at the code changes over time to determine whether this is true or not; I do not know.


Copyright © 2025, Eklektix, Inc.
This article may be redistributed under the terms of the Creative Commons CC BY-SA 4.0 license
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds