
Rust compiler support works differently

Rust compiler support works differently

Posted Dec 15, 2025 10:00 UTC (Mon) by taladar (subscriber, #68407)
Parent article: The state of the kernel Rust experiment

I honestly don't understand why so many people completely ignore that the Rust compiler is built in such a way that (barring bugs) the latest version is supposed to compile any code that compiled on older versions, making this whole game of using old versions unnecessary.



Rust compiler support works differently

Posted Dec 15, 2025 10:44 UTC (Mon) by hailfinger (subscriber, #76962) [Link] (1 response)

But isn't this a case of raising the minimum required version of the Rust compiler, i.e. the other way round?
Sure, you can use a newer Rust compiler to compile code targeted for an older compiler version, but you can't use an older Rust compiler to compile code targeted for a newer compiler version simply because some features may not yet exist in the older version.
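For concreteness, Cargo lets a crate declare that minimum explicitly via the `rust-version` field (the "MSRV"), so an older toolchain refuses the crate up front rather than failing partway through a build. A sketch, with an invented crate name and example version numbers:

```toml
# Cargo.toml (illustrative sketch, not from any real crate)
[package]
name = "example-crate"
version = "0.1.0"
edition = "2021"
# Minimum Supported Rust Version: a 1.73 (or older) toolchain will refuse
# to build this crate with a clear error, while 1.74 and anything newer
# compiles it fine -- the one-way compatibility described above.
rust-version = "1.74"
```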

Rust compiler support works differently

Posted Dec 15, 2025 12:35 UTC (Mon) by taladar (subscriber, #68407) [Link]

Yes, but they are only raising it to a version that is already several versions outdated again.

Rust compiler support works differently

Posted Dec 15, 2025 10:49 UTC (Mon) by farnz (subscriber, #17727) [Link] (32 responses)

Because "stable" distros try not to update versions significantly after release. If you use RHEL, for example, the compiler gets locked down when RHEL releases, and is rarely changed in point releases (although there's usually a "toolset" package that lets you grab a newer release of the compiler). This is done so that you get a known set of bugs with a given release version - ideally monotonically decreasing as the stable distro gets fixes.

As a result, the kernel doesn't want to require a newer compiler than the stable distros, because the stable distro wants to stick to a known set of bugs, and not take the risk of new bugs coming in with the new compiler version. You are free to use newer compilers (e.g. while the kernel requires at least GCC 8.1, many kernels are built with newer GCC versions - the one I'm running right now is built with GCC 15.2, for example), but the kernel will not accept patches to C code that require a newer compiler version than GCC 8.1, because there's a significant stable distro still using 8.1.

Rust compiler support works differently

Posted Dec 15, 2025 12:37 UTC (Mon) by taladar (subscriber, #68407) [Link] (29 responses)

But the reason stable distros do that with other software is that most programs are not backwards compatible in the way the Rust compiler is.

They should just stop doing that; it is, quite frankly, annoying enough with existing software that has a huge version spread between the oldest LTS distro and the latest releases. And I say that as a sysadmin who runs all of those and needs to keep them running for customers.

Rust compiler support works differently

Posted Dec 15, 2025 12:50 UTC (Mon) by farnz (subscriber, #17727) [Link] (9 responses)

No: the reason stable distros do that is to minimize the chance that a new bug is added.

The goal is that for any x and y such that x < y, the bug list for RHEL M.y is equal to, or a subset of, the bug list for RHEL M.x, and thus that if I take your service that's running on RHEL M.x, at worst, I will find the same issues on RHEL M.y as you did on M.x, unless you depended on the OS underneath you being buggy.

This has nothing to do with backwards compatibility - your new release may be 100% compatible with the older version, and still fall foul of this policy unless it can guarantee that all bugs in the new release were also present in the older release.

Rust compiler support works differently

Posted Dec 15, 2025 17:30 UTC (Mon) by khim (subscriber, #9252) [Link] (1 responses)

> No: the reason stable distros do that is to minimize the chance that a new bug is added.

But why does that reasoning not stop them from bringing new code, which may need a new version of the compiler, into the picture?

That is what drives people mad: old, stable code is fine, and I tinker with old hardware and software regularly… but then use old tools to deal with it, too!

Why try to mix an old compiler and a new Linux kernel? This really makes no sense to me!

Rust compiler support works differently

Posted Dec 15, 2025 17:44 UTC (Mon) by farnz (subscriber, #17727) [Link]

The distro doesn't bring new code that needs new versions of the compiler into the picture - it locks down kernel version as well as everything else when it releases.

The kernel, however, wants people who find bugs in the distro kernel to be able to test the latest kernel, with possible fixes (and even patches on top of Linus's latest release), rather than forcing them to wait until the next distro release. And that's what drives keeping old compiler versions for the kernel - not the distros, but the kernel developers wanting to build on distros with older versions of compilers etc.

Rust compiler support works differently

Posted Dec 16, 2025 9:14 UTC (Tue) by taladar (subscriber, #68407) [Link] (6 responses)

The problem, though, is that stable distros pretend that backports are not just new versions of the code, made by people not very familiar with the code base and with a lot less testing than an upstream release.

RHEL is one of the worst distros in terms of bugs. I have literally had multiple occasions where the core package management tools had severe bugs that left me with duplicate installed RPMs after a transaction segfaulted right in the middle of installing an update. I have had tools that work perfectly fine on my unstable Gentoo through years of continuous updates to the latest upstream releases just break with repeated crashes.

This whole idea that the backporting process guarantees that no new bugs are introduced is just one of those lies people tell to management when they demand that something impossible has to be accomplished.

Rust compiler support works differently

Posted Dec 16, 2025 14:36 UTC (Tue) by pj (subscriber, #4506) [Link] (5 responses)

>This whole idea that the backporting process guarantees that no new bugs are introduced is just one of those lies people tell to management when they demand that something impossible has to be accomplished.

This. Why does management not understand that changing ('fixing') currently-broken code via a backported change ('fix') is at least as unstable as just upgrading? Except that by upgrading you get the benefits of all the other work that's gone into testing and fixing everything else as well, so you're much more likely to end up with a working system, and if you don't then at least it's a bug you can file against a standard version!

Rust compiler support works differently

Posted Dec 16, 2025 15:08 UTC (Tue) by pizza (subscriber, #46) [Link] (4 responses)

> This. Why does management not understand that changing ('fixing') currently-broken code via a backported change ('fix') is at least as unstable as just upgrading?

That assertion is trivially disproven by the vast difference in the size of the two diffs.

> Except that by upgrading you get the benefits of all the other work

You're conveniently discounting all of the downsides of all that other work.

Rust compiler support works differently

Posted Dec 17, 2025 9:04 UTC (Wed) by taladar (subscriber, #68407) [Link] (3 responses)

And you are conveniently discounting the opportunity cost of all that backporting work, which could go into something more long-term useful, e.g. adding more tests to the project's test suite.

Rust compiler support works differently

Posted Dec 17, 2025 13:13 UTC (Wed) by pizza (subscriber, #46) [Link] (2 responses)

> And you are conveniently discounting the opportunity cost of all that backporting work, which could go into something more long-term useful, e.g. adding more tests to the project's test suite.

Actually, I am not; I (and others here) are saying that different folks legitimately place different weights on the costs and benefits of each option, leading to different "optimal" outcomes given one's operational constraints (equipment/time/budget/regulations/etc).

...You act as if it is all costs for one option, but all benefits for the other.

Rust compiler support works differently

Posted Dec 18, 2025 9:09 UTC (Thu) by taladar (subscriber, #68407) [Link] (1 response)

As a programmer and sysadmin of many years, I am the one paying the costs of long-term support distros existing at all. That is why I consider it all costs: I see how bad the LTS distros are at their stability guarantees, and how many compromises we have to make by delaying adoption of solutions to so many issues because the oldest LTS distro still doesn't have support for them.

Rust compiler support works differently

Posted Dec 18, 2025 13:48 UTC (Thu) by pizza (subscriber, #46) [Link]

> As a programmer and sysadmin of many years, I am the one paying the costs of long-term support distros existing at all. That is why I consider it all costs: I see how bad the LTS distros are at their stability guarantees, and how many compromises we have to make by delaying adoption of solutions to so many issues because the oldest LTS distro still doesn't have support for them.

Sure, that's fine.

Except pose this same question to *users* of LTS distros and you'll get a very different answer.

Again, different folks place different weights on the relative importance of each of the costs and benefits.

(Personally I just ignore 'em unless there's $$$ (or a patch) attached to the problem report)

Rust compiler support works differently

Posted Dec 15, 2025 12:51 UTC (Mon) by pm215 (subscriber, #98099) [Link] (3 responses)

I think stable distros do that with other software not only because they don't want to take intentional new backwards incompatible changes, but also because they don't want to take the unintentional new bugs that inevitably sneak in with new versions of software. Even the best run project with the most careful backwards compatibility promises is still subject to adding new bugs when they write new code. This doesn't apply only to Rust, either -- nobody expects the "ls" interface to change incompatibly, but stable distros still stick with the version they shipped at release.

The promise of a stable distro release is that it will make minimal cautious changes, to minimise the risk of disrupting stuff you are using and relying on. If you'd prefer a faster updating "rolling" distro, those are also available, but I think there are good reasons why those are not the only kind.

Rust compiler support works differently

Posted Dec 16, 2025 9:17 UTC (Tue) by taladar (subscriber, #68407) [Link] (2 responses)

The problem is that this just doesn't work in practice. The LTS distros like RHEL are full of new bugs introduced by people who have to handle backports for dozens or sometimes hundreds of packages at the same time and who are just not very familiar with the way the code base works. Backports are new versions, just badly tested new versions.

Rust compiler support works differently

Posted Dec 16, 2025 16:36 UTC (Tue) by pm215 (subscriber, #98099) [Link] (1 response)

I guess my experience differs -- I've been using Debian and Ubuntu stable distros for a couple of decades and can't remember running into a bug that was introduced by a stable/security bugfix update.

Rust compiler support works differently

Posted Dec 19, 2025 0:15 UTC (Fri) by rgmoore (✭ supporter ✭, #75) [Link]

Some of this may depend on how long the distro promises to stay stable. The longer the stability promise, the more divergence there can be between the "stable" and current versions of the software and the harder backports get. That's going to put a practical limit on the length of any stability guarantee, presumably related to how well key upstream projects support older versions. A distribution that tries to handle its own kernel backports beyond what's supported by official LTS versions is setting itself up for problems.

Rust compiler support works differently

Posted Dec 15, 2025 13:03 UTC (Mon) by pizza (subscriber, #46) [Link] (1 response)

> But the reason stable distros do that with other software is that most programs are not backwards compatible in the way the Rust compiler is.

That only applies if your code (and every crate it pulls in) doesn't use any "unstable" Rust features.
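As an illustration of the distinction: an "unstable" feature has to be enabled with an explicit gate in the crate root, and such a crate will not build on any stable toolchain at all (error E0554), no matter how new. A non-runnable sketch; the gate below is the kind R4L has used, but the exact list varies by kernel version:

```rust
// Crate root of a nightly-only crate (sketch). On stable rustc this line
// alone aborts compilation with E0554 ("#![feature] may not be used on the
// stable release channel").
#![feature(allocator_api)]

// Code below the gate may rely on unstable items whose behavior can change
// between compiler releases, which is what undermines the assumption that
// the newest rustc compiles all older code.
```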

Rust compiler support works differently

Posted Dec 16, 2025 9:20 UTC (Tue) by taladar (subscriber, #68407) [Link]

Yes, it only applies if you actually use a released stable version. All the more reason to get the Linux kernel to a Rust version where all the features it needs are stabilized instead of applying some sort illusion of stability that will just lead to a huge jump from an old version of that unstable feature to whatever is in the version they update to years later.

Rust compiler support works differently

Posted Dec 15, 2025 13:32 UTC (Mon) by johill (subscriber, #25196) [Link] (6 responses)

"My/this software is so special so stable distros shouldn't bother being stable for it" is probably not an argument you'll ever win.

Rust compiler support works differently

Posted Dec 16, 2025 9:26 UTC (Tue) by taladar (subscriber, #68407) [Link]

You are likely correct; the snake-oil industry selling backports as some sort of panacea that provides security fixes and sometimes even features without risk of new bugs, despite being essentially just badly tested forks by people who have very little experience with the code base (because they are responsible for dozens or hundreds of packages), is just too entrenched at this point.

Rust compiler support works differently

Posted Dec 16, 2025 11:17 UTC (Tue) by farnz (subscriber, #17727) [Link] (4 responses)

Part of the problem is that we've not seriously re-evaluated the "stable distro" concept since moving from physical media as the primary distribution medium to network links.

The driving force for wanting "no new bugs" in patch releases is the time it takes from "bug discovered" to "bug fixed" in any piece of software. In the days of physical media distribution, the process of sending a fixed binary by mail meant that new bugs were a serious problem - you'd either have to accept downtime until the bug fix arrived in the mail, or you'd have to find a new workaround while you waited for the bug fix to arrive.

It's not clear to me that this is as important as it used to be - because I can get bug fixes over the network, turnaround has gone from days/weeks to hours/days. The question is whether we need the "no new bugs" promise, or whether a "regressions fixed quickly" promise would be of more value nowadays - higher churn, yes, but also rapid fixes of regressions.

Rust compiler support works differently

Posted Dec 19, 2025 0:38 UTC (Fri) by rgmoore (✭ supporter ✭, #75) [Link] (3 responses)

I think it depends a lot on how easily you can update your application software to match changes in the base system. If you have full control over your applications- they're FOSS or developed in-house- it probably makes sense to stay close to the cutting edge. If you can't easily change your application software, it's a lot more important to make sure your base isn't making incompatible changes.

There are plenty of reasons why someone might not be in complete control of their application. Some people are still using proprietary software provided as binary only, and they have to live with what the vendor provides them. That can cause all kinds of problems. Sometimes the vendor doesn't want to provide updates (or charges exorbitant prices for them) or has even gone out of business, and they have to try to keep the old software running as long as possible. Sometimes the vendor does provide regular updates, but they include incompatible changes that break existing workflows, so users want to keep updates few and far between. Sometimes a regulatory agency will require a costly and time-consuming validation process any time the software is updated, so sticking with an old version saves a lot of labor.

Rust compiler support works differently

Posted Dec 19, 2025 5:06 UTC (Fri) by raven667 (subscriber, #5198) [Link] (1 response)

In my experience with very small teams or single developers, the maintenance cost of retesting and fixing your own in-house software when the platform changes under it is not insignificant, even without a regulatory body demanding thorough validation, and not every org is willing to take a significant pause on feature development or bugfixes to perform platform rebasing on an irregular and inconvenient schedule. For some it's better to plan on doing that every 5-10 years, when you are retiring hardware and need to migrate to a new OS anyway, while you have the two stacks overlapping in parallel to do your integration testing and the org has time blocked out for migration, than to constantly be chasing outages because various leaf libraries intentionally or accidentally changed behavior when you perform regular updates. Sure, this is a solvable problem if you have a QA environment and a full test suite, but that's an extravagant time expense for a small shop with a handful of developers and few change coordination/communication needs; it directly competes for time with other priorities in a way that larger teams can spread out. Larger teams also need more formality in process, CI, testing, etc., because not everyone can understand all the changes going on and the communication overhead is too high, so spending time on automated testing is more useful there than on solo or partner projects.

So I understand the desires which drive LTS/Enterprise distros to freeze everything. Outside of the kernel, glibc and a few other core utilities, there isn't the kind of ABI discipline which would be required to make updating blindly safe; not because all the developers who make the OS great are incompetent, but because many are volunteers, and it's an unreasonable expectation that the upstream for every tool and library which distro-makers scavenge off the Internet should be held to the same standard for ABI breaks. The distro-makers solve this problem by freezing and carefully patching to minimize, but not eliminate, disruption (thanks, perl-DBD-MySQL update in RHEL 9.7). As for people who use rolling distros for production: how much do they have to retest and fix when something changes which affects their workload? It's probably not zero, even if they are only using software packaged with the OS and not using it as a platform for their own in-house tools.

Rust compiler support works differently

Posted Dec 19, 2025 10:17 UTC (Fri) by farnz (subscriber, #17727) [Link]

On three different occasions, with very different in-house software, and total developers at the company ranging from 3 (about half the employees) to over 10,000 developers, I've pushed us to rebase onto newer platforms faster.

My lived experience in all three cases is that the closer you hew to the upstream platform, the less total time you spend on platform rebasing - if you rebase onto "current" every week, your experience gets better than if you rebase every 3 years, for multiple reasons:

  • If you catch a change early enough (e.g. one I hit with NetworkManager looking into switching from kernel PPPoE to userspace PPPoE, breaking my use of baby jumbos on PPPoE), you can avoid reverts of that change being seen as a regression - which means that you don't get stuck in the "can't fix you, because that'll regress this other user" concern, and you're not fighting to find something that fixes the bug for both you and the person who might regress.
  • You don't end up dealing with the same bug twice (or more); if there's something broken in the platform, instead of first building a workaround in your code, then rebasing onto the new platform, then possibly removing the workaround because the bug it works around is fixed and it's now causing new problems, upstream is willing to work with you to directly fix the next platform release - so you spend (IME) about the same effort as you'd have spent building a workaround in your code to instead fix upstream and rebase onto the newer platform with your fix in it.
  • (more intangible) You get "free" improvements in your code, as upstream do their thing; for example, something gets faster because upstream found a place where they were leaving performance on the table, or you're preparing to implement a feature, and you find that upstream has recently added utilities that make that feature easier to implement.
  • You learn what's planned upstream, and you're in a position to raise concerns early - if upstream is making critical changes to your database connector, you can speak up and say "I know that this feature is a pain and you have good reasons to remove it, but this is what I'm using it for" - and upstream might help you stop using that feature, or might grudgingly agree to keep it because it's more useful than they thought.

But this is a major attitude shift from the traditional LTS/Enterprise thinking (that you've very clearly described) - and it is only possible because you can directly communicate with upstream developers between releases, testing things that they're doing (so it's also not possible with proprietary dependencies). It's also terrifying if you're expecting each platform rebase to be as costly as the big platform rebases you do today, and it takes time to show the benefits (since at the point where you'd schedule 6 months to rebase, you realise that you've spent an aggregate of 3 months since the last big rebase on rebasing to newer platforms).

Rust compiler support works differently

Posted Dec 19, 2025 9:23 UTC (Fri) by farnz (subscriber, #17727) [Link]

The thing is that "no new bugs" is not the only way to get "no incompatible changes"; a "fix regressions fast" policy also gets to the same point by fixing any changes that cause users problems.

And "fix regressions fast" so that users stay on the latest versions of your software has a serious advantage; by definition, it's easier to locate which change in the last week could have caused a bug than to locate which change in the last 15 years. Users don't stay put forever - someone who was using RHEL6 up until its end of support in 2024, waited (unsupported) for RHEL10, and is now on RHEL10 is going to want everything that (from their point of view) broke between RHEL6 and RHEL10 fixed. But that's a huge amount to deal with in one go - it's "regressions" in everything that's part of the base OS, and it's nearly 14 years worth of changes to look through and determine what's gone wrong.

Worse, because RHEL7, 8 and 9 were in the middle, it may be impossible to fix RHEL10 to suit someone who's just migrated from RHEL6, because there is no way to fix the "regression" between RHEL6 and RHEL10 without causing a regression between RHEL7 (or RHEL8, or RHEL9) and RHEL10. The only way this works is if you accept a time limit on support (as RHEL enforces), and understand that you will have to recertify/buy an update/find a new vendor who's still trading when the support window expires; but I'm not convinced that "big panic, maybe business-ending" every 14 years is better than "small panic every day", and while I've seen "big panic, maybe business-ending" from LTS distros such as Debian (Sarge to Etch broke stuff badly at my then job, and we couldn't have the regression undone since it was already relied upon by other users, but had to update our stuff), I'm also not convinced that frequent updates is actually "small panic every day" - it might well be "small panic every month, as upstream add your use case to their regression tests".

Rust compiler support works differently

Posted Dec 16, 2025 14:29 UTC (Tue) by laarmen (subscriber, #63948) [Link] (5 responses)

The whole "newer rustc is compatible with older code" isn't actually true. In practice, what holds is that at the time of its release, a new version of rustc will be able to compile the latest version of every crate on crates.io, as demonstrated by their crater runs. That's an impressive feat, to be sure. And from the standard dev perspective, the two notions are practically equivalent, as keeping your dependencies up to date is common practice, and you don't have that many of them anyway.

Your LTS distro packages a *lot* of crates, mostly outdated, and you can't update them easily. In that setting, bumping the version of the default Rust compiler suddenly becomes much riskier.

Rust compiler support works differently

Posted Dec 16, 2025 14:34 UTC (Tue) by farnz (subscriber, #17727) [Link] (1 response)

I've dug out some rather old code (written for 1.24, based on the rust-toolchain file), and it still builds quite happily with current nightly.

Do you have examples of crates that were written for older versions of rustc, didn't use unstable features, but that don't work with current rustc? I'm curious to see what's actually broken - I know about the changes to std::env::set_var, for example, and I'm wondering what else has broken over the last decade or so.
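For reference, the kind of pinning the `rust-toolchain` file provides can also be written in the newer `rust-toolchain.toml` form; a sketch, with the version used purely as an example:

```toml
# rust-toolchain.toml, placed at the crate root (sketch).
# rustup reads this file and transparently installs/uses the pinned
# toolchain for every cargo/rustc invocation in this directory, so old
# code keeps being built with the compiler it was written against.
[toolchain]
channel = "1.24.0"
```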

Rust compiler support works differently

Posted Dec 16, 2025 14:48 UTC (Tue) by laarmen (subscriber, #63948) [Link]

Yes, most old code will compile without issues. AFAICT, the Rust compatibility story is one of the best out there. The only specific example that comes to mind right now is Rust 1.80 vs `time`, but that one was special in that it was actually detected and disregarded. However, I do remember reading several times in Rust changelogs things like "We detected XX crates that failed to compile with this change and worked with their authors to get a new, fixed version out".
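The usual shape of those breakages is type inference: code that was unambiguous because only one trait impl existed stops compiling once the standard library adds a second one. A self-contained sketch with invented names (this mirrors the general pattern of the Rust 1.80 / `time` incident, not the actual code involved):

```rust
// Illustrative only: how adding a trait impl upstream can break
// previously valid type inference downstream.

struct Wrapper<T>(T);

trait IntoWrapper<T> {
    fn into_wrapper(self) -> Wrapper<T>;
}

// Today, &str has exactly one way to become a Wrapper:
impl IntoWrapper<String> for &str {
    fn into_wrapper(self) -> Wrapper<String> {
        Wrapper(self.to_string())
    }
}

fn main() {
    // Downstream leaves a hole for inference to fill; with a single impl,
    // `_` can only be String, so this compiles.
    let w: Wrapper<_> = "hi".into_wrapper();
    assert_eq!(w.0, "hi");

    // If a later library release added, say:
    //     impl IntoWrapper<Box<str>> for &str { ... }
    // then `Wrapper<_>` above would no longer pin a unique impl, and the
    // call site would fail with "type annotations needed" -- even though
    // no existing API was changed or removed.
    println!("ok");
}
```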

Rust compiler support works differently

Posted Dec 16, 2025 15:03 UTC (Tue) by pizza (subscriber, #46) [Link] (2 responses)

> And from the standard dev perspective, both notions are fairly identical, as keeping your dependencies up to date is a common practice, and you don't have that many of them anyway.

...."hundreds" isn't "that many" ?

(this anecdata is brought to you by every Rust-based thing I have deployed right now)

Rust compiler support works differently

Posted Dec 16, 2025 15:23 UTC (Tue) by laarmen (subscriber, #63948) [Link]

Right, my bad, I should have qualified: not that many *compared to the entirety of crates.io*, meaning the odds of one of them being both out of date and incompatible are fairly low.

Don't take this as an endorsement of the sprawling dep trees of some Rust projects :-)

To give some hard numbers, Rust 1.85 in Ubuntu vendors 672 crates. Since that directory contains dependencies for the entire toolchain, including cargo and rustdoc, I think they're on the conservative side of the ecosystem.

Rust compiler support works differently

Posted Dec 17, 2025 9:02 UTC (Wed) by taladar (subscriber, #68407) [Link]

Hundreds of crates is probably still significantly less code than e.g. one Qt.

Rust compiler support works differently

Posted Dec 15, 2025 14:48 UTC (Mon) by iabervon (subscriber, #722) [Link] (1 response)

I think the more plausible argument is that having a newer Rust compiler available but using the older one instead doesn't introduce any new bugs (or any new code that you're running), and building a new kernel with an old Rust compiler isn't any more stable than building a new kernel with a new Rust compiler. In the C userspace world, it's potentially a problem to build some things with a new compiler and other things with an old compiler, so you don't necessarily want to have a bunch of different versions available, but Rust likes to build all of the source that goes into an executable (including the standard library) when you build it, and the kernel does this too, even for C.

It's probably best not to rebuild python-cryptography and curl with a new Rust compiler release just because it exists, but that doesn't mean you can't build your kernel with a newer Rust compiler when that's what people are doing upstream.

Rust compiler support works differently

Posted Dec 15, 2025 14:59 UTC (Mon) by pizza (subscriber, #46) [Link]

> and building a new kernel with an old Rust compiler isn't any more stable than building a new kernel with a new Rust compiler.

...Unless said kernel relies upon "unstable" Rust features that are treated differently in different compiler revisions.

(And correct me if I'm wrong, but there are still a pile of unstable-isms in mainline R4L right now. In time that number will presumably continue to go down, but it's quite reasonable to expect it to occasionally go back up for new/optional Linux features.)


Copyright © 2026, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds