
Rust compiler support works differently

Posted Dec 19, 2025 0:38 UTC (Fri) by rgmoore (✭ supporter ✭, #75)
In reply to: Rust compiler support works differently by farnz
Parent article: The state of the kernel Rust experiment

I think it depends a lot on how easily you can update your application software to match changes in the base system. If you have full control over your applications (they're FOSS or developed in-house), it probably makes sense to stay close to the cutting edge. If you can't easily change your application software, it's a lot more important to make sure your base isn't making incompatible changes.

There are plenty of reasons why someone might not be in complete control of their application. Some people are still using proprietary software provided as binaries only, and they have to live with what the vendor provides them. That can cause all kinds of problems. Sometimes the vendor doesn't want to provide updates (or charges exorbitant prices for them) or has even gone out of business, and users have to try to keep the old software running as long as possible. Sometimes the vendor does provide regular updates, but they include incompatible changes that break existing workflows, so users want to keep updates few and far between. Sometimes a regulatory agency will require a costly and time-consuming validation process any time the software is updated, so sticking with an old version saves a lot of labor.



Rust compiler support works differently

Posted Dec 19, 2025 5:06 UTC (Fri) by raven667 (subscriber, #5198)

In my experience with very small teams or single developers, the maintenance cost of retesting and fixing your own in-house software when the platform changes under it is not insignificant, even without a regulatory body demanding thorough validation, and not every org is willing to take a significant pause on feature development or bugfixes to perform platform rebasing on an irregular and inconvenient schedule. For some it's better to plan on doing that every 5-10 years, when you're retiring hardware and need to migrate to a new OS anyway, so you have the two stacks overlapping in parallel for integration testing and the org has time blocked out for the migration, than to constantly be chasing outages because various leaf libraries intentionally or accidentally changed behavior when you perform regular updates.

Sure, this is a solvable problem if you have a QA environment and a full test suite, but that's an extravagant time expense for a small shop with a handful of developers and few change coordination/communication needs; it directly competes for time with other priorities in a way that larger teams can spread out. Larger teams also need more formality in process, CI, testing, etc., because not everyone can understand all the changes going on and the communication overhead is too high, so spending time on automated testing is more useful there than on solo or partner projects.

So I understand the desires which drive LTS/Enterprise distros to freeze everything. Outside of the kernel, glibc and a few other core utilities, there isn't the kind of ABI discipline which would be required to make updating blindly safe; not because the developers who make the OS great are incompetent, but because many are volunteers, and it's an unreasonable expectation that the upstream for every tool and library which distro-makers scavenge off the Internet be held to the same standard for ABI breaks. The distro-makers solve this problem by freezing and carefully patching to minimize, but not eliminate (thanks, perl-DBD-MySQL update in RHEL9.7), disruption. As for people who use rolling distros for production: how much do they have to retest and fix when something changes which affects their workload? It's probably not zero, even if they are only using software packaged with the OS and not using it as a platform for their own in-house tools.

Rust compiler support works differently

Posted Dec 19, 2025 10:17 UTC (Fri) by farnz (subscriber, #17727)

On three different occasions, with very different in-house software, and with total developer headcount at the company ranging from 3 (about half the employees) to over 10,000, I've pushed us to rebase onto newer platforms faster.

My lived experience in all three cases is that the closer you hew to the upstream platform, the less total time you spend on platform rebasing - if you rebase onto "current" every week, your experience gets better than if you rebase every 3 years, for multiple reasons:

  • If you catch a change early enough (e.g. one I hit with NetworkManager looking into switching from kernel PPPoE to userspace PPPoE, breaking my use of baby jumbos on PPPoE), it can still be reverted or reworked without that being treated as a regression for anyone else - which means you don't get stuck in the "can't fix you, because that'll regress this other user" trap, and you're not fighting to find something that fixes the bug for both you and the person who might regress.
  • You don't end up dealing with the same bug twice (or more). If something is broken in the platform, then instead of first building a workaround in your code, then rebasing onto the new platform, then possibly removing the workaround because the bug it papered over is fixed and the workaround now causes new problems, you find that upstream is willing to work with you to fix the bug directly in the next platform release - so you spend (IME) about the same effort fixing upstream and rebasing onto the newer platform with your fix in it as you'd have spent building a workaround in your own code.
  • (more intangible) You get "free" improvements in your code, as upstream do their thing; for example, something gets faster because upstream found a place where they were leaving performance on the table, or you're preparing to implement a feature, and you find that upstream has recently added utilities that make that feature easier to implement.
  • You learn what's planned upstream, and you're in a position to raise concerns early - if upstream is making critical changes to your database connector, you can speak up and say "I know that this feature is a pain and you have good reasons to remove it, but this is what I'm using it for" - and upstream might help you stop using that feature, or might grudgingly agree to keep it because it's more useful than they thought.
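
To make "rebase onto 'current' every week" concrete, here is a minimal sketch of one way to enforce that cadence in CI, assuming the dependency being tracked is a PyPI package; the package name, the pin, and the seven-day threshold are all illustrative, not anything from the cases described above:

    import json
    import sys
    import urllib.request
    from datetime import datetime, timedelta, timezone

    PACKAGE = "requests"     # stand-in dependency, purely for illustration
    PINNED = "2.31.0"        # the version the project is currently pinned to
    MAX_DRIFT = timedelta(days=7)

    # PyPI's JSON API lists every release along with upload timestamps.
    with urllib.request.urlopen(f"https://pypi.org/pypi/{PACKAGE}/json") as resp:
        data = json.load(resp)

    latest = data["info"]["version"]
    if latest != PINNED:
        # Sketch only: assumes the latest release has at least one uploaded file.
        uploaded = datetime.fromisoformat(
            data["releases"][latest][0]["upload_time_iso_8601"].replace("Z", "+00:00")
        )
        # Fail the build once the pin has drifted more than a week behind.
        if datetime.now(timezone.utc) - uploaded > MAX_DRIFT:
            print(f"{PACKAGE} {PINNED} is over a week behind {latest}; rebase now")
            sys.exit(1)
    print(f"{PACKAGE} pin is within policy")

The point of failing the build rather than merely warning is that it forces the small, frequent rebase to happen on schedule instead of accumulating into the big one.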

But this is a major attitude shift from the traditional LTS/Enterprise thinking (which you've very clearly described) - and it is only possible because you can communicate directly with upstream developers between releases, testing things as they're being done (so it's also not possible with proprietary dependencies). It's also terrifying if you're expecting each platform rebase to be as costly as the big platform rebases you do today, and it takes time to show the benefits (since at the point where you'd schedule 6 months to rebase, you realise that you've spent an aggregate of 3 months since the last big rebase on rebasing to newer platforms).

Rust compiler support works differently

Posted Dec 19, 2025 9:23 UTC (Fri) by farnz (subscriber, #17727)

The thing is that "no new bugs" is not the only way to get "no incompatible changes"; a "fix regressions fast" policy gets you to the same place by quickly fixing any changes that cause users problems.

And "fix regressions fast" so that users stay on the latest versions of your software has a serious advantage; by definition, it's easier to locate which change in the last week could have caused a bug than to locate which change in the last 15 years. Users don't stay put forever - someone who was using RHEL6 up until its end of support in 2024, waited (unsupported) for RHEL10, and is now on RHEL10 is going to want everything that (from their point of view) broke between RHEL6 and RHEL10 fixed. But that's a huge amount to deal with in one go - it's "regressions" in everything that's part of the base OS, and it's nearly 14 years worth of changes to look through and determine what's gone wrong.

Worse, because RHEL7, 8 and 9 were in the middle, it may be impossible to fix RHEL10 to suit someone who's just migrated from RHEL6, because there is no way to fix the "regression" between RHEL6 and RHEL10 without causing a regression between RHEL7 (or RHEL8, or RHEL9) and RHEL10. The only way this works is if you accept a time limit on support (as RHEL enforces), and understand that you will have to recertify, buy an update, or find a new vendor who's still trading when the support window expires. But I'm not convinced that "big panic, maybe business-ending" every 14 years is better than "small panic every day"; and while I've seen the "big panic, maybe business-ending" from LTS distros such as Debian (Sarge to Etch broke stuff badly at my then job, and we couldn't have the regression undone since other users already relied on the new behaviour, so we had to update our stuff), I'm also not convinced that frequent updates actually mean "small panic every day" - it might well be "small panic every month, as upstream add your use case to their regression tests".

