
Rust compiler support works differently

Posted Dec 19, 2025 5:06 UTC (Fri) by raven667 (subscriber, #5198)
In reply to: Rust compiler support works differently by rgmoore
Parent article: The state of the kernel Rust experiment

In my experience with very small teams or single developers, the maintenance cost of retesting and fixing your own in-house software when the platform changes under it is not insignificant, even without a regulatory body demanding thorough validation, and not every org is willing to take a significant pause on feature development or bugfixes to perform platform rebasing on an irregular and inconvenient schedule. For some it's better to plan on doing that every 5-10 years, when retiring hardware forces a migration to a new OS anyway: you have the two stacks overlapping in parallel to do your integration testing, and the org has time blocked out for the migration. The alternative is constantly chasing outages because various leaf libraries intentionally or accidentally changed behavior when you perform regular updates. Sure, this is a solvable problem if you have a QA environment and a full test suite, but that's an extravagant time expense for a small shop with a handful of developers and few change-coordination/communication needs; it directly competes for time with other priorities in a way that larger teams can spread out. Larger teams also need more formality in process, CI, testing, etc., because not everyone can understand all the changes going on and the communication overhead is too high, so spending time on automated testing is more useful there than on solo or partner projects.

So I understand the desires which drive LTS/Enterprise distros to freeze everything. Outside of the kernel, glibc, and a few other core utilities there isn't the kind of ABI discipline which would be required to make updating blindly safe; not because all the developers who make the OS great are incompetent, but because many are volunteers, and it's an unreasonable expectation that the upstream for every tool and library which distro-makers scavenge off the Internet should be held to the same standard for ABI breaks. The distro-makers solve this problem by freezing and carefully patching to minimize, but not eliminate (thanks, perl-DBD-MySQL update in RHEL 9.7), disruption. As for people who use rolling distros for production: how much do they have to retest and fix when something changes which affects their workload? It's probably not zero, even if they are only using software packaged with the OS and not using it as a platform for their own in-house tools.



Rust compiler support works differently

Posted Dec 19, 2025 10:17 UTC (Fri) by farnz (subscriber, #17727)

On three different occasions, with very different in-house software, and with developer headcounts at the company ranging from 3 (about half the employees) to over 10,000, I've pushed us to rebase onto newer platforms faster.

My lived experience in all three cases is that the closer you hew to the upstream platform, the less total time you spend on platform rebasing - if you rebase onto "current" every week, your experience gets better than if you rebase every 3 years, for multiple reasons:

  • If you catch a change early enough (e.g. one I hit with NetworkManager looking into switching from kernel PPPoE to userspace PPPoE, breaking my use of baby jumbos on PPPoE), it can be reverted before other users come to depend on the new behaviour - which means that you don't get stuck in the "can't fix you, because that'll regress this other user" trap, fighting to find something that fixes the bug for both you and the person who might regress.
  • You don't end up dealing with the same bug twice (or more). When something is broken in the platform, the traditional path is to build a workaround in your code, later rebase onto the new platform, then possibly remove the workaround because the bug it works around is fixed and the workaround now causes new problems. If you're close to upstream instead, they're willing to work with you to fix the next platform release directly - so you spend (IME) about the same effort as you'd have spent building the workaround, but on fixing upstream and rebasing onto the newer platform with your fix in it.
  • (more intangible) You get "free" improvements in your code as upstream do their thing; for example, something gets faster because upstream found a place where they were leaving performance on the table, or you're preparing to implement a feature and find that upstream has recently added utilities that make it easier to implement.
  • You learn what's planned upstream, and you're in a position to raise concerns early - if upstream is making critical changes to your database connector, you can speak up and say "I know that this feature is a pain and you have good reasons to remove it, but this is what I'm using it for" - and upstream might help you stop using that feature, or might grudgingly agree to keep it because it's more useful than they thought.

But this is a major attitude shift from the traditional LTS/Enterprise thinking (which you've very clearly described), and it is only possible because you can communicate directly with upstream developers between releases, testing things that they're doing - which also means it's not possible with proprietary dependencies. It's also terrifying if you're expecting each platform rebase to be as costly as the big platform rebases you do today, and it takes time to show the benefits (since at the point where you'd schedule 6 months for a rebase, you realise that you've spent an aggregate of only 3 months since the last big rebase on rebasing to newer platforms).
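The "rebase weekly" cadence described above is cheap to automate as a canary job. A minimal sketch in Python - not anything from the comments, and the git remote name, the make check target, and the seven-day policy are all assumptions for illustration:

```python
from datetime import date, timedelta
import subprocess

# Hypothetical policy: rebase onto upstream "current" at least weekly.
MAX_AGE = timedelta(days=7)

def rebase_overdue(last_rebase: date, today: date,
                   max_age: timedelta = MAX_AGE) -> bool:
    """True when the pinned platform is older than the policy allows."""
    return today - last_rebase > max_age

def weekly_canary() -> int:
    """Try building and testing against the current upstream platform.

    The commands are placeholders; a real job would also record the exact
    upstream revision it tested, so that failures are reproducible and
    easy to bisect while the offending change is still fresh.
    """
    steps = [
        ["git", "fetch", "upstream"],        # assumed remote name
        ["git", "rebase", "upstream/main"],  # move onto "current"
        ["make", "check"],                   # project's own test suite
    ]
    for cmd in steps:
        result = subprocess.run(cmd)
        if result.returncode != 0:
            # Report immediately: a week-old breakage is something upstream
            # can still discuss and revert; a three-year pile-up is not.
            return result.returncode
    return 0
```

Run from cron or CI, the job only does real work when `rebase_overdue` fires, which keeps the cost of each individual rebase small - the point farnz makes about aggregate effort.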


Copyright © 2026, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds