
MSRV-aware resolver

Posted Jan 10, 2025 8:58 UTC (Fri) by josh (subscriber, #17465)
Parent article: Rust 1.84.0 released

The part I'm most excited about:

> 1.84.0 stabilizes the minimum supported Rust version (MSRV) aware resolver, which prefers dependency versions compatible with the project's declared MSRV. With MSRV-aware version selection, the toil is reduced for maintainers to support older toolchains by not needing to manually select older versions for each dependency.

This means that library crates can freely adopt the latest stable version, and people running older Rust toolchains will get matching older versions of the library. This reduces the tension in the ecosystem between people using the latest features and people running older toolchains.

I'm always happy when we can do things like this to help improve social situations in the ecosystem.



MSRV-aware resolver

Posted Jan 10, 2025 9:23 UTC (Fri) by zdzichu (subscriber, #17118) [Link] (18 responses)

How does backporting of security fixes work with Rust crates? Are old versions kept on separate branches, and do they receive updates after a newer version is available?

MSRV-aware resolver

Posted Jan 10, 2025 10:03 UTC (Fri) by DOT (subscriber, #58786) [Link]

That's up to the maintainer of each project. As you may expect with the long tail of open source work: most repos are dead, and for those that are alive, usually only the most recent version receives updates. Exceptions exist for the more popular projects that have more development resources behind them.

MSRV-aware resolver

Posted Jan 10, 2025 11:27 UTC (Fri) by farnz (subscriber, #17727) [Link] (16 responses)

Backporting security fixes is up to individual crates; I can publish multiple versions in parallel, and I can choose what my policy is on backporting fixes. Cargo, as used normally, will follow semver when choosing the right crate version to fulfil a dependency (the major version must be equal; the minor version must be greater than or equal to the one requested; and if the minor version is equal to the one requested, the patch level must be greater than or equal to the one requested). I can use tilde or equals requirements to get more stable behaviour, at the cost of more work for me.
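
For concreteness, a minimal sketch of the three requirement styles in a Cargo.toml dependency table (crate names and version numbers are just illustrative):

    [dependencies]
    # Caret requirement (the default): "4.3" means ^4.3, i.e. any
    # semver-compatible version >= 4.3.0 and < 5.0.0.
    clap = "4.3"
    # Tilde requirement: patch-level updates only, i.e. >= 1.0.5 and < 1.1.0.
    serde = "~1.0.5"
    # Exact requirement: pin one specific version; most predictable,
    # most work to keep current.
    regex = "=1.10.2"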

However, previously it was hard work publishing security fixes to older minor versions while maintaining an MSRV; if I said my MSRV was 1.64, and I took a dependency on clap = ^4.0 (which I can spell in my Cargo.toml as clap = "4.0", since semver resolution is the default), Cargo would happily give me clap 4.5.26, which has an MSRV of 1.74, and I could thus end up accidentally requiring features from clap 4.5.26, rendering my MSRV claim worthless. Not only does Cargo naturally choose a newer version, but my consumers can't force Cargo to unify on 4.3.23 by asking for clap = "=4.3.23" in their Cargo.toml. I'd have to manually identify this problem, and change the dependency specification to clap = ">=4.0, <4.3.24" to maintain my MSRV.

With the new resolver, the fact that I've declared an MSRV of 1.64 means that newer versions of clap will not be considered by Cargo, and I won't accidentally find myself with a problematic MSRV; if I need/want fixes that are in clap = 4.3.24 or later, I can now take a PR to the clap maintainers backporting fixes and reducing the MSRV on a 4.3 branch (or in a clap-4.3 repo) and ask them to publish 4.3.ff (where ff > 24) for me.
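
As a rough sketch of what that looks like in a crate's Cargo.toml (a hypothetical crate; assuming Cargo 1.84 or later, where the rust-version field declares the MSRV and resolver = "3" opts in to MSRV-aware resolution):

    [package]
    name = "my-crate"        # hypothetical
    version = "0.1.0"
    edition = "2021"
    rust-version = "1.64"    # declared MSRV
    resolver = "3"           # MSRV-aware resolver, available from Cargo 1.84

    [dependencies]
    # Cargo now prefers the newest 4.x whose own rust-version is compatible
    # with 1.64 (in this thread's example, 4.3.23 rather than a 4.5.x that
    # needs 1.74), instead of simply the newest 4.x.
    clap = "4.0"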

This now makes it reasonable for crate authors to bump the minor version whenever they increase their MSRV, leaving the remainder of that minor version series as space for backported fixes. We shall have to see how the ecosystem reacts to this; do we start treating an increase in MSRV as warranting a minor version bump, or is it still something that people do within a minor version series?

MSRV-aware resolver

Posted Jan 10, 2025 12:33 UTC (Fri) by zdzichu (subscriber, #17118) [Link] (13 responses)

Thanks for the detailed reply!

MSRV-aware resolver

Posted Jan 13, 2025 12:20 UTC (Mon) by taladar (subscriber, #68407) [Link] (12 responses)

It should also be noted that Rust crate maintainers might be less likely to consider support for older Rust versions important enough to bother maintaining branches just for that, since the latest stable Rust compiler should always (barring bugs) be able to compile all old versions too, unlike some other languages where new compilers or runtimes fail to handle old code correctly.

MSRV-aware resolver

Posted Jan 13, 2025 12:59 UTC (Mon) by josh (subscriber, #17465) [Link] (11 responses)

Exactly. And that means the only people likely to maintain older branches are those who care deeply about support for older Rust, or those who believe there's value in backporting fixes to a version that doesn't get new features. Which, as always, has the trade-off that fewer people are using and testing that combination.

MSRV-aware resolver

Posted Jan 13, 2025 16:57 UTC (Mon) by raven667 (subscriber, #5198) [Link] (10 responses)

Culture and technology interact; the effort the Rust language team puts into supporting old stable editions of the language on the current compiler interacts with library authors and which edition(s) of the language they commonly support. I'm not familiar with Rust operationally, and it's Monday so the brain hasn't swapped back in yet, but it sounds like you can compile projects where libraries specify different language versions into the same process. Is that right? Is there some documented way code interacts when different parts use newer language features? Presumably function arguments and return values are the boundary, and new language features can't have side effects that would affect old code. I think we get the idea that you always need to run the latest supported version of the compiler, but IIUC code doesn't necessarily need to be ported to new versions immediately (unlike, say, Python); it's more like the Linux kernel or glibc versioned symbols. Maybe I just need to wait for the coffee to kick in...

MSRV-aware resolver

Posted Jan 13, 2025 17:01 UTC (Mon) by josh (subscriber, #17465) [Link]

That's accurate. Every three years, we have a new "edition" of the language, which is allowed to make incompatible changes. (Even those changes are generally evolutionary rather than massively disruptive.) But you can build one project that incorporates libraries from different editions; we even have support for applying macros from one edition to code from another. So, you are never *required* to forward-port to a new edition. And within an edition, the language remains stable; we have a system called "crater" for testing proposed changes with every publicly extant piece of Rust code we can find: every crate and every public repository.
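
As a rough illustration (hypothetical crate names), each crate declares its own edition in its own Cargo.toml, and crates on different editions build together into the same program:

    # Cargo.toml of a library still on the 2015 edition
    [package]
    name = "old-lib"         # hypothetical
    version = "1.0.0"
    edition = "2015"

    # Cargo.toml of an application on the 2021 edition that uses it
    [package]
    name = "new-app"         # hypothetical
    version = "0.1.0"
    edition = "2021"

    [dependencies]
    old-lib = "1.0"          # compiled with its own edition; both link into one binary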

MSRV-aware resolver

Posted Jan 13, 2025 17:11 UTC (Mon) by farnz (subscriber, #17727) [Link] (8 responses)

There are three broad classes of possible changes in stable Rust (my terms, not official language):

  1. Compatible changes, where the worst that happens is that an older compiler rejects the code (fails to compile), while a newer compiler can handle it.
  2. Tightening of rules, where the worst that happens is that an older compiler accepts the code, but a newer compiler rejects it.
  3. Breaking changes, where the worst that happens is that an older compiler and a newer compiler see different meanings in the same code.

Compatible changes are always allowed, and the only way to ensure that you've not been caught out by a compatible change is to build the code with an older compiler; you can, for example, use the new strict provenance functions (stabilised in Rust 1.84) in code that claims to be Edition 2015.

Tightening of rules requires an edition change - e.g. in Rust 2015, async was not a keyword, and you could use it as a variable name; in Rust 2018 and later, it's a keyword and you can't do that. However, using an older edition with a newer compiler will still let you use compatible changes from the newer compiler.

Breaking changes are rare, and very hard to get right. These also require a new edition, but they additionally require a lot of thinking and care around what happens when a program contains code from two different editions that cuts across the breaking change. I've not yet seen a case where there's an actual breaking change across editions (but I've not looked at every change), because the usual process is that, in trying to work out how to deal with the "two editions disagree on the meaning of this code" problem as it relates to macros, a way is found to turn the change into a rule tightening or even a compatible change.

MSRV-aware resolver

Posted Jan 14, 2025 1:12 UTC (Tue) by raven667 (subscriber, #5198) [Link] (7 responses)

So it seems there's some amount of mismatched expectations from developers who aren't familiar with the system, whose instinct to freeze a version and backport patches to maintain stability works at cross purposes to how compatibility is designed to work in Rust.

Expectations on reaching stability

Posted Jan 14, 2025 11:22 UTC (Tue) by farnz (subscriber, #17727) [Link] (6 responses)

I'm not sure this is a Rust thing, so much as it's a general programming thing. Rust provides the tools you need to stick to an older compiler and backport changes if that's your desired path, and this latest change is one more improvement to those tools.

There are two paths to stability: the "frozen feature set" path, where you minimise changes and only backport changes that you expect will increase stability, and the "comprehensive test suite on every change" path, where you require a test case for everything that needs to be stable, and refuse to accept any change that fails a test case, or that changes a test case in a way that reduces stability.

The disadvantage of the "comprehensive test suite on every change" path is twofold: first, the tests have to exist, and not be buggy. Second, it's not at all obvious how it reaches stability over time; there are continuous changes going on, there are new features arriving (with their own new bugs that the test suite doesn't yet cover), and there's always instability that's not yet covered by a test case. The advantages are that the developer introducing a bit of instability has an incentive to fix it (they have to have passing test cases to get their change in), and that your stable version is familiar to everyone working on the system, since it's also the current version.

The advantage of "freeze the current version, only backport targeted fixes" is that the route to stability is obvious - you're not changing it much, and so the set of bugs should reduce over time. The disadvantage is that as the current and stable codebases diverge, each "fix" back ported becomes less and less "a well-tested change from current that applies cleanly to stable" and more and more "new code with potential for new bugs". If you try and keep something stable for long enough, you end up fixing each bug without any back porting; the "current" code is sufficiently different that a backported fix doesn't apply any more. On top of that, there's an insidious issue that builds up over time, where issues that you care about are found in the current version, but it's hard to tell if those issues exist in your stable version or were introduced by a change made after you froze things, and it becomes possible that by back porting the fix, you in fact introduce a stability issue, since the fix assumes something that's only true after a feature change to current that you've not back ported.

Expectations on reaching stability

Posted Jan 14, 2025 13:05 UTC (Tue) by Wol (subscriber, #4433) [Link] (5 responses)

> The disadvantage of the "comprehensive test suite on every change" path is twofold: first, the tests have to exist, and not be buggy. Second, it's not at all obvious how it reaches stability over time; there are continuous changes going on,

The third disadvantage is the effort and cost in repeatedly running that test suite, and ensuring that it really is comprehensive ...

Cheers,
Wol

Expectations on reaching stability

Posted Jan 14, 2025 13:10 UTC (Tue) by farnz (subscriber, #17727) [Link]

Note that you have a related problem with "backport fixes"; given that a fix claims to improve things, how do you confirm that it doesn't regress something else in the system? If your answer is "we have a comprehensive test suite that we run on every change", well, you're 90% of the way to being able to just stay on latest instead…

Expectations on reaching stability

Posted Jan 14, 2025 15:05 UTC (Tue) by emk (subscriber, #1128) [Link]

> The third disadvantage is the effort and cost in repeatedly running that test suite, and ensuring that it really is comprehensive ...

One of the tools that the Rust compiler team makes heavy use of is "Crater", which basically downloads all open source Rust packages from the standard repo and runs their tests. This is used to detect unexpected breakages in old code caused by newer Rust compilers.

This doesn't cover private Rust projects, of course, which is why there's a "beta" channel for the compiler. But in practice, running Crater does a pretty good job of finding almost all breaking changes.

Since 2015, I've spent an hour or three cleaning up breakage in private code caused by "stable" Rust compilers. In one case, I think a truly ancient project had a malformed "Cargo.toml" file that a newer parser rejected. (For the sake of comparison, I have spent more hours than I care to count dealing with breaking changes to the C OpenSSL library.) So realistically, yeah, Rust has amazingly good support for running ancient code. Not perfect, but more than good enough that I reflexively update to the newest Rust on a regular basis.

Expectations on reaching stability

Posted Jan 16, 2025 18:15 UTC (Thu) by mathstuf (subscriber, #69389) [Link] (2 responses)

There's mutation testing[1] which I've found very useful for finding untested lines (which may have been "covered", but not *checked*). Think of it as the complement to fuzz testing: instead of mutating the data, you mutate the code and make sure that the test suite detects the change. I've found quite a few uncovered cases and even some bugs with it in my crates. Now they're fixed and have test coverage as well :) .

[1] https://en.wikipedia.org/wiki/Mutation_testing

Expectations on reaching stability

Posted Jan 26, 2025 16:22 UTC (Sun) by sammythesnake (guest, #17693) [Link] (1 responses)

Could you name the framework you use to do mutation testing? The Wikipedia article only mentions one Java framework...

Expectations on reaching stability

Posted Jan 26, 2025 16:50 UTC (Sun) by mathstuf (subscriber, #69389) [Link]

I've used this one for Rust: <https://github.com/sourcefrog/cargo-mutants>. There's also this one for Python (that I've not used): <https://github.com/boxed/mutmut>.

There's a paper by Google which mentions that mutation testing has also been done on Java, Python, and C++ internally. No idea if they're open source or not.

I also have this paper on my "to read" list: https://arxiv.org/pdf/2406.09843

MSRV-aware resolver

Posted Jan 15, 2025 18:25 UTC (Wed) by apoelstra (subscriber, #75205) [Link] (1 response)

> I'd have to manually identify this problem, and change the dependency specification to clap = ">=4.0, <4.3.24" to maintain my MSRV.

In general, you don't want to add `<` conditions to your dependency specification. It is true that if you merely had `clap = "4.0"` then Cargo may pull in 4.3.25, which violates your MSRV, and users will see a build failure by default. But your downstream users can fix this by running `cargo update -p clap --precise 4.3.23`.

If you were to specify `clap = ">= 4.0, < 4.3.24"`, and your users wanted to use a project alongside yours with `clap = "4.3.25"` (presumably both the user and this other dependency have a higher MSRV), then the project won't compile and there is nothing the user can do to correct the situation.

One place where there is missing tooling in the Rust ecosystem is that there is no way for you, as project maintainer, to tell your users that they need to pin their dependencies in this way. The best you can do is put these `cargo update -p` invocations into your README and hope that they are noticed. (Users running old versions of rustc are likely familiar with this because this sort of breakage is constant in the Rust ecosystem, which is dominated by crates that use new compiler features almost as soon as they are available.)

As a library author, you might hope that you can solve this by providing a lockfile which specifies a particular version of clap. However, lockfiles for libraries are only applied when building the library in isolation (which only developers of the library itself ever do). Downstream users of the library ignore the lockfile entirely. So it basically serves as a hard-to-read form of documentation.
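
For what it's worth, the lockfile entry in question is just a pinned record along these lines (version numbers follow the thread's example; checksum and dependency list elided), and downstream projects resolve and write their own Cargo.lock rather than reading this one:

    [[package]]
    name = "clap"
    version = "4.3.23"
    source = "registry+https://github.com/rust-lang/crates.io-index"
    # checksum = "..." and dependencies = [...] elided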

It would be nice if there were a tool -- and I hope one day to find the time to write this -- which could manipulate lockfiles to do things like:

* Merge two lockfiles, preferring the earliest or latest (or "latest which supports a given MSRV") in case of conflicts. In particular, being able to merge dependencies' "blessed lockfiles" into your project; or being able to merge only specific parts of a lockfile.
* View lockfiles as first-class objects and be able to query them, split them, edit them, etc.

MSRV-aware resolver

Posted Jan 15, 2025 18:31 UTC (Wed) by farnz (subscriber, #17727) [Link]

With the MSRV-aware resolver, it becomes a lot simpler; I specify that my MSRV is 1.64, and when I use Cargo commands in my project, it refuses to consider 4.3.24 or later, since they declare a higher MSRV. If you use my crate in a project with no MSRV, or an MSRV of 1.74 or later, Cargo will consider 4.3.24 or later, since the Cargo.lock is project-wide, not crate-specific. I can continue to specify clap = "4.0", and get an MSRV-compatible version for tests etc, while allowing downstream consumers to increase their MSRV and get a later clap version.
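
A sketch of the downstream side, reusing the hypothetical names from the earlier sketch (the library itself still declares clap = "4.0" and rust-version = "1.64"):

    # Cargo.toml of a downstream project with a newer MSRV
    [package]
    name = "downstream-app"   # hypothetical
    version = "0.1.0"
    edition = "2021"
    rust-version = "1.74"
    resolver = "3"

    [dependencies]
    my-crate = "0.1"          # the hypothetical library above
    # This project's Cargo.lock is resolved against 1.74, so clap can land on
    # 4.5.x here, while the library's own lockfile (resolved against 1.64)
    # stays on an MSRV-compatible 4.3.x.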

