LWN: Comments on "Rust 1.84.0 released" https://lwn.net/Articles/1004614/ This is a special feed containing comments posted to the individual LWN article titled "Rust 1.84.0 released". en-us Thu, 18 Sep 2025 16:04:36 +0000 Thu, 18 Sep 2025 16:04:36 +0000 https://www.rssboard.org/rss-specification lwn@lwn.net Expectations on reaching stability https://lwn.net/Articles/1006192/ https://lwn.net/Articles/1006192/ mathstuf <div class="FormattedComment"> I've used this one for Rust: &lt;<a href="https://github.com/sourcefrog/cargo-mutants">https://github.com/sourcefrog/cargo-mutants</a>&gt;. There's also this one for Python (that I've not used): &lt;<a href="https://github.com/boxed/mutmut">https://github.com/boxed/mutmut</a>&gt;.<br> <p> There's a paper by Google which mentions that mutation testing has also been done on Java, Python, and C++ internally. No idea if they're open source or not.<br> <p> I also have this paper on my "to read" list: <a href="https://arxiv.org/pdf/2406.09843">https://arxiv.org/pdf/2406.09843</a><br> </div> Sun, 26 Jan 2025 16:50:46 +0000 Expectations on reaching stability https://lwn.net/Articles/1006190/ https://lwn.net/Articles/1006190/ sammythesnake <div class="FormattedComment"> Could you name the framework you use to do mutation testing? The Wikipedia article only mentions one Java framework...<br> </div> Sun, 26 Jan 2025 16:22:27 +0000 Expectations on reaching stability https://lwn.net/Articles/1005338/ https://lwn.net/Articles/1005338/ mathstuf <div class="FormattedComment"> There's mutation testing[1] which I've found very useful for finding untested lines (which may have been "covered", but not *checked*). Think of it as the complement to fuzz testing: instead of mutating the data, you mutate the code and make sure that the test suite detects the change. I've found quite a few uncovered cases and even some bugs with it in my crates. 
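To make the "covered but not checked" distinction concrete, here is a minimal sketch (hypothetical function, not from any of the crates or tools mentioned) of the kind of gap a mutation-testing tool surfaces:

```rust
// Hypothetical example: this function is fully "covered" by the first two
// assertions below, yet a mutant that changes `>` to `>=` would still pass
// them. Mutation testing flags that surviving mutant; the boundary-value
// assertion is the test that kills it.
fn is_large(x: i32) -> bool {
    x > 10
}

fn main() {
    // Line-coverage-only tests: both branches execute.
    assert!(is_large(20));
    assert!(!is_large(5));

    // The test a mutation tool pushes you to add: the boundary itself.
    // A mutant using `x >= 10` returns true here and is detected.
    assert!(!is_large(10));
}
```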
Now they're fixed and have test coverage as well :) .<br> <p> [1] <a href="https://en.wikipedia.org/wiki/Mutation_testing">https://en.wikipedia.org/wiki/Mutation_testing</a><br> </div> Thu, 16 Jan 2025 18:15:31 +0000 MSRV-aware resolver https://lwn.net/Articles/1005209/ https://lwn.net/Articles/1005209/ farnz <p>With the MSRV-aware resolver, it becomes a lot simpler; I specify that my MSRV is 1.64, and when I use Cargo commands in my project, it refuses to consider 4.3.24 or later, since they declare a higher MSRV. If you use my crate in a project with no MSRV, or an MSRV of 1.74 or later, Cargo will consider 4.3.24 or later, since the <tt>Cargo.lock</tt> is project-wide, not crate-specific. I can continue to specify <tt>clap = "4.0"</tt>, and get an MSRV-compatible version for tests etc, while allowing downstream consumers to increase their MSRV and get a later clap version. Wed, 15 Jan 2025 18:31:19 +0000 MSRV-aware resolver https://lwn.net/Articles/1005205/ https://lwn.net/Articles/1005205/ apoelstra <div class="FormattedComment"> <span class="QuotedText">&gt; I'd have to manually identify this problem, and change the dependency specification to clap = "&gt;=4.0, &lt;4.3.24" to maintain my MSRV.</span><br> <p> In general, you don't want to add `&lt;` conditions to your dependency specification. It is true that if you merely had `clap = "4.0"` then cargo may pull in 4.3.25 which is in violation of your MSRV, and users will see a build failure by default. 
But your downstream users can fix this by running `cargo update -p clap --precise 4.3.23` to pin clap back to the last MSRV-compatible release.<br> <p> If you were to specify `clap = "&gt;= 4.0, &lt; 4.3.24"`, and your users wanted to use a project alongside yours with `clap = "4.3.25"` (presumably both the user and this other dependency have a higher MSRV) then the project won't compile and there is nothing the user can do to correct the situation.<br> <p> One place where there is missing tooling in the Rust ecosystem is that there is no way for you, as project maintainer, to tell your users that they need to pin their dependencies in this way. The best you can do is put these `cargo update -p` invocations into your README and hope that they are noticed. (Users running old versions of rustc are likely familiar with this because this sort of breakage is constant in the Rust ecosystem, which is dominated by crates that use new compiler features almost as soon as they are available.)<br> <p> As a library author, you might hope that you can solve this by providing a lockfile which specifies a particular version of clap. However, lockfiles for libraries are only applied when building the library in isolation (which only developers of the library itself ever do). Downstream users of the library ignore the lockfile entirely. So it basically serves as a hard-to-read form of documentation.<br> <p> It would be nice if there were a tool -- and I hope one day to find the time to write this -- which could manipulate lockfiles to do things like:<br> <p> * Merge two lockfiles, preferring the earliest or latest (or "latest which supports a given MSRV") in case of conflicts.
In particular, being able to merge dependencies' "blessed lockfiles" into your project; or being able to merge only specific parts of a lockfile.<br> * Viewing lockfiles as first-class objects and being able to query them, split them, edit them, etc<br> <p> <p> </div> Wed, 15 Jan 2025 18:25:35 +0000 Expectations on reaching stability https://lwn.net/Articles/1005028/ https://lwn.net/Articles/1005028/ emk <div class="FormattedComment"> <span class="QuotedText">&gt; The third disadvantage is the effort and cost in repeatedly running that test suite, and ensuring that it really is comprehensive ...</span><br> <p> One of the tools that the Rust compiler team makes heavy use of is "Crater", which basically downloads all open source Rust packages from the standard repo and runs their tests. This is used to detect unexpected breakages in old code caused by newer Rust compilers.<br> <p> This doesn't cover private Rust projects, of course. Which is why there's a "beta" channel for the compiler. But in practice, running Crater does a pretty good job of finding almost all breaking changes.<br> <p> Since 2015, I've spent an hour or three cleaning up breakage in private code caused by "stable" Rust compilers. In one case, I think a truly ancient project had a malformed "Cargo.toml" file that a newer parser rejected. (For the sake of comparison, I have spent more hours than I care to count dealing with breaking changes to the C OpenSSL library.) So realistically, yeah, Rust has amazingly good support for running ancient code. Not perfect, but more than good enough that I reflexively update to the newest Rust on a regular basis.<br> </div> Tue, 14 Jan 2025 15:05:26 +0000 Expectations on reaching stability https://lwn.net/Articles/1005026/ https://lwn.net/Articles/1005026/ farnz <p>Note that you have a related problem with "backport fixes"; given that a fix claims to improve things, how do you confirm that it doesn't regress something else in the system? 
If your answer is "we have a comprehensive test suite that we run on every change", well, you're 90% of the way to being able to just stay on latest instead… Tue, 14 Jan 2025 13:10:52 +0000 Expectations on reaching stability https://lwn.net/Articles/1005025/ https://lwn.net/Articles/1005025/ Wol <div class="FormattedComment"> <span class="QuotedText">&gt; The disadvantage of the "comprehensive test suite on every change" path is twofold: first, the tests have to exist, and not be buggy. Second, it's not at all obvious how it reaches stability over time; there's continuous changes going on,</span><br> <p> The third disadvantage is the effort and cost in repeatedly running that test suite, and ensuring that it really is comprehensive ...<br> <p> Cheers,<br> Wol<br> </div> Tue, 14 Jan 2025 13:05:57 +0000 Expectations on reaching stability https://lwn.net/Articles/1005021/ https://lwn.net/Articles/1005021/ farnz I'm not sure this is a Rust thing, so much as it's a general programming thing. Rust provides the tools you need to stick to an older compiler and backport changes if that's your desired path, and this latest change is one more improvement to those tools. <p>There's two paths to stability: the "frozen feature set" path, where you minimise changes, and only back port changes that you expect will increase stability, and the "comprehensive test suite on every change" path, where you require a test case for everything that needs to be stable, and refuse to accept any change that fails a test case, or that changes a test case in a way that reduces stability. <p>The disadvantage of the "comprehensive test suite on every change" path is twofold: first, the tests have to exist, and not be buggy. Second, it's not at all obvious how it reaches stability over time; there's continuous changes going on, there's new features arriving (with their own new bugs that the test suite doesn't yet cover), and there's always instability that's not yet covered by a test case. 
The advantages are that the developer introducing a bit of instability has an incentive to fix it (has to have passing test cases to get their change in), and that your stable version is familiar to everyone working on the system, since it's also the current version. <p>The advantage of "freeze the current version, only backport targeted fixes" is that the route to stability is obvious - you're not changing it much, and so the set of bugs should reduce over time. The disadvantage is that as the current and stable codebases diverge, each "fix" backported becomes less and less "a well-tested change from current that applies cleanly to stable" and more and more "new code with potential for new bugs". If you try to keep something stable for long enough, you end up fixing each bug without any backporting; the "current" code is sufficiently different that a backported fix doesn't apply any more. On top of that, there's an insidious issue that builds up over time, where issues that you care about are found in the current version, but it's hard to tell whether those issues exist in your stable version or were introduced by a change made after you froze things, and it becomes possible that by backporting the fix, you in fact introduce a stability issue, since the fix assumes something that's only true after a feature change to current that you've not backported. Tue, 14 Jan 2025 11:22:57 +0000 MSRV-aware resolver https://lwn.net/Articles/1005010/ https://lwn.net/Articles/1005010/ raven667 <div class="FormattedComment"> So it seems there's some mismatch of expectations from developers who aren't familiar with the system: the instinct to freeze a version and backport patches to maintain stability works at cross purposes to how compatibility is designed to work in Rust.
<br> </div> Tue, 14 Jan 2025 01:12:11 +0000 MSRV-aware resolver https://lwn.net/Articles/1004993/ https://lwn.net/Articles/1004993/ farnz <p>There are three broad classes of possible changes in stable Rust (my terms, not official language): <ol> <li>Compatible changes, where the worst that happens is that an older compiler rejects the code (fails to compile), while a newer compiler can handle it. <li>Tightening of rules, where the worst that happens is that an older compiler accepts the code, but a newer compiler rejects it. <li>Breaking changes, where the worst that happens is that an older compiler sees different meaning to a newer compiler. </ol> <p>Compatible changes are always allowed, and the only way to ensure that you've not been caught out by a compatible change is to build the code with an older compiler; you can, for example, use the new <a href="https://doc.rust-lang.org/std/primitive.pointer.html#method.with_addr">strict provenance</a> functions (stabilised in Rust 1.84) in code that claims to be Edition 2015. <p>Tightening of rules requires an edition change - e.g. in Rust 2015, <tt>async</tt> was not a keyword, and you could use it as a variable name; in Rust 2018 and later, it's a keyword and you can't do that. However, using an older edition with a newer compiler will still let you use compatible changes from the newer compiler. <p>Breaking changes are rare, and very hard to get right. These also require a new edition, but they additionally require a lot of thinking and care around what happens when a program contains code from two different editions that cuts across the breaking change. I've not yet seen a case where there's an actual breaking change across editions (but I've not looked at every change), because the usual process is that in trying to work out how to deal with the "two editions disagree on the meaning of this code" problem as it relates to macros, a way is found to make the change into a rule tightening or even compatible change. 
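The "tightening of rules" class above can be sketched with the classic example: `async` was an ordinary identifier in Rust 2015 and became a keyword in Rust 2018, so code using it as a name stops compiling on the newer edition. The escape hatch is mechanical: the raw-identifier spelling works on every edition since 2018.

```rust
// Sketch of an edition rule tightening: `let async = 5;` compiles only under
// edition 2015, because `async` became a keyword in edition 2018. The
// raw-identifier form below is the mechanical migration and compiles on
// current editions.
fn main() {
    let r#async = 5; // same variable the 2015 code called `async`
    assert_eq!(r#async, 5);
}
```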
Mon, 13 Jan 2025 17:11:41 +0000 MSRV-aware resolver https://lwn.net/Articles/1004991/ https://lwn.net/Articles/1004991/ josh <div class="FormattedComment"> That's accurate. Every three years, we have a new "edition" of the language, which is allowed to make incompatible changes. (Even those changes are generally evolutionary rather than massively disruptive.) But you can build one project that incorporates libraries from different editions; we even have support for applying macros from one edition to code from another. So, you are never *required* to forward-port to a new edition. And within an edition, the language remains stable; we have a system called "crater" for testing proposed changes with every publicly extant piece of Rust code we can find: every crate and every public repository.<br> </div> Mon, 13 Jan 2025 17:01:28 +0000 MSRV-aware resolver https://lwn.net/Articles/1004989/ https://lwn.net/Articles/1004989/ raven667 <div class="FormattedComment"> Culture and technology interact: the effort the Rust language team puts into supporting old stable editions of the language on the current compiler interacts with library authors and which edition(s) of the language they commonly support. I'm not familiar with Rust operationally, and it's Monday so the brain hasn't swapped back in yet, but it sounds like you can compile projects where libraries specify different language versions into the same process. Is that right? Is there some documented way code interacts when different parts use newer language features? Presumably the function arguments and return values are the boundary, and new language features can't have side effects which would affect old code. I think we get the idea that you always need to run the latest supported version of the compiler, but IIUC code doesn't necessarily need to be ported to new versions immediately (unlike, say, Python), more like the Linux kernel or glibc versioned symbols.
Maybe I just need to wait for the coffee to kick in...<br> </div> Mon, 13 Jan 2025 16:57:02 +0000 MSRV-aware resolver https://lwn.net/Articles/1004892/ https://lwn.net/Articles/1004892/ josh <div class="FormattedComment"> Exactly. And that means the only people likely to maintain older branches are those who care deeply about support for older Rust, or those who believe there's value in backporting fixes to a version that doesn't get new features. Which, as always, has the trade-off of being a combination that fewer people are using and testing.<br> </div> Mon, 13 Jan 2025 12:59:55 +0000 MSRV-aware resolver https://lwn.net/Articles/1004889/ https://lwn.net/Articles/1004889/ taladar <div class="FormattedComment"> It should also be noted that Rust crate maintainers might be less likely to consider support for older Rust versions important enough to bother maintaining branches, since the latest stable Rust compiler should always (barring bugs) be able to compile all old versions too, unlike some other languages where new compilers or runtimes fail to handle old code correctly.<br> </div> Mon, 13 Jan 2025 12:20:46 +0000 Strict Provenance APIs https://lwn.net/Articles/1004839/ https://lwn.net/Articles/1004839/ proski <div class="FormattedComment"> I like that writing unsafe code doesn't mean that I'm totally on my own and nobody would help me.
Tools like MIRI would still help me catch errors, and if I follow some rules (such as strict provenance) those tools would be more helpful.<br> </div> Sun, 12 Jan 2025 01:19:55 +0000 MSRV-aware resolver https://lwn.net/Articles/1004664/ https://lwn.net/Articles/1004664/ zdzichu <div class="FormattedComment"> Thanks for the detailed reply!<br> </div> Fri, 10 Jan 2025 12:33:33 +0000 Strict Provenance APIs https://lwn.net/Articles/1004656/ https://lwn.net/Articles/1004656/ tialaramex <div class="FormattedComment"> Really glad to see this stabilized; it's very much down in the weeds, but this is a down-in-the-weeds language at heart.<br> <p> For some people, working with pointers that are nothing more than a raw memory address dressed up, e.g. for MMIO or DMA, the Exposed Provenance API in this work is what you'll mostly interact with, and these functions are morally much like the PNVI-ae-udi model that C has now written into a TS and which is more or less the de facto standard for optimising C compilers. Tooling, both software like MIRI and hardware like CHERI, can't help here; I hope you knew what you were doing, because nothing really got better for you, except that this API lets you make it clear exactly what you're doing, so any human reviewer (e.g. a subsystem owner) can see what you meant.<br> <p> However, for people who were doing tricks with pointer bits but did NOT know the raw addresses, the Strict Provenance API means the tooling can now more often tell you if you mistakenly pointed a gun at your own foot. For these people the new APIs ought to be a real help in achieving confidence that what you did was correct as well as fast. You get a way to say "Given any pointer A, I want to fiddle with the bits and produce a pointer B which may or may not be valid" and also "Given any pointer B that I made with that previous routine, here's a way to get back pointer A", which is exactly what you need for squirrelling away some flag bits in the bottom of an aligned pointer, for example.
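That A-to-B-and-back round trip can be sketched with the strict provenance methods stabilised in Rust 1.84 (a minimal illustration, not code from the release):

```rust
// Sketch: stash a flag in the low bit of an aligned pointer and recover it,
// using the strict-provenance API (addr/map_addr) rather than integer casts,
// so Miri can track provenance through the round trip.
fn main() {
    let value: u32 = 42;
    let a: *const u32 = &value;

    // Pointer A -> tagged pointer B: set the low bit. Sound to represent
    // because u32 is 4-byte aligned, so A's low two address bits are zero.
    let b = a.map_addr(|addr| addr | 1);
    assert_eq!(b.addr() & 1, 1);

    // Tagged pointer B -> pointer A again: mask the flag back off. Because
    // no pointer-to-integer cast happened, the final dereference is one
    // Miri can verify as valid.
    let a_again = b.map_addr(|addr| addr & !1);
    assert_eq!(unsafe { *a_again }, 42);
}
```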
MIRI can follow along as a synthetic pointer A goes through the process to make B, stuff happens, and much later B goes through the other process to make A again, and it can say OK, that's a valid pointer, this code isn't insane. Or, if your code was faulty and instead produced C, which isn't a valid pointer under that model, MIRI can tell you hey, this doesn't work, even though if you'd released the code anyway that "invalid" pointer would have "worked" by mangling unrelated values in memory, and you'd have perhaps lost a week debugging the pointer twiddling code.<br> </div> Fri, 10 Jan 2025 11:51:18 +0000 MSRV-aware resolver https://lwn.net/Articles/1004652/ https://lwn.net/Articles/1004652/ farnz Security fixes backporting is up to individual crates; I can publish multiple versions in parallel, and I can choose what my policy is on backporting fixes. Cargo as used normally <a href="https://doc.rust-lang.org/cargo/reference/specifying-dependencies.html#version-requirement-syntax">will follow semver when choosing the right crate to fulfil a dependency</a> (major must be equal; minor must be greater than or equal to requested; if minor is equal to requested, patchlevel must be greater than or equal to requested). I can use tilde or equals requirements to get more stable behaviour, at the cost of more work for me. <p>However, previously it was hard work publishing security fixes to older minor versions while maintaining MSRV; if I said my MSRV was 1.64, and I took a dependency on clap = ^4.0 (which I can spell in my Cargo.toml as <tt>clap = "4.0"</tt>, since semver resolution is the default), Cargo would happily give me clap 4.5.26, which has an MSRV of 1.74, and I could thus end up accidentally requiring features from clap 4.5.26, rendering my MSRV claim worthless - not only does Cargo naturally choose a newer version, but my consumers can't force Cargo to unify on 4.3.23 by asking for <tt>clap = "=4.3.23"</tt> in their Cargo.toml.
I'd have to manually identify this problem, and change the dependency specification to <tt>clap = "&gt;=4.0, &lt;4.3.24"</tt> to maintain my MSRV. <p>With the new resolver, the fact that I've declared an MSRV of 1.64 means that newer versions of clap will not be considered by Cargo, and I won't accidentally find myself with a problematic MSRV; if I need/want fixes that are in clap = 4.3.24 or later, I can now take a PR to the clap maintainers backporting fixes and reducing the MSRV on a 4.3 branch (or in a clap-4.3 repo) and ask them to publish 4.3.ff (where ff &gt; 24) for me. <p>This now makes it reasonable for crate authors to bump the minor version whenever they increase MSRV, leaving the remains of that minor version series as space for backported fixes. We shall have to see how the ecosystem reacts to this; do we start treating an increase in MSRV as a minor version bump, or is it still something that people do in minor version series? Fri, 10 Jan 2025 11:27:40 +0000 MSRV-aware resolver https://lwn.net/Articles/1004654/ https://lwn.net/Articles/1004654/ DOT <div class="FormattedComment"> That's up to the maintainer of each project. As you may expect with the long tail of open source work: most repos are dead, and for those that are alive, usually only the most recent version receives updates. Exceptions exist for the more popular projects that have more development resources behind them.<br> </div> Fri, 10 Jan 2025 10:03:33 +0000 MSRV-aware resolver https://lwn.net/Articles/1004651/ https://lwn.net/Articles/1004651/ zdzichu <div class="FormattedComment"> How does security fixes backporting work with Rust crates? 
Are old versions on separate branches, and do they receive updates after a newer version is available?<br> </div> Fri, 10 Jan 2025 09:23:32 +0000 MSRV-aware resolver https://lwn.net/Articles/1004650/ https://lwn.net/Articles/1004650/ josh <div class="FormattedComment"> The part I'm most excited about:<br> <p> <span class="QuotedText">&gt; 1.84.0 stabilizes the minimum supported Rust version (MSRV) aware resolver, which prefers dependency versions compatible with the project's declared MSRV. With MSRV-aware version selection, the toil is reduced for maintainers to support older toolchains by not needing to manually select older versions for each dependency.</span><br> <p> This means that library crates can freely adopt the latest stable version, and people running older Rust toolchains will get matching older versions of the library. This reduces the tension in the ecosystem between people using the latest features and people running older toolchains.<br> <p> I'm always happy when we can do things like this to help improve social situations in the ecosystem.<br> </div> Fri, 10 Jan 2025 08:58:11 +0000
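The setup discussed throughout the thread can be sketched as a Cargo.toml (crate name and versions are illustrative, not from any real project):

```toml
# Hypothetical library crate declaring an MSRV of 1.64.
[package]
name = "my-lib"
version = "0.1.0"
edition = "2021"
rust-version = "1.64"   # the declared MSRV

[dependencies]
# Caret requirement: any semver-compatible 4.x release. With the MSRV-aware
# resolver stabilized in 1.84, Cargo prefers the newest clap release whose
# own declared rust-version still fits within 1.64, rather than blindly
# taking the latest 4.x and silently breaking the MSRV claim.
clap = "4.0"
```

How aggressively MSRV-incompatible versions are avoided is configurable through Cargo's resolver settings (the `incompatible-rust-versions` option); the sketch above just shows the `rust-version` declaration that the resolver consults.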