
An update on gccrs development

By Jonathan Corbet
October 1, 2024

One concern that has often been expressed about the Rust language is that there is only one compiler for it. That makes it hard to say what the standard version of the language is and restricts the architectures that can be targeted by Rust code to those that the available compiler supports. Adding a Rust frontend to GCC would do much to address those concerns; at the 2024 GNU Tools Cauldron, Pierre-Emmanuel Patry gave an update on the state of that work and what its objectives are.

[Pierre-Emmanuel Patry] The GCC frontend goes by the name "gccrs"; the trademark rules around the language prevent it from being called a Rust compiler. Since the project restarted in 2019, Patry began, it has been targeting the 1.49 release of the language, which came out in 2020. It has been included in GCC since GCC 14, though that release did not include the gccrs documentation.

At the 2023 GNU Tools Cauldron, Patry had said that the project had over 800 commits in its repository that needed to be upstreamed into the GCC mainline — a significant pile of work. This year, that number has dropped to 40 — and those are new work, not leftovers from the previous year. So some progress has been made. The goal had been to push changes upstream every two weeks, but that has proved to be too much work; they are still aiming to get their code upstream as quickly as they can, though they have backed off on that specific goal.

Over the last year, gccrs development was helped by three Google Summer of Code (GSoC) interns. One of them (Jasmine Tang) worked on support for inline assembly code, a feature that the Rust-for-Linux project needs; most of the frontend work for that is now done. Inline assembly in Rust looks like:

    unsafe {
        asm!("assembly code here", "other info")
    };
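
As a concrete illustration (not from the talk): a minimal program using the stable std::arch::asm form, which was stabilized in Rust 1.59 — later than the 1.49 baseline gccrs targets — and assuming an x86-64 target:

    use std::arch::asm;

    fn main() {
        let x: u64;
        unsafe {
            // x86-64: move the immediate 42 into a compiler-chosen
            // register, then bind that register's value to `x`.
            asm!("mov {0}, 42", out(reg) x);
        }
        assert_eq!(x, 42);
    }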

While the frontend work is in place, there had been some difficulties getting that assembly code through the later compilation stages intact. Those problems have been fixed, but there are still various details to be dealt with.

The second GSoC student (Muhammad Mahad) worked on a test-suite adapter, with the goal of helping gccrs pass the Rust test suite. There is an impedance mismatch to work around here; GCC uses DejaGnu for its tests, so some sort of conversion of Rust's tests is needed. The final GSoC student (Kushal Pal) worked on borrow checking in the GCC intermediate representation and getting proper error messages to users.
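
To give a flavor of that impedance mismatch (a sketch, not the adapter's actual mapping): rustc's own UI tests attach expected diagnostics to the offending line with a "//~ ERROR" comment, while GCC's DejaGnu harness expects "{ dg-error }" directives, so each annotation has to be translated:

    // rustc UI-test style: the expected diagnostic is attached to
    // the offending line itself.
    fn main() {
        let x: u32 = "hello"; //~ ERROR mismatched types
    }

    // A roughly equivalent DejaGnu-style directive, as used in GCC
    // test suites (shown here as a comment for illustration):
    // let x: u32 = "hello"; // { dg-error "mismatched types" }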

Looking forward to the upcoming GCC 15 release, Patry said that the primary focus is on implementing features that are needed to compile kernel code with gccrs. The most important task there is getting to a point where gccrs can compile the Rust core library, which turns out to not be an easy task. The developers had thought they had implemented enough to do that, but were surprised by many problems in need of fixing. The Rust project's habit of using experimental features in the core library does not help there.

Those problems are being worked through, Patry said, but even when it works, the compiler is currently quite slow. That, though, is a problem to be worried about later, once the feature and correctness work is done and the core library compiles.

Closing out with longer-term goals, Patry started with, once again, compiling the core library — but this time without making any changes to the library itself. The developers would like to catch up to the 1.78 release that is currently used by Rust for Linux. That is not as big a leap as might be thought, he said; many of the 1.78 features had been experimental long before and are already implemented in gccrs. Beyond that, he said, the project will eventually address building the std library and, someday, work to catch up with the rustc compiler.

The impression left by this session is that the gccrs developers are working hard to fill the second-compiler gap for Rust, but that there are far too few of them. This is an important project that really should attract some significant company support but, for the most part, that is not happening. Unless more in the industry step up to push this work forward, Rust may be fated to remain a language with a single compiler for a long time to come.

Index entries for this article
Conference: GNU Tools Cauldron/2024



But why

Posted Oct 1, 2024 14:23 UTC (Tue) by dev-ardi (guest, #172609) [Link] (87 responses)

I really do not see the point of gccrs; it means maintaining a whole separate frontend for Rust that targets an ancient version of the language... For the goal of targeting other architectures, it is much simpler to just add libgccjit as a backend, and in fact that's already being done: https://github.com/rust-lang/rustc_codegen_gcc

But why

Posted Oct 1, 2024 15:08 UTC (Tue) by mcon147 (subscriber, #56569) [Link] (29 responses)

Just using gcc as the backend means that the Rust language is still defined by the specific implementation quirks of the official compiler. An alternative frontend can expose those quirks.

But why

Posted Oct 1, 2024 16:29 UTC (Tue) by atnot (subscriber, #124910) [Link] (25 responses)

> An alternative frontend can expose the quirks

To what end? People always say this but it seems like circular logic to me. What is the benefit of surfacing those quirks, unless you're trying to write another frontend? Writing a standard? Again, what is the benefit of writing a standard besides writing another implementation, which brings us back to the start.

Developing a canonical standard vs developing a canonical implementation are both perfectly fine approaches with their own advantages. Although it's worth noting that the major reason C and C++ ended up that way is because of the wide use of proprietary compilers, which is not really a thing affecting any modern language. But things that benefit one of these models do not necessarily benefit the other.

But why

Posted Oct 1, 2024 16:40 UTC (Tue) by smurf (subscriber, #17840) [Link] (1 responses)

> What is the benefit of surfacing those quirks, unless you're trying to write another frontend?

My goal as a kernel and/or application and/or library ^w crate programmer is to write code that does not depend on any quirks / holes / whatever in the Rust standard. Those quirks may accidentally change in the future, thereby breaking my code without notice and, more to the point, without error messages.

As Rust actually guarantees (OK … tries very hard to guarantee) that this won't happen, a second opinion, i.e. a second compiler, is very helpful for achieving this goal.

But why

Posted Oct 1, 2024 18:01 UTC (Tue) by atnot (subscriber, #124910) [Link]

> As Rust actually guarantees (OK … tries very hard to do so) that this won't happen, a second opinion i.e. compiler is very helpful for achieving this goal.

The Rust project compiles and tests most of the publicly available Rust code (crates.io) many times for each compiler release. That's significantly more than any other compiler is doing, to my knowledge, and it guards not just against deliberate changes but also against accidental breakage, which all compilers have. A standard does not make you immune to shipping bugs, after all. Arguably, having more implementations even multiplies the risk there :) Besides, undefined and implementation-defined behavior and compiler extensions give gcc and clang and msvc plenty of leeway to break things in practice too. Any large C or C++ project has dozens of workarounds for various compilers and compiler versions as a result.

If there's an easy way to the dream world where nothing ever breaks, clearly that's not it. Diversity of implementations has a very high cost and doesn't actually solve any problems on its own.

Clearly delineating quirks from intended behaviour

Posted Oct 1, 2024 17:43 UTC (Tue) by farnz (subscriber, #17727) [Link] (10 responses)

You want to know what the quirks are so that you can fix them before they become de-facto parts of your language; it's OK if the quirk turns out to be "this perfectly reasonable program is rejected because of a quirk in the implementation", but it's not OK if that quirk turns out to be "this unreasonable program that we ought to reject is accepted because of a quirk in the implementation".

Rust has recently had a brush with this; Rust 1.80 broke type inference for most released versions of a popular crate. A quirk of the language happened to break a single crate; if it had broken much more, then it could have forced the project to find a way to enshrine that quirk forever, even though it wasn't a deliberate design decision.
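
The general shape of that hazard, as a hypothetical sketch (not the actual code from the affected crate): collect() picks its target type by inference, and a call that compiles only because exactly one candidate impl applies can become ambiguous when a new impl lands upstream.

    use std::collections::HashMap;

    fn main() {
        let pairs = [("a", 1), ("b", 2)];
        // `collect()` infers its target type from context. A call that
        // relies on there being exactly one applicable FromIterator
        // impl can stop compiling when a later release adds another
        // impl -- roughly the shape of the Rust 1.80 breakage.
        // Spelling out the type keeps the call unambiguous:
        let m: HashMap<&str, i32> = pairs.into_iter().collect();
        println!("{m:?}");
    }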

Clearly delineating quirks from intended behaviour

Posted Oct 1, 2024 18:28 UTC (Tue) by atnot (subscriber, #124910) [Link] (9 responses)

> You want to know what the quirks are so that you can fix them before they become de-facto parts of your language

That is fair, but the only way you find out is if people actually write code that explores those quirks, write it where you can see it breaking, and then actually implement it incompatibly between versions or compilers. That isn't generally the case for most of these quirks. They're unknown unknowns; you can't know for sure until it happens. What you do know for sure, though, is that you're doubling the engineering and maintenance effort.

> Rust 1.80 broke type inference for most released versions of a popular crate; if it had broken much more, then it could have forced the project to find a way to enshrine that quirk forever, even though it wasn't a deliberate design decision.

I have written in the past about what I think of the judgement of the people involved with that situation for fixing the library instead of just reverting that change outright (it was found before that version was even released! It would have been easy at that point! That's what the crater runs are for!). But Rust already has a way to evolve the language in incompatible ways with Editions. Breaking changes to implementation details have been made through those multiple times already. Also, since gccrs is, to my knowledge, planning on reusing pieces like the trait solver anyway, it wouldn't have done much here. See the point above.

As for a standard as source of truth in these situations, well, none of the compilers comply exactly anyway in practice and the C and C++ standards regularly get adjusted to match the behavior of what implementations actually do retroactively. So it's not very different.

So yes, sure, perhaps it would have found some stuff, not doubting that. But that doesn't make it a good use of resources. If what you actually care about is the quality and reliability of the one implementation you use, there's a lot better ways of improving that than writing a duplicate compiler.

Clearly delineating quirks from intended behaviour

Posted Oct 1, 2024 20:56 UTC (Tue) by raven667 (subscriber, #5198) [Link] (6 responses)

> So yes, sure, perhaps it would have found some stuff, not doubting that. But that doesn't make it a good use of resources. If what you actually care about is the quality and reliability of the one implementation you use, there's a lot better ways of improving that than writing a duplicate compiler.

I generally agree with you that the conventional wisdom advocating for standards and alternate implementations is far more useful and relevant in a world of competing proprietary implementations, and provides less value when the reference implementation is widely used open source/free software. But the value is not zero, and if someone wants to fund it or volunteer for it, then all the more power to them: there are benefits as described, and few downsides for the primary implementation in having a viable second one. Maybe they can each learn something from the other, even if they can't directly share code.

GCC and LLVM support several frontend languages and, as people have noted, they are integrated in different places and support different platforms, so having Rust support in GCC does open up some new opportunities to use the language. I also wouldn't discount the cost of supporting another toolchain ecosystem in places which are deeply invested in GCC, or don't have the resources to handle infrastructure change; it may be easier for some to update GCC and be capable of supporting Rust than to add LLVM into the mix.

Clearly delineating quirks from intended behaviour

Posted Oct 2, 2024 7:35 UTC (Wed) by taladar (subscriber, #68407) [Link]

On the other hand that just allows those places to outsource the significant effort to support their broken processes onto everyone else who now has to test with two compilers for everything.

Clearly delineating quirks from intended behaviour

Posted Oct 2, 2024 22:43 UTC (Wed) by marcH (subscriber, #57642) [Link] (4 responses)

> I generally agree with you that the conventional wisdom advocating for standards and alternate implementations is far more useful and relevant in a world of competing proprietary implementations, and provides less value when the reference implementation is widely used open source/free software, ...

Exactly that. Standards were critical for compatibility and competition in the _old_ days before: 1. Free software, 2. The Internet, 3. Test-driven development, 4. Continuous Integration.

Even in the old days standards were not so great: just look at the C/C++ mess of undefined/implementation-defined behaviors[*], standards always trailing implementations, ugly politics (looking at you W3C), etc.

[*] plenty of elaboration and examples in other comments.

But in today's era, it is now possible to have a free implementation with an extensive test suite, fast-paced CI, and everything just a click away. THAT is better than some old-fashioned, designed-by-committee standard. Some behavior is ambiguous? Just discuss it, add documentation and a new test, and the problem is solved!

"The standard is this [proprietary] implementation" used to really suck. But now it can really rock when done properly.

This being said, if some people see value in maintaining a 2nd-class implementation, then agreed: "all the more power to them". As long as they don't pretend they're equal, and defer to the reference implementation when in doubt, everything is fine. I think this is more or less what happened with Java, BTW - and despite Oracle!

Clearly delineating quirks from intended behaviour

Posted Oct 2, 2024 22:58 UTC (Wed) by viro (subscriber, #7872) [Link] (1 responses)

Surely I'm not the only one to remember the chorus whinging about The Inherent Evil(tm) of clang/llvm? Complete with "that would just drain the gcc contributor pool", "fracture everything", etc. - as well as "they are morally obliged to work on X rather than Y, 'cuz we said so!!!", never mind that the advocates involved had not been contributing to either project themselves.

Clearly delineating quirks from intended behaviour

Posted Oct 2, 2024 23:57 UTC (Wed) by marcH (subscriber, #57642) [Link]

LLVM has been a deliberate (and successful) attempt to avoid the GPL and its "copyleft". That upset FSF zealots very much, so it's no surprise that they threw everything they could at it.

The other way round, "cloning" Apache to GPL should be much more "peaceful", technical and rational :-)


Clearly delineating quirks from intended behaviour

Posted Oct 3, 2024 9:01 UTC (Thu) by kleptog (subscriber, #1183) [Link] (1 responses)

> Some behavior is ambiguous? Just discuss it, add documentation and a new test and problem solved!

How does one find the ambiguous behaviour without trying to reimplement it from the documentation?

Documentation is written in language meant for humans, and is thus by its very nature ambiguous.

Clearly delineating quirks from intended behaviour

Posted Oct 3, 2024 13:45 UTC (Thu) by marcH (subscriber, #57642) [Link]

Typically: when your code does not behave as you expected. Then you check the documentation and you find a gap or problem there.

Engineering effort in open source

Posted Oct 2, 2024 9:38 UTC (Wed) by farnz (subscriber, #17727) [Link] (1 responses)

> What you do know for sure, though, is that you're doubling the engineering and maintenance effort.

On the other hand, the engineering and maintenance time is coming from volunteers whose labour you would not otherwise get access to; they've decided that this is what they want to work on, and it's not an official Rust project taking time away from the primary implementation. If the Rust Project was diverting its resources from the official compiler to gcc-rs, that'd be a bad use of resources, but it's not - gcc-rs is being worked on by people who would otherwise work on something else (that may have zero benefit to Rust).

As arguments go, this is a lot like arguing that Linux should not exist, because working on a new kernel when Linus could have been working on removing AT&T code from 4.3BSD (the project that became 386BSD) was not a good use of resources.

Engineering effort in open source

Posted Oct 2, 2024 13:52 UTC (Wed) by Wol (subscriber, #4433) [Link]

> As arguments go, this is a lot like arguing that Linux should not exist, because working on a new kernel when Linus could have been working on removing AT&T code from 4.3BSD (the project that became 386BSD) was not a good use of resources.

The point to remember is that everybody has to be paid, and that payment need not be cash. Linus got paid in "learning to write an OS". Working on 4.3BSD would not have paid him in anything *he* considered useful.

I wrote an article years ago for some American magazine. I got paid in the pleasure of (a) writing, and (b) seeing the article published.

They sent me a $50 Amex cheque. It's still sitting on the fridge waiting to be cashed because I'm damned if I'm going to pay the bank £20 for the privilege of cashing it. I didn't want the cheque, I didn't see any value in the cheque, I wouldn't have been in the slightest bothered if it hadn't turned up.

You need to ask "who's paying the piper", and if the Rust Project isn't paying the gccrs folks, then it's nobody else's business to interfere.

Otherwise you're like the journalist who, in a press conference or something, made a massive deal about all the money our council had wasted on "updating" its logo, and all the time it had wasted in committee meetings discussing the change. Only for the mayor, after the journalist had finally run out of breath, to say that "actually, it was a diktat from the mayor, and the *entire* cost was five minutes of his secretary's time typing it up". He had basically given an order that, seeing as there were four or five different logos floating around, all NEW orders were to use the old council crest. Everything else was to be used up, and then replaced as per the diktat.

Cheers,
Wol

But why

Posted Oct 2, 2024 15:56 UTC (Wed) by rsidd (subscriber, #2582) [Link] (11 responses)

> Although it's worth noting that the major reason C and C++ ended up that way is because of the wide use of proprietary compilers, which is not really a thing affecting any modern language.

This is worth emphasizing. Today the C and C++ standards are, de facto, GCC and LLVM. Extensions in those get picked up by the "standards" years later. OK, that's still two independent compilers, but it used to be just one (gcc), at least in Linux and FLOSS. Alternatives like Intel's icc had to implement many gcc-isms to survive.

When there is a good OSS compiler there is little incentive to develop another from scratch.

But why

Posted Oct 2, 2024 20:04 UTC (Wed) by dankamongmen (subscriber, #35141) [Link] (10 responses)

ignoring MSVC here seems a weird miss

But why

Posted Oct 2, 2024 20:47 UTC (Wed) by intelfx (subscriber, #130118) [Link] (9 responses)

Nobody looks to MSVC as if it was a de facto standard though?

In my experience with industry, MSVC is "that quirky thing that lags behind the real compilers, oh and I guess we need to stick a bunch of polyfills/workarounds all over the code for every real compiler feature it lacks or does a NIHy thing instead".

But why

Posted Oct 3, 2024 10:22 UTC (Thu) by peter-b (guest, #66996) [Link] (8 responses)

> In my experience with industry, MSVC is "that quirky thing that lags behind the real compilers, oh and I guess we need to stick a bunch of polyfills/workarounds all over the code for every real compiler feature it lacks or does a NIHy thing instead".

Recently the MSVC team have been implementing C++ features well ahead of clang. They have been doing very good work and engaging with C++ standardization in a constructive way.

The quality of the compiler and standard library is right up there with GCC and clang, especially with /permissive-. The diagnostics often catch things that GCC and clang don't.

"MSVC is a crap compiler" was an accurate meme in 2015, but that's no longer the case.

But why

Posted Oct 3, 2024 12:44 UTC (Thu) by aragilar (subscriber, #122569) [Link] (1 responses)

Not suggesting MSVC is in any way poor quality, but I believe that there is still a fair number of post-C89 C features that are not supported, and the lack of __float128 (or its various other incantations) remains a source of pain (even icc supports it, and has for a while). I suspect who is ahead in features depends on which part of the standard you are interested in.

But why

Posted Oct 7, 2024 15:56 UTC (Mon) by itrue (guest, #156235) [Link]

MSVC is a bad C compiler, but a pretty solid C++ one.

But why

Posted Oct 4, 2024 16:47 UTC (Fri) by StillSubjectToChange (guest, #128662) [Link] (5 responses)

MSVC is no "Cinderella story", so to speak. Sure, Microsoft has stepped up their investment into the product, but the fact remains that MSVC has a substantial deficit to overcome. Also, there is a world of difference between advertising support for a feature and actually using it in production. For instance, MSVC has "supported" modules for quite some time now, yet it's quite difficult to find *anyone* (outside of MS at least) who genuinely uses them. Furthermore, MSVC is a total non-factor when it comes to supporting other standards like OpenMP, and eventually OpenACC and SYCL too.

I'm not trying to degrade MSVC in order to elevate GCC and/or Clang. The fact is that MSVC simply doesn't compete with them in most circumstances. Or rather, MSVC simply is not a factor for vendors building a custom toolchain (Intel, AMD, Nvidia, etc) or for anyone targeting mobile, linux dominated environments like cloud or HPC, game consoles, embedded, etc.

But why

Posted Oct 4, 2024 17:36 UTC (Fri) by excors (subscriber, #95769) [Link] (1 responses)

> MSVC simply is not a factor for vendors building a custom toolchain (Intel, AMD, Nvidia, etc) or for anyone targeting mobile, linux dominated environments like cloud or HPC, game consoles, embedded, etc

I'm not sure game consoles are a great example, because MSVC is very relevant on Xbox, so it's still an important factor for cross-platform games and libraries. (But it does seem to be retreating even from that niche - the Microsoft Game Development Kit claims to support both MSVC and Clang on Windows and Xbox, though I have no idea how often people choose Clang.)

But why

Posted Oct 4, 2024 17:56 UTC (Fri) by pizza (subscriber, #46) [Link]

> I'm not sure game consoles are a great example, because MSVC is very relevant on Xbox,

It's more accurate to say that the only reason to use MSVC is if you're targeting Microsoft platforms (including the Xbox).

> so it's still an important factor for cross-platform games and libraries.

That should be "it's still a source of major headaches for cross-platform games and libraries"

(Granted, many of those problems were due to Windows platform-isms rather than MSVC itself...)

But why

Posted Oct 13, 2024 13:59 UTC (Sun) by mathstuf (subscriber, #69389) [Link] (2 responses)

> For instance, MSVC has "supported" modules for quite some time now, yet it's quite difficult to find *anyone* (outside of MS at least) who genuinely uses them.

MSVC's module support is ahead of the other implementations IME. Adoption issues have mostly been related to tooling for building and using modules. Now that CMake[1] has (experimental) support for `import std;` (MSVC and Clang only right now), I think it is a good time for projects to start investigating modules usage and filing issues with compilers for what needs fixing on their side. There's still work to be done on non-compiler, non-build tooling, but I'm working on that too[2].

And I am interested in modules adoption ecosystem-wide, not just for CMake. I've gone to many build systems trying to get modules support implemented (e.g., xmake[3], tup[4], meson[5], bazel[6]). There's also the "Are We Modules Yet?"[7] page, which aims to track progress.

[1] Full disclosure: I'm a CMake developer and implemented its C++ modules support.
[2] https://cppcon2024.sched.com/event/1gZg8/beyond-compilati...
[3] https://github.com/xmake-io/xmake/issues/386
[4] https://groups.google.com/g/tup-users/c/FhJTA6KAzWU/m/t5P...
[5] https://github.com/mesonbuild/meson/issues/5024#issuecomm...
[6] https://github.com/bazelbuild/bazel/pull/19940#issuecomme...
[7] https://arewemodulesyet.org/

But why

Posted Oct 13, 2024 15:52 UTC (Sun) by intelfx (subscriber, #130118) [Link] (1 responses)

> Full disclosure: I'm a CMake developer and implemented its C++ modules support

Can we bribe you with beer and/or cookies to write a guest article for LWN on the state of C++ modules? :-)

I remember one of your multi-screen comments talking about modules a few months ago and I would *love* to see it in form of an article...

But why

Posted Oct 13, 2024 20:11 UTC (Sun) by mathstuf (subscriber, #69389) [Link]

Sure, cookies are more enticing than alcohol for me :) . COI policies may have things to say about actually fulfilling such an offer, but I'll put in a request to do something for LWN (we have our own blog as well, so cross-posting or the like may need to be considered).

But why

Posted Oct 1, 2024 18:00 UTC (Tue) by hunger (subscriber, #36242) [Link] (1 responses)

The gccrs devs said they want to make use of crates used by the compiler that may actually be called a Rust compiler, e.g. Polonius once that becomes available. At that point you will end up sharing implementations -- and their quirks -- between the compilers.

But why

Posted Oct 13, 2024 13:49 UTC (Sun) by mathstuf (subscriber, #69389) [Link]

There's still a difference between Polonius (the borrow checker) and the language. The language says "mutable XOR shared"; Polonius and prior borrow-checking implementations are imperfect implementations of it. There will always be *some* valid coding pattern they cannot verify as following the actual rule, so they will reject code that technically obeys it.
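
A well-known illustration of that gap (hypothetical code in the classic "NLL problem case" shape, nothing specific to gccrs): every execution path below respects "mutable XOR shared", yet the current non-Polonius checker rejects the function, because the borrow returned on the early path is treated as lasting for the whole function body.

    use std::collections::HashMap;

    // Rejected by today's borrow checker; a Polonius-style analysis
    // is expected to accept it.
    fn get_or_insert(map: &mut HashMap<u32, String>) -> &String {
        if let Some(v) = map.get(&0) {
            return v; // the shared borrow escapes only on this path...
        }
        // ...but the checker extends it to here, so this mutation is
        // reported as conflicting with it.
        map.insert(0, String::new());
        map.get(&0).unwrap()
    }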

But why

Posted Oct 1, 2024 18:17 UTC (Tue) by StillSubjectToChange (guest, #128662) [Link]

The Linux kernel itself is implemented with a plethora of non-standard GCC-isms, e.g. asm goto. It took quite a bit of effort and a few years of development to get Linux to build under Clang.

With that in mind, the issue of a single frontend for Rust seems rather misplaced. Multiple frontends for a language may help to rigorously define a language, but they can just as easily create ambiguities and incompatibilities. Moreover, the existence of an official language standard for C and C++ didn't stop the divergence(s) between Clang, GCC, or MSVC.

But why

Posted Oct 1, 2024 15:34 UTC (Tue) by glaubitz (subscriber, #96452) [Link] (15 responses)

> For the goal of targeting other architectures it is much simpler to just add libgccjit as a backend, and in fact it's already being done: https://github.com/rust-lang/rustc_codegen_gcc

Yeah, and it's still not working. Convince me otherwise by building a usable rustc compiler for alpha, hppa, m68k or sh4.

But why

Posted Oct 1, 2024 18:28 UTC (Tue) by hunger (subscriber, #36242) [Link] (6 responses)

Do you seriously think anyone will use any of those?

But why

Posted Oct 1, 2024 18:42 UTC (Tue) by josh (subscriber, #17465) [Link] (4 responses)

People seem to keep wanting to keep those systems alive, for some reason, and to the extent they're doing so, I don't want to see the people fighting to keep those systems alive deciding that that means they need to fight usage of Rust. And that's currently what happens: a project adds Rust code, and maintainers of these niche architectures decide that that means there's a problem with the project or with Rust. That's a problem.

They're never going to be tier 1 targets, but if people want to do the work to make them be tier 2 or tier 3 targets, then so be it.

But why

Posted Oct 2, 2024 12:40 UTC (Wed) by glaubitz (subscriber, #96452) [Link] (3 responses)

> People seem to keep wanting to keep those systems alive, for some reason

It's just hobby, that's all.

Also, it continues to help find bugs. I have already found and fixed tons of bugs that affect x86_64, including in the Rust compiler, by fixing bugs on these exotic architectures.

> I don't want to see the people fighting to keep those systems alive deciding that that means they need to fight usage of Rust. And that's currently what happens: a project adds Rust code, and maintainers of these niche architectures decide that that means there's a problem with the project or with Rust. That's a problem.

Yep, and that's one of the reasons why rustc_codegen_gcc and gccrs were created ;-).

> They're never going to be tier 1 targets, but if people want to do the work to make them be tier 2 or tier 3 targets, then so be it.

They don't have to be and that's ok.

But why

Posted Oct 2, 2024 16:57 UTC (Wed) by josh (subscriber, #17465) [Link] (2 responses)

I agree with everything you've written here. I'm just not clear on how the very reasonable tone of your comment here is consistent with your thread opener of "Convince me otherwise".

But why

Posted Oct 2, 2024 18:42 UTC (Wed) by glaubitz (subscriber, #96452) [Link] (1 responses)

> I agree with everything you've written here. I'm just not clear on how the very reasonable tone of your comment here is consistent with your thread opener of "Convince me otherwise".

I'm not sure how you're reading any tone from what I'm saying. I don't think I was being rude and I also didn't mean to.

Please keep in mind that not everyone is a native English speaker and people from different backgrounds may have different ways to express themselves.

But why

Posted Oct 2, 2024 19:22 UTC (Wed) by josh (subscriber, #17465) [Link]

First of all: if your original comment was not intended to be rude or confrontational or escalatory, then thank you for that clarification.

I had read your original post as arguing against the post you were responding to, and the "convince me" gave it the flavor of "change my mind" / "debate me". I appreciate that that's not the connotation you intended to impart.

But why

Posted Oct 1, 2024 20:01 UTC (Tue) by darcagn (guest, #168667) [Link]

> Do you seriously think anyone will use any of those?

Why not? I've used both rustc_codegen_gcc and gccrs to compile Rust code to run on the Sega Dreamcast (sh4), and I've ported the Rust std library to it (with rustc_codegen_gcc). People still write new software for it, so why not with Rust?

But why

Posted Oct 1, 2024 18:35 UTC (Tue) by josh (subscriber, #17465) [Link] (3 responses)

m68k already supports rustc via LLVM.

But why

Posted Oct 1, 2024 18:43 UTC (Tue) by glaubitz (subscriber, #96452) [Link] (2 responses)

> m68k already supports rustc via LLVM.

Nope, it doesn't. I actually did the port, and it's currently incomplete since the LLVM backend is lagging behind.

But why

Posted Oct 1, 2024 22:58 UTC (Tue) by josh (subscriber, #17465) [Link]

I'm aware, and I didn't say that it supported it *well*. :)

LLVM 68K target

Posted Oct 3, 2024 5:24 UTC (Thu) by brouhaha (subscriber, #1698) [Link]

Sorry to be wandering off-topic here, but where would I find the LLVM 68K backend? Google turns up what appear to be several different efforts.

But why

Posted Oct 1, 2024 18:36 UTC (Tue) by StillSubjectToChange (guest, #128662) [Link] (1 responses)

You might as well add Itanium to that list while you're at it.

But why

Posted Oct 1, 2024 18:44 UTC (Tue) by glaubitz (subscriber, #96452) [Link]

> You might as well add Itanium to that list while you're at it.

Itanium is unmaintained and unsupported by the Linux kernel, the other ports are not.

But why

Posted Oct 2, 2024 14:40 UTC (Wed) by jbills (subscriber, #161176) [Link] (1 responses)

This argument isn't helpful, as neither gccrs nor rustc_codegen_gcc is feature complete, so neither of them can compile rustc for these targets. You haven't given any reason why rustc_codegen_gcc couldn't do this in the future. Development on it is slow, but that is because essentially one developer is working on it. If the project had the resources of gccrs, then it could potentially be feature complete by now.

But why

Posted Oct 2, 2024 18:17 UTC (Wed) by glaubitz (subscriber, #96452) [Link]

> This argument isn't helpful, as neither gccrs nor rustc_codegen_gcc are feature complete, so neither of them can compile rustc for these targets.

We were talking about obscure targets and software in general, not just gccrs on sh4, for example.

I have discovered many bugs on architectures such as SPARC that were not immediately visible on x86_64, yet they existed and they eventually needed to be fixed as they could still show on x86_64 under certain circumstances.

Outside of Rust, I just discovered a GCC regression, while testing patches for the SH backend, that affected x86_64 as well and had existed since version 9.x. It just happened never to have triggered the internal compiler error, because no one had tested what I tested before.

> You haven't given any reason why rustc_codegen_gcc couldn't do this in the future. Development on it is slow, but that is because essentially one dev is working on it. If the project had the resources of gccrs, then it could potentially be feature complete by now.

I have already successfully compiled simple Rust code using gccrs while I haven't been able to do that using rustc_codegen_gcc. And you're also answering the question yourself: Despite rustc_codegen_gcc already being part of the official upstream Rust git repository, hardly anyone inside the Rust team is interested in supporting the project, despite people constantly using the project as an argument against gccrs.

But why

Posted Oct 1, 2024 16:14 UTC (Tue) by chris_se (subscriber, #99706) [Link] (6 responses)

Once a problem is complex enough, it's always good to have multiple different solutions for it. Compiling Rust is a _really_ complex problem, and having multiple implementations will be very helpful:

- Reimplementing it in a completely different codebase may uncover corner cases of features that were not properly thought through when they were initially proposed and/or implemented.
- It forces Rust to adopt a more stringent separation between specification of the language and reference implementation.
- It drives some friendly competition, which can be a very good thing. (The current competition between GCC, Clang and MSVC has been very good for the state of compiler development in the C/C++ space.)
- It provides an additional pathway for bootstrapping Rust. (gccrs is not implemented in Rust itself.)

Also, your calling a 4-year-old version of a compiler "ancient" kind of exemplifies the lack of maturity in the Rust ecosystem. Sure, you could consider it a bit on the older side, but ancient, really? There are still plenty of supported (!) Linux distributions that were released _before_ that compiler was. I'm currently learning Rust, and I really like parts of the language (both pattern matching and Result/Option/the ? operator are fantastic), but I have some major misgivings about the ecosystem as it stands today.

But why

Posted Oct 1, 2024 18:10 UTC (Tue) by hunger (subscriber, #36242) [Link] (3 responses)

> - It forces Rust to adopt a more stringent separation between specification of the language and reference implementation.

Why? So far there do not seem to be any changes to the workflow planned: you need to implement any feature in rustc, where it is evaluated and stabilized. That process seems to work really well, much better than in design-by-committee languages.

That means gccrs will be an eternal second for any and all features. I doubt they can ever catch up.

But why

Posted Oct 2, 2024 7:40 UTC (Wed) by taladar (subscriber, #68407) [Link] (2 responses)

Nor would we really want it to catch up, because the last thing we want is two equally important implementations that behave differently, with neither willing to change its behavior to match the other. We see in C, C++ and other languages where that leads. The ideal case for gccrs is really that nobody but the people who want to keep obscure architectures alive ever uses it.

But why

Posted Oct 2, 2024 15:29 UTC (Wed) by hunger (subscriber, #36242) [Link]

> The ideal case for gccrs is really that nobody but the people who want to keep obscure architectures alive ever use it.

But for that gcc-rs needs to be good enough to compile a decent amount of existing Rust code. So you need to follow upstream features closely.

But why

Posted Oct 2, 2024 18:04 UTC (Wed) by sam_c (subscriber, #139836) [Link]

There's a difference here in that the C and C++ specifications are pretty slow to move (ok, C more so) and also don't _want_ to cover certain areas. Rust has no spec right now and the stated policy of gccrs is to follow rustc behaviour and it's a bug to deviate -- of course discussions with rustc developers might come up on their end to see what is correct, but gccrs won't intentionally deviate.

If Rust does end up developing a spec, it'll look different from C and C++ I'm sure.

But why

Posted Oct 2, 2024 15:03 UTC (Wed) by dave_malcolm (subscriber, #15013) [Link]

One other benefit, to the GCC community, is that by having gccrs in our community we gain knowledge and internal features that are of use to GCC generally.

For example, supporting Rust's error codes was a good use-case when I extended how GCC tracks metadata for diagnostics; see e.g.:
https://gcc.gnu.org/git/?p=gcc.git;a=commitdiff;h=1aee5d2...

But why

Posted Nov 8, 2024 15:57 UTC (Fri) by Lennie (subscriber, #49641) [Link]

> It provides an additional pathway for bootstrapping Rust. (gccrs is not implemented in Rust itself.)

I really like that point, because you can build on something like this:

https://guix.gnu.org/en/blog/2023/the-full-source-bootstr...

But why

Posted Oct 1, 2024 16:45 UTC (Tue) by Subsentient (guest, #142918) [Link] (22 responses)

Every time someone discusses any other implementation of Rust, I see these comments pop up almost immediately.

The last time Rust was forked (the Crab language, which went nowhere), the forkers were terrified of being sued over the trademark despite rustc being ostensibly "open source". A healthy open source project doesn't inspire such fear in forks.

I like Rust as a language, despite some very strong personal gripes about the way it handles unsafe and refuses to allow aliasing optimizations to be turned off (which actually makes unsafe Rust more dangerous than C, especially when interacting with C, because of the potential for undetected heisenbugs).
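
A minimal sketch of the hazard being described (deliberately undefined behavior, for illustration only): once unsafe code — or C code reached through FFI — creates a second live mutable reference to the same memory, the exclusivity guarantee that Rust's optimizer relies on is silently broken.

    fn main() {
        let mut x = 1;
        let p = &mut x as *mut i32;
        unsafe {
            // Two live &mut to the same location: undefined behavior.
            // The optimizer may assume `a` and `b` do not alias
            // (LLVM noalias), so these increments can be miscompiled
            // without any diagnostic.
            let a = &mut *p;
            let b = &mut *p;
            *a += 1;
            *b += 1;
        }
        println!("{x}");
    }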

I get it, it's fun to run the bleeding edge and develop fast and loose with a systems language. But that's not constructive for a systems language. A systems language needs to be reliable, stable, well-defined, and predictable, and Rust has so many shortcomings there that the Rust for Linux kernel project is forced to use the unstable rustc.

But why

Posted Oct 1, 2024 17:49 UTC (Tue) by roc (subscriber, #30627) [Link]

> A healthy open source project doesn't inspire such fear in forks.

Lots of open source projects protect their trademarks. Nothing unhealthy about that.

> despite some very strong personal gripes about the way it handles unsafe and refuses to allow aliasing optimizations to be turned off

It is good that Rust is avoiding C's mistake of forking into two incompatible "-fstrict-aliasing" and "-fno-strict-aliasing" languages.

> A systems language needs to be reliable, stable, well-defined, and predictable,

Not necessarily. There are all kinds of projects that need a "systems language" and not all of them have high stability requirements.

> Rust has so many shortcomings there that the Rust for Linux kernel project is forced to use the unstable rustc.

It's kind of the opposite. It's because Rust is committed to a high standard of stability and reliability that they're conservative about admitting features to the stable set, and that's why it's taking a while for all the features Rust-for-Linux needs to mature enough to be admitted to that set.

But why

Posted Oct 1, 2024 18:17 UTC (Tue) by hunger (subscriber, #36242) [Link] (19 responses)

> I get it, it's fun to run the bleeding edge and develop fast and loose with a systems language. But that's not constructive for a systems language. A systems language needs to be reliable, stable, well-defined, and predictable, and Rust has so many shortcomings there that the Rust for Linux kernel project is forced to use the unstable rustc.

What exactly are you missing wrt. stability, reliability and predictability? In my experience the rust team invests a lot into those... ok, they do not design by committee, but that process has a very bad standing anywhere else but in language development.

But why

Posted Oct 1, 2024 22:50 UTC (Tue) by sam_c (subscriber, #139836) [Link] (18 responses)

Rust for Linux still relies on various "unstable" (in the Rust terminology as well) features that are not yet marked stable. To me, this indicates the language is not yet mature.

It's very common for applications to depend on very new (just-released) versions of Rust. It's also common for flagship Rust applications like Firefox to rely heavily on "unstable" features and break on new Rust releases -- which always makes me ask: if a feature is important enough that something like Firefox needs it, why hasn't it been stabilized yet?

But why

Posted Oct 1, 2024 23:01 UTC (Tue) by josh (subscriber, #17465) [Link] (2 responses)

> Rust for Linux still relies on various "unstable" (in the Rust terminology as well) features that are not yet marked stable. To me, this indicates the language is not yet mature.

No, it indicates that Rust for Linux is still in development and that some of the specific features Rust for Linux needs are still in development.

> It's also common for flagship Rust applications like Firefox to rely heavily on "unstable" features and break on new Rust releases

To the best of my knowledge, Firefox has not needed that for a long time. That perception was created by one particular crate in Firefox's dependency tree that, without coordination with the rest of the ecosystem, forcibly opted the entire build into nightly-only features in order to use SIMD for acceleration. That crate no longer does so, but the reputation damage that crate did still remains to this day.

But why

Posted Oct 1, 2024 23:09 UTC (Tue) by sam_c (subscriber, #139836) [Link] (1 responses)

> No, it indicates that Rust for Linux is still in development and that some of the specific features Rust for Linux needs are still in development.

I think it's easy for me to reach the conclusion I did when Rust as an ecosystem has several prominent packages which need nightly?

If it's several and not limited to a specific domain (like just Rust for Linux), it's hard for me to *not* make that conclusion?

> That perception was created by one particular crate in Firefox's dependency tree that, without coordination with the rest of the ecosystem, forcibly opted the entire build into nightly-only features in order to use SIMD for acceleration.

It looks like the last time it broke in FF was packed_simd with Rust 1.78 (and many times before that). I hope it is indeed fixed now, but I would say that May of this year isn't a long time ago.

But why

Posted Oct 4, 2024 9:57 UTC (Fri) by josh (subscriber, #17465) [Link]

> It looks like the last time it broke in FF was packed_simd in Rust 1.78

That's news to me; sorry to hear that. I would not have said it was a long time ago if I'd known about the break in 1.78.

But why

Posted Oct 2, 2024 1:41 UTC (Wed) by intelfx (subscriber, #130118) [Link] (9 responses)

> Rust for Linux still relies on various "unstable" (in the Rust terminology as well) features that are not yet marked stable. To me, this indicates the language is not yet mature.

Or maybe it indicates that Rust-for-Linux is an incredibly special case (for a multitude of reasons, the main one being the need to integrate into an existing giant, very low-level C project), and it's very specifically support for _that_ special case which isn't mature?

But why

Posted Oct 2, 2024 2:33 UTC (Wed) by sam_c (subscriber, #139836) [Link] (8 responses)

I refer you to my other comments -- if it were just Rust for Linux, I think I could understand it, but we run into projects relying on nightly a lot.

But why

Posted Oct 2, 2024 10:49 UTC (Wed) by tialaramex (subscriber, #21167) [Link] (7 responses)

In your other comments you likewise insist that there's "a lot" of this, but don't provide examples beyond Rust for Linux and your faulty memory that Firefox (which used to require nightly but hasn't for some time) does this.

To me that suggests that you've extrapolated wildly rather than actually having "a lot" of these cases.

But why

Posted Oct 2, 2024 17:53 UTC (Wed) by sam_c (subscriber, #139836) [Link] (6 responses)

I think one should start with an assumption of good faith; please do that with my comments. I want Rust to succeed, I just wish some things about the ecosystem were different, and I hope we'll get there.

I already gave evidence that it wasn't faulty at all -- it happened in May for Firefox. We had an issue with rustls' build.rs needing nightly in March (they were happy to fix it though). For a long time, 'eww' (not the Emacs one) required nightly, but they fixed that in Feb. Chromium requires nightly features (at least for various -Z ... flags it passes) at the moment. Those are examples I've hit personally and that I can easily recall, and I don't represent all developers in our distro.

But why

Posted Oct 2, 2024 17:54 UTC (Wed) by sam_c (subscriber, #139836) [Link]

> and I don't represent all developers in our distro.

(My point being that others mention such cases as well, I just don't remember the package names without checking logs.)

But why

Posted Oct 3, 2024 8:33 UTC (Thu) by tialaramex (subscriber, #21167) [Link] (4 responses)

I'm comfortable with "good faith" requirements, but remember your good faith position is that Rust isn't a mature language because of Rust for Linux, and when it's pointed out that there's obviously a better explanation, you point at "a lot" of similar cases without specifying, yet look how thin the actual examples you can scrape up are.

How confident are you that every single C and C++ program that's packaged has all the correct feature tests for the features it depends on? Both languages do notionally provide these tests. I've never used them. My guess is that actually very few even try and that the feature tests which are present in the rest are often faulty, so long as the code builds and seems superficially to work on popular platforms that's enough.

But why

Posted Oct 4, 2024 3:28 UTC (Fri) by sam_c (subscriber, #139836) [Link] (3 responses)

I'm not sure I agree with that interpretation: I said we see too much reliance on nightly; someone explains that Rust for Linux is special; I say that my opinion is influenced by other projects too. I'd like to see Rust projects overall relying less on nightly.

I didn't even say "it's not mature", I said it indicates to me that it isn't yet - very much about my perception and open to discussion, which is why I'm commenting at all...

> How confident are you that every single C and C++ program that's packaged has all the correct feature tests for the features it depends on?

I spent a lot of time on this sort of thing last year, which LWN wrote about. Not very confident at all! But my point was and remains that if Rust is mature, I'd expect use in the kernel to not need heaps of nightly features. It comes across as odd to me because if Rust is deployed widely in kernel-like environments, surely this kind of functionality should be worked out already?

But why

Posted Oct 4, 2024 13:03 UTC (Fri) by tialaramex (subscriber, #21167) [Link]

> surely this kind of functionality should be worked out already?

That would depend on what "this kind of functionality" means. For example, Linux really can't abide floating point, so for them it's crucial that you can't by any means end up doing floating point operations in Rust for Linux unless you've spelled out that intent very clearly. Unsurprisingly, in most applications it's enough that (as is true in stable Rust) you don't stumble onto floating point operations when you never needed them. For example, in Rust, just as in C, 5 / 2 is 2 because it's not performing a floating point divide and then transmuting the type as it would in, say, Python.
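
For concreteness:

    fn main() {
        assert_eq!(5 / 2, 2);       // integer division, as in C
        assert_eq!(5.0 / 2.0, 2.5); // floating point only on request
    }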

So to adjust from "Yeah, so don't do that" to "We'll make sure you won't do that" Rust for Linux needed tweaks and the stable Rust does not have those tweaks.

Are you astonished that nobody else had this requirement? Nobody in embedded cares, and most of the other operating systems are not in a place where "enforce micro-optimisation of context switch overhead" is a focus. If there are people who care (maybe at Microsoft?) and are using Rust for an OS kernel, they're not advertising this use.

In 2023 you could point at alloc: back then Rust for Linux relied on a fork of alloc, because Linus requires that allocations be fallible at the point of use, which means all the Rust APIs which panic on allocation failure had to be removed from Rust for Linux. Rust's core does not allocate - it doesn't have an allocator, so it can't - and Rust's std layer requires an operating system, so it will never be part of Rust for Linux; we're implementing the OS, not using it. But alloc was in the middle, and so for a period Rust for Linux forked it. That's done; stable Rust now ships all the explicit allocation-failure stuff which Linus insists on.

But in alloc we do see things that (I did not check specifics) Rust for Linux might want but that aren't in stable Rust today - Vec::push_within_capacity, for example. In C you'd just use it and YOLO. In Rust that will not work† and you must ask for nightly Rust if you want these features.

† You could use Mara's nightly_crimes! macro or write your own, but that's not a serious suggestion and in practice amounts to the same thing.
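
A minimal nightly-only sketch of that API (feature-gate name as of current nightlies; being unstable, the details may change):

    // Requires a nightly toolchain.
    #![feature(vec_push_within_capacity)]

    fn main() {
        let mut v = Vec::with_capacity(1);
        // Succeeds: capacity is already available.
        assert_eq!(v.push_within_capacity(1), Ok(()));
        // Fails and hands the value back rather than reallocating.
        assert_eq!(v.push_within_capacity(2), Err(2));
    }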

But why

Posted Oct 6, 2024 22:55 UTC (Sun) by raven667 (subscriber, #5198) [Link] (1 responses)

> I didn't even say "it's not mature", I said it indicates to me that it isn't yet - very much about my perception and open to discussion

I think this whole effort _is_ the process by which the specific changes needed for Rust to support the existing Linux kernel (rather than a custom all-Rust kernel or userspace application) become mature. Rust is clearly capable of it; the clock for stability _starts_ when the implementation is ready, but it's only marked as "stable" after some time of real-world use. A half-thought-through analogy: a special vintage wine is the same liquid that was put in the bottle the day it was made, back when it wasn't special; time and some tiny changes made it special later. Likewise, a feature might land in nightly, and with time and some tiny changes it becomes stable infrastructure, but it was substantially there the day it landed in nightly. Rust for Linux is going to take several more years to all come together and be seen as old-and-stable, but that is the trend.

But why

Posted Oct 7, 2024 14:13 UTC (Mon) by raven667 (subscriber, #5198) [Link]

Thinking about this again while procrastinating before starting work.

You can describe several milestones of maturity: Rust as a mature standalone application development language, Rust maturity for mixed C/C++ application and library development, Rust maturity for building a standalone kernel, Rust used in some generic mixed C/C++ kernels, and Rust for Linux, which specifically fits in with the internal style and abstractions of Linux. People can end up talking past one another a bit by not being clear as to which they are referring to.

But why

Posted Oct 11, 2024 9:28 UTC (Fri) by moltonel (guest, #45207) [Link]

> It's very common for applications to depend on very new (just-released) versions of Rust.

That's in large part "because we can", not because older versions are buggy or unusable. Installing the latest version, or multiple versions, is trivial with rustup, and compatibility is usually perfect (notwithstanding the recent 1.80 time mess). So some developers might jump on the latest release just for a small improvement like Duration::abs_diff() in 1.81.
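
For example:

    use std::time::Duration;

    fn main() {
        // Duration::abs_diff() was stabilized in Rust 1.81: a small
        // convenience, but enough for some to bump their MSRV.
        let a = Duration::from_secs(5);
        let b = Duration::from_secs(3);
        assert_eq!(a.abs_diff(b), Duration::from_secs(2));
    }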

Compare that with C++, Python, and other venerable languages where the only reasonable version to use is the one shipped by your distro, which can take months to handle the various regressions before pushing an update, and where it's common for your users to be years behind the latest compiler.

Being able to easily use the latest compiler and even experimental features is undeniably a good thing, even if part of the community raises their minimum required version a bit too aggressively.

But why

Posted Oct 11, 2024 9:41 UTC (Fri) by bfeeney (guest, #6855) [Link] (3 responses)

> Rust for Linux still relies on various "unstable" (in the Rust terminology as well) features that are not yet marked stable. To me, this indicates the language is not yet mature.

I think you're conflating "Rust for Linux" and the "Rust language".

I think you're also being misled by the term "unstable".

Linux currently is built with a variety of non-standard GCC extensions that are not part of the C language specification. These come and go at any point, which is why the kernel tends to pin a particular GCC version.

Rust for Linux is in the exact same situation: it uses extensions that are not part of the formal Rust language specification.

Unlike with GCC, however, you cannot use extensions from a stable release of the compiler; you have to use one of the nightly releases. This combination of non-standard features and a nightly compiler release is the meaning of the word "unstable" here.

However, despite the "unstable" description, one should note that Rust uses an unstable-branch/stable-trunk philosophy, and an extensive test suite is employed for every nightly release, which it must pass in order to be made available.

The only difference between C/GCC and Rust/rustc is that there is a plan to formally incorporate the Rust-for-Linux extensions into the Rust language specification in the short term; no such plan exists for C/GCC.

The Rust compiler and language are stable, in the sense that they're bug free.

But why

Posted Oct 11, 2024 11:48 UTC (Fri) by pizza (subscriber, #46) [Link] (2 responses)

> Linux currently is built with a variety of non-standard GCC extensions that are not part of the C language specification. These come and go at any point, which is why the kernel tends to pin a particular GCC version.

Got a citation for the "and go at any point" ?

GCC has historically been _very_ consistent with intentional behavior, and has a very long timeline for the deprecation and removal of features.

Meanwhile, with regards to "pinning" GCC versions: when I checked a few weeks ago, the minimum required version to compile Linux was v5.1 [1], released over 9 years ago, and the system I'm typing this on uses the latest GCC release (14.2). Regressions do happen of course, but GCC and kernel developers are quite diligent about fixing those when they occur..

(It's been what, over two decades since I last remember pinning an old GCC version for kernel use?)

[1] Granted, you may need a newer version than that if you have a more recent CPU architecture, or turn on an optional feature (eg one of the many hardening plugins) that needs a newer GCC version.

But why

Posted Oct 11, 2024 12:20 UTC (Fri) by intelfx (subscriber, #130118) [Link] (1 responses)

> Got a citation for the "and go at any point" ?

Like VLAs?

But why

Posted Oct 11, 2024 12:45 UTC (Fri) by pizza (subscriber, #46) [Link]

> Like VLAs?

I asked for an actual citation, not a TLA.

...The only reference I can find about VLAs was when they were formally _documented_ as a GCC extension to C++ (as opposed to just an extension to C, which they have been since approximately forever). I found nothing about support for VLAs being _removed_.

Meanwhile, VLAs are actually part of the C99 spec and documented as supported in the latest GCC 14 releases.

So... again, [citation needed] about "and go at any point".

But why

Posted Oct 5, 2024 11:26 UTC (Sat) by slanterns (guest, #173849) [Link]

> Last time Rust was forked, which went nowhere (Crab language), they were terrified of being sued for trademark despite rustc being ostensibly "open source".

That's not true. The crablang folks indeed created the project as a troll / pushback against the Rust Foundation's new trademark policy (and some other drama). They had already sed-ed every "Rust" into "Crab", so there was no possibility of being sued. The project died out simply because its creators did not have the ability to develop a compiler and no real Rust maintainer showed any interest; I also view it as an unnecessary overreaction, since the foundation withdrew the new policy in time. So the only thing they could do was grab code from upstream without any meaningful work of their own, and the project thus remained a joke.

But why

Posted Oct 1, 2024 22:49 UTC (Tue) by sam_c (subscriber, #139836) [Link] (9 responses)

Obviously, the intention is not to support an "ancient version" forever. I also maintain that the ecosystem will eventually have to stop chasing nightly or bleeding-edge MSRVs if Rust is to be truly seen as a stable, mature language. See e.g. https://github.com/rust-lang/cargo/pull/13270.
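
For readers unfamiliar with the mechanics: a crate declares its MSRV via the rust-version field in Cargo.toml, and the pull request linked above concerns how cargo's dependency resolver takes that field into account. A minimal sketch, with a hypothetical crate name and version:

    [package]
    name = "example-crate"   # hypothetical
    version = "0.1.0"
    edition = "2021"
    # Minimum supported Rust version: cargo refuses to build the crate
    # with a toolchain older than this.
    rust-version = "1.70"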

An alternative implementation is valuable for many reasons. Rust only has one compiler with a very fast release cycle and no LTS releases, and it's also not flawless (who is?). The other comments mention crater runs, but those aren't infallible, as we saw with the "time" inference fiasco.

Many ecosystems benefit from a second implementation, as bugs end up being found in the first implementation, or unintended ambiguities in the "specification" (even a somewhat informal one like Rust's). It also leads to alternative perspectives on proposals ("this might be easily implementable in rustc because of X, but it's not in Y because of Z", or the opposite, or "we managed to do this in X via Y"), and it will help open Rust up to a larger audience via both licensing and tooling.

Portability is part of this but it's not the only reason to care about it.

Bootstrapping is another, given rustc is written in Rust. Rust is being added to core system libraries and tools, so it makes sense to ask if we can get a bootstrappable Rust compiler.

Something can have a point even if you personally don't have interest in it.

But why

Posted Oct 1, 2024 23:05 UTC (Tue) by josh (subscriber, #17465) [Link] (7 responses)

> I also maintain that the ecosystem will eventually have to stop chasing nightly or bleeding-edge MSRVs if Rust is to be truly seen as a stable, mature language.

The ecosystem largely *doesn't* chase nightly. Some crates use nightly to test or demonstrate new features; the vast majority of the ecosystem uses stable.

And, empirically, there are plenty of people seeing it as a stable, mature language *now*.

There's ongoing work to help people who are stuck on old compiler versions, by helping them run correspondingly older versions of crates. But in general, distributions that are snapshotting old versions of Rust and not updating them should also be snapshotting old versions of crates that work with that old version of Rust. Nonetheless, we're adding some support for people who don't do so.

But why

Posted Oct 1, 2024 23:14 UTC (Tue) by sam_c (subscriber, #139836) [Link] (6 responses)

> The ecosystem largely *doesn't* chase nightly. Some crates use nightly to test or demonstrate new features; the vast majority of the ecosystem uses stable.

Okay, but there are enough people who do that it becomes problematic for people shipping Rust applications. It doesn't really matter to me that most crates are fine if there's a substantial minority which aren't, and it seems to be accepted in the ecosystem.

On the other hand, it is very rare for us to come across software which needs e.g. bleeding edge GCC or Clang (both as an individual and even more so at scale).

It is *not* uncommon for us to find a package relying on crates from a git repository (not even releases) while also using unstable features. I wish that weren't the case, and I hope it will stop being the case, but it happens to us a lot. I believe you, of course, that many projects aren't like that, but that doesn't change the fact that a substantial chunk are.
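
To illustrate the pattern: such a package's Cargo.toml points at a moving git branch rather than a published release (crate name and URL here are hypothetical):

    [dependencies]
    # Tracks whatever is currently on the branch; there is no release
    # artifact for a distribution to pin and reproduce.
    some-crate = { git = "https://github.com/example/some-crate", branch = "main" }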

Rust applications that require nightly

Posted Oct 2, 2024 9:59 UTC (Wed) by farnz (subscriber, #17727) [Link] (5 responses)

Your experience doesn't match mine; despite having several Rust applications compiled from source on here, the only Rust code on my system that requires nightly is a toy OS kernel I'm writing for the fun of it. Everything else I use is compiled with a stable compiler - including everything my employer's code depends upon.

What packages are you seeing that depend on unstable features and/or a nightly compiler?

Rust applications that require nightly

Posted Oct 4, 2024 3:31 UTC (Fri) by sam_c (subscriber, #139836) [Link] (4 responses)

I gave a list of examples that came to mind at https://lwn.net/Articles/992686/. Chromium is the biggest hassle at the moment for us. It's important to keep in mind that for many, this might be their first experience with packaging Rust.

Rust applications that require nightly

Posted Oct 4, 2024 4:07 UTC (Fri) by sam_c (subscriber, #139836) [Link]

From some discussion with others off-LWN, it was suggested that some of the cases we've hit involve features that were only just stabilised in the latest rustc release, which would explain how we then hit them again with our stable version of Rust before we stabilise a newer rustc on our end.

Rust applications that require nightly

Posted Oct 4, 2024 8:57 UTC (Fri) by farnz (subscriber, #17727) [Link] (2 responses)

I've never had much luck building any browser from source without severe pain, going right back to the days of the Mozilla suite (before Firefox existed), so I can well believe that browsers are a complete pain. Back in the day, the trouble I kept hitting with browsers was that they depended on patched versions of bleeding edge libraries, often built with very specific C++ compiler versions otherwise they wouldn't work, and I can easily believe that that attitude has carried on into their forays with Rust.

But two browser projects is not a significant chunk of the Rust ecosystem; would you accept me dismissing C++ as a language because you can't compile anything with less than 16 GiB RAM + swap (the current minimum for Firefox and Chromium)?

Rust applications that require nightly

Posted Oct 4, 2024 19:36 UTC (Fri) by sam_c (subscriber, #139836) [Link] (1 responses)

Oh yes, very much agreed.

But no, we're all used to C++. We're not used to Rust. Perceptions, and getting familiar with something new, do matter? Especially with the background I mentioned.

Rust applications that require nightly

Posted Oct 5, 2024 13:13 UTC (Sat) by farnz (subscriber, #17727) [Link]

The thing I was calling you out on was the claim that the Rust ecosystem as a whole (ripgrep, alacritty, deno, mdbook, wasmer, wezterm, ruffle, atuin, librsvg and others) require nightly; I'm only aware of three projects in the ecosystem that do - the two big browsers, and Rust for Linux.

But why

Posted Oct 2, 2024 19:31 UTC (Wed) by ralfj (subscriber, #172874) [Link]

Note that the Rust project itself already has alternative implementations of many parts of rustc. For instance, rust-analyzer is an alternative parser and partial type-checker. Miri is an alternative execution mode -- basically a codegen backend, except it does interpretation instead of compilation. And of course there is ongoing work on a specification, not with the goal of saying "every compiler that implements this spec is a Rust compiler", but with the goal of saying "this is how rustc behaves". That effort has uncovered, and will continue to uncover, various language quirks, which can then either be fixed or be accepted as "well, that's just how things work now", depending on the trade-offs involved.
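
As a small illustration of the kind of bug Miri's interpretation catches (a sketch; the exact diagnostic wording varies by version): the following compiles cleanly with rustc, but running it under "cargo +nightly miri run" reports undefined behavior:

    // Compiles without complaint, but the raw pointer outlives the
    // stack variable it points to.
    fn dangling() -> *const i32 {
        let x = 42;
        &x as *const i32 // x is dropped when the function returns
    }

    fn main() {
        let p = dangling();
        // rustc accepts this; Miri flags the load as a use of a
        // dangling pointer (undefined behavior).
        let v = unsafe { *p };
        println!("{v}");
    }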

So we're already getting many of the benefits of alternative implementations and specifications, without the downsides like fracturing the ecosystem due to incompatibilities.

That said, as far as I know the gccrs people are committed to bug-for-bug compatibility with rustc. As long as that remains the case, there shouldn't be a risk of fragmentation, which alleviates most of the concerns. If they find a weird language quirk, what I hope will happen is that this is brought to the attention of the rust-lang project, and until the question is resolved gccrs behaves like rustc. That way hopefully the entire ecosystem can benefit. :)

But why

Posted Oct 22, 2024 7:05 UTC (Tue) by ras (subscriber, #33059) [Link]

> I really do not see the point in gccrs

I find this puzzling. Does there have to be a point to it? People do all sorts of weird things - climb mountains, race cars at lethal speeds, walk across deserts. I guess most people don't see the point in doing those things either, but beyond "it's not my cup of tea" they don't question it. Here someone is donating time and/or money to an open source project. Does it matter why they are doing it, except perhaps to figure out ways to encourage more of it?

I made some (very minor) contributions to DotGNU, a C# implementation. There already was a perfectly good open-source C# implementation (Mono), but this one was created by someone local and I like playing with compilers, so why not? It was a small group scattered across the globe, and it was great to be part of; I enjoyed myself immensely at the time. DotGNU never really went anywhere, but personally I can't think of a better way to learn in great depth about C#, the CLR, and its libraries.

What more reason do you need to work on gccrs?

And it's GPL

Posted Oct 1, 2024 16:40 UTC (Tue) by TheGopher (subscriber, #59256) [Link] (11 responses)

Never to be forgotten: gccrs will be GPL-licensed, which rustc is not.

This is especially important to anyone working in the embedded domain, where there was previously a history of shipping proprietary compilers with arch-specific features, something that went away when the industry moved to GCC. rustc, with its current license, can be shipped binary-only to developers, with no right to access the source code. Binary-only tooling is something embedded vendors would love to do again, as charging money for tools and tool upgrades is very lucrative.

And it's GPL

Posted Oct 1, 2024 17:11 UTC (Tue) by roc (subscriber, #30627) [Link] (7 responses)

What's wrong with charging money for tools? Is it immoral for tool developers to get paid for their work?

And it's GPL

Posted Oct 1, 2024 19:30 UTC (Tue) by TheGopher (subscriber, #59256) [Link] (6 responses)

There is nothing wrong with charging money for tools per se. Most embedded vendors do, even if they are shipping GPL tools. What's wrong is the planned obsolescence of the toolchain, which is not under the control of the developer, as well as the lack of freedom to modify the toolchain as the developer may need to do.
A lot of these toolchains are released on a "fire and forget" basis, but that doesn't match the reality of some of the devices developed with them.
An option, of course, is to enter into contracts covering years of support, release cadence, compiler versions in relation to language features, security fixes, etc. Typically only large corporations can do this, though, and even then the contracts are seldom watertight.

And it's GPL

Posted Oct 1, 2024 21:38 UTC (Tue) by Cyberax (✭ supporter ✭, #52523) [Link] (5 responses)

> What's wrong is the planned obsolescence of the toolchain, which is not under the control of the developer, as well as the lack of freedom to modify the toolchain as the developer may need to do.

Nothing in the GPL stops planned obsolescence.

And it's GPL

Posted Oct 1, 2024 21:54 UTC (Tue) by TheGopher (subscriber, #59256) [Link] (4 responses)

Yes it does, as you have access to the source code and are able to forward-port patches, etc. GPL changes the game when it comes to toolchains.

And it's GPL

Posted Oct 1, 2024 22:37 UTC (Tue) by Cyberax (✭ supporter ✭, #52523) [Link] (3 responses)

You can do the same with MIT/Apache2?

And it's GPL

Posted Oct 1, 2024 23:06 UTC (Tue) by TheGopher (subscriber, #59256) [Link]

MIT/Apache2 does not contain any obligation to provide the source code. So patches applied by the toolchain supplier may simply not be available.

And it's GPL

Posted Oct 1, 2024 23:07 UTC (Tue) by pizza (subscriber, #46) [Link]

> You can do the same with MIT/Apache2?

Only if you have the source code. Which, in my experience, is very much not a given in the world of vendor toolchains.

At $dayjob-2 I had two different proprietary LLVM-based toolchains on my workstation. At $dayjob-2, I had three. All had features we needed that were not (and never would be) present upstream and required talking to a license server to do anything.

Both of those gigs involved products with potentially multi-decade support lifecycles, forcing us to keep paying every year for the "right" to use an increasingly-ancient not-updated toolchain.

Speaking of, at $dayjob-3, we actually abandoned a product line because the toolchain vendor *refused* to supply a license for the ancient toolchain it had been built around, insisting instead that we upgrade to a newer version that had documented regressions in our use case. The cost of migrating (and the subsequent testing/recertification) was not economically justifiable.

And it's GPL

Posted Oct 3, 2024 19:33 UTC (Thu) by sobkas (subscriber, #40809) [Link]

You know that with MIT/Apache you can get only a binary, and that would be fine under the license?
It's harder to change or port code when you don't have access to it.

And it's GPL

Posted Oct 1, 2024 17:27 UTC (Tue) by pizza (subscriber, #46) [Link] (1 responses)

> Binary-only tooling is something embedded vendors would love to do again, as charging money for tools and tool upgrades is very lucrative.

They never stopped doing so. It's just that now (nearly?) all of those proprietary toolchains are based on LLVM.

And it's GPL

Posted Oct 1, 2024 18:46 UTC (Tue) by Trelane (subscriber, #56877) [Link]

They did; for a while GCC was the main game in town. Lots of super-proprietary companies contributed to it (Apple and nVidia, for example).

The corporations didn't like giving the source to their users; they wanted to keep the internals secret. (Turns out, having serfs instead of colleagues is more profitable for the aristocracy, though not nearly as good for the serfs.) That is the main reason why LLVM exists.

And it's GPL

Posted Oct 11, 2024 10:41 UTC (Fri) by moltonel (guest, #45207) [Link]

Pretty ironic, then, that the original/main sponsor of gccrs is Open Source Security, which sells proprietary GCC plugins (https://grsecurity.net/featureset/gcc_plugins). Like it or not, gccrs exists partly to protect the interests of proprietary software.

Another thing to note is that GCC is still very monolithic, to the point that you usually need to fork the whole compiler to add a new backend for your embedded hardware. When such a fork is maintained at all, it is usually frozen in time, frontend-wise. This is much less of an issue with LLVM.

One last data point: Ferrocene, the certified Rust compiler which drove the efforts to write a Rust spec, is by policy the exact same code as upstream rustc.

I don't know the future, and I am happy that Rust doesn't put all its eggs in the same basket, but I don't see gccrs as protection against a hypothetical threat of proprietary Rust.

Great progress

Posted Oct 2, 2024 9:14 UTC (Wed) by amit (subscriber, #1274) [Link]

It sounds like very good progress is being made, and it's an active community interested in getting gccrs working on par with "the standard compiler". So the last paragraph, which paints a very dire picture of a lack of time and developers, is surprising. I didn't find a list of TODO items that shows lagging progress. Their latest monthly report shows 9 contributors for the month (including the interns), and a call for contributors with one item needing attention. Overall, it looks like a well-managed project making good progress. Perhaps some context about what they're prioritizing (like compiling std, or compiling the kernel) would help readers understand how much work is pending, and of what kind.


Copyright © 2024, Eklektix, Inc.
This article may be redistributed under the terms of the Creative Commons CC BY-SA 4.0 license
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds