Debian, Rust, and librsvg
Posted Nov 14, 2018 13:38 UTC (Wed) by moltonel (guest, #45207)
In reply to: Debian, Rust, and librsvg by glaubitz
Parent article: Debian, Rust, and librsvg
The standard Go compiler is standalone, so writing a GCC frontend (and an LLVM frontend) made sense for Go because they'd have to port *everything* to the new arch otherwise. Same story for the D language. But Rust is already a frontend, delegating most of the portability work to the backend. Adding a new frontend doesn't bring much.
Saying that "for portability, all projects should use GCC" is a bit ironic. There are many projects which require or benefit from LLVM nowadays. Arguably nowadays, if an arch doesn't have a good LLVM backend, it's that arch's problem and not the problem of the project that needs LLVM to build. If your favorite niche arch is missing out because it lack LLVM, please help fix LLVM on that arch.
That's not to say that porting Rust is just a matter of porting LLVM. There are surely Rust-porting bugs that have nothing to do with LLVM. We all would like Rust's platform support to improve (I think there's going to be a bigger push for that once edition 2018 is out), but writing a Rust GCC frontend doesn't seem like a good use of community resources.
Posted Nov 14, 2018 13:53 UTC (Wed)
by glaubitz (subscriber, #96452)
[Link] (66 responses)
There are efforts for that, see, for example:
> https://reviews.llvm.org/D50314
The problem is that LLVM has stricter rules than gcc about which targets it accepts as backends. The m68k backend was not accepted for the time being because LLVM upstream feared the additional burden of another backend.
> The standard Go compiler is standalone, so writing a GCC frontend (and an LLVM frontend) made sense for Go because they'd have to port *everything* to the new arch otherwise. Same story for the D language. But Rust is already a frontend, delegating most of the portability work to thebackend. Adding a new frontend one doesn't bring much.
Adding a new frontend means you get a second, independent implementation of the language, which is always an advantage. And with a gcc backend, Rust will be available on a new architecture the moment someone has ported gcc, which always happens for new architectures.
> Saying that "for portability, all projects should use GCC" is a bit ironic. There are many projects which require or benefit from LLVM nowadays. Arguably nowadays, if an arch doesn't have a good LLVM backend, it's that arch's problem and not the problem of the project that needs LLVM to build. If your favorite niche arch is missing out because it lack LLVM, please help fix LLVM on that arch.
There are architectures that might be niche architectures from the Western PoV but they aren't in other countries. What about the Russian Elbrus architecture, the Chinese C-Sky architecture or the Chinese Sunway architecture used in two of the fastest supercomputers on top500? All these usecases are being excluded from Rust because there is only one non-portable variant of the Rust compiler. It's simply a missed opportunity to make Rust more successful. I don't understand why Rust upstream isn't interested in that.
> That's not to say that porting Rust is just a matter of porting LLVM. There are surely Rust-porting bugs that have nothing to do with LLVM. We all would like Rust's platform support to improve (I think there's going to be a bigger push for that once edition 2018 is out), but writing a Rust GCC frontend doesn't seem like a good use of community resources.
I disagree. Once you have the gcc backend, you will max out Rust's portability. So, it's definitely worth the effort.
PS: I have invested efforts into making Rust more portable myself, so it's not that I am just complaining:
> https://lists.debian.org/debian-devel-announce/2018/11/ms...
Posted Nov 14, 2018 15:18 UTC (Wed)
by cesarb (subscriber, #6266)
[Link] (5 responses)
Posted Nov 14, 2018 23:29 UTC (Wed)
by roc (subscriber, #30627)
[Link]
Posted Nov 16, 2018 10:50 UTC (Fri)
by Ieee754 (guest, #128655)
[Link] (3 responses)
There are already prototypes of the Rust frontend supporting other backends, but this only works if Rust generates some IR for the backend, which the backend then compiles.
GCC's IRs are designed to be incompatible with this model, to prevent industry from reusing GCC as a backend. The GCC IRs require a lot of back and forth between the different frontend and backend phases to work. This means that you can't use a front-end designed to support multiple "reasonable" backends with GCC, because it is, by design, an unreasonable backend.
Posted Nov 18, 2018 9:21 UTC (Sun)
by drago01 (subscriber, #50715)
[Link] (2 responses)
Posted Nov 18, 2018 10:33 UTC (Sun)
by hsivonen (subscriber, #91034)
[Link]
Posted Nov 29, 2018 22:13 UTC (Thu)
by tschwinge (subscriber, #43910)
[Link]
Posted Nov 14, 2018 15:37 UTC (Wed)
by moltonel (guest, #45207)
[Link] (36 responses)
> There are architectures that might be niche architectures from the Western PoV but they aren't in other countries. What about the Russian Elbrus architecture, the Chinese C-Sky architecture or the Chinese Sunway architecture used in two of the fastest supercomputers on top500?
I was talking about "niche" archs more in the sense of "arches that require more work to use because they lack the usual tools" than in the sense of "arches used by only X people". In that sense and in my view, any arch that doesn't have LLVM is niche. It is cutting itself from a lot of possible uses. Getting LLVM on an arch seems more important to me than getting Rust on it, because it opens up the door to more than just Rust.
In that light, LLVM's stricter rules for backend upstream inclusion are indeed a PITA. I'm not sure how that should be solved, apart from demonstrating that whoever cares about a given arch will help maintain it.
> All these usecases are being excluded from Rust because there is only one non-portable variant of the Rust compiler. It's simply a missed opportunity to make Rust more successful. I don't understand why Rust upstream isn't interested in that.
Rust upstream *is* very interested in porting Rust to more arches. But the plan is (AFAIK) essentially to upgrade supported arches to higher tiers, and bring new arches, using the main compiler. Efforts on other compilers are welcome, but the core team is currently focusing its non-infinite workforce on the main compiler. There are two important aspects of Rust culture that lean towards that decision: the ideas that the language, compiler and libs should evolve very progressively, and that "correct and good design" is more important than "first to market". A new GCC frontend faces strong obstacles to adoption in that context (see also the "it'll never catch up" issue above) .
Underlying all this is the question of "Who's at fault for libFOO not being available on archFOO? Who should do the work?" Is it Rust, because they should write a GCC frontend? Is it LLVM, because they should support archFOO? Is it archFOO, because they should be the ones maintaining the compiler backends? Is it the libFOO devs, because they should have known better than to use a technology that isn't universally available? There's no right answer; depending on your affiliations and abilities, you're likely to think that it's someone else's problem.
> I have invested efforts into making Rust more portable myself, so it's not that I am just complaining
I first read about your work in "This week in Rust" and am very grateful for the result. Sorry if my "please help fix that arch" felt out of place. It was a bit strange to read in LWN today that the person bringing Rust to all official Debian archs is also seemingly annoyed by Rust and by people using or packaging it, but I understand that it's a complicated subject.
Posted Nov 14, 2018 16:05 UTC (Wed)
by mathstuf (subscriber, #69389)
[Link] (35 responses)
This is no different than CPython being the only implementation of 3.x while PyPy and other projects only had 3.x-2 support or so. Now CPython is explicitly slowing down after the BDFL stuff (and explicitly calls out letting other implementations catch up). I imagine that Rust will, at some point, slow down for other implementations, but it took CPython 20+ years to do that. And it took 10 years (Jython started in 2001) for alternate implementations to even get an initial release.
Posted Nov 15, 2018 6:25 UTC (Thu)
by edomaur (subscriber, #14520)
[Link] (34 responses)
https://blog.rust-lang.org/2018/07/27/what-is-rust-2018.html
Posted Nov 15, 2018 9:54 UTC (Thu)
by ceplm (subscriber, #41334)
[Link] (32 responses)
Posted Nov 15, 2018 18:23 UTC (Thu)
by peter-b (guest, #66996)
[Link] (1 responses)
Sorry, I don't really understand this argument. Does openSUSE build GCC without GCC?
Posted Nov 18, 2018 10:27 UTC (Sun)
by roblucid (guest, #48964)
[Link]
A phase of the compiler build created a GCC subset that could then bootstrap the full compiler.
If you didn't have a C compiler working on the target, then cross-compiling was another possibility. GCC was used on Sun h/w to build code for telephone exchanges, for example, which were unsuitable for running a compiler natively.
Posted Nov 15, 2018 20:14 UTC (Thu)
by josh (subscriber, #17465)
[Link] (29 responses)
Posted Nov 15, 2018 20:57 UTC (Thu)
by ceplm (subscriber, #41334)
[Link] (28 responses)
Posted Nov 15, 2018 21:14 UTC (Thu)
by josh (subscriber, #17465)
[Link] (27 responses)
Posted Nov 15, 2018 21:23 UTC (Thu)
by ceplm (subscriber, #41334)
[Link] (24 responses)
Posted Nov 15, 2018 21:45 UTC (Thu)
by ssokolow (guest, #94568)
[Link]
(mrustc is a Rust compiler, written in C++, which makes some assumptions about the correctness of the code, but is complete enough to compile certain older versions of rustc so that it can be proved free of a Trusting Trust attack.)
Posted Nov 15, 2018 21:46 UTC (Thu)
by josh (subscriber, #17465)
[Link] (20 responses)
You might check with the Debian folks on reproducible build strategies for self-hosting compilers.
Posted Nov 15, 2018 22:15 UTC (Thu)
by ceplm (subscriber, #41334)
[Link] (9 responses)
Posted Nov 15, 2018 23:01 UTC (Thu)
by nybble41 (subscriber, #55106)
[Link] (7 responses)
To build the minimal version still requires GCC, and to bootstrap GCC you need some form of C compiler (though not necessarily GCC itself).
I don't know of any distros with a requirement to be able to build every package "from scratch" with no outside dependencies. Frankly I'm not even sure how that would work; does every rebuild need to start with procuring a blank-slate system with a direct hardware interface for loading hand-generated machine code? In practice you always start out using host tools of one form or another. If you look at the bootstrap process for something like Linux From Scratch[1] it involves building the toolchain three times: First you build a cross-compiler using the host tools, then you use that cross-compiler on the host to build a native toolchain and other packages required for standalone operation (what LFS calls the "temporary system"), and finally use this temporary system to build the host-independent tools.
This is for a complete system, of course, but bootstrapping a compiler is similar. Use the host's Rust compiler to build a cross-compiler which doesn't rely on the host environment, then use that to build the native Rust compiler. The result of the second build should be deterministic even if the first stage isn't, due to variations in host compilers.
Posted Nov 16, 2018 1:11 UTC (Fri)
by pabs (subscriber, #43278)
[Link]
Posted Nov 16, 2018 15:39 UTC (Fri)
by moltonel (guest, #45207)
[Link] (5 responses)
I understand that many platforms get a C compiler before any other, but that doesn't change the fact that a pre-existing binary was used to compile that "initial" compiler. The language of the pre-existing binary shouldn't matter, as long as it can generate native code for your target platform. It's a mindset issue, not a technical issue.
For fun, have a look at https://github.com/nebulet/nebulet, a microkernel written in Rust that only executes wasm, and does so in ring 0. No C in sight ;)
Posted Nov 16, 2018 16:47 UTC (Fri)
by zlynx (guest, #2285)
[Link] (1 responses)
Posted Nov 23, 2018 10:03 UTC (Fri)
by ewen (subscriber, #4772)
[Link]
If you’re thinking of another one and do find the link I’d be curious to read about their approach.
Ewen
Posted Nov 16, 2018 17:11 UTC (Fri)
by madscientist (subscriber, #16861)
[Link] (2 responses)
You can find a C compiler from decades ago that is sufficient to build GCC (because GCC starts out by building a "stage 1" version of the GCC compiler which relies only on very portable C, then builds the rest of GCC, including all the different front-ends like C++, gfortran, etc., using that stage 1 compiler). You can cache some obsolete version of GCC, or get TinyCC or something like that, and use that for bootstrapping. It's super-easy and virtually never has to change.
Posted Nov 16, 2018 18:30 UTC (Fri)
by josh (subscriber, #17465)
[Link]
Rust doesn't systematically demand the latest prior release to build itself; it simply does not refrain from using features as they become available. As mentioned elsewhere in this thread, folks are working on creating a minimized series of rustc versions to successively bootstrap from the last OCaml version.
Posted Nov 20, 2018 5:57 UTC (Tue)
by mirabilos (subscriber, #84359)
[Link]
Posted Nov 16, 2018 14:09 UTC (Fri)
by glondu (subscriber, #70274)
[Link]
Not exactly.
In OCaml world, there is a portable bytecode for binaries (like Java) and the OCaml sources contain a binary of the full compiler (to bytecode) that is used to compile everything (including the compiler itself). The interpreter for that bytecode is written in C. Hence, "bootstrapping" a new architecture is just porting the interpreter to the new architecture, which is easy.
Additionally, there is a compiler to native code, but it is not strictly needed to enjoy OCaml on a new architecture. And it can be compiled using the bytecode compiler, so no bootstrapping issues.
Posted Nov 15, 2018 22:25 UTC (Thu)
by nix (subscriber, #2304)
[Link] (3 responses)
For Rust, this makes sense, because the language is evolving fast and this does mean that at least the compiler can dogfood experimental features added in compiler version N in version N+1, then mark them non-experimental once they know they work for at least that use case. :)
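(As a minimal illustration of the gating mechanism, not from the comment above: unstable features have to be requested by name, and only a nightly compiler will accept the request. The `test` gate is used here purely as an example of a long-unstable feature.)

```rust
// Opt in to the unstable `test` feature (built-in benchmarking support).
// A nightly rustc accepts this; a stable rustc refuses with an error along
// the lines of "#![feature] may not be used on the stable release channel".
// This is the gate that keeps experimental features opt-in until they are
// judged ready to be stabilized.
#![feature(test)]

extern crate test;

fn main() {
    println!("built with an unstable feature gate enabled");
}
```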
Posted Nov 16, 2018 0:01 UTC (Fri)
by josh (subscriber, #17465)
[Link]
Posted Nov 18, 2018 1:45 UTC (Sun)
by rgmoore (✭ supporter ✭, #75)
[Link] (1 responses)
Maybe it's just me, but it's hard to see a system like this as ready for building OS critical components. When the compiler demands bleeding edge features such that it can't be built with a version of itself that's even 3 months old, it is not a stable platform on which to build a system.
Posted Nov 18, 2018 23:51 UTC (Sun)
by nix (subscriber, #2304)
[Link]
Posted Nov 16, 2018 14:11 UTC (Fri)
by plugwash (subscriber, #29694)
[Link] (5 responses)
As the maintainer of Raspbian there have been several things that have made rust particularly painful for us. I suspect some of these would also be applicable to re-bootstrapping efforts.
1. The upgrade treadmill is unusually rapid. You need the previous release of rust and rust makes a new release about every 6 weeks.
2. There are interdependencies between rust versions and cargo versions that aren't always obvious until you try particular combinations.
3. There does not seem to be any way to disable multithreading in the build. Threading can easily lead to failures that only show their face on particular hardware/kernel configurations.
4. After a failed build attempting to re-run the failed command manually using the command line printed by the build process (for example to run it in a debugger or under strace or something) produces completely unrelated errors. I guess there are some special environment variables or something but I can't seem to find any documentation of them if there are.
5. It's virtually impossible to google anything about rust. If I google say "rust fabricate" most of the results have nothing to do with the language.
Posted Nov 16, 2018 15:14 UTC (Fri)
by Ieee754 (guest, #128655)
[Link] (1 responses)
I have no idea what you expected from googling "rust fabricate", but if I just go to the documentation site of the toolchain: https://doc.rust-lang.org/rustc/index.html , there is a frickin book about how the Rust toolchain is built, how to run tests, debug issues, configure everything, the overall design of the compiler, the different modules, etc.
Posted Nov 17, 2018 2:02 UTC (Sat)
by plugwash (subscriber, #29694)
[Link]
I was hoping to find some documentation on what "fabricate" (which was hanging during the rustc build on one of our buildboxes, I suspect a bug in the arm32 compatibility layer of the arm64 kernel on said box) did and how to troubleshoot problems with it.
But instead I just found a load of results about metal fabrication :(
Someone did later tell me what fabricate was (apparently it's the rust-installer crate), though I haven't got around to trying to take a closer look. For now I have been working around the issue by doing our rust builds on a different system.
> there is a frickin book about how the Rust toolchain is built
Using the search tool in said book to search for fabricate doesn't seem to turn up any results.
Still thanks for the pointer.
Posted Nov 16, 2018 21:09 UTC (Fri)
by josh (subscriber, #17465)
[Link] (2 responses)
My understanding is that both Rust and Cargo only require the previous stable version at most, and should never have dependency loops between their latest versions.
> 4. After a failed build attempting to re-run the failed command manually using the command line printed by the build process (for example to run it in a debugger or under strace or something) produces completely unrelated errors. I guess there are some special environment variables or something but I can't seem to find any documentation of them if there are.
This is being fixed right now, and yes, there are environment variables. See https://github.com/rust-lang/cargo/pull/5683 for details.
Posted Nov 19, 2018 14:15 UTC (Mon)
by dgm (subscriber, #49227)
[Link] (1 responses)
Doesn't that imply that you (potentially) have to build each and every version in succession to get the latest one?
Posted Nov 20, 2018 11:57 UTC (Tue)
by nix (subscriber, #2304)
[Link]
Posted Nov 16, 2018 15:04 UTC (Fri)
by Ieee754 (guest, #128655)
[Link] (1 responses)
(what you are saying does not make sense for a toolchain)
Posted Nov 18, 2018 0:01 UTC (Sun)
by JoeBuck (subscriber, #2330)
[Link]
Posted Nov 20, 2018 5:56 UTC (Tue)
by mirabilos (subscriber, #84359)
[Link] (1 responses)
That sounds ridiculous.
Posted Nov 20, 2018 9:01 UTC (Tue)
by Jonno (subscriber, #49613)
[Link]
Of course not, rust supports cross-compilation perfectly well!
Sure, that means that if you want to re-verify the chain of trust from scratch you are going to need two computers (one of each arch), but that is a comparatively minor inconvenience...
Posted Nov 15, 2018 20:13 UTC (Thu)
by josh (subscriber, #17465)
[Link]
Posted Nov 14, 2018 15:40 UTC (Wed)
by josh (subscriber, #17465)
[Link] (14 responses)
Seems like a sensible degree of caution, to think about the impact of a new backend on the rest of the ecosystem. If having an m68k backend makes mainstream architectures even slightly harder to maintain, or if it slows down development due to lower developer resources, it isn't worth it.
> Adding a new frontend means you get a second, independent implementation of the language which is always of advantage.
Not at all, no. A second, independent implementation is only interesting if it provides features the primary implementation *cannot*, and still keeps up with the primary implementation. I'd much rather see rustc itself add the necessary portability and provide the same functionality everywhere.
A new backend is a good idea, though it'd still be a monumental undertaking. (There are projects already in progress for new backends, though none for a GCC backend that I know of.) But a new frontend? No. Let's not.
As far as I can tell, the *only* justification for mrustc (an implementation which only works for code already verified as correct by rustc) is for diverse double-compilation; nobody is seriously proposing using mrustc for other purposes.
> All these usecases are being excluded from Rust because there is only one non-portable variant of the Rust compiler.
"non-portable" is hyperbole; it can already target many architectures, including 14 of Debian's architectures.
But no, all those usecases are only missing from Rust because nobody has done the work. Those architectures could have an LLVM target, and a Rust target, if a group of developers contributed them *and committed to ongoing maintenance*. Or if someone was sufficiently motivated to produce a GCC backend, they could.
> It's simply a missed opportunity to make Rust more successful. I don't understand why Rust upstream isn't interested in that.
I really don't think Rust upstream is bemoaning the loss of potential popularity to be had there. Such decisions are tradeoffs, and it's not unreasonable to consider what would be traded off in exchange.
However, speaking as a part of Rust upstream: patches and commitments to ongoing maintenance welcome.
Posted Nov 14, 2018 15:57 UTC (Wed)
by glaubitz (subscriber, #96452)
[Link] (6 responses)
> Not at all, no. A second, independent implementation is only interesting if it provides features the primary implementation *cannot*, and still keeps up with the primary implementation. I'd much rather see rustc itself add the necessary portability and provide the same functionality everywhere.
Well, then we have a different opinion on that. I think it's definitely valuable to have a second, independent implementation of the language. This would also help with standardization.
> As far as I can tell, the *only* justification for mrustc (an implementation which only works for code already verified as correct by rustc) is for diverse double-compilation; nobody is seriously proposing using mrustc for other purposes.
There is nothing though that speaks against using mrustc for other purposes. Most upstream projects which are using Rust are not using the latest features of the language all the time, so I guess many projects will build just fine with mrustc.
>> All these usecases are being excluded from Rust because there is only one non-portable variant of the Rust compiler.
Ignoring the fact that it's currently broken again on multiple Debian release architectures in Debian experimental.
Don't you think it's a bit of a burden for the Rust maintainers in Debian if they constantly have to fix these issues over and over again because Rust upstream doesn't provide an LTS variant of the language? And what's the point of having to constantly update and fix compiler regressions when many upstream projects aren't using the latest features anyway?
I sometimes have the impression that Rust is primarily developed for the language's own sake instead of providing a useful language for projects to adopt.
> But no, all those usecases are only missing from Rust because nobody has done the work. Those architectures could have an LLVM target, and a Rust target, if a group of developers contributed them *and committed to ongoing maintenance*. Or if someone was sufficiently motivated to produce a GCC backend, they could.
Well, I know of two very important upstream projects that have contemplated using Rust code in their projects but have discarded the idea because of the issues I mentioned.
>> It's simply a missed opportunity to make Rust more successful. I don't understand why Rust upstream isn't interested in that.
> I really don't think Rust upstream is bemoaning the loss of potential popularity to be had there. Such decisions are tradeoffs, and it's not unreasonable to consider what would be traded off in exchange.
Well, after my post to debian-devel-announce, I received exactly this feedback from two very important upstream projects. So, I think I really have a valid point.
> However, speaking as a part of Rust upstream: patches and commitments to ongoing maintenance welcome.
Yes, I'm aware of that and I have done that.
Posted Nov 14, 2018 16:18 UTC (Wed)
by mathstuf (subscriber, #69389)
[Link] (4 responses)
mrustc ignores lifetimes completely, assuming that they are correct. Granted, for released code, this is probably fine, but I certainly wouldn't want to apply a patch to a project without checking with rustc first.
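(A minimal illustration, not from the comment above: the following is the kind of program that rustc's borrow checker rejects, but that a compiler which skips lifetime and borrow analysis, as mrustc is described to do, would translate anyway, turning a compile-time error into a potential use-after-free.)

```rust
fn main() {
    let mut v = vec![1, 2, 3];
    let first = &v[0];     // immutable borrow of `v` starts here
    v.push(4);             // rustc rejects this: `v` cannot be borrowed as
                           // mutable while `first` still borrows it; a
                           // reallocation here would leave `first` dangling
    println!("{}", first); // the immutable borrow is still in use here
}
```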
Posted Nov 14, 2018 16:30 UTC (Wed)
by glaubitz (subscriber, #96452)
[Link] (3 responses)
Yes, I'm aware of that. But I wouldn't use mrustc for developing code either. Just for building release code.
Posted Nov 14, 2018 16:36 UTC (Wed)
by josh (subscriber, #17465)
[Link] (2 responses)
Also worth noting: mrustc currently has far less portability than Rust, and I don't see any signs of that changing.
Posted Nov 16, 2018 14:18 UTC (Fri)
by glaubitz (subscriber, #96452)
[Link] (1 responses)
Did you even try mrustc yourself or are you just guessing? It took me 5 minutes to make it build on m68k.
mrustc generates C code out of Rust code. How should it be **not** portable?
Posted Nov 16, 2018 21:13 UTC (Fri)
by josh (subscriber, #17465)
[Link]
> - Supported Targets:
I based my comment on that. This isn't about the host platform of mrustc, it's about what the generated code targets, and using C as an intermediate language does not automatically make code portable.
Posted Nov 14, 2018 16:28 UTC (Wed)
by josh (subscriber, #17465)
[Link]
Only if it manages to keep up. And even then, I'd expect it to generate ecosystem issues of its own due to implementation-specific quirks and bugs.
The primary implementation is Open Source; the value of having multiple independent implementations seems low compared to the cost. (The value of having multiple backends, on the other hand, seems quite reasonable for the cost.)
> Most upstream projects which are using Rust are not using the latest features of the language all the time, so I guess many projects will build just fine with mrustc.
mrustc doesn't implement borrow checking. It'd be a *bad* idea to use it for that purpose. It primarily exists so that it's possible to independently compile rustc (which is written in rust) and prove that there aren't any "trusting trust" attacks.
> Don't you think it's a bit of the burden for the Rust maintainers in Debian if they constantly have to fix these issues over and over again because Rust upstream doesn't provide an LTS variant of the language?
No. "LTS" or not, Rust will not stop having ongoing development. And while an "LTS variant" has been discussed, that will not affect this type of issue; LTS only really works for a specific subset of users who don't care about incorporating new upstream packages regularly, not the broader ecosystem. Architectures that want to be supported will still need to work with upstream to keep them working, and in particular to make sure they're actively tested.
But in any case, I think this comes back to the question raised elsewhere both in the debian-devel thread and in this thread: who is going to do the work? This isn't something where people interested in an architecture can offload work to others and expect it to get done.
It's not reasonable to look at an actively developed project like Rust and say "slow down, wait up". Rust *will* provide guidance, mentorship, and a welcoming environment where people can contribute. But it won't slow down or wait up.
> Well, after my post to debian-devel-announce, I received exactly this feedback from two very important upstream projects.
Namely?
> Yes, I'm aware of that and I have done that.
So you'd be interested and willing to help get the architectures you care about elevated to a higher support tier in Rust upstream, with the work and infrastructure that would entail?
Posted Nov 14, 2018 20:48 UTC (Wed)
by ballombe (subscriber, #9523)
[Link] (5 responses)
A) "If your favorite niche arch is missing out because it lack LLVM, please help fix LLVM on that arch."
B)" The problem is that LLVM has stricter rules which targets they accept into LLVM as a backend as compared to gcc. The m68k backend was not accepted for the time being as LLVM upstream feared the additional burden of another backend.""
C)" Seems like a sensible degree of caution, to think about the impact of a new backend on the rest of the ecosystem. If having an m68k backend makes mainstream architectures even slightly harder to maintain, or if it slows down development due to lower developer resources, it isn't worth it."
No wonder people get mad.
Posted Nov 14, 2018 22:09 UTC (Wed)
by josh (subscriber, #17465)
[Link]
A statement like "LLVM upstream feared the additional burden of another backend" seems rather unlikely if there's a small army of developers for that architecture willing to make sure that the LLVM developers don't end up shouldering that additional burden. That doesn't seem at all unreasonable. That the LLVM developers had this concern suggests that there was reason to believe they'd be stuck doing work for that architecture themselves.
Speaking for Rust, we're *more* than willing to go out of our way a certain degree to provide portability, but we're not willing to get shouldered with the burden of ongoing maintenance of an architecture without the help of some reliable developers for that architecture and some test infrastructure for that architecture.
Posted Nov 14, 2018 23:28 UTC (Wed)
by roc (subscriber, #30627)
[Link] (3 responses)
Posted Nov 20, 2018 6:02 UTC (Tue)
by mirabilos (subscriber, #84359)
[Link] (2 responses)
Posted Nov 20, 2018 18:43 UTC (Tue)
by roc (subscriber, #30627)
[Link] (1 responses)
What is *not* reasonable is for the vanishingly small number of people who care about these niche architectures to push the costs of supporting those architectures onto others.
Posted Nov 20, 2018 22:07 UTC (Tue)
by rodgerd (guest, #58896)
[Link]
And, like it or not (and I don't, particularly), LLVM is becoming the compiler of choice for more and more software.
Posted Nov 15, 2018 11:18 UTC (Thu)
by eru (subscriber, #2753)
[Link]
Posted Nov 16, 2018 10:47 UTC (Fri)
by Ieee754 (guest, #128655)
[Link] (7 responses)
This is the biggest BS I've read all day.
Please, go and start building a Rust GCC frontend, so that you can show the world the value of re-inventing the wheel.
Posted Nov 16, 2018 14:22 UTC (Fri)
by glaubitz (subscriber, #96452)
[Link] (5 responses)
How is it bullshit? Multiple languages do that. Go, D, Java, Pascal, C/C++, Python, Fortran and so on all have multiple implementations and have standardized their language. But, of course, all these language designers are wrong and only the Rust way is the proper way to do it.
> Please, go and start building a Rust GCC frontend, so that you can show the work the value of re-inventing the wheel.
Please don't make hyperbolic arguments just because I am trying to explain to you that the current way that Rust is being developed is highly problematic for downstream distributions and hinders the adoption of Rust.
Ask your favorite enterprise distribution vendor what they think about a language that changes their compiler in incompatible ways every six weeks.
It's so annoying when the new kids on the block always think that the generations of engineers and programmers before them did it all wrong and that they alone have all the wisdom.
Posted Nov 19, 2018 18:28 UTC (Mon)
by jccleaver (guest, #127418)
[Link]
Interestingly, even perl has gone this route. The perl5 binary remains the language specification for perl v5, but perl6 has an independent specification and has actually had several different implementations over the years.
Posted Nov 19, 2018 20:56 UTC (Mon)
by moltonel (guest, #45207)
[Link] (3 responses)
> How is it bullshit? Multiple languages do that. Go, D, Java, Pascal, C/C++, Python, Fortran and so on all have multiple implementations and have standardized their language. But, of course, all these language designers are wrong and only the Rust way is the proper way to do it.
I know you were replying to an aggressive comment, but please don't spread the idea that the Rust community doesn't want independent implementations or claims that the current Rust way is the only proper one. There's a difference between saying "it would never get as good as the main implementation / it's not worth the effort right now / for the portability usecase it'd be better to port llvm" (which all seem true to me) and refusing an alternate implementation when it comes up (which I would find silly). If anything, mrustc is a counter-example (though it's not clear whether the intended usecase is portability, or trusting trust, or standardisation, or...).
All the languages you mention arrived at multiple implems via different routes. C/C++/Fortran/Pascal are (were) simple enough that writing a compiler is (was) easy, they were often a new platform's first language, and they got half a century (!) to get there. Java alternate implems endured legal hurdles despite the original implementor's "write once, run anywhere" hype. Python implems constantly lag behind the mainline, and many were abandoned because of this. D has been struggling to gain popularity for 16 years, and its gcc frontend looks like a desperate attempt to buck the trend. Go has a huge corporation developing and lobbying for it, which can easily absorb the cost of a gcc frontend (plus, like D, Go has been designed to be easy to implement).
It seems likely that Rust too will get good alternate implementation(s) someday, from mrustc or somewhere else. But these things take a lot of time (unless you're a huge corporation and are happy to cut corners in your language). And for now, it's just not a priority for Rust. The Rust community is very good at focusing on short- and long-term targets. Its RFC process is really good, and it is better defined than many "standardized" languages. When the pace of new features has slowed down and it is time for an alternate implem, it'll happen. We might have a few more years debating whether "the year of the need for an alternate rust implem" has arrived or not ;)
Posted Nov 20, 2018 9:15 UTC (Tue)
by renox (guest, #23785)
[Link] (2 responses)
It's very likely that D's gcc frontend exists because there are some people who like both D and gcc and wanted to integrate a D frontend into gcc; nothing "desperate" about this.
You'd better be careful about what you write; this part of your comment is both very mean and very stupid.
Posted Nov 20, 2018 12:09 UTC (Tue)
by moltonel (guest, #45207)
[Link] (1 responses)
I actually really liked D when I first encountered it. I did tutorials and wrote toy programs, talked to my peers about D, kept an eye on development... And then nothing; it never picked up as I hoped it would. Use of D is still very anecdotal. It seems to me that the window of opportunity for D has closed. I'd be happy to be proved wrong.
Posted Nov 20, 2018 12:22 UTC (Tue)
by farnz (subscriber, #17727)
[Link]
From my PoV (having written production code in all the languages I'm mentioning), D is a competitor to C++11 - it's a major advance on C++03, and competitive with C++11. The trouble is that where C++ has continued to evolve in the form of C++14 and C++17, D seems to have become stuck; it's neither improving at the rate C++ is, nor is it exploring areas of the programming language design space that C++ does not.
Rust is advancing where D has not (indeed, some of the production D code I've worked on in the past has been rewritten in Rust) because the use of affine types moves an interesting set of bugs from runtime to compile time; it thus doesn't matter that C++20 will be a better C++ than Rust, because Rust isn't aiming to be a better C++; in contrast, D was a better C++03, comparable to C++11, and has fallen back since then into a niche where it's not as good as C++ for systems programming (the GC is effectively compulsory), not as good for quick hacks as Go or Python, and has the same division of bugs into runtime versus compile time as C++ (unlike Rust).
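(A minimal sketch of the "moved from runtime to compile time" point, not taken from the comment above: ownership transfer in Rust makes a use-after-move a compile-time error, where the equivalent C++ code would only misbehave at runtime, if it misbehaves visibly at all.)

```rust
fn consume(s: String) {
    // Takes ownership of `s`; the String is freed when this function returns.
    println!("consumed: {}", s);
}

fn main() {
    let s = String::from("hello");
    consume(s);           // ownership of `s` moves into `consume`
    // println!("{}", s); // uncommenting this line fails to compile with
                          // "borrow of moved value: `s`" -- the bug is caught
                          // before the program ever runs
}
```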
Posted Nov 16, 2018 16:35 UTC (Fri)
by apoelstra (subscriber, #75205)
[Link]
Perhaps it is reasonable for a language so young to be changing so rapidly -- as unreasonable as "every six weeks you have to add another compiler to your bootstrap chain" is, it does take a nontrivial amount of effort to achieve even this. Pre-1.0 there was no such policy, and literally nobody knows a series of compilers that will take you back to OCaml.
Remember that Rust is much bigger and also much younger than any other widely-used language of its age... but in some sense it's a victim of its own success -- Rust code interoperates well with C, compiles to just-as-fast assembler, but has an extremely expressive type system which is fun to write in and makes it much easier to read and reason about. This means that even fairly-conservative systems-programming projects like librsvg are understandably tempted to migrate to it even while it is changing so rapidly. Naturally this means that people who care about trusting trust, long-term stability and high-assurance software are somehow going to have to deal with the Rust toolchain's immature bootstrap story, and just as naturally, this is going to be a painful and frustrating process.
It is not helpful to anybody for Rust advocates to be loudly posting that the current situation is OK and that it is somehow the Debian maintainers who are old fuddy-duddies and should abandon their principles.
I have high hopes that through this process, projects like mrustc, cargo-vendor, etc. -- which are sorely needed to use Rust in the high-assurance high-availability contexts it's designed for -- will gain the respect and human resources they need to succeed in their missions. If I did not believe this, there is exactly zero chance I would be using Rust in production anywhere, and indeed less optimistic people than me are holding off on using Rust until this situation improves.
It is very disheartening to see vocal subsets of the Rust community denigrating these efforts.
> Once you have the gcc backend, you will max out Rust's portability. So, it's definitely worth the effort.
> "non-portable" is hyperbole; it can already target many architectures, including 14 of Debian's architectures.
> - x86-64 linux
> - (incomplete) x86 windows
> - (incomplete) x86-64 windows
> How is it bullshit? Multiple languages do that. Go, D, Java, Pascal, C/C++, Python, Fortran and so on all have multiple implementations and have standardized their language. But, of course, all these languages designers are wrong and only the Rust way is the proper way to do it.