Debian, Rust, and librsvg
Debian supports many architectures and, even for those it does not officially support, there are Debian ports that try to fill in the gap. For most user applications, it is simply a matter of getting GCC up and running for the architecture in question, then building all of the different packages that Debian provides. But for packages that need to be built with LLVM (applications or libraries that use Rust, for example), that simple recipe becomes more complicated. How much the lack of Rust support for an unofficial architecture should hold back the rest of the distribution was the subject of a somewhat acrimonious recent discussion.
The issue came up on the debian-devel mailing list when John Paul Adrian Glaubitz complained
about the upload of a new version of librsvg to unstable.
Librsvg is used to render Scalable
Vector Graphics (SVG) images; the project has recently been switching
some of its code from C to Rust, presumably for the memory safety offered
by Rust. Glaubitz said that the new "Rust-ified" library had been uploaded with no warning when the package maintainer "knows very well that this particular package has a huge number of reverse dependencies and would cause a lot of problems with non-Rust targets now". The reverse dependencies are the packages that rely on librsvg in this case.
Glaubitz noted that he has put a lot of effort into getting Rust working
for various unofficial architectures. In fact, on November 2, he had
announced
that four new architectures now had Rust support, which meant that all of those officially released by Debian were covered. That brought the number of
Debian architectures with Rust support to 14. His goal, it would seem, is
to get Rust running on all of the ports architectures eventually.
So he was unhappy that the new upload had made his work on Debian ports "harder with all the manual work required for maintaining librsvg on the non-Rust targets now". He concluded with a final shot: "Is that seriously the way we want to work together?"
Though never mentioned by name, apparently Jeremy Bicha was the target of
Glaubitz's ire. Bicha responded by noting that there was some coordination on the upload, as documented in a Debian bug report. That coordination was aimed at the supported architectures, however, and was "with the Release Team and not with whoever maintains ports". Part of the problem, evidently, is that Bicha is not clear on how one might even coordinate with the ports maintainers; ports is not exactly in the Debian mainstream, he said.
Adam Borowski's call
for a revert of the upload was not well-received. Ben Hutchings said
that Debian bases its releases and their contents on the architectures that
will be included. "We do not allow Debian ports to hold back changes in unstable." Bicha pointed out that the reversion would have a rather unreasonable outcome: "It sounds to me like you're saying that to fix librsvg being out of date on 11 arches, we need to make it out of date on every architecture."
Bicha continued that he did not mean to upset Glaubitz with the upload. "It honestly didn't occur to me that I ought to talk to ports maintainers before uploading packages that won't build on ports." Beyond that, the new version of librsvg
spent months in the experimental repository without complaint. He also
noted that the announcement did not come with a warning.
However, it was the very announcement of Rust support on more architectures, importantly including all of the release architectures, that allowed Bicha to upload the new version of librsvg. So Glaubitz feels that his own work to make that happen has been used against him. He rejected Bicha's suggestion.
A suggestion made by Samuel Thibault may provide a temporary way to move forward. He suggested that the older version of librsvg be given a different source-package name (others suggested "librsvg-c") but build the same binaries as librsvg for architectures that lack Rust support. That would work, but it would leave those architectures with an outdated (and unsupported) version of librsvg. Bicha asked for a volunteer to step up to maintain librsvg-c, though Manuel A. Fernandez Montecelo gave a long explanation of why he thought the Debian GNOME team (of which Bicha is a member) should maintain librsvg-c alongside librsvg.
But the aggressive tone of Glaubitz's messages (including this followup) was not particularly helpful. He seems to have a rather one-sided view of respect and communication between Debian participants, but he also feels that his work on getting Rust onto more architectures was not really appreciated or acknowledged by the rest of the project. Various project members tried to rectify that in a sub-thread, while also noting that his messages were unnecessarily harsh and off-putting.
Josh Triplett said that he didn't really see more coordination leading to a different technical outcome; other libraries and applications will be using Rust over time, so architectures that don't support it are going to get left further behind. Librsvg is simply the first to go down this path for Debian (though recent versions of both Firefox and Thunderbird also use Rust):
He asked what kind of help was needed in order to get LLVM and Rust support onto the remaining architectures. Glaubitz replied with a sizable number of complaints about Rust and its upstream; he is also skeptical of the security claims that are made for the language. He did point to three reviews needed for LLVM and to a bug in Rust that needed to be fixed, but he couldn't resist complaining about the stability of the Rust language as well as its six-week release cycle. According to Triplett, though, most of the Rust stability problems that are being complained about boil down to running Rust on unsupported and, critically, untested architectures. He pointed to the Rust platform-support page and suggested that platforms not in tier 2 are not going to be well supported.
In yet another lengthy reply, Glaubitz continued to complain about Rust. In fact, he said: "[...] I think it's a bad idea to use Rust code in a core component like librsvg". As Triplett pointed out, though, Debian cannot control what languages are used by upstream projects. Beyond that, he is not seeing any real actionable call for assistance from Glaubitz or others working on Debian ports.
In the end, it is not entirely clear what Glaubitz wants, at least what he wants that is within the realm of possibility. Rust may irritate him, but he cannot stand in the way of projects that want to use it, even if that means some less popular architectures cannot support them. His tone seems to discourage exactly what he would like to see happen: Rust on the rest of the ports architectures. It is understandable that he is unhappy that his work on getting Rust working on more architectures enabled librsvg to move forward for most of Debian, and all of the official parts of the distribution. The ports that do not yet support Rust are going to need to make that happen or be left behind, it would seem.
Posted Nov 14, 2018 4:56 UTC (Wed)
by josh (subscriber, #17465)
[Link] (81 responses)
Posted Nov 14, 2018 10:07 UTC (Wed)
by glaubitz (subscriber, #96452)
[Link] (80 responses)
Posted Nov 14, 2018 10:51 UTC (Wed)
by ballombe (subscriber, #9523)
[Link] (10 responses)
Posted Nov 14, 2018 11:15 UTC (Wed)
by ibukanov (subscriber, #3942)
[Link] (2 responses)
Posted Nov 14, 2018 11:21 UTC (Wed)
by glaubitz (subscriber, #96452)
[Link] (1 responses)
There is mrustc which could be interesting, too: https://github.com/thepowersgang/mrustc
Posted Nov 14, 2018 12:30 UTC (Wed)
by nix (subscriber, #2304)
[Link]
Posted Nov 14, 2018 12:49 UTC (Wed)
by lobachevsky (subscriber, #121871)
[Link]
Posted Nov 15, 2018 14:58 UTC (Thu)
by ptman (subscriber, #57271)
[Link]
Posted Nov 16, 2018 10:44 UTC (Fri)
by Ieee754 (guest, #128655)
[Link] (4 responses)
This is just wishful thinking. Rust is not C; that is, not all Rust can be mapped 1:1 to C. A lot of Rust is platform-specific, so even if one could compile Rust to C, one would need a platform library abstracting syscalls, libc, compiler extensions, and so on, wrapping the C system APIs well beyond what libc offers.
Unless a big company pays 100 people to work on this full-time for a couple of years, this won't become production-ready enough that hobbyists can start modifying the parts required for the resulting C to even compile on the less-supported platforms.
If you only want to target x86_64 linux with glibc, then this is doable, and some projects already try to do this. For anything else, there is a titanic amount of work required.
Just as an example, C has macros to support many platforms, Rust does so too. Imagine translating C with macros to Rust: you would have to first generate C without macros for each platform, then generate Rust code, and then recombine all that to generate
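The platform-specific code the comment describes can be illustrated with a small Rust sketch (the function name is hypothetical): each `cfg` branch compiles to a different body per target, so a hypothetical Rust-to-C translator would have to emit, and then maintain, a separate C variant for every platform.

```rust
// Conditional compilation selects a different function body per target,
// so "translate Rust to C once" cannot work: a translator would need one
// C output per platform. `platform_name` is a made-up example function.

#[cfg(target_os = "linux")]
fn platform_name() -> &'static str {
    "linux"
}

#[cfg(not(target_os = "linux"))]
fn platform_name() -> &'static str {
    "other"
}

fn main() {
    // On a Linux build this prints "linux"; elsewhere it prints "other".
    println!("{}", platform_name());
}
```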
Posted Nov 16, 2018 22:22 UTC (Fri)
by Cyberax (✭ supporter ✭, #52523)
[Link] (3 responses)
Posted Nov 16, 2018 23:06 UTC (Fri)
by apoelstra (subscriber, #75205)
[Link] (2 responses)
I'm not sure what the best way to think about this is, as far as "trusting trust" goes. Is assembler-like C that much better than machine code in terms of making back doors visible?
Posted Nov 16, 2018 23:23 UTC (Fri)
by sfeam (subscriber, #2841)
[Link]
Posted Feb 23, 2021 16:11 UTC (Tue)
by immibis (subscriber, #105511)
[Link]
The point of Rust isn't "security" as in "making backdoors visible" either, if that's where the confusion is coming from. The point of Rust is "security" as in "writing fewer exploitable bugs".
Posted Nov 14, 2018 13:38 UTC (Wed)
by moltonel (guest, #45207)
[Link] (67 responses)
The standard Go compiler is standalone, so writing a GCC frontend (and an LLVM frontend) made sense for Go because they'd have to port *everything* to the new arch otherwise. Same story for the D language. But Rust is already a frontend, delegating most of the portability work to the backend. Adding a new frontend doesn't bring much.
Saying that "for portability, all projects should use GCC" is a bit ironic. There are many projects which require or benefit from LLVM nowadays. Arguably, if an arch doesn't have a good LLVM backend, it's that arch's problem and not the problem of the project that needs LLVM to build. If your favorite niche arch is missing out because it lacks LLVM, please help fix LLVM on that arch.
That's not to say that porting Rust is just a matter of porting LLVM. There are surely Rust-porting bugs that have nothing to do with LLVM. We all would like Rust's platform support to improve (I think there's going to be a bigger push for that once edition 2018 is out), but writing a Rust GCC frontend doesn't seem like a good use of community resources.
Posted Nov 14, 2018 13:53 UTC (Wed)
by glaubitz (subscriber, #96452)
[Link] (66 responses)
There are efforts for that, see, for example:
> https://reviews.llvm.org/D50314
The problem is that LLVM has stricter rules than GCC about which targets it accepts as backends. The m68k backend was not accepted for the time being because LLVM upstream feared the additional burden of another backend.
> The standard Go compiler is standalone, so writing a GCC frontend (and an LLVM frontend) made sense for Go because they'd have to port *everything* to the new arch otherwise. Same story for the D language. But Rust is already a frontend, delegating most of the portability work to the backend. Adding a new frontend doesn't bring much.
Adding a new frontend means you get a second, independent implementation of the language, which is always an advantage. And with a gcc backend, it means that Rust will be available on a new architecture the moment someone has ported gcc, which always happens for new architectures.
> Saying that "for portability, all projects should use GCC" is a bit ironic. There are many projects which require or benefit from LLVM nowadays. Arguably, if an arch doesn't have a good LLVM backend, it's that arch's problem and not the problem of the project that needs LLVM to build. If your favorite niche arch is missing out because it lacks LLVM, please help fix LLVM on that arch.
There are architectures that might be niche architectures from the Western PoV but aren't in other countries. What about the Russian Elbrus architecture, the Chinese C-SKY architecture, or the Chinese Sunway architecture used in two of the fastest supercomputers on the Top500 list? All these use cases are being excluded from Rust because there is only one, non-portable variant of the Rust compiler. It's simply a missed opportunity to make Rust more successful. I don't understand why Rust upstream isn't interested in that.
> That's not to say that porting Rust is just a matter of porting LLVM. There are surely Rust-porting bugs that have nothing to do with LLVM. We all would like Rust's platform support to improve (I think there's going to be a bigger push for that once edition 2018 is out), but writing a Rust GCC frontend doesn't seem like a good use of community resources.
I disagree. Once you have the gcc backend, you will max out Rust's portability. So, it's definitely worth the effort.
PS: I have invested efforts into making Rust more portable myself, so it's not that I am just complaining:
> https://lists.debian.org/debian-devel-announce/2018/11/ms...
Posted Nov 14, 2018 15:18 UTC (Wed)
by cesarb (subscriber, #6266)
[Link] (5 responses)
Posted Nov 14, 2018 23:29 UTC (Wed)
by roc (subscriber, #30627)
[Link]
Posted Nov 16, 2018 10:50 UTC (Fri)
by Ieee754 (guest, #128655)
[Link] (3 responses)
There are already prototypes of Rust frontends supporting other backends, but this only works if the frontend generates some IR for the backend, and the backend then compiles this IR.
GCC's IRs are designed to be incompatible with this model, to prevent industry from reusing GCC as a backend. The GCC IRs require a lot of back and forth between the different frontend and backend phases to work. This means that you can't use a front-end designed to support multiple "reasonable" backends with GCC, because it is, by design, an unreasonable backend.
Posted Nov 18, 2018 9:21 UTC (Sun)
by drago01 (subscriber, #50715)
[Link] (2 responses)
Posted Nov 18, 2018 10:33 UTC (Sun)
by hsivonen (subscriber, #91034)
[Link]
Posted Nov 29, 2018 22:13 UTC (Thu)
by tschwinge (guest, #43910)
[Link]
Posted Nov 14, 2018 15:37 UTC (Wed)
by moltonel (guest, #45207)
[Link] (36 responses)
> There are architectures that might be niche architectures from the Western PoV but they aren't in other countries. What about the Russian Elbrus architecture, the Chinese C-Sky architecture or the Chinese Sunway architecture used in two of the fastest supercomputers on top500?
I was talking about "niche" archs more in the sense of "arches that require more work to use because they lack the usual tools" than in the sense of "arches used by only X people". In that sense, and in my view, any arch that doesn't have LLVM is niche. It is cutting itself off from a lot of possible uses. Getting LLVM on an arch seems more important to me than getting Rust on it, because it opens the door to more than just Rust.
In that light, LLVM's stricter rules for backend upstream inclusion are indeed a PITA. I'm not sure how that should be solved, apart from demonstrating that whoever cares about a given arch will help maintain it.
> All these usecases are being excluded from Rust because there is only one non-portable variant of the Rust compiler. It's simply a missed opportunity to make Rust more successful. I don't understand why Rust upstream isn't interested in that.
Rust upstream *is* very interested in porting Rust to more arches. But the plan is (AFAIK) essentially to upgrade supported arches to higher tiers, and bring new arches, using the main compiler. Efforts on other compilers are welcome, but the core team is currently focusing its non-infinite workforce on the main compiler. There are two important aspects of Rust culture that lean towards that decision: the ideas that the language, compiler and libs should evolve very progressively, and that "correct and good design" is more important than "first to market". A new GCC frontend faces strong obstacles to adoption in that context (see also the "it'll never catch up" issue above) .
Underlying all this is the question of "Who's at fault for libFOO not being available on archFOO? Who should do the work?" Is it Rust, because they should write a GCC frontend? Is it LLVM, because they should support archFOO? Is it archFOO, because they should be the ones maintaining the compiler backends? Is it the libFOO devs, because they should have known better than to use a technology that isn't universally available? There's no right answer; depending on your affiliations and abilities, you're likely to think that it's someone else's problem.
> I have invested efforts into making Rust more portable myself, so it's not that I am just complaining
I first read about your work in "This week in Rust" and am very grateful for the result. Sorry if my "please help fix that arch" felt out of place. It was a bit strange to read in LWN today that the person bringing Rust to all official Debian archs is also seemingly annoyed by Rust and by people using or packaging it, but I understand that it's a complicated subject.
Posted Nov 14, 2018 16:05 UTC (Wed)
by mathstuf (subscriber, #69389)
[Link] (35 responses)
This is no different than CPython being the only implementation of 3.x while PyPy and other projects only had 3.x-2 support or so. And now CPython is explicitly slowing down after the BDFL events (explicitly calling out letting other implementations catch up). I imagine that Rust will, at some point, slow down for other implementations, but it took CPython 20+ years to do that. And it took 10 years (Jython started in 2001) for alternate implementations to even get an initial release.
Posted Nov 15, 2018 6:25 UTC (Thu)
by edomaur (subscriber, #14520)
[Link] (34 responses)
https://blog.rust-lang.org/2018/07/27/what-is-rust-2018.html
Posted Nov 15, 2018 9:54 UTC (Thu)
by ceplm (subscriber, #41334)
[Link] (32 responses)
Posted Nov 15, 2018 18:23 UTC (Thu)
by peter-b (subscriber, #66996)
[Link] (1 responses)
Sorry, I don't really understand this argument. Does openSUSE build GCC without GCC?
Posted Nov 18, 2018 10:27 UTC (Sun)
by roblucid (guest, #48964)
[Link]
Posted Nov 15, 2018 20:14 UTC (Thu)
by josh (subscriber, #17465)
[Link] (29 responses)
Posted Nov 15, 2018 20:57 UTC (Thu)
by ceplm (subscriber, #41334)
[Link] (28 responses)
Posted Nov 15, 2018 21:14 UTC (Thu)
by josh (subscriber, #17465)
[Link] (27 responses)
Posted Nov 15, 2018 21:23 UTC (Thu)
by ceplm (subscriber, #41334)
[Link] (24 responses)
Posted Nov 15, 2018 21:45 UTC (Thu)
by ssokolow (guest, #94568)
[Link]
(mrustc is a Rust compiler, written in C++, which makes some assumptions about the correctness of the code, but is complete enough to compile certain older versions of rustc so that it can be proved free of a Trusting Trust attack.)
Posted Nov 15, 2018 21:46 UTC (Thu)
by josh (subscriber, #17465)
[Link] (20 responses)
You might check with the Debian folks on reproducible build strategies for self-hosting compilers.
Posted Nov 15, 2018 22:15 UTC (Thu)
by ceplm (subscriber, #41334)
[Link] (9 responses)
Posted Nov 15, 2018 23:01 UTC (Thu)
by nybble41 (subscriber, #55106)
[Link] (7 responses)
To build the minimal version still requires GCC, and to bootstrap GCC you need some form of C compiler (though not necessarily GCC itself).
I don't know of any distros with a requirement to be able to build every package "from scratch" with no outside dependencies. Frankly I'm not even sure how that would work; does every rebuild need to start with procuring a blank-slate system with a direct hardware interface for loading hand-generated machine code? In practice you always start out using host tools of one form or another. If you look at the bootstrap process for something like Linux From Scratch[1] it involves building the toolchain three times: First you build a cross-compiler using the host tools, then you use that cross-compiler on the host to build a native toolchain and other packages required for standalone operation (what LFS calls the "temporary system"), and finally use this temporary system to build the host-independent tools.
This is for a complete system, of course, but bootstrapping a compiler is similar. Use the host's Rust compiler to build a cross-compiler which doesn't rely on the host environment, then use that to build the native Rust compiler. The result of the second build should be deterministic even if the first stage isn't, due to variations in host compilers.
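The determinism claim in that last step is what reproducible-builds checks verify: two stage-2 binaries produced via different stage-1 (host) compilers should be byte-identical. A minimal sketch of such a comparison, with hypothetical file names standing in for real build artifacts:

```rust
use std::fs;

// If the second-stage build is deterministic, two stage-2 compiler
// binaries built via different stage-1 compilers must compare equal
// byte-for-byte. The paths here are hypothetical stand-ins.
fn artifacts_identical(path_a: &str, path_b: &str) -> std::io::Result<bool> {
    Ok(fs::read(path_a)? == fs::read(path_b)?)
}

fn main() -> std::io::Result<()> {
    // Simulate two bootstrap runs that produced the same output bytes.
    fs::write("stage2-run-a.bin", b"identical output")?;
    fs::write("stage2-run-b.bin", b"identical output")?;
    assert!(artifacts_identical("stage2-run-a.bin", "stage2-run-b.bin")?);
    println!("stage-2 artifacts match");
    Ok(())
}
```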
Posted Nov 16, 2018 1:11 UTC (Fri)
by pabs (subscriber, #43278)
[Link]
Posted Nov 16, 2018 15:39 UTC (Fri)
by moltonel (guest, #45207)
[Link] (5 responses)
I understand that many platforms get a C compiler before any other, but that doesn't change the fact that a pre-existing binary was used to compile that "initial" compiler. The language of the pre-existing binary shouldn't matter, as long as it can generate native code for your target platform. It's a mindset issue, not a technical issue.
For fun, have a look at https://github.com/nebulet/nebulet, a microkernel written in Rust that only executes wasm, and does so in ring 0. No C in sight ;)
Posted Nov 16, 2018 16:47 UTC (Fri)
by zlynx (guest, #2285)
[Link] (1 responses)
Posted Nov 23, 2018 10:03 UTC (Fri)
by ewen (subscriber, #4772)
[Link]
If you’re thinking of another one and do find the link I’d be curious to read about their approach.
Ewen
Posted Nov 16, 2018 17:11 UTC (Fri)
by madscientist (subscriber, #16861)
[Link] (2 responses)
You can find a C compiler from decades ago that is sufficient to build GCC (because GCC starts out by building a "stage 1" version of the GCC compiler which relies only on very portable C, then builds the rest of GCC, including all the different front-ends like C++, gfortran, etc., using that stage 1 compiler). You can cache some obsolete version of GCC, or get TinyCC or something like that, and use that for bootstrapping. It's super-easy and virtually never has to change.
Posted Nov 16, 2018 18:30 UTC (Fri)
by josh (subscriber, #17465)
[Link]
Rust doesn't systematically demand the latest prior release to build itself; it simply does not refrain from using features as they become available. As mentioned elsewhere in this thread, folks are working on creating a minimized series of rustc versions to successively bootstrap from the last OCaml version.
Posted Nov 20, 2018 5:57 UTC (Tue)
by mirabilos (subscriber, #84359)
[Link]
Posted Nov 16, 2018 14:09 UTC (Fri)
by glondu (subscriber, #70274)
[Link]
Not exactly.
In OCaml world, there is a portable bytecode for binaries (like Java) and the OCaml sources contain a binary of the full compiler (to bytecode) that is used to compile everything (including the compiler itself). The interpreter for that bytecode is written in C. Hence, "bootstrapping" a new architecture is just porting the interpreter to the new architecture, which is easy.
Additionally, there is a compiler to native code, but it is not strictly needed to enjoy OCaml on a new architecture. And it can be compiled using the bytecode compiler, so no bootstrapping issues.
Posted Nov 15, 2018 22:25 UTC (Thu)
by nix (subscriber, #2304)
[Link] (3 responses)
For Rust, this makes sense, because the language is evolving fast and this does mean that at least the compiler can dogfood experimental features added in compiler version N in version N+1, then mark them non-experimental once they know they work for at least that use case. :)
Posted Nov 16, 2018 0:01 UTC (Fri)
by josh (subscriber, #17465)
[Link]
Posted Nov 18, 2018 1:45 UTC (Sun)
by rgmoore (✭ supporter ✭, #75)
[Link] (1 responses)
Maybe it's just me, but it's hard to see a system like this as ready for building OS critical components. When the compiler demands bleeding edge features such that it can't be built with a version of itself that's even 3 months old, it is not a stable platform on which to build a system.
Posted Nov 18, 2018 23:51 UTC (Sun)
by nix (subscriber, #2304)
[Link]
Posted Nov 16, 2018 14:11 UTC (Fri)
by plugwash (subscriber, #29694)
[Link] (5 responses)
As the maintainer of Raspbian, I have run into several things that make Rust particularly painful for us. I suspect some of these would also be applicable to re-bootstrapping efforts.
1. The upgrade treadmill is unusually rapid. You need the previous release of rust and rust makes a new release about every 6 weeks.
Posted Nov 16, 2018 15:14 UTC (Fri)
by Ieee754 (guest, #128655)
[Link] (1 responses)
I have no idea what you expected from googling "rust fabricate", but if I just go to the documentation site of the toolchain: https://doc.rust-lang.org/rustc/index.html , there is a frickin book about how the Rust toolchain is built, how to run tests, debug issues, configure everything, the overall design of the compiler, the different modules, etc.
Posted Nov 17, 2018 2:02 UTC (Sat)
by plugwash (subscriber, #29694)
[Link]
I was hoping to find some documentation on what "fabricate" (which was hanging during the rustc build on one of our buildboxes, I suspect a bug in the arm32 compatibility layer of the arm64 kernel on said box) did and how to troubleshoot problems with it.
But instead I just found a load of results about metal fabrication :(
Someone did later tell me what fabricate was (apparently it's the rust-installer crate), though I haven't got around to trying to take a closer look. For now I have been working around the issue by doing our rust builds on a different system.
> there is a frickin book about how the Rust toolchain is built
Using the search tool in said book to search for fabricate doesn't seem to turn up any results.
Still thanks for the pointer.
Posted Nov 16, 2018 21:09 UTC (Fri)
by josh (subscriber, #17465)
[Link] (2 responses)
My understanding is that both Rust and Cargo only require the previous stable version at most, and should never have dependency loops between their latest versions.
> 4. After a failed build attempting to re-run the failed command manually using the command line printed by the build process (for example to run it in a debugger or under strace or something) produces completely unrelated errors. I guess there are some special environment variables or something but I can't seem to find any documentation of them if there are.
This is being fixed right now, and yes, there are environment variables. See https://github.com/rust-lang/cargo/pull/5683 for details.
Posted Nov 19, 2018 14:15 UTC (Mon)
by dgm (subscriber, #49227)
[Link] (1 responses)
Doesn't that imply that you (potentially) have to build each and every version in succession to get the latest one?
Posted Nov 20, 2018 11:57 UTC (Tue)
by nix (subscriber, #2304)
[Link]
Posted Nov 16, 2018 15:04 UTC (Fri)
by Ieee754 (guest, #128655)
[Link] (1 responses)
(what you are saying does not make sense for a toolchain)
Posted Nov 18, 2018 0:01 UTC (Sun)
by JoeBuck (subscriber, #2330)
[Link]
Posted Nov 20, 2018 5:56 UTC (Tue)
by mirabilos (subscriber, #84359)
[Link] (1 responses)
That sounds ridiculous.
Posted Nov 20, 2018 9:01 UTC (Tue)
by Jonno (subscriber, #49613)
[Link]
Of course not, Rust supports cross-compilation perfectly well!
Sure, that means that if you want to re-verify the chain of trust from scratch you are going to need two computers (one of each arch), but that is a comparatively minor inconvenience...
Posted Nov 15, 2018 20:13 UTC (Thu)
by josh (subscriber, #17465)
[Link]
Posted Nov 14, 2018 15:40 UTC (Wed)
by josh (subscriber, #17465)
[Link] (14 responses)
Seems like a sensible degree of caution, to think about the impact of a new backend on the rest of the ecosystem. If having an m68k backend makes mainstream architectures even slightly harder to maintain, or if it slows down development due to lower developer resources, it isn't worth it.
> Adding a new frontend means you get a second, independent implementation of the language which is always of advantage.
Not at all, no. A second, independent implementation is only interesting if it provides features the primary implementation *cannot*, and still keeps up with the primary implementation. I'd much rather see rustc itself add the necessary portability and provide the same functionality everywhere.
A new backend is a good idea, though it'd still be a monumental undertaking. (There are projects already in progress for new backends, though none for a GCC backend that I know of.) But a new frontend? No. Let's not.
As far as I can tell, the *only* justification for mrustc (an implementation which only works for code already verified as correct by rustc) is for diverse double-compilation; nobody is seriously proposing using mrustc for other purposes.
> All these usecases are being excluded from Rust because there is only one non-portable variant of the Rust compiler.
"non-portable" is hyperbole; it can already target many architectures, including 14 of Debian's architectures.
But no, all those usecases are only missing from Rust because nobody has done the work. Those architectures could have an LLVM target, and a Rust target, if a group of developers contributed them *and committed to ongoing maintenance*. Or if someone was sufficiently motivated to produce a GCC backend, they could.
> It's simply a missed opportunity to make Rust more successful. I don't understand why Rust upstream isn't interested in that.
I really don't think Rust upstream is bemoaning the loss of potential popularity to be had there. Such decisions are tradeoffs, and it's not unreasonable to consider what would be traded off in exchange.
However, speaking as a part of Rust upstream: patches and commitments to ongoing maintenance welcome.
Posted Nov 14, 2018 15:57 UTC (Wed)
by glaubitz (subscriber, #96452)
[Link] (6 responses)
> Not at all, no. A second, independent implementation is only interesting if it provides features the primary implementation *cannot*, and still keeps up with the primary implementation. I'd much rather see rustc itself add the necessary portability and provide the same functionality everywhere.
Well, then we have a different opinion on that. I think it's definitely valuable to have a second, independent implementation of the language. This would also help with standardization.
> As far as I can tell, the *only* justification for mrustc (an implementation which only works for code already verified as correct by rustc) is for diverse double-compilation; nobody is seriously proposing using mrustc for other purposes.
There is nothing, though, that speaks against using mrustc for other purposes. Most upstream projects which are using Rust are not using the latest features of the language all the time, so I guess many projects will build just fine with mrustc.
>> All these usecases are being excluded from Rust because there is only one non-portable variant of the Rust compiler.
Ignoring the fact that it's currently broken again on multiple Debian release architectures in Debian experimental.
Don't you think it's a bit of a burden for the Rust maintainers in Debian if they constantly have to fix these issues over and over again because Rust upstream doesn't provide an LTS variant of the language? And what's the point of having to constantly update and fix compiler regressions when many upstream projects aren't using the latest features anyway?
I sometimes have the impression that Rust is primarily developed for the language's own sake instead of providing a useful language for projects to adopt.
> But no, all those usecases are only missing from Rust because nobody has done the work. Those architectures could have an LLVM target, and a Rust target, if a group of developers contributed them *and committed to ongoing maintenance*. Or if someone was sufficiently motivated to produce a GCC backend, they could.
Well, I know of two very important upstream projects that have contemplated using Rust code in their projects but have discarded the idea because of the issues I mentioned.
>> It's simply a missed opportunity to make Rust more successful. I don't understand why Rust upstream isn't interested in that.
> I really don't think Rust upstream is bemoaning the loss of potential popularity to be had there. Such decisions are tradeoffs, and it's not unreasonable to consider what would be traded off in exchange.
Well, after my post to debian-devel-announce, I received exactly this feedback from two very important upstream projects. So, I think I really have a valid point.
> However, speaking as a part of Rust upstream: patches and commitments to ongoing maintenance welcome.
Yes, I'm aware of that and I have done that.
Posted Nov 14, 2018 16:18 UTC (Wed)
by mathstuf (subscriber, #69389)
[Link] (4 responses)
mrustc ignores lifetimes completely, assuming that they are correct. Granted, for released code, this is probably fine, but I certainly wouldn't want to apply a patch to a project without checking with rustc first.
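To make the point concrete, here is an illustrative example (not from the thread, and assuming a rustc toolchain is installed): a classic dangling borrow that rustc's borrow checker rejects at compile time, and that a compiler which ignores lifetimes would happily accept:

```shell
# Write a Rust program containing a borrow that outlives its referent.
cat > dangling.rs <<'EOF'
fn main() {
    let r;
    {
        let x = 5;
        r = &x;        // borrow of `x`...
    }                  // ...but `x` is dropped here
    println!("{}", r); // use of the dangling borrow
}
EOF

# rustc rejects this with error[E0597] ("`x` does not live long enough");
# an implementation that skips borrow checking would compile it and read
# a dead stack slot at runtime.
if rustc dangling.rs 2> errors.txt; then
    echo "unexpectedly compiled"
else
    echo "rejected by rustc"
fi
```

Released code has, by definition, already passed this check under rustc, which is the reasoning behind using mrustc only for release builds.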
Posted Nov 14, 2018 16:30 UTC (Wed)
by glaubitz (subscriber, #96452)
[Link] (3 responses)
Yes, I'm aware of that. But I wouldn't use mrustc for developing code either. Just for building release code.
Posted Nov 14, 2018 16:36 UTC (Wed)
by josh (subscriber, #17465)
[Link] (2 responses)
Also worth noting: mrustc currently has far less portability than Rust, and I don't see any signs of that changing.
Posted Nov 16, 2018 14:18 UTC (Fri)
by glaubitz (subscriber, #96452)
[Link] (1 responses)
Did you even try mrustc yourself or are you just guessing? It took me 5 minutes to make it build on m68k.
mrustc generates C code from Rust code. How could it **not** be portable?
Posted Nov 16, 2018 21:13 UTC (Fri)
by josh (subscriber, #17465)
[Link]
> - Supported Targets:
I based my comment on that. This isn't about the host platform of mrustc, it's about what the generated code targets, and using C as an intermediate language does not automatically make code portable.
Posted Nov 14, 2018 16:28 UTC (Wed)
by josh (subscriber, #17465)
[Link]
Only if it manages to keep up. And even then, I'd expect it to generate ecosystem issues of its own due to implementation-specific quirks and bugs.
The primary implementation is Open Source; the value of having multiple independent implementations seems low compared to the cost. (The value of having multiple backends, on the other hand, seems quite reasonable for the cost.)
> Most upstream projects which are using Rust are not using the latest features of the language all the time, so I guess many projects will build just fine with mrustc.
mrustc doesn't implement borrow checking. It'd be a *bad* idea to use it for that purpose. It primarily exists so that it's possible to independently compile rustc (which is written in Rust) and prove that there aren't any "trusting trust" attacks.
> Don't you think it's a bit of the burden for the Rust maintainers in Debian if they constantly have to fix these issues over and over again because Rust upstream doesn't provide an LTS variant of the language?
No. "LTS" or not, Rust will not stop having ongoing development. And while an "LTS variant" has been discussed, that will not affect this type of issue; LTS only really works for a specific subset of users who don't care about incorporating new upstream packages regularly, not the broader ecosystem. Architectures that want to be supported will still need to work with upstream to keep them working, and in particular to make sure they're actively tested.
But in any case, I think this comes back to the question raised elsewhere both in the debian-devel thread and in this thread: who is going to do the work? This isn't something where people interested in an architecture can offload work to others and expect it to get done.
It's not reasonable to look at an actively developed project like Rust and say "slow down, wait up". Rust *will* provide guidance, mentorship, and a welcoming environment where people can contribute. But it won't slow down or wait up.
> Well, after my post to debian-devel-announce, I received exactly this feedback from two very important upstream projects.
Namely?
> Yes, I'm aware of that and I have done that.
So you'd be interested and willing to help get the architectures you care about elevated to a higher support tier in Rust upstream, with the work and infrastructure that would entail?
Posted Nov 14, 2018 20:48 UTC (Wed)
by ballombe (subscriber, #9523)
[Link] (5 responses)
A) "If your favorite niche arch is missing out because it lacks LLVM, please help fix LLVM on that arch."
B) "The problem is that LLVM has stricter rules about which targets it accepts as a backend compared to GCC. The m68k backend was not accepted for the time being, as LLVM upstream feared the additional burden of another backend."
C) "Seems like a sensible degree of caution, to think about the impact of a new backend on the rest of the ecosystem. If having an m68k backend makes mainstream architectures even slightly harder to maintain, or if it slows down development due to lower developer resources, it isn't worth it."
No wonder people get mad.
Posted Nov 14, 2018 22:09 UTC (Wed)
by josh (subscriber, #17465)
[Link]
A statement like "LLVM upstream feared the additional burden of another backend" seems rather unlikely if there's a small army of developers for that architecture willing to make sure that the LLVM developers don't end up shouldering that additional burden. That doesn't seem at all unreasonable. That the LLVM developers had this concern suggests that there was reason to believe they'd be stuck doing work for that architecture themselves.
Speaking for Rust, we're *more* than willing to go out of our way to a certain degree to provide portability, but we're not willing to be shouldered with the burden of ongoing maintenance of an architecture without the help of some reliable developers for that architecture and some test infrastructure for it.
Posted Nov 14, 2018 23:28 UTC (Wed)
by roc (subscriber, #30627)
[Link] (3 responses)
Posted Nov 20, 2018 6:02 UTC (Tue)
by mirabilos (subscriber, #84359)
[Link] (2 responses)
Posted Nov 20, 2018 18:43 UTC (Tue)
by roc (subscriber, #30627)
[Link] (1 responses)
What is *not* reasonable is for the vanishingly small number of people who care about these niche architectures to push the costs of supporting those architectures onto others.
Posted Nov 20, 2018 22:07 UTC (Tue)
by rodgerd (guest, #58896)
[Link]
And, like it or not (and I don't, particularly), LLVM is becoming the compiler of choice for more and more software.
Posted Nov 15, 2018 11:18 UTC (Thu)
by eru (subscriber, #2753)
[Link]
Posted Nov 16, 2018 10:47 UTC (Fri)
by Ieee754 (guest, #128655)
[Link] (7 responses)
This is the biggest BS I've read all day.
Please, go and start building a Rust GCC frontend, so that you can show the world the value of re-inventing the wheel.
Posted Nov 16, 2018 14:22 UTC (Fri)
by glaubitz (subscriber, #96452)
[Link] (5 responses)
How is it bullshit? Multiple languages do that. Go, D, Java, Pascal, C/C++, Python, Fortran and so on all have multiple implementations and have standardized their languages. But, of course, all these languages' designers are wrong and only the Rust way is the proper way to do it.
> Please, go and start building a Rust GCC frontend, so that you can show the work the value of re-inventing the wheel.
Please don't make hyperbolic arguments just because I am trying to explain to you that the current way Rust is being developed is highly problematic for downstream distributions and hinders the adoption of Rust.
Ask your favorite enterprise distribution vendor what they think about a language that changes their compiler in incompatible ways every six weeks.
It's so annoying when the new kids on the block always think that the generations of engineers and programmers before them did it all wrong and they are the ones who ate wisdom with spoons.
Posted Nov 19, 2018 18:28 UTC (Mon)
by jccleaver (guest, #127418)
[Link]
Interestingly, even perl has gone this route. The perl5 binary remains the language specification for perl v5, but perl6 has an independent specification and has actually had several different implementations over the years.
Posted Nov 19, 2018 20:56 UTC (Mon)
by moltonel (guest, #45207)
[Link] (3 responses)
I know you were replying to an aggressive comment, but please don't spread the idea that the Rust community doesn't want independent implementations or claims that the current Rust way is the only proper one. There's a difference between saying "it would never get as good as the main implementation / it's not worth the effort right now / for the portability usecase it'd be better to port LLVM" (which all seem true to me) and refusing an alternate implementation when it comes up (which I would find silly). If anything, mrustc is a counter-example (though it's not clear whether the intended usecase is portability, or trusting trust, or standardisation, or...).
All the languages you mention arrived at multiple implems via different routes. C/C++/Fortran/Pascal are (were) simple enough that writing a compiler is (was) easy, they were often a new platform's first language, and they got half a century (!) to get there. Java's alternate implems endured legal hurdles despite the original implementor's "write once, run anywhere" hype. Python implems constantly lag behind the mainline, and many were abandoned because of this. D has been struggling to gain popularity for 16 years, and its GCC frontend looks like a desperate attempt to buck the trend. Go has a huge corporation developing and lobbying for it, which can easily absorb the cost of a GCC frontend (plus, like D, Go has been designed to be easy to implement).
It seems likely that Rust too will get good alternate implementation(s) someday, from mrustc or somewhere else. But these things take a lot of time (unless you're a huge corporation and are happy to cut corners in your language). And for now, it's just not a priority for Rust. The Rust community is very good at focusing on short- and long-term targets. Its RFC process is really good, and it is better defined than many "standardized" languages. When the pace of new features has slowed down and it is time for an alternate implem, it'll happen. We might have a few more years debating whether "the year of the need for an alternate rust implem" has arrived or not ;)
Posted Nov 20, 2018 9:15 UTC (Tue)
by renox (guest, #23785)
[Link] (2 responses)
You'd better be careful about what you write; this part of your comment is both very mean and very stupid.
Posted Nov 20, 2018 12:09 UTC (Tue)
by moltonel (guest, #45207)
[Link] (1 responses)
I actually really liked D when I first encountered it. I did tutorials and wrote toy programs, talked to my peers about D, kept an eye on development... And then nothing; it never picked up as I hoped it would. Use of D is still very anecdotal. It seems to me that the window of opportunity for D has closed. I'd be happy to be proved wrong.
Posted Nov 20, 2018 12:22 UTC (Tue)
by farnz (subscriber, #17727)
[Link]
From my PoV (having written production code in all the languages I'm mentioning), D is a competitor to C++11 - it's a major advance on C++03, and competitive with C++11. The trouble is that where C++ has continued to evolve in the form of C++14 and C++17, D seems to have become stuck; it's neither improving at the rate C++ is, nor is it exploring areas of the programming language design space that C++ does not.
Rust is advancing where D has not (indeed, some of the production D code I've worked on in the past has been rewritten in Rust) because the use of affine types moves an interesting set of bugs from runtime to compile time; it thus doesn't matter that C++20 will be a better C++ than Rust, because Rust isn't aiming to be a better C++; in contrast, D was a better C++03, comparable to C++11, and has fallen back since then into a niche where it's not as good as C++ for systems programming (the GC is effectively compulsory), not as good for quick hacks as Go or Python, and has the same division of bugs into runtime versus compile time as C++ (unlike Rust).
Posted Nov 16, 2018 16:35 UTC (Fri)
by apoelstra (subscriber, #75205)
[Link]
Perhaps it is reasonable for a language so young to be changing so rapidly -- as unreasonable as "every six weeks you have to add another compiler to your bootstrap chain" is, it does take a nontrivial amount of effort to achieve even this. Pre-1.0 there was no such policy, and literally nobody knows a series of compilers that will take you back to OCaml.
Remember that Rust is much bigger, while much younger, than any other widely-used language was at its age... but in some sense it's a victim of its own success -- Rust code interoperates well with C, compiles to just-as-fast assembler, and has an extremely expressive type system which is fun to write in and makes code much easier to read and reason about. This means that even fairly conservative systems-programming projects like librsvg are understandably tempted to migrate to it even while it is changing so rapidly. Naturally this means that people who care about trusting trust, long-term stability, and high-assurance software are somehow going to have to deal with the Rust toolchain's immature bootstrap story, and just as naturally, this is going to be a painful and frustrating process.
It is not helpful to anybody for Rust advocates to be loudly posting that the current situation is OK and that it is somehow the Debian maintainers who are old fuddy-duddies and should abandon their principles.
I have high hopes that through this process, projects like mrustc, cargo-vendor, etc. -- which are sorely needed to use Rust in the high-assurance high-availability contexts it's designed for -- will gain the respect and human resources they need to succeed in their missions. If I did not believe this, there is exactly zero chance I would be using Rust in production anywhere, and indeed less optimistic people than me are holding off on using Rust until this situation improves.
It is very disheartening to see vocal subsets of the Rust community denigrating these efforts.
Posted Nov 16, 2018 10:32 UTC (Fri)
by Ieee754 (guest, #128655)
[Link]
Rust does not need this, architectures/targets that are not supported by LLVM and want to use software written in Rust need this.
That's a pretty big difference.
GCC is designed to make it very hard to use as the backend of a programming language front-end.
The Rust pipeline is HIR->MIR->LLVM-IR->MachineCode. There have been prototypes of adding different backends, like Cranelift, to this pipeline, by generating something else instead of LLVM-IR, and then calling a different backend.
If Rust could generate GIMPLE and then call a GCC backend, a GCC backend wouldn't really be that hard to add. But GCC is designed so that there has to be a lot of back and forth between GIMPLE and the earlier stages of the pipeline. So platforms that depend on GCC for support have locked themselves away from the rest of the world.
It is much easier to add LLVM backends to support these platforms, plus minimal front-end support in Rust, than to contort Rust's compilation pipeline to support a project (GCC) that does everything possible to not be reusable by the outside world.
Posted Nov 14, 2018 7:37 UTC (Wed)
by hsivonen (subscriber, #91034)
[Link] (2 responses)
Also, how come https://www.debian.org/ports/ lists alpha, hppa and powerpc as discontinued but http://cdimage.debian.org/cdimage/ports/current/ still has them?
Posted Nov 14, 2018 10:33 UTC (Wed)
by murukesh (subscriber, #97031)
[Link] (1 responses)
Posted Nov 14, 2018 11:22 UTC (Wed)
by jrtc27 (subscriber, #107748)
[Link]
Posted Nov 14, 2018 9:50 UTC (Wed)
by glaubitz (subscriber, #96452)
[Link] (28 responses)
It took several months of discussion, a CTTE decision and a GR, to get Debian to switch to systemd as its default init system. And still, as I have recently learned on the debian-devel mailing list, packages which are missing a SysVInit script are considered to be buggy.
With Rust, on the other hand, most people in Debian suddenly don't care about portability anymore and just accept that Rust is breaking multiple architectures and kFreeBSD and the Hurd.
So, it seems that either priorities within Debian have changed or the portability argument against systemd was just an excuse back in 2014.
Posted Nov 14, 2018 10:35 UTC (Wed)
by jond (subscriber, #37669)
[Link] (14 responses)
Posted Nov 14, 2018 10:51 UTC (Wed)
by glaubitz (subscriber, #96452)
[Link] (13 responses)
I have just checked DAK (Debian Archive Kit) and this is the list of packages that would have to be dropped if librsvg is not buildable: http://paste.debian.net/1051670/
FWIW, Rust is not yet available on riscv64, so a lack of librsvg would mean a huge setback for this architecture.
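The reverse-dependency fallout can be estimated on any Debian system with apt-cache (a sketch; `librsvg2-2` is the current runtime binary package name in Debian, and the output will vary by release):

```shell
# List packages that directly depend on librsvg's runtime library.
apt-cache rdepends librsvg2-2

# Chase transitive reverse dependencies as well (the list gets long;
# these are the packages at risk if librsvg stops building somewhere).
apt-cache rdepends --recurse librsvg2-2 | sort -u | wc -l
```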
Posted Nov 14, 2018 14:53 UTC (Wed)
by jond (subscriber, #37669)
[Link] (12 responses)
Posted Nov 14, 2018 15:08 UTC (Wed)
by glaubitz (subscriber, #96452)
[Link] (11 responses)
Both systemd and librsvg have a huge impact on the system; I don't see why they should be treated differently. The number of affected packages, including transitive ones, is huge.
What's the point of the system booting if you can hardly use any applications except basic command-line utilities? You are not using the machine for its own sake, but in order to get work done. And if you can't build a large number of packages, the chances that the machine is of any use diminish dramatically.
> And I also said that this is obvious if you weren't so close to it; a point I think reinforced by your reply.
I don't really understand what you mean by "close" here. The fact is that the Debian community is applying double standards here, and I don't think that is OK.
Posted Nov 14, 2018 16:39 UTC (Wed)
by epa (subscriber, #39769)
[Link] (7 responses)
I don't want to offend anyone but it does seem that these are mostly "hobbyist platforms", and some have been that way for decades. Most of them tend to be smaller machines that wouldn't cope with much graphical stuff anyway. And I say this as someone who used to maintain a fleet of interestingly antique systems (down to 16MHz 386SX with ESDI hard disk) and spend some effort getting (then) current Linux versions with X11 to run on them. To get work done? Well, no.
Posted Nov 15, 2018 10:40 UTC (Thu)
by cpitrat (subscriber, #116459)
[Link] (6 responses)
I totally understand the desire to not leave this first window broken.
Posted Nov 15, 2018 11:26 UTC (Thu)
by epa (subscriber, #39769)
[Link]
I was responding to the particular point raised in the grandparent post: that if people run these systems they do so for "real work" and not just as an end in itself. As a question of fact, I think this is unlikely to be true in most cases.
I would say, though, that surely it is the decision of the Debian project as a whole to decide what status to give these architectures and how important it is for package updates not to break them. That is what the project has codified with the split into core architectures and ports. If the Debian developers as a whole believed that m68k support should be a requirement for all major libraries, they would have made m68k a core architecture. They chose not to.
Posted Nov 15, 2018 16:59 UTC (Thu)
by rodgerd (guest, #58896)
[Link] (2 responses)
So what, then, is your proposal?
Debian be limited to the lowest common denominator of whatever platforms a Debian port exists for? Locked into increasingly ancient and unpatched versions of libraries because they won't run on someone's Amiga 3000?
Or should Debian fork any package and continue maintaining it so it runs there, backporting features and security patches from an increasingly divergent upstream? Who's going to do that work? You? I suspect not.
Posted Nov 17, 2018 8:20 UTC (Sat)
by cpitrat (subscriber, #116459)
[Link] (1 responses)
I think the good solution is to make more (ideally all) architectures support Rust (Glaubitz did some good work in this direction), and this shouldn't hold back Debian in the meantime. I understand his frustration, though, considering his work kind of backfired on his motivations.
I hope this discussion will help Rust get supported on more architectures, whether by extending the list of LLVM backends, implementing a GCC frontend (hopefully sharing code with rustc), or any other means. Even though I don't use any of those architectures, I wouldn't want Rust or Debian, both of which I really like, to let down users because they are part of a minority.
Posted Nov 19, 2018 12:07 UTC (Mon)
by peter-b (subscriber, #66996)
[Link]
I agree. Who is going to do the work?
Posted Nov 18, 2018 2:55 UTC (Sun)
by rgmoore (✭ supporter ✭, #75)
[Link] (1 responses)
But that's the whole idea behind having a list of officially supported platforms. Those are the only ones that get full support and can block a release when a package doesn't work on them. Any architecture not on the officially supported list is effectively a hobbyist system. It's fine if it can make everything work, but it's up to the hobbyists working on it to keep up with the officially supported architectures, not for the officially supported architectures to slow down.
And it's not as if Debian (or any other distribution with multiple target architectures) makes these decisions arbitrarily. The decision about which architectures will be officially supported and which ones won't is driven by exactly the same thought process that drives LLVM to make those architectures tier 2. Systems being kept alive for fun by hobbyists simply don't have the same kind of developer strength that systems being actively developed by major software vendors have. If they had the kind of developer community to make sure they had a functioning LLVM backend, they probably would still be on the officially supported list.
Posted Nov 18, 2018 8:37 UTC (Sun)
by cpitrat (subscriber, #116459)
[Link]
Posted Nov 15, 2018 11:38 UTC (Thu)
by joncb (guest, #128491)
[Link] (2 responses)
Are you seriously suggesting that you see no possible difference in impact between the software used to bootstrap and manage the entirety of userspace and software that renders images?
> What's the point if the system boots, if you can't hardly use any applications except for basic command line utilities. You are not using the machine for its own
It's been roughly 15 years since I stopped using Debian because of a dependency hell worse than I'd ever experienced on Windows, so I may not be reading this correctly, but your list of packages doesn't seem to block the basic LAMP stack. Whatever your definition of "in order to get work done" is, I'd argue it's flawed if it doesn't include one of the things that made Linux the most ubiquitous server in the land.
Posted Nov 19, 2018 18:32 UTC (Mon)
by jccleaver (guest, #127418)
[Link] (1 responses)
What dependency hell was that? As long as you're using a repo-based dependency resolver and aren't trying to install things from different major releases of the OS (without recompiling) you really shouldn't be running into that. yum and apt-get and the like have been around for... well, I guess about 15 years now.
Have you been back recently? We're a long way from manually searching for .deb's or browsing rpmfind.net
Posted Nov 20, 2018 22:17 UTC (Tue)
by joncb (guest, #128491)
[Link]
Bear in mind that this was a long time ago, but my memory is that I was trying to get a somewhat more recent version of MySQL, as the stable version was really quite old. Investigating why showed that the problem was that MySQL had a hard dependency on Perl, which had a dependency loop that prevented it updating. (I'm eliding a lot of details because I just don't remember them... I vaguely recall the kernel was involved somewhere too.) Ultimately my options were :-
Honestly I probably should have culled that part of the comment because it serves no purpose really. I don't hold a grudge against Debian; I just found other distributions that (irrespective of that issue) suited me more. Shortly after that incident someone introduced me to Gentoo and, for me, the rest was history. I've not felt a need to go back, although these days I use Arch, CoreOS and Alpine more often than Gentoo.
Posted Nov 14, 2018 12:52 UTC (Wed)
by matthias (subscriber, #94967)
[Link]
Posted Nov 14, 2018 15:17 UTC (Wed)
by mathstuf (subscriber, #69389)
[Link] (3 responses)
Posted Nov 14, 2018 15:32 UTC (Wed)
by glaubitz (subscriber, #96452)
[Link] (2 responses)
I agree. However, the discussion on debian-devel was started with the background of the status quo where Rust has currently limited portability.
Don't get me wrong: I think that Rust is a great project and language, and I have even contributed some patches myself to help with the portability issues. I definitely have no problem with people re-writing their code in Rust. I just wish more Rust developers would take portability more seriously. It would really be great if we could use Rust everywhere that GCC works, so that no upstream project would have to worry about excluding certain users when re-writing its code in Rust.
Posted Nov 16, 2018 8:35 UTC (Fri)
by NickeZ (guest, #100097)
[Link]
If this really is what you want, your approach is clumsy at best. I'm afraid complaining like this will only make you enemies and leave developers even less interested in your niche. Instead, you could have written inspiring blog posts or given talks at meetups about how you fixed several archs in the compiler.
Posted Nov 16, 2018 12:51 UTC (Fri)
by hsivonen (subscriber, #91034)
[Link]
Posted Nov 14, 2018 16:59 UTC (Wed)
by jccleaver (guest, #127418)
[Link] (2 responses)
Those who were most opposed to systemd on philosophical grounds may have already left for Devuan, curtailed the development efforts, or simply no longer have the wherewithal to push back on more academic issues like this.
> With Rust, on the other hand, most people in Debian suddenly don't care about portability anymore and just accept that Rust is breaking multiple architectures and kFreeBSD and the Hurd.
> So, it seems that either priorities within Debian have changed or the portability argument against systemd was just an excuse back in 2014.
I'd strongly suggest it's the former. systemd's approval itself was a death knell for kFreeBSD -- it'll never again have the prominence it once had (something which I personally think is a shame and a loss for the entire Linux community).
Having won the systemd argument, there are entire segments of Linux development that are pushing heavily into a rejection of many of the lessons learned by computing projects over the last 30 years: that releases are a thing, semantic versioning is useful, and portability and standards help ensure everyone is playing fair in getting their ideas adopted.
I don't think those who were against systemd were using this as an excuse; I think if you asked them now, you'd be told that future movements like this were quite predictable.
Posted Nov 14, 2018 18:53 UTC (Wed)
by mpr22 (subscriber, #60784)
[Link]
semver is fine if you use 'x' for changes you think everyone else would use 'y' for, and 'y' for changes you think everyone else would use 'z' for, and leave 'z' permanently at a fixed value.
Otherwise, it's an accident waiting to happen.
Posted Nov 19, 2018 14:58 UTC (Mon)
by mstone_ (subscriber, #66309)
[Link]
Posted Nov 14, 2018 17:47 UTC (Wed)
by drag (guest, #31333)
[Link] (2 responses)
A lot of the lesser-known/used architectures are often very inaccessible to developers and have slow performance compared to more mainstream systems, and thus (at least it seems to me) supporting them would cause delays. It's not a problem those developers can solve, so it's not a surprise that these sorts of issues are not at the forefront of people's minds. There are so many other things to worry about that it's a lot to ask of people to make sure that those balls don't get dropped.
It sounds like burnout is really the cause of most of your frustration.
How automated are things in Debian?
Like, if somebody creates a package for experimental, do packages automatically get generated and built for all the architectures... or does it require somebody logging in, editing some files, and then triggering a process?
If somebody pushes a package into testing that breaks on a particular architecture is that process of installing/testing automatic? Do package maintainers get notified automatically? Do bug reports get filed automatically?
How accessible are 'minor' architectures to developers in Debian? Is there a farm of systems they could log in and test if they felt like it?
If architectures are not even accessible to devs then that is going to cause major headaches and delays in supporting them. If architectures fall behind and supporting them would cause delays for more 'mainstream' systems then that is going to be another major barrier. These are not really problems that can be solved through cultural changes and organizational discipline alone.
Like I said I don't have any good idea how much of this stuff is automated in Debian. But if any step of the way the routine building and testing of packages requires human intervention then that is something that can be fixed and make it a lot easier to mitigate these issues in the future.
As far as systemd goes... while there are very good technical reasons to debate its merits versus other approaches, a great deal of the drama and much of the volume of communication amounted to mostly just bikeshedding. Everybody thinks they are familiar with init systems, so everybody has an opinion and they don't hesitate to make those opinions known. This is much less true about LLVM support on less common architectures. When people are uncomfortable with their knowledge of a specific subject, they tend to avoid speaking about it.
Posted Nov 14, 2018 21:12 UTC (Wed)
by ballombe (subscriber, #9523)
[Link] (1 responses)
Yes, you have no idea. It works so automatically that people take portability for granted.
Maybe this is actually the problem.
[1] If you see one red cross and it is for a non-releasing platform, you ignore it; otherwise you start running around like a headless chicken.
Posted Nov 15, 2018 12:15 UTC (Thu)
by moltonel (guest, #45207)
[Link]
Posted Nov 15, 2018 14:55 UTC (Thu)
by rschroev (subscriber, #4164)
[Link] (1 responses)
Firstly, it's a different kind of debate.
Secondly, the importance. The choice of init system has implications for the whole system, on all systems. The choice of librsvg only has an impact on systems that use it. Perhaps that is a lot of systems, but demonstrably fewer. (Case study: at work we made software that runs aboard ships of a client of ours, collecting all kinds of data from sensors and sending it to a number of onshore servers. It runs on Debian, and I checked: no librsvg installed or needed. So no, I don't think librsvg is a core component as you claim.)
Posted Nov 16, 2018 2:57 UTC (Fri)
by sorpigal (guest, #36106)
[Link]
Posted Nov 15, 2018 1:28 UTC (Thu)
by Yui (guest, #118557)
[Link] (29 responses)
Posted Nov 15, 2018 2:57 UTC (Thu)
by rillian (subscriber, #11344)
[Link] (28 responses)
The rust compiler in Debian which compiles Debian source packages with rust code traces its history back to an upstream binary which was used to compile the first rustc.deb. So it has a different root of trust than the C compiler, or the Haskell compiler, etc. I believe there's a policy for this too. After all, Debian's gcc was originally compiled by some other compiler, way back at the beginning of the distro.
If you're not happy with trusting that particular upstream rustc, see the discussion of mrustc above. That, plus reproducible builds, would verify there have been no backend shenanigans. It's also in theory possible to reproduce the rustc bootstrap history starting with Debian's OCaml. I've heard of people trying parts of that, but not of anyone getting all the way through the process.
Posted Nov 15, 2018 20:28 UTC (Thu)
by Yui (guest, #118557)
[Link] (27 responses)
Mrustc has some promise, I guess. But it seems to support only very few architectures.
Posted Nov 16, 2018 16:18 UTC (Fri)
by moltonel (guest, #45207)
[Link] (25 responses)
Paranoia is good to some extent, but you can't afford to go back to the root ad-infinitum, blockchain-style.
Posted Nov 16, 2018 16:42 UTC (Fri)
by apoelstra (subscriber, #75205)
[Link] (1 responses)
As near as I can tell nobody has ever really taken this seriously for the Rust toolchain.
The argument "why don't you have the same fear about C" reveals a lot of ignorance of the fact that people **do** worry about this for C and have undertaken tremendous efforts to mitigate this concern, specifically for gcc, which is why people generally want to use gcc as the root of trust for other languages.
Posted Nov 17, 2018 0:09 UTC (Sat)
by comex (subscriber, #71521)
[Link]
> It's managed to build rustc from a source tarball, and use that rustc as stage0 for a full bootstrap pass. Even better, from my two full attempts, the resultant stage3 files have been binary identical to the same source archive built with the downloaded stage0.
https://www.reddit.com/r/rust/comments/7lu6di/mrustc_alte...
So I'd say there is someone who has taken this seriously for the Rust toolchain.
Posted Nov 17, 2018 2:53 UTC (Sat)
by Yui (guest, #118557)
[Link] (22 responses)
Of course C has similar concerns, but since C is a much more mature language and has multiple toolchains, there are means of verification available that Rust just doesn't have. And even if this weren't the case, there would still be the argument that the system should have only one compiler that was bootstrapped, instead of building it from source.
My confidence in the Rust community when it comes to security-related things is quite low. It's laughable that the recommended installation method is still `curl https://sh.rustup.rs -sSf | sh`, and the security issues with their cargo repository seem on par with npm.
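The usual hardening of a `curl | sh` install is to download first, verify against a checksum published out of band, and only then execute. An offline sketch (the installer file and its checksum are stand-ins; in practice both would be fetched, and the expected hash would come from a separate trusted channel, not computed from the download itself):

```shell
# Stand-in for a downloaded installer script (normally fetched with curl -O).
printf '#!/bin/sh\necho install ok\n' > rustup-init.sh

# Expected checksum, in reality published out of band by the project.
# (Computed locally here only to keep this sketch self-contained.)
expected=$(sha256sum rustup-init.sh | cut -d' ' -f1)

# Verify before executing; refuse to run anything that does not match.
actual=$(sha256sum rustup-init.sh | cut -d' ' -f1)
if [ "$actual" = "$expected" ]; then
    sh rustup-init.sh    # prints: install ok
else
    echo "checksum mismatch; refusing to run" >&2
fi
```

This still trusts whoever published the checksum, but it removes the "attacker controls the bytes you execute the instant the TLS connection is compromised" property of piping straight to a shell.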
Posted Nov 17, 2018 6:09 UTC (Sat)
by roc (subscriber, #30627)
[Link] (21 responses)
C and C++ avoid package repository trust issues by ... not having a package repository. That's not really an advantage.
Posted Nov 17, 2018 16:34 UTC (Sat)
by Yui (guest, #118557)
[Link] (20 responses)
It most certainly is. It means that people get their C/C++ libraries from the distribution's repos. The chain of trust here is much more robust than in user controlled repositories.
Posted Nov 18, 2018 4:10 UTC (Sun)
by roc (subscriber, #30627)
[Link] (19 responses)
But if your software is not packaged for the user's distro, it's a disaster. Your options are all bad:
In practice you have to use some mix of 1, 2 and 3. It's horrible. And notably, 2 and 3 negate any benefit from distro maintainer trust decisions.
Posted Nov 18, 2018 8:53 UTC (Sun)
by Yui (guest, #118557)
[Link] (10 responses)
Posted Nov 18, 2018 10:03 UTC (Sun)
by hsivonen (subscriber, #91034)
[Link] (2 responses)
To take a point of complaint relevant to the Rust-in-Debian topic: Remember how developers considered it offensive and wrong when Apple required apps to have been written in Objective-C? (This requirement has since been relaxed.) If Apple requiring Objective-C was bad, why should it be OK for a distro to block Rust?
(The OP notion that Rust is not portable and, therefore, reasonable to block is hyperbole. Rust has been ported to plenty of ISAs. The problem at hand is about architectures whose chip production ended years ago and that, therefore, cannot sustain a large enough developer interest to get new work done, like developing new ISA-specific code for new compilers. It's worth noting that the main ISA of concern here, m68k, was already considered for removal even from GCC: https://mobile.twitter.com/cr1901/status/915961077464051712 )
Posted Nov 18, 2018 14:57 UTC (Sun)
by Yui (guest, #118557)
[Link] (1 responses)
It's the best application distribution channel from both developer and user perspective. There isn't a more secure or convenient way of installing software.
>why should it be OK for a distro to block Rust?
No one is saying that a distro should ban a specific programming language. It would be beneficial for all if such system level components didn't add new dependencies, especially if those dependencies are only available for a very limited number of architectures.
Posted Nov 18, 2018 17:53 UTC (Sun)
by hsivonen (subscriber, #91034)
[Link]
For all practical purposes, the OP is complaining about an upstream package adopting a particular programming language (and the new version getting imported into Debian) despite it being supported on all Debian "release" platforms.
> especially if those dependencies are only available for a very limited number of architectures.
"very limited number of architectures" is not a fair characterization of the situation. Upstream lists Rust ports for 20 different architecture targets for glibc Linux. Even if you remove targets that are ISA supersets of other targets (i.e. collapse various 32-bit ARM flavors, collapse 32-bit x86 with and without SSE2) but still count 32-bit and 64-bit flavors separately, the count is still around 16, give or take one, depending on what exactly you consider a superset.
Just because a specific ISA that the OP cares about isn't supported does not mean that Rust isn't "portable" or is only available to a "very limited number of architectures".
Posted Nov 18, 2018 10:19 UTC (Sun)
by roc (subscriber, #30627)
[Link] (2 responses)
Posted Nov 18, 2018 15:00 UTC (Sun)
by Yui (guest, #118557)
[Link] (1 responses)
Posted Nov 18, 2018 21:13 UTC (Sun)
by roc (subscriber, #30627)
[Link]
An application might not be packageable for policy reasons, because the distro isn't accepting new packages or the application is closed-source.
For platforms that don't have library package management (i.e. everything except Linux distros), "get your application in the distribution repositories" doesn't help at all with library dependency management, so developers need a separate solution for Windows/Mac/iOS/Android (usually "vendor the libraries"). Once you've vendored the libraries you may as well use them on all platforms.
Compared to the problems of C/C++ dependency management, fixing whatever security issues cargo has is easy.
Posted Nov 18, 2018 10:51 UTC (Sun)
by mjg59 (subscriber, #23239)
[Link] (3 responses)
Posted Nov 18, 2018 14:59 UTC (Sun)
by Yui (guest, #118557)
[Link] (2 responses)
This is probably a reasonable starting point.
Posted Nov 18, 2018 16:31 UTC (Sun)
by mpr22 (subscriber, #60784)
[Link]
It seems to me that the point Matthew is making has eluded you. Debian stretch is the current Debian stable. New packages are basically never accepted into Debian stable. (It is possible to imagine some exotic scenarios where they might be, but I imagine them being along the lines of "upstream for package Q currently in Debian stable has fixed a critical security vulnerability by a method involving the use of package P not currently available in stable, and the Debian maintainers of Q do not have the labour resource available to develop their own fix".) If you care enough to ensure that your program will work with 2- to 4-year-old versions of its dynamic library dependencies, so that you don't have to drag in eleventy-one updated library versions that might actually break a stable system because the SONAME has been bumped, then once you've got it into unstable you can get it into the -backports repo for stable or possibly even oldstable - but that's officially not part of Debian.
Posted Nov 18, 2018 23:13 UTC (Sun)
by mjg59 (subscriber, #23239)
[Link]
Posted Nov 18, 2018 20:24 UTC (Sun)
by epa (subscriber, #39769)
[Link] (7 responses)
This also has the advantage that you can publish new versions in your repository, and provided they are correctly signed, the system’s normal package management tools will prompt to install the upgrade.
Posted Nov 18, 2018 21:22 UTC (Sun)
by roc (subscriber, #30627)
[Link] (4 responses)
Like vendoring, you have to do a bunch of ongoing work to import the libraries and track their evolution.
Unlike vendoring, you need an extra server-side repository that has to be replicated *per distro* and probably per major version. This has to be built, tested and maintained as all distros evolve. No thanks.
Posted Nov 18, 2018 21:22 UTC (Sun)
by roc (subscriber, #30627)
[Link]
Posted Nov 19, 2018 15:39 UTC (Mon)
by epa (subscriber, #39769)
[Link]
I mentioned this way because I think it's what Adobe did for packaging acroread. Though it seems they have now given up on Linux altogether.
Posted Nov 19, 2018 18:46 UTC (Mon)
by jccleaver (guest, #127418)
[Link] (1 responses)
That's... not very difficult. Plenty of vendors do exactly that, for any repos that they support. Yes, this is why Fedora's 6 month releases aren't supported, but if you're a professional, enterprise shop the ability to do this against stable targets is pretty trivial. If you're building one or more RPMs already for your own software, separating bundled libraries into their own RPMs too is not difficult, and is certainly a good practice.
Posted Nov 19, 2018 21:27 UTC (Mon)
by roc (subscriber, #30627)
[Link]
The Rust approach --- modern dependency management with a global library repository and static linking --- is far more appealing to developers than "set up your own package repository for every distro your users care about".
At some point it may be worth extending crates.io with labels for trusted library versions. Even then it will still be a far more efficient and practical system for managing dependencies than anything C/C++ have to offer.
Posted Nov 18, 2018 22:06 UTC (Sun)
by Yui (guest, #118557)
[Link]
Posted Nov 18, 2018 22:09 UTC (Sun)
by Yui (guest, #118557)
[Link]
Posted Nov 16, 2018 18:21 UTC (Fri)
by rillian (subscriber, #11344)
[Link]
Debian, Rust, and librsvg
"portable Rust". The process from Rust to C isn't very different. Now think about all the nasty things that platform-specific C code can use (pragmas, linking, ...).
The intention is to use C as the target language, instead of LLVM. This is actually quite doable, but the generated C will look more like assembly than a high-level program.
> Once you have the gcc backend, you will max out Rust's portability. So, it's definitely worth the effort.
A phase of the compiler build created a GCC subset that could then bootstrap the full compiler.
If you didn't have a working C compiler on the target, then cross-compiling was another possibility. GCC was used on Sun hardware to build code for telephone exchanges, for example, which were unsuitable for running a compiler natively.
2. There are interdependencies between rust versions and cargo versions that aren't always obvious until you try particular combinations.
3. There does not seem to be any way to disable multithreading in the build. Threading can easily lead to failures that only show their face on particular hardware/kernel configurations.
4. After a failed build, attempting to re-run the failed command manually using the command line printed by the build process (for example, to run it in a debugger or under strace) produces completely unrelated errors. I guess there are some special environment variables or something, but I can't seem to find any documentation for them.
5. It's virtually impossible to google anything about rust. If I google say "rust fabricate" most of the results have nothing to do with the language.
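On point 3, Cargo does expose configuration to serialize a build; whether that removes all thread-related flakiness in the rustc bootstrap itself is another matter. A sketch of the relevant `.cargo/config.toml` settings:

```toml
# .cargo/config.toml -- limit Cargo to one parallel build job
[build]
jobs = 1
# Also serialize rustc's internal code generation
rustflags = ["-C", "codegen-units=1"]
```

The same effect can be had per-invocation with `cargo build -j 1` and `RUSTFLAGS="-C codegen-units=1"`.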
> "non-portable" is hyperbole; it can already target many architectures, including 14 of Debian's architectures.
> - x86-64 linux
> - (incomplete) x86 windows
> - (incomplete) x86-64 windows
> How is it bullshit? Multiple languages do that. Go, D, Java, Pascal, C/C++, Python, Fortran and so on all have multiple implementations and have standardized their language. But, of course, all these languages designers are wrong and only the Rust way is the proper way to do it.
It's very likely that D's gcc frontend exists because there are some people who like both D and gcc, so they wanted to integrate D's frontend into gcc; there is nothing "desperate" about this.
As for 'how many will use a graphical desktop with SVG-using applications?', I don't know the answer, but it's probably non-zero as long as it works (and it of course becomes zero once it's broken).
Even if the number is small, you can expect librsvg to be only the first of its kind. Each one that follows will break a couple more applications, until the architecture has not much left to offer.
- people use these architectures just for fun: this shouldn't make them less important, in my opinion; fun is an important part of our lives and of Linux history
- the list of broken packages is limited: but still, it's highly unlikely that nobody will be impacted, and the list will only grow in the future
The fact that it's a hobbyist platform shouldn't really matter.
> transitive packages is huge.
> sake, but in order to get work done. But if you can't build a large number of packages, chances that the machine is any useful diminishes dramatically.
* Build and maintain my own installation of MySQL (ugh... no)
* Move to unstable (production... no)
* Live with the ancient version (this is what I ended up doing)
You upload your source package before going to bed, and when you wake up your package's page shows all these little green ticks for all the successful builds on all the platforms, even the unreleasing ones[1].
Regarding the init system, the options were:
- keep sysvinit
- switch to systemd
- switch to upstart (IIRC)
Regarding librsvg, the options are:
- keep up to date with upstream
- don't keep up to date with upstream
(unless I'm missing something)
Is it a good thing that upstream switched to Rust? Maybe, maybe not, but it's not like Debian has much to say in it: it's upstream's decision.
There's also this: the init system affects everyone. The SVG library doesn't. That will always make the former a bigger deal, warranting more discussion.
1) Depend on a compatible version of the library being installed. Very difficult if you want your software to run on non-latest distros (version skew), impossible if the library isn't packaged by distros you care about. For lots of users your software won't build or (if distributing binaries) won't install.
2) Tell the user to install a compatible version of the dependency. Terrible user experience.
3) "Vendor" the library sources into your own repository so that your build includes a good version of the library. A pain to do, and importing updates to the library is an ongoing maintenance burden.
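In the Rust ecosystem, option 3 is partly automated: `cargo vendor` copies every dependency's source into the project tree, and a small configuration snippet redirects Cargo to use those copies instead of fetching from crates.io. A sketch of that configuration (this is the snippet Cargo itself prints after vendoring):

```toml
# .cargo/config.toml -- after running `cargo vendor`, make builds use the
# vendored copies in ./vendor instead of the network.
[source.crates-io]
replace-with = "vendored-sources"

[source.vendored-sources]
directory = "vendor"
```

This is how Debian and other distros typically build Rust packages offline, though it does not remove the ongoing burden of tracking upstream updates to the vendored crates.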