Rust is the future of systems programming, C is the new Assembly (Packt)
Posted Aug 28, 2019 15:04 UTC (Wed)
by Archimedes (subscriber, #125143)
Parent article: Rust is the future of systems programming, C is the new Assembly (Packt)
- Bootstrapping Rust was a real pain, but it is at least getting a lot better.
- solely supporting static builds is IMHO a severe error when building systems (especially when security comes into play)
- cargo as a build system is "nice and simple" and can handle a lot, but it falls short when complex builds are needed, the kind that can be done in C with the automake/autoconf, cmake, ... hell ...
- license handling of packages drawn in by cargo (which is hard to control, at least during bootstrap) is awful ... They "recommend" the SPDX license field, but no one checks whether there are additional licenses or patent clauses in the package. If one is forced to check this, it is a lot of manual work to collect it all. And there is the additional problem that it is too easy to draw in a lot of libs, which creates a huge amount of work to extract, collect and check the licenses. Yes, that's not a Rust problem but a cargo infrastructure problem ... which makes it again kind of a Rust problem.
So I kind of like Rust as a language, but it seems I more and more hate the tools around it (cargo, static linking ...).
Posted Aug 28, 2019 15:51 UTC (Wed)
by moltonel (guest, #45207)
[Link] (17 responses)
I've never seen cargo be a problem for complex cases in practice. It can use a build script, call the local CC, be called from another build system, avoid network access... It's the expected "easy case is easy, complex case is doable". There's a working group making sure that cargo interacts well with other build systems, both as upstream and as downstream. And cargo remains optional: a quick duckduck finds cmake and meson examples for building Rust projects without cargo.
The cargo-bom subcommand can help with gathering license info. But it would indeed be nice if you could put something like `accept_licenses = ["GPL-3","Apache",...]` in your Cargo.toml.
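For illustration, here is a minimal sketch of the build-script route mentioned above (calling the local C compiler). It assumes the third-party `cc` crate listed under `[build-dependencies]` in Cargo.toml and a hypothetical C source at src/native.c:

```rust
// build.rs -- minimal sketch; assumes `cc = "1"` under [build-dependencies]
// in Cargo.toml and a hypothetical C file at src/native.c.
fn main() {
    // Invoke the locally installed C compiler and link the resulting
    // static archive into the crate being built.
    cc::Build::new()
        .file("src/native.c")
        .compile("native");

    // Re-run this build script only when the C source changes.
    println!("cargo:rerun-if-changed=src/native.c");
}
```

cargo runs the script before compiling the crate itself, so the "complex" step stays inside the normal `cargo build` invocation.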
Posted Aug 29, 2019 6:17 UTC (Thu)
by Archimedes (subscriber, #125143)
[Link] (16 responses)
cargo gets problematic when you, as a maintainer, have to even out the developers' cargo dependency trees from multiple groups: to update a dependency that has a CVE (or for some other reason), you have to update all the toml files, recompile all code from all groups, and hope that the other dependencies in each project can cope with the updated lib. So cargo makes it even harder to maintain a version tree of dependencies and update it for the group projects. (I am not saying that the other side, the linux/gnu .so ...hell..., is nice, but there more tools are available and the API is ...mostly... stable, so e.g. updating openssl and getting the benefit in all other libs/bins is simpler than in the Rust case.) So I should have been more precise in the beginning: the problem is less cargo being a make/cmake-style build tool, or the configuration and system-check tool, and more the idea that the build tool is also the dependency handler and library installation tool, with mostly no regard for an existing system. (Fortunately the --offline support is quite usable now.)
Yes, for that part you can get help; the problem is that as long as crates are allowed to pack additional license or patent files, the SPDX field does not help much, as one still has to check for that additional stuff in all crates. (There is the cargo-license package, which can help, but the above is not handled.) But of course there is also the problem that C devs mostly need only one up to a few libs (not counting the dependencies of those libs) for development, while quite simple cargo projects easily have 30+ dependencies ... So in the GNU/C world one has to check a few; in the Rust world ... a lot ...
Posted Aug 29, 2019 11:08 UTC (Thu)
by moltonel (guest, #45207)
[Link] (15 responses)
Yes, it means that those steps need to be repeated for every affected app, but they're easier than with most other languages, and it brings some advantages: because the builds are static, if AppFoo is currently buggy with the fixed LibVuln you can still update AppBar and AppBaz, so that's a security win. Thanks to static builds and LTO, there's a chance that AppFoo isn't affected by the CVE in LibVuln at all, another win. And because Rust programs can use multiple versions of a lib, you won't be stopped by a dependency catch-22.
It's possible to embed enough information in compiled binaries so that tools like cargo-audit can work on the binary rather than the repository. Right now it requires an opt-in crate and some code change, but eventually it'll be integrated enough that a distro maintainer can enable it easily on all packages.
Regarding deps, as you said the same problem exists with C and others, it's just on a smaller scale. At least thanks to the Rust ecosystem's standardization, there's better hope of automating the system than with (for example) C. You could for example extend cargo-bom to error out if a crate doesn't have an acceptable license, and bug upstream if their license isn't clearly stated. Harder to do with other ecosystems.
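For illustration, a minimal sketch of that kind of license gate. Rather than extending cargo-bom, it walks the output of `cargo metadata` via the third-party `cargo_metadata` crate; the allow-list is hypothetical:

```rust
// License allow-list check -- a sketch, not an existing cargo-bom feature.
// Assumes the third-party `cargo_metadata` crate as a dependency.
use cargo_metadata::MetadataCommand;

fn main() {
    // Hypothetical allow-list; adjust to local policy.
    let allowed = ["MIT", "Apache-2.0", "MIT OR Apache-2.0"];

    let metadata = MetadataCommand::new()
        .exec()
        .expect("failed to run `cargo metadata`");

    let mut ok = true;
    for pkg in &metadata.packages {
        match pkg.license.as_deref() {
            Some(l) if allowed.contains(&l) => {}
            Some(l) => {
                eprintln!("{} {}: license `{}` not in allow-list", pkg.name, pkg.version, l);
                ok = false;
            }
            None => {
                eprintln!("{} {}: no SPDX license field, check the crate by hand", pkg.name, pkg.version);
                ok = false;
            }
        }
    }
    std::process::exit(if ok { 0 } else { 1 });
}
```

As the parent comments note, this only covers the declared SPDX field; extra license or patent files shipped inside a crate still need manual review.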
Posted Aug 29, 2019 13:43 UTC (Thu)
by Archimedes (subscriber, #125143)
[Link] (14 responses)
The problem is that cargo helps for each individual app/project/program. But what about a system made up of a lot of these?
The system has to (and always should) dictate a base of "stuff" the apps can use (apps can also use additional stuff, as long as it does not change the system part).
In the linux/gnu world a .so is updated, and just by updating that one (as long as it is compatible) all users are *updated*.
In the Rust world a (system) lib is updated, the update has to be *forced* onto all apps/projects/programs, and then all of them have to be recompiled (including the deps). (So yes, the reason is basically that everything is statically linked.)
So if you have to maintain just one program, you benefit a lot from the Rust ecosystem. If you have to maintain a whole system (with lots of different Rust projects), it is rather the contrary.
It's ok that currently they are mostly focusing on developers/programs instead of systems.
I don't see it as a security win that only a few targets use the fixed version and one (or more) do not**. The point is that in the C world "someone" fixes the system and the others "only" have to test.
In the Rust world, all groups have to update and to test, so now one has to make sure that everyone is updating (and logically also check that it was actually done in the end).
I am working in an environment where security is really important, so fixing CVEs is ranked much higher than "normally".
** besides, that is also possible in a linux/gnu system; there is no problem maintaining libs in different versions (except the work)
Posted Aug 29, 2019 14:40 UTC (Thu)
by moltonel (guest, #45207)
[Link] (13 responses)
Again: yes, static linking interferes with the traditional way our systems are updated, turning an O(1) process into an O(n) one. There's a need to improve the process so that, for example, we can assert that we updated all n items, and probably so that it can be done by a single person. Testing will always remain O(n).
The system-update issue is thorny and non-trivial. But once it is solved (which should be easier with a Rust app than with a C app with vendored libs), a statically linked system can very well end up safer and more up to date than a dynamically linked one.
Posted Aug 29, 2019 14:52 UTC (Thu)
by pizza (subscriber, #46)
[Link] (12 responses)
If a vendor can't be bothered to track and issue security updates for a bundled C library their application depends on, they're not going to bother to track and issue updates for the Rust libraries their application depends on either.
Posted Aug 29, 2019 16:53 UTC (Thu)
by moltonel (guest, #45207)
[Link] (11 responses)
Yes and no. With cargo, the bar to check for security updates in your deps is much lower: just run `cargo audit` (possibly daily in your CI) and then run `cargo update && cargo install` if `audit` reports something. It's simple enough that it can be done by the distributor/packager/end-user if upstream is not responsive. The same cannot be said of a C/C++ project with vendored deps.
Posted Aug 30, 2019 3:25 UTC (Fri)
by pizza (subscriber, #46)
[Link] (10 responses)
You keep assuming the problem is technical in nature. It is not.
You assume the existence of a CI system. You implicitly assume the presence of an automated regression testing suite. You assume that releases don't have a weeks-to-months-long process. Heck, you assume there's even some process in place that acknowledges that your dependencies might actually need updating outside of multi-year platform update cycles.
That latter bit is the primary reason why "vendored C++ deps" don't get updated. The base platform layer is fixed, and cannot be changed. period.
Posted Aug 30, 2019 9:08 UTC (Fri)
by moltonel (guest, #45207)
[Link] (1 responses)
> The base platform layer is fixed, and cannot be changed. period.
That's a major argument in favor of vendoring (and if you're vendoring, you might as well link statically). Whether you're updating your vendored libs is another story. Note that Rust projects generally link statically but don't vendor, and you *can* update libs easily enough as a distributor if upstream doesn't, or as an end-user if your distributor doesn't.
Posted Aug 30, 2019 11:58 UTC (Fri)
by pizza (subscriber, #46)
[Link]
Isn't that a feature though?
None of that has any bearing on pathological development/business processes that treat post-release maintenance as a four-letter word, nor does it help anyone who doesn't have access to complete corresponding source code to everything.
Posted Aug 30, 2019 11:37 UTC (Fri)
by james (subscriber, #1325)
[Link] (7 responses)
If upstream has the problems you mention, the problems will be there for their own code. If cargo audit tells us (users/distributions) which projects aren't maintaining their software properly, we know which projects need help (or, ultimately, need to be forked or ignored).
Posted Aug 30, 2019 12:10 UTC (Fri)
by pizza (subscriber, #46)
[Link] (6 responses)
Even the most pathological F/OSS project is lightyears beyond the depressing suckage in the proprietary world.
Posted Aug 30, 2019 12:22 UTC (Fri)
by pizza (subscriber, #46)
[Link] (5 responses)
I wrote this, then I remembered Apache OpenOffice, which transcends to Dilbert-esque levels of suckage.
Posted Aug 30, 2019 23:36 UTC (Fri)
by Wol (subscriber, #4433)
[Link] (4 responses)
At which point, IBM sank it.
Cheers,
Wol
Posted Aug 30, 2019 23:57 UTC (Fri)
by pizza (subscriber, #46)
[Link] (3 responses)
Posted Sep 2, 2019 9:14 UTC (Mon)
by Wol (subscriber, #4433)
[Link] (2 responses)
Tell me, how many contributions written by non-Sun staff actually made it into Sun OpenOffice?
I was under the impression that go-oo forked off during Sun's tenure, because it was GPL in name only ...
Cheers,
Wol
Posted Sep 2, 2019 10:00 UTC (Mon)
by nix (subscriber, #2304)
[Link]
What saves this is exactly what you noted: the freedom to fork. If a project is both nonresponsive and has enough problems or enough frustrated developers, it'll fork -- and these days, it's so easy to set up the infrastructure for a fork that the number of frustrated developers is probably one and the definition of nonresponsive may well be "did not reply to patch yet because maintainer was away getting a coffee" :P
Posted Nov 11, 2020 15:08 UTC (Wed)
by ceplm (subscriber, #41334)
[Link]
There were some disagreements about the build system and about some components (Sun OOo was perfectly open source, but with a CLA that some contributors were not willing to sign, because Sun also sold OOo under a proprietary license). However, where the CLA was signed, there were not that many problems, and for example Red Hat (especially Caolán McNamara and others) provided patches to Sun quite regularly. That all broke down when Oracle took over Sun, and one of the steps towards LibreOffice was exactly when Red Hat gave up on OOo and joined go-oo to create the new foundation.
Posted Aug 28, 2019 16:02 UTC (Wed)
by Deleted user 129183 (guest, #129183)
[Link] (2 responses)
Actually, Rust does support both compilation and use of shared libraries. The problem is, not every package can be compiled as a shared library, which makes packaging applications for GNU/Linux distributions with policies discouraging static linking a major pain in the ass.
I bet that the reason for it is that the developers of Rust are all rich hipsters using a certain fruity operating system that doesn’t have a notion of the “system package manager”…
Posted Aug 28, 2019 16:55 UTC (Wed)
by wahern (subscriber, #37304)
[Link]
Static linking is hip. Static linking is easier from the perspective of Rust's compiler and toolchain. Finally, Rust was begotten at Mozilla for work on Firefox, where for myriad reasons static linking makes tremendous sense for *every* supported platform--not just fruity, but even the hobby ones.
Alas, the FOSS community seems destined to recapitulate the lessons learned that brought about dynamic linking in the very first place[1], but on the bright side maybe it'll be improved upon. For example, with CTF for improved type safety and automagic runtime FFI.
[1] No, not disk space. But, yes, also disk space, because size always matters, then and now.
Posted Aug 28, 2019 20:56 UTC (Wed)
by bluss (guest, #47454)
[Link]
Posted Aug 28, 2019 16:55 UTC (Wed)
by cesarb (subscriber, #6266)
[Link]
Rust does support dynamic builds; rustc itself is composed of several libraries which are usually dynamically linked together.
The reason it's not used often elsewhere is the lack of a stable ABI, which means everything must be compiled with the exact same version of rustc. There is some work being done on that, starting with name mangling (https://github.com/rust-lang/rfcs/pull/2603).
Posted Aug 29, 2019 20:47 UTC (Thu)
by nicoonoclaste (guest, #128640)
[Link] (1 responses)
> - Bootstrapping Rust was a real pain, but it is at least getting a lot better.
I assume you never had to bootstrap a C toolchain? At least rustc provides a mostly-straightforward and mostly-documented path for that.
> - solely supporting static builds is IMHO a severe error when building systems (especially when security comes into play)
Rust has supported dynamic linking for as long as I can recall? It's simply not the default.
> - cargo as a build system is "nice and simple" and can handle a lot, but it falls short when complex builds are needed, the kind that can be done in C with the automake/autoconf, cmake, ... hell ...
No comment. >_>'
> - license handling of packages drawn in by cargo (which is hard to control, at least during bootstrap) is awful ... They "recommend" the SPDX license field, but no one checks whether there are additional licenses or patent clauses in the package.
1. Having licensing metadata at all is an improvement over the C ecosystem.
2. From experience, the licensing metadata in crates is accurate in the very vast majority of cases.
3. Distributions are actually checking this metadata when packaging Rust software; for instance, Debian's Rust team (disclaimer: I am a member) definitely does so.
Posted Aug 30, 2019 19:05 UTC (Fri)
by mathstuf (subscriber, #69389)
[Link]
> It's simply not the default.
Well, it does, but you can only really use it for C ABI boundaries. A shared Rust library would also likely need to be recompiled/relinked for updates due to the lack of ABI stability (in the sense of "updating the compiler can change this") of Rust calls and structure layouts.
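For illustration, the C-ABI boundary described above looks roughly like this (a hypothetical crate; the Cargo.toml side is shown in the comment):

```rust
// src/lib.rs of a hypothetical crate built with
//
//     [lib]
//     crate-type = ["cdylib"]
//
// in Cargo.toml, which produces an ordinary .so/.dylib/.dll.

/// Exported with an unmangled name and the C calling convention, so the
/// symbol and its ABI stay stable across rustc versions -- unlike plain
/// Rust functions, whose mangling and layout the compiler may change.
#[no_mangle]
pub extern "C" fn add(a: i32, b: i32) -> i32 {
    a + b
}
```

Anything richer than C-compatible types has to be flattened down to the C ABI at such a boundary, which is exactly the limitation described above.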
Posted Aug 30, 2019 19:08 UTC (Fri)
by mathstuf (subscriber, #69389)
[Link]
What, really, do you need from a more complex build system? The majority of CMake's complexity comes from compiler and linker inconsistencies. Generated sources have solutions in Rust. Platform introspection can be done at the same time (and generate code for inclusion). Preprocessing to handle some intermediate language is fine too. I guess I'm missing what kinds of complexities are actually required that aren't possible with stock cargo.
One thing I know of is linker stuff (manifest files on Windows, custom ELF sections, library ID management on macOS, rpath/runpath shenanigans, linker scripts, packaging metadata, etc.). But, most of this can also be done as post-processing steps on the resulting artifacts anyways, so I don't think that is a fundamental lack in cargo itself.
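For illustration, a minimal sketch of the generated-sources case with stock cargo; the file name and constant are made up:

```rust
// build.rs -- sketch of generating a Rust source file at build time.
use std::{env, fs, path::Path};

fn main() {
    // OUT_DIR and TARGET are set by cargo for build scripts.
    let out_dir = env::var("OUT_DIR").expect("OUT_DIR is set by cargo");
    let target = env::var("TARGET").unwrap_or_default();

    // Trivial "platform introspection": emit the target triple as a constant.
    let dest = Path::new(&out_dir).join("generated.rs");
    fs::write(
        &dest,
        format!("pub const BUILD_TARGET: &str = {:?};\n", target),
    )
    .expect("failed to write generated.rs");
}
```

The crate then pulls the file in with `include!(concat!(env!("OUT_DIR"), "/generated.rs"));`.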