Shrinking the kernel with a hammer
Posted Mar 9, 2018 16:43 UTC (Fri)
by fratti (guest, #105722)
In reply to: Shrinking the kernel with a hammer by nix
Parent article: Shrinking the kernel with a hammer
Firefox has also been getting worse now that they're using some Rust. I genuinely hope Rust gets its ABI stuff sorted so that we do not end up living in a world where everything is a >2 MiB static binary in need of recompilation with every dependency update.
Posted Mar 9, 2018 17:56 UTC (Fri)
by excors (subscriber, #95769)
[Link]
Posted Mar 10, 2018 0:48 UTC (Sat)
by pabs (subscriber, #43278)
[Link] (7 responses)
Is that considered a feature in the Rust community like it is with Go?
Posted Mar 10, 2018 6:42 UTC (Sat)
by fratti (guest, #105722)
[Link] (5 responses)
Posted Mar 10, 2018 8:13 UTC (Sat)
by jdub (guest, #27)
[Link] (3 responses)
There's nothing "unsafe" about dynamic linking, just the challenge of safety across C ABI boundaries (which exists for statically linked code as well) and the lack of a stable Rust ABI (which is pretty reasonable).
Posted Mar 10, 2018 9:40 UTC (Sat)
by fratti (guest, #105722)
[Link] (2 responses)
>practically all Rust Linux binaries dynamically link to glibc by default (and by design)
Indeed, though as far as I know they statically link the Rust standard library. Even though glibc is dynamically linked, oxipng, for example, still clocks in at 2.8M. Compare that to 86K for optipng.
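If you want a rough sense of how much of that is the statically linked standard library (as opposed to oxipng's own code and debug info), one quick check (a sketch; hello.rs is just a stand-in program) is to build the same trivial program with and without a dynamically linked std:

    // hello.rs -- stand-in program for a size comparison
    fn main() {
        println!("hello");
    }

    // Build both variants and compare:
    //   rustc -O hello.rs -o hello-static                     (std statically linked, the default)
    //   rustc -O -C prefer-dynamic hello.rs -o hello-dynamic  (std linked as a shared object)
    //   ls -l hello-static hello-dynamic
    // The prefer-dynamic binary is tiny, but it will only run against the
    // libstd shared object shipped with that exact compiler release, which
    // is precisely the missing-stable-ABI problem discussed above.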
Posted Mar 11, 2018 17:10 UTC (Sun)
by mathstuf (subscriber, #69389)
[Link]
There isn't in the ISO standard sense, but there are de facto ABIs. GCC and MSVC declared their ABIs long ago and have stuck to them. The Rust compiler does not commit to any given ABI between two releases. I suspect there will be one eventually, but it's not in the same place as C++ yet.
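In the meantime, the usual workaround for a stable boundary is to drop down to the C ABI, which the compiler does keep stable. A minimal sketch (the Point and point_length names are invented for illustration):

    // A struct with a guaranteed C-compatible layout, safe to pass across
    // the boundary regardless of which rustc release built each side.
    #[repr(C)]
    pub struct Point {
        pub x: f64,
        pub y: f64,
    }

    // Exported with an unmangled name and the C calling convention, so a
    // different compiler release (or a C caller) can link against it.
    #[no_mangle]
    pub extern "C" fn point_length(p: Point) -> f64 {
        (p.x * p.x + p.y * p.y).sqrt()
    }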
Posted Mar 12, 2018 10:48 UTC (Mon)
by iq-0 (subscriber, #36655)
[Link]
The reason you have to compile a lot of crates (Rust libraries) while the thing you're building only uses a few parts of a few crates directly has to do with how the coherence rules effectively cause many crates to depend on other crates, in order to offer possibly relevant implementations of traits for their types, or implementations of their traits for other crates' types.
To minimize the pain of these type/trait dependencies, and also to ease semver stability guarantees, a number of projects have extracted their basic types and/or traits into single-purpose (and thus relatively small) crates. This helps these common crates change rarely and keeps their compile times down.
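A small self-contained sketch of the rule being described (the trait and type names here are invented): a crate may implement a trait for a type only if it defines at least one of the two, so gluing a foreign trait onto a foreign type forces either a dependency between the two crates or a local newtype.

    // Fine: the trait is local, the type (Vec<i32>) is foreign.
    trait Pretty {
        fn pretty(&self) -> String;
    }

    impl Pretty for Vec<i32> {
        fn pretty(&self) -> String {
            format!("{:?}", self)
        }
    }

    // Not allowed: both Display and Vec are defined in std, not here, so the
    // orphan rule rejects it. The escape hatches are a newtype, or having the
    // crate that owns one side depend on the crate that owns the other:
    // impl std::fmt::Display for Vec<i32> { ... }

    struct Ints(Vec<i32>);   // newtype: the type is now local, so this is fine

    impl std::fmt::Display for Ints {
        fn fmt(&self, f: &mut std::fmt::Formatter) -> std::fmt::Result {
            write!(f, "{:?}", self.0)
        }
    }

    fn main() {
        println!("{}", vec![1, 2, 3].pretty());
        println!("{}", Ints(vec![1, 2, 3]));
    }

This is also why small trait-only crates show up in so many dependency graphs: anyone who wants to provide implementations for their own types has to depend on the crate that defines the trait.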
The fact that the crate dependency explosion often seems worse is due to different crates being able to have different (incompatible) dependencies on different versions of the same crate. Rust often handles these situations gracefully, where many other languages would have had painful version conflicts, at the cost of sitting through additional crate compilations.
To counter that, they only get built once per project (unless you switch compiler versions), and so they often end up reducing rebuild times. First-time builds can be pretty long, but you only incur that cost occasionally. You do want to keep this in mind when setting up CI, so that these compiled dependencies get cached.
> and the fast speed at which the Rust compiler moves.
Unless you really depend on the unstable (nightly) Rust version, the compiler is normally only updated every six weeks.
Even if you're using the unstable channel, you get to pick when to go through the bother of updating and thus recompiling everything. But I agree that that's hardly a consolation.
> Indeed, though as far as I know they statically link the Rust standard library. Even though glibc is dynamically linked, oxipng, for example, still clocks in at 2.8M. Compare that to 86K for optipng.
All Rust dependencies are statically linked by default, though LTO will keep 90% of the standard library and other dependencies out of the final binary. A very large part of the resulting binary is debugging information (Rust's multi-versioning, types, and module support have a big impact on symbol length) and unwind information (needed to perform graceful panics as opposed to plain aborts).
Both can be disabled and, with some effort, Rust binaries can be made reasonably small. But things like monomorphization, while generating more optimized code, will almost always result in more code being generated. For most applications this isn't a big problem: the larger binaries don't really have a performance impact, and they greatly improve error messages and debugging possibilities.
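To make the "both can be disabled" part concrete, here is a rough sketch of the usual knobs (Cargo release-profile settings plus a plain binutils strip afterwards; the exact savings depend on the program):

    [profile.release]
    lto = true          # let LTO drop unused parts of std and other crates
    opt-level = "z"     # optimise for size rather than speed
    panic = "abort"     # skip unwind tables; panics become plain aborts
    debug = false       # no debug info in the binary (the release default)
    codegen-units = 1   # slower builds, smaller and better-optimised code

    # afterwards, run strip on the resulting binary under target/release/
    # to drop whatever symbols remain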
Luckily the people working on Rust support in Debian are working on making Rust programs integrate better with that distribution's philosophy (dynamic linking, separate debug info, and a dedicated package per dependency), and I really hope that a number of their requirements and solutions will find their way back into the upstream Rust project.
Posted Mar 10, 2018 8:19 UTC (Sat)
by bof (subscriber, #110741)
[Link]
Recently having had openSUSE Tumbleweed's crond coredump on me until it was restarted, because of weird dynamic loading of PAM bits that had apparently been updated underneath it, makes me strongly sympathise with that sentiment...
Posted Mar 11, 2018 17:15 UTC (Sun)
by mathstuf (subscriber, #69389)
[Link]
[1] AFAICT, Go has much more of a "non-Go code doesn't exist" mentality than Rust folks have toward non-Rust code.