Who's afraid of a big bad optimizing compiler?
Posted Aug 16, 2019 13:24 UTC (Fri) by farnz (subscriber, #17727)
In reply to: Who's afraid of a big bad optimizing compiler? by excors
Parent article: Who's afraid of a big bad optimizing compiler?
In terms of evidence for unsafety not growing, there's Redox OS, which has so far been successful at encapsulating unsafe code (at least in terms of apparent safety - it'd take a formal audit to be confident that there are no erroneous safe wrappers). Similarly, Servo (and the parts backported to Firefox) show that it's also practical to keep unsafe to a small area when working in a big project like a web browser.
My practical experience of Rust is that I have yet to encounter a piece of code which has to be unsafe and cannot present a safe interface to the rest of the project; I have encountered several places where unsafe is used because it's easier to risk UB than to design a good interface. As an example, I've written several FFIs to internal interfaces where the C code has a "flags tell you which pointers in this structure are safe to access" design, and I've ended up creating a new data structure that nests Rust structs and uses Option to identify which sub-structs are valid right now.
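To make that concrete, here's a minimal sketch of the kind of wrapper I mean; the C layout, flag names, and field types are hypothetical, not lifted from any real interface:

```rust
use std::ffi::CStr;
use std::os::raw::{c_char, c_int};

// Hypothetical flag values; a real interface would define its own.
const HAS_NAME: c_int = 0x1;
const HAS_COUNT: c_int = 0x2;

// What the C side hands over: `flags` says which pointers may be read.
#[repr(C)]
struct RawRecord {
    flags: c_int,
    name: *const c_char,
    count: *const c_int,
}

// The safe Rust view: validity lives in the type, so callers can't
// forget to check a flag before touching a field.
struct Record {
    name: Option<String>,
    count: Option<i32>,
}

impl Record {
    /// Safety: `raw.flags` must accurately describe which pointers are
    /// valid, and the flagged pointers must point at live, properly
    /// aligned (and, for `name`, NUL-terminated) data for this call.
    unsafe fn from_raw(raw: &RawRecord) -> Record {
        Record {
            name: if raw.flags & HAS_NAME != 0 {
                Some(CStr::from_ptr(raw.name).to_string_lossy().into_owned())
            } else {
                None
            },
            count: if raw.flags & HAS_COUNT != 0 {
                Some(*raw.count as i32)
            } else {
                None
            },
        }
    }
}
```

Once from_raw has run, the flag checks are gone from the rest of the program: a caller simply can't dereference a pointer that wasn't valid, because the invalid case is an ordinary None.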
Personally, I see "unsafe" as a review marker; it tells you that in this block of code, the programmer is making use of features that could lead to UB if abused (the Rust compiler warns about unnecessary use of unsafe), and that you need to be confident that they are upholding the "no UB via Safe code" guarantee that the compiler expects. It's by no means a perfect system, and as Actix shows, it can be abused, but it does show you where you need to hold the code author to a higher standard than normal.
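As a contrived illustration of what that looks like at review time (not from any real codebase):

```rust
// "unsafe as a review marker": the unsafe block is small, and the
// SAFETY comment states the invariant a reviewer has to check.
fn first_word(s: &str) -> &str {
    let end = s.find(' ').unwrap_or(s.len());
    // SAFETY: `end` is either a byte index returned by `find` (which
    // always lands on a character boundary) or `s.len()`, so the range
    // is in bounds and splits at a valid UTF-8 boundary.
    unsafe { s.get_unchecked(..end) }
}
```

Everything outside the block is held to the normal standard; the block itself is where the author has to earn the reviewer's trust.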
FWIW, having spent the last 2 years working in Rust, I think it's a decent advance on C; I spend longer getting code to actually compile than I did with C11 or C++17, but it then works (both when tested and in production) more often than my C11 or C++17 did.
Posted Aug 16, 2019 14:06 UTC (Fri) by mathstuf (subscriber, #69389)
One note: the amount of *thinking* is decreased overall. Rust front-loads it onto the compilation phase (with a decent amount of hand-holding), while C likes to back-load it onto the debugging phase, a possibly unbounded timesink (where I, at times, feel like gdb is watching me like the Cheshire cat). The number of times our Rust stuff has fallen over (mostly "unresponsive"[1] or hook deserialization errors[2]) is fewer than a dozen. And even then, an error (not a crash!) in the deserializer is much easier to diagnose than the "someone threaded a `NULL` *here*‽" questions I find myself asking in C/C++.
[1] Someone pushed a branch as a backport that sent the robot off checking ~1000 commits with ~30 checks each. The thinking here was actually about how to detect that we should avoid doing so much checking (since the branch was obviously too new to be a backport). It was churning through it as best it could, but it was still taking ~20 minutes to do the work.
[2] GitLab (and GitHub) webhook docs are so anemic about the types they actually use that it's sometimes a surprise which fields can be null. I can't wait for GraphQL to become more widespread :) .
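As a sketch of why this matters (field names made up, not GitLab's actual payload schema): once a field is modelled as Option, a surprise null is absorbed; if it isn't, serde reports a typed error with the position of the offending value instead of crashing later:

```rust
use serde::Deserialize;

// Hypothetical hook payload; the field names are illustrative only.
#[derive(Debug, Deserialize)]
struct PushHook {
    user_name: String,
    // Turned out to be nullable in practice, even though the docs
    // never said so; Option absorbs the surprise.
    description: Option<String>,
}

fn main() {
    let body = r#"{"user_name": null, "description": "x"}"#;
    // A null in a non-Option field fails here, with the position of
    // the offending value in the message; an error, not a crash.
    match serde_json::from_str::<PushHook>(body) {
        Ok(hook) => println!("parsed: {:?}", hook),
        Err(e) => eprintln!("bad hook payload: {}", e),
    }
}
```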