Herb Sutter on increasing safety in C++
Posted Mar 13, 2024 7:38 UTC (Wed) by marcH (subscriber, #57642)
In reply to: Herb Sutter on increasing safety in C++ by khim
Parent article: Herb Sutter on increasing safety in C++
> What else can be done if developers are not even willing to follow the rules of the language that they are using?
Simple, push-button automated enforcement. This can work only if you have a linter/sanitizer/analyzer that is simple to use and tells you whether a given piece of code is compliant or not.
This is not specific to C++, not specific to memory safety, and not even specific to C++ developers. It does not matter whether it's a compilation error, a warning, some other static analysis, or even a code-style rule: if you want something, you need some automated way to enforce it, and that's it. Otherwise you won't get it. Life really is that simple: you can only trust machines to consistently enforce tedious stuff, not humans. Humans are here to creatively solve problems, not to dot the i's and cross the t's. They just suck too much at it - in any language.
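To make that concrete, here is the kind of bug that routinely slips past human review but that a push-button tool catches mechanically (a minimal sketch; the tool and flags shown are just one common choice):

    // demo.cpp - a use-after-free that is easy to miss in review
    #include <cstdio>
    #include <vector>

    int first_plus_one(std::vector<int>& v) {
        int& r = v[0];    // reference into the vector's heap buffer
        v.push_back(42);  // may reallocate, invalidating r
        return r + 1;     // read through a possibly dangling reference
    }

    int main() {
        std::vector<int> v{7};
        std::printf("%d\n", first_plus_one(v));
    }

Built with a sanitizer enabled, e.g. "g++ -std=c++17 -g -fsanitize=address demo.cpp", the run aborts with a heap-use-after-free report - no human vigilance required.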
Same thing with bugs: if you want code to support some use case, then you need automated test coverage for it. Otherwise it does not work.
Posted Mar 13, 2024 9:41 UTC (Wed) by khim (subscriber, #9252)
> Simple, push-button automated enforcement.

Doesn't work, sorry. That's not even something worth discussing; it's just a [relatively] simple mathematical fact.

> This can work only if you have a linter/sanitizer/analyzer that is simple to use and tells you whether a given piece of code is compliant or not.

This may not work. Period. All these tools are fallible (by necessity: it's impossible to make them perfect) and may only work after the developer has accepted that it's the developer, not the tools, who provides safety. Tools just help.

> if you want something, you need some automated way to enforce it, and that's it.

Nope. Machines may find accidental violations. They are very good at that. But they can't enforce anything.

> Same thing with bugs: if you want code to support some use case, then you need automated test coverage for it.

That's still step one. Step zero is: ensure that all developers want to support that use case. And no, that's not a theoretical issue: I have seen it many times, where people installed elaborate schemes of verification with a bazillion tests… and then outsourced important components. And then the tests stopped working. It's just a very simple application of Goodhart's law. Dieselgate is the most famous example, but the same story is repeated again and again: if you say that passing tests is the goal, then people will write code that passes the test but doesn't actually work. And it's [almost] always possible. Tests protect against accidents, not against conscious action.
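A contrived sketch of that failure mode, with the test as the target rather than the goal (all names hypothetical):

    #include <cassert>

    // Spec: return the sum of a and b.
    // The "delivered" version only has to survive the one test below,
    // so it special-cases the tested input and is wrong everywhere else.
    int add(int a, int b) {
        if (a == 2 && b == 2) return 4;  // makes the test pass
        return 0;                        // does not actually work
    }

    int main() {
        assert(add(2, 2) == 4);  // green: the measure is satisfied,
        return 0;                // but the goal (correct addition) is not
    }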
Posted Mar 13, 2024 16:38 UTC (Wed) by marcH (subscriber, #57642)
Yes, of course they're not perfect; you need an "escape hatch" like the "unsafe" keyword in Rust, "shellcheck disable=", etc. But they get most of the work done - work no human can do consistently.
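clang-tidy offers the same kind of escape hatch for C++. A minimal sketch (the specific check named here is just one real example):

    #include <cstdint>

    std::uintptr_t as_int(void* p) {
        // The check still fires everywhere else in the code base; only
        // this audited line is waived, and the waiver itself is grep-able
        // and reviewable.
        // NOLINTNEXTLINE(cppcoreguidelines-pro-type-reinterpret-cast)
        return reinterpret_cast<std::uintptr_t>(p);
    }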
> and may only work after the developer has accepted that it's the developer, not the tools, who provides safety.
The only way developers "accept facts" is when the boss says: these tests must pass. If they don't like it then they look for another job/project.
> Tools just help.
Nothing happens _consistently_ without automated "help"; otherwise failures will always fall through the cracks of good intentions and code reviews, and with memory safety a single failure is enough. That's exactly what Stroustrup and Sutter are trying to fix right now with more rules and _automated_ checks to enforce them. Without enforcement, any new rule is just "pretty please write modern C++" hand-waving.
> It's just a very simple application of Goodhart's law. Dieselgate is the most famous example, but the same story is repeated again and again: if you say that passing tests is the goal, then people will write code that passes the test but doesn't actually work.
Now that's why you _also_ need management and code reviews: to catch (and fire) developers abusing "unsafe" and similar keywords, and to constantly monitor and adjust the rules to avoid Goodhart effects. Of course you still need humans to supervise the machines; you can't just let them run unsupervised - sorry if I gave the wrong impression. You're right that automated tests are not enough, but they are required as the very first step, because humans and code reviews are unpredictable and unscalable.
In the "Dieselgate", management _supported_ and _concealed_ the cheaters so it's a pretty bad example. No system can survive mass, deliberate fraud, that's pretty obvious.
Posted Mar 14, 2024 23:22 UTC (Thu) by khim (subscriber, #9252)
> In the "Dieselgate" case, management _supported_ and _concealed_ the cheaters, so it's a pretty bad example.

Why is it a bad example? Where would you put people like Victor Yodaiken, who still, even today, boldly proclaim that it's OK to write code with UB and that compilers should just magically stop breaking such code? Many of them are developers with years of experience; they are considered experts, and management supports them because they deliver. Just like the developers at Volkswagen delivered what they were supposed to deliver. It may be pretty obvious to you, but it's not obvious to the "we code for the hardware" crowd. They believe they have the right to massively lie to the compiler, to deceive it, and then still expect predictable output from it. They are far from rare and unique, and none of these proposals address their existence, even though without addressing that social issue all the technological marvels won't help you.
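For the record, here is the canonical example of that mindset colliding with the optimizer (a minimal sketch; exact behavior varies by compiler and flags):

    #include <climits>
    #include <cstdio>

    // "Coding for the hardware": on a two's-complement machine INT_MAX + 1
    // wraps to INT_MIN, so this looks like a sound overflow check. But
    // signed overflow is undefined behavior in C++, so the compiler is
    // allowed to assume x + 1 > x always holds and fold the test to "true"
    // (gcc and clang both do this at -O2).
    bool no_overflow_after_increment(int x) {
        return x + 1 > x;
    }

    int main() {
        // Often prints 1 under optimization, even though the "hardware"
        // answer for INT_MAX would be 0.
        std::printf("%d\n", no_overflow_after_increment(INT_MAX));
    }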
