Who's afraid of a big bad optimizing compiler?
Posted Jul 26, 2019 18:18 UTC (Fri) by andresfreund (subscriber, #69562)
In reply to: Who's afraid of a big bad optimizing compiler? by jerojasro
Parent article: Who's afraid of a big bad optimizing compiler?
I'd say that C11/C++11 took a large step in that direction by introducing a formalized memory model and built-in atomics. Before that, there really was no way to write correctly working (even if formally undefined) concurrent programs without relying on compiler implementation details.
It does, however, require you to actually use the relevant interfaces.
I don't quite see how you'd incrementally get to a language that doesn't have any of these issues without making it close to impossible to incrementally move applications toward that hypothetical version of C. I mean, there's basically no language that allows shared memory yet doesn't require escape hatches from its safety mechanisms to implement fast concurrent data structures (e.g. Rust needing to drop to unsafe for core pieces). And the languages that get closest require a different enough approach that it's hard to imagine C moving toward it.
That's not to say that C/C++ haven't progressed toward allowing one to at least opt into safety. The C11/C++11 memory model and the atomics APIs are a huge step, but they arrived at the very least ten years too late (while some of the formalisms were developed somewhat more recently, there ought at least to have been some progress before then). And there are plenty of other issues for which no proper facilities are provided (e.g. signed integer overflow handling, mentioned in nearby comments).
Posted Jul 29, 2019 16:44 UTC (Mon) by PaulMcKenney (✭ supporter ✭, #9624) [Link]
So there are great opportunities for innovations in the area of locating old concurrent C code in need of an upgrade! :-)