Who's afraid of a big bad optimizing compiler?
Posted Jul 26, 2019 17:18 UTC (Fri) by excors (subscriber, #95769)
In reply to: Who's afraid of a big bad optimizing compiler? by topimiettinen
Parent article: Who's afraid of a big bad optimizing compiler?
E.g. for any shared data structure, there's probably some single-threaded initialisation code that sets it up before it's exposed to other threads. If the structure were declared with volatile/atomic fields, the compiler might add barriers to that initialisation code that the programmer knows are unnecessary. So the programmer might choose to use READ_ONCE/WRITE_ONCE explicitly instead, improving performance but increasing the risk of missing a case where they're actually required. Which way is "better" depends on how you weigh performance against correctness.
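To make that concrete, here is a minimal kernel-style sketch, not taken from the comment itself (struct foo, global_foo and setup_foo are made-up names, and a reader would pair the publication with smp_load_acquire() or similar): the fields are plain ints rather than volatile/atomic, initialisation uses ordinary stores, and only the publication step carries any ordering.

    #include <asm/barrier.h>    /* smp_store_release() */

    struct foo {
            int a;
            int b;
    };

    static struct foo *global_foo;  /* dereferenced by other threads */

    void setup_foo(struct foo *f)
    {
            /*
             * Single-threaded initialisation: no other thread can see
             * *f yet, so plain stores suffice and the compiler emits
             * no barriers here.
             */
            f->a = 1;
            f->b = 2;

            /*
             * Publication is the only point that needs ordering: the
             * release store ensures the initialised fields become
             * visible to other threads no later than the pointer
             * itself does.
             */
            smp_store_release(&global_foo, f);
    }

With volatile/atomic fields, every store in setup_foo() could carry ordering overhead; with plain fields, the cost is paid only once, at publication.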
Posted Jul 26, 2019 17:57 UTC (Fri) by andresfreund (subscriber, #69562) [Link]
More important than initialization are probably all the accesses inside sections of code holding a lock, where you likely do not want unnecessary repeat loads from memory just because a variable is referenced twice. Because acquiring a spinlock/mutex/whatever commonly acts as a barrier of some sort (depending on the implementation, somewhere between a full memory barrier, an acquire/release barrier, a compiler barrier and nothing), it is *often* unnecessary to use READ_ONCE / WRITE_ONCE from within the critical section. But if there are other accesses to the same data working without that lock, or if you have a more fine-grained locking scheme (say exclusive, exclusive-write, read), it may still be necessary to use them even within a locked section; the sketch below tries to show both cases.
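A hedged sketch of both cases, with hypothetical names (struct stats, stats_bump, stats_total): 'hits' is only ever touched with the lock held, so plain accesses are fine even though it is referenced twice; 'total' also has a lockless reader, so READ_ONCE()/WRITE_ONCE() are still used for it even inside the critical section.

    #include <linux/spinlock.h>
    #include <linux/compiler.h> /* READ_ONCE() / WRITE_ONCE() */

    struct stats {
            spinlock_t lock;
            unsigned long hits;     /* only accessed under lock */
            unsigned long total;    /* also read without the lock */
    };

    void stats_bump(struct stats *s)
    {
            spin_lock(&s->lock);
            /*
             * 'hits' is lock-protected on every path, so plain
             * accesses are fine: the lock operations already act as
             * compiler barriers, and nothing else may write it here.
             */
            s->hits++;
            /*
             * 'total' has a lockless reader, so even with the lock
             * held WRITE_ONCE() is used to rule out store tearing
             * and similar compiler surprises.
             */
            WRITE_ONCE(s->total, s->total + 1);
            spin_unlock(&s->lock);
    }

    unsigned long stats_total(struct stats *s)
    {
            /* Lockless fast path: READ_ONCE() is required here. */
            return READ_ONCE(s->total);
    }

Had 'hits' been declared volatile instead, the compiler would have been forced to reload it on every reference inside the critical section, for no benefit.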
There's also the fact that volatile on the data structures themselves often ends up requiring annoying casts to get rid of it, which in turn makes the code more fragile (unless you use some smart macro that keeps the type the same except for removing the volatile).
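For example, a hedged sketch with invented names (struct dev_state, log_value, report): once a field is volatile, handing its address to an ordinary function needs a cast, and that cast is exactly where the fragility creeps in.

    struct dev_state {
            volatile unsigned long flags;
    };

    void log_value(const unsigned long *p);    /* defined elsewhere */

    void report(struct dev_state *s)
    {
            /*
             * &s->flags has type 'volatile unsigned long *', so the
             * call won't compile cleanly without casting the
             * qualifier away:
             */
            log_value((const unsigned long *)&s->flags);
            /*
             * The cast is fragile: if the type of 'flags' later
             * changes, this line keeps compiling and silently hides
             * the mismatch; that is the problem a "keep the type,
             * drop the volatile" helper macro would avoid.
             */
    }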