LWN: Comments on "Losing the magic" https://lwn.net/Articles/915163/ This is a special feed containing comments posted to the individual LWN article titled "Losing the magic". en-us Fri, 05 Sep 2025 23:26:24 +0000 Fri, 05 Sep 2025 23:26:24 +0000 https://www.rssboard.org/rss-specification lwn@lwn.net Losing the magic https://lwn.net/Articles/918942/ https://lwn.net/Articles/918942/ geert <pre> struct foo { ... struct bar embedded; ... }; </pre> If you have a pointer to the "embedded" member of type "struct bar", you can convert it to a pointer to the containing "struct foo" using the <a href="https://elixir.bootlin.com/linux/v6.1-rc7/source/include/linux/container_of.h">container_of()</a> macro. Tue, 03 Jan 2023 09:56:43 +0000 Losing the magic https://lwn.net/Articles/918894/ https://lwn.net/Articles/918894/ tabberson <div class="FormattedComment"> Can you please explain what you mean by "struct embedded"?<br> </div> Mon, 02 Jan 2023 18:31:05 +0000 Losing the magic https://lwn.net/Articles/917921/ https://lwn.net/Articles/917921/ farnz <p>It's more that you can have an external observer outside the abstract machine, but able to understand abstract machine pointers; I can, in theory, store a pointer from malloc in a way that allows the external observer to reach into the abstract machine and read the malloc'd block. I can also have the external observer be looking not at the addresses, but at the pattern of data written into the block (just as in hardware, it's not unknown to have chips only connected to the address bus, and to rely on the pattern of address accesses to determine what to do). <p>The compiler is not allowed to make assumptions about what the external environment can, or cannot, see, and thus has to assume that any write to a volatile is visible in an interesting fashion. Thu, 15 Dec 2022 10:47:00 +0000 Losing the magic https://lwn.net/Articles/917871/ https://lwn.net/Articles/917871/ mathstuf <div class="FormattedComment"> Can the C abstract machine really say that memory it obtains through `malloc` has some other magical property? Wouldn't that require you to get "lucky" with what `malloc` gives you in the first place to have that address space "mean something" to some other part of the system?<br> <p> Maybe the kernel gets away with it by "hiding" behind non-standard allocation APIs…<br> </div> Wed, 14 Dec 2022 23:25:47 +0000 Losing the magic https://lwn.net/Articles/917863/ https://lwn.net/Articles/917863/ farnz <p>If I'm reading the standard correctly, the compiler has to output the stores, because it is possible that the program has shared that memory with an external entity using mechanisms outside the scope of the standard. What the implementation does <em>after</em> the memory is freed is not specified (although the implementation is allowed to assume that the memory is no longer shared with an external entity at this point), and in theory a sufficiently malicious implementation could undo those final stores after you called free, but before the memory is reused. <p>In practice, I don't think this is a significant concern for tricks intended to help with debugging. It is for security-oriented code, but that's not the case here. Wed, 14 Dec 2022 19:31:55 +0000 Losing the magic https://lwn.net/Articles/917860/ https://lwn.net/Articles/917860/ excors <div class="FormattedComment"> <span class="QuotedText">&gt; Even then, couldn't the compiler just set it back after the volatile write, but before the memory is freed? Seeing as how it's unobservable and all. 
Perhaps it decides to use that "dead" memory as scratch space for some other operation.</span><br> <p> It probably could, but I don't think it's particularly fruitful to consider what the compiler 'could' do, because the goal of the magic numbers here is to detect use-after-free bugs, i.e. we're interested in the practical behaviour of a situation that the standard says is undefined behaviour. We're outside the scope of the standard, so all we can do is look at what GCC/Clang actually will do.<br> <p> If there is no barrier or volatile, and some optimisations are turned on, they demonstrably will delete the write-before-free. With barrier or volatile, it appears (in my basic testing) they don't delete it, so the code will behave as intended - that doesn't prove they'll never delete it, but I can't immediately find any examples where that trick fails, and intuitively I think it'd be very surprising if it didn't work, so I'd be happy to make that assumption until shown a counterexample.<br> <p> (The same issue comes up when trying to zero a sensitive buffer before releasing it, to reduce the risk of leaking its data when some other code has a memory-safety bug - you need to be very careful that the compiler doesn't remove all the zeroing code, and you can't look to the C standard for an answer.)<br> </div> Wed, 14 Dec 2022 18:33:39 +0000 Losing the magic https://lwn.net/Articles/917858/ https://lwn.net/Articles/917858/ mathstuf <div class="FormattedComment"> It means that the value being represented is not trackable in the C abstract machine and therefore no assumptions can be made about it. Because no assumptions can be made, optimizers are hard-pressed to do much of anything about it because the "as-if" rule is likely impossible to track accurately.<br> <p> However, given that this is trivially detectable as about-to-be-freed memory, I don't know whether the rules that exist around "volatile values living in C-maintained memory" might allow even this to still be seen as a dead store and unobservable via UAF == UB.<br> </div> Wed, 14 Dec 2022 17:10:37 +0000 Losing the magic https://lwn.net/Articles/917857/ https://lwn.net/Articles/917857/ adobriyan <div class="FormattedComment"> I don't think so. "volatile" means "load/store instruction must be somewhere in the instruction stream", which is what's needed.<br> </div> Wed, 14 Dec 2022 16:47:13 +0000 Losing the magic https://lwn.net/Articles/917775/ https://lwn.net/Articles/917775/ nybble41 <div class="FormattedComment"> Even then, couldn't the compiler just set it back after the volatile write, but before the memory is freed? Seeing as how it's unobservable and all. Perhaps it decides to use that "dead" memory as scratch space for some other operation.<br> </div> Wed, 14 Dec 2022 02:46:50 +0000 Losing the magic https://lwn.net/Articles/917759/ https://lwn.net/Articles/917759/ adobriyan <div class="FormattedComment"> This is the use case for "volatile": *(volatile int *)&amp;fs-&gt;magic = 0;<br> </div> Tue, 13 Dec 2022 14:47:39 +0000 Losing the magic https://lwn.net/Articles/917755/ https://lwn.net/Articles/917755/ excors <div class="FormattedComment"> <span class="QuotedText">&gt; I ask because some day, a very smart compiler might see that dead write of `fs-&gt;magic = 0;` given the immediate free afterwards and optimize it out as UB to observe.</span><br> <p> That day was at least 8 years ago.
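<p> (Aside — a minimal sketch of the write-before-free pattern under discussion, with illustrative struct and function names rather than the real e2fsprogs ones:)<br> <pre>
#include &lt;stdlib.h&gt;

struct filsys {
	long magic;
	/* ... */
};

void filsys_free(struct filsys *fs)
{
	fs-&gt;magic = 0;	/* defuse the magic number to catch use-after-free... */
	free(fs);	/* ...but to the optimiser that store is dead */
}
</pre>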
GCC 4.9 with -O1 will optimise away the writes, defeating this attempt at memory protection, because ext2fs_free_mem is an inline function so the compiler knows the object is passed to free() and can no longer be observed. See e.g. <a href="https://godbolt.org/z/nWYEa34a6">https://godbolt.org/z/nWYEa34a6</a><br> <p> I guess the cheapest way to prevent that is to insert a compiler barrier (`asm volatile ("" ::: "memory")`) just after writing to fs-&gt;magic, to prevent the compiler making assumptions about observability of memory.<br> </div> Tue, 13 Dec 2022 13:08:33 +0000 Losing the magic https://lwn.net/Articles/917753/ https://lwn.net/Articles/917753/ mathstuf <div class="FormattedComment"> I run my userspace under `MALLOC_CHECK_=3` and `MALLOC_PERTURB_=…` (updated occasionally by a user timer unit) to catch things like this. Is some kind of "memset-on-kfree" mechanism not suitable for debugging the entire kernel for use-after-free while also being far less heavy than KMSAN?<br> <p> I ask because some day, a very smart compiler might see that dead write of `fs-&gt;magic = 0;` given the immediate free afterwards and optimize it out as UB to observe. Additionally, while it also protects against UAF in ext2 code, non-ext2 code that somehow gets its hands on the pointer and doesn't have the magic-checking logic is just as dead (I have no gauge on how "likely" this is in the design's use of pointers).<br> </div> Tue, 13 Dec 2022 12:11:40 +0000 Losing the magic https://lwn.net/Articles/917722/ https://lwn.net/Articles/917722/ tytso <div class="FormattedComment"> And by the way.... the com_err library is not just used by e2fsprogs. It's also used by Kerberos, as well as a number of other projects that were developed at MIT's Project Athena[1] (including Zephyr[2], Moira[3], Hesiod[4], Discuss[5], etc.) <br> <p> [1] <a href="http://web.mit.edu/saltzer/www/publications/atp.html">http://web.mit.edu/saltzer/www/publications/atp.html</a><br> [2] <a href="http://web.mit.edu/saltzer/www/publications/athenaplan/e.4.1.pdf">http://web.mit.edu/saltzer/www/publications/athenaplan/e....</a><br> [3] <a href="http://web.mit.edu/saltzer/www/publications/athenaplan/e.1.pdf">http://web.mit.edu/saltzer/www/publications/athenaplan/e....</a><br> [4] <a href="http://web.mit.edu/saltzer/www/publications/athenaplan/e.2.3.pdf">http://web.mit.edu/saltzer/www/publications/athenaplan/e....</a><br> [5] <a href="http://www.mit.edu/afs/sipb/project/www/discuss/discuss.html">http://www.mit.edu/afs/sipb/project/www/discuss/discuss.html</a><br> </div> Tue, 13 Dec 2022 06:27:05 +0000 Losing the magic https://lwn.net/Articles/917723/ https://lwn.net/Articles/917723/ Fowl <div class="FormattedComment"> Is it still 'magic' if it's a vtable pointer? ;p<br> </div> Tue, 13 Dec 2022 06:19:45 +0000 Losing the magic https://lwn.net/Articles/917721/ https://lwn.net/Articles/917721/ tytso <p>The use of magic numbers is something that I learned from Multics. One advantage of structure magic numbers is that it also provides protection against use-after-free bugs, since you can zero the magic number before you free the structure; and even if you don't, when it gets reused, if everyone uses the magic number scheme where the first four bytes contain a magic number, then it becomes a super-cheap defense against a certain class of bugs without needing to rely on things like KMSAN, which (a) is super-heavyweight and so won't be used on production kernels, and (b) didn't exist in the early days of Linux. </p> <p>Like everything, it's a trade-off.
Yes, there is overhead associated with magic numbers. But it's not a lot of overhead (and it's certainly cheaper than KMSAN!), and the ethos of "trying to eliminate an entire set of bugs", which is well accepted for making the kernel more secure, is something that could be applied to magic numbers as well.</p> <p>I still use magic numbers in e2fsprogs, where the magic number is generated using the com_err library (another Multicism, where the top 24 bits identify the subsystem and the low 8 bits are the error code for that subsystem). This means it's super easy to do things like this: </p> <p>In lib/ext2fs/ext2fs.h: <pre>
#define EXT2_CHECK_MAGIC(struct, code) \
	  if ((struct)-&gt;magic != (code)) return (code)
</pre> <p>In lib/ext2fs/ext2_err.et.in: <pre>
error_table ext2

ec	EXT2_ET_BASE,
	"EXT2FS Library version @E2FSPROGS_VERSION@"

ec	EXT2_ET_MAGIC_EXT2FS_FILSYS,
	"Wrong magic number for ext2_filsys structure"

ec	EXT2_ET_MAGIC_BADBLOCKS_LIST,
	"Wrong magic number for badblocks_list structure"
</pre> <p>The compile_et program generates ext2_err.h and ext2_err.c, for which ext2_err.h will have definitions like this: <pre>
#define EXT2_ET_BASE			(2133571328L)
#define EXT2_ET_MAGIC_EXT2FS_FILSYS	(2133571329L)
#define EXT2_ET_MAGIC_BADBLOCKS_LIST	(2133571330L)
...
</pre> <p>Then in various library functions: <pre>
errcode_t ext2fs_dir_iterate2(ext2_filsys fs,
			      ext2_ino_t dir,
			      ...
{
	EXT2_CHECK_MAGIC(fs, EXT2_ET_MAGIC_EXT2FS_FILSYS);
	...
</pre> <p>And of course: <pre>
void ext2fs_free(ext2_filsys fs)
{
	if (!fs || (fs-&gt;magic != EXT2_ET_MAGIC_EXT2FS_FILSYS))
		return;
	...
	fs-&gt;magic = 0;
	ext2fs_free_mem(&amp;fs);
}
</pre> <p>Callers of ext2fs library functions then will do things like this: <pre>
errcode_t retval;

retval = ext2fs_read_inode(fs, ino, &amp;file-&gt;inode);
if (retval)
	return retval;
</pre> or in application code: <pre>
retval = ext2fs_read_bitmaps (fs);
if (retval) {
	printf(_("\n%s: %s: error reading bitmaps: %s\n"),
	       program_name, device_name,
	       error_message(retval));
	exit(1);
}
</pre> <p>This scheme has absolutely found bugs, and given that there is a full set of regression tests that get run via "make check", I've definitely found that having this kind of software engineering practice increases developer velocity, and reduces my stress when I code, since when I do make a mistake, it generally gets caught really quickly as a result.</p> <p>Personally, I find this coding discipline easier to understand and write than Rust, and more performant than using things like valgrind and MSan. Of course, I use those tools too, but if I can catch bugs early, my experience is that it allows me to generate code much more quickly and reliably.</p> <p>Shrug. Various programming styles go in and out of fashion. And structure magic numbers go all the way back to the 1960s (Multics was developed as a joint project between MIT, GE, and Bell Labs starting in 1964).</p> Tue, 13 Dec 2022 06:02:50 +0000 Losing the magic https://lwn.net/Articles/917350/ https://lwn.net/Articles/917350/ farnz <p>Yep - and the <tt>O_PONIES</tt> problem, when you reduce it to its core, is simple. The standard permits non-deterministic behaviour (some behaviours are defined as "each execution of the program must exhibit one behaviour from the allowed list of behaviours", not as a single definite behaviour). The standard also permits implementation-defined behaviour - where the standard doesn't define how a construct behaves, but instead says "your implementation will document how it interprets this construct".
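<p>(Aside — a minimal C sketch of that distinction; the function name is illustrative. The first initialiser is implementation-defined, so the implementation must document its result; the second can overflow, which is undefined behaviour and leaves the program's behaviour completely unconstrained:) <pre>
#include &lt;limits.h&gt;

int examples(int n)
{
	int id = -8 &gt;&gt; 1;	/* implementation-defined: documented result */
	int ub = n + INT_MAX;	/* overflows for n &gt; 0: undefined behaviour */
	return id + ub;
}
</pre>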
<p>What the <tt>O_PONIES</tt> crowd want is to convert "annoying" UB in C and C++ to implementation-defined behaviour. There's a process for doing that - it involves going through the standards committees writing papers and convincing people that this is the right thing to do. The trouble is that this is hard work - as John Regehr has already demonstrated by making the attempt - since UB has been used by the standards committee as a way of avoiding difficult discussions about what is, and is not, acceptable in a standards-compliant compiler, and thus re-opening the discussion is going to force people to confront those arguments all over again. Thu, 08 Dec 2022 11:11:46 +0000 Losing the magic https://lwn.net/Articles/917355/ https://lwn.net/Articles/917355/ geert <div class="FormattedComment"> Sounds like a good task for sparse?<br> </div> Thu, 08 Dec 2022 10:53:00 +0000 Losing the magic https://lwn.net/Articles/917276/ https://lwn.net/Articles/917276/ pizza <div class="FormattedComment"> <span class="QuotedText">&gt; And it's all very well khim saying "the compiler writers have given you an opt-out". But SchrodinUB should always be opt IN. </span><br> <p> The thing is... they are! Run GCC without any arguments and you'll get -O0, ie "no optimization".<br> <p> These UB-affected optimizations are only ever attempted if the compiler is explicitly told to try.<br> <p> Now what I find hilarious are folks who complain about the pitfalls of modern optimization techniques failing on their code while simultaneously complaining "but without '-O5 -fmoar_ponies' my program is too big/slow/whatever". Those folks also tend to ignore or disable warnings, so.. yeah.<br> </div> Wed, 07 Dec 2022 19:48:21 +0000 Losing the magic https://lwn.net/Articles/917271/ https://lwn.net/Articles/917271/ khim <font class="QuotedText">&gt; If there is no UB in my source code, then there is also no UB in the resulting binary, absent bugs/faults in the compiler, the OS or the hardware.</font> <p>We don't disagree there, but that's <b>not</b> what <code>O_PONIES</code> lovers are ready to accept.</p> <font class="QuotedText">&gt; Your examples are cases where I have UB in language L, I translate to language M, and I still have UB - in other words, no new UB has been introduced, but the existing UB has resulted in the output program having UB, too. </font> <p>Yes. Because that's what <code>O_PONIES</code> lovers demand be handled! They, basically, say that it doesn't matter whether L has UB or not. It only matters whether <b>M</b> has UB. If M doesn't have “suitably similar” UB then a program in L <b>must</b> be handled correctly <b>even if it violates the rules of language L</b>.</p> <p>Unfortunately, in practice it works only in two cases:</p> <ol> <li>If L and M are extremely similar (like machine code and assembler)<br>or</li> <li>If the translator from L to M is so primitive that you can, basically, predict how precisely each construct from L maps to M (old C compilers)</li> </ol> <font class="QuotedText">&gt; In turn, this means that UB in language M does not create new UB in language L - the flow of UB is entirely one-way in this respect (there was UB in language L, when I compiled it, I ended up with a program that has UB in language M).</font> <p>Ah, got it. Yeah, <b>in that sense</b> it's a one-way street in the absence of bugs.
Of course bugs may move things from M to L (see Meltdown and Spectre), but in the absence of bugs it's a one-way street, I agree.</p> <font class="QuotedText">&gt; This is a lot of work, and involves getting a full understanding of why people want certain behaviours to be UB, rather than defined in a non-deterministic fashion.</font> <p>And it's also explicitly not what <code>O_PONIES</code> lovers want. They explicitly don't want all that hassle, they just want the ability to write code in L with UB and get a working program. <b>That</b> is really pure <code>O_PONIES</code> — <a href="https://sandeen.net/wordpress/uncategorized/coming-clean-on-o_ponies/">exactly like in that story with the Linux kernel</a>.</p> <p>The list of UBs in C and C++ is still patently insane, but that's a <b>different</b> issue. It would have been possible to tackle <b>that issue</b> if <code>O_PONIES</code> lovers actually wanted to alter the spec. That's not what they want. They want ponies.</p> Wed, 07 Dec 2022 19:06:27 +0000 Losing the magic https://lwn.net/Articles/917267/ https://lwn.net/Articles/917267/ khim <font class="QuotedText">&gt; All I want is for the C spec to declare them equivalent.</font> <p>If that's really your desire then you sure found a funny way to achieve it.</p> <p>But I'm not putting you in with the <code>O_PONIES</code> crowd. It seems you are acting out of ignorance, not malice.</p> <p> John Regehr <a href="https://blog.regehr.org/archives/1287">tried to do what you are proposing to do</a> — and failed spectacularly, of course.</p> <p>But <a href="https://lwn.net/Articles/916771/">look here</a>: <i>My paper does not propose a tightening of the C standard. Instead, it tells C compiler maintainers how they can change their compilers without breaking existing, working, tested programs. Such programs may be compiler-specific and architecture-specific (so beyond anything that a standard tries to address), but that's no reason to break them on the next version of the same compiler on the same architecture.</i></p> <p>Basically, the <code>O_PONIES</code> lovers' position is the following: if language M (machine code) has UBs then it's OK for L to have UB in that place, but if M doesn't have UB <b>then it should be permitted to violate the rules of L and still produce a working program</b>.</p> <p>But yeah, that's probably a problem with me understanding English or you having trouble explaining things.</p> <font class="QuotedText">&gt; What I'm unhappy with is SchrodinUB where the EXACT SAME CODE may, or may not, exhibit UB depending on situations outside the control of the programmer</font> <p>How is that compatible with this:</p> <font class="QuotedText">&gt; khim is determined to drag in features that are on their face insane, like double frees and the like. I'm quite happy for the compiler to optimise on the basis of "this code is insane, I'm going to assume it can't happen (because it's a bug EVERYWHERE).</font> <p>I don't see why you say that this feature is insane. Let's consider a <a href="https://www.joelonsoftware.com/2000/05/24/strategy-letter-ii-chicken-and-egg-problems/">concrete example</a>:</p> <blockquote>On beta versions of Windows 95, SimCity wasn’t working in testing. Microsoft tracked down the bug and <i>added specific code to Windows 95 that looks for SimCity</i>.
If it finds SimCity running, it runs the memory allocator in a special mode that doesn’t free memory right away.</blockquote> <p>It looks as if your approach <i>the EXACT SAME CODE may, or may not, exhibit UB depending on situations outside the control of the programmer</i> very much <b>does</b> cover double free, dangling pointers and other such things. It's even possible to make it work if you have enough billions in the bank and an obsession with backward compatibility.</p> <p>The question: is that a billion well spent? Should we have a dedicated team which cooks up such patches for <code>clang</code> and/or <code>gcc</code>? Who would pay for it?</p> <p>Without changing the spec (which people like Anton Ertl or Victor Yodaiken very explicitly say is not what they want) this would be the only alternative, I'm afraid.</p> <font class="QuotedText">&gt; But SchrodinUB should always be opt IN.</font> <p>Why? It's not part of the C standard, why should it affect good programs which are not abusing C?</p> <font class="QuotedText">&gt; And actually, I get the impression Rust is like that - bounds checks and all that sort of thing are suppressed in runtime code I think I heard some people say. </font> <p>Only integer overflow checks <a href="https://doc.rust-lang.org/reference/behavior-not-considered-unsafe.html#integer-overflow">are disabled</a>. If you try to divide by zero you still get a check and a panic if the divisor is zero.</p> <p>But if you violate <a href="https://doc.rust-lang.org/reference/behavior-considered-undefined.html">some other thing</a> (e.g. <a href="https://www.ralfj.de/blog/2019/07/14/uninit.html">if your program tries to access an undefined variable</a>) all bets are still off.</p> <p>Let's consider the following example:</p> <pre>
#include &lt;stdbool.h&gt;

bool to_be_or_not_to_be() {
    int be;
    return be == 0 || be != 0;
}
</pre> With Rust you need to jump through hoops to use an uninitialized variable, but with <code>unsafe</code> it's possible: <pre>
use std::mem::MaybeUninit;

pub fn to_be_or_not_to_be() -&gt; bool {
    let be: i32 = unsafe { MaybeUninit::uninit().assume_init() };
    return be == 0 || be != 0;
}
</pre> <p>You may argue that <a href="https://godbolt.org/z/PTM8Kn6E4">what Rust is doing</a> (removing the code which follows the <code>to_be_or_not_to_be</code> call and replacing it with an unconditional crash) is, somehow, better than what C is doing (claiming that the value of <code>be == 0 || be != 0</code> is <code>false</code>).</p> <p>But that would be a hard sell to an <code>O_PONIES</code> lover who was counting on getting <code>true</code> from it (like Rust <a href="https://godbolt.org/z/dY8PW69fK">did only a few weeks ago</a>).</p> <p>Yes, Rust is a better-defined language, no doubt about it. It has a smaller number of UBs and they are more sane. <b>But C and Rust are cast in the same mold</b>!</p> <p>You either avoid UBs and have a predictable result, or you don't avoid them and end up with something strange… and there is <b>absolutely</b> no guarantee that a program which works today will continue to work tomorrow… you have to ensure your program doesn't trigger UB <a href="https://blog.rust-lang.org/2014/10/30/Stability.html">to cash in on that promise</a>.
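<p>(Aside — for contrast, a sketch of the sound way to use the same Rust API: initialise the storage before <code>assume_init()</code>, and no UB is triggered. The function name is illustrative:)</p> <pre>
use std::mem::MaybeUninit;

pub fn to_be() -&gt; bool {
    let mut be = MaybeUninit::&lt;i32&gt;::uninit();
    be.write(42);                         // initialise the storage first
    let be = unsafe { be.assume_init() }; // sound: the value is initialised
    be == 0 || be != 0                    // now reliably true
}
</pre>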
Wed, 07 Dec 2022 18:48:27 +0000 Losing the magic https://lwn.net/Articles/917266/ https://lwn.net/Articles/917266/ abatters <div class="FormattedComment"> As an example of the kernel moving in the unsafe direction, the kernel has lots of special printk format specifiers for specific pointer types that are not typechecked by the compiler.<br> <p> <a href="https://docs.kernel.org/core-api/printk-formats.html">https://docs.kernel.org/core-api/printk-formats.html</a><br> <p> </div> Wed, 07 Dec 2022 17:41:25 +0000 Losing the magic https://lwn.net/Articles/917263/ https://lwn.net/Articles/917263/ Wol <div class="FormattedComment"> <span class="QuotedText">&gt; And here we come to the underlying fun with O_PONIES: Coming up with definitions for existing UB and pushing that through the standards process is hard work, and involves thinking about a lot of use cases for the language, not just your own, and getting agreement either on a set of allowable behaviours for a construct that's currently UB, or getting the standards process to agree that something should be implementation-defined (i.e. documented set of allowable behaviours from the compiler implementation). This is a lot of work, and involves getting a full understanding of why people want certain behaviours to be UB, rather than defined in a non-deterministic fashion.</span><br> <p> I don't know whether khim's English skills are letting him down, or whether he's trolling, but I think you've just encapsulated my view completely.<br> <p> Multiplication exists in C. Multiplication exists in Machine Code. All I want is for the C spec to declare them equivalent. If the result is sane in C, then machine code has to return a sane result. If the result is insane in C, then machine code is going to return an insane result. Whatever, it's down to the PROGRAMMER to deal with.<br> <p> khim is determined to drag in features that are on their face insane, like double frees and the like. I'm quite happy for the compiler to optimise on the basis of "this code is insane, I'm going to assume it can't happen (because it's a bug EVERYWHERE). What I'm unhappy with is SchrodinUB where the EXACT SAME CODE may, or may not, exhibit UB depending on situations outside the control of the programmer (and then the compiler deletes the programmer's checks!).<br> <p> And it's all very well khim saying "the compiler writers have given you an opt-out". But SchrodinUB should always be opt IN. Principle of "least surprise" and all that. (And actually, I get the impression Rust is like that - bounds checks and all that sort of thing are suppressed in runtime code I think I heard some people say. That's fine - actively turn off checks in production in exchange for speed IF YOU WANT TO, but it's a conscious opt-in.)<br> <p> Cheers,<br> Wol<br> </div> Wed, 07 Dec 2022 17:28:33 +0000 Losing the magic https://lwn.net/Articles/917259/ https://lwn.net/Articles/917259/ farnz <p>You're misunderstanding me still. If there is no UB in my source code, then there is also no UB in the resulting binary, absent bugs/faults in the compiler, the OS or the hardware. <p>Your examples are cases where I have UB in language L, I translate to language M, and I still have UB - in other words, no new UB has been introduced, but the existing UB has resulted in the output program having UB, too. The only gotcha is that the UB in the output program may surprise the programmer, since UB in the source language simply leaves the target language behaviour completely unconstrained. 
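<p>(Aside — a classic sketch of how surprising that can look in C; with optimisations enabled, GCC and Clang really do this, and the function name is illustrative:) <pre>
int first_or_minus1(int *p)
{
	int x = *p;		/* undefined behaviour if p is NULL... */
	if (p == NULL)		/* ...so the optimiser may assume p != NULL */
		return -1;	/*    and delete this check entirely */
	return x;
}
</pre>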
<p>There is never a case where I write a program in language L that is free of UB, but a legitimate compilation of that program to language M results in the program having UB. If this does happen, it's a bug - the compiler has produced invalid output, just as it's a bug for a C compiler to turn <tt>int a = 1 + 2;</tt> into <tt>int a = 4;</tt>. <p>In turn, this means that UB in language M does not create new UB in language L - the flow of UB is entirely one-way in this respect (there was UB in language L, when I compiled it, I ended up with a program that has UB in language M). <p>The only thing that people find tricky here is that they have a mental model of what consequences of UB are "reasonable", and what consequences of UB are "unreasonable", and get upset when a result of compiling a program from L to M results in the compiler producing a program in language M with "unreasonable" UB, when as far as they were concerned, the program in language L only had "reasonable" UB. But this is not a defensible position - the point of UB is that the behaviour of a program that executes a construct that contains UB is undefined, while "reasonable" UB is a definition of what behaviour is acceptable. <p>And here we come to the underlying fun with O_PONIES: Coming up with definitions for existing UB and pushing that through the standards process is hard work, and involves thinking about a lot of use cases for the language, not just your own, and getting agreement either on a set of allowable behaviours for a construct that's currently UB, or getting the standards process to agree that something should be implementation-defined (i.e. documented set of allowable behaviours from the compiler implementation). This is a lot of work, and involves getting a full understanding of why people want certain behaviours to be UB, rather than defined in a non-deterministic fashion. Wed, 07 Dec 2022 16:21:19 +0000 Losing the magic https://lwn.net/Articles/917247/ https://lwn.net/Articles/917247/ khim <font class="QuotedText">&gt; In other words, as you compile from language L to language M, the compiler can leave you with as much UB as you had before, or it can decrease the amount of UB present in language M, but it can never add UB.</font> <p>Of course it can add UB! Every language with manual memory management, without GC, adds UB related to it. At the hardware level there are no such UBs: memory is managed by the user when he adds new DIMMs or removes them, and there may never be any confusion about whether memory is accessible or not.</p> <p>But Ada, C, Pascal and many other such languages add memory management functions and then say “hey, if you freed memory then the onus is on you to make sure you wouldn't try to use an object which no longer exists”.</p> <p>The desire to do what you are talking about is what gave rise to GC infestation and abuse of managed code.</p> <font class="QuotedText">&gt; The only "problem" this leaves you with if you're the O_PONIES sort is that it means that defining what it actually means for UB to flow from language M to language L is tricky, because in the current world, UB doesn't flow that way, it only flows from language L to language M.</font> <p>UBs can flow in any direction and don't, actually, cause any problems as long as you understand what UB is: something that you are not supposed to do.
If you understand what UB is and can list them — you can deal with them.</p> <p>If you don't understand what UB is (<code>O_PONIES</code> people) or don't understand where they are (the Clément Bœsch case or, if we are talking about hardware, the Meltdown and Spectre case) then there's trouble.</p> <p>Ignorance can be fixed easily. But attitude adjustments are hard. If someone believes it's his right to ignore a traffic light because that's how he drove for the last half-century in his small village, then it becomes a huge problem when someone like that moves to a big city.</p> Wed, 07 Dec 2022 15:09:08 +0000 Losing the magic https://lwn.net/Articles/917243/ https://lwn.net/Articles/917243/ farnz <p>You're arguing a different point, around people who demand a definition of UB in their language of choice L, by analogy to another language M. <p>I'm saying that the situation is not as awful as it might sound; if I write in language L, and compile it to language M, it's a compiler bug if the compiler converts defined behaviour in L into undefined behaviour in M. As a result, when working in language L (whether that's C, Haskell, Python, Rust, JavaScript, ALGOL, Lisp, PL/I, Prolog, BASIC, Idris, whatever), I do not need to worry about whether or not there's UB in language M - I only need to care about language L, because it's a compiler bug if the compiler translates defined behaviour in language L into undefined behaviour in language M. <p>So, for example, if language L says that a left shift by more than the number of bits in my integer type always results in a zero value, it's up to the compiler to make that happen. If language M says that a left shift by more than the number of bits in my integer type results in UB, then the compiler has to handle putting in the checks (or proving that they're not needed) so that if I do have a left shift by more than the number of bits in my integer type, I get 0, not some undefined behaviour. <p>And this applies all the way up the stack if I have multiple languages involved; if machine code on my platform has UB (and it probably does in a high-performance CPU design), it makes no difference if I compile BASIC to Idris, Idris to Chicken Scheme, Chicken Scheme to C, C to LLVM IR and finally LLVM IR to machine code, or if I compile BASIC directly to machine code. Each compiler in the chain must ensure that all defined behaviour of the source language translates to identically defined behaviour in the destination language. <p>In other words, as you compile from language L to language M, the compiler can leave you with as much UB as you had before, or it can decrease the amount of UB present in language M, but it can never add UB. The only "problem" this leaves you with if you're the O_PONIES sort is that it means that defining what it actually means for UB to flow from language M to language L is tricky, because in the current world, UB doesn't flow that way, it only flows from language L to language M. Wed, 07 Dec 2022 14:18:34 +0000 Losing the magic https://lwn.net/Articles/917232/ https://lwn.net/Articles/917232/ khim <font class="QuotedText">&gt; It's worth noting that you're making the situation sound a little worse than it actually is.</font> <p>You are mixing issues. Of course it's possible to make a language without UB! There are <b>tons</b> of such languages: C#, Java, Haskell, Python…</p> <p>But that's <b>not</b> what <code>O_PONIES</code> lovers want! They want the ability to “program to hardware”.
Lie to the compiler (because “they know better”), do certain manipulations to the hardware <b>which the compiler has no idea about</b> and then expect that the code would still work. <p><b>That</b> is impossible (and I, probably, underestimate the complexity of the task). It's as if a Java program opened <code>/proc/self/mem</code>, poked the runtime internals and then, when an upgrade broke it, its author demanded satisfaction and claimed that since his code worked in one version of JRE then it must work in all of them.</p> <p>That is what happens when you “use UB to combat UB”. The onus is on you to support new versions of the compiler. Just like the onus is on you to support new versions of Windows if you use undocumented functions, and the onus is on you if you poke into Linux kernel internals via <code>debugfs</code>, and so on.</p> <p>And Linux kernel developers are <b>not</b> shy when they say that when programs rely on such intricate internal details all bets are off. Even the <code>O_PONIES</code> term was coined by them, not by compiler developers!</p> <font class="QuotedText">&gt; For source languages with undefined behaviour, the compiler gets a bit more freedom; it can translate a source construct with UB to any destination construct it likes, including one with UB. This is fine, because the compiler hasn't added new UB to the program - it's "merely" chosen a behaviour for something with UB.</font> <p>Yes, but that's precisely what <code>O_PONIES</code> lovers object to. Just <a href="http://www.complang.tuwien.ac.at/papers/ertl17kps.pdf">read the damn paper already</a>. It doesn't even entertain, for one minute, the notion that programs can be written without use of UBs. They just assert they would continue to write code with UBs (“write code for the hardware” since “C is a portable assembler”) and compilers have to adapt, somehow. Then they discuss how the compiler would have to deal with the mess <b>they</b> are creating.</p> <p>You may consider that a concession of sorts (no doubt caused by the fact that you cannot avoid UBs in today's world because even bare hardware has UBs), but it's still not a discussable position, because instead of listing constructs which are <b>allowed</b> in the source program they want to only blacklist certain “bad things”.</p> <p>Because it doesn't work! Ask any security guy what he thinks about blacklists and you would hear that they are always only a papering-over of the problem and just lead to “whack-a-mole” busywork. To arrive at some promises you have to <b>whitelist</b> good programs, not <b>blacklist</b> the bad ones!</p> Wed, 07 Dec 2022 12:44:51 +0000 Losing the magic https://lwn.net/Articles/917222/ https://lwn.net/Articles/917222/ khim <font class="QuotedText">&gt; That, of course, applies to all programming languages which are in common use</font> <p>Depends on how you would define “common use”, though. The first language which was actually designed is, arguably, <a href="https://en.wikipedia.org/wiki/Lisp_(programming_language)">Lisp</a> (and it wasn't even designed to write real-world programs). It's still in use.</p> <p>Also different versions of Algol were designed, and Pascal, Ada, Haskell… Even Java, C#, Go were designed to some degree!
The goal of all these projects was to create something people can use to discuss how programs are created; features of the language were extensively discussed and rejected (or accepted) on that basis.</p> <p>C or PHP, on the other hand, were never actually designed. C was created just by the pressing need to have something to rewrite a <a href="https://en.wikipedia.org/wiki/PDP-7">PDP-7</a>-only OS to support the <a href="https://en.wikipedia.org/wiki/PDP-11">PDP-11</a>, too. Later more machines were added and C was stretched and stretched till it started to break.</p> <p>Only then did the committee start its work, and it stitched the language together to the best of its abilities, but because some cracks were so deep, some results were… unexpected.</p> <font class="QuotedText">&gt; Life is a lot easier when a programming language has only one implementation and you can decree that the official behaviour of the language is whatever that implementation does in every case.</font> <p>You never can do that. Look at languages with one implementation: PHP, Python (theoretically many implementations, but CPython is the defining one), or even Rust (although there are new implementations in development). Different versions may behave differently and you have to decide which one is “right” even if there are no other implementations.</p> <p>Life is “simple” only when a language and its implementation never change (Lua comes close).</p> <font class="QuotedText">&gt; most of the code we're running on a daily basis remains written in C</font> <p>I wouldn't say so. Maybe in embedded, but in most other places C is replaced with C++. Even if you say that C and C++ are the same language (which is true to some degree), then you would have to admit that most code today is not written in C; it's written in Python, Java, JavaScript or Visual Basic.</p> <p>It was never true that C was <b>the language</b> which was used to the exclusion of everything else. And it wasn't even initially popular with OS writers: <a href="https://en.wikipedia.org/wiki/Classic_Mac_OS">MacOS</a> was written in Pascal, e.g., and early versions of Microsoft's development tools (Assembler, Linker, etc) were written in Microsoft Pascal, too.</p> <p>The success of UNIX made C popular, and this success wasn't even based on technical merits! Rather, AT&amp;T was forced to refrain from selling software, thus it couldn't compete with sellers of UNIX.</p> <p>It was <b>always</b> known that C is an awful language, since day one. It was just not obvious <b>how</b> awful it was till a sensible alternative arrived.</p> <p>The approach was: “C is that horrible, awful thing, let's hide it from mere mortals”. Mostly because the IT industry bought the Kool-Aid of the GC-based solution to memory safety, which, of course, can only work if there is something <b>under</b> your language to provide GC and other important runtime support.</p> <p>Most managed languages remained with runtimes written in C/C++ because “hey, it's used by professionals, they can deal with sharp corners”. Only Go avoided that, and it still needs some other language for the OS kernel, even in theory.</p> <font class="QuotedText">&gt; Of course C has its problems, but it is virtually guaranteed that any other language, once it has achieved the popularity and widespread use of C, will too (even if invented by “people who know how languages are supposed to be designed”).</font> <p>Oh, absolutely.
Pascal was stretched, too, and it, too, got many strange warts when <s>Borland</s>, <s>CodeGear</s>, <s>Borland</s>, <s>Embarcadero</s>, Idera was adding hot new features without thinking about how to integrate them.</p> <p>Rust is definitely not immune: while its core is very good, the <code>async</code> approach is questionable, and chances are high that we would know, 10 or 20 years later, how to do it much better than how Rust does it today.</p> <p>But today it's unclear how to do it better, thus we have what we have.</p> <font class="QuotedText">&gt; Certainly in the last 80 years of programming language design, and claims to the contrary notwithstanding, nobody has so far been able to come up with a systems programming language that has no problems at all, that runs everywhere, and that people are actually prepared to adopt.</font> <p>That's just not possible. Languages come and go. C's lifespan was artificially extended by the invention and slow adoption of C++, though (when C++ was created it became possible to freeze C and say that if you want a modern language you can go use C++, and since C++ wasn't “ready” for so many years it was always easy to say “hey, don't worry, <b>next</b> version would fix everything”). It's a bit funny and sad when you read about <a href="https://www.stroustrup.com/good_concepts.pdf">concepts completing C++ templates as originally envisioned</a> 30+ years after the language was made, but oh, well… that's life.</p> <p>Rust wasn't built in a vacuum, after all. It took many concepts developed in C++! RAII was invented there, ownership rules were invented there (only initially <a href="https://web.archive.org/web/20080701113040/http://google-styleguide.googlecode.com/svn/trunk/cppguide.xml">enforced by style guide, not compiler</a>), and so on.</p> <p>Only, at some point, it becomes obvious that the only way forward is to <b>remove</b> some things from the language — which, basically, makes it a different language (it's really hard to remove something from a popular language; recall the Python3 saga). <a href="https://graydon2.dreamwidth.org/218040.html">Here</a> Graydon Hoare lists things which were either removed from Rust or (in some cases) never added.</p> <p>Thus yes, it would be interesting to see what would happen in 10-20 years when we would need to remove something from Rust. Would we get Rust 2.0 (like what happened with Python) or an entirely different language (like what is happening with C)? Who knows.</p> <p>But no, I don't expect Rust to live forever. Far from it. It's full of problems; we just have no idea how to solve these problems <b>properly</b> yet, thus we solve them the same way we solve them in C (ask the developer “to hold it right”).</p> Wed, 07 Dec 2022 12:04:39 +0000 Losing the magic https://lwn.net/Articles/917220/ https://lwn.net/Articles/917220/ farnz <p>It's worth noting that you're making the situation sound a little worse than it actually is. <p>The compiler's job is to translate your program from one language (say C) to another language (say x86-64 machine code), with the constraint that the output program's behaviour must be the same as the input program's behaviour. Effectively, therefore, the compiler's job is to translate defined behaviour in the source program into identically defined behaviour in the output program.
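<p>(Aside — a sketch of what that constraint costs, assuming a hypothetical source language that defines shifts by the full width or more as yielding zero. x86's SHL instruction masks the shift count, so the compiler must lower the construct with an explicit check, as the next paragraph notes:) <pre>
#include &lt;stdint.h&gt;

uint32_t shl_defined(uint32_t x, uint32_t n)
{
	/* a bare x86 SHL would compute x &lt;&lt; (n % 32), not 0, for n &gt;= 32 */
	return (n &gt;= 32) ? 0 : (x &lt;&lt; n);
}
</pre>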
<p>For source languages without undefined behaviour, this means that the compiler must know about the destination language's undefined behaviour and ensure that it never outputs a construct with undefined behaviour - this can hurt performance, because the compiler may be forced to insert run-time checks (e.g. "is the shift value greater than the number of bits in the input type, if so jump to special case"). <p>For source languages with undefined behaviour, the compiler gets a bit more freedom; it can translate a source construct with UB to any destination construct it likes, including one with UB. This is fine, because the compiler hasn't added new UB to the program - it's "merely" chosen a behaviour for something with UB. Wed, 07 Dec 2022 11:05:41 +0000 Losing the magic https://lwn.net/Articles/917214/ https://lwn.net/Articles/917214/ anselm <blockquote><em>This goes back to the fact that C was never, actually, designed, it was cobbled together by similarly-minded folks thus when people who knew how languages are supposed to be designed have tried to clarify how C works they only could do so much if they don't want to create an entirely different language which doesn't support programs written before that point at all (which would defeat the purpose of clarification work).</em></blockquote> <p> That, of course, applies to all programming languages which are in common use – especially to programming languages that have more than one implementation. (Life is a lot easier when a programming language has only one implementation and you can decree that the official behaviour of the language is whatever that implementation does in every case.) </p> <p> C had been around for almost 20 years, with a considerable number of implementations on a wide variety of hardware architectures, when the first official C standard was published. Considering that, ANSI/ISO 9899-1990, within its limits, was a very important and remarkably useful document. It's easy to argue, with the benefit of 30-plus years' worth of hindsight, that C is a terrible language and the various C standards not worth the paper they're printed on, but OTOH as far as the Internet is concerned, C is still what makes the world go round – between the Linux kernel, Apache and friends, and for that matter many implementations of nicer languages than C, most of the code we're running on a daily basis remains written in C, and it will be a while yet before new languages like Rust get enough traction (and architecture support) to be serious competitors. </p> <p> Of course C has its problems, but it is virtually guaranteed that any other language, once it has achieved the popularity and widespread use of C, will too (even if invented by “people who know how languages are supposed to be designed”). Certainly in the last 80 years of programming language design, and claims to the contrary notwithstanding, nobody has so far been able to come up with a systems programming language that has no problems at all, that runs everywhere, and that people are actually prepared to adopt. </p> Wed, 07 Dec 2022 10:04:34 +0000 Losing the magic https://lwn.net/Articles/917161/ https://lwn.net/Articles/917161/ ejr <div class="FormattedComment"> Isn't this the situation from which Rusgocamkell will save us?<br> <p> Sorry. Couldn't resist the snark. Run-time-ish methods to ensure compatibility likely are good at the correct abstraction points. I've used eight-character strings to detect (integer) endianness as a trivial example. 
Not zero terminated.<br> <p> And losing magic for openness isn't bad necessarily. Just a tad sad for ex-wizards.<br> <p> </div> Wed, 07 Dec 2022 01:14:40 +0000 Losing the magic https://lwn.net/Articles/917152/ https://lwn.net/Articles/917152/ khim <font class="QuotedText">&gt; IF THE HARDWARE HAS UBs (those were your own words!)</font> <p>Not <i>if</i>. Hardware most definitely has UB. x86 has fewer UBs than most other architectures, but it, too, <a href="https://preshing.com/20120515/memory-reordering-caught-in-the-act/">can provide <b>mathematically impossible results</b></a>!</p> <p><b>On the hardware level, without help from the compiler!</b> If used incorrectly, of course.</p> <p>These UBs are the results of hardware optimizations, instead. <b>You cannot turn these off!</b></p> <p>But you can find a series of articles explaining how one is supposed to work with all that <a href="https://lwn.net/Articles/718628/">right here</a>, <a href="https://lwn.net/Articles/720550/">on LWN</a>!</p> <p>You have probably seen them already, but probably haven't realized what they are <b>actually</b> covering.</p> <font class="QuotedText">&gt; if the compiler assumes that there is no UB, then we're screwed ...</font> <p>Why? How? What suddenly happened? The compiler deals with these UBs precisely and exactly like with any other UBs: it assumes they never happen.</p> <p>And then the programmer is supposed to deal with all that in the exact same fashion as with any other UBs: by ensuring that the compiler's assumption is correct.</p> Tue, 06 Dec 2022 22:45:44 +0000 Losing the magic https://lwn.net/Articles/917138/ https://lwn.net/Articles/917138/ Wol <div class="FormattedComment"> <span class="QuotedText">&gt; &gt; In other words, with C's assumption that UB is impossible, we now have a conundrum if we want to write Operating Systems in C!</span><br> <p> <span class="QuotedText">&gt; Why would it be so? There is a lot of art built around how you can avoid UBs in practice. Starting from switches which turn certain UBs into IBs (and thus make them safe to use) to sanitizers which [try to] catch UBs like race conditions or double-free or out-of-bounds array access.</span><br> <p> I notice you didn't bother to quote what I was replying to. IF THE HARDWARE HAS UBs (those were your own words!), and the compiler assumes that there is no UB, then we're screwed ...<br> <p> Cheers,<br> Wol<br> </div> Tue, 06 Dec 2022 20:19:45 +0000 Losing the magic https://lwn.net/Articles/917131/ https://lwn.net/Articles/917131/ khim <font class="QuotedText">&gt; In other words, with C's assumption that UB is impossible, we now have a conundrum if we want to write Operating Systems in C!</font> <p>Why would it be so? There is a lot of art built around how you can avoid UBs in practice. Starting from switches which turn certain UBs into IBs (and thus make them safe to use) to sanitizers which [try to] catch UBs like race conditions or double-free or out-of-bounds array access.</p> <p>If you accept the goal (ensure that your OS doesn't ever trigger UB) there are plenty of ways to achieve it. Here is <a href="http://web1.cs.columbia.edu/~junfeng/09fa-e6998/papers/sel4.pdf">an interesting article on the subject</a>.</p> <p>I, personally, did something similar on a smaller scale (not an OS kernel, but another security-critical component of the system).
Ended up with one bug in the 10 years the system was in use (and that was related to a problem with the specification of the hardware).</p> <p>But if you insist on your ability to predict what code with UBs would do… you can't write an Operating System in C that way (or, rather, you can, it's just that there are no guarantees that it will work).</p> <font class="QuotedText">&gt; Which has been my problem ALL ALONG. I want to be able to reason, SANELY, in the face of UB without the compiler screwing me over.</font> <p>Not in the cards, sorry. If your code can trigger UB then the only guaranteed fix is to change the code and make it stop doing that.</p> <font class="QuotedText">&gt; If that's an O_PONY then we really are fscked.</font> <p>Why? Rust pushes UBs into a tiny corner of your code, and there is already enough research into how we can avoid UBs completely (by replacing these with markup which includes proof that your code doesn't trigger any UBs). <a href="https://github.com/google/wuffs">Here</a> is a related (and very practical) project.</p> <p>Of course even after all that we would have the issue of bugs in hardware, but that's an entirely different can of worms.</p> Tue, 06 Dec 2022 18:47:42 +0000 Losing the magic https://lwn.net/Articles/917128/ https://lwn.net/Articles/917128/ khim <font class="QuotedText">&gt; And if I've understood the "higher maths" correctly - I'm more a chemist/medical guy by education - the O_PONY I'm asking for is that signed multiplication be a group. Any group, I don't care, so long as when it doesn't overflow the result is what naive arithmetic would expect.</font> <p>Doesn't look that way to me. Compiler developers <b>already</b> acquiesced to these demands and provided a flag which makes <code>clang</code> and <code>gcc</code> make signed integers behave that way.</p> <p>Now you are arguing about a different thing: the "right to be ignorant". You don't want to use the flag, you don't want to use the provided functions, you don't want to accept anything but complete surrender from guys who have never promised you that your approach would work in the first place (because it's not guaranteed to work even in C90, and gcc 2.95 from last century already assumes you write correct code and don't overflow signed integers).</p> <font class="QuotedText">&gt; That's just common sense :-)</font> <p>And that's precisely the problem. You ask for <b>common sense</b> but neither computers nor compilers have it.</p> <p>They can't employ common sense during the optimisation because it's not possible to formally describe what "common sense" means!</p> <p>Thus they use the next best substitute: the list of logical rules collected in the C standard.</p> <font class="QuotedText">&gt; Is there really anything wrong in asking for the result of computer operations to MAKE SENSE? (No, double-free and things like that - bugs through and through - clearly can't make sense.</font> <p>That's how the specification is changed and how new switches are added.
People employ their common sense and discuss things and arrive at some set of rules.</p> <p>Similarly to how law is produced: people start with common sense, but common sense is different for different people, thus we end up with a certain set of rules which some people like, some people don't like, but all have to follow.</p> <p>Only with the C standard the situation is both simpler and more complicated: the subject matter is much more limited, but the agent which does the interpretation doesn't have even vestigial amounts of common sense (law assumes that where there are contradictions or ambiguities a judge would use common sense; C language specification writers have no such luxury), thus you have to make the specification as rigid and strict as possible.</p> Tue, 06 Dec 2022 18:20:07 +0000 Losing the magic https://lwn.net/Articles/917129/ https://lwn.net/Articles/917129/ Wol <div class="FormattedComment"> <span class="QuotedText">&gt; The big trouble (and also what makes them truly hopeless) is that hardware is often buggy, it very much contains lots of UBs (especially if you use raw prototypes) but because it's a physical thing, UBs are limited. Something doesn't become set when it should, or you need a delay, or if you do something too quickly (or too slowly!) there's a crash… but it rarely happens that an issue in one part of your device affects another, completely unrelated part (except if you are developing something like a modern 100-billion-transistor CPU/GPU… but that's not embedded and I'm not even sure you may classify what these people are doing as "hardware" nowadays).</span><br> <p> In other words, with C's assumption that UB is impossible, we now have a conundrum if we want to write Operating Systems in C!<br> <p> Which has been my problem ALL ALONG. I want to be able to reason, SANELY, in the face of UB without the compiler screwing me over. If that's an O_PONY then we really are fscked.<br> <p> Cheers,<br> Wol<br> </div> Tue, 06 Dec 2022 18:03:22 +0000 Losing the magic https://lwn.net/Articles/917126/ https://lwn.net/Articles/917126/ Wol <div class="FormattedComment"> And if I've understood the "higher maths" correctly - I'm more a chemist/medical guy by education - the O_PONY I'm asking for is that signed multiplication be a group. Any group, I don't care, so long as when it doesn't overflow the result is what naive arithmetic would expect.<br> <p> Because, on the principle of least surprise, it's a very unpleasant surprise to discover that multiplying two numbers could legally result in the computer squirting coffee up your nose ... :-)<br> <p> Is there really anything wrong in asking for the result of computer operations to MAKE SENSE? (No, double-free and things like that - bugs through and through - clearly can't make sense.
That's just common sense :-)<br> <p> Cheers,<br> Wol<br> </div> Tue, 06 Dec 2022 17:58:50 +0000 Losing the magic https://lwn.net/Articles/917124/ https://lwn.net/Articles/917124/ pizza <div class="FormattedComment"> In other words, you're saying we need more rigorous/detailed specifications for software.<br> <p> ...And you're the one going on about folks asking for O_PONIES?<br> <p> <p> </div> Tue, 06 Dec 2022 17:18:35 +0000 Losing the magic https://lwn.net/Articles/917115/ https://lwn.net/Articles/917115/ khim <font class="QuotedText">&gt; Bare-metal embedded (not to mention the actual hardware) requires a _lot_ more discipline than most other software categories.</font> <p>It requires an <b>entirely different</b> discipline, that's the issue.</p> <font class="QuotedText">&gt; On average, you'll find embedded and hw folks a lot more vigorous when it comes to testing/validation, as fixing bugs after things have shipped can be prohibitively expensive.</font> <p>Yes, but more often than not they do a bazillion tests and conclude that it's enough to be confident that the thing actually works as it should.</p> <p>Often they are even right: hardware is hardware, it often limits input to your program severely (which makes things like buffer overflow impossible simply because the laws of physics protect you). And hardware rarely behaves 100% like the specs say it would behave, thus without testing, math models wouldn't save you.</p> <p>Software is thoroughly different: an adversary may control inputs so well and do things which are so far beyond anything you may even imagine that all these defenses built by folks with hardware experience, and their tests, are sidestepped without much trouble.</p> <p>You need math, logic and rigorous rules to make things work. It's really interesting how the attitude of Linux kernel developers has slowly shifted from a hardware mindset to a software mindset as the fuzzing guys found more and more crazy ways to break what they had thought was a well-designed and tested piece of code.</p> <p>Now they are even trying to use Rust as a mitigation tool. It would be interesting to see whether it would actually work or not: the Linux kernel sits between the hardware and software worlds, which means that pure math, logic and rigorous rules are not enough to make it robust.</p> Tue, 06 Dec 2022 15:24:44 +0000 Losing the magic https://lwn.net/Articles/917112/ https://lwn.net/Articles/917112/ khim <font class="QuotedText">&gt; It seems to me you lack a lot of experience!</font> <p>I worked with embedded guys and even know a guy who spent an insane amount of time to squeeze AES into 256 bytes on some 4-bit Samsung CPU.</p> <p>I've seen how these folks behave.</p> <font class="QuotedText">&gt; In environments like bare metal, you need a lot of discipline, otherwise you don't understand your own code a few years later... or everything will just crash.</font> <p>The big trouble (and also what makes them truly hopeless) is that hardware is often buggy, it very much contains lots of UBs (especially if you use raw prototypes) but because it's a physical thing, <b>UBs are limited</b>. Something doesn't become set when it should, or you need a delay, or if you do something too quickly (or too slowly!)
there's a crash… but it rarely happens that an issue in one part of your device affects another, completely unrelated part (except if you are developing something like a modern 100-billion-transistor CPU/GPU… but that's not embedded and I'm not even sure you may classify what these people are doing as “hardware” nowadays).</p> <p>They cope with hardware UBs with tests and, naïvely, try to apply the same approach to software. Which rarely ends well and just leads to <code>O_PONIES</code> ultimatums (which remain mostly ignored because software is not hardware and the effects of UB may be thoroughly non-local).</p> <p>Then they adopt <i>these idiot compiler makers couldn't be trusted and we are right, thus we would just freeze the version of the compiler we are using</i>. Which, of course, leads to an inability to reuse code written in a supposedly portable language later (which greatly surprises their bosses).</p> <p>It's a mess. The worst of all is the attitude <i>we don't have time to sharpen the axe, we need to chop trees!</i></p> <font class="QuotedText">&gt; I appreciate the simplicity and readability of C very much.</font> <p>Unfortunately its simplicity is only skin-deep: its syntax is readable enough (if you forget the blunder with pointers to functions), but its semantics are, often, extremely non-trivial and very few understand them.</p> <p>This goes back to the fact that C was never, actually, designed, it was cobbled together by similarly-minded folks thus when people who knew how languages are supposed to be designed have tried to clarify how C works they only could do so much if they don't want to create an entirely different language which doesn't support programs written before that point at all (which would defeat the purpose of clarification work).</p> Tue, 06 Dec 2022 15:16:07 +0000 Losing the magic https://lwn.net/Articles/917109/ https://lwn.net/Articles/917109/ pizza <div class="FormattedComment"> Bare-metal embedded (not to mention the actual hardware) requires a _lot_ more discipline than most other software categories.<br> <p> On average, you'll find embedded and hw folks a lot more vigorous when it comes to testing/validation, as fixing bugs after things have shipped can be prohibitively expensive.<br> </div> Tue, 06 Dec 2022 14:38:38 +0000