Optimizations and undefined behavior

Posted Jul 21, 2009 20:11 UTC (Tue) by martinfick (subscriber, #4455)
In reply to: Optimizations and undefined behavior by BrucePerens
Parent article: Fun with NULL pointers, part 1

"Implementation dependent" means that the folks who change the compiler, the C library, the kernel, and any stack-smashing detection/prevention code that you are using have the right to change the behavior at any time without documenting the particular side-effect that your code is depending upon.

No, it does not always mean this. In this particular case it may, but it is perfectly reasonable for a specific C compiler to specify its behavior where the standard says it is undefined; that is the point of "undefined" and "implementation dependent".

"Implementation dependent" != `cat /dev/urandom`

Optimizations and undefined behavior

Posted Jul 21, 2009 20:55 UTC (Tue) by BrucePerens (guest, #2510) [Link]

The recent ext3 fsync() snafu is a good example of implementation dependent behavior being taken as an implicit guarantee. And then the developer had to reduce the scope of the promise for performance reasons. He ended up regretting that he had ever made that feature visible.

Anyway, there was no such guarantee in this case.

Optimizations and undefined behavior

Posted Jul 21, 2009 22:39 UTC (Tue) by mjg59 (subscriber, #23239) [Link]

The role of software is to be useful to its consumers. Software that fails in this will tend to end up being ignored in the long run.

Optimizations and undefined behavior

Posted Jul 22, 2009 1:28 UTC (Wed) by BrucePerens (guest, #2510) [Link]

Matt, you aren't seriously proposing that we provide some sort of user contract that the contents of uninitialized variables are a reliable source of entropy.

Regarding ext3, this problem came up because fsync() was implemented as a performance pig, at least in ext3, and rather than fix it we trained application developers that they'd be safe without it. Had fsync() been repaired when the Mozilla problem came up, nobody would be arguing about it today.

Optimizations and undefined behavior

Posted Jul 22, 2009 1:39 UTC (Wed) by mjg59 (subscriber, #23239) [Link]

If a system left truly random numbers in uninitialised variables and applications started making use of that functionality (even if undocumented), then I think you'd need a very good reason to change that behaviour. But since I can't imagine any sane system ever doing that, no, I'm not proposing that we need a user contract on that point.

Contrast that to the ext3 behaviour. While, yes, the behaviour of fsync() on ext3 did result in people being even less likely to use it, the fact that ext3 also made it possible to overwrite a file without having to go through a cumbersome sequence of fsync()ing both the data and the directory made it attractive to application writers. That behaviour dates back far beyond Firefox 3, as demonstrated by people's long-term complaints about XFS leaving their files full of zeros after a crash. ext4 now provides the same ease of use because people made it clear that they weren't going to use it otherwise. Future Linux filesystems are effectively forced to provide the same semantics, which is a good thing.

Optimizations and undefined behavior

Posted Jul 22, 2009 3:28 UTC (Wed) by BrucePerens (guest, #2510) [Link]

Sigh. Good developers are still going to create a temp file, write, fsync the file, link it to the permanent name, and unlink the temp. Fsync on the directory, though, shouldn't be necessary.
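
For concreteness, a minimal sketch of that sequence (error handling is abbreviated and the file names are made up; rename() performs the link-to-permanent-name/unlink-temp step atomically):

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Write new contents to a temporary file, fsync() it, then move it
     * over the permanent name.  Whether the directory itself must also
     * be fsync()ed for the rename to be durable is exactly the point
     * being argued here. */
    static int replace_file(const char *data, size_t len)
    {
        int fd = open("config.tmp", O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0)
            return -1;
        if (write(fd, data, len) != (ssize_t)len || fsync(fd) != 0) {
            close(fd);
            unlink("config.tmp");
            return -1;
        }
        if (close(fd) != 0)
            return -1;
        return rename("config.tmp", "config");
    }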

Optimizations and undefined behavior

Posted Jul 22, 2009 4:16 UTC (Wed) by mjg59 (subscriber, #23239) [Link]

Good developers aren't going to have to - good operating systems will provide guarantees above and beyond POSIX. Operating systems that don't will end up poorly supported and gradually become irrelevant.

Optimizations and undefined behavior

Posted Jul 23, 2009 5:07 UTC (Thu) by johnflux (guest, #58833) [Link]

I hope not. Good developers will realize that fsync is complete overkill there. They don't need to wait for the changes to actually be made to disk before continuing - they only need to make sure that the changes happen in the right order.

Optimizations and undefined behavior

Posted Jul 30, 2009 13:38 UTC (Thu) by forthy (guest, #1525) [Link]

The problem with fsync() is that its semantics very much resemble those of "PLEASE" in INTERCAL, which means it is a joke. fsync() basically has no semantics, except "make it so" (make it persistent now). Now, all file system operations are persistent anyway, just not made persistent now. You can't properly test for this (that is, in an automated way), because to test whether there's a missing fsync() you have to force an unexpected reboot and then check whether any data is missing. What's worse: a number of popular Unix programming languages don't even have fsync(), starting with all kinds of shell scripts. fsync() is a dirty hack introduced into Unix because of broken (but extremely fast) file system implementations.

We have known for quite some time that Unix is a joke, and parts of the API like fsync() show that this is not so far from the truth ;-). From the kernel development side it is always "easier" to maintain a sloppy specification and blame the loser, but that's the wrong thinking. You are providing a service. Same thing for GCC: compiler writers provide a service. Using a sloppy specification to justify questionable "optimizations" is wrong as well. If the compiler writer can't know that the code really will break when accessing the NULL pointer, then he can't take the test out after having accessed an object. I remember GCC taking out tests like if(x+n>x), because signed overflow is said to be undefined in the C language, even when compiling for a machine where overflow was very specifically handled as wraparound in two's complement representation. This is all wrong thinking.
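
A hedged sketch of the NULL-pointer pattern referred to here (the names are invented for illustration; this mirrors the general bug class from the parent article, not the actual kernel code):

    #include <stddef.h>

    struct ops { void (*close)(void); };

    void do_close(struct ops *o)
    {
        void (*f)(void) = o->close;   /* o is dereferenced first */

        /* Because o has already been dereferenced, the compiler may
         * assume it is non-NULL and delete this check.  That is harmless
         * where a NULL dereference always faults, but not in a kernel
         * where page zero can be mapped from user space. */
        if (o == NULL)
            return;

        f();
    }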

Optimizations and undefined behavior

Posted Jul 30, 2009 15:04 UTC (Thu) by foom (subscriber, #14868) [Link]

> I remember GCC taking out tests like if(x+n>x)

Still does. You can use -fwrapv if you want to tell it that signed integer overflow should be defined as wrapping.
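
A small sketch of the kind of test in question (a standalone example with int operands, not code from the thread):

    #include <limits.h>

    /* Intended to mean "n is positive and x + n did not wrap around".
     * Because signed overflow is undefined behaviour, GCC may assume
     * x + n cannot wrap and fold this to (n > 0), so it no longer
     * detects overflow at all; -fwrapv restores wrapping semantics. */
    int add_stays_in_range_naive(int x, int n)
    {
        return x + n > x;
    }

    /* An overflow check written without relying on undefined behaviour:
     * true exactly when x + n is representable as an int. */
    int add_stays_in_range(int x, int n)
    {
        return (n > 0) ? (x <= INT_MAX - n) : (x >= INT_MIN - n);
    }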

Optimizations and undefined behavior

Posted Jul 30, 2009 21:31 UTC (Thu) by nix (subscriber, #2304) [Link]

fsync(), brought to you by the same people who thought up 'volatile', another equally impossible-to-define-except-by-reference-to-implementation feature.

Optimizations and undefined behavior

Posted Jul 30, 2009 13:31 UTC (Thu) by lysse (guest, #3190) [Link]

> "Implementation dependent" != `cat /dev/urandom`

Wasn't that where Bruce came in...? ;)

