Indeed. I use and maintain scripts using mktemp and mv on a regular basis. Some of them *also* use sed -i (shudder).
This is *the* way to achieve atomicity on Unix. It always was.
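For anyone who hasn't seen the idiom, here is a minimal sketch (file names are illustrative): write the complete new contents to a temp file in the *same* directory, then rename it over the target.

```shell
#!/bin/sh
# Sketch of the mktemp + mv atomic-replace idiom. Names are illustrative.
set -e
dir=$(mktemp -d)            # scratch directory so the example is self-contained
cd "$dir"
target="config.txt"
printf 'old contents\n' > "$target"

# Write the full new contents to a temp file in the SAME directory,
# so mv is a rename(2), not a cross-filesystem copy.
tmp=$(mktemp "$target.XXXXXX")
printf 'new contents\n' > "$tmp"
mv "$tmp" "$target"         # atomic: readers see old or new, never a mix

cat "$target"               # prints "new contents"
```

The key point is that rename(2) within one filesystem is atomic, so at no instant does any reader observe a half-written file.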
We didn't always have journaling filesystems, and we never used to expect anything at all to work after a crash. Crashes happened, and they often meant hours of lost work, possibly reinstalling everything.
To discover that ext3 data=ordered just kept on working after a crash was a real eye-opener for me. I never realised before that such robustness was possible (just like I never realised prior to Linux that I could afford my own Unix box) and I am not about to relinquish it! I'm sure I speak for many users here. It *was* an unexpected nice-to-have when we first got it; but it has become a solid requirement.
Delayed allocation without an implicit write barrier before renaming a newly-written file virtually guarantees data loss after a crash with existing applications. It is therefore a regression from the status quo, albeit to something somewhat better than where we were a few years back.
Kudos and thanks to Ted for implementing this must-have write barrier (and also for making it less likely that unsafe truncate-and-write hits the inevitable race condition, though IMO he's quite right that applications doing that are broken).
I just wish he wouldn't keep insisting that fsync at application level is the right way to achieve what we want.
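For the record, the application-level fix amounts to an fsync of the temp file between the write and the rename. In a script, GNU coreutils' sync(1) can stand in for fsync on a single file — but note that syncing individual files needs coreutils 8.24 or later; names here are illustrative:

```shell
#!/bin/sh
# The "do it yourself" variant: fsync the temp file before renaming it.
set -e
dir=$(mktemp -d)
cd "$dir"
target="config.txt"
tmp=$(mktemp "$target.XXXXXX")
printf 'new contents\n' > "$tmp"
sync "$tmp"           # flush the temp file's blocks to disk (coreutils 8.24+)...
mv "$tmp" "$target"   # ...before the rename makes it visible
```

Which is exactly the boilerplate I'd rather the filesystem made unnecessary by ordering the data writes before the rename itself.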
POSIX and fsync have nothing to do with it (any journaling filesystem provides much more than POSIX), nor do application authors who forgot to think about recovery after an OS crash. A journaling filesystem *can* guarantee atomic renames, so it *should*, for the sake of users' sanity, not for the sake of a standards document.
Standards can be updated to follow best practice. They often do.