I thought the same thing as you at first, but having thought about it more, I'm no longer sure. To provide completely unsurprising behavior, i.e. the expected inter-process POSIX semantics between pre-crash and post-crash processes, you would need to infer a write barrier between every pair of IO operations issued by a process. That may lead to far too much serialization of IO for the typical desktop workload.
So, is there an appropriate set of heuristics for inferring write barriers sometimes but not always? The specific case in this discussion would be something like "complete any pending writes to a file's contents before committing metadata operations that affect the linkage of that file's inode". Is that sufficient, and is it defensible?
Ideally, we should have POSIX write-barriers that can be applied to a set of open file and directory handles, and use them to get the proper level of ordering across crashes. The fsync solution is far too blunt an instrument to provide the transactionality that everyone is looking for when they relink newly created files into place.
But then what about all those shell scripts out there which do "foo > file.tmp && mv file.tmp file"? We would need a new write-barrier operation applicable from the shell script (somehow selecting partial ordering of requests issued from separate processes), or a heuristic write-barrier as above...