This is what first came to my mind, but if the data has been written while the metadata
describing it has been discarded, the new data could be misinterpreted as whatever the old
metadata said it was (e.g. treated as more metadata pointing to blocks on the disk when it's
actually an image). I guess the solution here would be to zero any pointers to the metadata
first (or set a 'corrupt' or 'deleted' flag on the metadata sector itself) and make sure that
write has reached the disk before writing the data. Of course this can slow things down, since
you have to write to the metadata block an extra time per update.
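A minimal sketch of that ordering, using ordinary files and `fsync` as a stand-in for real block writes (the file names, flag string, and function are all illustrative, not from any actual filesystem):

```python
import os

def overwrite_blocks(meta_path, data_path, new_data):
    # Step 1: flag the metadata as stale and force that write to stable
    # storage BEFORE touching the data blocks it points to.
    with open(meta_path, "r+b") as meta:
        meta.seek(0)
        meta.write(b"DELETED ")        # the 'corrupt'/'deleted' flag
        meta.flush()
        os.fsync(meta.fileno())        # barrier: flag reaches disk first
    # Step 2: only now reuse the data blocks. A crash at any point here
    # leaves flagged metadata, never a live pointer to foreign data.
    with open(data_path, "wb") as data:
        data.write(new_data)
        data.flush()
        os.fsync(data.fileno())
```

The extra metadata write and the `fsync` barrier between the two steps are exactly where the slowdown comes from: every update costs an additional flagging write that must complete before the data write may begin.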
I think the snapshotting way is the only way forward; if you never get rid of something until
you're certain the new one works (i.e., has completely reached the disk), then it doesn't
matter what you do or when... you'll always have at least one working version. Large writes
would start failing when your disk is nearing full, but with today's drive sizes, we're more
concerned with losing 500G of data than with filling the drive.
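The same never-overwrite-in-place idea shows up at the file level as the classic write-to-a-side-file-then-rename pattern; a sketch (the `.new` naming is just an assumption for illustration, and `os.replace` is atomic on POSIX filesystems):

```python
import os

def snapshot_update(path, new_bytes):
    # Write the new version somewhere else first; the old version stays
    # untouched and readable the whole time.
    tmp = path + ".new"                # illustrative side-file name
    with open(tmp, "wb") as f:
        f.write(new_bytes)
        f.flush()
        os.fsync(f.fileno())           # new version completely on disk
    # Atomic switch-over: after a crash you see either the old file or
    # the new one, never a half-written mix.
    os.replace(tmp, path)
```

Note the space cost this paragraph mentions: during the update both versions exist at once, which is why large writes start failing as the disk fills.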