Posted Mar 16, 2009 0:26 UTC (Mon) by khim
In reply to: Atomicity vs durability
Parent article: Ts'o: Delayed allocation and the zero-length file problem
> Yes, Linux systems do crash on occasion. It is thankfully very rare.
Depends on what hardware and what kind of drivers you have.
> Want performance? Then you have to do tricks with caching/buffering of data; disks are horribly _slow_ when compared to your processor or memory.
The problem is: a fast filesystem is useless if it can't keep my data safe. Microsoft knows this - that's why you don't need to explicitly unmount a flash drive there. Yes, the cost is huge: flash wears down faster and the speed is horrible - but anything else is unacceptable. Oh, and I know A LOT OF users who just turn the computer off at the end of the day. This problem is somewhat mitigated by the design of current systems (the "power off" button is actually a "shutdown" button), but people find ways to cope: they just switch off the power to the whole desk.
The same thing applies to developers. They are lazy. Most application writers do not use fsync and do not check the error code from close. Yet if data is lost, the OS will be blamed. Is that fair to the OS and FS developers? Not at all! Can it be changed? Nope. Life is unfair - deal with it.
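As a minimal sketch of what that checking looks like (the file name and data are hypothetical, and this is one reasonable pattern rather than the only one), here is a write that does not trust the data until write(), fsync() and close() have all succeeded:

    /* Hypothetical sketch: write data and verify it reached the disk.
       POSIX only; error handling kept deliberately terse. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        const char *data = "valuable user data\n";
        int fd = open("settings.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0) { perror("open"); return EXIT_FAILURE; }

        if (write(fd, data, strlen(data)) != (ssize_t)strlen(data)) {
            perror("write");          /* short write or I/O error */
            close(fd);
            return EXIT_FAILURE;
        }
        if (fsync(fd) != 0) {         /* force the data out of the cache */
            perror("fsync");
            close(fd);
            return EXIT_FAILURE;
        }
        if (close(fd) != 0) {         /* close() can report deferred errors */
            perror("close");
            return EXIT_FAILURE;
        }
        return EXIT_SUCCESS;
    }

The close() check is not paranoia: on some filesystems (NFS, for example) a write error may only surface there.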
The whining started when it was found that the new filesystem can lose valuable data where ext3 never does (ext3 can lose it with O_TRUNC, but not with rename; the two idioms are contrasted in the sketch at the end of this comment). This is a pretty serious regression for most people. The approach of "let's fix thousands upon thousands of applications" (including proprietary ones) was thankfully rejected. That is a good sign: it means Linux is almost ready to be usable by normal people. The last time such a problem happened (the OSS->ALSA switch), the offered solution was beyond the pale.
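To make the O_TRUNC vs. rename distinction concrete, here is a sketch of the two idioms (file names hypothetical, error checking elided for brevity). The first leaves a window in which a crash produces a zero-length file; the second relies on rename() being atomic, so readers see either the complete old contents or the complete new ones:

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    /* Idiom 1: rewrite in place. Between the O_TRUNC and the data actually
       reaching the disk, a crash leaves the file zero-length. */
    static void rewrite_in_place(const char *path, const char *data)
    {
        int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
        write(fd, data, strlen(data));
        close(fd);
    }

    /* Idiom 2: write a temporary file, then rename() it over the original.
       The fsync() before the rename makes the new contents durable, so the
       replacement is safe even without ext3's data-ordering behaviour. */
    static void replace_via_rename(const char *path, const char *tmp,
                                   const char *data)
    {
        int fd = open(tmp, O_WRONLY | O_CREAT | O_TRUNC, 0644);
        write(fd, data, strlen(data));
        fsync(fd);
        close(fd);
        rename(tmp, path);
    }

    int main(void)
    {
        rewrite_in_place("settings.txt", "new config\n");
        replace_via_rename("settings.txt", "settings.txt.tmp", "new config\n");
        return 0;
    }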