Posted Mar 19, 2009 18:06 UTC (Thu) by zmi (guest, #4829)
Parent article: Better than POSIX?
Wonderful. There is a new filesystem (ext4) which behaves like most other
modern filesystems (XFS, reiserfs, reiser4) in that on a crash it can
trash data. But just because this one's name is similar to ext3, people
want it to behave the same.
The problem is that if ext4 keeps behaving like ext3, applications will
never get fixed. But hard disks already use 32MB caches, and people use
on-board RAID controllers. Imagine you have a RAID over 4 hard disks with
32MB cache each - on a power outage, you lose up to 128MB of data just
from the disks, and there's nothing any filesystem can do about it (short
of turning off the disks' write caches). There's the *absolutely false*
assumption that your data is safe once you fclose(). But without an
fsync, it's not.
In the end it would have been better to fix applications. Otherwise other
filesystems, and newer ones with even more advanced features, will always
be told to "eat your data", while really it's not their fault.
What you currently have is this: use ext3/ext4, or otherwise you can lose
data. Not because the other filesystems are crap, but because application
developers don't (need to) care. They use ext3/4 and people will continue
to say that it's much better. The typical half-truth we often see in IT,
and that often leads to bad systems. Think of computers controlling
equipment like trains, cars, or atomic reactors. "Oh, that one melted
because the filesystem ate its data" is not a good answer after all. A
good application would have checked it. Seems like more and more people
believe that if it works 95% of the time, that's enough. Until that
computer-controlled system fails because of a computer problem. Then it
should have been working 100% of the time.
(Yes, I really feel nervous that we start to trust computers where you