The 2006 Linux File Systems Workshop
Posted Oct 19, 2006 0:02 UTC (Thu) by eatsapizza
In reply to: The 2006 Linux File Systems Workshop
Parent article: The 2006 Linux File Systems Workshop
How do folks resolve the difference between a disk's MTBF and its
bit error rate?
I've read of MTBFs of about 1M hours, and bit error rates of about
1E-15 per bit (so the mean bits between failures is 1E15).
The reliability of a drive is R = exp(-t/MTBF), so the reliability
of a drive over a year is about:
R = exp(-8760/1E6) = ~99%
But if you're reading from your disk at even, say, 1MB/s
(about 1 to 3% of its possible read rate),
you would read about 2.523E14 bits in a year:
R = exp(-2.523E14/1E15) = 77.7% ... a very different result, and
that's assuming the drive isn't really that busy.
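To make the comparison concrete, here's a small sketch of both calculations; the figures (1M-hour MTBF, 1E-15 bit error rate, 1MB/s sustained reads) are the ones quoted above, not measurements:

```python
import math

MTBF_HOURS = 1e6            # quoted drive MTBF (~1 million hours)
BITS_PER_FAILURE = 1e15     # 1 / (bit error rate of 1e-15 per bit)
HOURS_PER_YEAR = 8760
SECONDS_PER_YEAR = 365 * 24 * 3600

# Reliability over one year from the MTBF alone: R = exp(-t/MTBF)
r_mtbf = math.exp(-HOURS_PER_YEAR / MTBF_HOURS)

# Reliability over one year from bit errors at a sustained 1 MB/s read rate
bits_read = 1e6 * 8 * SECONDS_PER_YEAR            # ~2.52e14 bits/year
r_ber = math.exp(-bits_read / BITS_PER_FAILURE)

print(f"R from MTBF:       {r_mtbf:.3f}")   # ~0.991
print(f"R from bit errors: {r_ber:.3f}")    # ~0.777
```

Both use the same exponential reliability model; the bit-error figure dominates simply because a busy drive moves far more bits per year than its MTBF "clock" accumulates hours.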
I know there is RAID 0+1, RAID5, and RAID6 (6+2), all of which
make things better, but how can the single-disk result be so
different?