
An md/raid6 data corruption bug


Posted Feb 13, 2015 16:46 UTC (Fri) by ragaar (guest, #101043)
In reply to: An md/raid6 data corruption bug by dlang
Parent article: An md/raid6 data corruption bug

Your decimal places seem to be slightly off in the RAID6 scenario [3 drive failure].

fail_week = 0.002;
# n = Number of drives

fail_each_week(n) = n*fail_week;
# NOTE: You can multiply failure by 100 to see % failure

# Pseudo-code
Y = 1
Foreach x in (4, 3, 2)
    Y *= fail_each_week(x)
End

Y = ((4*0.002)*(3*0.002)*(2*0.002)) = 1.92e-07 ≈ 2e-07
Y * 100 ≈ 2e-05%
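The loop above can be sketched as a few lines of Python (a minimal sketch, using the 0.002 per-drive weekly failure rate assumed in the parent post's scenario):

```python
# Assumed per-drive weekly failure rate from the parent post's scenario.
FAIL_WEEK = 0.002

def fail_each_week(n):
    """Chance that one of n surviving drives fails this week."""
    return n * FAIL_WEEK

# RAID6 [3 drive failure]: multiply the per-step failure chances
# as the array goes from 4 drives, to 3, to 2.
y = 1.0
for x in (4, 3, 2):
    y *= fail_each_week(x)

print(f"{y:.2e}")        # → 1.92e-07
print(f"{y * 100:.2e}")  # → 1.92e-05 (as a percentage)
```

This reproduces the 1.92e-07 figure above, confirming that the percentage form is 2e-05%, not 2e-07%.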

We're amongst friends so rounding to 2 is fine, but [as of Feb 2015] there seem to be a couple of extra decimal places in your post.
The RAID6 scenario [3 drive failure] shows 2e-07% as opposed to 2e-05%.

Side comment:
Thanks for consolidating this information. This was the only post I've found that combines HDD failure rates on yearly/monthly/weekly intervals, lays out the basic statistical math, and describes the intent behind each step.

It is well-thought-out posts like this that help make the internet better. You get a +1 (gold star) vote from me :)




Copyright © 2026, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds