
An md/raid6 data corruption bug

Posted Aug 25, 2014 20:39 UTC (Mon) by mathstuf (subscriber, #69389)
In reply to: An md/raid6 data corruption bug by dlang
Parent article: An md/raid6 data corruption bug

> If drives have a 12% chance of dying each year (rough figure from the big studies several years back), that's 1% a month, or ~0.2% per week per drive

Is that a 12% chance for each drive, or 12% of drives expected to die (on second thought… is there a difference?)? If the latter, did you get the 1% per month from 12% / 12, or from 1 - (1 - 0.12)^(1/12) == 1.06%? The latter seems more accurate, but that's mainly my gut feeling here. As an example, a 50% annual failure rate becomes 5.6% per month instead of 4.17%, because you expect ~94.4% to survive each month until you're left with 50% still around. Then again, drive failures are independent (…ish), so maybe straight division is better there. Anyway, leaving it here for a second thought.
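The two conversions being compared can be sketched quickly; this is just illustrative arithmetic, with the 12% and 50% annual rates taken from the comment above:

```python
# Two ways to turn an annual failure probability into a monthly one:
# straight division vs. the compounding (geometric) conversion.

def monthly_simple(annual):
    """Naive: divide the annual probability by 12."""
    return annual / 12

def monthly_compound(annual):
    """Geometric: monthly rate p such that (1 - p)**12 == 1 - annual."""
    return 1 - (1 - annual) ** (1 / 12)

for annual in (0.12, 0.50):
    print(f"{annual:.0%} annual: simple {monthly_simple(annual):.2%}, "
          f"compound {monthly_compound(annual):.2%}")
# 12% annual: simple 1.00%, compound 1.06%
# 50% annual: simple 4.17%, compound 5.61%
```

For small annual probabilities the two answers are nearly identical (1.00% vs. 1.06%), which is why the gap only becomes noticeable at rates like 50%.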


An md/raid6 data corruption bug

Posted Aug 25, 2014 20:45 UTC (Mon) by dlang (guest, #313)

and this is why I show the math :-)

It was intended to be 12% of drives dying in any given year, so 1% of drives die in any given month, or ~0.2% of drives die in any given week (assuming that the failures are genuinely independent, not a latent manufacturing defect that will affect all drives of a class)


Copyright © 2026, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds