
SUSE reaffirms support for Btrfs

Posted Aug 25, 2017 19:17 UTC (Fri) by nybble41 (subscriber, #55106)
In reply to: SUSE reaffirms support for Btrfs by drag
Parent article: SUSE reaffirms support for Btrfs

While I agree with you regarding RAID5, with a four-disk RAID6 parity scheme the loss of _any_ two drives is recoverable. With RAID10 on the same array you have a 1/3 chance of losing access to half your data when a second drive fails (i.e. when it happens to be the mirror of the first drive that failed). On the other hand, RAID10 has the advantage in speed: while RAID6 can perform two reads in parallel from any part of the array, RAID10 can perform at least two reads (one per mirror) and possibly up to four (one per disk) at a time, depending on which parts of the array are being read. Similarly, a RAID6 write touches all four drives whereas RAID10 only needs to update two, potentially allowing up to double the write throughput if writes are distributed across the mirrors.
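To make the failure arithmetic concrete, here is a quick sketch (Python; the names are mine and purely illustrative) enumerating all six possible two-drive failures on a four-disk array:

    from itertools import combinations

    DISKS = [0, 1, 2, 3]
    MIRROR_PAIRS = [{0, 1}, {2, 3}]   # nested RAID10: two mirror pairs

    def raid10_loses_data(failed):
        # Data is gone iff both halves of some mirror pair failed.
        return any(pair <= failed for pair in MIRROR_PAIRS)

    failures = [set(f) for f in combinations(DISKS, 2)]
    bad = [f for f in failures if raid10_loses_data(f)]
    print(len(bad), "of", len(failures), "two-drive failures lose data")
    # -> 2 of 6, i.e. the 1/3 chance above. RAID6 survives all six,
    #    since it can reconstruct any two failed drives.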



SUSE reaffirms support for Btrfs

Posted Aug 28, 2017 15:07 UTC (Mon) by Wol (subscriber, #4433)

Just watch out, though: as far as Linux is concerned, raid-10 and raid-1+0 are two different beasts. For example, you only need three drives for raid-10, but four for raid-1+0.
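A toy illustration of why three drives suffice (my own simplification of md's "near-2" layout, not lifted from the kernel source):

    # Copy j of chunk c goes in linear slot 2*c + j, filled
    # row-major across the drives.
    def near2(chunks, disks=3):
        for c in range(chunks):
            where = [((2 * c + j) % disks, (2 * c + j) // disks)
                     for j in (0, 1)]
            print("chunk", c, "-> (disk, row):", where)

    near2(3)
    # chunk 0 -> (disk, row): [(0, 0), (1, 0)]
    # chunk 1 -> (disk, row): [(2, 0), (0, 1)]
    # chunk 2 -> (disk, row): [(1, 1), (2, 1)]
    # Every chunk keeps two copies on two different drives, with
    # only three drives in the array.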

Cheers,
Wol

SUSE reaffirms support for Btrfs

Posted Aug 28, 2017 17:26 UTC (Mon) by nybble41 (subscriber, #55106)

> you only need three drives for raid-10

Right, I was writing from the perspective of 'nested' RAID10, not the 'complex' version. With 'complex' RAID10 you would have guaranteed partial data loss on the failure of any two drives, since the mirrored stripes are distributed across all the drives. For an array of four uniformly-sized disks, as we've been discussing (which doesn't really need to be 'complex'), the two failed drives would be the sole mirrors for roughly one-sixth of the stripes: whichever two drives fail form one of the six possible drive pairs.
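A quick sanity check of that fraction (a sketch only: it assumes mirror pairs are spread uniformly over all six drive pairs, which real md layouts don't do exactly):

    from itertools import combinations

    pairs = list(combinations(range(4), 2))   # 6 possible mirror pairs

    # Assume each stripe's two copies land on one of the six drive
    # pairs with equal probability.
    for failed in pairs:
        lost = sum(p == failed for p in pairs) / len(pairs)
        print("drives", failed, "fail ->", f"{lost:.1%}", "of stripes lost")
    # Every line prints 16.7%: some loss is guaranteed no matter
    # which two drives die.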

Whether a guaranteed ~17% data loss (given a second drive failure) is better or worse than a 1/3 chance of losing 50% of the data will, I suppose, depend on your use case. (The expected loss works out to one-sixth either way; the difference is a certain small loss versus a gamble on a large one.) However, do consider that if that sixth includes critical metadata (and it probably will) then the remainder, while technically still "present", may be unrecoverable in practice.

