
RAID 5/6 code merged into Btrfs


Posted Feb 4, 2013 18:57 UTC (Mon) by drag (subscriber, #31333)
In reply to: RAID 5/6 code merged into Btrfs by butlerm
Parent article: RAID 5/6 code merged into Btrfs

Wouldn't Btrfs's 'COW' design simply mean that partial writes are just discarded? Barring any foolishness in the hardware, the filesystem should always remain in a consistent state regardless of crashes or whatnot. At worst you might have to roll back a commit or two to get to a good state.
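(The intuition here can be sketched in a few lines. This is a toy model, not Btrfs code: writes land in previously unused space, and only the final atomic update of the "current root" pointer makes them visible, so a crash mid-write leaves the old state intact.)

```python
# Toy model of copy-on-write commit semantics (illustrative only, not Btrfs):
# new data is written to free space first, then a single pointer flip commits it.

store = {"root": "tree_v1", "tree_v1": "old data"}

def cow_write(new_data: str) -> None:
    store["tree_v2"] = new_data   # 1. write the new tree into unused space
    # --- a crash here leaves store["root"] == "tree_v1": still consistent ---
    store["root"] = "tree_v2"     # 2. atomically commit by flipping the root

cow_write("new data")
assert store[store["root"]] == "new data"   # after commit, new data is visible
```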



RAID 5/6 code merged into Btrfs

Posted Feb 4, 2013 19:57 UTC (Mon) by masoncl (subscriber, #47138) [Link]

Almost. Since the implementation allows us to share stripes between different extents, you might have a shiny new extent going into the same stripe as an old extent.

In this case you need to protect the parity for the old extent just in case we crash while we're rewriting the parity blocks.
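(The hazard being guarded against is the classic RAID-5 "write hole". A toy sketch, with made-up 3-disk stripe contents rather than anything from the Btrfs code: if a new extent lands in a stripe shared with an old extent, and a crash hits after the data block is written but before the parity block is, the stale parity silently corrupts the *old* extent on a later rebuild.)

```python
# Toy RAID-5 write-hole demo (illustrative values, not Btrfs internals).
# A 3-disk stripe: two data blocks plus their XOR parity.

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

old_extent = b"AAAA"                 # existing extent already in the stripe
free_block = b"\x00" * 4             # block a new extent will be written into
parity = xor(old_extent, free_block) # parity covering the stripe before the write

new_extent = b"BBBB"                 # freshly written extent sharing the stripe
# Crash window: the new data block reached disk, the updated parity did not.
data_on_disk = new_extent
parity_on_disk = parity              # stale parity survives the crash

# Now lose the disk holding old_extent and rebuild it from data + parity:
rebuilt = xor(data_on_disk, parity_on_disk)
assert rebuilt != old_extent         # the untouched old extent comes back corrupted
```

This is why the parity for the old extent has to be protected (e.g. logged) across the update, even though the new extent itself was written copy-on-write.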

RAID 5/6 code merged into Btrfs

Posted Feb 4, 2013 23:51 UTC (Mon) by butlerm (guest, #13312) [Link]

> Wouldn't Btrfs's 'COW' design simply mean that partial writes are just discarded?

That only works if you always write a full stripe, which is generally not the case. ZFS uses variable stripe sizes to achieve this for data writes, but the minimum stripe size tends to be rather large, depending on how many disks you have in your RAID set.

If you spread every filesystem block across all disks the way ZFS does, random read performance suffers dramatically. Every disk has to participate in every uncached data read. Minimum FS block sizes go up with the number of disks, and so on.
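(The arithmetic behind this is simple. A rough sketch with assumed numbers, not figures from the ZFS source: if every filesystem block spans all data disks, the minimum block size scales with the disk count, and every uncached read touches every spindle.)

```python
# Back-of-the-envelope cost of spreading each FS block across all disks.
# SECTOR is an assumed 4 KiB physical sector size.

SECTOR = 4096

def min_fs_block(data_disks: int) -> int:
    """Smallest possible block when every block spans all data disks."""
    return data_disks * SECTOR

def disks_touched_per_read(data_disks: int) -> int:
    """Every uncached read must fetch a fragment from each data disk."""
    return data_disks

# An 8+1 RAID set: one uncached read keeps 8 spindles busy, and the
# minimum filesystem block balloons to 32 KiB.
print(min_fs_block(8), disks_touched_per_read(8))  # → 32768 8
```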


Copyright © 2017, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds