
Runtime filesystem consistency checking

Posted Apr 3, 2012 18:51 UTC (Tue) by dcg (subscriber, #9198)
Parent article: Runtime filesystem consistency checking

I am surprised that the "integrity check" btrfs feature included in 3.3 wasn't mentioned. It seems very similar in concept (see the comments in http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-...).



Runtime filesystem consistency checking

Posted Apr 4, 2012 6:58 UTC (Wed) by hisdad (subscriber, #5375) [Link] (2 responses)

btrfs more reliable?
Not for me. I have an ML110 G6 with a P212 RAID controller.
That should be either all-OK or all-fail, right? Nope. One disk had a partial failure that somehow got through the controller and corrupted the filesystems. btrfsck happily tells me there are errors; when I look for the "-f" option, it just smiles at me. The ext4 partitions were recoverable.

Oh yes we need better metadata, but also background scrubs.
--Dad

Runtime filesystem consistency checking

Posted Apr 7, 2012 15:24 UTC (Sat) by dcg (subscriber, #9198) [Link]

I just pointed out the existence of an integrity checker; I never said btrfs was more reliable.

Runtime filesystem consistency checking

Posted Apr 13, 2012 6:04 UTC (Fri) by Duncan (guest, #6647) [Link]

Btrfs is marked experimental for a reason, and it's public knowledge that until recently (late in the 3.3 kernel cycle) the available fsck was read-only, with no fix option. AFAIK a version with a fix option is available now, but ATM only in the dangerdonteveruse branch of Chris's btrfs-tools tree, not yet even in the main master branch, as it's being tested to ensure that, at worst, it doesn't make any problems WORSE instead of better.

IOW, as both the wiki and the kernel options clearly state, and as even a perfunctory scan of recent list postings shows, btrfs is clearly experimental and ATM only suitable for test data that is either entirely throwaway or available elsewhere. A responsible admin will always have backups, even on production-quality filesystems, but there, backups are backups: the primary copy can be the copy on the working filesystem.

Things are rather different for btrfs, or any filesystem at its stage of development. The primary copy should be thought of as existing on some non-btrfs volume, backed up as any data of value should be, and anything placed on btrfs is purely testing data. If it happens to be there the next time you access it, particularly after the next umount/mount cycle, great. Otherwise, you have a bug, and as a tester of an experimental filesystem you should be following the development list closely enough to know whether it has been reported yet, and you have a responsibility to try to trace and report it, including running various debug patches etc. as may be requested. If you're not actively testing it, what are you doing running such an experimental, actively developed filesystem in the first place?

So /of/ /course/ btrfs is not likely to be more reliable at this point. It's still experimental, and while it is /beginning/ to stabilize, development and debugging activity is still very high, features are still being added, and it only recently got a writing fsck at all, one that's still in dangerdonteveruse for a reason. As a volunteer tester of a filesystem in that state, if it's as stable as a mature filesystem for you, you probably simply aren't pushing it hard enough in your tests! =:^)

But of course every filesystem was once in a similar state, and unlike many earlier filesystems, btrfs has been designed from the ground up with data/metadata checksumming and other reliability features as primary design goals. So by the time it matures and the experimental label comes off in mainline (arguably before that, as kernel folks are generally quite conservative about removing such labels), it should be well beyond ext* in terms of reliability, at least when run with the defaults (many features, including checksumming, can be turned off; it can be run without the default metadata duplication; etc.).
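The data/metadata checksumming mentioned above works by storing a checksum alongside each block and verifying it on every read, so silent corruption surfaces as an error instead of bad data. Here is a minimal sketch of that idea in Python; note that btrfs actually uses CRC32C and stores checksums in dedicated metadata, while this illustration uses zlib.crc32 and an in-memory dict as stand-ins.

```python
import zlib

def checksum(block: bytes) -> int:
    # btrfs uses CRC32C (Castagnoli); plain zlib.crc32 is a stand-in here.
    return zlib.crc32(block)

def write_block(store: dict, n: int, data: bytes) -> None:
    """Store a block together with its checksum, as a checksumming fs would."""
    store[n] = (data, checksum(data))

def read_block(store: dict, n: int) -> bytes:
    """Verify the checksum on read; a mismatch means silent corruption."""
    data, stored = store[n]
    if checksum(data) != stored:
        raise IOError(f"checksum mismatch in block {n}")
    return data

if __name__ == "__main__":
    store = {}
    write_block(store, 0, b"hello " * 100)
    read_block(store, 0)  # verifies cleanly

    # Simulate the disk flipping a bit underneath the filesystem:
    data, csum = store[0]
    store[0] = (b"hellO " + data[6:], csum)
    try:
        read_block(store, 0)
    except IOError as e:
        print(e)  # the corruption is caught instead of returned as valid data
```

Without the stored checksum, the corrupted read would have returned plausible-looking bytes; with it, the error is detected at the point of access, which is what lets a checksumming filesystem (or a background scrub) notice damage like the partial disk failure described earlier in the thread.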

So while btrfs, as a still-experimental filesystem fit only for testing, /of/ /course/ doesn't yet have the reliability of ext3/4, it will get there and then surpass them. At a maturity similar to theirs now, it should indeed be better than they are now, or even than they will be then, for general-purpose reliability, anyway.

Duncan (who has been following the btrfs list for a few weeks now, and all too often sees posts from folks wondering how to recover data from a now-unmountable but very publicly testing-quality-only filesystem that should never have held anything but throwaway-quality testing data in the first place.)

Runtime filesystem consistency checking

Posted Apr 4, 2012 13:21 UTC (Wed) by ashvin (guest, #83894) [Link]

We are aware of the btrfs integrity checker and plan to incorporate some ideas from it into Recon. There are two main differences between the btrfs integrity checker and Recon right now. First, the btrfs integrity checker depends on the filesystem code, while Recon works outside it, at the block layer; a bug in the filesystem may affect the integrity checker, while the hope is that bugs in the filesystem and in Recon will be unrelated. Second, the integrity checker mainly checks that updated blocks are destined for the correct locations, while Recon checks the contents of the updated blocks.
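The block-layer approach described above can be sketched as a checker that sits between the filesystem and the disk, parsing each updated metadata block and verifying a consistency invariant before the write is allowed through. The sketch below is purely illustrative: the block format (a count header followed by sorted 32-bit keys) and the function names are invented for this example, whereas Recon itself interprets real filesystem metadata formats.

```python
import struct

def parse_keys(block: bytes) -> list:
    """Decode an illustrative metadata block: a little-endian count,
    then that many 32-bit keys."""
    (count,) = struct.unpack_from("<I", block, 0)
    return [struct.unpack_from("<I", block, 4 + 4 * i)[0] for i in range(count)]

def check_update(block: bytes) -> bool:
    """Consistency invariant for this toy format: keys strictly increasing,
    as they would be in a B-tree node."""
    keys = parse_keys(block)
    return all(a < b for a, b in zip(keys, keys[1:]))

def submit_write(disk: dict, lba: int, block: bytes) -> None:
    # The checker sits below the filesystem, so a filesystem bug that
    # produces a malformed block is caught here, independently of fs code.
    if not check_update(block):
        raise IOError(f"consistency violation in block destined for LBA {lba}")
    disk[lba] = block

if __name__ == "__main__":
    good = struct.pack("<I3I", 3, 10, 20, 30)
    bad = struct.pack("<I3I", 3, 10, 30, 20)  # out-of-order keys

    disk = {}
    submit_write(disk, 0, good)      # accepted, reaches "disk"
    try:
        submit_write(disk, 1, bad)   # rejected before reaching disk
    except IOError as e:
        print(e)
```

The key property, as the comment notes, is independence: the checking code shares nothing with the filesystem's own update path, so a single bug cannot both corrupt a block and hide the corruption.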


Copyright © 2025, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds