Considering that my XFS problem happened with a 2.6 kernel, I don't think
XFS 1.1 quite solved it all.
XFS a good contender. On SGIs.
Posted Jun 17, 2006 0:57 UTC (Sat) by dododge (subscriber, #2870)
The file-destruction problem certainly still exists:
It can make some types of debugging very painful. For example about a month ago my Xubuntu dapper system (using XFS) was having trouble getting X running on multiple video cards, and the failure mode of X was a hard-lock of the machine requiring a power-cycle. The use of XFS pretty much guaranteed that after reboot, Xorg.0.log was useless for debugging the problem. In at least one case it also null-filled the xorg.conf file that I was using for testing. I quickly got in the habit of typing "sync" a few times before doing anything even remotely risky, in hopes of protecting at least files that I wasn't actively working on.
I don't know what's worse -- the fact that it fills the file with null bytes, or the fact that it leaves the metadata intact so that the file looks fine if you just do an "ls". A similar experience with Reiser4 resulted in xorg.conf disappearing entirely after a hard reboot, and perhaps that was a better way to fail since it was at least obvious that there was a problem.
That said, I'm still using XFS.
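The failure mode described above arises because the metadata (the new file size) reaches disk before the data blocks do, so after a crash the file looks intact in "ls" but reads back as null bytes. Beyond typing "sync", an application can protect individual files with the classic write-then-fsync-then-rename pattern. A minimal sketch in C, assuming POSIX (`safe_write` and the `.tmp` suffix are illustrative names, not anything from XFS itself):

```c
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Replace "path" with new contents: write to a temporary file,
 * fsync() it, then rename() it over the original.  rename() is
 * atomic on POSIX filesystems, and the fsync() forces the data
 * blocks to disk before the new name appears, so a crash leaves
 * either the old file or the complete new one, never a
 * correct-looking file full of null bytes. */
int safe_write(const char *path, const char *data, size_t len)
{
    char tmp[4096];
    snprintf(tmp, sizeof(tmp), "%s.tmp", path);

    int fd = open(tmp, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0)
        return -1;
    if (write(fd, data, len) != (ssize_t)len || fsync(fd) != 0) {
        close(fd);
        unlink(tmp);
        return -1;
    }
    if (close(fd) != 0) {
        unlink(tmp);
        return -1;
    }
    return rename(tmp, path);
}
```

This is what well-behaved editors do when saving; a config file updated this way would have survived the hard lock with either its old or its new contents.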
Posted Jun 18, 2006 11:33 UTC (Sun) by nix (subscriber, #2304)
Posted Jun 19, 2006 17:00 UTC (Mon) by dododge (subscriber, #2870)
Posted Jun 23, 2006 20:08 UTC (Fri) by nix (subscriber, #2304)
Posted Jun 23, 2006 22:15 UTC (Fri) by dododge (subscriber, #2870)
I only checked the two tarfiles from one archive, so it's possible a copy somewhere is corrupt.
(On two separate machines, with RAID arrays? Strange.)
That it would happen the same way in two different filesystems is certainly unlikely.
Aside: it's a bit surprising how common RAID failures can be. At a LUG discussion a few weeks ago someone mentioned that their RAID-5 was destroyed when, while synchronizing to repair a lost drive, a second drive died in the array. Several people (including myself) immediately jumped into the discussion with similar stories.
Posted Jul 3, 2006 23:12 UTC (Mon) by nix (subscriber, #2304)
Neither of these is true any longer, thanks be to Neil Brown :) You're only in trouble now if you have *the same* block unreadable on enough drives.
Copyright © 2013, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds