
Ext4 data corruption trouble

Posted Oct 24, 2012 23:34 UTC (Wed) by nix (subscriber, #2304)
In reply to: Ext4 data corruption trouble by nix
Parent article: Ext4 data corruption trouble [Updated]

[synopsis of my recent email to tytso]

OK, it turns out that you need to do rather crazy things to make this go wrong -- and if you hit it at the wrong moment, 3.6.1 is vulnerable too, and quite possibly every Linux version ever. To wit, you need to disconnect the block device or reboot *during* the umount. This may well be an illegitimate thing to do, but it is unfortunately also quite *easy* to do if you pull out a USB key.
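
Roughly, the dangerous sequence looks like this (a sketch only: the device and mount point names are placeholders, and you really want a throwaway filesystem in a scratch VM if you try it):

  mount /dev/sdX1 /mnt/scratch     # a throwaway filesystem on a throwaway device
  umount /mnt/scratch &            # start the unmount...
  reboot -f                        # ...and reboot (or yank the device) before it finishes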

Worse yet, if you umount -l a filesystem, it becomes dangerous to *ever* reboot, because there is, as far as I can tell, no way to know when the lazy umount switches from 'not yet umounted, mount point still in use, safe to reboot' to 'umount in progress, rebooting is disastrous'.

I still haven't found a way to safely unmount all filesystems if you have local filesystems nested underneath NFS filesystems (where the NFS filesystems may require userspace daemons to be running in order to unmount, and the local filesystems generally require userspace daemons to be dead in order to unmount).

It may work to kill everything whose cwd is not / or which has a terminal, then unmount NFS and local filesystems in succession until you can make no more progress -- but it seems appallingly complicated and grotty, and will break as soon as some daemon holds a file open on a non-root filesystem. What's worse, it leads to shutdown locking up if a remote NFS server is unresponsive, which is the whole reason why I started using lazy umount at shutdown in the first place!
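
For the record, the sort of thing I mean is roughly this. It is only a sketch: the 'has a terminal' test is approximated by looking at fd 0, it will also pointlessly try to unmount pseudo-filesystems like /proc and /sys, and none of the error handling a real shutdown script needs is here:

  # kill everything whose cwd is not / or which appears to have a terminal
  for pid in /proc/[0-9]*; do
      p=${pid#/proc/}
      [ "$p" = 1 ] && continue                     # leave init alone
      [ "$p" = "$$" ] && continue                  # don't kill ourselves
      cwd=$(readlink "$pid/cwd" 2>/dev/null)
      fd0=$(readlink "$pid/fd/0" 2>/dev/null)
      case "$fd0" in /dev/tty*|/dev/pts/*) tty=1 ;; *) tty=0 ;; esac
      if [ "$cwd" != "/" ] || [ "$tty" = 1 ]; then
          kill "$p" 2>/dev/null
      fi
  done

  # then unmount in passes, NFS first, until a pass makes no progress
  progress=1
  while [ "$progress" = 1 ]; do
      progress=0
      for mp in $(awk '$3 ~ /^nfs/ && $2 != "/" {print $2}' /proc/mounts) \
                $(awk '$3 !~ /^nfs/ && $2 != "/" {print $2}' /proc/mounts); do
          umount "$mp" 2>/dev/null && progress=1
      done
  done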



Ext4 data corruption trouble

Posted Oct 25, 2012 0:06 UTC (Thu) by Kioob (subscriber, #56482) [Link] (7 responses)

Well, «crazy things»? In the last 8 days I have had 5 servers (out of ~200) with data corruption on ext4 partitions (on top of LVM, on top of Xen blockfront/blockback, on top of DRBD, on top of LVM). Particularly on partitions mounted with defaults,noatime,nodev,nosuid,noexec,data=ordered (MySQL InnoDB data).

I was thinking it was a problem with DRBD, then I saw this news... so... I don't know. Is it really the only way to trigger the problem?

Ext4 data corruption trouble

Posted Oct 25, 2012 0:31 UTC (Thu) by nix (subscriber, #2304) [Link] (6 responses)

It's the only way I've been able to find. /sbin/reboot -f on a system with mounted filesystems does not trigger this problem. Reboot after unmounting does not trigger this problem. Reboot *during* a umount, and *boom* goodbye fs.

I have speculated on ways to fix this for good, though they require a new syscall, a new userspace utility, changes to shutdown scripts, agreement from others on l-k that my idea is not utterly insane, and for me to bother to implement all of this. The last is questionable, given the number of things I mean to do that I never get around to. :)

Ext4 data corruption trouble

Posted Oct 25, 2012 1:14 UTC (Thu) by luto (guest, #39314) [Link] (5 responses)

Fix what for good?

If you want to cleanly unmount everything, presumably you want (a) revoke and (b) unmount-the-$!%@-fs-even-if-it's-in-use. (I'd like both of these.)

If you want to know when filesystems are gone, maybe you want to separate the processes of mounting things into the FS hierarchy from loading a driver for an FS. Then you could force-remove-from-hierarchy (roughly equivalent to umount -l) and separately wait until the FS is no longer loaded (which has nothing to do with the hierarchy).

If you want your system to be reliable, the bug needs fixing.

Ext4 data corruption trouble

Posted Oct 25, 2012 1:25 UTC (Thu) by dlang (guest, #313) [Link] (4 responses)

As I understand his post, there are two big issues.

1. you can't even try to unmount a filesystem if it's mounted under another filesystem that you can't reach

Example:

mount /dev/sda /
mount remote:/something /something
mount /dev/sdb /something/else

now if remote goes down, you have no way of cleanly unmounting /dev/sdb
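
roughly, with "remote" unreachable (the exact behaviour depends on the NFS mount options and what is already cached):

  umount /something/else    # may block: resolving the path has to traverse the dead NFS mount
  umount -f /something      # fails with EBUSY: -f only helps the NFS mount itself, and /something/else is still mounted under it
  umount -l /something      # detaches the whole subtree lazily; /dev/sdb stays mounted (and dirty) until its last user goes away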

2. even solving #1, namespaces cause problems: with namespaces it is now impossible for any one script to unmount everything, or even to find which pids need to be killed across all the pid namespaces to make a filesystem idle so that it can be unmounted.

Ext4 data corruption trouble

Posted Oct 25, 2012 1:30 UTC (Thu) by ewen (subscriber, #4772) [Link] (3 responses)

Wouldn't "mount -o remount /dev/sdb" solve the first problem? In theory it should close off the journal and get the file system into a stable state, but not require the non-responsive NFS server to reply. And in theory it should be safe to force unmount a read-only file system, once it's reached that read-only/stable state.

However, finding all the file systems in the face of many PID/filesystem namespaces is still non-trivial.

Ewen

Ext4 data corruption trouble

Posted Oct 25, 2012 1:56 UTC (Thu) by nix (subscriber, #2304) [Link] (2 responses)

dlang has it right, that's the problem I was trying to solve with this lazy umount kludge. And for many, many years, it worked!

I had no idea you could use remounting (plus, presumably, readonly remounting) on raw devices like that. That might work rather well in my case: all my devices are in one LVM VG, so I can just do a readonly remount on /dev/$vgname/*.
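
Something like this, presumably (a sketch only: it assumes mount can map each LV back to its mount point, and remounting read-only will of course fail for any filesystem that still has files open for writing):

  for dev in /dev/$vgname/*; do
      mount -o remount,ro "$dev" 2>/dev/null   # errors for LVs that aren't mounted are ignored
  done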

But in the general case, including PID and fs namespaces, that's really not going to work, indeed.

Ext4 data corruption trouble

Posted Oct 25, 2012 3:50 UTC (Thu) by ewen (subscriber, #4772) [Link] (1 responses)

Yes, I did intend to say "mount -o remount,ro /dev/sdb". For years it's been my usual "try to minimise the harm" approach when dealing with a stuck server due to some mounts not responding. I'm not sure what happens on a modern server where the same volume is mounted in more than one location (hopefully all the mounts end up read-only). But it definitely works with /dev/mapper/$vgname-$lvname, for instance, if it's only mounted once.

Ewen

Ext4 data corruption trouble

Posted Oct 25, 2012 11:16 UTC (Thu) by nix (subscriber, #2304) [Link]

Bind mounts will be fine with this: they all share the same read-only state unless explicitly otherwise requested.

Ext4 data corruption trouble

Posted Oct 25, 2012 1:26 UTC (Thu) by ewen (subscriber, #4772) [Link] (5 responses)

For the benefit of those following along at home, this appears to be one of the more detailed posts on LKML:

https://lkml.org/lkml/2012/10/24/620

(there are others earlier/later, but they mostly only make sense in context.)

ObTopic: possibly there is an ordering of write operations which ensures that the journal close-off/journal replay is idempotent (i.e., okay to do twice), but it would appear that ext4 in some kernel versions either doesn't currently have that for some actions or doesn't have sufficient barriers to ensure the writes hit stable storage in that order. So there seems to be a (small) window of vulnerability for block writes during ext4 unmounting. (Compare with, e.g., the FreeBSD Soft Updates file system operation ordering -- http://en.wikipedia.org/wiki/Soft_updates.)

Ewen

Ext4 data corruption trouble

Posted Oct 25, 2012 2:00 UTC (Thu) by nix (subscriber, #2304) [Link] (4 responses)

Those following along at home are probably half the human race, now that we have posts on Phoronix, Slashdot *and* Heise. Who the hell submits things like this to random-terrified-user media outlets before we've even characterized the bloody problem? Every one of those posts is inaccurate, of course, through no fault of their own, but merely because we didn't yet know what the problem was ourselves, merely that I and one other person were seeing corruption. We obviously started by assuming that it was something obvious and thus fairly serious, but that didn't mean we *expected* that to be true: I certainly expected the final problem to be more subtle, if still capable of causing serious disk corruption (my warning here was just in case it was not).

But now there's a wave of self-sustaining accidental lies spreading across the net, damaging the reputation of ext4 unwarrantedly, and I started it without wanting to.

It's times like this when I start to understand why some companies have closed bug trackers.

Ext4 data corruption trouble

Posted Oct 25, 2012 9:31 UTC (Thu) by man_ls (guest, #15091) [Link] (2 responses)

That which doesn't kill ext4 makes ext4 stronger. Once the general media realize that only a fraction of a percent of users are affected, they will probably post some kind of correction and everything will go back to normal -- and ext4 will be stronger for it.

Remember the stupid "neutrinos faster than light" news, where all media outlets were reporting that Einstein had been refuted and that we were close to time travel? In the end it was all a faulty hardware connection, the original results were corrected, and the speed-of-light paradigm came out stronger than ever. In that case it was a few hundred scientists signing the original paper that started the wildfire, instead of checking and rechecking everything for a few months before publishing such a fundamental result. I hope they are widely discredited now, all 170 of them (I am not joking now, either in the figure or in the malignity).

So in a few days the bug will be pinned to a very specific and uninteresting condition, and ext4 will come out stronger than ever. One data point: I have seen no corruption with 3.6.3, but then I never reboot while unmounting. Now I will be unmounting with extra care :)

Ext4 data corruption trouble

Posted Oct 25, 2012 13:33 UTC (Thu) by nix (subscriber, #2304) [Link] (1 responses)

That FTL neutrino case is actually more similar than you thought -- scientific paper publication (and, these days, arXiv) is directly analogous to development lists like lkml: it is where the practitioners in the field communicate. So having hundreds of scientists sign that paper is quite expected -- they worked on the collaboration, after all. What is unjustified is for the general media to pick up something like that, always and necessarily a work in progress, and treat it as a done deal, certain and unchanging.

LWN's coverage of this was much, much better, emphasising the unclear and still-under-investigation nature of the thing.

A few things

Posted Oct 25, 2012 14:03 UTC (Thu) by man_ls (guest, #15091) [Link]

Actually, the "neutrino anomaly" team gave several press conferences and a webcast. Without that attention-seeking part, the story would probably not have blown up so big. Imagine if Ts'o had given a press conference explaining the ext4 bug, instead of just dealing with it?

Also, hundreds of names on a paper may be standard practice, but it is ridiculous. Somebody should compute something like the Einstein index but dividing each result by the number of collaborators.

Finally, it appears from the Wikipedia article that the Gran Sasso scientists had sat on their results for six months before publishing them. Even though I called for the same sort of embargo in my post, the fact that they did it somehow only makes it worse -- but then life is unfair.

Ext4 data corruption trouble

Posted Oct 25, 2012 10:17 UTC (Thu) by cesarb (subscriber, #6266) [Link]

> Who the hell submits things like this to random-terrified-user media outlets before we've even characterized the bloody problem?

Data corruption/loss is scary. Even more so than most security problems (a really bad security problem will be used by some joker to erase your data, so a really bad security problem is equivalent to data corruption/loss).

If the data corruption/loss affects the most widely used and stable filesystem in the Linux world, the steps to reproduce sound reasonably easy to hit by chance (just reboot twice quickly), and the data loss is believed to be avoidable simply by not upgrading to (or by downgrading from) a particular minor point release, it is natural human behavior to want EVERYONE to know RIGHT NOW, so people will not upgrade (or will downgrade) until it is safe. Hence the posts on every widely read Linux-related news outlet people could find.

Even now, with the problem shown to happen only in less common situations and suspected of being older than 3.6.1, I would say 3.6.3 is burned, and people will not touch it with a 3-meter pole until 3.6.4 is out. Even if 3.6.4 has no ext4-related patches at all.

