LWN: Comments on "Ext4 data corruption trouble [Updated]" https://lwn.net/Articles/521022/ This is a special feed containing comments posted to the individual LWN article titled "Ext4 data corruption trouble [Updated]". en-us Fri, 05 Sep 2025 05:27:02 +0000 Fri, 05 Sep 2025 05:27:02 +0000 https://www.rssboard.org/rss-specification lwn@lwn.net Ext4 data corruption trouble [Updated] https://lwn.net/Articles/521703/ https://lwn.net/Articles/521703/ nix <div class="FormattedComment"> A bit more info: this is observed only if journal_checksum is on (I wasn't using it because I knew it was dangerous, but hadn't noticed that journal_async_commit implied journal_checksum). journal_async_commit combined with nobarrier is even worse: on remount after umounting with those options (and rebooting right after), you don't get a journal abort and readonly remount, you get a remount with no indication of corrupted journal.<br> <p> (nobarrier on its own, without journal_checksum or anything that implies it, seems to be fine, as long as you have suitable battery-backed hardware of course.)<br> <p> </div> Sat, 27 Oct 2012 20:19:23 +0000 Ext4 data corruption trouble [Updated] https://lwn.net/Articles/521702/ https://lwn.net/Articles/521702/ nix <div class="FormattedComment"> Sysrq-U remounts read-only, it doesn't unmount.<br> <p> (But, still, you do want to remount the loopback-mounted filesystem first, or that umount won't be able to do e.g. journal flushes...)<br> <p> And the answer is no: it ends up calling do_emergency_remount(), which does a straight iteration over all super_blocks: there is no dependency analysis of any kind: I'd expect (given the way super_blocks is built) to unmount the backing store fs *before* the loopback-mounted fs.<br> <p> (Perhaps do_emergency_remount() should iterate over super_blocks in reverse order?)<br> </div> Sat, 27 Oct 2012 20:17:27 +0000 Ext4 data corruption trouble [Updated] https://lwn.net/Articles/521547/ https://lwn.net/Articles/521547/ butlerm <div class="FormattedComment"> Does that work with loopback mounted filesystems, i.e. is the sysrq handler smart enough to unmount a loopback mounted filesystem before the filesystem that holds its backing store?<br> </div> Fri, 26 Oct 2012 14:37:05 +0000 Ext4 data corruption trouble [Updated] https://lwn.net/Articles/521537/ https://lwn.net/Articles/521537/ nix <div class="FormattedComment"> Oh. Yeah. That would work. I completely forgot that sysrq-trigger even existed. :)<br> <p> New syscall, who needs it, though relying on sysrq-trigger for something as fundamental as shutting down seems a little icky.<br> <p> </div> Fri, 26 Oct 2012 14:04:35 +0000 Ext4 data corruption trouble [Updated] https://lwn.net/Articles/521420/ https://lwn.net/Articles/521420/ pr1268 <p>I like this solution:</p> <p><pre> /* 3sync.c */ #include &lt;unistd.h&gt; /* for sync(2) */ #include &lt;time.h&gt; /* for struct timespec and nanosleep(2) */ int main() { struct timespec ts = { 0, 1000000L }; sync(); (void) nanosleep(&amp;ts, 0); sync(); (void) nanosleep(&amp;ts, 0); sync(); return 0; } </pre> </p> <p>Compiled and placed in <tt>$HOME/bin</tt> (which is in <tt>$PATH</tt>), and now it gets used quite frequently in other scripts I run. Which either (1) is horribly inefficient, and/or (2) shows how paranoid I am with data corruption. 
Sigh.</p> Fri, 26 Oct 2012 01:22:04 +0000 Ext4 data corruption trouble [Updated] https://lwn.net/Articles/521413/ https://lwn.net/Articles/521413/ neilbrown <pre> echo S > /proc/sysrq-trigger echo U > /proc/sysrq-trigger </pre> ?? Fri, 26 Oct 2012 00:45:38 +0000 Ext4 data corruption trouble [Updated] https://lwn.net/Articles/521410/ https://lwn.net/Articles/521410/ jhardin <blockquote><i>I still have an ingrained habit of typing `sync' at idle moments in my shell, picked up in the early days of ext2.</i></blockquote> +1, except I picked up the habit on SCO Xenix. <p> sync;sync;sync Fri, 26 Oct 2012 00:44:10 +0000 Ext4 data corruption trouble [Updated] https://lwn.net/Articles/521407/ https://lwn.net/Articles/521407/ nix <div class="FormattedComment"> That requires that you can *find* all the file systems. In the presence of PID and fs namespaces, no single process can necessarily do that (nor can any single process necessarily talk to any group of processes that can do that, even indirectly).<br> <p> This is somewhat unlikely, it is true.<br> </div> Fri, 26 Oct 2012 00:26:24 +0000 Ext4 data corruption trouble [Updated] https://lwn.net/Articles/521404/ https://lwn.net/Articles/521404/ ncm <div class="FormattedComment"> Maybe I'm missing something. Why not start by remounting all the file systems to a synchronous-write mode, first? Then, sync. After that, you can execute an HCF(*) instruction, wait long enough for the lying drives to drain their internal queues, power down, and everything should be fine. Right?<br> <p> (*) "Halt and Catch Fire"<br> </div> Fri, 26 Oct 2012 00:21:53 +0000 Ext4 data corruption trouble [Updated] https://lwn.net/Articles/521378/ https://lwn.net/Articles/521378/ butlerm <div class="FormattedComment"> A 'umountall' system call looks like it could do the job nicely, with one exception. <br> <p> It might be helpful to have a way to put all filesystems in a read only state, for the benefit of shutdown code that only requires (file level) read access, such as code to shutdown RAID devices. <br> <p> A more general problem is that you might have loopback mounts and nested block devices, so what you really need is a combined operation that does the topological sort and quiesces filesystems and block devices in reverse stacking order.<br> </div> Thu, 25 Oct 2012 22:16:18 +0000 Ext4 data corruption trouble https://lwn.net/Articles/521332/ https://lwn.net/Articles/521332/ cesarb <div class="FormattedComment"> <font class="QuotedText">&gt; my boss at Oracle</font><br> <p> Did your boss at Oracle tell you to try btrfs instead? ;-)<br> </div> Thu, 25 Oct 2012 17:25:57 +0000 Ext4 data corruption trouble https://lwn.net/Articles/521319/ https://lwn.net/Articles/521319/ nix <div class="FormattedComment"> I should also take a moment to thank my boss at Oracle for not uttering one word of complaint while I did all this repeated reboot/corrupt/reboot-again stuff, even though it was completely unrelated to the job I was supposed to be doing (except insofar as it is hard to work on anything if your filesystem is mangled!)<br> <p> I did try to work on it only outside working hours, but it's sometimes hard to concentrate on anything else when your filesystems are at risk, so I fear it did compromise my productivity at other times. So, thank you, Elena.
:)<br> <p> </div> Thu, 25 Oct 2012 16:30:54 +0000 Ext4 data corruption trouble [Updated] https://lwn.net/Articles/521313/ https://lwn.net/Articles/521313/ wahern <div class="FormattedComment"> I still have an ingrained habit of typing `sync' at idle moments in my shell, picked up in the early days of ext2. Re-downloading the Slackware floppy set (because invariably one of the disks of a previous downloaded set would go bad) over a 2400 baud modem was not fun times. Because things were generally less stable back then, and you never knew when the system might crash and leave the disk corrupt and unbootable, vigorous and frequent syncing was the only alternative.<br> <p> <p> </div> Thu, 25 Oct 2012 16:02:25 +0000 A few things https://lwn.net/Articles/521276/ https://lwn.net/Articles/521276/ man_ls Actually, the <a href="http://en.wikipedia.org/wiki/Faster-than-light_neutrino_anomaly">"neutrino anomaly"</a> team gave <a href="http://physics.stackexchange.com/questions/14968/superluminal-neutrinos">several press conferences and a webcast</a>. Without that attention-seeking part the story would probably not have blown so big. Imagine if Tso had given a press conference explaining the ext4 bug, instead of just dealing with it? <p> Also, hundreds of names on a paper may be standard practice, but it is ridiculous. Somebody should compute something like the <a href="http://www.science20.com/hammock_physicist/who_todays_einstein_exercise_ranking_scientists-75928">Einstein index</a> but dividing each result by the number of collaborators. <p> Finally, it appears from the wikipedia article that the Gran Sasso scientists had sat on their results for six months before publishing them. Even though I called for the same embargo in my post, the fact that they did somehow only makes it worse -- but then life is unfair. Thu, 25 Oct 2012 14:03:19 +0000 Ext4 data corruption trouble https://lwn.net/Articles/521260/ https://lwn.net/Articles/521260/ nix <div class="FormattedComment"> That FTL neutrino case is actually more similar than you thought -- scientific paper publication (and, these days, arxiv) is directly analogous to development lists like lkml -- it is where the practitioners in the field communicate. So having hundreds of scientists sign that paper is quite expected -- they worked on the collaboration, after all. What is unjustified is for the general media to pick up something like that, always and necessarily a work-in-progress, and consider it a finished deal, certain, unchanging.<br> <p> LWN's coverage of this was much much better, emphasising the unclear and under-investigation nature of the thing.<br> </div> Thu, 25 Oct 2012 13:33:30 +0000 Ext4 data corruption trouble https://lwn.net/Articles/521259/ https://lwn.net/Articles/521259/ nix <div class="FormattedComment"> I think that's so, yes: that's how MNT_DETACH and MNT_EXPIRE are implemented (and, thus, umount -l).<br> </div> Thu, 25 Oct 2012 13:31:09 +0000 Ext4 data corruption trouble https://lwn.net/Articles/521247/ https://lwn.net/Articles/521247/ rleigh <div class="FormattedComment"> It would be nice for such a system call to also work for selected mount namespaces, so that you can be sure everything is consistent after the last process in the namespace exits. 
(I assume once the namespace no longer has any users, the mounts are automatically umounted?)<br> </div> Thu, 25 Oct 2012 12:50:31 +0000 Ext4 data corruption trouble [Updated] https://lwn.net/Articles/521228/ https://lwn.net/Articles/521228/ nix <div class="FormattedComment"> We are of one mind, I think: &lt;<a href="http://lkml.indiana.edu/hypermail/linux/kernel/1210.3/00771.html">http://lkml.indiana.edu/hypermail/linux/kernel/1210.3/007...</a>&gt;<br> </div> Thu, 25 Oct 2012 11:19:05 +0000 Ext4 data corruption trouble https://lwn.net/Articles/521227/ https://lwn.net/Articles/521227/ nix <div class="FormattedComment"> Bind mounts will be fine with this: they all share the same read-only state unless explicitly otherwise requested.<br> </div> Thu, 25 Oct 2012 11:16:56 +0000 Ext4 data corruption trouble https://lwn.net/Articles/521225/ https://lwn.net/Articles/521225/ nix <div class="FormattedComment"> Sure! But... what if you have a mount point causing stalls (perhaps relating to an inaccessible NFS server), with mounted local filesystems buried beyond it? If you do a umount, rather than a umount -l, your shutdown will lock up forever as soon as it hits that mount point.<br> <p> Worse yet, what if you have processes in other PID namespaces, holding open filesystems in other filesystem namespaces? The initramfs can't even see them! *No* umount loop can fix that. I hate adding new syscalls, but I really do think we need a new 'unmount the world' syscall which can cross such boundaries :(<br> </div> Thu, 25 Oct 2012 11:16:00 +0000 Ext4 data corruption trouble https://lwn.net/Articles/521212/ https://lwn.net/Articles/521212/ cesarb <div class="FormattedComment"> <font class="QuotedText">&gt; Who the hell submits things like this to random-terrified-user media outlets before we've even characterized the bloody problem?</font><br> <p> Data corruption/loss is scary. Even more than most security problems (a really bad security problem will be used by some joker to erase your data, so a really bad security problem is equivalent to data corruption/loss).<br> <p> If the data corruption/loss affects the most used and stable filesystem in the Linux world, the steps to reproduce sound reasonably easy to hit by chance (just reboot twice quickly), and the data loss is believed to be prevented by just not upgrading/downgrading a minor point release, it is natural human behavior to want EVERYONE to know RIGHT NOW, so people will not upgrade/will downgrade until it is safe. Thus the posts on every widely read Linux-related news media people could find.<br> <p> Even now with the problem being shown to happen in less common situations, and with it being suspected of being older than 3.6.1, I would say 3.6.3 is burned, and people will not touch it with a 3-meter pole until 3.6.4 is out. Even if 3.6.4 has no ext4-related patches at all.<br> </div> Thu, 25 Oct 2012 10:17:33 +0000 Chastity for monkeys https://lwn.net/Articles/521198/ https://lwn.net/Articles/521198/ man_ls I don't agree. In fact <i>all</i> patches to ext4 should carry a warning "Danger! May eat your data alive!" lest someone be misled by the kernel's "integrity through obscurity" evil policy. <p> &lt;/backSarcasm&gt; Thu, 25 Oct 2012 09:43:15 +0000 Ext4 data corruption trouble https://lwn.net/Articles/521193/ https://lwn.net/Articles/521193/ man_ls That which doesn't kill ext4, makes ext4 stronger. 
Once the general media realize that only a fraction of a percent of users are affected, they will probably post some kind of correction and everything will go back to normal -- and ext4 will be stronger by it. <p> Remember the stupid "neutrinos faster than light" news where all media outlets were reporting that Einstein had been rebutted, and that we were close to time travel? In the end it was all <a href="http://news.sciencemag.org/scienceinsider/2012/02/breaking-news-error-undoes-faster.html">a faulty hardware connection</a>, the original results were corrected and the speed of light paradigm came out stronger than ever. In that case it was a few hundred scientists signing <a href="http://arxiv.org/abs/1109.4897">the original paper</a> that started the wildfire, instead of <i>checking and rechecking everything for a few months</i> before publishing such a fundamental result. I hope they are widely discredited now, all 170 of them (I am not joking now, either in the figure or in the malignity). <p> So in a few days the bug will be pinned to a very specific and uninteresting condition, and ext4 will come out stronger than ever. One data point: I have seen no corruption with 3.6.3, but then I am never rebooting while unmounting. Now I will be unmounting with extra care :) Thu, 25 Oct 2012 09:31:27 +0000 Ext4 data corruption trouble https://lwn.net/Articles/521155/ https://lwn.net/Articles/521155/ cyanit <div class="FormattedComment"> They are just different manifestations of the overmind.<br> <p> </div> Thu, 25 Oct 2012 04:51:22 +0000 Ext4 data corruption trouble [Updated] https://lwn.net/Articles/521153/ https://lwn.net/Articles/521153/ butlerm <div class="FormattedComment"> Shouldn't quiescing all mounted filesystems and putting them into a read only state be something that the filesystem itself has responsibility to do? In the kernel, when instructed that the system is about to reboot/sleep/power off?<br> <p> Expecting user code to track down all mounted filesystems and unmount them in reverse topological order doesn't sound like the sort of thing one would want filesystem integrity to depend on. It sounds like an ugly hack of the first magnitude.<br> </div> Thu, 25 Oct 2012 04:33:27 +0000 Ext4 data corruption trouble https://lwn.net/Articles/521145/ https://lwn.net/Articles/521145/ ewen <div class="FormattedComment"> Yes, I did intend to say "mount -o remount,ro /dev/sdb". For years it's been my usual "try to minimise the harm" approach, when dealing with a stuck server due to some mounts not responding. I'm not sure what happens with a modern server where the same volume is mounted in more than one location (hopefully all the mounts end up read-only). But it definitely works with /dev/mapper/$vgname-$lvname for instance if it's only mounted once.<br> <p> Ewen<br> </div> Thu, 25 Oct 2012 03:50:54 +0000 Ext4 data corruption trouble https://lwn.net/Articles/521140/ https://lwn.net/Articles/521140/ nix <div class="FormattedComment"> Those following along at home is probably half the human race, now we have posts on Phoronix, Slashdot *and* Heise. Who the hell submits things like this to random-terrified-user media outlets before we've even characterized the bloody problem? 
Every one of those posts is inaccurate, of course, through no fault of their own but merely because we didn't yet know what the problem was ourselves, merely that I and one other person were seeing corruption: we obviously started by assuming that it was something obvious and thus fairly serious, but that didn't mean we *expected* that to be true: I certainly expected the final problem to be more subtle, if still capable of causing serious disk corruption (my warning here was just in case it was not).<br> <p> But now there's a wave of self-sustaining accidental lies spreading across the net, damaging the reputation of ext4 unwarrantedly, and I started it without wanting to.<br> <p> It's times like this when I start to understand why some companies have closed bug trackers.<br> <p> </div> Thu, 25 Oct 2012 02:00:49 +0000 Ext4 data corruption trouble https://lwn.net/Articles/521139/ https://lwn.net/Articles/521139/ nix <div class="FormattedComment"> dlang has it right, that's the problem I was trying to solve with this lazy umount kludge. And for many, many years, it worked!<br> <p> I had no idea you could use remounting (plus, presumably, readonly remounting) on raw devices like that. That might work rather well in my case: all my devices are in one LVM VG, so I can just do a readonly remount on /dev/$vgname/*.<br> <p> But in the general case, including PID and fs namespaces, that's really not going to work, indeed.<br> </div> Thu, 25 Oct 2012 01:56:03 +0000 Ext4 data corruption trouble [Updated] https://lwn.net/Articles/521137/ https://lwn.net/Articles/521137/ tytso <div class="FormattedComment"> There is a G+ post which folks who are interested might want to follow:<br> <p> <a href="https://plus.google.com/117091380454742934025/posts/Wcc5tMiCgq7">https://plus.google.com/117091380454742934025/posts/Wcc5t...</a><br> <p> I also want to assure people that before I send any pull request to Linus, I have run a very extensive set of file system regression tests, using the standard xfstests suite of tests (originally developed by SGI to test xfs, and now used by most of the developers of the major, actively-maintained file systems). So for example, my development laptop, which I am currently using to post this note, is currently running v3.6.3 with the ext4 patches which I have pushed to Linus for the 3.7 kernel. Why am I willing to do this? Specifically because I am constantly running a very large set of automated regression tests on a very regular basis, and certainly before sending the latest set of patches to Linus.<br> <p> </div> Thu, 25 Oct 2012 01:52:08 +0000 Ext4 data corruption trouble https://lwn.net/Articles/521136/ https://lwn.net/Articles/521136/ ewen <div class="FormattedComment"> Wouldn't "mount -o remount /dev/sdb" solve the first problem? In theory it should close off the journal and get the file system into a stable state, but not require the non-responsive NFS server to reply. 
And in theory it should be safe to force unmount a read-only file system, once it's reached that read-only/stable state.<br> <p> However finding all the file systems in the face of many PID/filesystem name spaces is still non-trivial.<br> <p> Ewen<br> </div> Thu, 25 Oct 2012 01:30:02 +0000 Ext4 data corruption trouble https://lwn.net/Articles/521132/ https://lwn.net/Articles/521132/ ewen <div class="FormattedComment"> For the benefit of those following along at home, this appears to be one of the more detailed posts on LKML:<br> <p> <a href="https://lkml.org/lkml/2012/10/24/620">https://lkml.org/lkml/2012/10/24/620</a><br> <p> (there are others earlier/later, but they mostly only make sense in context.)<br> <p> ObTopic: possibly there may be an ordering of write operations which ensures that the journal close off/journal replay is idempotent (ie, okay to do twice), but it would appear that EXT4 in some kernel versions either doesn't currently have that for some actions or doesn't have sufficient barriers to ensure the writes hit stable storage in that order. So there seems to be a (small) window of block writing vulnerability during the EXT4 unmounting. (Compare with, eg, the FreeBSD Soft Updates file system operation ordering -- <a href="http://en.wikipedia.org/wiki/Soft_updates">http://en.wikipedia.org/wiki/Soft_updates</a>.)<br> <p> Ewen<br> </div> Thu, 25 Oct 2012 01:26:12 +0000 Ext4 data corruption trouble https://lwn.net/Articles/521134/ https://lwn.net/Articles/521134/ dlang <div class="FormattedComment"> As I understand his post, there are two big issues.<br> <p> 1. you can't even try to unmount a filesystem if it's mounted under another filesystem that you can't reach<br> <p> example<br> <p> mount /dev/sda /<br> mount remote:/something on /something<br> mount /dev/sdb /something/else<br> <p> now if remote goes down, you have no way of cleanly unmounting /dev/sdb<br> <p> 2. even solving for #1, namespaces cause problems because with namespaces, it is now impossible for any one script to unmount everything, or even to find what pids need to be killed in all the pid namespaces to be able to make a filesystem idle so that it can be unmounted.<br> </div> Thu, 25 Oct 2012 01:25:09 +0000 Ext4 data corruption trouble https://lwn.net/Articles/521131/ https://lwn.net/Articles/521131/ luto <div class="FormattedComment"> Fix what for good?<br> <p> If you want to cleanly unmount everything, presumably you want (a) revoke and (b) unmount-the-$!%@-fs-even-if-it's-in-use. (I'd like both of these.)<br> <p> If you want to know when filesystems are gone, maybe you want to separate the processes of mounting things into the FS hierarchy from loading a driver for an FS. Then you could force-remove-from-hierarchy (roughly equivalent to umount -l) and separately wait until the FS is no longer loaded (which has nothing to do with the hierarchy).<br> <p> If you want your system to be reliable, the bug needs fixing.<br> </div> Thu, 25 Oct 2012 01:14:27 +0000 Ext4 data corruption trouble https://lwn.net/Articles/521126/ https://lwn.net/Articles/521126/ dirtyepic <div class="FormattedComment"> It's Torvalds all the way down.<br> </div> Thu, 25 Oct 2012 00:31:38 +0000 Ext4 data corruption trouble https://lwn.net/Articles/521125/ https://lwn.net/Articles/521125/ nix <div class="FormattedComment"> It's the only way I've been able to find. /sbin/reboot -f on a system with mounted filesystems does not trigger this problem. Reboot after unmounting does not trigger this problem. 
Reboot *during* a umount, and *boom* goodbye fs.<br> <p> I have speculated on ways to fix this for good, though they require a new syscall, a new userspace utility, changes to shutdown scripts, that others on l-k agree my idea is not utterly insane, and for me to bother to implement all of this. The latter is questionable, given the number of things I mean to do that I never get around to. :)<br> </div> Thu, 25 Oct 2012 00:31:09 +0000 Ext4 data corruption trouble https://lwn.net/Articles/521112/ https://lwn.net/Articles/521112/ Kioob <div class="FormattedComment"> Well, «crazy things»? In the last 8 days, I have had 5 servers (out of ~200) with data corruption on ext4 partitions (over LVM, over Xen blockfront/blockback, over DRBD, over LVM). Especially with partitions mounted with defaults,noatime,nodev,nosuid,noexec,data=ordered (MySQL InnoDB data).<br> <p> I was thinking it was a problem with DRBD, then I saw this news... so... I don't know. Is it really the only way to trigger that problem?<br> </div> Thu, 25 Oct 2012 00:06:51 +0000 Ext4 data corruption trouble https://lwn.net/Articles/521106/ https://lwn.net/Articles/521106/ nix <div class="FormattedComment"> [synopsis of my recent email to tytso]<br> <p> OK, it turns out that you need to do rather crazy things to make this go wrong -- and if you hit it at the wrong moment, 3.6.1 is vulnerable too, and quite possibly every Linux version ever. To wit, you need to disconnect the block device or reboot *during* the umount. This may well be an illegitimate thing to do, but it is unfortunately also quite *easy* to do if you pull out a USB key.<br> <p> Worse yet, if you umount -l a filesystem, it becomes dangerous to *ever* reboot, because there is as far as I can tell no way to tell when lazy umount switches from 'not yet umounted, mount point still in use, safe to reboot' to 'umount in progress, rebooting is disastrous'.<br> <p> I still haven't found a way to safely unmount all filesystems if you have local filesystems nested underneath NFS filesystems (where the NFS filesystems may require userspace daemons to be running in order to unmount, and the local filesystems generally require userspace daemons to be dead in order to unmount).<br> <p> It may work to kill everything whose cwd is not / or which has a terminal, then unmount NFS and local filesystems in succession until you can make no more progress -- but it seems appallingly complicated and grotty, and will break as soon as some daemon holds a file open on a non-root filesystem. What's worse, it leads to shutdown locking up if a remote NFS server is unresponsive, which is the whole reason why I started using lazy umount at shutdown in the first place!<br> <p> </div> Wed, 24 Oct 2012 23:34:12 +0000 Ext4 data corruption trouble https://lwn.net/Articles/521088/ https://lwn.net/Articles/521088/ tomegun <div class="FormattedComment"> If you use "the right" initramfs (e.g. 
dracut or Arch's mkinitcpio) with systemd, it might work better.<br> <p> In that case systemd will jump back to the initramfs on shutdown, and the initramfs will then try to kill/unmount whatever processes/mounts remain in the rootfs.<br> </div> Wed, 24 Oct 2012 22:00:15 +0000 Ext4 data corruption trouble https://lwn.net/Articles/521084/ https://lwn.net/Articles/521084/ nix <div class="FormattedComment"> Topological order won't help, alas, not unless it identifies processes which have files open on filesystems or current directories on such filesystems and toposorts *them* (note further that an unambiguous toposort in this case may not be possible, e.g. if you had a weird userspace fileserver serving /foo and /foo/bar, and that fileserver had a current directory set to /foo/bar...)<br> <p> Raw umount(8) does a toposort unmount as well. It is not enough.<br> <p> </div> Wed, 24 Oct 2012 21:36:00 +0000 Ext4 data corruption trouble https://lwn.net/Articles/521080/ https://lwn.net/Articles/521080/ Cyberax <div class="FormattedComment"> As I understand, systemd tries killing/unmounting in topological order. So if there's an order in which your processes can be killed and filesystems unmounted, it can do this.<br> </div> Wed, 24 Oct 2012 21:26:34 +0000
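A minimal sketch of the "remount everything read-only, then sync" fallback that ncm, ewen, and nix describe in the comments above, assuming a POSIX shell with GNU tac and a readable /proc/mounts, and assuming that reverse /proc/mounts order is an acceptable stand-in for reverse mount order; this is an illustration of the idea under those assumptions, not tested shutdown code, and it does not touch the namespace problem raised in the thread:
<pre>
#!/bin/sh
# Sketch only: walk /proc/mounts in reverse so nested mounts are seen
# before the filesystems they sit on, remount every block-device-backed
# filesystem read-only (by device path, as ewen does), then flush caches.
tac /proc/mounts | while read -r dev mnt fstype opts rest; do
    case "$dev" in
        /dev/*) mount -o remount,ro "$dev" 2>/dev/null ;;   # skips NFS, proc, tmpfs, ...
    esac
done
sync
</pre>
Note that a read-only remount fails with EBUSY while any file on that filesystem is still open for writing, so a step like this only helps after the relevant processes have been killed; and whether remounting by device path really sidesteps a hung NFS server in every case is exactly the open question in the comments above.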