LWN: Comments on "Revisiting "too small to fail"" https://lwn.net/Articles/723317/ This is a special feed containing comments posted to the individual LWN article titled "Revisiting "too small to fail"". en-us Wed, 05 Nov 2025 03:00:38 +0000 Wed, 05 Nov 2025 03:00:38 +0000 https://www.rssboard.org/rss-specification lwn@lwn.net Revisiting "too small to fail" https://lwn.net/Articles/724543/ https://lwn.net/Articles/724543/ nix <div class="FormattedComment"> That's not as much of a problem as I thought it was. raid6check does what's necessary (it's just, uh, not installed by default). Frankly, I'm happy to wait for the immensely rare occasion when mismatch_cnt rises above 0 on a RAID-6 array and then run raid6check on it: complete automation of events as rare as that isn't terribly important to me. (But maybe it is for others...)<br> </div> Sat, 03 Jun 2017 21:55:06 +0000 Revisiting "too small to fail" https://lwn.net/Articles/724494/ https://lwn.net/Articles/724494/ Wol <div class="FormattedComment"> I think a lot of people run raid-1 split over machines.<br> <p> My moan at the moment is people who think that just because raid CAN detect integrity errors, then it SHOULDN'T. Never mind. I ought to use it as an exercise to learn kernel programming.<br> <p> But it does appear that a lot of what the kernel does is still stuck in the POSIX mindset. It would be nice if people could sit down and say "POSIX is so last century, what should linux do today?". I think the problem is, though, as Linus said, it's like herding cats ...<br> <p> Cheers,<br> Wol<br> </div> Fri, 02 Jun 2017 18:11:07 +0000 Revisiting "too small to fail" https://lwn.net/Articles/724237/ https://lwn.net/Articles/724237/ nix <div class="FormattedComment"> <font class="QuotedText">&gt; This must be pre-4.3BSD (or there abouts). 
They clearly don't know about EDQUOT.</font><br> <p> I suspect they were just shocked to get -EINTR from an NFS disk and were trying to argue that you should never need to check for short reads ever. Running out of quota, rather than disk space, is a sufficiently obscure edge case that even I'd forgotten about it. (They were doubly sure that you shouldn't need to check for -ENOSPC when writing inside files, and were surprised when I mentioned sparse files... sigh.)<br> <p> Note: I said these people were grizzled, not that they were skilled. This was just a common belief among at least some of the "grunts on the ground" in the Unix parts of the City of London in the late 90s, is all... if they were skilled they would not have been working where they were, but somewhere else in the City that paid a lot more!<br> <p> <font class="QuotedText">&gt; We are talking about "disk I/O" here. There is no server. NFS just pretends, fakes a lot of stuff, glosses over the differences, mostly works but sometimes doesn't do quite what you want.</font><br> <p> Given the ubiquity of NFS and Samba, this seems unfortunate, but it is true that the vast majority of applications are not remotely ready to deal with simple network failures, let alone a split-brain situation in a distributed filesystem! (Given the number of errors that even distributed consensus stores make in this area, I'm not sure *anyone* is truly competent to write code that runs atop something like that.)<br> <p> <font class="QuotedText">&gt; It really doesn't help to just whine because something doesn't magically match your perceived use-case. The only sensible way forward is to provide a clear design of a set of semantics that you would like to be available. Then we can have a meaningful discussion.</font><br> <p> Agreed, but I'd also like to find one that doesn't break every application out there, nor suddenly stop them working over NFS: that's harder! 
(My $HOME has been on NFSv3, remote from my desktop, for my entire Unix-using life, so I have a very large and angry dog in this race: without NFS I can't do anything at all. Last week I flipped to NFSv4, and unlike last time, a couple of years back, it worked perfectly.)<br> <p> Something like your proposed *_REMOTE option would seem like a good idea, but even that has problems: one that springs instantly to mind is libraries that open an fd on behalf of others. That library might be ready to handle errors, but what about the other things that fd gets passed off to? (The converse also exists: maybe the library doesn't use that flag because almost all it does with it is hands the fd back, so it never gets updated, even though the whole of the rest of the application is ready to handle -ESPLITBRAIN or whatever.)<br> <p> Frankly I'm wondering if we need something better than errno and the ferociously annoying short reads thing in this area: a new set of rules that allows you to guarantee no short reads / writes but comes with extras wrapped around that, perhaps that the writes might later fail and throw an error back at you over netlink or something.<br> <p> That all seems very asynchronous, but frankly we need that for normal writes too: if a writeback fails it's almost impossible for an application to tell *what* failed without fsync()ing everywhere and spending ages waiting... but this probably requires proper AIO so you can submit IOs, get an fd or other handle to them, then query for the state of that handle later. And we know how good Linux is in *that* area :( just because network I/Os are even more asynchronous by nature than other I/Os, and more likely to have bus faults and the like affecting them, doesn't mean that the same isn't true of disk I/O too. 
Disks are on little networks, after all, and always have been, and with SANs they're on not-so-little networks too.<br> <p> (This is not even getting into how much more horrible this all gets when RAID or other 1:N or N:1 stuff gets into the picture. At least RAID split across disks on different machines is relatively rare, though it has saved my bacon in the past to be able to run md partially across NBD for a while during disaster recovery!)<br> </div> Wed, 31 May 2017 13:45:00 +0000 Revisiting "too small to fail" https://lwn.net/Articles/724081/ https://lwn.net/Articles/724081/ neilbrown <div class="FormattedComment"> <font class="QuotedText">&gt; no, disk I/O can never fail except -ENOSPC or -EIO if the disk is damaged</font><br> <p> This must be pre-4.3BSD (or there abouts). They clearly don't know about EDQUOT.<br> <p> <font class="QuotedText">&gt; I asked them what exactly it's meant to do if the server goes down</font><br> <p> We are talking about "disk I/O" here. There is no server. NFS just pretends, fakes a lot of stuff, glosses over the differences, mostly works but sometimes doesn't do quite what you want.<br> <p> If you want a new contract between the application and the storage backend, you need to write one. You cannot just assume that an old contract can magically work in a new market place.<br> <p> We could invent O_REMOTE which the application uses to acknowledge that the data might be stored in a remote location, and that it is prepared to handle the errors that might be associated with that - e.g. ETIMEDOUT ???<br> What would it mean to memory-map a file opened with O_REMOTE? That you are happy to receive SIGBUS?<br> What does it mean to execveat() a file, passing AT_REMOTE?? Should it download the whole file (and libraries?) and cache them locally before succeeding?<br> <p> It really doesn't help to just whine because something doesn't magically match your perceived use-case.
The only sensible way forward is to provide a clear design of a set of semantics that you would like to be available. Then we can have a meaningful discussion.<br> <p> </div> Tue, 30 May 2017 01:37:19 +0000 Revisiting "too small to fail" https://lwn.net/Articles/723967/ https://lwn.net/Articles/723967/ nix <div class="FormattedComment"> For disk I/O I've been told by a lot of grizzled Unix people that "no, disk I/O can never fail except -ENOSPC or -EIO if the disk is damaged", and that NFS failing is just a sign that NFS is broken.<br> <p> I asked them what exactly it's meant to do if the server goes down, and they looked at me, puzzled, as if this is impossible to conceive of.<br> <p> (These are grizzled-enough veterans that they probably consider Unix to be what BSD did -- POSIX? Who reads that?)<br> </div> Fri, 26 May 2017 23:39:23 +0000 Revisiting "too small to fail" https://lwn.net/Articles/723841/ https://lwn.net/Articles/723841/ Kamilion <div class="FormattedComment"> <font class="QuotedText">&gt; The main thing separating the OS from the radio transmitter is that they're on separate chips (or separate blocks of the same chip) and the only physical connection between them is RAM and interrupts used to implement some message-passing protocol,</font><br> <p> "Would you believe two UARTs and a pack of playing cards missing the aces and the kings?" --Agent 86<br> </div> Fri, 26 May 2017 07:29:55 +0000 Revisiting "too small to fail" https://lwn.net/Articles/723609/ https://lwn.net/Articles/723609/ neilbrown <div class="FormattedComment"> <font class="QuotedText">&gt; So there's no way to change the default ....</font><br> <p> Maybe you mean "there's no easy way", but there is a way.<br> <p> 1/ Introduce "GFP_DEFAULT" which does the right thing, and GFP_NOFAIL which really doesn't fail.<br> 2/ Mark "GFP_KERNEL" as deprecated<br> 3/ Start changing GFP_KERNEL to something else, and nagging others to do the same.<br> <p> It worked for BKL ....
eventually.<br> We can do this. We should do this! At least we can start doing this!!<br> <p> </div> Wed, 24 May 2017 01:17:46 +0000 Revisiting "too small to fail" https://lwn.net/Articles/723507/ https://lwn.net/Articles/723507/ epa <div class="FormattedComment"> It's accepted that userspace code is useless and doesn't check for or handle failure conditions, even for something as basic as reading a file from disk. The only errors handled by userspace are those which tend to occur frequently in practice, and if you add a new error which didn't often happen before, almost all existing programs will fail to deal with it. That's why we have 'hard' mounts for NFS. Even after 30 years or so userspace has not progressed to the point where it's safe to return failure if a remote file server is down. Better to hang indefinitely and hope the problem goes away. (This applies almost as much for reading as for writing.) So it is too for kernel memory allocation failures.<br> </div> Tue, 23 May 2017 12:13:22 +0000 Revisiting "too small to fail" https://lwn.net/Articles/723506/ https://lwn.net/Articles/723506/ nix <div class="FormattedComment"> Userspace has to be able to handle -ENO-anything-for-any-reason in any case. All sorts of things can already throw errors, and userspace is usually unprepared for all of them and just dies. -ENOMEM would be just the same, only more so: if it wasn't checking, it would immediately try to use whatever it was, dereference a null pointer, and die. And it has to be able to handle -ENOMEM in any case because userspace has no idea if the kernel-side allocation is 'too big' or not, and that can of course change at any time.<br> </div> Tue, 23 May 2017 10:02:12 +0000 Revisiting "too small to fail" https://lwn.net/Articles/723505/ https://lwn.net/Articles/723505/ dvrabel <div class="FormattedComment"> 32 KiB doesn't seem like a "small" allocation to me -- anything more than PAGE_SIZE can fail (or trigger the OOM killer) due to fragmentation. 
Perhaps the thing to do here is to reduce the size that is considered small, until small is a page or less?<br> <p> By way of example, the Xen hypervisor has taken considerable effort to ensure all memory allocations are a page or less, or have fallback paths that do so.<br> </div> Tue, 23 May 2017 09:43:28 +0000 Revisiting "too small to fail" https://lwn.net/Articles/723504/ https://lwn.net/Articles/723504/ vegard <div class="FormattedComment"> <font class="QuotedText">&gt; It sounds like we need a fault injection test framework that takes ideas from fuzzers like AFL and continues to try variants until it exercises all of the code paths.</font><br> <p> I've been running trinity and syzkaller with this patch, which records unique callchains and fails allocations from previously-unseen callers:<br> <p> <a href="https://patchwork.kernel.org/patch/9378219/">https://patchwork.kernel.org/patch/9378219/</a><br> <p> It works pretty well and uncovered a handful of cases that I submitted patches for, but far fewer than I would have expected.<br> </div> Tue, 23 May 2017 09:34:43 +0000 Revisiting "too small to fail" https://lwn.net/Articles/723500/ https://lwn.net/Articles/723500/ vbabka <div class="FormattedComment"> OK, Michal has reminded me that the idea of making the default allow failure was already shot down by Linus once, because it would propagate more -ENOMEM to existing userspace, which might be unprepared to handle it. And it's hard to make sure that all allocations that can lead to the userspace ENOMEM are properly covered, before switching the default.<br> On the other hand there is no issue with TIF_MEMDIE tasks failing an allocation, as that cannot propagate to userspace (the task is killed before returning).
So there's no way to change the default to the full "may fail", and little incentive to change the default to the full "can't fail".<br> </div> Tue, 23 May 2017 08:19:33 +0000 Revisiting "too small to fail" https://lwn.net/Articles/723471/ https://lwn.net/Articles/723471/ mageta <div class="FormattedComment"> <font class="QuotedText">&gt; Or possibly a simpler idea, take one of the existing automatic unit test generators and modify it for kernel code, although there's always the problem of knowing if the hardware is being simulated correctly.</font><br> <p> If there is simulation at all. CPU/MMU alright, even some basic I/O. But I'd wager, for over 90% of the hardware for which the kernel has drivers there is no simulation at all. There is also no trend to make more "simulations", it's (understandably) much more interesting to do para-virtualized solutions.<br> </div> Mon, 22 May 2017 15:28:42 +0000 Revisiting "too small to fail" https://lwn.net/Articles/723434/ https://lwn.net/Articles/723434/ mstsxfx <div class="FormattedComment"> JFTR I have tried to change __GFP_REPEAT to have "do not retry for ever" for<br> requests regardless of their size. The last attempt is<br> <a href="http://lkml.kernel.org/r/20170307154843.32516-1-mhocko@kernel.org">http://lkml.kernel.org/r/20170307154843.32516-1-mhocko@ke...</a><br> <p> There doesn't seem to be a huge interest in this flag so far, though.<br> <p> -- <br> Michal Hocko<br> <p> </div> Mon, 22 May 2017 11:43:18 +0000 Revisiting "too small to fail" https://lwn.net/Articles/723425/ https://lwn.net/Articles/723425/ JdGordy <div class="FormattedComment"> <font class="QuotedText">&gt; I don't think any phone runs multiple VMs yet, but it is only a matter of time.
Then, probably each app will run in its own VM, with its own toy OS.</font><br> <p> In a previous life we (ok-labs.com) were running multiple full Linux VMs on a single phone, one or two with the full Android userland and half a dozen or more doing various safety/virtualisation things (network virtualisation, virtualising and sharing the various bits of hardware).<br> <p> Worked pretty well until GD bought the place.<br> </div> Mon, 22 May 2017 06:50:14 +0000 Revisiting "too small to fail" https://lwn.net/Articles/723414/ https://lwn.net/Articles/723414/ neilbrown <div class="FormattedComment"> <font class="QuotedText">&gt; I'm not proposing NOFAIL to become the real default.</font><br> <p> Sad. I think NOFAIL really should be the default (at least for PAGE_SIZE or less). Almost always kmalloc doesn't fail, so it wouldn't really be a big change in behaviour.<br> I don't see that an OOM victim needs special treatment. The memory that is freed when a victim is killed is mostly the user-space mappings, and those are freed independently of what the kernel code is doing (aren't they?).<br> <p> I think we need clearly defined waiting behaviour, and I think the options should be:<br> 1/ don't wait - usable in interrupts and spinlocks<br> 2/ wait for kswapd (or whatever) to make one pass trying to free memory - used when memory would be convenient and an easy fall-back is available<br> 3/ wait indefinitely - thread eventually goes onto a (per-cpu?) queue and as memory is made available (possibly by killing mem hogs), it is given to threads on the queue. This must never be used on the write-out path or in shrinkers etc.<br> <p> <font class="QuotedText">&gt; would be too dangerous</font><br> <p> Ahh for the good old days of even=stable, odd=devel. Then we could break everything in the devel series and clean up the pieces as they were found.
Kernel development is just too safe these days!!<br> <p> </div> Sun, 21 May 2017 22:30:01 +0000 Revisiting "too small to fail" https://lwn.net/Articles/723412/ https://lwn.net/Articles/723412/ zlynx <div class="FormattedComment"> It sounds like we need a fault injection test framework that takes ideas from fuzzers like AFL and continues to try variants until it exercises all of the code paths.<br> <p> Even fancier, if the test framework could examine branch conditions and work backward to create the conditions to exercise untested branches. Or if it can't work it out, log it for developer attention so a custom test rule can be created.<br> <p> Although, it'd also need a lot of verification code written. Sure, the memory allocation failure during bad block handling during a btrfs scrub, at the same time the SAS link got hot-removed, got handled, but is the filesystem still correct? Someone has to write that.<br> <p> Or possibly a simpler idea, take one of the existing automatic unit test generators and modify it for kernel code, although there's always the problem of knowing if the hardware is being simulated correctly.<br> </div> Sun, 21 May 2017 21:49:28 +0000 Revisiting "too small to fail" https://lwn.net/Articles/723406/ https://lwn.net/Articles/723406/ jejb <div class="FormattedComment"> <font class="QuotedText">&gt; Nowadays the kernel is likely to be running on a hypervisor, meaning it's really just a user-space program itself, but with pretensions. Killing it and starting ("spinning up") another is a reasonable alternative to muddling along. </font><br> <p> That's a cop-out: while the vast majority of cloud instances (and this ignores all the phones, tablets and laptops) may be running on a hypervisor, in most cloud cases Linux *is* the hypervisor as well. Crashing the kernel on memory failure would take down the whole physical node and thus all the instances.
This is seen as undesirable even in the cloud.<br> </div> Sun, 21 May 2017 15:47:58 +0000 Revisiting "too small to fail" https://lwn.net/Articles/723404/ https://lwn.net/Articles/723404/ excors <div class="FormattedComment"> <font class="QuotedText">&gt; Note that your phone OS is almost certainly running under a hypervisor, and has no direct physical access to the radio transmitter.</font><br> <p> As far as I'm aware (which might not be far enough), in most cases the closest thing to a hypervisor is TrustZone running exactly two OSes - the 'non-secure' Android one with unimportant stuff like the user's highly private data, and the 'secure' one that does a few vital things like protected DRM video playback. Almost all the hardware is accessed directly by the non-secure OS and is not abstracted enough to let multiple VMs share it. The main thing separating the OS from the radio transmitter is that they're on separate chips (or separate blocks of the same chip) and the only physical connection between them is RAM and interrupts used to implement some message-passing protocol, and if you're very lucky there might be a correctly-configured IOMMU blocking the CPU from accessing the modem OS's region of RAM and vice versa; there's no hypervisor involved in their communication.<br> <p> <font class="QuotedText">&gt; probably each app will run in its own VM, with its own toy OS.</font><br> <p> How would that help with out-of-memory problems? If you run out of physical memory, killing an app VM doesn't sound much easier than killing an app process. Maybe you could reduce the risk of multi-page allocation failures caused by fragmentation if you overcommit and give each VM very large amounts of guest-physical address space, so it can easily find guest-physically-contiguous pages, but that'd only really work if the guests can return unused pages to the host with single-page granularity, which I assume isn't easy. 
(It also seems quite hacky to use a VM just so the kernel can pretend it's doing physical allocations when really they're virtual allocations; surely it'd be better to make the kernel know it's doing virtual allocations and not need the VM at all). What real benefits would come from using per-app VMs?<br> </div> Sun, 21 May 2017 14:50:56 +0000 Revisiting "too small to fail" https://lwn.net/Articles/723403/ https://lwn.net/Articles/723403/ vbabka <div class="FormattedComment"> You mean kvmalloc? That can help for allocations larger than a single page, yeah, otherwise not.<br> </div> Sun, 21 May 2017 13:56:02 +0000 Revisiting "too small to fail" https://lwn.net/Articles/723399/ https://lwn.net/Articles/723399/ vbabka <div class="FormattedComment"> Yes, the default is strictly speaking "may fail", but only under very specific conditions - as Michal Hocko mentioned in the thread, e.g. when the task itself is selected as OOM victim. This makes any error handling even more rarely executed, so even less likely to trust. So statistically speaking, the default is NOFAIL. I'm not proposing NOFAIL to become the real default. I'd rather see "may fail" the real default, but just flipping the switch and making the existing error handling more likely to be executed, would be too dangerous IMHO.<br> <p> </div> Sun, 21 May 2017 13:47:49 +0000 Revisiting "too small to fail" https://lwn.net/Articles/723389/ https://lwn.net/Articles/723389/ ncm <div class="FormattedComment"> Are you shocked? Yes, it's shocking, as it was to many in 2014. But people can get used to anything.<br> <p> Version 7 UNIX would ungracefully panic (i.e. crash) on out-of-memory, on the assumption that rebooting would get you back to working again sooner than muddling along. The code to handle failures wouldn't have fit anyway.<br> <p> Nowadays the kernel is likely to be running on a hypervisor, meaning it's really just a user-space program itself, but with pretensions. 
Killing it and starting ("spinning up") another is a reasonable alternative to muddling along. <br> <p> Note that your phone OS is almost certainly running under a hypervisor, and has no direct physical access to the radio transmitter. I don't think any phone runs multiple VMs yet, but it is only a matter of time. Then, probably each app will run in its own VM, with its own toy OS.<br> <p> It will be interesting when the spooks' exploits to suborn your phone hypervisor leak out.<br> </div> Sun, 21 May 2017 05:13:41 +0000 Revisiting "too small to fail" https://lwn.net/Articles/723384/ https://lwn.net/Articles/723384/ pbonzini <div class="FormattedComment"> Heavier use of kvalloc is probably a good idea, since it uses __GFP_NORETRY internally.<br> </div> Sat, 20 May 2017 20:44:04 +0000 Putting the problem into perspective https://lwn.net/Articles/723383/ https://lwn.net/Articles/723383/ ebiederm <div class="FormattedComment"> Several years ago I ran into the problem of code trying forever for small allocations.<br> The fix of the immediate symptoms was: 96c7a2ff2150 ("fs/file.c:fdtable: avoid triggering OOMs from alloc_fdmem")<br> <p> The basic issue was that the file table expansion code was making a 32KiB allocation. Which is the maximum size at which the code retries forever.<br> <p> Except the code doesn't exactly try forever. Instead of failing the allocation the code instead triggers the OOM killer. Which in effect shifts where the failure was happening. In the case I was dealing with this caused the OOM killer to be triggered on a system with roughly 4GiB free memory. The problem was that there were no chunks of memory 32KiB large, and at that point it had no way to defragment the memory to make a 32KiB chunk available.<br> <p> The file table code in question had a fallback to handle a large page allocation failure.
The code performs a vmalloc instead of a kmalloc.<br> <p> Which demonstrates two things.<br> - That all allocations &lt;= PAGE_ALLOC_COSTLY_ORDER won't fail because such pages will always be available is observably wrong. <br> - That there is actually harm in retrying forever on some code paths. As my fix demonstrated, the retrying-forever heuristic took a system that would have stayed up and caused it to crash.<br> <p> That said I don't argue that on most code paths retrying forever is generally harmless as many times there isn't much that can be done except return an error to userspace.<br> <p> </div> Sat, 20 May 2017 19:15:15 +0000 Revisiting "too small to fail" https://lwn.net/Articles/723382/ https://lwn.net/Articles/723382/ jhoblitt <div class="FormattedComment"> I'm not sure I'm following; isn't "may fail" the default? Are you proposing that __GFP_NOFAIL become the default behavior?<br> </div> Sat, 20 May 2017 18:02:13 +0000 Revisiting "too small to fail" https://lwn.net/Articles/723364/ https://lwn.net/Articles/723364/ vbabka <div class="FormattedComment"> It's possible we'll have to introduce __GFP_MAYFAIL after all, just so we can move forward. As much as I'll hate the churn this will cause - if we ever realize that everything is marked MAYFAIL or NOFAIL, we can change the default to MAYFAIL and drop the flag again. Is it the unavoidable price for the safest course?<br> </div> Sat, 20 May 2017 14:38:22 +0000 Revisiting "too small to fail" https://lwn.net/Articles/723363/ https://lwn.net/Articles/723363/ corbet So, then, what <i>is</i> the right approach to take here? Just accept the status quo? Sat, 20 May 2017 14:30:45 +0000 Revisiting "too small to fail" https://lwn.net/Articles/723362/ https://lwn.net/Articles/723362/ vbabka <div class="FormattedComment"> Ugh, is somebody purposely sprinkling __GFP_NORETRY just to allow non-costly-order requests to fail?
I wouldn't recommend that, as it also prevents reclaim and compaction from retrying (with increasing priority for compaction). The failure can thus be premature, especially for non-zero orders. It's fine for opportunistic allocation attempts that e.g. have a fallback, but it's IMHO not a good idea to start massively marking all allocations that have an error path with __GFP_NORETRY.<br> </div> Sat, 20 May 2017 14:16:22 +0000 Revisiting "too small to fail" https://lwn.net/Articles/723361/ https://lwn.net/Articles/723361/ arjan <div class="FormattedComment"> All the error paths also add to the code size in the binary; from that angle, a non-failing kmalloc() is actually a very nice thing, especially for the smaller side of embedded or cloud.<br> </div> Sat, 20 May 2017 14:01:39 +0000