LWN: Comments on "Improved block-layer error handling" https://lwn.net/Articles/724307/ This is a special feed containing comments posted to the individual LWN article titled "Improved block-layer error handling". Improved block-layer error handling https://lwn.net/Articles/831530/ https://lwn.net/Articles/831530/ nix <div class="FormattedComment"> Yeah, that works for some cases -- but the other thing fsync does, other than making sure all the fs I/O errors are reported, is to ensure that the thing is entirely on cold storage in case of power failure. An -EIO handler only deals with one of those problems. (In practice, you&#x27;d probably want the thing containing the -EIO handler to *also* do an fsync itself, and for the -EIO handling machinery to suppress fsyncs in its children, or something like that, so unmodified children could be run.)<br> </div> Mon, 14 Sep 2020 17:20:26 +0000 Improved block-layer error handling https://lwn.net/Articles/828726/ https://lwn.net/Articles/828726/ flussence <div class="FormattedComment"> I think the clue there is in the name “Cache”, isn&#x27;t it?<br> </div> Thu, 13 Aug 2020 11:08:12 +0000 Improved block-layer error handling https://lwn.net/Articles/828317/ https://lwn.net/Articles/828317/ Wol <div class="FormattedComment"> <font class="QuotedText">&gt; or just data that&#x27;s low value to begin with (downloaded Docker containers? nosql databases?)</font><br> <p> Why are nosql databases low value? Actually, nosql databases usually have a far higher signal/noise ratio - I converted a database from nosql to sql, and I think the size of the db DOUBLED.<br> <p> Not only do nosql databases contain much more data per megabyte than relational ones, but they tend to be much faster - it&#x27;s an old story, but I remember reports of a company converting from UniVerse to (sn)Oracle, and it took SIX MONTHS for the consultants to get a Snoracle query (running on a twin Xeon) to outperform the old system running on a Pentium 90.<br> <p> Or the &quot;request for bids&quot; put out by some University Astronomy department, that wanted a system to process 100K tpm. snOracle had to &quot;cheat&quot; to meet the target - delayed indexing, a couple of other tricks - while Cache had no trouble hitting 250K tpm.<br> <p> RDBMSs are fundamentally inefficient, due to limitations in the relational model itself ... (just try and *store* a list in an rdbms).<br> <p> Cheers,<br> Wol<br> </div> Fri, 07 Aug 2020 20:02:09 +0000 Improved block-layer error handling https://lwn.net/Articles/828247/ https://lwn.net/Articles/828247/ flussence <div class="FormattedComment"> This is an old subject, but here&#x27;s a proposal, which I promise I spent more than 3 minutes thinking up: an fsync cgroup controller. Add a per-cgroup setting that can suppress in-situ sync attempts, like libeatmydata does, and/or pause all timer-based disk writeback unless memory pressure dictates, like laptop_mode but not global.<br> <p> When the last process in the group exits, it syncs any remaining dirty buffers touched by the process tree - in this example they&#x27;d be build artifacts, but they could be overly-paranoid software that fsyncs far too much (apt-get used to, foldingathome is awful on rotational disks), or just data that&#x27;s low value to begin with (downloaded Docker containers?
nosql databases?)<br> <p> And once we have that in place and people using it, extending it to use filesystem-native transactions (wherever they exist) seems like an obvious next move. :-)<br> </div> Fri, 07 Aug 2020 11:14:21 +0000 Improved block-layer error handling https://lwn.net/Articles/828117/ https://lwn.net/Articles/828117/ pskocik <div class="FormattedComment"> Off the top of my head, I think an easy (?) hack that wouldn&#x27;t require modifying each application that might contribute to a possibly quite complex project build could be to add to the kernel a mechanism (a syscall, or an open on a special device) whereby the parent process of the build could request to be signal-notified, whenever an IO failure happens, of a write error in any of the IO writes that its children (recursively) have issued. Presumably the parent process could then cancel the build and remove all build products in order to prevent a corrupted build. Another syscall (or perhaps an fsync on the fd from the special device) could be used by the build supervisor process when all its children have finished (with no signal generated) to wait on any IO requests generated by its children recursively.<br> </div> Wed, 05 Aug 2020 15:39:24 +0000 Improved block-layer error handling https://lwn.net/Articles/724638/ https://lwn.net/Articles/724638/ zblaxell <div class="FormattedComment"> <font class="QuotedText">&gt; POSIX provides no way for applications to say 'hey, fs, I want integrity from this, thank you'</font><br> <p> Nor does it need one. POSIX should assume integrity by default unless applications say the opposite. One way applications can do that is by not checking any system call return values.<br> <p> <font class="QuotedText">&gt; POSIX also provides no way to say 'hey, fs, this file was written but failed integrity checks'</font><br> <p> I don't think any changes to POSIX are required. We already have most of this in existing filesystems, just not in most existing filesystems.<br> <p> In cases like compiles, where the writing application has completely disappeared before the block writes even start, there's no process to notify about the failure at the time the failure is detected. fsync() return behavior is irrelevant to this case--*every* system call, even _exit, returns before *any* error information is available. We want compiles to be fast, so we don't want to change this. A different solution is required. Note that reporting errors through fsync() is not wrong--it's just not applicable to this case.<br> <p> For compiles we want to get the block-level error information passed from one producing process to another consuming process when the processes communicate through a filesystem. So let's do exactly that: If a block write fails, the filesystem should update its metadata to say "these data blocks were not written successfully and contain garbage now." Future reads of affected logical offsets of affected inodes should return EIO until the data is replaced by new (successful) writes, or the affected blocks are removed from the file by truncate, or the file is entirely deleted. If the filesystem metadata update fails too, move up the hierarchy (block &gt; inode &gt; subvol &gt; planet &gt; constellation &gt; whatever) until the entire FS is set readonly and/or marked with errors for the next fsck to clean up by brute force.<br> <p> Note that this scheme is different from block checksums.
The behavior is similar, but block checksums are used to detect read errors (successful write followed by provably incorrect read), not write errors (where the write itself fails and the disk contents are unknown, possibly correct, possibly incorrect with hash collision). Checksums would not be an appropriate way to implement this. The existing prealloc mechanism in many filesystems could be extended to return EIO instead of zero data on reads. Prealloc already has most of the desired behavior wrt block allocation and overwrites.<br> <p> <font class="QuotedText">&gt; EIO is, ah, likely to be misinterpreted by essentially everything</font><br> <p> I'm not sure how EIO could be misinterpreted in this context. The application is asking for data, and the filesystem is saying "you can't have that data because an IO-related failure occurred," so what part of EIO is misinterpreted exactly? What application (other than a general disk-health monitoring application, which could get detailed block error semantics through a dedicated kernel notification mechanism instead) would care about lower-level details, and which details would it use?<br> <p> Also note EIO already happens in most filesystems, so we're not talking theoretically here. Most applications (even linkers), if they notice errors at all (**), notice EIO and do something sensible when they see it (*). This produces much, much more predictable results than just throwing random disk garbage into applications and betting they'll notice.<br> <p> (*) interesting side note: linkers don't read all of the data in their input files, and will happily ignore EIO if it only occurs in the areas of files they don't read. Maybe not the best example case for a "data integrity" discussion. ;)<br> <p> (**) for many years, GCC's as and ld didn't even notice ENOSPC, and would silently produce garbage binaries when the disk was full (maybe these would be detected by the linker later on...maybe not). Arguably we should also mark inodes with a persistent error bit if there is an ENOSPC while writing to them, but that *is* a major change which will surprise ordinary POSIX applications.<br> </div> Mon, 05 Jun 2017 18:51:48 +0000 Multiple drives https://lwn.net/Articles/724629/ https://lwn.net/Articles/724629/ jlayton <div class="FormattedComment"> The patchset actually initializes the errseq_t in struct file to the value of the mapping's errseq_t at open time. So, in principle you shouldn't see errors that occurred prior to your open.<br> <p> How mixed buffered and direct I/O are handled is not really addressed (or changed for that matter) in this set. Yes, you will quite likely see an error on an O_DIRECT fsync, but it's quite likely that you'll see that today anyway. Most filesystems make no distinction about whether you opened the fd with O_DIRECT or not. They flush the pagecache and inode anyway just like they would with a buffered fd.<br> <p> The flip side of this (and the scarier problem) is that with the current code, it's likely that that fsync on the O_DIRECT fd would end up clearing the error such that a later fsync on the buffered fd wouldn't ever see it. That problem at least should be addressed with these changes.<br> </div> Mon, 05 Jun 2017 16:15:34 +0000 perhaps running out of inodes could be taken "more seriously"? 
https://lwn.net/Articles/724635/ https://lwn.net/Articles/724635/ nybble41 <div class="FormattedComment"> <font class="QuotedText">&gt; So adding more errors is not only not noncompliant, it is both explicitly permitted and very common.</font><br> <p> Yes, for *new* error conditions not specified by POSIX. However:<br> <p> <font class="QuotedText">&gt; Implementations shall not generate a different error number from one required by this volume of POSIX.1-2008 for an error condition described in this volume of POSIX.1-2008, ...</font><br> <p> The error list for the open() and openat() system calls specifies ENOSPC as follows:<br> <p> <font class="QuotedText">&gt; [ENOSPC]</font><br> <font class="QuotedText">&gt; The directory or file system that would contain the new file cannot be expanded, the file does not exist, and O_CREAT is specified.</font><br> <p> So if "the filesystem ... cannot be expanded" is read to include the "out of inodes" condition (a reasonable interpretation IMHO) then POSIX requires open() to return ENOSPC for this condition, and not some other error code.<br> </div> Mon, 05 Jun 2017 16:15:00 +0000 Improved block-layer error handling https://lwn.net/Articles/724598/ https://lwn.net/Articles/724598/ nix <div class="FormattedComment"> Aha. Your distinction makes sense: I was indeed conflating these, and fsync() does indeed provide safety, not integrity. Filesystems *are* increasingly providing integrity support, because disk vendors are not exactly brilliant at providing it (how many vendors seriously try not to wreck their SSDs' contents on power failure: only Intel? and even they don't on all parts).<br> <p> Of course, POSIX provides no way for applications to say 'hey, fs, I want integrity from this, thank you', and have it do whatever checksumming it can so the applications don't all need to reimplement it. This might make sense: it seems like something that could probably be a per-filesystem attribute, or at least a whole-directory-tree attribute or something. Of course, POSIX also provides no way to say 'hey, fs, this file was written but failed integrity checks': -EIO is, ah, likely to be misinterpreted by essentially everything. So while it would be nice to have app-level integrity checking, I doubt we can get there from here: we do need to do it invisibly, below the visible surface of the system.<br> </div> Mon, 05 Jun 2017 12:05:23 +0000 perhaps running out of inodes could be taken "more seriously"? https://lwn.net/Articles/724597/ https://lwn.net/Articles/724597/ nix <blockquote> Remember that the possible error codes for syscalls were defined by POSIX, so simply adding an EOUTOFINODES would be non-compliant and could easily do more harm than good, because in practice, ENOSPC is a good fit for "out of inodes" and software might actually expect it to cover both cases </blockquote> It might well do more harm than good, but the first part of your statement is just wrong. POSIX.1 2008 states (and all previous versions have similar wording): <blockquote> Implementations may support additional errors not included in this list, may generate errors included in this list under circumstances other than those described here, or may contain extensions or limitations that prevent some errors from occurring. <p> The ERRORS section on each reference page specifies which error conditions shall be detected by all implementations (``shall fail") and which may be optionally detected by an implementation (``may fail").
If no error condition is detected, the action requested shall be successful. If an error condition is detected, the action requested may have been partially performed, unless otherwise stated. <p> Implementations may generate error numbers listed here under circumstances other than those described, if and only if all those error conditions can always be treated identically to the error conditions as described in this volume of POSIX.1-2008. Implementations shall not generate a different error number from one required by this volume of POSIX.1-2008 for an error condition described in this volume of POSIX.1-2008, but may generate additional errors unless explicitly disallowed for a particular function. </blockquote> So adding more errors is not only not noncompliant, it is both explicitly permitted and very common. Mon, 05 Jun 2017 11:55:28 +0000 Multiple drives https://lwn.net/Articles/724596/ https://lwn.net/Articles/724596/ pbonzini <div class="FormattedComment"> <font class="QuotedText">&gt; I don't quite see why you'd want to avoid reporting errors on an O_DIRECT fd in either case though. In both cases, it's possible that data previously written via that O_DIRECT file descriptor didn't make it to disk, so wouldn't you want to inform the application?</font><br> <p> I certainly would. :) However, I'm worried about the application using O_DIRECT seeing errors that happened while accessing the file via another fd.<br> <p> In fact, if I understand correctly, those errors could even have happened before the O_DIRECT file descriptor had even been opened, if they have never been reported to userspace.<br> </div> Mon, 05 Jun 2017 11:55:04 +0000 Multiple drives https://lwn.net/Articles/724594/ https://lwn.net/Articles/724594/ jlayton <div class="FormattedComment"> Thanks, that makes sense.<br> <p> I don't quite see why you'd want to avoid reporting errors on an O_DIRECT fd in either case though. In both cases, it's possible that data previously written via that O_DIRECT file descriptor didn't make it to disk, so wouldn't you want to inform the application?<br> <p> The big change here is that reporting those errors on the O_DIRECT fd won't prevent someone else from seeing those errors via another fd. So, I don't quite see why it'd be desirable to avoid reporting it on the O_DIRECT one.<br> </div> Mon, 05 Jun 2017 11:44:01 +0000 Multiple drives https://lwn.net/Articles/724570/ https://lwn.net/Articles/724570/ pbonzini <div class="FormattedComment"> Also to ensure that the data is safe, because writes can stop at the disk cache and an fsync is needed to ensure it reaches the platters or the flash. This is represented as a REQ_FLUSH request (while metadata writes are often REQ_FUA, i.e. force unit access). REQ_FLUSH applies to all completed writes *before* the flush, while FUA applies only to the write that carried the flag.<br> </div> Sun, 04 Jun 2017 19:32:18 +0000 perhaps running out of inodes could be taken "more seriously"?
https://lwn.net/Articles/724559/ https://lwn.net/Articles/724559/ MarcB <div class="FormattedComment"> Remember that the possible error codes for syscalls were defined by POSIX, so simply adding an EOUTOFINODES would be non-compliant and could easily do more harm than good, because in practice, ENOSPC is a good fit for "out of inodes" and software might actually expect it to cover both cases:<br> <p> If the software is some kind of cache, discarding the files that are least relevant is a proper course of action for both kinds of ENOSPC.<br> If the software is some kind of archival system, moving the oldest files to the next tier of storage will also help in both cases.<br> <p> If the software can't freely discard or move data, all it can do is scream for help anyway.<br> <p> Also, an ENOSPC due to lack of inodes will usually happen on open() while an ENOSPC due to lack of disk space will usually happen on write() or similar.<br> So applications could already translate this to proper error messages. It is common for the same error code to have different meanings for different syscalls, and developers should know this.<br> <p> <p> Of course, ideally filesystems would solve this problem completely. In fact, some do: btrfs has an upper limit of 2^64 inodes, as do XFS and ZFS (might be 2^48).<br> btrfs is fully dynamic, i.e. each btrfs filesystem that is large enough to hold the inode information can in fact contain 2^64 inodes. XFS is dynamic enough in practice (make sure to use "inode64", though. Otherwise inodes can only be stored in the lowest 1 TB, and that space can run out if also used for file data - been there, done that). Even NTFS allows 2^32 and is also fully dynamic.<br> <p> The ext family is the big exception. Theoretically, the limit is also 2^32, but it cannot allocate space for inodes dynamically, and thus uses much lower limits by default. Otherwise, each inode would consume 256 bytes, even if unused.<br> </div> Sun, 04 Jun 2017 14:15:02 +0000 perhaps running out of inodes could be taken "more seriously"? https://lwn.net/Articles/724555/ https://lwn.net/Articles/724555/ matthias <div class="FormattedComment"> Even XFS can run out of inodes. Inode numbers are mapped to blocks by a very simple mapping; roughly, every i-th block can hold inodes. The possible inodes are just numbered starting from one. Once these blocks are filled (with inodes or data), XFS cannot create new inodes. Changing the mapping would change every single inode number.<br> <p> We once had the following problem after growing a filesystem. The standard at that time was to use only 32-bit inode numbers. After growing the filesystem, the 32-bit inode numbers were all in the already filled lower part of the filesystem.(*) Thus no new inodes could be created. It took a while to find that one, with only the meaningful message "No space left on device." to go on. Luckily it was a 64-bit system. Thus, we could just switch to 64-bit inode numbers. The other solution would have been to recreate the filesystem, not the quickest solution with a 56 TB filesystem.<br> <p> That said, the circumstances under which XFS runs out of inodes are very rare. So it would be very important to have meaningful error messages, to notice that one of these rare circumstances just happened.<br> <p> (*) On fs creation XFS usually chooses the number i to be such that all possible inodes have 32-bit numbers. After growing, this condition was not satisfied any more, as this number cannot be changed.
On 32-bit systems, one would need to set this number i manually at fs creation time, if one wants to have the possibility to grow the filesystem.<br> </div> Sun, 04 Jun 2017 05:09:39 +0000 Improved block-layer error handling https://lwn.net/Articles/724554/ https://lwn.net/Articles/724554/ neilbrown <div class="FormattedComment"> I think you are conflating two distinct but similar concepts - safety and integrity.<br> <p> On the one hand you have applications that need to know that the data they have written is "safe". They need to know this so that they can tell someone. Maybe the editor tells the user "the file has been saved". Maybe the email system tells its peer "I have that email now, you can discard your copy". Maybe the database store is telling the database journal "that information is safe".<br> In each of these cases you need fsync() because you need to tell someone that the data is safe.<br> <p> The C compiler or assembler doesn't need to tell anyone. But the linker does, as you say, want to know whether the file it is loading is the same as the file that the assembler wrote out. It doesn't care if the data was safe or not. It is perfectly acceptable for the linker to say "sorry, data was corrupt" (as long as it doesn't do it too often). What is not so acceptable is for the linker to provide a complete binary which contains corruption.<br> <p> In the first case you want data safety - I know I can read it back if I want to. In the second you want data integrity - I know that this data is (or isn't) the data that was written.<br> <p> I don't believe the OS has any role in providing integrity, beyond best-effort to save and return data as faithfully as possible. If an application really cares, the application needs to add a checksum or crypto-hash or whatever. git does this. backups do this. gzip does this. I'm sure that if the cost/benefit analysis suggested that the C compiler should do this, then it would be easy enough to add.<br> <p> </div> Sun, 04 Jun 2017 04:26:16 +0000 Multiple drives https://lwn.net/Articles/724553/ https://lwn.net/Articles/724553/ neilbrown <div class="FormattedComment"> <font class="QuotedText">&gt; Now, that said...one wonders why an application would call fsync on an O_DIRECT fd?</font><br> <p> To ensure that the metadata is safe? I think you need O_SYNC|O_DIRECT if you want to not use fsync at all.<br> See "man 2 open"<br> <p> </div> Sun, 04 Jun 2017 04:02:36 +0000 perhaps running out of inodes could be taken "more seriously"? https://lwn.net/Articles/724552/ https://lwn.net/Articles/724552/ Cyberax <div class="FormattedComment"> Exceptions imply some kind of a type system. I'd settle for something like: "error.filesystem.io.disk-space/required=1233/available=123" where I can use simple prefix matching to get a more and more detailed error.<br> </div> Sun, 04 Jun 2017 03:39:12 +0000 perhaps running out of inodes could be taken "more seriously"? https://lwn.net/Articles/724550/ https://lwn.net/Articles/724550/ rossmohax <div class="FormattedComment"> you don't need exceptions to have error inheritance.<br> </div> Sun, 04 Jun 2017 01:42:49 +0000 perhaps running out of inodes could be taken "more seriously"? https://lwn.net/Articles/724549/ https://lwn.net/Articles/724549/ rossmohax <div class="FormattedComment"> that is exactly what XFS is doing: inodes are allocated dynamically, and you can never run out of them as long as you have free space.
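As an aside on the error-format subthread just above: rossmohax is right that inheritance does not require exceptions; the kind of hierarchical string Cyberax sketches already lets a caller be exactly as coarse or as precise as it likes via prefix matching. A minimal sketch, with the "error.filesystem..." strings purely hypothetical (no kernel interface reports errors in this form):

<pre>
/*
 * Prefix matching on a hierarchical error string, as suggested in the
 * comments above.  The string format is hypothetical.
 */
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

static bool error_is(const char *err, const char *prefix)
{
    return strncmp(err, prefix, strlen(prefix)) == 0;
}

int main(void)
{
    const char *err = "error.filesystem.io.no-inodes/required=1/available=0";

    if (error_is(err, "error.filesystem.io.no-inodes"))
        puts("precise: out of inodes, not out of blocks");
    else if (error_is(err, "error.filesystem.io"))
        puts("coarser: some I/O-level resource problem");
    else if (error_is(err, "error.filesystem"))
        puts("coarsest: something filesystem-related failed");
    return 0;
}
</pre>

A program that only cares about "an IO exception", in matthias's terms, simply matches a shorter prefix.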
Try using XFS instead of ext4, it is awesome<br> </div> Sun, 04 Jun 2017 01:39:57 +0000 perhaps running out of inodes could be taken "more seriously"? https://lwn.net/Articles/724545/ https://lwn.net/Articles/724545/ Richard_J_Neill <div class="FormattedComment"> Yes, you're right, except that the common "no space left on device" message is actually very misleading when there is in fact plenty of space.<br> <p> Also, while the sysadmin can add extra monitoring and debugging, surely the point of a reliable system is to minimise the chance of human error.<br> We are used to the abstraction of storage being "somewhere you can fill up with data"; the very existence of inodes should be no more the concern of the average programmer/sysadmin than the specifics of which pointer has which address... it should be "the computer's" problem, not "the operator's problem". If the computer is going to break that rule, and do so rarely, but catastrophically, the least it can do is to fail "noisily".<br> <p> Anyway... in these days of LVM and resizeable volumes, why shouldn't the filesystem be able to automatically notice that it has lots of space but too few inodes, and automatically create more inodes as needed? <br> </div> Sat, 03 Jun 2017 23:16:28 +0000 Improved block-layer error handling https://lwn.net/Articles/724544/ https://lwn.net/Articles/724544/ nix <div class="FormattedComment"> Well, the clear intention of journalled md is that SSDs with decent powerfail behaviour be used (good thing one such does exist: it even tells you in the SMART data if its capacitors are failing). It's also frankly damn stupid that any storage devices exist that can brick themselves on not exactly rare events (even places with excellent grids have a power failure or two a decade).<br> </div> Sat, 03 Jun 2017 22:23:32 +0000 Improved block-layer error handling https://lwn.net/Articles/724540/ https://lwn.net/Articles/724540/ jhoblitt <div class="FormattedComment"> If power-loss is a failure mode of significant concern for a non-distributed system, typically a "RAID controller" (may be in JBOD mode) with a BBU is used. That seems like a pretty reasonable engineering compromise as long as we don't have large quantities of non-volatile memory. If we had massive amounts of high-speed NVM, we probably wouldn't even need to worry about fsync()ing at all.<br> </div> Sat, 03 Jun 2017 15:23:55 +0000 Improved block-layer error handling https://lwn.net/Articles/724539/ https://lwn.net/Articles/724539/ nix <blockquote> not clear to me if wear-leveling SSDs work for the case where fsync() is immediately followed by power-loss </blockquote> I believe that the only SSDs that currently even try to reliably handle power loss without at least the possibility of massive data loss, corruption, or outright device failure are Intel's fairly costly datacentre parts. So, 'no'. :( Sat, 03 Jun 2017 14:38:42 +0000 Improved block-layer error handling https://lwn.net/Articles/724536/ https://lwn.net/Articles/724536/ gdt <p>The rules for "correct" use of fsync() by applications' programmers are already not useful. If the program wasn't started interactively then it's best not to call fsync(), as a few thousand fsync() calls in a short time lead to substantial jitter. How you can tell if a program is being run interactively is no longer straightforward (is that HTTP POST from a person or an API).
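The traditional way for a program to guess whether it was started interactively is a tty check; as gdt notes, that answers the wrong question once the "user" is on the far side of an HTTP request. A minimal sketch of the heuristic and its blind spot (the policy framing is invented for illustration):

<pre>
/*
 * The classic "am I interactive?" guess: stdin and stdout are both
 * terminals.  A web application handling a POST on behalf of a person is
 * not attached to a tty at all, so an "only fsync when interactive" policy
 * keyed off this check quietly degrades to "never fsync" there.
 */
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

static bool probably_interactive(void)
{
    return isatty(STDIN_FILENO) && isatty(STDOUT_FILENO);
}

int main(void)
{
    printf("probably interactive: %s\n",
           probably_interactive() ? "yes" : "no");
    return 0;
}
</pre>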
So there is a risk v benefit balance in programmers' minds when using fsync() for common file I/O, with a strong tendency towards "no" -- partly because of advocacy from kernel programmers, but also because fsync() historically works less well than suggested by the man page (eg, not clear to me if wear-leveling SSDs work for the case where fsync() is immediately followed by power-loss).</p> <p>It's worse for library authors, as they have no idea of the significance of the data, and so no idea whether to implement the notion that "applications that care about their data will occasionally call fsync()". You might argue that databases should use fsync(). You'll recall that Firefox had an issue with adding unwelcome latency by storing bookmarks in a SQLite database which issued fsync().</p> <p>For these reasons, even if they "care" for the data, a programmer might well choose not to call fsync() but simply close() the file and let the system proceed without added latency. On the plus side, applications' programmers already accept some asynchronicity in read(), write() and close() error reporting, and perhaps this could be further extended.</p> Sat, 03 Jun 2017 13:57:25 +0000 Improved block-layer error handling https://lwn.net/Articles/724534/ https://lwn.net/Articles/724534/ nix <blockquote> He adds a mechanism that is based on the idea that applications that care about their data will occasionally call fsync() to ensure that said data has made it to persistent storage. </blockquote> I keep hearing this, but the problem is that not only is this not true, you don't want it to be true and would probably refuse to use any system on which it was true because its performance would be appalling. Obviously, yes, text editors should be (and are) very careful about fsync()ing your six hours of work now you finally remembered to save it -- but let's pick on another favourite test load of kernel hackers, compiling a kernel. It would be bad if a chunk of data was omitted from the middle of an object file, right? So clearly the assembler "cares about" its data in this sense. But, equally, an assembler that called fsync() on its output would be the subject of copious vile swearing: you don't want your massive 64-way compile to be fsync()ing all over the place, not even in a filesystem better-behaved than ext3 (where fsync sometimes == sync()). You want any sync to happen at the end, after everything is linked, and you're probably happy if nothing syncs at all much of the time (for test compiles, if the power goes out, you'll just rebuild). However, that doesn't mean you're happy if an I/O error replaces crucial hunks of the kernel with \0! <p> This is just the first example that springs to mind. There are probably many more. One thing that's become clear to me as I classify everything on my machines into 'I care about this, RAID-6 and bcache it' and 'I don't care about this, chuck it on an unjournalled RAID-0' is that not only is there currently no way for applications to indicate what is important in this sense, there is also *no way for most of them to know at all*. Whether a given file write is important is a property of what the user plans to do with the file later. <p> (Another kernel-compile-variety case: I do a lot of quick checks of enterprise kernels, with all their modules. Each module_install writes about 3--4GiB of module data out to /lib/modules/$whatever/kernel. Obviously that's an important write, right? If it goes wrong the machine probably won't boot!
Only it's not: 90% of those modules are never referenced again, and the whole lot is going onto a loopback filesystem on that RAID-0 array because I'm actually only going to use it once, for testing, then throw it away. There is no way the assembler, the linker, install(1), or the kernel makefile could know that, but if it didn't know that it might, e.g. in my case, decide to cache all 3GiB on an SSD, or journal it all through the RAID journal, or fsync() each file individually, or something. And, of course, in most cases even the users don't bother to make this sort of determination, or don't have the knowledge to, even though they're the only ones who could.) <p> I do not see an easy way out of this. :( Sat, 03 Jun 2017 12:27:41 +0000 perhaps running out of inodes could be taken "more seriously"? https://lwn.net/Articles/724531/ https://lwn.net/Articles/724531/ itvirta <div class="FormattedComment"> Now that you have learned about the issue of inodes running out, you know to add it to your monitoring.<br> It's very much the same as running out of disk space, which isn't that uncommon with some logging<br> getting out of hand either. Both can be checked with `df`.<br> <p> Also, there's the possibility of distributing unrelated data on separate file systems, or using quotas to<br> protect the rest of the system from an application getting out of hand.<br> <p> </div> Sat, 03 Jun 2017 10:32:41 +0000 Multiple drives https://lwn.net/Articles/724530/ https://lwn.net/Articles/724530/ jlayton <div class="FormattedComment"> That's really not related to the changes we're making here, but it is possible to do so.<br> <p> Ultimately, an fsync syscall returns whatever the filesystem's fsync operation returns, so if the filesystem wants to check for O_DIRECT and always return 0 without flushing, then it can do so today.<br> <p> Now, that said...one wonders why an application would call fsync on an O_DIRECT fd?<br> </div> Sat, 03 Jun 2017 09:53:30 +0000 Improved block-layer error handling https://lwn.net/Articles/724528/ https://lwn.net/Articles/724528/ jlayton <div class="FormattedComment"> They should float up.<br> <p> fsync is called on a file descriptor, which is ultimately an open file on some sort of filesystem. When there is an error, the filesystem is ultimately responsible for marking the mapping for the inode with an error (sometimes this is handled by lower-layer common code, like the buffer.c routines). When fsync is called, the filesystem should check for an error that has occurred since we last checked via the file, report it if there was one, and advance the file's errseq_t to the current value.<br> <p> Note that the way errors get recorded is not terribly different from what we do today. The big difference is in how we report errors at fsync time. Most of the changes to filesystems are in fsync here, though I am going through various parts of the kernel and trying to make sure that we're recording errors properly when they occur.<br> </div> Sat, 03 Jun 2017 09:38:11 +0000 perhaps running out of inodes could be taken "more seriously"? https://lwn.net/Articles/724526/ https://lwn.net/Articles/724526/ matthias <div class="FormattedComment"> Like the other commenters, I agree that running out of inodes should not be the kernel's problem. However, the error reporting could be improved. Returning ENOSPC when the actual problem is running out of inodes is misleading. The user has to know that the error number is also used for reasons other than "No space left on device".
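Since ENOSPC covers both exhausted blocks and exhausted inodes, it is worth noting that an application or monitoring script can tell the two apart after the fact with statvfs(3), which reports free blocks and free inodes alike (essentially the information `df` and `df -i` print). A minimal sketch; the closing heuristic assumes a filesystem with a fixed inode table, such as the ext family discussed in this subthread:

<pre>
/*
 * Distinguish "out of blocks" from "out of inodes" after an ENOSPC.
 * statvfs(3) exposes both counters; error handling is kept minimal.
 */
#include <stdio.h>
#include <sys/statvfs.h>

int main(int argc, char **argv)
{
    const char *path = argc > 1 ? argv[1] : ".";
    struct statvfs vfs;

    if (statvfs(path, &vfs) != 0) {
        perror("statvfs");
        return 1;
    }

    printf("%s: %llu of %llu blocks free, %llu of %llu inodes free\n",
           path,
           (unsigned long long)vfs.f_bavail, (unsigned long long)vfs.f_blocks,
           (unsigned long long)vfs.f_favail, (unsigned long long)vfs.f_files);

    if (vfs.f_favail == 0 && vfs.f_bavail > 0)
        puts("ENOSPC here almost certainly means inode exhaustion");
    return 0;
}
</pre>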
Today, many users probably do not even know that they can run out of inodes. Even if they know this in theory, they have to remember it when seeing ENOSPC.<br> <p> I would much prefer error reporting by exceptions. The type of the exception more or less corresponds to the error numbers and can be used by the program to determine how to react, but there is a string attached that can be passed up the call chain, which has meaningful information for the user. This way the program still gets the information contained in ENOSPC (actually, most programs are fine reacting to running out of space and running out of inodes in the same way), but the user who sees the error message knows instantly where to search for the problem. <br> <p> Adding type inheritance to the exceptions additionally allows the program to select how fine-grained the error information should be. Some programs are fine seeing an IO exception. Others want to differentiate whether the error is running out of resources or a real problem, and some might want to know the difference between running out of space and running out of inodes.<br> <p> </div> Sat, 03 Jun 2017 08:31:54 +0000 perhaps running out of inodes could be taken "more seriously"? https://lwn.net/Articles/724522/ https://lwn.net/Articles/724522/ MarcB <div class="FormattedComment"> I don't think running out of free inodes is conceptually different from running out of free space. Also, it is not a problem per se from the kernel's PoV: It cannot be prevented, it does not happen at random, it is not something exceptional at all.<br> It is just another resource exhaustion that user space has to deal with - and perhaps even is dealing with, so nothing is actually wrong.<br> <p> Also, this used to be much more common in the past, when many filesystems allowed much fewer inodes by default. So, perhaps some administrators simply have forgotten (or never learned) that inode exhaustion is a real thing.<br> <p> And diagnosing this - once you are aware that it can happen - is not harder than diagnosing "out of space" (in practice: even easier, as it is unlikely that large numbers of inodes are held by deleted but still-open files).<br> It can, and should, also be monitored just like free disk space.<br> <p> <p> </div> Sat, 03 Jun 2017 07:44:33 +0000 Multiple drives https://lwn.net/Articles/724519/ https://lwn.net/Articles/724519/ pbonzini <div class="FormattedComment"> Would it be possible to exclude O_DIRECT file descriptors from reporting writeback failures, or perhaps you are already doing that?<br> </div> Sat, 03 Jun 2017 06:17:20 +0000 perhaps running out of inodes could be taken "more seriously"? https://lwn.net/Articles/724517/ https://lwn.net/Articles/724517/ k8to <div class="FormattedComment"> The applications are all told ENOSPC in this situation, so lots of them should be complaining and some of that should be hitting logs.<br> <p> It's unclear to me that the kernel should also log for each such failure. It might be so noisy as to cause more breakage. I would want the system to do something like log when this situation is near-occurring and when it has occurred, in some throttled way, which suggests monitoring logic. Should that be implemented in-kernel or in userland?<br> </div> Sat, 03 Jun 2017 04:18:08 +0000 perhaps running out of inodes could be taken "more seriously"?
https://lwn.net/Articles/724510/ https://lwn.net/Articles/724510/ Richard_J_Neill <div class="FormattedComment"> We recently hit a bug where the disk had plenty of free space, but couldn't create new files, making the server unusable. It turned out we'd run out of inodes (due to a misbehaving web-app creating hundreds of 0-byte lock files per minute). It was really hard to diagnose this, because of the lack of any helpful messages. I'd have expected that, if the kernel encounters a hard error like this, it would have at least put something into dmesg or syslog (it didn't). The design philosophy seems to be that running out of inodes is more akin to a permissions error (i.e. nothing wrong with the system) than to a fatal disk error, and that, while even a trivial USB hotplug event generates lots of log traffic, an unusable root filesystem (from inode exhaustion) is deemed not important enough to merit a log message!<br> <p> </div> Sat, 03 Jun 2017 00:36:32 +0000 Improved block-layer error handling https://lwn.net/Articles/724509/ https://lwn.net/Articles/724509/ jhoblitt <div class="FormattedComment"> Will errors from dm-mapper, lvm, and/or luks float up, or will those abstraction layers essentially hide error reporting?<br> </div> Fri, 02 Jun 2017 23:55:58 +0000 Improved block-layer error handling https://lwn.net/Articles/724495/ https://lwn.net/Articles/724495/ jlayton <div class="FormattedComment"> To be clear, while I'm focusing on block-device based filesystems now, errseq_t based error handling is applicable for any sort of filesystem. I expect that almost all of them will end up being converted to use errseq_t for tracking errors, whether block-based or not.<br> </div> Fri, 02 Jun 2017 18:59:04 +0000 Multiple drives https://lwn.net/Articles/724492/ https://lwn.net/Articles/724492/ jlayton <div class="FormattedComment"> No. Errors are stored on a per-inode basis (well, per address_space, but most inodes have only a single address_space). A filesystem on /dev/sda would not have the same inodes as one on /dev/sdb, so that wouldn't occur.<br> </div> Fri, 02 Jun 2017 17:54:37 +0000 Multiple drives https://lwn.net/Articles/724493/ https://lwn.net/Articles/724493/ corbet An error will be returned only if the application calls <tt>fsync()</tt> on a file descriptor for a file that has experienced errors. Multiple drives are not an issue; errors should not propagate beyond the affected file even on a single drive. Fri, 02 Jun 2017 17:53:08 +0000 Multiple drives https://lwn.net/Articles/724491/ https://lwn.net/Articles/724491/ abatters <div class="FormattedComment"> What if there is a writeback error to a filesystem on /dev/sda, and an application does fsync() on a fd to a file on a filesystem on /dev/sdb? Would it get an error? I hope not.<br> </div> Fri, 02 Jun 2017 17:49:24 +0000
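Finally, to make the errseq_t mechanism discussed in the article and in jlayton's comments a little more concrete, here is a deliberately simplified, single-threaded userspace model of the idea: the mapping records an (error, sequence) pair, each open file samples that pair at open time, and fsync() reports an error only if the mapping has recorded one since the sample, advancing the sample so each error is reported once per file description. The struct and helper names below are invented for this sketch; the real kernel type in include/linux/errseq.h packs the error code, a counter, and a "seen" flag into a single atomic 32-bit value and lives in the address_space.

<pre>
/*
 * Toy model of errseq_t-style writeback error reporting.  Not kernel code:
 * no locking, no "seen" flag, and the types merely stand in for the real
 * struct address_space / struct file.
 */
#include <errno.h>
#include <stdio.h>

struct errseq  { int err; unsigned long seq; };
struct mapping { struct errseq wb_err; };                /* per-inode state */
struct ofile   { struct mapping *m; struct errseq since; };

static void mapping_set_error(struct mapping *m, int err)
{
    m->wb_err.err = err;
    m->wb_err.seq++;                     /* a new error "event" happened */
}

static void ofile_open(struct ofile *f, struct mapping *m)
{
    f->m = m;
    f->since = m->wb_err;                /* don't report errors that predate the open */
}

static int ofile_fsync(struct ofile *f)
{
    if (f->m->wb_err.seq == f->since.seq)
        return 0;                        /* nothing new since we last looked */
    f->since = f->m->wb_err;             /* advance: report it only once */
    return f->m->wb_err.err;
}

int main(void)
{
    struct mapping m = { { 0, 0 } };
    struct ofile a, b;

    ofile_open(&a, &m);
    mapping_set_error(&m, -EIO);         /* writeback of a's data fails */
    ofile_open(&b, &m);                  /* opened after the failure */

    printf("fsync(a) = %d\n", ofile_fsync(&a));   /* -EIO: sees the error */
    printf("fsync(a) = %d\n", ofile_fsync(&a));   /* 0: already reported */
    printf("fsync(b) = %d\n", ofile_fsync(&b));   /* 0: error predates the open */
    return 0;
}
</pre>

The output matches the behaviour corbet and jlayton describe in the comments above: the error is reported on the file description that was open when writeback failed, is not repeated on a second fsync(), and is not seen by a description opened afterwards or by files on other devices, since each mapping carries its own state.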