LWN: Comments on "A way to do atomic writes" https://lwn.net/Articles/789600/ This is a special feed containing comments posted to the individual LWN article titled "A way to do atomic writes". A way to do atomic writes https://lwn.net/Articles/970656/ https://lwn.net/Articles/970656/ BrucePerens <div class="FormattedComment"> 5 years later I don't think anything's been done about this. In addition to your concerns, linkat() could use a flag to atomically unlink an existing target file when creating the new link. That seems to be the missing piece to creating an anonymous temporary file (O_TMPFILE) and then atomically giving it a name.<br> </div> Sun, 21 Apr 2024 18:50:45 +0000 A way to do atomic writes https://lwn.net/Articles/856821/ https://lwn.net/Articles/856821/ aist <div class="FormattedComment"> Atomic operations on files don&#x27;t make much sense on their own, because they imply two things:<br> <p> 1. A model of concurrency.<br> 2. A system of relaxations of atomicity, to support different consistency/performance tradeoffs.<br> <p> Atomicity is not cheap and, what is much more important, it&#x27;s not (easily) composable. Because of that, pushing high-level semantics down to hardware (disks) will not work as expected. Elementary (1-4 blocks) atomic operations are more than enough to support high-level composable atomic semantics across many disks. BUT it&#x27;s very hard to have high-level (generic) atomic writes that are also parallel. Relational databases apparently provide fairly concurrent atomicity, but they rely on the fact that relational tables are unordered (sets of records), so it&#x27;s relatively easy to merge multiple parallel versions of the same table into a single one. Merging of ordered data structures like vectors (and files) is not defined in the general case (it&#x27;s application-defined).<br> <p> There are good single-writer-multiple-readers (SWMR) schemes that are light-weight, log-structured, wait-free, and atomic for readers and writers (see LMDB for an example), but they are inherently single-writer. So only one writer at a time can own the domain of atomicity (a single file, directory, or file-system). Readers are fully concurrent, though, both with each other and with the writer. SWMR is best suited to dynamic data-analytics applications because of its point-in-time semantics for readers (stable cursors, etc.).<br> <p> Multiple concurrent atomic writers (MWMR) are possible, but they are not wait-free the way SWMR is, have much higher operational overhead, and require atomic counters for (deterministic) garbage collection. And write-write conflict resolution is application-defined. So if we want an MWMR engine to be implemented at the device level, it will require pretty tight integration with applications, implying pretty complex APIs. Simply speaking, it isn&#x27;t worth the effort.<br> <p> Log-structured SWMR/MWMR may work well with single-block-scale atomic operations; they just need certain power-failure guarantees. They can be implemented entirely in userspace as services on top of asynchronous IO interfaces like io_uring. A partial emulation of the POSIX file API for legacy applications accessing the &quot;atomic data&quot; is also possible via FUSE.<br> <p> Adding complex high-level atomic semantics (especially multi-operation commits) to the POSIX API will create many more problems than atomics are intended to solve.<br> </div> Thu, 20 May 2021 20:33:44 +0000
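The linkat() flag BrucePerens asks for above still doesn't exist; the closest you can get today is the O_TMPFILE-plus-linkat() dance, finished off with a rename() precisely because linkat() refuses to replace an existing name. A minimal sketch (function and variable names are illustrative, error handling is omitted, and it assumes Linux 3.11+, procfs mounted, and that the directory and target live on the same filesystem):

<pre><code>#define _GNU_SOURCE
#include &lt;fcntl.h&gt;
#include &lt;stdio.h&gt;
#include &lt;unistd.h&gt;

/* Write a new file invisibly, then atomically publish it as `target`. */
int publish(const char *dir, const char *target, const void *buf, size_t len)
{
        int fd = open(dir, O_TMPFILE | O_WRONLY, 0644);  /* anonymous file, no name yet */
        write(fd, buf, len);
        fsync(fd);                          /* contents durable before they get a name */

        /* Give the anonymous file a name; linkat() won't overwrite an
         * existing path, so link to a unique temporary name first... */
        char proc[64], tmp[4096];
        snprintf(proc, sizeof proc, "/proc/self/fd/%d", fd);
        snprintf(tmp, sizeof tmp, "%s.tmp.%d", target, getpid());
        linkat(AT_FDCWD, proc, AT_FDCWD, tmp, AT_SYMLINK_FOLLOW);

        /* ...then atomically replace the old file.  This is the step a
         * "replace existing target" flag on linkat() would make unnecessary.
         * An fsync() of the directory is still needed if the new name itself
         * must survive a crash. */
        rename(tmp, target);
        close(fd);
        return 0;
}</code></pre>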
A way to do atomic writes https://lwn.net/Articles/818685/ https://lwn.net/Articles/818685/ edgecase <div class="FormattedComment"> Looking at the set of all filesystems in-tree, out of tree, experimental, dead and gone, I wonder if there are just too many conflicting requirements, use-cases, and types of underlying hardware to choose one set of semantics.<br> <p> The POSIX API seems to be bursting at the seams also.<br> <p> I wonder if ACID could be broken up, at a lower layer, and POSIX semantics built on top, as well as other APIs (or mount options?), that could focus on whatever combination of features is wanted.<br> <p> One in particular I can see being useful is similar to the Android example mentioned earlier in this thread. The use-case of an operating system package manager installing or updating a set of packages (apt-get upgrade, yum update), for example, would ideally employ the sequence:<br> <p> 1) unpack many files, Isolated but not Durable. This takes advantage of elevator seeks and write combining, but does not put (as much) pressure on the journal.<br> 2) wait until they are all Durable (as a set, not individually)<br> 3) rename them all into place in one transaction; the directory lock could be taken once per directory, not per file<br> 4) let them be visible (end the isolation)<br> <p> Making a special filesystem for doing this isn't ideal; there should be a more general way, since in this use-case modifying files atomically isn't of value, but for someone else it might be.<br> </div> Sun, 26 Apr 2020 21:44:51 +0000 A way to do atomic writes https://lwn.net/Articles/790563/ https://lwn.net/Articles/790563/ Wol <div class="FormattedComment"> <font class="QuotedText">&gt; It might be nice to add an open flag named O_TMPFILE that tells the system that a file shouldn't be persisted...except we already have one, and it already does that. So these tmpfile examples could look more like:</font><br> <p> I worked on a FORTRAN system in the 80s that had such a file - it flagged the file as "delete on close". Except it had a bug - it flagged the file *descriptor* as delete on close. And because you could re-use a file descriptor my program started deleting a bunch of random - important - files. Caused the operators (it was a mainframe) a small amount of grief until I twigged the problem, tested it, and raised a bug report!<br> <p> Cheers,<br> Wol<br> </div> Thu, 06 Jun 2019 14:34:18 +0000
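Going back to edgecase's package-manager sequence two comments up: steps 1-3 can already be approximated in userspace today by writing everything without per-file fsync(), making the whole batch durable with a single syncfs() call, and only then renaming the files into place. A sketch (names are illustrative, all files are assumed to live on one filesystem, and error handling is omitted):

<pre><code>#define _GNU_SOURCE
#include &lt;fcntl.h&gt;
#include &lt;stdio.h&gt;
#include &lt;unistd.h&gt;

/* staged[i] were unpacked with plain write() calls -- Isolated-ish, not yet
 * Durable -- so the kernel is free to batch and reorder the IO. */
void install_batch(const char *const staged[], const char *const final[], int n)
{
        /* Make the whole set durable at once: one synchronous flush of the
         * filesystem (any fd on that filesystem will do). */
        int fd = open(staged[0], O_RDONLY);
        syncfs(fd);
        close(fd);

        /* Only now expose the new versions.  Each rename() is atomic with
         * respect to the namespace... */
        for (int i = 0; i &lt; n; i++)
                rename(staged[i], final[i]);

        /* ...but the set of renames as a whole is not, and the staged files
         * were never invisible to other processes.  Those are the pieces
         * that still need kernel help. */
}</code></pre>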
A way to do atomic writes https://lwn.net/Articles/790524/ https://lwn.net/Articles/790524/ Wol <div class="FormattedComment"> A synchronous flush?<br> <p> Dunno how easy it would be to implement this, but imagine ...<br> <p> My application (database, whatever) writes a load of stuff - a user-space journal. It then calls the flush. This triggers writing all the buffers to disk, with a guarantee that writes AFTER my sync call can be moved EARLIER in time, but ALL EARLIER writes will complete before my call returns.<br> <p> That way, my application knows, when the call returns, that it's safe to start updating the files because it can recover a crash from the logs. It doesn't interfere with other applications because it's not hogging i/o. And if it's one of the few applications on an almost-single-use system then the almost continuous flushing it might trigger probably won't actually get noticed much - especially if it's a multi-threaded database, because it can happily mix flushing one transaction's logs with another transaction's data.<br> <p> Cheers,<br> Wol<br> </div> Thu, 06 Jun 2019 14:29:36 +0000 A way to do atomic writes https://lwn.net/Articles/790522/ https://lwn.net/Articles/790522/ Wol <div class="FormattedComment"> <font class="QuotedText">&gt; ...aaaand we're back to the horrified "but it could be slower!" chant again.</font><br> <p> Which is a damn good reason NOT to use fsync ...<br> <p> When ext4 came in, stuff suddenly started going badly wrong where ext3 had worked fine. The chant went up "well you should have used fsync!". And the chant came back "fsync on ext4 is slow as molasses!".<br> <p> On production systems with multiple jobs, fsync is a sledgehammer to crack a nut. A setup that works fine on one computer WITHOUT fsync could easily require several computers WITH fsync.<br> <p> Cheers,<br> Wol<br> </div> Thu, 06 Jun 2019 12:03:57 +0000 A way to do atomic writes https://lwn.net/Articles/790301/ https://lwn.net/Articles/790301/ yige <div class="FormattedComment"> Yes, we do cache files in memory during a transaction. Flushes to disk are ignored until a transaction gets committed. This guarantees isolation for ACID transactions. But you're right, it sets a limit on the size of a transaction. In our case, the optimization for eliminating temporary durable files is more of a positive side effect of ACID transactions. It helps especially in cases where users are unaware of the existence of such files in their transaction code.<br> <p> I agree that a separate indicator for minimized persistency on temporary files can be a good idea, so that such a file gets flushed only in the face of memory pressure.<br> </div> Tue, 04 Jun 2019 21:33:49 +0000 A way to do atomic writes https://lwn.net/Articles/790182/ https://lwn.net/Articles/790182/ zblaxell <div class="FormattedComment"> <font class="QuotedText">&gt; If your drive controller goes bad, it could start writing the wrong blocks.</font><br> <p> That happens from time to time. Storage stacks can deal with that kind of event gracefully. Today we expect those errors to be detected and reported by the filesystem or some layer below it, and small errors repaired automatically when there is sufficient redundancy in the system.<br> <p> <font class="QuotedText">&gt; If you want to be really sure about your data, you restore of real off-site (or, at least, off-box) backups</font><br> <p> To make correct backups, the backup process needs a complete, correct, and consistent image of the filesystem to back up, so step one is getting the filesystem to be capable of making one of those.<br> <p> Once you have that, and can atomically update it efficiently while the filesystem is online, you can stop using fsync as a workaround for legacy filesystem behaviors that should really be considered bugs now. fsync should only be used for its two useful effects: to reorder and isolate updates to individual files for reduced latency, and to synchronize IO completion with events outside of the filesystem (and those two things should become separate system calls if they aren't already).
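As an aside on those "two useful effects": the closest existing split today is probably sync_file_range(), which can kick off writeback for one file without making any durability promise, versus fdatasync(), which blocks until the data is actually safe and is the one you want before telling the outside world a request is complete. A hedged sketch:

<pre><code>#define _GNU_SOURCE
#include &lt;fcntl.h&gt;
#include &lt;unistd.h&gt;

void two_kinds_of_flush(int fd)
{
        /* Effect 1: nudge this file's dirty pages toward the disk now, without
         * blocking and without any durability guarantee -- the man page is
         * explicit that sync_file_range() is not a data-integrity operation. */
        sync_file_range(fd, 0, 0, SYNC_FILE_RANGE_WRITE);

        /* Effect 2: synchronize with events outside the filesystem -- block
         * until the file's data has reached stable storage, e.g. before
         * acknowledging a commit over the network. */
        fdatasync(fd);
}</code></pre>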
If an application doesn't need to do those two things, it should never need to call fsync, and its data should never be corrupted by a filesystem.<br> <p> <font class="QuotedText">&gt; you don't know what could have gone wrong before whatever caused the crash actually brought down the system.</font><br> <p> If we allow arbitrary failure modes to be in scope, we'll always lose data. To manage risks, both the frequency and cost of event occurrence have to be considered.<br> <p> Most of the time, crashes don't have complications with data integrity impact (e.g. power failure, HA forcing a reboot, kernel bugs with known causes and effects). We expect the filesystem to deal with those automatically, so we can redirect human time to cleaning up after the rarer failures: RAM failures not detected by ECC, multi-disk RAID failures, disgruntled employees with hammers, etc.<br> <p> When things start going unrecoverably wrong, each subsystem that detects something wrong gives us lots of information about the failure, so we can skip directly to the replace-hardware-then-restore-from-backups step even before the broken host gets around to crashing. All the filesystem has to do in those cases is provide a correct image of user data during the previous backup cycle.<br> <p> None of the above helps if the Linux filesystem software itself is where most of the unreported corruption comes from. It was barely tolerable while Linux filesystems were a data loss risk comparable to the rest of the storage stack, but over the years the rest of the stack has become more reliable while Linux filesystems have stayed the same or even gotten a little worse.<br> <p> <font class="QuotedText">&gt; I'm saying that the post-crash state should exactly match some state that userspace might have observed had the system never crashed, and any deviation from that should be accounted and planned for like equipment failure</font><br> <p> I'm saying that in the event there is no equipment failure, there should be no deviation. Even if there is equipment failure, there should not necessarily be a deviation, as long as the system is equipped to handle the failure. We don't wake up a human if just one disk in a RAID array fails--that can wait until morning. We don't want a human to spend time dealing with corrupted application caches after a battery failure--the filesystem shouldn't corrupt the caches in the first place.<br> <p> <font class="QuotedText">&gt; in any case, you're trading off reliability against size, performance, and cost, and none of these is ever perfectly ideal.</font><br> <p> ...aaaand we're back to the horrified "but it could be slower!" chant again.<br> <p> Atomic update is probably going to end up being faster than delalloc and fsync for several classes of workload once people work on optimizing it for a while, and start removing fsync workarounds from application code. fsync is a particularly bad way to manage data integrity when you don't have external synchronization constraints (i.e. when the application doesn't have to tell anyone else the stability status of its data) and when your application workload doesn't consist of cooperating threads (i.e. 
when the application code doesn't have access to enough information to make good global IO scheduling decisions the way that monolithic database server applications designed by domain experts do).<br> <p> It's easier, faster, and safer to run a collection of applications under eatmydata on a filesystem with atomic updates than to let those applications saturate the disk IO bandwidth with unnecessary fsync calls on a filesystem that doesn't have atomic updates--provided, as I mentioned at the top, that there's a way to asynchronously pipeline the updates; otherwise, you just replace a thousand local-IO-stall problems with one big global-IO-stall problem (still a net gain, but the latency spikes can be nasty).<br> <p> Decades ago, when metadata journaling was new, people complained it might be an unreasonable performance hit, but it turned out that the journal infrastructure could elide a lot of writes and be faster at some workloads than filesystems with no journal. The few people who have good reasons not to run filesystems with metadata journals can still run ext2 today, but the rest of the world moved on. Today nobody takes a filesystem seriously if it can't recover its metadata to a consistent state after uncomplicated crashes (though on many filesystems we still look the other way if the recovered state doesn't match any instantaneous pre-crash state, and we should eventually stop doing that). We should someday be able to expect user data to be consistent after crashes by default as well.<br> <p> Worst case, correct write behavior becomes a filesystem option, then I can turn it on, and you can turn it off (or it becomes an inode xattr option, and I can turn it off for the six files out of a million where crash corruption is totally OK). You can continue to live in a world where data loss is still considered acceptable, and the rest of us can live in a world where we don't have to cope with the post-crash aftermath of delalloc or the pre-crash insanity of fsync.<br> </div> Mon, 03 Jun 2019 18:25:02 +0000 A way to do atomic writes https://lwn.net/Articles/790183/ https://lwn.net/Articles/790183/ zblaxell <div class="FormattedComment"> <font class="QuotedText">&gt; The followed piece of pseudo-code will be executed without the persistence of fileA.</font><br> <p> ...if fileA and fileB can be stored in the cache memory of the implementation, yes; otherwise, you have to start writing the data of at least one of fileA or fileB to disks before you get to the unlink which removes the need to write.<br> <p> That point was missed in the previous comments too: temporary file optimizers can't read your mind, they can only read an incomplete transaction log, so they only work if the temporary file gets deleted while it is (or parts of it are) still in memory. If you really want temporary file optimization, you need to label the file as a temporary as early as possible, so it can avoid getting flushed to disk too early.<br> <p> It might be nice to add an open flag named O_TMPFILE that tells the system that a file shouldn't be persisted...except we already have one, and it already does that. 
So these tmpfile examples could look more like:<br> <p> fs_tx_begin();<br> create(dirA, O_TMPFILE | normal_flags); // returns fileA file descriptor<br> create(fileB, normal_flags);<br> write(fileA);<br> write(fileB); // if we start running out of memory here, we might flush fileB, not fileA<br> // don't need to unlink fileA, it will disappear when closed.<br> fs_tx_commit();<br> <p> If you need fileA to have a name or you need to close it during the transaction, it gets more complicated to use O_TMPFILE: you have to do a weird dance with /proc/self/fd/* symlinks and linkat, and you do need to do the unlink at the end, and the O_TMPFILE flag is not a correct hint to the optimizer if you change your mind and keep the file through the end of the transaction. It's also not clear that keeping fileA in RAM was a net win above--maybe it's better if fileA gets pushed out to disk so there's more RAM for caching fileB.<br> <p> So maybe a separate indicator (an xattr or fadvise flag) that says "optimize by minimizing writes to this file", and nothing else, might be more useful to provide hints to transaction infrastructure. That will handle the cases where it's better to flush the temporary file to disk under memory pressure instead of the permanent one, and it can be set or cleared on files without having to modify the parts of the application that create files (which might be inaccessible or hard to modify).<br> <p> The nice thing about xattrs is that system integrators can set them without having to hack up applications, so you can get broad behaviors like "everything created in this directory from now on is considered a temporary file for the purposes of transaction commit optimization."<br> </div> Mon, 03 Jun 2019 17:36:38 +0000 A way to do atomic writes https://lwn.net/Articles/790090/ https://lwn.net/Articles/790090/ daniel <div class="FormattedComment"> So in other words, you want exactly Tux3 semantics.<br> </div> Sun, 02 Jun 2019 16:55:13 +0000 A way to do atomic writes https://lwn.net/Articles/790042/ https://lwn.net/Articles/790042/ iabervon <div class="FormattedComment"> The operating system can't guarantee that your hardware won't destroyed entirely by your building collapsing on it, even if you called fsync() on a file. If your drive controller goes bad, it could start writing the wrong blocks. 
If you want to be really sure about your data, you restore of real off-site (or, at least, off-box) backups, rather than using a filesystem left on a machine that crashed, if only because you don't know what could have gone wrong before whatever caused the crash actually brought down the system.<br> <p> I'm saying that the post-crash state should exactly match some state that userspace might have observed had the system never crashed, and any deviation from that should be accounted and planned for like equipment failure, except that it may be attributed to the filesystem software rather than the disk hardware; in any case, you're trading off reliability against size, performance, and cost, and none of these is ever perfectly ideal.<br> </div> Sat, 01 Jun 2019 01:47:26 +0000 A way to do atomic writes https://lwn.net/Articles/790025/ https://lwn.net/Articles/790025/ zblaxell <div class="FormattedComment"> <font class="QuotedText">&gt; Imagine there's a process on the machine constantly making backups to storage that doesn't go down in the crash.</font><br> <p> My backups are atomic, and have been for years...<br> <p> OK, I'm imagining this now: of course, I expect the storage to be updated atomically and correctly every time by this backup process. Worst case, I don't get the last update, or the last N consecutive updates if I choose to optimize for performance by pipelining.<br> <p> <font class="QuotedText">&gt; After the system comes back, the state matches the backup, except that there may be some arbitrary damage</font><br> <p> [still imagining] Nope, that's totally unacceptable. If I see any damage at all, someone's getting a bug report. Broken files are failure. [imagination off]<br> <p> Proprietary storage vendors have supported atomic backups from snapshots of live filesystems for decades, and Linux hasn't been far behind. They just work. This is not a new or experimental thing any more. The better implementations have reasonable performance costs. Let's have filesystems on Linux that can get it right all the time, not just between crashes.<br> <p> <font class="QuotedText">&gt; The current behavior, or any behavior, obviously fits this model</font><br> <p> "It's impossible to report any undesirable behavior as a bug because all current behavior is tautologically correct, even when it changes." For example, recently I pointed out on LWN that the current behavior is undesirable, and someone soon replied to explain the current behavior back to me without supporting argument, as if it was somehow self-evident that the current behavior is the best behavior possible.<br> <p> This pattern happens a lot. It usually takes several tries to get past that and get into a discussion of what's undesirable about the current behavior, or how things could become better, or even how different people just have different preferences and expectations. Then we have to get past the horrified "but...that could be slightly slower!" response. Then we have to avoid regressing to the beginning of the loop when someone who missed the first part of the conversation jumps in. Usually by this point, everyone's gotten bored and left.<br> <p> The problem is not that Linux is an incorrect implementation of the current model. 
The problem is that the current model is patently insane, and we should maybe consider sane models that could be used instead.<br> <p> <font class="QuotedText">&gt; a different idea of what the kernel should be optimizing for with respect to crash resilience</font><br> <p> This doesn't appear to be a different idea. It seems to be just a retroactive justification of the way Linux filesystems have worked since the mid 90's--a time when every crash resulted in filesystem damage requiring time-expensive recovery tools, because nothing better had been implemented yet. Now we have atomic snapshots and journals and CoW and persistent writeback caches and future improvements like an atomic write API implemented in multiple filesystems. We can do better than mid-90's standards of crash behavior now.<br> <p> <font class="QuotedText">&gt; Note that Unix-style buffered writes don't cause any problem here, because ordering changes there can be explained as getting unlucky as to the order the backup process read them.</font><br> <p> The filesystem and the backup process could arrange not to change the ordering, and then it would all work properly. No luck required, nothing to explain.<br> </div> Sat, 01 Jun 2019 00:02:24 +0000 A way to do atomic writes https://lwn.net/Articles/789917/ https://lwn.net/Articles/789917/ yige <div class="FormattedComment"> F.Y.I. we have an academic project, TxFS, which provides a version of the ext4 file system with ACID transactions. The idea can be generalized to other file systems that use journaling or COW to maintain crash consistency as well.<br> <a href="https://github.com/ut-osa/txfs">https://github.com/ut-osa/txfs</a><br> <p> We added three new system calls to initiate/commit/abort a per-process file system transaction. It also separates ordering from durability, and does optimizations like eliminating temporary durable files, as discussed in the previous comments. For example, the following piece of pseudo-code will be executed without fileA ever being persisted.<br> <p> fs_tx_begin();<br> create(fileA);<br> create(fileB);<br> write(fileA);<br> write(fileB);<br> unlink(fileA);<br> fs_tx_commit();<br> <p> It currently only supports a subset of file-related system calls since it's an experimental project. (e.g. rename and mmap are not supported yet.)<br> </div> Thu, 30 May 2019 20:50:15 +0000 A way to do atomic writes https://lwn.net/Articles/789875/ https://lwn.net/Articles/789875/ perennialmind <p> This is way too reminiscent of Transactional NTFS and Microsoft's earlier attempts to bring ACID guarantees to the filesystem. The generalized contract proposed makes a huge commitment and relies an awful lot on the particulars of journaling filesystems. The per-filesystem feature doesn't strike me as "kind of ugly" at all: it sounds conservative and maintainable, with reasonably predictable ramifications. </p> <p> For a more general solution, I'm much more interested in that <a href="https://www.usenix.org/conference/fast18/presentation/won">Barrier-Enabled IO Stack for Flash Storage</a>. It just makes sense to me. It's so neat and plausible that I can't help but wonder if it isn't also wrong. Else, I should be reading more about it on LWN. &lt;shrug&gt; </p> All I want is a sane ordering guarantee. I'm willing to accept intermediate changes being observable if it means I can have some weak, localized constraints on causality. <pre><code>write(...)
write(...)
fdatabarrier(...)
rename(...)</code></pre> Thu, 30 May 2019 16:59:07 +0000 A way to do atomic writes https://lwn.net/Articles/789866/ https://lwn.net/Articles/789866/ zblaxell <div class="FormattedComment"> The same thing happens now if the dirty writeback timer (or filesystem commit interval for filesystems that have one of those) expires in the middle of this sequence. We'd write out whichever parts of B1/B2 were in cache at the time (at the end of this sequence B3 exists, so we're always going to write that one). We also do a big flush if we run out of memory while buffering B1 or B2, and in that case we block writing processes (so we do the expensive thing _and_ force userspace to wait for it).<br> <p> The difference with filesystem-atomic-by-default is that we'd choose some epoch and the atomic update would include all writes completed before the epoch and none of them after (any concurrent modification during writeback would be redirected to the next atomic update). So you'd get e.g. all of A, all of B1, and the first parts of B2 in one atomic update, and a later update would delete B1, write the rest of B2, and the first parts of B3. If there's a crash before B1 gets to disk, then A disappears.<br> <p> This is fine! This is the correct behavior according to what the various system calls are documented to do, assuming that writeback caching behaves like an asynchronous FIFO pipeline to disk by default (i.e. when you didn't explicitly turn off atomic update, call fsync(), or provide some other hint that says the filesystem needs to do more or less work for specific files). It's not the most performant behavior possible, so it sucks, but the most performant behavior possible does bad things to data when the system crashes, so it sucks too. Most people who aren't saturating their storage stack with iops care more about correctness than performance, and would trade some iops to get correctness even if it means flushing out a multi-gigabyte temporary file now and then.<br> </div> Thu, 30 May 2019 15:52:04 +0000 A way to do atomic writes https://lwn.net/Articles/789824/ https://lwn.net/Articles/789824/ Jonno <div class="FormattedComment"> <font class="QuotedText">&gt; Though that looks too simple and there may be something I'm missing.</font><br> <p> For crash consistency, you have to at least add an `fdatasync B` before `ioctl_ficlone A from B`, or you might get garbage in A after recovery. It also depends upon the filesystem writing the new extent mapping of A to disk atomically, which I don't think is actually guaranteed (though most filesystems probably do so anyway).<br> <p> </div> Thu, 30 May 2019 12:36:31 +0000 A way to do atomic writes https://lwn.net/Articles/789808/ https://lwn.net/Articles/789808/ mjthayer <div class="FormattedComment"> <font class="QuotedText">&gt; This could be extremely expensive. For example, suppose I do a sequence of writes, serialized in this order:</font><br> [...]<br> <font class="QuotedText">&gt; And keep doing the create / delete dance.</font><br> <p> <font class="QuotedText">&gt; In your atomic model, A cannot ever be made durable without writing at least one large unnecessary file to disk.</font><br> <p> And is this a use case to be optimised for, or should people learn other ways of creating huge transient files? I know that sounds like a suggestive question, but it is not. 
I don't feel qualified to say.<br> </div> Thu, 30 May 2019 08:39:30 +0000 A way to do atomic writes https://lwn.net/Articles/789800/ https://lwn.net/Articles/789800/ qtplatypus <div class="FormattedComment"> I have been thinking about this and I'm wondering if on fs that support it ioctl_ficlone it might leveraged to make atomic writes of the A without D type that many application devs desire.<br> <p> A sequence of<br> <p> create B<br> ioctl_ficlone B from A<br> write to B<br> ioctl_ficlone A from B<br> <p> Though that looks too simple and there may be something I'm missing.<br> <br> </div> Thu, 30 May 2019 05:44:25 +0000 A way to do atomic writes https://lwn.net/Articles/789752/ https://lwn.net/Articles/789752/ walters <div class="FormattedComment"> I don't know of any other software using the boot ID technique; I just made it up one day. It seems to work well though.<br> <p> You can see some parts of this in <br> <a href="https://github.com/ostreedev/ostree/blob/e0ddaa811b2f7a1af7e24c6b8c6f1074e216609e/src/libostree/ostree-repo.c#L3197">https://github.com/ostreedev/ostree/blob/e0ddaa811b2f7a1a...</a><br> <a href="https://github.com/ostreedev/ostree/blob/e0ddaa811b2f7a1af7e24c6b8c6f1074e216609e/src/libostree/ostree-repo-commit.c#L1776">https://github.com/ostreedev/ostree/blob/e0ddaa811b2f7a1a...</a><br> and<br> <p> But this is definitely the kind of thing that would be better with kernel assistance somewhat like the article is talking about. Some sort of open flag or fcntl that says "I always want the whole file written, or nothing" - an extension to `O_TMPFILE` that allows fsyncing at the same time as `linkat()`? Which is also closely related to my wishlist item for O_OBJECT: <a href="https://marc.info/?l=linux-fsdevel&amp;m=139963046823575&amp;w=2">https://marc.info/?l=linux-fsdevel&amp;m=139963046823575&amp;...</a> <br> <p> <p> </div> Wed, 29 May 2019 20:43:03 +0000 A way to do atomic writes https://lwn.net/Articles/789710/ https://lwn.net/Articles/789710/ iabervon <div class="FormattedComment"> I think the right model is: Imagine there's a process on the machine constantly making backups to storage that doesn't go down in the crash. It will back up every file eventually, and will back up every file that fsync() is called on during the call. After the system comes back, the state matches the backup, except that there may be some arbitrary damage, which the kernel tries to minimize.<br> <p> The current behavior, or any behavior, obviously fits this model, since it means that any post-restore state is possible, but it gives a different idea of what the kernel should be optimizing for with respect to crash resilience, which seems to me to be a good fit for how users rate system behavior.<br> <p> Note that Unix-style buffered writes don't cause any problem here, because ordering changes there can be explained as getting unlucky as to the order the backup process read them.<br> </div> Wed, 29 May 2019 19:12:50 +0000 A way to do atomic writes https://lwn.net/Articles/789737/ https://lwn.net/Articles/789737/ luto <div class="FormattedComment"> This could be extremely expensive. For example, suppose I do a sequence of writes, serialized in this order:<br> <p> 1. Create huge file B1.<br> 2. Write to file A.<br> 3. Create huge file B2.<br> 4. Delete B1.<br> 5. Create huge file B3.<br> 6. 
Delete B2.<br> <p> And keep doing the create / delete dance.<br> <p> In your atomic model, A cannot ever be made durable without writing at least one large unnecessary file to disk.<br> </div> Wed, 29 May 2019 18:56:39 +0000 A way to do atomic writes https://lwn.net/Articles/789735/ https://lwn.net/Articles/789735/ epa <div class="FormattedComment"> Perhaps dumping the object files into the same directory as the source files (and hence the same filesystem) is a bit mad anyway? A tmpfs / ramdisk approach does seem more sensible, given that there is no particular need to spool out the intermediate files to physical disk. It’s one of those “we have always done it this way” things. <br> </div> Wed, 29 May 2019 18:45:00 +0000 A way to do atomic writes https://lwn.net/Articles/789721/ https://lwn.net/Articles/789721/ mmastrac <div class="FormattedComment"> This is great information on something I hadn't considered before. Do you have anywhere you could point me at for "best practices" to use in combination with that technique (ie: fsync the file, then syncfs, then renameat iff boot_id matches, etc)?<br> </div> Wed, 29 May 2019 17:35:44 +0000 A way to do atomic writes https://lwn.net/Articles/789682/ https://lwn.net/Articles/789682/ walters <div class="FormattedComment"> One way to look at it is - what scenarios is the current setup good for? I would argue it's nearly optimal for the interesting case of "local software builds", e.g. on a developer workstation/laptop.<br> <p> Compilers tend to generate a lot of intermediate files which often get unlinked quickly. If writes weren't buffered this would be an enormous hit. It'd really force build systems to write to a tmpfs style mount instead of the persistent source code directory.<br> <p> I don't think anyone would want compliers to invoke fsync() either. But it's not truly transient either - I *do* usually want the intermediate build files still there after I reboot. (Contrast with most production buildsystems that don't do incremental builds). So this falls more into your case of:<br> <p> "you can write this any convenient time in the next hour, I don't care"<br> <p> But then on the topic of consistency - a while ago when I was using ccache I hit a bug where some of the cached objects were corrupted (zero-filled) after I had a kernel crash, and that caused bizarre errors. This probably hits the interesting corner case of "tmpfile + rename over existing is atomic, but writing a new file isn't". It took me longer than it should have to figure out, kept doing `git clean -dfx` and double checking the integrity of my compiler, etc.<br> <p> In <a href="https://github.com/ostreedev/ostree">https://github.com/ostreedev/ostree</a> we write all new objects to a "staging" directory that includes the value of /proc/sys/random/boot_id, and then syncfs() everything, then renameat() into place (I've been meaning to investigate doing parallel/asynchronous fsync instead). We assume that the files are garbage if the boot id doesn't match (i.e. system was rebooted before commit).<br> <p> <p> <p> </div> Wed, 29 May 2019 13:06:54 +0000 A way to do atomic writes https://lwn.net/Articles/789674/ https://lwn.net/Articles/789674/ nilsmeyer <div class="FormattedComment"> I vaguely remember f2fs already having atomic writes support through some ioctl. 
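(For the record, f2fs does have F2FS_IOC_START_ATOMIC_WRITE / F2FS_IOC_COMMIT_ATOMIC_WRITE ioctls, added with SQLite on Android in mind.) For the boot-ID staging technique walters describes a little further up, a minimal sketch might look like the following; the repository layout and names are illustrative rather than ostree's actual code, and error handling is omitted. Note that the boot ID lives at /proc/sys/kernel/random/boot_id:

<pre><code>#define _GNU_SOURCE
#include &lt;fcntl.h&gt;
#include &lt;stdio.h&gt;
#include &lt;unistd.h&gt;

/* New objects are written into a staging directory named after the current
 * boot ID.  They only become real once the filesystem has been synced and
 * the object renamed into place; after an unclean reboot the boot ID
 * differs, so leftover staging directories are simply discarded. */
void commit_object(const char *repo, const char *staged, const char *object)
{
        char boot_id[64] = "", staging[4096], src[4096], dst[4096];

        FILE *f = fopen("/proc/sys/kernel/random/boot_id", "r");
        fscanf(f, "%36s", boot_id);              /* per-boot UUID */
        fclose(f);

        snprintf(staging, sizeof staging, "%s/staging-%s", repo, boot_id);
        snprintf(src, sizeof src, "%s/%s", staging, staged);
        snprintf(dst, sizeof dst, "%s/objects/%s", repo, object);

        /* One syncfs() makes everything in the staging directory durable. */
        int fd = open(staging, O_RDONLY | O_DIRECTORY);
        syncfs(fd);
        close(fd);

        /* Now the rename is safe: either the object is complete on disk, or
         * the boot ID won't match after a crash and the object is discarded
         * along with its staging directory. */
        rename(src, dst);
}</code></pre>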
<br> </div> Wed, 29 May 2019 09:32:25 +0000 A way to do atomic writes https://lwn.net/Articles/789661/ https://lwn.net/Articles/789661/ zblaxell <div class="FormattedComment"> It would be nice to keep atomicity separate from durability, i.e. don't make fsync() the atomic commit primitive, unless it also works with an asynchronous variant of fsync(). A lot of applications don't care if you lose the last few updates, as long as the application never sees the results of a partial update. In database terms, this is asynchronous commit, which trades durability for performance without giving up atomicity, consistency, or integrity.<br> <p> Currently what happens in Linux looks kind of weird when you write it down:<br> <p> 1. A bunch of application threads issue a series of well-defined(-ish) IO syscalls which modify the logical contents of a filesystem. The kernel arbitrates an order when several operations affect the same files or metadata in concurrent threads according to some rules--maybe not the best rules, or rules that everyone likes, or rules that are even written down, but there are rules and the kernel does follow them. Observers that later inspect the data in the filesystem find something equivalent to the result of performing the mutation operations in the arbitrated order. Application developers can predict how multiple threads modifying a filesystem might behave as long as the system doesn't crash. There are test suites to verify consistent behavior across filesystems, and bugs can be reported against filesystems that don't comply. But then...<br> <p> 2. At irregular intervals, we mash all of the writes together in big shared buffers, and unreliably spew them out to disk in arbitrary order (not necessarily the optimal IO order, although IO optimization is ostensibly why we do this) without protecting those buffers against concurrent modification. If there's no crash and no threads concurrently writing, the disk holds a complete and correct copy of the filesystem state at some chosen instant in time. If there's a crash, we just leave the user applications with a mess to clean up, proportional in size to the amount of idle RAM we had lying around, possibly containing garbage and previously deleted data. The existing behavior is the spec, so it's impossible to report any undesirable behavior as a bug because all current behavior is tautologically correct, even when it changes.<br> <p> So for an application developer, we set up a bunch of expectations in #1, and then fail to deliver on all of them in #2. No wonder they hate Linux filesystems and can't use fsync() correctly!<br> <p> It would be nice to get to a point where we can say that not behaving like #1 after crashes is a filesystem bug, or the result of some risky non-default mount option or open flag chosen by an administrator or application developer. Features that reduce data integrity and impose new requirements on existing applications (like delalloc, or, for that matter, Unix-style buffered writes in general) should really be opt-in. It's maybe a bit late to have this opinion now, some decades after Unix-style buffered writes became a thing, but if multiple filesystems support atomic updates, there might be an opportunity to make better choices for default filesystem behavior in the future (e.g. 
do periodic filesystem commits as atomic updates as well).<br> <p> Databases know how to make precise tradeoffs between integrity and performance, and they can use the many shades of ranged and asynchronous fsync() effectively to implement atomic updates and all their other requirements on any filesystem, and they are more willing than most to keep up with changing filesystem APIs. Non-database application developers don't want to have to constantly learn new ways to avoid losing data every time filesystem default behavior changes. They just want the filesystem to give back some consistent version of the data their threads asked it to write, and they don't need or want to know any details beyond knobs that adjust performance and latency (i.e. "write this to disk immediately, I'll wait" at one extreme, "you can write this any convenient time in the next hour, I don't care" at the other). <br> </div> Wed, 29 May 2019 03:53:41 +0000
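A closing note on the "asynchronous variant of fsync()" wished for at the top of this comment: it does exist nowadays in the form of io_uring's IORING_OP_FSYNC, which lets an application request durability without stalling the submitting thread and collect the result later. A minimal liburing sketch (assumes liburing is installed, link with -luring; error handling omitted):

<pre><code>#include &lt;liburing.h&gt;

/* Queue an fsync for fd without blocking the caller; the completion is
 * reaped from the ring whenever the application gets around to it. */
int async_fsync(int fd)
{
        struct io_uring ring;
        io_uring_queue_init(8, &amp;ring, 0);

        struct io_uring_sqe *sqe = io_uring_get_sqe(&amp;ring);
        io_uring_prep_fsync(sqe, fd, IORING_FSYNC_DATASYNC);  /* like fdatasync() */
        io_uring_submit(&amp;ring);          /* returns immediately; the flush runs async */

        /* ... do other useful work here ... */

        struct io_uring_cqe *cqe;
        io_uring_wait_cqe(&amp;ring, &amp;cqe);  /* eventually: collect the completion */
        int res = cqe-&gt;res;              /* 0 on success, -errno on failure */
        io_uring_cqe_seen(&amp;ring, cqe);
        io_uring_queue_exit(&amp;ring);
        return res;
}</code></pre>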