
Luu: Files are hard

[Kernel] Posted Dec 14, 2015 20:15 UTC (Mon) by corbet

Here is a lengthy posting from Dan Luu on why it is so hard to safely write files on Unix-like systems. It comes down to a combination of POSIX semantics and filesystem bugs. "Something to note here is that while btrfs’s semantics aren’t inherently less reliable than ext3/ext4, many more applications corrupt data on top of btrfs because developers aren’t used to coding against filesystems that allow directory operations to be reordered (ext2 was the only other filesystem that allowed that reordering). We’ll probably see a similar level of bug exposure when people start using NVRAM drives that only have byte-level atomicity. People almost always just run some tests to see if things work, rather than making sure they’re coding against what’s legal in a POSIX filesystem."
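The posting centers on the care needed to durably replace a file's contents under POSIX semantics. As a rough illustration only (not code from Luu's post), the sketch below shows the commonly recommended write-temp-file, fsync, rename, then fsync-the-directory sequence; the replace_file name and its simplified error handling are hypothetical, and a real implementation would also need to loop on short writes and handle EINTR.

    /* A minimal sketch of the atomic-replace pattern, assuming a
     * POSIX filesystem; error handling is deliberately abbreviated. */
    #include <fcntl.h>
    #include <string.h>
    #include <unistd.h>

    int replace_file(const char *dir, const char *tmp, const char *dst,
                     const char *data, size_t len)
    {
        int fd = open(tmp, O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0)
            return -1;
        if (write(fd, data, len) != (ssize_t)len)  /* should loop on short writes */
            goto fail;
        if (fsync(fd) < 0)                         /* flush file data to stable storage */
            goto fail;
        if (close(fd) < 0)
            return -1;
        if (rename(tmp, dst) < 0)                  /* atomically replace dst with tmp */
            return -1;

        /* fsync the containing directory so the rename itself is durable;
         * skipping this step is one of the classes of bugs discussed,
         * since directory operations may be reordered or lost on crash. */
        int dfd = open(dir, O_RDONLY | O_DIRECTORY);
        if (dfd < 0)
            return -1;
        if (fsync(dfd) < 0) {
            close(dfd);
            return -1;
        }
        return close(dfd);

    fail:
        close(fd);
        return -1;
    }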
