LWN: Comments on "Better guidance for database developers" https://lwn.net/Articles/799807/ This is a special feed containing comments posted to the individual LWN article titled "Better guidance for database developers". en-us Fri, 29 Aug 2025 16:14:17 +0000 Fri, 29 Aug 2025 16:14:17 +0000 https://www.rssboard.org/rss-specification lwn@lwn.net Better guidance for database developers https://lwn.net/Articles/800757/ https://lwn.net/Articles/800757/ marcH <div class="FormattedComment"> <font class="QuotedText">&gt; In the case of failures, the behavior needs to be documented, he said.</font><br> <p> Yes - assuming anyone knows in the first place. Error handling in general is barely ever designed and never tested. Are filesystems somewhat better? How much error injection can be found in filesystem test suites?<br> <p> <font class="QuotedText">&gt; Application developers have to find and read threads on the Linux kernel mailing list to figure that out.</font><br> <p> <p> </div> Sat, 28 Sep 2019 06:14:19 +0000 Files are hard https://lwn.net/Articles/800669/ https://lwn.net/Articles/800669/ jhhaller <div class="FormattedComment"> There are always trade-offs. You wrote the file to the disk, and then the disk failed. No more data. So, you move to RAID. You wrote the file, the OS wrote to one disk, but the other disk is failed - do you wait until it's repaired to report completion, or just write it to one drive and hope there isn't another failure? Next, you wrote a file, it was written to two drives. But then the data center holding the drives was hit by lightning and burned to the ground. So, you write it to two data centers. If one data center if offline, do you wait, find another data center, or assume that in this case, that one data center is enough. This case is also likely to be slower, unless the replication isn't synchronous, but that case, there is a risk that if the first data center fails before replication, the data is lost. Does the sun turning into a red dwarf become an important cause of data loss? The heat death of the universe?<br> <p> The trade-offs are between performance, cost, and durability. It's impossible to get high performance with low cost and high durability.<br> </div> Thu, 26 Sep 2019 21:47:31 +0000 Better guidance for database developers https://lwn.net/Articles/800663/ https://lwn.net/Articles/800663/ rweikusat2 <div class="FormattedComment"> I'm sorry but you're just dancing around the issue. UNIX(*) file systems used to do directory modifications synchronously in order to guarantee (to the point this was possible) file system integrity in case of a sudden loss of cache contents. And that's what the people who wrote the POSIX text had in mind: A situation where there's file data in the filesystem but no directory entry pointing to it cannot occur. Hence, ensuring that all file data and metadata is written, as per definition of fsync, is sufficient to guarantee that the file won't be lost. <br> <p> The Linux ext2 file system introduced write-behind caching of directory operations in order to improve performance at the expense of reliablity in situations deemed to be rare. Because of this, depending on the filesystem being used, fsync on a file descriptor is not sufficient to make a file crash-proof on Linux: An application would need to determine the path to the root file system, walk that down while fsyncing every directory and then call fsync on the file descriptor. 
Better guidance for database developers
https://lwn.net/Articles/800658/
Posted Thu, 26 Sep 2019 20:29:39 +0000 by nybble41

> I'm - however - pretty convinced that the idea was that the data can be retrieved after a sudden "cache catastrophe" and not that it just sits on the disk as a magnetic ornament.

Even if you mandated that fsync() == sync() so that *all* filesystem data was written to disk before fsync() returns, it still wouldn't guarantee that there is actually a directory entry pointing to that file. For example, it could have been unlinked by another process, in which case the data on disk really would be nothing more than a "magnetic ornament".

Let's say process A creates a file with path "/a/file", writes some data to it, and calls fsync(). While this is going on, another process hard-links "/a/file" to "/b/file" and then unlinks "/a/file" prior to the fsync() call. Would you expect the fsync() call to synchronize both directories, or just the second directory?

Fallback depends on more than alignment
https://lwn.net/Articles/800655/
Posted Thu, 26 Sep 2019 18:01:15 +0000 by sitsofe

You can silently fall back to buffered I/O even though you set the O_DIRECT "hint", just because of the filesystem, the filesystem's current options, because you're doing allocating writes on a certain filesystem, etc. See https://stackoverflow.com/questions/34572559/asynchronous-io-io-submit-latency-in-ubuntu-linux/46377629#46377629 (point 2 and the references) for some background.

Files are hard
https://lwn.net/Articles/800654/
Posted Thu, 26 Sep 2019 17:20:19 +0000 by rweikusat2

I'd rather call this a great example of how people who don't understand what they're talking about can end up producing loads and loads of gibberish, e.g. all the talk about "reordering". That's a term someone lifted from machine code execution and applied here to mean "what happened wasn't what I expected to happen !!!1". But that's entirely the fault of this person: by default, all writes to any file system end up in the page cache, which does write-behind caching of "disk writes". Consequently, there's absolutely no correlation between the ordering of write system calls on different file descriptors and the later ordering of the "disk writes" flushing dirty pages: that's an inherent property of this kind of caching scheme, which has existed since some time in the 1970s.

Moving forward along these lines, fsync is not "a barrier and a flush operation", it's a forced writeback of a part of the page cache. Obviously, updates to the page cache after an fsync won't end up being written prior to the writeback forced by the fsync because - duh! - that has already happened. It's not because there's some kind of "reordering" fsync prevents.
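In concrete terms, if fsync() is a forced writeback rather than an ordering barrier, an application that needs "record A on stable storage before update B" gets that ordering by waiting for fsync() on A to return before issuing B. The write-ahead-log shape of the sketch below is an assumption for illustration, not something prescribed in the thread; names are made up and real recovery protocols add much more.

    /* Hypothetical sketch: the write() order across two descriptors says
     * nothing about the order in which dirty pages reach the disk.  To get
     * "journal record before data page", wait for fsync() on the journal. */
    #define _POSIX_C_SOURCE 200809L
    #include <unistd.h>

    int journal_then_data(int journal_fd, int data_fd,
                          const void *rec, size_t rec_len,
                          const void *page, size_t page_len, off_t page_off)
    {
        /* 1. Append the journal record and force it to stable storage. */
        if (write(journal_fd, rec, rec_len) != (ssize_t)rec_len)
            return -1;
        if (fsync(journal_fd) < 0)
            return -1;   /* journal not durable: do not touch the data file */

        /* 2. Only now overwrite the data page in place. */
        if (pwrite(data_fd, page, page_len, page_off) != (ssize_t)page_len)
            return -1;
        return fsync(data_fd);   /* or defer, depending on the recovery scheme */
    }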
Files are hard
https://lwn.net/Articles/800651/
Posted Thu, 26 Sep 2019 15:27:47 +0000 by psoberoi

Anyone who thinks this is simple needs to read this:

https://danluu.com/deconstruct-files/

Even if you don't think it's simple - read that article. It's a great explanation of how hard it is to do persistence reliably.

Better guidance for database developers
https://lwn.net/Articles/800620/
Posted Thu, 26 Sep 2019 09:24:20 +0000 by metan

As far as I can tell that happens only when you pass unaligned buffers to the read()/write() syscalls. In that case some filesystems report errors and some fall back to page-cache-backed I/O. But as long as you align your buffers correctly it should not happen.

Better guidance for database developers
https://lwn.net/Articles/800619/
Posted Thu, 26 Sep 2019 08:49:56 +0000 by liam

Ceph uses BlueStore which, iirc, interfaces directly with the block layer. A small hitch might be that BlueStore uses an (internal) RocksDB for handling the metadata, thus requiring them to reimplement exactly enough of the filesystem interface to support RocksDB.

Better guidance for database developers
https://lwn.net/Articles/800598/
Posted Thu, 26 Sep 2019 04:33:19 +0000 by dezgeg

I thought there are some filesystems that may silently have O_DIRECT I/O fall back to buffered I/O under some circumstances?

Better guidance for database developers
https://lwn.net/Articles/800594/
Posted Wed, 25 Sep 2019 22:21:42 +0000 by neilbrown

> Couldn't a large database installation work with raw disk partitions,

Raw disk partitions would be a bit clumsy, but using O_DIRECT access is quite close to raw partition access.

You would need to create the file safely - sync the directory, pre-allocate the address space of the file, and make sure that was safely on disk. But then with a raw partition you would need to have a reliable way to create the partition safely and be sure the partition details were safely in non-volatile storage.

Whichever way you cut it, you need reliable guarantees about how things work.
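For the O_DIRECT sub-thread above: the alignment metan describes typically means allocating the buffer with posix_memalign() and keeping the transfer length and file offset aligned too. Below is a minimal sketch under assumed values (the 4096-byte alignment and the function name are illustrative, not universal), and note that, per sitsofe and dezgeg, some filesystem configurations may still quietly fall back to buffered I/O.

    /* Sketch only: an O_DIRECT write with an aligned buffer, length and
     * offset.  4096 is an assumed alignment; the real requirement depends
     * on the device and filesystem and can be queried. */
    #define _GNU_SOURCE            /* O_DIRECT is Linux-specific */
    #include <fcntl.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    int direct_write_example(const char *path)
    {
        const size_t align = 4096, len = 4096;
        void *buf;

        if (posix_memalign(&buf, align, len))   /* aligned buffer address */
            return -1;
        memset(buf, 0, len);

        int fd = open(path, O_WRONLY | O_CREAT | O_DIRECT, 0644);
        if (fd < 0) {
            free(buf);
            return -1;
        }

        /* The transfer length and the file offset want the same alignment. */
        ssize_t n = pwrite(fd, buf, len, 0);

        close(fd);
        free(buf);
        return n == (ssize_t)len ? 0 : -1;
    }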
Better guidance for database developers
https://lwn.net/Articles/800593/
Posted Wed, 25 Sep 2019 22:12:22 +0000 by rweikusat2

As I quoted in an earlier post:

The fsync() function is intended to force a physical write of data from the buffer cache, and to assure that after a system crash or other failure that all data up to the time of the fsync() call is recorded on the disk.

You're correct insofar as this doesn't explicitly demand that the data which was recorded can ever be retrieved again after such an event, IOW, that an implementation which effectively causes it to be lost is perfectly compliant :-). But that's sort of a moot point, as any "synchronous I/O capability" is optional, IOW, loss of data due to write-behind caching of directory operations is just a "quality" of (certain) Linux implementations of this facility. I'm - however - pretty convinced that the idea was that the data can be retrieved after a sudden "cache catastrophe" and not that it just sits on the disk as a magnetic ornament. In any case, POSIX certainly doesn't "mandate" this "feature".

Better guidance for database developers
https://lwn.net/Articles/800591/
Posted Wed, 25 Sep 2019 21:44:51 +0000 by nybble41

You seem to be arguing that POSIX compliance requires fsync() on a file to imply an fsync() on the parent directory, and potentially all other ancestor directories up to the root of the filesystem. Or possibly *multiple* parent directories and their ancestors in the case of hard links. Do you have any examples of POSIX-style operating systems which make such guarantees?

Personally I'd say that the Linux implementation is perfectly compliant. The fsync() call ensures that the data and metadata for the target file (i.e., inode) is written to the backing device. After reset and recovery any process with a reference to the file will read the data which was present at the time of the fsync() call (unless it was overwritten later). This is enough to satisfy the requirements. In order to get such a reference, however, you need directory entries to associate a path with that inode. Those directory entries are not part of the file, and the creation of a directory entry is not an I/O operation on the file, so an fsync() call on the file itself does not guarantee anything about the directory. For that you need to fsync() the directory.

Better guidance for database developers
https://lwn.net/Articles/800579/
Posted Wed, 25 Sep 2019 21:09:05 +0000 by rweikusat2

> Well, sure. Instead of saying "POSIX allows any character except NUL and / to appear in a filename" we should all, to be strictly correct, say "the POSIX standard demands that a conforming implementation allow any character...".

[...]

> And so on and so on. Surely we all understand what is meant by the shorter form?

The important distinction here is that a standard is a requirements specification and, as such, it doesn't and cannot 'guarantee' anything. Implementations aiming to conform to the specification might guarantee something (or not), but that's up to the implementation.

The notion that "the API is all wrong" would seem to be a preconceived opinion of some people (and to which degree this is nothing but "Microsoft does it differently" in disguise is anybody's guess), but that's not what I think this article was about. It was about deficiencies of the Linux implementation of an API, especially about the lack of consistency wrt different file systems and about insufficient documentation. Eg,

| For example, if you create a file, write to it, and then call fsync() on it, do you also have to open its directory and fsync() that in order to be sure that the file is persistent in the directory? Is that even filesystem-specific?
|
| Kernel filesystem developer Jan Kara said that POSIX mandates the directory fsync() for persistence.

But this is just plain wrong.
*If* an implementation supports POSIX synchronized I/O (something Linux doesn't claim to support, only aims to support in some way here and there), then "All I/O operations shall be completed as defined for synchronized I/O file integrity completion." upon fsync, and "synchronized I/O file integrity completion" is defined as

| Identical to a synchronized I/O data integrity completion with the addition that all file attributes relative to the I/O operation (including access time, modification time, status change time) are successfully transferred prior to returning to the calling process.

with "I/O data integrity completion" being defined as "all data and all metadata necessary to retrieve this data has been written". IOW, a problem here is that Linux doesn't implement the POSIX API but some essentially random subset of it here and another there, depending on whatever the responsible maintainer had for breakfast a fortnight ago.

Better guidance for database developers
https://lwn.net/Articles/800557/
Posted Wed, 25 Sep 2019 15:26:47 +0000 by hkario

It's more like the kernel provides a "POSIX-like" interface: yes, it's compatible with POSIX, but that's the lowest common denominator; it's not everything Linux can do, nor all of the interfaces it provides.

Or, to put it another way: POSIX doesn't require the error handling of the APIs to be underspecified.

Better guidance for database developers
https://lwn.net/Articles/800555/
Posted Wed, 25 Sep 2019 15:23:48 +0000 by epa

Well, sure. Instead of saying "POSIX allows any character except NUL and / to appear in a filename" we should all, to be strictly correct, say "the POSIX standard demands that a conforming implementation allow any character...". Instead of "POSIX doesn't provide a video streaming API" we should say "there is no requirement, in the POSIX standard, that a conforming implementation implement an API for video streaming". And so on and so on. Surely we all understand what is meant by the shorter form?

Yes, fsync() exists and is part of POSIX, and guarantees a physical write (when using a conforming implementation). But if fsync() were enough and its semantics were clearly understood by everyone, surely this LWN article would not exist? I thought the whole point was that the API provided by the Linux kernel (which is, loosely speaking, a superset of POSIX) doesn't provide the interfaces a database system developer would like to use -- or at least it's not understood by everyone how to use them.

Better guidance for database developers
https://lwn.net/Articles/800520/
Posted Wed, 25 Sep 2019 15:15:32 +0000 by rweikusat2

> The kernel provides a POSIX interface (with a few extra frills). As noted in the article, POSIX doesn't really provide any guarantees about persistence of data in the event of a crash. If you have strong requirements for that, it makes sense to avoid the POSIX file system interface and use something else.

"Holy non-sequitur, Batman!" Nobody uses 'POSIX', hence, there's no reason to avoid using something which happens to be 'in POSIX' just because something else is not. It all boils down to properties of implementations of some interface which happens to be 'in POSIX'.
There's also a fundamental misunderstanding about the nature of 'a technical standard' in here: these don't and cannot 'guarantee' anything, as a standard has no control over something which claims to be an implementation of it. The standard demands that a conforming implementation shall have certain properties.

Leaving this aside, the statement is also wrong, cf.

The fsync() function is intended to force a physical write of data from the buffer cache, and to assure that after a system crash or other failure that all data up to the time of the fsync() call is recorded on the disk. Since the concepts of "buffer cache", "system crash", "physical write", and "non-volatile storage" are not defined here, the wording has to be more abstract.

https://pubs.opengroup.org/onlinepubs/9699919799/functions/fsync.html

This is an optional feature which implementations may or may not implement, but it's certainly 'in POSIX'.

Better guidance for database developers
https://lwn.net/Articles/800514/
Posted Wed, 25 Sep 2019 14:31:53 +0000 by martin.langhoff

Bravo. We need more of these "core app developers talk with kernel devs" sessions. All the popular stack components -- all the parts of MEAN, LAMP, the Pythons and Rubys and Erlangs -- should come bearing "things I wish kernel devs knew about $foo" two-pagers.

Many of the proposed answers -- i.e. "the sane way to rename() is x, y, z" -- could and should be encoded in a battery of tests that supports fault injection.

Better guidance for database developers
https://lwn.net/Articles/800513/
Posted Wed, 25 Sep 2019 14:23:24 +0000 by epa

The kernel provides a POSIX interface (with a few extra frills). As noted in the article, POSIX doesn't really provide any guarantees about persistence of data in the event of a crash. If you have strong requirements for that, it makes sense to avoid the POSIX file system interface and use something else. One day that might be a next-generation file system API which lets you robustly (and simply) guarantee consistent data on disk while getting good performance. Until then, bypassing the file system altogether seems like the only way.

Similarly, POSIX doesn't provide an API for hard real-time; neither does stock Linux. So applications with hard real-time requirements bypass the kernel CPU scheduling and use something else -- often a separate real-time kernel which sits underneath Linux.
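On martin.langhoff's point above about "the sane way to rename()": the commonly cited sequence is to write a temporary file, fsync() it, rename() it over the target, and then fsync() the containing directory. The sketch below is only that common pattern in simplified form; the names are made up and it is not something the article or the thread specifies verbatim.

    /* Sketch of the usual atomic-replace-via-rename pattern; names are
     * illustrative and error handling is abbreviated. */
    #define _POSIX_C_SOURCE 200809L
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int replace_file(const char *dir_path, const char *tmp_path,
                     const char *final_path, const void *buf, size_t len)
    {
        int fd = open(tmp_path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0)
            return -1;
        if (write(fd, buf, len) != (ssize_t)len || fsync(fd) < 0) {
            close(fd);
            unlink(tmp_path);
            return -1;
        }
        close(fd);

        /* rename() replaces final_path with the fully written copy, so a
         * reader sees either the old or the new contents, never a mix... */
        if (rename(tmp_path, final_path) < 0)
            return -1;

        /* ...but whether the rename itself survives a crash may still need
         * an fsync() of the containing directory, as debated above. */
        int dfd = open(dir_path, O_RDONLY | O_DIRECTORY);
        if (dfd < 0)
            return -1;
        int ret = fsync(dfd);
        close(dfd);
        return ret;
    }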
Better guidance for database developers
https://lwn.net/Articles/800512/
Posted Wed, 25 Sep 2019 14:16:15 +0000 by NightMonkey

"Is it supposed to protect your data from somebody pouring coffee into the host's disk array too?"

Yes, for your sake, it better. I will pour my coffee into your database's disk array AGAIN if you keep leaving it in my bedroom, all 16 loud fans blowing full speed, ringerc. I don't care how many nines you've promised, or how much fault tolerance you claim, or how "important" your data is. Hot chocolate, too, if you do it in the winter. So, step off or get burned. Make sure your transactions are atomic, check your backups, and get this monster OUT of here!

Worst roommate EVER, you are.

P.S. I agree with you. ;)

Better guidance for database developers
https://lwn.net/Articles/800508/
Posted Wed, 25 Sep 2019 13:32:51 +0000 by ringerc

Yes, they can. That's what Oracle does/did at various points in time, with various deployment models.

It works, but it has major costs: the DBMS must duplicate a large chunk of OS functionality, which is extremely wasteful. Skills and knowledge of people who know the OS I/O systems are not very transferable to tuning and working with the DBMS's I/O systems, because they're parallel implementations. If the OS fixes a bug, the DBMS must fix it separately. The DBMS must find ways to share with and interoperate with the OS sometimes, which can introduce even more complexity.

So we should just bypass the kernel I/O stack. Well, why not just bypass the pesky scheduler, device drivers, etc. too and write our own kernel? PostgresOS! We could write our own UEFI firmware and CPU microcode too, and maybe some HBA firmware...

OK, so that's hyperbolic. But why is it that the solution to I/O problems with the kernel is to bypass the kernel? If I wanted to override all kernel CPU scheduling you'd probably call me crazy, but it's, if anything, less extreme than replacing the I/O stack.

To me, if I can expect to rely on the kernel doing sensible things when I mmap() something, schedule processes reasonably, enforce memory protection, etc., I should be able to expect it to do sane things for I/O too.

Better guidance for database developers
https://lwn.net/Articles/800507/
Posted Wed, 25 Sep 2019 13:25:26 +0000 by ringerc

Exactly. This is pretty much what happened on most consumer SSDs until quite recently: they would lie about flushing data, instead storing it in a volatile cache where it's re-ordered and lazily written out.

Abruptly lost power? Oh well. Hope you didn't need that data written consistently and in order.

But for marketing and benchmark reasons, they'd report to the OS that they were doing write-through even when they were really write-back caching.

How's a database supposed to defend against that?

Is it supposed to protect your data from somebody pouring coffee into the host's disk array too?
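On the narrower point about drives that claim write-through while actually caching: on reasonably recent Linux kernels, the kernel's own view of a device's cache mode is exposed in sysfs, which at least lets an administrator see (or override) what the kernel assumes when it decides whether to issue flushes. A small sketch follows; the device name is an assumption, and this only reports the kernel's setting - it cannot prove what the drive firmware really does.

    /* Sketch: print whether the kernel treats a disk as write-back or
     * write-through.  "sda" is an illustrative device name. */
    #include <stdio.h>

    int main(void)
    {
        char mode[32] = "";
        FILE *f = fopen("/sys/block/sda/queue/write_cache", "r");

        if (!f) {
            perror("open write_cache attribute");
            return 1;
        }
        if (fgets(mode, sizeof(mode), f))
            printf("kernel assumes the device cache is: %s", mode);
        fclose(f);
        return 0;
    }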
Better guidance for database developers
https://lwn.net/Articles/800505/
Posted Wed, 25 Sep 2019 13:17:38 +0000 by ringerc

A nice ideal in theory.

In reality, it's turtles all the way down.

Can you confidently state that the UEFI firmware hijacking control of the disk to do some I/O to a UEFI hidden/reserved partition won't affect durability?

What if the SATA firmware on (random made-up example) some Western Digital Silver SSD drives responds prematurely to a flush request if it immediately follows a TRIM command? Should they know that, special-case that, handle that?

Because if so, I promise you there is only one possible outcome: "We make no guarantees about the durability of your data, good luck with that."

In reality that's *always* the case. It's all about confidence levels, testing, and experience. We can never prove that we cannot lose your data. We can only say, confidently, that we've done a rather comprehensive job of plugging all the routes we can find by which we might lose it. If that's not good enough, you'd better go back to pen & paper, because there's no way in the universe that any one person is going to understand everything and all possible interactions. Not with CPU microcode, UEFI, BMCs, PCIe inter-device communication, IOMMUs, VT-x and VT-IO, hypervisors, firmware on SSDs and HDDs, ACPI, firmware on I/O controllers, the kernel core, kernel device drivers, bus power states, device power states, processor power states, power management interacting with everything, etc. etc. etc.

Can you list every microprocessor on your laptop that can interact with your RAM, PCIe bus, or USB HCI? I guarantee you can't.

Better guidance for database developers
https://lwn.net/Articles/800503/
Posted Wed, 25 Sep 2019 11:20:58 +0000 by fwiesweg

Well, large ones maybe, but definitely not sqlite on Android phones, unless Google adds a "change the partition layout" app permission allowing random apps to brick the whole device ;)

Better guidance for database developers
https://lwn.net/Articles/800493/
Posted Wed, 25 Sep 2019 09:13:33 +0000 by epa

Couldn't a large database installation work with raw disk partitions, cutting out the file system entirely? Some DBMSes already bypass the page cache, since they do their own caching.

Better guidance for database developers
https://lwn.net/Articles/800489/
Posted Wed, 25 Sep 2019 07:36:14 +0000 by weberm

This is completely impractical, and voicing your concerns as a developer building on something murky is absolutely necessary to drive a change.

Say, for example, you have an HD controller that lies about flushing data to the disk, because for some benchmark figure there's a cache in there that's considered "part of the disk" and that's where the flush goes to. But if you flush just enough data, the buffer overflows and your actual data does get written to the disk. Are database developers now supposed to follow each transaction with just enough data (different per disk/controller) so that this internal cache overflows? Why should they have to be bothered with this? Because some user uses questionable hardware?

This multiplies across the stack that your software is built on, and extends to hardware, even the CPU you're running on. No single sane person can claim to fully understand the whole stack, hardware and software. This is completely unrealistic. So you go to the experts for the various layers and communicate your expectations and necessities. There's no other way than to collaborate.

Better guidance for database developers
https://lwn.net/Articles/800485/
Posted Wed, 25 Sep 2019 06:14:40 +0000 by geoffhill

Part of me wants to think it isn't the db developers' fault. Trying to understand the behavior of Linux syscalls across the disparate myriad of modules that implement them is intractable, and becomes more so every year.

But at the end of the day, if db developers claim consistency of data on my disk, I hold them responsible. If Linux doesn't provide the syscalls to do that, the database developers and users who care about on-disk consistency should look elsewhere for a system that provides the guarantees they seek.

It doesn't matter if you are the foremost expert in your database design.
If you cannot understand the abstractions you are building upon (perhaps because they are not clear and rock-solid abstractions!), you cannot claim to understand your own product.

Kernel IO APIs
https://lwn.net/Articles/800458/
Posted Tue, 24 Sep 2019 18:52:38 +0000 by k8to

But my experience on an "enterprise" datacenter application over ~10 years is that BSD went away from nearly all large corporate scenarios.

It's certainly still used in some roll-your-own situations. Mostly I've seen FreeBSD when I've seen it at all.

My expectation is that the platforms people care about are going to be Linux and, to a lesser extent, Windows for the foreseeable future. All the shops I've worked for in the past 14 years have their software running on OSX, but it's never fully supported. It just "sort of works" as demo-ware.

Projects like SQLite and Postgres tend to care about making their software work reliably on an incredible number of platforms, though. Postgres finally dropped support for VAX not that long ago. I expect they achieve these goals on all the BSDs.

Kernel IO APIs
https://lwn.net/Articles/800457/
Posted Tue, 24 Sep 2019 18:42:53 +0000 by k8to

I expect the BSDs have much less varying behavior, for sure. But I think the full durability details are complicated in every system.

Kernel IO APIs
https://lwn.net/Articles/800456/
Posted Tue, 24 Sep 2019 18:01:36 +0000 by darwi

Is the situation better for the BSDs, or do they not matter much anyway?