many drives have been found to lie to the OS about when the data is saved (they report it as saved when it's only in the cache, not when it's actually on the platter)
if you have a drive like this, then you will lose data no matter what filesystem you use, and even if you use a high-end RAID controller.
XFS: the filesystem of the future?
Posted Jan 29, 2012 1:36 UTC (Sun) by sbergman27 (subscriber, #10767)
Especially if the filesystem itself is already being cavalier with the data, holding it in the page/buffer caches as long as it can and playing Russian roulette with "features" like delayed allocation, all in the name of good benchmark numbers.
Drive caches are a small factor in comparison. We didn't really even use to worry, or even think, about them. Now they seem to be the preferred scapegoat for Linux filesystem developers when data loss occurs. I *know* how reliable things were before we had barriers and FUA, and I know what I'm seeing now. With all due respect, I'm just not buying this explanation.
Posted Jan 30, 2012 17:35 UTC (Mon) by Otus (guest, #67685)
They used to be small. They've grown at roughly the same rate as HDD capacities, which is significantly faster than throughput has grown, not to mention seek time.
A consumer HDD from c. 2000 might have a 2 MB cache and 40 MB/s throughput, so a full cache empties from sequential data in 50 ms best case. Current 2-3 TB drives have a 64 MB cache and 100-150 MB/s throughput, so a full cache takes around 500 ms minimum to empty.
For non-sequential data it's much worse.
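The arithmetic behind those figures is simple enough to sketch. A minimal calculation, using the comment's own best-case sequential numbers (the 128 MB/s figure for a current drive is an assumption within the quoted 100-150 MB/s range):

```python
def drain_time_ms(cache_mb: float, throughput_mb_s: float) -> float:
    """Best-case time (ms) to flush a full on-drive write cache
    of purely sequential data at the drive's streaming rate."""
    return cache_mb / throughput_mb_s * 1000

# c. 2000 consumer drive: 2 MB cache, 40 MB/s streaming writes
old_drive = drain_time_ms(2, 40)     # 50 ms

# current 2-3 TB drive: 64 MB cache, ~128 MB/s streaming writes
new_drive = drain_time_ms(64, 128)   # 500 ms
```

Random writes add a seek per request, so the real worst case is orders of magnitude longer than these best-case figures.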
Posted Feb 2, 2012 16:57 UTC (Thu) by jd (guest, #26381)
(a) it was battery-backed, and
(b) was write-through
Battery-backed doesn't have to mean the whole drive remains powered up; it just means the DRAM gets enough juice to keep refreshing until regular power is restored, *if* there is any unwritten content in it. In other words, if everything has been flushed to disk, you don't need to keep the drive's RAM powered. If drive manufacturers were *really* clever, only those banks of RAM with unflushed content would need to remain powered.
It's hard to get a frame of reference, as most devices with RAM and a modern Li-ion battery also have a power-hungry CPU and an even hungrier RF system to feed. Here you only need to keep selected RAM chips powered; no processing is required. I have no idea how much self-discharge good batteries suffer, but it is probably small, and just keeping DRAM refreshed doesn't take much power. This solution should be adequate to ride out even Katrina-length power outages. Beyond that, disk corruption is unlikely to be your major concern.
Posted Feb 5, 2012 19:45 UTC (Sun) by rilder (subscriber, #59804)
Speaking of the write cache, the directive is to disable it on the disk if you have a battery-backed write cache sitting in front of it; as for laptops, you can leave it enabled since they are battery-backed. Nobody says to disable caching completely.
You can start reading about it here -- http://xfs.org/index.php/XFS_FAQ#Q:_What_is_the_problem_w...
As for caching in the page cache, it exists to provide better I/O locality, as mentioned in the talk, and it is flushed periodically. If it were flushed on demand as each write arrived, you would end up with a seek nightmare on the disk.
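That periodic flushing is tunable: on Linux, the interval at which the flusher threads wake up is exposed under /proc/sys/vm as `dirty_writeback_centisecs`. A small sketch reading it (the helper name and the fallback are my own; the procfs path and its centisecond unit are standard Linux):

```python
import os

def writeback_interval_seconds(
        path="/proc/sys/vm/dirty_writeback_centisecs"):
    """Return the periodic page-cache flusher wakeup interval in
    seconds, or None when the knob is absent (e.g. non-Linux)."""
    if not os.path.exists(path):
        return None
    with open(path) as f:
        return int(f.read()) / 100  # kernel stores centiseconds
```

A companion knob, `dirty_expire_centisecs`, bounds how old dirty data may get before the flusher writes it out regardless.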
Posted Feb 6, 2012 2:52 UTC (Mon) by dlang (✭ supporter ✭, #313)
Having an SSD or battery-backed cache does not replace doing fsyncs. If you don't do the fsync, you don't know that the data has been written from the OS cache to the disk subsystem at all.
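The point can be made concrete with a short sketch (the helper name is mine; `os.fsync` is the standard way to request this from Python):

```python
import os

def durable_write(path: str, data: bytes) -> None:
    """Write data and force it out of the OS page cache to the
    disk subsystem.  A successful write() alone only means the
    data reached the page cache, which is exactly the gap the
    comment above describes."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    try:
        os.write(fd, data)
        os.fsync(fd)  # block until the kernel hands the data to the device
    finally:
        os.close(fd)
```

Whether the device then makes the data truly durable is the battery-backed-cache question discussed above; fsync only closes the OS-cache half of the gap.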
Copyright © 2013, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds