Especially if the filesystem itself is already being cavalier with the data, by holding it in the page/buffer caches as long as it can and playing Russian roulette with "features" like delayed allocation. All in the name of good benchmark numbers.
Drive caches are a small factor in comparison. We never really used to worry about them, or even think about them. Now they seem to be the preferred scapegoat for Linux filesystem devs when data loss occurs. I *know* how reliable things were before we had barriers and FUA, and I know what I'm seeing now. With all due respect, I'm just not buying this explanation.
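
To make that concrete: with delayed allocation, data handed to write() can sit in the page cache for a long time, and the drive's volatile write cache adds another layer on top of that. The only portable way for an application to get data onto stable storage is an explicit fsync(), which on a modern kernel also flushes the drive cache via barriers/FUA. Here is a minimal sketch of the classic write-then-rename pattern; the file names and error handling are purely illustrative, not anyone's actual code:

    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        const char buf[] = "important data\n";

        /* Write to a temp file, then rename over the target: the
         * pattern that delayed allocation made hazardous. */
        int fd = open("data.tmp", O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0) { perror("open"); return EXIT_FAILURE; }

        if (write(fd, buf, strlen(buf)) != (ssize_t)strlen(buf)) {
            perror("write");  /* data may exist only in the page cache */
            return EXIT_FAILURE;
        }

        /* fsync() pushes the data out of the page cache and, via
         * barriers/FUA, out of the drive's volatile write cache. */
        if (fsync(fd) < 0) { perror("fsync"); return EXIT_FAILURE; }
        close(fd);

        if (rename("data.tmp", "data") < 0) {
            perror("rename");
            return EXIT_FAILURE;
        }
        return EXIT_SUCCESS;
    }

Skip the fsync() and a crash at the wrong moment can leave a freshly renamed, zero-length file behind, which is exactly the failure mode that made delayed allocation so controversial.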