
The "too small to fail" memory-allocation rule


Posted Dec 25, 2014 18:38 UTC (Thu) by quotemstr (subscriber, #45331)
In reply to: The "too small to fail" memory-allocation rule by yoe
Parent article: The "too small to fail" memory-allocation rule

Why does ZFS need its own cache? Page cache should be sufficient and system-global. If you want a particular installation to prefer file-backed pages to swap-backed pages at reclaim time, fine, but I don't see why this policy should have to be tied to a specific filesystem. It feels like a step backward from the unified cache era.



The "too small to fail" memory-allocation rule

Posted Dec 25, 2014 22:33 UTC (Thu) by yoe (guest, #25743)

ZFS doesn't necessarily need its own cache, I suppose, but the algorithms involved can be more efficient if they have knowledge of the actual storage layout (which *does* require that they be part of the ZFS code, if taken to the extreme). E.g., ZFS has the ability to store a second-level cache on SSDs; if the first-level (in-memory) cache needs to drop something, it may prefer to drop an entry which it knows is also stored in the SSD cache over one which isn't, since the former remains cheap to re-read. The global page cache doesn't have the intricate knowledge of the storage backend that is required for that sort of thing.
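The eviction preference described above can be sketched roughly like this (an illustrative toy, not actual ZFS/ARC code; the class and parameter names are made up for the example):

```python
from collections import OrderedDict

class TwoLevelCache:
    """Toy in-memory cache that prefers to evict entries also held
    on a hypothetical second-level (SSD-backed) tier, since those
    only cost an SSD read to bring back, not a disk read."""

    def __init__(self, capacity, l2_keys):
        self.capacity = capacity   # max entries in the in-memory level
        self.l1 = OrderedDict()    # LRU order: oldest entries first
        self.l2_keys = l2_keys     # set of keys also present on the SSD tier

    def get(self, key):
        if key in self.l1:
            self.l1.move_to_end(key)   # mark as recently used
            return self.l1[key]
        return None

    def put(self, key, value):
        self.l1[key] = value
        self.l1.move_to_end(key)
        while len(self.l1) > self.capacity:
            self._evict_one()

    def _evict_one(self):
        # Scan oldest-first for an entry the SSD tier also holds;
        # dropping it is cheap because the data survives on the SSD.
        for key in self.l1:
            if key in self.l2_keys:
                del self.l1[key]
                return
        # Nothing is SSD-backed: fall back to plain LRU eviction.
        self.l1.popitem(last=False)
```

With capacity 2 and `"b"` on the SSD tier, inserting `"a"`, `"b"`, `"c"` evicts `"b"` rather than the older `"a"`, which is exactly the policy a storage-agnostic page cache can't express.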

I suppose that's a step backwards if what you want is "mediocre cache performance, but similar performance for *all* file systems". That's not what ZFS is about, though; ZFS wants to provide excellent performance at all costs. That does mean it's not the best choice for all workloads, but it does beat the pants off of most other solutions in the workloads that it was meant for.


Copyright © 2025, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds