LWN: Comments on "Defending against page-cache attacks" https://lwn.net/Articles/776801/ This is a special feed containing comments posted to the individual LWN article titled "Defending against page-cache attacks". en-us Wed, 01 Oct 2025 07:58:40 +0000 Defending against page-cache attacks https://lwn.net/Articles/778053/ https://lwn.net/Articles/778053/ sourcejedi <div class="FormattedComment"> <a rel="nofollow" href="https://cartesianproduct.wordpress.com/2011/09/15/done-and-dusted/">https://cartesianproduct.wordpress.com/2011/09/15/done-an...</a><br> <p> <font class="QuotedText">&gt; The report was on “applying working set heuristics to the Linux kernel“: essentially testing to see if there were ways to overlay some elements of local page replacement to the kernel’s global page replacement policy that would speed turnaround times.</font><br> <p> <font class="QuotedText">&gt; The answer to that appears to be ‘no’ – at least not in the ways I attempted, though I think there may be some ways to improve performance if some serious studies of phases of locality in programs gave us a better understanding of ways to spot the end of one phase and the beginning of another.</font><br> <p> <font class="QuotedText">&gt; But, generally speaking, my work showed the global LRU policy of the kernel was pretty robust.</font><br> </div> Thu, 31 Jan 2019 11:31:30 +0000 Defending against page-cache attacks https://lwn.net/Articles/777890/ https://lwn.net/Articles/777890/ nix <div class="FormattedComment"> I had a wobbly RAM pack with an extra flaw: the PSU on my ZX81 was underspec so it didn't generate quite enough power to power the RAM and screen at once. The video signal generation was the first thing to go: you got waves of sync problems like a bad VHS video player working their way over the screen. 
But it didn't take long for eight-year-old me to figure out that the RAM wasn't holding its content either...<br> <p> (Obviously I couldn't fix it. An eight-year-old with terrible coordination go messing in a power supply?! HELL NO.)<br> </div> Wed, 30 Jan 2019 14:42:51 +0000 Defending against page-cache attacks https://lwn.net/Articles/777658/ https://lwn.net/Articles/777658/ gevaerts <div class="FormattedComment"> That's why you built some contraption to keep it all in place! (which is, of course, when something went wrong with saving and you had to re-type it anyway)<br> </div> Mon, 28 Jan 2019 14:09:55 +0000 Defending against page-cache attacks https://lwn.net/Articles/777640/ https://lwn.net/Articles/777640/ paulj <div class="FormattedComment"> Complete tangent from the story: That 16 KiB ZX81 RAM pack - it was wobbly, and just as you'd be getting to the end of (what felt like to a 9yo anyway) hours of typing in some programme, it'd wobble, the ZX81 would reset and everything would be gone! Oh that RAM pack, so frustrating! :)<br> </div> Mon, 28 Jan 2019 07:55:50 +0000 Defending against page-cache attacks https://lwn.net/Articles/777437/ https://lwn.net/Articles/777437/ marcH <div class="FormattedComment"> <font class="QuotedText">&gt; &gt; &gt; &gt; So the known mechanisms for non-destructively querying the state of the page cache are likely to be shut down, perhaps only if the kernel is configured into a "secure mode".</font><br> <p> <font class="QuotedText">&gt; The future of computing is straight-up partitioning, sharing nothing. 
It's a much simpler and more robust world.</font><br> <p> To avoid a myriad of new CONFIG_SECURE_SIDE_CHANNEL_FOO options, how about a single CONFIG_SHARED_SYSTEM setting controlling all these at once?<br> <p> "Shared" can unfortunately apply to single-user systems too; think Android applications, for instance :-(<br> <p> </div> Thu, 24 Jan 2019 05:15:25 +0000 Defending against page-cache attacks https://lwn.net/Articles/777215/ https://lwn.net/Articles/777215/ Paf <div class="FormattedComment"> The page cache is not just about reuse.<br> <p> The page cache allows both write aggregation and readahead, and for writes to complete asynchronously from the submitting syscall. Both of these have enormous (positive) performance impacts which rise as the amount of I/O the filesystem/device can have in flight increases, and also as the response latency of the device increases.<br> <p> The page cache allows your single threaded dd to have the system queue up a bunch of writes which may be able to be processed all at once, as contrasted with direct I/O which is 1 I/O per process.<br> <p> Additionally, if your whole write fits in the page cache and you’re not doing other heavy I/O (i.e. semi-idle time is available to write out your data) the ability to write to memory and complete asynchronously means your application level performance (where the app doesn’t wait for the write to be on disk) will stomp almost any standard storage device or RAID array.<br> <p> This means it’s not beneficial to use direct I/O for single use I/O in general; it really depends on your case. DIO is essentially only faster in the cases where your device is *extremely* fast or you have many threads and a very high bandwidth back end (you can overwhelm the page cache).<br> <p> In cases with higher latency devices (HDD, network file systems) or where there is device level parallelism to exploit (SSDs), direct I/O is often much, much slower, even for well formed I/O. 
(In real deployments of the Lustre parallel file system, which I work on, single threaded DIO can be 5-10x slower than normal I/O. That’s an extreme case but the reasons for it hold for local file systems too.)<br> </div> Mon, 21 Jan 2019 01:49:36 +0000 Defending against page-cache attacks https://lwn.net/Articles/777197/ https://lwn.net/Articles/777197/ farnz <p>Your i7-4790K has 32 KiB I$ and 32 KiB D$ - so about as much total L1 cache as your C64 had RAM, but not enough to cover the ROM as well. <p>My first Z80 machine would fit in L1 cache on your CPU, though - the ZX81 had 1 KiB RAM, 8 KiB ROM, and could be expanded commercially to 16 KiB RAM, 8 KiB ROM. Sun, 20 Jan 2019 20:00:07 +0000 Defending against page-cache attacks https://lwn.net/Articles/777191/ https://lwn.net/Articles/777191/ Sesse <div class="FormattedComment"> That's not L1 cache only. You can do cache attacks even if you only have L3.<br> </div> Sun, 20 Jan 2019 18:37:19 +0000 Defending against page-cache attacks https://lwn.net/Articles/777148/ https://lwn.net/Articles/777148/ naptastic <div class="FormattedComment"> I can fit 2^7 of my first computer in the on-die cache of my five-year-old desktop processor. (Commodore 64 -&gt; i7 4790k)<br> </div> Sat, 19 Jan 2019 01:45:40 +0000 Defending against page-cache attacks https://lwn.net/Articles/777140/ https://lwn.net/Articles/777140/ quotemstr <div class="FormattedComment"> Well, yeah. The whole reason we share stuff in the first place is to make efficient use of limited system resources. As resources become cheaper, the case for elaborate (and apparently insecurity-prone) sharing mechanisms diminishes. The future of computing is straight-up partitioning, sharing nothing. 
It's a much simpler and more robust world.<br> </div> Fri, 18 Jan 2019 19:23:36 +0000 Defending against page-cache attacks https://lwn.net/Articles/777103/ https://lwn.net/Articles/777103/ bof <div class="FormattedComment"> <font class="QuotedText">&gt; wonder how many programs will use O_DIRECT now.</font><br> <p> Anything with a use case that wants to *avoid* perturbing the page cache. As a sysadmin I regularly use dd iflag=direct or oflag=direct when checksumming or network copying block devices. Applicable to all do-once I/O, actually, and the last time I played with fadvise FADV_NOREUSE (which dd does not support anyway) it was much less reliable.<br> </div> Fri, 18 Jan 2019 14:47:57 +0000 Defending against page-cache attacks https://lwn.net/Articles/777100/ https://lwn.net/Articles/777100/ Sesse <div class="FormattedComment"> By RAM, you mean L1 cache?<br> </div> Fri, 18 Jan 2019 13:36:22 +0000 Defending against page-cache attacks https://lwn.net/Articles/777098/ https://lwn.net/Articles/777098/ amarao <div class="FormattedComment"> <font class="QuotedText">&gt; wonder how many programs will use O_DIRECT now. Or am I misunderstanding things?</font><br> A lot of server apps, especially on the I/O side (iSCSI, various storage/cluster/database software). The faster the underlying device, the more desirable it is to use O_DIRECT for access.<br> </div> Fri, 18 Jan 2019 13:20:49 +0000 Defending against page-cache attacks https://lwn.net/Articles/777080/ https://lwn.net/Articles/777080/ mangix <div class="FormattedComment"> wonder how many programs will use O_DIRECT now. Or am I misunderstanding things?<br> </div> Fri, 18 Jan 2019 04:27:59 +0000 Defending against page-cache attacks https://lwn.net/Articles/777078/ https://lwn.net/Articles/777078/ Nahor <div class="FormattedComment"> <font class="QuotedText">&gt; I think the general case problem here is that cached data is generally interesting data.</font><br> <p> Easy solution: just cache everything. 
Load the whole disk in RAM at boot. No slow access, no timing attacks, and the system becomes faster. Win-win! :)<br> </div> Fri, 18 Jan 2019 01:32:58 +0000 Defending against page-cache attacks https://lwn.net/Articles/777051/ https://lwn.net/Articles/777051/ kucharsk <div class="FormattedComment"> I think the general case problem here is that cached data is generally interesting data.<br> <p> You can extend the paradigm as far out into the computing arena as you like; if a system has both SSD and hard drives, data from SSD will probably be more important or of greater interest than that on the spinning media. If you have a storage solution that sends data off to secondary or tertiary storage, the time it takes to access said data reveals how old the data is.<br> <p> Likewise on systems with NVRAM, information in NVRAM will generally be more important or interesting than data not kept in non-volatile storage.<br> <p> This paradigm is of course true for all operating systems, not just Linux.<br> <p> Timing is always an issue; during the Cold War, Soviet spies were able to wiretap IBM Selectric typewriters in embassies by detecting how long it took the type ball to rotate to each character, giving them a reasonable chance of determining each character being typed.<br> <p> We obviously can't take the approach of "slow everything down to the time taken to access the slowest device," and there will always be a need to be able to pre-populate clusters, containers or other mechanisms to provide for fast startup times or to provide instant failover. 
Someone will need access to that information, and as soon as someone does, that's a potential leak.<br> <p> It's more a matter of reducing exposure than eliminating it, and the question is where does that balance between security and the need for ever faster operation lie?<br> </div> Thu, 17 Jan 2019 21:44:52 +0000 Defending against page-cache attacks https://lwn.net/Articles/777043/ https://lwn.net/Articles/777043/ quotemstr <div class="FormattedComment"> The paper authors suggest moving toward a Windows-style "working set" model of page cache instead of global LRU. I wish this option would be more seriously considered despite it involving massive vm subsystem changes.<br> </div> Thu, 17 Jan 2019 20:59:36 +0000
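[Editor's note] As a concrete illustration of the "non-destructive querying" primitive these comments revolve around, here is a minimal sketch, not taken from any commenter: on Linux, mincore(2) reports, per page of a mapping, whether that page is currently resident in the page cache, which is exactly the probe the article's proposed hardening restricts. The sketch assumes glibc (libc.so.6); the names resident_pages and PAGE are the editor's. The file is opened read-write here only so ctypes can take the mapping's address; a real observer would map the victim file read-only.

```python
import ctypes
import mmap
import os

# Assumes Linux with glibc; mincore(2) is the classic non-destructive
# page-cache residency probe discussed in the comments above.
libc = ctypes.CDLL("libc.so.6", use_errno=True)
PAGE = os.sysconf("SC_PAGESIZE")

def resident_pages(path):
    """Return one bool per page of `path`: True if that page is cached."""
    size = os.path.getsize(path)
    if size == 0:
        return []
    fd = os.open(path, os.O_RDWR)  # read-write only so from_buffer() works
    try:
        mm = mmap.mmap(fd, size)  # shared mapping; mmap is page-aligned
        try:
            npages = (size + PAGE - 1) // PAGE
            vec = (ctypes.c_ubyte * npages)()
            addr = ctypes.addressof(ctypes.c_char.from_buffer(mm))
            if libc.mincore(ctypes.c_void_p(addr),
                            ctypes.c_size_t(size), vec) != 0:
                raise OSError(ctypes.get_errno(), "mincore failed")
            # Only the low bit of each vector byte is defined.
            return [bool(b & 1) for b in vec]
        finally:
            mm.close()
    finally:
        os.close(fd)
```

Once mincore() is changed to report only pages the caller itself faulted in (one of the mitigations discussed), a probe like this stops revealing other users' access patterns while still working for legitimate self-inspection.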