
Notes from the Intelpocalypse

Posted Jan 4, 2018 5:37 UTC (Thu) by sfeam (subscriber, #2841)
In reply to: Notes from the Intelpocalypse by jimzhong
Parent article: Notes from the Intelpocalypse

That might narrow the timing window but I don't think it would be sufficient to prevent the attack. The analysis of Spectre shows that hundreds of instructions may be executed speculatively before the misprediction is recognized, so snooping on the cache contents would still be possible during that interval.
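
For concreteness, here's a minimal flush+reload probe of the kind the Spectre paper builds on -- just a sketch, assuming x86 with clflush and rdtscp available; probe_array, the 80-cycle threshold, and the trigger_victim callback are illustrative names, not anything from the paper:

/* Sketch of a flush+reload cache probe (x86, GCC/Clang). */
#include <stdint.h>
#include <x86intrin.h>          /* _mm_clflush, __rdtscp */

static uint8_t probe_array[256 * 4096];   /* one page per byte value */

/* Time a single load; a cache hit is far faster than a miss. */
static uint64_t time_access(volatile uint8_t *p)
{
    unsigned aux;
    uint64_t t0 = __rdtscp(&aux);
    (void)*p;
    return __rdtscp(&aux) - t0;
}

/* Flush all slots, let the victim's speculative window run, then
 * see which slot came back hot -- that index is the leaked byte. */
static int recover_byte(void (*trigger_victim)(void))
{
    for (int i = 0; i < 256; i++)
        _mm_clflush(&probe_array[i * 4096]);
    trigger_victim();           /* speculation touches one slot */
    for (int i = 0; i < 256; i++)
        if (time_access(&probe_array[i * 4096]) < 80)  /* rough threshold */
            return i;
    return -1;                  /* nothing obviously cached */
}

With hundreds of speculative instructions in flight, the victim has plenty of time to touch one of those slots before the misprediction is unwound.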



Notes from the Intelpocalypse

Posted Jan 4, 2018 7:22 UTC (Thu) by Cyberax (✭ supporter ✭, #52523) [Link] (11 responses)

And so the ways to snoop on the cache contents should be curtailed.

Notes from the Intelpocalypse

Posted Jan 4, 2018 14:15 UTC (Thu) by droundy (subscriber, #4559) [Link]

Indeed, I wonder about the possibility of separate speculative caches. Sounds terribly expensive, though.

Notes from the Intelpocalypse

Posted Jan 4, 2018 21:50 UTC (Thu) by roc (subscriber, #30627) [Link] (9 responses)

The only reasonable and watertight way to do that that I can think of is to partition the cache by protection domain. So cache lines would have owners: the kernel, specific user-space processes, and even within processes you'd want separate cache lines for JS vs the browser. A cache lookup would have to find a line owned by the current protection domain; if it did not, that has to be treated as a miss, and you would only be allowed to evict cache lines owned by the current domain.
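
Roughly, the lookup rule I have in mind would look like this (a sketch; all the names are made up):

#include <stdbool.h>
#include <stdint.h>

struct cache_line {
    uint64_t tag;
    uint16_t owner;    /* protection-domain ID: kernel, a process, JS... */
    bool     valid;
};

/* A line only hits if its owner matches the current domain; anything
 * else is treated as a miss.  Eviction would be similarly restricted
 * to lines the current domain owns. */
static bool lookup(const struct cache_line *set, int ways,
                   uint64_t tag, uint16_t current_domain)
{
    for (int w = 0; w < ways; w++)
        if (set[w].valid && set[w].tag == tag &&
            set[w].owner == current_domain)
            return true;
    return false;
}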

It would hurt performance but what else would really work?

Notes from the Intelpocalypse

Posted Jan 4, 2018 22:28 UTC (Thu) by rahvin (guest, #16953) [Link] (2 responses)

That sounds like a fix that would destroy cache effectiveness. You'd probably also enable a DoS attack that partitions the cache until there isn't any left and things start locking up.

Notes from the Intelpocalypse

Posted Jan 4, 2018 22:40 UTC (Thu) by roc (subscriber, #30627) [Link] (1 responses)

There are probably ways to avoid lockup. I agree the performance impact would be bad. But what else really works?

Notes from the Intelpocalypse

Posted Jan 5, 2018 1:46 UTC (Fri) by rahvin (guest, #16953) [Link]

And we all thought Heartbleed was the worst thing ever; it kinda pales in comparison to this.

Notes from the Intelpocalypse

Posted Jan 4, 2018 22:51 UTC (Thu) by sfeam (subscriber, #2841) [Link] (1 responses)

It's worse than you think. The use of the cache as a side channel was convenient for the proof-of-concept exploits, but it was not necessary. Mitigation that focuses on the cache rather than on the speculative execution of invalid code is necessarily incomplete. The Spectre report notes: "potential countermeasures limited to the memory cache are likely to be insufficient, since there are other ways that speculative execution can leak information. For example, timing effects from memory bus contention, DRAM row address selection status, availability of virtual registers, ALU activity, [...] power and EM."

Notes from the Intelpocalypse

Posted Jan 4, 2018 23:00 UTC (Thu) by roc (subscriber, #30627) [Link]

Yeah, I read the paper. Just addressing the cache question since it was raised.

Notes from the Intelpocalypse

Posted Jan 4, 2018 22:57 UTC (Thu) by Cyberax (✭ supporter ✭, #52523) [Link] (1 responses)

Switchable caches (by PCID), perhaps?

Notes from the Intelpocalypse

Posted Jan 4, 2018 23:01 UTC (Thu) by roc (subscriber, #30627) [Link]

That's basically the same as partitioning the cache, isn't it?

Notes from the Intelpocalypse

Posted Jan 5, 2018 0:03 UTC (Fri) by excors (subscriber, #95769) [Link]

Rather than restricting each domain to a tiny partition of the cache (which sounds painful for L1), perhaps you could let each domain use the whole cache (like now) but flush it every time you switch domain.

Then you'd want to rearchitect software to minimise the amount of domain-switching. E.g. instead of a syscall accessing protected data from the same core as the application, it would just be a stub that sends a message to a dedicated kernel core. Neither core would have to flush their own cache, and they couldn't influence each other's cache. Obviously you'd have to get rid of cache coherence (I don't see how your proposal would be compatible with coherence either), and split shared L2/L3 caches into dynamically-adjustable per-domain partitions, and no hyperthreading, etc.
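
The stub could be as simple as this sketch -- the shared ring and its fields are entirely hypothetical, and the ring itself would need some explicit signalling mechanism once caches stop being coherent:

#include <stdatomic.h>
#include <stdint.h>

/* One slot of a ring shared with the dedicated kernel core. */
struct syscall_msg {
    atomic_int state;          /* 0 = free, 1 = posted, 2 = done */
    int        nr;             /* syscall number */
    uint64_t   args[6];
    int64_t    ret;
};

/* The "syscall" just posts a message and waits; the protected data
 * is only ever touched by the kernel core, in its own cache. */
static int64_t remote_syscall(struct syscall_msg *slot,
                              int nr, const uint64_t args[6])
{
    slot->nr = nr;
    for (int i = 0; i < 6; i++)
        slot->args[i] = args[i];
    atomic_store(&slot->state, 1);            /* post the request */
    while (atomic_load(&slot->state) != 2)
        ;                                     /* spin until done */
    atomic_store(&slot->state, 0);            /* release the slot */
    return slot->ret;
}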

Then maybe someone will notice that DRAM chips remember the last row that was accessed, so a core can touch one of two rows and another core can detect which one responds faster, and leak information that way. Then we'll have to partition DRAM by domain too.

Eventually we might essentially have a network of tiny PCs, each with its own CPU and RAM and disk and dedicated to a single protection domain, completely isolated from each other except for an Ethernet link.

Hmm, I'm not sure that will be good enough either: Spectre gets code in one domain (e.g. the kernel) to leak data into cache that affects the timing of a memory read in another domain (e.g. userspace), but couldn't it work with a purely kernel-only cache, if you simply find an easily-timeable kernel call that performs the memory read for you? Then it doesn't matter how far removed the attacker is from the target.
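
The probe needn't be anything more exotic than this sketch, where mystery_syscall() stands in for whatever easily-timeable call one finds:

#include <stdint.h>
#include <time.h>

static uint64_t ns_now(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 1000000000u + ts.tv_nsec;
}

/* If the kernel's earlier speculation cached (or didn't cache) the
 * data this call reads, the latency difference shows up out here,
 * kernel-only cache or not.  mystery_syscall() is hypothetical. */
static uint64_t time_kernel_call(void (*mystery_syscall)(void))
{
    uint64_t t0 = ns_now();
    mystery_syscall();
    return ns_now() - t0;
}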

Notes from the Intelpocalypse

Posted Jan 5, 2018 13:44 UTC (Fri) by welinder (guest, #4699) [Link]

Even that might not be enough. If any information based on speculation has left the CPU chip -- memory reads that reach main memory -- then you might get caching effects there.

I don't see tagging every memory location with an owner as a viable option.

