KAISER: hiding the kernel from user space
Posted Nov 15, 2017 8:19 UTC (Wed) by marcH (subscriber, #57642)
Parent article: KAISER: hiding the kernel from user space
I find timing-based attacks fascinating.
In order to keep growing, computer performance has become less and less deterministic. On one hand this makes real-time guarantees and performance prediction harder. On the other hand it leaks more and more information about the system.
Posted Nov 15, 2017 14:19 UTC (Wed) by epa (subscriber, #39769) [Link] (6 responses)
Posted Nov 15, 2017 15:55 UTC (Wed) by matthias (subscriber, #94967) [Link]
I also did not know this before, but several of these attacks are described in the linked paper.
Posted Nov 27, 2017 15:46 UTC (Mon) by abufrejoval (guest, #100159) [Link] (4 responses)
These days, when CPUs constantly vary their speed to either exploit every bit of thermal headroom they can find or constantly re-adjust to hit an energy optimum for a limited-value workload, it seems almost stupid to try sticking to a constant speed.

If instead you set a randomization bias, you could run CPUs at, say, a 5GHz logical clock and then add random delays to hit, say, 3, 2 or 1GHz on average, depending on the workload. Every iteration of an otherwise pretty identical loop would wind up a couple of clocks different, throwing off snoop code without much of an impact elsewhere. Of course it shouldn't be one central clock overall; essentially any clock domain could use its own randomization source and bias. I guess CPUs have vast numbers of clock-synchronization gates these days anyway, so very little additional hardware should be required.
Stupid, genius or simply old news?
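A minimal sketch of the randomized-delay idea, modeled in Python rather than in silicon (the tick costs, jitter range, and function names below are illustrative assumptions, not anything a real CPU exposes): every iteration pays a random extra cost, so two runs of an identical loop no longer report the same duration.

    import random

    # Toy model of per-iteration clock jitter (an illustrative sketch, not a CPU model).
    # Each operation nominally costs BASE_TICKS; the "randomization bias" adds a random
    # number of extra ticks, so identical loops never add up to identical totals.

    BASE_TICKS = 1   # nominal cost of one loop iteration
    MAX_JITTER = 2   # extra ticks added at random (the "bias")

    def timed_loop(iterations, rng):
        """Return the total tick count for a loop with randomized per-iteration cost."""
        total = 0
        for _ in range(iterations):
            total += BASE_TICKS + rng.randint(0, MAX_JITTER)
        return total

    rng = random.Random(0)
    # Two runs of the exact same loop now report different totals.
    print(timed_loop(1000, rng), timed_loop(1000, rng))

The comment further down shows why this kind of jitter is weaker than it looks once an attacker can average many measurements.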
Posted Nov 27, 2017 16:55 UTC (Mon) by NAR (subscriber, #1313) [Link] (2 responses)
Posted Jan 5, 2018 8:26 UTC (Fri) by clbuttic (guest, #121058) [Link] (1 responses)
Posted Jan 6, 2018 16:02 UTC (Sat) by nix (subscriber, #2304) [Link]
Posted Nov 27, 2017 17:43 UTC (Mon) by excors (subscriber, #95769) [Link]
For example, the KAISER paper says the "double page fault attack" distinguishes page faults taking 12282 cycles for mapped pages and 12307 cycles for unmapped pages, i.e. a difference of 25 cycles. If I remember how maths works: You could add a random delay to the page fault handler (or randomly vary the CPU speed or whatever) so it has a mean and standard deviation of (M, S) for mapped pages and (M+25, S) for unmapped. If S > 25 (very roughly) then the attacker can measure a page fault but can't be sure whether it belongs to the first category or the second.
But if they repeat it 10,000 times (which only takes a few msecs) and average their measurements, they'd expect to get a number in the distribution (M, S/100) for mapped pages or (M+25, S/100) for unmapped, since averaging N samples shrinks the standard deviation by a factor of sqrt(N) = 100. You'd have to make S > 2500 to stop them being able to distinguish those cases easily. At that point it's much more expensive than the KAISER defence, and it would still be useless against an attacker who can repeat the measurement a million times. And that's for measuring a relatively tiny difference of 25 cycles in an operation that takes ~12K cycles; it's harder to protect the TSX or prefetch attacks, where the operation only takes ~300 cycles.
It seems much safer to ensure operations will always take a constant amount of time, rather than adding randomness and just hoping the statistics work in your favour.
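A rough simulation of the averaging argument above, assuming Gaussian jitter (the 12282- and 12307-cycle means are the ones quoted from the KAISER paper; the jitter of S = 500 cycles is an illustrative assumption, already 20 times the 25-cycle gap being hidden):

    import random
    import statistics

    # Rough simulation of the averaging attack described above. The means come from
    # the KAISER paper as quoted; JITTER_SD is an illustrative assumption.
    MAPPED_MEAN, UNMAPPED_MEAN = 12282, 12307
    JITTER_SD = 500        # far larger than the 25-cycle difference being hidden
    SAMPLES = 10_000

    rng = random.Random(1)

    def measure(mean):
        """One noisy page-fault timing measurement."""
        return rng.gauss(mean, JITTER_SD)

    mapped_avg = statistics.fmean(measure(MAPPED_MEAN) for _ in range(SAMPLES))
    unmapped_avg = statistics.fmean(measure(UNMAPPED_MEAN) for _ in range(SAMPLES))

    # The standard error of each average is roughly JITTER_SD / sqrt(SAMPLES) = 5 cycles,
    # so the ~25-cycle gap remains clearly visible.
    print(f"mapped:   {mapped_avg:.1f}")
    print(f"unmapped: {unmapped_avg:.1f}")

With 10,000 samples the two averages land within a few cycles of their true means, so the cases separate cleanly; matching the estimate above, S would have to exceed roughly 2500 cycles before averaging stopped working.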