
Remote Spectre exploits demonstrated

This paper from four Graz University of Technology researchers [PDF] describes a mechanism they have developed to exploit the Spectre V1 vulnerability over the net, with no local code execution required. "We show that memory access latency, in general, can be reflected in the latency of network requests. Hence, we demonstrate that it is possible for an attacker to distinguish cache hits and misses on specific cache lines remotely, by measuring and averaging over a larger number of measurements. Based on this, we implemented the first access-driven remote cache attack, a remote variant of Evict+Reload called Thrash+Reload. Our remote Thrash+Reload attack is a significant leap forward from previous remote cache timing attacks on cryptographic algorithms. We facilitate this technique to retrofit existing Spectre attacks to our network-based scenario. This NetSpectre variant is able to leak 15 bits per hour from a vulnerable target system." Other attacks described in the paper are able to achieve higher rates.
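
For readers who want a feel for the measurement primitive involved, here is a minimal sketch (not the researchers' code) of distinguishing remote cache hits from misses by averaging request latencies; the target host, endpoint, and sample count are hypothetical placeholders.

```python
# Rough sketch of the remote timing primitive (not the NetSpectre code):
# time many identical requests that touch one secret-dependent cache line,
# average the latencies, and compare the mean against a calibrated
# threshold to classify "hit" vs "miss".  Host, path, and sample count
# below are hypothetical.
import socket
import statistics
import time

TARGET = ("victim.example.org", 80)
REQUEST = (b"GET /leak-gadget HTTP/1.1\r\n"
           b"Host: victim.example.org\r\nConnection: close\r\n\r\n")
SAMPLES = 100_000   # the paper averages over very large numbers of samples

def time_one_request() -> float:
    """Round-trip latency of a single request, in seconds."""
    with socket.create_connection(TARGET, timeout=5) as s:
        start = time.perf_counter()
        s.sendall(REQUEST)
        s.recv(4096)                 # wait for (the start of) the response
        return time.perf_counter() - start

def mean_latency(n: int = SAMPLES) -> float:
    """Average over n requests so that random jitter cancels out."""
    return statistics.fmean(time_one_request() for _ in range(n))

# Calibrate by measuring the mean once with the target cache line known to
# be cached and once with it evicted; each leaked bit is then read off from
# which side of the midpoint a fresh mean falls on.
```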


Remote Spectre exploits demonstrated

Posted Jul 27, 2018 14:48 UTC (Fri) by martin.langhoff (subscriber, #61417) [Link] (4 responses)

From a quick read, the network and target machine need to be very "quiet" for this to work...

Remote Spectre exploits demonstrated

Posted Jul 27, 2018 16:31 UTC (Fri) by jcm (subscriber, #18262) [Link]

Not strictly required; we've seen it reproduced in a range of environments.

Remote Spectre exploits demonstrated

Posted Jul 30, 2018 7:54 UTC (Mon) by jk (subscriber, #31383) [Link]

I would have thought so too, but:

> We used `stress -i 1 -d 1` for the experiments, to simulate a
> realistic environment. Although we would have expected our attack
> to work best on a completely idle server, we did not see any negative
> effects from the moderate server loads. In fact, they even slightly
> improved the attack performance

(section 6.3)

Remote Spectre exploits demonstrated

Posted Jul 30, 2018 11:27 UTC (Mon) by rweikusat2 (subscriber, #117920) [Link] (1 response)

Not necessarily. The only hard requirement is that the 'uncontrolled fluctuations' in machine state are random, IOW, that they cancel each other out when enough measured values are averaged. OTOH, the noisier the environment, the lower the achievable transmission rate will be, and 15 bits/hour isn't exactly a high bandwidth to begin with.
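
A toy simulation makes the averaging point concrete (the numbers below are illustrative assumptions, not measurements from the paper): a ~100 ns latency difference is invisible in individual samples swamped by microsecond-scale jitter, but the sample means separate once enough measurements are averaged, because the standard error of the mean falls as sigma/sqrt(n).

```python
# Toy illustration (made-up numbers): a small fixed latency difference
# drowned in Gaussian jitter becomes visible once enough samples are
# averaged, since the standard error of the mean falls as sigma/sqrt(n).
import random
import statistics

BASE = 15e-6      # assumed ~15 us baseline round-trip time
DELTA = 100e-9    # assumed ~100 ns extra latency for a cache miss
SIGMA = 2e-6      # assumed jitter (standard deviation) from network/OS noise

def sample(miss: bool) -> float:
    return random.gauss(BASE + (DELTA if miss else 0.0), SIGMA)

for n in (1_000, 100_000, 1_000_000):
    hit_mean = statistics.fmean(sample(False) for _ in range(n))
    miss_mean = statistics.fmean(sample(True) for _ in range(n))
    print(f"n={n:>9}: observed difference = {(miss_mean - hit_mean) * 1e9:7.1f} ns")
```

The same formula is why noise hurts the bit rate: the number of samples needed to resolve a given difference grows with the square of the jitter, so a noisier environment means proportionally more averaging per bit.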

Remote Spectre exploits demonstrated

Posted Aug 2, 2018 8:16 UTC (Thu) by timokokk (subscriber, #52029) [Link]

Something that probably makes a difference in the real world is the network latency. I didn't read the paper with too much care, but it caught my eye that in their setup they had ~15 µs average latency and used a million samples per bit to distinguish ones from zeroes. In real life the latency to any random server is at least in the range of milliseconds and varies much more. You basically need to be on the same LAN to exploit the flaw reliably; otherwise your bits get lost in the noise. So we are nowhere near the point where you could pick random servers on the net and extract data from them just by measuring response latency variance.
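
A back-of-envelope calculation (my own numbers, taking the ~15 µs round trip and a million samples per bit at face value, and assuming strictly serial requests) shows how quickly that blows up:

```python
# Back-of-envelope: wire time per leaked bit for serial requests, comparing
# the LAN-like ~15 us round trip mentioned above with a typical WAN round
# trip of ~1 ms (illustrative assumption).
samples_per_bit = 1_000_000
for rtt in (15e-6, 1e-3):
    minutes_per_bit = samples_per_bit * rtt / 60
    print(f"RTT {rtt * 1e6:7.0f} us -> {minutes_per_bit:5.1f} minutes per bit")
# That is before counting the extra samples a jittery WAN path would demand,
# which grow roughly with the square of the noise-to-signal ratio.
```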

Remote Spectre exploits demonstrated

Posted Jul 27, 2018 23:26 UTC (Fri) by jcm (subscriber, #18262) [Link]

@Jon: technically they can exploit other variants remotely as well; they just use v1 for simplicity.

