
West: Post-Spectre web development

Mike West has posted a detailed exploration of what is really required to protect sensitive information in web applications from speculative-execution exploits. "Spectre-like side-channel attacks inexorably lead to a model in which active web content (JavaScript, WASM, probably CSS if we tried hard enough, and so on) can read any and all data which has entered the address space of the process which hosts it. While this has deep implications for user agent implementations' internal hardening strategies (stack canaries, ASLR, etc), here we’ll remain focused on the core implication at the web platform level, which is both simple and profound: any data which flows into a process hosting a given origin is legible to that origin. We must design accordingly."


West: Post-Spectre web development

Posted Feb 27, 2021 14:05 UTC (Sat) by jgg (subscriber, #55211) [Link] (3 responses)

One thing I've always wondered about Spectre in web browsers: don't all the side-channel attacks require highly accurate timing/performance data to measure what the processor is doing speculatively?

In a sandbox like a web browser, isn't it reasonable to just block access to this high-resolution data? E.g., by limiting time resolution, blocking performance counters, and/or adding randomness.
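
For illustration, "limiting time resolution" might look something like this sketch in JavaScript (the 100 µs granularity and the jitter range are made-up values, not what any particular browser actually ships):

    // Wrap the high-resolution clock so callers only see timestamps
    // rounded down to a coarse bucket, plus random fuzz within the
    // bucket so the bucket edges can't be probed precisely.
    const CLAMP_US = 100;  // illustrative granularity: 100 microseconds

    function coarsenedNow() {
        const realUs = performance.now() * 1000;          // ms -> us
        const clampedUs = Math.floor(realUs / CLAMP_US) * CLAMP_US;
        const jitterUs = Math.random() * CLAMP_US;        // fuzz within one bucket
        return (clampedUs + jitterUs) / 1000;             // back to ms
    }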

West: Post-Spectre web development

Posted Feb 27, 2021 14:22 UTC (Sat) by Paf (subscriber, #91811) [Link]

The first two have largely been done (removing access to detailed metrics/high-res timers), but my understanding is that without adding "sound" randomness to *every* interaction, all you can do is reduce the data rate.

They can just do a statistical analysis of response times on whatever they gin up, and gradually extract data from that. Any randomness added would have to be added to *every* interaction, and if it didn't change, it could presumably be puzzled out. Maybe it's possible to do ... something ... with that and a CSPRNG? Eek. That seems fraught.
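
A toy simulation of that statistical analysis, with entirely made-up numbers: a 1-unit secret-dependent timing difference buried under ±50 units of random jitter is still recoverable by averaging enough samples:

    // A "measurement" whose true duration differs by 1 unit depending
    // on a secret bit, buried under +/- 50 units of uniform jitter.
    function noisySample(secretBit) {
        const trueDuration = 100 + secretBit;        // the 1-unit signal
        const jitter = (Math.random() - 0.5) * 100;  // +/- 50 units of noise
        return trueDuration + jitter;
    }

    // Average many samples and threshold at the midpoint: the noise
    // averages out, the secret-dependent signal does not.
    function guessBit(secretBit, samples) {
        let sum = 0;
        for (let i = 0; i < samples; i++) sum += noisySample(secretBit);
        return (sum / samples) > 100.5 ? 1 : 0;
    }

    console.log(guessBit(1, 1000000));  // almost always prints 1
    console.log(guessBit(0, 1000000));  // almost always prints 0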

West: Post-Spectre web development

Posted Feb 27, 2021 14:59 UTC (Sat) by Cyberax (✭ supporter ✭, #52523) [Link] (1 response)

> In a sandbox like a web browser, isn't it reasonable to just block access to this high resolution data?
You can create a high-res timer by running a thread that increments a variable and observing it from another thread.

The only way to defeat this is to disable shared-memory multithreading for JS.
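
A rough sketch of that counter-based clock, assuming SharedArrayBuffer and workers are available (doSomethingInteresting is a hypothetical stand-in for whatever is being timed):

    // A worker spins, incrementing a shared counter as fast as it can;
    // the main thread reads the counter before and after an operation
    // and uses the delta as a high-resolution timestamp.
    const sab = new SharedArrayBuffer(4);
    const counter = new Int32Array(sab);

    const workerSrc = `
        onmessage = (e) => {
            const c = new Int32Array(e.data);
            for (;;) Atomics.add(c, 0, 1);  // spin forever
        };
    `;
    const worker = new Worker(
        URL.createObjectURL(new Blob([workerSrc], { type: 'text/javascript' }))
    );
    worker.postMessage(sab);

    // "Time" an operation in counter ticks rather than milliseconds:
    const start = Atomics.load(counter, 0);
    doSomethingInteresting();  // hypothetical operation being timed
    const elapsedTicks = Atomics.load(counter, 0) - start;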

West: Post-Spectre web development

Posted Feb 28, 2021 3:10 UTC (Sun) by excors (subscriber, #95769) [Link]

> You can create a high-res timer by running a thread that increments a variable and observing it from another thread.

That's why browsers disabled SharedArrayBuffer (the main API for sharing memory between JS threads (workers)) along with the high-res timer APIs when Spectre came out. They stayed disabled until browsers could implement mitigations: running each site in a separate process (when enabled with certain HTTP headers) and relying on address-space isolation to prevent Spectre-like attacks from stealing data from other sites. So now SharedArrayBuffer and high-res timers can be used again, but only by sites that opt in to running in isolated processes.
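
Concretely, the opt-in is the pair of "cross-origin isolation" response headers, and a script can feature-check the resulting crossOriginIsolated flag before touching the gated APIs:

    // The page must be served with these two headers to become
    // cross-origin isolated (and thus get its own process):
    //
    //   Cross-Origin-Opener-Policy: same-origin
    //   Cross-Origin-Embedder-Policy: require-corp
    //
    // Scripts can then check the flag before using the gated APIs:
    if (self.crossOriginIsolated) {
        const sab = new SharedArrayBuffer(1024);  // allowed here
    } else {
        console.log('not cross-origin isolated; SharedArrayBuffer unavailable');
    }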

West: Post-Spectre web development

Posted Mar 1, 2021 14:38 UTC (Mon) by mw_skieske (guest, #144003) [Link]

Related, maybe worth an article of its own.

Spectre exploits running on Linux found in the wild:

https://dustri.org/b/spectre-exploits-in-the-wild.html

Apparently this has been for sale since 2018, as suggested by this Twitter thread and this HN discussion:

https://twitter.com/immunityinc/status/959155986098421760

https://news.ycombinator.com/item?id=26301326


Copyright © 2021, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds