
They’re ba-ack: Browser-sniffing ghosts return to haunt Chrome, IE, Firefox (Ars Technica)

Ars Technica looks at the revival of a technique that lets remote sites determine browser history. Originally, sites could use JavaScript and CSS to probe browsing history, but browser makers eventually closed those holes. Now, a timing attack [PDF] on the browser can distinguish between sites that have been visited and those that have not. "The browser timing attack technique [Aäron] Thijs borrowed from fellow researcher [Paul] Stone abuses a programming interface known as requestAnimationFrame, which is designed to make animations smoother. It can be used to time the browser's rendering, which is the time it takes for the browser to display a given webpage. By measuring variations in the time it takes links to be displayed, attackers can infer if a particular website has been visited."
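For illustration, here is a minimal JavaScript sketch of such a frame-timing loop. Everything in it is hypothetical (the function name, the per-frame restyling trick, and how the timings would be interpreted); Stone's actual probe, described in the paper linked above, is considerably more involved:

    // Sample per-frame durations with requestAnimationFrame while a
    // link element is forced to repaint each frame. Consistently
    // longer frames for one URL than for a known-unvisited baseline
    // would hint that :visited restyling is happening.
    function timeRedraw(url, frames, done) {
        var link = document.createElement('a');
        link.href = url;
        link.textContent = url;
        document.body.appendChild(link);

        var samples = [];
        var last = null;
        function step(now) {
            if (last !== null) samples.push(now - last);
            last = now;
            // Toggle a style each frame to force a repaint.
            link.style.color = (samples.length % 2) ? '#000001' : '#000002';
            if (samples.length < frames) {
                requestAnimationFrame(step);
            } else {
                document.body.removeChild(link);
                done(samples);
            }
        }
        requestAnimationFrame(step);
    }

    // Usage: compare the average frame time for a target URL against
    // a baseline URL that is certainly unvisited.
    timeRedraw('http://example.com/', 30, function (samples) {
        var avg = samples.reduce(function (a, b) { return a + b; }, 0) / samples.length;
        console.log('average frame time: ' + avg.toFixed(2) + ' ms');
    });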


They’re ba-ack: Browser-sniffing ghosts return to haunt Chrome, IE, Firefox (Ars Technica)

Posted Jun 6, 2014 0:21 UTC (Fri) by josh (subscriber, #17465) [Link]

Also possible via social engineering (easily hidden in a game or site navigation): http://tinsnail.neocities.org/

Pretty much impossible to eliminate covert channels

Posted Jun 6, 2014 15:20 UTC (Fri) by dskoll (subscriber, #1630) [Link] (5 responses)

A web browser is such a complicated beast and the Javascript API so huge that I doubt we'll ever be able to eliminate this sort of thing. Best simply to recognize the problem and, if you are very concerned, run a separate browser instance in a separate virtual machine for each web site you visit. :(

Pretty much impossible to eliminate covert channels

Posted Jun 6, 2014 19:27 UTC (Fri) by mathstuf (subscriber, #69389) [Link] (2 responses)

<plug type="shameless">uzbl[1] stores history externally, so the only "purple" links are those visited in that instance. Close the instance and history is saved, but not accessible to any JavaScript code.</plug>

[1]http://uzbl.org

Pretty much impossible to eliminate covert channels

Posted Jun 6, 2014 20:21 UTC (Fri) by dskoll (subscriber, #1630) [Link] (1 response)

I do not believe uzbl is immune to timing attacks, though.

Pretty much impossible to eliminate covert channels

Posted Jun 6, 2014 20:50 UTC (Fri) by mathstuf (subscriber, #69389) [Link]

I wouldn't say that it is immune, but you can mitigate them. If you set `cache_model` to `document_viewer`, WebKit should do no caching whatsoever behind your back (uzbl does none of its own).

In any case, you can slow *everything* down by setting request_handler to "request NO_HANDLER", which will pause everything for 1 second while the request times out. Unfortunately, it runs in the GUI thread (with WebKit1, which is all uzbl seriously works with right now), so every request will cause a 1-second stall (though changing the time is easy in requests.c).
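A hedged sketch of what those two settings might look like in an uzbl config file (assuming the usual `set name = value` config syntax; the values are the ones named above):

    # Tell WebKit to do no caching at all behind your back.
    set cache_model = document_viewer

    # Route every request through a handler that never answers, so each
    # one stalls for the ~1 second timeout (tunable in requests.c).
    set request_handler = request NO_HANDLER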

Incognito windows are your friends

Posted Jun 8, 2014 14:59 UTC (Sun) by man_ls (guest, #15091) [Link]

Isn't it enough to open a new private browser window? ("incognito" window in Chrome terminology, "pr0n mode" for the common person). I have checked, and links visited elsewhere are shown as not visited in the private window. There are probably many covert channels, but this is a simple, efficient solution.

I am getting accustomed to opening incognito windows for many things to avoid being tracked. YouTube is much more enjoyable if it doesn't have access to your previous history. Surely there are multiple ways around it, such as IP addresses combined with browser signatures, but for now websites seem to respect the lack of cookies.

Pretty much impossible to eliminate covert channels

Posted Jun 13, 2014 7:55 UTC (Fri) by malor (guest, #2973) [Link]

What I do is disable caching in my browser completely, set all cookies to session-only except for a very few sites that are allowed to set permanent ones (like LWN), and disable all saving of history. About the only data that persists across sessions is bookmarks and those few allowed cookies.
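As a rough illustration, in Firefox these settings roughly correspond to the following about:config preferences, which could go in a user.js file (a hedged sketch; double-check the pref names against your browser version):

    // Cookies: 2 = accept, but expire at end of session.
    user_pref("network.cookie.lifetimePolicy", 2);
    // Disable disk and memory caching completely.
    user_pref("browser.cache.disk.enable", false);
    user_pref("browser.cache.memory.enable", false);
    // Do not record browsing history at all.
    user_pref("places.history.enabled", false);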

AFAIK, this combination of settings would prevent this attack from working, unless the other pages it was trying to test happened to be currently loaded in another window or tab. In that case, I think they'd probably get a hit.

So, it could still leak some information, but not very much. And, honestly, I hardly notice any difference as I'm browsing; because cookies are set to be session-only, they are accepted, and sites work, but as soon as the browser closes, they all vanish, and then I'm a brand-new, first-time visitor the next time I hit the site, even if it's five minutes later.

Combined with AdBlock and NoScript, it's not perfect, but it's pretty good.

Oh, I disable Flash cookies as well... and have the Flash plugin set to only run when I activate it. This is above and beyond the blocking that NoScript already does, so I end up having to click twice to run Flash, and then it still can't store anything permanently.

I'm sure I can still be identified with enough work, but I suspect it would take targeted effort. I believe this setup will evade most automated tracking tools.

They’re ba-ack: Browser-sniffing ghosts return to haunt Chrome, IE, Firefox (Ars Technica)

Posted Jun 6, 2014 19:30 UTC (Fri) by intgr (subscriber, #39733) [Link] (2 responses)

Back when Firefox released a fix for this issue, they specifically attempted to address timing attacks as well. I guess the approach didn't help as much as they hoped: https://blog.mozilla.org/security/2010/03/31/plugging-the...

> Next, we are changing some of the guts of our layout engine to provide a fairly uniform flow of execution to minimize differences in layout time for visited and unvisited links

They’re ba-ack: Browser-sniffing ghosts return to haunt Chrome, IE, Firefox (Ars Technica)

Posted Jun 6, 2014 22:25 UTC (Fri) by rahvin (guest, #16953) [Link] (1 response)

I thought this was tied to DNS lookups: if the browser finds the domain quickly, it's in your DNS cache. That's outside the browser, so it wouldn't matter what Mozilla did unless they started adding delays to DNS lookups, which would slow everything down. There has to be a way to take all input and output, sandbox it, and randomize all of it to the JavaScript engine (but not the user) so that these timing attacks aren't of any use in identifying anything. The key is to slow down what the JavaScript sees without impacting the user, and that sounds very, very hard. JavaScript is a plague on the internet, IMO.
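To illustrate that hypothesis, here is a minimal and entirely hypothetical JavaScript sketch of timing a cross-origin fetch to guess at DNS cache state; the probe path and the threshold are made up for illustration:

    // Time how long a probe resource takes to come back. Both onload
    // and onerror fire once the connection attempt finishes, so the
    // delta roughly includes DNS resolution time; an unusually fast
    // result may mean the hostname was already in a DNS cache.
    function probeDns(host, callback) {
        var img = new Image();
        var start = Date.now();
        img.onload = img.onerror = function () {
            callback(host, Date.now() - start);
        };
        // Cache-busting query string so only DNS state is reused.
        img.src = 'http://' + host + '/favicon.ico?' + Math.random();
    }

    probeDns('example.com', function (host, ms) {
        console.log(host + ' answered in ' + ms + ' ms' +
                    (ms < 50 ? ' (possibly cached)' : ''));
    });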

They’re ba-ack: Browser-sniffing ghosts return to haunt Chrome, IE, Firefox (Ars Technica)

Posted Jun 7, 2014 2:40 UTC (Sat) by wahern (subscriber, #37304) [Link]

It would be made substantially easier if JavaScript didn't have access to a high-resolution clock. For example, only allow JavaScript to see times down to the minute. And/or make the clock non-monotonic, so that time could go backwards by several seconds between queries. You'd also need to insert random delays into HTTP requests that weren't directly triggered by user input, so the code couldn't query an external clock.
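A sketch of the coarsening idea (purely illustrative, and not a real defense: a page could still rebuild a clock from events, network round-trips, or other side channels this doesn't cover):

    // Round Date.now() and performance.now() to one-minute granularity
    // and add a random skew, so successive reads can even go backwards.
    (function () {
        var MINUTE = 60 * 1000;
        function coarsen(ms) {
            var skew = (Math.random() - 0.5) * 10000; // +/- 5 s
            return Math.floor((ms + skew) / MINUTE) * MINUTE;
        }
        var realDateNow = Date.now;
        Date.now = function () { return coarsen(realDateNow()); };
        if (window.performance && performance.now) {
            var realPerfNow = performance.now.bind(performance);
            performance.now = function () { return coarsen(realPerfNow()); };
        }
    })();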


Copyright © 2014, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds