Gathering session cookies with Firesheep
Posted Nov 4, 2010 3:19 UTC (Thu) by JohnLenz (guest, #42089)Parent article: Gathering session cookies with Firesheep
Since this situation is so common, it would be nice if browsers would implement some sort of optional HMAC-SHA1 digest you could use on cookies. So over an SSL connection you could have the server send a cookie (containing the session id) plus a shared secret key. Then every time the browser sends a request, it sends the cookie, a sequence number, and the HMAC digest of the cookie plus sequence. This could go over an unencrypted connection. No replay attack would be possible, because a replayed sequence number has already been used, and the attacker can't create digests for new ones without the key. A single HMAC-SHA1 calculation would also be less work for the server than a full SSL connection.
You can already do this with JavaScript in the browser, but there is no way to fall back.
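A minimal sketch of the proposed scheme in Python's standard library, assuming a shared key established over the initial SSL connection (the key value and message layout here are illustrative, not a specification):

```python
import hmac
import hashlib

SECRET = b"shared-secret-established-over-ssl"  # hypothetical key sent with the cookie

def sign(session_id: str, seq: int) -> str:
    """HMAC-SHA1 digest over the cookie value and sequence number."""
    msg = f"{session_id}:{seq}".encode()
    return hmac.new(SECRET, msg, hashlib.sha1).hexdigest()

# Server side: remember the highest sequence number already seen.
seen_seq = -1

def verify(session_id: str, seq: int, digest: str) -> bool:
    """Reject replays (stale sequence numbers) and forged digests."""
    global seen_seq
    if seq <= seen_seq:
        return False  # replay: this sequence number was already used
    if not hmac.compare_digest(sign(session_id, seq), digest):
        return False  # attacker cannot produce a valid digest without the key
    seen_seq = seq
    return True

d = sign("session-42", 1)
assert verify("session-42", 1, d)       # first use succeeds
assert not verify("session-42", 1, d)   # replaying the same digest fails
```

An eavesdropper who captures the cookie and digest still cannot reuse them, because the server will never accept that sequence number again.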
Posted Nov 4, 2010 4:28 UTC (Thu)
by quotemstr (subscriber, #45331)
[Link] (11 responses)
It's really amazing to watch people jump through intellectual hoops to justify not protecting their users with SSL.
Posted Nov 4, 2010 13:47 UTC (Thu)
by robert_s (subscriber, #42402)
[Link] (10 responses)
Jump through intellectual hoops?
How about not being able to use virtualhosts with HTTPS? A huge number (the vast majority I would say) of sites on the web use virtualhosts. I wonder how quickly IPv4 would be exhausted if we all started using HTTPS and needed individual IPs for our websites.
On top of that, once we start using HTTPS, most of our lovely tiered caching mechanisms become unusable. All requests will have to be served fully.
There are plenty of real problems with switching everything to HTTPS, intellectual hoops are not needed.
I was also thinking of something along the lines of JohnLenz. It would probably need a browser HTTP extension so the contents of the full request could be signed against a timestamp of some sort rather than using a sequentially-shifting key.
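The timestamp variant suggested here might look roughly like the following sketch, where the key, the validity window, and the message layout are all assumptions for illustration: the server accepts a signed request only within a short freshness window, and remembers recent digests to block replays inside that window.

```python
import hashlib
import hmac
import time

SECRET = b"key-from-initial-ssl-handshake"  # hypothetical shared key
WINDOW = 10  # seconds of acceptable skew / validity

def sign_request(method: str, path: str, body: bytes, ts: int) -> str:
    """MAC over the full request contents plus a timestamp."""
    msg = b"\n".join([method.encode(), path.encode(), body, str(ts).encode()])
    return hmac.new(SECRET, msg, hashlib.sha1).hexdigest()

recent = set()  # digests seen within the window, to block replays

def verify_request(method, path, body, ts, digest, now=None):
    now = int(time.time()) if now is None else now
    if abs(now - ts) > WINDOW:
        return False  # stale or future-dated request
    if digest in recent:
        return False  # replay within the validity window
    if not hmac.compare_digest(sign_request(method, path, body, ts), digest):
        return False  # tampered or forged request
    recent.add(digest)
    return True
```

Unlike a strict sequence counter, this allows concurrent requests, at the cost of the server tracking recently seen digests for the length of the window.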
Posted Nov 4, 2010 16:33 UTC (Thu)
by quotemstr (subscriber, #45331)
[Link] (9 responses)
Clients cache responses served over SSL just fine. Gateway machines can terminate SSL and pass plain HTTP to a reverse proxy or load balancer. CDNs also support SSL these days.
You are jumping through intellectual hoops to justify your hostility toward SSL. A modicum of research would have uncovered these solutions. Continuing to risk user privacy merely to save a few CPU cycles is just unacceptable. Network hardware never gets tired. CPU cycles are cheap. Real people have actual sensitive information crucial to their physical and emotional well-being. I can't believe people prefer the former to the latter.
Posted Nov 4, 2010 19:12 UTC (Thu)
by dlang (guest, #313)
[Link] (8 responses)
the problem is that the browsers don't all support this, so unless you are willing to reject everyone with a bad browser, this doesn't matter.
Posted Nov 4, 2010 19:14 UTC (Thu)
by quotemstr (subscriber, #45331)
[Link] (7 responses)
Posted Nov 4, 2010 19:22 UTC (Thu)
by dlang (guest, #313)
[Link] (5 responses)
Posted Nov 4, 2010 19:26 UTC (Thu)
by quotemstr (subscriber, #45331)
[Link] (4 responses)
Posted Nov 4, 2010 19:27 UTC (Thu)
by dlang (guest, #313)
[Link] (3 responses)
Posted Nov 4, 2010 19:30 UTC (Thu)
by quotemstr (subscriber, #45331)
[Link] (2 responses)
Are you really arguing that supporting a handful of users with ancient browsers is worth sacrificing everyone's privacy?
Posted Nov 4, 2010 23:50 UTC (Thu)
by nteon (subscriber, #53899)
[Link] (1 responses)
Posted Nov 9, 2010 14:36 UTC (Tue)
by holstein (guest, #6122)
[Link]
And Linux usually runs very well on these ancient machines ;)
Posted Nov 5, 2010 9:03 UTC (Fri)
by ekj (guest, #1524)
[Link]
A solution which is unavailable on ~25% of all webservers, and which fails to work for ~10% of all users, is not currently viable.
It seems likely this problem will go away in the future, but at the moment it's a real problem. Five years from now, I expect SNI will be pretty universally supported. It'll allow shared-IP web hosts to offer HTTPS after all, and that's pretty major progress.
Posted Nov 4, 2010 9:51 UTC (Thu)
by oseemann (guest, #6687)
[Link] (1 responses)
That is, when static file requests, Ajax requests, or multiple requests in separate tabs all use the same sequence number, only the first one would succeed; all others would fail because the sequence number has already been used. For each request, the browser would have to wait for the next sequence number + hash to arrive before starting the next request. That's just not feasible.
Accepting a list/range of sequence numbers or giving each one a specific validity period (i.e. 10s) could remedy that problem, but would also open the window for attackers again.
Granted, static files may be excluded from the requirement, but with the ubiquity of ajax these days and users' habit of opening several links in background tabs, this is not an acceptable workaround.
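The "list/range of sequence numbers" remedy could be sketched as a sliding acceptance window, similar in spirit to the anti-replay windows used by IPsec. This is only an illustration; the window size and class are made up:

```python
class SeqWindow:
    """Accept sequence numbers that arrive out of order within a fixed-size
    window, while still rejecting duplicates."""

    def __init__(self, size: int = 64):
        self.size = size
        self.highest = -1
        self.seen = set()

    def accept(self, seq: int) -> bool:
        if seq <= self.highest - self.size:
            return False                     # too old: fell out of the window
        if seq in self.seen:
            return False                     # duplicate: replay attempt
        self.seen.add(seq)
        if seq > self.highest:
            self.highest = seq
            # prune entries that just fell out of the window
            self.seen = {s for s in self.seen if s > self.highest - self.size}
        return True

w = SeqWindow(size=4)
assert w.accept(5)          # first request
assert w.accept(3)          # concurrent tab: out of order, but within window
assert not w.accept(3)      # replayed request is rejected
```

This lets concurrent tabs and Ajax requests proceed, but as noted above, any widening of what the server accepts also widens the attacker's opportunity: a captured request can be replayed until it falls out of the window.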
Posted Nov 5, 2010 4:01 UTC (Fri)
by jzbiciak (guest, #5246)
[Link]
Posted Nov 4, 2010 19:42 UTC (Thu)
by Simetrical (guest, #53439)
[Link]
To avoid this, you'd have to MAC the whole contents of the request. But HTTP proxies tend to rewrite the contents of non-secure requests, so your MACs will break and stuff will fail randomly. The only way around it is, yep, encrypt the request. Integrity without encryption fails if you have proxies that expect to be able to meddle with requests.
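The failure mode described here is easy to demonstrate: sign the raw request bytes, let a proxy rewrite a single header in transit, and the MAC no longer verifies. The key and request below are made up for illustration.

```python
import hashlib
import hmac

SECRET = b"session-key"  # hypothetical shared key

def mac(request: bytes) -> str:
    return hmac.new(SECRET, request, hashlib.sha1).hexdigest()

original = (b"GET /inbox HTTP/1.1\r\n"
            b"Host: example.com\r\n"
            b"Connection: keep-alive\r\n\r\n")
digest = mac(original)

# A transparent proxy "helpfully" rewrites one header in transit...
proxied = original.replace(b"keep-alive", b"close")

assert hmac.compare_digest(mac(original), digest)      # untouched request verifies
assert not hmac.compare_digest(mac(proxied), digest)   # rewritten request fails
```

The server cannot tell a meddling-but-benign proxy from an attacker, so every such rewrite looks like tampering and the request has to be rejected.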
There are admittedly some practical reasons not to use TLS for everything right now, but they're not prohibitive -- look at Gmail or typical bank websites -- and they're diminishing with time.
> How about not being able to use virtualhosts with HTTPS? A huge number (the vast majority I would say) of sites on the web use virtualhosts. I wonder how quickly IPv4 would be exhausted if we all started using HTTPS and needed individual IPs for our websites.
Server Name Indication.
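For reference, SNI is simply the client naming the virtual host inside the TLS ClientHello, so one IP address can serve certificates for many HTTPS sites. A small sketch of where it lives in Python's ssl module (the hostname is hypothetical, and `wrap_bio` is used so no network connection is needed):

```python
import ssl

context = ssl.create_default_context()

# wrap_bio creates the TLS object without a live socket; server_hostname
# is the SNI value that will be carried in the ClientHello, letting the
# server pick the matching certificate before the handshake completes.
incoming, outgoing = ssl.MemoryBIO(), ssl.MemoryBIO()
tls = context.wrap_bio(incoming, outgoing, server_hostname="alice.example")
print(tls.server_hostname)
```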
> On top of that, once we start using HTTPS, most of our lovely tiered caching mechanisms become unusable. All requests will have to be served fully.