
Gathering session cookies with Firesheep

By Jake Edge
November 3, 2010

The recent release of Firesheep, a Firefox extension that captures others' cookies on open WiFi networks, has set off something of a firestorm. The particular hole that Firesheep exploits is nothing new; we looked at an EFF-sponsored workaround for the problem back in July. But the particulars of the Firesheep implementation are fairly eye-opening. It would seem that Firesheep developer Eric Butler was wildly successful in doing what he set out to do: increase the visibility of insecure session cookie handling by major web sites.

It is fairly standard for web sites to protect their login screens by using HTTPS (i.e. SSL/TLS encrypted connections) so that usernames and passwords cannot be intercepted. But once the login has been completed, a session is created, and sites typically hand out a cookie—a (hopefully) opaque value that the server can use to associate a request with a particular session (i.e. user). Each time the user's browser sends a request to the site, it also sends any cookies that have been set by that site. Those cookies are valid for a server-selectable period of time, and while they are valid, they can be used by anyone to appear to the server as the user who logged in. The problem is that the cookies are often transmitted via unencrypted HTTP.
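To make the exposure concrete, here is a minimal Python sketch of the raw request a browser sends once a session cookie has been set; the host name and cookie value are made up. Over plain HTTP, every byte of this crosses the wire in cleartext, session id included.

```python
# Sketch: what a sniffer on an open network actually sees. Once a site
# sets a session cookie, the browser attaches it to every request to
# that site; over plain HTTP the cookie travels unencrypted.
# The host and cookie values below are hypothetical.

def build_request(host: str, path: str, cookies: dict) -> bytes:
    """Assemble the raw HTTP/1.1 request a browser would send."""
    cookie_header = "; ".join(f"{k}={v}" for k, v in cookies.items())
    return (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        f"Cookie: {cookie_header}\r\n"
        "\r\n"
    ).encode()

wire_bytes = build_request("example.com", "/home", {"sessionid": "8f3a9c"})
# Anyone sniffing the WiFi sees the session id verbatim:
print(wire_bytes.decode())
```

Capturing that one header value is all Firesheep needs to impersonate the session.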

So Firesheep, which was released at ToorCon 12 on October 24, can intercept these cookie values for various high-profile web sites (e.g. Facebook, Twitter, Amazon, Google, GitHub, and so on). It does the cookie interception by sniffing the network traffic on open WiFi networks and, once it has the cookies, it offers the user the ability to connect to those services using the captured values. So someone sitting in a coffee shop can run Firesheep and potentially access Facebook as some other unsuspecting customer.

The ability to do a one-click takeover of someone's account is clearly Firesheep's most controversial feature. But it certainly serves the purpose of alerting the public to this particular problem. Packaging the program as a Firefox extension is also a clever touch. There is no reason that Firesheep couldn't be a standalone program, but making it available in the browser eases installation so that it can get into the hands of more (ab)users.

Butler's intent is to shame (or scare) web site operators into switching to HTTPS. It is the same end goal that the EFF had with its HTTPS Everywhere Firefox extension, but Firesheep definitely grabbed a lot more attention than the EFF's tool did. HTTPS Everywhere uses rules to rewrite http:// URLs to https:// URLs, which is useful—but not particularly striking, at least to casual users and the press.

People have expressed ethical concerns about the release of Firesheep, but like many security-oriented tools, it can be used for good or ill. There are also reports that Microsoft's anti-virus software is marking Firesheep as a threat. This firestorm has caused Butler to strongly defend Firesheep and its release:

In addition to questioning Firesheep's legality, some people have questioned the ethics of its release. Similar tools have existed for years, so big companies, especially Facebook and Twitter, cannot claim they are unaware of these issues. They have knowingly placed user privacy on the back burner, and I'd be interested to hear some discussion about the ethics of these decisions, which have left users at risk since long before Firesheep.

Web sites can fix the problem by converting over to HTTPS and marking their session cookies as HTTPS-only, but it's not quite as simple as just flipping a switch. HTTPS will definitely require more server resources to encrypt and decrypt all of its traffic, but there are other potential problem areas as well. Various internal links in existing content may need to be converted or handled by the web server rewrite engine, and there is a class of content that web site operators may not have any control over: advertisements.
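The cookie half of that fix can be sketched with nothing but Python's standard library; the cookie name and value here are invented. Marking the session cookie "Secure" tells the browser never to send it over plain HTTP, and HttpOnly keeps page scripts away from it as well.

```python
# Hedged sketch of marking a session cookie HTTPS-only with the stdlib.
# The cookie name and value are illustrative.
from http.cookies import SimpleCookie

jar = SimpleCookie()
jar["sessionid"] = "8f3a9c"
jar["sessionid"]["secure"] = True    # browser sends it over HTTPS only
jar["sessionid"]["httponly"] = True  # not readable from JavaScript

# The Set-Cookie header value the server would emit:
print(jar["sessionid"].OutputString())
```

Of course, this only helps once every page that needs the cookie is itself served over HTTPS.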

Ad networks run by Google and others often do not offer HTTPS for serving ads. That results in a warning from many web browsers because there is insecure (i.e. HTTP) content in an HTTPS page. The last thing many web site operators want is for their new users to be greeted with a scary warning about the site.

We have been running some experiments here at LWN and plan to have HTTPS-only cookies soon, though we haven't quite figured out how to handle the Google ad problem. It is really something we (and lots of other sites) should have done a long time ago. Thanks to Firesheep, there are now even more compelling reasons to make that switch happen.



Gathering session cookies with Firesheep

Posted Nov 4, 2010 1:37 UTC (Thu) by NicDumZ (subscriber, #65935) [Link]

> Web sites can fix the problem by converting over to HTTPS and marking their session cookies as HTTPS-only, but it's not quite as simple as just flipping a switch. HTTPS will definitely require more server resources to encrypt and decrypt all of its traffic, but there are other potential problem areas as well.

Server resources or not, I believe that limited HTTPS support has so far mostly been about how little incentive most users had to use HTTPS portals. HTTPS Everywhere has been around for some time, but not many non-tech users were using it.

secure.wikimedia.org for instance is definitely slower than *.wikipedia.org. HTTPS Everywhere users might have noticed that. There are probably various legit reasons for this (hello wikitech-l!), but until now improving response time on HTTPS portals was probably not the top priority of availability/reliability teams.

Gathering session cookies with Firesheep

Posted Nov 4, 2010 3:19 UTC (Thu) by JohnLenz (subscriber, #42089) [Link]

Since this situation is so common, it would be nice if browsers implemented some sort of optional HMAC-SHA1 digest you could use on cookies. Over an SSL connection, the server would send a cookie (containing the session id) plus a shared secret key. Then, every time the browser sends a request, it sends the cookie, a sequence number, and the HMAC digest of the cookie plus the sequence number. That could be sent over an unencrypted connection. No replay attack would be possible, because the sequence number has already been used and the attacker can't create new digests. A single HMAC-SHA1 calculation would be less work for the server than a full SSL connection.
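A rough Python sketch of the scheme described above; the names, the secret, and the verification policy are illustrative, not a real protocol.

```python
# Sketch of the commenter's idea: the server issues a session id plus a
# shared secret once, over SSL; afterwards each request carries the
# session id, an increasing sequence number, and an HMAC-SHA1 of both,
# so a sniffer captures nothing replayable. All values are made up.
import hashlib
import hmac

SECRET = b"per-session shared secret"  # delivered once, over SSL

def sign(session_id: str, seq: int) -> str:
    msg = f"{session_id}:{seq}".encode()
    return hmac.new(SECRET, msg, hashlib.sha1).hexdigest()

def verify(session_id: str, seq: int, digest: str, last_seq: int) -> bool:
    # Reject replays: the sequence number must advance.
    if seq <= last_seq:
        return False
    expected = sign(session_id, seq)
    return hmac.compare_digest(expected, digest)

d = sign("8f3a9c", 42)
assert verify("8f3a9c", 42, d, last_seq=41)      # fresh request accepted
assert not verify("8f3a9c", 42, d, last_seq=42)  # replayed request rejected
```

The server-side state is just the last sequence number seen per session; forging a digest for a new sequence number requires the shared secret.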

You can already do this with JavaScript in the browser, but there is no way to fall back.

Gathering session cookies with Firesheep

Posted Nov 4, 2010 4:28 UTC (Thu) by quotemstr (subscriber, #45331) [Link]

First of all, the server would need to keep a lot of state to prevent replay attacks. Second, attackers can still sniff connections and obtain a great deal of personal information. It'd be less effort to just bite the bullet and use SSL, which has many advantages besides preventing session hijacking. SSL isn't even all that bad for performance if configured properly.

It's really amazing to watch people jump through intellectual hoops to justify not protecting their users with SSL.

Gathering session cookies with Firesheep

Posted Nov 4, 2010 13:47 UTC (Thu) by robert_s (subscriber, #42402) [Link]

"It's really amazing to watch people jump through intellectual hoops to justify not protecting their users with SSL."

Jump through intellectual hoops?

How about not being able to use virtualhosts with HTTPS? A huge number (the vast majority I would say) of sites on the web use virtualhosts. I wonder how quickly IPv4 would be exhausted if we all started using HTTPS and needed individual IPs for our websites.

On top of that, once we start using HTTPS, most of our lovely tiered caching mechanisms become unusable. All requests will have to be served fully.

There are plenty of real problems with switching everything to HTTPS, intellectual hoops are not needed.

I was also thinking of something along the lines of JohnLenz. It would probably need a browser HTTP extension so the contents of the full request could be signed against a timestamp of some sort rather than using a sequentially-shifting key.

Gathering session cookies with Firesheep

Posted Nov 4, 2010 16:33 UTC (Thu) by quotemstr (subscriber, #45331) [Link]

> How about not being able to use virtualhosts with HTTPS? A huge number (the vast majority I would say) of sites on the web use virtualhosts. I wonder how quickly IPv4 would be exhausted if we all started using HTTPS and needed individual IPs for our websites.

Server Name Indication.

> On top of that, once we start using HTTPS, most of our lovely tiered caching mechanisms become unusable. All requests will have to be served fully.

Clients cache requests served over SSL just fine. Gateway machines can terminate SSL before sending traffic to a reverse proxy or load-balancing it. CDNs also support SSL these days.

You are jumping through intellectual hoops to justify your hostility toward SSL. A modicum of research would have uncovered these solutions. Continuing to risk user privacy merely to save a few CPU cycles is just unacceptable. Network hardware never gets tired. CPU cycles are cheap. Real people have actual sensitive information crucial to their physical and emotional well-being. I can't believe people prefer the former to the latter.

Gathering session cookies with Firesheep

Posted Nov 4, 2010 19:12 UTC (Thu) by dlang (✭ supporter ✭, #313) [Link]

TLS also supports multiple name-based virtual hosts.

The problem is that not all browsers support this, so unless you are willing to reject everyone with a bad browser, it doesn't matter.

Gathering session cookies with Firesheep

Posted Nov 4, 2010 19:14 UTC (Thu) by quotemstr (subscriber, #45331) [Link]

Rejecting IE6 has to happen sooner or later. The other major browsers support SNI.

Gathering session cookies with Firesheep

Posted Nov 4, 2010 19:22 UTC (Thu) by dlang (✭ supporter ✭, #313) [Link]

what about the minor browsers? (and remember to include all the browsers on phones and mobile devices)

Gathering session cookies with Firesheep

Posted Nov 4, 2010 19:26 UTC (Thu) by quotemstr (subscriber, #45331) [Link]

iOS4 and Android also support SNI. Even elinks supports it these days.

Gathering session cookies with Firesheep

Posted Nov 4, 2010 19:27 UTC (Thu) by dlang (✭ supporter ✭, #313) [Link]

iOS and Android are both pretty new platforms

Gathering session cookies with Firesheep

Posted Nov 4, 2010 19:30 UTC (Thu) by quotemstr (subscriber, #45331) [Link]

They're also widely-used; many features don't work with ancient browsers anyway.

Are you really arguing that supporting a handful of users with ancient browsers is worth sacrificing everyone's privacy?

Gathering session cookies with Firesheep

Posted Nov 4, 2010 23:50 UTC (Thu) by nteon (subscriber, #53899) [Link]

According to Wikipedia, no version of IE on Windows XP has SNI support. That's unfortunately a large chunk of the general, internet-browsing public.

Gathering session cookies with Firesheep

Posted Nov 9, 2010 14:36 UTC (Tue) by holstein (subscriber, #6122) [Link]

Well, that could be (another) incentive to move on to something better.

And Linux usually runs very well on these ancient machines ;)

Gathering session cookies with Firesheep

Posted Nov 5, 2010 9:03 UTC (Fri) by ekj (subscriber, #1524) [Link]

Everyone who is on XP and using IE (any version!) lives without SNI. As far as I know (it's been a while since I've used it, so possibly this has been fixed), IIS also fails to support SNI.

A solution which is unavailable on ~25% of all webservers, and which fails to work for ~10% of all users, is not currently viable.

It seems likely this problem will go away in the future. But at the moment, it's a real problem. Five years from now, I expect SNI will be pretty universally supported. It'll allow shared-IP webhosts to offer HTTPS after all, and that's pretty major progress.

Gathering session cookies with Firesheep

Posted Nov 4, 2010 9:51 UTC (Thu) by oseemann (subscriber, #6687) [Link]

This approach would be problematic with concurrent requests.

I.e. when static file requests, ajax requests, or multiple requests in separate tabs all use the same sequence number, only the first one would succeed; all the others would fail because the sequence number had already been used. For each request, the browser would have to wait for the next sequence number + hash to arrive before starting the next one. That's just not feasible.

Accepting a list/range of sequence numbers, or giving each one a specific validity period (e.g. 10s), could remedy that problem, but would also reopen the window for attackers.

Granted, static files could be excluded from the requirement, but with the ubiquity of ajax these days and users' habit of opening several links in background tabs, that is not an acceptable workaround.

Gathering session cookies with Firesheep

Posted Nov 5, 2010 4:01 UTC (Fri) by jzbiciak (✭ supporter ✭, #5246) [Link]

Then perhaps don't require cookies to get static pages, and only use them for the dynamic ones?

Gathering session cookies with Firesheep

Posted Nov 4, 2010 19:42 UTC (Thu) by Simetrical (guest, #53439) [Link]

So the attacker could just intercept the request, save the cookie plus sequence number plus digest, and send their own request with the same metadata. Rewrite a few dozen requests from one page view to do whatever you want, and eat the responses. The user sees the page timing out, figures the site is being slow, and hits refresh, which gets them the authentic page. Yes, it would stop Firesheep, but it wouldn't do anything to stop a real attack.

To avoid this, you'd have to MAC the whole contents of the request. But HTTP proxies tend to rewrite the contents of non-secure requests, so your MACs will break and stuff will fail randomly. The only way around it is, yep, encrypt the request. Integrity without encryption fails if you have proxies that expect to be able to meddle with requests.

There are admittedly some practical reasons not to use TLS for everything right now, but they're not prohibitive -- look at Gmail or typical bank websites -- and they're diminishing with time.

Gathering session cookies with Firesheep

Posted Nov 4, 2010 4:30 UTC (Thu) by brantgurga (subscriber, #22667) [Link]

There is indeed HTTPS available for Google ads; you just have to put https in front of the script that gets loaded. In addition, you have to fiddle with the domain name so that the certificate matches. https://pagead2.googleadservices.com/pagead/show_ads.js is what I came up with.

Gathering session cookies with Firesheep

Posted Nov 4, 2010 10:58 UTC (Thu) by bboissin (subscriber, #29506) [Link]

Can't you just use //pagead2.googleadservices.com/pagead/show_ads.js and have it use the correct scheme (http or https)?
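For what it's worth, that is exactly how scheme-relative ("protocol-relative") references resolve: they inherit the scheme of the referencing page. A quick stdlib check; the base URLs here are arbitrary examples.

```python
# Scheme-relative URLs (starting with //) take their scheme from the
# base page, per the URL resolution rules. Base URLs are examples.
from urllib.parse import urljoin

ad_script = "//pagead2.googleadservices.com/pagead/show_ads.js"

print(urljoin("http://example.com/page", ad_script))
# http://pagead2.googleadservices.com/pagead/show_ads.js
print(urljoin("https://example.com/page", ad_script))
# https://pagead2.googleadservices.com/pagead/show_ads.js
```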

Gathering session cookies with Firesheep

Posted Nov 4, 2010 7:44 UTC (Thu) by ekj (subscriber, #1524) [Link]

HTTPS has three problems that keep adoption low:

1) You need a separate IP address for an HTTPS server. This is a problem because shared-IP hosting is extremely common, even for fairly high-profile sites. Yes, SNI solves that, but it's a recent development (and thus a solution that would mean locking out all older browsers).

2) The situation where self-signed HTTPS causes scary warnings, whereas no-encryption HTTP does NOT. I don't know what the browser makers are smoking, but the practical result is that if I make my site MORE secure, my users get hassled with warnings about me being INSECURE. This is totally batshit crazy. The certificate-signing business is a fraud: lots of money for essentially nothing. Why the hell don't new domain names come with signed wildcard certs for the domain in the first place?

3) The initial handshake means HTTPS is slower (the encryption and decryption are basically irrelevant). It adds several round trips, which is a huge problem, particularly when there are many small requests that can't, or don't, all use keep-alive.

And don't give me "it's cheap". No, it's not cheap. If I were to get valid certs for just the domains I use for various hobby/experimental stuff, it'd be a week or so of full-time work, and costly enough to more than double my hosting costs; by the time you've added in separate IPs for each server, you're talking about multiplying my hosting costs by something like five.

There's a *reason* this ain't happening.

Gathering session cookies with Firesheep

Posted Nov 4, 2010 10:13 UTC (Thu) by osma (subscriber, #6912) [Link]

So true.

One more thing: HTTPS makes front-end caching (aka HTTP acceleration) more difficult. You can't (or at least shouldn't) run a medium-to-high traffic site with dynamic content without a proper front-end cache.

For example, Varnish doesn't support HTTPS connections so you need to put something like Pound or Stunnel in front of it. OK so that's perhaps just a problem with Varnish (apparently Squid can do HTTPS reverse proxying), but still, that's making things more difficult for the webmaster/sysadmin in the current state of affairs.

Gathering session cookies with Firesheep

Posted Nov 4, 2010 12:05 UTC (Thu) by Klavs (subscriber, #10563) [Link]

That would probably just be a matter of sponsoring the work for Varnish :)

Gathering session cookies with Firesheep

Posted Nov 4, 2010 10:28 UTC (Thu) by gerv (subscriber, #3376) [Link]

> The situation where self-signed-https causes scary warnings, whereas no-encryption-http does NOT. I don't know what the browser-makers are smoking, but the practical result is that if I make my site MORE secure, the users get hassled with warnings about me being UNSECURE. This is totally batshit crazy.

Not at all crazy. http://www.gerv.net/security/self-signed-certs/ .

Gerv

Gathering session cookies with Firesheep

Posted Nov 4, 2010 13:55 UTC (Thu) by ekj (subscriber, #1524) [Link]

I'm sorry, but that argument doesn't merely fail to fly, it sinks like neutronium in hot butter. Facts:
  • https with a self-signed cert is MORE secure than no encryption at all.
  • https with a self-signed cert causes warnings that plain http does not.
  • It does not protect against man-in-the-middle, but it DOES protect fully against all passive attacks. A defence that stops *some* attacks, is better than no defence at all.
  • Browsers could if they liked, save any self-signed certs and warn if they ever change -- this would stop man-in-the-middle in all cases, except those cases where your *first* visit happens to hit the mitm. (again: stopping *some* attacks is superior to stopping no attacks.)
  • The practical result of making https cumbersome and expensive to use is that people use plain http instead; this does not in ANY way benefit security.
There's a difference between "false security" and "some security" - encryption that *does* in fact stop all passive eavesdropping does not deserve to be labeled "false", despite the fact that there are *other* attacks it does not stop.
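The "save the cert and warn if it changes" idea above is essentially SSH's trust-on-first-use model. A toy Python sketch of how a browser could remember certificates; the store and the certificate bytes are stand-ins, not a real browser API.

```python
# Trust-on-first-use (TOFU) sketch: remember each host's certificate
# fingerprint on first contact, warn if it ever changes. A real browser
# would persist the store and compare full DER certs; this is a toy.
import hashlib

known_certs = {}  # host -> fingerprint seen on first visit

def fingerprint(der_cert: bytes) -> str:
    return hashlib.sha1(der_cert).hexdigest()

def check_cert(host: str, der_cert: bytes) -> str:
    fp = fingerprint(der_cert)
    if host not in known_certs:
        known_certs[host] = fp  # first visit: trust and remember
        return "first-use: remembered"
    if known_certs[host] == fp:
        return "ok: matches remembered cert"
    return "WARNING: certificate changed (possible MITM)"

cert_a, cert_b = b"cert-bytes-A", b"cert-bytes-B"
assert check_cert("example.com", cert_a).startswith("first-use")
assert check_cert("example.com", cert_a).startswith("ok")
assert check_cert("example.com", cert_b).startswith("WARNING")
```

As the thread goes on to discuss, the hard part is not the mechanism but what the user is supposed to do with the "certificate changed" warning.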

It's a 3-step ladder:

1: No protection. 2: Protection against passive attacks. 3: Protection against active attacks.

There's absolutely no rational reason to not warn in cases 1 and 3, but DO warn in case 2. Yes, I'm aware that some claim there is, but merely claiming it, doesn't make it true.

Gathering session cookies with Firesheep

Posted Nov 4, 2010 14:44 UTC (Thu) by gerv (subscriber, #3376) [Link]

> https with a self-signed cert is MORE secure than no encryption at all.

It depends what you mean by "secure". Does it protect against some attacks (e.g. passive attacks)? Yes. Does it open the user up to some additional attacks (e.g. phishing)? Yes. Because no security measure is taken in isolation - it's associated with a set of code changes, UI changes and behaviour advice. And the interaction patterns associated with "self-signed certs are OK" are intimately tied up with "the cert sometimes changes on you", and that event can be either the sign of an attack, or not. And if users can't differentiate well between the two, they are opened up to new attacks.

> Browsers could if they liked, save any self-signed certs and warn if they ever change -- this would stop man-in-the-middle in all cases, except those cases where your *first* visit happens to hit the mitm. (again: stopping *some* attacks is superior to stopping no attacks.)

I do address that exact point in my article. My question to you: how do you train Joe Public to differentiate between: "This cert has changed!" (you are now being MITMed) and "This cert has changed!" (the server operator changed their cert)? The browser can't tell the difference - the user would need an out-of-band way of verifying the cert fingerprint with the site. And what are the chances of my grandmother doing that?

"Hello, is that Marks and Spencer?"

"Yes, how can I help you?"

"Hello, dear. Well, my browser tells me that I have to telephone you to verify the Shalsum of your certificate."

"Shalsum?"

...

> The practical result of making https cumbersome and expensive to use, is that people use plain http instead, this does not in ANY way benefit security.

The expense of certificates is no longer a factor here. Go get a free cert from StartCom and be happy. And the computational cost is the same for trusted certs and for self-signed certs. So there is very little additional cost. In fact, given how much hassle it is to generate a self-signed cert on e.g. Windows, the CA route is actually less costly in time terms.

Gerv

Gathering session cookies with Firesheep

Posted Nov 4, 2010 15:49 UTC (Thu) by nye (guest, #51576) [Link]

In what way is your argument not equivalent to saying that telnet is more secure than SSH?

There are a couple of points that seem worth commenting on specifically:

>Does it open the user up to some additional attacks (e.g. phishing)? Yes.
Extraordinary claims require extraordinary evidence.

>And the interaction patterns associated with "self-signed certs are OK" are intimately tied up with "the cert sometimes changes on you"

If that's the case, then it's clearly a bug. These situations are entirely orthogonal.

>My question to you: how do you train Joe Public to differentiate between: "This cert has changed!" (you are now being MITMed) and "This cert has changed!" (the server operator changed their cert)?

I don't really care, because even if they can't the worst case scenario is still strictly better than the present situation.

>"Hello, is that Marks and Spencer?"

Presumably M&S can afford a CA-signed certificate should they want one.

>The expense of certificates is no longer a factor here. Go get a free cert from StartCom and be happy.

Allowing these sorts of CAs comes with all the downsides of self-signed certificates, plus the additional problem that the client needs to trust that CA, which can't be guaranteed.

Gathering session cookies with Firesheep

Posted Nov 4, 2010 16:06 UTC (Thu) by gerv (subscriber, #3376) [Link]

I am saying that neither the security of telnet nor the security of SSH (when you don't bother to check the key fingerprints) is sufficient for people to do online banking or shopping. At the moment, "secure UI and no warnings" in a web browser means "safe to do online shopping and banking".

If we switched to an SSH-style key change detection model for web site security, then "secure UI and no warnings" (it is the warnings you want to get rid of, isn't it?) would no longer give that safety.

> If that's the case, then it's clearly a bug. These situations are entirely orthogonal.

Not so. Say you run a website which uses a self-signed cert. Your website gets compromised. You clean it up and start it up again, with a new cert (the old private key is known to the attacker, after all). Or perhaps your cert just expires. Either way, you change the cert. The user gets a warning saying "the cert has changed". What do they conclude? "Someone is trying to MITM me" or "the site admin just updated the cert"? Which would you tell them to conclude? How would you tell them to decide?

In the CA model, as long as both certs were signed by a trusted CA, the user gets no warning in the safe case, and a warning in the unsafe case (because the attacker can't get a CA-signed cert for the domain).

> I don't really care,

If you don't really care that Joe Public gets MITMed, then I don't think we are working from the same base set of assumptions, and it's unlikely this discussion will be fruitful. For me, I'm trying to stop Joe getting MITMed, as are the rest of the Mozilla security team. If you don't care, then we clearly have different security goals. Feel free to create your own browser which meets them.

Gerv

Gathering session cookies with Firesheep

Posted Nov 4, 2010 18:11 UTC (Thu) by paulj (subscriber, #341) [Link]

Why not make the UI for self-signed certs equivalent to plain old HTTP? That is, make no UI claims that the page is secure; in fact, don't do anything UI-wise that Joe User could perceive.

Have an option somewhere unobtrusive to allow the (few) users who care to check the details of the cert.

What possible security objection could you have to that?

Gathering session cookies with Firesheep

Posted Nov 4, 2010 20:17 UTC (Thu) by Simetrical (guest, #53439) [Link]

Because then when an attacker stages a MITM attack against your bank's website, you get no notification except the absence of HTTPS UI, which no one will notice. Whereas at present, if your browser starts to connect to an https:// URL, there's no way for anyone to stage a MITM attack without triggering a warning message.

(Of course, if your browser starts by connecting to the http:// URL, say because you just typed the domain name without a protocol into the URL bar, they can still MITM it. This is what Strict-Transport-Security is meant to prevent. STS also makes all cert errors fatal, so you can't MITM a site using it even if you could persuade the user to click through a warning.)
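Strict-Transport-Security is just a response header the site sends over HTTPS; once seen, the browser rewrites future http:// navigations to https:// for that site. A small hedged helper to build it; the one-year max-age is merely a common choice, not a requirement.

```python
# Sketch: building the Strict-Transport-Security header. max-age is in
# seconds; the defaults here (one year, include subdomains) are just
# common choices, not mandated values.

def hsts_header(max_age_days: int = 365, include_subdomains: bool = True) -> str:
    value = f"max-age={max_age_days * 86400}"
    if include_subdomains:
        value += "; includeSubDomains"
    return f"Strict-Transport-Security: {value}"

print(hsts_header())
# Strict-Transport-Security: max-age=31536000; includeSubDomains
```

Note that the header is only honored when received over a valid HTTPS connection; sending it over plain HTTP would be meaningless.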

Gathering session cookies with Firesheep

Posted Nov 5, 2010 5:01 UTC (Fri) by foom (subscriber, #14868) [Link]

It sure would be nice if RFC 2817 were actually a thing. Then http:// URLs could be opportunistically encrypted, without cert checking and without warnings on use of unencrypted or unauthenticated connections.

That would be a pure increase in security, without degrading the security or MITM protection of the https:// url scheme.

Gathering session cookies with Firesheep

Posted Nov 5, 2010 16:45 UTC (Fri) by Simetrical (guest, #53439) [Link]

What attack does this prevent? Using more encryption doesn't help you if it doesn't prevent real-world attacks.

Gathering session cookies with Firesheep

Posted Nov 5, 2010 18:49 UTC (Fri) by foom (subscriber, #14868) [Link]

Uh, it protects against all forms of passive snooping of your network links. That's a huge increase in practical security. Not only is an active attack frequently harder to achieve, it also risks detection by the victims.

If everyone's "insecure" HTTP sessions were encrypted, that would also make widespread untargeted monitoring by e.g. a spy agency less feasible. You'd have to put your sniffer in the middle of things and risk detection (which I'm sure they do sometimes, but it has to be targeted...). Currently, someone could be sniffing the whole internet and nobody would have any way of telling.

Gathering session cookies with Firesheep

Posted Nov 5, 2010 20:40 UTC (Fri) by Simetrical (guest, #53439) [Link]

Granted. I think tcpcrypt.org is a much better way to approach this than Upgrade headers, though.

Gathering session cookies with Firesheep

Posted Nov 5, 2010 7:31 UTC (Fri) by paulj (subscriber, #341) [Link]

A bank won't be using a self-signed cert. So it will be presented with the "this page (attempts) to be secure" UI, including whatever scary warnings are needed if things seem broken. Also, you could show self-signed pages with a broken padlock somewhere, e.g. beside the HTTPS in the URI - or you could /not/ show the "https" part of the URI for self-signed, etc (Chromium replaces the protocol scheme of URIs with icons).

The key point is that self-signed shouldn't be /worse/ to use than no-security, given that self-signed definitely solves /some/ security problems.

Gathering session cookies with Firesheep

Posted Nov 5, 2010 16:48 UTC (Fri) by Simetrical (guest, #53439) [Link]

"A bank won't be using a self-signed cert."

Right . . .

"So it will be presented with the 'this page (attempts) to be secure' UI, including whatever scary warnings are needed if things seem broken."

. . . but now I don't follow you. Say you try to connect to your bank. I intercept the connection during the TLS handshake. The request never reaches the bank, so you never get the bank's certificate. You get my self-signed certificate instead, which appears to come from the bank's website. In this case you clearly want a warning of some type, or else you have no protection against MITMs at all. But how does the browser distinguish between this, and the case where the site's legitimate owners are using a self-signed cert?

Gathering session cookies with Firesheep

Posted Nov 5, 2010 17:47 UTC (Fri) by paulj (subscriber, #341) [Link]

Why are we discussing banks? A good bank is never going to use self-signed certs, so whatever changes we're discussing about behaviour re self-signed certs don't apply to banks...

If you mean "what if the bank did use a self-signed cert, we'd need to warn the user!", this would be equivalent to a bank NOT using any HTTPS at all, yet browsers do NOT warn users when a bank uses only HTTP. Indeed, we can generalise this to say that browsers do not warn users where websites use HTTP for information which should otherwise be sent over a secure (i.e. authenticated + private) channel. This is because there is no practical way to do it. Nor is there any practical way to tell where self-signed certs are being used where better security should have been used.

However, in the one case the browser simply lets users get on with it. In the other case browser implementors have decided to make it hard for ordinary users to use. Worse, given self-signed certs continue to be used, this situation helps inure users to scary browser security warnings - precisely the situation at least some browser implementors say they wish to avoid!

So again, why not just make the self-signed case == HTTP case, more or less?

Gathering session cookies with Firesheep

Posted Nov 5, 2010 19:23 UTC (Fri) by foom (subscriber, #14868) [Link]

You don't want to make https://mybank.com allow self-signed certs without warning, because the "s" on the end means both "try to encrypt" and "I expect this url to be MITM-free". What was a secure bookmark to a MITM-protected url would no longer be MITM-protected at all. That's a decrease in security.

This is what RFC 2817 (not implemented by anyone) would be useful for.

The right thing to do is to leave https:// alone, but to add the ability to encrypt http:// transactions, without requiring that MITM-protection be present. If http:// urls could be automatically encrypted whenever both the client and server support it, that's a pure win. Even more so if all the popular servers were configured to have that work out of the box.

Gathering session cookies with Firesheep

Posted Nov 6, 2010 2:06 UTC (Sat) by paulj (subscriber, #341) [Link]

You're basically restating exactly my point, despite trying to disagree with me. ;)

Gathering session cookies with Firesheep

Posted Nov 5, 2010 20:37 UTC (Fri) by Simetrical (guest, #53439) [Link]

Okay, let me try again in more detail. Let's say you go to https://yourbank.com/ in your browser. yourbank.com has paid for a perfectly good EV SSL certificate, and if you connect successfully, the browser will present no warnings or anything, and will put the company name in the URL bar, highlight it green, etc., etc.

But let's say you're doing this using a free Wi-Fi hotspot, and I happen to have set up that Wi-Fi hotspot with a malicious program on it. Now this malicious program sees your outgoing HTTPS request to some IP address, which it happens to know belongs to a bank.

Instead of passing the HTTPS request on to the actual bank, my program instead acts as a man-in-the-middle. It pretends to be the bank, and proceeds with the SSL handshake as though it were the real bank website. At some point, your browser demands my certificate to prove that I'm not an impostor. Unfortunately for me, I don't have a valid certificate for yourbank.com, because I don't control that domain and so I (hopefully) can't convince a CA to sign my certificate.

However, I can easily make up my own *self-signed* certificate. So I pass that certificate to your browser. Up to this point, I'm acting exactly like the real site would, but now I do something different: the bank provides a CA-signed cert, I provide a self-signed cert.

Currently, all browsers pop up a big warning message: "This might not be the site you think it is!" Hopefully this will scare you away from using the bank's site for now.

But in your scheme, the browser would raise no warning, just act the same as a regular HTTP request. In that case, my attack succeeds. You almost certainly won't notice the lack of HTTPS UI, so I'll get your username and password and promptly withdraw your account's balance in cash.

Now, of course the same attack would be possible if you visited http://yourbank.com/. But you hopefully are not -- that's the point of having different URLs for HTTP and HTTPS. The fact that the URL begins "https://" instead of "http://" means "Do not give me the results of this page unless you can verify that it's authentic." If it means anything less, you're opening up attacks by active MITMs, which are extremely practical if you're running free Wi-Fi, Tor exit nodes, an ISP's router (maybe hacked), etc.

So an https:// page that isn't secure *is* worse than an http:// page that isn't secure. In one case the lack of security is expected, but in the other it's unexpected. If https:// URLs behaved as you described, there would be no way for a URL to encode the information "I don't want to connect unless I'm sure it's the real site."

On the other hand, there is no reason in principle not to use some type of encryption over regular HTTP on port 80, even if you don't do authentication. This costs very little in resources these days, and at least protects against passive snooping. tcpcrypt.org outlines a very interesting approach to this. But such encryption is not enough for connecting to websites that really want to be secure, like banks, and they need to be able to force authentication somehow. Currently the only way they have to do that is with an https:// URL. Hopefully the new Strict-Transport-Security header will allow a better way to distinguish, and then maybe we can relax warnings for self-signed certs.

Am I clear now?

Gathering session cookies with Firesheep

Posted Nov 9, 2010 17:29 UTC (Tue) by nye (guest, #51576) [Link]

>Am I clear now?

This is the only coherent argument of this point that I've ever seen, and I thank you for it.

Gathering session cookies with Firesheep

Posted Nov 11, 2010 2:49 UTC (Thu) by filteredperception (subscriber, #5692) [Link]

>>Am I clear now?

>This is the only coherent argument of this point that I've ever seen, and I thank you for it.

+1. I'm glad my eyes didn't glaze over this long thread and I kept skimming till that explanation. I too required that explanation before I finally 'got it'.

Gathering session cookies with Firesheep

Posted Nov 11, 2010 3:10 UTC (Thu) by filteredperception (subscriber, #5692) [Link]

>>Am I clear now?

>This is the only coherent argument of this point that I've ever seen, and I thank you for it.

Ok, I too value that explanation, because it is the essence of the counterargument against the argument that allowing self-signed certs without warnings would be a net improvement.

But after a couple minutes of hopefully actually grokking this explanation of subsequent potential net-banking mitm attack vectors, this thought occurred to me-

Isn't the only added hurdle to pulling off this attack the need to get a non-self-signed cert? Which, sure, is a bit of a relative pain and cost compared to a self-signed cert, but if you were MITM-attacking people's bank accounts, wouldn't getting a valid (effectively disposable) cert just be a "cost of doing criminal business"? Sure, in the process of getting the cert you have to leave some identity information and use a credit card, but in my estimation of current global security, I tend to imagine that the criminals could do those things effectively anonymously.

And in the unlikely event that both my understanding of the issue and that subsequent analysis are correct, then the question is: which is the bigger net gain for society, the benefits of facilitating easy https encryption with self-signed certs, or the benefits of adding the go-buy-or-steal-a-real-cert hurdle to bank attackers? I think I'd lean towards the former. But odds are I'm still misunderstanding various aspects of this...

Gathering session cookies with Firesheep

Posted Nov 11, 2010 3:57 UTC (Thu) by foom (subscriber, #14868) [Link]

> Isn't the only added hurdle to pulling off this attack the need to get a non-self-signed cert?

You can't get just *any* non-self-signed cert. It has to be a cert valid for the domain name the user is trying to access, signed by one of the certification authorities trusted by the browser.

And that's not a completely trivial thing to do with just a small application of money.

It's only trivial if you happen to run one of the ~500 trusted root or intermediate CAs (e.g. most major governments in the world, and a few companies besides), or have enough money to infiltrate one.

Gathering session cookies with Firesheep

Posted Nov 11, 2010 5:24 UTC (Thu) by dlang (✭ supporter ✭, #313) [Link]

That sort of thing has happened: it's been documented to happen to www.microsoft.com, and there's no reason to believe that it can't happen with a bank as well.

But if you watch out for the cert changing, as opposed to just the cert existing, you cover most of that problem.
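The cert-change check described here is essentially SSH's trust-on-first-use model applied to TLS. A minimal sketch of the bookkeeping, with function and store names of my own invention, purely for illustration:

```python
import hashlib

def check_certificate(host, der_cert, known_hosts):
    """Trust-on-first-use check, in the style of SSH's known_hosts.

    `known_hosts` maps hostname -> hex SHA-256 fingerprint of the
    certificate seen on first contact.  Returns one of:
      "first-use" - no pinned fingerprint yet; pin this one silently
      "match"     - certificate unchanged since first contact
      "CHANGED"   - certificate differs: possible MITM, warn loudly
    """
    fingerprint = hashlib.sha256(der_cert).hexdigest()
    pinned = known_hosts.get(host)
    if pinned is None:
        known_hosts[host] = fingerprint
        return "first-use"
    return "match" if pinned == fingerprint else "CHANGED"
```

A real client would persist the store across sessions and raise its warning only on "CHANGED", not on every self-signed cert, which is exactly the difference from current browser behavior being debated in this thread.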

Gathering session cookies with Firesheep

Posted Nov 11, 2010 5:43 UTC (Thu) by filteredperception (subscriber, #5692) [Link]

> You can't get just *any* non-self-signed cert. It has to be a cert valid for the domain name the user is trying to access, signed by one of the certification authorities trusted by the browser.

duh, OK, I figured I was missing something. Hmmm... Maybe the real issue is that certs cost $$ for no good reason, and that is the central issue impeding much more widespread use of https.

Gathering session cookies with Firesheep

Posted Nov 13, 2010 10:31 UTC (Sat) by gerv (subscriber, #3376) [Link]

Certs don't "cost $$ for no good reason". If all you want is a Domain Verified cert, get one from StartCom for free. And if you want an EV cert, the CA has to do a load of checks (see cabforum.org for the document listing them all) and that costs money, so you should expect to pay. Any CA can sign up to issue them, with the relevant audits, so it's not a closed market and there is competition.

Gerv

Gathering session cookies with Firesheep

Posted Nov 13, 2010 23:40 UTC (Sat) by Simetrical (guest, #53439) [Link]

You're right that if you can get an illegitimate cert, the entire PKI falls apart. However, the cert has to match the domain, and certificate authorities will have their trust revoked by browsers (making their certs useless) if they're found to be giving certs away to people who don't actually control the domains they're for. Typically you have to at least control the e-mail for a domain to be able to get a cert for it. Large governments could probably get hold of illegitimate certs easily enough, but it's quite nontrivial for anyone else. And even for governments, a forged cert is inherently detectable, so any complicit CAs could be eventually found out and get removed from browsers' trusted lists.

This problem will potentially go away in the medium term with DNSSEC. Once sites can deploy certificates through DNSSEC, there's no reason we couldn't also devise a DNS record that says "only accept certificates from DNSSEC, not certificates that claim to be signed by CAs". Then the only way to publish a false certificate for the site would be to compromise their DNS, which gives you many fewer attack vectors than now, when you can compromise (or trick or bully) any one of hundreds of CAs.

There's been discussion about adding a feature like this to Strict-Transport-Security, so you can say "only accept a cert signed by this root CA". Then an attacker has to compromise a *specific* CA to compromise the site instead of being able to compromise *any* CA, making their job much harder.

Gathering session cookies with Firesheep

Posted Nov 14, 2010 11:59 UTC (Sun) by anselm (subscriber, #2796) [Link]

[…] and certificate authorities will have their trust revoked by browsers (making their certs useless) if they're found to be giving certs away to people who don't actually control the domains they're for.

Yeah right. Like this happened to VeriSign in March, 2001.

Gathering session cookies with Firesheep

Posted Nov 14, 2010 12:11 UTC (Sun) by gerv (subscriber, #3376) [Link]

Is it your contention that a single mistake by a CA should mean they are thereafter disqualified from being included in browsers until the end of time?

There's a difference between a mistake (which happens to the best of us) and wilfully ignoring the necessary rules and safeguards, or a history of mistakes which leads to a diagnosis of institutional incompetence. I suggest that Verisign is guilty of neither of the latter two things.

In addition, the certificate(s) in the incident you reference were digital code-signing certificates, not web server certificates. Very occasionally, web server certs do fall into the wrong hands (which can be via hacking and theft as much as misissuance - how many SSL-running web servers do you think were rooted in the past year?) but I'd be impressed if you can show me a single reported incident where a fraudulently-acquired web server cert was used for spoofing.

Gerv

Gathering session cookies with Firesheep

Posted Nov 9, 2010 17:25 UTC (Tue) by nye (guest, #51576) [Link]

>I am saying that neither the security of telnet nor the security of SSH ... is sufficient for people to do online banking or shopping

Nobody claimed it was. Stop making things up.

>Not so. ... In the CA model, as long as both certs were signed by a trusted CA, the user gets no warning in the safe case, and a warning in the unsafe case (because the attacker can't get a CA-signed cert for the domain).

I think I understand what you mean by this now, and it's really the same as the next point.

>If you don't really care that Joe Public gets MITMed

I was very upset to read this, and nearly responded in a very inflammatory manner, but fortunately I gave myself time to cool off.

It appears that you have deliberately and in bad faith removed the important part of that sentence in order to change its meaning entirely. In fact your entire argument against anyone who disagrees with you seems to be based around the use of ridiculous straw men, so I see that there is no point in attempting rational discourse with you.

Gathering session cookies with Firesheep

Posted Nov 9, 2010 17:44 UTC (Tue) by gerv (subscriber, #3376) [Link]

Nobody claimed it was. Stop making things up.

With a small allowance for shorthand, yes they were. People are claiming that the SSH "notify on key-change" model, a.k.a. the self-signed cert model of security, is sufficiently secure to build into web browsers. And if it's built in, Joe Public will be using it, because he's using a tool which supports it. And having a security mode in a consumer product, used for banking or shopping, which does not have sufficient security for those activities is foolish.

I have not "changed the meaning of your sentence entirely". You said you didn't care if Joe Public could tell the difference between two situations, one of which involved them being MITMed, and the other of which involved them not being MITMed. I interpreted this as you not caring whether Joe was MITMed. That does not seem like an unreasonable inference: if you don't care whether he can tell that he's being MITMed, then you must not care whether it happens to him.

I'm sorry you don't rate the quality of my argument. All I can say is that I and a large number of fairly bright people at Mozilla have spent quite a long time thinking about this, and come under regular pressure to make these sorts of changes, with people advocating all sorts of reasons. We have heard and considered all the arguments, pretty much. And the case for making this change in consumer-facing browsers just doesn't stack up.

Gerv

Gathering session cookies with Firesheep

Posted Nov 10, 2010 6:21 UTC (Wed) by ekj (subscriber, #1524) [Link]

The thing is, it's a straw man, because an ordinary https certificate, even one that's signed by a CA, is *also* not considered good enough for a bank; in fact, at least where I live (Norway), I'm fairly sure no bank goes without extended validation or whatever it's named.

That causes significant and clear gui-changes, namely a large green bar stating the name of the institution.

In contrast, https causes a tiny grey padlock to appear in the bottom-right corner, next to the Sync icon, which is pretty close to totally invisible.

Yes, I get the point that https-self-signed has an identical URL to https-with-certificate, and that thus users with bookmarks are at risk. (Few users enter the URL with https:. Joe Public has by this time LONG gotten used to not typing http(s)://; instead, if they enter the address at all, they go for "www.mybank.com". That often redirects to https://www.mybank.com/, but I don't think a large fraction of users would notice if it stopped doing that.)

AND - and that's my most significant point: The question isn't whether a change would cause harm. The question is whether the benefits would outweigh the harm, or vice versa. One practical consequence of the current situation is that self-signed is essentially not usable. And encryption of any sort whatsoever is essentially not available to anyone hosting a simple website on a shared IP address, which means probably 95% of the websites in the world.

The practical result of this decision, despite its being made for reasons of security, is that nearly all websites have no encryption whatsoever.

Gathering session cookies with Firesheep

Posted Nov 10, 2010 7:32 UTC (Wed) by dlang (✭ supporter ✭, #313) [Link]

You would be surprised at how many banks don't use EV certs.

And frankly, I don't blame them. In many ways the EV certs are a way for a cartel of cert providers to charge $1500/cert and cut out the competition that has ruined their prior scam of $900 for a 128-bit cert and $250 for a cert that would drop to 40 bits if an export browser connected to it (not that there are any of those around anyway).

The amount of real checking done is still not that much.

Gathering session cookies with Firesheep

Posted Nov 10, 2010 8:29 UTC (Wed) by anselm (subscriber, #2796) [Link]

There is nothing special about EV certificates except their extortionate price tag and the fact that they have a magic flag set which will cause the site name to show up on a green background in the browser. You don't get to produce your own EV certificates the way you can produce ordinary certificates (e.g., using OpenSSL) because the magic flag is CA-specific, and browsers that support EV certificates contain a hard-coded list of the CAs which are part of the EV certificate cartel and their corresponding magic flags.

Basically, for EV certificates, the CAs that are in on the game promise that they will actually do the sort of checking they should have been doing for all certificates in the first place. That is, somebody applying for an EV certificate for an entity will have to prove that the entity really exists at the specified address. This is then used to justify a vastly increased price for the certificate.

Gathering session cookies with Firesheep

Posted Nov 4, 2010 20:14 UTC (Thu) by Simetrical (guest, #53439) [Link]

"My question to you: how do you train Joe Public to differentiate between: 'This cert has changed!' (you are now being MITMed) and 'This cert has changed!' (the server operator changed their cert)?"

You can't. I know all about how TLS works, but I'd have no idea how to tell whether a cert is actually legitimate. (Look at details, try to find the fingerprints, Google them . . . ?) So you make a best guess. Browsers currently guess that any cert change is due to MITM, and thus throw up scary warning messages and make it difficult to continue. But in fact, as a paper by Microsoft Research observes:

"""
Ironically, one place a user will almost certainly never see a certificate error is on a phishing or malware hosting site. That is, using certificates is almost unknown among the reported phishing sites in PhishTank [7]. The rare cases that employ certificates use valid ones. The same is true of sites that host malicious content. Attackers wisely calculate that it is far better to go without a certificate than risk the warning. In fact, as far as we can determine, there is no evidence of a single user being saved from harm by a certificate error, anywhere, ever.

Thus, to a good approximation, 100% of certificate errors are false positives. Most users will come across certificate errors occasionally. Almost without exception they are the result of legitimate sites that have name mismatches, expired or self-signed certificates.
"""
http://research.microsoft.com/en-us/um/people/cormac/pape...

So if you're going to make an informed guess on the user's behalf, you should guess that the cert is self-signed or mismanaged and not bother the user about it.

Of course, this logic taken on its face would say you should just get rid of TLS entirely, which is wrong. The reason MITM attacks with self-signed certs don't occur is *because* of the warning messages. But the drastic overreaction here by browsers just erodes users' confidence in browser warnings. They need to present the issue more realistically and honestly, keeping in mind that most cert errors are actually innocuous.

(Chrome is particularly egregious here. I've seen it flat-out refuse to let me continue because it thought a cert was expired or revoked or something, but it was just some stupid Microsoft feedback site that I wasn't submitting anything secret to at all, so if it was a MITM I just didn't care. And it makes flat-out wrong claims like "This is probably not the site you are looking for!" Firefox is wordy and tedious, but at least it doesn't outright lie to you. Still, the severity of the warning message even on pages that you can easily guess are legitimate, like <https://amazon.com>, is really unwarranted.)

I think a lot of this debate can be solved by STS. Once all sites that really need TLS are using STS with long expiration times, and browsers come with a prepackaged list of such sites that they update regularly like their lists of malware sites, the UI for non-STS TLS should be relaxed considerably. STS is probably how TLS should have worked to begin with.

Also, serving certs through DNSSEC gives us a chance to make them easier to deploy, and moots the question of self-signing. So I think we can improve the situation a lot here. But browsers' current UI for cert errors still is not at all reasonable.
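For reference, the STS mechanism discussed above boils down to one HTTPS response header plus some browser-side bookkeeping. A rough sketch of both halves, assuming nothing about any real browser's implementation (the function names are mine):

```python
# What an STS-protected site sends on every HTTPS response;
# max-age is in seconds (one year here):
STS_HEADER = ("Strict-Transport-Security",
              "max-age=31536000; includeSubDomains")

def record_sts(store, host, header_value, now):
    """Hypothetical browser-side bookkeeping: remember until when
    `host` has committed to HTTPS-with-a-valid-cert only."""
    max_age = 0
    for part in header_value.split(";"):
        part = part.strip()
        if part.startswith("max-age="):
            max_age = int(part[len("max-age="):])
    store[host] = now + max_age

def must_use_https(store, host, now):
    """True while the host's recorded commitment is still in force;
    within that window, a cert error is a hard failure, not a
    click-through warning."""
    return store.get(host, 0) > now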

Gathering session cookies with Firesheep

Posted Nov 5, 2010 16:18 UTC (Fri) by gerv (subscriber, #3376) [Link]

In a comment on my blog, Cormac Herley rowed back somewhat from the position outlined in those paragraphs you quote.

He wrote: "[That line] was being a little provocative :-) The point I wanted to make is that the user has never seen anything to suggest that the annoyances are there for a purpose. That said, so many of the emails and comments I’ve got have flagged this line that it’s clear I should have worded it better. I completely agree that even 100% false positives doesn’t mean we can get rid of the technology."

If you made the warnings less severe, the problems they are there to prevent would become more common.

Gerv

Gathering session cookies with Firesheep

Posted Nov 5, 2010 17:04 UTC (Fri) by Simetrical (guest, #53439) [Link]

"If you made the warnings less severe, the problems they are there to prevent would become more common."

Correct, but if you make the warnings more severe than warranted, users pay less heed to warnings generally. If the user never received a browser security warning before in their life, the first time will make them think twice. If they've seen them before and wound up going ahead and nothing bad happened, they'll come to ignore them.

Honesty might not always be the best policy, but the current policy is certainly bad. In real life we know that certain types of cert errors are much more likely to be innocuous than others -- like a cert for "www.amazon.com" on "amazon.com", vs. a large banking site using a self-signed cert. A lot of this knowledge could be wired into the browser, and the warnings could be adjusted accordingly. Attackers will realistically target mostly large e-commerce or banking sites, where they can see easy gains, so getting a list of those and stepping up the warnings there while scaling back for others would greatly increase warning accuracy.

I'm hopeful that STS will mostly solve the problem, by giving an out-of-band automated way to get a list of sites that really want to commit to using valid certs always. (Out-of-band because the real value will be when lists ship with the browser and auto-update.) In that case, non-STS sites can have their warnings greatly moderated, ideally notification bars instead of interstitials. But the existing problem is real, and could have been mitigated by the browser implementers by deploying fairly simple heuristics long before now -- when instead some have been making it worse.
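One of the "fairly simple heuristics" proposed here can be made concrete: treat a name mismatch that differs only by a leading "www." as probably innocuous. This is an illustrative sketch only, with names of my own choosing; a real browser would combine many such signals (SANs, wildcards, expired-by-a-few-days, and so on):

```python
def mismatch_is_probably_innocuous(cert_name, requested_host):
    """Return True when a certificate name mismatch is of the
    common misconfiguration kind: the names differ only by a
    leading "www." (e.g. a cert for "www.amazon.com" served on
    "amazon.com")."""
    a, b = cert_name.lower(), requested_host.lower()
    if a == b:
        return True  # no mismatch at all

    def strip_www(h):
        return h[4:] if h.startswith("www.") else h

    return strip_www(a) == strip_www(b)
```

A browser using this could downgrade the www-mismatch case to a quiet notification bar while keeping the full interstitial for genuinely suspicious mismatches.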

Gathering session cookies with Firesheep

Posted Nov 5, 2010 8:52 UTC (Fri) by ekj (subscriber, #1524) [Link]

By "X is more secure than Y" I mean that the set of attacks that work against X, is smaller than the set of attacks that work against Y.

A protocol that is protected against passive eavesdropping, is thus more secure than a protocol which isn't.

Arguing that self-signed https is NOT more secure than plain http is precisely analogous to arguing that ssh is not more secure than telnet.

Running active MITM attacks is actually more difficult, more costly, and more likely to be discovered than merely sniffing plaintext traffic that passes by. Thus defending against passive listening attacks is better than doing nothing at all.

ssh is, in fact, more secure than telnet.

I agree that Joe Public can't be trained to evaluate the danger of a changed certificate - but (and this is a big but) even if he cannot - how does that make him worse off, compared to http?

Yes, true. Joe Public won't notice active man-in-the-middle attacks when the site uses self-signed https. But that is ALSO true of plain http.
The browsers, effectively, claim that "self-signed https is MORE dangerous than plain http"

If we were arguing about which is more secure, externally-signed or self-signed, then we'd agree: externally-signed is better for foiling man-in-the-middle.

But that's not my argument!

My argument is that self-signed-https is superior to plain http. And thus it's insane to put scary warnings on it, which are absent from http.

http is WORST. self-signed https is BETTER. externally-signed https is *BEST*

Gathering session cookies with Firesheep

Posted Nov 8, 2010 17:06 UTC (Mon) by gerv (subscriber, #3376) [Link]

I agree that Joe Public can't be trained to evaluate the danger of a changed certificate - but (and this is a big but) even if he cannot - how does that make him worse off, compared to http?

Because if you make him used to dismissing changed-cert warnings, he'll also dismiss them when it's using CA-based HTTPS. Which makes him a lot worse off, because he'll get MITMed when accessing his bank.

Gerv

Gathering session cookies with Firesheep

Posted Nov 4, 2010 16:07 UTC (Thu) by bfields (subscriber, #19510) [Link]

The ability to do a one-click takeover of someone's account

If password changes require reentering the password under SSL, then it's not a *complete* takeover: the attacker gains the ability to impersonate the owner for the length of a login session, but the account owner is still the only one that knows the password, and can still lock out the attacker by logging out and/or changing the password. Unless I'm missing something....

(It's still a serious breach regardless, of course.)

Password takeover

Posted Nov 4, 2010 16:11 UTC (Thu) by corbet (editor, #1) [Link]

What an attacker could do on a lot of sites is change the email address associated with the account, then request the password (or a reset). That, of course, would be a complete takeover without knowing the original password.

Password takeover

Posted Nov 4, 2010 20:18 UTC (Thu) by Simetrical (guest, #53439) [Link]

What sites allow e-mail reset of passwords but don't require you to re-enter your password to change your e-mail?

Password takeover

Posted Nov 17, 2010 12:58 UTC (Wed) by DonDiego (subscriber, #24141) [Link]

If you capture the insecure session cookie as described in the article, you don't need to enter a password.

Password takeover

Posted Nov 18, 2010 0:52 UTC (Thu) by bfields (subscriber, #19510) [Link]

Try it. Go to facebook, and try to change your email address or your password without re-entering your password. You'll find it doesn't let you, even though you've given it a session cookie. And that's by design....
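The policy described here, where a session cookie suffices for ordinary actions but sensitive ones demand the password again, can be sketched as a simple authorization gate. The names below are hypothetical, not Facebook's actual logic:

```python
# Actions that must not be reachable with a stolen session
# cookie alone (hypothetical set for illustration):
SENSITIVE_ACTIONS = {"change_email", "change_password"}

def authorize(action, session_valid, password_reentered):
    """A session cookie alone is enough for ordinary actions,
    but sensitive ones also demand fresh proof of the password,
    so a sniffed cookie cannot be parlayed into a full account
    takeover."""
    if not session_valid:
        return False
    if action in SENSITIVE_ACTIONS:
        return password_reentered
    return True
```

Under this policy, Firesheep's captured cookie lets an attacker post as the victim but not lock them out, which is exactly the distinction bfields draws above.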

Gathering session cookies with Firesheep

Posted Nov 4, 2010 19:32 UTC (Thu) by Spudd86 (guest, #51683) [Link]

I suppose one workable solution (in some cases, probably good enough for LWN) is to have two session id's for each user, one that's SSL/TLS only one that is available over HTTP, then use the HTTP cookie to present customized content, decide if you're logged in, etc. Then when the user takes some action that needs to be authenticated switch to HTTPS (ie clicks a "Post Comment" button).

Obviously there are issues with this (one of which being that it'd be hard to make sure it really is secure enough and to get the implementation right), but it's probably viable for situations where, for most stuff, there isn't a real risk from a MITM (i.e. a MITM can't really do anything of consequence).

Gathering session cookies with Firesheep

Posted Nov 4, 2010 20:48 UTC (Thu) by corbet (editor, #1) [Link]

I've implemented a simpler variant, have been using it for LWN editor accounts for a little bit now. The authentication cookie is SSL-only, of course, but we also set an insecure "SSL only" cookie. Whenever the site sees that second cookie on a non-SSL connection, the browser is redirected. Seems to work great.
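As a sketch of how such a scheme might look (the cookie names and dispatch logic here are my guesses, not LWN's actual implementation): the real auth cookie is marked Secure, so the browser only ever sends it over SSL, while a second, non-secure marker cookie tags the browser as belonging to an SSL-only account.

```python
def handle_request(cookies, is_ssl):
    """Dispatch a request under the two-cookie scheme.

    "auth" is the session cookie, set with the Secure attribute so
    it never appears on plain-HTTP requests.  "ssl_only" is an
    ordinary cookie that merely marks the browser as holding an
    SSL-only session; seeing it over plain HTTP means the whole
    session should be bounced to HTTPS before any cookie worth
    stealing crosses the wire."""
    if not is_ssl and "ssl_only" in cookies:
        return "redirect-to-https"
    if is_ssl and "auth" in cookies:
        return "serve-authenticated"
    return "serve-anonymous"
```

The key property is that an eavesdropper on the HTTP side sees only the worthless marker cookie, never the authentication cookie Firesheep is after.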

A not-security-geek question

Posted Nov 5, 2010 15:08 UTC (Fri) by rmano (subscriber, #49886) [Link]

I know that maybe this is not the best place to ask, but, as LWN people are so nice, I will...

Is this problem present when you connect to a WiFi network that "seems" to be open, redirects you to a vendor page, and makes you log in to enter (I mean hotels, airports, WiFi spots, etc.)? I do not know how your auth is granted after login, but I suspect it is a simple MAC registration, so should we consider that kind of network dangerous as well?

Thanks...

A not-security-geek question

Posted Nov 5, 2010 22:33 UTC (Fri) by Kwi (subscriber, #59584) [Link]

Yes, as long as the wifi network is unencrypted, it's vulnerable to this attack. The attacker wouldn't even need a login.

Encryption is the only viable defense, whether it's an encrypted wifi (but note that the encryption ends at the access point, leaving you vulnerable to the network owner, and possibly other users, depending on setup), encryption at the application layer (e.g. https), or an encrypted tunnel (e.g. SSH or a full VPN).

A not-security-geek question

Posted Nov 9, 2010 0:23 UTC (Tue) by adisaacs (subscriber, #53996) [Link]

Partly right, partly wrong.

Yes, an unencrypted 802.11 network is trivially sniffable, whether or not it uses "captive portal" logins.

However, an encrypted wifi is not very much better. WEP is completely broken against eavesdropping even without the attacker knowing the passphrase. WPA is effective against eavesdroppers, but (AFAIK) anyone who knows the PSK can still decrypt captured WPA traffic.

There are more sophisticated variants of WPA (labelled "Enterprise" in the jargon of the trade), but they're significantly more difficult to set up and a total non-starter for the coffeeshop/hotel use case. (They generally require a SecurID-style token of some kind.)

And finally, yes -- a VPN, encrypted tunnel, or application-layer encryption system such as HTTPS is more secure.

Why is this LINUX Weekly News?

Posted Nov 11, 2010 17:18 UTC (Thu) by Epicanis (guest, #62805) [Link]

Really a minor nit-pick (since after all it seems like a solid majority of web SERVERS run Linux) but...

Firesheep is not even available for Linux, only for a couple of proprietary operating systems. At least, it still isn't available for Linux as I type this (a week after this LWN Weekly Edition story came out).

Why is this LINUX Weekly News?

Posted Nov 11, 2010 17:23 UTC (Thu) by jake (editor, #205) [Link]

> Firesheep is not even available for Linux.

True, though the developer claims that Linux support is coming. But Linux users have to be just as concerned about Firesheep even though it doesn't (yet) run on Linux. Someone running it on another OS will be able to sniff credentials of Linux users just as easily.

jake

Why is this LINUX Weekly News?

Posted Nov 15, 2010 16:58 UTC (Mon) by vonbrand (subscriber, #4458) [Link]

Never noticed that this is no longer "Linux Weekly News (LWN for short)" but just plain "LWN"?

Copyright © 2010, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds