Gathering session cookies with Firesheep
Posted Nov 5, 2010 7:31 UTC (Fri) by paulj (subscriber, #341)
In reply to: Gathering session cookies with Firesheep by Simetrical
Parent article: Gathering session cookies with Firesheep
The key point is that self-signed shouldn't be /worse/ to use than no-security, given that self-signed definitely solves /some/ security problems.
Posted Nov 5, 2010 16:48 UTC (Fri)
by Simetrical (guest, #53439)
[Link] (14 responses)
Right . . .
"So it will be presented with the 'this page (attempts) to be secure' UI, including whatever scary warnings are needed if things seem broken."
. . . but now I don't follow you. Say you try to connect to your bank. I intercept the connection during the TLS handshake. The request never reaches the bank, so you never get the bank's certificate. You get my self-signed certificate instead, which appears to come from the bank's website. In this case you clearly want a warning of some type, or else you have no protection against MITMs at all. But how does the browser distinguish between this, and the case where the site's legitimate owners are using a self-signed cert?
Posted Nov 5, 2010 17:47 UTC (Fri)
by paulj (subscriber, #341)
[Link] (13 responses)
If you mean "what if the bank itself used a self-signed cert - we'd need to warn the user!": that would be equivalent to a bank not using HTTPS at all, yet browsers do NOT warn users when a bank uses only HTTP. Indeed, we can generalise this: browsers do not warn users when websites use HTTP for information which should otherwise be sent over a secure (i.e. authenticated + private) channel. This is because there is no practical way to do it. Nor is there any practical way to tell where self-signed certs are being used where better security should have been used.
However, in the one case the browser simply lets users get on with it. In the other case, browser implementors have decided to make things hard for ordinary users. Worse, given that self-signed certs continue to be used, this situation helps inure users to scary browser security warnings - precisely the situation at least some browser implementors say they wish to avoid!
So again, why not just make the self-signed case == HTTP case, more or less?
Posted Nov 5, 2010 19:23 UTC (Fri)
by foom (subscriber, #14868)
[Link] (1 responses)
This is what RFC 2817 (not implemented by anyone) would be useful for.
The right thing to do is to leave https:// alone, but to add the ability to encrypt http:// transactions, without requiring that MITM-protection be present. If http:// urls could be automatically encrypted whenever both the client and server support it, that's a pure win. Even more so if all the popular servers were configured to have that work out of the box.
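For the curious, RFC 2817's mechanism is just the standard HTTP Upgrade handshake: a client asks to switch an existing http:// connection to TLS in-band. A request doing so looks roughly like this (the hostname is a placeholder; the header values follow the RFC's own example):

```python
# Sketch of an RFC 2817 upgrade request: the client offers to switch the
# current plaintext HTTP connection to TLS. If the server agrees, it replies
# "101 Switching Protocols" and the TLS handshake begins on the same socket.
request = (
    "OPTIONS * HTTP/1.1\r\n"
    "Host: example.com\r\n"       # placeholder host
    "Upgrade: TLS/1.0\r\n"        # the protocol we want to switch to
    "Connection: Upgrade\r\n"     # mark Upgrade as a hop-by-hop header
    "\r\n"
)
```

Because the URL stays http://, no MITM-protection is promised to the user, which is exactly the property wanted here: opportunistic encryption when both ends support it, silent fallback when they don't.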
Posted Nov 6, 2010 2:06 UTC (Sat)
by paulj (subscriber, #341)
[Link]
Posted Nov 5, 2010 20:37 UTC (Fri)
by Simetrical (guest, #53439)
[Link] (10 responses)
But let's say you're doing this using a free Wi-Fi hotspot, and I happen to have set up that Wi-Fi hotspot with a malicious program on it. Now this malicious program sees your outgoing HTTPS request to some IP address, which it happens to know belongs to a bank.
Instead of passing the HTTPS request on to the actual bank, my program instead acts as a man-in-the-middle. It pretends to be the bank, and proceeds with the SSL handshake as though it were the real bank website. At some point, your browser demands my certificate to prove that I'm not an impostor. Unfortunately for me, I don't have a valid certificate for yourbank.com, because I don't control that domain and so I (hopefully) can't convince a CA to sign my certificate.
However, I can easily make up my own *self-signed* certificate. So I pass that certificate to your browser. Up to this point, I'm acting exactly like the real site would, but now I do something different: the bank provides a CA-signed cert, I provide a self-signed cert.
Currently, all browsers pop up a big warning message: "This might not be the site you think it is!" Hopefully this will scare you away from using the bank's site for now.
But in your scheme, the browser would raise no warning, just act the same as a regular HTTP request. In that case, my attack succeeds. You almost certainly won't notice the lack of HTTPS UI, so I'll get your username and password and promptly withdraw your account's balance in cash.
Now, of course the same attack would be possible if you visited http://yourbank.com/. But you hopefully are not -- that's the point of having different URLs for HTTP and HTTPS. The fact that the URL begins "https://" instead of "http://" means "Do not give me the results of this page unless you can verify that it's authentic." If it means anything less, you're opening up attacks by active MITMs, which are extremely practical if you're running free Wi-Fi, Tor exit nodes, an ISP's router (maybe hacked), etc.
So an https:// page that isn't secure *is* worse than an http:// page that isn't secure. In one case the lack of security is expected, but in the other it's unexpected. If https:// URLs behaved as you described, there would be no way for a URL to encode the information "I don't want to connect unless I'm sure it's the real site."
On the other hand, there is no reason in principle not to use some type of encryption over regular HTTP on port 80, even if you don't do authentication. This costs very little resources these days, and at least protects against passive snooping. tcpcrypt.org outlines a very interesting approach to this. But such encryption is not enough for connecting to websites that really want to be secure, like banks, and they need to be able to force authentication somehow. Currently the only way they have to do that is with an https:// URL. Hopefully the new Strict-Transport-Security header will allow a better way to distinguish, and then maybe we can relax warnings for self-signed certs.
Am I clear now?
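The argument above can be boiled down to a toy policy function (the names here are mine for illustration, not any real browser's API): the scheme in the URL encodes the user's expectation, so an unverifiable cert on an https:// URL must warn, while plain http:// never promised anything to begin with.

```python
def certificate_policy(scheme, cert_status):
    """Toy sketch of the warning policy Simetrical describes.

    scheme: "http" or "https" (what the URL promised the user)
    cert_status: "ca-signed", "self-signed", or None for plain HTTP
    """
    if scheme == "http":
        return "proceed-plain"    # no security expected, so no warning
    if cert_status == "ca-signed":
        return "proceed-secure"   # the https:// promise is kept
    return "warn"                 # https:// promised, but unverifiable:
                                  # indistinguishable from an active MITM
</antml>```

Treating the self-signed https:// case like http:// would collapse the first and third branches, and with them the only way a URL can say "don't connect unless you're sure it's the real site".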
Posted Nov 9, 2010 17:29 UTC (Tue)
by nye (subscriber, #51576)
[Link] (9 responses)
This is the only coherent argument of this point that I've ever seen, and I thank you for it.
Posted Nov 11, 2010 2:49 UTC (Thu)
by filteredperception (guest, #5692)
[Link]
>This is the only coherent argument of this point that I've ever seen, and I thank you for it.
+1. I'm glad my eyes didn't glaze over this long thread and I kept skimming till that explanation. I too required that explanation before I finally 'got it'.
Posted Nov 11, 2010 3:10 UTC (Thu)
by filteredperception (guest, #5692)
[Link] (7 responses)
>This is the only coherent argument of this point that I've ever seen, and I thank you for it.
Ok, I too value that explanation, because it is the essence of the counterargument against the argument that allowing self-signed certs without warnings would be a net improvement.
But after a couple minutes of hopefully actually grokking this explanation of subsequent potential net-banking mitm attack vectors, this thought occurred to me-
Isn't the only added hurdle to pulling off this attack the need to get a non-self-signed cert? Which, sure, is a bit of a relative pain and cost compared to a self-signed cert, but if you were MITM-attacking people's bank accounts, wouldn't getting a valid (effectively disposable) cert just be a 'cost of doing criminal business'? Sure, in the process of getting the cert you have to leave some identity information and use a credit card, but in my estimation of current global security, I tend to imagine that criminals could do those things effectively anonymously.
And in the unlikely event that both my understanding of the issue and that subsequent analysis are correct, then the question is: which is the bigger net gain for society - the benefits of facilitating easy HTTPS encryption with self-signed certs, or the benefits of adding the go-buy-or-steal-a-real-cert hurdle to bank attackers? I think I'd lean towards the former. But odds are I'm still misunderstanding various aspects of this...
Posted Nov 11, 2010 3:57 UTC (Thu)
by foom (subscriber, #14868)
[Link] (3 responses)
You can't get just *any* non-self-signed cert. It has to be a cert valid for the domain name the user is trying to access, signed by one of the certification authorities trusted by the browser.
And that's not a completely trivial thing to do with just a small application of money.
It's only trivial if you happen to run one of the ~500 trusted root or intermediate CAs (e.g. most major governments in the world, and a few companies besides), or have enough money to infiltrate one.
Posted Nov 11, 2010 5:24 UTC (Thu)
by dlang (guest, #313)
[Link]
but if you watch out for the cert changing, as opposed to just the cert existing, you cover most of that problem
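The "watch for the cert changing" idea is essentially trust-on-first-use, the way SSH handles host keys. A rough sketch of what that bookkeeping might look like (the storage and function names are illustrative, not an existing browser feature):

```python
import hashlib

# host -> SHA-256 fingerprint of the certificate seen on first connection.
# A real implementation would persist this across sessions.
known_fingerprints = {}

def check_cert(host, cert_der):
    """Trust-on-first-use check: remember a host's cert, flag any change."""
    fp = hashlib.sha256(cert_der).hexdigest()
    previous = known_fingerprints.get(host)
    if previous is None:
        known_fingerprints[host] = fp   # first visit: remember silently
        return "first-use"
    return "ok" if previous == fp else "changed"   # "changed" => warn loudly
```

The weakness, of course, is the first connection: a MITM who is already in place when you first visit the site gets remembered as the legitimate cert.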
Posted Nov 11, 2010 5:43 UTC (Thu)
by filteredperception (guest, #5692)
[Link] (1 responses)
duh, OK, I figured I was missing something. Hmmm... Maybe the real issue is that certs cost $$ for no good reason, and that is the central issue impeding much more widespread use of https.
Posted Nov 13, 2010 10:31 UTC (Sat)
by gerv (guest, #3376)
[Link]
Gerv
Posted Nov 13, 2010 23:40 UTC (Sat)
by Simetrical (guest, #53439)
[Link] (2 responses)
This problem will potentially go away in the medium term with DNSSEC. Once sites can deploy certificates through DNSSEC, there's no reason we couldn't also devise a DNS record that says "only accept certificates from DNSSEC, not certificates that claim to be signed by CAs". Then the only way to publish a false certificate for the site would be to compromise their DNS, which gives you many fewer attack vectors than now, when you can compromise (or trick or bully) any one of hundreds of CAs.
There's been discussion about adding a feature like this to Strict-Transport-Security, so you can say "only accept a cert signed by this root CA". Then an attacker has to compromise a *specific* CA to compromise the site instead of being able to compromise *any* CA, making their job much harder.
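Such a pin-to-one-CA rule would be a small addition to the existing chain check; the following is purely speculative, since no such Strict-Transport-Security field existed at the time, and the names are hypothetical:

```python
def accept_cert(cert_issuer, chain_valid, pinned_issuer=None):
    """Hypothetical sketch of 'only accept a cert signed by this root CA'.

    The cert must chain to a trusted root as usual, AND, if the site has
    pinned an issuer, the cert's issuer must match that pin.
    """
    if not chain_valid:
        return False                      # fails ordinary CA validation
    if pinned_issuer is not None and cert_issuer != pinned_issuer:
        return False                      # valid chain, but wrong CA: reject
    return True
```

With a pin in place, a cert mis-issued by any of the other hundreds of trusted CAs is rejected, which is exactly the narrowing of the attack surface described above.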
Posted Nov 14, 2010 11:59 UTC (Sun)
by anselm (subscriber, #2796)
[Link] (1 responses)
Yeah, right. Like when this happened to VeriSign in March 2001.
Posted Nov 14, 2010 12:11 UTC (Sun)
by gerv (guest, #3376)
[Link]
There's a difference between a mistake (which happens to the best of us) and wilfully ignoring the necessary rules and safeguards, or a history of mistakes which leads to a diagnosis of institutional incompetence. I suggest that VeriSign is guilty of neither of the latter two things.
In addition, the certificate(s) in the incident you reference were digital code-signing certificates, not web server certificates. Very occasionally, web server certs do fall into the wrong hands (which can be via hacking and theft as much as misissuance - how many SSL-running web servers do you think were rooted in the past year?) but I'd be impressed if you can show me a single reported incident where a fraudulently-acquired web server cert was used for spoofing.
Gerv
[…] and certificate authorities will have their trust revoked by browsers (making their certs useless) if they're found to be giving certs away to people who don't actually control the domains they're for.