Gathering session cookies with Firesheep
Posted Nov 4, 2010 20:14 UTC (Thu)
by Simetrical (guest, #53439)
In reply to: Gathering session cookies with Firesheep by gerv
Parent article: Gathering session cookies with Firesheep
You can't. I know all about how TLS works, but I'd have no idea how to tell whether a cert is actually legitimate. (Look at details, try to find the fingerprints, Google them...?) So you make a best guess. Browsers currently guess that any cert change is due to MITM, and thus throw up scary warning messages and make it difficult to continue. But in fact, as a paper by Microsoft Research observes:
"""
Ironically, one place a user will almost certainly never see a certificate error is on a phishing or malware hosting site. That is, using certificates is almost unknown among the reported phishing sites in PhishTank [7]. The rare cases that employ certificates use valid ones. The same is true of sites that host malicious content. Attackers wisely calculate that it is far better to go without a certificate than risk the warning. In fact, as far as we can determine, there is no evidence of a single user being saved from harm by a certificate error, anywhere, ever.
Thus, to a good approximation, 100% of certificate errors are false positives. Most users will come across certificate errors occasionally. Almost without exception they are the result of legitimate sites that have name mismatches, expired or self-signed certificates.
"""
http://research.microsoft.com/en-us/um/people/cormac/pape...
So if you're going to make an informed guess on the user's behalf, you should guess that the cert is self-signed or mismanaged and not bother the user about it.
Of course, this logic taken on its face would say you should just get rid of TLS entirely, which is wrong. The reason MITM attacks with self-signed certs don't occur is *because* of the warning messages. But the drastic overreaction here by browsers just erodes users' confidence in browser warnings. They need to present the issue more realistically and honestly, keeping in mind that most cert errors are actually innocuous.
(Chrome is particularly egregious here. I've seen it flat-out refuse to let me continue because it thought a cert was expired or revoked or something, but it was just some stupid Microsoft feedback site that I wasn't submitting anything secret to at all, so if it was a MITM I just didn't care. And it makes flat-out wrong claims like "This is probably not the site you are looking for!" Firefox is wordy and tedious, but at least it doesn't outright lie to you. Still, the severity of the warning message even on pages that you can easily guess are legitimate, like <https://amazon.com>, is really unwarranted.)
I think a lot of this debate can be solved by STS. Once all sites that really need TLS are using STS with long expiration times, and browsers come with a prepackaged list of such sites that they update regularly like their lists of malware sites, the UI for non-STS TLS should be relaxed considerably. STS is probably how TLS should have worked to begin with.
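To make the STS idea concrete, here's a minimal sketch (hypothetical names, not any real browser's code) of how a browser could combine a shipped preload list with Strict-Transport-Security headers seen at runtime to decide which hosts must always present a valid cert:

```python
import time

# Assumed preload list shipped with the browser and auto-updated,
# like the malware-site lists mentioned above.
PRELOADED = {"bank.example", "mail.example"}

# Hosts learned dynamically from STS response headers: host -> expiry time.
_dynamic = {}

def note_sts_header(host, header):
    """Record an STS commitment, e.g. header = "max-age=31536000; includeSubDomains"."""
    for part in header.split(";"):
        part = part.strip()
        if part.startswith("max-age="):
            _dynamic[host] = time.time() + int(part[len("max-age="):])

def must_use_valid_tls(host):
    """True if any cert error on this host should be treated as a hard failure."""
    if host in PRELOADED:
        return True
    return _dynamic.get(host, 0) > time.time()
```

Under this scheme, a cert error on a host where `must_use_valid_tls` returns False could get a relaxed warning, while preloaded or committed hosts get the full-severity treatment.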
Also, serving certs through DNSSEC gives us a chance to make them easier to deploy, and moots the question of self-signing. So I think we can improve the situation a lot here. But browsers' current UI for cert errors still is not at all reasonable.
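For the DNSSEC route, the rough shape (along the lines of what later became DANE/TLSA) is a signed DNS record that pins the server's certificate, so a self-signed cert can be verified without a CA. A sketch of generating such a record, with hypothetical inputs:

```python
import hashlib

def tlsa_record(host, port, cert_der):
    """Build a TLSA-style zone-file line pinning a certificate.

    Uses certificate usage 3 (end-entity cert, no CA needed),
    selector 0 (full certificate), matching type 1 (SHA-256).
    cert_der is the DER-encoded certificate bytes.
    """
    digest = hashlib.sha256(cert_der).hexdigest()
    return f"_{port}._tcp.{host}. IN TLSA 3 0 1 {digest}"
```

A client resolving `_443._tcp.example.com` over DNSSEC could then accept the site's self-signed cert iff its hash matches the signed record, mooting the self-signing question exactly as described above.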
Posted Nov 5, 2010 16:18 UTC (Fri)
by gerv (guest, #3376)
[Link] (1 responses)
In a comment on my blog, Cormac Herley rowed back somewhat from the position outlined in those paragraphs you quote.
He wrote: "[That line] was being a little provocative :-) The point I wanted to make is that the user has never seen anything to suggest that the annoyances are there for a purpose. That said, so many of the emails and comments I've got have flagged this line that it's clear I should have worded it better. I completely agree that even 100% false positives doesn't mean we can get rid of the technology."
If you made the warnings less severe, the problems they are there to prevent would become more common.
Gerv
Posted Nov 5, 2010 17:04 UTC (Fri)
by Simetrical (guest, #53439)
[Link]
Correct, but if you make the warnings more severe than warranted, users pay less heed to warnings generally. If the user never received a browser security warning before in their life, the first time will make them think twice. If they've seen them before and wound up going ahead and nothing bad happened, they'll come to ignore them.
Honesty might not always be the best policy, but the current policy is certainly bad. In real life we know that certain types of cert errors are much more likely to be innocuous than others -- like a cert for "www.amazon.com" on "amazon.com", vs. a large banking site using a self-signed cert. A lot of this knowledge could be wired into the browser, and the warnings could be adjusted accordingly. Attackers will realistically target mostly large e-commerce or banking sites, where they can see easy gains, so getting a list of those and stepping up the warnings there while scaling back for others would greatly increase warning accuracy.
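The kind of heuristic I mean could be sketched like this (hypothetical names and lists, not any shipping browser's logic): rank an error's likely severity from the hostname, the names on the cert, and whether the site is a known high-value target:

```python
# Assumed list of high-value targets (banking, large e-commerce),
# shipped with the browser like a malware-site list.
HIGH_VALUE = {"bank.example", "shop.example"}

def error_severity(host, cert_names, self_signed):
    """Classify a cert error as "severe", "moderate", or "mild"."""
    if self_signed and host in HIGH_VALUE:
        # A self-signed cert on a banking/e-commerce site: step up the warning.
        return "severe"
    # A cert for "www.amazon.com" presented on "amazon.com" (or vice versa)
    # is a very common misconfiguration and likely innocuous.
    related = {host,
               "www." + host,
               host[4:] if host.startswith("www.") else host}
    if not self_signed and related & set(cert_names):
        return "mild"
    return "moderate"
```

A "mild" result might warrant only a notification bar, while "severe" keeps the full interstitial.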
I'm hopeful that STS will mostly solve the problem, by giving an out-of-band automated way to get a list of sites that really want to commit to using valid certs always. (Out-of-band because the real value will be when lists ship with the browser and auto-update.) In that case, non-STS sites can have their warnings greatly moderated, ideally notification bars instead of interstitials. But the existing problem is real, and could have been mitigated by the browser implementers by deploying fairly simple heuristics long before now -- when instead some have been making it worse.