Certificates and "authorities"
Posted Sep 9, 2011 21:24 UTC (Fri) by Simetrical
In reply to: Certificates and "authorities"
Parent article: Certificates and "authorities"
You accurately summarize many of the failings of HTTPS in practice today, but don't give enough credit to solutions that are being worked on and deployed right now.
First: it's not widely used. Because of the extra software and layer-8 complexity, people simply don't bother to use it. Some complexity is unavoidable for authentication, but HTTPS as deployed provides all-or-nothing security: you don't get ephemeral encryption, which makes eavesdropping harder and detectable and which could be provided without any administrative burden, unless you also swallow the full authentication pill.
This one I'm in total agreement with. I don't know of any good solution being worked on. HTTPS is a headache even for big sites to get right -- https://amazon.com is proof enough of that. Still, it's worth pointing out that the highest-profile targets are also the ones who are most likely to actually have the resources to deploy HTTPS properly.
(In practice, self-signed certs throw warnings at users, which hardly provides any security but produces enough FUD to make them mostly useless. Browsers could instead simply hide the fact that such pages are encrypted, but they don't.)
That behavior would destroy any benefit of HTTPS. A MITM could intercept any HTTPS connection and take it over by just serving a self-signed cert, and the only way the user would know is if they were aware enough to notice the UI changes. I wrote up a more detailed explanation of this on LWN a while ago. If we want encrypted but unauthenticated HTTP, we should either reuse the http scheme or make up a new one. The https scheme has to remain reserved for full authentication only.
The lack of ubiquitous mandatory HTTPS makes downgrade attacks utterly trivial: "Oh no, my victim is using SSL! Oh wait, that's no problem, I'll block port 443 and they'll switch to HTTP, because it's not unusual for HTTPS to fail to work."
HSTS is a good improvement on this particular fault, but even though it is trivial to activate, practically no sites support it; it also creates its own complicated failure modes and still suffers from an inability to be securely initialized.
It creates its own failure modes, but so does anything. It can be securely initialized right now via browsers shipping with lists of sites that should always use HTTPS. Chrome already does something like this for some Google sites, and in fact that's how these forged certificates got detected to start with. In the medium term, you could in principle securely initialize HSTS using DNSSEC, although the performance implications would need to be carefully considered. HSTS is definitely going to greatly improve HTTPS security, although admittedly it will make it yet more complicated.
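For reference, activating HSTS really is trivial on the server side: the site just sends one response header over HTTPS. Here's a minimal sketch as a Python WSGI app; the max-age value and the includeSubDomains flag are illustrative choices, not recommendations.

    # Minimal sketch: opting a site into HSTS from a Python WSGI application.
    # A browser that has seen this header once over a valid HTTPS connection
    # will refuse to talk to the host over plain HTTP until max-age expires.
    def application(environ, start_response):
        body = b"hello over HTTPS\n"
        headers = [
            ("Content-Type", "text/plain"),
            ("Content-Length", str(len(body))),
            # One year, covering subdomains as well (illustrative values).
            ("Strict-Transport-Security", "max-age=31536000; includeSubDomains"),
        ]
        start_response("200 OK", headers)
        return [body]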
The authentication model is also failure-prone. Certificates expire frequently, and users routinely encounter certificate errors even on big-name, high-profile sites. Browser vendors have tried to combat the resulting blind clicking by making the process more burdensome (three clicks in Firefox, IIRC), but increasing the burden of the task simply makes the inattentional blindness more potent.
I had a recent experience with this: a teammate linked to an IETF page in chat, and the IETF had an expired certificate. I managed to click through the warnings without ever realizing they were there, only noticing when other people in the chat commented. None of us reported the expired cert. Perhaps we were all just MITMed with an old cert; we'll never know, because the HTTPS model simply doesn't work.
Part of this is probably because certs have to be reissued regularly, at a cost. If you set up a system where a site can refresh its certs automatically, like using DNSSEC, then this kind of failure is less likely. But yes, HTTPS is unreasonably hard to actually deploy properly, and I agree that's a huge problem.
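In the meantime, a site can at least catch its own expiring certs before users do. A minimal sketch of a check that could run from cron, assuming a publicly reachable HTTPS endpoint (the host name below is just a placeholder):

    # Minimal sketch: warn well before a site's certificate expires, so users
    # never see the expiry warning page. Host name below is a placeholder.
    import socket
    import ssl
    import time

    def days_until_expiry(host, port=443):
        ctx = ssl.create_default_context()
        with socket.create_connection((host, port)) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                cert = tls.getpeercert()
        # "notAfter" is a string like "Jun  1 12:00:00 2025 GMT".
        expiry = ssl.cert_time_to_seconds(cert["notAfter"])
        return (expiry - time.time()) / 86400

    if __name__ == "__main__":
        days = days_until_expiry("www.example.org")
        if days < 30:
            print("certificate expires in %.0f days -- renew it now" % days)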
Despite its faults and the huge number of supported CAs, SSL is also costly: the smaller number of public CAs that are supported by a broad set of browsers charge a lot, especially for the wildcard certificates needed to support subdomains. This further discourages usage.
Using DNSSEC instead of CAs would fix this problem entirely. Recent Chrome already supports this. Certs would be free, and as reliable as the domain name registration process itself. An attacker who compromises the registrar could forge a cert, but that's a very small attack surface.
The nice thing about Chrome's implementation is that it doesn't rely on DNSSEC actually being available on the client: it just sticks the signed record in place of the regular cert in the TLS setup. Thus the only limiting factors are browser support and TLD signing. Lack of browser support will delay practical usability of DNSSEC certs for many years, unfortunately, but that's a problem with any realistic alternative too.
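The general idea is easy to sketch even without Chrome's exact wire format: the site publishes a hash of its certificate in a DNSSEC-signed zone, and the browser checks the certificate presented in the handshake against it. The record name and format in the comments below are made up purely for illustration; they are not Chrome's actual mechanism.

    # Illustrative sketch only: how a site operator might derive the value to
    # publish alongside the domain in a DNSSEC-signed zone.
    import hashlib

    def cert_fingerprint(der_cert_bytes):
        # SHA-256 over the DER-encoded certificate.
        return hashlib.sha256(der_cert_bytes).hexdigest()

    # The operator would then publish something along the lines of
    #     _tls.www.example.org.  IN TXT "sha256:<fingerprint>"
    # (hypothetical record name/format) and update it whenever the cert
    # changes. The browser recomputes the hash of the cert offered in the
    # TLS handshake and rejects the connection on a mismatch.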
Even when the CAs are functioning normally, their validation process is a joke: usually it requires nothing more than responding to an email sent to a domain's administrative contact, or placing a file with a particular name on the site (and serving it via unauthenticated HTTP). In many cases neither is a significant barrier to anyone with a fax machine or a little luck at guessing passwords. Yet making it better would only increase the already high costs.
Once DNSSEC cert stapling is reliably available, there will be no reason for sites to use old-style CAs anymore. At that point, browsers can gradually drop support for them, allowing only DNSSEC-based certs, which don't have this impersonation problem. In the short term, HSTS should allow individual sites to require that only certain public keys work for them (pinning), which works around the problem for now.
Again, this is how the DigiNotar compromise was actually caught in real life. An Iranian Chrome user reported getting a hard failure when accessing Google sites, because the wrong cert was being presented. Even though the cert was completely valid as far as the CA system was concerned, Chrome blocked it because the correct public key was shipped with the browser. There's no reason this approach couldn't protect every site on the web that opted in.
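Conceptually the pinning check is simple. A minimal sketch of the idea, with a placeholder host and placeholder hashes rather than Chrome's real pin list:

    # Minimal sketch of public-key pinning: the browser ships acceptable
    # public-key hashes for a host and rejects any chain that doesn't
    # contain one of them, even if a trusted CA signed the chain.
    # Host and hashes below are placeholders, not Chrome's actual pins.
    import hashlib

    PINS = {
        "mail.example.org": {
            "placeholder-spki-hash-1",
            "placeholder-spki-hash-2",
        },
    }

    def chain_satisfies_pins(host, chain_spki_ders):
        # chain_spki_ders: DER-encoded SubjectPublicKeyInfo of each cert
        # in the presented chain.
        expected = PINS.get(host)
        if expected is None:
            return True  # no pins for this host; ordinary CA validation applies
        seen = {hashlib.sha256(spki).hexdigest() for spki in chain_spki_ders}
        return bool(seen & expected)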
Furthermore, in the almost universally used non-PFS mode, the SSL certificates stored on a server are incredibly valuable to attackers: capturing a site's certificate not only allows you to _undetectably_ impersonate the site for the duration of the cert (or until it's revoked), it also allows an attacker to decrypt all communications with that server _prior_ to the exploitation, which they may have captured. So, as deployed, SSL does little to discourage the creation of billion-dollar ubiquitous surveillance systems, since even when it's used it's easily defeated ex post facto!
If you can get root access to the web server, sure. In that case, why not just take over the webserver process itself?
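As for the "decrypt previously captured traffic" half of that concern, the fix is ephemeral (PFS) key exchange, which a server can already prefer today. A minimal sketch using Python's ssl module; the file paths and cipher string are illustrative, and the exact cipher names available depend on the OpenSSL build.

    # Minimal sketch: prefer ephemeral key exchange so that a later theft of
    # the certificate's private key cannot decrypt traffic recorded earlier.
    # Certificate/key paths and the cipher string are illustrative.
    import ssl

    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(certfile="server.crt", keyfile="server.key")
    ctx.set_ciphers("ECDHE+AESGCM:DHE+AESGCM:!aNULL:!eNULL")
    # Hand ctx to the HTTPS server of your choice; clients that support it
    # will negotiate an (EC)DHE suite and get forward secrecy.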
Moreover, often your browser is talking to an intermediary "application service provider" rather than to the true far end of your communication. E.g., when you send an email on Facebook, the other end of your communication is your friend; Facebook is just a middleman, no different from your ISP. In this very common model, HTTPS offers nothing in the way of end-to-end security; it simply moves the vulnerability point around. In the same way that we don't consider our local WiFi's WPA adequate to secure our internet traffic, we shouldn't consider HTTPS adequate.
In almost all cases, the user actually wants the site they're connecting to to see their data. For instance, Gmail lets you search your mail, filter it according to criteria you set, heuristically mark certain messages as important or spam, and so on. Most users just do not want solutions like PGP, because the security-versus-convenience tradeoff tilts drastically toward convenience for them. So I don't rate this as a problem with HTTPS at all.
I could continue, but I think these points are enough to establish that the compromise-any-of-many vulnerability of the CA model is just one problem out of a great many.
The biggest problem with HTTPS today is that it's not secure against determined attackers, such as governments, because of the countless SPOFs (every CA in existence). The way to address that is to support public key pinning in HSTS, which has been discussed a bunch and is likely to happen in the not-too-distant future, I hope. Chrome already supports such a feature (although currently only for certain Google sites) and it did actually work against the DigiNotar compromise.
The second-biggest problem with HTTPS today is that it's fragile and hard to set up. This is less tractable, but it's also less important. Relatively few targets are worth anyone's effort to MITM, and the ones that are can mostly handle the complexity. Certificates over DNSSEC will be a big step forward, because that will remove the cost, and then most of the remaining complexity can be automated away.
HTTPS does suffer from several major design flaws that have caused untold harm to the web and its users. However, it's not a fundamentally broken approach and real efforts are underway to fix some of its worst problems. I wish progress could be faster, but it is happening.