
Certificates and "authorities"

By Jake Edge
September 7, 2011

The more we hear about the DigiNotar certificate situation, the worse it seems to get. What started out looking something like the Comodo incident from last March—a serious breach in its own right—has now turned into damning evidence of almost unimaginably lax security at the DigiNotar certificate authority (CA). That led browser makers to do something unprecedented: not just to blacklist the known-bad certificates that had been issued, but to blacklist any DigiNotar-issued certificate. As it turns out, that was the right response (modulo a short-lived whitelisting of some Dutch government certificates), as the scope of the compromise has just continued to grow.

The certificates in question are SSL/TLS certificates that are used to authenticate and deliver the keys used to encrypt HTTPS traffic. CAs issue certificates for secure web sites and sign them with their own private keys so that browsers can ensure that the certificates are valid. The public half of each of those CA keys is stored in a browser's "root store". When they decide to include a given CA's root key, browser makers are explicitly trusting certificates signed by that CA. In the wrong hands, a certificate signed by a trusted root can be easily used to perform man-in-the-middle attacks against users who are accessing the secure site.
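The trust decision a browser makes can be sketched in a few lines. This is a deliberately simplified toy model: real certificates are X.509 structures signed with RSA or ECDSA, not hashes, and all of the names and key bytes below are made up for illustration.

```python
import hashlib

# Toy "root store": maps a CA name to the secret we pretend only that
# CA can sign with (a stand-in for the CA's private key).
ROOT_STORE = {"ExampleRoot CA": b"examplerootca-private-key"}

def sign(ca_key, subject):
    # Stand-in for an asymmetric signature over the certificate body.
    return hashlib.sha256(ca_key + subject.encode()).hexdigest()

def browser_accepts(cert):
    """Accept a certificate only if a root-store CA vouches for it."""
    ca_key = ROOT_STORE.get(cert["issuer"])
    if ca_key is None:
        return False  # unknown issuer: untrusted
    return cert["signature"] == sign(ca_key, cert["subject"])

good = {"issuer": "ExampleRoot CA", "subject": "www.example.com",
        "signature": sign(b"examplerootca-private-key", "www.example.com")}
forged = dict(good, signature="bogus")

print(browser_accepts(good))    # True
print(browser_accepts(forged))  # False
```

The point the model makes is the one the article turns on: the browser trusts *any* certificate that chains to *any* root-store key, so a single compromised CA undermines every site on the web.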

Compromise and discovery

The compromise of DigiNotar's certificate signing systems evidently occurred on or before July 19; once the CA detected the problem, it issued revocations for the certificates that it was able to determine were wrongly issued. DigiNotar evidently did not notify browser makers or others of the compromise and essentially swept the whole thing under the rug. But the attackers, who may have compromised parts of DigiNotar's systems as far back as May 2009, were able to issue certificates that were not detected when the compromise was uncovered.

One of those fake certificates, for *.google.com, was reported to the Gmail help forum by a user in Iran. The user was able to do so because of a Chrome/Chromium browser feature called public key pinning. Essentially, Chrome has a list of the hashes of public keys that are allowed to be used to sign certificates for Google's servers. If none of those public keys is found in the certificate chain presented, it is a fatal error, which is what the user observed.
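The pinning check itself is simple, which is part of its appeal. A rough sketch (the key bytes here are placeholders; Chrome actually pins hashes of the SubjectPublicKeyInfo of keys that may appear anywhere in the chain):

```python
import hashlib

# Hypothetical pinned hash set for Google properties; real pins are
# SHA hashes of DER-encoded SubjectPublicKeyInfo structures.
GOOGLE_PINS = {hashlib.sha256(b"google-ca-public-key").hexdigest()}

def chain_passes_pinning(chain_public_keys, pins):
    """Fatal error unless at least one key in the presented
    certificate chain matches a pinned hash."""
    return any(hashlib.sha256(k).hexdigest() in pins
               for k in chain_public_keys)

# A chain signed by the expected CA passes...
print(chain_passes_pinning([b"google-ca-public-key"], GOOGLE_PINS))    # True
# ...but a fully "valid" chain from a different, compromised CA fails,
# which is exactly how the Iranian user's browser caught the attack.
print(chain_passes_pinning([b"diginotar-public-key"], GOOGLE_PINS))    # False
```

Note that the DigiNotar certificate was cryptographically valid; pinning rejects it anyway because validity and *expectedness* are different questions.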

It is fortunate for that user—and now the rest of the internet—that Chrome has that feature. Without it, browsers like Firefox, IE, Safari, and others would happily accept the bogus certificate. The evidence seems to point to an effort emanating from Iran—likely sponsored or run by the Iranian government—to generate and then use these certificates for man-in-the-middle attacks against Iranians, particularly those who might be involved in protests or other dissent. The evident link to Iran is one that both the Comodo and DigiNotar attacks share.

Dutch government sites

Once the problem was reported, Google alerted other browser makers, who were generally quick to issue updates (though Safari seems to have lagged) that removed the DigiNotar root certificates from the root store, effectively blacklisting all DigiNotar-issued certificates. There is a wrinkle, however, because some Dutch government sites use certificates that are signed by DigiNotar (which is a Dutch company). A blanket ban of DigiNotar-signed certificates would have affected these sites, so, at the request of the Dutch government, an exemption to the ban was added for Firefox and Chrome. As a Mozilla blog update puts it:

These certificates are issued from a different DigiNotar-controlled intermediate, and chain up to the Dutch government CA (Staat der Nederlanden). The Dutch government's Computer Emergency Response Team (GovCERT) indicated that these certificates are issued independently of DigiNotar's other processes and that, in their assessment, these had not been compromised. The Dutch government therefore requested that we exempt these certificates from the removal of trust, which we agreed to do in our initial security update early this week.

But it seems that the government was a bit hasty in that assessment, because it was fairly quickly revoked. Mozilla described it this way in the update: "The Dutch government has since audited DigiNotar's performance and rescinded this assessment. We are now removing the exemption for these certificates, meaning that all DigiNotar certificates will be untrusted by Mozilla products." In fact, since then, the Dutch government has taken over operational management of DigiNotar, and explicitly "denounces trust in certificates issued by DigiNotar".

This is ugly stuff. The CA model relies on trust and part of that trust is that CAs will zealously guard access to their signing authority. In two recent cases—and it is certainly possible there are other compromises as yet unknown—we have seen that some CAs are not taking enough precautions. As it stands, every time another compromise is discovered, browser makers will have to race around to patch their browsers, then Linux distributions need to get updates out (for Firefox and others), and, finally, users actually need to apply the update.

Unfortunately, it is not just a Google certificate that is out there in the wild. Early reports were that it was just a handful of bad certificates but, as time went on, the number of certificates issued by the attackers using the DigiNotar keys has risen: first to around 200, and now there are reports of as many as 500. Not only were DigiNotar's signing systems compromised, but it would seem that its logging and audit procedures were circumvented as well.

A pastebin posting purporting to be from the attacker (of both DigiNotar and the Comodo affiliate back in March) sheds some light on the extent and motives for the attack. It also indicates that there are four other CAs that have been penetrated, including one that is named: GlobalSign. Since that posting, GlobalSign has, at least temporarily, stopped issuing certificates. Whether that's just based on prudence or whether GlobalSign found evidence of a compromise is unclear. If the pastebin posting is real, however, there are other CAs that are also at risk.

Going forward

For obvious reasons, this recent spate of attacks has raised the profile of the problems inherent in the centralized CA model that is in use today. The central authorities are supposed to reduce the attack surface against SSL/TLS keys, but that depends on the vigilance of those CAs. The number of different CAs trusted by a modern browser is rather eye-opening, and hoping that they will all keep their systems secure is pretty clearly forlorn.

Small CAs, like DigiNotar, can be blacklisted when—if—compromises are discovered, but that's much harder to do for large CAs like Comodo or Verisign, for example. Luckily, detecting bad certificates is relatively easy—easier than figuring out if CAs have been compromised. Since web sites must present their certificate each time an encrypted connection is made, both detection and evidence gathering are fairly straightforward. Chrome's "pinning" feature does that in a limited way, though it still places trust in the CAs that do have signing authority for Google's keys; should any of those CAs be compromised, pinning would not catch them.

The pinning feature is one that other browsers will likely consider adding. Google has made it clear that it will allow other sites to pin their certificates to specific CA keys, and presumably any other browsers that implement it will do the same. However, that may turn Google, Mozilla, and others into the de facto arbiters of certificate authenticity, which may not be a desirable outcome. It is also possible that Chrome and the other browsers could provide a way for sites to do their own pinning via HTTP Strict Transport Security (HSTS) or some other means.
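As deployed today, HSTS is just a response header that a conforming client parses and remembers; any pinning extension carried over it remains hypothetical at this point. A minimal sketch of the client side (directive names are from the HSTS draft; the parser is illustrative, not a full grammar):

```python
# Parse a Strict-Transport-Security header into its directives.
# Real clients must also handle quoting and case-insensitivity per
# the HSTS draft; this sketch covers only the common form.
def parse_sts(header):
    directives = {}
    for part in header.split(";"):
        part = part.strip()
        if "=" in part:
            key, value = part.split("=", 1)
            directives[key.strip()] = value.strip().strip('"')
        elif part:
            directives[part] = True  # valueless directive, e.g. includeSubDomains
    return directives

hsts = parse_sts("max-age=31536000; includeSubDomains")
print(hsts["max-age"])            # prints 31536000
print(hsts["includeSubDomains"])  # prints True
```

Once a browser has seen such a header over a valid HTTPS connection, it refuses plain-HTTP access to the site until max-age expires, which is what blunts the port-443-blocking downgrade attack described in the comments below.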

But, other alternatives to the centralized model are certainly being looked at. One that seems to have attracted some attention recently is Moxie Marlinspike's Convergence, which uses "trust notaries" in place of hard-coded lists of CA root keys. These notaries are in multiple locations and compare notes on the certificates that get presented to them, which is an effective way to recognize a certificate-based man-in-the-middle attack. Convergence is a Firefox add-on that is based on ideas from the Perspectives project, along with some of Marlinspike's ideas on trust agility.

We will certainly see more problems with compromised CAs down the road, particularly because governments have shown an interest in acquiring "fake" certificates—and using them against their citizens. It's a problem that is not going away soon and one that needs to be addressed. Building webs of trust implicitly via Convergence/Perspectives or more explicitly using something like Monkeysphere is one possible solution, or piece of a solution. Removing or reducing the trust that we currently place in CAs is pretty much required to be part of any solution, but we've known that for quite some time now. The CAs may not like it, but their stranglehold over the issuance of trusted certificates is likely on its way out.



Certificates and "authorities"

Posted Sep 8, 2011 0:13 UTC (Thu) by gmaxwell (subscriber, #30048) [Link]

To some extent I feel like the concern over the compromised CAs is really missing the elephant in the room: Even with perfect CA security, the HTTPS security model fails to provide effective security in the real world.

First: It's not widely used. Because of the extra software and layer-8 complexity, people simply don't bother to use it. Some complexity is unavoidable for authentication, but HTTPS as deployed provides all-or-nothing security: you don't get ephemeral encryption, which makes eavesdropping harder and detectable, and which could be provided without any administrative burden, unless you also swallow the full authentication pill. (In practice, self-signed certs throw warnings at users, which hardly provides any security but provides enough FUD to make them mostly useless; the browsers could instead completely hide the fact that such pages are encrypted, but they don't.)

The lack of ubiquitous mandatory HTTPS makes downgrading attacks utterly trivial: "Oh no, my victim is using SSL! Oh wait, that's no problem, I'll block port 443 and they'll switch to http because it's not unusual for HTTPS to fail to work".

HSTS is a good improvement on this particular fault, but even though it is trivially activated practically no sites support it— and it creates its own complicated failure modes and still suffers from an inability to securely initialize it.

The authentication model is also failure prone. Certificates expire frequently and users routinely encounter certificate errors even on big name high profile sites. Browser vendors have tried to combat the resulting blind clicking by making the process more burdensome (three clicks in firefox, IIRC) but increasing the burden of the task simply makes the inattentional blindness more potent.

I had recent experience with this: Some teammate linked to an IETF page in chat, and the IETF had an expired certificate. I managed to click through the warnings without ever realizing they were there, only noticing it when other people in the chat commented. None of us reported the expired cert— perhaps we were all just MITMed with an old cert, we'll never know because the HTTPS model simply doesn't work.

Despite its faults and the huge number of supported CAs, SSL is also costly: The smaller number of public CAs that are supported by a broader set of browsers charge a lot, especially for the wildcard certificates which are needed to support subdomains. This further discourages usage.

Even when the CAs are functioning normally their validation process is a joke: usually it requires nothing more than responding to an email sent to a domain name administrative contact, or a file with a particular name placed on the site (and served via unauthenticated HTTP)— in many cases neither is a significant barrier to anyone with a fax machine or a little luck at guessing a password. Yet making it better would only increase the already high costs.

Furthermore, in the almost universally used without-PFS mode, SSL certificates stored on a server are incredibly valuable to attackers: Capturing a site's certificate not only allows you to _undetectably_ impersonate the site for the duration of the cert (or until it's revoked), but it allows an attacker to decrypt all communications with that server _prior_ to the exploitation which they may have captured. So, as deployed, SSL does little to discourage the creation of billion-dollar ubiquitous surveillance systems, as even when it's used it's easily defeated ex post facto!

Moreover, often it is your browser talking to an intermediary "application service provider" rather than to the true far end of your communication. E.g. when you send an email on facebook, the other end of your communication is your friend— facebook is just a middle-man, no different from your ISP. In this very common model HTTPS offers nothing in the way of end-to-end security; it simply moves the vulnerability point around. In the same way that we don't consider our local WiFi WPA adequate to secure our internet traffic, we shouldn't consider HTTPS adequate.

I could continue, but I think these points are enough to establish that the vulnerability of the CA model to compromise is just one problem out of a great many.

People looking at the "SSL problem" would do well to expand their analysis beyond CA insecurity. Systems like OTR (within its narrow problem domain) and Tcpcrypt (covering layers below authentication) provide security properties which are much more aligned with the practical needs of the internet than what HTTPS provides.

Certificates and "authorities"

Posted Sep 8, 2011 7:45 UTC (Thu) by sgros (subscriber, #36440) [Link]

Why do you think that solving the "CA problem" wouldn't do? The majority of the problems you mention are related to CAs. Another set of problems could be linked to the bad usability of browsers. After all, security adds complexity and restrictions; there is no way around it.

But what if the "CA problem" is solved in such a way that helps browsers (and users) make better informed decisions? Also, what if the system allows you to not completely trust a single CA? Finally, why don't browser vendors require bundled CAs to provide annual audits of their security from independent companies?

Of course, there is also the fact that the majority of Internet users simply do not care about security... (e.g. "I have open WiFi, so what?!", "Person 1: You are using http, which means someone can see your communication. Person 2: So what!?" or "Person 2: Uhm?!").

Certificates and "authorities"

Posted Sep 8, 2011 9:18 UTC (Thu) by gmaxwell (subscriber, #30048) [Link]

Of the problems I listed the only ones that could even theoretically be solved by a perfect CA are the cost and poor validation problems. I'm doubtful that these two can be concurrently solved with the current technical system.

User-visible complexity is what matters the most, as implementation complexity becomes well amortized quickly. And from the user's perspective, "security adds complexity and restrictions" is not an invariant. Security is a many-dimensioned continuum, not a binary value. Some kinds of security add significant complexity, some add basically none, but the amount of complexity is often only weakly related to the amount of security being provided.

For example, address space layout randomization does not meaningfully increase complexity for the user. Likewise, ephemerally keyed encryption can be provided completely invisibly; it provides _absolute_ protection against passive monitoring and at least some risk of detection (however unlikely) for any attacker who performs active attacks.

The user not caring isn't a user problem: It's a technical problem with the system. HTTPS should provide security for the user even if they are unaware or indifferent, and should do so without effort on their part. A system which fails to do so is not secure in the real world, where systems are used by real humans with well-established poor behaviour; even security experts fall into these traps.

"It's secure if you put the user on the moon" wouldn't be an acceptable excuse for the failings of HTTPS, and nor should "it's secure if the user has superhuman vigilance" be considered acceptable.

Asking users to make _more_ decisions will decrease security, not increase it, as it provides more ways to trick them, more ways to downgrade them, more reasons for them to blindly agree to whatever dialogs pop up.

Of course, improving things like not making everyone concurrently trust all CAs would be grand. I wasn't trying to oppose this, only pointing out that the CA-specific problems are only one part of a system with many issues.

Certificates and "authorities"

Posted Sep 8, 2011 12:09 UTC (Thu) by sgros (subscriber, #36440) [Link]

Your use of the term "HTTPS" confuses me! What do you mean by "HTTPS"? For example, what do you mean by "HTTPS should provide"?

For me, HTTPS is a layered protocol (HTTP + SSL) in which SSL protects communication using the provided certificate.

Certificates and "authorities"

Posted Sep 8, 2011 9:25 UTC (Thu) by nmav (subscriber, #34036) [Link]

This is of course only your opinion. If you can tolerate encryption without authentication, that is good for you, but no good for me. I don't care about talking to someone secretly, so that no one overhears, if I don't know whom I am talking to. The examples of failures you mention really stem from your reluctance to use the provided interfaces. Security -online or offline- requires following some protocols. If you don't want to follow them, don't expect security.

Certificates and "authorities"

Posted Sep 11, 2011 1:29 UTC (Sun) by foom (subscriber, #14868) [Link]

> If you can tolerate encryption without authentication, it is good for you, but no good for me.

You're missing the point there. Which is better: no encryption whatsoever, or encryption without assurance of who you're talking to? The second is plainly better, as it defeats at least *some* types of attackers (those who have intercepted, but do not have the ability to modify, your traffic). By now, the whole web ought to at least be at the "encryption without authentication" level.

Clearly, authentication of who you're talking to is an important feature to have, but requiring that be present to enable the use of encryption at all was a colossal blunder in the development of HTTPS.

Certificates and "authorities"

Posted Sep 16, 2011 17:09 UTC (Fri) by bjartur (guest, #67801) [Link]

By now, the whole web ought to at least be at the "encryption without authentication" level.

Which, like HSTS, provides adequate data security over end-to-end TCP connections on links into which attackers cannot inject malicious packets. This does nothing to protect you against the most dangerous villains: MTAs, ISPs, proxies and the like.

Certificates and "authorities"

Posted Sep 17, 2011 2:26 UTC (Sat) by njs (guest, #40338) [Link]

The NSA isn't a dangerous villain?

Encryption without authentication forces any potential broad-scale sniffers to take a more active role, which may be politically problematic and is certainly much more expensive. (Decrypting/re-encrypting a few million TCP flows on the fly is not cheap or easy.)

Certificates and "authorities"

Posted Oct 18, 2011 20:46 UTC (Tue) by rich0 (guest, #55509) [Link]

Considering how prevalent cookie theft is over unsecured WiFi I'd say that there is a huge case for encrypted communications even if they aren't authenticated.

Sure, there is always the risk of MITM but at least you force the attacker to make an active attack, which then creates the opportunity to detect the hacker. Just have a few police stings in campus coffee shops or whatever and I bet you'd have some impact on the practice.

I'm amazed sometimes at the XOR approach we take towards security - either very secure but lots of cost/hurdles, or absolutely and completely insecure. A better approach is to provide a tiered system where everybody can work out how secure is secure enough for a particular application. Use DNSSEC and stick the required security level (as well as certificates) in the DNS record for a site and you have a standard way of ensuring the client and server are on the same page where security is important.

The bigger picture

Posted Sep 8, 2011 14:18 UTC (Thu) by scripter (subscriber, #2654) [Link]

Thank you for reminding us of the bigger picture of placing trust, beyond the scope of Certificate Authorities and SSL.

A colleague once stated, "encryption was the opiate of the 90s". I'm not so sure that we've gotten past our delusions of what encryption can accomplish, even in 2011.

Certificates and "authorities"

Posted Sep 9, 2011 21:24 UTC (Fri) by Simetrical (guest, #53439) [Link]

You accurately summarize many of the failings of HTTPS in practice today, but don't give enough credit to solutions that are being worked on and deployed right now.

First: It's not widely used. Because of the extra software and layer-8 complexity, people simply don't bother to use it. Some complexity is unavoidable for authentication, but HTTPS as deployed provides all-or-nothing security: you don't get ephemeral encryption, which makes eavesdropping harder and detectable, and which could be provided without any administrative burden, unless you also swallow the full authentication pill.

This one I'm in total agreement with. I don't know of any good solution being worked on. HTTPS is a headache even for big sites to get right -- https://amazon.com is proof enough of that. Still, it's worth pointing out that the highest-profile targets are also the ones who are most likely to actually have the resources to deploy HTTPS properly.

(In practice, self-signed certs throw warnings at users, which hardly provides any security but provides enough FUD to make them mostly useless; the browsers could instead completely hide the fact that such pages are encrypted, but they don't.)

That behavior would destroy any benefit of HTTPS. A MITM could intercept any HTTPS connection and take it over by just serving a self-signed cert, and the only way the user would know is if they were aware enough to notice the UI changes. I wrote up a more detailed explanation of this on LWN a while ago. If we want encrypted but unauthenticated HTTP, we should either reuse the http scheme or make up a new one. The https scheme has to remain reserved for full authentication only.

The lack of ubiquitous mandatory HTTPS makes downgrading attacks utterly trivial: "Oh no, my victim is using SSL! Oh wait, that's no problem, I'll block port 443 and they'll switch to http because it's not unusual for HTTPS to fail to work". HSTS is a good improvement on this particular fault, but even though it is trivially activated practically no sites support it— and it creates its own complicated failure modes and still suffers from an inability to securely initialize it.

It creates its own failure modes, but so does anything. It can be securely initialized right now via browsers shipping with lists of sites that should always use HTTPS. Chrome already does something like this for some Google sites, and in fact that's how these forged certificates got detected to start with. In the medium term, you could in principle securely initialize HSTS using DNSSEC, although the performance implications would need to be carefully considered. HSTS is definitely going to greatly improve HTTPS security, although admittedly it will make it yet more complicated.

The authentication model is also failure prone. Certificates expire frequently and users routinely encounter certificate errors even on big name high profile sites. Browser vendors have tried to combat the resulting blind clicking by making the process more burdensome (three clicks in firefox, IIRC) but increasing the burden of the task simply makes the inattentional blindness more potent. I had recent experience with this: Some teammate linked to an IETF page in chat, and the IETF had an expired certificate. I managed to click through the warnings without ever realizing they were there, only noticing it when other people in the chat commented. None of us reported the expired cert— perhaps we were all just MITMed with an old cert, we'll never know because the HTTPS model simply doesn't work.

Part of this is probably because certs have to be reissued regularly, at a cost. If you set up a system where a site can refresh its certs automatically, like using DNSSEC, then this kind of failure is less likely. But yes, HTTPS is unreasonably hard to actually deploy properly, and I agree that's a huge problem.

Despite its faults and the huge number of supported CAs, SSL is also costly: The smaller number of public CAs that are supported by a broader set of browsers charge a lot, especially for the wildcard certificates which are needed to support subdomains. This further discourages usage.

Using DNSSEC instead of CAs would fix this problem entirely. Recent Chrome already supports this. Certs would be free, and as reliable as the domain name registration process itself. An attacker who compromises the registrar could forge a cert, but that's a very small attack surface.

The nice thing about Chrome's implementation is that it doesn't rely on DNSSEC actually being available on the client. It just sticks the signed record in place of the regular cert in the TLS setup. Thus the only limiting factors are browser support, and TLD signing. Lack of browser support will delay practical usability of DNSSEC certs for many years, unfortunately, but that's a problem with any realistic alternative too.
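However the signed record reaches the client, the underlying idea is the same: publish a digest of the site's certificate (or key) under DNSSEC, and have the client compare that digest to whatever the TLS server actually presents. A rough sketch (the byte strings are placeholders, not a real DNS wire format or DER encoding):

```python
import hashlib

def cert_association(cert_der: bytes) -> str:
    # The digest that would be published in a DNSSEC-signed record
    # for the site (comparable in spirit to the later DANE/TLSA design).
    return hashlib.sha256(cert_der).hexdigest()

# The site publishes the hash of its real certificate...
published = cert_association(b"example.com-der-encoded-cert")

def client_accepts(presented_der: bytes) -> bool:
    # ...and the client checks the presented cert against it.
    return cert_association(presented_der) == published

print(client_accepts(b"example.com-der-encoded-cert"))       # True
# A forged cert fails even if some trusted CA signed it,
# because its hash doesn't match the published record.
print(client_accepts(b"attacker-cert-signed-by-real-ca"))    # False
```

The security of the scheme then rests on the DNSSEC signing chain rather than on several hundred independent CAs, which is the attack-surface reduction Simetrical is describing.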

Even when the CAs are functioning normally their validation process is a joke: usually it requires nothing more than responding to an email sent to a domain name administrative contact, or a file with a particular name placed on the site (and served via unauthenticated HTTP)— in many cases neither is a significant barrier to anyone with a fax machine or a little luck at guessing a password. Yet making it better would only increase the already high costs.

Once DNSSEC cert stapling is reliably available, there will be no reason for sites to use old-style CAs anymore. At that point, browsers can gradually drop support for them, only allowing DNSSEC-based certs, which don't have this impersonation problem. In the short term, HSTS should allow for individual sites to require that only certain public keys work for them (pinning), which works around it for now.

Again, this is how the DigiNotar compromise was actually caught in real life. An Iranian Chrome user reported that they were getting a hard failure when accessing Google sites, since the wrong cert was presented. Even though the cert was completely valid, Chrome blocked it because the correct public key was shipped with the browser. There's no reason this approach couldn't protect every site on the web that opted in.

Furthermore, in the almost universally used without-PFS mode, SSL certificates stored on a server are incredibly valuable to attackers: Capturing a site's certificate not only allows you to _undetectably_ impersonate the site for the duration of the cert (or until it's revoked), but it allows an attacker to decrypt all communications with that server _prior_ to the exploitation which they may have captured. So, as deployed, SSL does little to discourage the creation of billion-dollar ubiquitous surveillance systems, as even when it's used it's easily defeated ex post facto!

If you can get root access to the web server, sure. In that case, why not just take over the webserver process itself?

Moreover, often it is your browser talking to an intermediary "application service provider" rather than to the true far end of your communication. E.g. when you send an email on facebook, the other end of your communication is your friend— facebook is just a middle-man, no different from your ISP. In this very common model HTTPS offers nothing in the way of end-to-end security; it simply moves the vulnerability point around. In the same way that we don't consider our local WiFi WPA adequate to secure our internet traffic, we shouldn't consider HTTPS adequate.

In almost all cases, the user actually wants the site they're connecting to to see their data. For instance, Gmail lets you search your mail, it can filter it according to criteria you set, it can heuristically mark certain messages as important or spam, etc. Most users just do not want solutions like PGP, because the security-versus-convenience tradeoff tilts drastically toward convenience for them. So I don't rate this as a problem with HTTPS at all.

I could continue, but I think these points are enough to establish that the vulnerability of the CA model to compromise is just one problem out of a great many.

I disagree.

The biggest problem with HTTPS today is that it's not secure against determined attackers, such as governments, because of the countless SPOFs (every CA in existence). The way to address that is to support public key pinning in HSTS, which has been discussed a bunch and is likely to happen in the not-too-distant future, I hope. Chrome already supports such a feature (although currently only for certain Google sites) and it did actually work against the DigiNotar compromise.

The second-biggest problem with HTTPS today is that it's fragile and hard to set up. This is less tractable, but it's also less important. Relatively few targets are worth anyone's effort to MITM, and the ones that are can mostly handle the complexity. Certificates over DNSSEC will be a big step forward, because that will remove the cost, and then most of the remaining complexity can be automated away.

HTTPS does suffer from several major design flaws that have caused untold harm to the web and its users. However, it's not a fundamentally broken approach and real efforts are underway to fix some of its worst problems. I wish progress could be faster, but it is happening.

Certificates and "authorities"

Posted Sep 11, 2011 1:40 UTC (Sun) by foom (subscriber, #14868) [Link]

>> Capturing a site's certificate not only allows you to _undetectably_ impersonate the site for the duration of the cert (or until it's revoked), but it allows an attacker to decrypt all communications with that server _prior_ to the exploitation which they may have captured.

> If you can get root access to the web server, sure. In that case, why not just take over the webserver process itself?

The point is that taking over the webserver should not allow you to decrypt sessions that occurred *prior* to the takeover. Yet, because of the shoddy encryption most commonly used for SSL, that is exactly what you can do.

Certificates and "authorities"

Posted Sep 11, 2011 14:44 UTC (Sun) by Simetrical (guest, #53439) [Link]

I'm not familiar with SSL implementations to this level of detail, so I'll take your word for it. Regardless, this seems like only a minor additional level of compromise in the scheme of things. In most cases, such a breach will be pretty disastrous to start with, and lack of perfect forward secrecy doesn't make it that much worse. The fact that malicious governments can MITM sites like Gmail is an incomparably bigger issue.

Certificates and "authorities"

Posted Oct 18, 2011 20:54 UTC (Tue) by rich0 (guest, #55509) [Link]

The issue is that the SSL certificate's public key is used to encrypt the session key and transmit it to the server. That means that the corresponding private key can recover the session key (which is a requirement, since the server has to know the session key to talk to the browser).

If you capture the entire SSL session and later recover the key from the webserver, you can go back and decrypt the session.

Note that a CA breach doesn't help with this - the key is stored on the webserver and is used to generate a CSR - it never leaves the server in a good implementation. Apache at least lets you store it encrypted on-disk as well (which requires providing the password when you start the web server).

I'm not certain this would work, but I'd think the way that you could prevent this attack would be to have the webserver not actually give the client its certificate, but instead generate and sign a new certificate for each connection, and then give that certificate to the browser. It would still have a valid trust chain so it would be usable, but it would have a different private key. The webserver would then discard that private key after the session is complete so that nobody could go back and decrypt the session. Right now server certificates cannot be used to sign additional certificates, so that would need to change to make this work.
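The contrast rich0 describes can be illustrated with a toy sketch (deliberately insecure, with numbers far too small to be real, and simple modular arithmetic standing in for RSA): with static key transport, later recovery of the server's long-term secret decrypts a previously recorded session, while with ephemeral Diffie-Hellman the recording alone is useless once the per-session exponents are discarded.

```python
# Toy illustration (not real TLS) of forward secrecy. All numbers and
# "ciphers" here are stand-ins for illustration only.
import random

P, G = 2087, 5  # tiny public prime and generator

# --- RSA-style key transport (simplified): the client sends the session
# key protected by the server's long-term secret. Anyone who records the
# handshake and LATER learns that secret recovers the session key.
server_longterm_secret = 1234

def transport_encrypt(session_key, secret):
    return (session_key + secret) % P   # stand-in for RSA encryption

def transport_decrypt(blob, secret):
    return (blob - secret) % P

session_key = random.randrange(P)
recorded_blob = transport_encrypt(session_key, server_longterm_secret)
# Later compromise of the long-term secret decrypts the recording:
assert transport_decrypt(recorded_blob, server_longterm_secret) == session_key

# --- Ephemeral Diffie-Hellman: fresh exponents per session, then discarded.
a = random.randrange(2, P - 1)          # client's ephemeral secret
b = random.randrange(2, P - 1)          # server's ephemeral secret
A, B = pow(G, a, P), pow(G, b, P)       # only these go over the wire
shared = pow(B, a, P)
assert shared == pow(A, b, P)
# A recording contains only A and B; once a and b are thrown away, even
# the server's long-term secret does not help recover `shared`.
```

This is essentially what TLS cipher suites with (EC)DHE key exchange provide, which is why enabling them is the usual answer to the retroactive-decryption problem.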

New URI suggestion

Posted Sep 16, 2011 17:22 UTC (Fri) by borthner (guest, #4277) [Link]

May I suggest that httpe:// would be an appropriate scheme for encrypted but not authenticated traffic? This would solve a number of use cases. I get very tired of clicking allow on self-signed certificates for internal sites, but I'm too lazy (and overworked) to set up my own CA and import the CA into every browser on every computer I use. Browsers could take it as a given that httpe:// traffic would be allowed for any certificate, since the principal purpose would be to ensure encryption, rather than to ensure authentication. Sites where authentication of the server is critical would continue to use the https:// URI scheme.

New URI suggestion

Posted Sep 16, 2011 17:28 UTC (Fri) by jrn (subscriber, #64214) [Link]

Encryption without _some_ sort of authentication is useless. For example, after a DNS cache poisoning attack, the other end could be a spy who is forwarding your data to the actual intended recipient. If you are lucky enough to have a DNS setup that is immune to cache poisoning, then that's a form of authentication.

New URI suggestion

Posted Sep 16, 2011 22:14 UTC (Fri) by Simetrical (guest, #53439) [Link]

It's not actually useless. For one thing, it's possible to design a protocol where any MITM attack can be detected after the fact. tcpcrypt creates session ids in the course of setting up the encryption, with the property that they will match if and only if there was no MITM. Thus clients could conceivably store the session keys and compare them later over an authenticated channel.

The clients could also compare the session IDs in some ad hoc way at the application layer. This will detect a MITM if the attacker is just blindly intercepting the encryption without understanding the higher-level protocol. For instance, if web browsers supported something like tcpcrypt, the current client session ID could be exposed to JavaScript. Then obfuscated site-specific JavaScript code (complicated enough that it can't be automatically detected) could compare the client and server session IDs, and bail out if they mismatch. A targeted attack against the particular site could work around this, but something that just intercepts all HTTPE traffic blindly would be detected.

Additionally, you could design the encryption protocol so that the initial stages are indistinguishable from an encryption-with-authentication protocol. Using tcpcrypt again as an example, you can layer auth on top of it by just setting up encryption as usual and then sending the server's session id to the client, signed with the server's certified public key. Then the attacker has to decide whether to try a MITM attack before knowing whether it has any chance of working. If he tries it and it turns out there is auth, he's detected, guaranteed.
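The layering just described might look roughly like this toy sketch, where an HMAC under a hypothetical server key stands in for a real public-key signature over the tcpcrypt-style session ID (in reality the client would verify with the server's certified public key):

```python
# Toy sketch: tcpcrypt-style session IDs, with the server attesting to
# its ID after the channel is up. Honest endpoints compute the same ID;
# a MITM necessarily splits the connection into two sessions with two IDs.
import hashlib
import hmac
import os

SERVER_KEY = b"server-long-term-key"   # stand-in for the server's private key

def session_id(handshake_transcript: bytes) -> bytes:
    # tcpcrypt derives a session ID from the key-exchange transcript.
    return hashlib.sha256(handshake_transcript).digest()

def server_attest(sid: bytes) -> bytes:
    return hmac.new(SERVER_KEY, sid, hashlib.sha256).digest()

def client_checks(client_sid: bytes, attested_sid: bytes, tag: bytes) -> bool:
    # Valid attestation AND matching IDs => no one sat in the middle.
    return (hmac.compare_digest(tag, server_attest(attested_sid))
            and client_sid == attested_sid)

# No MITM: one shared transcript, IDs match.
t = os.urandom(32)
sid = session_id(t)
assert client_checks(sid, sid, server_attest(sid))

# MITM: the attacker terminates two separate sessions, so the ID the
# server attests to differs from the one the client computed -- detected.
sid_client_leg = session_id(os.urandom(32))
sid_server_leg = session_id(os.urandom(32))
assert not client_checks(sid_client_leg, sid_server_leg,
                         server_attest(sid_server_leg))
```

The key property is the one described above: the attacker must commit to the MITM before learning whether any attestation will follow, so against an authenticating peer the attack is detected, guaranteed.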

So it's not true that encryption without authentication is useless. It does nothing against an ideal attacker, so it gives no guarantees. But it can make it a lot easier to detect attackers who are careless, and can make it harder for even careful attackers to pull off indiscriminate attacks with no risk of detection.

But I'm not sure it would be any easier to get such an encryption scheme working in browsers than to fix HTTPS so every site can use it. If we had SNI support and DNSSEC stapling support in all browsers of consequence, and DNSSEC were readily available for all domains, then good tools could conceivably make HTTPS about as easy to set up and maintain as a hypothetical HTTPE.

New URI suggestion

Posted Oct 18, 2011 20:59 UTC (Tue) by rich0 (guest, #55509) [Link]

Agreed, and by forcing an active attack it makes detection and countermeasures possible.

Police could trace cookie theft to some coffee shop, then set up surveillance there. They'd browse to some site and wait for somebody to MITM their connection (they'd be running software that detects an IP for gmail/facebook/etc that isn't valid). Then they'd capture MAC addresses, camera images, and maybe even use direction-finding to figure out who the culprit is. They'd have probable cause to make an arrest and search a laptop to confirm it was used. Then they nail the culprit to the wall and publicize it. Pretty soon the level of casual hacking goes WAY down.

Iranian government involvement?

Posted Sep 8, 2011 2:50 UTC (Thu) by mbg (subscriber, #4940) [Link]

What is the evidence that there was anything "likely sponsored or run by the Iranian government"?

Iranian government involvement?

Posted Sep 8, 2011 4:37 UTC (Thu) by quotemstr (subscriber, #45331) [Link]

One of the forged certificates was for Balatarin, a site heavily used by Iranian dissidents (and run by a friend of mine). Iran has also used techniques like DPI HTTP-proxy detection and blanket prohibition of SSL connections, and has repeatedly blocked GMail, Facebook, and other popular social media sites. I wouldn't be the least bit surprised to learn that the IRI had an SSL MITM program as well.

Iranian government involvement?

Posted Sep 8, 2011 5:37 UTC (Thu) by dannyobrien (subscriber, #25583) [Link]

The most compelling evidence is the logs of revocation checks at DigiNotar for the fake certificates[1]. The logs show, Fox-IT says, that over 200K unique IPs made a check, almost all in Iran, suggesting they were served the fake certificate when visiting Google. Here's the graphical depiction of what it shows: http://www.youtube.com/watch?v=wZsWoSxxwVY

That implies an active MITM attack proxied over the majority of Iranian net space. While it seems pretty clear that a single determined independent hacker could have broken through Comodo's and DigiNotar's defences, rolling out this kind of pervasive infrastructural surveillance would require the complicity of multiple Iranian ISPs.

I think the best bet right now is what is now a depressingly common combination -- indie blackhats doing the penetrations, and state actors buying and deploying what they find.

[1] - Documented in the Fox-IT report here, which is short, damning, and well worth a read: http://www.rijksoverheid.nl/ministeries/bzk/documenten-en...

Iranian government involvement?

Posted Sep 8, 2011 9:03 UTC (Thu) by quotemstr (subscriber, #45331) [Link]

Thank you for linking to the report; it's as damning as you say. I'm actually surprised that the MITM attack was so brazen: I wonder whether more careful use of the forged certificates might have opened a longer window for more targeted surveillance. (Of course, such an attack may be ongoing, and I'd rate the likelihood of such a thing far higher than I would have three months ago.) If a complete and sustained CA compromise, a coverup, and a large-scale MITM attack don't lead to changes in how we allocate trust, it'll be hard to believe that anything else will.

Iranian government involvement?

Posted Sep 8, 2011 20:27 UTC (Thu) by dashesy (subscriber, #74652) [Link]

After the public unrest that followed the election fraud, the Revolutionary Guard bought the Iranian telecommunications company.
Apart from the economic motives, it was obvious they had other reasons: to orchestrate such a MITM attack, one has to control many ISPs and much of the communications infrastructure.
In addition, a few months ago the ministry of intelligence was boasting publicly about how it can read private emails!

Certificates and "authorities"

Posted Sep 8, 2011 8:05 UTC (Thu) by AlexHudson (subscriber, #41828) [Link]

I think the issue here is basically that most users simply don't want to know much about how the trust works: they just want to know who to trust.

I installed Convergence here a few days ago. I'm pretty techie but I don't really get it. I now see all https certs are verified by a local self-signed cert (boy, was that a surprise..) and I get that this is more like some kind of quorum system. I don't get who is going to run these various notaries for people to check against; I suppose the browser makers could run some instead of pinning but I suppose I don't see the commercial motivation for someone to run one well. Are we to rely on the charity of the likes of Google? Hmm.

I also don't really get how any of these can be explained to the man on the street. Fundamentally, the issue comes down to "how can you trust a party you've never met", and every solution involves some kind of third party/intermediary and gets increasingly more complex / convoluted as various holes get patched up.

Certificates and "authorities"

Posted Sep 8, 2011 11:42 UTC (Thu) by bboissin (subscriber, #29506) [Link]

Here is the comment from Adam Langley (one of the Chrome engineers) about Convergence: http://www.imperialviolet.org/2011/09/07/convergence.html

Certificates and "authorities"

Posted Sep 8, 2011 15:05 UTC (Thu) by tpo (subscriber, #25713) [Link]

I think the article is by Adam Langley and not Ben Laurie:

$ wget -q http://www.imperialviolet.org/iv-rss.xml -O -|grep name|head -1
<name>Adam Langley</name>

Certificates and "authorities"

Posted Sep 8, 2011 15:08 UTC (Thu) by bboissin (subscriber, #29506) [Link]

Indeed, my bad. Sorry Adam.

Firefox addons

Posted Sep 8, 2011 8:41 UTC (Thu) by Cato (subscriber, #7643) [Link]

One useful Firefox addon is Certificate Patrol: https://addons.mozilla.org/en-US/firefox/addon/certificat... - flags any certificate changes in an alert bar, so you are more aware of what's going on. Helpful when combined with Perspectives addon mentioned in article: https://addons.mozilla.org/en-US/firefox/addon/perspectives/
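The core of what such an addon does can be sketched as trust-on-first-use fingerprint tracking: remember the certificate fingerprint seen for each host and warn when it changes. The hostnames, byte strings, and in-memory store below are purely illustrative.

```python
# Rough sketch of Certificate Patrol's idea: per-host certificate pinning
# by first use, with an alert on any subsequent change.
import hashlib

store: dict[str, str] = {}   # host -> last-seen certificate fingerprint

def check_cert(host: str, cert_der: bytes) -> str:
    fp = hashlib.sha256(cert_der).hexdigest()
    previous = store.get(host)
    store[host] = fp
    if previous is None:
        return "first-seen"            # nothing to compare against yet
    return "unchanged" if previous == fp else "CHANGED -- inspect!"

assert check_cert("mail.google.com", b"legit-der") == "first-seen"
assert check_cert("mail.google.com", b"legit-der") == "unchanged"
# A forged cert presented by a MITM would show up as a change:
assert check_cert("mail.google.com", b"forged-der").startswith("CHANGED")
```

The weakness, of course, is the first connection (and legitimate certificate rollovers also trigger the alert), which is why it works best combined with a consensus tool like Perspectives.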

Certificates and "authorities"

Posted Sep 8, 2011 10:01 UTC (Thu) by etienne (subscriber, #25256) [Link]

Considering that most people in Iran may be using Windows XP, and probably not an upgradeable copy (I am not sure of these suppositions), this MITM attack is effective and will probably stay so for the foreseeable future.
Maybe it is time to tell them about Linux...

Certificates and "authorities"

Posted Sep 8, 2011 11:02 UTC (Thu) by lkundrak (subscriber, #43452) [Link]

We could use some MITM to inject advertisements.

Certificates and "authorities"

Posted Sep 8, 2011 11:31 UTC (Thu) by ttonino (subscriber, #4073) [Link]

They know about Firefox. And also, falsified Windows Update and Firefox certificates seem to have been manufactured.

Certificates and "authorities"

Posted Sep 8, 2011 20:36 UTC (Thu) by dashesy (subscriber, #74652) [Link]

The very first thing I ask all my folks in Iran to do is to delete the Internet Explorer shortcuts (it is hard to uninstall completely, being part of the OS).
I used to ask them to install Firefox and keep it updated; after this incident and the clear advantage of built-in certificate pinning, I changed my suggestion to Chrome instead.
I am sure, however, that the ratio of users running Linux versus XP/7 is the same as everywhere else, so maybe it is time to tell everyone about Linux :)

Certificates and "authorities"

Posted Sep 9, 2011 18:18 UTC (Fri) by rickmoen (guest, #6943) [Link]

So, instead of recommending the leading open source Web browser, you're recommending a proprietary one issued by the company that recently bought DoubleClick for US $3.1B?

Rick Moen
rick@linuxmafia.com

Certificates and "authorities"

Posted Sep 9, 2011 18:41 UTC (Fri) by nix (subscriber, #2304) [Link]

Chromium, then. It has pinned certificates too.

Certificates and "authorities"

Posted Sep 8, 2011 13:58 UTC (Thu) by Nelson (subscriber, #21712) [Link]

Why is it forlorn to hope CAs will keep their systems secured? With the current business model, security is what they charge for. Not just that, you pay premiums for increased security (if you want the color bar or different authentication strengths, they ask for more money!). The missing piece is the feedback loop: Comodo is big and widely used, so when they screw up, nothing happens. DigiNotar isn't, so when they screw up they get the internet death penalty. Moreover, the cost of the death penalty doesn't simply hurt DigiNotar, it hurts their customers. This business model doesn't work. That's the problem. The protocols work; I even think the browser warnings work. Maybe the solution is to have a third party that assembles CAs, distributes them, and runs OCSP services, with end users paying a fee like with anti-virus or something.

Certificates and "authorities"

Posted Sep 8, 2011 15:07 UTC (Thu) by rgmoore (subscriber, #75) [Link]

> Why is it forlorn to hope CAs will keep their systems secured?

Even if they manage to keep their systems technically tight as a drum, they can't escape social and legal pressure. Imagine, just for example, what would have happened if it had been the Dutch government that wanted to issue false certificates rather than the Iranian government. They wouldn't have needed to break into DigiNotar; they could just walk over and demand that DigiNotar issue them the false certificates. Do you really think the official Dutch CA is going to turn down a government request for a false Cert, especially if it's presented as being for some important and legitimate government purpose like tracking thieves or terrorists? Do you think any of the few CAs that Google uses to sign its official certificates would be able to escape from pressure from their national governments?

And it's not just a question of that kind of legal authority. What happens when organized crime decides that it's very valuable to be able to issue false certificates? There are all kinds of ways they could do it: using a mole to infiltrate an existing CA, blackmailing a CA employee into issuing them fake certs, or even setting up their own CA as a legitimate enterprise and sneaking out a few fake certs once in a while when their business needs them.

This is an inherent problem with the trust model. If you place a lot of trust in a specific authority, you greatly increase the value of suborning that authority. People who have skill at suborning authorities will be able to take advantage.

Certificates and "authorities"

Posted Sep 8, 2011 15:50 UTC (Thu) by mike.cloaked (subscriber, #63120) [Link]

Perhaps some accelerated effort could usefully be directed towards securing DNS? Is it not the case that secure DNS would sidestep the majority of the problems resulting from this CA breach? After all, if browsers could only be pointed at the "official" website instead of a fraudulent one, the user would not need to check for fraudulent certs in the first place.

How are we doing generally in bringing in DNSSEC or a more advanced version of the same idea?

Certificates and "authorities"

Posted Sep 8, 2011 22:27 UTC (Thu) by tialaramex (subscriber, #21167) [Link]

DNSSEC is deployed. The root is signed, many major TLD registries are equipped for DNSSEC. However, registrars are mostly in a cut-throat price war. The customer service overhead of teaching customers about DNSSEC isn't paid for by the dubious benefits of offering it. So there's an excellent chance that if you have a domain in a popular TLD today via a registrar, there's no way to get DNSSEC working with that domain without changing registrar.

This will probably change gradually, with better tools and increasing customer awareness. Today example.com, and fedoraproject.org - tomorrow Google and your banks, some day your blog.

On the client things are similarly slow moving. Enthusiasts have working DNSSEC in their client software today, but the average person does not. In the medium term the goal is that most users will go via their ISP's DNS server, and the queries performed by that server will be secured with DNSSEC, but obviously if your adversary is the government, the ISP is probably compromised anyway, so this doesn't help you.

Technically it's a done deal. Typing "ssh foo.bar.baz" and knowing you're only trusting bar, baz and the root to identify this "foo.bar.baz" machine works right now, on the public Internet (though obviously not for that made up address). But translating that into an ordinary user typing "www.facebook.com" into their browser and definitely getting the privacy-infringing social network site, not an Iranian impostor, may be years off even if we get agreement that it's desirable.

Certificates and "authorities"

Posted Sep 9, 2011 0:07 UTC (Fri) by mtaht (✭ supporter ✭, #11087) [Link]

Getting your DNS signed with DNSSEC has become easier and easier with the more current versions of BIND.

In fact, both bufferbloat.net (running on an x86_64 box) and http://jupiter.lab.bufferbloat.net (running on a MIPS-based CeroWrt box) are now signed, and the overhead seems non-existent.

Comcast is running a set of DNSSEC-enabled DNS servers now as well, which work great as forwarders:

dns.comcast.net

There is a tool for Firefox that can validate whether your DNS is signed, here:

https://addons.mozilla.org/en-US/firefox/addon/dnssec-val...

Perhaps one day this could be more effective than CAs.
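For the curious: a validating resolver signals successful DNSSEC validation to its clients via the AD (Authenticated Data) bit in the DNS header (RFC 4035), which is what such tools ultimately look for. A minimal sketch of checking that bit, using hand-built sample headers rather than live queries:

```python
# Sketch: detect whether a DNS response was marked as DNSSEC-validated.
# The AD bit is bit 0x0020 of the 16-bit flags field in the DNS header.
# The sample packets below are hand-built 12-byte headers, no network I/O.
import struct

def ad_bit_set(dns_message: bytes) -> bool:
    # DNS header: ID(16 bits), FLAGS(16 bits), then four 16-bit counts.
    (flags,) = struct.unpack("!H", dns_message[2:4])
    return bool(flags & 0x0020)

# QR|RD|RA|AD (0x81A0) vs QR|RD|RA only (0x8180):
validated   = struct.pack("!HHHHHH", 0x1234, 0x81A0, 1, 1, 0, 0)
unvalidated = struct.pack("!HHHHHH", 0x1234, 0x8180, 1, 1, 0, 0)

assert ad_bit_set(validated)
assert not ad_bit_set(unvalidated)
```

Note the caveat from the comment above: the AD bit only says the *resolver* validated; unless the path from resolver to client is trusted, the client still has to validate for itself.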

Certificates and "authorities"

Posted Sep 8, 2011 17:41 UTC (Thu) by Nelson (subscriber, #21712) [Link]

Exactly, or maybe just offering the CA more money to produce a fake certificate. Not just that, but should a CA be compromised, whether by hacking, political pressure, or otherwise, they have no incentive to disclose that fact. In fact they have every reason not to.

The solution would be to establish trust with maybe a half dozen CAs in different jurisdictions, or at least that would be a solution to some of those problems, but it's cost prohibitive with the current business model.

This isn't a problem with the technology...

Certificates and "authorities"

Posted Sep 9, 2011 0:22 UTC (Fri) by gerv (subscriber, #3376) [Link]

"Do you really think the official Dutch CA is going to turn down a government request for a false Cert, especially if it's presented as being for some important and legitimate government purpose like tracking thieves or terrorists?"

Perhaps not, but everyone who gets MITMed by it gets sent a copy of the certificate, which is non-repudiable evidence of what the CA did. Publish the cert, and the CA's untrustworthiness is exposed for all to see. Add a few tools so that people are more likely to notice this, and suddenly it becomes a very risky thing, business-wise, for a CA to consent to do.

And if a government blows up all the CAs in its jurisdiction like this (and believe me, CAs 2-N will flee when they see what happened to CA 1) then the attack no longer works for them.

Certificates and "authorities"

Posted Sep 9, 2011 1:12 UTC (Fri) by Nelson (subscriber, #21712) [Link]

Unless the government request is accompanied by piles of cash, in which case other CAs might want in on the action.

Certificates and "authorities"

Posted Sep 8, 2011 16:28 UTC (Thu) by karim (subscriber, #114) [Link]

Someone must've suggested this already, but why not just create a "distributed" cloud service that caches certs as seen by users, along with their frequency and geographic location? Browsers could then have a plugin that connects to that cloud to compare the certs they're getting with those already cached. Surely if 90% of Iranians are seeing one cert and 90% of users elsewhere are seeing something else, then there's an issue with that cert. Obviously the issue then is making sure that the info you're getting from that service is accurate... but the point is that the system would gain resilience through decentralization (vs. the CAs, which are centralized).

Not a be-all and end-all solution, but at least something that can be layered on top of what we have today and that provides an extra barrier of sorts.
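The consensus check described above could be sketched like this; the fingerprints, report counts, and the 10% threshold are all invented for illustration:

```python
# Toy sketch: flag a certificate that only a small minority of vantage
# points report seeing, Perspectives/Convergence style.
import hashlib
from collections import Counter

def fingerprint(cert_der: bytes) -> str:
    return hashlib.sha256(cert_der).hexdigest()

def looks_suspicious(my_cert: bytes, observed: list[str],
                     minority_threshold: float = 0.1) -> bool:
    # Suspicious if the cert we were handed is rare among global reports.
    if not observed:
        return True                     # no data: fail closed
    counts = Counter(observed)
    share = counts[fingerprint(my_cert)] / len(observed)
    return share < minority_threshold

genuine = b"genuine-google-cert-der"    # stand-ins for DER-encoded certs
forged  = b"diginotar-forged-cert-der"

# 95% of vantage points see the genuine cert, 5% (one region) the forged one.
reports = [fingerprint(genuine)] * 95 + [fingerprint(forged)] * 5

assert not looks_suspicious(genuine, reports)
assert looks_suspicious(forged, reports)
```

As the replies note, this just moves the trust question to the reporting service, but it does layer an extra barrier on top of the CA system, and (per Chocrates below) a region with missing or manufactured data becomes conspicuous in itself.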

Certificates and "authorities"

Posted Sep 8, 2011 16:53 UTC (Thu) by pspinler (subscriber, #2922) [Link]

That sounds like just pushing the problem back one level. A large scale determined MITM attack like this would just add the suggested cloud to the dns/cert/service list they suborn.

-- Pat

Certificates and "authorities"

Posted Sep 8, 2011 19:18 UTC (Thu) by karim (subscriber, #114) [Link]

But, but, but ... isn't this the industry where there isn't a single problem you can't solve by adding another layer?!?!? ;)

Seriously, though, I knew this would come up and you're right. Which is why we'd get a "here's a solution"/"that's not enough"-rinse-wash-repeat situation until something would come out of it (or not.) It's just the basis of an idea which I totally agree would need much more work. The benefit, though, is to leverage what's already there.

FWIW

Certificates and "authorities"

Posted Sep 12, 2011 17:21 UTC (Mon) by Chocrates (guest, #67068) [Link]

Then wouldn't it be noticeable that large geographical regions have no data? Or manufactured data?

Certificates and "authorities"

Posted Sep 14, 2011 15:14 UTC (Wed) by karim (subscriber, #114) [Link]

That's brilliant. Indeed, it seems that would be an interesting side effect.

Certificates and "authorities"

Posted Sep 8, 2011 21:35 UTC (Thu) by dashesy (subscriber, #74652) [Link]

Very interesting report. The mixture of advanced and amateurish skills, even the layout of the comment section, the varying English proficiency, and the pastebin posts suggest that there are at least two people involved in this, perhaps with different agendas, even living in different countries:
a black-hat hacker who enjoys the protection of a government agent, one who is even willing to deploy his hard-earned CA certificates in the wild and make things more exciting.

Certificates and "authorities"

Posted Sep 9, 2011 20:47 UTC (Fri) by ortalo (subscriber, #4654) [Link]

Just my 0.02 (anyway I suppose the guys in power don't really bother), but if certificate authorities want to survive, they will not only need to improve their certification process (something their *customers* may not be truly ready to pay for, btw) but also to fix the entire X.509 revocation model. Because maybe it is time to realize that... well, it does not work.
Personally, I have no idea how to do that. However, I've been repeating for years to students that issuing access rights is easy and that all the difficulty is in *removing* rights; so maybe I'm just too bored to figure out something clever and someone else will find it.
In the meantime, I'll turn to other things (whether PGP or Convergence).

Multiply-signed certs

Posted Sep 11, 2011 0:42 UTC (Sun) by dskoll (subscriber, #1630) [Link]

One idea would be for Mozilla et al. to compile a list of "independent" CAs, that is, CAs that are independent businesses and not subsidiaries of one another. Then users could trust only certs that are signed by N > 1 independent CAs, where users could choose N based on their circumstances.

This would, alas, make life more expensive and more complicated for Web site owners, but it means that hackers would have to compromise N CAs instead of 1 CA to perform a MITM attack. And high-value targets like Google, Paypal, banks, eBay, etc. can surely afford certificates signed by 4 or 5 independent CAs.
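A sketch of the N-independent-signers policy described above, with HMACs standing in for CA signatures and all CA names invented; subsidiaries are collapsed into "independence groups" so related CAs can't be counted twice:

```python
# Toy sketch: accept a certificate only when at least N mutually
# independent trusted CAs have signed it.
import hashlib
import hmac

CA_KEYS = {                       # trusted roots (illustrative)
    "AlphaCA": b"k1", "BetaCA": b"k2", "GammaCA": b"k3",
}
# GammaCA is a subsidiary of AlphaCA, so it shares an independence group.
INDEPENDENCE_GROUP = {"AlphaCA": "alpha", "BetaCA": "beta", "GammaCA": "alpha"}

def ca_sign(ca: str, cert: bytes) -> bytes:
    return hmac.new(CA_KEYS[ca], cert, hashlib.sha256).digest()

def trusted(cert: bytes, signatures: dict[str, bytes], n: int) -> bool:
    groups = set()
    for ca, sig in signatures.items():
        if ca in CA_KEYS and hmac.compare_digest(sig, ca_sign(ca, cert)):
            groups.add(INDEPENDENCE_GROUP[ca])
    return len(groups) >= n

cert = b"CN=www.example.bank"
sigs = {ca: ca_sign(ca, cert) for ca in ("AlphaCA", "BetaCA")}
assert trusted(cert, sigs, n=2)

# One compromised CA alone no longer suffices:
assert not trusted(cert, {"AlphaCA": ca_sign("AlphaCA", cert)}, n=2)
# Nor do two CAs from the same corporate family:
fam = {ca: ca_sign(ca, cert) for ca in ("AlphaCA", "GammaCA")}
assert not trusted(cert, fam, n=2)
```

Real X.509 has no native way to express multiple signatures over one certificate, so deploying this would require either a format change or parallel certificates, which is part of the expense the comment alludes to.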

Certificates and "authorities"

Posted Sep 16, 2011 1:41 UTC (Fri) by Hausvib6 (guest, #70606) [Link]

I hope to see this happen to one or more too-big-to-fail CAs (Verisign class).

Certificates and "authorities"

Posted Sep 16, 2011 19:38 UTC (Fri) by mpr22 (subscriber, #60784) [Link]

That sounds like "be careful what you wish for" territory to me...

Copyright © 2011, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds