
SSL man-in-the-middle attacks

By Jake Edge
December 24, 2008

A while back, we looked at the new Firefox 3 warnings for self-signed and expired SSL certificates. As annoying as some found them, they certainly increased the visibility of "invalid" certificates. Such certificates can enable man-in-the-middle attacks, which is what led Mozilla to issue such eye-opening warnings. More recently, Eddy Nigg of Startcom—issuer of free SSL certificates—found another way to mount a man-in-the-middle attack without setting off any of the new warnings.

What Nigg found was that he could get a perfectly valid certificate for a domain he did not control: in this case mozilla.com. He could then masquerade as the secure Mozilla site with impunity; any browsers that landed there would verify the certificate as belonging to mozilla.com. He did it through a Comodo reseller with no questions asked: "Five minutes later I was in the possession of a legitimate certificate issued to mozilla.com – no questions asked – no verification checks done – no control validation – no subscriber agreement presented, nothing."

That is clearly a bug in the verification process, but it is completely out of the control of the browser. The browser must trust some set of key signing authorities (i.e. Certificate Authorities or CAs), but has no way to control how well or poorly they actually vet the keys they sign—or their downstream resellers sign. We saw the same potential problem in a slightly different guise with "Extended Validation" certificates back in 2006. It all comes down to trusting CAs.

Sometime after Nigg's story hit Slashdot, Comodo revoked the certificate, which did cause Firefox to put up an error and disallow the connection. One wonders how many bad certificates have been issued but not revoked because a phisher or other scammer received them. One would think those folks would be less likely to publicly announce what they had done.

Bringing attention to the problem will likely help, but there are just too many ways to create bad SSL certificates for those who really want them—bribing CA employees if nothing else. Another useful outcome is that Richard Bejtlich got interested in just how the revocation process works. He collected packet data from accessing Nigg's certificate after it had been revoked, which gives a look inside the Online Certificate Status Protocol (OCSP).

OCSP is designed to do just what it did: cause a bad certificate to fail when verified by the browser. Nigg's certificate listed an OCSP server that should be consulted. Because that information is signed by the CA, it can't be tampered with. So long as the browser makes the OCSP check, certificates can be revoked in this manner—provided the CA is aware that revocation is needed.
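To make the mechanism concrete, here is a sketch of how a client builds the OCSP request a browser sends when checking revocation. It uses the third-party Python "cryptography" package; the CA name, key sizes, and serial numbers below are invented for the example, and a real client would of course load existing certificates rather than generate them on the fly:

```python
import datetime

from cryptography import x509
from cryptography.x509 import ocsp
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa

def make_cert(subject_cn, issuer_cn, signing_key, public_key, serial):
    """Build a minimal X.509 certificate for demonstration purposes."""
    start = datetime.datetime(2008, 12, 24)
    return (
        x509.CertificateBuilder()
        .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, subject_cn)]))
        .issuer_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, issuer_cn)]))
        .public_key(public_key)
        .serial_number(serial)
        .not_valid_before(start)
        .not_valid_after(start + datetime.timedelta(days=365))
        .sign(signing_key, hashes.SHA256())
    )

# A toy CA and a certificate it issued (stand-ins for Comodo and mozilla.com)
ca_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
leaf_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
ca_cert = make_cert("Example CA", "Example CA", ca_key, ca_key.public_key(), 1)
leaf_cert = make_cert("mozilla.com", "Example CA", ca_key, leaf_key.public_key(), 1234)

# The OCSP request identifies the certificate to the responder by the
# issuer's name/key hashes plus the certificate's serial number
req = (
    ocsp.OCSPRequestBuilder()
    .add_certificate(leaf_cert, ca_cert, hashes.SHA1())
    .build()
)
print(req.serial_number)  # the serial the responder is asked about: 1234
```

Note that the request names exactly which certificate the client is checking; the responder's signed reply says whether that serial number has been revoked.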

Public key cryptography—the basis of SSL and many other encryption schemes—is an amazing method for doing encryption, but it does suffer from a major shortcoming: key exchange. For relatively simple situations, where both parties know each other and have a way to securely exchange keys, it works well. When trying to handle other kinds of communications, either a "web of trust" (a la PGP and GPG) or some kind of trusted authority is required. When those break down, man-in-the-middle and other scams are possible.
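The key-exchange weakness can be made concrete with a toy example. The sketch below runs an unauthenticated Diffie-Hellman exchange with deliberately tiny, insecure parameters (real deployments use 2048-bit primes); an active attacker who substitutes her own public value ends up sharing a key with each side, while the two victims believe they share a key with each other:

```python
# Toy Diffie-Hellman: public parameters known to everyone
p, g = 23, 5

a = 6             # Alice's secret exponent
A = pow(g, a, p)  # Alice -> Bob, but intercepted by Mallory
b = 15            # Bob's secret exponent
B = pow(g, b, p)  # Bob -> Alice, but intercepted by Mallory

m = 13            # Mallory's secret exponent
M = pow(g, m, p)  # Mallory substitutes her own value in both directions

# Each victim unknowingly derives a key shared with Mallory
alice_key = pow(M, a, p)
bob_key = pow(M, b, p)
mallory_with_alice = pow(A, m, p)
mallory_with_bob = pow(B, m, p)

assert alice_key == mallory_with_alice  # Mallory can read Alice's traffic
assert bob_key == mallory_with_bob      # ... and Bob's traffic
assert alice_key != bob_key             # the "shared" secret was never shared
```

Nothing in the exchange tells Alice whose public value she received, which is exactly the gap that certificates, a web of trust, or pre-shared keys are meant to fill.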


Index entries for this article
Security: Secure Sockets Layer (SSL)/Certificates



SSL man-in-the-middle attacks

Posted Dec 25, 2008 13:10 UTC (Thu) by jamesh (guest, #1159) [Link] (1 responses)

Surely the acronym should read OCSP rather than OSCP?

SSL man-in-the-middle attacks

Posted Dec 25, 2008 15:22 UTC (Thu) by jake (editor, #205) [Link]

> Surely the acronym should read OCSP rather than OSCP?

Gah! Indeed, fixed now, thanks ...

jake

please move this stuff into DNS

Posted Dec 25, 2008 13:30 UTC (Thu) by weasel (subscriber, #6031) [Link] (10 responses)

Clearly we need to get rid of CAs at least for the most basic certificate uses - they might be useful with the Extended Validation thing again, I don't know.

Once we have DNSSEC (it'll be there RSN, with the Hurd and DNF supporting it out of the box) we should just put the cert (fingerprint) into DNS and be done with it.

please move this stuff into DNS

Posted Dec 25, 2008 14:11 UTC (Thu) by TRS-80 (guest, #1804) [Link] (9 responses)

I think the problem with putting the cert fingerprint into DNS is the application doesn't know if the response was secured by DNSSEC or not.

To get rid of CAs for basic cert uses (that is, protecting passwords from being sent in the clear), Mozilla should be implementing and advocating RFC 5054, TLS/SRP. However, NSS (a Mozilla subproject) won't add it until Mozilla does the UI work, while Mozilla wants to do the UI work as extensions, so it needs NSS done first.

please move this stuff into DNS

Posted Dec 25, 2008 20:15 UTC (Thu) by quotemstr (subscriber, #45331) [Link] (4 responses)

Er, if I'm reading this right, it's just an HTTP-authentication-style password exchange system, built out of TLS primitives.

For vain reasons, it'll never be used: web designers like being able to specify how their login boxes look.

please move this stuff into DNS

Posted Dec 26, 2008 2:23 UTC (Fri) by TRS-80 (guest, #1804) [Link] (3 responses)

Well, it's not just applicable to HTTP - you can use it for IMAP and SMTP authentication too. How many people use a self-signed cert for those, and are going to be bitten when Thunderbird 3 comes out with the same anti-self-signed UI as Firefox?

Anyway, for web designers HTML 5 offers a way to have HTML login forms for HTTP auth.

please move this stuff into DNS

Posted Dec 26, 2008 3:18 UTC (Fri) by drag (guest, #31333) [Link] (2 responses)

You know that creating your own signing certificate is not significantly more difficult than making a self-signed one... I mean, I started off with self-signed certificates for mucking around with things, but figured that since I am worrying about encryption anyway I might as well do it myself.

It just strikes me as a bit lazy. Not a lot lazy, as the SSL/TLS stuff is difficult to get right. But for as long as this stuff has been out it should be fairly simple to do.

please move this stuff into DNS

Posted Dec 26, 2008 3:36 UTC (Fri) by TRS-80 (guest, #1804) [Link] (1 responses)

The point isn't how easy/lazy it is, the point is to avoid having to trust (now apparently) untrustworthy CAs. Maintaining your own CA (is that what you mean by signing certificate?) might be OK if you're the only user, but asking other people to install your CA is a right pain, and then you have to worry about keeping the CA secure, plus all the regular PKIX hassles of updating certs etc.

Security problems with CAs

Posted Dec 26, 2008 13:21 UTC (Fri) by vonbrand (subscriber, #4458) [Link]

The sad fact is that real checking is expensive, and CAs aren't in the business of "wasting" money only to turn a paying customer away... plus certificates are the same whether they are meant to protect (probably not very interesting) email from prying eyes, commercial transactions in the range of a few tens of dollars, or multi-million-dollar movements. The association of the "personal" certificate with all sorts of identifying data makes the planned uses of those a privacy nightmare. The whole concept is deeply flawed. For an in-depth discussion of the current issues, look at Peter Gutmann's PKI tutorial (a large PDF presentation).

please move this stuff into DNS

Posted Dec 26, 2008 8:27 UTC (Fri) by weasel (subscriber, #6031) [Link] (3 responses)

> I think the problem with putting the cert fingerprint into DNS is the application doesn't know if the response was secured by DNSSEC or not.

Unfortunately the article you link to only states the same fact as you, and does not even try to give an explanation, reason or argument.

At least ssh appears to be able to figure out whether information it gets from DNS is secure. It does that by checking the AD bit in the DNS response (see dns.c in its source and the VerifyHostKeyDNS entry in the ssh_config manpage).

please move this stuff into DNS

Posted Dec 28, 2008 23:40 UTC (Sun) by jamesh (guest, #1159) [Link] (2 responses)

Unless I am mistaken, that code is only testing that the DNS server OpenSSH uses thinks that the DNS record is secure. It doesn't actually do any verification itself (following the signature chain all the way back to the DNS root).

If I set up a public wifi network, I could easily provide a DNS server that said every record was secure and use DHCP options to get machines that connect to use that server. How would OpenSSH be able to tell the difference between this network and a secure network where the responses from the DNS server can be trusted?

So there seem to be real problems with applications trusting DNSSEC results given the types of networks people connect to these days ...

please move this stuff into DNS

Posted Dec 30, 2008 14:40 UTC (Tue) by tialaramex (subscriber, #21167) [Link] (1 responses)

You could provide such a DNS server, but you can't force me to trust it.

On a laptop I can choose to run a local DNS server, which implements DNSSEC and (as soon as the root is signed) get a complete end-to-end chain. Perhaps you don't know anybody who does this today, and perhaps in five years you won't know anybody who doesn't.

On a moderately secure wired LAN (or suitably protected wireless one) I can provide a local DNS server and sacrifice the last hop security for improved performance from the shared cache.

What's much nicer about using DNSSEC for this is then all I'm relying on is the immediately evident hierarchy, thus...

physics.soton.ac.uk relies on the root, then the UK government and its DNS operator Nominet, the JaNET (UK academic network) management & operator, and the University (of Southampton)'s management and systems team. This makes sense - it's almost the same hierarchy that issued the machine with an IP address.

Whereas with current CA-based SSL physics.soton.ac.uk may well rely on the integrity of a cheap reseller from Taiwan, who acts as a front for an outfit in California, which in fact subcontracts the technical work to a small business in Finland run by a 14 year old girl. But I can't tell any of that, all I get is a picture of a padlock.

DNSSEC can be leveraged to deliver secure-by-default to the web, something which I think would be more revolutionary than most people realise.

please move this stuff into DNS

Posted Dec 31, 2008 5:37 UTC (Wed) by jamesh (guest, #1159) [Link]

I am sure that you are smart enough not to enable the VerifyHostKeyDNS option in ssh for any machine that uses an untrusted DNS resolver. But surely you understand why the option is disabled by default, right?

Until we get to the point where people get a secure DNS resolver installed by default, it doesn't make sense for application developers to trust the DNS response by default. Relying on a pre-shared public key gives the application much better assurance (even if this assurance is weaker than what they'd get from a properly verified DNSSEC response).

Perhaps if an operating system installed a DNS resolver that performed the necessary checks by default, it would make sense for applications to trust the response flags. But until that point, applications are better off using some other trust mechanism.

SSL man-in-the-middle attacks

Posted Dec 26, 2008 17:17 UTC (Fri) by dps (guest, #5725) [Link] (1 responses)

What no browser implements, AFAIK, is automagic display of who a valid certificate authenticates. I could register a domain name and get an SSL certificate. Only those suspicious enough to check the certificate would notice the authenticated domain was not what the HTML indicated.

Maybe we need a separate list of bad certificates, not controlled by any CA, that browsers could check. An online "sting" site might be a good idea too.

Just in case anyone is wondering {phish,phishing}.{org,com,co.uk,org,uk} are all registered already. I am not associated with any of those sites.

SSL man-in-the-middle attacks

Posted Dec 29, 2008 10:13 UTC (Mon) by TRS-80 (guest, #1804) [Link]

> What no browser implements, AFAIK, is automagic display of who a valid certificate authenticates. I could register a domain name and get an SSL certificate. Only those suspicious enough to check the certificate would notice the authenticated domain was not what the HTML indicated.

Extended Validation (EV) certificates are supposed to solve this - the browser displays the registered company name in the UI (examples in IE, FF and Safari).

SSL man-in-the-middle attacks

Posted Dec 26, 2008 22:25 UTC (Fri) by james-mathiesen (guest, #50470) [Link] (3 responses)

hmm... does the revocation protocol leak a lot of information about online activities to 3rd parties? ip address 1.2.3.4 apparently banks at xxx, shops at amazon, etc...

SSL man-in-the-middle attacks

Posted Dec 27, 2008 17:45 UTC (Sat) by hmh (subscriber, #3838) [Link] (2 responses)

It depends.

The entire revocation list is downloaded and stored for further reference.

The URL to the revocation list is not in the certificate, but in the issuer certificate from the CA, so the information leak is very limited on a normal certificate from a normal CA.

SSL man-in-the-middle attacks

Posted Dec 27, 2008 19:35 UTC (Sat) by hmh (subscriber, #3838) [Link] (1 responses)

Never mind. This is incorrect. Yes, you disclose information: OCSP wants to be lightweight, so you tell the server just the certificates you're interested in.

That teaches me to re-check my facts before posting...

SSL man-in-the-middle attacks

Posted Dec 28, 2008 2:00 UTC (Sun) by james-mathiesen (guest, #50470) [Link]

Thanks for checking. I was hoping I had missed something. :(

SSL man-in-the-middle attacks and trust

Posted Dec 29, 2008 21:28 UTC (Mon) by clugstj (subscriber, #4020) [Link]

"but it does suffer from a major shortcoming: key exchange"

I can't think of a system that wouldn't suffer from this "shortcoming". Any system like this requires that someone/thing be "trusted".

SSL man-in-the-middle attacks

Posted Dec 30, 2008 4:52 UTC (Tue) by jwb (guest, #15467) [Link]

Mozilla.org has a lengthy explanation on their web site about how a root cert is allowed into the distribution. I assume that now that it has been proven that this vendor does not meet the standards of the Mozilla.org policy, their key will be removed from the distribution in the next update.

CAcert vs. commercial CAs

Posted Dec 30, 2008 12:29 UTC (Tue) by angdraug (subscriber, #7487) [Link]

I think the most important observation in this story is that CAcert.org, the free CA that Mozilla refuses to support in their browsers, implements a better vetting process than many of the commercial CAs supported by Mozilla since days of yore.

Not that I'm surprised about the commercial CAs (I strongly suspect that Comodo isn't the only one with a flaky process): their primary purpose is making money, so the choice between charging you for another certificate and turning you away by means of a strong vetting process is really a no-brainer. What never ceases to amaze me is that Mozilla, a supposedly free and security-conscious project, continually refuses to support a fellow free, security-focused project. Kind of proves my theory that money is bad for free software.


Copyright © 2008, Eklektix, Inc.
This article may be redistributed under the terms of the Creative Commons CC BY-SA 4.0 license
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds