Updating software every time a CA blows it is crazy.
Fraudulent *.google.com certificate issued
Posted Aug 30, 2011 13:36 UTC (Tue) by cesarb (subscriber, #6266)
Posted Aug 30, 2011 14:13 UTC (Tue) by jg (subscriber, #17537)
Posted Aug 30, 2011 15:10 UTC (Tue) by butlerm (subscriber, #13312)
Posted Aug 30, 2011 15:45 UTC (Tue) by dkg (subscriber, #55359)
The easy way to remedy most of this problem is to drop the use of CA issued certificates for domain validation and use DNSSEC validated certificates instead.
Sure, DANE is a decent way to ensure that malicious CAs are out of the loop, so they can't be targeted by governments or corporations who want to impersonate or replace an existing presence on the 'net. DANE does this by placing much more reliance on DNS itself.
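Concretely, what DANE publishes is a TLSA record binding a name to a certificate: with certificate usage 3 (DANE-EE), selector 0 (full certificate) and matching type 1 (SHA-256), the record carries a digest of the server's certificate, which the client recomputes over the certificate it was actually served. A minimal sketch of that comparison (helper names are mine):

```python
import hashlib

def tlsa_3_0_1(cert_der: bytes) -> str:
    """TLSA "3 0 1": usage DANE-EE, selector 0 (full certificate),
    matching type 1 (SHA-256 over the DER-encoded certificate)."""
    return hashlib.sha256(cert_der).hexdigest()

def dane_matches(cert_der: bytes, tlsa_assoc_data: str) -> bool:
    # The client fetches the TLSA RR (DNSSEC-validated) for the
    # service and compares its association data against the
    # certificate presented in the TLS handshake.
    return tlsa_3_0_1(cert_der) == tlsa_assoc_data.lower()
```

Note that this is exactly why DANE shifts all the trust onto DNS: the only authority consulted is the (DNSSEC-signed) zone that published the record.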
However, governments and corporations have already demonstrated a willingness to tamper with DNS directly. It's not clear to me that DANE (or anything else that relies solely on DNS) is going to solve the larger problem of powerful adversaries being able to impersonate or damage specific network services.
This authenticity problem is caused by centralized and implicitly-trusted authority, not just crappy CAs. We need a naming scheme that is decentralized and cryptographically verifiable, with explicit corroboration mechanisms like Monkeysphere (I contribute to this project) or Convergence, to address the issue. A "solution" which further centralizes authority seems likely to consolidate abuse, not eliminate it.
Posted Aug 30, 2011 16:51 UTC (Tue) by job (guest, #670)
Posted Aug 30, 2011 16:57 UTC (Tue) by dlang (✭ supporter ✭, #313)
Posted Aug 30, 2011 17:16 UTC (Tue) by dkg (subscriber, #55359)
Posted Aug 30, 2011 18:44 UTC (Tue) by raven667 (subscriber, #5198)
Posted Aug 30, 2011 19:03 UTC (Tue) by dkg (subscriber, #55359)
Then, the adversary serves this RR in response to the victim's DNS request, and manages the sub-zone themselves. With such an RR in hand, the adversary only needs to control the victim's upstream network connection in order to be able to compromise the integrity and confidentiality of their communications.
If the delegated zone is a high-level one (e.g. .com), then something like phreebird in front of a filtering DNS cache should be fine (filtering to replace the authoritative keys for the sub-zones with its own key, that is). It would take a bit of engineering, but it's far from an insurmountable task.
Posted Aug 30, 2011 19:15 UTC (Tue) by butlerm (subscriber, #13312)
At a minimum you would need the DNS root private key (or the cooperation of the people who hold the key) to do this without compromising the client, which places it out of reach for any but the governments powerful enough to compel ICANN to give them the key or sign a full set of compromised TLDs for them.
Posted Sep 1, 2011 8:26 UTC (Thu) by Comet (subscriber, #11646)
Given that DNS implementations are optimising for fast signing for NSEC3 anyway, it's not an impressive feat to transparently re-sign only those areas needed.
Posted Sep 1, 2011 18:06 UTC (Thu) by raven667 (subscriber, #5198)
One difference between the existing CA infrastructure and DNS that I just thought of: for DNS there is a lot of coordination to prevent duplicate registrations, since duplicates are not allowed, whereas there is zero technical protection against CAs signing anything they like.
Posted Sep 1, 2011 20:49 UTC (Thu) by Comet (subscriber, #11646)
CAs can constrain themselves with nameConstraints. More commonly, a trusted CA would charge $$$ for a corporation to be able to issue its own certs without going back to the CA each time, because the corp has scaling issues getting its own root cert onto every client device in a trusted manner, across all the vendors and contractors and the like. So example.com megacorp pays $$$ to the root CA for a basicConstraints CA:TRUE cert, and the root CA preserves its income stream by making sure the newly minted CA cert has nameConstraints=permitted;DNS:*.example.com in it.
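The effect of such a constraint is easy to sketch: a dNSName constraint of example.com permits that name and any subdomain, and nothing else. A rough illustration of the RFC 5280 matching rule, as I understand it (function name is mine, and real validators handle many more cases):

```python
def dns_name_permitted(hostname: str, constraint: str) -> bool:
    """Rough sketch of RFC 5280 dNSName name-constraint matching:
    a constraint of "example.com" permits example.com itself and
    any subdomain of it, nothing else."""
    host = hostname.lower().rstrip(".")
    # Normalize forms like "*.example.com" or ".example.com"
    base = constraint.lower().rstrip(".").lstrip("*").lstrip(".")
    return host == base or host.endswith("." + base)
```

A cert for evil.com issued by such a constrained sub-CA should therefore be rejected by any client that actually enforces the constraint.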
Another reason to be worried when software doesn't do any certificate chain validation, or tries to roll its own validation steps for the chain.
What's needed is constraints _outside_ the CA's control: a nameConstraints which can be applied to the CA, to keep the certs in-country, and optionally to warn for use outside the country pending vetting/approval (but default to block, to avoid continuing to train people to click through stuff they don't understand). What's needed is more of the steps like Google's cert pinning, letting site operators at least get as far as the SSH security model of "latch on first use". Not ideal, but a massively reduced attack window (which can be shrunk to zero if you get the pinning into the source, rather than learnt; that then leaves "just" compromise of the browser distribution mechanism ...)
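The "latch on first use" model mentioned above fits in a few lines: remember the first certificate fingerprint seen for a host, and flag any later mismatch. A toy sketch (class and method names are hypothetical):

```python
import hashlib

class PinStore:
    """Trust-on-first-use: latch the first certificate seen for
    each host, then refuse any silently-changed one."""
    def __init__(self):
        self.pins = {}  # host -> sha256 fingerprint of first cert seen

    def check(self, host: str, cert_der: bytes) -> bool:
        fp = hashlib.sha256(cert_der).hexdigest()
        pinned = self.pins.setdefault(host, fp)  # latch on first use
        return pinned == fp  # False => cert changed, possible MITM
```

Shipping the pin in the source, as suggested above, just means pre-populating the store at build time instead of learning it on first contact.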
Posted Sep 1, 2011 21:47 UTC (Thu) by raven667 (subscriber, #5198)
Distributed naming system
Posted Sep 1, 2011 20:09 UTC (Thu) by robbe (guest, #16131)
I don't know whether that's viable though. Bitcoin itself seems to have scaling problems if applied to something as large as current DNS.
Posted Aug 30, 2011 19:00 UTC (Tue) by butlerm (subscriber, #13312)
Perhaps because the root is operated by an organization with no reason to tamper with it, in a country where tampering with a major domain would shortly become common knowledge and lead to major political repercussions?
Assuming the .com registry cooperated, intercepting, re-encrypting, and forwarding a large fraction of Google's HTTPS traffic would be a bit of a trick too. The only way a large government could get away with it is with Google's help, because they would probably be the first to know.
Posted Aug 30, 2011 19:12 UTC (Tue) by dkg (subscriber, #55359)
intercepting, re-encrypting, and forwarding a large fraction of Google's HTTPS traffic would be a bit of a trick too.
Unless, of course, the adversary is doing a more targeted attack against a specific network they happen to be upstream of.
In that case, they can ignore all the rest of the traffic, and focus their resources on compromising traffic coming out of a network segment they are interested in.
But my larger concern here isn't about Google being compromised. That's bad, but (as the current situation shows) Google actually has the resources and infrastructure to potentially catch when something is going wrong. What about smaller sites? Google vs. a medium-sized government is like King Kong vs. Godzilla. It's not clear who would win. But what if one of these titans turns their focus on small fry? Our current infrastructure suggests a sorry future for the hope of a free and autonomous global network.
Posted Aug 30, 2011 19:38 UTC (Tue) by raven667 (subscriber, #5198)
As long as clients don't accept the upstream keys in the hierarchy changing between requests, to spoof one child domain you have to spoof them all, right?
Posted Aug 30, 2011 22:53 UTC (Tue) by jebba (✭ supporter ✭, #4439)
Posted Aug 31, 2011 0:29 UTC (Wed) by cesarb (subscriber, #6266)
(I believe Firefox switched to also caching intermediate certificates because, since Internet Explorer caches intermediate certificates, a lot of people forgot to put the whole chain on their servers, and it "worked" on IE but failed - as it should - on Firefox.)
Posted Aug 31, 2011 1:35 UTC (Wed) by jebba (✭ supporter ✭, #4439)
Trust the root -- trusssst it
Posted Sep 1, 2011 20:41 UTC (Thu) by robbe (guest, #16131)
The only credible opponent in this game is the US government, which through coercion, legal or otherwise, openly or not (National Security letters, anyone?), could influence any of its subjects. But as I understand it the KSK can only be got at by corrupting three individuals, with most of them living outside the US of A -- see http://www.root-dnssec.org/tcr/selection-2010/ for your list of targets.
If the NSA wants to spy on your google.com traffic, it is altogether more likely that they would attack the .com key via rubber-hose techniques; that key is probably not as well protected.
Posted Sep 1, 2011 20:48 UTC (Thu) by job (guest, #670)
Let's not go overboard with paranoia here. DNS is centralized by design, and most end users would not even notice if SSL was stripped by a DNS-forging middleman so we need to secure it anyway.
Yes, ICANN is a single point of failure in the DNSSEC system -- but we have the opportunity here to replace a system which amounts to multiple points of epic fail.
Posted Aug 30, 2011 18:49 UTC (Tue) by rickmoen (subscriber, #6943)
For now, what I've used to mitigate the risk is CertWatch, which is blessedly simple and easy to fully understand: it merely keeps records about usage of SSL certs, root CA certs, and intermediate certs in a sqlite database, and lets you know every time you're using a new or changed SSL cert, CA root cert, or intermediate cert for the first time. So, if suddenly my online banking login for $MY_BANK has an unexpected new cert, and especially if the new cert is from a different certificate authority that doesn't look familiar, I have the opportunity and option to be doubtful about site authenticity.
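The core of that approach is small enough to show: a sqlite table of (host, fingerprint) pairs, with anything unseen reported as new so the user can take a closer look. A simplified sketch of the idea (not CertWatch's actual schema or code):

```python
import hashlib
import sqlite3

def seen_before(db: sqlite3.Connection, host: str, cert_der: bytes) -> bool:
    """Record this cert's fingerprint for the host; return False the
    first time a (host, fingerprint) pair appears, i.e. whenever a
    new or changed cert shows up and deserves a second look."""
    fp = hashlib.sha256(cert_der).hexdigest()
    db.execute("CREATE TABLE IF NOT EXISTS certs "
               "(host TEXT, fp TEXT, UNIQUE(host, fp))")
    if db.execute("SELECT 1 FROM certs WHERE host=? AND fp=?",
                  (host, fp)).fetchone():
        return True
    db.execute("INSERT INTO certs VALUES (?, ?)", (host, fp))
    db.commit()
    return False
```

Note that, like any trust-on-first-use scheme, this only flags changes; it cannot tell a legitimate cert rotation from an attack, which is why the unfamiliar-CA heuristic matters.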
Posted Aug 30, 2011 19:59 UTC (Tue) by pabs (subscriber, #43278)
"So unfortunately the DNSSEC trust relationships depend on sketchy organizations and governments, just like the current CA system."
Posted Aug 31, 2011 0:50 UTC (Wed) by tialaramex (subscriber, #21167)
Secondly, his emphasis on trust "agility" is useless to everybody but a tiny number of nerds like Marlinspike or myself. My mother isn't going to spend hours every week reconsidering her choice of authority; she isn't even going to spend ten minutes a year. She'll accept the out-of-box default like every other user, the same situation (and thus the same problem) as we have now.
Finally, Marlinspike's confusion between the root operators and ICANN is either ignorant (in which case who cares what somebody who doesn't know the first thing about DNS thinks?) or malicious. ICANN lacks the technical capability to do what this blog entry suggests: the KSK isn't in their possession, so they simply can't create the imaginary alternate key hierarchy needed for such spoofing. Manipulating ICANN is a very different thing from going after the root operators, either in the form of the corporations and other legal entities or the actual men-with-beards who perform the public key ceremonies.
Posted Sep 1, 2011 20:54 UTC (Thu) by job (guest, #670)
Posted Sep 6, 2011 4:14 UTC (Tue) by clint (subscriber, #7076)
Posted Sep 6, 2011 7:45 UTC (Tue) by job (guest, #670)
But the point here is that I can choose which TLD I register my domains under, and trust is not implicitly delegated between them. Even if the .xxx top level domain (as a completely made up example) is run by greedy or incompetent people, they can't create a mess for anyone else, as opposed to the current CA model, where DigiNotar can sign "CN=*.*.com".
That is not just an implementation detail; it's a fundamental difference.
Posted Sep 1, 2011 20:57 UTC (Thu) by sgros (subscriber, #36440)
Maybe the real solution is somewhere in the middle? There is a golden rule in security that nothing is secure. In essence, any cracker with enough resources (think some government) can attack any CA and issue fraudulent certificates, and nothing can be done to prevent it.
But it can be made harder. What do you think about using multiple CAs? In other words, the browser/user requires that the server's certificate is signed by two (or even more) CAs in order to be accepted as valid.
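Mechanically, requiring multiple CAs reduces to a quorum check: accept only if enough independent authorities vouch for the same certificate fingerprint, so an attacker must compromise several CAs rather than one. A toy sketch of just the acceptance rule (names and structure are mine, not from any real proposal):

```python
from collections import Counter

def accepted(fingerprints, quorum=2):
    """Accept a certificate only if at least `quorum` independent
    authorities attest to the same fingerprint; returns the agreed
    fingerprint, or None if no quorum is reached."""
    if not fingerprints:
        return None
    fp, count = Counter(fingerprints).most_common(1)[0]
    return fp if count >= quorum else None
```

The hard part the sketch glosses over is ensuring the attestations really are independent, which is the same independence problem Convergence-style notaries try to solve.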
I wrote a bit about that in a short blog post. I apologize for the shameless self-promotion, but I wanted it in one more public place than this comment section. Also, I thought that I had already written a comment but cannot find it.
Copyright © 2013, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds