centralized trust models are a weaker, insecure subset of distributed trust models
Posted Feb 22, 2010 10:35 UTC (Mon) by dkg
In reply to: Interesting how people skipped my argument entirely...
Parent article: Trust, but verify
(disclaimer: i'm quoted in the article, and i'm a contributor to the monkeysphere project) I appreciate your commentary, and especially your skepticism about changing core infrastructure. These things need to be taken seriously. I hope you'll train your skepticism on the existing problematic systems as well.
You can decide to ignore some central authorities (like you can choose to ignore CNNIC), but it's not possible and not feasible to ignore all of them.
Why is it infeasible to ignore any of them that you distrust? Today, it's because of an inherent bias in the structure of X.509 certificates: any certificate can have only one issuer. With the current infrastructure, you simply can't express the idea of "I only trust FooCA's certifications if they're corroborated by some other entity". And if you decide to say "I don't trust FooCA's certifications at all", your only clear option in the current regime is to not visit services certified by FooCA, because there is no way for the service to be concurrently certified by another CA which you do trust. But what if more than one CA could certify a service?
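To make the structural difference concrete, here is a minimal sketch (not any real tool's code) of the "corroboration" policy that a single-issuer X.509 certificate cannot express. The CA names are invented for illustration:

```python
# Hypothetical policy: accept a certification from a CA we only
# partially trust ("FooCA") only if at least one other recognized
# certifier corroborates it. All CA names here are made up.

REQUIRES_CORROBORATION = {"FooCA"}   # CAs we rely on only with backup
TRUSTED = {"FooCA", "BarCA"}         # certifiers we recognize at all

def acceptable(certifiers):
    """certifiers: the set of names that have certified a service's key."""
    recognized = certifiers & TRUSTED
    if not recognized:
        return False
    # Reject when every recognized certifier needs corroboration
    # and there is only one of them.
    if recognized <= REQUIRES_CORROBORATION and len(recognized) < 2:
        return False
    return True

print(acceptable({"FooCA"}))           # False: FooCA alone is not enough
print(acceptable({"FooCA", "BarCA"}))  # True: corroborated
```

With one issuer per certificate, the `certifiers` set can never contain more than one name, so this policy degenerates to all-or-nothing trust in each CA.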
Recommendations are absolutely important. Vital, even. But Web-Of-Trust is not that. It's some complex technical tool designed to automatically determine if you should trust someone or not.
OpenPGP's Web-of-Trust is actually not about automatically determining whether you should trust someone or not. It's ultimately about deciding whether someone is who they claim to be, just like that Other PKI, X.509. The WoT uses your own indications about who you trust to identify other people (or services) and then automates the process of binding those identities to public keys, which are bound in turn to your communications.
If this is confusing, it might be because current implementations and documentation don't do a good job of separating out the concepts and the terminology. I agree that's a problem, and it needs to be fixed. But the questions you need to be able to answer to use the WoT are very much within reach of ordinary humans.
As a baseline for use of the WoT, you need to be able to answer one question:
- Who do i know that i can rely on to correctly identify another party?
For full participation (so that others can choose to rely on your certifications, and so that you can be sure that your indicated preferences will be properly respected), you need to add two more concepts:
- Is a given person who they say they are?
- Does the key that I have for them match the key they claim to have?
Note that the first two concepts are normal human concepts, so built in that we rarely even think about them explicitly. If you've known your good friend Alfredo Lopez for 6 years, you have very good reason to believe that he is
Alfredo Lopez. Slightly more complicated: if you've known Alfredo for years, and he's a reasonable guy, and he says "hey, meet my friend Maria Jones", you probably have good reason to believe that the person in question is indeed "Maria Jones". Some of us have friends who we know will try to fool each other with prank names like "I. P. Freely", or acquaintances who would be happy to impersonate a bank teller for financial gain -- we know not to rely on these friends or acquaintances for proper identification without corroboration.
The final concept (about matching keys) does require a bit of sophistication -- it means you need to understand that some digital object called a "key" exists, and can be used as a means of identifying people (or other entities). And it means you need to know how to compare the fingerprint of a key: this just involves reading a series of letters and numbers and making sure they match; most people can do this.
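The fingerprint-comparison step really is that mechanical. A sketch (the fingerprint value below is invented for illustration) of normalizing the spacing and case that different tools print, then checking for an exact match:

```python
# Compare two renderings of the same (made-up) OpenPGP v4 fingerprint:
# one grouped the way key-signing slips print it, one as bare hex.

def normalize(fpr):
    """Strip spacing and unify case so renderings can be compared."""
    return fpr.replace(" ", "").upper()

printed_on_card = "8F0E 2299 329D DE38 2F92  1BA1 2FC2 57AE 9F9E 0435"
shown_by_tool = "8f0e2299329dde382f921ba12fc257ae9f9e0435"

print(normalize(printed_on_card) == normalize(shown_by_tool))  # True
```

The human's job is only the character-by-character read; everything else is bookkeeping the software can do.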
Web-of-trust model is complex and opaque, central authority model is simple and transparent.
In fact, if people want central authorities, it's trivial to implement them in a WoT. Simply mark all the central authorities your tools already implicitly "trust" as being entities you feel you can rely on to identify another party. Now, your WoT is exactly as simple and transparent as a hierarchical model. But if you decide that something is wrong with that model, you have a way to address the problem.
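To see how a hierarchy embeds in a WoT, here is a toy sketch (not GnuPG's actual code) of the classic validity rule -- GnuPG's defaults are approximately "valid if certified by 1 fully trusted introducer or 3 marginally trusted ones". Marking every shipped root CA as a fully trusted introducer reproduces the hierarchical model exactly; the CA names are invented:

```python
# Classic OpenPGP-style validity rule, approximating GnuPG defaults.
COMPLETES_NEEDED = 1   # full introducers required
MARGINALS_NEEDED = 3   # or this many marginal introducers

def key_valid(certifiers, ownertrust):
    """certifiers: who signed the key; ownertrust: your trust settings."""
    full = sum(ownertrust.get(c) == "full" for c in certifiers)
    marginal = sum(ownertrust.get(c) == "marginal" for c in certifiers)
    return full >= COMPLETES_NEEDED or marginal >= MARGINALS_NEEDED

# Reproduce the hierarchical model: every shipped root CA gets "full".
ca_trust = {"RootCA-1": "full", "RootCA-2": "full"}
print(key_valid({"RootCA-2"}, ca_trust))   # True: same as today's browsers

# ...but unlike X.509, you can withdraw reliance on one CA and keep the rest.
ca_trust["RootCA-2"] = None
print(key_valid({"RootCA-2"}, ca_trust))   # False: that CA no longer suffices
print(key_valid({"RootCA-1"}, ca_trust))   # True: the others still work
```

The hierarchical model is the degenerate case; the WoT just makes the trust settings yours to change.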
What could be wrong with the hierarchical model? Try asking people who they actually trust with the ability to compromise all of their networked communications. Mozilla Firefox 3.5 ships with entities like the dubious GTE CyberTrust Global Root (using a 1024-bit RSA key with an expiration date of 2018 -- 8 years longer than NIST recommends), governmental root certificate authorities from Taiwan, the Netherlands, Japan, and soon China, and
"too-big-to-fail" agencies with a history of corruption, simple incompetence or acquiescence to corporate or governmental bullying like Network Solutions, Equifax, or Verisign. You don't have to distrust all of these entities to think this arrangement is suboptimal. You only need to distrust one of them. It's a weakest-link arrangement.
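The weakest-link property is easy to state in code. A sketch (CA names invented) of the browser acceptance rule -- any single trusted root can vouch for any name, so one rogue root defeats all the careful ones:

```python
# Browser-style acceptance: a certificate for a name is accepted if
# ANY trusted root has certified it. CA names are made up.

trusted_roots = {"CarefulCA", "AlsoCarefulCA", "RogueCA"}

def browser_accepts(name, certifications):
    """certifications: {ca_name: set of names that CA has certified}."""
    return any(name in certified
               for ca, certified in certifications.items()
               if ca in trusted_roots)

# A forged cert from the one rogue root is accepted just like a real one:
print(browser_accepts("example.com", {"RogueCA": {"example.com"}}))    # True
# A cert from an issuer outside the root set is not:
print(browser_accepts("example.com", {"UnknownCA": {"example.com"}}))  # False
```

Security under this rule is the minimum over all roots in the set, which is why distrusting even one of them matters.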
The solution to these problems is not to force users into blind "trust" arrangements that are inherently insecure. It's to make sure users have access to clear, comprehensible information about whom they are relying on for identification decisions; to make it easy for end users to reject untrustworthy middlemen; and to let people who don't understand the system rely on people or groups they actually do know and trust to make identity certifications (even if those certifications turn out to be delegated). As far as i can tell, this isn't possible with the dominant technical infrastructure for the central-authority trust model (and it should be noted that X.509 itself is neither simple nor transparent).
We can do better, and we should.