
Trust, but verify

By Jake Edge
February 17, 2010

Public-key cryptography has been an enormous boon for securing internet communication, but it suffers from a difficult-to-solve problem: authentication and key management. When presented with a public key over an insecure channel—as part of setting up a secure channel for example—how does one determine that the public key actually belongs to the entity to which it purports to belong? There are several ways to solve that problem, but none are completely satisfactory. The Monkeysphere project seeks to turn the currently used system on its head, to some extent, and entrust users, rather than centralized authorities, with the power to bestow trust on a key.

There are three main ways for a key to be "trusted": the key (or its fingerprint) is transferred via some secure channel (by phone or in person, for example); the key is signed by an authority that has been entrusted to sign only valid keys; or the key is signed by "enough" different entities that are fully or partially trusted (i.e. a web of trust). Most of today's secure internet communications use SSL/TLS, which requires keys signed by certificate authorities (CAs), the "trusted" authorities in this scheme.

There are two smaller subsets of secure communication, mostly only used by computer-savvy folks, that use other means for determining trust: SSH for interactive encrypted communication and PGP for encrypted email. SSH relies on key fingerprints being exchanged securely, at least in theory, while PGP relies on a web of trust. Monkeysphere's first project is to move the PGP web of trust into the SSH world.

A web of trust is a decentralized, user-controlled key management scheme whereby keys are signed by multiple entities, each using its own keys. The signature can be verified based on the public key of the signer and the user can decide which signers are to be trusted—and at what level to trust them. In practice, if Adam signs Bonnie's key, and Clarisse trusts Adam, that means that Clarisse can trust Bonnie's key. Whether Clarisse should trust David's key, which is signed by Bonnie, depends to a large extent on how much she trusts Adam.

Key signing only implies that the signer verified the identity of the key holder, i.e. that the key holder is the same person or organization that is identified in the key. It is not necessarily an indication that the key holder should be trusted in a general sense, only that the key holder is who they say (via the key) they are. The web of trust used by the Monkeysphere OpenSSH framework is based on the GNU Privacy Guard (GnuPG or GPG) implementation of the OpenPGP standard (RFC 4880).

A user can privately assign a level of trust to a particular signer in their GPG configuration. They can also issue a trust signature that publicly specifies what trust level they place in a particular signer. So, from the example above, if Adam has published a trust signature for Bonnie saying that she is fully trusted by him, and Clarisse fully trusts Adam (publicly or privately), she is likely to trust David's key. The number of signatures and trust levels required to fully trust a key are configurable, allowing users to decide what their trust parameters are.
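
With GnuPG, both forms can be expressed from the command line. A rough sketch (the address is a placeholder):

    # Private trust level: recorded only in your local GPG trust database.
    gpg --edit-key bonnie@example.org
    #   gpg> trust    (pick a level: unknown, none, marginal, full, ultimate)
    #   gpg> save

    # Public trust signature: attached to the signed key itself, so that
    # others can see (and build on) the trust you place in this signer.
    gpg --edit-key bonnie@example.org
    #   gpg> tsign    (choose the trust level and depth, then confirm)
    #   gpg> save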

What Monkeysphere has done is to add some Perl around OpenSSH to manage keys, along with the known_hosts and authorized_keys files which normally live in the ~/.ssh directory. No modification to the OpenSSH client or server is required, though using Monkeysphere requires that all outbound connections go through the "monkeysphere ssh-proxycommand" command. On the server side, OpenSSH needs to be configured to use an alternate, Monkeysphere-managed AuthorizedKeysFile. The documentation page outlines the configuration needed for OpenSSH and GPG on the client or server sides.
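
The configuration glue is small. Roughly, following the documentation page (exact paths can vary between versions):

    # Client side, in ~/.ssh/config: let Monkeysphere validate host keys
    # against the web of trust before OpenSSH sees them.
    Host *
        ProxyCommand monkeysphere ssh-proxycommand %h %p

    # Server side, in /etc/ssh/sshd_config: point OpenSSH at the
    # Monkeysphere-managed file instead of ~/.ssh/authorized_keys.
    AuthorizedKeysFile /var/lib/monkeysphere/authorized_keys/%u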

For SSH, especially for sites with lots of hosts, it means that users or system administrators don't have to laboriously propagate keys into authorized_keys files on each new system. Instead, they can say that any key signed by their organization's key is trusted. Each user then has their key signed and can log in to any machine. Of course, ensuring that the organizational keys don't get lost, or fall into the wrong hands, is imperative.

While it is much more user-centric than a trusted authority mechanism, and does not require a separate secure channel for fingerprint exchange, a web of trust is no panacea. There are still issues with handling key revocations, especially if the user loses their key. A bigger problem may be getting a large enough web of trust, with enough trusted key signers, built such that users' keys, especially new users' keys, have a reasonable shot at being accepted.

The very user-centrism that makes a web of trust so intriguing to those who care about secure communications may in fact be one of its biggest downfalls. Non-technical users have shown very little inclination towards wanting any control over which keys they accept or decline. Someone faced with trying to decide who to trust, and at what level, along with how many different signatures/types they require is likely to throw up their hands in frustration. Non-technical users typically don't use SSH or encrypted email, but they may use other services, like SSL/TLS encrypted web traffic that might also benefit from a web of trust model.

LWN commenter dkg pointed to Monkeysphere (or similar techniques) as a possible solution for the problem of blindly trusting whatever CA root certificates a browser installs: "The more communications security is in the hands of the end users, with tools that are intelligible to end users, the more we can reject these abusive (or at least easily abused) centralized authorities." The italicized phrase is both the most important, and probably the hardest, part to get right.

Tools like Monkeysphere, and efforts like those of CAcert, are good starting points. How well those can translate into workable, user-friendly, user-centric authentication and key management mechanisms is an open question. While those of us who are technically inclined will be able to use a web of trust if desired, it would be nice one day if our parents, siblings, and others who aren't so technical could also stop relying on potentially corrupt organizations for their internet communication security. A web of trust may be a big step down that path.



Trust, but verify

Posted Feb 18, 2010 6:00 UTC (Thu) by neilbrown (subscriber, #359) [Link]

Who are Adam and Bonnie?? Bring back Alice and Bob!!

Good idea, but don't expect too much

Posted Feb 18, 2010 9:31 UTC (Thu) by khim (subscriber, #9252) [Link]

While those of us who are technically inclined will be able to use a web of trust if desired, it would be nice one day if our parents, siblings, and others who aren't so technical could also stop relying on potentially corrupt organizations for their internet communication security.

This is a stupid goal, for two reasons:
1. It's unachievable.
2. It's not something we'd like to have anyway.

Think about it this way: you cannot live without trusting "central authorities". You trust your supermarket when you buy food, you trust your car mechanic when you drive your car, you trust your electric company when you turn your computer on, and so on. Heck, a lot of the people you trust based on central authority could kill you! Not just your doctor, but your gas man! It's neither usable nor feasible to play these "web of trust" games in the real world - why must the cyberworld be any different?

Sure, we need people who'll catch "potentially corrupt" authorities when they become "actually corrupt" and expose them - just like we need them in our non-computer-related life. But to expect that Joe Average will play these "web of trust" games... this is not just stupid, it's unfair! If the "central authorities" model is good enough to entrust with your life, then why is it not good enough to entrust with your files? Do you value your life less than your ssh account?

Good idea, but don't expect too much

Posted Feb 18, 2010 11:31 UTC (Thu) by dion (subscriber, #2764) [Link]

I may have to trust my doctor not to poison me, but that's far from being the same thing as trusting a faceless, profit-motivated corporation on the other side of the world to accurately bestow trust on systems and people I need to talk to.

IOW: This is a silly argument, you cannot possibly compare the threat scenarios of Internet connected computers with physical situations.

I also doubt that web-of-trust is going to be a mainstream solution, but I'm far from being happy about the current highly centralized CA regime.

Good idea, but don't expect too much

Posted Feb 18, 2010 14:04 UTC (Thu) by andypep (subscriber, #33588) [Link]

I wonder. A very common way of selecting a tradesman is to ask around among your friends. In other words, word-of-mouth recommendations are very much alive and well, even now. So it might actually work, if it could be made easy enough to build the web of trust through friend networks.

Good idea, but don't expect too much

Posted Feb 18, 2010 16:05 UTC (Thu) by drag (subscriber, #31333) [Link]

Yeah.

If you blindly trust a doctor, you're doing it wrong. If you blindly trust a mechanic, you're setting yourself up.

When I need a car worked on, I either do it myself if I can manage (oil changes, brake pad changes, etc., etc... things that are easy) or I go to one of a very specific couple of shops that I know are trustworthy. I am willing to put up with a great deal of inconvenience in order to go to a mechanic I can trust. They are worth their weight in gold, and very often carry a premium for their services.

Same thing with doctors. Don't blindly trust them. Don't assume that what they tell you is true, or that the drugs they prescribe are safe. You look that stuff up on the internet. I mean, seriously, why do experts always recommend that you get a second opinion on anything that is remotely serious?

Plenty of times doctors have ignored symptoms that ended up killing patients. They prescribe drugs that kill their customers. Anybody with half a brain knows that they have to rely on their own judgment for many things, since even if the people they are working with are wonderful and have their best interests at heart, they can still make mistakes.

That is the 'Trust But Verify' for doctors: 'Get a Second Opinion'. That is a fundamental requirement. Is it foolproof? NOPE. But it's important. If you have a problem and you get something that sounds funny from your current doctor, then you hire a second, unrelated doctor to get his opinion.

And there are times I've gotten unsafe food from the supermarket. Had stuff go rotten, or be rotten in containers even when the date on the packaging says otherwise. I know that some stores are more trustworthy than others, and some stores have fresher food or higher-quality produce than others.

Hell, you can see that in the rise of 'Whole Foods' type stores, where they provide higher-quality food than the average supermarket. Not all of them are equal.

And on top of that, some of the food they sell is not safe for you. Things like ice cream, while a treat, have an effect very similar to a slow poison on the human body. If you blindly choose your foods based on what looks good and what tastes good, then you're going to end up fat and dead.

So on and so forth. It's not that these people are evil. But it's simply a requirement of a healthy society that its citizens have a healthy skepticism and be willing to put the effort into understanding what is going on around them.

It's not that you don't trust them. It's that you do what you can, in your limited way, to make sure that you can trust them.

A central authority like VeriSign can actually make everything worse. Anybody with some cash can pay to get 'trusted'. It does not matter who. An official government-recorded corporation can be created with as little as 200-300 dollars and a couple of signatures. A P.O. box or trustee can be an official address.

They give the illusion that a website is safe, when really you have no idea. That authority can be used to shield and make dishonest people seem legitimate. It's used all the time.

Drug companies use the FDA to make their stuff seem safe, when it really is not. Same thing with food. People trust the FDA to protect them so some dishonest people use that perception against you.

So things like central commercial certificate authorities do not have the ability, desire, or resources to make sure that a website is 'safe'. All the cert means is that the company is legit enough to pay somebody money to sign their cert and that you probably have secure communications with that host. Hell.. they could be completely honest folks, but have some crappy webserver that allows for cross sight scripting attacks.

When you buy something online from a store you've never used... do you not google around and see if you can find some sort of history or users complaining about that store? Do you not check your accounts to make sure that payments taken out are correct?

That is 'trust but verify'

Good idea, but don't expect too much

Posted Feb 18, 2010 16:08 UTC (Thu) by drag (subscriber, #31333) [Link]

Er.. ya. s/sight/site/g <doh>

Good idea, but don't expect too much

Posted Feb 18, 2010 21:10 UTC (Thu) by martinfick (subscriber, #4455) [Link]

Your examples are not typically considered "central authorities".

There are many supermarkets and many independent supermarket companies, not very central. But even then, many do refuse those "authorities" and I wouldn't call those who refuse to shop in supermarkets "stupid".

A single car mechanic is not at all a central authority. Perhaps a dealer is a bit more like one (but not really one), and those who tend to trust central authorities are more likely to be those who would take their car to a dealer instead of independent mechanics. Surely those who don't shouldn't be called "stupid", should they?

Some people trust their electric company, some don't. Some buy their own backup generators, some have UPSes on their PCs, many more at least have surge protectors... maybe they don't trust the central authority? After all, utility companies are among the most complained-about monopolies, particularly because most people are forced to use them even when they don't "trust" them!

Clearly recommendations play a huge role in the real world, usually a bigger one than central authorities, so why would you think they should not translate well to the computer world? Luckily, in the real world most people can figure out which models they prefer. And in reality, the central authority model really is just a small piece of the web of trust model; wouldn't it be nice to extend that web of trust to smaller entities also?

Interesting how people skipped my argument entirely...

Posted Feb 19, 2010 7:40 UTC (Fri) by khim (subscriber, #9252) [Link]

Your examples are not typically considered "central authorities".

Because people like to feel they are free and ignore reality, or why else?

There are many supermarkets and many independent supermarket companies, not very central. But even then, many do refuse those "authorities" and I wouldn't call those who refuse to shop in supermarkets "stupid".

I never said they are. I said that it's a stupid goal to try to make everyone refuse central authorities and use small local shops instead. This model just does not scale. Your doctor has a license, your car dealer has a license, your electric company has a certificate, etc. - these are your "central-authority-issued certificates". Some have a self-signed certificate (your friend who has no licenses at all but does terrific work fixing computers, for example), but most use "central authorities". You can decide to ignore some central authorities (like you can choose to ignore CNNIC), but it's not possible and not feasible to ignore all of them.

Clearly recommendations play a huge role in the real world, usually a bigger one then central authorities, so why would you think they should not translate well to the computer world?

Recommendations are absolutely important. Vital, even. But the web of trust is not that. It's a complex technical tool designed to automatically determine whether you should trust someone or not. Most people have neither the need nor the ability to use it properly.

But in reality, the central authority model really is just a small piece of the web of trust model, wouldn't it be nice to extend that web of trust to smaller entities also?

KISS principle. The web-of-trust model is complex and opaque; the central authority model is simple and transparent. In security it's often better to have a simple and rigid model rather than a complex and flexible one. It's OK to have some web-of-trust advisory (like the phishing filters employed by modern browsers), but to try to replace the central authority model with it... that's neither feasible nor desirable.

centralized trust models are a weaker, insecure subset of distributed trust models

Posted Feb 22, 2010 10:35 UTC (Mon) by dkg (subscriber, #55359) [Link]

(disclaimer: i'm quoted in the article, and i'm a contributor to the monkeysphere project) I appreciate your commentary, and especially your skepticism about changing core infrastructure. These things need to be taken seriously. I hope you'll train your skepticism on the existing problematic systems as well.

khim wrote:
You can decide to ignore some central authorities (like you can choose to ignore CNNIC), but it's not possible and not feasible to ignore all of them.

Why is it infeasible to ignore any of them that you distrust? Today, it's because of the inherent bias in the structure of X.509 certificates: any certificate can have only one issuer. With the current infrastructure, you simply can't express the idea of "I only trust FooCA's certifications if they're corroborated by some other entity". And if you decide to say "I don't trust FooCA's certifications at all", your only clear option in the current regime is to not visit services certified by FooCA, because there is no way for the service to be concurrently certified by another CA which you do trust. But what if more than one CA could certify a service?

khim wrote:
Recommendations are absolutely important. Vital, even. But Web-Of-Trust is not a that. It's some complex technical tool designed to automatically determine if you should trust someone or not.

OpenPGP's Web-of-Trust is actually not about automatically determining whether you should trust someone or not. It's ultimately about deciding whether someone is who they claim to be, just like that Other PKI, X.509. The WoT uses your own indications about who you trust to identify other people (or services) and then automates the process of binding those identities to public keys, which are bound in turn to your communications.

If this is confusing, it might be because current implementations and documentation don't do a good job of separating out the concepts and the terminology. I agree that's a problem, and it needs to be fixed. But the questions you need to be able to answer to use the WoT are very much within reach of ordinary humans.

As a baseline for use of the WoT, you need to be able to answer one question:

  • Who do i know that i can rely on to correctly identify another party?
For full participation (so that others can choose to rely on your certifications, and so that you can be sure that your indicated preferences will be properly respected), you need to add two more concepts:
  • Is a given person who they say they are?
  • Does the key that I have for them match the key they claim to have?
Note that the first two concepts are normal human concepts, so built-in that we don't even think about them explicitly much. If you've known your good friend Alfredo Lopez for 6 years, you have very good reason to believe that he is Alfredo Lopez. Slightly more complicated: if you've known Alfredo for years, and he's a reasonable guy, and he says "hey, meet my friend Maria Jones", you probably have good reason to believe that the person in question is indeed "Maria Jones". Some of us have friends who we know will try to fool each other with prank names like "I. P. Freely", or acquaintances who would be happy to impersonate a bank teller for financial gain -- we know not to rely on these friends or acquaintances for proper identification without corroboration.

The final concept (about matching keys) does require a bit of sophistication -- it means you need to understand that some digital object called a "key" exists, and can be used as a means of identifying people (or other entities). And it means you need to know how to compare the fingerprint of a key: this just involves reading a series of letters and numbers and making sure they match; most people can do this.

khim wrote:
Web-of-trust model is complex and opaque, central authority model is simple and transparent.

In fact, if people want central authorities, it's trivial to implement them in a WoT. Simply mark all the central authorities your tools already implicitly "trust" as being entities you feel you can rely on to identify another party. Now, your WoT is exactly as simple and transparent as a hierarchical model. But if you decide that something is wrong with that model, you have a way to address the problem.
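
In GnuPG terms, that might look like the following sketch (the fingerprint and filename are placeholders):

    # Import an authority's key, then grant it full ownertrust ("5" in
    # gpg's ownertrust export format; "6" would be ultimate).
    gpg --import authority-ca.asc
    echo "0123456789ABCDEF0123456789ABCDEF01234567:5:" | \
        gpg --import-ownertrust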

What could be wrong with the hierarchical model? Try asking people who they actually trust with the ability to compromise all of their networked communications. Mozilla Firefox 3.5 ships with entities like the dubious GTE CyberTrust Global Root (using a 1024-bit RSA key with an expiration date of 2018 -- 8 years longer than NIST recommends), governmental root certificate authorities from Taiwan, Netherlands, Japan, and soon China, and "too-big-to-fail" agencies with a history of corruption, simple incompetence or acquiescence to corporate or governmental bullying like Network Solutions, Equifax, or Verisign. You don't have to distrust all of these entities to think this arrangement is suboptimal. You only need to distrust one of them. It's a weakest-link arrangement.

The solution to these problems is not to force users into blind "trust" arrangements that are inherently insecure. It's to make sure users have access to clear, comprehensible information about who they are relying on to make identification decisions, to make it easy for end users to reject untrustworthy middlemen, and for people who don't understand the system to rely on people or groups they actually do know and trust to make identity certifications (even if they turn out to be delegated ones). As far as i can tell, this isn't possible with the dominant technical infrastructure for the central authority trust model (and it should be noted that X.509 itself is also neither simple nor transparent).

We can do better, and we should.

Good idea, but don't expect too much

Posted Feb 18, 2010 22:32 UTC (Thu) by iabervon (subscriber, #722) [Link]

In a lot of real-world interactions, the trust is rooted in an established direct relationship. When I contact my credit card company, I do so by calling the phone number printed on my credit card. When I mail them a check, it goes to an address that I am familiar with from when I opened the account. Also, when they receive the check, they actually request an electronic funds transfer from my bank, and my bank and my credit card company have identified each other from experience, from looking at the identifiers on my check, and by their government charters.

In none of these cases do the parties use an arbitrary trusted authority. Either they have shared information that they use to identify each other as being in an established relationship, or they have a specific body, with whom they have a direct relationship, that they use to introduce them to each other.

Terrific example

Posted Feb 19, 2010 7:54 UTC (Fri) by khim (subscriber, #9252) [Link]

When I contact my credit card company, I do so by calling the phone number printed on my credit card.

Yup. And by doing this you blindly trust your telecom provider, your phone manufacturer, the producer of the CPU for your phone, the producer of the OS for your phone, and so on.

When I mail them a check, it goes to an address that I am familiar with from when I opened the account.

But you use another organization certified by a trusted authority - be it USPS or DHL. Heck, when you visit a USPS or DHL office you trust the sign on the door - and the integrity of this sign is guaranteed by a central authority (called the government)!

In none of these cases do the parties use an arbitrary trusted authority.

Sure they do. More often than not there are a lot of parties involved which are used because they are certified by a central authority (they have a license from the government, or they are certified by some agency licensed by the government, etc.).

It's a good idea to use web-of-trust-like models to get a second opinion (today's browsers implement this via various services designed to prevent scams), but to try to replace the usual chain of certificates with a web-of-trust model... that's just crazy.

Good idea, but don't expect too much

Posted Feb 19, 2010 18:23 UTC (Fri) by dmag (subscriber, #17775) [Link]

> Think about it this way: you can not live without trusting "central
> authorities".

Agreed.

> You trust your supermarket when you are buying food, you trust your
> car mechanic when you are driving car, you trust your electric company
> when you turn your computer on and so on.

Non sequitur. Yes, most people's non-farming lifestyles force them to trust a supermarket. But the article was talking about an alternative to generic, profit-driven CAs that FORCE you to trust ALL items signed by them. Just because you are forced to trust A supermarket doesn't mean you are forced to trust ALL supermarkets. It's not an all-or-nothing model like the CA one is.

The current way SSH operates is "everyone makes their own decisions on which keys to trust" (both clients and servers). That is totally insecure unless you give everyone lots of security training. The web of trust idea allows a company to say to its employees "you're approved on all our servers" and "you should trust all our servers". That's pretty cool. No tin foil hat required.

> It's neither usable nor feasible to play these "web of trust" games
> in real world

Wrong. The opposite is true: It's not feasible to give EVERYONE the same level of trust.

Would you give a ride to someone on the street? Probably not, but you'd have no problem giving a ride to someone in your Yoga class.

Would you let a stranger into your house? Probably not, but you'd let in someone who is a friend of your mom.

Do you eat food from strangers on the street? No, but you probably eat the free samples at the grocery store (trusting that the grocery store isn't going to poison you).

Ok, only geeks use the term "web of trust", but it still exists in the real world.

> why cyberworld must be any different?

Just because we can't do something in the real world doesn't mean we shouldn't do it in the cyberworld. Look at people with 1000s of friends in their social networks: they are much more likely to get a job by posting "I need a job" to their social network (cyberworld) than by scanning the newspapers or asking a handful of close friends (real world).

Another monkey project

Posted Feb 18, 2010 10:05 UTC (Thu) by epa (subscriber, #39769) [Link]

Since we already have SeaMonkey, SpiderMonkey, TraceMonkey, GreaseMonkey, the WebMonkey site, and at least another dozen projects with 'monkey' in the name (not to mention Mono of course), I think it might be time to place a moratorium on any more project names in this vein. Who's co-ordinating these decisions, anyway?

Another monkey project

Posted Feb 18, 2010 12:20 UTC (Thu) by nix (subscriber, #2304) [Link]

Infinitely many monkeys. They finished Shakespeare some eons ago and now have other plans.

Not enough monkey projects

Posted Feb 20, 2010 16:39 UTC (Sat) by man_ls (subscriber, #15091) [Link]

As everyone probably knows by now, and I accidentally found out only last week, this project must refer to Cracked.com's MonkeySphere, a popular account of Dunbar's number. As such it has a legitimate cause (i.e. a pop reference) for using that "Monkey" part. I wonder where the fixation with monkeys inside Mozilla came from.

Trust, but verify

Posted Feb 18, 2010 12:18 UTC (Thu) by nix (subscriber, #2304) [Link]

The italicized phrase is both the most important, and probably the hardest, part to get right.
In your article, the entire quote is italicized, of course :) In the original comment, "with tools that are intelligible to end users" was specifically italicized, but maybe you should have changed it to bold, or de-italicized just that section...

Trust, but verify

Posted Feb 18, 2010 13:11 UTC (Thu) by jake (editor, #205) [Link]

> In your article, the entire quote is italicized, of course :)

Hmm, not on my screen ... the quote is in red and non-italicized, and the "with tools that are intelligible to end users" part is in italics. Not sure what browser/CSS/site options you are using that cause all quotes to be italicized.

That said, I probably should have found a better way to indicate what I was talking about.

jake

Trust, but verify

Posted Feb 19, 2010 0:21 UTC (Fri) by nix (subscriber, #2304) [Link]

Well, I tried it with Firefox 3.5.7 on Fedora, Konqueror 3.5.10, Firefox
3.6 on Windows, and IE on Windows (yuck). All show the entire quote
italicized...

Trust, but verify

Posted Feb 19, 2010 0:55 UTC (Fri) by jake (editor, #205) [Link]

> All show the entire quote italicized...

Heh, then the problem must be with *you* :)

More seriously, I suspect you have your quoted text preferences in "My Account" set to 'italics' ...

the fact that we give you that option just makes it more obvious that i should have chosen a different way to point that out ...

oh well, my apologies ...

jake

Trust, but verify

Posted Feb 19, 2010 13:26 UTC (Fri) by nix (subscriber, #2304) [Link]

Oh blast, I quite forgot that that option existed (or indeed that any of this stuff was customizable).

But then by the sound of it so did you ;)

Reduce the effort of verification

Posted Feb 18, 2010 20:55 UTC (Thu) by buchanmilne (subscriber, #42315) [Link]

For SSH, especially for sites with lots of hosts, it means that users or system administrators don't have to laboriously propagate keys into authorized_keys files on each new system.

I typically avoid this.

In some instances I use the openssh lpk patch, which allows the user's public keys to be stored in LDAP, and has sshd find them there directly. I have also scripted retrieving the keys from LDAP to a central authorized_keys file (rather than per-user key files, to adhere to security policies etc.), for either the case where you can't patch sshd, or where you want to provide failback in case of some problem reaching LDAP (e.g. failure of firewall or network or similar). In this scenario, host key verification still needs to be addressed, but there is a scalable solution - it requires DNSSEC, though. Now that we are seeing DNSSEC deployments, maybe this will become more common.

In other instances, where a decent Kerberos setup is in place, you don't use public/private keys for users at all, but Kerberos authentication instead. There is also a patch in development for OpenSSH to do host verification by Kerberos, from the service principal, so manual host key verification isn't necessary.

Both of these seem to scale better than MonkeySphere, as according to the documentation:

You'll probably only set up Identity Certifiers when you set up the machine. After that, you'll only need to add or remove Identity Certifiers when the roster of admins on the machine changes, or when one of the admins switches OpenPGP keys.

So, when a new admin joins a team, another admin has to log in to all servers and run a command to authorize the new admin as an "Identity Certifier". This is exactly the kind of work duplication I have tried to avoid by using LPK and Kerberos. And if they don't know GPG/PGP, I would have to teach them that first ... and unfortunately, admins who have never had a GPG/PGP key are all too common these days. In many environments, even the term "ssh public key" had to be explained many times, including to the network security team.

Now, unfortunately, AFAIK neither LPK nor the Kerberos host key verification patches are likely to be merged upstream, as the OpenSSH team doesn't seem to think they are necessary, and probably believes they will introduce insecurities.

Of course, the pragmatic view that making it *easier* to implement adequate security is of value seems to have been missed.

Reduce the effort of verification

Posted Feb 22, 2010 11:13 UTC (Mon) by dkg (subscriber, #55359) [Link]

buchanmilne, the solutions you outline (LDAP, kerberos) do work well for single-authority enterprise-type situations, and it's not unreasonable to use them there, if you're willing to accept the tradeoffs associated with shared-secret schemes (e.g. suboptimal passwords, users reusing them insecurely with outside services, etc.). (Though, as you say, LPK only identifies users to hosts, and not vice versa.) But as soon as you're dealing with a distributed identity model (where no single central authority is accepted for all entities), LDAP and krb5 are less useful as authentication services.

For example, imagine company A needs to contract with company B to do some work on company A's equipment. Either each worker from company B now needs a separate authentication credential with company A (and how do you establish those credentials?), or company A's LDAP or krb5 services need to declare cross-domain trust of company B's LDAP or krb5 services (assuming that A and B both run compatible authentication services). And company B's LDAP or krb5 services need to be offered on the public network, or at least be accessible to company A's equipment. These are do-able, but non-trivial, operations.

It turns out that monkeysphere can handle all of these cases, scaling cleanly, and it even works well in the single-enterprise model you describe. You wrote:

So, when a new admin joins a team, another admin has to log in to all servers and run a command to authorize the new admin as a "Identity Certifier". This is exactly the kind of work duplication I have tried to avoid by using LPK and Kerberos.
You're responding to a description of one way to use the monkeysphere (and it's a useful method in a loose confederation of affiliated machines). In the enterprise approach, you'd simply create a company OpenPGP certifying key, and add it as the identity certifier for all your hosts.

Then, you have three options for dealing with a team of admins in a larger enterprise:

  • You could give each admin access to the central certifying key, and let them certify users from the trusted root,
  • You could create a certification-capable subkey of the master for each member of the admin team, and let them each control their own certifying subkey, or
  • You could issue trust signatures over the personal OpenPGP keys of every member of the admin team, and let them certify people individually
The latter two options give you the ability to block bad certifications if an admin turns rogue, without ever having to touch the hosts that rely on this certification.

And you don't even need to run your own authentication servers to do this -- revocation and re-keying can be handled by the existing global HKP network (though it's easy to run your own HKP server if you do want to keep things all in-house).

Given this infrastructure, how do you handle the company A/company B scenario above? Company A's OpenPGP certifying authority makes a trust signature on Company B's certifying authority, limited to only cover certifications in company B's domain. All existing infrastructure remains otherwise in place.
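
With GnuPG, such a scoped trust signature might be issued along these lines (the key names are placeholders and the prompts are paraphrased):

    # Company A's certifying key signs company B's certifying key with a
    # trust signature restricted to user IDs in company B's domain.
    gpg -u ca@companya.example --edit-key ca@companyb.example
    #   gpg> tsign
    #   (choose the trust level and depth; when asked for a domain to
    #    restrict the signature, enter "companyb.example")
    #   gpg> save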

Note also that monkeysphere works without any patches to OpenSSH, and with any reasonably-modern version.

Trust, but verify

Posted Feb 18, 2010 22:54 UTC (Thu) by smoogen (subscriber, #97) [Link]

I think the biggest issue is that, for the most part, people are lazy, which is why centralized security models tend to occur. The amount of time needed to go and verify signatures to start off your web of trust is usually too much even for technical security people. Or, when they do verify, they really don't know how to determine trust well. [Ok, Alice knows Bob, but does she know how well Bob administers his systems, OR how much Bob trusts others.. ]

Trust, but verify

Posted Feb 19, 2010 15:13 UTC (Fri) by tialaramex (subscriber, #21167) [Link]

Right, this level of complexity is reflected in PGP / GnuPG, but most people don't understand how to use it, and once it's explained they complain that it's too complicated (it is - but it reflects reality, which is also too complicated).

It's not clear to me that Monkeysphere even appreciates the problem.

Trust, but verify

Posted Feb 19, 2010 16:04 UTC (Fri) by smoogen (subscriber, #97) [Link]

Yes, I would agree that it reflects reality... but without the built-in parts of the brain that make such decisions automagically for us, using circuits wired in genetically and built upon by environment. I think most of the trust relationships people make are formed within the first minute or two of meeting. I can now see a large lisp AI written to build a neural network and then work out things like "Ooh, I like his .sig" to make trust management easier :).

Trust, but verify

Posted Feb 19, 2010 20:30 UTC (Fri) by djao (subscriber, #4263) [Link]

I would go even farther: the truth is, even the centralized trust model is too complicated for average people to use. Note that I am a professional cryptography researcher, although unlike many other researchers, I pay attention to what is practical and what is not.

It doesn't take very much searching to dig up various instances (1, 2) where centralized certificates fail to provide the level of security that they theoretically guarantee. SSH, however, has never been attacked cryptographically via its trust model, even though it's clearly a worthwhile and lucrative target -- if you don't think so, just look at the number of SSH brute force attacks in the wild.

The critical feature of the SSH trust model is that it has no trust model. It is entirely up to the user to verify the key. For a capable user, this is no problem, because such a user knows how to verify a key out of band. For the unskilled user, this method is still better than any other alternative, because it involves the least complexity. The concept of "store this key please, and notify me if it ever changes" is a lot easier for average users to understand than anything involving certificates or webs of trust.

It is also interesting that, in practice, the null trust model used by SSH tends to produce better results than any other trust mechanism, quite independent of the human factor advantage of its lower complexity. Most of the time, when you make a secure connection to a server in a context where you care about security, you are connecting to a server that you have used before. In this situation, you know what the server's key was before, and you can compare it to the key that it has right now. It turns out that, contrary to expert opinion, the SSH technique of simply raising an alarm whenever a key changes is in fact one of the best ways to prevent man-in-the-middle attacks. In order to perform a man-in-the-middle attack against SSH, where any new key will raise an alarm, you need to have an "always on" network presence to react to all new connections as they are made, and even the best network engineering that money can buy is not capable of providing 100% network reliability, even in a friendly (non-adversarial) context.

On top of all that, technology has evolved to the point where most people today (in developed nations, anyway) have multiple lines of access to the internet: work, home, wifi, smartphone, and so on. Once you give someone two different views of the internet, caching and comparing keys really is the best way to prevent man-in-the-middle attacks, and it's certainly a lot better than blindly trusting a central authority, or introducing the complexities of a web of trust.

I view MonkeySphere in the same category as VeriSign and other companies trying to extort money from users for providing inferior security. They are worse than a solution in search of a problem. They are actually creating new security problems where none existed before, in the name of profit.

Trust, but verify

Posted Feb 21, 2010 21:07 UTC (Sun) by cassee (subscriber, #5336) [Link]

> SSH, however, has never been attacked cryptographically via its trust model [...]

Actually, it has. There was a neat trick way back when SSH servers still accepted both protocol 1 and 2. A man-in-the-middle could force a change in protocol by changing the packets so that it was probable that the host key would not be in the client's known_hosts file. The user would receive a relatively benign 'The authenticity of host X can't be established' message instead of the hostile 'WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!'. It would be easy for an inattentive user to ignore the warning and accept the compromised connection.

More details (although with awkward text flow) at: http://hubpages.com/hub/sshprotocol
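
These days the downgrade path can simply be closed. One line in ssh_config and sshd_config refuses protocol 1 outright:

    # Never negotiate SSH protocol 1, closing the downgrade trick above.
    Protocol 2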

Trust, but verify

Posted Feb 22, 2010 15:19 UTC (Mon) by micah (subscriber, #20908) [Link]

(disclosure: i contribute to the monkeysphere project)

djao said:
> The critical feature of the SSH trust model is that it has no trust
> model. It is entirely up to the user to verify the key. For a capable
> user, this is no problem, because such a user knows how to verify a key
> out of band. For the unskilled user, this method is still better than
> any other alternative, because it involves the least complexity. The
> concept of "store this key please, and notify me if it ever changes" is
> a lot easier for average users to understand than anything involving
> certificates or webs of trust.

You are right that this method is uncomplicated; however, it encourages users to "click through" and fail to verify, in the same way that people click through SSL certificate failure windows in their browser. This is bad behavior reinforcement. All too often even the most capable users fail to verify keys out of band, and just type "yes" at that ssh host-key prompt so they can get to the machine. The number of admins I know who actually verify an ssh host key fingerprint before accepting it is frighteningly low. Even fewer admins make those fingerprints available to others via a cryptographically verifiable mechanism. The reasons why capable people don't do this are varied, but one reason might be that it is not obvious where or how you should check the fingerprint. The fact that so many people accept these host keys without verification, because there is no clear way of doing that verification, is a problem.

Likewise, suppose you have verified the fingerprint of a host key, and then it changes because the admin had to re-key (perhaps as the result of a routine rebuild of the box). The next time you connect, you will be presented with the ssh Big Scary Warning(tm), and you will need to remove the old host key, find out what happened, and then verify the new host key before you can continue. This is a good thing, although it is annoying to have to deal with when the re-key was done on purpose by the correct people running the machine, and when you have no reliable method of verifying that the new host key is the right one and that the change should have happened.

Both the scenario of a user connecting to a machine for the first time and being asked to verify a host key, and the scenario of an admin needing to re-key a machine, are made smoother by the use of the Monkeysphere, because it makes the mechanism simpler and clearer, and only presents you with these questions when absolutely necessary. This reduces the "click through" reinforcement and raises the importance of those messages, as they appear only when something is wrong. The user is now *only* prompted to confirm a host-key fingerprint if they have no mechanism of trust to verify the key, or if there is an actual man-in-the-middle. The admin can re-key and re-certify smoothly, without freaking out the end-users.

djao also said:
> I view MonkeySphere in the same category as VeriSign and other
> companies trying to extort money from users for providing
> inferior security. They are worse than a solution in search of a
> problem. They are actually creating new security problems where
> none existed before, in the name of profit.

Clarification: Monkeysphere is *not* a profit-driven enterprise. It is not a company, and it is not involved in extortion of money. The Monkeysphere is a free software project, just a few regular folks getting together to hack, in the name of freedom, not profit.

Trust, but verify

Posted Feb 22, 2010 17:14 UTC (Mon) by nix (subscriber, #2304) [Link]

(I've said this before. Sorry for repeating myself, but, well, I think it bears repeating every so often.)
All too often the most capable users fail to verify keys out of band
A large part of the problem here is that everyone who uses multiple Unix machines by now uses OpenSSH, but the manpages say nothing about how to verify the validity of the remote host key if you get a man-in-the-middle warning. The SSH book may, but anything that depends for its security on all its users buying an expensive book is not going to work.

(FWIW, it was nine years after I started using OpenSSH that I figured out what the fingerprints that ssh-keygen could print were good for, and I'm not technically clueless, merely not a crypto geek. Maybe if I was a crypto geek I'd have known this instantly on reading the manpage, but, again, not everyone is a crypto geek, and you shouldn't have to be a crypto geek to be secure. Right now, thanks to documentation issues like this, no matter how secure OpenSSH is technically, it's open to all sorts of social-engineering attacks simply because its users can't tell how to respond to reports of potential security problems. I'd fix this, but I can't because I don't know the answers myself.)

This is not an academic problem. Last month I got a panicky Sunday phone call from one huge banking client, who'd reinstalled a production server and had started getting 'IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY' errors. It took a lot of effort to calm the poor man down, and he kept on asking why none of the documentation he'd read had explained what the 'SOMETHING NASTY' might be, or had bothered to say 'if you reinstall your OS but keep the same IP address, you'll get this message on all the machines that connect to it'. Then he asked how to identify all those clients and teach them what the new machine was, and I had to say there wasn't a way: ssh-keyscan(1) does the opposite. That didn't go down very well. He's constructing known-hosts files centrally with ssh-keyscan(1) and pushing them out to clients, now, but of course none of the documentation mentions that you might need to do that, either. (Admittedly I hadn't thought of it either, until it was too late.)
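
For what it's worth, the pieces for both halves of this do exist, just not as a documented recipe (the hostnames below are placeholders):

    # Verify a host key out of band: print the fingerprint on the
    # server's console and compare it with what the ssh client reports.
    ssh-keygen -l -f /etc/ssh/ssh_host_rsa_key.pub

    # Build a system-wide known_hosts file centrally and push it out to
    # clients, so they never see the "unknown host" prompt at all.
    ssh-keyscan -t rsa host1.example.com host2.example.com \
        > /etc/ssh/ssh_known_hosts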

(this is your annual djm@ nose-tweaking. I think it's been almost a year since my last one. Anyone written that HOWTO yet? It's really silly that it's not there; it's probably vastly simpler to write than OpenSSH was... at least a wiki somewhere on which people can collect common OpenSSH problems and solutions to them, so Google can pick it up. Now that, I can set up; I just haven't, so pre-emptive apologies, also I only thought of it ten seconds ago.)

host key backup

Posted Feb 23, 2010 7:10 UTC (Tue) by xoddam (subscriber, #2322) [Link]

> 'if you reinstall your OS but keep the same IP address, you'll get this message on all the machines that connect to it'.

To save this happening again (and again ...)

Would it make sense to (remotely) backup the host key and restore it after the reinstallation?

There could well be something I'm missing...

host key backup

Posted Feb 23, 2010 22:56 UTC (Tue) by nix (subscriber, #2304) [Link]

Yes, it would. But people don't always do that (often they don't even
realise what the host key is *for* until it's too late, and then, oops!).

More than once I've seen disaster-recovery hosts hotswapped into the place
of the machine they replace, IP address and all... but oops! they have a
different host key! Too late to fix, the original machine is dead now.
(Yes, this is a configuration error. But it's a pretty common one. More
common than not, I'd almost say.)

The Web of Trust isn't better, it's just better than nothing

Posted Feb 19, 2010 15:06 UTC (Fri) by tialaramex (subscriber, #21167) [Link]

Actually what's currently missing is the central authority. It's not that SSH users don't trust anyone, it's that there is no clear authority they could look to.

DNSSEC deployment on the root (in principle this summer, check http://www.root-dnssec.org/) provides such a central authority, or rather, it provides a hierarchy with a central authority, and OpenSSH is already set up to be able to trust it. Just a one-line config change, and "ssh foo.bar.com" implies "look up the SSH key for foo.bar.com via DNS at the same time as the address, and fail if the key doesn't match".
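
The one-line change in question is presumably OpenSSH's VerifyHostKeyDNS option, paired with SSHFP records published in the signed zone:

    # Client side, in ssh_config:
    VerifyHostKeyDNS yes

    # Server side: emit the SSHFP resource records for the zone file.
    ssh-keygen -r foo.bar.com -f /etc/ssh/ssh_host_rsa_key.pub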

I'm sure some people will decide that trusting the root operators, their TLD registry, and whoever serves up DNS for their machines is not acceptable, but I expect this to be a small minority. Particularly when I consider how often I see people blindly click or type past the routine "unknown host key" message (as distinct from the scarier "host key changed" message).

The Web of Trust isn't better, it's just better than nothing

Posted Feb 20, 2010 16:37 UTC (Sat) by nix (subscriber, #2304) [Link]

Unless you use only IP addresses when sshing everywhere, you're *already*
trusting the root. (Or maybe you use hosts files, yuck.)

(But! oh no! you're trusting everyone's BGP announcements as well! And
they're really easy to spoof...)

The Web of Trust isn't better, it's just better than nothing

Posted Feb 22, 2010 15:36 UTC (Mon) by micah (subscriber, #20908) [Link]

>(But! oh no! you're trusting everyone's BGP announcements as well! And
> they're really easy to spoof...)

Not if you are using authentication (typically MD5 based) and ACLs, or S-BGP. If you are accepting BGP advertisements from anyone, you are asking for it. You should only accept routing updates from trusted peers, peers that you have identified as ones that you should be receiving announcements from.

The Web of Trust isn't better, it's just better than nothing

Posted Feb 22, 2010 17:39 UTC (Mon) by nix (subscriber, #2304) [Link]

I'm assuming that you shouldn't really trust MD5-based BGP auth these days, either. MD5 is quite broken (although perhaps not broken enough to be able to forge BGP announcements with).

The Web of Trust isn't better, it's just better than nothing

Posted Feb 22, 2010 19:30 UTC (Mon) by paulj (subscriber, #341) [Link]

Attacks on BGP at a session level (e.g. breaking MD5 to sneak in bogus
packets) are not really the main worry when BGP systemically assumes that
speakers are trusted. There are various ways you can subvert routing,
including some quite ingenious, stealthy re-routing techniques described in
the last few years at blackhat conferences.

The Web of Trust isn't better, it's just better than nothing

Posted Mar 2, 2010 13:59 UTC (Tue) by robbe (guest, #16131) [Link]

> Unless you use only IP addresses when sshing everywhere, you're
> *already* trusting the root.

Am I? If I follow sound security practises (checking fingerprints on new keys, not ignoring the Big Scary Warning[TM]), all a malicious DNS can do is DOS me.

If you have HashKnownHosts disabled, you can even use known_hosts as a
poor man's directory service.

Trust, but verify

Posted Feb 22, 2010 11:45 UTC (Mon) by dkg (subscriber, #55359) [Link]

Thanks for the writeup, Jake! A comment about terminology. You wrote:

In practice, if Adam signs Bonnie's key, and Clarisse trusts Adam, that means that Clarisse can trust Bonnie's key. Whether Clarisse should trust David's key, which is signed by Bonnie, depends to a large extent on how much she trusts Adam.
[...]
if Adam has published a trust signature for Bonnie saying that she is fully trusted by him, and Clarisse fully trusts Adam (publicly or privately), she is likely to trust David's key.

Statements like the above get confusing pretty fast, because you're using the term "trust" in two very different ways. You're not the first to do this -- the gnupg documentation itself conflated these ideas until relatively recently.

In trying to clarify what's happening here, i prefer to drop the (abused) term "trust" altogether, and instead use two separate ideas: "ownertrust" and "calculated validity":

  • Ownertrust answers a question about a key. It answers the question "how much do i think i can rely on certifications issued by the person (or persons) who control this key?"
  • Calculated validity answers a question about a (key,user ID) pair. It answers the question "How strongly do i believe that this key belongs to someone with the given User ID?" ("with the given User ID" usually means "with the given real name and e-mail address" in the case of people, or "the ssh or https service at a given hostname" in the case of service User IDs)
Web of Trust-based cryptosystems like OpenPGP use a person's explicitly-stated ownertrust to help them automatically calculate the validity of a key for its User ID.
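
GnuPG keeps the two ideas separate. For example (the address is a placeholder):

    # Ownertrust: your own private statement about a certifier.
    gpg --edit-key amanda@example.org
    #   gpg> trust    (state how far you rely on their certifications)
    #   gpg> save

    # Calculated validity: recomputed by gpg from certifications plus
    # your ownertrust, and shown next to each user ID when listing.
    gpg --check-trustdb
    gpg --list-keys --list-options show-uid-validity amanda@example.org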

Thanks also for highlighting the usability point. Modern tools have done a terrible disservice to ordinary users. We offer all kinds of flashy nonsense, but have done very little to offer intelligible, critical information like "your daughter Amanda (who understands these things) confirms that this is in fact the web site of the credit union you both use, and not a scam." I'd love to see that change.

As more of our society moves online, normal people need functional tools to help them manage their digital identity. People can make good choices when they're asked in a context and a framing that they understand.
