As kernel.org recovers from its compromise,
there are a number of changes being made to improve security of this
critical piece of kernel development infrastructure. One of the biggest
changes is to remove shell access for the 450 or so developers and only
allow Git pushes using SSH keys. But there is something of a
chicken-and-egg problem: how do the kernel.org administrators reliably get
the SSH credentials to each authorized kernel hacker, while ensuring that
only that authorized user can get them? Enter GPG ...
GNU Privacy Guard (GPG) is an implementation
of the OpenPGP standard (RFC4880) that provides
secure encrypted communication using public key cryptography. The
standard is a descendant of Phil Zimmermann's original Pretty Good Privacy
(PGP) program from the early 1990s—something that put him directly at
odds with the US government for a time. GPG is typically used to protect
email, by encrypting it so that only the recipient can
decrypt it, or by signing it in such a way that recipients can verify
the message sender.
Public key cryptography (PKC) is used for most network encryption tasks,
including SSH and SSL/TLS for secure web browsing (i.e. HTTPS). It requires
that each user have two keys, one public and one private. The public key
can (and generally should) be published widely and can be used to encrypt a
message that only the holder of the corresponding private key can decrypt. The
private key can be used by its owner to digitally "sign" messages (or other
data) such that anyone can verify the signature by using the public key.
These two modes can be combined so that an email can be sent that is only
readable by its recipient who can also verify who wrote the message.
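The two modes can be combined in a single GPG invocation. The sketch below signs a message with the sender's private key and encrypts it to the recipient's public key, then decrypts and verifies it on the other side. The identities are hypothetical and the keys are throwaways in a temporary keyring; GnuPG 2.x is assumed.

```shell
# Throwaway keyring so nothing touches the real one
export GNUPGHOME="$(mktemp -d)"; chmod 700 "$GNUPGHOME"
# Helper for unattended operation with an empty passphrase (demo only)
gpgq() { gpg --batch --pinentry-mode loopback --passphrase '' "$@"; }

# Hypothetical sender and recipient
gpgq --quick-generate-key "Alice <alice@example.org>" default default never
gpgq --quick-generate-key "Bob <bob@example.org>" default default never

echo "the credentials" > msg.txt
# Sign with Alice's private key, encrypt to Bob's public key
gpgq --yes --local-user alice@example.org --recipient bob@example.org \
    --sign --encrypt --output msg.gpg msg.txt
# Bob decrypts with his private key; GPG verifies Alice's signature as it does so
gpgq --yes --output out.txt --decrypt msg.gpg
cat out.txt
```

Only Bob can recover the plaintext, and the "Good signature" notice GPG prints during decryption tells him that Alice (or at least the holder of Alice's private key) wrote it.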
One of the main differences between OpenPGP, SSH, and
SSL/TLS is in how the public keys are managed—and authenticated. SSL/TLS
relies on central certificate authorities to vouch for public keys
(i.e. making the connection between a public key and a domain name)—a
mechanism that has suffered from serious
problems of late. SSH
keys are typically handled directly by the user (or administrator), by placing
the public key into the authorized_keys file on the host that is to
be accessed. GPG key authentication is, instead, handled in a completely
decentralized fashion using a "web of trust".
The kernel.org administrators would like to be able to email
credentials to and from kernel hackers securely using GPG keys. But a
connection needs to be made between a given public key and a particular
kernel hacker. Anyone can create a key pair claiming to belong to, say,
Linus Torvalds that
uses his email address; they could then present a public key that appears
to be his. One could also use that key to sign Git tags, for
example. How can
someone distinguish Torvalds's legitimate key from any impostors? That's
where the web of trust comes into play.
The web is built by people signing each other's public keys.
Signing a public key serves as an assertion
that the signer believes that the mapping from key to user is valid
(i.e. that the name and email specified in the key is correct).
So, Torvalds and Andrew Morton could get together (at the upcoming
Kernel Summit for example), sign each other's key, and add those keys to their
key rings (essentially a list of known keys). At that point, Morton could
easily detect that the impostor's key is bogus, but other kernel hackers
would not necessarily be sure, especially if the impostor also crafted a
bogus key for Morton and cross-signed each fake key with the other.
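Key signing itself is a single GPG operation: the signer certifies, with their own private key, the binding between a key and its user IDs. A minimal sketch, again with hypothetical identities and a throwaway keyring (GnuPG 2.x assumed):

```shell
# Throwaway keyring and unattended-mode helper (empty passphrase, demo only)
export GNUPGHOME="$(mktemp -d)"; chmod 700 "$GNUPGHOME"
gpgq() { gpg --batch --pinentry-mode loopback --passphrase '' "$@"; }

gpgq --quick-generate-key "Alice <alice@example.org>" default default never
gpgq --quick-generate-key "Bob <bob@example.org>" default default never

# Alice certifies Bob's key: an assertion that it really belongs to Bob
gpgq --yes --local-user alice@example.org --sign-key bob@example.org
# Alice's certification now shows up in the signature list for Bob's key
gpg --check-sigs bob@example.org
```

In real use the two keys would live in different people's keyrings, with the public key exchanged (and the new signature returned) via export/import or a keyserver.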
The web of trust is what solves that particular problem. If Torvalds and
Morton also sign a bunch (or even a handful) of other people's keys, those
people can distinguish which of the keys are legitimate. If those people
go on to sign additional keys, the web will grow. Anyone who can trace a
path from one of their trusted keys (i.e. one that they signed), through
one or more intermediates—each signed by the previous link in the
chain—to the key in question can be reasonably assured that the key
is owned by the name/email specified in it.
A chain like that described above only provides reasonable assurance
because it relies on each individual being diligent about verifying the
identity of people (and their keys) before signing. It also relies on
people ensuring that their private keys are not compromised. Finding
multiple independent paths through the web of trust, all of which agree,
increases the level of
trust one can place in a key as well. Shorter paths and/or more trusted
signatures can also increase the trust level.
There are several levels of "trust" that one can have in a particular key.
A key that you have signed is, presumably, one that you fully trust
corresponds to the person it purports to represent. That doesn't necessarily mean
that you trust that person to be diligent about signing other keys. GPG
allows trust levels to be associated with keys and has various
configuration options to determine whether a given key is to be trusted for use.
By default, it requires that a key be signed by at least one person that
is fully trusted or three people that are partially trusted before it will
use that key.
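Those "ownertrust" levels are normally set interactively with the trust command inside gpg --edit-key, but they can also be read and written directly through the ownertrust database. A sketch, assuming GnuPG 2.x and a hypothetical key:

```shell
# Throwaway keyring with one hypothetical key
export GNUPGHOME="$(mktemp -d)"; chmod 700 "$GNUPGHOME"
gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-generate-key "Alice <alice@example.org>" default default never

# Grab the key's fingerprint in machine-readable form
FPR=$(gpg --list-keys --with-colons alice@example.org | \
      awk -F: '/^fpr/ {print $10; exit}')

# In the ownertrust format, 4 = marginal, 5 = full, 6 = ultimate
echo "${FPR}:6:" | gpg --import-ownertrust
gpg --export-ownertrust
```

These are the trust values that feed GPG's default rule above: one "full" signer, or three "marginal" ones, is enough for a key to be considered valid.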
Some projects, notably distributions like Debian, Gentoo, Fedora, and
others, already have well-established webs of trust. The keys are used for
package signing and other purposes, so it is important for those projects
to ensure that the keys are trusted. In fact, Henrique de Moraes Holschuh
suggested that geographically isolated
kernel developers might find it easier to track down a nearby Debian
developer to get their key signed. Most Debian developers' keys are in the
"strong set" of interconnected keys
in the web of trust and keys signed by strong set members automatically
join that set.
In order to sign a particular public key, a user must access their private
key, but, for verifying a signature, only the signer's public key is required.
Basically, a cryptographic hash of the item to be signed is calculated and
the hash value is what actually gets encrypted using the private key.
Because of the way PKC works, the public key can be used to decrypt the
hash value, which can then be compared to the hash value of the signed item.
If the two match, then only the holder of the private key (which should
correspond to the identity associated with the public key) could have generated
the signature.
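That sign-then-verify flow is easiest to see with a detached signature, which is also how release tarballs and Git tags are typically protected. A sketch with a hypothetical identity and a throwaway keyring (GnuPG 2.x assumed):

```shell
# Throwaway keyring and unattended-mode helper (empty passphrase, demo only)
export GNUPGHOME="$(mktemp -d)"; chmod 700 "$GNUPGHOME"
gpgq() { gpg --batch --pinentry-mode loopback --passphrase '' "$@"; }
gpgq --quick-generate-key "Alice <alice@example.org>" default default never

echo "release tarball contents" > item.txt
# Signing hashes the file and encrypts the hash with the private key
gpgq --yes --detach-sign --output item.txt.sig item.txt
# Verification needs only the public key
gpg --verify item.txt.sig item.txt
# Any change to the signed item changes its hash, so verification fails
echo "tampered" >> item.txt
gpg --verify item.txt.sig item.txt || echo "BAD signature detected"
```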
Because protecting private keys is so important, many GPG users only store
those keys in a single secure location (encrypted on a secure machine or USB
stick, not on their laptop). That means that the private key may not be
available when someone requests a key signature. The key can still be
signed, however, by collecting the key "fingerprint" (a shorter hash value
that represents the key) and verifying the person's identity, then doing the
actual signing later. The key to be signed can be retrieved from a
keyserver and the fingerprint verified. If they match, the key can be
signed and sent back to the keyserver with the new signature applied.
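The commands in that deferred-signing workflow might look like the following sketch. The identity and keyserver are just examples, and the keyserver steps are left as comments because they require network access:

```shell
# Throwaway keyring standing in for the key whose fingerprint was collected
export GNUPGHOME="$(mktemp -d)"; chmod 700 "$GNUPGHOME"
gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-generate-key "Alice <alice@example.org>" default default never

# The fingerprint that would be written down and verified in person:
gpg --fingerprint alice@example.org

# Later, back at a trusted machine (example keyserver, not run here):
#   gpg --keyserver hkps://keyserver.ubuntu.com --recv-keys <fingerprint>
#   gpg --fingerprint <fingerprint>   # compare against the written-down value
#   gpg --sign-key <fingerprint>      # sign only if the fingerprints match
#   gpg --keyserver hkps://keyserver.ubuntu.com --send-keys <fingerprint>
```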
As part of the process for bringing kernel.org back, the administrators
have put out some guidelines for generating
keys and getting them signed. Several key
signing parties are planned as well so that kernel hackers' keys can
more quickly gather enough signatures to establish a reasonably sized web
of trust. That way, the administrators can have confidence that they can
send sensitive credential information to the right parties. That, in turn,
will allow various kernel trees to return to the kernel.org infrastructure.
While the compromise of kernel.org is embarrassing—and
worrisome—there is something of a silver lining to the incident. It
will result in much tighter security, not only for kernel.org, but likely
for various other pieces of critical free software infrastructure as well. With
luck, it will serve as a wakeup call to many different projects and
organizations who may have gotten a bit lax with their security. GPG
and its web of trust will be useful tools in those efforts.