Responses to gpg.fail
At the 39th Chaos Communication Congress (39C3) in December, researchers Lexi Groves ("49016") and Liam Wachter said that they had discovered a number of flaws in popular implementations of the OpenPGP email-encryption standard. They also released an accompanying web site, gpg.fail, with descriptions of the discoveries. Most of those presented were found in GNU Privacy Guard (GPG), though the pair also discussed problems in age, Minisign, Sequoia, and the OpenPGP standard (RFC 9580) itself. The discoveries have spurred some interesting discussions, as well as responses from GPG and Sequoia developers.
Flaws
Out of 14 discoveries listed on the gpg.fail site, 11 affect
GPG—they range from a flaw in GPG's --not-dash-escaped
option (that would allow signature
forgery) to a memory-corruption
flaw that "might be exploitable to the point of remote code
execution (RCE)
". Two of the discoveries affect Minisign (one, two); both of the
vulnerabilities allow attackers to insert content into trusted-comment
(e.g. metadata attached to a signature) fields.
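To see why that matters, it helps to look at the shape of a minisign signature file. The sketch below parses the four-line layout described in minisign's documentation; the comments and base64 strings in the sample are fabricated for illustration. The key point is that the trusted comment is covered by a second, "global" signature, so verifiers are expected to treat it as authentic metadata:

```python
# Minimal sketch of parsing a minisign .minisig file (layout per the
# minisign documentation); the sample contents below are fabricated.
SAMPLE = """\
untrusted comment: signature from minisign secret key
RUQ0aW5rZXJlZC1zaWduYXR1cmUtYnl0ZXM=
trusted comment: timestamp:1703462400\tfile:release.tar.gz
Z2xvYmFsLXNpZ25hdHVyZS1ieXRlcw==
"""

def parse_minisig(text: str) -> dict:
    lines = text.splitlines()
    return {
        # The untrusted comment is not signed and can be changed freely.
        "untrusted_comment": lines[0].removeprefix("untrusted comment: "),
        "signature_b64": lines[1],
        # The trusted comment is covered by the global signature below,
        # so injecting content here defeats signed metadata.
        "trusted_comment": lines[2].removeprefix("trusted comment: "),
        "global_signature_b64": lines[3],
    }

print(parse_minisig(SAMPLE)["trusted_comment"])
```

Since only the trusted comment is protected, a bug that lets an attacker smuggle content into that field undermines exactly the guarantee the format is meant to provide.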
The researchers also described an exploit in OpenPGP's Cleartext
Signature Framework, which could allow an attacker to replace
the signed data with malicious content "while retaining a seemingly
valid cryptographic verification
" when using GPG or Sequoia. It is worth
noting, as they did, that the framework already has documented
issues and the GPG project recommends
against its use, though it is still supported and the
recommendation was in the form of a short phrase in a man page.
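For readers unfamiliar with the framework: a cleartext-signed message wraps human-readable text between armor markers, with lines that start with a dash protected by "dash escaping". Below is a minimal sketch of extracting the signed text from a well-formed message; the sample is hand-made and its signature block is not real:

```python
# Sketch of extracting the signed text from an OpenPGP cleartext
# signature (Cleartext Signature Framework, RFC 9580). The message
# below is fabricated; the base64 blob is not a real signature.
MESSAGE = """\
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

The quick brown fox.
- -- a dash-escaped line
-----BEGIN PGP SIGNATURE-----

bm90IGEgcmVhbCBzaWduYXR1cmU=
-----END PGP SIGNATURE-----
"""

def extract_signed_text(msg: str) -> str:
    lines = iter(msg.splitlines())
    # Skip to the armor header.
    for line in lines:
        if line == "-----BEGIN PGP SIGNED MESSAGE-----":
            break
    # Skip "Hash:" headers up to the blank separator line.
    for line in lines:
        if line == "":
            break
    body = []
    for line in lines:
        if line == "-----BEGIN PGP SIGNATURE-----":
            break
        # Undo dash escaping: a leading "- " protects lines that start
        # with a dash from being confused with armor markers.
        body.append(line[2:] if line.startswith("- ") else line)
    return "\n".join(body)

print(extract_signed_text(MESSAGE))
```

The fragility is visible even in this toy parser: the signed text is delimited only by in-band markers in attacker-visible plaintext, which is part of why the format keeps producing substitution bugs.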
We won't try to recap each vulnerability in detail here; the gpg.fail site already has detailed write-ups of the discoveries. In addition, Groves and Wachter's presentation (video) is informative as well as entertaining. The slides and the proofs of concept from the presentation have not yet made their way to the site; I sent a query about the slides on January 15 to the contact email provided on the site, but have not received any reply.
Reactions
Demi Marie Obenour started
a discussion of the GPG vulnerabilities on the oss-security
mailing list, which prompted some of the list's participants to
examine and weigh in on the researchers' claims. Jacob Bachmeyer was
quick to respond
to many of the researchers' findings. He agreed that
the memory-corruption flaw was a serious error in GPG, but claimed
that most of the flaws reported "are definitely edge cases
", and minimized the possible real-world impact. For example, he
said that a flaw in GPG's
sanitization of file paths was potentially serious, but that it
also relied on a social-engineering attack. In short, the described
attack relies on a user following an attacker's suggested method of
opening a file using GPG, which then would trigger a fake prompt,
which would look like this:
$ gpg --decrypt pts.enc && gpg pts.enc
gpg: WARNING: Message contains no signatures. Continue viewing [Y/n]?
If the user responds affirmatively to the prompt, the proof of
concept designed by the researchers would overwrite a file of the
attacker's choosing. Bachmeyer was dismissive of this, though, because
he felt it would only be effective against inexpert users. "While a
naive user might use the suggested command, a more-experienced user
should immediately smell a rat.
" Bachmeyer had also wondered about
this discovery about
gpg truncating plain text in a way that would allow an
attacker to extend a signed message with arbitrary data in a way that
would still pass signature verification. He said that if there was a
bug, then it was an out-of-bounds read.
In response,
Groves acknowledged that the exploit chain would need to abuse the
naivety of a user to trigger the technical problem. Despite that, she
said that software should do its best to protect against human
error. It might not fool "a hardcore cypherpunk, but to be honest,
it'd get me
".
Groves also apologized for getting the writeup of the signature
truncation exploit "slightly wrong
", but said that it had been
correctly described in the presentation. She then provided a lengthy,
in-depth explanation of the bug. She said, in part, that the bug
enabled a malleability attack, in which an attacker could try to
manipulate the output of a GPG
operation. That was practically exploitable because GPG defaults to
standard Zlib compression, which has a predictable header, and an
attack would only require guessing seven bytes from the header to set it
up. The OpenPGP standard describes protection
against malleability that GPG violates in two ways. The standard
says that an implementation should not attempt to parse or release
data to the user if it appears to be malleable. GPG, however, still
does attempt this. The standard also says that an implementation must
generate a clear error that indicates the integrity of a message is
suspect, but Groves said that it is possible for an attacker to
circumvent that.
This is bypassed by *another* described bug, where by triggering an error *before* the checksum is printed, we can change the error message from "WARNING: encrypted message has been manipulated!" to a harmless-appearing "decryption failed: invalid packet". A user looking at the plausible PGP packet stream output would not suspect that there is anything wrong [...]
This chain of exploits allows doing this by just abusing logic bugs and odd decisions in GnuPG. Several of those, especially the bypass silencing the warning that MUST be printed, are technical, logical bugs that can and should be fixed.
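Groves's observation about the predictable compression header can be illustrated with Python's zlib module. This is only a rough analogue: GPG wraps the zlib stream in OpenPGP packet framing, which accounts for the additional bytes she mentioned, but the underlying predictability is the same:

```python
import zlib

# Compressing different plaintexts with default settings always yields
# the same two-byte zlib header (0x78 0x9c), so an attacker manipulating
# a ciphertext needs to guess very little about how the compressed
# stream begins.
samples = [b"attack at dawn", b"meet at the bridge", b"x" * 1000]
headers = {zlib.compress(s)[:2] for s in samples}
print(headers)  # a single, predictable value
```

Because every default-compressed message starts identically, "guessing" the header is not really guessing at all; only the packet-framing bytes around it add any uncertainty.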
Bachmeyer responded
that he now saw what he missed on the first examination. "Clever,
very clever. :-)
"
GPG creator and maintainer Werner Koch said
on December 29 that he agreed with most of the comments in
Bachmeyer's first email. Koch pointed to a tracking bug for the reports
that were filed with the GPG project by the researchers. According to
Koch, the reports were filed "one after the other
" in
October.
Because there was no clear statement on when we were allowed to publish them and no further communication, most of them were set to private. I set them to public when I noticed the schedule for the talk on December 26.
Koch called the memory-corruption bug good research but said it
was "the only serious bug from their list
" and that it was
questionable whether it would actually allow an RCE. That bug was
fixed in the 2.5.14
release in November, but it had not been fixed in the 2.4
branch. Koch said that there was another release of 2.4 pending, which
presumably would contain a fix for the bug. However, he added that the
end of life for 2.4 would be coming in six months, so it would be
better for users to switch to 2.5.
GPG 2.4.9 was, in fact, released on December 30—though one would be forgiven for having missed it as there was no announcement of its release. It is not mentioned in the NEWS file that the GPG site advises users to consult, nor has it been mentioned on the gnupg-announce mailing list, though the 2.5.16 release was announced the same day that 2.4.9 was published.
Koch published a blog
post ("Cleartext Signatures Considered Harmful") that recommends
detached
signatures instead. In one of the bug reports, Koch said that the suggestion
to remove cleartext signatures entirely was a no-go: "there are too
many systems (open source or in-house) that rely on that format. If
properly used (i.e. using --output to get the signed text) there is no
problem
".
"Staggeringly complex"
Peter Gutmann said
that he was concerned that two researchers "walked up to GPG and
quickly found a pile of bugs, many relating to authentication
".
OpenPGP signatures are the de facto standard for authenticating
code and binaries in the non-Windows world and GPG, "the one with
all the bugs in its authentication handling
", is what's used. The
first problem, he said, is that GPG is staggeringly complex. It is not
just a sign-and-encrypt application, but one that spawns off other
programs, has many command-line options that change between releases,
and even runs services. The other problem is the OpenPGP format
itself.
To appreciate just how bad it really is, grab a copy of RFC 9580 and see how long it takes you to write down the sequence of fields (not packets, fields) that you'd expect to see in a message encrypted with RSA and signed with Ed25519 (to grab the two opposite ends of the cryptographic spectrum) as well as the cryptographic bindings between them, i.e. which bits get hashed/signed/ processed, and also provide a confidence level in your work. I suspect most people won't even get to that, the answer would be "I can't figure it out".
He said that it would be better if an application that does
something critical, like authenticating downloaded binaries, does that
and nothing more. Obenour suggested
it might make sense to use OpenSSH signatures instead. Gutmann replied
that PGP signatures are fine as long as they are used in a way that "there's
only one simple, unambiguous, and minimal-attack-surface way to apply them,
as well as a means of having them work over longer time periods
". He has, apparently, had some unpleasant experiences
with trying to find unexpired keys to verify Linux packages or images.
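The OpenSSH signatures that Obenour mentioned do fit Gutmann's "one simple, unambiguous" criterion reasonably well. The sketch below drives that workflow from Python for illustration; it requires ssh-keygen (OpenSSH 8.0 or later) on the PATH, and the file names and the releases@example.org identity are placeholders:

```python
# Sketch of the OpenSSH signature workflow; requires ssh-keygen on PATH.
# All names here (signing_key, releases@example.org) are placeholders.
import pathlib
import subprocess
import tempfile

tmp = pathlib.Path(tempfile.mkdtemp())
artifact = tmp / "artifact.txt"
artifact.write_text("important release artifact\n")

# Generate an Ed25519 signing key (no passphrase; illustration only).
subprocess.run(["ssh-keygen", "-q", "-t", "ed25519", "-N", "",
                "-f", str(tmp / "signing_key")], check=True)

# Sign: writes artifact.txt.sig next to the file. The -n namespace
# ("file" here) keeps a signature made for one purpose from being
# replayed for another.
subprocess.run(["ssh-keygen", "-Y", "sign", "-f", str(tmp / "signing_key"),
                "-n", "file", str(artifact)], check=True)

# Verifiers keep an allowed_signers list mapping identities to keys.
key_type, key_b64 = (tmp / "signing_key.pub").read_text().split()[:2]
(tmp / "allowed_signers").write_text(
    f"releases@example.org {key_type} {key_b64}\n")

# Verify the detached signature against the original data.
with artifact.open("rb") as data:
    subprocess.run(["ssh-keygen", "-Y", "verify",
                    "-f", str(tmp / "allowed_signers"),
                    "-I", "releases@example.org", "-n", "file",
                    "-s", str(artifact) + ".sig"],
                   stdin=data, check=True)
print("signature verified")
```

The allowed_signers file pins which public keys a verifier accepts for a given identity, sidestepping key servers, expiry headaches, and the web of trust entirely.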
Sequoia thoughts
Sequoia developer Neal H. Walfield published a blog
post on January 12 in response to the presentation, and in
particular to the Cleartext Signature Framework attack that "the
researchers claim demonstrates a security weakness in Sequoia
". He
praised the impressive number and breadth of vulnerabilities found,
but said that the researchers had used a "naive translation
" of
gpg commands to sq commands. Using the standard
workflows for sq would have prevented the attack from being
successful.
Despite that, he said that the researchers had found a real bug in Sequoia:
When verifying a signature using sq, the caller specifies the type of signature that should be checked. In this case, we use --cleartext. Yet, the inline signature was verified, which should only be done if the caller passed --message. This is due to a known issue in our library, which unfortunately we haven't yet had the chance to fix. Had we fixed this, this would have mitigated this attack. Nevertheless, the possibility for confusion remains, and the next step should always use the verified data and not the original file. We plan to address this issue this quarter. Thanks to the security researchers for showing us that the issue has a practical security impact.
Walfield concluded by saying, once again, that the researchers had done impressive work—but wished that they had explained that the signature-verification attack required incorrect use of Sequoia.
Overall, it would appear that the gpg.fail researchers have uncovered some real issues that need to be addressed—only some of which are easily patched. The complexity of the tools and their use will remain a barrier for secure use long after any code vulnerabilities are fixed.
