
Responses to gpg.fail

By Joe Brockmeier
January 21, 2026

At the 39th Chaos Communication Congress (39C3) in December, researchers Lexi Groves ("49016") and Liam Wachter said that they had discovered a number of flaws in popular implementations of the OpenPGP email-encryption standard. They also released an accompanying web site, gpg.fail, with descriptions of the discoveries. Most of those presented were found in GNU Privacy Guard (GPG), though the pair also discussed problems in age, Minisign, Sequoia, and the OpenPGP standard (RFC 9580) itself. The discoveries have spurred some interesting discussions as well as responses from GPG and Sequoia developers.

Flaws

Out of 14 discoveries listed on the gpg.fail site, 11 affect GPG—they range from a flaw in GPG's --not-dash-escaped option (that would allow signature forgery) to a memory-corruption flaw that "might be exploitable to the point of remote code execution (RCE)". Two of the discoveries affect Minisign (one, two); both of the vulnerabilities allow attackers to insert content into trusted-comment fields (i.e. metadata attached to a signature).

The researchers also described an exploit in OpenPGP's Cleartext Signature Framework, which could allow an attacker to replace the signed data with malicious content "while retaining a seemingly valid cryptographic verification" when using GPG or Sequoia. It is worth noting, as they did, that the framework already has documented issues and the GPG project recommends against its use, though it is still supported and the recommendation was in the form of a short phrase in a man page.
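For context, a cleartext signature leaves the message human-readable and appends an armored signature; verification covers the text between the markers, while the bytes around them are attacker territory. A heavily abbreviated illustration (the signature data here is elided, not real):

```
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

The signed text, readable without any OpenPGP tooling.
-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAd... (armored signature data elided)
-----END PGP SIGNATURE-----
```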

We won't try to recap each vulnerability in detail here; the gpg.fail site already has detailed write-ups of the discoveries. In addition, Groves and Wachter's presentation (video) is informative as well as entertaining. The slides and all of the proofs of concept from the presentation have not yet made their way to the site; I sent a query on January 15 about the slides to the contact email provided on the site, but have not received any reply.

Reactions

Demi Marie Obenour started a discussion of the GPG vulnerabilities on the oss-security mailing list, which prompted some of the list's participants to examine and weigh in on the researchers' claims. Jacob Bachmeyer was quick to respond to many of the researchers' findings. He agreed that the memory-corruption flaw was a serious error in GPG, but claimed that most of the flaws reported "are definitely edge cases", and minimized the possible real-world impact. For example, he said that a flaw in GPG's sanitization of file paths was potentially serious, but that it also relied on a social-engineering attack. In short, the described attack relies on a user following an attacker's suggested method of opening a file with GPG, which would then trigger a fake prompt that looks like this:

    $ gpg --decrypt pts.enc && gpg pts.enc
    gpg: WARNING: Message contains no signatures. Continue viewing [Y/n]?

If the user responds affirmatively to the prompt, the proof of concept designed by the researchers would overwrite a file of the attacker's choosing. Bachmeyer was dismissive of this, though, because he felt it would only be effective against inexpert users. "While a naive user might use the suggested command, a more-experienced user should immediately smell a rat." Bachmeyer also questioned another discovery: that gpg truncates plain text in a way that would allow an attacker to extend a signed message with arbitrary data that would still pass signature verification. He said that if there was a bug, then it was an out-of-bounds read.

In response, Groves acknowledged that the exploit chain would need to abuse the naivety of a user to trigger the technical problem. Despite that, she said that software should do its best to protect against human error. It might not fool "a hardcore cypherpunk, but to be honest, it'd get me".

Groves also apologized for getting the writeup of the signature truncation exploit "slightly wrong", but said that it had been correctly described in the presentation. She then provided an in-depth explanation of the bug. She said, in part, that the bug was really a malleability attack where an attacker could try to manipulate the output of a GPG operation. That was practically exploitable because GPG defaults to standard Zlib compression, which has a predictable header, and an attack would only require guessing seven bytes from the header to set it up. The OpenPGP standard describes protection against malleability that GPG violates in two ways. The standard says that an implementation should not attempt to parse or release data to the user if it appears to be malleable. GPG, however, still does attempt this. The standard also says that an implementation must generate a clear error that indicates the integrity of a message is suspect, but Groves said that it is possible for an attacker to circumvent that.
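Groves's point about the predictable compression header is easy to see in miniature: a zlib (RFC 1950) stream produced at the default compression level always begins with the same two bytes, regardless of the plaintext. This Python sketch only illustrates that predictability, not the attack itself:

```python
import zlib

# zlib (RFC 1950) streams carry a fixed two-byte header at the
# default compression level: 0x78 (deflate, 32KB window) 0x9c.
samples = [b"hello", b"a completely different plaintext", b"\x00" * 4096]
headers = {zlib.compress(s)[:2] for s in samples}
print(headers)
assert headers == {b"\x78\x9c"}
```

An attacker who can guess the compressed stream's leading bytes can make manipulated ciphertext decompress to plausible-looking output, which is what makes the predictable header useful in the chain Groves describes.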

This is bypassed by *another* described bug, where by triggering an error *before* the checksum is printed, we can change the error message from "WARNING: encrypted message has been manipulated!" to a harmless-appearing "decryption failed: invalid packet". A user looking at the plausible PGP packet stream output would not suspect that there is anything wrong [...]

This chain of exploits allows doing this by just abusing logic bugs and odd decisions in GnuPG. Several of those, especially the bypass silencing the warning that MUST be printed, are technical, logical bugs that can and should be fixed.

Bachmeyer responded that he now saw what he missed on the first examination. "Clever, very clever. :-)"

GPG creator and maintainer Werner Koch said on December 29 that he agreed with most of the comments in Bachmeyer's first email. Koch pointed to a tracking bug for the reports that were filed with the GPG project by the researchers. According to Koch, the reports were filed "one after the other" in October.

Because there was no clear statement on when we were allowed to publish them and no further communication, most of them were set to private. I set them to public when I noticed the schedule for the talk on December 26.

Koch called the memory-corruption bug good research but said it was "the only serious bug from their list" and that it was questionable whether it would actually allow an RCE. That bug was fixed in the 2.5.14 release in November, but it had not been fixed in the 2.4 branch. Koch said that there was another release of 2.4 pending, which presumably would contain a fix for the bug. However, he added that the end of life for 2.4 would be coming in six months, so it would be better for users to switch to 2.5.

GPG 2.4.9 was, in fact, released on December 30—though one would be forgiven for having missed it as there was no announcement of its release. It is not mentioned in the NEWS file that the GPG site advises users to consult, nor has it been mentioned on the gnupg-announce mailing list, though the 2.5.16 release was announced the same day that 2.4.9 was published.

Koch published a blog post ("Cleartext Signatures Considered Harmful") that recommends detached signatures instead. In one of the bug reports, Koch said that the suggestion to remove cleartext signatures entirely was a no-go: "there are too many systems (open source or in-house) that rely on that format. If properly used (i.e. using --output to get the signed text) there is no problem".

"Staggeringly complex"

Peter Gutmann said that he was concerned that two researchers "walked up to GPG and quickly found a pile of bugs, many relating to authentication". OpenPGP signatures are the de facto standard for authenticating code and binaries in the non-Windows world and GPG, "the one with all the bugs in its authentication handling", is what's used. The first problem, he said, is that GPG is staggeringly complex. It is not just a sign-and-encrypt application, but one that spawns off other programs, has many command-line options that change between releases, and even runs services. The other problem is the OpenPGP format itself.

To appreciate just how bad it really is, grab a copy of RFC 9580 and see how long it takes you to write down the sequence of fields (not packets, fields) that you'd expect to see in a message encrypted with RSA and signed with Ed25519 (to grab the two opposite ends of the cryptographic spectrum) as well as the cryptographic bindings between them, i.e. which bits get hashed/signed/processed, and also provide a confidence level in your work. I suspect most people won't even get to that, the answer would be "I can't figure it out".

He said that it would be better if an application that does something critical, like authenticating downloaded binaries, does that and nothing more. Obenour suggested it might make sense to use OpenSSH signatures instead. Gutmann replied that PGP signatures are fine as long as they are used in a way that "there's only one simple, unambiguous, and minimal-attack-surface way to apply them, as well as a means of having them work over longer time periods". He has, apparently, had some unpleasant experiences with trying to find unexpired keys to verify Linux packages or images.

Sequoia thoughts

Sequoia developer Neal H. Walfield published a blog post on January 12 in response to the presentation, and in particular to the Cleartext Signature Framework attack that "the researchers claim demonstrates a security weakness in Sequoia". He praised the impressive number and breadth of vulnerabilities found, but said that the researchers had used a "naive translation" of gpg commands to sq commands. Using the standard workflows for sq would have prevented the attack from being successful.

Despite that, he said that the researchers had found a real bug in Sequoia:

When verifying a signature using sq, the caller specifies the type of signature that should be checked. In this case, we use --cleartext. Yet, the inline signature was verified, which should only be done if the caller passed --message. This is due to a known issue in our library, which unfortunately we haven't yet had the chance to fix. Had we fixed this, this would have mitigated this attack. Nevertheless, the possibility for confusion remains, and the next step should always use the verified data and not the original file. We plan to address this issue this quarter. Thanks to the security researchers for showing us that the issue has a practical security impact.

Walfield concluded by saying, once again, that the researchers had done impressive work—but wished that they had explained that the signature-verification attack required incorrect use of Sequoia.

Overall, it would appear that the gpg.fail researchers have uncovered some real issues that need to be addressed—only some of which are easily patched. The complexity of the tools and their use will remain a barrier for secure use long after any code vulnerabilities are fixed.


Index entries for this article
Security: GNU Privacy Guard (GPG)



command line complexity

Posted Jan 21, 2026 17:37 UTC (Wed) by ballombe (subscriber, #9523) [Link] (9 responses)

Debian popularity-contest needs to encrypt a file non-interactively with a fixed, locally available, public PGP key.
This requires the creation of a temporary directory and the use of 11 command-line options to gpg:

GPGHOME=`mktemp -d`
gpg --batch --no-options --no-default-keyring --trust-model=always \
--homedir "$GPGHOME" --keyring $KEYRING --quiet \
--armor -o "$POPCONGPG" -r $POPCONKEY --encrypt "$POPCON"
rm -rf "$GPGHOME"

In particular there does not seem any way to specify a public key as a standalone file instead of as a part of a keyring.

command line complexity

Posted Jan 21, 2026 18:07 UTC (Wed) by hailfinger (subscriber, #76962) [Link] (1 responses)

Oh wow. According to the man page for Sequoia, this is easier with sq.

sq encrypt --for-file=publickey.pgp message.txt --output message.pgp

command line complexity

Posted Jan 21, 2026 18:33 UTC (Wed) by guillemj (subscriber, #49706) [Link]

With a SOP (Stateless OpenPGP CLI <https://dkg.gitlab.io/openpgp-stateless-cli/>) implementation it is trivial as well:

$ $SOP encrypt cert.pgp <message.txt >message.pgp

Where $SOP can be any of 'sqop', 'rsop', 'gosop', 'pgpainless-cli' (or other implementations) for example.

command line complexity

Posted Jan 21, 2026 18:22 UTC (Wed) by dskoll (subscriber, #1630) [Link]

Yes, trying to script anything with gpg is an absolute nightmare. It was never really designed to be run non-interactively, I think.

Plain text keyring directory.

Posted Jan 21, 2026 23:43 UTC (Wed) by alx.manpages (subscriber, #145117) [Link]

This is also relatively inherent to the fact that gpg(1) uses a binary keyring unnecessarily.

I wish (and suggested in the mailing list some time ago) that the keyring was just a set of plain-text files, similar to how SSH works. That would (likely) as a side effect make it easier to specify different files for a given use, or maybe even stdin.

command line complexity

Posted Jan 22, 2026 5:22 UTC (Thu) by jkingweb (subscriber, #113039) [Link]

There's something deeply ironic about a command invocation with eleven options where one of the options is "--no-options".

command line complexity

Posted Jan 22, 2026 11:52 UTC (Thu) by dd9jn (✭ supporter ✭, #4459) [Link] (3 responses)

Available since 2.1.14, released in summer 2016, which should even have been available in Debian for some years:
   gpg -e -a --batch  -o "$POPCONGPG" -f "$FILEWITHKEY"  "$POPCON"
I would of course use this in a pipeline without -o when sending, though.

command line complexity

Posted Jan 22, 2026 15:36 UTC (Thu) by IanKelling (subscriber, #89418) [Link]

Random anecdote:

I discovered the "gnupg-ring:" option from randomly grepping source code. It fixed my problem of working with keyring files after an upgrade. E.g.: gpg --no-default-keyring --keyring gnupg-ring:/file/path.gpg

command line complexity

Posted Jan 22, 2026 20:12 UTC (Thu) by ballombe (subscriber, #9523) [Link] (1 responses)

Thanks, this is useful!

(gnupg (v1) support was added to Debian in summer 2013).

The full name of the -f option is --recipient-file, which does not make this option easy to find, since there are no recipients involved.

command line complexity

Posted Jan 24, 2026 19:13 UTC (Sat) by smcv (subscriber, #53363) [Link]

The option name does make sense if you think of gpg as a system for encrypting and authenticating messages (emails, or messages being submitted via http, or similar). In Debian's popcon, the recipient of the message (which therefore needs to be able to decrypt it) is the popcon server rather than a person, but it's still true that it's the recipient if you think of it that way.

communication

Posted Jan 21, 2026 19:18 UTC (Wed) by Phantom_Hoover (subscriber, #167627) [Link] (6 responses)

Given the long-rising tensions between FOSS maintainers and security researchers I really think the latter should be thinking carefully about branding vulnerability drops with names like ‘gpg.fail’. I get that this stuff was fun and punky back in the day but security now has a big, boring and serious compliance industry attached to it and maintainers are already cracking under the strain; they don’t need their work insulted on top of that. At the end of the day, vulnerability disclosures by themselves do *nothing* to make anyone safer: they depend on the labour of maintainers patching them to materialise any actual benefit. So these researchers are part of a collaborative effort, and I don’t address my colleagues by getting up on stage and slating them for their epic fails.

communication

Posted Jan 21, 2026 20:53 UTC (Wed) by tux3 (subscriber, #101245) [Link]

>At the end of the day, vulnerability disclosures by themselves do *nothing* to make anyone safer

Well, I was writing another response, but it vanished in a misclick. I'll say that at least for me, I'd heard of all those fancier modern alternatives and never had any reason to use them instead of good old battle tested GPG. I didn't expect GPG to be this complex internally, for what little use I make of it. I will be marginally safer, and I'll thank both the maintainers and the researchers.

I'm particularly impressed by the age maintainer, who responded by delivering an award in person to the researchers. My eyes can't help but see a contrast in the responses.

Disclosure timelines eventually end in publication of unpatched bugs. Vulnerability research can be inconvenient for compliance. I think that compliance and security are just two very different axes, aren't they?

communication

Posted Jan 22, 2026 4:16 UTC (Thu) by wtarreau (subscriber, #51152) [Link] (1 responses)

I totally agree with you. All these domain names, logos etc are only there to maximize the buzz that creates self-promotion for the security teams who find bugs. But they're not heroes, just code reviewers who find complicated bugs, and they should be more respectful of the ones doing all the fixing work, very often in code they didn't write themselves, but only inherited from previous developers, or accepted from external contributions.

There can be plenty of reasons to criticize the gpg tool for being overly complicated to use, almost unusable in scripts not running in a tty, or for spawning an agent daemon even when you don't want one in recent versions, but this might just not be the right tool for certain tasks. And in any case it doesn't deserve insults like filing a domain such as this one, that the project maintainers will have no handle on regardless of their efforts to fix everything mentioned there.

When you think about it, for many OSS developers, the project they work on is an important part of their resume. Here, you apply for a job, proudly displaying 10 years in gpg, and the employer says "ah, yes, any responsibility in gpg.fail?". It's just not fair to force these people to justify themselves when everyone has been relying on their work, so I hope this domain will not be renewed once it expires.

communication

Posted Jan 25, 2026 17:04 UTC (Sun) by SLi (subscriber, #53131) [Link]

> There can be plenty of reasons to criticize the gpg tool for being overly complicated to use, almost unusable in scripts not running in a tty, or for spawning an agent daemon even when you don't want one in recent versions, but this might just not be the right tool for certain tasks.

Would you allow that there might be some responsibility for a developer of a major security tool that gains significant "not right tool" uses to be more vocal about it not being the right tool?

communication

Posted Jan 22, 2026 9:26 UTC (Thu) by neal (subscriber, #7439) [Link]

Disclosure: I'm one of the Sequoia PGP co-founders.

I fully agree with this comment: we need to tone down the language. If security researchers are professionals and intend to collaborate with maintainers, then sensationalism has no place in the discourse.

Relatedly, the security researchers want and deserve respect. But, it's hard to separate the wheat from the chaff and unfortunately the wheat to chaff ratio is very low. The Sequoia project receives multiple vulnerability reports per week. (This is partially because we have a bug bounty program, but a lot of reports are not submitted via the bug bounty program.) The reports are mostly convincing and invalid. This is because almost all---both the valid and the invalid ones!---are generated using AI. I simply cannot respond to most of them.

communication

Posted Jan 22, 2026 11:17 UTC (Thu) by dd9jn (✭ supporter ✭, #4459) [Link]

Well said. Thank you for this comment.

A few days ago we received a few other bug reports and one of them is indeed serious. As usual we work with them, fix the bugs, prepare new releases and publish this all. And then we can only hope that many sites will update to fix that vulnerability. Business and community work as usual.

communication

Posted Jan 24, 2026 14:34 UTC (Sat) by Heretic_Blacksheep (subscriber, #169992) [Link]

I don't agree. At all.

The reason I don't agree is because disclosure, regardless of how it's managed, prompts bad faith companies and even open source projects with self-promoted claims to security and quality to come clean or be proven to have no clothes. The bombast, if you will, actually started with grandiose claims by software companies and open source projects first (ex. OpenBSD's over-the-top security claims that have been walked back several times over the years).

Obviously, there's collateral damage to projects like cURL that try their best to Do The Right Thing without a lot of hoopla. What needs to change isn't necessarily the "tone", but the gating of quality reports, because some of these supposedly overblown inquiry results are not at all overblown, rather they're more like the preliminary steps that initiate repairs to the software code from organizations that would rather just sit on bad code indefinitely (*ahem* Oracle) till they take a PR/sales hit, or prompt people to move away to better maintained products or projects. This has been going on for years, and I doubt it's going to change. The question isn't what's "professional" or not, it's how open source projects deal with the not-really-new-but-definitely-evolving disclosure landscape.

The tone of professionalism is very much an opinion and one of culture. Some people don't like this, some don't like that. Others heartily approve something else entirely. As a case in point, many Americans often find The Register's tone as unnecessarily sarcastic, abrasive, and unprofessional, while many Brits see it as normal professional journalistic bombast.

These discussions are worthy to hold whether the problems were already known, or they're new. This industry has a serious problem with the New Kids ignoring the Old And Busted then tripping over stuff that was a known problem or "new" technique that's actually 20+ years old. GPG.fail was a mix of old and new, and they both needed to be (re)visited, and where necessary, pointed out that something that was a bad idea in 1996 but there was no way to change it then probably should have been removed by 2025 since a viable, more secure, replacement has been in place for what 10-15 of those years? Sure give people time to change over, but that shouldn't take more than 3-4 years tops, even if it's an enterprise (who will sit on broken tech indefinitely till they're forced to move by hook or crook).

Disconnect between the developers and the users

Posted Jan 22, 2026 9:12 UTC (Thu) by dottedmag (subscriber, #18590) [Link]

> the only serious bug from their list

This raises a question: who is the target audience for the gpg CLI tool?

Serious security researchers? Then why does the project's website not have a huge "Do not even attempt to use this project yourself" banner, why are gpg signatures used to verify downloads, and why is gpg CLI usage given as "how to check the downloads" documentation?

Non-security gurus? Then these "non-serious bugs" are actually very, very serious.

"Staggeringly complex"

Posted Jan 22, 2026 9:57 UTC (Thu) by hailfinger (subscriber, #76962) [Link] (1 responses)

One thing which is a bit counter-intuitive is that GnuPG had a much longer time to mature than the other OpenPGP implementations, but it still has the majority of the bugs found here.

Possible explanations:
- GnuPG had more bugs to begin with
- The other implementations chose a language less prone to bugs (but most of the bugs here seem to be logic bugs)
- The other implementations are easier to understand and their code is easier to read
- GnuPG was mostly implemented before people knew how to do secure programming
- The other implementations make it easier to contribute fixes or refactoring

None of the explanation attempts above are particularly reassuring.
I prefer running battle-tested code, but here apparently the length of the battle-testing didn't matter as much as the overall code quality. If that means most GnuPG usage should be replaced by less error-prone implementations, maybe following the lead of Debian is a good idea. https://wiki.debian.org/OpenPGP/Sequoia https://lwn.net/Articles/1017315/

"Staggeringly complex"

Posted Jan 22, 2026 12:16 UTC (Thu) by kevincox (subscriber, #93938) [Link]

Another very possible explanation is that GnuPG is the most notable implementation so that was the main focus of the researchers. When they found an issue that worked against GnuPG they mostly tried that issue and minor variations against the other implementations, or otherwise just spent less time focusing on them.

Fedora Verification Instructions

Posted Jan 22, 2026 10:25 UTC (Thu) by neal (subscriber, #7439) [Link]

The presentation opens with a very impressive attack that appears to show the researchers following the Fedora download verification instructions, and then booting an image that is clearly not Fedora. The instructions were:
  1. Download an ISO image
  2. Download the Fedora OpenPGP certificate
  3. Download the signed checksum file
  4. Use gpgv to verify the signed data in the checksum file
  5. Use sha256sum to verify the image using the checksum file
The issue that the attackers took advantage of was that sha256sum used the original checksum file, but that is not exactly what gpgv verified. Instead, sha256sum should have used the verified data. There's a discussion on the fedora-devel mailing list about this, which has resulted in an issue against the Fedora Website. Within a couple of days, the webmasters updated the instructions. Now, instead of the broken two-step verification:
gpgv --keyring ./fedora.gpg Fedora-Workstation-43-1.6-x86_64-CHECKSUM
sha256sum --ignore-missing -c Fedora-Workstation-43-1.6-x86_64-CHECKSUM
there is one step where the verified output is sent to sha256sum:
gpgv --keyring ./fedora.gpg --output - \
                  Fedora-Workstation-43-1.6-x86_64-CHECKSUM \
                  | sha256sum -c --ignore-missing
Or, using sq:
sq verify --cleartext --signer-file ./fedora.gpg \
                  Fedora-Workstation-43-1.6-x86_64-CHECKSUM \
                  | sha256sum -c --ignore-missing
Of particular note: neither gpgv nor sq had to be updated to fix this issue. (That's not to say that misusing a tool is not an issue with the tool. IMHO, tools should make it hard to make mistakes, and we should constantly be on the look out for ways to improve our tooling.)
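The principle behind the fix—consume the data the verifier actually checked, not the file it was handed—can be sketched with a toy scheme. Here an HMAC stands in for an OpenPGP signature, and every name below is illustrative, not part of any real tool:

```python
import hashlib
import hmac

KEY = b"demo signing key"  # hypothetical; stands in for the distro's key

MARK = b"-----BEGIN SIGNED-----\n"
SIG = b"\n-----SIG-----\n"

def clearsign(data: bytes) -> bytes:
    # Wrap the payload in markers and append a MAC over the payload only.
    mac = hmac.new(KEY, data, hashlib.sha256).hexdigest().encode()
    return MARK + data + SIG + mac

def verify(blob: bytes) -> bytes:
    # Like `gpgv --output -`: return ONLY the data the signature covers.
    start = blob.index(MARK) + len(MARK)
    data, _, mac = blob[start:].partition(SIG)
    expect = hmac.new(KEY, data, hashlib.sha256).hexdigest().encode()
    if not hmac.compare_digest(mac, expect):
        raise ValueError("bad signature")
    return data

checksums = b"deadbeef  Fedora.iso"
signed = clearsign(checksums)

# Attacker prepends a bogus checksum line *outside* the signed region.
tampered = b"cafebabe  Malware.iso\n" + signed

assert verify(tampered) == checksums          # signature still verifies...
assert b"Malware.iso" in tampered             # ...but the raw file is dirty
assert b"Malware.iso" not in verify(tampered) # the verified output is clean
```

Feeding `tampered` straight to a checksum tool would accept the attacker's line; feeding it the output of `verify()` cannot, which is exactly the difference between the broken and fixed Fedora instructions.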

cleartext problems known for 30 years

Posted Jan 22, 2026 11:44 UTC (Thu) by dd9jn (✭ supporter ✭, #4459) [Link] (3 responses)

Hi!

I already mentioned this in my article on gnupg.org: the problems with cleartext signatures are old and should thus be known to hackers and implementers of MUAs and other tools that provide a signature status. I remember the time I followed the Mutt ML and its IRC channel nearly 30 years ago. In those pre-PGP/MIME times it was kind of a game to find clever ways to circumvent the cleartext signature verification. Most bug reports from the 39C3 use the same pattern. It is unfortunate that useful knowledge obviously gets lost over the decades.

cleartext problems known for 30 years

Posted Jan 22, 2026 13:36 UTC (Thu) by yourfate (subscriber, #175466) [Link] (2 responses)

It seems like it should have been fixed / removed within the last 30 years then.

cleartext problems known for 30 years

Posted Jan 22, 2026 15:07 UTC (Thu) by dd9jn (✭ supporter ✭, #4459) [Link] (1 responses)

30 years ago it could not be removed because PGP/MIME did not yet exist or was only implemented by Mutt and not by the back then more common MUAs (e.g. Pine). Or think of IRC and BBS.

20 years ago PGP/MIME was widely used but cleartext was still in active use. Also at that time it was common to sign manifest files using cleartext signatures. If verified properly, this is no problem. However, still today not everyone implementing such a scheme gets it right. I have doubts that this will get better by switching to detached signatures.

OTOH, we should be glad that meanwhile most projects know about the importance of signatures for the software ecosystem. Well, most - when I need to update supporting libraries used by Gpg4win, I stumble upon projects with no way to verify that the download is authentic (e.g. libpng). As an attacker I would start there, updating image libraries is often required due to their complexity and thus bug proneness.

cleartext problems known for 30 years

Posted Jan 25, 2026 17:06 UTC (Sun) by SLi (subscriber, #53131) [Link]

I think if Fedora gets it wrong in their release process, that should demonstrate that there's way too much confidence in people knowing to not use it this way.

Regarding Minisign

Posted Jan 23, 2026 9:11 UTC (Fri) by SageHane (subscriber, #177988) [Link]

The developer of Minisign voiced his view of the described flaws here: https://github.com/jedisct1/minisign/issues/175#issuecomm...


Copyright © 2026, Eklektix, Inc.
This article may be redistributed under the terms of the Creative Commons CC BY-SA 4.0 license
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds