
The State of OpenSSL for pyca/cryptography

Paul Kehrer and Alex Gaynor, maintainers of the Python cryptography module, have put out some strongly worded criticism of OpenSSL. It comes from a talk they gave at the OpenSSL conference in October 2025 (YouTube video). The post goes into a lot of detail about the problems with the OpenSSL code base and testing, which has led the cryptography team to reconsider using the library. "The mistakes we see in OpenSSL's development have become so significant that we believe substantial changes are required — either to OpenSSL, or to our reliance on it." They go further in the conclusion:
First, we will no longer require OpenSSL implementations for new functionality. Where we deem it desirable, we will add new APIs that are only on LibreSSL/BoringSSL/AWS-LC. Concretely, we expect to add ML-KEM and ML-DSA APIs that are only available with LibreSSL/BoringSSL/AWS-LC, and not with OpenSSL.

Second, we currently statically link a copy of OpenSSL in our wheels (binary artifacts). We are beginning the process of looking into what would be required to change our wheels to link against one of the OpenSSL forks.

If we are able to successfully switch to one of OpenSSL's forks for our binary wheels, we will begin considering the circumstances under which we would drop support for OpenSSL entirely.




This was inevitably going to happen

Posted Jan 15, 2026 5:19 UTC (Thu) by wtarreau (subscriber, #51152) [Link] (3 responses)

... and it will continue as projects are forced to change their code to adapt to changing APIs, then discover massive performance regressions while testing the changes.

The accumulation of bad design choices is well explained in the article. Running "perf top" during operation is frightening, with malloc/free often taking the lead given how much stress the lib puts on them for tiny elements (I seem to remember noticing numerous calls to malloc(4) at some point, which made me wonder why that wasn't simply stored in a uint32_t instead).

I agree that the position of "openssl 1.1.1 performance will never be recovered" is totally unacceptable, given that it was already sub-standard due to locking and allocations. And the competition from much faster libraries like AWS-LC and Rustls, which are also far more responsive to their users, will put extra pressure on them to reconsider certain choices.

I think that more feedback like this article is needed. It's definitely unpleasant to write, because it feels a bit harsh to bash open-source software like this (we were in the same situation when writing our article on the state of SSL stacks), but it does help them better understand applications' use cases and what's at stake. They can see that it's not just a developer whining about a small API change, but deep problems that go as far as condemning some use cases. And for me, what happened here is a perfect illustration of how difficult it probably is for a library to sense how it's being used by applications.

On a positive note, we're in regular contact via periodic meetings with several members of the OpenSSL team, who recently became more open to hearing about the issues their users face. I sense that such discussions might not always be pleasant for them, but at least they're asking for this feedback and taking note of grievances, which is really great. They've improved their CI to integrate performance regression tests, not just on crypto functions but by directly running applications that rely on their lib. This is the right thing to do, and if they had been doing that during 3.0 development, it would simply never have taken that direction at all.

It's unclear how many years (decades?) it will take to undo all the damage done to the code base since 1.1.1, and whether they'll remain the main library or just linger on for a few legacy applications. I would still prefer to see them manage to fix their problems (starting with the technical decision-making process, so as to favor re-architectural work that addresses the causes rather than voting on strong beliefs leading to pointless workarounds like RCU), and ultimately take the lead back for the good of everyone. Let's wish them good luck, as it will be long and tedious work.

This was inevitably going to happen

Posted Jan 15, 2026 15:02 UTC (Thu) by willy (subscriber, #9762) [Link] (2 responses)

Could you clarify who "we" is in this comment?

This was inevitably going to happen

Posted Jan 15, 2026 23:12 UTC (Thu) by neggles (subscriber, #153254) [Link] (1 responses)

I believe he means the HAProxy project/company, and is referring to this post https://www.haproxy.com/blog/state-of-ssl-stacks (which is also linked from pyca's post)

This was inevitably going to happen

Posted Jan 16, 2026 4:15 UTC (Fri) by wtarreau (subscriber, #51152) [Link]

> I believe he means the HAProxy project/company

yes, sorry, I thought I had mentioned it, maybe I dropped it when refactoring my response before the final post.

Worse API?

Posted Jan 15, 2026 8:50 UTC (Thu) by maniax (subscriber, #4509) [Link]

I had to do some work with the API of OpenSSL 1.x in the olden days, and it always struck me as horrible, complicated and pretty much impossible to use correctly. I do remember a case where the documentation was wrong, most examples were wrong and the only working example was in an email in some OpenSSL list. I know that OpenSSL's locking/multithreaded support is so complicated that people solve that with single-process workers that handle just SSL, and skip all that.

I can't believe they actually made this worse.

Even though it'll be a pain, moving away from OpenSSL might be the right thing to do; overly complicating an already complicated system that people should be able to understand is just wrong.

Perl-preprocessed C in the 21st Century

Posted Jan 15, 2026 12:04 UTC (Thu) by paulj (subscriber, #341) [Link] (3 responses)

" in order to make managing arrays of OSSL_PARAM [core to the new API] palatable, many OpenSSL source files are no longer simply C files, they now have a custom Perl preprocessor for their C code"

Perl-preprocessed C to support a new API in 2021 is an... interesting choice.

Perl-preprocessed C in the 21st Century

Posted Jan 15, 2026 13:20 UTC (Thu) by kleptog (subscriber, #1183) [Link] (2 responses)

Looks to me like they got Second System Syndrome badly.

Not that the original OpenSSL API was any good, but it sure looks like they went way overboard. If you want keyword arguments, don't use C. Wrong tool for the job.

Perl-preprocessed C in the 21st Century

Posted Jan 15, 2026 14:37 UTC (Thu) by paulj (subscriber, #341) [Link]

Well... you actually can do keyword arguments in C reasonably easily and well (for a reasonable number of arguments), with no need to resort to expensive key-value APIs that require runtime lookups of key objects. Just use a struct for the keyword arguments, and have the caller pass them in via a compound literal; unspecified fields get the type's default value (e.g. 0 or NULL).

#include <stdio.h>

struct bar_args {
    int x;
    int y;
};

int bar(int z, struct bar_args args) {
    return z * args.x + args.y;
}

int main(void) {
    /* .y is unspecified, so it defaults to 0 */
    printf("2*2 + 0 = %d\n", bar(2, (struct bar_args) { .x = 2 }));
    printf("2*2 + 3 = %d\n", bar(2, (struct bar_args) { .x = 2, .y = 3 }));
    return 0;
}

Perl-preprocessed C in the 21st Century

Posted Jan 15, 2026 15:05 UTC (Thu) by willy (subscriber, #9762) [Link]

I was reminded of https://thedailywtf.com/articles/the_inner-platform_effect -- when you're layering your own programming language on top of the real programming language, you're definitely doing it wrongly.

Awesome article, some return from the field

Posted Jan 15, 2026 12:33 UTC (Thu) by Tarnyko (subscriber, #90061) [Link] (13 responses)

Awesome article!
Confirms the feedback of this earlier one ( https://lwn.net/Articles/1020309/ ). The tone is harsher, but I support it as it is meant to generate reactions - and ideally, constructive feedback.

I am using OpenSSL 3 currently, mainly because it is provided in the default toolchain. It works well enough... but I am not parsing lots of keys & certificates, so the bottleneck is mainly at startup: the rest is just communication, with a limited number of handshakes/negotiations (given it's not a general-purpose server).

That said, I have already planned for a switch if needed: the crypto backend is a parameter in the build system, and the software only directly calls an abstraction.

While it is a shame the alternatives (BoringSSL/LibreSSL) don't want to re-use the same public APIs for newer stuff, the article may explain why: their definition (based on generic arrays) is just not convenient.

I can't argue with that, but I would really like OpenSSL to catch up: spreading the effort between 3-4 backends/APIs means lots of extra work & subtle behavior changes, encouraging the use of wrappers (like we do in the GUI world)... we always managed to avoid that for crypto at least; better to keep it this way.

Awesome article, some return from the field

Posted Jan 15, 2026 12:45 UTC (Thu) by hkario (subscriber, #94864) [Link] (12 responses)

Except that the performance regression of 3.5 compared to 1.1.x is small; but that doesn't get the headlines, so it was omitted from the article...

Awesome article, some return from the field

Posted Jan 15, 2026 14:16 UTC (Thu) by Vorpal (guest, #136011) [Link] (4 responses)

Doesn't sound like the regression is small:

> Several years ago, we filed a bug reporting that elliptic curve public key loading had regressed 5-8x between OpenSSL 1.1.1 and 3.0.7. The reason we had noticed this is that performance had gotten so bad that we’d seen it in our test suite runtimes. Since then, OpenSSL has improved performance such that it’s only 3x slower than it used to be.

And

> As a result of these sorts of regressions, when pyca/cryptography migrated X.509 certificate parsing from OpenSSL to our own Rust code, we got a 10x performance improvement relative to OpenSSL 3 (n.b., some of this improvement is attributable to advantages in our own code, but much is explainable by the OpenSSL 3 regressions). Later, moving public key parsing to our own Rust code made end-to-end X.509 path validation 60% faster — just improving key loading led to a 60% end-to-end improvement, that’s how extreme the overhead of key parsing in OpenSSL was.

I cannot reconcile that with your statement. And as they said, performance is not the only problem, the API is terrible too.

Awesome article, some return from the field

Posted Jan 15, 2026 14:35 UTC (Thu) by pizza (subscriber, #46) [Link] (3 responses)

> Doesn't sound like the regression is small:
> I cannot reconcile that with your statement

The article you quote is specifically about 1.1.1 versus 3.0.7, whereas the comment you are replying to is about 3.5.x.

(3.0.0 was released in September 2021, 3.5.0 was released in April 2025, and a significant chunk of the work during those 3.5 years was focused on improving performance)

Awesome article, some return from the field

Posted Jan 15, 2026 14:41 UTC (Thu) by randomguy3 (subscriber, #71063) [Link] (2 responses)

> Since then, OpenSSL has improved performance such that it's only 3x slower than it used to be.

It doesn't make clear what version is being referred to here, but "since then" implies a current version - I would assume either 3.5 or 3.6.

Awesome article, some return from the field

Posted Jan 15, 2026 16:13 UTC (Thu) by hkario (subscriber, #94864) [Link]

pyca/cryptography folks have been complaining about OpenSSL since the release of 3.0, so, no, it's not a safe assumption.

Awesome article, some return from the field

Posted Jan 15, 2026 18:54 UTC (Thu) by iabervon (subscriber, #722) [Link]

From the fact that the next paragraph is about them switching some non-cryptographic parsing operations to their own Rust code and getting performance better than 1.1.1, I would assume that OpenSSL 3.something was 3x slower than 1.1.1 when they switched, and that they're not interested in profiling 3.5 or 3.6 unless they hear it's now significantly better than 1.1.1, not just about the same.

Awesome article, some return from the field

Posted Jan 15, 2026 15:02 UTC (Thu) by Tarnyko (subscriber, #90061) [Link] (5 responses)

Sure, I agree, and this was also my point: 3.5 seems to be only 3x slower, and mostly during the key parsing/negotiation phases (well, for negotiation it is logical to assume, but the latest article doesn't mention it... feel free to correct me if needed).
I don't develop general-use client/servers, so I don't rely so much on it: in short, it is not a deal breaker.

What could be a deal breaker in the future, though, is ecosystem fragmentation: if people start to drift away from OpenSSL, we're not assured of getting it in all future development toolchains. Hence the (currently unused) backend-swap option, which I personally hope never to have to use.

Awesome article, some return from the field

Posted Jan 15, 2026 16:23 UTC (Thu) by hkario (subscriber, #94864) [Link] (4 responses)

What is expensive in 3.x compared to 1.x is algorithm _fetching_. That's because the 3.x series supports providers, so a lot more needs to be done through generic APIs rather than algorithm-specific ones, and the library then needs to fetch the concrete implementation and validate all those parameters.

When you compare reusing the same object (without re-fetching and re-initializing the algorithm) on 1.x and 3.x, there are minuscule (single percentage point) differences between them. Here's an example of what happens if you don't use the API in an optimal way: https://github.com/openssl/project/issues/1681

Could "optimal" be easier? Maybe. But there's always a balance between many factors: ease of use, how generic the API is, how easy it is to provide a backend...

Upstream OpenSSL decided that adding completely new algorithms (or new implementations of existing algorithms) should be easy, as that allows the use of hardware accelerators, PKCS#11 modules, experimental crypto, national algorithms, etc. without having to include them in OpenSSL proper. For some people that's important, for others it's not.

Awesome article, some return from the field

Posted Jan 15, 2026 16:33 UTC (Thu) by Tarnyko (subscriber, #90061) [Link] (3 responses)

Thanks for the link to the benchmark: I value such feedback a lot.
In this case, I can confirm the impact on my specific use case is minimal (algorithm initialization is sparse, mostly done at program startup) and my concern is 99% about ecosystem.
I probably use the deprecated API signatures by the way; not that it matters if I read the results correctly.

Awesome article, some return from the field

Posted Jan 15, 2026 19:31 UTC (Thu) by hkario (subscriber, #94864) [Link] (2 responses)

Actually, there are much more comprehensive test results available: https://openssl-library.org/performance/

Awesome article, some return from the field

Posted Jan 16, 2026 5:06 UTC (Fri) by wtarreau (subscriber, #51152) [Link] (1 responses)

Yes, these are really nice and a great improvement. Unfortunately they don't include 1.1.1, so it's not always visible where applications faced a significant loss and what makes them complain.

Awesome article, some return from the field

Posted Jan 16, 2026 10:57 UTC (Fri) by hkario (subscriber, #94864) [Link]

They don't include 1.1.1 only for functions that are not in 1.1.1. Look for "evp_setpeer dh" as an example of one that has 1.1.1 data...

Awesome article, some return from the field

Posted Jan 16, 2026 5:04 UTC (Fri) by wtarreau (subscriber, #51152) [Link]

> the performance regression of 3.5 compared to 1.1.x is small

In client mode it's not. For us, 3.5 and 3.4 basically show the same performance. If you go to the end of this article https://www.haproxy.com/blog/state-of-ssl-stacks and scroll up to the last graph, you'll see that we're still facing a roughly 4.5x degradation in end-to-end TLS between 1.1.1 and 3.4 (and hence 3.5), despite the former already being significantly slower than the alternatives.

Thus it really depends on use cases, but for those who need to encrypt on both sides it's a real pain, and it explains why some applications are now willing to pay the high price of migrating to alternatives, as explained by the Python folks in this article.

Hanlon or not Hanlon?

Posted Jan 19, 2026 4:33 UTC (Mon) by marcH (subscriber, #57642) [Link] (1 responses)

Lack of QA and complexity make the perfect couple to hide... "intentional bugs" = backdoors. Complexity takes care of code reviews and static analysis.

I have enough experience to have observed Hanlon's razor left and right, but... OpenSSL is not "any" software. It's the type of target that secret services around the world prioritize, if they work as they are supposed to.

Hanlon or not Hanlon?

Posted Jan 19, 2026 18:22 UTC (Mon) by jd (guest, #26381) [Link]

Precisely, which is why security software needs far more stringent design and QA than you'd normally use; it should be regarded as mission-critical, with more than just a dash of "failure is not an option".

Open source learned that the hard way with Skipjack and two deliberately-tainted PRNGs, but also with contaminated compression libraries. Methinks it's time to stop with the learning and actually apply the lessons.

Now, I'm not suggesting that they do an seL4 and provide end-to-end proofs of implementation correctness (although, tbh, that would be truly awesome, and something I could see security vendors seriously mulling over as something they could "crowdsource" at the inter-corporate level), but there are plenty of simpler paradigms (such as contracts for functions) that could be statically checked against to detect suspicious behaviours and implementation flaws.

To be fair, though, it might well be that developers will have to pull a Linux, unless LibreSSL has a good architecture to work from (basically the EGCS approach).


Copyright © 2026, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds