
Bad practices all around

Posted Jul 9, 2024 22:58 UTC (Tue) by wahern (subscriber, #37304)
In reply to: Bad practices all around by Cyberax
Parent article: Offload-friendly network encryption in the kernel

> Check (modern crypto shouldn't need rekeying!).

Wireguard rekeys, and it's quite young still as protocols go. What are typically considered modern AEAD algorithms (e.g. as used in Wireguard or TLS) are actually a limitation in this regard, on account of their parameter sizing.[1] One could in theory do better with older HMAC-based schemes. There are *more* modern AEAD schemes, such as XChaCha20-Poly1305 with its extended nonce, but they haven't seen much uptake as of yet. Anyhow, there are other benefits to rekeying, especially functional ones, that can't simply be addressed by better primitives or APIs.

The big lie was that primitives like AES-GCM and ChaCha20-Poly1305, or protocols like Noise and Double Ratchet, conclusively solved most technical crypto problems. They didn't. Features like cryptographic agility that were decried 10+ years ago as patently stupid have in some cases returned as lauded features, as newer, better primitives appear and people rediscover age-old dilemmas in upgrading infrastructure.

What's old is new again, albeit slightly improved, fortunately. Cryptographic "best practices" will continue to evolve, sometimes circling back, just as they always have.

[1] Relatedly, the 32-bit SPI in PSP raises suspicion as a premature space optimization still encountered in "modern" crypto.



Bad practices all around

Posted Jul 9, 2024 23:17 UTC (Tue) by Cyberax (✭ supporter ✭, #52523) [Link] (2 responses)

In the case of Wireguard, rekeying is justifiable because its connections can last for years, and symmetric keys are not persisted. PSP is more like TLS, so there should be no _need_ to do rekeying.

Bad practices all around

Posted Jul 10, 2024 14:14 UTC (Wed) by anmoch (subscriber, #85760) [Link] (1 responses)

Can't HTTP/2 connections stay in use for days, perhaps longer? Especially if you use them for RPCs, as with gRPC for example (AFAIU, haven't used it).

Bad practices all around

Posted Jul 10, 2024 19:47 UTC (Wed) by Cyberax (✭ supporter ✭, #52523) [Link]

Yes? I kinda don't understand the point?

The only problem with long-lived connections in TLS/PSP is that AES-GCM is fragile under nonce reuse. If two different messages are encrypted with the same key and nonce, an attacker can recover the GHASH authentication key and forge authentication tags (and also learn the XOR of the two plaintexts), although the encryption key itself is not exposed.
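(As an aside, here's a toy sketch of why nonce reuse leaks plaintext for any CTR-style keystream cipher, which is what GCM's encryption layer is. The SHA-256 "keystream" below is purely illustrative, not real AES-GCM:)

```python
import hashlib

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Toy CTR-style keystream (illustration only, NOT real AES-GCM)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def xor_encrypt(key: bytes, nonce: bytes, plaintext: bytes) -> bytes:
    ks = keystream(key, nonce, len(plaintext))
    return bytes(p ^ k for p, k in zip(plaintext, ks))

key, nonce = b"k" * 16, b"n" * 12
p1, p2 = b"attack at dawn!", b"retreat at once"
c1 = xor_encrypt(key, nonce, p1)  # same key...
c2 = xor_encrypt(key, nonce, p2)  # ...same nonce: keystream repeats

# The keystream cancels out: c1 XOR c2 == p1 XOR p2, so knowing either
# plaintext reveals the other, without ever touching the key.
leak = bytes(a ^ b for a, b in zip(c1, c2))
assert leak == bytes(a ^ b for a, b in zip(p1, p2))
```

The same cancellation is what makes the GHASH key recoverable in real GCM, hence the forgery risk.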

This is mostly a theoretical attack. The nonce is only 96 bits, so if you use random nonces, you start having an appreciable risk of a collision after around 2^48 messages transmitted (so at least around 2^56 bytes, in reality even more). And if you use incrementing nonces instead of random ones, you are not at risk at all.
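For the curious, the birthday-bound arithmetic behind that 2^48 figure (my own back-of-the-envelope check, using the standard approximation):

```python
import math

NONCE_SPACE = 2 ** 96  # AES-GCM nonce is 96 bits

def collision_probability(messages: int) -> float:
    """Birthday bound: p ~ 1 - exp(-k(k-1) / 2N) for k random draws from N."""
    return 1.0 - math.exp(-messages * (messages - 1) / (2 * NONCE_SPACE))

# Negligible at "normal" connection lifetimes...
p_small = collision_probability(2 ** 32)  # roughly 1.2e-10
# ...but appreciable by 2^48 random nonces: ~39% chance of a repeat.
p_large = collision_probability(2 ** 48)
```

So "appreciable risk after around 2^48 messages" is, if anything, an understatement: at that point a repeated nonce is already more likely than not to be just around the corner.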

Bad practices all around

Posted Jul 9, 2024 23:33 UTC (Tue) by atnot (subscriber, #124910) [Link]

I think there's an important difference between "cryptographic agility" and the ability to upgrade a protocol? Cryptographic agility as an idea is, to me, very much still dead. And it keeps getting deader with more robust primitives and complex cryptosystems where you absolutely could not just swap out a primitive without a complete reevaluation of the whole system.

However that is different from not giving yourself any way of upgrading a protocol in the future, which is always a bad idea regardless of cryptography. You can still just have a version 2 which swaps out the crypto for another construction you've decided you like better. You don't need to make every client create a tier list of their favorite hash functions or play spot-the-difference in a list of block cipher modes for that.


Copyright © 2025, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds