
Voodoo coding

Posted Jul 14, 2014 19:37 UTC (Mon) by andresfreund (subscriber, #69562)
In reply to: Voodoo coding by wahern
Parent article: First Release of LibreSSL Portable Available

> If I have a non-blocking server and an already established socket to a browser and want to establish a secure channel with perfect forward secrecy, and I try to generate some random numbers, but the operation of simply generating a random number could fail, do you have any idea how f'ing ugly it is to insert a _timer_ and a loop trying to acquire that resource?

Why do you need a timer? Why is this different from any of the other dozen or two things you need to do to establish an encrypted connection to another host?
If error handling in any of these parts - many of which are quite likely to fail (DNS, connection establishment, public/private key crypto, session key negotiation, renegotiation) - is a fundamental structural problem, something went seriously wrong.

> Of course it's possible. But it's infinitely nastier than dealing with other kinds of failures, and completely unnecessary. (And compound all of this by trying to do this in a library, lest you simply argue that one should open /dev/urandom and leave it open, which is sensible but still problematic.)

You argued that it's required to do this without /dev/urandom because it is *impossible* to do error handling there. Which has zap to do with being asynchronous, btw.
Note that /dev/urandom - if it actually would block significantly for the amounts of data we're talking about here - would allow for *more* of an async API than a dedicated getentropy() call. The latter has basically zero chance of ever getting that. You're making arguments up.
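To make concrete what error handling around /dev/urandom actually entails, here is a minimal C sketch (the helper name is made up for illustration). Every step - opening the device, each read() - can fail, and the caller has to be prepared to propagate that:

```c
#include <assert.h>
#include <errno.h>
#include <fcntl.h>
#include <stddef.h>
#include <unistd.h>

/* Fill buf with len random bytes from /dev/urandom.
 * Returns 0 on success, -1 on failure; the caller must check. */
static int fill_random_devurandom(void *buf, size_t len)
{
    int fd = open("/dev/urandom", O_RDONLY);
    if (fd < 0)
        return -1;              /* EMFILE, or device missing in a chroot */

    unsigned char *p = buf;
    while (len > 0) {
        ssize_t n = read(fd, p, len);
        if (n < 0) {
            if (errno == EINTR)
                continue;       /* interrupted by a signal: retry */
            close(fd);
            return -1;          /* genuine read failure */
        }
        p += (size_t)n;
        len -= (size_t)n;
    }
    close(fd);
    return 0;
}
```

A getentropy()-style system call collapses all of this into a single call which, on OpenBSD at least, is documented never to fail for requests of up to 256 bytes.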



Voodoo coding

Posted Jul 14, 2014 20:12 UTC (Mon) by wahern (subscriber, #37304)

I never said it was impossible. I said it wasn't a sane interface.

And I stand by that claim. Why make something which could fail when you don't have to and it's trivial not to?

I always try to write my server programs in a manner which can handle request failures without interrupting service to existing connections. There are various patterns to make this more convenient and less error-prone, but one of the most effective is RAII (although I don't use C++), where you acquire all the necessary resources as early as possible, channeling your failure paths into as few areas as possible. I also use specialized list, tree, and hash routines which I can guarantee will allow me to complete a long set of changes to complex data structures free of OOM concerns. One must rigorously minimize the areas that could encounter failure conditions so as to ensure as few bugs as possible in the few areas that are contingent on the success or failure of logical operations.
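As a minimal C sketch of that acquire-up-front pattern (the struct and sizes here are hypothetical): grab every resource during initialization, funnel all failure paths through a single exit, and the request-handling code itself is left with nothing that can fail:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical per-request context: everything it will ever need
 * is acquired here, so later request handling cannot hit OOM. */
struct request {
    char *inbuf;
    char *outbuf;
};

static int request_init(struct request *r, size_t bufsz)
{
    memset(r, 0, sizeof *r);
    r->inbuf = malloc(bufsz);
    if (!r->inbuf)
        goto fail;
    r->outbuf = malloc(bufsz);
    if (!r->outbuf)
        goto fail;
    return 0;

fail:   /* the single channel for all failure paths */
    free(r->inbuf);
    free(r->outbuf);
    return -1;
}

static void request_fini(struct request *r)
{
    free(r->inbuf);
    free(r->outbuf);
}
```

If request_init() fails, the request is refused up front and existing connections are untouched; nothing downstream needs an error path.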

But how many applications do you know of which bother trying to ensure entropy is available at the very beginning of process startup or request servicing? How would you even do this in a generic fashion? Is it really sane to open a descriptor for every request, or to cache a separate descriptor inside every component or library that might need randomness? If you seed another generator, how do you handle forking? getpid? pthread_atfork? There's a reason most PRNGs (CSPRNGs included) support automatic seeding; not just for convenience, but for sane behavior in the common case.
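To illustrate one of those contortions: here is a C sketch of the pthread_atfork() approach for a cached userspace generator. The PRNG itself is a toy LCG, purely illustrative; the point is the fork bookkeeping that every library caching random state has to carry:

```c
#include <assert.h>
#include <pthread.h>
#include <stdint.h>
#include <stdlib.h>
#include <unistd.h>     /* getentropy(): glibc >= 2.25 and the BSDs */

/* Cached userspace PRNG state that must NOT be shared between
 * parent and child after fork(), or both produce the same stream. */
static uint64_t prng_state;
static int prng_seeded;

/* Child-side fork handler: mark the state stale so the next
 * use in the child lazily reseeds from the kernel. */
static void prng_reseed_child(void)
{
    prng_seeded = 0;
}

static void prng_install_fork_handler(void)
{
    pthread_atfork(NULL, NULL, prng_reseed_child);
}

static uint32_t prng_next(void)
{
    if (!prng_seeded) {
        /* Reseed from the kernel; with a getentropy()-style call
         * this "cannot fail" for small requests. */
        if (getentropy(&prng_state, sizeof prng_state) != 0)
            abort();
        prng_seeded = 1;
    }
    /* Toy LCG step - do not use this as an actual CSPRNG. */
    prng_state = prng_state * 6364136223846793005ULL
               + 1442695040888963407ULL;
    return (uint32_t)(prng_state >> 32);
}
```

And every library doing its own caching has to duplicate this dance (or use getpid() checks instead), each one slightly differently.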

Hacks and tweaks to the kernel implementation of /dev/urandom to ensure entropy is ready as soon as possible are a perennial bicker-fest, and yet those can't compare to the contortions applications would need to go through just to maintain a descriptor. And they'd all be doing it differently! That's not a recipe for a secure application ecosystem. And getting people to use third-party libraries (like Nick Mathewson's excellent libottery) would be like herding cats and adds an unnecessary dependency. It harks back to the bad old days of EGD, before even /dev/urandom was available.

Of course it's possible. Lots of things are possible, but not all things are practical given limited human and machine resources, and even less unequivocally contribute to a safer software ecosystem free of hidden traps.

When I talk about CSPRNGs being deeply embedded within other algorithms, imagine things like a random sort, or a random UUID generator. These are almost always implemented through a single routine and normally would never need to communicate a failure because _logically_ they should never fail. And yet they could fail, even with valid input, if you rely on /dev/urandom without taking other extraordinary measures completely unrelated to the core algorithm.
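For instance, a version-4 random UUID generator built on a can't-fail entropy call keeps its natural void signature; built on /dev/urandom it would have to grow an error return that every caller up the stack must check and propagate. A C sketch, assuming getentropy() is available (glibc 2.25+ or the BSDs):

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>     /* getentropy() */

/* Format a random (version 4) UUID into out, which must hold
 * at least 37 bytes.  Logically this routine cannot fail. */
static void uuid4(char out[37])
{
    uint8_t b[16];
    if (getentropy(b, sizeof b) != 0)
        abort();                    /* "can't happen" for <= 256 bytes */

    b[6] = (b[6] & 0x0f) | 0x40;    /* version 4 */
    b[8] = (b[8] & 0x3f) | 0x80;    /* RFC 4122 variant */

    snprintf(out, 37,
        "%02x%02x%02x%02x-%02x%02x-%02x%02x-"
        "%02x%02x-%02x%02x%02x%02x%02x%02x",
        b[0], b[1], b[2], b[3], b[4], b[5], b[6], b[7],
        b[8], b[9], b[10], b[11], b[12], b[13], b[14], b[15]);
}
```

Swap getentropy() for a /dev/urandom read and the signature has to become `int uuid4(char out[37])`, with a failure mode that has nothing to do with generating UUIDs.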

Computational complexity attacks, side-channel attacks, etc., have made the use of CSPRNGs useful, and in many cases mandatory, within many different kinds of algorithms which once upon a time could never fail.


Copyright © 2025, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds