
On the safety of Linux random numbers

Posted May 11, 2006 17:47 UTC (Thu) by zooko (guest, #2589)
In reply to: On the safety of Linux random numbers by Ross
Parent article: On the safety of Linux random numbers

What you say is indeed the widespread belief, and the original motivation for the distinction between /dev/random and /dev/urandom, but the truth of this belief has not been proven.

For starters, there is no guarantee that /dev/random outputs no more data than the amount of randomness fed into it. (Indeed, it is impossible to prove that it did without being able to read the mind of your attacker.) For example, the patches that are the subject of this LWN story are one attempt to tweak the estimate of the randomness input, because Matt Mackall thought the current estimator was getting it wrong.

My point isn't that the estimator is particularly bad -- my point is that the existence of the estimator demonstrates that there is no hard guarantee that the amount of output is no greater than the amount of input.
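To make the point concrete, here is a toy sketch of that kind of entropy accounting in Python. It is not the kernel's actual code; `ToyPool`, its SHA-1 extraction, and the bit counts are all illustrative assumptions. The key detail is that `estimated_bits` is whatever the caller claims, so the "no more output than input" property rests entirely on the estimate being right:

```python
import hashlib

class ToyPool:
    """Toy model of entropy accounting (illustrative only, not random.c)."""

    def __init__(self):
        self.pool = b""
        self.entropy_count = 0  # *estimated* bits, not a proven quantity

    def add_input(self, data: bytes, estimated_bits: int):
        # The caller estimates how much entropy the sample carries;
        # nothing checks the estimate against the attacker's knowledge.
        self.pool += data
        self.entropy_count += estimated_bits

    def extract(self, nbytes: int) -> bytes:
        # Debit the estimate. If it was too generous, we have already
        # emitted more than the true entropy without ever noticing.
        self.entropy_count = max(0, self.entropy_count - nbytes * 8)
        out, counter = b"", 0
        while len(out) < nbytes:
            out += hashlib.sha1(self.pool + counter.to_bytes(4, "big")).digest()
            counter += 1
        return out[:nbytes]
```

Nothing in `extract` can fail when the estimate is wrong; the accounting is bookkeeping, not a cryptographic guarantee.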

In theory, it is an open question whether /dev/random or /dev/urandom is more secure against cryptanalysis. In practice, they are very likely both secure, except that the blocking behavior of /dev/random introduces different vulnerabilities. (And except for a few unfortunate flaws in /dev/urandom...).



On the safety of Linux random numbers

Posted May 11, 2006 18:44 UTC (Thu) by Ross (guest, #4065) [Link]

I see what you are saying. Yes, it is an open question just how closely the entropy estimates match the actual gathered entropy. Certainly the double-counting (like the one corrected in the floppy driver patch) doesn't make one feel secure. However, the counts are in general very conservative, so they are more likely to be an underestimate than an overestimate.

About comparing the security of /dev/random and /dev/urandom: I don't understand the problem. They are equivalent under cryptanalysis because they use exactly the same mechanism to produce output, or at least that was my understanding. The only difference, which is not cryptographic, is the blocking behavior of /dev/random.

Blocking readers when the estimated entropy is too low cannot make it easier to reverse the hashing or otherwise determine the contents of the entropy pool. Now, it may not make it any harder, but that is a different statement. That possibility is probably acceptable because the only disadvantages to blocking are the ones you mentioned earlier: denial of service and slowed performance.

Still, a system where both methods are present is good, because system administrators can use /dev/urandom all the time (or even change the /dev entry) if that is what they really want.
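The "same mechanism, different gate" claim can be sketched in a few lines of Python. This is a toy model under my own assumptions (the `Pool` class, its SHA-1 extraction, and the function names are invented for illustration, not taken from random.c): both device models call one shared extraction path, and the /dev/random-style reader differs only by checking the entropy estimate first.

```python
import hashlib

class Pool:
    """Minimal stand-in for a shared entropy pool (illustrative only)."""

    def __init__(self, seed: bytes, estimated_bits: int):
        self.state = seed
        self.entropy_count = estimated_bits

    def _extract(self, nbytes: int) -> bytes:
        # One hashing path, used by *both* device models below.
        out, counter = b"", 0
        while len(out) < nbytes:
            out += hashlib.sha1(self.state + counter.to_bytes(4, "big")).digest()
            counter += 1
        self.entropy_count = max(0, self.entropy_count - nbytes * 8)
        return out[:nbytes]

def random_read(pool: Pool, nbytes: int) -> bytes:
    # /dev/random-style: refuse when the entropy *estimate* is exhausted.
    if pool.entropy_count < nbytes * 8:
        raise BlockingIOError("would block until more entropy is credited")
    return pool._extract(nbytes)

def urandom_read(pool: Pool, nbytes: int) -> bytes:
    # /dev/urandom-style: identical extraction, no blocking gate.
    return pool._extract(nbytes)
```

Given identical pool states, the two readers return identical bytes; the gate changes availability, not the cryptography of the output path.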


Copyright © 2017, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds