Don't mess with my random numbers
Posted Oct 27, 2015 0:47 UTC (Tue) by eternaleye (guest, #67051)
In reply to: Don't mess with my random numbers by malor
Parent article: Other approaches to random number scalability
Not quite; "abuser" is being used very loosely on the ML.
> rather than seeing urandom weakened in any way.
Thinking that the proposed idea would weaken urandom is "correct" in the same way as thinking that because ZFS uses 128-bit sizing it's less likely to run out of space on your computer than ext4, which uses 64-bit sizing: the math is correct, but the expenditure needed for the difference to even become _relevant_ is so unrealistic in terms of *physics* as to be laughable.
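To put a rough number on that analogy, here is a back-of-the-envelope calculation (my own illustration, not from the comment) of how far apart the 64-bit and 128-bit ceilings are:

```python
# Back-of-the-envelope for the ZFS/ext4 analogy: how much bigger is a
# 128-bit size space than a 64-bit one?
EIB = 2 ** 60  # one exbibyte

# The 64-bit ceiling is already 16 EiB -- far beyond any real machine.
print(2 ** 64 // EIB)       # -> 16

# A 128-bit space is another factor of 2^64 on top of that.
print(2 ** 128 // 2 ** 64)  # -> 18446744073709551616
```

The same shape of argument applies to the RNG: the attack cost is mathematically nonzero but physically out of reach.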
> People truly depend on that stuff, and it's quite possible that predictable output from /dev/urandom could end up killing someone.
Considering that ChaCha20 has exactly the same weight on its shoulders, this is again true but irrelevant. (If ChaCha20 is weak in a way that affects using it in urandom, then TLS is in a lot of trouble.)
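For readers unfamiliar with the primitive under discussion, here is a minimal pure-Python sketch of the ChaCha20 block function as specified in RFC 8439. This is an illustration only, not the kernel's implementation:

```python
import struct

MASK32 = 0xFFFFFFFF

def _rotl32(v, n):
    """Rotate a 32-bit word left by n bits."""
    return ((v << n) | (v >> (32 - n))) & MASK32

def _quarter_round(s, a, b, c, d):
    """The ChaCha quarter round, operating on state words in place."""
    s[a] = (s[a] + s[b]) & MASK32; s[d] = _rotl32(s[d] ^ s[a], 16)
    s[c] = (s[c] + s[d]) & MASK32; s[b] = _rotl32(s[b] ^ s[c], 12)
    s[a] = (s[a] + s[b]) & MASK32; s[d] = _rotl32(s[d] ^ s[a], 8)
    s[c] = (s[c] + s[d]) & MASK32; s[b] = _rotl32(s[b] ^ s[c], 7)

def chacha20_block(key, counter, nonce):
    """Produce one 64-byte keystream block (RFC 8439 layout:
    4 constant words, 8 key words, 1 counter word, 3 nonce words)."""
    state = list(struct.unpack('<16I',
        b'expand 32-byte k' + key + struct.pack('<I', counter) + nonce))
    working = state[:]
    for _ in range(10):  # 10 double rounds = 20 rounds
        _quarter_round(working, 0, 4, 8, 12)   # column rounds
        _quarter_round(working, 1, 5, 9, 13)
        _quarter_round(working, 2, 6, 10, 14)
        _quarter_round(working, 3, 7, 11, 15)
        _quarter_round(working, 0, 5, 10, 15)  # diagonal rounds
        _quarter_round(working, 1, 6, 11, 12)
        _quarter_round(working, 2, 7, 8, 13)
        _quarter_round(working, 3, 4, 9, 14)
    # Add the original state back in and serialize little-endian.
    return struct.pack('<16I',
        *((w + s) & MASK32 for w, s in zip(working, state)))
```

A CSPRNG built on this simply increments the counter per block, which is the design point: its security rests on ChaCha20 itself, not on ad-hoc mixing.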
> Further, it's going to be real hard to test from outside, and as a user of bits, you'd want some kind of signal that you were no longer getting ones with the same guarantee anymore.
The *existing* Linux RNG is hard to test - incidentally, researchers did it anyway. Turns out it's an ad-hoc mess and makes *worse* guarantees than ChaCha20. Funny, that.
> /dev/urandom is understood to not be as good as /dev/random
Not only wrong, but *dead* wrong, and actively harmful, to the point of causing vulnerabilities to DoS and other attacks: see http://sockpuppet.org/blog/2014/02/25/safely-generate-ran...
Cryptographers have long since come to the conclusion that entropy estimation _doesn't work_ worth a damn, and the only difference between /dev/urandom and /dev/random is that the latter blocks based on entropy estimation.
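In practice this is why code that needs key material should just read the non-blocking interface. A quick sketch (assuming Linux and Python 3.6+ for `os.getrandom`):

```python
import os

# /dev/urandom never blocks: os.urandom() reads the kernel CSPRNG
# directly and is the right call for keys, nonces, and tokens.
key = os.urandom(32)
assert len(key) == 32

# On Linux >= 3.17, the getrandom(2) syscall (os.getrandom in Python)
# blocks only until the pool has been initialized once at boot, then
# never again -- unlike /dev/random's per-read entropy estimator.
if hasattr(os, "getrandom"):
    key = os.getrandom(32)
```

Blocking on an entropy estimate after boot buys nothing cryptographically; it only hands attackers a way to stall your service.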