Actually still confused
Posted Apr 30, 2015 18:10 UTC (Thu) by fandingo (guest, #67019)
In reply to: Actually still confused by tpo
Parent article: Random numbers from CPU execution time jitter
Not exactly; perhaps this can clear up some of the confusion. There is an entropy pool of data that is filled immediately and always stays full. Over time, new random data from a variety of sources is mixed into this pool. As data is mixed in, the kernel estimates how much entropy the pool now contains and sets a counter accordingly. In the background, a kernel thread checks a separate output pool; if that pool isn't full, f(epool) is run to populate it.
Both urandom and random use the same f() and entropy pool, but each gets its own output pool. The only difference between urandom and random is that the background worker populating random's output pool will block if the estimated entropy is too low.
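As a toy illustration of the two-pool model just described (hypothetical names and a stand-in hash, not the kernel's actual data structures or algorithm), the urandom/random difference comes down to one check on the entropy counter:

```python
import hashlib
import os

class ToyRandom:
    """Toy sketch of the two-pool design described above.
    Illustration only -- not the kernel's real implementation."""

    def __init__(self):
        self.input_pool = os.urandom(64)  # the always-full entropy pool
        self.entropy_estimate = 0         # kernel's estimated entropy, in bits

    def mix_in(self, sample: bytes, est_bits: int) -> None:
        # New event data is mixed into the pool and the estimate raised.
        self.input_pool = hashlib.sha256(self.input_pool + sample).digest()
        self.entropy_estimate += est_bits

    def refill_output(self, blocking: bool) -> bytes:
        # f(epool): derive output-pool material from the entropy pool.
        # The only urandom/random difference: random's worker blocks
        # when the estimate is too low; urandom's does not.
        if blocking and self.entropy_estimate < 128:
            raise BlockingIOError("/dev/random would block here")
        self.entropy_estimate = max(0, self.entropy_estimate - 128)
        return hashlib.sha256(b"out" + self.input_pool).digest()
```

Note that both paths run the same derivation; only the blocking branch differs, which mirrors the point above about a shared f() and entropy pool.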
> I've checked "entropy" and "entropy pool" on Wikipedia, but either I misunderstand it or Wikipedia is confused when using phrases like "entropy depletion" and similar, which according to what you say is not possible inside an entropy pool.
Check out this image: https://i.imgur.com/VIPLO2d.png from this paper: https://eprint.iacr.org/2012/251.pdf *
There is little good information, and a lot of confusion, about PRNGs and CSPRNGs. Linux implements a CSPRNG, and the CS stands for cryptographically secure. This means that a partial disclosure of the generator's state should not compromise its output, either forwards or backwards. That's the heart of why the kernel says it "consumes" estimated entropy as the entropy pool data is used for output. The state must continually incorporate new entropy data and remix the pool, or else partial state disclosures can make the outputted data predictable.
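To see why continual mixing matters in both directions, here is a minimal hash-based sketch (hypothetical, with SHA-256 from Python's stdlib standing in for the real construction): output and next state are separate one-way derivations, so output can't be run backwards to recover the state, and mixing fresh entropy stops a captured state from predicting future output:

```python
import hashlib

class ToyCSPRNG:
    """Minimal forward/backward-security sketch (not for real use)."""

    def __init__(self, seed: bytes):
        self.state = hashlib.sha256(seed).digest()

    def mix(self, entropy: bytes) -> None:
        # Fresh entropy invalidates any previously leaked state.
        self.state = hashlib.sha256(self.state + entropy).digest()

    def read(self) -> bytes:
        # Output and next state are independent one-way derivations,
        # so observing output reveals neither past nor future state.
        out = hashlib.sha256(b"out" + self.state).digest()
        self.state = hashlib.sha256(b"next" + self.state).digest()
        return out
```

An attacker who copied the state before a mix() call is modeled here by a second instance with the same seed: once fresh entropy goes in, their predictions diverge from the real stream.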
There are a lot of people who disagree with the blocking nature of /dev/random and with much of how the CSPRNG operates. In particular, FreeBSD has a nonblocking /dev/random; it also uses the very fast arc4 as its output function. Personally, I prefer the Linux CSPRNG because I like its security considerations, even though they come at a high performance cost. It's better to get high-quality, secure random data (and that includes urandom) from the kernel, and then feed it in as the key for a very fast stream cipher if that's what you need. (For example, `dd if=/dev/urandom of=/dev/sda` is a terrible misuse. Instead, use something like `openssl enc -aes128 -k "shred" < /dev/urandom > /dev/sda`.)
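A stdlib sketch of that "small kernel seed, fast expansion" pattern, with SHAKE-256 standing in for the AES stream cipher in the openssl example (an illustration of the idea, not a reviewed wiping tool):

```python
import hashlib
import os

# Pull a small, high-quality key from the kernel CSPRNG once...
key = os.urandom(32)

# ...then expand it with a fast keyed primitive instead of reading
# megabytes directly from /dev/urandom. For disk wiping you would
# write this stream out to the device in chunks.
keystream = hashlib.shake_256(key).digest(1024 * 1024)
```

The kernel supplies 32 bytes it can make strong guarantees about; the cheap expansion step produces the bulk data, which is exactly the division of labor the comment argues for.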
* This is an excellent paper that covers the CSPRNG in a way that is both approachable and mathematically rigorous.
