Actually still confused
Posted Apr 30, 2015 15:51 UTC (Thu) by tpo (subscriber, #25713)
In reply to: Random numbers from CPU execution time jitter by fandingo
Parent article: Random numbers from CPU execution time jitter
My current mental model of the likes of /dev/random is that one starts with a certain amount of gathered randomness R (the seed). Then, once someone pulls data from /dev/random, some function f(R, t) or f_n(R, f_{n-1}) is applied that generates random bytes from the initially gathered seed, either incrementally via reiteration or by including some monotonically increasing input such as a clock.
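The mental model above could be sketched roughly like this (a hypothetical illustration, not how the kernel actually does it: a counter stands in for the clock input t, and SHA-256 stands in for the mixing function f):

```python
import hashlib

def make_generator(seed: bytes):
    """Derive an endless stream of output blocks from a fixed seed R
    by hashing seed || counter -- the f(R, t) idea, with a counter
    playing the role of the monotonically increasing input."""
    counter = 0
    while True:
        yield hashlib.sha256(seed + counter.to_bytes(8, "big")).digest()
        counter += 1

gen = make_generator(b"initial entropy pool contents")
block1 = next(gen)  # 32 bytes derived from (seed, 0)
block2 = next(gen)  # 32 bytes derived from (seed, 1)
```

In this model the seed is never "used up": you can pull as many blocks as you like, and the stream is only as unpredictable as the original seed.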
Now, as you explain, I am effectively confused by the fact that, say, ssh-keygen or openssl keygen will block and wait to gather more entropy even after the machine has been running for months and has thus seen "infinite" amounts of randomness. What is the reason to repeatedly gather further entropy at that point if, as you seem to imply, generating an infinite amount of random bytes from the initial seed does not reduce the quality of future generated random bytes?
I've checked "entropy" and "entropy pool" on Wikipedia, but either I misunderstand it or Wikipedia is confused when it uses phrases like "entropy depletion" and similar, which according to what you say cannot happen inside an entropy pool.
Is there a basic, coherent explanation of the whole mechanism somewhere?
*t
