
Random numbers from CPU execution time jitter

Posted Apr 30, 2015 7:31 UTC (Thu) by alankila (guest, #47141)
Parent article: Random numbers from CPU execution time jitter

> It may be that there is some very complex state which is hidden inside the CPU execution pipeline, the L1 cache, etc., etc. But just because *you* can't figure it out, and just because *I* can't figure it out doesn't mean that it is ipso facto something which a really bright NSA analyst working in Fort Meade can't figure out. (Or heck, a really clever Intel engineer who has full visibility into the internal design of an Intel CPU....)

This is all fairly theoretical anyway, because it is very hard to attack a random number generator that mixes in data from multiple sources. Saving the seed to disk and merging it into the random pool during the next boot is the most important step, I think. Any source not perfectly controlled by the attacker from the start of time will contribute at least some unpredictable bits at some point, and unless the attacker can gain access to the PRNG state, the problem is completely intractable.
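The key property of merging a saved seed into the pool is that even attacker-known input cannot hurt: the old pool state is hashed in along with the seed. A minimal sketch, using SHA-256 as a stand-in for the kernel's actual pool-mixing function (the function name and pool representation here are hypothetical, for illustration only):

```python
import hashlib

def mix_into_pool(pool: bytes, seed: bytes) -> bytes:
    """Fold saved-seed bytes into a pool state via a hash.

    Because the previous pool state is hashed in together with the
    seed, even a fully attacker-known seed cannot reduce whatever
    entropy the pool already held.
    """
    return hashlib.sha256(pool + seed).digest()

# Two boots that start from the same on-disk seed but have
# accumulated different pool states end up with different pools.
boot1 = mix_into_pool(b"entropy gathered on boot 1", b"saved-seed")
boot2 = mix_into_pool(b"entropy gathered on boot 2", b"saved-seed")
```

The mixing is deterministic given both inputs, so identical seeds plus identical prior states (the real danger case discussed below in the thread) do yield identical pools.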

Since there is no practical use for "real" entropy, I don't see why Linux bothers with /dev/random.



Random numbers from CPU execution time jitter

Posted Apr 30, 2015 9:32 UTC (Thu) by matthias (subscriber, #94967) [Link] (2 responses)

Getting real entropy is a big problem if you want cryptography on embedded devices. Cracking a key by working all the way back through an RNG is not very practical, but if you do not use enough entropy, then you will, for example, generate RSA keys that share a common factor with keys produced on similar systems. Such keys provide no security at all.

The following is just the first reference I found:

http://arstechnica.com/business/2012/02/15/crypto-shocker...

The systems did not have enough real entropy; otherwise these collisions would not have occurred. And saving and reloading a seed does not help if these devices need to create cryptographic keys on first boot.
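Why a shared factor is fatal: two moduli that share a prime can be factored with a single cheap GCD, no factoring algorithm needed. A toy illustration with small hypothetical primes standing in for real 1024-bit RSA primes (the actual research computed GCDs pairwise over millions of collected public keys):

```python
from math import gcd

# Small hypothetical primes; p is the factor two entropy-starved
# devices happened to generate in common.
p, q1, q2 = 61, 53, 59
n1, n2 = p * q1, p * q2   # the two devices' public RSA moduli

# One GCD recovers the shared prime...
shared = gcd(n1, n2)

# ...and with it, both moduli factor immediately, breaking both keys.
factors1 = (shared, n1 // shared)
factors2 = (shared, n2 // shared)
```

A GCD on even very large integers costs almost nothing compared to factoring, which is why these collisions could be found at internet scale.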

Random numbers from CPU execution time jitter

Posted Apr 30, 2015 13:56 UTC (Thu) by dkg (subscriber, #55359) [Link] (1 responses)

Saving and reloading a seed also has other potential risks:
  • the non-volatile storage itself may not be under tight control of the processor -- it represents a possible risk of both leakage ("I know your seed") and tampering ("I can force your seed to be whatever I want")
  • if the saved seed is somehow (accidentally? due to system failure?) reused across multiple boots, and there is no other source of entropy, then the boots that share the seed will produce the exact same stream of "randomness", potentially leading to symmetric key reuse, predictable values, and all other kinds of nastiness.
It's not that these risks are impossible to avoid, but avoiding them requires thoughtful system engineering and might not be possible to do generically. The approach proposed in this article (if it is actually measuring real, unpredictable entropy) seems more robust.
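The seed-reuse failure mode in the second bullet is easy to demonstrate with a toy PRNG standing in for the kernel's output stream (assuming, as the bullet does, that the reused seed is the *only* entropy source):

```python
import random

def keystream(seed: bytes, n: int) -> bytes:
    """Deterministic PRNG output; a stand-in for /dev/urandom on a
    system whose only entropy input is the saved seed."""
    rng = random.Random(seed)
    return bytes(rng.randrange(256) for _ in range(n))

# Two boots that reuse the same saved seed, with nothing else mixed
# in, produce byte-identical "random" output -- e.g. identical
# session keys or nonces.
a = keystream(b"stale-seed", 16)
b = keystream(b"stale-seed", 16)
```

Here `a == b` holds exactly, which is precisely the "same stream of randomness" problem: anything derived from the stream repeats too.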

Random numbers from CPU execution time jitter

Posted May 3, 2015 10:08 UTC (Sun) by alankila (guest, #47141) [Link]

The seed on disk won't make things worse, even if it is revealed to the attacker or reused. Technically, I think what is stored as the seed is some amount of data from the current entropy pool, and it is fed back in as entropy using a userspace random-injection API.

So even if the seeding entropy is known to the attacker, there is still the entropy the system accumulated up to that point, so we are no worse off than before; if the seed is shared between multiple systems or reused across boots, the situation is likewise no worse. It would be good to periodically rewrite the seed file while the system is running, though, to limit the risk of reusing the same entropy.
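A periodic seed rewrite is mostly a matter of doing the file update safely. A minimal sketch (the function name and seed size are my own choices for illustration; real distributions do something similar in their boot scripts): draw fresh bytes from the kernel RNG and replace the seed file atomically, so a crash mid-write leaves the old seed intact rather than a truncated one.

```python
import os
import tempfile

SEED_SIZE = 512  # bytes; systems commonly persist a few hundred bytes

def rewrite_seed(path: str) -> None:
    """Refresh the on-disk seed from the kernel RNG, atomically."""
    fresh = os.urandom(SEED_SIZE)
    # Write to a temp file in the same directory, then rename over
    # the old seed; os.replace() is atomic on POSIX filesystems.
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    try:
        os.write(fd, fresh)
        os.fsync(fd)
    finally:
        os.close(fd)
    os.replace(tmp, path)
```

Running this on a timer (or at shutdown) means any one copy of the seed file captures only a brief window of the pool's history.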

In my opinion, it is not difficult to come up with lots of low-quality entropy; the issue is that Linux counts only extremely high-quality bits as entropy. Those bits can be made arbitrarily scarce by tightening the requirements for what qualifies as random, to the point that the random subsystem is starved of entropy until relatively late in boot and therefore cannot function properly. I think this is a case of making the requirements too strict.
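For a sense of what "low-quality entropy" from execution-time jitter looks like, here is a minimal sketch of the underlying idea only -- this is *not* the jitterentropy algorithm from the article, and it makes no claim about how many true random bits the samples contain:

```python
import time
import hashlib

def jitter_samples(n: int) -> list:
    """Collect the low byte of timing deltas between cheap operations.

    Execution-time variation (cache, pipeline, scheduler effects)
    shows up in the least-significant bits of the deltas; a real
    jitter RNG would then condition and health-test these samples.
    """
    samples = []
    last = time.perf_counter_ns()
    for _ in range(n):
        hashlib.sha256(b"x").digest()        # work whose timing varies
        now = time.perf_counter_ns()
        samples.append((now - last) & 0xFF)  # keep the noisy low byte
        last = now
    return samples

bits = jitter_samples(256)
```

Whether such samples deserve full entropy credit is exactly the policy question the comment raises: the raw material is plentiful, but conservative accounting may credit it with very little.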


Copyright © 2026, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds