On entropy and randomness
Linux random number generation (RNG) is often a source of confusion for developers, but it is also an integral part of the security of the system. It provides the random data used to generate cryptographic keys, TCP sequence numbers, and the like, so its output must be both strong and unpredictable. When someone notices a flaw, or a possible flaw, in the RNG, kernel hackers take notice.
Recurring universally unique identifiers (UUIDs), as reported by the smolt hardware profiler client program, had some worried about problems in the kernel RNG. As it turns out, the problem exists in the interaction between Fedora 8 LiveCD installations and smolt – essentially the UUID came from the CD – but it sparked a discussion leading to some possible improvements. Along the way, some common misconceptions about kernel RNG were cleared up.
The kernel gathers information from external sources to provide input to its entropy pool. This pool contains bits with extremely strong random properties, so long as the events sampled (inter-keypress timings, mouse movements, disk interrupts, etc.) are unpredictable. The kernel provides direct access to this pool via the /dev/random device. Reading from that device provides the strongest random numbers that Linux can offer, but it depletes the entropy pool; when the pool runs low, reads from /dev/random block until sufficient entropy has accumulated.
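The pool's current fill level is exported to user space, so the blocking behavior can be anticipated. A minimal sketch (Python, reading the standard Linux interface /proc/sys/kernel/random/entropy_avail; the helper name is ours):

```python
import os

def entropy_available(path="/proc/sys/kernel/random/entropy_avail"):
    """Return the kernel's current entropy estimate in bits, or None
    when the interface is absent (non-Linux systems, some containers)."""
    if not os.path.exists(path):
        return None
    with open(path) as f:
        return int(f.read().strip())

bits = entropy_available()
```

On a quiet server this number can sit near zero, which is exactly when reads from /dev/random stall.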
The alternative interface, the one that nearly all programs should use, is /dev/urandom. Reading from that device will not block. If sufficient entropy is available, it provides random numbers just as strong as /dev/random; if not, it uses the SHA cryptographic hash algorithm to generate very strong pseudo-random numbers. Developers often overestimate how strong their random numbers need to be; they also overestimate how easy "breaking" /dev/urandom would be, which leads to programs that unnecessarily read /dev/random, a point Ted Ts'o, who wrote the kernel RNG, has made emphatically.
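In practice, application code rarely needs to open the device directly; for example, Python's os.urandom() is a thin, portable wrapper over /dev/urandom. A small sketch:

```python
import os

# os.urandom() reads from the kernel's non-blocking pool
# (/dev/urandom on Linux) and is what most applications should use.
key = os.urandom(32)          # e.g. a 256-bit session key

# Reading the device directly is equivalent on Linux:
# with open("/dev/urandom", "rb") as f:
#     key = f.read(32)
```

Either way the read never blocks, unlike /dev/random.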
There is still a bit of a hole in all of this: how does a freshly installed system, with little or no user interaction as yet, get its initial entropy? When Alan Cox and Mike McGrath started describing the smolt problem, the immediate reaction was to look closely at how the entropy pool was being initialized. While that turned out not to be the problem, it did lead Matt Mackall, maintainer of the kernel RNG, to start thinking about better pool initialization. Various ideas for mixing in host-specific data, such as the MAC address and PCI device characteristics, were discussed.
As Ts'o points out, that will help prevent things like UUID collisions, but it doesn't solve the problem of predictability of the random numbers that will be generated by these systems.
Linux provides random numbers suitable for nearly any purpose via /dev/urandom. For the truly paranoid, there is also /dev/random, but developers would do well to forget that device exists for everything but the most critical needs. If one is generating a large key pair, to use for the next century, using some data from /dev/random is probably right. Anything with lower requirements should seriously consider /dev/urandom.
Index entries for this article:
  Kernel: Random numbers
  Security: Linux kernel/Random number generation
  Security: Random number generation
Posted Dec 13, 2007 6:43 UTC (Thu)
by nettings (subscriber, #429)
Posted Dec 13, 2007 10:51 UTC (Thu)
by ekj (guest, #1524)
Posted Dec 14, 2007 3:34 UTC (Fri)
by ikm (guest, #493)
Posted Dec 16, 2007 21:54 UTC (Sun)
by dlang (guest, #313)
Posted Dec 17, 2007 19:09 UTC (Mon)
by ikm (guest, #493)
Posted Dec 18, 2007 3:18 UTC (Tue)
by dlang (guest, #313)
Posted Dec 18, 2007 4:10 UTC (Tue)
by ikm (guest, #493)
p.s. While I can't make google find any real stuff, I found posts from other people concerned with the same question, e.g. this thread. Someone there drew statistical conclusions about the period based solely on the hash length, but those are statistical arguments, while the algorithms are deterministic and not perfect.
Posted Dec 18, 2007 15:11 UTC (Tue)
by njs (subscriber, #40338)
Posted Dec 19, 2007 3:39 UTC (Wed)
by ikm (guest, #493)
Posted Dec 20, 2007 17:57 UTC (Thu)
by njs (subscriber, #40338)
Posted Dec 18, 2007 14:43 UTC (Tue)
by njs (subscriber, #40338)
Posted Dec 13, 2007 14:08 UTC (Thu)
by nix (subscriber, #2304)
Posted Dec 13, 2007 14:26 UTC (Thu)
by NRArnot (subscriber, #3033)
Posted Dec 13, 2007 16:49 UTC (Thu)
by nix (subscriber, #2304)
Posted Dec 14, 2007 6:02 UTC (Fri)
by bronson (subscriber, #4806)
Posted Dec 25, 2007 12:15 UTC (Tue)
by flok (subscriber, #17768)
Posted Dec 13, 2007 8:32 UTC (Thu)
by jimbo (subscriber, #6689)
As a lot of Linux distributions download packages, why
not provide a dynamically-generated package that contains seeding data
from the package server's own entropy pool?
With all the disk and network activity on a busy package server [I
suppose that we can't rely on keyboard and mouse event timings as entropy
sources on a server:-)], there should be a rich source of pool
data there.
Posted Dec 13, 2007 12:58 UTC (Thu)
by brother_rat (subscriber, #1895)
Posted Dec 13, 2007 18:47 UTC (Thu)
by cpeterso (guest, #305)
Posted Dec 13, 2007 19:01 UTC (Thu)
by adamgundy (subscriber, #5418)
Posted Dec 14, 2007 2:06 UTC (Fri)
by cpeterso (guest, #305)
Posted Dec 14, 2007 15:24 UTC (Fri)
by adamgundy (subscriber, #5418)
Posted Dec 14, 2007 20:18 UTC (Fri)
by nix (subscriber, #2304)
Posted Dec 14, 2007 21:34 UTC (Fri)
by adamgundy (subscriber, #5418)
Posted Dec 14, 2007 15:56 UTC (Fri)
by TRS-80 (guest, #1804)
Posted Dec 14, 2007 4:06 UTC (Fri)
by ikm (guest, #493)
Posted Dec 14, 2007 7:29 UTC (Fri)
by nix (subscriber, #2304)
Posted Dec 14, 2007 20:25 UTC (Fri)
by zooko (guest, #2589)
thermal noise?
at least for machines equipped with a sound card or onboard sound interface (i.e. almost
everywhere, except some embedded or server machines), couldn't we just record some thermal
circuit noise and have 48k sufficiently random "pieces of entropy" per second to mix into
/dev/random?
thermal noise?
There's plenty of randomness for 99.99% of all computers, the remaining ones can easily get
some hardware entropy-source, for example one based on the soundcard as you suggest.
/dev/urandom is actually a lot STRONGER than sha, because it does use whatever real entropy is
available; while the amount available is sometimes low enough that /dev/random would block, it
is seldom -zero-.
Predicting the next number coming out of urandom is similar to predicting the next number
coming out of a scheme like this:
  do:
    pool = sha(pool)
    output(pool[0])
Which would perhaps be doable if sha was severely broken. But there's an added complication:
every once in a while, some -real- entropy from whatever source enters the pool via the rough
equivalent of:
  pool = sha(pool xor real-random-data)
This should mess things up enough that -even- if sha is severely broken, predicting the
sequence is, essentially, impossible.
Our editor is right: If you are generating a keypair to use for a decade, by all means, use
real randomness. If you are doing anything less, use urandom and forget about it.
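The scheme described above can be sketched as a toy model (Python; this is an illustration of the comment's pseudocode, not the actual kernel algorithm, and the class and helper names are ours):

```python
import hashlib

POOL_BYTES = 20  # SHA-1 digest size, matching the scheme above

def sha(data: bytes) -> bytes:
    return hashlib.sha1(data).digest()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

class ToyPool:
    """Toy model of the hash-feedback generator: the 'pool = sha(pool)'
    loop, with occasional real entropy stirred in."""

    def __init__(self, seed: bytes):
        self.pool = sha(seed)

    def next_byte(self) -> int:
        self.pool = sha(self.pool)   # pool = sha(pool)
        return self.pool[0]          # output(pool[0])

    def mix(self, entropy: bytes) -> None:
        # pool = sha(pool xor real-random-data)
        padded = entropy.ljust(POOL_BYTES, b"\0")[:POOL_BYTES]
        self.pool = sha(xor(self.pool, padded))

g = ToyPool(b"initial seed")
stream = [g.next_byte() for _ in range(8)]
g.mix(b"keypress timing jitter")   # fresh entropy perturbs the sequence
```

Two pools with the same seed track each other exactly until one of them mixes in outside entropy, after which their outputs diverge.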
offtopic
> pool = sha(pool)
I'm curious if there exists such a value of pool that the 'pool == sha(pool)' condition would
be true, effectively turning the rng's period to 0. What is the actual period of such a
generator, the one based on a sha hash? Is there some info to read about this? A proof that
the period is constant for any initial seed values, or a proof that it's not? Any pointers
would be greatly appreciated.
p.s. I understand that the external entropy would (eventually) shift the generator out of the
pit, but I am more concerned about what happens when there is no external entropy available,
on theoretical grounds.
offtopic
if pool == sha(pool) then the sha hash would be broken as there would be one situation where
knowing the output would tell you what the input was.
offtopic
You're joking, right? Here's another situation for you:
6768033e216468247bd031a0a2d9876d79818f8f == sha( 0000000000000000000000000000000000000000 )
Here, knowing the output (6768..8f8f) tells us what the input was (00000..0000)! There are
2^160 situations like this, not just one or two. As long as the only way to know the input for
some particular output is to try all the inputs, the hash is not broken. What difference would
it make if some unique value would both be the argument and the result? I fail to see any.
As far as I understand, no one has ever stated that a crypto hash may never produce the same
output value as its input value. That's why I was asking in the first place.
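The example above is easy to check (Python; the input is the twenty zero bytes written as 40 hex digits):

```python
import hashlib

# The comment's example, reproduced: SHA-1 of twenty zero bytes.
digest = hashlib.sha1(bytes.fromhex("00" * 20)).hexdigest()

# A fixed point would instead require sha1(x) == x for some 20-byte x;
# none is known, and finding one would amount to a preimage-style search.
```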
offtopic
every hash function I am aware of involves doing many iterations of the same function, mixing
the output back into the input for the next stage.
in fact, the only difference between sha1 and sha256 are the number of iterations (as I
understand it anyway)
if you ever have a hash produce its input as its output you end up in a loop where
additional iterations will always produce the same output.
I was incorrect before when I said the problem was knowing the input that caused the output.
the fact that multiple inputs produce the same output prevents that knowledge from being
useful (except in the case where the input is small enough that you can enumerate all of them
to produce a rainbow table that you can use in that specific situation)
Well, I personally have no idea how hashes work. As long as they comply with the usual requirements (i.e. resistance to reversal and collision-finding), I'm quite happy. What I am concerned about is their use as RNGs, as I can't google up any papers on how mathematically sound it is to feed a hash's output back to its input to build an RNG. What would be the period? Does it depend on the seed supplied? The aforementioned requirements for hashes don't allow any conclusions to be drawn on these questions; hashes are simply designed for a different purpose.

While using a hash to compress the resulting entropy from another source is probably ok (as I doubt it would make it less random than it was), looping it back to itself seems questionable to me. Consider, for instance, a very simple example RNG, Marsaglia's SHR3. It has a period of exactly 2^32-1, which means that it iterates through all the numbers in [1..2^32-1] in some random order. You can use any value other than 0 to seed it, and it will still have exactly the same period (while a seed of zero would make it produce zeroes forever). Usually RNGs have these properties well understood. Has anyone done this kind of research on SHA1? I don't know, and I would like to. Not that all this stuff matters in real life, but still it's interesting.
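For reference, the SHR3 mentioned here belongs to Marsaglia's 32-bit xorshift family; a sketch using the (13, 17, 5) shift triple from his 2003 xorshift paper, which has period 2^32-1 over the nonzero states (the exact constants in various SHR3 postings differ):

```python
MASK = 0xFFFFFFFF  # keep intermediate results to 32 bits

def xorshift32(y: int) -> int:
    """One step of a 32-bit Marsaglia xorshift generator (shift triple
    (13, 17, 5)); every nonzero state lies on a single cycle of length
    2**32 - 1, while zero maps to itself."""
    y ^= (y << 13) & MASK
    y ^= y >> 17
    y ^= (y << 5) & MASK
    return y

# Zero is the lone fixed point: a zero seed produces zeroes forever.
# Any nonzero seed walks the other 2**32 - 1 states and never hits 0.
state = 12345
seen_zero = False
for _ in range(10_000):
    state = xorshift32(state)
    if state == 0:
        seen_zero = True
```

These are exactly the "well understood" properties the comment contrasts with the hash-feedback construction, where no comparable period analysis is known.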
offtopic
Okay, seriously, last crypto tutorial comment from me for a while, I swear...
There's a lot of research on how to generate good cryptographic PRNGs. Some approaches use
hashes, some don't. The analysis in general is utterly different from the analysis of
scientific PRNGs, for the natural reason that you don't care so much about having a provably
insane period and provably insane equidistribution; but you do care a huge amount
about e.g. making sure that the internal state remains secret even when people can observe the
PRNG's output, and that even if the current state *were* revealed this would not let you "run
the PRNG backwards" and work out what earlier states were, and that you are clever about how
you mix in new entropy from whatever external sources. Of course, you'd *like* a provably
insane period too (to the extent that "period" is even meaningful when you are mixing in
entropy as you go), but it's very hard to get an algorithm that resists all cryptographic
analysis *but* is still amenable to period analysis...
Random posts on python-list are, unfortunately, pretty useless for grokking this stuff. If
you really are curious to learn how CPRNGs are designed, one convenient place to start might
be Schneier's various online-accessible writings on the subject:
http://www.schneier.com/paper-prngs.html
http://www.schneier.com/paper-yarrow.html
http://www.schneier.com/blog/archives/2007/11/the_strange...
(Yarrow in particular is a fun and interesting algorithm, though no longer quite considered
state of the art. The last link is awesome for anyone who enjoys some geeky conspiracy, and
has lots more links.)
offtopic
Yay! Thanks. You're right -- I've never thought too much about the use of RNGs outside
Monte-Carlos and such, and the properties you've mentioned can mean a lot more than typical
equidistribution/period requirements in other contexts indeed.
I'll go dig the links you've provided. Thanks again, your help is much appreciated.
offtopic
Cool, glad to help. Have fun!
(If you really want to get into this, I also highly recommend Ferguson and Schneier's
/Practical Cryptography/. /Applied Cryptography/ is better known, but it's all like "so in
RSA you use modular arithmetic on primes in the following way" while /Practical Cryptography/
is all like "so if for some reason you want to reinvent SSL, here are the design trade-offs
you have to decide about, and here are the subtle bugs you're going to put in that will break
everything".)
offtopic
> in fact, the only difference between sha1 and sha256 are the number of iterations (as I understand it anyway)
No -- SHA-1 has quite a different design from the SHA-2 family (which includes SHA-224,
SHA-256, SHA-384, and SHA-512). In fact, SHA-256/224 have fewer rounds than SHA-1. Not that
this matters to anyone except real cryptography geeks, but hey, in case you were curious.
Except, of course, that it's why the recent attacks against SHA-1 haven't generalized to SHA-2
yet (though the increased bit-length would probably protect them anyway). It is unclear to
what extent this is coincidence and to what extent it is NSA Sneakiness.
> if you ever have a hash produce its input as its output you end up in a loop where additional iterations will always produce the same output.
True (at least for the simplest hash-based CPRNG design), but I'm pretty sure no-one has ever
found such an input/output pair, and finding one is very similar to accomplishing a preimage
attack, so I wouldn't worry about it much in practice.
thermal noise?
The thermal circuit noise isn't as random as all that: it gets interference from system
circuitry, which is regular as anything.
It could certainly contribute some randomness, I'd think, but nowhere near as much as 48kbits.
thermal noise?
48K samples/second, each 16 bits, of which the least significant should be random regardless
of how much patterned noise is being picked up from the system's innards. In practice one
would run some verification tests first. One wouldn't be relying on it being a truly random
bitstream, just on it containing a reasonable amount of entropy.
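A crude way to run such a verification test, without real hardware, is to compare how compressible the LSB stream is (simulated data; compressibility is only a rough proxy for entropy, and it detects patterning rather than proving unpredictability):

```python
import math
import os
import zlib

def compress_ratio(bits):
    """Crude entropy check: how well does zlib compress a bit stream
    (one bit per byte)? Nearly random bits barely compress below the
    1-bit-per-symbol floor; patterned ones shrink dramatically."""
    raw = bytes(bits)
    return len(zlib.compress(raw, 9)) / len(raw)

# LSBs of a pure tone with an exact 64-sample period, standing in for
# the patterned interference from system circuitry described below.
tone_lsbs = [int(1000 * math.sin(2 * math.pi * t / 64)) & 1
             for t in range(48_000)]

# LSBs drawn from the kernel's own pool, for comparison.
urandom_lsbs = [b & 1 for b in os.urandom(48_000)]
```

The periodic stream compresses to almost nothing, while the urandom-derived stream stays near the incompressible floor; correlated sound-card LSBs would land somewhere in between.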
thermal noise?
I've tried it; you don't get anywhere near that much entropy. Quite a lot of it is strongly
correlated, at least with cheap sound cards, even at the least significant bits.
I wish this wasn't true, but it is :/ or at least it was in 2003, when I last looked at it (I
still don't have any sound cards newer than that: I'm not made of money and the newer cards
don't offer anything I can use. I don't have enough speakers or a suitably-shaped room for
surround-sound...)
thermal noise?
Electrical engineers learn to become deeply fearful of harmonics. They show up everywhere!
Thanks to this (the harmonics, not the fear), I think you'll find that /dev/urandom offers far
more random data than anything arriving over cheap hardware.
Well, unless that hardware was explicitly designed to avoid harmonics, like the transistor
noise RNG on some of Via's chips.
thermal noise?
What about installing either audio-entropyd or video-entropyd? For the first one every cheap
soundcard will do nicely.
http://www.vanheusden.com/aed/ and http://www.vanheusden.com/ved/
On entropy and randomness
For the same reason that using things like a MAC address is a bad idea. You really want to be
sure the data from /dev/urandom is not only random but secret too. There are services such as
http://www.random.org/ that provide really random numbers, but they are aimed at scientific
and statistical applications rather than cryptographic uses.
Also the original problem relates to randomness available to an installer, where I'm sure the
network is unconfigured.
On entropy and randomness
What if you had a random.org-like service with a *shared* internet-wide entropy pool, where
users could *upload* entropy? Sure, there would be griefers uploading continuous streams of
non-random data (e.g. 00000000000000000000000000000000000...) to be mixed into the public
entropy pool. But aren't the number and actions of an "internet-ful" of griefers also
unpredictable (and thus entropy-increasing)? :)
On entropy and randomness
we've hit this problem at the company I work for with cyrus-popd depleting the entropy pool
and hanging due to a bunch of SSL connections.
the 'nasty' solution was to install the 'rngd' daemon pointed at /dev/urandom as its data
source.. this essentially loops data back from urandom into the 'real' random pool when its
entropy level gets low. the quality of the random numbers is obviously reduced, but it seems
to work well..
I suspect many SSL-using servers out there hit this issue more frequently than they realize -
once we'd spotted it on one server we realized others (openvpn, https, etc.) were also
occasionally blocking on /dev/random for no good reason...
On entropy and randomness
Servers can easily be starved for entropy since they don't get many keyboard, mouse, or disk
interrupts. I think there are some kernel patches or optional build configs to feed network
I/O into the entropy pool, but I think this is off by default because of tinfoil hats.
On entropy and randomness
yeah, we've seen those. the problem is that we intentionally try to stick with the distrib
kernel so we don't end up recompiling kernels every time there's a new security patch...
the alternatives are to compile our own cyrus with the magic flag telling it to use
/dev/urandom (same problem as above, plus we'd have to recompile apache, openvpn, ...), or
hack on udev to make it create a /dev/random which is actually /dev/urandom... couldn't
convince udev to do that reliably though.
rngd seems to do the trick as a userspace workaround. its main purpose is supposed to be
pulling entropy from hardware add-ons, but it seems to be pretty common to use it the way we
do too.
On entropy and randomness
KERNEL=="urandom", NAME="random"
(or SYMLINK, if you prefer)
should do the trick, I'd expect.
On entropy and randomness
pretty sure we tried something like that.. sometimes it would work, sometimes not (timing?)
I forget exactly the issue with udev, we just couldn't convince it to do what we wanted and
rngd worked out of the box.
Recent server chipsets include hardware entropy sources, which rng-tools will feed into /dev/random.
On entropy and randomness
> If one is generating a large key pair, to use for the next century, using some data from
/dev/random is probably right.
Much care should be taken when using so-called 'real' random data. Unlike particular PRNG
algorithms, whose random properties are usually well studied and known, the so-called
'physical sources of real randomness' can easily exhibit certain non-random properties,
since the actual behavior of such sources, depending as it does on a number of factors
(physical, temporal, spatial, etc.), is much more obscure than that of deterministic RNGs.
This was shown e.g. by George Marsaglia when he was trying out several Johnson-noise-based
TRNGs while preparing his random-data CDROM. It is usually best to mix the 'real random data'
with the output of some good deterministic PRNG. The result won't be less random than it was
(assuming, of course, that the TRNG and PRNG outputs do not correlate, which is quite a
straightforward assumption). But if the worst has happened and the TRNG was indeed flaky,
this can really save the day, since the output will still be just about perfect, better than
either of the sources used separately.
I can only hope that /dev/random does exactly that (since, yes, I'm too lazy to check the
source), but if not, I personally wouldn't trust its output as much as it is hyped to be.
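The mixing argument above is easy to demonstrate: XORing a badly biased source with an independent, unbiased stream yields an unbiased result. A sketch (Python; the "flaky TRNG" is simulated here with a 90% bias, and the helper names are ours):

```python
import random

def mix(trng_bits, prng_bits):
    """XOR-combine two bit streams: the result is at least as
    unpredictable as the better of the two, provided the streams
    are independent."""
    return [a ^ b for a, b in zip(trng_bits, prng_bits)]

rng = random.Random(1)  # deterministic stand-in for a good PRNG

# A badly flaky 'hardware' source: roughly 90% ones.
flaky = [1 if rng.random() < 0.9 else 0 for _ in range(100_000)]
good = [rng.getrandbits(1) for _ in range(100_000)]

mixed = mix(flaky, good)
# P(mixed = 1) = 0.9 * 0.5 + 0.1 * 0.5 = 0.5, so the bias vanishes.
bias = abs(sum(mixed) / len(mixed) - 0.5)
```

Even though one input is nearly constant, the mixed stream is balanced; this is exactly why a flaky TRNG XORed with a sound PRNG "saves the day".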
On entropy and randomness
/dev/random does indeed do exactly that.
On entropy and randomness
Consider also the issues of crash and restart, and of your machine actually being virtual
(although your code, of course, can't necessarily tell if the machine is virtual):
http://www1.ietf.org/mail-archive/web/cfrg/current/msg013...