Appropriate sources of entropy
A steady stream of random events allows the kernel to keep its entropy pool stocked up, which in turn allows processes to use the strongest random numbers that Linux can provide. Exactly which events qualify as random—and just how much randomness they provide—is sometimes difficult to decide. A recent move to eliminate a source of contributions to the entropy pool has worried some, especially in the embedded community.
The kernel samples unpredictable events for use in generating random numbers, storing that data in the entropy pool. Entropy is a measure of the unpredictability or randomness of a data set, so the kernel estimates the amount of entropy each of those events contributes to the pool. Many kernels run on hardware that is lacking some of the traditional sources of entropy. In those cases, the timing of interrupts from network devices has been used as a source of entropy, but it has always been controversial, so it was recently proposed for removal.
Two of the best sources of random data for the entropy pool—user interaction via a keyboard or mouse and disk interrupts—are often not present in embedded devices. In addition, some disk interfaces, notably ATA, do not add entropy, which extends the problem to many "headless" servers. But network interrupts are seen as a dubious source of entropy because they may be able to be observed, or manipulated, by an attacker. In addition, as network traffic rises, many network drivers turn off receive interrupts from the hardware, allowing the kernel to poll periodically for incoming packets. This would reduce entropy collection just at the time when it might be needed for encrypting the traffic.
This is not the first time eliminating the IRQF_SAMPLE_RANDOM flag from network drivers has come up; we looked at the issue two years ago (though the flag was called SA_SAMPLE_RANDOM at that time). It has come up again, starting with a query on linux-kernel from Chris Peterson: "Should network devices be allowed to contribute entropy to /dev/random?" Jeff Garzik, kernel network device driver maintainer, answered: "I tend to push people to /not/ add IRQF_SAMPLE_RANDOM to new drivers, but I'm not interested in going on a pogrom with existing code."
For anyone who is interested in such a pogrom, Peterson proposed a patch to eliminate the flag from the twelve network drivers that still use it. This sparked a long discussion on how to provide entropy for those devices that do not have anything else to use. While the actual contribution of entropy from network devices is questionable, mixing that data into the pool does not harm it, as long as no entropy credit—an increase in the kernel's running estimate of the entropy in the pool—is awarded. Alan Cox proposed a new flag to track sources like that.
Some were in favor of an approach like this, but Adrian Bunk notes that:
If a customer wants to use /dev/random and demands to get dubious data there if nothing better is available fulfilling his wish only moves the security bug from his crappy application to the Linux kernel.
Part of the problem stems from a misconception about random numbers gotten from /dev/random versus those that are read from /dev/urandom, which we described in a Security page article last December. In general, applications should read from /dev/urandom. Only the most sensitive uses of random numbers—keys for GPG for example—need the entropy guarantee that /dev/random provides. In a system that is getting regular entropy updates, the quality of the random numbers from both sources is the same.
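For most applications, then, the right call is simply a read of /dev/urandom. A minimal C sketch of that recommended pattern (error handling abbreviated, and the 16-byte request size is just an example):

/* Read random bytes the way most applications should: from
 * /dev/urandom, which never blocks. */
#include <stdio.h>

int main(void)
{
    unsigned char buf[16];
    FILE *f = fopen("/dev/urandom", "rb");

    if (f == NULL || fread(buf, 1, sizeof(buf), f) != sizeof(buf)) {
        perror("/dev/urandom");
        return 1;
    }
    fclose(f);

    for (size_t i = 0; i < sizeof(buf); i++)
        printf("%02x", buf[i]);
    putchar('\n');
    return 0;
}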
There is still an initialization problem for some systems, though, as Ted Ts'o pointed out.
A potential entropy source, even for embedded systems, is to sample other kernel and system parameters that are not predictable externally. Garzik suggests:
And there are plenty of untapped entropy sources even so, such as reading temperature sensors, fan speed sensors on variable-speed fans, etc.
Heck, "smartctl -d ata -a /dev/FOO" produces output that could be hashed and added as entropy.
Another source is from hardware random number generators. The kernel already has support for some, including the VIA PadLock, which seems to be well thought of. Not all processors have such support, however. The Trusted Platform Module (TPM) does have random number generation and is becoming more widespread, especially in laptops, but there is no kernel hw_random driver for TPM.
Garzik advocates adding a kernel driver for what he calls the "Treacherous Platform Module", but as others pointed out, it can all be done in user space using the TrouSerS library. Even for the hardware random number generators that are supported in the kernel, there is no automatic entropy collection; it is left up to user space to decide whether to do that. This keeps policy decisions about the quality of the random data out of kernel code.
Systems that wish to sample that data should use rngd to feed the kernel entropy pool. rngd will apply FIPS 140-2 tests to verify the randomness of the data before passing it to the kernel. Andi Kleen is not in favor of that approach.
There is concern that some of the hardware random number generators are poorly implemented or could malfunction, so it would be dangerous to automatically add that data into the pool. Doing the FIPS testing in the kernel is not an option, leaving it up to user space applications to make the decision. There is nothing stopping any superuser process from adding bits to the entropy pool—no matter how weak—but the consensus is that the kernel itself must use sources it knows it can trust.
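Mechanically, what a trusted user-space feeder like rngd does after its tests pass is the RNDADDENTROPY ioctl, which both mixes the buffer into the pool and credits the stated number of entropy bits. A sketch of that call, with 0xAA filler standing in for real, tested hardware-RNG output:

/* RNDADDENTROPY mixes the buffer in *and* credits entropy, which is
 * why it needs root, unlike a plain write to /dev/random. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/random.h>

int main(void)
{
    const int nbytes = 32;
    struct rand_pool_info *info = malloc(sizeof(*info) + nbytes);
    int fd = open("/dev/random", O_WRONLY);

    if (info == NULL || fd < 0) {
        perror("setup");
        return 1;
    }

    memset(info->buf, 0xAA, nbytes);   /* stand-in for tested RNG output */
    info->buf_size = nbytes;
    info->entropy_count = nbytes * 8;  /* credit claimed, in bits */

    if (ioctl(fd, RNDADDENTROPY, info) < 0)
        perror("RNDADDENTROPY");

    close(fd);
    free(info);
    return 0;
}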
Another instance of this problem—in a different guise—appears in a discussion about random numbers for virtualized I/O, with Garzik asking: "Has anyone yet written a "hw" RNG module for virt, that reads the host's random number pool?" Rusty Russell responded with a patch for a virtio "hardware" random number generator as well as one that adds it into his lguest hypervisor. The lguest patch reads data from the host's /dev/urandom, which is not where H. Peter Anvin thinks it should come from.
The virtio implementation only provides the hw_random implementation, thus it requires user space help to get entropy data into the kernel. Much like any process that can read /dev/random, lguest could exhaust the host entropy pool, so there was some discussion of limiting how much random data guests can request from the device. A guest implementation could then use a small pool of entropy read from the host to seed its own random number generator for the simulated hardware device.
Removing the last remaining uses of IRQF_SAMPLE_RANDOM in network drivers seems likely, though some way to mix that data into the entropy pool without giving it any credit is still a possibility. With luck, that will encourage more effort into incorporating new sources of entropy using tools like EGD or, for systems that have it available, random number hardware. For systems that lack the traditional entropy sources, this should lead to a better initialized and fuller pool, while eliminating a potential attack by way of network packet manipulation.
Index entries for this article
Kernel: Random numbers
Kernel: Security
Comments

Appropriate sources of entropy
Posted May 22, 2008 2:40 UTC (Thu) by ikm (guest, #493)

> Removing the last remaining uses of IRQF_SAMPLE_RANDOM in network drivers seems likely, though some way to mix that data into the entropy pool without giving it any credit is still a possibility.

Why is it just a possibility? I don't get it -- what could possibly be said against that? By following that route, you'd make dubious data hopefully a bit less dubious, but you wouldn't make it more dubious than it is already. So what's the downside? "All or nothing" syndrome?
Appropriate sources of entropy
Posted May 22, 2008 10:42 UTC (Thu) by jmspeex (subscriber, #51639)

> Why is it just a possibility? I don't get it -- what could possibly be said against that?

I think they mean a possibility in terms of "it's possible to write code that isn't too ugly", rather than "possible" in the theoretical sense. Of course, the worst it can do is not increase the entropy. To actually decrease the entropy, it would need to be correlated to some other information in the pool and cause really weird interactions (highly unlikely IMO).
Appropriate sources of entropy
Posted May 22, 2008 15:19 UTC (Thu) by cpeterso (guest, #305)

Linux's RNG does have code (a flag called dont_count_entropy) to mix new entropy data into the entropy pool without increasing the entropy credits (i.e. /dev/urandom data would be more random but blocked /dev/random readers would not be woken up). But no code actually seems to take advantage of it.
Appropriate sources of entropy
Posted May 24, 2008 5:09 UTC (Sat) by bronson (subscriber, #4806)

The downside was just demonstrated by Debian. Mixing good randomness with dubious randomness seems harmless, right? But what happens if the good randomness dries up? Will you notice? Will you end up using dubious randomness for something that matters?
Appropriate sources of entropy
Posted May 24, 2008 6:50 UTC (Sat) by ikm (guest, #493)

This is not like Debian's situation. If dubious randomness isn't accounted for as incoming entropy bits, /dev/random would block the same way it would without any dubious randomness at all. As for /dev/urandom, without any external randomness /dev/urandom would be looping inside its SHA-1 feedback, acting as a pure PRNG. Any kind of external randomness injected into that loop would only make its randomness better; it can't possibly make it worse (due to the crypto properties of SHA-1 you can't forge any correlation here). The point is, when good randomness dries up, what you're getting from /dev/urandom is PRNG output. Any dubious randomness mixed in could only improve this situation. So, to answer your question, you really won't notice in either case, but your randomness would be a bit better if you mix in some dubious stuff as well. In the latter case, your chances of using dubious randomness (pure PRNG) are actually smaller.
Appropriate sources of entropy
Posted May 24, 2008 8:10 UTC (Sat) by bronson (subscriber, #4806)

My point is, either you care about the strength of your random numbers or you don't.

If you care, you're using /dev/random and you only mix in strong entropy. Mixing in weak entropy seems harmless but will mask problems that would otherwise be obvious. The Debian situation.

If you don't care, then you're happy with a strong, well-seeded PRNG and there's no need to mix in dubious random data.

Is there a middle ground? I don't see one.
Appropriate sources of entropy
Posted May 24, 2008 18:47 UTC (Sat) by ikm (guest, #493)

Any cryptographic PRNG needs to be reseeded once in a while, and some dubious data will do just fine for that, given that it is mixed in in a cryptographically secure way. A box with only a network connection is a good example of that -- it does not have much real entropy coming in. You say that in the absence of any trusted entropy a crypto PRNG is never to be reseeded; I would disagree. One of the problems is what would happen if a seed file, which stores state across reboots, is compromised. Another accounts for any sort of weakness found in the PRNG itself. If you need more details, see Schneier's Yarrow design paper; I could only agree with what he had to say. The point is, sticking to the one initial seeding forever is a bad idea.
Appropriate sources of entropy
Posted May 22, 2008 20:27 UTC (Thu) by aegl (subscriber, #37581)

> But network interrupts are seen as a dubious source of entropy because they may be able to be observed, or manipulated, by an attacker

Most modern systems can measure the interval between interrupts to a very high precision using a processor cycle counter ... and Linux does use this when it is available when adding randomness to the pool. It seems implausible that an attacker can reliably observe or manipulate network traffic to sub-nanosecond precision (unless (s)he has a logic analyser connected to the target system!).

If the only clock source is "jiffie" resolution, then I can see this is an issue.
Appropriate sources of entropy
Posted May 24, 2008 17:49 UTC (Sat) by giraffedata (guest, #1954)

> It seems implausible that an attacker can reliably observe or manipulate network traffic to sub-nanosecond precision

Why? All the physical processes that move bits along a network, and drive the CPU and memory states for processing them, are predictable with that precision. E.g. if I send two packets into a network N nanoseconds apart and nobody else is using that part of the network, why wouldn't the receiver register them as received N nanoseconds apart?
Appropriate sources of entropy
Posted May 24, 2008 23:45 UTC (Sat) by dlang (guest, #313)

If there is no other traffic involved then you may have a point, but if there is no other activity happening on the server other than what's initiated by the bad guy, there's nothing interesting on the server for the bad guy to get.

If the server is doing anything else (talking to anyone else on the network, doing processing of some kind, etc.) then you start having things happen on the server that are not controlled by the bad guy.
Appropriate sources of entropy
Posted May 25, 2008 13:39 UTC (Sun) by kleptog (subscriber, #1183)

This is something I don't quite understand. Yes, the packets might be registered at the network card exactly N nanoseconds apart, but between the time that the packet is registered by the card and when entropy might be added there is:

- Waiting for the PCI bus to be free to check the status of the card.
- The CPU finding the code to run, which may involve looking up page tables and pulling code out of any number of caches, each of which takes an unpredictable amount of time to respond.
- The process of DMAing the data to main memory, which takes an unpredictable amount of time, depending on the state of the DRAM.
- The busses being shared between various CPUs which are doing other things at the same time.
- The execution time of CPU instructions being affected by branch prediction logic and instruction scheduling algorithms. Hyperthreading makes it worse.

And you're saying that at the end of this there's not even a single bit of entropy? If the machine were otherwise completely idle I might understand it, but if you just register lots of dubious sources and use as entropy the time between different dubious sources, I don't see how it could be in any way predictable.

If I had any idea how to do it, I'd create a device that tried to extract entropy from the timer interrupt and see if there is any correlation to be found...
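That closing wish can at least be crudely approximated from user space. The following toy (purely illustrative, not from the discussion) timestamps a repeated cheap operation with clock_gettime() and prints the low bits of the deltas; if they look noisy, the latency chain described above is contributing some unpredictability. It is a measurement aid, not an entropy source.

/* Crude jitter probe: print low bits of successive timing deltas. */
#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec prev, now;
    clock_gettime(CLOCK_MONOTONIC, &prev);

    for (int i = 0; i < 16; i++) {
        clock_gettime(CLOCK_MONOTONIC, &now);
        long delta = (now.tv_sec - prev.tv_sec) * 1000000000L
                   + (now.tv_nsec - prev.tv_nsec);
        printf("delta=%ld ns  low bits=%lx\n", delta, delta & 0xf);
        prev = now;
    }
    return 0;
}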
Appropriate sources of entropy
Posted May 26, 2008 16:34 UTC (Mon) by aegl (subscriber, #37581)

> the time between different dubious sources

Agree with most of what you've said here ... but I have to comment that Linux doesn't use the time delta between different interrupt sources. It keeps a per-IRQ history and computes delta-t based on the previous interrupt using the same IRQ (if multiple devices are sharing the same IRQ, then this will be a cross-device time, but generally people try to arrange that devices do not share IRQs).

I have no idea why Linux does this ... in some cases using deltas between different interrupt sources would provide some defense against an attacker who does have tight control over the packets on one or more interfaces.
Appropriate sources of entropy
Posted May 29, 2008 10:01 UTC (Thu) by forthy (guest, #1525)

Actually, no, none of the physical processes are predictable to sub-ns precision. First, the sender's CPU uses a cache - if some data is not in the cache, it will take more time to read in. The same happens on the receiver CPU - a single cache miss, and the counter will read differently (last few bits only, for sure).

Then, there's also clock synchronization. CPU clocks aren't synchronized, and the crystals aren't too precise - a 200MHz HT clock for an AMD processor can actually run at 201.xxMHz (the one I'm using right now is running at 201.155MHz; the other, very similarly configured machines I have in the office have the last digits at 53, 54, 55, and 89). As an outside attacker, you can't know the exact value (well, even as an inside attacker you can't know the exact number - only to a limited precision), but you would need to in order to inject deterministic data into the random number pool.

Also, today's networks are usually switched, so there's a store-and-forward switch in between. These switches can theoretically be quite deterministic, but their clock is slower than the CPU clock (25MHz for fast, 125MHz for Gb Ethernet). This clock is not synchronized with the other PCs. So if you have a 2.5GHz CPU sending a packet through a single Gb switch, and receive it with another 2.5GHz CPU, you can add 4 bits of randomness into your pool. Chances are high that even more bits are random.

I suggest a challenge: try to produce a deterministic pattern with full knowledge over both sides - e.g. by modifying the kernel's ping code so that it sends the rdtsc timestamp as the answer. Whatever quality you achieve in this challenge will be the "dubious" part of the randomness, and the remaining noise can be added.
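The full challenge involves kernel ping code, but a rough user-space approximation (my own illustration, not forthy's code) is easy: timestamp each incoming UDP packet with the x86 TSC and watch how many low bits of the deltas vary even for traffic sent at fixed intervals. x86-only, and port 9999 is arbitrary.

/* Timestamp received UDP packets with rdtsc and print cycle deltas. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

static inline uint64_t rdtsc(void)
{
    uint32_t lo, hi;
    __asm__ __volatile__("rdtsc" : "=a"(lo), "=d"(hi));
    return ((uint64_t)hi << 32) | lo;
}

int main(void)
{
    char buf[1500];
    struct sockaddr_in addr;
    int s = socket(AF_INET, SOCK_DGRAM, 0);

    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(9999);
    if (s < 0 || bind(s, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("socket/bind");
        return 1;
    }

    uint64_t prev = rdtsc();
    while (recv(s, buf, sizeof(buf), 0) > 0) {
        uint64_t now = rdtsc();
        printf("delta=%llu cycles\n", (unsigned long long)(now - prev));
        prev = now;
    }
    return 0;
}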
Appropriate sources of entropy
Posted May 22, 2008 21:52 UTC (Thu) by cpeterso (guest, #305)

Another source of randomness for daemons like EGD or rngd could be to periodically wget news.google.com or random.org's numbers and pipe it into /dev/urandom.
Wgetting bytes from random.org
Posted May 22, 2008 23:22 UTC (Thu) by pr1268 (guest, #24648)

I'll happily share my technique for grabbing some random bytes from Random.org (run as root, obviously):

#!/bin/sh
# Grab 4 unsigned bytes from random.org's HTTP interface,
# parse out just the numbers (filtering the tab chars out),
# create a 4-byte (32-bit) hexdump, and finally,
# write the output to /dev/random.
wget -o /dev/null -O - \
  "http://www.random.org/integers/?num=4&min=0&max=255&col=4&base=16&format=plain&rnd=new" \
  | sed -e 's/\t//g' \
  | xxd -r -p \
  > /dev/random

Feel free to use or derive from the above; while I'm the author, I do not imply any copyright on it. Comments, suggestions, and criticism are most certainly welcome.
Disclaimers and correction to above wget script
Posted May 23, 2008 2:22 UTC (Fri) by pr1268 (guest, #24648)

Standard disclaimers and a correction: the value of the num= query parameter specifies how many bytes are provided. I'm unsure whether 8 bytes (64 bits) would be preferred for 64-bit Linux users (any 64-bit users out there wish to comment on that?). Finally, my apologies for sounding like a Random.org shill, but there are some really good articles and links on entropy and randomness there.
Wgetting bytes from random.org
Posted May 24, 2008 0:09 UTC (Sat) by jch (guest, #51929)

> wget ... | xxd -r ... > /dev/random

I believe that this will mix new data into the entropy pool, but not actually increase the entropy estimate. In other words, any process blocking on /dev/random will remain blocked until some other, accounted for, entropy is added.

I believe you'll need to ioctl(RNDADDENTROPY) in order to fix that.
Wgetting bytes from random.org
Posted May 24, 2008 1:31 UTC (Sat) by pr1268 (guest, #24648)

> I believe that this will mix new data into the entropy pool, but not actually increase the entropy estimate.

But isn't that what's happening all the time anyway, with the "environmental noise from device drivers and other sources" mentioned in the random(4) man page? What I was attempting to do is add another, external entropy source. Actually, I hardly use this script at all; it was more an attempt to make something useful while honing my shell scripting skills. I did learn xxd(1).

> In other words, any process blocking on /dev/random will remain blocked until some other, accounted for, entropy is added.

I don't think the entropy pool has ever been drained completely (i.e., /dev/random blocked for reading) on my workstation. Except for the time I did that myself from a shell console. A few keystrokes, mouse movements, and network traffic bytes later (30 seconds or so), the pool had been refilled, according to /proc/sys/kernel/random/entropy_avail.

> I believe you'll need to ioctl(RNDADDENTROPY) in order to fix that.

But this is supposed to be a shell script, not a C program! ;-)

I do appreciate your feedback and comments, thanks! Understand that I learn a lot reading our editors' stuff and others' comments.
Wgetting bytes from random.org
Posted May 24, 2008 2:12 UTC (Sat) by jch (guest, #51929)

> But isn't that what's happening all the time anyway, with the "environmental noise from device drivers and other sources"?

No. The various sources of entropy add entropy to the random pool and increase the entropy estimate by some small value.

You may want to see for yourself in the kernel sources -- they end up calling add_timer_randomness (drivers/char/random.c line 571) which calls add_entropy_words and then credit_entropy_store. This increases the value of entropy_count, and may cause processes blocking on /dev/random to wake up.
Wgetting bytes from random.org
Posted May 24, 2008 3:52 UTC (Sat) by pr1268 (guest, #24648)

So how is me writing bytes to /dev/random via a script any different than what my distro (Slackware 12.0) does in rc.S (also a script) when the "carry-over" entropy file (/etc/random-seed) is written to the random device? Other than a difference in the quantity of bytes, I see no difference. My intuition was that /dev/random is (root) writable so that the sysadmin can incorporate additional sources of entropy. I don't mean to sound like I'm arguing, just sincerely interested... Thanks!
Wgetting bytes from random.org
Posted May 24, 2008 5:53 UTC (Sat) by dlang (guest, #313)

The sysadmin can add randomness, but the system will not trust that it is random (without tweaking things via the ioctl), so even if the sysadmin sends completely predictable data to /dev/random it won't do any harm.
Wgetting bytes from random.org
Posted May 24, 2008 16:26 UTC (Sat) by jch (guest, #51929)

> My intuition was that /dev/random is (root) writable so that the sysadmin can incorporate additional sources of entropy.

The distinction between mixing new data into the random pool and adding to the entropy estimate is what the article is about.

The in-kernel RNG maintains a pool of random data and an estimate of how much entropy is in the pool. When you read from /dev/(u)random, the entropy estimate is reduced. When it reaches 0, reads from /dev/random will block. That's the easy part.

The difficult part is deciding when to increase the entropy estimate. When you write 100 bytes to /dev/random, unless the 100 bytes are perfectly random, they should not add 800 bits to the entropy estimate, but some lower value that only the person who generated the data is able to choose reasonably. For that reason, merely writing to /dev/random does not add to the entropy estimate; you need to explicitly increase it using the ioctl.

> So how is me writing bytes to /dev/random via a script any different than what my distro (Slackware 12.0) does in rc.S (also a script) when the "carry-over" entropy file (/etc/random-seed) is written to the random device?

It's no different. Your distribution is mixing the old data into the random pool, but not increasing the entropy estimate. This way, if the carry-over data is not truly random, no serious security vulnerability will ensue.
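That distinction can be checked empirically: the RNDGETENTCNT ioctl reads the kernel's current entropy estimate in bits, and a plain write() to /dev/random in between should leave it unchanged. A small sketch, not part of the original exchange:

/* Show that writing to /dev/random mixes data in without credit. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/random.h>

int main(void)
{
    int before = 0, after = 0;
    char junk[64];
    int fd = open("/dev/random", O_RDWR);

    if (fd < 0) {
        perror("/dev/random");
        return 1;
    }
    memset(junk, 'x', sizeof(junk));        /* deliberately non-random */

    ioctl(fd, RNDGETENTCNT, &before);
    if (write(fd, junk, sizeof(junk)) < 0)  /* mixed in, but no credit */
        perror("write");
    ioctl(fd, RNDGETENTCNT, &after);

    printf("estimate before: %d bits, after: %d bits\n", before, after);
    close(fd);
    return 0;
}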
Thanks for the replies
Posted May 26, 2008 2:23 UTC (Mon) by pr1268 (guest, #24648)

Thanks to those who replied to my questions. Coincidentally, Ted Ts'o recently had a related explanation (to dlang's and jch's above) on the LKML.
Wgetting bytes from random.org - copyright
Posted May 24, 2008 17:40 UTC (Sat) by giraffedata (guest, #1954)

> while I'm the author, I do not imply any copyright on it.

You should change your wording for this kind of thing. In most places, copyright is implied by law, whether you do something to imply you want it or not. You have to explicitly disclaim it. "I disclaim any copyright" would work. "I contribute this to the public domain" is the usual wording.
Wgetting bytes from random.org
Posted May 30, 2008 10:12 UTC (Fri) by Duncan (guest, #6647)

[reposted as the initial submission timed out without confirmation]

Viewing the previous replies to your script posting, I'd say it's a good thing you don't increase the entropy count based on that script. After all, you're using an unencrypted http: connection. As such, it'd be a (relatively) simple matter for an attacker, indeed even a remote attacker, to do a MitM attack and substitute whatever he wanted into the "response from random.org", which for all you know is anything but.

So yes, in line with the theme of the article, adding the bits shouldn't do any harm, as long as you don't count it as added entropy. However, it certainly can't be counted on to /help/ either, since you've really no idea where the data is coming from or how predictable it might be, so it's a good thing your script does /not/ have the system count it as added entropy.

Of course, the first instinct would then be to use an encrypted/ssl connection. However, I believe that'd be defeating the purpose to some extent, since creating the encrypted connection will (I assume, I'm no authority and really haven't a clue, only a guess) consume entropy in the first place. Assuming it's allowed, one could then grab more entropy from random.org than was consumed, but there'd still need to be some entropy available initially or the encrypted connection itself would be suspect.

I'm really surprised nobody else noted this in their replies... <shrug>

Duncan
Appropriate sources of entropy
Posted May 24, 2008 17:51 UTC (Sat) by giraffedata (guest, #1954)

> Another source is from hardware random number generators.

Where does a hardware RNG get its entropy?
Appropriate sources of entropy
Posted May 26, 2008 5:42 UTC (Mon) by ikm (guest, #493)

Johnson noise and other crap like that. Usually it's not really totally uniformly random, but still quite unpredictable :)
Appropriate sources of entropy
Posted May 30, 2008 22:16 UTC (Fri) by ejr (subscriber, #51652)

> Johnson noise and other crap like that.

Search for "randomness extractor" for information on turning an unknown random distribution into a known one. It's quite difficult, but often possible. Cool combinatorics at work, and also closely related to error correcting codes and hashing. (They're all related by the concept of distance between and distinguishability of the sampled points.)
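The simplest such extractor is von Neumann's classic trick: take raw bits in pairs, emit 0 for "01" and 1 for "10", and discard "00" and "11". The output is unbiased as long as the input bits are independent, however biased they are. A small demonstration (the sample stream is made up):

/* Von Neumann extractor: debias an independent-but-biased bit stream. */
#include <stdio.h>

/* Returns 0 or 1 for a usable pair, -1 when the pair is discarded. */
static int von_neumann(int bit1, int bit2)
{
    if (bit1 == bit2)
        return -1;      /* 00 or 11: throw away */
    return bit1;        /* 01 -> 0, 10 -> 1 */
}

int main(void)
{
    /* A heavily biased demo stream (mostly ones). */
    int raw[] = { 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 0, 0, 1 };
    int n = sizeof(raw) / sizeof(raw[0]);

    for (int i = 0; i + 1 < n; i += 2) {
        int out = von_neumann(raw[i], raw[i + 1]);
        if (out >= 0)
            printf("%d", out);
    }
    putchar('\n');
    return 0;
}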
Randomness server
Posted May 29, 2008 11:46 UTC (Thu) by roel (guest, #41887)

Is it maybe possible to receive randomness through an encrypted connection from a 'randomness' server?
Randomness server
Posted May 30, 2008 10:34 UTC (Fri) by Duncan (guest, #6647)

Hmm... your post asking the question, posted May 29; a comment subthread discussing basically that, initial post May 22... It might be worthwhile reading, or at least scanning, the existing comments before you ask a question in your own comment...

That said, the subthread in question proposed an /unencrypted/ connection to such a server (random.org). You at least get credit for not making /that/ mistake. However, as I just pointed out in a reply to that subthread, an encrypted connection probably (I'm no expert, but I believe the usual SSL method does anyway) requires some initial entropy to set up in the first place, so unless you fetch more than that, it's hardly worth it; and even then, you'd have to have at least some initial good-quality entropy to set up the connection, or anything received on it could hardly be trustworthy in the first place, so it's a bit of a chicken-and-egg problem.

Of course, if the encryption entropy is pregenerated and stored, such as with one-time pads or the like, it's possible. OTOH, the longer such pregenerated entropy is held, the more opportunity there has been to compromise it by some means or other, so that's not a perfect solution either. Still, it may be "good enough"; but then again, the unencrypted http solution discussed above, or indeed the "enriched" PRNG solution of /dev/urandom, is likely "good enough" for most general use cases as well.

Duncan
audio-entropyd / video-entropyd
Posted May 30, 2008 9:58 UTC (Fri) by flok (subscriber, #17768)

If you have a spare audio card lying around, one could also use audio-entropyd. The same thing goes for a video4linux device (e.g. a webcam): video-entropyd.
audio-entropyd / video-entropyd
Posted Aug 1, 2012 15:07 UTC (Wed) by flok (subscriber, #17768)