LWN: Comments on "Random numbers from CPU execution time jitter" https://lwn.net/Articles/642166/ This is a special feed containing comments posted to the individual LWN article titled "Random numbers from CPU execution time jitter". en-us Mon, 06 Oct 2025 23:26:08 +0000 Mon, 06 Oct 2025 23:26:08 +0000 https://www.rssboard.org/rss-specification lwn@lwn.net Random numbers from CPU execution time jitter https://lwn.net/Articles/650233/ https://lwn.net/Articles/650233/ roblucid <div class="FormattedComment"> Also, how does a kernel-only pool make things worse?<br> Entropy can be depleted on some systems, and consumed by user space as a DoS.<br> <p> Adding another hurdle, even if imperfect, provides practical benefit, and an attacker able to control the machine environment so precisely has probably won the game already.<br> </div> Mon, 06 Jul 2015 08:59:54 +0000 Lowering entropy https://lwn.net/Articles/644454/ https://lwn.net/Articles/644454/ DigitalBrains <div class="FormattedComment"> <font class="QuotedText">&gt; how do you decide that you need "128 shannons of entropy for my crypto"?</font><br> <p> I'm going by the principle that the only secret thing about my crypto application is its key. I'm assuming for the moment that my attacker is able to exhaustively search the 96 shannons, or reduce the search space far enough to do an exhaustive search on what remains.<br> <p> Because only my key is unknown, I'm assuming the attacker can reproduce the deterministic portions.<br> <p> When you argue that mixing in bad quality randomness is not a problem because there's still plenty left, this seems like the bald man's paradox. If you have a full set of hair (good quality randomness), and some individual hairs fall out (bad randomness), you still have a full set of hair. Fine, so mixing in some bad sources doesn't make you go bald. But at some point, if enough individual hairs fall out, you are going bald: producing bad quality randomness. So I think that if you suspect that this CPU execution time jitter produces only 0.2 shannons per bit, you should not use it as if it has a full shannon per bit. You can still use it, but you shouldn't pretend it has more information content than it has. And if you don't feel confident giving a reliable lower bound on the amount of entropy delivered by the method, you might even be better off not using it. Better safe than sorry.<br> <p> It's about delivering an application what it expects, about quantifying what you mean when you say you are using a certain amount of randomness. A crypto application requesting 128 shannons of entropy does that because its designers decided this is a good amount of entropy to use. There are always margins built in, so it might be safe to give it only 96 shannons. But you're eating away at the built-in margins, and at some point you're going past them.<br> <p> The main point I tried to make is that I agree with commenters saying that you can't lower the entropy by mixing in determinism, but that that is not the point. Other than that, I think this is a really complicated subject and I'm not an expert at all, just an interested hobbyist.<br> <p> <font class="QuotedText">&gt; [...] unless the attacker knows/controls the deterministic data that was mixed in, it's still effectively random as far as the attacker is concerned.</font><br> <p> I think that when you say that something is not a good quality source of randomness, you're effectively saying that you suspect someone could predict a part of its output.
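To put rough, purely illustrative numbers on the two figures used in this thread (treating the weak part as fully predictable):

    2^128 ≈ 3.4 × 10^38 possible states
     2^96 ≈ 7.9 × 10^28 possible states   (a factor of 2^32 ≈ 4.3 × 10^9 fewer)
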
So yes, then there are attackers that might know the deterministic data to some extent. They can use this knowledge to reduce their search space. It's still only a small reduction; they still have a long way to go.<br> <p> </div> Thu, 14 May 2015 15:07:44 +0000 Lowering entropy https://lwn.net/Articles/644313/ https://lwn.net/Articles/644313/ dlang <div class="FormattedComment"> <font class="QuotedText">&gt; If I need 128 shannons of entropy for my crypto, I will not get there with 96 shannons and something deterministic mixed in.</font><br> <p> how do you decide that you need "128 shannons of entropy for my crypto"?<br> <p> and even if you only have 96 shannons of entropy, unless the attacker knows/controls the deterministic data that was mixed in, it's still effectively random as far as the attacker is concerned. This only becomes a problem when the deterministic factor can be known by the attacker, and even different amounts of deterministic data will result in different output.<br> </div> Wed, 13 May 2015 19:25:22 +0000 Lowering entropy https://lwn.net/Articles/644175/ https://lwn.net/Articles/644175/ DigitalBrains <div class="FormattedComment"> <font class="QuotedText">&gt; if you add an additional source of entropy and don't let it affect the accounting of 'good enough', it can at worst do nothing.</font><br> <p> But that's the whole problem, is it not? You can't lower the actual entropy of the pool by mixing in /dev/zero. This is, however, beside the point! What is actually affected is the kernel's estimate of the quality of the randomness. Say you're down to an estimate of 8 bits of entropy. The kernel starts mixing in something completely deterministic like /dev/zero, but it thinks it is increasing entropy in the pool and is now under the impression it has a whopping 256 bits of entropy to give out. Too bad it still only has 8 bits of actual entropy, which gets used as a cryptographic key!<br> <p> I'm using overly dramatic numbers, and the kernel purposely underestimates its availability of entropy. But entropy is a well-defined, if difficult to measure, concept. If I need 128 shannons of entropy for my crypto, I will not get there with 96 shannons and something deterministic mixed in.<br> </div> Wed, 13 May 2015 09:36:37 +0000 Random numbers from CPU execution time jitter https://lwn.net/Articles/643646/ https://lwn.net/Articles/643646/ toyotabedzrock <div class="FormattedComment"> Given that he is looping over and over, the CPU will be in a more consistent state for this "jitter".<br> <p> Ask yourself what the best way to lower jitter would be: it would be this code.
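For illustration, the measured loop the article describes amounts to roughly the following -- a simplified userspace sketch, not the actual jitterentropy code, with clock_gettime() standing in for the raw cycle counter the kernel module would use and the function names made up for the example:

    /*
     * Simplified sketch of the measured loop described in the article:
     * walk the 2KB buffer, doing one load and one store per entry, and
     * time the walk; the low bits of each delta are what gets credited
     * as jitter.
     */
    #include <stddef.h>
    #include <stdint.h>
    #include <time.h>

    #define BUF_SIZE 2048              /* the 2KB buffer from the article */
    static unsigned char buf[BUF_SIZE];

    static uint64_t now_ns(void)
    {
            struct timespec ts;
            clock_gettime(CLOCK_MONOTONIC, &ts);
            return (uint64_t)ts.tv_sec * 1000000000ull + (uint64_t)ts.tv_nsec;
    }

    /* One sample: the time taken to touch every byte once. */
    uint64_t jitter_sample(void)
    {
            uint64_t start = now_ns();
            for (size_t i = 0; i < BUF_SIZE; i++)
                    buf[i]++;          /* one load and one store per entry */
            return now_ns() - start;
    }
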
At best it would provide a very predictable level of jitter because it would heat up the chip in a predefined way.<br> </div> Fri, 08 May 2015 05:53:27 +0000 Random numbers from CPU execution time jitter https://lwn.net/Articles/643603/ https://lwn.net/Articles/643603/ itvirta <div class="FormattedComment"> <font class="QuotedText">&gt; [0] say, predictable crypto keys as from Debian's valgrind olympics: a</font><br> <font class="QuotedText">&gt; black-magic issue because of valgrind's status as monolithic and</font><br> <font class="QuotedText">&gt; incomprehensible making it an object of authority.</font><br> <p> valgrind or openssl?<br> <p> </div> Thu, 07 May 2015 16:29:47 +0000 Actually still confused https://lwn.net/Articles/643594/ https://lwn.net/Articles/643594/ itvirta <div class="FormattedComment"> <font class="QuotedText">&gt; (For example, `dd if=/dev/urandom of=/dev/sda` is a terrible misuse.</font><br> <font class="QuotedText">&gt; Instead, use something like </font><br> <font class="QuotedText">&gt; `openssl enc -aes128 -k "shred" &lt; /dev/urandom &gt; /dev/sda`.)</font><br> <p> Doesn't that still read from urandom as much as the dd, since urandom<br> is used as the input data?<br> <p> Maybe you mean something like <br> openssl enc -aes128 -pass file:/dev/urandom &lt; /dev/zero &gt; /dev/sda<br> <p> (Or even with the -nosalt flag added, because otherwise the<br> output always starts with the string "Salted__".)<br> <p> <p> The idea is good, however. I've used shred(1) for wiping disks, and<br> in random mode it uses urandom directly to get the randomness. It<br> makes it hideously slow. Perhaps someone(tm) should patch it to support<br> a faster generator or just make a smarter dedicated (simple-to-use) tool. :)<br> <p> <p> <p> </div> Thu, 07 May 2015 16:23:43 +0000 Random numbers from CPU execution time jitter https://lwn.net/Articles/643546/ https://lwn.net/Articles/643546/ ksandstr <div class="FormattedComment"> <font class="QuotedText">&gt; The jitter entropy module allocates a 2KB buffer during initialization that it loops through and simply adds one to the value stored there (which causes both a load and a store). The buffer is larger than the L1 cache of the processor, which should introduce some unpredictable wait states into the measurement. </font><br> <p> A 2 KiB buffer will fit entirely into any L1 cache since 1998-ish. The only mainstream exceptions are the Pentium 4 "NetBurst"'s earliest models with their (even at the time) tiny L1d set-ups -- and even those were 2 KiB twice over.<br> <p> Granted, it's almost guaranteed that a rarely-touched 2 KiB buffer would be at least partially cold if the entropy-mixing code is cold also; however, 1) that's not what the article says, and 2) an entropy mixer's behaviour should remain consistent regardless of how hot its own code path and/or accessory buffer is.<br> <p> These are characteristics of poorly-understood code that merely appears to do the right thing, rather than provably doing so even in the face of attempted compromise. Experience shows that poorly-understood but established code (i.e. black magic), such as what this has the potential to become, is very difficult to remove even in the face of grave failure[0].
Considering that the cache and/or hardware counter behaviour of future architectures may change arbitrarily, there's little value in poorly-defined cache latency shenanigans such as this over their humble-but-obvious "read counter, stir pool w/ low 2 bits" counterpart.<br> <p> <p> [0] say, predictable crypto keys as from Debian's valgrind olympics: a black-magic issue because of valgrind's status as monolithic and incomprehensible making it an object of authority.<br> </div> Thu, 07 May 2015 13:06:58 +0000 Random numbers from CPU execution time jitter https://lwn.net/Articles/643072/ https://lwn.net/Articles/643072/ robbe <div class="FormattedComment"> I find it a bit disheartening that the linked paper only tested one embedded CPU (MIPS) ... the rest were x86-compatible CPUs, which, with their looong pipelines, deep cache hierarchies, turbo-mode et cetera, are very prone to indeterminism.<br> <p> So this serves the "my (virtual) x86 server needs entropy for HTTPS" case pretty well ... but embedded?<br> </div> Mon, 04 May 2015 12:21:44 +0000 Random numbers from CPU execution time jitter https://lwn.net/Articles/643006/ https://lwn.net/Articles/643006/ alankila <div class="FormattedComment"> The seed on disk won't make things worse, even if it is revealed to an attacker or reused. I think technically what is stored as seed is some amount of data from the current entropy pool, and it is fed in as entropy using some userspace random injection API.<br> <p> So, even if the random seeding entropy is known to an attacker, there's still the entropy the system accumulated until that point, so we are no worse off than before; if the seed is shared between multiple systems or reused at boot, the situation is the same as well. It would be good to periodically rewrite the entropy seed while the system is running, though, to limit the risk of reusing the entropy.<br> <p> In my opinion, it is not difficult to come up with lots of low-quality entropy; the issue is that Linux counts only extremely high quality bits as entropy. Those bits can be made arbitrarily scarce by increasing the requirements posed on what qualifies as random, to the point that the random subsystem is starved of all entropy until relatively late at boot and therefore can't function properly. I think this is a case of making the requirements too hard.<br> </div> Sun, 03 May 2015 10:08:54 +0000 Random numbers from CPU execution time jitter https://lwn.net/Articles/643004/ https://lwn.net/Articles/643004/ alankila <div class="FormattedComment"> The attack outlined is probably not applicable to an entropy generator input situation. The key problem is that the inputs are likely to contain the current seed of the random number generator in some form. E.g. if you have some new data x you want to feed into the pool, a straightforward solution is to update the random number generator state with "state = H(state || x)" where H is a hash function returning a suitably wide result. Since we are going to assume that the attacker is not already in possession of the seed, the attack is not possible.<br> </div> Sun, 03 May 2015 09:44:33 +0000 Random numbers from CPU execution time jitter https://lwn.net/Articles/642954/ https://lwn.net/Articles/642954/ jzbiciak <P>I agree. In the SoCs I've been involved with, the main source of indeterminism involves crossing asynchronous clock domains, where each clock domain is driven by a different crystal (or other independent oscillator).
Otherwise, the SoC is pretty darn deterministic.</P> <P>As Ted Ts'o says, just because <I>you</I> can't work it out, it doesn't mean <I>I</I> (or a sufficiently motivated attacker) can't work it out.</P> <P>That even applies to caches with so-called random replacement policies. They're really <I>pseudo-random</I>, and in principle you can work out whatever state is in that PRNG eventually.</P> <P>I've spent way too much skull sweat staring at waveforms and whatnot to think of cache behavior as truly random. Sure, it's unpredictable from the context of a given running application that doesn't know the full state of the machine. But, if you know the actual state of the cache and the sequence of requests coming from the application and so on, the whole memory hierarchy is pretty much deterministic.</P> <P>(Now that said, it's quite common in the SoCs I've worked with that the external memory is driven by a distinct clock from the processor and memory hierarchy. That will affect the timing of cache misses to external memory by a couple of cycles here or there. So, there is indeterminism in that domain crossing. But, the entropy you should expect to extract from that should be very low.)</P> Sat, 02 May 2015 08:14:45 +0000 Random numbers from CPU execution time jitter https://lwn.net/Articles/642953/ https://lwn.net/Articles/642953/ vstinner <div class="FormattedComment"> I've already seen HAVEGE used in cloud virtual machines, because /proc/sys/kernel/random/entropy_avail was too low. Virtual machines get no keyboard strokes or hardware interrupts. The problem is not the quality of the entropy, but the quantity of entropy. Without HAVEGE, SSH quickly hangs at the connection after a few tries...<br> <p> A better fix is to configure virtio-rng for the virtual machine.<br> </div> Sat, 02 May 2015 07:29:51 +0000 Actually still confused https://lwn.net/Articles/642839/ https://lwn.net/Articles/642839/ cesarb <div class="FormattedComment"> Think of the pool's entropy as a "measure of unpredictability". If the entropy is ten bits, for instance, you'd need at most 1024 guesses to find the pool state.<br> <p> You should think of the random generator as having two separate and independent parts: the pool itself and the output function.<br> <p> Inputs are mixed into the pool using a cryptographic function which takes two values: the previous pool state and the input value, and outputs the new pool state. This function thoroughly mixes its inputs, such that a one-bit difference on any of them would result on average in half of the output bits changing, and other than by guessing it's not possible to get from the output back to the inputs.<br> <p> Suppose you start with the pool containing only zeros. You add to it an input containing one bit of entropy. Around half of the pool bits will flip, and you can't easily reverse the function to get back the input, but since it's only one bit of entropy you can make two guesses and find one which matches the new pool state. Each new bit of entropy added doubles the number of guesses you need to make; but due to the pigeonhole principle, you can't have more entropy than the number of bits in the pool.<br> <p> To read from the pool, you use the second part, the output function. It again is a cryptographic function which takes the whole pool as its input, mixes it together, and outputs a number. Other than by guessing, it's not possible to get from this output to the pool state.<br> <p> Now let's go back to the one-bit example.
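(First, a minimal sketch of that two-part structure in code, with SHA-256 from OpenSSL's libcrypto standing in for the kernel's real mixing and output functions -- they differ in detail, and the function names here are made up; this only shows the shape:)

    /*
     * Toy model of the two parts described above; SHA-256 stands in for
     * the kernel's actual primitives and the pool is shrunk to one hash
     * block.  Not the real drivers/char/random.c code.
     */
    #include <string.h>
    #include <openssl/sha.h>

    static unsigned char pool[SHA256_DIGEST_LENGTH];    /* fixed-size pool state */

    /* Part one, mixing: pool = H(pool || input).  Mixing in new data can
     * never erase what is already there, because the old state is hashed
     * in along with the input. */
    void mix_in(const unsigned char *input, size_t len)
    {
            unsigned char buf[SHA256_DIGEST_LENGTH + len];
            memcpy(buf, pool, sizeof(pool));
            memcpy(buf + sizeof(pool), input, len);
            SHA256(buf, sizeof(buf), pool);
    }

    /* Part two, output: out = H(pool || tag).  The reader only ever sees
     * a digest of the pool, never the pool state itself. */
    void read_out(unsigned char out[SHA256_DIGEST_LENGTH])
    {
            unsigned char buf[SHA256_DIGEST_LENGTH + 1];
            memcpy(buf, pool, sizeof(pool));
            buf[sizeof(pool)] = 'O';                     /* domain separation */
            SHA256(buf, sizeof(buf), out);
    }
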
The pool started with zero entropy (a fixed initial state), and got one bit of entropy added. It can now be in one of two possible states. It goes through the output function, which prevents one reading the output from getting back to the pool state. However, since there were only two possible states (one bit of entropy), you can try both and see which one would generate the output you got... and now the pool state is completely predictable, that is, it now has zero bits of entropy again! By reading from the pool, even with the protection of the output function, you reduced its entropy. Not only that, but there were only two possible outputs, so the output itself had only one bit of entropy, no matter how many bits you had asked for.<br> <p> Now if you read a 32-bit number from a pool with 33 bits of entropy, you can make many guesses and find out a possible pool state. However, again due to the pigeonhole principle, there's on average two possible pool states which will generate the same 32-bit output. Two pool states = one bit. So by reading 32 bits, you reduced the remaining entropy to one bit.<br> <p> This is important because if you can predict the pool state, you can predict what it will output next, which is obviously bad.<br> <p> ----<br> <p> Now, why isn't the situation that bad in practice? First, the input entropy counter tends to underestimate the entropy being added (by design, since it's better to underestimate than to overestimate). Second, "by guessing" can take a long enough time to be impractical. Suppose you have a 1024-bit pool, and read 1023 bits from it. In theory, the pool state should be almost completely predictable: there should be only two possible states. In practice, you would have to do more than 2^1000 guesses (a ridiculously large number) before you could actually make it that predictable.<br> <p> However, that only applies after the pool got unpredictable enough. That's why the new getrandom() system call (which you should use instead of reading /dev/random or /dev/urandom) will always block (or return failure in non-blocking mode) until the pool has gotten enough entropy at least once.<br> </div> Fri, 01 May 2015 13:52:36 +0000 Random numbers from CPU execution time jitter https://lwn.net/Articles/642825/ https://lwn.net/Articles/642825/ boog <div class="FormattedComment"> But can you trust black-box HW generators not to have been influenced by the NSA or the Chinese government via their manufacturers?<br> </div> Fri, 01 May 2015 11:35:48 +0000 Actually still confused https://lwn.net/Articles/642817/ https://lwn.net/Articles/642817/ tpo <div class="FormattedComment"> So I've been reading that paper [1] and think it is neither particularly clear nor precise. But my opinions about that paper are besides the issue that I'm interested in, which is how the random generator actually works.<br> <p> The one thing that has become clearer to me - thank you! - is that there exists a mechanism to add input data to the entropy pool, which has the property of not reducing the existing entropy in the pool no matter what the entropy quality of the new input data is. I've not verified that claim, but assume it true, it being a long standing mathematical finding. That's good news to me.<br> <p> However you write:<br> <p> <font class="QuotedText">&gt; There is an entropy pool of data that is filled immediately and always stays full. Over time, this pool has new random data from a variety of sources mixed into it. 
As data is mixed in, the kernel estimates how much entropy it thinks is now in the pool and sets a counter appropriately. In the background, there is a kernel thread that checks a different output pool. If the pool isn't full, f(epool) is run to populate the output pool.</font><br> <p> I think the contentious claim here is "the entropy pool ... always stays full". If you mean "stays full" in the sense of "a stack that never gets an element popped out from it" then I agree with that, since the pool is a fixed-size structure that, even if it were "empty", still contains "something", even if it's "only all zeros". However, that is not what is relevant in this discussion. The relevant thing is that by generating random data from that pool you transfer entropy out of the entropy pool. I quote the paper:<br> <p> "When k bytes need to be generated, ... k output bytes are generated <br> from this pool and the entropy counter is decreased by k bytes."<br> <p> Thus if we measure "fullness" by the metric relevant here -- the "amount of entropy" contained in the entropy pool -- then the pool is *not* always full and in fact sometimes even empty, as in the case where you have ssh-keygen pulling random data out of /dev/random and blocking because the kernel is unable to refill the entropy pool from its entropy sources.<br> <p> All this said, the above is only my understanding acquired by reading what I have been referred to and what I could find. My understanding may well still be insufficient and wrong. If you've had enough of putting up with an ignoramus like me, then I can fully understand that. Otherwise I'll be happy to hear more and try to improve my understanding of the matter.<br> <p> Thanks,<br> *t<br> <p> [1] <a href="https://eprint.iacr.org/2012/251.pdf">https://eprint.iacr.org/2012/251.pdf</a><br> </div> Fri, 01 May 2015 10:10:31 +0000 Random numbers from CPU execution time jitter https://lwn.net/Articles/642701/ https://lwn.net/Articles/642701/ flussence <div class="FormattedComment"> This sounds conceptually identical to HAVEGE[1], though I suppose there's some good reason for not using that algorithm here.<br> <p> Looking forward to having it by default; my router box has started taking its sweet time to restore connectivity between reboots, because newer hostapd versions seem to be stricter about the state of the /dev/random pool.<br> <p> [1]: <a href="http://www.issihosts.com/haveged/">http://www.issihosts.com/haveged/</a><br> </div> Thu, 30 Apr 2015 20:39:36 +0000 Actually still confused https://lwn.net/Articles/642684/ https://lwn.net/Articles/642684/ HIGHGuY <div class="FormattedComment"> <font class="QuotedText">&gt; ... will block and wait to gather more entropy even after the machine has been</font><br> <font class="QuotedText">&gt; running for months and thus has seen "infinite" amounts of randomness.</font><br> <p> Now in this case you would assume that the entropy pool is a piece of memory with random content that grows as it gathers more entropy.
This would be impractical as it would deplete memory for the "infinite" amount of randomness.<br> <p> Actually, you should think of it as a fixed size memory buffer where feeding data into it is a transformation function ent_new = f(ent_old, new_data).<br> </div> Thu, 30 Apr 2015 18:43:05 +0000 Actually still confused https://lwn.net/Articles/642658/ https://lwn.net/Articles/642658/ fandingo <div class="FormattedComment"> <font class="QuotedText">&gt; Then, once someone pulls data from /dev/random, you apply some function f(R,t) or fn(R,fn-1) that calculates/generates random bytes from the initially gathered seed</font><br> <p> Not exactly and perhaps this can clear up some of the confusion. There is an entropy pool of data that is filled immediately and always stays full. Over time, this pool has new random data from a variety of sources mixed into it. As data is mixed in, the kernel estimates how much entropy it thinks is now in the pool and sets a counter appropriately. In the background, there is a kernel thread that checks a different output pool. If the pool isn't full, f(epool) is run to populate the output pool. <br> <p> Both urandom and random are using the same f() and entropy pool, but they do get individual output pools. The only difference between urandom and random is that the background worker to populate random's output pool will block if the estimated entropy is too low.<br> <p> <p> <font class="QuotedText">&gt; I've checked "entropy" and "entropy pool" on Wikipedia, but either I misunderstand it or Wikipedia is confused when using phrases like "entropy depletion" and similar, which according to what you say is not possible inside an entropy pool.</font><br> <p> Check out this image: <a href="https://i.imgur.com/VIPLO2d.png">https://i.imgur.com/VIPLO2d.png</a> from this paper: <a href="https://eprint.iacr.org/2012/251.pdf">https://eprint.iacr.org/2012/251.pdf</a> *<br> <p> There is not much information and a lot of confusion about PRNG and CSPRNG. Linux implements a CSPRNG, and the CS stands for cryptographically secure. This means that a partial disclosure of PRNG state should not compromise the output both forwards and backwards. That's the heart of why the kernel says it "consumes" estimated entropy as the entropy pool data is used for output. The state must continually incorporate new entropy data and mix the pool, or else partial state disclosures can make outputted data predictable. <br> <p> There are a lot of people that disagree with the blocking nature of /dev/random and how much of the CSPRNG operates. In particular, FreeBSD has a nonblocking /dev/random. They also use the very fast arc4 for their output function. Personally, I prefer the Linux CSPRNG because I like the security considerations, even though they come at a high performance cost. It's better to get high quality and secure random data (that includes urandom) from the kernel, and then feed it in as the key for a very fast stream cipher if that's what you need. (For example, `dd if=/dev/urandom of=/dev/sda` is a terrible misuse. 
Instead, use something like `openssl enc -aes128 -k "shred" &lt; /dev/urandom &gt; /dev/sda`.)<br> <p> * This is an excellent paper that covers the CSPRNG in both an approachable and mathematical methodology.<br> <p> </div> Thu, 30 Apr 2015 18:10:14 +0000 Random numbers from CPU execution time jitter https://lwn.net/Articles/642659/ https://lwn.net/Articles/642659/ xxiao <div class="FormattedComment"> For embedded system we're just choosing cpus with built-in hardware RNG generator, or an external Hardware RNG will do.<br> </div> Thu, 30 Apr 2015 17:02:29 +0000 Random numbers from CPU execution time jitter https://lwn.net/Articles/642648/ https://lwn.net/Articles/642648/ epa <div class="FormattedComment"> Right, you may do some accounting of how much entropy you have or whether the data you have added is 'good'. In that case, you had better be sure of the quality of what you're putting in before labelling it 'good'. However, it remains true (despite, IMHO, what djb wrote) that if you add an additional source of entropy and don't let it affect the accounting of 'good enough', it can at worst do nothing.<br> </div> Thu, 30 Apr 2015 16:08:10 +0000 Actually still confused https://lwn.net/Articles/642643/ https://lwn.net/Articles/642643/ tpo <div class="FormattedComment"> On further searching around I find, that I still do not understand the basic mechanism of an entropy pool.<br> <p> My current mental model of the likes of /dev/random is that one starts with a certain amount of gathered randomness R (the seed). Then, once someone pulls data from /dev/random, you apply some function f(R,t) or fn(R,fn-1) that calculates/generates random bytes from the initially gathered seed either incrementally via reiteration or by including some monotonically increasing input such as a clock.<br> <p> Now, as you explain, I effectively am confused by the fact that let's say ssh-keygen or openssl keygen will block and wait to gather more entropy even after the machine has been running for months and thus has seen "infinite" amounts of randomness. What is the reason to repeatedly start gathering further entropy at that point if as you seem to imply generating an infinite amount of random bytes from the initial seed does not reduce the random quality of future generated random bytes?<br> <p> I've checked "entropy" and "entropy pool" on Wikipedia, but either I misunderstand it or Wikipedia is confused when using phrases like "entropy depletion" and similar, which according to what you say is not possible inside an entropy pool.<br> <p> Is there basic, coherent explanation of the whole mechanism somewhere?<br> *t<br> </div> Thu, 30 Apr 2015 15:51:43 +0000 Random numbers from CPU execution time jitter https://lwn.net/Articles/642635/ https://lwn.net/Articles/642635/ tpo <div class="FormattedComment"> Thanks for enlightening me!<br> *t<br> </div> Thu, 30 Apr 2015 15:26:19 +0000 Random numbers from CPU execution time jitter https://lwn.net/Articles/642589/ https://lwn.net/Articles/642589/ fandingo <div class="FormattedComment"> Your criticism is predicated on a complete change of how entropy pools are used. Randomness is never "sucked out" of an entropy pool. New random data is folded into the existing data, and the overall pool never decreases in size. The output of an entropy pool is transformed data, too, so you're never giving out the seed data (because that would disclose state). <br> <p> (This seems to confuse a lot of people when they look at the blocking behavior of /dev/random. 
The pool never depletes, but a calculation of the quality of the randomness in the pool -- i.e. the entropy -- causes blocking, not a depletion of the actual data.) <br> <p> That's why adding data doesn't hurt. If you have an entropy pool that you trust at t1, folding in a bunch of low-quality data still leaves you with the original t1 randomness.<br> </div> Thu, 30 Apr 2015 15:08:40 +0000 Random numbers from CPU execution time jitter https://lwn.net/Articles/642581/ https://lwn.net/Articles/642581/ tpo <div class="FormattedComment"> <font class="QuotedText">&gt; You may be right but the thing about an entropy pool is that mixing in some new data can never make things worse.</font><br> <p> Your assertion is wrong unless you qualify it more precisely.<br> <p> Let's say you have some entropy pool and add /dev/null as a further source to it. Now, depending on the size of the pipe that sucks randomness out of that pool it might be that the pool is empty - except for /dev/null.<br> <p> So if instead of blocking you now continue to feed the pipe from /dev/null, then randomness disappears into complete determinism.<br> <p> So I think you have to explain how the output of your entropy pool is actually mixed before asserting that "it never can make things worse". <br> </div> Thu, 30 Apr 2015 14:26:15 +0000 Random numbers from CPU execution time jitter https://lwn.net/Articles/642582/ https://lwn.net/Articles/642582/ dgm <div class="FormattedComment"> At 1% it means waiting for 12,800 cycles, which is not much.<br> </div> Thu, 30 Apr 2015 14:22:03 +0000 Random numbers from CPU execution time jitter https://lwn.net/Articles/642573/ https://lwn.net/Articles/642573/ dkg saving and reloading a seed also has other potential risks: <ul><li>the non-volatile storage itself may not be in tight control of the processor -- it represents a possible risk for both leakage ("i know your seed") and tampering ("i can force your seed to be whatever i want") </li> <li> if the saved seed is somehow (accidentally? due to system failure?) reused across multiple boots, and there is no other source of entropy then the boots that share the seed will have the exact same stream of "randomness", potentially leading to symmetric key reuse, predictable values, and all other kinds of nastiness. </li> </ul> It's not that these risks are impossible to avoid, but avoiding them requires thoughtful system engineering, and might not be possible to do generically. The proposed approach in this article (if it is actually measuring real, non-predictable entropy) seems more robust. Thu, 30 Apr 2015 13:56:33 +0000 Random numbers from CPU execution time jitter https://lwn.net/Articles/642556/ https://lwn.net/Articles/642556/ ortalo <div class="FormattedComment"> And I suppose any sane reader has a duty to experience similar reservations, and I certainly share your prudent posture.<br> However, this work starts to be convincing - at least convincing enough to be a serious competitor with other sources of randomness (esp. given the recent increase in doubts with respect to potentially flawed hardware generators...).<br> Plus, the overall thing sounds so absurdly appealing: using the non-determinism of supposedly deterministic processors as fuel for the non-deterministic functions of the system especially to compensate for the potentially malicious determinism of non-deterministic generators. 
It rocks when you say it!<br> <p> Admittedly, this is not a fully reasonable argument, but I wonder if that's not the point at the moment: oppose somehow to what all the "very reasonable people" do in the name of security. A minimum of madness, as a precaution.<br> <p> <p> </div> Thu, 30 Apr 2015 11:39:11 +0000 Random numbers from CPU execution time jitter https://lwn.net/Articles/642535/ https://lwn.net/Articles/642535/ epa <blockquote>But what if z comes from a malicious source that can snoop on x and y?</blockquote> This is an interesting thing to consider but it is not usually that relevant. If my understanding of the article is correct, the assumption is that the attacker <i>cannot</i> snoop on the other entropy sources normally, but can somehow influence the generation of the new entropy source so that it takes into account the others. <p> So you would have to suppose some means of influencing the CPU jitter measurements that requires knowledge of another entropy source, but at the same time suppose that the other entropy source is not normally predictable by an attacker. This seems very far fetched. <p> The article goes on to make another argument: that adding more entropy is simply not needed. Once you have enough (say 256 bits) you can generate all the randomness from that. That may or may not be so, but it doesn't in itself add weight to the claim that adding new entropy sources is actively bad because they may be able to snoop on other sources (in some unspecified magical way) and so end up removing randomness from the result. Thu, 30 Apr 2015 10:09:28 +0000 Random numbers from CPU execution time jitter https://lwn.net/Articles/642528/ https://lwn.net/Articles/642528/ matthias <div class="FormattedComment"> Getting real entropy is a big problem if you want to have cryptography on embedded devices. The way of cracking a key by going all the way down through a RNG is not very practical, but if you do not use enough entropy, then you will e.g. generate RSA keys that share a common factor with other RSA keys produced on similar systems. These keys provide no security at all.<br> <p> The following is just the first reference, I found:<br> <p> <a href="http://arstechnica.com/business/2012/02/15/crypto-shocker-four-of-every-1000-public-keys-provide-no-security/">http://arstechnica.com/business/2012/02/15/crypto-shocker...</a><br> <p> The systems did not have enough real entropy, else these collisions should not occur. And saving and reloading a seed is no help, if these devices need to create cryptographic keys on first boot.<br> </div> Thu, 30 Apr 2015 09:32:50 +0000 Random numbers from CPU execution time jitter https://lwn.net/Articles/642518/ https://lwn.net/Articles/642518/ intgr <div class="FormattedComment"> <font class="QuotedText">&gt; on a well-designed system, we usually saw a variance of less than 1% in boot process timing (measured at the clock-cycle level).</font><br> <p> In terms of percentages, sure, that seems like a small number. But in absolute terms, if you're executing billions of instructions per second, the number of non-deterministic clock cycles in that "less than 1%" is still enormous. 
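As a rough, purely illustrative calculation (assuming a 1 GHz clock and, pessimistically, only one genuinely unpredictable cycle in a hundred):

    0.01 × 1,000,000,000 cycles/s  =  10,000,000 potentially unpredictable cycles per second
    128 bits × 100 cycles/bit      =  12,800 cycles  ≈  13 µs at 1 GHz

(the same 12,800-cycle figure mentioned elsewhere in this thread).
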
In a large sample like yours, such random events will tend to even out, so on a per-instruction basis the non-determinism may be even greater.<br> <p> To sufficiently seed a random number generator, all you need is 128 random bits -- 128 unpredictable clock cycles.<br> <p> </div> Thu, 30 Apr 2015 09:17:55 +0000 Random numbers from CPU execution time jitter https://lwn.net/Articles/642522/ https://lwn.net/Articles/642522/ shmget <div class="FormattedComment"> "The conventional wisdom is that hashing more entropy sources can't hurt: [...]<br> The conventional wisdom says that hash outputs can't be controlled; the conventional wisdom is simply wrong."<br> <p> <a href="http://blog.cr.yp.to/20140205-entropy.html">http://blog.cr.yp.to/20140205-entropy.html</a><br> <p> </div> Thu, 30 Apr 2015 09:11:31 +0000 Random numbers from CPU execution time jitter https://lwn.net/Articles/642519/ https://lwn.net/Articles/642519/ epa <div class="FormattedComment"> You may be right but the thing about an entropy pool is that mixing in some new data can never make things worse. Even if this jitter measurement turns out to be totally and trivially predictable, it will not make the random number generator easier to break than it would be without it. So often you may as well throw together ten different entropy sources. Even if nobody is certain that any individual source can't be predicted, it is unlikely an attacker would be able to predict or control all ten.<br> </div> Thu, 30 Apr 2015 09:00:12 +0000 Random numbers from CPU execution time jitter https://lwn.net/Articles/642499/ https://lwn.net/Articles/642499/ alankila <div class="FormattedComment"> <font class="QuotedText">&gt; It may be that there is some very complex state which is hidden inside the the CPU execution pipeline, the L1 cache, etc., etc. But just because *you* can't figure it out, and just because *I* can't figure it out doesn't mean that it is ipso facto something which a really bright NSA analyst working in Fort Meade can't figure out. (Or heck, a really clever Intel engineer who has full visibility into the internal design of an Intel CPU....)</font><br> <p> This is all perfectly theoretical anyway because it will be very hard to attack a random number generator which gets data in from multiple sources. Saving the seed to disk and merging it into the random pool during the next boot is the most important thing, I think. Any source not perfectly controlled by attacker from the start of time will input at least some unpredictable bits sometimes, and unless the attacker can gain access of the PRNG state, the problem is completely intractable.<br> <p> Since there is no practical usage for "real" entropy, I don't see why Linux bothers with /dev/random.<br> </div> Thu, 30 Apr 2015 07:31:58 +0000 Random numbers from CPU execution time jitter https://lwn.net/Articles/642503/ https://lwn.net/Articles/642503/ alonz FWIW, I share Ts'o's reservations&hellip; <p>I had spent quite a few years designing SoCs and embedded systems based on them, and the system design process actively aims to reduce non-determinism at all levels&#160;&ndash; including, in particular, CPU timings. <p>At least during early boot (before the system communicates with external components) the only sources of timing non-determinism are stray capacitances or environmental heat; on a well-designed system, we usually saw a variance of less than 1% in boot process timing (measured at the clock-cycle level). Thu, 30 Apr 2015 07:25:44 +0000