LWN: Comments on "A system call for random numbers: getrandom()" https://lwn.net/Articles/606141/ This is a special feed containing comments posted to the individual LWN article titled "A system call for random numbers: getrandom()". en-us Sat, 01 Nov 2025 19:29:31 +0000 Sat, 01 Nov 2025 19:29:31 +0000 https://www.rssboard.org/rss-specification lwn@lwn.net A system call for random numbers: getrandom() https://lwn.net/Articles/919399/ https://lwn.net/Articles/919399/ darwi <blockquote><font class="QuotedText"> <p>Linux tries to do this. It initializes the entropy pool very early in the boot process... However, on some system, there just isn't much randomness around... And if you have applications that drain the pool by requesting too much randomness, you can run out, even on good systems. </font></blockquote> This was reported earlier, and a fix was applied by tglx and torvalds. See the earlier LWN articles <a href="https://lwn.net/Articles/800509/">here</a> and <a href="https://lwn.net/Articles/802360/">here</a> for context. Mon, 09 Jan 2023 10:08:03 +0000 A system call for random numbers: getrandom() https://lwn.net/Articles/675159/ https://lwn.net/Articles/675159/ akostadinov <div class="FormattedComment"> Other users of urandom should not cause urandom to become less secure. As some comments pointed out, other users of urandom may even increase urandom entropy (by making its internal state less predictable).<br> <p> A good read on why `random` is not a good idea: <a href="http://www.2uo.de/myths-about-urandom/">http://www.2uo.de/myths-about-urandom/</a><br> </div> Thu, 11 Feb 2016 08:59:00 +0000 A system call for random numbers: getrandom() https://lwn.net/Articles/616053/ https://lwn.net/Articles/616053/ anselm <p> Possibly, but there isn't enough energy in the universe to count up to 2**256. </p> Mon, 13 Oct 2014 18:19:11 +0000 A system call for random numbers: getrandom() https://lwn.net/Articles/616047/ https://lwn.net/Articles/616047/ fuhchee <div class="FormattedComment"> <font class="QuotedText">&gt;&gt; would only be one in 2**32, not one in 2**1048576 as would</font><br> <font class="QuotedText">&gt;&gt; be the case for the same quantity of truly random data.</font><br> <p> <font class="QuotedText">&gt; Right, but neither of those numbers can be counted to by</font><br> <font class="QuotedText">&gt; computers in our universe in its lifetime</font><br> <p> Your cell phone can count to 4 billion in a second or two.<br> </div> Mon, 13 Oct 2014 17:45:38 +0000 A system call for random numbers: getrandom() https://lwn.net/Articles/607285/ https://lwn.net/Articles/607285/ raven667 <div class="FormattedComment"> I'm not sure that /dev/random has "better" or more "real" random numbers than /dev/urandom; once /dev/urandom is fully seeded and initialized, it is as good as anything out there. Maybe the only real use case for /dev/random is seeding your own PRNG in userspace; if you are just consuming randomness for cryptographic purposes then /dev/urandom is what you want.<br> </div> Thu, 31 Jul 2014 16:13:13 +0000 A system call for random numbers: getrandom() https://lwn.net/Articles/607217/ https://lwn.net/Articles/607217/ eternaleye <div class="FormattedComment"> There's also the fact that the kernel's random number generator is intended to provide cryptographic randomness; this is considerably more stringent (and slower, and more computationally expensive) than the statistical randomness needed for Monte Carlo &amp;co.
So it's just plain less useful than alternatives like WELL[1] or xorshift+[2].<br> <p> In addition, it depletes the scarce entropy resources of the kernel by the truckload, which may cause things that _really_ need good cryptographic randomness (long-term public keys, etc) to block indefinitely on /dev/random (since while urandom doesn't block, it _depletes the same pool_, causing random to block).<br> <p> [1] <a rel="nofollow" href="https://en.wikipedia.org/wiki/Well_Equidistributed_Long-period_Linear">https://en.wikipedia.org/wiki/Well_Equidistributed_Long-p...</a><br> [2] <a rel="nofollow" href="https://en.wikipedia.org/wiki/Xorshift">https://en.wikipedia.org/wiki/Xorshift</a><br> </div> Thu, 31 Jul 2014 07:41:56 +0000 A system call for random numbers: getrandom() https://lwn.net/Articles/607210/ https://lwn.net/Articles/607210/ lordsutch <div class="FormattedComment"> Well, if you're doing Monte Carlo or some other statistical analysis, usually you want the ability to replicate the analysis with a chosen seed and get the same numbers out (as well as being able to change the seed and see if you get the same results). The kernel random facility doesn't give you the ability to do that; indeed, it's designed to make it very, very hard to get the RNG in the exact same state twice.<br> </div> Thu, 31 Jul 2014 05:29:22 +0000 A system call for random numbers: getrandom() https://lwn.net/Articles/606930/ https://lwn.net/Articles/606930/ jimparis <blockquote>As a practical matter, I think it's obvious in this case that "refuse to proceed" should just mean "return -1" when the open fails, which would ultimately cause LibreSSL to return failure to the user instead of creating a connection.</blockquote> This has nothing to do with "creating a connection"; existing code calls RAND_bytes() all the time for all sorts of things and <a href="https://searchcode.com/?q=RAND_bytes">doesn't always check the return code</a>. <blockquote>I'm really just asking why a developer would single out this one particular catastrophic failure for heroic action to avoid it.</blockquote> Because this is only a problem on Linux. Because the discussion was triggered by an article entitled <a href="https://www.agwa.name/blog/post/libressls_prng_is_unsafe_on_linux">LibreSSL's PRNG is Unsafe on Linux</a>. Because, as a developer points out in the comments there, "we really want to see linux provide the getentropy() syscall, which fixes all the mentioned issues." Mon, 28 Jul 2014 23:13:55 +0000 A system call for random numbers: getrandom() https://lwn.net/Articles/606924/ https://lwn.net/Articles/606924/ giraffedata <blockquote> But what does "refuse to proceed" mean? Return an easily-ignored error code? Terminate the process? Sit in a busy loop? You'll get different answers based on who you ask. </blockquote> <p> It really doesn't matter that there are options, because at least one of them is an entirely reasonable response to a catastrophic failure such as file descriptor exhaustion - a more reasonable response than designing a new kernel interface or computing entropy some other way. As a practical matter, I think it's obvious in this case that "refuse to proceed" should just mean "return -1" when the open fails, which would ultimately cause LibreSSL to return failure to the user instead of creating a connection. The user can ignore that failure, but there's no way he can leak private information to an eavesdropper over a connection that does not exist.
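<p> To make that concrete, here is a minimal sketch of the "return -1 when the open fails" behavior. This is illustrative only, not LibreSSL's actual code; the function name is a hypothetical stand-in:
<pre>
#include &lt;fcntl.h&gt;
#include &lt;stddef.h&gt;
#include &lt;unistd.h&gt;

/* Illustrative sketch (not LibreSSL's real code): fill buf with
 * random bytes from /dev/urandom, or fail cleanly. */
int get_random_bytes_or_fail(void *buf, size_t len)
{
    size_t done = 0;
    int fd = open("/dev/urandom", O_RDONLY);

    if (fd == -1)
        return -1;     /* fds exhausted, chroot, etc.: refuse to proceed */

    while (done &lt; len) {
        ssize_t n = read(fd, (char *)buf + done, len - done);
        if (n &lt;= 0) {
            close(fd);
            return -1; /* failed or truncated read: refuse to proceed */
        }
        done += (size_t)n;
    }
    close(fd);
    return 0;
}
</pre>
The caller sees -1 exactly as it would for any other catastrophic failure, and no connection is ever created with a weak key.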
<p> <blockquote> Making it so that the problem can never occur is just another way of fixing it. </blockquote> <p> I'm really just asking why a developer would single out this one particular catastrophic failure for heroic action to avoid it. I'll bet the same code allocates memory in various places and just "refuses to proceed" if the allocation fails. And at some point it creates a socket, and likely just "refuses to proceed" if that fails because of file descriptor exhaustion. Maybe it even uses a temporary file somewhere, and just "refuses to proceed" if the filesystem is full. Mon, 28 Jul 2014 22:50:08 +0000 A system call for random numbers: getrandom() https://lwn.net/Articles/606921/ https://lwn.net/Articles/606921/ nybble41 <div class="FormattedComment"> The parent post was just giving example numbers. Those 32 bits would indeed be a fairly small seed for something like /dev/urandom, though it was the standard size for the C library's PRNG seed on 32-bit systems. (Hopefully no one was relying on rand() for anything security-related.)<br> <p> On the other hand, if you seed /dev/urandom with 256 bits, but all but 32 of those bits are predictable to an attacker, you might as well be using a mere 32-bit seed... some entropy-starved embedded systems may be in this situation shortly after startup.<br> </div> Mon, 28 Jul 2014 22:20:25 +0000 A system call for random numbers: getrandom() https://lwn.net/Articles/606886/ https://lwn.net/Articles/606886/ apoelstra <div class="FormattedComment"> <font class="QuotedText">&gt;I didn't know that the PRNG was considered successfully seeded with only 32 bits</font><br> <p> It's not :); unless the parent post was just giving example numbers, he meant to say "32 bytes" or 256 bits.<br> </div> Mon, 28 Jul 2014 15:36:58 +0000 A system call for random numbers: getrandom() https://lwn.net/Articles/606797/ https://lwn.net/Articles/606797/ raven667 <div class="FormattedComment"> I'm a bit of a layman; I didn't know that the PRNG was considered successfully seeded with only 32 bits. That seems awfully low; 256 bits sounds like a more reasonable number. It seems to me that it would be doable for a well-financed organization to run the PRNG algorithm through every possible 32-bit seed value for a couple of megabytes of output at least. System startup isn't exposed to that many random variables, so it wouldn't surprise me if randomness taken from IRQ/IO timings and whatnot were clustered and not white noise. There are enough different hardware/software combinations out there that this might not matter in a practical sense, but your 32 bits of entropy is really something slightly smaller.<br> <p> Over time, as new randomness is folded in and the offset gets larger, I would have confidence that the state would be too random to predict, but anything that uses the PRNG output shortly after it is initially set up could be using predictable values.
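<p> For a sense of scale: running a PRNG through every possible 32-bit seed is just a loop over 2**32 candidates. A self-contained sketch, using a deliberately weak stand-in generator (a plain LCG, not the kernel's actual PRNG) so the loop has something concrete to attack:
<pre>
#include &lt;stdint.h&gt;
#include &lt;stddef.h&gt;

/* Weak stand-in generator, here only to make the sketch
 * self-contained; the same loop applies to any PRNG whose
 * entire state comes from a 32-bit seed. */
static uint32_t state;
static void prng_seed(uint32_t seed) { state = seed; }
static uint8_t prng_next_byte(void)
{
    state = state * 1664525u + 1013904223u;  /* Numerical Recipes LCG */
    return (uint8_t)(state &gt;&gt; 24);
}

/* Given a few bytes of observed output, try every possible
 * 32-bit seed until one reproduces them. Returns 1 and stores
 * the seed in *found on success, 0 otherwise. */
static int brute_force_seed(const uint8_t *observed, size_t len,
                            uint32_t *found)
{
    for (uint64_t seed = 0; seed &lt;= UINT32_MAX; seed++) {
        prng_seed((uint32_t)seed);
        size_t i;
        for (i = 0; i &lt; len; i++)
            if (prng_next_byte() != observed[i])
                break;
        if (i == len) {
            *found = (uint32_t)seed;
            return 1;
        }
    }
    return 0;
}
</pre>
On commodity hardware a loop like this finishes within hours at most, which is why 32 bits of seed entropy is far too little.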
This would seem to be of concern to users of randomness early in the boot process, ssh key generation being the most obvious, but there are other things which use randomness.<br> <p> I would presume that the people who actually fully understand this stuff have thought about all of these things and are way ahead of a layman such as myself in mitigating these issues.<br> </div> Mon, 28 Jul 2014 00:13:21 +0000 A system call for random numbers: getrandom() https://lwn.net/Articles/606795/ https://lwn.net/Articles/606795/ nybble41 <div class="FormattedComment"> I don't think you're actually disagreeing with me.<br> <p> If you don't control the offset, then yes, that contributes somewhat to the amount of entropy introduced into the PRNG. For example, if there could have been up to 1 MiB read from the PRNG in one-byte increments after it was seeded with 32 random bits but before you read your data, then that introduces at most 20 additional bits of entropy. You would have to search through a 52-bit space--32 bits of seed plus 20 bits of offset--to find a match for your data and determine the PRNG's internal state with a high degree of probability.<br> <p> I say "at most 20 bits" because it would be unreasonable to assume that the possible offsets are uniformly distributed from zero to 1 MiB; some sizes will be more likely than others, reducing the search space.<br> <p> On the other hand, if you fully randomized the PRNG's internal state, then any additional offset past that would contribute no additional entropy. Instead of searching the larger seed + offset space, you'd just search the PRNG's state space directly. If, that is, it were at all practical to brute-force search a 256-bit space.<br> </div> Sun, 27 Jul 2014 23:12:57 +0000 A system call for random numbers: getrandom() https://lwn.net/Articles/606784/ https://lwn.net/Articles/606784/ jimparis <div class="FormattedComment"> <font class="QuotedText">&gt; it's impossible for it to achieve its goal of creating weakness in a cryptographic step if LibreSSL refuses to proceed when the open of /dev/urandom fails.</font><br> <p> But what does "refuse to proceed" mean? Return an easily-ignored error code? Terminate the process? Sit in a busy loop? You'll get different answers based on who you ask. I generally agree with your point, but it's not as simple as you make it out to be. Making it so that the problem can never occur is just another way of fixing it.<br> </div> Sun, 27 Jul 2014 17:49:38 +0000 A system call for random numbers: getrandom() https://lwn.net/Articles/606780/ https://lwn.net/Articles/606780/ giraffedata <blockquote> Is it that hard to create a side program that uses some technique to force the exhaustion of fds during the entropy gathering (to create some weakness in a cryptographic step) and then stops, leaving the attacked programs with plenty of fds, as if nothing ever happened? </blockquote> <p> It doesn't matter because even if it's possible to create such a program, it's impossible for it to achieve its goal of creating weakness in a cryptographic step if LibreSSL refuses to proceed when the open of /dev/urandom fails. <p> That's what we've been talking about: the design choice of LibreSSL refusing to proceed in that case (the easy, natural, conventional thing to do) versus getting random numbers in some way that doesn't require file descriptors (which involves wishing for a new kind of system call) and proceeding.
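<p> The "new kind of system call" being wished for here is exactly what getrandom() became; it involves no file descriptor at all. A minimal usage sketch, assuming the glibc wrapper (available since glibc 2.25):
<pre>
#include &lt;stdio.h&gt;
#include &lt;sys/random.h&gt;
#include &lt;sys/types.h&gt;

int main(void)
{
    unsigned char buf[32];

    /* With flags == 0 this blocks only until the kernel's pool is
     * initialized; no file descriptor is involved at any point. */
    ssize_t n = getrandom(buf, sizeof(buf), 0);
    if (n != (ssize_t)sizeof(buf)) {
        perror("getrandom");
        return 1;
    }
    for (size_t i = 0; i &lt; sizeof(buf); i++)
        printf("%02x", buf[i]);
    putchar('\n');
    return 0;
}
</pre>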
Sun, 27 Jul 2014 16:11:43 +0000 A system call for random numbers: getrandom() https://lwn.net/Articles/606771/ https://lwn.net/Articles/606771/ gioele <div class="FormattedComment"> <font class="QuotedText">&gt; It's more like the developers were really confused, thinking it's worth adding a whole new system call to the kernel just to make a program progress a little further before succumbing to file descriptor exhaustion.</font><br> <p> Is it that hard to create a side program that uses some technique to force the exhaustion of fds during the entropy gathering (to create some weakness in a cryptographic step) and then stops, leaving the attacked programs with plenty of fds, as if nothing ever happened?<br> </div> Sun, 27 Jul 2014 11:57:34 +0000 A system call for random numbers: getrandom() https://lwn.net/Articles/606764/ https://lwn.net/Articles/606764/ dlang <div class="FormattedComment"> I think your assumptions are incorrect.<br> <p> As I understand it (vastly simplified, and with the numbers kept small for example's sake):<br> <p> You take 32 bits of random data; it gets mixed and seeds the PRNG, but the PRNG has its own state pool.<br> <p> This state pool starts off with the 32 bits of random data, but is much larger (say 256 bits).<br> <p> Each time data is read from the PRNG, it calculates some random data. Some of this random data is fed to the user; the rest replaces the existing pool.<br> <p> From 32 bits of random data you can generate many TiB of output, and that output cannot be identified as non-random by any analysis. Yes, at some point it could repeat, but nobody can predict when that is, even if they have the contents of the pool.<br> <p> So the offset into the stream can be much larger than the randomness used to initialize the pool in the first place.<br> <p> If you are the only user of the PRNG, the offset into the stream is a known value to you and adds no randomness.<br> <p> But if there are other users of the PRNG output, then that adds to the randomness of the bits you read from the PRNG.<br> </div> Sun, 27 Jul 2014 04:02:50 +0000 A system call for random numbers: getrandom() https://lwn.net/Articles/606763/ https://lwn.net/Articles/606763/ nybble41 <div class="FormattedComment"> <font class="QuotedText">&gt; you would also need to guess the correct offset into the resulting stream </font><br> <p> A fair point, assuming that the PRNG has an internal state larger than the seed. One might alternatively consider that offset to be part of the seed. I was assuming that you generated the output after preparing the PRNG with at most 32 bits of entropy *in total*.<br> </div> Sun, 27 Jul 2014 03:52:45 +0000 A system call for random numbers: getrandom() https://lwn.net/Articles/606759/ https://lwn.net/Articles/606759/ dlang <div class="FormattedComment"> <font class="QuotedText">&gt; It's more like the developers were really confused, thinking it's worth adding a whole new system call to the kernel just to make a program progress a little further before succumbing to file descriptor exhaustion.</font><br> <p> well, that sort of thinking is par for the course for people who get tightly absorbed into security thinking.
They start to see the small things that can fail and forget that the overall system is probably going to be down first.<br> </div> Sat, 26 Jul 2014 21:18:34 +0000 A system call for random numbers: getrandom() https://lwn.net/Articles/606758/ https://lwn.net/Articles/606758/ giraffedata <blockquote> so, this comment that was quoted in the article: <blockquote> or consider providing a new failsafe API which works in a chroot or when file descriptors are exhausted. </blockquote> (which comes from the LibreSSL source) was not enough to convince you that the LibreSSL folks (at least) are worried about file descriptor exhaustion? </blockquote> <p> OK, I missed that. So the article is not mistaken. It's more like the developers were really confused, thinking it's worth adding a whole new system call to the kernel just to make a program progress a little further before succumbing to file descriptor exhaustion. Or there's some totally nonobvious attack vector I'm missing. <p> (I do understand that there are other, sensible, reasons to have getrandom()). Sat, 26 Jul 2014 15:55:46 +0000 A system call for random numbers: getrandom() https://lwn.net/Articles/606747/ https://lwn.net/Articles/606747/ jake <div class="FormattedComment"> <font class="QuotedText">&gt; It looks to me like the article is simply mistaken about the </font><br> <font class="QuotedText">&gt; relevance of file descriptor exhaustion attacks.</font><br> <p> so, this comment that was quoted in the article:<br> <p> <font class="QuotedText">&gt; or consider providing a new failsafe API which</font><br> <font class="QuotedText">&gt; works in a chroot or when file descriptors are exhausted.</font><br> <p> (which comes from the LibreSSL source) was not enough to convince you that the LibreSSL folks (at least) are worried about file descriptor exhaustion?<br> <p> <font class="QuotedText">&gt; I think the reason LibreSSL has alternatives to /dev/urandom is </font><br> <font class="QuotedText">&gt; that /dev/urandom might just be broken or not implemented on that </font><br> <font class="QuotedText">&gt; system.</font><br> <p> interesting, but it certainly isn't what they *say* ...<br> <p> jake<br> </div> Sat, 26 Jul 2014 04:03:34 +0000 A system call for random numbers: getrandom() https://lwn.net/Articles/606745/ https://lwn.net/Articles/606745/ wahern <div class="FormattedComment"> "As I see it, djb is arguing that mixing in maybe-not-so-random data from a source which can be controlled by an attacker can be worse than using a PRNG pre-seeded with higher quality, less controllable inputs."<br> <p> The argument is more nuanced than that. He's actually addressing the anxiety around hardware-based RNGs like those on recent Intel chips. Those sources have privileged access to the existing RNG state in the kernel because they can access main memory directly. It's possible that they could smuggle data out of the system by carefully choosing the random numbers they generate. Then people like the NSA sniff carrier signals, such as TCP sequence numbers.<br> <p> "However, where he relies on the line that 'we can figure out how to use a single key to safely encrypt many messages'... that has been a problem for various cryptosystems in the past. If you're not careful, someone with access to enough ciphertexts may be able to infer the key used to encrypt them, particularly if they also know the corresponding plaintexts."<br> <p> His argument is premised on (1) CSPRNGs and (2) secure sources of entropy. We _definitely_ have #1.
The problems with cryptosystems are higher up the ladder, almost always PEBKAC-related.<br> <p> We have multiple sources for #2, but we shouldn't trust them. But we can mix them together. However, _continued_ mixing could make you more susceptible to impossible-to-detect exfiltration attacks, so you should mix them until you're satisfied, then never interact with those sources again. Sort of a "wham, bam, thank you ma'am" relationship.<br> <p> The real problem is knowing when you've collected sufficient entropy. You need enough, but as DJB shows, collecting too much could expose you to new forms of attack. Probably the best answer is to initially seed with hardware-based solutions like Intel RdRand, then mix in low-quality sources until you're satisfied that you've sufficiently closed the exfiltration gap. After that, you leave well enough alone. On networked systems we're talking a matter of seconds, or minutes at most.<br> <p> </div> Sat, 26 Jul 2014 03:43:08 +0000 A system call for random numbers: getrandom() https://lwn.net/Articles/606743/ https://lwn.net/Articles/606743/ idupree <div class="FormattedComment"> Does getrandom(buf, 0, flags) block/EAGAIN if the requested kind of entropy is unavailable? Or does it succeed? If the former, getrandom(buf, 0, GRND_NONBLOCK) could be a way to find out if the urandom pool is uninitialized.<br> <p> Why "It should not be used for Monte Carlo simulations or other programs/algorithms which are doing probabilistic sampling." (in the patch's man page)? I'd like to see the man page say why. According to <a href="http://thread.gmane.org/gmane.linux.kernel.cryptoapi/11666">http://thread.gmane.org/gmane.linux.kernel.cryptoapi/11666</a> the reason is: "It will be slow, and then the graduate student will whine and complain and send a bug report. It will cause urandom to pull more heavily on entropy, and if that means that you are using some kind of hardware random generator on a laptop, such as tpm-rng, you will burn more battery, but no, it will not break. This is why the man page says SHOULD not, and not MUST not. :-)"<br> </div> Sat, 26 Jul 2014 02:23:45 +0000 A system call for random numbers: getrandom() https://lwn.net/Articles/606741/ https://lwn.net/Articles/606741/ giraffedata <blockquote> if the program zeros a buffer, then tries to read random data into that buffer and doesn't check the error codes properly, the result is that it continues on with zeros instead of its random seed. ... in practice, we all know that such checks are not always done. </blockquote> <p> So that still doesn't shed any light on how the fact that file descriptors could be exhausted means LibreSSL needs a fallback method of generating random numbers. LibreSSL <em>does</em> check the error condition -- that's how it knows to fall back. <p> <blockquote> Also, note that shutting down the service is a DoS that is also to the advantage of the bad guy </blockquote> <p> And yet, no other program under the sun avoids DoS attacks by working around inability to open files. In fact, the program using LibreSSL most probably uses files other than /dev/urandom, so the bad guy can kill it by exhausting file descriptors regardless of what LibreSSL does. <p> It looks to me like the article is simply mistaken about the relevance of file descriptor exhaustion attacks. I think the reason LibreSSL has alternatives to /dev/urandom is that /dev/urandom might just be broken or not implemented on that system.
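<p> On the getrandom(buf, 0, GRND_NONBLOCK) question above: in the getrandom() that was eventually merged, the pool-readiness check happens before the request length is looked at, so the answer appears to be "the former" and the probe works. A sketch, assuming the glibc wrapper and the merged syscall's semantics (the original patch may have behaved differently, which was exactly the question):
<pre>
#include &lt;errno.h&gt;
#include &lt;sys/random.h&gt;

/* Probe whether the urandom pool is initialized, without blocking
 * and without consuming any output. Returns 1 if ready, 0 if not
 * yet ready, -1 on unexpected error. */
int urandom_ready(void)
{
    char dummy;

    if (getrandom(&amp;dummy, 0, GRND_NONBLOCK) == 0)
        return 1;                       /* pool initialized */
    return (errno == EAGAIN) ? 0 : -1;  /* 0: not ready yet */
}
</pre>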
Sat, 26 Jul 2014 01:42:59 +0000 A system call for random numbers: getrandom() https://lwn.net/Articles/606739/ https://lwn.net/Articles/606739/ apoelstra <div class="FormattedComment"> <font class="QuotedText">&gt; However, where he relies on the line that "we can figure out how to use a single key to safely encrypt many messages"... that has been a problem for various cryptosystems in the past. If you're not careful, someone with access to enough ciphertexts may be able to infer the key used to encrypt them, particularly if they also know the corresponding plaintexts.</font><br> <p> In the literature this sort of thing is called a "chosen plaintext attack", and any public-key cryptosystem requires a mathematical proof demonstrating that a successful CPA attack can be harnessed to solve some "hard" computational problem, e.g. the discrete logarithm problem for an elliptic curve group.<br> <p> Are these mathematical proofs worth anything? After all, they don't consider side-channel attacks or implementation bugs or compromised RNGs (except to assume them away, typically), and sometimes the proofs themselves are incorrect. This is a point of great controversy, but the fact is that cryptography as an academic discipline has moved beyond the "well, try not to let the attacker get -too- much information" kind of magical thinking that was typical for pre-1970's cryptography.<br> <p> If your encryption primitive is not CPA-secure (at least CPA-secure --- systems in use today typically have stronger security properties), then its security depends, at best, on the exact way it is used. It is hard enough to build cryptosystems when your primitives are secure against these very general attacks. Without that, you are hopeless!<br> <p> The security requirement for PRNGs, by the way, is that a computationally bounded adversary (i.e. one who is able to do polynomially many operations in the size of the seed) cannot distinguish the PRNG output from random with non-negligible probability. If your PRNG fails this requirement, it is not cryptographically secure and no amount of seed-guarding will change this. If it doesn't fail this, then a 256-bit seed is fine.<br> <p> By contrast, the attack djb describes, where malicious entropy is inserted into whatever channels exist for this, is not only possible for attackers today, but is generally applicable: it will work no matter what the PRNG algorithm!<br> <p> <font class="QuotedText">&gt; In any case, my main point was simply that PRNGs don't create any randomness beyond whatever may have been in their initial seed. Seed a PRNG with 32 bits and generate 1 MiB of "random" data from it, and you still only have at most 32 bits of entropy--the probability of guessing the output (knowing which PRNG was used) would only be one in 2**32, not one in 2**1048576 as would be the case for the same quantity of truly random data.</font><br> <p> Right, but neither of those numbers can be counted to by computers in our universe in its lifetime, so the distinction is not important from a security perspective. (If you are defending against a computationally unbounded adversary, your RNG does not matter since your other cryptographic primitives are not secure anyway.)
This is what djb is saying when he argues that if 256 bits is enough security for a signing key, it's enough for a PRNG.<br> <p> So if there's no benefit to "increasing the entropy" in this way, and it opens up a trivial algorithm-agnostic attack to any attacker who can influence the entropy source in any way... it's a bad idea.<br> <p> </div> Sat, 26 Jul 2014 01:33:16 +0000 A system call for random numbers: getrandom() https://lwn.net/Articles/606737/ https://lwn.net/Articles/606737/ dlang <div class="FormattedComment"> Figuring out the length of a PRNG cycle, where you are in it, and the seed needed to create it is exactly the problem of figuring out the state of the pool from the output; that is theoretically possible, but everyone practical (including djb) dismisses it as an impractical attack, something that nobody has ever come close to showing even in the most specific case, let alone for the general case where some of the output is deliberately thrown away to make it harder for the attacker.<br> <p> In other words, in theory it's a weakness of the PRNG and a reason not to use it, but in practice, avoiding a PRNG for this reason is pure paranoia.<br> </div> Sat, 26 Jul 2014 00:35:14 +0000 A system call for random numbers: getrandom() https://lwn.net/Articles/606735/ https://lwn.net/Articles/606735/ raven667 <div class="FormattedComment"> I must have grossly misunderstood your original point, because I think we are largely in agreement here. I guess the only thing on which I think differently is that I'm not sure that you are _losing_ entropy as you use your PRNG; it's that you have a _fixed_ amount of entropy. Well, not exactly fixed, because as pointed out elsewhere the position in the PRNG output is also a bit of information. Given a sufficient amount of seed entropy you can _recycle_ it for a very long time.<br> <p> I suppose the question really is: how long can you recycle the same initial hardware randomness input in a PRNG before an attacker could figure something out? That's kind of like figuring out how long your private keys need to be to be resilient against attack for a particular period of time. I have no idea how the math works out on that, though.<br> </div> Sat, 26 Jul 2014 00:19:14 +0000 A system call for random numbers: getrandom() https://lwn.net/Articles/606731/ https://lwn.net/Articles/606731/ dlang <div class="FormattedComment"> <font class="QuotedText">&gt; the probability of guessing the output (knowing which PRNG was used) would only be one in 2**32, not one in 2**1048576 as would be the case for the same quantity of truly random data.</font><br> <p> Not true: you would not only need to guess the correct 2**32 seed, you would also need to guess the correct offset into the resulting stream that the 1 MiB of data was pulled from. That adds some additional bits of randomness (but still far less than the 2**1048576 you would get if every bit were random).<br> </div> Fri, 25 Jul 2014 23:43:14 +0000 A system call for random numbers: getrandom() https://lwn.net/Articles/606730/ https://lwn.net/Articles/606730/ dlang <div class="FormattedComment"> <font class="QuotedText">&gt; How would exhausting file descriptors get some software to pick a bad random number? The natural result of that would be for software that uses random numbers to refuse to continue.
</font><br> <p> If the program zeros a buffer, then tries to read random data into that buffer and doesn't check the error codes properly, the result is that it continues on with zeros instead of its random seed.<br> <p> This is an advantage for the bad guy.<br> <p> Yes, in theory this is handled by properly checking all error conditions.<br> <p> But in practice, we all know that such checks are not always done.<br> <p> Also, note that shutting down the service is a DoS that is also to the advantage of the bad guy.<br> </div> Fri, 25 Jul 2014 23:41:37 +0000 A system call for random numbers: getrandom() https://lwn.net/Articles/606728/ https://lwn.net/Articles/606728/ nybble41 <div class="FormattedComment"> That actually isn't the point at all. Yes, you want to filter and whiten your raw random source, but the PRNG doesn't introduce any new randomness; your entropy is limited to what you extracted from the original random source. The PRNG just spreads the original entropy around; the more output you generate from a given amount of random input, the less actual entropy you have per bit. As a rule, the filtering and whitening steps *reduce* the bitrate relative to the original source. You're trying to concentrate the entropy (ideally achieving one bit of entropy for each output bit), not dilute it.<br> <p> Of course, the practical difference between an ideal PRNG with 256+ bits of internal state, seeded with an equivalent amount of entropy, and a true random number source is vanishingly small. The risk is that your PRNG isn't ideal (and is thus vulnerable to cryptanalysis) or your seed doesn't have as much initial entropy as you thought.<br> </div> Fri, 25 Jul 2014 23:10:48 +0000 A system call for random numbers: getrandom() https://lwn.net/Articles/606727/ https://lwn.net/Articles/606727/ nybble41 <div class="FormattedComment"> As I see it, djb is arguing that mixing in maybe-not-so-random data from a source which can be controlled by an attacker can be worse than using a PRNG pre-seeded with higher quality, less controllable inputs. I have no objection to this point. He also argues, in effect, that reverse-engineering the PRNG state from the output is not a practical sort of attack to be worrying about. I agree, and said as much before--to my knowledge, this sort of attack has never succeeded against a well-seeded modern PRNG.<br> <p> However, where he relies on the line that "we can figure out how to use a single key to safely encrypt many messages"... that has been a problem for various cryptosystems in the past. If you're not careful, someone with access to enough ciphertexts may be able to infer the key used to encrypt them, particularly if they also know the corresponding plaintexts.<br> <p> In any case, my main point was simply that PRNGs don't create any randomness beyond whatever may have been in their initial seed.
Seed a PRNG with 32 bits and generate 1 MiB of "random" data from it, and you still only have at most 32 bits of entropy--the probability of guessing the output (knowing which PRNG was used) would only be one in 2**32, not one in 2**1048576 as would be the case for the same quantity of truly random data.<br> </div> Fri, 25 Jul 2014 22:54:16 +0000 A system call for random numbers: getrandom() https://lwn.net/Articles/606725/ https://lwn.net/Articles/606725/ giraffedata <blockquote> An attacker might exhaust file descriptors maliciously, just to get some software to pick a bad random number, </blockquote> <p> How would exhausting file descriptors get some software to pick a bad random number? The natural result of that would be for software that uses random numbers to refuse to continue. <p> But regardless of whether it's a valid expectation of the attacker, it doesn't explain why LibreSSL needs to have a fallback other than "return -1" for exhausted file descriptors. No other software does. Fri, 25 Jul 2014 22:13:35 +0000 A system call for random numbers: getrandom() https://lwn.net/Articles/606726/ https://lwn.net/Articles/606726/ apoelstra <div class="FormattedComment"> djb argues quite forcefully against this conventional wisdom: <a href="http://blog.cr.yp.to/20140205-entropy.html">http://blog.cr.yp.to/20140205-entropy.html</a><br> </div> Fri, 25 Jul 2014 22:06:16 +0000 A system call for random numbers: getrandom() https://lwn.net/Articles/606720/ https://lwn.net/Articles/606720/ jimparis <div class="FormattedComment"> <font class="QuotedText">&gt; Is exhaustion of file descriptors really an example of what this system call is intended to deal with? When a system runs out of file descriptors or any other system resource, all Hell breaks loose, and one more program failing because it can't establish a secure connection should be barely noticeable. </font><br> <font class="QuotedText">&gt; I can't recall ever seeing code that goes out of its way to work around being generally unable to open files.</font><br> <p> An attacker might exhaust file descriptors maliciously, just to get some software to pick a bad random number, which could end up leaking a private key from a privileged process. The attacker would be careful in this case to try to cause the random number seeding to fail, while allowing the program to otherwise continue correctly.<br> <p> </div> Fri, 25 Jul 2014 20:15:48 +0000 A system call for random numbers: getrandom() https://lwn.net/Articles/606717/ https://lwn.net/Articles/606717/ raven667 <div class="FormattedComment"> But that's the point: the way to get good random values for cryptography is to feed a randomness source into a PRNG and then use the output of the PRNG. The PRNG doesn't run out, and you only need the source of randomness to get enough entropy to seed the PRNG. The PRNG has a bunch of safeguards to make sure the output is uniform white noise; real sources of randomness don't have that filter and are often non-uniformly random, so using real random sources directly is often a bad idea.<br> </div> Fri, 25 Jul 2014 20:01:24 +0000 A system call for random numbers: getrandom() https://lwn.net/Articles/606714/ https://lwn.net/Articles/606714/ nybble41 <div class="FormattedComment"> <font class="QuotedText">&gt; /dev/urandom doesn't "run out" of random numbers</font><br> <p> Actually, it does. It doesn't run out of *pseudo*-random numbers, but real random numbers are hard to come by and in limited supply.
Given enough data out of the PRNG relative to the size of the entropy pool, it is possible, at least in theory, to reverse-engineer the PRNG's internal state and predict which numbers it will produce next.<br> <p> So far as I know this attack has never been successful in practice, assuming a properly seeded PRNG. There is some concern when the system is starved for sources of randomness, primarily in embedded devices, since that can drastically reduce the search space.<br> </div> Fri, 25 Jul 2014 19:14:14 +0000 A system call for random numbers: getrandom() https://lwn.net/Articles/606706/ https://lwn.net/Articles/606706/ raven667 <div class="FormattedComment"> I could be wrong, but I don't think that makes sense. /dev/urandom doesn't "run out" of random numbers; once the PRNG is seeded it never runs out, and the entropy never decreases. It does get new entropy added periodically while the system runs, but it's not a failure if there is no new randomness. /dev/random uses some of the same entropy and a PRNG, but has different semantics and can block; if you've seeded your PRNG well in the first place there is little benefit in doing so. Also, the heuristics for determining when you have enough quality entropy are a bit sketchy, so applications are generally steered to /dev/urandom because it works better.<br> </div> Fri, 25 Jul 2014 18:33:11 +0000 A system call for random numbers: getrandom() https://lwn.net/Articles/606702/ https://lwn.net/Articles/606702/ giraffedata Is exhaustion of file descriptors really an example of what this system call is intended to deal with? When a system runs out of file descriptors or any other system resource, all Hell breaks loose, and one more program failing because it can't establish a secure connection should be barely noticeable. <p> I can't recall ever seeing code that goes out of its way to work around being generally unable to open files. Fri, 25 Jul 2014 16:39:40 +0000 A system call for random numbers: getrandom() https://lwn.net/Articles/606665/ https://lwn.net/Articles/606665/ justincormack <div class="FormattedComment"> Or just tell the user they need to wait longer; non-blocking is always useful.<br> </div> Fri, 25 Jul 2014 13:19:43 +0000 A system call for random numbers: getrandom() https://lwn.net/Articles/606410/ https://lwn.net/Articles/606410/ ncm <div class="FormattedComment"> On the contrary: with /dev/urandom, you run out of randomness at the same time as you would have with /dev/random. You just can't tell when it happens, because it delivers bits either way. <br> <p> But you don't really run out, as such. Rather, you get a decreasing amount of entropy with each read. Reads from /dev/random just block when it judges that the entropy it can deliver has been stretched too thin.<br> </div> Fri, 25 Jul 2014 05:58:10 +0000
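<p> A minimal sketch of the seed-once-then-stretch pattern several of the comments above describe: pull one seed from the kernel, then run a fast userspace generator. xorshift128+ is used here as the statistical generator mentioned earlier in the thread; it is fine for Monte Carlo work but is not cryptographically secure, so this is not a substitute for the kernel's pool in security contexts:
<pre>
#include &lt;stdint.h&gt;
#include &lt;stdio.h&gt;
#include &lt;sys/random.h&gt;
#include &lt;sys/types.h&gt;

static uint64_t s[2];   /* xorshift128+ state; must not be all zero,
                         * which a random seed essentially never is */

/* One xorshift128+ step (Vigna's variant with shifts 23/17/26). */
static uint64_t xorshift128plus(void)
{
    uint64_t x = s[0];
    uint64_t const y = s[1];

    s[0] = y;
    x ^= x &lt;&lt; 23;
    s[1] = x ^ y ^ (x &gt;&gt; 17) ^ (y &gt;&gt; 26);
    return s[1] + y;
}

int main(void)
{
    /* Seed once from the kernel CSPRNG, then never touch it again. */
    if (getrandom(s, sizeof(s), 0) != (ssize_t)sizeof(s)) {
        perror("getrandom");
        return 1;
    }
    for (int i = 0; i &lt; 4; i++)
        printf("%016llx\n", (unsigned long long)xorshift128plus());
    return 0;
}
</pre>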