
Waiting for entropy


By Jonathan Corbet
June 6, 2017
Many bytes have been expended over the years discussing the virtues of the kernel's random number generation subsystem. One of the biggest recurring concerns has to do with systems that are unable to obtain sufficient entropy during the boot process to meet early demands for random data. The latest discussion on this topic got off to a bit of a rough start, but it may lead to an incremental improvement in this area.

Jason Donenfeld started the thread with a complaint that /dev/urandom will, when read from user space, return data even if the kernel's internal entropy pool has not yet been properly seeded. In such a case, it is theoretically possible for an attacker to predict the not-so-random data that will be returned. He asserted that /dev/urandom should simply block until the entropy pool is ready, and dismissed the reasoning behind the current behavior: "Yes, yes, you have arguments for why you're keeping this pathological, but you're still wrong, and this api is still a bug."

Bug or not, as Ted Ts'o pointed out, making /dev/urandom block causes distributions like Ubuntu and OpenWrt to fail to boot. That sort of behavioral change is typically called a "regression", and regressions of this sort are not normally allowed. So /dev/urandom will retain its current behavior. But that isn't the point Donenfeld was really trying to address anyway. The real issue, as it turns out, has to do with getting random data from within the kernel instead of from user space. That can be done with a call to:

    void get_random_bytes(void *buf, int nbytes);

This function will place nbytes of random data into the buffer pointed to by buf; it will do so regardless of whether the entropy pool is fully initialized. So, once again, it is possible to get data that is not truly random. Since this function is called from inside the kernel, those calls can happen early in the boot process, so the chance of encountering an insufficiently random entropy pool is relatively high.

This problem is not unknown to the kernel development community, of course. In 2015, Stephan Mueller proposed the addition of a version of get_random_bytes() that would block until the entropy pool is ready, should that be necessary. That idea ran into trouble, though, when Herbert Xu pointed out that it could lead to deadlocks — just the sort of random event that tends not to be of interest. So, instead, a callback interface was created. Kernel code that wants to ensure that it gets good random data starts by creating a callback function and placing a pointer to that function in a random_ready_callback structure:

    struct random_ready_callback {
	struct list_head list;
	void (*func)(struct random_ready_callback *rdy);
	struct module *owner;
    };

That structure is then passed to add_random_ready_callback():

    int add_random_ready_callback(struct random_ready_callback *rdy);

When the random-number subsystem is ready, the given callback function will be called. By adding some more structure (most likely using a completion), the calling code can create something that looks like a synchronous function to get random data.
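
Schematically, a caller can pair the callback with a completion to get a blocking wrapper. The following is a sketch, not the actual in-kernel call site; the wrapper name is invented for illustration, and it relies on add_random_ready_callback() returning -EALREADY when the pool is already initialized:

```c
#include <linux/random.h>
#include <linux/completion.h>
#include <linux/module.h>

static DECLARE_COMPLETION(rng_ready);

static void rng_ready_cb(struct random_ready_callback *rdy)
{
	complete(&rng_ready);
}

static struct random_ready_callback rng_cb = {
	.func = rng_ready_cb,
	.owner = THIS_MODULE,
};

/* Illustrative wrapper: sleep until the entropy pool is initialized,
 * then fill the buffer. */
static int wait_then_get_random(void *buf, int nbytes)
{
	int ret = add_random_ready_callback(&rng_cb);

	if (ret == 0)
		wait_for_completion(&rng_ready);
	else if (ret != -EALREADY)	/* -EALREADY: pool already ready */
		return ret;
	get_random_bytes(buf, nbytes);
	return 0;
}
```

The amount of scaffolding needed for what is conceptually a single blocking call is exactly the cumbersomeness Donenfeld was complaining about.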

As Donenfeld pointed out, this interface is a little bit on the cumbersome side, which may have something to do with the fact that it has exactly one call site in the kernel. He suggested that it might make sense to add a synchronous interface that could be used in at least some situations; that would make it possible to fix some places in the kernel that are at risk of using nonrandom data. Ts'o agreed that this approach might make sense:

Or maybe we can then help figure out what percentage of the callsites can be fixed with a synchronous interface, and fix some number of them just to demonstrate that the synchronous interface does work well.

The end result was a patch series from Donenfeld adding a new function:

    int wait_for_random_bytes(bool is_interruptable, unsigned long timeout);

As its name might suggest, wait_for_random_bytes() will wait until random data is available. If is_interruptable is set, the function will return early (with an error code) should the calling process receive a signal. The timeout parameter can be used to put an upper bound on how long the call will wait. This functionality turned out to be a bit more than was needed, though; in particular, Ts'o expressed skepticism about the timeout idea, asking: "If you are using get_random_bytes() for security reasons, does the security reason go away after 15 seconds?" The third version of the patch set removed all of the arguments to wait_for_random_bytes(), making all waits interruptible with no timeout.

The patch series then adds a set of convenience functions to combine waiting and actually getting the random data, including:

    static inline int get_random_bytes_wait(void *buf, int nbytes);
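
The combination is essentially just the wait followed by the fill; a plausible rendering of the helper (paraphrased from the series, not necessarily the exact patch text) would be:

```c
static inline int get_random_bytes_wait(void *buf, int nbytes)
{
	int ret = wait_for_random_bytes();  /* interruptible, no timeout */

	if (unlikely(ret))
		return ret;	/* e.g. a signal arrived while waiting */
	get_random_bytes(buf, nbytes);
	return 0;
}
```

Callers in sleepable context can use this and know the data came from an initialized pool; atomic-context callers still have to fall back to plain get_random_bytes().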

Most of the comments on the patch set at this point are about relatively minor issues. So chances are that some version of this patch set will find its way into the kernel eventually, with the result, hopefully, that there will be a reduced chance of kernel code using insufficiently random data. But there is one other aspect of this situation that seems entirely deterministic: the arguments about the quality of the kernel's random-number subsystem are far from finished. That is, after all, the fundamental problem with random numbers: it is difficult to be sure that they are truly random.



Waiting for entropy

Posted Jun 6, 2017 16:13 UTC (Tue) by arjan (subscriber, #36785) [Link]

Entropy is a hard thing.

On the one hand, the hardcore crypto/random folks want to raise the bar (and block more), which from a strict perspective makes sense.

On the other hand, the result ends up hurting higher layers in the stack, and apps and libraries then sadly decide to implement their own poor PRNG instead, with a worse result.

I suppose the good news is that many modern CPUs have instructions to get random numbers, and while the hardcore folks are very skeptical about those for /dev/random use, I can imagine /dev/urandom being able to use them...

Waiting for entropy

Posted Jun 12, 2017 15:44 UTC (Mon) by dany (guest, #18902) [Link]

If that CPU instruction is available, /dev/urandom and /dev/random already use it (they mix bytes from the instruction into the entropy pool).

Why not involve the boot loader

Posted Jun 6, 2017 16:28 UTC (Tue) by hsivonen (subscriber, #91034) [Link]

Is there a reason better than NIH for not adopting the OpenBSD approach of making it the bootloader's responsibility to supply a seed to the kernel by having the boot loader read the seed from the disk from a location where the kernel wrote a seed when the system was previously running?

Why not involve the boot loader

Posted Jun 6, 2017 17:04 UTC (Tue) by smurf (subscriber, #17840) [Link]

It's still a regression if you boot a new kernel without that bootloader. OpenBSD controls userspace, so it can do this. Linux cannot, and some embedded systems don't _have_ boot loaders which can be adapted to do so.

Why not involve the boot loader

Posted Jun 6, 2017 17:10 UTC (Tue) by nix (subscriber, #2304) [Link]

Because not every system *has* a bootloader any more. In fact, a number of kernel developers consider the ideal state to be the bootloaderless state: it's one less famously unreliable moving piece to break.

In particular, reading modern filesystems from a bootloader is a disaster waiting to happen: it's barely safe to read ext2 or FAT (with the only problem being the need to write yet more code to read filesystems when the kernel already contains perfectly good code to do just that), but as soon as you're reading a journalled filesystem, you really must replay the journal too. GRUB doesn't, leading to disaster on XFS, at least (since XFS considers everything committed to disk once it's committed to the journal, since, well, it is). Heck, in XFS the journal location can be specified via a mount-time option if you're using an external journal: how on earth do you pass that to the bootloader? So there is nowhere safe to write the information out, unless you write it to a FAT filesystem or something simple like that.

Better to toss all that crap out of the window if possible and let the firmware handle the problem of picking an OS to load. Some EFI implementations can do this quite well (but, sigh, not all, hence the need for things like rEFInd for systems with EFI implementations too crap to let you choose among CONFIG_EFI_STUB-compiled kernels on the EFI System Partition). This does then mean that the kernel has to get randomness somehow, but surely *it* is better able to get some randomness from somewhere on the disk than the bootloader? It already has tested, working filesystem access code, after all.

Why not involve the boot loader

Posted Jun 6, 2017 18:30 UTC (Tue) by jdulaney (subscriber, #83672) [Link]

Well, in theory the kernel has tested, working file system code.

Why not involve the boot loader

Posted Jun 6, 2017 18:35 UTC (Tue) by nix (subscriber, #2304) [Link]

It has filesystem code you're already relying on in any case. :)

Why not involve the boot loader

Posted Jun 9, 2017 12:31 UTC (Fri) by mirabilos (subscriber, #84359) [Link]

And not being able to do this everywhere is no reason to not do it on at least one set of systems that *are* capable of supporting it.

Why not involve the boot loader

Posted Jun 6, 2017 22:19 UTC (Tue) by mjg59 (subscriber, #23239) [Link]

> It already has tested, working filesystem access code, after all.

So does the firmware. Just stash something on the ESP and read it in the boot stub (or use the EFI RNG interface if it's present). We'll just handwave about the risk of having your seed stored unencrypted…

Why not involve the boot loader

Posted Jun 7, 2017 14:47 UTC (Wed) by nix (subscriber, #2304) [Link]

Yes, that's actually what I was driving at. :)

I suppose you can encrypt the thing via the TPM at read time or something, to ensure that an attacker who removes the disks and reads the seed state still can't tell what the seed will be without also attacking the TPM, which we assume to be difficult to do undetected. (The usual objections to use of the TPM for anything, notably that it's really slow and that there isn't really very much information you actually want to become unreadable if your motherboard fails, don't apply here: the random seed is *random*, after all: all you're using the TPM for here is to make sure that the contents of the entropy pool are hard to predict from the entropy on the disk.)

This does seem like a rather extreme scenario to me, though it's so easy to defend against I might go off and hack this up just to learn something about EFI programming! Once the machine is booted the on-disk seed is useless and probably deleted: this is very much a defense against the sort of evil maid who not only sticks a USB into your machine but boots off a USB disk and clones the quiescent internal disk while you're out, and then finds a useful way to use that information before the entropy pool gets more unpredictable state dumped into it a few seconds into your next boot. I suppose if you were regenerating an SSH key on every boot this would let the attacker derive that, but... who does that except on a machine with no persistent storage at all? Even SSL session keys, etc, are generally derived long after the crucial instants.

Why not involve the boot loader

Posted Jun 7, 2017 15:22 UTC (Wed) by mjg59 (subscriber, #23239) [Link]

Of course, if you've got a TPM you've also got a hardware RNG…

Why not involve the boot loader

Posted Jul 20, 2017 19:05 UTC (Thu) by flussence (subscriber, #85566) [Link]

> This does then mean that the kernel has to get randomness somehow, but surely *it* is better able to get some randomness from somewhere on the disk than the bootloader? It already has tested, working filesystem access code, after all.

There's another alternative: store and load the entropy seed via pstore. Works on EFI, KVM and a few other platforms, no block device necessary. (And with the recent addition of in-kernel TLS, that opens the door to horrible ideas like netboot over HTTPS.)

Why not involve the boot loader

Posted Jul 21, 2017 21:37 UTC (Fri) by smckay (guest, #103253) [Link]

Do you need TLS in the kernel? I'm pretty sure iPXE supports HTTPS already.

Why not involve the boot loader

Posted Jul 24, 2017 16:55 UTC (Mon) by nix (subscriber, #2304) [Link]

My only worry about pstore is that with occasional reports of machines getting bricked by use of pstore I'm frankly scared to turn it on on anything but sacrificial machines...

Why not involve the boot loader

Posted Jul 25, 2017 14:59 UTC (Tue) by flussence (subscriber, #85566) [Link]

You might be confusing it with the efivarfs problem (where rm -rf could brick the chipset). pstore is a different interface to the same storage area, but it's currently append-only. The one likely failure mode I'm aware of is that the EFI variable store itself can become full if your system is crashing in a loop and is set to dump panic logs there.

(I saw a colleague going in circles for hours trying to debug why their EFI bootloader changes weren't surviving a reboot, turns out it was that and the kernel wasn't yelling -ENOSPC at efibootmgr when it should've...)

Why not involve the boot loader

Posted Jul 28, 2017 20:31 UTC (Fri) by nix (subscriber, #2304) [Link]

Yeah, that was my confusion, indeed. Thanks for clearing it up.

Why not involve the boot loader

Posted Jun 7, 2017 7:34 UTC (Wed) by matthias (subscriber, #94967) [Link]

Many home routers initialize their private keys on first boot. There is no entropy available that could be read from disk (or wherever). I once read a report, which I cannot find any more, that harvesting many public keys from home routers yields a surprising number of pairs of public keys where the gcd is greater than one. This allows for trivial factorization and thus breaking the encryption. The reason was insufficient entropy at first boot.

Unfortunately the installation images are identical on first boot. A seed would be the same on all identical routers providing no entropy at all.

Why not involve the boot loader

Posted Jun 9, 2017 12:34 UTC (Fri) by mirabilos (subscriber, #84359) [Link]

This can be fixed in the distribution. I have patched embedded distros to both add a random seed (obtained from the build host) into the image *and* another small one (obtained from the host the imaging script is run on) during the time the image is installed/flashed onto the target.

Later firmware upgrades would then “just” need to carry over an entropy seed to the upgraded version before booting into it.

And if you combine this with bootloader and kernel patches to pass (part) of such entropy to the kernel before handing control over to it, you win. (No, we did not do that for Linux back then.)

Why not involve the boot loader

Posted Jun 12, 2017 13:55 UTC (Mon) by error27 (subscriber, #8346) [Link]

That was some years ago. I think we fixed this by feeding a bunch of stuff like mac address into the entropy pool. It's not really random but at least it's not exactly the same for every system on the internet so you don't end up with identical keys.

Feedback from the userspace: Python urandom disaster

Posted Jun 6, 2017 20:57 UTC (Tue) by vstinner (subscriber, #42675) [Link]

I modified Python 3.5 to use the new fd-less getrandom() syscall. Quickly, we got two blocker bug reports about Python being blocked forever. The Debian issue was that a Python 3 script was used to compute a checksum at boot, but Python was blocked in getrandom(). The bugs occurred on VMs and embedded devices.

Quickly, our security experts concluded that it's a feature and that os.urandom() must block. OK, but how to fix the issue with Python initialization? Then the discussion became crazy; the bug tracker and the mailing list were flooded by hundreds of messages from people wanting to give their opinion... The two most important Python security experts resigned...

Trust me or not: a new mailing list was created just to answer the question of os.urandom(): should it block or not?

Two PEPs were written. Mine won and is implemented in Python 3.6:
https://www.python.org/dev/peps/pep-0524/

Python 3.6 now blocks in os.urandom(), but Python's internal initialization code falls back on the non-blocking but unsafe /dev/urandom if getrandom() would block.

Read also my blog post:
https://haypo.github.io/pep-524-os-urandom-blocking.html

Feedback from the userspace: Python urandom disaster

Posted Jun 8, 2017 12:10 UTC (Thu) by robbe (subscriber, #16131) [Link]

Wouldn’t you be better off only calling getrandom() once the first call to os.urandom() actually occurs, not unconditionally at startup or import of the random module?

(Sorry for beating a dead horse.)

Feedback from the userspace: Python urandom disaster

Posted Jun 8, 2017 12:42 UTC (Thu) by vstinner (subscriber, #42675) [Link]

> Wouldn’t you be better off only calling getrandom() once the first call to os.urandom() actually occurs, not unconditionally at startup or import of the random module?

It's more complex than that in practice :-)

To work around a denial-of-service (DoS) attack on Python's hash function, the hash function is now randomized by default. We need entropy for that. Python 3.6 now tries to get entropy from getrandom(), or falls back to /dev/urandom if getrandom() would block.

There is a similar strategy (fallback) when the random module is imported, to create an instance of the Mersenne Twister PRNG.

But when os.urandom() is called explicitly, we now always block on getrandom().

Feedback from the userspace: Python urandom disaster

Posted Jun 10, 2017 20:27 UTC (Sat) by OrbatuThyanD (subscriber, #114326) [Link]

> Then the discussion became crazy, the bug tracker and the mailing list were flooded by hundred of messages of people wanting to give their opinon

Ugh, I had almost exactly the same experience dealing with the Python dev community back when I wrote a PEP that got included in 3.3. It felt like a full year of arguing with bikeshedders, and the whole thing really turned me off Python altogether.

Feedback from the userspace: Python urandom disaster

Posted Jun 10, 2017 22:08 UTC (Sat) by vstinner (subscriber, #42675) [Link]

Oh, I am sorry to hear that. Sometimes we try to be too kind and welcoming on python-dev, and nobody asks to stop the bikeshedding. Well, I did exactly that once, for the Python 3.6 fspath protocol. A PEP was written to make the discussion more constructive.

I just wrote PEP 546 to backport ssl.MemoryBIO to Python 2.7. After around 100 emails, the PEP was approved. The thing is that the discussion was interesting! But it's hard to follow such an active discussion.

In my experience, writing a PEP is painful but it is also really worth it! After the long discussion, the quality is much better and also corner cases have been analyzed ;-)

Waiting for entropy

Posted Jun 6, 2017 21:17 UTC (Tue) by Cyberax (✭ supporter ✭, #52523) [Link]

I'm kinda disgusted by all this crap about random numbers. Has there been an actual attack through insecure urandom? It all looks like a bunch of people thinking up hypothetical stuff and breaking actual code in the process.

Waiting for entropy

Posted Jun 6, 2017 21:41 UTC (Tue) by pbonzini (subscriber, #60935) [Link]

I am reminded of those devices with the same SSH private key because the contents of /dev/random were entirely deterministic on the first boot.

So in addition to your question, I'd ask: besides problems due to lack of entropy in /dev/urandom, how much could you trust /dev/urandom to be sufficiently nondeterministic on embedded systems, before a seed has been read from disk?

Waiting for entropy

Posted Jun 6, 2017 21:59 UTC (Tue) by jepler (subscriber, #105975) [Link]

Try "Weak Keys Remain Widespread in Network Devices" and its citations [21] and [26]:

> In 2012, two academic groups reported having computed the RSA private keys for 0.5% of HTTPS hosts on the internet, and traced the underlying issue to widespread random number generation failures on networked devices. The vulnerability was reported to dozens of vendors, several of whom responded with security advisories, and the Linux kernel was patched to fix a boot-time entropy hole that contributed to the failures. …
>
> In this [2016 -- jepler] paper, we measure the actions taken by vendors and end users over time in response to the original disclosure.

Spoiler: They didn't find that things had improved much, factoring over 313,000 public keys with the technique disclosed in 2012.

Waiting for entropy

Posted Jun 7, 2017 2:57 UTC (Wed) by gdt (subscriber, #6284) [Link]

Non-random keys and covert-channel (timing, power) attacks are the two ways into otherwise-secure crypto protocols. There's no shortage of academic papers with demonstrations against real platforms, and therefore likely no shortage of NSA implementations.

Waiting for entropy

Posted Jun 7, 2017 16:30 UTC (Wed) by hmh (subscriber, #3838) [Link]

There are documented security episodes due to identical systems with identical initial state early at boot and poor entropy gathering capabilities, yes. They end up generating the same crypto keys at first start-up.

Look for the paper "Mining Your Ps and Qs: Detection of Widespread Weak Keys in Network Devices", by Nadia Heninger, Zakir Durumeric, Eric Wustrow and J. Alex Halderman. Read the "Repeated keys due to low entropy" section.

There is an online copy of that paper at: https://factorable.net/weakkeys12.extended.pdf

That has nothing to do with /dev/urandom being secure or insecure, though. The correct term would be "/dev/urandom misuse", IMHO. But it is too easy to misuse.

The problem is real.

Waiting for entropy

Posted Jul 6, 2017 1:34 UTC (Thu) by vomlehn (subscriber, #45588) [Link]

An analogy--the mere presence of a lock on your door, no matter whether it is a lousy lock or even a fake, will prevent most people from even trying to open it. That's one of the social engineering aspects of security.

Waiting for entropy

Posted Jun 7, 2017 0:46 UTC (Wed) by neilbrown (subscriber, #359) [Link]

Surely "get_random_bytes()" should return an error if there aren't any random (by which I suspect we really mean "unguessable") bytes to be had. Reporting success, but returning predictable data seems to violate the law of least surprise.
Then we could have "get_random_bytes_or_4()", which never fails, but sometimes just returns "4" (https://xkcd.com/221/). This can be used by code which cannot wait, but cannot handle failure.

Hardware requirement

Posted Jun 7, 2017 7:05 UTC (Wed) by tialaramex (subscriber, #21167) [Link]

Another way forward, which would be painful for Linux to choose today, but remains an option, is to decide that hardware RNG is a mandatory platform feature, the same way Linux insists platforms should have certain other features in order to get a Linux port, most obviously virtual addressing.

If you _want_ you could choose a platform in which the hardware RNG is set to output '4' or its own serial number, or whatever, and there would be some Linux whitening layer on top of your hardware RNG that would make that look at least pseudo-random, much like those old patches that would have Linux run "successfully" on machines where RAM didn't work properly. And when it breaks you'd get to keep both halves.

Hardware requirement

Posted Jun 7, 2017 17:26 UTC (Wed) by dps (subscriber, #5725) [Link]

Requiring a hardware random number generator would be a really bad idea IMHO.

I have a 6-core AMD Phenom II box which, AFAIK, does not have any random number generation hardware. It still works, and I want to continue to be able to run Linux on it; a hardware requirement would make this impossible. As a desktop system it probably has a reasonable amount of entropy most of the time. There are far too many boxes that don't have these features---last time I looked, about 90% of AMD motherboards did *not* have a TPM, random number hardware, or EFI firmware. I suspect that most motherboards still don't have a TPM or RNG hardware.

I usually build my own desktop boxes, partly because it is usually significantly less expensive and partly because that way I know exactly what hardware is in the box.

Linux never ran on any hardware without a paged virtual address space, even in the days of 0.99pl13, which only supported <1GB of memory on single-core, 32-bit 80x86 processors. At the time the fastest x86 processor you could buy was a single-core 80486DX2/66, and most motherboards did not support more than 16MB of RAM. The ELKS project exists for those who can't afford an MMU, though an 8086-compatible motherboard might not be cheap.

Hardware requirement

Posted Jun 7, 2017 20:39 UTC (Wed) by flussence (subscriber, #85566) [Link]

Decreeing a billion tons of existing Linux devices to be e-waste might make the likes of Samsung happy, but a saner solution would be to fix it in software, like HAVEGE already does. There's plenty of usable entropy to be found in an average computer if you stop treating it as a perfectly spherical IBM 8088 with clock speed a fixed multiple of Planck time.

Hardware requirement

Posted Jun 7, 2017 21:15 UTC (Wed) by rahvin (subscriber, #16953) [Link]

There are a not insignificant number of people that believe the NSA worked through the hardware companies to produce hardware RNGs in a way that makes them deterministic for the group that understands the subtle bugs they put into the hardware. Whether or not you accept that, it's not hard to understand that without genuine random data your encryption is worthless, so any progress on making random numbers better is a good thing.

I'm personally concerned about RNGs on routers, with things like the rise of the Mirari botnet. Mirari uses the simplest of compromise methods, default passwords, but it wouldn't be hard to upgrade it to target routers with good passwords but bad entropy; in doing so it would go from 600 million devices to several billion. Mirari is actually a threat to the internet itself, given its scale and the ease with which anyone can build their own Mirari and use it.

Hardware requirement

Posted Jun 8, 2017 6:39 UTC (Thu) by daenzer (subscriber, #7050) [Link]

FYI, the name of the botnet is not "Mirari" but "Mirai", the transliteration of Japanese 未来 for "future".

Hardware requirement

Posted Jun 12, 2017 19:37 UTC (Mon) by bfields (subscriber, #19510) [Link]

"There are a not insignificant number of people that believe the NSA worked through the hardware companies to produce hardware RNGs in a way that makes them deterministic for the group that understands the subtle bugs they put into the hardware."

What I don't get is why you'd focus on that specific attack. I'm not sure how you design an operating system kernel for security against an attacker with the resources to pressure hardware companies to add backdoors to their designs.

Hardware requirement

Posted Jun 8, 2017 8:32 UTC (Thu) by njs (guest, #40338) [Link]

There's an important distinction here between entropy *sources*, and the kernel's *estimate* of how much entropy it has. In practice, there are lots of good entropy sources: many modern systems have hardware RNGs, and even if they don't, measuring cache latencies on any modern CPU gives non-deterministic values as far as anyone can tell. But in both of these cases, the problem is that it's very hard to *know* whether they're good sources of entropy. (For hardware RNGs, it's not just the NSA issue; it's also that if they suffer some internal physical failure then the first notification you get is that your bitcoin wallet is suddenly empty. And the cache thing works really well as far as anyone can tell, but in theory it *should* be totally predictable to someone who understands the hardware, so it makes people nervous.)

Ted is happy to mix these sorts of sources into the pool, and they make you safer. But he doesn't count them when trying to estimate whether there's "enough" entropy for the kernel to *know* that you're safe. And it's that estimate that's used to decide whether to block or not during early startup, and causes difficult engineering problems downstream. So hardware RNGs don't actually help at all with that problem.

Passing entropy at boot

Posted Jun 9, 2017 12:29 UTC (Fri) by mirabilos (subscriber, #84359) [Link]

OpenBSD just changed the bootloader protocol, adding an additional entry that is bootloader-provided entropy passed to the kernel before handing control over to the kernel. The entropy is read from disc by their second-stage bootloader, and overwritten early by userspace to prevent repeats.

Of course, this won’t work for _all_ scenarios in which the Linux kernel is currently booted, but it can be added to _some_ of them bit by bit.

You could also conceivably put some write to /dev/urandom into the initrd, a couple of bytes just so that each initrd regeneration will add a bit of (by then pseudo-)random data to each boot, which, combined with the other input (including time, which, yes, I know, is not a secret and additionally susceptible to replay attacks in VMs, but still better than nothing¹), may help.

Investigating platform-specific things is also a (though not very low-hanging) fruit. For example, if a board is known to have two separate oscillators, measure the delta between them for a short while during early boot and add that.②

① In the words of the author of RANDOM.SYS for DOS: “Every bit counts.”
② My recipe for this is: add enough sources, for which each source is at least one of, and for which the (XOR or otherwise well-combined) sum of all sources is all of: unpredictable; unobservable; mathematically well-distributed (e.g. prior PRNG output) — then worry about the mixing algorithm ☺


Copyright © 2017, Eklektix, Inc.
This article may be redistributed under the terms of the Creative Commons CC BY-SA 4.0 license
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds