This is something I don't quite understand. Yes, the packets might be registered at the
network card exactly N nanoseconds apart, but between the time that the packet is registered
by the card and when entropy might be added there is:
- Waiting for the PCI bus to be free to check the status of the card
- The CPU finding the code to run, which may involve walking page tables and pulling
code out of any number of caches, each of which takes an unpredictable amount of time
- The process of DMAing the data to main memory takes an unpredictable amount of time,
depending on the state of the DRAM.
- The buses are shared between various CPUs which are doing other things at the same time.
- The execution time of CPU instructions is affected by branch prediction logic and
instruction scheduling algorithms. Hyperthreading makes it worse.
And you're saying that at the end of all this there's not even a single bit of entropy? If
the machine were otherwise completely idle I might understand it, but if you register lots
of dubious sources and use as entropy the time between different dubious sources, I don't
see how it could be in any way predictable.
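To make the argument concrete, here's a crude user-space sketch (my own illustration, not
anything from the kernel's actual entropy code): sample a high-resolution clock in a tight
loop, treat the deltas as stand-ins for inter-event timestamps, and take the low bit of
each delta as a candidate entropy bit. The deltas are perturbed by exactly the mechanisms
listed above: cache misses, scheduling, and bus contention.

```python
import time

def sample_deltas(n=1000):
    """Sample successive high-resolution clock reads and return the deltas.

    A user-space stand-in for interrupt timestamps: each delta is affected
    by cache state, scheduling, and memory-bus contention.
    """
    stamps = [time.perf_counter_ns() for _ in range(n + 1)]
    return [b - a for a, b in zip(stamps, stamps[1:])]

def lsb_bits(deltas):
    """Take the least significant bit of each delta as a candidate entropy bit."""
    return [d & 1 for d in deltas]

deltas = sample_deltas()
bits = lsb_bits(deltas)
ones = sum(bits)
print(f"{len(bits)} bits, {ones} ones ({ones / len(bits):.2%})")
```

Of course, "the low bit wobbles" is not proof of entropy; an attacker model matters. But it
shows how little machinery is needed to start looking at the question empirically.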
If I had any idea how to do it, I'd create a device that tried to extract entropy from the
timer interrupt and see if there is any correlation to be found...
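The correlation test itself is the easy part. A first cut (assuming you had the interrupt
timestamps in hand) would be the lag-1 serial correlation coefficient over the stream of
deltas: a value near zero suggests successive samples carry independent information, while
a value near one means each sample is largely predictable from the previous one.

```python
def serial_correlation(xs):
    """Lag-1 serial correlation coefficient of a sample sequence.

    Near 0: successive samples look independent.
    Near +/-1: each sample is largely predictable from its predecessor.
    """
    n = len(xs)
    mean = sum(xs) / n
    num = sum((xs[i] - mean) * (xs[i + 1] - mean) for i in range(n - 1))
    den = sum((x - mean) ** 2 for x in xs)
    return num / den if den else 0.0
```

A monotonically increasing sequence like `[1, 2, 3, 4, 5]` scores well above zero, while an
alternating one like `[1, -1, 1, -1, 1]` scores negative, so the statistic does catch the
obvious kinds of structure. Real entropy estimation would need more than one lag and more
than one statistic, but this is the shape of the experiment.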