> If one is generating a large key pair, to use for the next century, using some data from
/dev/random is probably right.
Considerable care should be taken when using so-called 'real' random data. Unlike
particular PRNG algorithms, whose statistical properties are usually well studied and
known, 'physical sources of real randomness' can easily exhibit non-random properties,
since the actual behaviour of such sources depends on many factors (physical, temporal,
spatial, etc.) and is far more obscure than that of deterministic RNGs. George Marsaglia
demonstrated this, for example, when he tried out several Johnson-noise-based TRNGs while
preparing his random-data CD-ROM. To get the best result, it is usually wise to mix the
'real random data' with the output of a good deterministic PRNG. The result won't become
any less random than it was (assuming, of course, that the TRNG and PRNG outputs are
uncorrelated, which is a fairly safe assumption). And if the worst happens and the TRNG
turns out to be flaky, the mixing really saves the day: the combined output remains
essentially as good as either source used separately, and typically better than either
one alone.
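The mixing step described above can be as simple as XORing the two streams: if the
streams are independent, the XOR is at least as hard to predict as the less predictable
input. A minimal sketch in Python, where `os.urandom` stands in for the hardware source
and Python's Mersenne Twister stands in for the deterministic PRNG (both are illustrative
assumptions, not what /dev/random actually does, and Mersenne Twister is not a
cryptographic generator):

```python
import os
import random

def mix(a: bytes, b: bytes) -> bytes:
    # XOR-combine two equal-length byte streams. If the streams are
    # independent, the result is at least as unpredictable as the
    # better of the two inputs.
    assert len(a) == len(b)
    return bytes(x ^ y for x, y in zip(a, b))

n = 32
trng_out = os.urandom(n)                      # stand-in for a hardware source
prng_out = random.Random(12345).randbytes(n)  # deterministic PRNG stream

mixed = mix(trng_out, prng_out)

# Worst case: the 'TRNG' is completely broken and emits only zeros.
# The mixed output then degrades to the PRNG stream, not to zeros.
flaky = bytes(n)
assert mix(flaky, prng_out) == prng_out
```

Note the crucial independence assumption: XORing a stream with a correlated copy of
itself can cancel randomness out entirely, which is why the two sources must not share
state.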
I can only hope that /dev/random does exactly that (yes, I'm too lazy to check the
source); if it doesn't, I personally wouldn't trust its output as much as it is hyped
to be.