Theoretical vs. practical cryptography in the kernel
Posted Aug 16, 2020 18:55 UTC (Sun) by nivedita76 (subscriber, #121790)
In reply to: Theoretical vs. practical cryptography in the kernel by kleptog
Parent article: Theoretical vs. practical cryptography in the kernel
The reason to reseed any PRNG, CS or not, is because you are concerned about an attacker using some means to exfiltrate the state of the PRNG (for a non-CSPRNG, this might be as easy as observing a moderate number of outputs; an LFSR, for example, just requires observing a number of outputs proportional to the state size).
Given the scenario that reseeding is protecting against, adding small amounts of entropy at a time provides almost no additional security, and has two problems. Firstly, it effectively "wastes" that entropy: if you had waited to accumulate enough entropy and add it all at the same time, you could actually have gained security. Secondly, it actually reveals the bits you are mixing in to the attacker. If those bits are then also used later to seed a different PRNG, you've reduced the security of that PRNG as well, for no gain in security of the first one.
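A minimal sketch of that second problem, using a hypothetical SHA-256-based toy PRNG (the construction, names, and sizes are mine, not the kernel's): an attacker who already knows the state can brute-force each dribbled byte of fresh entropy for 2^8 work per reseed, and thereby track the state, and learn the entropy bytes, indefinitely.

```python
import hashlib
import secrets

def mix(state, seed_bytes):
    """Toy reseed: hash the fresh entropy into the state."""
    return hashlib.sha256(state + seed_bytes).digest()

def output(state):
    """Toy output function: 8 bytes derived from the state."""
    return hashlib.sha256(b"out" + state).digest()[:8]

state = b"\x00" * 32     # the PRNG's real state, known to the attacker at t=0
known = state            # the attacker's copy

for _ in range(4):                   # four dribbled 8-bit reseeds
    fresh = secrets.token_bytes(1)   # one byte of real entropy at a time
    state = mix(state, fresh)
    leak = output(state)             # attacker observes one output per reseed
    # Attacker brute-forces the single unknown byte: at most 2^8 guesses.
    for guess in range(256):
        cand = mix(known, bytes([guess]))
        if output(cand) == leak:
            known = cand
            break

assert known == state   # attacker tracked the state through every reseed
```

Had the same four bytes been accumulated and mixed in at once, the attacker would have faced a single 2^32 search instead of four 2^8 searches.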
Posted Aug 24, 2020 14:16 UTC (Mon) by ecree (guest, #95790)
Except that this isn't actually true.
Viewing the attack from the information-theoretical perspective, there is _no difference_ between "mix in the entropy in dribs and drabs as you collect it" and "save it up until you can catastrophically re-seed": either you're adding entropy faster than it leaks (in which case you're fine) or you're leaking state faster than you add entropy (in which case you're screwed).
The difference only appears when you consider the amount of work required to brute-force-invert the mixing function _if_ state is leaking fast enough to allow the information-theoretic attack: partial-state-extension costs the attacker only 2^k guesses per k-bit re-seed, and those small terms on frequent small re-seedings add up to less total work than the single big search forced by one catastrophic re-seed, because the 2^k factor dominates.
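The arithmetic behind "the 2^k factor dominates", with illustrative numbers of my own choosing: four 8-bit reseeds versus the same 32 bits mixed in at once.

```python
# Same 32 bits of entropy, two reseed schedules (figures are illustrative).
work_dribbled = 4 * 2**8   # four 8-bit reseeds: 4 * 256 = 1024 guesses in total
work_batched = 2**32       # one 32-bit reseed: ~4.3e9 guesses

assert work_dribbled < work_batched
ratio = work_batched // work_dribbled   # batching costs ~4 million times more work
```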
Catastrophic re-seeding is thus important if your security depends on the cryptographic strength of your PRNG, but _not_ if your security depends on adding noise to the state faster than the attacker can obtain information to constrain that state (e.g. by observing outputs); in that case catastrophic re-seeding may actually _harm_ you, by letting the attacker reach a greater partial state during the (longer) interval between re-seed events.
> for a non-CSPRNG, this might be as easy as observing a moderate number of outputs; an LFSR, for example, just requires observing a number of outputs proportional to the state size
From the information-theoretical perspective, the same is true of any PRNG, CS or no, because you can always enumerate all possible states and see which one matches. Observing (on average) n consecutive outputs and doing O(n*2^n) work will crack an n-bit CSPRNG just fine. The difference with the LFSR is that there's a fairly easy 'cryptanalysis' that lets you do it with much less work than brute force.
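The enumerate-all-states attack in miniature, with a toy 16-bit "CSPRNG" built from SHA-256 (the construction and sizes are illustrative, chosen so the full search runs in a moment): brute force over all 2^16 seeds finds the one whose output stream matches the observation, no cryptanalysis required.

```python
import hashlib

def stream(seed, nbytes):
    """Toy 'CSPRNG': derive output bytes from a 16-bit seed."""
    return hashlib.sha256(seed.to_bytes(2, "big")).digest()[:nbytes]

secret_seed = 0xBEEF
observed = stream(secret_seed, 8)   # attacker observes 8 output bytes

# Enumerate all 2^16 possible states and keep those matching the observation.
candidates = [s for s in range(2**16) if stream(s, 8) == observed]

assert secret_seed in candidates    # brute force recovers the state
```

With a realistic 256-bit state the same enumeration is information-theoretically valid but computationally hopeless, which is exactly the gap between the two arguments.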
One of the things that drove me crazy reading the original thread on netdev was the way some people (especially Spelvin) equivocated between the information-theoretical argument and the work argument. Tbh it looked like taking the Schneier cat-reseed paper and cargo-culting it into some kind of panacea-cum-universal-necessity.