Quote of the week

    Yes! Software is engineered, not theorized. Engineering margin?
Posted Jul 27, 2018 16:26 UTC (Fri)
by david.a.wheeler (subscriber, #72896)
I do think that *sometimes* it's useful to build in an "engineering margin" for the unknowns. Many bridges are standing today because the engineers not only built them for what was expected, but added a margin to handle the unexpected.
I don't know if 512 bits is adding enough engineering margin. If the algorithm is COMPLETELY broken, then the number of bits is irrelevant. The main argument I can see for using 512 bits would be that the extra bits create a safety margin against a *partial* break. That's not completely insane; many algorithms in the past have started with *partial* breaks, and using more bits provided some additional time. A history of hash algorithms might be useful here: http://valerieaurora.org/hash.html

The challenge is estimating the likelihood that something will break the 256-bit version AND that the 512-bit version provides useful margin against that break (for at least a few more years).
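[As an aside, to make the "safety margin" reasoning concrete (this illustration is not part of the original comment): for an unbroken n-bit hash, the best generic collision attack is the birthday bound of roughly 2^(n/2) work, so SHA-256 gives about 2^128 and SHA-512 about 2^256. A partial break that shaves some bits off the effective strength therefore leaves the 512-bit variant much more headroom. A minimal Python sketch using the standard hashlib module, with a made-up input:

    import hashlib

    data = b"example content to hash"  # illustrative input only

    # SHA-256 yields a 256-bit digest, SHA-512 a 512-bit digest.
    d256 = hashlib.sha256(data)
    d512 = hashlib.sha512(data)
    print(d256.digest_size * 8, d256.hexdigest())  # 256 bits
    print(d512.digest_size * 8, d512.hexdigest())  # 512 bits

    # For an unbroken n-bit hash, a generic (birthday) collision attack needs
    # roughly 2**(n/2) work: ~2**128 for SHA-256, ~2**256 for SHA-512.
    for name, bits in (("SHA-256", 256), ("SHA-512", 512)):
        print(f"{name}: generic collision cost ~ 2**{bits // 2}")
]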
Posted Aug 2, 2018 22:51 UTC (Thu)
by Wol (subscriber, #4433)
Umm ... *PROPERLY* theorised software is even better than the well-engineered variety. The problem is that people don't realise how hard it is to properly theorise, and as soon as someone finds a hole in the axioms of your theory, you're SOL. Isn't that a pretty accurate description of Meltdown/Spectre?

And I bang on about RDBMSs - imho their axioms are self-contradictory ... (which is why I think it's *impossible* to create a well-engineered RDBMS! :-)
Cheers,
Wol