Diverse Double-Compiling (DDC) counters "trusting trust"
Posted Jun 21, 2013 19:02 UTC (Fri) by david.a.wheeler (subscriber, #72896)
In reply to: Diverse Double-Compiling (DDC) counters "trusting trust" by paulj
Parent article: Van den Oever: Is that really the source code for this software?
A few comments...
"assuming that different compilers have independent authors, or independent sets of people involved in their release, seems a weak assumption to me." - Actually, I believe it's pretty strong. With some exceptions, different compilers are written by different people. And if they ARE the same people, well, use a different one, or write your own for use in testing. But you do not need to write a "production-quality compiler" to verify a compiler using DDC, you just need one that functions correctly for one program... and that makes it a much easier problem.
"Further, as Thompson pointed out, the compiler is just one vector, and he explicitly noted his attack could be implemented in _any_ programme-handling programme. Thus, it is not just the compiler, but the whole environment the compiler and any other programme-handling programme run in that must be verified or trusted." -
Sure, but DDC can be used for them, too. You don't have to limit it to just the "compiler" as traditionally defined. I specifically talk about using DDC to test the compiler+operating system as a system.
"Finally, assume that increasing numbers of additional compilers might buy more re-assurance. That does not mean it is practical though. Even with just *1* compiler, tcc, you ran into issues (bugs) that made reliable builds to the same binary difficult. The more compilers you add to the mix, the more of these problems you run into, and the less practical it becomes." -
Ah, but those problems normally only happen the first time you apply DDC, and in any case, it's usually hard to do something when no one has ever done it before.
"My critique gives more examples of assumptions that equate to having to write/verify everything yourself OR trust, as was Thompson's point."
- I think we fundamentally differ on what the problem is, leading us to differences on whether or not I've solved it :-). I believe the problem is having to totally trust any one thing, without being able to verify it. If you can independently verify something, then it's no longer the total "trusting trust" problem that Thompson pointed out. Sure, you have to trust something, but if you can independently verify that thing you trust in multiple ways, it's nowhere near as risky.
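Concretely, continuing the sketch above (same hypothetical build() and sha256() helpers), "verify in multiple ways" just means repeating the check with several independent trusted compilers and requiring every result to agree with the shipped binary; the paths below are illustrative:

# Diversity: repeat the check with several independent trusted compilers
# and require every stage-2 result to match the distributed binary.
# An attacker would have to subvert all of them, consistently, to go undetected.
TRUSTED_COMPILERS = ["bin/tcc", "bin/cT2", "bin/cT3"]

stage2_hashes = set()
for n, trusted in enumerate(TRUSTED_COMPILERS):
    stage1 = build(trusted, f"/tmp/stage1-{n}")
    stage2 = build(stage1, f"/tmp/stage2-{n}")
    stage2_hashes.add(sha256(stage2))

# Every trusted compiler must regenerate the same binary, and that binary
# must be the one that was shipped.
assert stage2_hashes == {sha256(BINARY_OF_A)}, "DDC cross-check failed"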