You've essentially restated what I wrote, so I don't disagree with you.
Note that we don't know how hard this bar would be to clear. There are effects
like 'clumping' of expertise: in any specialised area of a technical
field, the people working in it tend to be drawn from a much smaller group than
the set of all people qualified in the field. I.e. the set of people who *write*
compiler A is less independent of those who author compiler B than you might
hope. Hence your assumption that the attacker would have to *hack* into the other
compiler is unsafe. They could instead simply transition from working on A to B,
either as part of their normal career progression or at least seemingly so.
Next, as dwheeler also notes in his paper, it may be hard to obtain another
unsubverted compiler. Indeed, looking carefully at his work, it seems his proofs
specifically require one compiler that can be absolutely trusted to
compile the compiler under test correctly, as the starting point of the process. (I
thought at first that perhaps a set was sufficient, such that you didn't have
to know which compiler was trustworthy, as long as you could be confident that at
least one of them was.) See the first sentence of 8.3 in his thesis, and the
multiple discussions of the role of a trusted compiler in the DDC process.
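For concreteness, here's a toy model of why that one trusted compiler is load-bearing in the DDC comparison. This is only a sketch, not Wheeler's actual formalism: all the names (compile_trusted, compile_suspect, ddc_check) are made up, and "compilers" are just functions from source strings to binary strings.

```python
SRC_A = "source-of-compiler-A"  # source code of the compiler under test

def compile_trusted(source):
    # The trusted compiler cT: a faithful translation, modeled as a tag.
    return f"bin({source})"

def compile_suspect(source):
    # A Thompson-style subverted compiler: it miscompiles compiler A's
    # own source, inserting a backdoor, but behaves normally otherwise.
    if source == SRC_A:
        return f"bin({source}+backdoor)"
    return f"bin({source})"

def run(compiler_binary, source):
    # Running a compiled compiler binary on new source. In this toy
    # model the backdoor propagates itself when recompiling SRC_A.
    if "+backdoor" in compiler_binary and source == SRC_A:
        return f"bin({source}+backdoor)"
    return f"bin({source})"

def ddc_check(suspect_binary, trusted_compile, src):
    # Stage 1: compile the suspect compiler's source with the trusted cT.
    stage1 = trusted_compile(src)
    # Stage 2: use the stage-1 binary to recompile the same source.
    stage2 = run(stage1, src)
    # If the suspect binary really came from src, the two must match.
    return stage2 == suspect_binary

clean = compile_trusted(SRC_A)
subverted = compile_suspect(SRC_A)
assert ddc_check(clean, compile_trusted, SRC_A) is True
assert ddc_check(subverted, compile_trusted, SRC_A) is False
```

The point the toy makes: the whole check only works because compile_trusted is assumed honest. If stage 1 were done with another subverted compiler carrying the same backdoor, stage2 would match the subverted binary and the check would pass anyway.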
So this still seems to boil down to "you have to write (or verify all the source
of) your compiler in order to really be able to trust it".
I'm not pooh-poohing the work per se, just saying this good work is slightly
marred by the overly grand claim made in its title.