Stepping back from this specific incident, don't we first need some kind of risk analysis of our systems (of the whole infrastructure, if possible)?
Obviously, the compromise of a major distributor's primary package signing key is a big deal. But while addressing communication concerns, crisis organization, etc., shouldn't we also ask ourselves whether there are other big risks (possibly unaddressed) and try to propose a sensible list?
I can think of several other "big threats" that could be "amusing" to consider (if we do that before they occur):
- single-file compiler compromise;
- compromise of a top-level administrator (via a large amount of money or some other kind of human pressure);
- compromise of the whole company (e.g. acquisition by a competitor hostile to open source);
- a theoretical cryptanalysis breakthrough (ouch, AES in the dustbin!);
- [your idea here]
[Start another list with safety-related issues: fire, water, electricity, etc.]
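To make such a list concrete (and keep it "open", as argued below), one could maintain it as a small machine-readable risk register. Here is a minimal sketch in Python; every threat name and every likelihood/impact score is purely illustrative, not an actual assessment:

```python
# Hypothetical sketch of an open risk register: each threat gets a rough
# likelihood and impact score (1-5), and entries are ranked by their
# product. All threats and scores below are made-up placeholders.

risks = [
    # (threat, likelihood 1-5, impact 1-5)
    ("signing key compromise",       2, 5),
    ("compiler compromise",          1, 5),
    ("administrator coercion/bribe", 2, 4),
    ("hostile company acquisition",  1, 4),
    ("cryptanalysis breakthrough",   1, 5),
    ("fire/water/power incident",    3, 3),
]

def ranked(entries):
    """Return entries sorted by descending risk score (likelihood * impact)."""
    return sorted(entries, key=lambda e: e[1] * e[2], reverse=True)

for threat, likelihood, impact in ranked(risks):
    print(f"{likelihood * impact:2d}  {threat}")
```

Keeping the register as plain data in a public repository would let anyone propose new threats or challenge a score, which is exactly the kind of "open" analysis suggested here.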
Furthermore, with open-source systems, can we do any less than conduct such a risk analysis in the open too? An "open" risk analysis would be rather new and possibly very interesting in that area (my humble opinion).
There is also a recurrent problem with this kind of debate: for a while after a big incident, everyone listens to the question but focuses heavily on that specific incident; and when everything is going well, nobody really listens to those quietly trying to prepare for bad things... (Well, I suppose that's human... ;-)