Doesn't do so much for remote verification
Posted May 28, 2005 14:30 UTC (Sat) by jvotaw
Parent article: The Integrity Measurement Architecture
If I understand correctly, the system boils down to securely computing a hash of all programs run on a machine, signing it, and reporting it. A remote party, such as a publisher or bank, refuses to communicate with you unless you're only running software that will look after the data (intellectual property, financial data, whatever).
The reporting step seems to be the weakest link. If you want to steal intellectual property, simply run your machine once in the secure mode and record the signed hash as it is transmitted. Reuse that value in the future as needed. The same approach works if the attacker is a thief and the remote party is your bank: the thief records the correct signed hash on an uncompromised system and forces your (compromised) system to return it.
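The replay concern can be sketched in a few lines. This is a hypothetical toy, not the real IMA/TPM protocol: an HMAC stands in for the TPM's signing key, and the verifier checks only the signed measurement with no per-session freshness, which is exactly what makes a recorded quote reusable.

```python
import hashlib
import hmac

# Hypothetical stand-ins: HMAC plays the role of the TPM's signature,
# and GOOD_MEASUREMENT is the hash a "clean" system would report.
TPM_KEY = b"per-machine secret"
GOOD_MEASUREMENT = hashlib.sha1(b"trusted software list").digest()

def tpm_quote(measurement: bytes) -> bytes:
    # "Sign" the current measurement (HMAC as a stand-in for real signing).
    return hmac.new(TPM_KEY, measurement, hashlib.sha1).digest()

def verifier_accepts(measurement: bytes, quote: bytes) -> bool:
    # Verifier checks the signature and the known-good value -- but
    # nothing ties the quote to this particular session.
    return (measurement == GOOD_MEASUREMENT and
            hmac.compare_digest(quote, tpm_quote(measurement)))

# 1. Run the machine once in the secure configuration and record the quote.
recorded = (GOOD_MEASUREMENT, tpm_quote(GOOD_MEASUREMENT))

# 2. Later, replay the recording from a compromised system.
assert verifier_accepts(*recorded)  # the verifier is fooled
```

The point is that the signature proves the measurement was genuine *at some time*, not that it describes the machine talking to you *now*.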
The attack may be easier or harder depending on whether there are race conditions in IMA, whether other computers can eavesdrop on the exchange, whether the TPM's key is unique per computer, and whether your connection to the remote party is encrypted. I can think of a few things that would probably solve the problem completely, but it's not clear whether the authors have considered them.
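One standard countermeasure (I don't know whether the IMA authors use it) is challenge-response freshness: the verifier sends a random nonce that the TPM must fold into the signed quote, so a recorded quote is useless in any later session. A toy sketch, with the same hypothetical HMAC stand-in for TPM signing:

```python
import hashlib
import hmac
import os

# Hypothetical stand-ins, as before.
TPM_KEY = b"per-machine secret"
GOOD = hashlib.sha1(b"trusted software list").digest()

def tpm_quote(measurement: bytes, nonce: bytes) -> bytes:
    # The quote now covers the verifier's nonce as well as the measurement.
    return hmac.new(TPM_KEY, measurement + nonce, hashlib.sha1).digest()

# Session 1: verifier issues a fresh challenge, machine answers.
nonce1 = os.urandom(20)
quote1 = tpm_quote(GOOD, nonce1)
assert hmac.compare_digest(quote1, tpm_quote(GOOD, nonce1))  # accepted

# Session 2: a new challenge means the recorded quote1 no longer verifies.
nonce2 = os.urandom(20)
assert not hmac.compare_digest(quote1, tpm_quote(GOOD, nonce2))  # replay rejected
```

The attacker can still record quote1, but it only answers the challenge it was made for.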
Or am I confused and missing something important?