> The idea of secure boot is to ensure that it's as difficult as possible to run untrusted code before you've set up your desired trust chain. If you can launch untrusted code within the Linux kernel then it's a simple matter to use that as an attack vector for Windows - rather than writing your malware from scratch, you drop a copy of the exploitable kernel in the EFI system partition, provide a trivial initramfs that gets you into the kernel, pass in the NT kernel and appropriate drivers, set up the loader parameter block, set up some new page tables and jump into the Windows kernel. Of course, rather than just booting it you've taken the opportunity to compromise it in some subtle way. Windows boots slightly more slowly than usual, but there's no reason for most users to notice - and worse, the malware checking code that would normally be able to rely on the kernel not having to be compromised is now unable to do anything useful.
> So that's the why - breaking the trust barrier results in revocation.
That is the first problem with secure boot: KNOWINGLY breaking the trust barrier results in revocation. Worse yet: what counts as "knowingly" is judged from the point of view of whoever sits in the trust chain, and the result is still revocation. This is preposterous, because it causes revocation against the will of the computer owner. So I have to agree with jiu and eduperez when they say that "secure boot was devised to distract the energy of people building linux to hinder their progress" and "Secure boot was created to lock users out of their own computers".
Those two problems amount to: if someone in a high position in the "trust chain" wants to lock you out of your own computer because "fsck you, that's why", they can. Worse, they cannot lock malware out of your computer, because (1) it is difficult or impossible to know what is malware in the first place, and (2) lots of malware will stay hidden, and the vulnerabilities that lead to infection and pwning will not trigger key revocations, because the people in the trust chain do not know about them. Again, Stuxnet and Flame are good examples of "hidden for about two years", which is plenty of time for plenty of sabotage and espionage.
And then the problems in the how:
> The how is a little more difficult. There's two ways you can revoke binaries. The first is to add an update of the specific SHA256. That makes sense in many cases, but probably isn't the best choice here - any older kernels are presumably also compromisable. So instead you just revoke the individual signing key and sign your kernels with a new one. How will that scale? Great question. We'll find out.
It does not scale at all.
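The two revocation mechanisms quoted above can be sketched as checks against a dbx-style blacklist: per-binary hashes versus revoked signing keys. This is a toy model, not the actual UEFI data structures, and every name here (dbx_hashes, revoke_key, etc.) is illustrative:

```python
import hashlib

# Toy stand-in for the UEFI forbidden-signature database (dbx):
# it can hold per-binary SHA-256 hashes and revoked signing-key IDs.
dbx_hashes = set()
dbx_keys = set()

def revoke_binary(image: bytes) -> None:
    """Mechanism 1: blacklist one specific binary by its hash."""
    dbx_hashes.add(hashlib.sha256(image).hexdigest())

def revoke_key(key_id: str) -> None:
    """Mechanism 2: blacklist the signing key; every binary it
    ever signed (old kernels included) is rejected at once."""
    dbx_keys.add(key_id)

def boot_allowed(image: bytes, signing_key_id: str) -> bool:
    h = hashlib.sha256(image).hexdigest()
    return h not in dbx_hashes and signing_key_id not in dbx_keys

# A vulnerable kernel signed with a hypothetical distro key:
old_kernel = b"vmlinuz-3.4-vulnerable"
revoke_binary(old_kernel)               # only this exact image is blocked
assert not boot_allowed(old_kernel, "distro-key-1")
assert boot_allowed(b"vmlinuz-3.4-rebuilt", "distro-key-1")  # a rebuild evades the hash entry

revoke_key("distro-key-1")              # now everything under that key is blocked
assert not boot_allowed(b"vmlinuz-3.4-rebuilt", "distro-key-1")
```

The sketch also shows why per-hash entries don't scale: every older compromisable build needs its own entry, while revoking the key blocks all of them at the cost of invalidating every legitimately signed binary too.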
I have stated above that the why rests on a flawed premise. But the how is a nightmare. To revoke a key, you have to write that key (or that SHA256, or whatever) to a database that will be read at the UEFI level. THAT, per se, is a huge vulnerability. Who writes that database? How is that database read, and checked for authenticity, at the UEFI level? When? At boot time over the network (spoofable)? Read from disk (corruptible)? Signed? With the same key you are revoking? With an escrow key that can be used to revoke any key? How do you load a new key? If all keys are revoked, what happens?
Basically, in security terms, you are revoking someone's access to the computer based on an externally supplied string (coming from the network, no less!). This is as tainted as it gets.
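The bootstrapping problem raised by those questions can be made concrete. In this toy model, revocation-database updates must themselves be verified against some higher key (an HMAC stands in for a real signature scheme; the key names are hypothetical). Rejecting unauthenticated updates blocks a network attacker, but whoever holds the verifying key can revoke anything:

```python
import hashlib
import hmac

# Hypothetical escrow key: whoever holds it can revoke any entry,
# including the owner's own loader.
escrow_key = b"platform-escrow-key"

def sign_update(entries: list, key: bytes) -> bytes:
    """Produce a MAC over a proposed list of revocation entries
    (a stand-in for a real signature on a dbx update)."""
    payload = "\n".join(entries).encode()
    return hmac.new(key, payload, hashlib.sha256).digest()

def apply_update(dbx: set, entries: list, sig: bytes, key: bytes) -> bool:
    """Accept the update only if its MAC verifies against `key`."""
    payload = "\n".join(entries).encode()
    expected = hmac.new(key, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(sig, expected):
        return False    # unauthenticated update from disk/network: rejected
    dbx.update(entries)
    return True

dbx = set()
update = ["sha256-of-vulnerable-loader"]
good_sig = sign_update(update, escrow_key)
assert apply_update(dbx, update, good_sig, escrow_key)

# A network attacker cannot forge an update without the key...
assert not apply_update(dbx, ["sha256-of-linux-loader"], b"\x00" * 32, escrow_key)
# ...but the escrow-key holder can push any revocation it likes,
# which is exactly the lock-out power described above.
```

Note that the model says nothing about how `escrow_key` itself is distributed, rotated, or revoked, which is the circularity the questions above point at.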
But it only gets worse: the platforms (Linux/Windows 8) and their kernels are not security-proven, and are not even security-oriented. They present lots of opportunities for attack, and they will continue to do so for a long time. It is virtually impossible to plug every single hole in them, and it is impossible to revoke the signing key at every single vulnerability, because if you do that (1) your revoked-keys database will be huge, (2) you will force updates down the throats of the users, effectively locking them out of their computers, and (3) you open up new attack opportunities in the infrastructure outside the trust chain.
> And yes obviously there's the risk of implementation flaws. The cryptographic model in use is believed to be sound, and we assume that Microsoft have learned from their mistakes with the Terminal Server key. Is that a guarantee? No. But then SSL isn't guaranteed to be safe either, and people still rely on that. Proving security is hard.
The problem is, I am not even factoring in implementation flaws above. Just ordinary vulnerabilities (buffer overflows, integer overflows, unexpected finite-state transitions, etc.) and ordinary attacks OUTSIDE the "chain of trust" implementation (key distribution for reloading and revoking, etc).
And we still have to factor in cryptographic failures that are sometimes exploitable (hash/signature collisions were used in Flame, IIRC)...