No signed kernel, just a signed boot loader
Posted Jun 25, 2012 3:46 UTC (Mon) by hummassa (subscriber, #307)
Someone else in this thread mentioned that security is harder for the defender than for the attacker, because the defender has to plug every hole while the attacker need only find one. I would add: the defender has to plug every hole AND still be able to get inside the system. Yes, if you add 10-foot-thick concrete walls around your house without doors or windows, no burglar will ever enter it... but neither will you.
Posted Jun 25, 2012 4:10 UTC (Mon) by mjg59 (subscriber, #23239)
So that's the why - breaking the trust barrier results in revocation. The how is a little more difficult. There's two ways you can revoke binaries. The first is to add an update of the specific SHA256. That makes sense in many cases, but probably isn't the best choice here - any older kernels are presumably also compromisable. So instead you just revoke the individual signing key and sign your kernels with a new one. How will that scale? Great question. We'll find out.
And yes obviously there's the risk of implementation flaws. The cryptographic model in use is believed to be sound, and we assume that Microsoft have learned from their mistakes with the Terminal Server key. Is that a guarantee? No. But then SSL isn't guaranteed to be safe either, and people still rely on that. Proving security is hard.
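The two revocation paths described above can be sketched roughly as follows. This is a hedged illustration only: the names (`dbx_hashes`, `revoked_key_ids`, the key identifier) are invented for the sketch, not the real UEFI data structures, and real firmware checks X.509 signatures rather than bare key IDs.

```python
import hashlib

# dbx-style blacklist of individual binaries, vs. revoking a whole key.
dbx_hashes = set()          # SHA-256 digests of specifically banned binaries
revoked_key_ids = set()     # signing keys that are no longer trusted at all

def boot_allowed(binary: bytes, signing_key_id: str) -> bool:
    """A binary boots only if neither its hash nor its signing key is revoked."""
    if hashlib.sha256(binary).hexdigest() in dbx_hashes:
        return False
    return signing_key_id not in revoked_key_ids

old_kernel = b"vulnerable kernel image"
# Path 1: blacklist one specific build by hash...
dbx_hashes.add(hashlib.sha256(old_kernel).hexdigest())
# Path 2: ...or revoke the key, which also invalidates every other build
# signed with it -- the scaling question raised above.
revoked_key_ids.add("distro-signing-key-2012")
```

The trade-off is visible in the sketch: the hash path needs one dbx entry per compromised build, while the key path kills everything signed with that key in one stroke, including binaries that were never a problem.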
Posted Jun 25, 2012 12:28 UTC (Mon) by hummassa (subscriber, #307)
> The idea of secure boot is to ensure that it's as difficult as possible to run untrusted code before you've set up your desired trust chain. If you can launch untrusted code within the Linux kernel then it's a simple matter to use that as an attack vector for Windows - rather than writing your malware from scratch, you drop a copy of the exploitable kernel in the EFI system partition, provide a trivial initramfs that gets you into the kernel, pass in the NT kernel and appropriate drivers, set up the loader parameter block, set up some new page tables and jump into the Windows kernel. Of course, rather than just booting it you've taken the opportunity to compromise it in some subtle way. Windows boots slightly more slowly than usual, but there's no reason for most users to notice - and worse, the malware checking code that would normally be able to rely on the kernel not having to be compromised is now unable to do anything useful.
> So that's the why - breaking the trust barrier results in revocation.
That is the first problem with secure boot: KNOWINGLY breaking the trust barrier results in revocation. Worse yet: a break of the trust barrier that becomes known to anyone in the trust chain results in revocation. This is preposterous, because it causes revocation against the will of the computer owner. So, I have to agree with jiu and eduperez and say that "secure boot was devised to distract the energy of people building linux to hinder their progress" and "Secure boot was created to lock users out of their own computers".
Those two problems amount to this: if someone in a high position in the "trust chain" wants to lock you out of your own computer because "fsck you, that's why", they can. More: they cannot lock malware out of your computer, because (1) it is difficult or impossible to know what is malware after all, and (2) lots of malware will stay hidden, and the vulnerabilities that lead to infection and pwning will not generate key revocations because the people in the trust chain do not know about them. Again, Stuxnet and Flame are good examples of "hidden for about two years", which is plenty of time for doing plenty of sabotage and espionage.
And then the problems in the how:
> The how is a little more difficult. There's two ways you can revoke binaries. The first is to add an update of the specific SHA256. That makes sense in many cases, but probably isn't the best choice here - any older kernels are presumably also compromisable. So instead you just revoke the individual signing key and sign your kernels with a new one. How will that scale? Great question. We'll find out.
It does not scale at all.
I have stated above that the why rests on a flawed premise. But the how is a nightmare. To revoke a key, you have to write that key (or that SHA256, or whatever) to a database that will be read at the UEFI level. THAT, per se, is a huge vulnerability. Who writes that database? How is that database read (and checked for authenticity) at the UEFI level? When? At boot time via the network (spoofable)? Read from disk (corruptible)? Signed? With the same key you are revoking? With an escrow key that can be used to revoke any key? How do you load a new key? If all keys are revoked, what happens?
Basically, in security terms, you are revoking access to the computer for someone based on an externally-entered (coming from the network, no less!) string. This is as tainted as it gets.
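The objection above, in code: a revocation update is just a string arriving from outside, so before the firmware applies it, it must authenticate it against a key it already trusts. This is a minimal sketch with invented names; an HMAC stands in for the asymmetric signature a real firmware would verify, and `FIRMWARE_TRUSTED_KEY` plays the role of a key baked in at manufacture.

```python
import hashlib
import hmac
import json

FIRMWARE_TRUSTED_KEY = b"baked-in-at-manufacture"   # analogous to a platform key

def sign_update(update: dict, key: bytes) -> bytes:
    """Producer side: authenticate the serialized revocation update."""
    payload = json.dumps(update, sort_keys=True).encode()
    return hmac.new(key, payload, hashlib.sha256).digest()

def apply_update(update: dict, signature: bytes, dbx: set) -> bool:
    """Firmware side: reject any update that fails authentication."""
    payload = json.dumps(update, sort_keys=True).encode()
    expected = hmac.new(FIRMWARE_TRUSTED_KEY, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(signature, expected):
        return False        # spoofed or corrupted update: ignore it
    dbx.update(update["revoked_hashes"])
    return True
```

Note that this only pushes the problem one level up, which is exactly the point of the questions above: the update is only as trustworthy as the key that signed it, and revoking *that* key needs yet another trusted key.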
But it only gets worse: the platforms (Linux/Windows 8) and their kernels are not proven secure and are not even security-oriented. They present lots of opportunities for attack and will continue to do so for a long time. It is virtually impossible to plug every single hole in them, and it's impossible to revoke the key that signs them at every single vulnerability, because if you do that (1) your revoked-keys database will be huge, (2) you will force updates down the throats of the users, effectively locking them out of their computers, and (3) you will open up new opportunities for attacks coming from the infrastructure outside the trust chain.
> And yes obviously there's the risk of implementation flaws. The cryptographic model in use is believed to be sound, and we assume that Microsoft have learned from their mistakes with the Terminal Server key. Is that a guarantee? No. But then SSL isn't guaranteed to be safe either, and people still rely on that. Proving security is hard.
The problem is, I am not even factoring in implementation flaws above. Just normal vulnerabilities (buffer overflows, integer overflows, unexpected finite-state transitions, etc.) and normal attacks OUTSIDE the "chain of trust" implementations (key distribution for reloading and revoking, etc.).
And we still have to factor in cryptographic security failures that are sometimes exploitable (hash/signature collisions were used in Flame, IIRC)...
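Why a hash collision breaks a signature chain: the signature covers the digest, not the binary, so any second input with the same digest inherits the signature. The sketch below makes collisions findable in milliseconds by deliberately truncating SHA-256 to two bytes; real attacks, such as the MD5 chosen-prefix collision reportedly used by Flame, need enormously more work but exploit the same principle. `weak_digest` and the search loop are purely illustrative.

```python
import hashlib
import itertools

def weak_digest(data: bytes) -> bytes:
    # Deliberately truncated to 16 bits so a collision is cheap to find.
    return hashlib.sha256(data).digest()[:2]

target = b"benign signed binary v1"
target_d = weak_digest(target)

# Brute-force a different input with the same (weak) digest: a signature
# issued over target's digest would then also validate this forgery.
for i in itertools.count():
    candidate = b"malicious payload %d" % i
    if candidate != target and weak_digest(candidate) == target_d:
        break
```

With a 16-bit digest this loop terminates after roughly 65,536 attempts on average, which is the whole argument for using full-length, unbroken hash functions in any signing scheme.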
Posted Jun 25, 2012 15:49 UTC (Mon) by raven667 (subscriber, #5198)
This is BS because the owner can create their own keys, disable secure boot, and so on, so they can't be shut out of their machine against their will; that is not a real thing. Now if you start talking about boot-locked machines, that might be a different story, but we definitely are not talking about that; even MS wants to make sure x86 hosts aren't boot locked (only their ARM tablets are, like many Android devices).
In any event this is a feature that can be used for the good of the system by the system owner and that is what Linux will use it for.
Posted Jun 25, 2012 15:12 UTC (Mon) by pboddie (subscriber, #50784)
Obviously, the whole thing is yet another competition-busting scheme with just enough spin to confuse the regulators and make neutral observers give it the benefit of the doubt (that it is "probably good for security" or the laughable claim that "Microsoft isn't running this show"), but these things usually meet their demise when people end up being sold "faulty" products.
Posted Jun 25, 2012 16:49 UTC (Mon) by micka (subscriber, #38720)
Of course, what applies to common people may not apply to Microsoft or Canonical.
Posted Jun 25, 2012 17:01 UTC (Mon) by gioele (subscriber, #61675)
I suppose that a lot of other things must happen at the same time before you jail someone. Otherwise every borked update to Debian sid or Rawhide would send quite a few people to prison.
Posted Jun 25, 2012 18:26 UTC (Mon) by pboddie (subscriber, #50784)
But as I noted, such schemes are mostly used to erode the rights of users in the name of something else so that people don't question such schemes until after they have been introduced, and thus any argument about how a dominant vendor has managed to obliterate the competition can be waved away many years after the fact with excuses like "it's what the market expected" and "nobody demanded anything else".
Posted Jun 25, 2012 18:39 UTC (Mon) by raven667 (subscriber, #5198)
That is hyperbole and is not what secure boot on x86 does. It'd be great if we could stick to a discussion based on the facts.
Posted Jun 25, 2012 21:28 UTC (Mon) by pboddie (subscriber, #50784)
And I'm sure it's not beyond the skills of the vendors to make installing one's own keys a near impossibility, then claim it was an accident for as long as it takes, and finally claim that the product is no longer supported.
So in practical terms, it is all about control. We can discuss technical workarounds as much as we like and deny that the technology imposes any particular restrictions, but the combination of one company's continuous strategy of pushing the regulatory envelope and that technology results in a shoring up of that company's position.
Why else are the distributions jumping through hoops? Because they like a challenge? The practical effect of the misuse of such a technology is as much a fact as any aspect of the "it's OK - I can still boot my kernel" technical discussion.
Posted Jun 25, 2012 21:40 UTC (Mon) by raven667 (subscriber, #5198)
So we agree on the substance of the matter. I can't comment on the rest of your post because I can't find any facts or point, just a lot of rhetorical flailing about.
Posted Jun 26, 2012 18:10 UTC (Tue) by marcH (subscriber, #57642)
> > you can sign your own payloads and install your own keys
> So we agree on the substance of the matter. I can't comment on the rest of your post because I can't find any facts or point, just a lot of rhetorical flailing about.
Too bad things are not that obvious to Fedora and Canonical. They should have hired you and saved a lot of effort.
Posted Jun 26, 2012 19:14 UTC (Tue) by raven667 (subscriber, #5198)
Foolish on my part I suppose. http://xkcd.com/386/
Posted Jun 25, 2012 18:56 UTC (Mon) by jspaleta (subscriber, #50639)
Secureboot as a concept is not a bad thing. The policy surrounding how to enable secureboot for consumer devices needs some iteration, however. There is absolutely nothing wrong with an off-by-default secureboot, even with the current specification and limitations. On-by-default has some definite challenges, and MS's certification process requirements bring these challenges directly to the forefront of the discussion.
Even with an on-by-default scheme, if users can disable secureboot to regain access to a system that has been impacted by a key revocation I really don't see a fundamental problem. As long as users are not locked out of the firmware config screens to disable secureboot on the hardware they purchased, a 3rd party revocation process is best described as a very stringent notification about a potential system compromise. If users can disable secureboot they do not lose access to their systems even after a key that their current configuration requires has been revoked.
In fact I'd wager that once the security benefit is digested more widely, large institutions like the US Department of Defense, the State Department, and even municipal power companies will be making heavy use of secureboot with their own signing keys on a lot of critical infrastructure, and even on desktops and laptops... so they don't have to implicitly trust any vendor (including Microsoft). They'll use the firmware reconfiguration to the fullest: load their own keys on their hardware, self-sign binaries, and control the revocation process from end to end.
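That end-to-end flow (enroll your own public key, self-sign binaries, verify at boot) can be sketched as below. This uses textbook RSA with small, well-known primes and no padding, so it is insecure by construction and illustrative only; every name here is invented, and a real deployment would enroll an X.509 certificate and sign with tooling such as sbsign.

```python
import hashlib

# Toy RSA key the institution generates and keeps to itself.
# 104729 and 1299709 are known small primes -- fine for a sketch,
# hopeless for real security.
p, q = 104729, 1299709
n, e = p * q, 65537
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent (Python 3.8+ modular inverse)

def digest_int(binary: bytes) -> int:
    """Hash the binary and reduce into the RSA modulus."""
    return int.from_bytes(hashlib.sha256(binary).digest(), "big") % n

def sign(binary: bytes) -> int:
    """Institution side: sign the digest with the private exponent."""
    return pow(digest_int(binary), d, n)

def firmware_verify(binary: bytes, sig: int, pub=(n, e)) -> bool:
    """Firmware side: only the enrolled public key (n, e) is needed."""
    return pow(sig, pub[1], pub[0]) == digest_int(binary)

kernel = b"site-built kernel image"
sig = sign(kernel)
```

The point of the sketch is the trust relationship: the firmware holds only the institution's public key, so nothing signed by any outside vendor, Microsoft included, verifies unless the institution chooses to enroll that vendor's key too.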
Posted Jun 25, 2012 19:13 UTC (Mon) by raven667 (subscriber, #5198)
I'm not sure that's true. You can have many vendor and user keys loaded into the firmware, but getting your key pre-loaded would require some relationship with the vendor, so your hardware coverage is likely to be less than 100%; whereas all the vendors want to be able to run MS, so that key is virtually guaranteed to be loaded by default.
Actual binaries can be signed by only one key, though, so the need to boot everywhere while keeping down the number of boot media spins forces you to choose which key you are going to use to sign your initial boot loader, and the MS key wins on convenience there.
> Even with an on-by-default scheme, if users can disable secureboot to regain access to a system that has been impacted by a key revocation I really don't see a fundamental problem
Which is exactly the case now for x86. Win8 ARM hosts are boot locked, but that's its own separate issue at this time; I don't think any Linux vendor is going to fool around with them. Just don't buy them and expect to run anything else on them (not much different than the rest of the ARM market anyway).
> companies will be making heavy use of secureboot with their own signing keys
That's probably something they will want to do, but it depends on how easy it is to sign or re-sign boot binaries. Is it possible to re-sign the Windows 8 boot loader, for example, and have the system still work? Certainly this will be doable, maybe even common, with Linux systems.
Posted Jun 26, 2012 7:27 UTC (Tue) by ssmith32 (subscriber, #72404)
Unfortunately, regardless of the "should", there are plenty of examples to the contrary.
Whatever the original intent, in reality UEFI's largest impact so far has been to impose a significant cost on open-source software, with the to-be-determined security benefits still vaporware.
Posted Jun 26, 2012 7:41 UTC (Tue) by micka (subscriber, #38720)
> Of course, what applies to common people may not apply to Microsoft or Canonical.
or Sony. The "Microsoft or Canonical" was not meant to be exhaustive.
Of course I'm exaggerating. Even the individual cracker who bricks _one_ computer would not go to jail... the first time, unless they go after a police or big corporate computer.
The second time would be very different, though.
Copyright © 2013, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds