NVIDIA to provide documentation for Nouveau
Posted Sep 25, 2013 7:23 UTC (Wed) by paulj (subscriber, #341)
Unless you mean that, ultimately, Secure Boot is intended to prevent users from running arbitrary code on the hardware, allowing only code from approved distributors to be run?
Posted Sep 25, 2013 9:20 UTC (Wed) by lsl (subscriber, #86508)
The kernel can't reasonably perform such checks before letting a user-space driver access the graphics hardware (and with it, the entire system).
Posted Sep 25, 2013 12:13 UTC (Wed) by farnz (guest, #17727)
It all depends on the bootloader you're using, though. I can easily imagine that Red Hat and SuSE enterprise customers plan to use bootloaders that check the signature of the next thing they load, so that there's a chain of blame all the way from power-on to running kernel. That lets them trace the sign-off on any exploitable code (and thus work out whether there's something they could do differently to avoid an exploit of their employees' CAD workstations or whatever).
I'd expect such customers to follow James Bottomley's guide to replacing all keys trusted by the firmware with ones under the customer's control, so it wouldn't matter that you can get the LF bootloader that's been signed by Microsoft's key, as that key is not trusted by the firmware.
Posted Sep 25, 2013 15:18 UTC (Wed) by PaXTeam (subscriber, #24616)
Posted Sep 26, 2013 18:48 UTC (Thu) by farnz (guest, #17727)
That's why I referred to a chain of blame, not a chain of trust - there's a good chance that the firmware isn't secure enough to be trustworthy. If that's the case (e.g. because it executes code signed by a Microsoft key when you've replaced that with your own key), the chain of blame ends at the firmware, and hence at the manufacturer.
Posted Sep 26, 2013 19:45 UTC (Thu) by PaXTeam (subscriber, #24616)
Posted Sep 26, 2013 20:06 UTC (Thu) by mjg59 (subscriber, #23239)
Posted Sep 26, 2013 21:35 UTC (Thu) by PaXTeam (subscriber, #24616)
Posted Sep 26, 2013 21:42 UTC (Thu) by mjg59 (subscriber, #23239)
Posted Sep 29, 2013 0:42 UTC (Sun) by PaXTeam (subscriber, #24616)
also you keep saying what the removal of the MS key means, but you don't explain why the UEFI code that accomplishes this can be so much more trusted than what the MS key will sign.
Posted Sep 29, 2013 5:03 UTC (Sun) by mjg59 (subscriber, #23239)
Secure Boot is the same. It's possible that the UEFI code *is* hostile to us. Secure Boot is irrelevant here - hostile firmware can modify our OS even without Secure Boot being involved. But the firmware is produced by one of a range of different firmware vendors, and has in turn been modified by our system vendor. The probability that the NSA (or any other state agency) has a backdoor that exists in every firmware implementation is slim - there are too many people with access to the source code (including random board vendors in Taiwan) to be able to guarantee secrecy.
However, the security of the firmware is irrelevant if you don't believe that Microsoft's key can be trusted. If your firmware trusts Microsoft's key then the NSA (or some other hostile body with access to Microsoft's key) can sign a bootloader that your firmware will trust and then use that to compromise the rest of your OS. Removing Microsoft's key removes that avenue of attack. Someone who wants to compromise your system can no longer simply go to Microsoft. They instead have to identify the specific firmware that you're running, locate a back door or vulnerability and then deliver an attack that's specifically tailored to you. Removing Microsoft's key doesn't make it impossible for someone to attack your boot process, but it does make it harder. That seems like an improvement in security.
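The effect being argued for here can be sketched as a toy Python model. Everything below is hypothetical: an HMAC stands in for a real Authenticode signature, and a plain dict stands in for the firmware's signature database; none of this reflects actual UEFI interfaces.

```python
import hashlib
import hmac

def sign(key: bytes, image: bytes) -> bytes:
    # Toy "signature": an HMAC stands in for a real Authenticode signature.
    return hmac.new(key, image, hashlib.sha256).digest()

def firmware_verifies(db: dict, image: bytes, signer: str, sig: bytes) -> bool:
    # Simplified Secure Boot check: boot only if the image verifies
    # against a key still enrolled in the firmware's 'db'.
    key = db.get(signer)
    return key is not None and hmac.compare_digest(sign(key, image), sig)

ms_key, fedora_key = b"microsoft-uefi-ca", b"fedora-ca"
db = {"microsoft": ms_key, "fedora": fedora_key}

shim = b"fedora shim image"
grub = b"some grub signed only by microsoft"

# With Microsoft's key enrolled, anything it signed will boot.
assert firmware_verifies(db, grub, "microsoft", sign(ms_key, grub))

# Removing the key closes that entire signing avenue...
del db["microsoft"]
assert not firmware_verifies(db, grub, "microsoft", sign(ms_key, grub))
# ...while images signed by a key you still trust keep booting.
assert firmware_verifies(db, shim, "fedora", sign(fedora_key, shim))
```

The point of the model: an attacker with access to Microsoft's key can sign anything, so deleting the key removes that avenue in one step, without needing to enumerate every binary it ever signed.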
The source code to AMI's Secure Boot implementation is out in the wild, thanks to Jetway leaving it on an insecure FTP site. I'm sure someone could do a reasonable audit.
Posted Oct 1, 2013 13:07 UTC (Tue) by PaXTeam (subscriber, #24616)
2. Secure Boot (and its key management in particular) was proposed here to solve the problem of not booting untrusted code, i.e., the entire trust was placed in these keys, all the while ignoring the fact that the UEFI code is in the same 'trust domain'. i'm glad you actually agree with me that separating the two is nonsensical.
3. your statement about the NSA's capabilities requires justification. some food for thought: backdooring doesn't require source code (and especially not permanent source changes). backdooring doesn't require the cooperation of firmware producers (also the food chain is longer than that). backdooring can be targeted. etc.
4. the security of the firmware becomes relevant *exactly* when it comes to key management. how else would you know whether the firmware does as asked (removes the MS key for good) or just pretends to do so but still accepts some blob containing some secret magic signed by the supposedly revoked key? one more time: you *cannot* separate the trust in UEFI code from the trust in the keys it manages. so any 'improvement in security' based on removing the MS key is just a feeling at best, not actual security.
5. availability of some source code on the net has exactly zero relevance to the code running as your machine's UEFI firmware. you can audit it all you want, it won't give you all the backdoors that could exist in the actual binary stored in flash (and to be honest, it'd be rather dumb to risk exposing anything of this sort in this kind of source code anyway).
Posted Oct 1, 2013 15:26 UTC (Tue) by mjg59 (subscriber, #23239)
2. Secure Boot places the root of trust in the firmware, so yes, if you can't trust the firmware then you can't trust anything above that. But like I said, that's true even without Secure Boot.
3. There are at least 4 common firmware implementations that were independently developed. They're built with different compilers. This code is distributed to a much larger number of board vendors, each of whom then rebuilds it with their own choice of compiler.
Could a security agency compromise all of these? It's theoretically possible, but it doesn't seem like the easiest avenue of attack. The number of firmware implementations is larger than the number of operating systems that run on top of them - backdoor Windows and Linux and you have the same benefits for much less effort. That's not to say that firmware is secure and trustworthy, or that individuals won't be targeted, just that Secure Boot probably isn't the easiest avenue of attack.
4. Why would you do it that way? It'd be far too easy for a user to verify - remove the Microsoft key, check whether Windows boots. As you suggest, the obvious thing to do would be to have some additional embedded key that's checked regardless of whether or not the Microsoft key is present. So sure, removing the Microsoft key doesn't secure you against a security agency who's managed to compromise your firmware. But it *does* protect you against attacks where someone's found an exploit in something that was signed by Microsoft. Reducing your attack surface is an improvement in security.
5. So someone should just rebuild that source code and check whether it matches the binaries that Jetway ship. Matching obviously doesn't guarantee the absence of a backdoor, but a failure to do so would be a pretty strong indication that something's up.
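The rebuild-and-compare check suggested here boils down to hashing the two binaries. A minimal sketch (file paths hypothetical, and assuming a bit-for-bit reproducible build environment):

```python
import hashlib
from pathlib import Path

def sha256_of(path: str) -> str:
    # Flash images are small enough to hash in one read.
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def build_matches_shipped(rebuilt: str, shipped: str) -> bool:
    # A match doesn't prove the absence of a backdoor (it could live in
    # the source itself), but a mismatch is strong evidence the shipped
    # binary was not produced from the published source as-is.
    return sha256_of(rebuilt) == sha256_of(shipped)
```

In practice the hard part is reproducing the vendor's exact toolchain and build flags, which is the objection raised in the reply below.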
Posted Oct 1, 2013 21:14 UTC (Tue) by PaXTeam (subscriber, #24616)
second, if you have the ability to force arbitrary microcode updates on your target then you're already root there for all intents and purposes, so you have easier ways to backdoor their system.
2. but according to local wisdom here Secure Boot is supposed to give us, well, a secure boot process if only we used our own keys. turns out we're no better off than trusting trust once again.
3. have you seen the Snowden leaks? do you know how many companies got compromised and/or compelled into cooperation by the NSA and other agencies? so yes, i'm not exactly impressed by '4 common firmware implementations', it's a very *small* industry actually (and what does building with different compilers matter? nothing?). in fact, if i were the NSA, i would have my men planted there long ago and the 'how do i compromise them' question would simply become 'when and what do you want me to do?'. as for which is more numerous, i bet there're more different vmlinux and heck, even ntoskrnl images out there than UEFI firmware updates.
4. that's not how it'd work, obviously a Windows boot despite (supposedly) not having the MS key would be a dead giveaway. however it's possible to still accept code signed by the (supposedly removed) MS key if there's an additional condition - no need for an extra key at all, just some embedded secret that only the backdoor owner would know (and whoever else reverse engineers it of course). in fact MS would not even have to be complicit here, they'd just sign such an image in good faith without being aware of the secret payload that'd trigger the 'accept this despite being signed by the removed MS key' logic in the backdoored UEFI firmware.
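The backdoor described here, where a supposedly removed key is still honoured when a secret marker is present, can be sketched as a toy model. All names are hypothetical and an HMAC again stands in for a real signature; this is an illustration of the logic, not real firmware code.

```python
import hashlib
import hmac

# Hypothetical secret known only to the backdoor's owner.
SECRET_TRIGGER = hashlib.sha256(b"hypothetical vendor magic").digest()

def toy_sig(key: bytes, image: bytes) -> bytes:
    return hmac.new(key, image, hashlib.sha256).digest()

def backdoored_verify(db, image, signer, sig, removed_keys):
    # Honest path: the signer's key is still enrolled in 'db'.
    key = db.get(signer)
    if key is not None and hmac.compare_digest(toy_sig(key, image), sig):
        return True
    # Backdoor path: a key the user "removed" is still honoured, but only
    # when the image carries the secret marker -- so the obvious check
    # ("does a Microsoft-signed image still boot?") never trips it.
    key = removed_keys.get(signer)
    return (key is not None
            and SECRET_TRIGGER in image
            and hmac.compare_digest(toy_sig(key, image), sig))

ms_key = b"microsoft-uefi-ca"
db = {"fedora": b"fedora-ca"}       # the user kept only their own key
removed = {"microsoft": ms_key}     # ...but the firmware remembers MS's

# An ordinary MS-signed image is rejected, so the removal looks effective.
benign = b"ordinary signed bootloader"
assert not backdoored_verify(db, benign, "microsoft",
                             toy_sig(ms_key, benign), removed)

# An image carrying the trigger sails through despite the "removed" key.
payload = b"loader" + SECRET_TRIGGER
assert backdoored_verify(db, payload, "microsoft",
                         toy_sig(ms_key, payload), removed)
```

Note that in this model Microsoft signs the triggered image in good faith, exactly as the comment describes; the malice lives entirely in the firmware's verification path.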
as for 'an exploit in something that was signed by MS' you probably didn't mean that, but rather an exploitable bug, as i find it unlikely that an actual exploit embedded in the to-be-signed code would pass their processes, whereas bugs (exploitable or not) slip by all the time. with this understanding it seems now that the worry about the MS key is not that someone abuses it to sign something bad (which is what i was going on about before) but that otherwise well-meaning code gets signed and then exploited due to its bugs.

this is a legitimate concern and removing the MS key would indeed help here... except this problem applies equally well to any other key, and considering where MS stands with its SDLC and other processes in the industry (read: mostly above everyone else), i think the users are worse off trusting anybody else's keys, key signing processes and software development capabilities than those of MS. perhaps a sad piece of truth for free software but as far as i'm concerned, this is the reality. so if the advice here is still that users should become their own CA and use their own keys for Secure Boot then my bet is that the likes of the NSA (and even less resourceful actors) will still have a field day owning their systems.
5. just a few points above you were bringing up different compilers as one of the reasons why universal backdooring would be so much harder for powerful and skilled actors (it isn't, but that's not the point here) and now you're suggesting that much less resourceful end users should try to gain confidence in their firmware by trying to second guess the exact toolchain and build environment their vendor used. sorry, this doesn't add up to a good argument ;).
Posted Oct 1, 2013 22:23 UTC (Tue) by raven667 (subscriber, #5198)
Posted Oct 2, 2013 18:28 UTC (Wed) by mjg59 (subscriber, #23239)
But Secure Boot isn't about protecting us from the firmware. It never has been. It's about limiting the set of objects that your firmware will run. Now obviously if a sufficiently powerful actor has leaned on your firmware vendor then they may be able to run arbitrary code on your firmware, but why bother? They could just have the firmware include some SMM code that'd trigger in specific circumstances and modify arbitrary addresses in your running OS.
Obviously Secure Boot does nothing to protect you against such actors, but that doesn't mean it adds nothing to security. Microsoft have signed literally hundreds of binaries. Fedora have signed significantly fewer than that, and all the ones signed by Fedora have also been signed by Microsoft. Removing the Microsoft key and only trusting the Fedora one clearly improves security, if only because you'll no longer be able to boot the Ubuntu grub that'll happily boot unsigned kernels. Perhaps you weren't aware that Microsoft is effectively the global signing authority for UEFI binaries?
Posted Sep 25, 2013 16:05 UTC (Wed) by mjg59 (subscriber, #23239)
Posted Sep 25, 2013 23:05 UTC (Wed) by dlang (✭ supporter ✭, #313)
Posted Sep 25, 2013 23:07 UTC (Wed) by mjg59 (subscriber, #23239)
Posted Sep 26, 2013 2:32 UTC (Thu) by dlang (✭ supporter ✭, #313)
not all systems that are deployed as servers have IPMI
trading a system that requires root access to do things for a system that puts its console on the network exposed to attackers (especially where that console is 'secured' by vendor proprietary code) doesn't seem like a win to me.
Posted Sep 26, 2013 15:06 UTC (Thu) by tialaramex (subscriber, #21167)
Sure, it's totally acceptable to choose no lights out management if you have 24/7 hands-on. The 24/7 hands-on people are physically present and meet that constraint.
The practice of calling a cheap desktop PC in a closet a "server" has plenty of other problems long before you get to remote management.
We have drifted far off topic.
Posted Sep 26, 2013 15:13 UTC (Thu) by mjg59 (subscriber, #23239)
Posted Sep 26, 2013 22:48 UTC (Thu) by jmorris42 (guest, #2203)
Nope. Physical presence does not equal ownership. We circulate laptops as library material. Do they get to do whatever they want? Oh heck no. And if the security tape over the screws is tampered with we fine em to cover our time reauditing the system.
How about a lab computer for library patrons? They are sitting at the console, so do they get to install a new OS? Not on my systems they ain't, at least not without a screwdriver and some way to distract the staff.
Posted Sep 27, 2013 4:43 UTC (Fri) by mjg59 (subscriber, #23239)
Copyright © 2013, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds