
The return of the lockdown patches

By Jake Edge
April 3, 2019

It's been a year since we looked in on the kernel lockdown patches; that's because things have been fairly quiet on that front since there was a loud and discordant dispute about them back then. But Matthew Garrett has been posting new versions over the last two months; it would seem that the changes that have been made might be enough to tamp down the flames and, perhaps, even allow them to be merged into the mainline.

The idea behind kernel lockdown is to supplement secure boot mechanisms to limit the ability of the root user to cause unverified, potentially malicious code to be run. The most obvious way to do that is to use the kexec subsystem to run a new kernel that has not been vetted by the secure boot machinery, but there are lots of other ways that root can circumvent the intent of (some) secure boot users. While the support for UEFI secure boot has been in the kernel for years, providing a way to restrict the root user after that point has always run aground.

A renewed push

The latest round began with a pull request from Garrett at the end of February. He noted that he had taken over shepherding the patch set from David Howells, who is "low on cycles at the moment". There were just a few changes from the previous version that caused the ruckus a year ago.

The main change was to remove the tie-in between secure boot and lockdown mode. The main complaint that Linus Torvalds and Andy Lutomirski had a year ago was about that linkage; they felt that it was unreasonable to force those using secure boot into having a locked-down kernel—and vice versa. At a minimum, kernel developers might well want the flexibility to have one without the other. Changing the fundamental behavior of the kernel based on a BIOS setting that might not be under the control of the user was also seen as highly problematic.

Beyond that big ticket item, there were two other changes. A CONFIG_KERNEL_LOCK_DOWN_FORCE option was added that will build a kernel that always enforces lockdown. Integration with the Integrity Measurement Architecture (IMA) was also dropped, though IMA maintainer Mimi Zohar questioned that plan. There were enough comments that needed addressing to cause Garrett to send a second pull request to security maintainer James Morris in early March.

Zohar was still unhappy with the (lack of) IMA integration, however. Garrett worked on a solution to that, which showed up as a patch in a third pull request on March 25. The patch will use the IMA architecture-specific mechanism to verify a kernel image before allowing it to be booted via kexec:

Systems in lockdown mode should block the kexec of untrusted kernels. For x86 and ARM we can ensure that a kernel is trustworthy by validating a PE [Portable Executable] signature, but this isn't possible on other architectures. On those platforms we can use IMA digital signatures instead.

A patch that disables the use of the bpf() system call in locked-down kernels was also discussed. Some BPF functions can read and write kernel memory, which would allow BPF programs to extract private keys (e.g. the hibernation-image signing key) or to alter kernel data structures, so the patch simply disabled bpf() entirely. But, given the ever-increasing use of BPF in the kernel, some saw that as a draconian restriction. Jordan Glover pointed out that disabling the system call would break some systemd functionality, making locked-down systems less secure.

Disabling BPF was one of the problems that Lutomirski saw with Garrett's approach to decoupling secure boot and lockdown mode. In particular, Lutomirski wanted to see three possible states for lockdown:

Lockdown mode becomes three states, not a boolean. The states are: no lockdown, best-effort-to-protect-kernel-integrity, and best-effort-to-protect-kernel-secrecy-and-integrity. And this BPF mess illustrates why: most users will really strongly object to turning off BPF when they actually just want to protect kernel integrity. And as far as I know, things like Secure Boot policy will mostly care about integrity, not secrecy, and tracing and such should work on a normal locked-down kernel. So I think we need this knob.

The code for disabling direct model-specific register (MSR) writes on x86 systems was also questioned. Writing to MSRs can "lead to execution of arbitrary code in kernel mode", which is why it should be disabled for locked-down kernels. At the behest of Alan Cox, log messages were added to someday facilitate a whitelist of allowed MSR writes, but Thomas Gleixner was not a fan:

Maintaining a whitelist for this is a horrible idea as you will get a gazillion of excuses why access to a particular MSR is sane. And I'm neither interested in these discussions nor interested in adding the whitelist to this trainwreck.

Gleixner would much rather see direct MSR access via /dev/cpu/*/msr go away entirely: "The right thing to do is to provide sane interfaces and that's where we are moving to."
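To make the concern concrete, here is a sketch of how userspace reads an MSR through the msr driver: the MSR address is used as the file offset into /dev/cpu/N/msr, and eight little-endian bytes come back. The helper names are illustrative, not from any patch in the series; actually opening the device requires root and the msr module, and a locked-down kernel would refuse the open or the write.

```python
import struct

def decode_msr(raw: bytes) -> int:
    """Decode the 8 little-endian bytes a pread() on /dev/cpu/N/msr returns."""
    return struct.unpack("<Q", raw)[0]

def read_msr(cpu: int, msr: int) -> int:
    """Read an MSR via the msr driver; the MSR address is the file offset.
    Needs root and the msr module loaded; a locked-down kernel denies access."""
    import os
    fd = os.open(f"/dev/cpu/{cpu}/msr", os.O_RDONLY)
    try:
        return decode_msr(os.pread(fd, 8, msr))
    finally:
        os.close(fd)

# On a permissive kernel one might call, say, read_msr(0, 0x10) for IA32_TSC.
# Here we only exercise the pure decoding helper.
print(decode_msr(b"\x01\x00\x00\x00\x00\x00\x00\x00"))
```

Writes work the same way in reverse, which is exactly why a single pwrite() to this device can redirect kernel execution and why lockdown disables it.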

Another complaint came from Greg Kroah-Hartman, who said that the heuristic-based patch that restricted debugfs operations for locked-down kernels should instead simply disable debugfs completely. Garrett noted that previous attempts to do so had resulted in "strong pushback from various maintainers", but Kroah-Hartman said he was willing to handle any of those that come along.

Version 31

Kroah-Hartman got a chance to do just that after Garrett posted version 31 of the patch set. It addressed the complaints, starting with the lockdown states:

Based on Andy's feedback, lockdown is now a tristate and can be made stricter at runtime. The states are "none", "integrity" and "confidentiality". "none" results in no behavioural change, "integrity" enables features that prevent untrusted code from being run in ring 0, and "confidentiality" is a superset of "integrity" that also disables features that may be used to extract secret information from the kernel at runtime.

[...]

In the general case, I'd expect distributions to opt for nothing stricter than "integrity" - "confidentiality" seems more suitable for more special-case scenarios.
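The tristate eventually surfaced to userspace as a securityfs file that lists the possible states with the current one in brackets (e.g. "none [integrity] confidentiality"). The exact file name and format below reflect the interface as later merged, not necessarily this exact posting, so treat the sketch as an assumption; the parsing convention itself is the common bracketed-current-value style used by such kernel files.

```python
def parse_lockdown(line: str):
    """Parse the 'current value in brackets among options' convention used by
    files like /sys/kernel/security/lockdown."""
    options, current = [], None
    for tok in line.split():
        if tok.startswith("[") and tok.endswith("]"):
            current = tok[1:-1]      # the bracketed token is the active state
            options.append(current)
        else:
            options.append(tok)
    return current, options

current, options = parse_lockdown("none [integrity] confidentiality")
print(current)   # the active state
print(options)   # all available states, in order
```

Writing one of the stricter state names back to the same file tightens the policy at runtime; per Garrett's description, the state can only ever be made stricter, never relaxed.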

Beyond that, he removed the logging from the MSR-disabling code and disabled opening files in debugfs when in integrity mode. Perhaps predictably, that latter part led to a complaint. Lutomirski said that reading debugfs files should still be allowed for integrity mode. Kroah-Hartman, who doesn't think much of the lockdown idea in general, said that there are legitimate worries about what kinds of information debugfs provides:

Reading a debugfs file can expose loads of things that can help take over a kernel, or at least make it easier. Pointer addresses, internal system state, loads of other fun things. And before 4.14 or so, it was pretty trivial to use it to oops the kernel as well (not an issue here anymore, but people are right to be nervous).

Personally, I think these are all just "confidentiality" type things, but who really knows given the wild-west nature of debugfs (which is as designed). And given that I think this patch series [is] just crazy anyway, I really don't care :)

Garrett seems amenable to changing integrity mode to use the previous scheme and to block all reads in confidentiality mode, but doesn't want to "spend another release cycle arguing about it". That previous scheme would only allow opening "safe" debugfs files for read: those with a 00444 mode and lacking .ioctl() and .mmap() methods.
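That "safe to open for read" heuristic can be expressed as a small predicate. This is a userspace paraphrase of the in-kernel check, with illustrative names; the real code inspects a dentry's mode bits and the file_operations of the debugfs file rather than taking booleans.

```python
import stat

WORLD_READ_ONLY = 0o444  # the mode the heuristic requires, exactly

def debugfs_read_is_safe(mode_bits: int, has_ioctl: bool, has_mmap: bool) -> bool:
    """A debugfs file is considered safe to open for read only if its
    permission bits are exactly 0444 and its file_operations provide
    neither .ioctl() nor .mmap()."""
    return stat.S_IMODE(mode_bits) == WORLD_READ_ONLY and not (has_ioctl or has_mmap)

print(debugfs_read_is_safe(0o100444, False, False))  # plain read-only file: safe
print(debugfs_read_is_safe(0o100644, False, False))  # owner-writable: not safe
print(debugfs_read_is_safe(0o100444, True, False))   # has an ioctl handler: not safe
```

The ioctl/mmap exclusion matters because those entry points can do far more than produce text, regardless of how innocuous the file's permissions look.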

Overall, the comments seem to be fairly minor problems that can be—have been—addressed easily. While some don't buy the whole idea behind lockdown, and there will always be ways around any of its restrictions due to bugs if nothing else, it is something that some kernel users want. Distributions have been shipping with some form of lockdown for quite some time, so it is pretty hard to argue that it is completely useless.

But, of course, the elephant in the room is Torvalds. He has not commented on any of the recent postings. One might guess that most of his concerns were addressed by the decoupling of secure boot and lockdown mode, but that remains to be seen. Morris has not yet said he will merge the lockdown patches either, which would also seem to be a prerequisite. Reducing out-of-tree patches that distributions feel they need to carry is a good goal, though, so one way or another it seems likely that lockdown will get merged before too long.


Index entries for this article
Kernel: Security/UEFI secure boot
Security: Linux kernel
Security: Secure boot



The return of the lockdown patches

Posted Apr 3, 2019 20:53 UTC (Wed) by jamesmorris (subscriber, #82698)

My current thoughts are here:

https://lore.kernel.org/linux-security-module/20190325220...

It seems we're hard-coding an integrity policy into the kernel around the requirements of one secure boot scenario.

I'm thinking about how this could be done better.

The return of the lockdown patches

Posted Apr 4, 2019 1:59 UTC (Thu) by mjg59 (subscriber, #23239)

I'm certainly open to changing this, but I'm not sure how it could be done terribly effectively. It's not too difficult for admins to extend local policy to implement something similar to a lot of these patches, but having it as an automatically applied configuration seems harder. Doing it as a single LSM is probably the easiest approach, but we still end up with a single piece of code that embodies both mechanism and policy, and access control for certain resources is now maintained separately from the code providing those resources. It seems more elegant to inject it as static policy into existing LSMs, but (eg) how do we express "Block any eBPF code that attempts to read from kernel memory" in Apparmor policy?

The return of the lockdown patches

Posted Apr 4, 2019 4:51 UTC (Thu) by jamesmorris (subscriber, #82698)

I don't see any of this being expressable in an Apparmor or similar policy. Perhaps a new integrity-focused mechanism which can be integrated with other LSMs. IMA does something like this for appraisal and measurement.

The return of the lockdown patches

Posted Apr 4, 2019 4:24 UTC (Thu) by thestinger (guest, #91827)

Linux has great support for verified boot already for meaningful implementations not ending with the kernel. ChromeOS and Android verify whole base OS and use it to provide real security properties by avoiding trust in persistent state. Those implementations have clear threat models and goals. It's difficult to turn it into a meaningful implementation by avoiding trust in persistent state. A truly great implementation would have to chain trust to all the code and static data in the system, but clear goals can still be accomplished without that by preventing privileged persistent compromises without re-exploitation. These implementations do still need the ability to do the things provided by this patch, but they have it already, in much more flexible ways and without hard-wired policy in the kernel with such coarse knobs.

I don't think inflexible hard-wired policies in the kernel are a good solution. There are already powerful systems for implementing these policies that are widely deployed. These hard-wired policies often end up unusable because parts of the trusted computing base in userspace do need to use them, and that's not a problem since they are verified too. It creates another reason for people to put things in the kernel that really do not belong there. Having so much in a single address space with no security boundaries isn't something to double down on. There is not enough room for flexibility and they end up being turned off due to lack of a way to make sensible exceptions. Catering to systems with a meaningless, poorly thought out incomplete verified boot model just doesn't make sense to me.

I'm still not entirely sure what verified boot of only the kernel is supposed to accomplish. It lacks a clear goal and real world threat model as opposed to being a meaningless boundary. It needs to verify at least a substantial portion of userspace via dm-verity or another mechanism to be genuinely useful, and at that point you don't need all this policy hard-wired into the kernel. Implementing this just because Microsoft appropriated the term secure boot for a near meaningless incomplete implementation doesn't make much sense to me. It feels like just implementing a feature in the most minimal possible way to say that it's there, without being enough to truly be useful.

It should at least be divided up rather than one massive knob where you need to disable the entire thing because you needed to expose something to a trusted process like init or vold part of the verified base OS. I think it needs to be rethought. I feel the same way even for toggles like dmesg_restrict. It's far more useful to use SELinux or another LSM to forbid it globally while still being able to grant access without giving coarse, powerful capabilities / root access. If something can be done with an LSM with flexible policy, I don't think it belongs as a hard-wired kernel features.

I feel like this just needs to take a different form. The association with an incomplete verified boot implementation has never made sense to me even though there are probably useful changes here. I mean, can someone at least finally explain what the purpose is behind this model? An argument to authority about it being a standard doesn't count. Why not make it meaningful by verifying at least a small standard base system from the kernel via dm-verity, and loading an SELinux policy with fine-grained control? You don't end up needing to throw the baby out with the bathwater that way. It can actually be made meaningful too by not having privileged code outside that verified base system, or chaining verification to it with fs-verity / other features. If an attacker has absolutely full control over userspace, what is it accomplishing? It's not like the kernel does anything useful without direction from userspace.

Even for the full OS verification on Android and the extensions like fs-verity, it still has fairly narrow goals. There are a lot of gradual changes to make it more useful like attestation APIs usable by apps and lots of little reductions in trust of persistent state, but an attacker can still do a huge amount with persistent state. ChromeOS lost a fair bit of the original strength of the mitigation as it became more capable. I think verified boot is extremely valuable but it really needs to be done properly and it's *hard* to make it meaningful.

The return of the lockdown patches

Posted Apr 4, 2019 17:49 UTC (Thu) by mjg59 (subscriber, #23239)

Android verifies the base OS, but applications have leeway to run a great deal of native code. Android can constrain that through significant reduction in attack surface because it's not a general-purpose operating system - you don't need to worry about someone taking advantage of a userland vulnerability in code running as UID 0 and then kexec()ing into a new kernel with an autostarting app if you don't support kexec() in the first place.

The goal here is building infrastructure to allow you to have as secure a boundary between userland and the kernel as possible while still allowing for general purpose computing. At the moment there's no good way to include any userland tooling in the TCB while still maintaining the characteristics of a general purpose Linux distribution.

The return of the lockdown patches

Posted Apr 7, 2019 5:19 UTC (Sun) by thestinger (guest, #91827)

> Android verifies the base OS, but applications have leeway to run a great deal of native code.

Sure, although the ability to run native code from outside an apk dynamically is going to continue fading away (at least outside isolated_app) and that isn't part of the verified boot security model. It also really doesn't matter if the code is native. It's about reducing trust in persistent state to enforce meaningful security properties. It doesn't matter if the attacker can't inject native code if they can use interpreted code with the same privileges, or other trusted state allowing themselves to get to that point. The security model for verified boot is preventing an attacker from persisting with privileges beyond what a normal sandboxed app can do, which preserves the OS security model, including protecting apps from each other. It also enforces that a factory reset will purge the attacker's persistence, since that's implemented by wiping userdata. It's quite useful, and apps can chain the trust to themselves with the hardware-based key attestation feature in order to provide application-level security based on the verified boot process. This is being further improved in Android Q with a new property for apps to improve the chaining of security.

> Android can constrain that through significant reduction in attack surface because it's not a general-purpose operating system - you don't need to worry about someone taking advantage of a userland vulnerability in code running as UID 0 and then kexec()ing into a new kernel with an autostarting app if you don't support kexec() in the first place.

It doesn't make any sense to claim that it isn't a general purpose OS because it has a separation between the base OS and third party code with sandboxing for all third party code. It has nothing to do with general purpose computing or a general purpose OS. Someone can just as easily claim that by implementing this lockdown feature you are making it into a non-general-purpose OS by taking away control over the kernel. It's an arbitrary distinction that you're making and people are certainly going to be unhappy with having this feature in place. It's extremely arbitrary and doesn't provide them real security. At least with Android verified boot, you get real security advantages out of it. You're still taking away control over the system when this feature is enabled, but without giving something back. You want to draw a line between the verified base OS, with integrity preserved afterwards, and the non-verified portion, but by drawing the line at the kernel it's unable to enforce meaningful security properties improving actual security. It's only an internal bureaucratic security boundary based on the internal organization of the OS. An attacker controlling the entirety of userspace i.e. completely uncontained real root with no security policies is already a complete loss. The kernel pretty much just serves the needs of userspace. What is the real world use case for the security boundary being drawn this way rather than via policy set up by init?

> The goal here is building infrastructure to allow you to have as secure a boundary between userland and the kernel as possible while still allowing for general purpose computing. At the moment there's no good way to include any userland tooling in the TCB while still maintaining the characteristics of a general purpose Linux distribution.

The kernel doesn't do anything useful without userspace though, and the general purpose computing argument is bogus. If you think verifying and preserving the integrity of a small base OS including init takes that away, so does doing it for the kernel. General purpose computing is not about an implementation detail like this. Functionality being inside or outside the kernel is an implementation detail that ultimately doesn't matter to end users. If you think it stops being a general purpose OS when full user control at runtime after it's booted is taken away, you've already done that with this feature.

You need at least some of userspace to be verified so that something is actually being protected. As long as you at least verify the initramfs and load a standard set of SELinux policies there, your policies can be enforced, but it's still unlikely that it will have any use unless you actually verify more of userspace from there. Even Android's verified boot has narrow security goals.

Verified boot can be extended to include the kernel command line and initramfs, and from there it can be extended to verifying even a small portion of userspace, like init and an initial set of security policies loaded by init, which can enforce this kind of boundary without hard-wiring all of this policy in the kernel.

The return of the lockdown patches

Posted Apr 7, 2019 5:28 UTC (Sun) by thestinger (guest, #91827)

Similarly, having firmware including the early boot chain verified with downgrade protection and no way to bypass it (without finding a vulnerability) is standard and doesn't mean it isn't a general purpose computer. Chaining along the trust all the way until init and a base set of processes providing enough high level semantics to enforce something meaningful doesn't make it not general purpose computing. Kernel vs. userspace is an implementation detail. It isn't what matters to end users or any of the high level semantics. The Linux kernel can be running in the userspace of another OS.

There needs to be a clear threat model and purpose behind a security feature for it to be useful. I would say the threat model for Android verified boot is preventing an attacker from persisting access after a compromise in a way that preserves the privileges needed to access data of arbitrary apps, etc. It also prevents them from persisting past the user doing a factory reset as a secondary thing. Users can wipe via recovery or safe mode boot even if the attacker persists as an accessibility service, device manager and all the other privileges accessible to normal apps. It also preserves enough of the application layer integrity for apps to chain from it with attestation, which is still fragile, but it's improving. So, for example, in https://github.com/GrapheneOS/Auditor I can chain trust from the OS in a way that allows meaningfully providing data from the OS about which device managers are enabled, etc. in a way that an attacker can't bypass without a verified boot bypass or re-exploitation of the OS on the next boot. The hardware attestation provides the OS version and patch level so they can't stealthily hold back updates to keep the OS vulnerable to their re-exploitation if a patch is shipped.

All I'm trying to say is that I don't think the boundary being drawn in this precise way as a hard-wired policy makes sense without it enforcing meaningful security properties / defending against a defined threat. It doesn't mean there's not a lot of useful work as part of it, I just don't think the way it's grouped together into coarse, hard-wired high level policies makes sense. I fully support most of the individual pieces of it, but I think it needs to be exposed differently, and without what I think is a meaningful secure boot model as the entire purpose of it.

The return of the lockdown patches

Posted Apr 7, 2019 5:29 UTC (Sun) by thestinger (guest, #91827)

s/meaningful secure boot model /meaningless secure boot model/

The return of the lockdown patches

Posted Apr 4, 2019 7:15 UTC (Thu) by mjthayer (guest, #39183)

Lockdown is a pain for us (VirtualBox) trying to provide kernel modules/drivers out of tree. Signing works more or less for Windows or macOS, where you provide a single binary. Up until now we were able to provide source that the user built themselves, and in the end making it build on all the different kernel versions and patched distributions was challenging but not worse than the problems on other platforms. But now that lots of distributions are requiring signed modules it is getting more and more painful. Currently the tie-in with secure boot is saving us: we don't yet support it in guest systems, so people can install Guest Additions from source. And on the host, people just end up disabling secure boot to make VirtualBox run. Which works, but is probably not the aim of lockdown.

Getting our modules into the kernel is not really a solution. For the host side it would just about be doable, though not ideal: we do not want to force people to use the latest kernel to have the latest product features. For the Guest Additions one of the main features is providing up-to-date support with old distributions with old kernels. Requiring a new kernel and supporting old kernels is simply not compatible. (No parallels intended to current UK politics.)

Sorry for letting off steam. I wanted a chance to draw attention to our problem here though, especially as Matthew is probably reading.

The return of the lockdown patches

Posted Apr 4, 2019 10:16 UTC (Thu) by bluca (subscriber, #118303)

The solution is very simple: use KVM! :-P
Just kidding of course - distros like Ubuntu do provide a way for users to sign their own modules, via MOK, in a pretty much automated way after the initial setup. The trouble is that it requires yet another set of patches on top of the lockdown set, that for example Debian doesn't have at the moment. In your experience, does that feature help?
One of the advantages of having this patchset finally merged upstream (fingers crossed!) is that we can then build tooling on top of it that is common between all distros, rather than the patchwork that it is now, where the story is different depending on what you run on.

The return of the lockdown patches

Posted Apr 4, 2019 12:18 UTC (Thu) by mjthayer (guest, #39183)

I still haven't had enough time and energy to work out how the Ubuntu thing works. (Does using DKMS for modules, which we used to do until I decided it was double work and double problems for the same benefit, automate it?) Of course, something in me wonders whether automatically signing kernel modules doesn't defeat the purpose. And the thought of handling every distribution out there separately does not really thrill me, but as you said, consistent tooling would improve things. I do wonder whether something which is both secure and usable is actually possible.

And yes, using KVM for the host part would actually theoretically be possible, though it does not help with our in-kernel networking code, which now presents the same interface to userspace on all supported host platforms. Nor does it help for the Guest Additions.

The return of the lockdown patches

Posted Apr 4, 2019 12:50 UTC (Thu) by bluca (subscriber, #118303)

It's automated for DKMS, but it can be used manually for binary modules (if you are distributing binary modules you could do that in post-inst like dkms does for example) with the kmodsign command.
Security-wise, it's not too different from normal MOK, in that it requires physical presence at the hardware to enroll the key when it's generated the first time around. And the key is restricted to verification of kernel modules only, it can't be used to verify images or bootloaders.

Some references:

https://wiki.ubuntu.com/UEFI/SecureBoot
https://wiki.ubuntu.com/UEFI/SecureBoot/DKMS
https://wiki.ubuntu.com/UEFI/SecureBoot/Signing

But yes, it's completely specific to Ubuntu at the moment. I've proposed a PR to get the required kernel patches in Debian as the first step, so maybe at some point we'll converge, but most likely not for Buster (if ever).

The return of the lockdown patches

Posted Apr 4, 2019 14:31 UTC (Thu) by mjthayer (guest, #39183)

Actually, looking at 0009-Add-support-for-UEFI-Secure-Boot-validation-toggling.patch it looks like something we could use. I suppose Ubuntu is important enough to justify duplicating a few lines of shell to get module signing working there. Out of interest, are you the author of that and/or update-secureboot-policy?

The return of the lockdown patches

Posted Apr 4, 2019 14:41 UTC (Thu) by bluca (subscriber, #118303)

No, I'm just a user.

The return of the lockdown patches

Posted Apr 4, 2019 17:54 UTC (Thu) by mjg59 (subscriber, #23239)

1) Get the VirtualBox drivers upstream. It doesn't solve all your problems in the short term, but it reduces them. Alternatively, rearchitect Virtualbox to use KVM rather than its own hypervisor.
2) For guests - you control db, so inject a Virtualbox certificate into it from the host and then figure out a mechanism to sign the guest drivers (eg, by building and signing them yourself for all popular guest targets)

There's no real point in having secure boot enabled if you can load arbitrary drivers, so having users disable secure boot in order to load unsigned drivers is entirely aligned with the aim of the patches - the user has to actively acknowledge that they're disabling security functionality in order to achieve their goal. Users should definitely have the freedom to make that choice.

The return of the lockdown patches

Posted Apr 6, 2019 12:28 UTC (Sat) by mjthayer (guest, #39183)

Thanks for commenting.
1) That is the "doable but not ideal" solution I was talking about. I don't think we can get our main virtualisation code into the kernel, and if we can, probably not with less effort than just using KVM. Might be worth looking at our other components, especially networking, but it is the usual thing with small teams and spare developer time.
2) Yes, that definitely makes sense, if tying lockdown to secure boot turns out to be a long-term thing. Thank you.

For the rest, as long as there is no easier way of taking over the kernel than loading wicked modules, I agree. Though given the way people love to stuff things into the kernel that could be done in user space, with perhaps a bit more effort, I think there will usually be other ways available without much extra effort.

The return of the lockdown patches

Posted Apr 7, 2019 5:48 UTC (Sun) by thestinger (guest, #91827)

Similarly though, what is the point of secure boot if you can boot an arbitrary userspace, including init and the entirety of the high level security model? An attacker can have full access to all user / application data, full control over everything that's displayed on screen, audio, etc. The kernel just sits there waiting for userspace to ask it to do something for it. A raw Linux kernel without a userspace doesn't actually provide anything to a user. If the attacker has compromised the entirety of userspace, what is there left to defend? They cannot persist with kernel privileges, but they can persist with real uncontained root privileges able to do everything they want outside of the kernel.

If a user decides to format the drive / reinstall the OS in order to get rid of the attacker's persistence, this doesn't help with that, since the kernel is going to be replaced as part of that. At least with incomplete verified boot for only firmware, it's eliminating a form of stealthy attacker persistence where formatting / replacing the drive doesn't purge their presence. I don't think that replacing userspace without touching the kernel is a normal thing to do though, especially if even the initramfs is not verified, so an attacker can still persist via the boot partition alone. If everything in that partition is verified, then at least it reduces what needs to be replaced to purge an attacker from the system, although I just don't see it as reasonable to find an infection and then replace only userspace, since they are usually strongly paired together and there's no disadvantage to doing both at once.

You mention above that an implementation going much further like in Android means it isn't a general purpose computing device anymore, but how is this different? You can turn off verified boot on a Pixel just as you can turn off this feature, and right here is an example of a real world, common use case that is being wiped out in the name of security. I am a big fan of verified boot and attestation and I'm fully in support of having those aggressively implemented for nearly every use case, but I just don't see the gains coming with the loss of control in this case.

Even when people do try to submit code upstream, it's often blocked for arbitrary reasons. For example, an open source kernel driver won't be accepted if there isn't an open source userspace library for using it, and there are often very arbitrary blockers for other things.

So, consider if lockdown wasn't automatically enabled but rather was a sysctl toggled by early userspace, which was verified. And what if it wasn't so coarse, but rather more fine-grained, so that you can expose something to part of the userspace TCB while disabling it everywhere else (i.e. like using SELinux)? What is the distinction in terms of threat model between the kernel and core userspace processes like init? What does an attacker gain by compromising the kernel in terms of harm to the user that they can't by compromising all of userspace? The only thought that comes to mind is that the kernel can theoretically defend hardware that is known to be broken from them, but in practice I don't see the Linux kernel actually acting as a gatekeeper in terms of disabling access to hardware until it receives firmware updates fixing the vulnerabilities, if that's even possible.

The return of the lockdown patches

Posted Apr 9, 2019 11:18 UTC (Tue) by mjthayer (guest, #39183)

Actually I seem to recall that Matthew's original motivation with secure boot was preventing evil maid attacks, for which lockdown should not even be relevant - secure boot merely closed a vector for changing the system without authenticating. I would be quite interested in the use cases for kernel lockdown without user-space lockdown (I can see clear use cases for doing both together).

The return of the lockdown patches

Posted Apr 12, 2019 13:04 UTC (Fri) by bluntp (guest, #131418)

It's really great to see such a collaborative and constructive effort. Thank you very much for the article, comments and everybody that is working on this :)


Copyright © 2019, Eklektix, Inc.
This article may be redistributed under the terms of the Creative Commons CC BY-SA 4.0 license
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds