LWN: Comments on "The return of the lockdown patches"
https://lwn.net/Articles/784674/
This is a special feed containing comments posted to the individual LWN article titled "The return of the lockdown patches".

Comment by bluntp (Fri, 12 Apr 2019 13:04:07 +0000)
https://lwn.net/Articles/785803/

It's really great to see such a collaborative and constructive effort. Thank you very much for the article, the comments, and everybody who is working on this :)

Comment by mjthayer (Tue, 09 Apr 2019 11:18:42 +0000)
https://lwn.net/Articles/785328/

Actually, I seem to recall that Matthew's original motivation with secure boot was preventing evil maid attacks, for which lockdown should not even be relevant: secure boot merely closed a vector for changing the system without authentication. I would be quite interested in the use cases for kernel lockdown without user-space lockdown (I can see clear use cases for doing both together).

Comment by thestinger (Sun, 07 Apr 2019 05:48:30 +0000)
https://lwn.net/Articles/785113/

Similarly, though: what is the point of secure boot if you can boot an arbitrary userspace, including init and the entirety of the high-level security model? An attacker can have full access to all user and application data, full control over everything that's displayed on screen, audio, and so on. The kernel just sits there waiting for userspace to ask it to do something, and a raw Linux kernel without a userspace doesn't actually provide anything to a user. If the attacker has compromised the entirety of userspace, what is there left to defend? They cannot persist with kernel privileges, but they can persist with real, uncontained root privileges, able to do everything they want outside of the kernel.

If a user decides to format the drive or reinstall the OS in order to get rid of the attacker's persistence, this doesn't help with that, since the kernel is going to be replaced as part of that. At least with incomplete verified boot covering only the firmware, a form of stealthy attacker persistence is eliminated where formatting or replacing the drive doesn't purge their presence. I don't think that replacing userspace without touching the kernel is a normal thing to do, though, especially if even the initramfs is not verified, so an attacker can still persist via the boot partition alone. If everything in that partition is verified, then at least it reduces what needs to be replaced to purge an attacker from the system, although I just don't see it as reasonable to find an infection and then replace only userspace, since the two are usually strongly paired and there's no disadvantage to doing both at once.

You mention above that an implementation going much further, as in Android, means it isn't a general purpose computing device anymore, but how is this different? You can turn off verified boot on a Pixel just as you can turn off this feature, and right here is an example of a real-world, common use case that is being wiped out in the name of security.
I am a big fan of verified boot and attestation, and I'm fully in support of having them aggressively implemented for nearly every use case, but I just don't see the gains coming with the loss of control in this case.

Even when people do try to submit code upstream, it's often blocked for arbitrary reasons. For example, an open source kernel driver won't be accepted if there isn't an open source userspace library for using it, and there are often very arbitrary blockers for other things.

So, consider if lockdown wasn't automatically enabled, but rather was a sysctl toggled by early userspace, which was verified. And what if it wasn't so coarse, but rather more fine-grained, so that you could expose something to part of the userspace TCB while disabling it everywhere else (i.e. like using SELinux)? What is the distinction in terms of threat model between the kernel and core userspace processes like init? What does an attacker gain by compromising the kernel, in terms of harm to the user, that they can't gain by compromising all of userspace? The only thought that comes to mind is that the kernel can theoretically defend hardware that is known to be broken from them, but in practice I don't see the Linux kernel actually acting as a gatekeeper that disables access to hardware until it receives firmware updates fixing the vulnerabilities, if that's even possible.

Comment by thestinger (Sun, 07 Apr 2019 05:28:51 +0000)
https://lwn.net/Articles/785111/

Similarly, having the firmware, including the early boot chain, verified with downgrade protection and no way to bypass it (without finding a vulnerability) is standard and doesn't mean it isn't a general purpose computer. Chaining the trust all the way to init and a base set of processes providing enough high-level semantics to enforce something meaningful doesn't make it not general purpose computing. Kernel vs. userspace is an implementation detail. It isn't what matters to end users or to any of the high-level semantics; the Linux kernel can be running in the userspace of another OS.

There needs to be a clear threat model and purpose behind a security feature for it to be useful. I would say the threat model for Android verified boot is preventing an attacker from persisting access after a compromise in a way that preserves the privileges needed to access the data of arbitrary apps, etc. As a secondary benefit, it also prevents them from persisting past the user doing a factory reset. Users can wipe via recovery or a safe-mode boot even if the attacker persists as an accessibility service, device manager or any of the other privileges accessible to normal apps. It also preserves enough of the application-layer integrity for apps to chain from it with attestation, which is still fragile, but improving. So, for example, in https://github.com/GrapheneOS/Auditor I can chain trust from the OS in a way that allows meaningfully providing data from the OS about which device managers are enabled, etc., in a way that an attacker can't bypass without a verified boot bypass or re-exploitation of the OS on the next boot.
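A toy sketch of the challenge/response pattern Auditor relies on, with an HMAC standing in for the hardware-backed key and certificate chain that real Android key attestation uses; the key, claims and field names here are all illustrative:

    import hmac, hashlib, os, json

    DEVICE_KEY = b"hardware-bound key"  # in reality this never leaves the secure hardware

    def device_attest(challenge: bytes) -> dict:
        # Binding a fresh challenge into the signed payload means a
        # stale or replayed statement fails verification.
        payload = json.dumps({
            "challenge": challenge.hex(),
            "patch_level": "2019-04",   # illustrative claims
            "verified_boot": True,
        }, sort_keys=True).encode()
        tag = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
        return {"payload": payload, "tag": tag}

    def auditor_verify(statement: dict, challenge: bytes) -> bool:
        expected = hmac.new(DEVICE_KEY, statement["payload"],
                            hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expected, statement["tag"]):
            return False  # forged or tampered statement
        claims = json.loads(statement["payload"])
        return claims["challenge"] == challenge.hex() and claims["verified_boot"]

    challenge = os.urandom(32)  # the auditing device picks a fresh nonce
    print(auditor_verify(device_attest(challenge), challenge))  # True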
The hardware attestation also provides the OS version and patch level, so they can't stealthily hold back updates to keep the OS vulnerable to re-exploitation once a patch has shipped.

All I'm trying to say is that I don't think the boundary being drawn in this precise way, as a hard-wired policy, makes sense unless it enforces meaningful security properties and defends against a defined threat. That doesn't mean there isn't a lot of useful work in it; I just don't think the way it's grouped together into coarse, hard-wired, high-level policies makes sense. I fully support most of the individual pieces, but I think they need to be exposed differently, and without what I think is a meaningless secure boot model as the entire purpose of it.

Comment by thestinger (Sun, 07 Apr 2019 05:19:00 +0000)
https://lwn.net/Articles/785110/

> Android verifies the base OS, but applications have leeway to run a great deal of native code.

Sure, although the ability to dynamically run native code from outside an apk is going to continue fading away (at least outside isolated_app), and that isn't part of the verified boot security model anyway. It also really doesn't matter whether the code is native. The point is reducing trust in persistent state in order to enforce meaningful security properties. It doesn't matter that the attacker can't inject native code if they can use interpreted code with the same privileges, or other trusted state that lets them get to the same point. The security model for verified boot is preventing an attacker from persisting with privileges beyond what a normal sandboxed app can do, which preserves the OS security model, including protecting apps from each other. It also guarantees that a factory reset will purge the attacker's persistence, since that's implemented by wiping userdata. It's quite useful, and apps can chain the trust to themselves with the hardware-based key attestation feature in order to provide application-level security built on the verified boot process. This is being further improved in Android Q with a new property for apps to improve the chaining of security.

> Android can constrain that through significant reduction in attack surface because it's not a general-purpose operating system - you don't need to worry about someone taking advantage of a userland vulnerability in code running as UID 0 and then kexec()ing into a new kernel with an autostarting app if you don't support kexec() in the first place.

It doesn't make any sense to claim that it isn't a general purpose OS because it has a separation between the base OS and third-party code, with sandboxing for all third-party code. That has nothing to do with general purpose computing or a general purpose OS. Someone can just as easily claim that by implementing this lockdown feature you are making Linux into a non-general-purpose OS by taking away control over the kernel. It's an arbitrary distinction that you're making, and people are certainly going to be unhappy with having this feature in place. It's extremely arbitrary and doesn't provide them real security. At least with Android verified boot, you get real security advantages out of it. You're still taking away control over the system when this feature is enabled, but without giving something back.
You want to draw a line between the verified base OS, with integrity preserved afterwards, and the non-verified portion, but by drawing the line at the kernel it's unable to enforce meaningful security properties that improve actual security. It's only an internal, bureaucratic security boundary based on the internal organization of the OS. An attacker controlling the entirety of userspace, i.e. completely uncontained real root with no security policies, is already a complete loss; the kernel pretty much just serves the needs of userspace. What is the real-world use case for the security boundary being drawn this way, rather than via policy set up by init?

> The goal here is building infrastructure to allow you to have as secure a boundary between userland and the kernel as possible while still allowing for general purpose computing. At the moment there's no good way to include any userland tooling in the TCB while still maintaining the characteristics of a general purpose Linux distribution.

The kernel doesn't do anything useful without userspace, though, and the general purpose computing argument is bogus. If you think verifying and preserving the integrity of a small base OS including init takes that away, then so does doing it for the kernel. General purpose computing is not about an implementation detail like this. Functionality being inside or outside the kernel is an implementation detail that ultimately doesn't matter to end users. If you think it stops being a general purpose OS when full user control at runtime, after boot, is taken away, then you've already done that with this feature.

You need at least some of userspace to be verified so that something is actually being protected. As long as you at least verify the initramfs and load a standard set of SELinux policies there, your policies can be enforced, but it's still unlikely to be of any use unless you actually verify more of userspace from there. Even Android's verified boot has narrow security goals.

Verified boot can be extended to include the kernel command line and the initramfs, and from there it can be extended to verify even a small portion of userspace, such as init and an initial set of security policies loaded by init, which can enforce this kind of boundary without hard-wiring all of this policy into the kernel.

Comment by mjthayer (Sat, 06 Apr 2019 12:28:31 +0000)
https://lwn.net/Articles/785097/

Thanks for commenting.

1) That is the "doable but not ideal" solution I was talking about. I don't think we can get our main virtualisation code into the kernel, and if we can, probably not with less effort than just using KVM. It might be worth looking at our other components, especially networking, but it is the usual thing with small teams and spare developer time.

2) Yes, that definitely makes sense, if tying lockdown to secure boot turns out to be a long-term thing. Thank you.

For the rest: as long as there is no easier way of taking over the kernel than loading wicked modules, agreed.
Though given the way people love to stuff things into the kernel which could be done in user space, perhaps with a bit more effort, I think there will usually be other ways available without much extra effort.

Comment by mjg59 (Thu, 04 Apr 2019 17:54:43 +0000)
https://lwn.net/Articles/784952/

1) Get the VirtualBox drivers upstream. It doesn't solve all your problems in the short term, but it reduces them. Alternatively, rearchitect VirtualBox to use KVM rather than its own hypervisor.

2) For guests: you control db, so inject a VirtualBox certificate into it from the host and then figure out a mechanism to sign the guest drivers (e.g. by building and signing them yourself for all popular guest targets).

There's no real point in having secure boot enabled if you can load arbitrary drivers, so having users disable secure boot in order to load unsigned drivers is entirely aligned with the aim of the patches: the user has to actively acknowledge that they're disabling security functionality in order to achieve their goal. Users should definitely have the freedom to make that choice.

Comment by mjg59 (Thu, 04 Apr 2019 17:49:24 +0000)
https://lwn.net/Articles/784951/

Android verifies the base OS, but applications have leeway to run a great deal of native code. Android can constrain that through a significant reduction in attack surface because it's not a general-purpose operating system: you don't need to worry about someone taking advantage of a userland vulnerability in code running as UID 0 and then kexec()ing into a new kernel with an autostarting app if you don't support kexec() in the first place.

The goal here is building infrastructure that allows as secure a boundary between userland and the kernel as possible while still allowing for general purpose computing. At the moment there's no good way to include any userland tooling in the TCB while still maintaining the characteristics of a general purpose Linux distribution.

Comment by bluca (Thu, 04 Apr 2019 14:41:11 +0000)
https://lwn.net/Articles/784908/

No, I'm just a user.

Comment by mjthayer (Thu, 04 Apr 2019 14:31:05 +0000)
https://lwn.net/Articles/784907/

Actually, looking at 0009-Add-support-for-UEFI-Secure-Boot-validation-toggling.patch, it looks like something we could use. I suppose Ubuntu is important enough to justify duplicating a few lines of shell to get module signing working there. Out of interest, are you the author of that and/or update-secureboot-policy?

Comment by bluca (Thu, 04 Apr 2019 12:50:51 +0000)
https://lwn.net/Articles/784897/

It's automated for DKMS, but it can be used manually for binary modules with the kmodsign command (if you are distributing binary modules, you could do that in the post-inst, like DKMS does, for example). Security-wise, it's not too different from normal MOK, in that it requires physical presence at the hardware to enroll the key when it's generated the first time around.
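For illustration, the manual path might be wrapped like this; the kmodsign usage follows the Ubuntu wiki pages referenced below, but the key locations and module name are assumptions:

    import subprocess

    # Key pair generated during MOK setup; these paths are an assumption
    # based on Ubuntu's shim-signed/DKMS integration.
    MOK_PRIV = "/var/lib/shim-signed/mok/MOK.priv"
    MOK_DER = "/var/lib/shim-signed/mok/MOK.der"

    def sign_module(module: str) -> None:
        # Appends a signature that the kernel checks against the
        # enrolled MOK key before loading the module under lockdown.
        subprocess.run(["kmodsign", "sha512", MOK_PRIV, MOK_DER, module],
                       check=True)

    sign_module("vboxdrv.ko")  # hypothetical out-of-tree module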
The key is also restricted to the verification of kernel modules only; it can't be used to verify images or bootloaders.

Some references:

https://wiki.ubuntu.com/UEFI/SecureBoot
https://wiki.ubuntu.com/UEFI/SecureBoot/DKMS
https://wiki.ubuntu.com/UEFI/SecureBoot/Signing

But yes, it's completely specific to Ubuntu at the moment. I've proposed a PR to get the required kernel patches into Debian as a first step, so maybe at some point we'll converge, but most likely not for Buster (if ever).

Comment by mjthayer (Thu, 04 Apr 2019 12:18:44 +0000)
https://lwn.net/Articles/784893/

I still haven't had enough time and energy to work out how the Ubuntu thing works. (Does using DKMS for modules, which we used to do until I decided it was double the work and double the problems for the same benefit, automate it?) Of course, something in me wonders whether automatically signing kernel modules doesn't defeat the purpose. And the thought of handling every distribution out there separately does not really thrill me, but as you said, consistent tooling would improve things. I do wonder whether something which is both secure and usable is actually possible.

And yes, using KVM for the host part would theoretically be possible, though it does not help with our in-kernel networking code, which now presents the same interface to userspace on all supported host platforms. Nor does it help with the Guest Additions.

Comment by bluca (Thu, 04 Apr 2019 10:16:00 +0000)
https://lwn.net/Articles/784886/

The solution is very simple: use KVM! :-P

Just kidding, of course. Distros like Ubuntu do provide a way for users to sign their own modules, via MOK, in a pretty much automated way after the initial setup. The trouble is that it requires yet another set of patches on top of the lockdown set, which Debian, for example, doesn't have at the moment. In your experience, does that feature help?

One of the advantages of having this patch set finally merged upstream (fingers crossed!) is that we can then build tooling on top of it that is common to all distros, rather than the patchwork we have now, where the story differs depending on what you run.

Comment by mjthayer (Thu, 04 Apr 2019 07:15:46 +0000)
https://lwn.net/Articles/784878/

Lockdown is a pain for us (VirtualBox) trying to provide kernel modules/drivers out of tree. Signing works more or less for Windows or macOS, where you provide a single binary. Up until now we were able to provide source that users built themselves, and in the end making it build on all the different kernel versions and patched distributions was challenging, but not worse than the problems on other platforms. Now that lots of distributions are requiring signed modules, though, it is getting more and more painful. Currently the tie-in with secure boot is saving us: we don't yet support it in guest systems, so people can install the Guest Additions from source. And on the host, people just end up disabling secure boot to make VirtualBox run.
That works, but it is probably not the aim of lockdown.

Getting our modules into the kernel is not really a solution. For the host side it would just about be doable, though not ideal: we do not want to force people to use the latest kernel to get the latest product features. For the Guest Additions, one of the main features is providing up-to-date support for old distributions with old kernels. Requiring a new kernel and supporting old kernels are simply not compatible. (No parallels intended to current UK politics.)

Sorry for letting off steam. I wanted a chance to draw attention to our problem here, though, especially as Matthew is probably reading.

Comment by jamesmorris (Thu, 04 Apr 2019 04:51:30 +0000)
https://lwn.net/Articles/784871/

I don't see any of this being expressible in an AppArmor or similar policy. Perhaps a new integrity-focused mechanism which can be integrated with other LSMs; IMA does something like this for appraisal and measurement.

Comment by thestinger (Thu, 04 Apr 2019 04:24:03 +0000)
https://lwn.net/Articles/784865/

Linux already has great support for verified boot in meaningful implementations that don't end with the kernel. ChromeOS and Android verify the whole base OS and use that to provide real security properties by avoiding trust in persistent state. Those implementations have clear threat models and goals. It's difficult to turn verified boot into a meaningful implementation that avoids trust in persistent state: a truly great implementation would have to chain trust to all the code and static data in the system, but clear goals can still be accomplished without that, by preventing privileged persistent compromise without re-exploitation. These implementations do still need the ability to do the things provided by this patch, but they have it already, in much more flexible ways and without hard-wired policy in the kernel with such coarse knobs.

I don't think inflexible, hard-wired policies in the kernel are a good solution. There are already powerful, widely deployed systems for implementing these policies. Hard-wired policies often end up unusable because parts of the trusted computing base in userspace do need to use the restricted functionality, and that's not a problem, since they are verified too. It creates another reason for people to put things in the kernel that really do not belong there, and having so much in a single address space with no security boundaries isn't something to double down on. There is not enough room for flexibility, and the policies end up being turned off for lack of a way to make sensible exceptions. Catering to systems with a meaningless, poorly thought out, incomplete verified boot model just doesn't make sense to me.

I'm still not entirely sure what verified boot of only the kernel is supposed to accomplish. It lacks a clear goal and a real-world threat model, as opposed to being a meaningless boundary. It needs to cover at least a substantial portion of userspace, via dm-verity or another mechanism, to be genuinely useful, and at that point you don't need all this policy hard-wired into the kernel. Implementing this just because Microsoft appropriated the term secure boot for a near-meaningless, incomplete implementation doesn't make much sense to me.
It feels like implementing a feature in the most minimal possible way just to be able to say that it's there, without it being enough to truly be useful.

It should at least be divided up, rather than being one massive knob where you have to disable the entire thing because you needed to expose something to a trusted process like init or vold that is part of the verified base OS. I think it needs to be rethought. I feel the same way even about toggles like dmesg_restrict: it's far more useful to use SELinux or another LSM to forbid it globally while still being able to grant access without handing out coarse, powerful capabilities or root access. If something can be done with an LSM with flexible policy, I don't think it belongs as a hard-wired kernel feature.

I feel like this just needs to take a different form. The association with an incomplete verified boot implementation has never made sense to me, even though there are probably useful changes here. I mean, can someone at least finally explain what the purpose behind this model is? An appeal to authority about it being a standard doesn't count. Why not make it meaningful by verifying at least a small standard base system from the kernel via dm-verity, and loading an SELinux policy with fine-grained control? You don't end up needing to throw the baby out with the bathwater that way. It can actually be made meaningful, too, by not having privileged code outside that verified base system, or by chaining verification to it with fs-verity or other features. If an attacker can have absolutely full control over userspace, what is it accomplishing? It's not as if the kernel does anything useful without direction from userspace.

Even for the full OS verification on Android, with extensions like fs-verity, the goals are still fairly narrow. There are a lot of gradual changes making it more useful, such as attestation APIs usable by apps and lots of little reductions in trust of persistent state, but an attacker can still do a huge amount with persistent state. ChromeOS lost a fair bit of the original strength of the mitigation as it became more capable. I think verified boot is extremely valuable, but it really needs to be done properly, and it's *hard* to make it meaningful.
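A minimal sketch of the dm-verity idea suggested here, using a single-level hash list for brevity (real dm-verity builds a multi-level hash tree and checks it in the kernel): one root hash, itself covered by the verified boot chain, authenticates every block of a read-only userspace image on demand.

    import hashlib

    BLOCK = 4096  # data block size; dm-verity's default

    def build_hashes(image: bytes) -> tuple[str, list[str]]:
        leaves = [hashlib.sha256(image[i:i + BLOCK]).hexdigest()
                  for i in range(0, len(image), BLOCK)]
        # Only this root needs to be trusted (e.g. signed and checked at
        # boot) for the whole image to be tamper-evident.
        root = hashlib.sha256("".join(leaves).encode()).hexdigest()
        return root, leaves

    def read_block(image: bytes, idx: int,
                   leaves: list[str], root: str) -> bytes:
        # Re-check the hash list against the trusted root, then the
        # block against its leaf hash, before returning any data.
        if hashlib.sha256("".join(leaves).encode()).hexdigest() != root:
            raise IOError("hash list does not match trusted root")
        data = image[idx * BLOCK:(idx + 1) * BLOCK]
        if hashlib.sha256(data).hexdigest() != leaves[idx]:
            raise IOError(f"block {idx} failed verification")
        return data

    image = bytes(8 * BLOCK)  # stand-in for a read-only rootfs image
    root, leaves = build_hashes(image)
    assert read_block(image, 3, leaves, root) == bytes(BLOCK)

Offline tampering with any block changes its leaf hash and is caught at read time, which is why only the root hash has to be carried by the boot-time verification.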
Comment by mjg59 (Thu, 04 Apr 2019 01:59:55 +0000)
https://lwn.net/Articles/784866/

I'm certainly open to changing this, but I'm not sure how it could be done terribly effectively. It's not too difficult for admins to extend local policy to implement something similar to a lot of these patches, but having it as an automatically applied configuration seems harder. Doing it as a single LSM is probably the easiest approach, but we still end up with a single piece of code that embodies both mechanism and policy, and access control for certain resources is then maintained separately from the code providing those resources. It seems more elegant to inject it as static policy into existing LSMs, but how, for example, do we express "block any eBPF code that attempts to read from kernel memory" in AppArmor policy?

Comment by jamesmorris (Wed, 03 Apr 2019 20:53:39 +0000)
https://lwn.net/Articles/784855/

My current thoughts are here:

https://lore.kernel.org/linux-security-module/20190325220954.29054-1-matthewgarrett@google.com/T/#mbee0d328447e9f4b26823871da3acef6d00ff709

It seems we're hard-coding an integrity policy into the kernel around the requirements of one secure boot scenario. I'm thinking about how this could be done better.
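For concreteness, the kind of eBPF program mjg59's AppArmor question above refers to, one that reads kernel memory, might look like the following bcc sketch; it assumes bcc is installed and the script runs as root, and the probe target and output are illustrative:

    from bcc import BPF

    prog = r"""
    #include <linux/sched.h>

    int on_wake(struct pt_regs *ctx, struct task_struct *p)
    {
        char comm[TASK_COMM_LEN];
        /* A read of kernel memory: the operation a confidentiality
           lockdown policy would want to refuse. */
        bpf_probe_read(&comm, sizeof(comm), p->comm);
        bpf_trace_printk("new task: %s\\n", comm);
        return 0;
    }
    """

    b = BPF(text=prog)
    b.attach_kprobe(event="wake_up_new_task", fn_name="on_wake")
    b.trace_print()  # blocks, printing one line per newly woken task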