
Don't fear the TPM

By Joe Brockmeier
August 6, 2025

DebConf

There is a great deal of misunderstanding, and some misinformation, about the Trusted Platform Module (TPM); to combat this, Debian developer Jonathan McDowell would like to clear the air and help users understand what it is good for, as well as what it's not. At DebConf25 in Brest, France, he delivered a talk about TPMs that explained what they are, why people might be interested in using them, and how users might do so on a Debian system.

[Jonathan McDowell]

McDowell started with a disclaimer; he was giving the talk in his personal capacity, not on behalf of his employer. He wanted to talk about "something that is useful to Debian and folks within Debian", rather than the use of TPMs in a corporate environment.

McDowell has been a Debian developer for quite some time—more than 24 years, in fact. Professionally, he has done a lot of work with infrastructure; he has written networking software, high-end storage systems, and software-defined networking. He has also run an ISP. To him, TPMs are simply "another piece of infrastructure and how we secure things".

Unfortunately, there is a lot of FUD around TPMs, he said, especially now that Microsoft is pushing TPM devices as part of the baseline requirement for Windows 11. A TPM has been required since Windows 11 was introduced, of course, but with the end of life approaching for Windows 10, people are starting to take more notice.

Many people are responding to TPMs by "throwing up their hands and going, 'this is terrible'"; but they are actually really useful devices. One of the reasons that they are useful is that they are so common. If you buy a new PC, "it is incredibly likely that you have some TPM capability on it". Unless it's an Apple system—they have Secure Enclave instead, "which does a whole bunch of different things that have some overlap".

What is a TPM?

So, he asked rhetorically, "what is a TPM?" He displayed a slide with Wikipedia's definition, which says that a TPM "is a secure cryptoprocessor that implements the ISO/IEC 11889 standard". McDowell said he did not recognize that definition, despite having worked with TPMs for several years. He repeated the definition and said, "that doesn't mean much to me, and it's also not entirely true", because there are multiple TPM implementations that are not secure cryptoprocessors.

There are three variants of TPM that McDowell said he was familiar with: discrete, integral, and firmware. A discrete TPM is a separate chip that lives on the motherboard. Historically, the discrete TPM has been connected over the low pin count (LPC) bus, but modern systems mostly use the serial peripheral interface (SPI) bus. Then there is the integral TPM, which sits on the same die as the CPU, but as a separate processor. Examples of integral TPM include Intel's Management Engine and AMD's Secure Technology (formerly called "Platform Security Processor"). These are logically separate from the CPU that applications run on, which gives some extra security, "but not a full discrete chip".

Finally, there are firmware TPMs, such as the Arm TrustZone technology. In that case, McDowell said, the TPM is actually running on the application processor in a more secure context, but firmware TPMs can be vulnerable to speculative side-channel attacks. The idea is that the TPM is a small, specialized device that "concentrates on cryptographic operations and is in some way more secure than doing it on your main processor".

McDowell digressed a bit to talk about TPM 1.2 devices. "I hate TPM 1.2 devices. I still have to deal with a bunch of them in life. They are ancient." TPM 2.0, which is the baseline that Windows 11 expects, launched in 2014. He would like TPM 1.2 devices to all go away and said that he would not be discussing them further.

Not for DRM

One of the things that TPMs can do is state attestation. The idea is that the TPM can attest to the software that is running on the machine:

And if all of the stars align and you get everything right, you can actually build a full chain from the first piece of code, the firmware that the CPU runs, all the way up to the application layer and say, I am running this stack of software and I will provide you a signed cryptographic proof.

However, McDowell assured the audience, TPMs are not a realistic way of doing digital rights management (DRM), no matter how much Microsoft or Netflix might want to use them in that way. "They could not build a database of all these values for all the legitimate machines in the world." Trying to do so would result in "support calls coming out of their ears". It is absolutely possible to constrain things so that the TPM can provide a level of security for embedded systems and appliances, he said. "In particular, you can potentially use it for some level of knowing that someone hasn't tampered with your firmware image". But full DRM on general-purpose PCs is not going to happen.

A standard TPM for a PC has 24 platform-configuration registers (PCRs), McDowell said. PCR 0 through PCR 7 belong to the firmware and are used by UEFI to measure the bootloader and "base bits" of what the operating system runs. PCR 8 through PCR 15 are "under the control of the bootloader and the OS", and PCR 16 through PCR 23 are "something different, and we'll not talk about those at all".

PCRs are SHA hash registers; at boot time the TPM sets the values of the registers to zero. Then the hash values for various objects are measured into the registers. For example, when GRUB boots, it logs its activity into the TPM event log and performs a cryptographic hash operation to extend the value of the PCR; this was explained in more detail in this coverage of a talk by Matthew Garrett. McDowell displayed a slide that showed a command to read the TPM's event log:

    # tpm2_eventlog /sys/kernel/security/tpm0/binary_bios_measurements

Each entry in the log shows something that has been measured into the registers. Details about Secure Boot, for example, are put into PCR 7, which provides an attestation that "this machine has used Secure Boot, and these are the keys it has used to do Secure Boot". All of that is machine-state attestation, he said, "which is the thing that people get worried about" being used to enforce DRM.
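The extend operation itself is easy to model outside of a TPM. A minimal sketch of the measurement step, using a made-up event string rather than a real bootloader hash: each extend replaces the PCR with the SHA-256 hash of its old value concatenated with the new measurement.

```shell
# Simulate one register of a SHA-256 PCR bank: at boot it is 32 zero bytes.
pcr=$(printf '%064d' 0)

# "Measure" an event (this string stands in for a real bootloader hash):
# new PCR = SHA256(old PCR || measurement), computed over the raw bytes.
measurement=$(printf 'example-bootloader' | sha256sum | cut -d' ' -f1)
pcr=$(printf '%s%s' "$pcr" "$measurement" | xxd -r -p | sha256sum | cut -d' ' -f1)

echo "$pcr"
```

Because each extend hashes in the previous value, the final PCR depends on every measurement and the order in which they happened, which is what makes replaying the event log against the PCR values a meaningful check.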

Key storage

The much more interesting thing from a Debian point of view, he said, is key storage. While TPMs are small and incredibly slow devices, they can securely generate asymmetric keys, such as SSH keys or signing keys, whose private halves cannot be exfiltrated from the device:

You can say "make me a key", and it will make you a key, and that private part of the key can only be exported from the device in a way that only the device itself can read.

Obviously an attacker could use the TPM while they are connected to the machine. But if the user kicks them out or fixes whatever has happened, the attacker would not be able to export any keys stored in the TPM to another machine. That, McDowell said, is incredibly useful. He reiterated that TPMs are slow; they are not full-blown high-performance hardware security module devices. But they are almost everywhere, "and that's why they're interesting, right?" They are a standard piece of hardware that most PCs will have if they're not too old.
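Using the tpm2-tools package that McDowell mentioned later in the talk, generating such a key looks roughly like this; the hierarchy choice and file names are arbitrary illustrations, not from the talk:

```shell
# Create a primary key in the owner hierarchy; it never leaves the TPM.
tpm2_createprimary --hierarchy o --key-context primary.ctx

# Create an asymmetric key under it; key.priv is an encrypted blob that
# only this particular TPM can load, so possessing the file is not enough.
tpm2_create --parent-context primary.ctx \
    --public key.pub --private key.priv

# Load the key for use; the private part is never exposed in the clear.
tpm2_load --parent-context primary.ctx \
    --public key.pub --private key.priv --key-context key.ctx
```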

If one wants to get more into the corporate side of things, he said, some hardware vendors will provide a certificate that ties the TPM's unique endorsement key to the serial number of the laptop. "So I can do a very strong statement of 'this is the machine I think it should be'." But that, he reiterated, involves a slightly complicated procedure. For single-machine use cases, "you don't have to worry about this bit too much".

This also allows the TPM to do attestation for the key. That is more complicated, McDowell said, but "you can do an attestation where the TPM goes, 'that key was definitely generated in me'". That might be desirable, for example, for a certificate authority or when signing packages. If a key is hardware-backed, its use demonstrates that a user has access to a specific piece of hardware, such as a company-issued laptop.

He elaborated later that using it for attestation involved an "annoyingly interactive challenge and response dance"; it was not possible to have the TPM simply generate an attestation statement that can be validated and trusted. However, if one does the full attestation dance, "I can guarantee [the key is] hardware-backed and I can guarantee it's hardware-backed by a particular vendor of TPM".

Another neat thing that users can do is to bind a key that can only be used if the PCRs are in a particular state. That means it's possible to ensure that someone hasn't messed with the firmware, to guard against an "evil maid" attack. If the machine is still running the image that the user expected to be running, then they could use their key. "If someone has subverted that with a dodgy firmware or a dodgy kernel, then I will not be able to use my key".
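With tpm2-tools, binding to PCR state is expressed as a policy; a hedged sketch, with the PCR selection and file names chosen purely for illustration:

```shell
# Build a policy digest over the current values of PCRs 0 and 7.
tpm2_createpolicy --policy-pcr --pcr-list sha256:0,7 --policy pcr.policy

# Seal a secret under a primary key, gated on that policy; the TPM will
# only unseal it while PCRs 0 and 7 still hold the recorded values.
tpm2_createprimary --hierarchy o --key-context primary.ctx
echo "my secret" | tpm2_create --parent-context primary.ctx \
    --policy pcr.policy --sealing-input - \
    --public seal.pub --private seal.priv
```

If the firmware or kernel changes, the PCR values change, the policy check fails, and the key becomes unusable until the machine is back in the expected state.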

TPMs can also generate random numbers, he said, though that is not necessarily particularly interesting. The TPM needs random numbers for many of its operations, and it exposes that interface "so you can ask the TPM for random numbers". There are faster sources of random numbers, such as the CPU's instruction set and USB-attached random-number generators, but TPMs are still useful largely because they are present in a lot of machines.
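Asking the TPM for entropy with tpm2-tools is a one-liner:

```shell
# Request 16 random bytes from the TPM and print them as hex.
tpm2_getrandom 16 --hex
```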

Crypto types

McDowell had said he was not going to talk about TPM 1.2 devices, but he mentioned them again to say they did not do cryptography right. The 1.2 specification only allowed for the use of 1024-bit RSA keys and the SHA-1 algorithm. The 2.0 specification added "this thing called crypto agility", and extended the baseline support to 2048-bit RSA keys, SHA-256, and NIST P-256 elliptic-curve cryptography.

Post-quantum cryptography is not there yet but it is being actively worked on upstream. Because of the crypto-agility standard, none of the interfaces used to talk to the TPM will change much—it will just be a different key type. All of the TPM vendors are ready, McDowell said; it is just a matter of waiting for the details to settle. "This will come before we need it, which is good".

Using the TPM

Next, he demonstrated how to check whether the random-number generator was enabled, but did not go into detail on how to use the feature. McDowell cautioned that AMD's integral TPM has some problems with its random-number generator, possibly to do with locking and conflicts over access to the device on the SPI bus.

The TPM can also be used to produce trust paths in software using the kernel's integrity-measurement architecture (IMA). For example, if a developer was building an appliance, it would be possible to use the kernel's IMA to create a list of the privileged code, with its hashes. To test this out without messing with the system TPM he recommended the swtpm package, which provides a TPM emulator.

What's more interesting about swtpm, McDowell said, is that it can be used in conjunction with QEMU to provide a TPM to a virtual machine. "I suspect a bunch of people are doing this to boot Windows 11" in virtual machines. It is a fully-featured TPM 2.0 implementation, and it was what he had used for the examples in his presentation. He also recommended the tpm2-tools package, which he called a kind of Swiss Army knife for working with TPMs. He put up a slide showing the tpm2_pcrread command being used to read PCRs 0-7 from the TPM:

    $ tpm2_pcrread sha256:0,1,2,3,4,5,6,7
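Pairing swtpm with QEMU in the way McDowell described takes one emulator process and a couple of device options; a sketch, with the state directory and socket path as arbitrary choices:

```shell
# Start the emulator with its own state directory and control socket.
mkdir -p /tmp/mytpm
swtpm socket --tpm2 --tpmstate dir=/tmp/mytpm \
    --ctrl type=unixio,path=/tmp/mytpm/swtpm.sock &

# Point QEMU at the emulator; the guest sees an ordinary TPM 2.0 device.
qemu-system-x86_64 \
    -chardev socket,id=chrtpm,path=/tmp/mytpm/swtpm.sock \
    -tpmdev emulator,id=tpm0,chardev=chrtpm \
    -device tpm-tis,tpmdev=tpm0 \
    ... # rest of the VM configuration
```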

The version of GNU Privacy Guard (GnuPG) in Debian 13 ("trixie") includes a feature that allows users to generate a key and store it in the TPM. "That means you've got a hardware-backed key, no need for the Yubikey plugged into your machine". Even if an attacker has access to the machine they cannot copy the key from it. "That, to me, is amazing." That feature is not available in the GnuPG version in Debian 12 ("bookworm").
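The GnuPG feature works from the --edit-key menu, converting an existing secret (sub)key into a TPM-resident one; roughly (the key ID here is a placeholder, and GnuPG must have been built with TPM support):

```shell
# Open the interactive key editor for the key to convert.
gpg --edit-key alice@example.org
# Then, at the gpg> prompt:
#   key 1        # select the subkey to move
#   keytotpm     # convert its private material to a TPM-bound form
#   save
```

After conversion, the on-disk secret key material can only be used via that machine's TPM, which is what makes copying it pointless.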

I asked how users could back up their key if the machine with the TPM died and was unusable. He said there were two options: generate the key in the CPU and store it in the TPM, with an offline backup on a USB key, or use a GPG subkey. "Then you have the ability to put another subkey on your laptop because the primary key is not the one stored in the TPM." His approach was to use an offline primary key, stored in a hardware token, and then to use subkeys extensively for different machines.

McDowell also showed examples of using the TPM to store a PKCS#11 token for use with SSH, which he said was "a bit annoying" because the process was convoluted. There was another method, using an SSH agent for TPM written in Go, which he described as "cheating" because it was not yet packaged for Debian. He lamented the fact that he was speaking at the same time as the Go team BoF, so he was unable to get help figuring out Debian's Go ecosystem.

Every now and again he thinks about "jumping through all those hoops" to be able to sign his own operating system images to use with Secure Boot. If he did that he could use the OpenSSL TPM 2.0 provider as a certificate authority with a secure backend stored in the TPM. But, he reminded the audience, TPMs are slow. "If you can get 10 signing operations a second out of your TPM, you're doing exceptionally well." It would never be possible to back a TLS web server with a TPM. It was much better for one-offs, such as certificate-authority operations, where a system is not being used to issue a lot of certificates.

A really interesting use of the TPM, McDowell said, was to automatically unlock a LUKS-encrypted drive. A user could set things up to automatically unlock the drive if the firmware, bootloader, and so forth are unchanged and avoid having to enter a passphrase just to decrypt the disk. He noted that users would still need to have a recovery password for LUKS, because if anything were to change about the machine—including rebuilding the initrd—then a user would have to have a passphrase to decrypt the disk. He showed a slide with an example using systemd-cryptsetup and dracut to enable this feature and said, "this is my first time playing with dracut; I didn't like it." He also noted he could not fit the entire example on the slide, but he included a link to a blog post about using TPM for disk decryption.
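He did not show the complete setup, but the systemd-cryptenroll side of such a configuration is typically a single command; a sketch, with the device name and PCR selection as assumptions rather than anything from the talk:

```shell
# Enroll the TPM as a LUKS key slot, bound to PCR 7 (Secure Boot state).
# Keep an existing passphrase or recovery key enrolled as the fallback.
systemd-cryptenroll --tpm2-device=auto --tpm2-pcrs=7 /dev/nvme0n1p3

# Tell the initrd to try the TPM at boot: in /etc/crypttab, use a line like
#   root  UUID=...  none  tpm2-device=auto
```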

An audience member asked how much of a pain it would be to "magically incorporate" the proper values when the kernel is updated so that the next time the system is booted it expects the new kernel. McDowell said that systemd does have tooling that will "attempt to do the calculations for what the PCR values will end up as"; he had not looked at that tooling extensively, however. There was still more pain than there should be in automating this, which is "one of the reasons that the systemd folks are pushing unified-kernel images" (UKIs). That would allow distributions to provide the initrd as part of the whole package and provide the PCR value along with it. In the current model, where everyone builds their own initrd, "we have no way of distributing those values as a project".
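The tooling in question is systemd-measure, which can precompute the PCR measurements that booting a given kernel and initrd will produce; a rough sketch, with the file paths as placeholders:

```shell
# Predict the PCR 11 measurements for a kernel/initrd combination before
# rebooting into it; the output can be used when resealing secrets.
/usr/lib/systemd/systemd-measure calculate \
    --linux=/boot/vmlinuz-6.12.0 \
    --initrd=/boot/initrd.img-6.12.0
```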

In general, he said, the systemd folks have been really good about trying to drive the use of TPMs forward. LWN covered some of this work in December. McDowell also gave a call out to James Bottomley for doing a lot of work on the kernel side of things "in terms of just generally improving the infrastructure" around TPMs.

One audience member wanted to know if he had seen any work that would allow programs like Firefox to have passkeys in the TPM. He was not aware of any implementations of passkeys in the TPM; the problem with the passkey approach and TPMs, he said, is that a passkey "normally wants some proof of user presence", such as a button press on a Yubikey. There is no equivalent of user presence with a TPM that couldn't be faked programmatically.

The slides for McDowell's talk are online now, and videos from DebConf25 should be published soon.

[Thanks to the Linux Foundation, LWN's travel sponsor, for funding my travel to Brest for DebConf25.]


Index entries for this article
Conference: DebConf/2025



If there is anything to worry about, Secure Boot is the issue people should be worried about

Posted Aug 6, 2025 16:07 UTC (Wed) by Lennie (subscriber, #49641) [Link] (1 responses)

In the crypto currency space people say: not your keys, not your coins.

As long as the firmware on your computer supports setting up your own keys or those you trust, there is no problem.

The issue is: most people don't know this or check this, so there is a potential systemic problem with Microsoft having the root keys.

At the moment PC users seem to be safe, Microsoft hasn't caused problems (intentional or otherwise).

When fwupd installs updates of the firmware on your machine (or dual boot and install Windows updates), that could change, in theory.

Very likely the option to disable Secure Boot will also remain (as I understand it, that is what these companies say they will do), so there is that.

So it's very much theoretical, but also shows the Linux world is not in complete control.

If there is anything to worry about, Secure Boot is the issue people should be worried about

Posted Aug 8, 2025 8:28 UTC (Fri) by SLi (subscriber, #53131) [Link]

They have much to lose. I don't think it makes much sense for someone to enroll their own keys if they don't have use for those keys. Microsoft is also going to be 1000x better at protecting their key.

Who holds the keys?

Posted Aug 6, 2025 18:28 UTC (Wed) by Arrange1030 (subscriber, #178702) [Link] (6 responses)

These are absolutely planned to be also used for DRM. Take a look at the Android Virtualization Framework.

https://source.android.com/docs/core/virtualization/archi...

The entire software stack is verified on boot using the DICE cert chain (thanks to the TPM). This proves that no one tampered with the "protected" and closed source pVMs that are running under the untampered pKVM hypervisor. Linux also runs under pKVM and cannot access pVM's memory. The hypervisor can map the decoder/decryptor HW MMIO ranges into the pVM, or allow it to pass DRM buffers to the TEE or something. After that, userspace Android can only send the Netflix frames to the pVM for decryption. If the DICE checks fail (like with a custom ROM), you cannot talk to the pVM.

Even though many of these pieces are open sourced, you cannot flash the TEE on Android phones without the OEM key. This is partially for good reason, since I wouldn't want a malicious secondhand device to access my fingerprint/face unlock data. The flipside is that we no longer own our devices. For example, on the X1 Elite laptops you cannot even flash the hypervisor. I'm sure something like this is coming to Windows too. The TPMs are enabling this Tivoization because we don't hold the keys.

Who holds the keys?

Posted Aug 6, 2025 18:38 UTC (Wed) by mjg59 (subscriber, #23239) [Link] (5 responses)

If it's using DICE then it's definitionally not a TPM.

Who holds the keys?

Posted Aug 6, 2025 18:45 UTC (Wed) by Arrange1030 (subscriber, #178702) [Link] (4 responses)

From https://trustedcomputinggroup.org/what-is-a-device-identi...

>There are three key use cases for DICE:
...
> in more complex security architectures working together with TPM.

Who holds the keys?

Posted Aug 6, 2025 18:49 UTC (Wed) by mjg59 (subscriber, #23239) [Link] (3 responses)

DICE also works together with the CPU and RAM and storage and every other component in the phone, but the thing you're complaining about still isn't a TPM.

Who holds the keys?

Posted Aug 11, 2025 15:46 UTC (Mon) by SLi (subscriber, #53131) [Link] (2 responses)

I understand how the proper use of terminology seems and is important, but I also think "TPM" is much better known a name than "HSM", and I think what's happening here is that TPM seems to be becoming a generic name for all HSMs. A bit like kleenex, aspirin or escalator. Probably for a non-expert, it's close enough to make sense.

Who holds the keys?

Posted Aug 11, 2025 17:46 UTC (Mon) by intelfx (subscriber, #130118) [Link]

> I think what's happening here is that TPM seems to be becoming a generic name for all HSMs

At most it might be this way for HSMs _integrated into the platform_ (as reflected in the name, Trusted _Platform_ Module).

There is a variety of pluggable (PCI, USB) HSMs and, to my knowledge, nobody is trying to call them TPMs.

Who holds the keys?

Posted Aug 11, 2025 18:25 UTC (Mon) by Wol (subscriber, #4433) [Link]

The problem with the "dumbing down" of language (in general) is that it makes clear communication impossible.

For example - in the realm of computers - the number of people who now just talk about RAM. With no clue whether it's actually RAM, or disk. (Made even worse now by those systems that have matching RAM and SSD, 32GB of each maybe.)

Or the COMPUTER LECTURER who re-purposed "real time" to mean "interactive". I had a bit of a go at him but he was unrepentant. And now, twenty years on, I'm working in an industry where real-time errors (that's real real-time) are a major cause of errors and real physical crashes that damage equipment and take systems out of service for hours at a time ...

> Probably for a non-expert, it's close enough to make sense.

The problem is when the non-expert NEEDS to understand the issue, at which point the fact they can't even use the words correctly becomes a MAJOR problem.

Cheers,
Wol

Secure boot

Posted Aug 6, 2025 18:44 UTC (Wed) by Cyberax (✭ supporter ✭, #52523) [Link]

> He showed a slide with an example using systemd-cryptsetup and dracut to enable this feature

Oh yep. I did this for my home server, and it never worked for me with stock Fedora. I ended up using sbctl ( https://github.com/Foxboron/sbctl ) to do signing.

Passkeys

Posted Aug 6, 2025 20:36 UTC (Wed) by grawity (subscriber, #80596) [Link] (7 responses)

> One audience member wanted to know if he had seen any work that would allow programs like Firefox to have passkeys in the TPM. He was not aware of any implementations of passkeys in the TPM; the problem with the passkey approach and TPMs, he said, is that a passkey "normally wants some proof of user presence", such as a button press on a Yubikey. There is no equivalent of user presence with a TPM that couldn't be faked programmatically.

That doesn't prevent Windows Hello from using it. In the case of passkeys, it seems to be enough that *unprivileged userspace* (websites and web browsers) cannot fake user presence programmatically, with the privileged OS component showing the confirmation UI.

Additionally, Windows Hello requires the user to enter a PIN, which is something that a TPM could potentially implement via policies (that's already done for BitLocker TPM+PIN).

So with modern Wayland/portal/flatpak desktops, something like "xdg-credential-portal" (previous name of [1]) seems entirely feasible. Though it doesn't necessarily have to be a browser-integrated system API; "u2f-hid" emulated a whole HID device and I think in theory that too could be made to use TPM + GUI confirmation in the same way.

[1] https://github.com/linux-credentials/libwebauthn

Passkeys

Posted Aug 6, 2025 21:29 UTC (Wed) by valderman (subscriber, #56479) [Link]

I think this is more than good enough for most use cases. For those where it isn't, you'd use a hardware token anyway.

I wrote a TOTP authenticator that uses the TPM to protect the shared secrets, which uses fingerprint verification via fprintd to approximate presence verification and it works pretty well. Sure, you can generate one time codes without verification if you have root, but (unlike Google Authenticator et al) at least you can't exfiltrate the secrets and keep generating codes offline.

Passkeys

Posted Aug 7, 2025 4:47 UTC (Thu) by pabs (subscriber, #43278) [Link]

Passkeys

Posted Aug 7, 2025 10:58 UTC (Thu) by muase (subscriber, #178466) [Link] (4 responses)

> That doesn't prevent Windows Hello from using it. In the case of passkeys, it seems to be enough that *unprivileged userspace* (websites and web browsers) cannot fake user presence programmatically, with the privileged OS component showing the confirmation UI.

This^^

The problem is: To do true user-presence-confirmation, you'd need a trusted link between your sensor and the secure element; either via a dedicated non-programmable signal path that can be set to true if presence is confirmed, or some kind of cryptographic pairing and sealed channel between the sensor and the secure element. I would be surprised if the TPM standards don't offer a specification for that; but afaik, nobody implements this (at least on the consumer market) – so it's not even practical atm to enforce true user-presence-confirmation without a Yubikey or similar.

The only PC-like systems I know of who do that are Macs, where the fingerprint sensor is uniquely paired with the secure enclave, and the entire connection between both is cryptographically sealed (the fingerprint representation is sent to the secure enclave and verified in there). This does not only have the nice side effect that even a kernel level exploit doesn't give you access to the user's fingerprint data; but it also allows you to generate keys with "Currently enrolled biometry" as a security requirement – so even if someone knows your password and uses it to enroll an additional fingerprint, they cannot use your key.

Funnily enough however, afaik even macOS doesn't implement this security level for passkeys atm; the user can also simply enter their password instead without ever touching the fingerprint sensor.

Passkeys

Posted Aug 8, 2025 0:02 UTC (Fri) by Cyberax (✭ supporter ✭, #52523) [Link] (1 responses)

> Funnily enough however, afaik even macOS doesn't implement this security level for passkeys atm; the user can also simply enter their password instead without ever touching the fingerprint sensor.

That's because it's impossible with the current secure enclave API. Passkeys need to be exportable, as they can be synced between devices.

You can use the SE to get a key that is used to decrypt the stored Passkey data, use it for whatever purpose, and then discard the decrypted data. This definitely can improve the security because the passkeys are represented as clear-text only during a brief window, but it's not foolproof.

Passkeys

Posted Aug 8, 2025 15:38 UTC (Fri) by muase (subscriber, #178466) [Link]

Good point – I didn't distinguish between FIDO2 hardware-backed keys (=non-exportable), and syncable passkeys in my mind. You're completely right – with passkeys, as opposed to "classical" FIDO2 keypairs, the passkey needs to be usable outside the SE; and while you could enforce true user-presence-confirmation for unsealing the passkey, it would be kinda pointless.

Passkeys

Posted Aug 9, 2025 6:03 UTC (Sat) by NYKevin (subscriber, #129325) [Link] (1 responses)

> The problem is: To do true user-presence-confirmation, you'd need a trusted link between your sensor and the secure element; either via a dedicated non-programmable signal path that can be set to true if presence is confirmed, or some kind of cryptographic pairing and sealed channel between the sensor and the secure element.

There's a very straightforward way of doing this. It's called "stop using TPMs to do things that security keys were designed to do." A security key integrates the sensor into the same hardware that has the chip, so the path is indeed trusted unless somebody has physically opened the key (which is rather difficult to do remotely, hence making it good enough for verifying non-authenticated user presence, which probably sounds pointless, but is quite helpful in slowing down a remote attacker's lateral movement through a large network).

Of course, the downside is that it is much easier to steal a security key than to desolder and run off with a TPM. But one could easily imagine a setup where the security key is permanently attached to the device instead of hanging off a USB port. Still not as secure as something soldered directly to the motherboard, but life is a series of tradeoffs, and you can always treat the passkey as a second factor (in addition to a password) if you're paranoid.

Unfortunately, the story with PINs is much more grim. If you don't trust the OS (and hardware, and firmware, and the Intel Management Engine that everybody keeps telling me is "probably fine," etc.), then the OS can keylog them, and there's basically nothing you can do about it, short of integrating a tiny numeric keypad into your security key. Nobody does that as far as I have heard of. OTOH, if you're really worried about this class of attack, then you probably work for a three-letter agency.

> Funnily enough however, afaik even macOS doesn't implement this security level for passkeys atm; the user can also simply enter their password instead without ever touching the fingerprint sensor.

Passkeys were not and have never been intended as a full replacement for passwords in all circumstances. They are intended to make passwords the backup flow, not to remove them entirely. It is good and proper for macOS to accept a password in lieu of a passkey.

Passkeys

Posted Aug 11, 2025 10:48 UTC (Mon) by tekNico (subscriber, #22) [Link]

> integrating a tiny numeric keypad into your security key. Nobody does that as far as I have heard of.

The Trezor hardware wallets do that.

Make Passwords a Thing of the Past
FIDO2 Is Now Available on Trezor Model T
https://blog.trezor.io/make-passwords-a-thing-of-the-past...

What a coincidence

Posted Aug 6, 2025 20:58 UTC (Wed) by Alphix (subscriber, #7543) [Link] (1 responses)

Funny timing, I'm actually working on integrating support for the kinds of keys (TPM2, FIDO2, PKCS#11) that systemd-cryptenroll supports into debian-installer right now (still early days, this requires changes that are definitely post-Trixie stuff).

What a coincidence

Posted Aug 7, 2025 19:30 UTC (Thu) by tamiko (subscriber, #115350) [Link]

This sounds very exciting!

It is already possible to manually set everything up after the installation has completed, by installing the dracut package and the missing systemd pieces and configuring everything by hand.

But first-class support in the Debian installer would really be a game changer.

Suse

Posted Aug 6, 2025 23:07 UTC (Wed) by leromarinvit (subscriber, #56850) [Link] (11 responses)

> An audience member asked how much of a pain it would be to "magically incorporate" the proper values when the kernel is updated so that the next time the system is booted it expects the new kernel. McDowell said that systemd does have tooling that will "attempt to do the calculations for what the PCR values will end up as"

Suse implemented something like this some time ago; I first bumped into the concept when I set up openSUSE Aeon for testing. Kinda neat, but it has exactly (?) the same requirements on the TPM as Windows 11 (though, unlike Windows, it will degrade gracefully). To be precise, not only does it need TPM 2.0, but also a feature called "PolicyAuthorizeNV" that, confusingly enough, not even all TPMs claiming to implement 2.0 support. Since my test system was missing that, I couldn't actually try it.

Suse

Posted Aug 7, 2025 11:07 UTC (Thu) by claudex (subscriber, #92510) [Link] (8 responses)

Do you have a link or some documentation about it? Because, to me, it seems to be a hard requirement for using the TPM to auto-unseal LUKS partitions and guarantee the security of the platform. Without it, if the system asks me for the unseal key, I don't know whether it was tampered with via a kernel command-line modification (for example), or whether there was an initrd update before the reboot that I missed. Maybe I'm missing something, but there is no easy way to find out during boot. (Of course, it is improbable that this is a direct security vulnerability, but it could allow disabling an LSM, for example.)

If we can set the values during the upgrade, that means that if I do have to enter the LUKS passphrase, there is something I should be investigating (or, for ordinary users, something to report to the IT team), because it shouldn't happen under normal conditions.

Suse

Posted Aug 8, 2025 0:06 UTC (Fri) by Cyberax (✭ supporter ✭, #52523) [Link]

It's very obtuse. The TLDR is that you need to use UKIs (Unified Kernel Images) that incorporate the kernel, command line options, and the initrd in one verified binary.

Suse

Posted Aug 13, 2025 7:06 UTC (Wed) by cyphar (subscriber, #110703) [Link] (6 responses)

This feature works using UKIs (Unified Kernel Images), which bundle the UEFI boot stub, kernel image, permitted command line(s), optionally an initrd, and some other resources. This produces a single PE binary that can be signed and verified as a bundle. The idea is for the UKI to be produced by the vendor of your kernel updates; because there isn't an initrd being re-generated on the user's machine, you can predict the PCR values that will result from booting it -- so, when updating, the updater can rebind your TPM-sealed LUKS keys to the new PCR values.

This doesn't require changing the PCR values you bind your TPM-sealed keys to (if you already do this today) -- in fact, it allows you to require more PCR values for your LUKS key to be unsealed because more of the boot chain is predictable and it avoids the kinds of attacks you mention.

Suse

Posted Aug 13, 2025 7:55 UTC (Wed) by claudex (subscriber, #92510) [Link] (5 responses)

I don't understand why the UKI is needed. It provides a known value, sure. But why can't this work with a classic initrd, with the value computed on the host where the initrd is generated?

Suse

Posted Aug 13, 2025 8:02 UTC (Wed) by leromarinvit (subscriber, #56850) [Link] (3 responses)

This is also how I understand systemd-pcrlock's manpage:

> lock-kernel-initrd FILE, unlock-kernel-initrd
>
> Generates/removes a .pcrlock file based on a kernel initrd cpio archive. This is useful for predicting measurements the Linux kernel makes to PCR 9 ("kernel-initrd"). Do not use for systemd-stub(7) UKIs, as the initrd is combined dynamically from various sources and hence does not take a single input, like this command.
>
> This writes/removes the file /var/lib/pcrlock.d/720-kernel-initrd.pcrlock/generated.pcrlock.
>
> Added in version 255.

But like I said, I wasn't able to try it out, so I'm not really qualified to say if it works that way.

Suse

Posted Aug 13, 2025 8:35 UTC (Wed) by claudex (subscriber, #92510) [Link] (2 responses)

Thanks, I didn't know this tool. I'll try it and report my findings if I obtain a meaningful result (positive or negative).

Suse

Posted Aug 15, 2025 15:46 UTC (Fri) by claudex (subscriber, #92510) [Link] (1 responses)

Now I understand the challenge better. I have GRUB on the system where I checked, and that means all of the parsed config ends up as events in PCR 8, with things like:

> Raw: grub_cmd: [ xy = xy ]\000
> Raw: grub_cmd: insmod all_video\000
> Raw: grub_cmd: set gfxpayload=keep\000

So it'll be challenging for a program to predict. However, scripting it should work, since I know what should change, so I'll try to predict it for my system. But it can't easily be done at the distribution level without a UKI, even with the hash of the initrd.

Suse

Posted Aug 15, 2025 22:24 UTC (Fri) by leromarinvit (subscriber, #56850) [Link]

Yes, GRUB will measure every action it takes, which makes measured boot challenging (see e.g. https://github.com/fedora-silverblue/issue-tracker/issues...). I gather that's at least part of the reason SUSE switched to systemd-boot (for MicroOS at least), which is more predictable in this regard.

Suse

Posted Aug 14, 2025 5:50 UTC (Thu) by Cyberax (✭ supporter ✭, #52523) [Link]

> I don't understand why the UKI is needed.

It simplifies the checking logic. You just need to verify one binary that has everything and then chainload into it. With classic initrd you also need to measure it (and the kernel cmdline).

Suse

Posted Aug 7, 2025 12:01 UTC (Thu) by grawity (subscriber, #80596) [Link] (1 responses)

It can be implemented without NV – as long as the TPM lets you specify custom PCR values when sealing (which I believe any 2.0 TPM does), you can "just" take the current PCR event log from /sys and replay it, only swapping certain old event hashes (kernel .efi image, kernel cmdline, etc) with new ones, to generate the future PCR that will work for the next boot. I've implemented this; it works, as long as you're fine with authorizing just one kernel.
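A minimal sketch of that replay, for the curious (simplified: real code parses the binary TCG event log, e.g. from /sys/kernel/security/tpm0/binary_bios_measurements, and handles multiple hash banks; the event names and digests below are made up for illustration):

```python
import hashlib

def replay_pcr(digests):
    """Replay a list of SHA-256 event digests into a PCR value.

    A PCR starts at all-zeros, and each event extends it:
    new = SHA256(old || event_digest).
    """
    pcr = bytes(32)
    for d in digests:
        pcr = hashlib.sha256(pcr + d).digest()
    return pcr

def predict_pcr(log, replacements):
    """Recompute a PCR, swapping old event digests for new ones.

    log: list of (description, digest) pairs from the current boot.
    replacements: {description: new_digest} for events that will
    change on the next boot (new kernel hash, new cmdline, ...).
    """
    return replay_pcr([replacements.get(desc, digest)
                       for desc, digest in log])

# Example: predict the post-update PCR after a kernel upgrade
old_kernel = hashlib.sha256(b"vmlinuz-6.1").digest()
new_kernel = hashlib.sha256(b"vmlinuz-6.2").digest()
cmdline = hashlib.sha256(b"root=/dev/mapper/root ro").digest()
log = [("kernel", old_kernel), ("cmdline", cmdline)]

future = predict_pcr(log, {"kernel": new_kernel})
assert future == replay_pcr([new_kernel, cmdline])
```

The sealed secret can then be rebound to `future` before rebooting, so the next boot unseals without a passphrase prompt.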

(also, I very much do not like doing needless NV updates, or efivar updates, when I don't know in numbers how many writes the flash can take, so my implementation instead updates the LUKS keyslot on-disk)

But it *is* somewhat more brittle compared to binding to your SecureBoot certificate (PCR7).

Suse

Posted Aug 13, 2025 8:11 UTC (Wed) by leromarinvit (subscriber, #56850) [Link]

This blog post describing an early draft of the feature also doesn't mention NV. It seems to me the requirement came in as an implementation detail, based on how the systemd folks chose to implement it. I can understand that it's likely easier to reason about from a security perspective that way, and given that newly developed features tend to be deployed mostly on newer hardware rather than the old stuff that doesn't support NV, I can see why they did it that way.

Effectively just for secure boot?

Posted Aug 7, 2025 9:55 UTC (Thu) by aragilar (subscriber, #122569) [Link] (3 responses)

Maybe this will be more obvious once the video recording is posted, but the impression I get is that TPMs are effectively only for secure boot? They sound slower than an actual HSM, they are behind on crypto standards (and because they're on the device they can't be upgraded - so, as is happening with Windows 11, otherwise-fine hardware becomes unusable), and their unique selling point is the event log on boot? I guess that, other than their ubiquity (with the caveat that the machine you're on makes a major difference), using something else seems like a better choice from a technical/security standpoint?

Effectively just for secure boot?

Posted Aug 7, 2025 10:19 UTC (Thu) by noodles (subscriber, #39336) [Link] (2 responses)

While TPMs provide a measurement component that is very useful in a secure boot environment (knowing the image is signed is not sufficient, you want to know _what_ image is signed too), I very much did not talk about secure boot as part of my talk because it's not relevant for the interesting use of TPMs within Debian. You can store a key in the TPM and attest that it's actually hardware backed and non-exfiltratable outside of any use you might be making of the TPM to measure boot state. Even under Windows.

Additionally, if you care about tying your personal keys to machine state, you can make use of the TPM telling you what's been booted (for avoiding Evil Maid attacks) without having any secure boot support.

A proper HSM is obviously more desirable for performance reasons, but then you're also dealing with increased cost.

Effectively just for secure boot?

Posted Aug 7, 2025 11:42 UTC (Thu) by aragilar (subscriber, #122569) [Link] (1 responses)

Ah, I'm probably misusing the term "secure boot" then; I had assumed it included anything related to checking the boot state and its trustworthiness.

Can you store many non-boot-related keys in TPMs? I recall reading https://fy.blackhats.net.au/blog/2023-02-02-how-hype-will... and the vibe I get about TPMs is that they are basically cheaper HSMs for the purpose of storing keys (on the device)?

Effectively just for secure boot?

Posted Aug 7, 2025 15:15 UTC (Thu) by muase (subscriber, #178466) [Link]

> Can you store many non-boot-related keys in TPMs, I recall reading https://fy.blackhats.net.au/blog/2023-02-02-how-hype-will... and the vibe I get about TPMs is that they are basically cheaper HSMs for the purposes of storing keys (on the device)?

Yes, you can use a TPM to generate, import or even seal external keys in varying degrees; you can pin them to hardware/software state with PCRs, and you can also require a PIN or similar for quick-but-still-interactive access.

I'm also not too sold on the "strong" distinction between TPMs and HSMs – it not only causes confusion (as in your case), but from what I know, HSM is the general umbrella term for everything that can work as an isolated secure element and does cryptography internally without exposing the keys. Be it a SmartCard, a YubiKey, a USB YubiHSM, a TPM 2.0 module, an Apple Secure Enclave, a high-throughput PCIe module, a Pluton security chip... from what I know, and how the term is used in my environment, those are all HSMs – just with different optimization goals: SmartCards/SIMs are removable and quickly exchangeable, TPMs are built in and tightly integrated into the boot cycle (which allows them some additional attestations), the Secure Enclave has an additional focus on embedded biometry validation, etc.

To make things worse, those distinctions are also not strict; for example, there are PKCS#11 PCIe-HSMs that are strictly focused on a user-interactive root-CA-like role, and have a very low throughput and are not at all usable for TLS-handshakes. And for a mass-built and -shipped device, Apple's Secure Enclave has the absolutely stunning track-record of ZERO scientifically or publicly documented full (private) key extractions[1]; which suddenly makes it a low-throughput, but security-wise top-tier candidate compared to a lot of TPMs or even HSMs.

-----

[1] There were some successful attacks on the SEP, like the Pangu one, but that is not well documented and we don't know if key extraction would have been possible, nor is it scientifically credible; and we have the checkm8/checkra1n combination, which exploited the T2 – but again, no documented key extraction. And while running custom code is a **very big and impressive feat**, it's still not key extraction (similar to how running code in userland is not a root or kernel exploit).

Yes for DRM

Posted Aug 7, 2025 22:07 UTC (Thu) by comex (subscriber, #71521) [Link] (24 responses)

For decades, TPM-based DRM on PCs has been a purely theoretical threat. Every other computing platform stood up their own version of Secure Boot and used it for DRM, but on PC, hardware-enforced DRM has been limited to less-general-purpose stuff (Intel ME encrypted video, plus some stuff like SGX that has mostly been used on servers).

Until now.

Just in the last few days, two major upcoming games (Call of Duty: Black Ops 7 and Battlefield 6) have announced they will require Secure Boot to be enabled on Windows:

https://www.theverge.com/news/720007/call-of-duty-pc-anti...

To be fair, the developers have the laudable goal of preventing cheating. This won’t stop all cheaters (there will always be ways to compromise the kernel, plus some cheating devices work purely externally). But it will probably make a meaningful dent. I have to admit that.

Also, in practice this only affects Windows driver developers. It doesn’t affect people gaming with Wine because, well, they were *already* blocked from playing the previous iterations of these games by anti-cheat.

But if you say TPM can’t be used for DRM, here is your counterexample.

In retrospect, it makes sense that we’d see it get used for anti-cheat rather than more traditional kinds of DRM. This kind of check is not nearly secure enough for something like DRM video, where one user breaking the DRM on one device is game over. But anti-cheat is a numbers game.

Yes for DRM

Posted Aug 7, 2025 23:26 UTC (Thu) by excors (subscriber, #95769) [Link]

Secure Boot for anti-cheat isn't new - Valorant has required it for at least 4 years (I think for all users on Win 11, and for some users on Win 10 after suspicious activity is detected (not suspicious enough to instantly ban them, but enough to enable tighter restrictions at the cost of excluding some suspicious-but-innocent players who don't have a TPM)). It appears to be a pretty successful anti-cheat system. There was a lot of noise about it being a "rootkit" when the game was released but that didn't stop the game becoming very popular.

And it's not alone: The Finals has required Secure Boot for 2 years; Fortnite has required it for high-level tournament matches since Feb this year; Battlefield 2042 has required it since May. Seems the only recent change is that since Win 10 is nearly end-of-life, some new games are only aiming to support Win 11 compatible hardware (which requires TPM) and are not providing exceptions for TPM-less Win 10 systems, which is what triggered the current fuss.

Yes for DRM

Posted Aug 8, 2025 0:53 UTC (Fri) by Cyberax (✭ supporter ✭, #52523) [Link] (22 responses)

And this is useless, because TPM is not DRM. It provides additional hurdles, though.

You can just patch the kernel-level anti-cheat to ignore the integrity check. True DRM would require some kind of interactive attestation that the running kernel doesn't have anything unsigned. But it's not possible with the TPM.

Yes for DRM

Posted Aug 8, 2025 9:36 UTC (Fri) by excors (subscriber, #95769) [Link] (1 responses)

Valorant does use TPM remote attestation to some extent (and I expect the other modern anti-cheat systems are similar). Some cheats use a nearly-undetectable hypervisor to emulate the TPM and provide innocent-looking fake responses to the anti-cheat system, and attestation lets the anti-cheat system detect that.

I think it's primarily using the TPM as a hard-to-spoof hardware ID, so that banned players can't simply make a new free account and start playing again, though it can be bypassed by using an external TPM chip (instead of fTPM) and replacing it with a new chip every time you get banned. Secure Boot is an extra hurdle since you have to enroll new keys before it'll boot with your kernel-level cheat (unless you find another way to bypass it), and I presume the anti-cheat can detect those non-standard keys and will consider it an additional point of suspicion.

It's not perfect but it doesn't need to be - it's one of many layers that combine to make cheating more awkward for users to set up and less profitable for cheat makers, to keep the number of cheaters low enough to not ruin the game for everyone else.

Yes for DRM

Posted Aug 8, 2025 18:56 UTC (Fri) by Cyberax (✭ supporter ✭, #52523) [Link]

> Valorant does use TPM remote attestation to some extent

TPMs are not well-suited for remote attestation. They can attest that the running software is the same as it was at some point in the past, but you can't pre-compute the expected hash values for an arbitrary PC.

This is fine if your goal is to protect a corporate laptop, as you can guarantee that you install the software in a clean environment. But a dedicated cheater can start subverting the system during the installation so it's _never_ in a clean slate state.

Though it certainly makes attacks much more complicated, which is the end-goal for the anti-cheat protections.

Yes for DRM

Posted Aug 8, 2025 10:25 UTC (Fri) by SLi (subscriber, #53131) [Link] (19 responses)

Why is it not possible? I don't know exactly how the Windows Secure Boot pipeline works, but I'd assume it would exclude loading untrusted (~not Microsoft-signed) kernel drivers?

This means that a TPM can attest that the boot chain only contained Microsoft-signed elements in an unforgeable way to a game server, which basically means that you don't have cheat tricks in your kernel.

This does leave open some avenues, like PCI devices that (I think) can in most cases compromise the secure boot chain even before the bootloader, or flashing a custom BIOS. These will change the measurements, but, as said, there's no way to maintain a whitelist of good firmware and PCI option ROMs.

Yes for DRM

Posted Aug 9, 2025 6:14 UTC (Sat) by NYKevin (subscriber, #129325) [Link] (18 responses)

> This means that a TPM can attest that the boot chain only contained Microsoft-signed elements in an unforgeable way to a game server, which basically means that you don't have cheat tricks in your kernel.

The problem with this idea is that most of these games are already using their own kernel drivers to snoop on players and (supposedly) prevent cheating. I'm not sure if Microsoft is willing to sign those drivers - there is a lot of potential for abuse if somebody manages to extract the driver and convince it to behave slightly differently than intended.

Yes for DRM

Posted Aug 9, 2025 16:07 UTC (Sat) by excors (subscriber, #95769) [Link] (17 responses)

> I'm not sure if Microsoft is willing to sign those drivers

They are: "The driver has been signed by Riot’s own EV cert, which has in turn been signed by Microsoft as per their code signing process." (https://www.riotgames.com/en/news/a-message-about-vanguar...)

> there is a lot of potential for abuse if somebody manages to extract the driver and convince it to behave slightly differently than intended

Doesn't the same potential exist for all drivers? I'd have more trust in the security of an anti-cheat driver whose primary goal is to resist hostile adversaries, than a random driver for e.g. my keyboard LEDs where the developers probably had no interest in security and never expected to be attacked.

Yes for DRM

Posted Aug 9, 2025 16:30 UTC (Sat) by mb (subscriber, #50428) [Link] (15 responses)

>Doesn't the same potential exist for all drivers?

Sure. That's why such code in the trust chain must be minimized, instead of adding code that tries to prevent kiddies from cheating.

This chain of trust doesn't work in practice, because there are probably tens of millions of lines of code that must be trusted between the root of trust in UEFI and the trusting application in userspace.
IMO this technique is fundamentally flawed.

>I'd have more trust in the security of an anti-cheat driver whose primary goal is to resist hostile adversaries, than a random driver for e.g. my keyboard LEDs where the developers probably had no interest in security and never expected to be attacked.

Yeah, well. The keyboard driver is present whether or not there is an anti-cheat driver.
Adding an anti-cheat driver can only weaken the system as a whole if it contains a bug.

Yes for DRM

Posted Aug 9, 2025 17:05 UTC (Sat) by SLi (subscriber, #53131) [Link] (14 responses)

I think there's an important distinction to be made between vulnerabilities in drivers or other software and the entire model being broken. Even in the practical sense. If one driver has such a vulnerability and people start to abuse it, 1) it can be fixed, 2) the game company can just start requiring that the attested chain does not contain that (potentially old version of) the driver.

This is different from if there's a fundamentally unpluggable hole that you can exploit by e.g. running a specific EFI binary before booting. In the first case, detecting and fixing this is possible. In the second case, you can exploit weaknesses in the exploit to try to detect it, but those should generally be patchable and (near-)perfection is attainable.

Blacklisting drivers known to be used for cheating is _much_ easier than whitelisting everything that should be allowed.

Yes for DRM

Posted Aug 10, 2025 2:22 UTC (Sun) by Cyberax (✭ supporter ✭, #52523) [Link] (13 responses)

The TPM is not used to verify the exact set of loaded drivers. And once you get access to the kernel-level IO, you can install a "forever rootkit" that would virtualize the real IO, hiding the rootkit from the OS and patching the code used to detect it.

This is all the easier because the rootkit doesn't have to hide from the computer's owner.

And this is not a theory, the current generation of most advanced cheats (e.g. https://blurred.gg/guides/info ) works like this, along with DMA-based RAM snooping. As I understand, the custom rootkit keeps the IOMMU disabled for the DMA engine to work.

Yes for DRM

Posted Aug 10, 2025 14:02 UTC (Sun) by SLi (subscriber, #53131) [Link] (12 responses)

Isn't it used to measure them, though? Or if not, at least I think it could be (I know that before the bootloader everything is measured; I have no idea what Windows does). And of course the measurements are logged in a TPM event log, or an operating-system equivalent of one, which means that a game server can require you to provide the logs from the moment of power-on to the present and use the TPM to attest that they correspond to your current measurement state. This means the game company would see at least the hash of a driver that allows a compromise, given a successful attestation.

Yes for DRM

Posted Aug 10, 2025 19:57 UTC (Sun) by Cyberax (✭ supporter ✭, #52523) [Link] (11 responses)

You can disable secure boot, enable the unsafe mode in Windows, install the rootkit, and then re-enable the secure boot.

Windows will happily re-measure the boot chain with the rootkit. The values of TPM registers will be different so you will lose access to any previously sealed secrets, but there's no way for the anti-cheat driver to pre-compute the "correct" values.

It's possible in theory to build a robust DRM system with the TPM, but it requires coordination and periodic updates for all the PC manufacturers.

Yes for DRM

Posted Aug 10, 2025 20:02 UTC (Sun) by SLi (subscriber, #53131) [Link] (10 responses)

Does this mean that Windows does not maintain a (measured) log of what it loaded that can be attested to? Such a log would still provide the game company with hashes, and probably some other self-provided identity information about the drivers, attestable at least up to the rogue driver - which seems to provide at least some kind of plausible target for blacklisting.

Yes for DRM

Posted Aug 11, 2025 9:40 UTC (Mon) by farnz (subscriber, #17727) [Link] (9 responses)

The problem is that such a log could be truncated by the rogue driver - you remove the last entry when you load (which you can do, since you're in kernel space - albeit you may need a helper app to tell you what the log "should" look like without you), and now it's impossible to distinguish this attested log from one on a system without the rogue driver.

The protection against rogue drivers is via driver signing; if the driver is signed, it's trusted. Once it's loaded, it can rearrange memory so that everything looks plausible for the situation where the driver was never loaded, including resetting logs and changing flags to hide itself.

Yes for DRM

Posted Aug 11, 2025 14:58 UTC (Mon) by SLi (subscriber, #53131) [Link] (8 responses)

I don't think that's true. The log is protected by the state of the TPM which you cannot rewind. The protocol would go something like this:

1. Client: Please give me access to the game server!
2. Remote: Ok, let's see. Tell me who you are and show me your TPM logs. Here's a challenge nonce.
3. Client: I'm a TPM by FooCorp. Here's the certificate from FooCorp for my public key. And here's your challenge nonce to prove that this is not a replay. Here's the TPM logs. I have attached a signed attestation with your nonce from the TPM that my PCRs correspond to what the log says.
4. Remote: Excellent, looks clean. You are welcome.

Now, if nothing else prevents it (i.e. there's no mechanism for the game to get a notification when, for example, a driver is about to be loaded), the game server can just ask you to periodically re-attest that you are still in that state, or to provide the logs covering everything up to your current state.

Now, what is logged in those measured logs is of course up to the operating system, but it definitely should contain a hash of the driver and most likely other information about its provenance (whether it's signed, who signed it, size, filename, other metadata, hash). The game server could even theoretically require you to send the unknown driver for analysis (they know its hash).
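A toy version of steps 1-4 above, for concreteness. A real TPM quote is an asymmetric signature over the PCR digest and nonce, chained to a manufacturer certificate; here an HMAC with a key shared between the "TPM" and the verifier stands in for that signature - an assumption purely for brevity:

```python
import hashlib, hmac, os

def replay(digests):
    # PCR extend: new = SHA256(old || digest), starting from all-zeros
    pcr = bytes(32)
    for d in digests:
        pcr = hashlib.sha256(pcr + d).digest()
    return pcr

class FakeTPM:
    """Stand-in for a TPM: holds one PCR and signs quotes with a
    device key (a real TPM uses an attestation key certified by the
    manufacturer, not a shared HMAC key)."""
    def __init__(self, key):
        self.key = key
        self.pcr = bytes(32)
    def extend(self, digest):
        self.pcr = hashlib.sha256(self.pcr + digest).digest()
    def quote(self, nonce):
        # Quote = MAC over current PCR value and the verifier's nonce
        return hmac.new(self.key, self.pcr + nonce, hashlib.sha256).digest()

def verify(log, quote, nonce, key):
    """Game-server side: replay the claimed log and check the quote
    covers exactly that PCR value and our fresh nonce."""
    expected = hmac.new(key, replay(log) + nonce, hashlib.sha256).digest()
    return hmac.compare_digest(quote, expected)

key = os.urandom(32)   # shared with the verifier in this toy model
tpm = FakeTPM(key)
log = [hashlib.sha256(x).digest() for x in (b"bootloader", b"kernel", b"driver")]
for d in log:
    tpm.extend(d)

nonce = os.urandom(16)
q = tpm.quote(nonce)
assert verify(log, q, nonce, key)           # honest log passes
assert not verify(log[:-1], q, nonce, key)  # truncated log fails
```

Note that the truncated-log case fails here precisely because the PCR cannot be rewound - though, as discussed elsewhere in the thread, a passing quote still says nothing about which physical machine produced it.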

Yes for DRM

Posted Aug 11, 2025 15:16 UTC (Mon) by farnz (subscriber, #17727) [Link] (7 responses)

If the cheat system really wants to push the limits, you have two systems; one is clean, and can answer all the TPM queries you want answered. The second runs the dirty driver, which redirects your TPM log request across to the clean system, gets you a verified answer (which passes, since the cheat code is in userspace on the second system) and returns it as-if it came from the dirty system.

The result is that you've correctly proven that the clean system exists and is in the control of the user trying to cheat. But that's not what you wanted to know - you wanted to know that the system you are running on is the clean system, and not the dirty system.

Yes for DRM

Posted Aug 11, 2025 15:38 UTC (Mon) by SLi (subscriber, #53131) [Link] (6 responses)

Yes, this could be possible. The normal way to do this with e.g. DRM-protected content is to design the system so that on a clean system you cannot extract some secrets - for example, in a video player the system gets access to a media decryption key and promises to leak neither that key nor the decrypted data, and that promise is backed either by doing the processing in hardware (easier to contain) or guaranteed by the OS.

I think that, at least if the game loads a kernel driver, this should be doable, assuming a truly "clean system" - and I admit that with current technology that's not really easy if you need to support tons of hardware. But that's where blacklisting bad drivers could come into play. And certainly some kernel APIs could be designed for doing something like this from user space; at the simplest, something like "execute this binary, protecting its memory from everyone else, and attest to it". The kernel would then have a function `attest(nonce)` that gives the process a signed attestation that:

1. This is an .exe with hash $hash
2. I started it so that its memory is not accessible from outside
3. The TPM state is $state (signed by the TPM, nonced by the nonce)

(1) and (2) are guaranteed only by the OS being in a known good state, which is the weakest link. (3) is guaranteed by the TPM.

... but I admit that at this point, speculating about what Windows does or could do, I am a bit out of my depth. I know TPM well enough as a concept to believe that this is something it should enable. I should probably go read about how it's really used by Windows; I only know the period until the bootloader starts pretty well, and I have some idea about how systemd uses it on Linux.

Yes for DRM (but possibly not on Windows, at least in the current state)

Posted Aug 11, 2025 16:02 UTC (Mon) by SLi (subscriber, #53131) [Link]

Ok, apparently, looking into what Windows does (and I may still be wrong):

- Windows really does stop the measurement chain fairly early, i.e. what ends up in the PCRs is firmware and bootloader measurements and some info about the OS loader and early kernel init.
- Crucially, I think it *doesn't* routinely measure post-boot driver loads, which obviously breaks the chain if you can load your own driver (so you can only reliably attest to what happened in early boot).
- Device Health Attestation (DHA) apparently can attest to: 1) if Secure Boot is on or off; 2) BitLocker status; 3) Code integrity policy (whether kernel-mode driver signing is enforced); but it does not give you a hash of every loaded driver.
- Anti-cheat vendors rely more on the driver signing enforcement and their own kernel driver scanning, not PCR measurements (and that obviously can be subverted by drivers loaded before the anti-cheat driver, even if not easy), plus hardware identity (from TPM) for bans.

So, I would claim that locking down the system at a level where a game vendor can reliably blacklist drivers is doable using a TPM, but it would require a future version of Windows to start measuring all driver loads.

Yes for DRM

Posted Aug 11, 2025 16:13 UTC (Mon) by farnz (subscriber, #17727) [Link] (4 responses)

The core problem is that, having loaded a "bad" driver, the bad driver can lie through its teeth to the game client. Once the bad driver is loaded, it is, for all reasonable purposes, part of the kernel, and can (for example) redirect the "execute this binary, protecting its memory from everyone else, and attest to it" API to let it launch the binary on the clean system, and attest to the signed attest from the clean system.

Underlying this is that as soon as the "bad" driver is loaded, the OS is in a known-bad state, and anything running on that OS can't trust it. Remote attestation, as offered by the TPM, allows you to confirm that the user has access to a clean system, but not that the processes involved in attestation are actually running on that clean system. This is irrespective of the OS; once you have an untrusted system that has full userspace access to a clean system, you can get the attestations from the clean system, and send them back in place of the attestations that you can get from your "dirty" system.

The fix is to not support general systems; either the "dirty" system's kernel space must be locked down so that you cannot run "unwanted" but signed drivers (games console model), or the "clean" system's user and kernel space must be locked down so that the "dirty" system can't run arbitrary userspace on it to get attestation answers from the clean system.

Yes for DRM

Posted Aug 13, 2025 0:53 UTC (Wed) by SLi (subscriber, #53131) [Link] (3 responses)

> Remote attestation, as offered by the TPM, allows you to confirm that the user has access to a clean system, but not that the processes involved in attestation are actually running on that clean system.

Yes, this is true. But I think there's another fix besides what you suggest. The remote is now able to encrypt secrets so that only the clean system can access them. Because it's a clean system, it presumably won't leak them to the unclean system. This can be used for DRM (and is the common way to do DRM, I would claim). Typically you give the clean system a key to access encrypted secrets.

Yes for DRM

Posted Aug 13, 2025 1:17 UTC (Wed) by mjg59 (subscriber, #23239) [Link]

You can also just plug in a second TPM and give it whatever set of measurements you want…

Yes for DRM

Posted Aug 13, 2025 9:38 UTC (Wed) by farnz (subscriber, #17727) [Link] (1 responses)

The userspace on the clean system is under attacker control; kernelspace (and thus userspace) on the dirty system is under attacker control. How do you propose to allow the userspace code on the dirty system to ask for a secret in such a way that the kernelspace on the dirty system can't proxy the request to a userspace process under its control on the clean system, and proxy the result back?

More generally, given that I can buy TPMs off the shelf (e.g. this TPM 2.0 evaluation board), what stops me from having a custom device that gets sent the same sequence of TPM 2.0 commands as a "real" TPM on a motherboard (hence has the "correct" hash values - all I need is a motherboard with an external TPM, such as the ASUS ROG STRIX TRX40-XE GAMING, where I can intercept the SPI bus to a discrete TPM and see what happens), but isn't actually reflecting the state of the "dirty" system?

Or, for even more sophistication, my hardware device sits between the discrete TPM on my gaming motherboard and the mobo connector, and simply filters out TPM commands that would show that my cheat driver is present - relying on my cheat driver modifying kernel data structures like the log to agree with the TPM.

Yes for DRM

Posted Aug 13, 2025 13:13 UTC (Wed) by pizza (subscriber, #46) [Link]

> Or, for even more sophistication, my hardware device sits between the discrete TPM on my gaming motherboard and the mobo connector, and simply filters out TPM commands that would show that my cheat driver is present -

There's a lot of precedent here; most so-called console "modchips" work this way.

Yes for DRM

Posted Aug 28, 2025 15:21 UTC (Thu) by nim-nim (subscriber, #34454) [Link]

> I'd have more trust in the security of an anti-cheat driver whose primary goal is to resist hostile adversaries

It’s not designed to resist hostile adversaries in a general sense; it’s designed to resist abuse within the game. The anti-cheat devs could not care less about all the other ways their driver could be abused.

Future of DRM

Posted Aug 8, 2025 10:54 UTC (Fri) by SLi (subscriber, #53131) [Link]

I currently see two plausible avenues for DRM, if the industry coalesces around them.

The first one indeed uses the TPM and Secure Boot to require that you're running only approved software at the kernel level. If it controls everything from bootloader to kernel, it's already going to be quite limiting and annoying. If it wants to protect against hardware too, I believe it would need to either drop support for many legacy PCI devices or maintain a whitelist of allowed ones; and then you'd require signed PCI option ROMs (and that firmware be careful not to enable PCI DMA without an IOMMU). But I believe a solution absolutely exists where a game company can require that all the non-userland software that ran anywhere in the chain from power-on to game start is signed by a company it trusts.

The second option, which I think would be more secure but would require buy-in from CPU vendors and is not going to happen for a few years, would be using some sort of secure-enclave mechanism of CPUs that allows running untamperable, unobservable code even under an untrusted and hostile OS or hypervisor. Features like this exist on the latest server CPUs, but they're not 100% there yet, and to control the full chain it would also require things like the enclave getting untamperable control of some hardware, such as mice and keyboards (enforced by the CPU).

Those things would likely also allow unanalyzable malware, which many would find annoying. But there are also pretty neat uses for this, at least on servers; it is nice to be able to run code in the cloud that the cloud vendor could not observe without help from the CPU vendor. And, in some sense, I would be annoyed by not being able to do the same stuff on my home computer as in the cloud :)

This is being frantically developed for the server/cloud use case; we'll see if it makes its way to consumer CPUs.

a meaningful subject line

Posted Aug 13, 2025 19:21 UTC (Wed) by mirabilos (subscriber, #84359) [Link] (2 responses)

> generate asymmetric keys, such as SSH keys or signing keys, on the device that cannot be exfiltrated

But why would I want to do that? I want to back up these things to my normal backup storage.

> its use demonstrates that a user has access to a specific piece of hardware

But what’s the use? Hardware goes kaputt. I can quickly obtain a replacement, restore onto it and things work as before, if I don’t use the TPM for/like that.

Also, the RNG on TPMs is notoriously bad; the baseline for such an RNG is incredibly low (a PRNG with an “unknown and not exfiltratable seed” suffices), and there is no guarantee of entropy. (Full disclosure: I have written software to use a TPM 1.2, to read from its RNG and put this as an *additional* source into the kernel entropy pool. Never as the single source.) This is another good reason why a TPM should never be used to generate any cryptographic key; its random bits are not much better than those of plastic router boxen.

Always let the OS do it — and not just Linux /dev/urandom with its crippled pool size either. touch ~/.rnd and then put 「openssl rand -rand ~/.rnd -writerand ~/.rnd 4 >/dev/urandom」 into your crontab, and run it at least hourly. Over time, enough entropy will amass in ~/.rnd (each call also reads from the kernel and mixes that into the seed file), and a few bits will also find their way back to the kernel on each call, which is not a bad thing either.
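The "additional source, never the single source" approach can be sketched as follows: hash TPM output together with OS randomness, so that even a completely broken TPM RNG cannot make the result weaker than the OS RNG alone. The tpm_random_bytes() helper is a hypothetical stand-in; a real read would go through /dev/tpmrm0 or the tpm2_getrandom tool.

```python
import hashlib
import os

def tpm_random_bytes(n: int) -> bytes:
    """Hypothetical stand-in for reading the TPM's RNG. Here it returns
    all zeros, modeling the worst case of a fully predictable TPM."""
    return bytes(n)

def mixed_key_material(n: int = 32) -> bytes:
    """Derive key material from the OS RNG, folding TPM output in as an
    extra input. Because the TPM bytes only feed a hash alongside
    os.urandom(), a bad TPM RNG cannot reduce the result's entropy
    below what the OS already provides."""
    os_part = os.urandom(n)
    tpm_part = tpm_random_bytes(n)
    return hashlib.sha256(b"mix" + os_part + tpm_part).digest()[:n]

k1, k2 = mixed_key_material(), mixed_key_material()
assert len(k1) == 32
assert k1 != k2  # fresh OS entropy on every call
```

This is the same design principle the kernel uses when it mixes hardware RNGs into its pool instead of trusting them directly.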

> 2048-bit RSA keys

Right. It’s called baseline for a reason…

> AMD's integral TPM has some problems with the random-number generator

*cough*

> everyone builds their own initrd

Better that way. I can include things to set up the network so I can unlock LUKS via SSH ;-)

But good to know that dracut is not something one wants to play with, I’ll stick to initramfs-tools as well then.

(OT: the plain text format doesn’t do quotes well, apparently ☹)

Use case for TPM-tied credentials

Posted Aug 15, 2025 11:06 UTC (Fri) by farnz (subscriber, #17727) [Link]

> its use demonstrates that a user has access to a specific piece of hardware

But what’s the use? Hardware goes kaputt. I can quickly obtain a replacement, restore onto it and things work as before, if I don’t use the TPM for/like that.

The use case is hardware that can be tampered with by an attacker in some way; if the attacker images your disk, then the TPM-backed secret doesn't let them decrypt it, and they've gained nothing. If you've set up a measured boot system, with the TPM only releasing symmetric keys when the measurements are correct, then the attacker also can't tamper with your system to exfiltrate data without you losing the ability to use that hardware for the intended purpose.

And you can still replace the hardware in this situation - it's just that instead of being able to take a HDD/SSD out of a failed machine and plug it into a new machine, you have to create new secrets on a new machine and restore from a securely kept backup. All you're doing is stopping an attacker from being able to do the same if they can borrow your laptop while you're (e.g.) drinking coffee with friends.

Underlying this is that it's only useful in cases where you need a guarantee that the user has access to a machine which has not been tampered with since you last verified its trustworthiness via physical access; my work laptop has FDE keys in the TPM so that I can boot without network available, but I need an alternative keying method (e.g. access to a secured physical network, a FIDO U2F device, a long passphrase) to get in if my device appears to have been tampered with (even if I did the tampering). My home server, however, is physically secure, and does not use a TPM to boot.
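That two-path unlock policy can be sketched conceptually (hypothetical helpers, not a real systemd-cryptenroll or TPM API): the disk key is released automatically when the measured boot state matches what the key was sealed against, and otherwise only an alternative credential such as a recovery passphrase gets you in.

```python
import hashlib
from typing import Optional

def extend(pcr: bytes, event: bytes) -> bytes:
    """TPM 2.0 PCR extend: new = SHA-256(old || SHA-256(event))."""
    return hashlib.sha256(pcr + hashlib.sha256(event).digest()).digest()

def boot_state(events: list[bytes]) -> bytes:
    """Accumulate the boot chain's measurements into one PCR value."""
    pcr = b"\x00" * 32
    for event in events:
        pcr = extend(pcr, event)
    return pcr

def unlock(sealed_state: bytes, current_state: bytes,
           passphrase: Optional[str] = None) -> Optional[str]:
    """Normal path: TPM releases the key if measurements match.
    Fallback path: a recovery passphrase (illustrative value)."""
    if current_state == sealed_state:
        return "disk key (released by TPM)"
    if passphrase == "correct horse battery staple":
        return "disk key (recovery path)"
    return None  # tampered boot chain, no valid fallback

good = boot_state([b"shim", b"grub", b"kernel"])
bad = boot_state([b"shim", b"grub", b"evil-kernel"])
assert unlock(good, good) == "disk key (released by TPM)"
assert unlock(good, bad) is None
assert unlock(good, bad, "correct horse battery staple") is not None
```

Note that the extend construction is why tampering is detectable at all: because each PCR value hashes in the previous one, an attacker cannot reorder or substitute a boot component without changing the final measurement.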

a meaningful subject line

Posted Aug 15, 2025 18:40 UTC (Fri) by raven667 (subscriber, #5198) [Link]

>> generate asymmetric keys, such as SSH keys or signing keys, on the device that cannot be exfiltrated

> But why would I want to do that? I want to back up these things to my normal backup storage.

> its use demonstrates that a user has access to a specific piece of hardware

> But what’s the use? Hardware goes kaputt. I can quickly obtain a replacement, restore onto it and things work as before, if I don’t use the TPM for/like that.

Not everyone has the same security requirements. For a fancier HSM, the answer to backup/restore is cutting the key into chunks and storing parts of the key on smartcards that are distributed to multiple people, requiring some number of them to come together to combine their key parts into a whole signing key. For more everyday situations, with something like a Yubikey, the answer can be buying two of them and enrolling both everywhere you need them, then storing one safely as a backup.

For something protected by a TPM, like on-disk encryption, you can use normal backup methods for the data while the system is unlocked and accessible, with whatever encryption policy you want for the backup, separate from the host full-disk encryption, knowing that if the hardware breaks, or the key is intentionally destroyed, the data is inaccessible even if the disk/NVMe itself is "fine". I would guess that for things like the TPM you don't have to create the key on the device; I think you can create a private key on the host and push it into the device one time as a write-only operation, similar to how a Yubikey can store some number of TOTP/HOTP tokens.

I thought passkeys worked the same way, but it seems that there are mechanisms to sync the actual private keys between devices, or transfer them during device upgrades, rather than enrolling each device under your control with a separate hardware-locked key into each service doing key-based auth. The security properties of a key embedded in a physical object are easier to reason about, though: no amount of OS-level RCE bug can remotely access a key sitting on my desk; it would take literal ninjas coming through my walls, which is a whole different thing.
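The "some number of them must come together" scheme described here is classically done with Shamir's secret sharing; a minimal sketch over a prime field follows (illustrative only; a real key ceremony would use a vetted implementation, and production schemes work byte-wise over GF(256)).

```python
import random

# A Mersenne prime comfortably larger than a 128-bit signing key, so
# any such key is a valid field element.
P = 2**521 - 1

def split(secret: int, k: int, n: int) -> list[tuple[int, int]]:
    """Return n shares of the secret; any k of them reconstruct it.
    The secret is the constant term of a random degree-(k-1) polynomial;
    each share is one point on that polynomial."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def f(x: int) -> int:
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def combine(shares: list[tuple[int, int]]) -> int:
    """Lagrange interpolation at x = 0 recovers the constant term."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

key = int.from_bytes(b"0123456789abcdef", "big")
shares = split(key, k=3, n=5)
assert combine(shares[:3]) == key   # any three of the five suffice
assert combine(shares[2:5]) == key
```

The threshold property is what makes the smartcard ceremony safe: two directors colluding (k - 1 shares) learn nothing about the key, while any three can reconstitute it after a hardware failure.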


Copyright © 2025, Eklektix, Inc.
This article may be redistributed under the terms of the Creative Commons CC BY-SA 4.0 license
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds