Don't fear the TPM
There is a great deal of misunderstanding, and some misinformation, about the Trusted Platform Module (TPM); to combat this, Debian developer Jonathan McDowell would like to clear the air and help users understand what it is good for, as well as what it's not. At DebConf25 in Brest, France, he delivered a talk about TPMs that explained what they are, why people might be interested in using them, and how users might do so on a Debian system.
![Jonathan McDowell](https://static.lwn.net/images/2025/jonathan_mcdowell-sm.png)
McDowell started with a disclaimer; he was giving the talk in his personal capacity, not on behalf of his employer. He wanted to talk about "something that is useful to Debian and folks within Debian", rather than the use of TPMs in a corporate environment.

McDowell has been a Debian developer for quite some time—more than 24 years, in fact. Professionally, he has done a lot of work with infrastructure; he has written networking software, high-end storage systems, and software-defined networking. He has also run an ISP. To him, TPMs are simply "another piece of infrastructure and how we secure things".
Unfortunately, there is a lot of FUD around TPMs, he said, especially now that Microsoft is pushing TPM devices as part of the baseline requirement for Windows 11. A TPM has been part of that baseline since Windows 11 was introduced, of course, but with the end of life approaching for Windows 10, people are starting to take more notice.
Many people are responding to TPMs by "throwing up their hands and going, 'this is terrible'"; but they are actually really useful devices. One of the reasons that they are useful is that they are so common. If you buy a new PC, "it is incredibly likely that you have some TPM capability on it". Unless it's an Apple system—they have Secure Enclave instead, "which does a whole bunch of different things that have some overlap".
What is a TPM?
So, he asked rhetorically, "what is a TPM?" He displayed a slide with Wikipedia's definition of a TPM, which says that a TPM is "a secure cryptoprocessor that implements the ISO/IEC 11889 standard". McDowell said he did not recognize that definition, despite having worked with TPMs for several years. He repeated the definition and said, "that doesn't mean much to me, and it's also not entirely true", because there are multiple TPM implementations that are not secure cryptoprocessors.
There are three variants of TPM that McDowell said he was familiar with: discrete, integral, and firmware. A discrete TPM is a separate chip that lives on the motherboard. Historically, the discrete TPM has been connected over the low pin count (LPC) bus, but modern systems mostly use the serial peripheral interface (SPI) bus. Then there is the integral TPM, which sits on the same die as the CPU, but as a separate processor. Examples of integral TPMs include Intel's Management Engine and AMD's Secure Technology (formerly called "Platform Security Processor"). These are logically separate from the CPU that applications run on, which gives some extra security, "but not a full discrete chip".
Finally, there are firmware TPMs, such as those based on Arm's TrustZone technology. In that case, McDowell said, the TPM is actually running on the application processor in a more secure context, but firmware TPMs can be vulnerable to speculative side-channel attacks. The idea is that the TPM is a small, specialized device that "concentrates on cryptographic operations and is in some way more secure than doing it on your main processor".
McDowell digressed a bit to talk about TPM 1.2 devices. "I hate TPM 1.2 devices. I still have to deal with a bunch of them in life. They are ancient." TPM 2.0, which is the baseline that Windows 11 expects, launched in 2014. He would like TPM 1.2 devices to all go away and said that he would not be discussing them further.
Not for DRM
One of the things that TPMs can do is state attestation. The idea is that the TPM can attest to the software that is running on the machine:
And if all of the stars align and you get everything right, you can actually build a full chain from the first piece of code, the firmware that the CPU runs, all the way up to the application layer and say, I am running this stack of software and I will provide you a signed cryptographic proof.
However, TPMs are not a realistic way of doing digital rights management (DRM), McDowell assured the audience—no matter how much Microsoft or Netflix might want to use them in that way. "They could not build a database of all these values for all the legitimate machines in the world." Trying to do so would result in "support calls coming out of their ears". It is absolutely possible to constrain things so that the TPM can provide a level of security for embedded systems and appliances, he said. "In particular, you can potentially use it for some level of knowing that someone hasn't tampered with your firmware image". But full DRM on general-purpose PCs is not going to happen.
A standard TPM for a PC has 24 platform-configuration registers (PCRs), McDowell said. PCR 0 through PCR 7 belong to the firmware and are used by UEFI to measure the bootloader and "base bits" of what the operating system runs. PCR 8 through PCR 15 are "under the control of the bootloader and the OS", and PCR 16 through PCR 23 are "something different, and we'll not talk about those at all".
PCRs are SHA hash registers; at boot time, the TPM resets the values of the registers to zero. Then the hash values of various objects are measured into the registers. For example, when GRUB boots, it logs its activity into the TPM event log and performs a cryptographic hash operation to extend the value of the PCR; this is explained in more detail in LWN's coverage of a talk by Matthew Garrett. McDowell displayed a slide that showed a command to read the TPM's event log:
# tpm2_eventlog /sys/kernel/security/tpm0/binary_bios_measurements
Each entry in the log shows something that has been measured into the registers. Details about Secure Boot, for example, are put into PCR 7, which provides an attestation that "this machine has used Secure Boot, and these are the keys it has used to do Secure Boot". All of that is machine-state attestation, he said, "which is the thing that people get worried about" being used to enforce DRM.
Key storage
The much more interesting thing from a Debian point of view, he said, is key storage. While TPMs are small and incredibly slow devices, it is possible to securely generate asymmetric keys, such as SSH keys or signing keys, on the device in a way that the private key cannot be exfiltrated:
You can say "make me a key", and it will make you a key, and that private part of the key can only be exported from the device in a way that only the device itself can read.
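With the tpm2-tools package that McDowell recommended, creating such a key might look like the following sketch (the handle and file names are arbitrary examples, not anything from the talk):

```shell
# Create a primary key in the owner hierarchy; it is derived inside
# the TPM and never leaves it.
tpm2_createprimary -C o -c primary.ctx

# Create an RSA key under that primary; key.priv is an encrypted blob
# that only this particular TPM can load and use.
tpm2_create -C primary.ctx -G rsa -u key.pub -r key.priv

# Load the key and park it at a persistent handle for later use.
tpm2_load -C primary.ctx -u key.pub -r key.priv -c key.ctx
tpm2_evictcontrol -C o -c key.ctx 0x81010001
```

The `key.priv` blob can be backed up freely; without the TPM that wrapped it, it is just ciphertext.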
Obviously an attacker could use the TPM while they are connected to the machine. But if the user kicks them out or fixes whatever has happened, the attacker would not be able to export any keys stored in the TPM to another machine. That, McDowell said, is incredibly useful. He reiterated that TPMs are slow; they are not full-blown, high-performance hardware security module (HSM) devices. But they are almost everywhere, "and that's why they're interesting, right?" They are a standard piece of hardware that most PCs will have if they're not too old.
If one wants to get more into the corporate side of things, he said, some hardware vendors will provide a certificate that ties the TPM's unique endorsement key to the serial number of the laptop. "So I can do a very strong statement of 'this is the machine I think it should be'." But that, he reiterated, involves a slightly complicated procedure. For single-machine use cases, "you don't have to worry about this bit too much".
This also allows the TPM to do attestation for the key. That is more complicated, McDowell said, but "you can do an attestation where the TPM goes, 'that key was definitely generated in me'". That might be desirable, for example, for a certificate authority or when signing packages. If a key is hardware-backed, its use demonstrates that a user has access to a specific piece of hardware—such as a company-issued laptop.
He elaborated later that using it for attestation involved an "annoyingly interactive challenge and response dance"; it was not possible to have the TPM simply generate an attestation statement that can be validated and trusted. However, if one does the full attestation dance, "I can guarantee [the key is] hardware-backed and I can guarantee it's hardware-backed by a particular vendor of TPM".
Another neat thing that users can do is to bind a key so that it can only be used if the PCRs are in a particular state. That means it's possible to ensure that someone hasn't messed with the firmware, to guard against an "evil maid" attack. If the machine is still running the image that the user expected to be running, then they could use their key. "If someone has subverted that with a dodgy firmware or a dodgy kernel, then I will not be able to use my key".
TPMs can also generate random numbers, he said, though that is not necessarily particularly interesting. The TPM needs random numbers for many of its operations, and it exposes that interface "so you can ask the TPM for random numbers". There are faster sources of random numbers, such as the CPU's instruction set and USB-attached random-number generators, but TPMs are still useful largely because they are present in a lot of machines.
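Asking the TPM for entropy is a one-liner with tpm2-tools:

```shell
# Request 16 bytes of random data from the TPM, printed as hex.
tpm2_getrandom 16 --hex
```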
Crypto types
McDowell had said he was not going to talk about TPM 1.2 devices, but he mentioned them again to say they did not do cryptography right. The 1.2 specification only allowed for the use of 1024-bit RSA keys and the SHA-1 algorithm. The 2.0 specification added "this thing called crypto agility", and extended the baseline support to 2048-bit RSA keys, SHA-256, and NIST P-256 elliptic-curve cryptography.
Post-quantum cryptography is not there yet, but it is being actively worked on upstream. Because of the crypto-agility standard, none of the interfaces used to talk to the TPM will change much—it will just be a different key type. All of the TPM vendors are ready, McDowell said; it is just a matter of waiting for the details to settle. "This will come before we need it, which is good".
Using the TPM
Next, he demonstrated how to check to see if the random-number generator was enabled, but did not go into detail on how to use the feature. McDowell cautioned that AMD's integral TPM has some problems with the random-number generator, possibly to do with locking and conflicts over accessing the device over the SPI bus.
The TPM can also be used to produce trust paths in software using the kernel's integrity-measurement architecture (IMA). For example, if a developer was building an appliance, it would be possible to use the kernel's IMA to create a list of the privileged code, with its hashes. To test this out without messing with the system TPM he recommended the swtpm package, which provides a TPM emulator.
What's more interesting about swtpm, McDowell said, is that it can be used in conjunction with QEMU to provide a TPM to a virtual machine. "I suspect a bunch of people are doing this to boot Windows 11" in virtual machines. It is a fully featured TPM 2.0 implementation, and it was what he had used for the examples in his presentation. He also recommended the tpm2-tools package, which he called a kind of Swiss Army knife for working with TPMs. He put up a slide showing the tpm2_pcrread command being used to read PCRs 0-7 from the TPM:
$ tpm2_pcrread sha256:0,1,2,3,4,5,6,7
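The swtpm-plus-QEMU combination he mentioned can be wired up roughly like this (the paths and VM options are examples, not his configuration):

```shell
# Start a TPM 2.0 emulator with a UNIX control socket.
mkdir -p /tmp/mytpm
swtpm socket --tpm2 --tpmstate dir=/tmp/mytpm \
    --ctrl type=unixio,path=/tmp/mytpm/swtpm-sock &

# Point QEMU at the emulator; the guest sees an ordinary TPM device.
qemu-system-x86_64 -m 4096 -drive file=disk.img,format=qcow2 \
    -chardev socket,id=chrtpm,path=/tmp/mytpm/swtpm-sock \
    -tpmdev emulator,id=tpm0,chardev=chrtpm \
    -device tpm-tis,tpmdev=tpm0
```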
The version of GNU Privacy Guard (GnuPG) in Debian 13 ("trixie") includes a feature that allows users to generate a key and store it in the TPM. "That means you've got a hardware-backed key, no need for the Yubikey plugged into your machine". Even if an attacker has access to the machine, they cannot copy the key from it. "That, to me, is amazing." That feature is not available in the GnuPG version in Debian 12 ("bookworm").
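Moving an existing GnuPG key into the TPM is done from the key-edit menu with the keytotpm command; a sketch (the key ID is a placeholder, and this assumes a GnuPG build with TPM support, as in trixie):

```shell
gpg --edit-key 0xDEADBEEF
# At the gpg> prompt:
#   key 1        # select the subkey to convert
#   keytotpm     # re-wrap its private material so only this TPM can use it
#   save
```

After this, the on-disk private key material is a TPM-wrapped blob; copying `~/.gnupg` to another machine yields nothing usable.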
I asked how users could back up their key if the machine with the TPM died and was unusable. He said there were two options: generate the key in the CPU and store it in the TPM, with an offline backup on a USB key, or use a GPG subkey. "Then you have the ability to put another subkey on your laptop because the primary key is not the one stored in the TPM." His approach was to use an offline primary key, stored in a hardware token, and then to use subkeys extensively for different machines.
McDowell also showed examples of using the TPM to store a PKCS#11 token for use with SSH, which he said was "a bit annoying" because the process was convoluted. There was another method, using an SSH agent for TPM written in Go, which he described as "cheating" because it was not yet packaged for Debian. He lamented the fact that he was speaking at the same time as the Go team BoF, so he was unable to get help figuring out Debian's Go ecosystem.
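The PKCS#11 route goes through the tpm2-pkcs11 library; an outline of the convoluted process he alluded to might look like this (the PINs are placeholders, and the module path varies by distribution):

```shell
# Initialize a token store and create an SSH key inside the TPM.
tpm2_ptool init
tpm2_ptool addtoken --pid=1 --label=ssh --userpin=MyPin --sopin=MySoPin
tpm2_ptool addkey --label=ssh --userpin=MyPin --algorithm=ecc256

# Export the public half, then point OpenSSH at the PKCS#11 module.
ssh-keygen -D /usr/lib/x86_64-linux-gnu/libtpm2_pkcs11.so.1 > tpm.pub
ssh -I /usr/lib/x86_64-linux-gnu/libtpm2_pkcs11.so.1 user@host
```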
Every now and again he thinks about "jumping through all those hoops" to be able to sign his own operating-system images to use with Secure Boot. If he did that, he could use the OpenSSL TPM 2.0 provider as a certificate authority with a secure backend stored in the TPM. But, he reminded the audience, TPMs are slow. "If you can get 10 signing operations a second out of your TPM, you're doing exceptionally well." It would never be possible to back a TLS web server with a TPM. It was much better for one-offs, such as certificate-authority operations, where a system is not being used to issue a lot of certificates.
A really interesting use of the TPM, McDowell said, was to automatically unlock a LUKS-encrypted drive. A user could set things up to automatically unlock the drive if the firmware, bootloader, and so forth are unchanged, and avoid having to enter a passphrase just to decrypt the disk. He noted that users would still need to have a recovery password for LUKS, because if anything were to change about the machine—including rebuilding the initrd—then a user would have to have a passphrase to decrypt the disk. He showed a slide with an example using systemd-cryptsetup and dracut to enable this feature and said, "this is my first time playing with dracut; I didn't like it." He also noted he could not fit the entire example on the slide, but he included a link to a blog post about using TPM for disk decryption.
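The heart of such a setup is systemd-cryptenroll; a minimal sketch (the device path and PCR selection are examples):

```shell
# Enroll the TPM as an additional LUKS key slot, bound to PCR 7
# (the Secure Boot state); keep an existing passphrase as recovery.
systemd-cryptenroll --tpm2-device=auto --tpm2-pcrs=7 /dev/nvme0n1p3

# Tell systemd-cryptsetup to try the TPM at boot, in /etc/crypttab:
#   root  /dev/nvme0n1p3  none  tpm2-device=auto
```

If the measured boot state changes, the TPM slot stops working and the system falls back to asking for the recovery passphrase.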
An audience member asked how much of a pain it would be to "magically incorporate" the proper values when the kernel is updated, so that the next time the system is booted it expects the new kernel. McDowell said that systemd does have tooling that will "attempt to do the calculations for what the PCR values will end up as"; he had not looked at that tooling extensively, however. There was still more pain than there should be in automating this, which is "one of the reasons that the systemd folks are pushing unified kernel images" (UKIs). That would allow distributions to provide the initrd as part of the whole package and provide the PCR value along with it. In the current model, where everyone builds their own initrd, "we have no way of distributing those values as a project".
In general, he said, the systemd folks have been really good about trying to drive the use of TPMs forward; LWN covered some of this work in December. McDowell also gave a call-out to James Bottomley for doing a lot of work on the kernel side of things "in terms of just generally improving the infrastructure" around TPMs.
One audience member wanted to know if he had seen any work that would allow programs like Firefox to have passkeys in the TPM. He was not aware of any implementations of passkeys in the TPM; the problem with the passkey approach and TPMs, he said, is that a passkey "normally wants some proof of user presence", such as a button press on a Yubikey. There is no equivalent of user presence with a TPM that couldn't be faked programmatically.
The slides for McDowell's talk are online now, and videos from DebConf25 should be published soon.
[Thanks to the Linux Foundation, LWN's travel sponsor, for funding my travel to Brest for DebConf25.]
| Index entries for this article | |
|---|---|
| Conference | DebConf/2025 |
Posted Aug 6, 2025 16:07 UTC (Wed)
by Lennie (subscriber, #49641)
[Link] (1 responses)
As long as the firmware on your computer supports setting up your own keys or those you trust, there is no problem.
The issue is: most people don't know this or check this, so there is a potential systemic problem with Microsoft having the root keys.
At the moment PC users seem to be safe, Microsoft hasn't caused problems (intentional or otherwise).
When fwupd installs updates of the firmware on your machine (or dual boot and install Windows updates), that could change, in theory.
Very likely the option to disable Secure Boot will also remain (as I understand it, that is what these companies say they will do), so there is that.
So it's very much theoretical, but also shows the Linux world is not in complete control.
Posted Aug 8, 2025 8:28 UTC (Fri)
by SLi (subscriber, #53131)
[Link]
Posted Aug 6, 2025 18:28 UTC (Wed)
by Arrange1030 (subscriber, #178702)
[Link] (6 responses)
https://source.android.com/docs/core/virtualization/archi...
The entire software stack is verified on boot using the DICE cert chain (thanks to the TPM). This proves that no one tampered with the "protected" and closed source pVMs that are running under the untampered pKVM hypervisor. Linux also runs under pKVM and cannot access pVM's memory. The hypervisor can map the decoder/decryptor HW MMIO ranges into the pVM, or allow it to pass DRM buffers to the TEE or something. After that, userspace Android can only send the Netflix frames to the pVM for decryption. If the DICE checks fail (like with a custom ROM), you cannot talk to the pVM.
Even though many of these pieces are open sourced, you cannot flash the TEE on Android phones without the OEM key. This is partially for good reason, since I wouldn't want a malicious secondhand device to access my fingerprint/face unlock data. The flipside is that we no longer own our devices. For example, on the X1 Elite laptops you cannot even flash the hypervisor. I'm sure something like this is coming to Windows too. The TPMs are enabling this Tivoization because we don't hold the keys.
Posted Aug 6, 2025 18:38 UTC (Wed)
by mjg59 (subscriber, #23239)
[Link] (5 responses)
Posted Aug 6, 2025 18:45 UTC (Wed)
by Arrange1030 (subscriber, #178702)
[Link] (4 responses)
>There are three key use cases for DICE:
Posted Aug 6, 2025 18:49 UTC (Wed)
by mjg59 (subscriber, #23239)
[Link] (3 responses)
Posted Aug 11, 2025 15:46 UTC (Mon)
by SLi (subscriber, #53131)
[Link] (2 responses)
Posted Aug 11, 2025 17:46 UTC (Mon)
by intelfx (subscriber, #130118)
[Link]
At most it might be this way for HSMs _integrated into the platform_ (as reflected in the name, Trusted _Platform_ Module).
There is a variety of pluggable (PCI, USB) HSMs and, to my knowledge, nobody is trying to call them TPMs.
Posted Aug 11, 2025 18:25 UTC (Mon)
by Wol (subscriber, #4433)
[Link]
For example - in the realm of computers - the number of people who now just talk about RAM. With no clue whether it's actually RAM, or disk. (Made even worse now by those systems that have matching RAM and SSD, 32GB of each maybe.)
Or the COMPUTER LECTURER who re-purposed "real time" to mean "interactive". I had a bit of a go at him but he was unrepentant. And now, twenty years on, I'm working in an industry where real-time errors (that's real real-time) are a major cause of errors and real physical crashes that damage equipment and take systems out of service for hours at a time ...
> Probably for a non-expert, it's close enough to make sense.
The problem is when the non-expert NEEDS to understand the issue, at which point the fact they can't even use the words correctly becomes a MAJOR problem.
Cheers,
Posted Aug 6, 2025 18:44 UTC (Wed)
by Cyberax (✭ supporter ✭, #52523)
[Link]
Oh yep. I did this for my home server, and it never worked for me with stock Fedora. I ended up using sbctl ( https://github.com/Foxboron/sbctl ) to do signing.
Posted Aug 6, 2025 20:36 UTC (Wed)
by grawity (subscriber, #80596)
[Link] (7 responses)
That doesn't prevent Windows Hello from using it. In the case of passkeys, it seems to be enough that *unprivileged userspace* (websites and web browsers) cannot fake user presence programmatically, with the privileged OS component showing the confirmation UI.
Additionally, Windows Hello requires the user to enter a PIN, which is something that a TPM could potentially implement via policies (that's already done for BitLocker TPM+PIN).
So with modern Wayland/portal/flatpak desktops, something like "xdg-credential-portal" (previous name of [1]) seems entirely feasible. Though it doesn't necessarily have to be a browser-integrated system API; "u2f-hid" emulated a whole HID device and I think in theory that too could be made to use TPM + GUI confirmation in the same way.
Posted Aug 6, 2025 21:29 UTC (Wed)
by valderman (subscriber, #56479)
[Link]
I wrote a TOTP authenticator that uses the TPM to protect the shared secrets, which uses fingerprint verification via fprintd to approximate presence verification and it works pretty well. Sure, you can generate one time codes without verification if you have root, but (unlike Google Authenticator et al) at least you can't exfiltrate the secrets and keep generating codes offline.
Posted Aug 7, 2025 4:47 UTC (Thu)
by pabs (subscriber, #43278)
[Link]
https://blog.hansenpartnership.com/webauthn-in-linux-with...
Posted Aug 7, 2025 10:58 UTC (Thu)
by muase (subscriber, #178466)
[Link] (4 responses)
This^^
The problem is: To do true user-presence-confirmation, you'd need a trusted link between your sensor and the secure element; either via a dedicated non-programmable signal path that can be set to true if presence is confirmed, or some kind of cryptographic pairing and sealed channel between the sensor and the secure element. I would be surprised if the TPM standards don't offer a specification for that; but afaik, nobody implements this (at least on the consumer market) – so it's not even practical atm to enforce true user-presence-confirmation without a Yubikey or similar.
The only PC-like systems I know of who do that are Macs, where the fingerprint sensor is uniquely paired with the secure enclave, and the entire connection between both is cryptographically sealed (the fingerprint representation is sent to the secure enclave and verified in there). This does not only have the nice side effect that even a kernel level exploit doesn't give you access to the user's fingerprint data; but it also allows you to generate keys with "Currently enrolled biometry" as a security requirement – so even if someone knows your password and uses it to enroll an additional fingerprint, they cannot use your key.
Funnily enough however, afaik even macOS doesn't implement this security level for passkeys atm; the user can also simply enter their password instead without ever touching the fingerprint sensor.
Posted Aug 8, 2025 0:02 UTC (Fri)
by Cyberax (✭ supporter ✭, #52523)
[Link] (1 responses)
That's because it's impossible with the current secure enclave API. Passkeys need to be exportable, as they can be synced between devices.
You can use the SE to get a key that is used to decrypt the stored Passkey data, use it for whatever purpose, and then discard the decrypted data. This definitely can improve the security because the passkeys are represented as clear-text only during a brief window, but it's not foolproof.
Posted Aug 8, 2025 15:38 UTC (Fri)
by muase (subscriber, #178466)
[Link]
Posted Aug 9, 2025 6:03 UTC (Sat)
by NYKevin (subscriber, #129325)
[Link] (1 responses)
There's a very straightforward way of doing this. It's called "stop using TPMs to do things that security keys were designed to do." A security key integrates the sensor into the same hardware that has the chip, so the path is indeed trusted unless somebody has physically opened the key (which is rather difficult to do remotely, hence making it good enough for verifying non-authenticated user presence, which probably sounds pointless, but is quite helpful in slowing down a remote attacker's lateral movement through a large network).
Of course, the downside is that it is much easier to steal a security key than to desolder and run off with a TPM. But one could easily imagine a setup where the security key is permanently attached to the device instead of hanging off a USB port. Still not as secure as something soldered directly to the motherboard, but life is a series of tradeoffs, and you can always treat the passkey as a second factor (in addition to a password) if you're paranoid.
Unfortunately, the story with PINs is much more grim. If you don't trust the OS (and hardware, and firmware, and the Intel Management Engine that everybody keeps telling me is "probably fine," etc.), then the OS can keylog them, and there's basically nothing you can do about it, short of integrating a tiny numeric keypad into your security key. Nobody does that as far as I have heard of. OTOH, if you're really worried about this class of attack, then you probably work for a three-letter agency.
> Funnily enough however, afaik even macOS doesn't implement this security level for passkeys atm; the user can also simply enter their password instead without ever touching the fingerprint sensor.
Passkeys were not and have never been intended as a full replacement for passwords in all circumstances. They are intended to make passwords the backup flow, not to remove them entirely. It is good and proper for macOS to accept a password in lieu of a passkey.
Posted Aug 11, 2025 10:48 UTC (Mon)
by tekNico (subscriber, #22)
[Link]
The Trezor hardware wallets do that.
Make Passwords a Thing of the Past
Posted Aug 6, 2025 20:58 UTC (Wed)
by Alphix (subscriber, #7543)
[Link] (1 responses)
Posted Aug 7, 2025 19:30 UTC (Thu)
by tamiko (subscriber, #115350)
[Link]
It is already possible to manually set everything up after the installation has completed, by installing the dracut package and the missing systemd pieces and configuring everything by hand.
But first-class support in the Debian installer would really be a game changer.
Posted Aug 6, 2025 23:07 UTC (Wed)
by leromarinvit (subscriber, #56850)
[Link] (11 responses)
SUSE implemented something like this some time ago; I first bumped into the concept when I set up openSUSE Aeon for testing. Kinda neat, but it does have exactly (?) the same requirements on the TPM as Windows 11 to work (of course, unlike that, it will degrade gracefully). To be precise, not only does it need TPM 2.0, but also a feature called "PolicyAuthorizeNV" that - confusingly enough - not even all TPMs claiming to implement 2.0 support. Since my test system was missing that, I couldn't actually try it.
Posted Aug 7, 2025 11:07 UTC (Thu)
by claudex (subscriber, #92510)
[Link] (8 responses)
If we can put the values in during the upgrade, that means that if I do have to enter the LUKS passphrase, there is something I should be investigating (or, for ordinary users, something to report to the IT team), because it shouldn't happen under normal conditions.
Posted Aug 8, 2025 0:06 UTC (Fri)
by Cyberax (✭ supporter ✭, #52523)
[Link]
Posted Aug 13, 2025 7:06 UTC (Wed)
by cyphar (subscriber, #110703)
[Link] (6 responses)
This feature works using UKIs (Unified Kernel Images), which bundle the UEFI boot stub, kernel image, permitted command-line(s), optionally an initrd, and some other resources. This produces a single PE binary that can be signed as a bundle and verified. The idea is for the UKI to be produced by the vendor of your kernel updates, and because there isn't an initrd that is being re-generated on the users' machine, you therefore can predict the PCR values that will be loaded when booting it -- so when updating the updater can rebind your TPM-sealed LUKS keys to the new PCR values. This doesn't require changing the PCR values you bind your TPM-sealed keys to (if you already do this today) -- in fact, it allows you to require more PCR values for your LUKS key to be unsealed because more of the boot chain is predictable and it avoids the kinds of attacks you mention.
Posted Aug 13, 2025 7:55 UTC (Wed)
by claudex (subscriber, #92510)
[Link] (5 responses)
Posted Aug 13, 2025 8:02 UTC (Wed)
by leromarinvit (subscriber, #56850)
[Link] (3 responses)
Generates/removes a .pcrlock file based on a kernel initrd cpio archive. This is useful for predicting measurements the Linux kernel makes to PCR 9 ("kernel-initrd"). Do not use for systemd-stub(7) UKIs, as the initrd is combined dynamically from various sources and hence does not take a single input, like this command.
This writes/removes the file /var/lib/pcrlock.d/720-kernel-initrd.pcrlock/generated.pcrlock.
Added in version 255.
Posted Aug 13, 2025 8:35 UTC (Wed)
by claudex (subscriber, #92510)
[Link] (2 responses)
Posted Aug 15, 2025 15:46 UTC (Fri)
by claudex (subscriber, #92510)
[Link] (1 responses)
> Raw: grub_cmd: [ xy = xy ]\000
So it'll be challenging for a program to predict it. However, it should work to script it, since I know what should change, so I'll try to predict it for my system. But it can't be easily done without UKIs at the distribution level, even with the hash of the initrd.
Posted Aug 15, 2025 22:24 UTC (Fri)
by leromarinvit (subscriber, #56850)
[Link]
Posted Aug 14, 2025 5:50 UTC (Thu)
by Cyberax (✭ supporter ✭, #52523)
[Link]
It simplifies the checking logic. You just need to verify one binary that has everything and then chainload into it. With classic initrd you also need to measure it (and the kernel cmdline).
Posted Aug 7, 2025 12:01 UTC (Thu)
by grawity (subscriber, #80596)
[Link] (1 responses)
(also I very much do not like having to do needless NV updates, or efivar updates, if I don't know in numbers how many writes the flash can take, so my implementation instead updates the LUKS keyslot on-disk)
But it *is* somewhat more brittle compared to binding to your SecureBoot certificate (PCR7).
Posted Aug 13, 2025 8:11 UTC (Wed)
by leromarinvit (subscriber, #56850)
[Link]
Posted Aug 7, 2025 9:55 UTC (Thu)
by aragilar (subscriber, #122569)
[Link] (3 responses)
Posted Aug 7, 2025 10:19 UTC (Thu)
by noodles (subscriber, #39336)
[Link] (2 responses)
Additionally, if you care about tying your personal keys to machine state, you can make use of the TPM telling you what's been booted (for avoiding Evil Maid attacks) without having any secure boot support.
A proper HSM is obviously more desirable for performance reasons, but then you're also dealing with increased cost.
Posted Aug 7, 2025 11:42 UTC (Thu)
by aragilar (subscriber, #122569)
[Link] (1 responses)
Can you store many non-boot-related keys in TPMs, I recall reading https://fy.blackhats.net.au/blog/2023-02-02-how-hype-will... and the vibe I get about TPMs is that they are basically cheaper HSMs for the purposes of storing keys (on the device)?
Posted Aug 7, 2025 15:15 UTC (Thu)
by muase (subscriber, #178466)
[Link]
Yes, you can use a TPM to generate, import or even seal external keys in varying degrees; you can pin them to hardware/software state with PCRs, and you can also require a PIN or similar for quick-but-still-interactive access.
I'm also not too sold on the "strong" distinction between TPMs and HSMs – it not only causes confusion (like in your case), but from what I know, HSM is the general super-term for everything that can work as an isolated secure element and does cryptography internally without exposing the keys. Be it a SmartCard, a YubiKey, an USB-YubiHSM, a TPM 2.0 module, an Apple Secure Enclave, a high-throughput PCIe module, a Pluton security chip... from what I know, and how the term is used in my environment, those are all HSM – just with different optimization goals: SmartCards/SIMs are removable and quickly exchangeable, TPMs are built-in and tightly integrated into the boot-cycle which allows them some additional attestations, the Secure Enclave has an additional focus on embedded biometry validation, etc.
To make things worse, those distinctions are also not strict; for example, there are PKCS#11 PCIe-HSMs that are strictly focused on a user-interactive root-CA-like role, and have a very low throughput and are not at all usable for TLS-handshakes. And for a mass-built and -shipped device, Apple's Secure Enclave has the absolutely stunning track-record of ZERO scientifically or publicly documented full (private) key extractions[1]; which suddenly makes it a low-throughput, but security-wise top-tier candidate compared to a lot of TPMs or even HSMs.
-----
[1] There were some successful attacks on the SEP, like the Pangu one, but that is not well-documented and we don't know if key extraction would have been possible, nor is it scientifically credible; and we have the checkm8/checkra1n combination, which exploited the T2 – but again, no documented key extraction. And while running custom code is a **very big and impressive feat**, it's still not key extraction (similar to how running code in userland is not a root or kernel exploit).
Posted Aug 7, 2025 22:07 UTC (Thu)
by comex (subscriber, #71521)
[Link] (24 responses)
For decades, TPM-based DRM on PCs has been a purely theoretical threat. Every other computing platform stood up their own version of Secure Boot and used it for DRM, but on PC, hardware-enforced DRM has been limited to less-general-purpose stuff (Intel ME encrypted video, plus some stuff like SGX that has mostly been used on servers).
Until now.
Just in the last few days, two major upcoming games (Call of Duty: Black Ops 7 and Battlefield 6) have announced they will require Secure Boot to be enabled on Windows:
https://www.theverge.com/news/720007/call-of-duty-pc-anti...
To be fair, the developers have the laudable goal of preventing cheating. This won’t stop all cheaters (there will always be ways to compromise the kernel, plus some cheating devices work purely externally). But it will probably make a meaningful dent. I have to admit that.
Also, in practice this only affects Windows driver developers. It doesn’t affect people gaming with Wine because, well, they were *already* blocked from playing the previous iterations of these games by anti-cheat.
But if you say TPM can’t be used for DRM, here is your counterexample.
In retrospect, it makes sense that we’d see it get used for anti-cheat rather than more traditional kinds of DRM. This kind of check is not nearly secure enough for something like DRM video, where one user breaking the DRM on one device is game over. But anti-cheat is a numbers game.
Posted Aug 7, 2025 23:26 UTC (Thu)
by excors (subscriber, #95769)
[Link]
And it's not alone: The Finals has required Secure Boot for 2 years; Fortnite has required it for high-level tournament matches since Feb this year; Battlefield 2042 has required it since May. Seems the only recent change is that since Win 10 is nearly end-of-life, some new games are only aiming to support Win 11 compatible hardware (which requires TPM) and are not providing exceptions for TPM-less Win 10 systems, which is what triggered the current fuss.
Posted Aug 8, 2025 0:53 UTC (Fri)
by Cyberax (✭ supporter ✭, #52523)
[Link] (22 responses)
You can just patch the kernel-level anti-cheat to ignore the integrity check. True DRM would require some kind of interactive attestation that the running kernel doesn't have anything unsigned. But it's not possible with the TPM.
Posted Aug 8, 2025 9:36 UTC (Fri)
by excors (subscriber, #95769)
[Link] (1 responses)
I think it's primarily using the TPM as a hard-to-spoof hardware ID, so that banned players can't simply make a new free account and start playing again, though it can be bypassed by using an external TPM chip (instead of fTPM) and replacing it with a new chip every time you get banned. Secure Boot is an extra hurdle since you have to enroll new keys before it'll boot with your kernel-level cheat (unless you find another way to bypass it), and I presume the anti-cheat can detect those non-standard keys and will consider it an additional point of suspicion.
It's not perfect but it doesn't need to be - it's one of many layers that combine to make cheating more awkward for users to set up and less profitable for cheat makers, to keep the number of cheaters low enough to not ruin the game for everyone else.
Posted Aug 8, 2025 18:56 UTC (Fri)
by Cyberax (✭ supporter ✭, #52523)
[Link]
TPMs are not well-suited for remote attestation. They can attest that the running software is the same as it was at some point in the past, but you can't pre-compute the expected hash values for an arbitrary PC.
This is fine if your goal is to protect a corporate laptop, as you can guarantee that you install the software in a clean environment. But a dedicated cheater can start subverting the system during the installation so it's _never_ in a clean slate state.
Though it certainly makes attacks much more complicated, which is the end-goal for the anti-cheat protections.
Posted Aug 8, 2025 10:25 UTC (Fri)
by SLi (subscriber, #53131)
[Link] (19 responses)
This means that a TPM can attest to a game server, in an unforgeable way, that the boot chain only contained Microsoft-signed elements, which basically means that you don't have cheat tricks in your kernel.
This does leave open some avenues, like PCI devices that (I think) can in most cases compromise the secure boot chain already before the bootloader, or flashing a custom BIOS. These will change the measurements but, as said, there's no way to maintain a whitelist of good firmware and PCI option ROMs.
Posted Aug 9, 2025 6:14 UTC (Sat)
by NYKevin (subscriber, #129325)
[Link] (18 responses)
The problem with this idea is that most of these games are already using their own kernel drivers to snoop on players and (supposedly) prevent cheating. I'm not sure if Microsoft is willing to sign those drivers - there is a lot of potential for abuse if somebody manages to extract the driver and convince it to behave slightly differently than intended.
Posted Aug 9, 2025 16:07 UTC (Sat)
by excors (subscriber, #95769)
[Link] (17 responses)
They are: "The driver has been signed by Riot’s own EV cert, which has in turn been signed by Microsoft as per their code signing process." (https://www.riotgames.com/en/news/a-message-about-vanguar...)
> there is a lot of potential for abuse if somebody manages to extract the driver and convince it to behave slightly differently than intended
Doesn't the same potential exist for all drivers? I'd have more trust in the security of an anti-cheat driver whose primary goal is to resist hostile adversaries, than a random driver for e.g. my keyboard LEDs where the developers probably had no interest in security and never expected to be attacked.
Posted Aug 9, 2025 16:30 UTC (Sat)
by mb (subscriber, #50428)
[Link] (15 responses)
Sure. That's why such code in the trust chain must be minimized instead of adding code to try to prevent kiddies from cheating.
This chain of trust doesn't work in practice, because there probably are tens of millions of lines of code that must be trusted between the root of the trust in UEFI and the trusting application in userspace.
>I'd have more trust in the security of an anti-cheat driver whose primary goal is to resist hostile adversaries, than a random driver for e.g. my keyboard LEDs where the developers probably had no interest in security and never expected to be attacked.
Yeah, well. The keyboard driver is present regardless of whether there is an anti-cheat driver or not.
Posted Aug 9, 2025 17:05 UTC (Sat)
by SLi (subscriber, #53131)
[Link] (14 responses)
This is different from if there's a fundamentally unpluggable hole that you can exploit by e.g. running a specific EFI binary before booting. In the first case, detecting and fixing this is possible. In the second case, you can exploit weaknesses in the exploit to try to detect it, but those should generally be patchable and (near-)perfection is attainable.
Blacklisting drivers known to be used for cheating is _much_ easier than whitelisting everything that should be allowed.
Posted Aug 10, 2025 2:22 UTC (Sun)
by Cyberax (✭ supporter ✭, #52523)
[Link] (13 responses)
This is doubly easier because the rootkit doesn't have to hide from the computer's owner.
And this is not a theory, the current generation of most advanced cheats (e.g. https://blurred.gg/guides/info ) works like this, along with DMA-based RAM snooping. As I understand, the custom rootkit keeps the IOMMU disabled for the DMA engine to work.
Posted Aug 10, 2025 14:02 UTC (Sun)
by SLi (subscriber, #53131)
[Link] (12 responses)
Posted Aug 10, 2025 19:57 UTC (Sun)
by Cyberax (✭ supporter ✭, #52523)
[Link] (11 responses)
Windows will happily re-measure the boot chain with the rootkit. The values of TPM registers will be different so you will lose access to any previously sealed secrets, but there's no way for the anti-cheat driver to pre-compute the "correct" values.
It's possible in theory to build a robust DRM system with the TPM, but it requires coordination and periodic updates for all the PC manufacturers.
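The pre-computation problem follows directly from how PCR values are built up: the final value depends on every measurement, in order, so a verifier who doesn't know the exact firmware and option ROMs of an arbitrary PC cannot predict it. A minimal sketch (plain Python; the measurement strings are illustrative only):

```python
import hashlib

def extend(pcr: bytes, measurement: bytes) -> bytes:
    # TPM 2.0 PCR extend: new = H(old || H(measurement))
    return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()

def boot(measurements: list[bytes]) -> bytes:
    pcr = b"\x00" * 32
    for m in measurements:
        pcr = extend(pcr, m)
    return pcr

# Two PCs differing only in firmware version end with unrelated PCR values,
# so a remote verifier can't pre-compute the "correct" value for arbitrary
# hardware -- it would need the full measurement list of every machine.
pc_a = boot([b"vendor-fw v1.0", b"shim", b"grub", b"kernel 6.1"])
pc_b = boot([b"vendor-fw v1.1", b"shim", b"grub", b"kernel 6.1"])
assert pc_a != pc_b

# Extend order matters too: same components, different order, different PCR.
assert boot([b"a", b"b"]) != boot([b"b", b"a"])
```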
Posted Aug 10, 2025 20:02 UTC (Sun)
by SLi (subscriber, #53131)
[Link] (10 responses)
Posted Aug 11, 2025 9:40 UTC (Mon)
by farnz (subscriber, #17727)
[Link] (9 responses)
The protection against rogue drivers is via driver signing; if the driver is signed, it's trusted. Once it's loaded, it can rearrange memory so that everything looks plausible for the situation where the driver was never loaded, including resetting logs and changing flags to hide itself.
Posted Aug 11, 2025 14:58 UTC (Mon)
by SLi (subscriber, #53131)
[Link] (8 responses)
1. Client: Please give me access to the game server!
Now of course, if nothing else prevents it (i.e. there's no mechanism, for example, for the game to get a notification if a driver is about to be loaded), the game server can just ask you to periodically re-attest that you are still in that state, or to provide the logs up to the state where you are.
Now what is logged in those measured logs is of course up to the operating system, but it definitely should contain a hash of the driver and most likely other information about its provenance (if it's signed, who has signed it, size, filename, other metadata, hash). The game server could even theoretically require you to send the unknown driver for analysis (they know its hash).
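The challenge/response flow being described can be mocked up like this (a toy in Python; a real TPM signs quotes with an attestation key certified by the vendor, approximated here with an HMAC key the verifier already trusts):

```python
import hashlib, hmac, os

# Attestation key: known only to the "TPM"; the server's trust in it would
# normally come from a vendor certificate chain, modelled here as a copy.
TPM_AK = os.urandom(32)
SERVER_COPY_OF_AK = TPM_AK

def tpm_quote(pcr: bytes, nonce: bytes) -> bytes:
    # A "quote": a keyed signature over the PCR state and a fresh nonce.
    return hmac.new(TPM_AK, pcr + nonce, hashlib.sha256).digest()

def server_verify(pcr: bytes, nonce: bytes, quote: bytes) -> bool:
    expected = hmac.new(SERVER_COPY_OF_AK, pcr + nonce, hashlib.sha256).digest()
    return hmac.compare_digest(expected, quote)

pcr = hashlib.sha256(b"clean boot chain").digest()
nonce = os.urandom(16)            # server-chosen challenge, step 2 above
quote = tpm_quote(pcr, nonce)     # client's response, step 3
assert server_verify(pcr, nonce, quote)

# A replayed quote fails against a fresh nonce -- this is what the
# challenge nonce buys you.
assert not server_verify(pcr, os.urandom(16), quote)
```

Note that this verifies the PCR state and freshness only; as the rest of the thread argues, it cannot by itself prove *which* machine the game client is actually running on.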
Posted Aug 11, 2025 15:16 UTC (Mon)
by farnz (subscriber, #17727)
[Link] (7 responses)
The result is that you've correctly proven that the clean system exists and is in the control of the user trying to cheat. But that's not what you wanted to know - you wanted to know that the system you are running on is the clean system, and not the dirty system.
Posted Aug 11, 2025 15:38 UTC (Mon)
by SLi (subscriber, #53131)
[Link] (6 responses)
I think that at least if the game loads a kernel driver, this should be doable, assuming really a "clean system"; I admit that with current technology that's not easy if you need to support tons of hardware, but that's where blacklisting bad drivers could come into play. And certainly some kernel APIs could be designed for doing something like this from user space; at the simplest, something like "execute this binary, protecting its memory from everyone else, and attest to it". The kernel would then have a function `attest(nonce)` that gives the process a signed attestation that:
1. This is an .exe with hash $hash
(1) and (2) are guaranteed only by the OS being in a known good state, which is the weakest link. (3) is guaranteed by the TPM.
... but I admit that at this point, speculating about what Windows does or could do, I am a bit out of my depth. I know TPM well enough as a concept to believe that this is something it should enable. I should probably go read about how it's really used by Windows; I only know the period until the bootloader starts pretty well, and I have some idea about how systemd uses it on Linux.
Posted Aug 11, 2025 16:02 UTC (Mon)
by SLi (subscriber, #53131)
[Link]
- Windows really stops the measurement chain fairly early, i.e. what ends up in the PCRs is firmware and bootloader measurements and some info about the OS loader and early kernel init.
So, I would claim that locking down the system at a level where a game vendor can reliably blacklist drivers is doable using a TPM, but it would require a future version of Windows to start measuring all driver loads.
Posted Aug 11, 2025 16:13 UTC (Mon)
by farnz (subscriber, #17727)
[Link] (4 responses)
Underlying this is that as soon as the "bad" driver is loaded, the OS is in a known-bad state, and anything running on that OS can't trust it. Remote attestation, as offered by the TPM, allows you to confirm that the user has access to a clean system, but not that the processes involved in attestation are actually running on that clean system. This is irrespective of the OS; once you have an untrusted system that has full userspace access to a clean system, you can get the attestations from the clean system, and send them back in place of the attestations that you can get from your "dirty" system.
The fix is to not support general systems; either the "dirty" system's kernel space must be locked down so that you cannot run "unwanted" but signed drivers (games console model), or the "clean" system's user and kernel space must be locked down so that the "dirty" system can't run arbitrary userspace on it to get attestation answers from the clean system.
Posted Aug 13, 2025 0:53 UTC (Wed)
by SLi (subscriber, #53131)
[Link] (3 responses)
Yes, this is true. But I think there's another fix besides what you suggest. The remote is now able to encrypt secrets so that only the clean system can access them. Because it's a clean system, it presumably won't leak them to the unclean system. This can be used for DRM (and is the common way to do DRM, I would claim). Typically you give the clean system a key to access encrypted secrets.
Posted Aug 13, 2025 1:17 UTC (Wed)
by mjg59 (subscriber, #23239)
[Link]
Posted Aug 13, 2025 9:38 UTC (Wed)
by farnz (subscriber, #17727)
[Link] (1 responses)
More generally, given that I can buy TPMs off the shelf (e.g. this TPM 2.0 evaluation board), what stops me from having a custom device that gets sent the same sequence of TPM 2.0 commands as a "real" TPM on a motherboard (hence has the "correct" hash values - all I need is a motherboard with an external TPM, such as the ASUS ROG STRIX TRX40-XE GAMING, where I can intercept the SPI bus to a discrete TPM and see what happens), but isn't actually reflecting the state of the "dirty" system?
Or, for even more sophistication, my hardware device sits between the discrete TPM on my gaming motherboard and the mobo connector, and simply filters out TPM commands that would show that my cheat driver is present - relying on my cheat driver modifying kernel data structures like the log to agree with the TPM.
Posted Aug 13, 2025 13:13 UTC (Wed)
by pizza (subscriber, #46)
[Link]
There's a lot of precedent here; most so-called console "modchips" work this way.
Posted Aug 28, 2025 15:21 UTC (Thu)
by nim-nim (subscriber, #34454)
[Link]
It’s not designed to resist hostile adversaries in a general sense; it’s designed to resist abuse within the game. The anti-cheat devs could not care less about all the other ways their driver could be abused.
Posted Aug 8, 2025 10:54 UTC (Fri)
by SLi (subscriber, #53131)
[Link]
The first one indeed uses TPM and Secure Boot to require that you're running approved software only at the kernel level. If it controls everything from bootloader to kernel, it's already going to be quite limiting and annoying. If it wants to protect against hardware too, I believe it would need to either drop support for many legacy PCI devices or maintain a whitelist of allowed legacy PCI devices; and then you'd require signed PCI option ROMs (and that firmware be careful to not enable PCI DMA without IOMMU). But I believe a solution absolutely exists where a game company can require that all the non-userland software that ran anywhere in the chain from power on to game start is signed by a company they trust.
The second option, which I think would be more secure but would require buy-in from CPU vendors and is not going to happen within a few years, would be using some sort of secure-enclave mechanism of CPUs that allows running untamperable, unobservable code even under an untrusted and hostile OS or hypervisor. Features like this exist on the latest server CPUs, but they're not 100% there yet, and to control the full chain it would also require things like the enclave getting untamperable control of some hardware like mice and keyboards (enforced by the CPU).
Those things would likely also allow unanalyzable malware, which many would likely find annoying. But there are also pretty neat uses for this at least on servers; it is nice to be able to run code on the cloud that the cloud vendor could not observe without help from the CPU vendor. And, in some sense, I would be annoyed by not being able to do the same stuff on my home computer as on the cloud :)
This is being frantically developed for the server/cloud use case; we'll see if it makes its way to consumer CPUs.
Posted Aug 13, 2025 19:21 UTC (Wed)
by mirabilos (subscriber, #84359)
[Link] (2 responses)
But why would I want to do that? I want to back up these things to my normal backup storage.
> its use demonstrates that a user has access to a specific piece of hardware
But what’s the use? Hardware goes kaputt. I can quickly obtain a replacement, restore onto it and things work as before, if I don’t use the TPM for/like that.
Also, the RNG on TPMs is notoriously bad; the baseline for such an RNG is incredibly low (a PRNG with an “unknown and not exfiltratable seed” suffices), and there’s no guarantee of entropy. (Full disclosure: I have written software to use a TPM 1.2, to read from its RNG and feed this as an *additional* source into the kernel entropy pool. Never as the single source.) This is another good reason why a TPM should never be used to generate any cryptographic key; its random bits are not much better than those of plastic router boxen. Always let the OS do it — and not just Linux /dev/urandom with its crippled pool size either. touch ~/.rnd and then put 「openssl rand -rand ~/.rnd -writerand ~/.rnd 4 >/dev/urandom」 into your crontab, run it at least hourly. Over time, enough entropy will amass in ~/.rnd (each call also reads from the kernel and mixes that into the seed file), and a few bits will also find their way back to the kernel on each call, which is not a bad thing either.
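The "additional source, never single source" principle works because hash-combining entropy sources cannot make the result worse: the output stays unpredictable as long as at least one input is. A minimal sketch (Python, illustrative only):

```python
import hashlib, os

def mix(*sources: bytes) -> bytes:
    # Hash-combine entropy sources. Length-prefixing each input prevents
    # boundary ambiguity (mix(b"ab", b"c") != mix(b"a", b"bc")).
    h = hashlib.sha256()
    for s in sources:
        h.update(len(s).to_bytes(8, "big") + s)
    return h.digest()

os_random = os.urandom(32)    # the OS CSPRNG, the primary source
tpm_random = b"\x00" * 32     # worst case: a completely broken TPM RNG

# Even with a dead TPM RNG, the mixed seed inherits the OS source's
# unpredictability; a weak extra source can add entropy, never subtract it.
seed = mix(os_random, tpm_random)
assert len(seed) == 32
```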
> 2048-bit RSA keys
Right. It’s called baseline for a reason…
> AMD's integral TPM has some problems with the random-number generator
*cough*
> everyone builds their own initrd
Better that way. I can include things to set up the network so I can unlock LUKS via SSH ;-)
But good to know that dracut is not something one wants to play with, I’ll stick to initramfs-tools as well then.
(OT: the plain text format doesn’t do quotes well, apparently ☹)
Posted Aug 15, 2025 11:06 UTC (Fri)
by farnz (subscriber, #17727)
[Link]
But what’s the use? Hardware goes kaputt. I can quickly obtain a replacement, restore onto it and things work as before, if I don’t use the TPM for/like that.
The use case is hardware that can be tampered with by an attacker in some way; if the attacker images your disk, then the TPM-backed secret doesn't let them decrypt it, and they've gained nothing. If you've set up a measured boot system, with the TPM only releasing symmetric keys when the measurements are correct, then the attacker also can't tamper with your system to exfiltrate data without you losing the ability to use that hardware for the intended purpose.
And you can still replace the hardware in this situation - it's just that instead of being able to take a HDD/SSD out of a failed machine and plug it into a new machine, you have to create new secrets on a new machine and restore from a securely kept backup. All you're doing is stopping an attacker from being able to do the same if they can borrow your laptop while you're (e.g.) drinking coffee with friends.
Underlying this is that it's only useful in cases where you need a guarantee that the user has access to a machine which has not been tampered with since you last verified its trustworthiness via physical access; my work laptop has FDE keys in the TPM so that I can boot without network available, but I need an alternative keying method (e.g. access to a secured physical network, a FIDO U2F device, a long passphrase) to get in if my device appears to have been tampered with (even if I did the tampering). My home server, however, is physically secure, and does not use a TPM to boot.
Posted Aug 15, 2025 18:40 UTC (Fri)
by raven667 (subscriber, #5198)
[Link]
> But why would I want to do that? I want to back up these things to my normal backup storage.
> its use demonstrates that a user has access to a specific piece of hardware
> But what’s the use? Hardware goes kaputt. I can quickly obtain a replacement, restore onto it and things work as before, if I don’t use the TPM for/like that.
Not everyone has the same security requirements. For a fancier HSM, the answer to backup/restore is cutting the key into chunks and storing parts of it on smartcards that are distributed to multiple people, requiring some number of them to come together to combine their key parts into a whole signing key. For more everyday situations, with something like a Yubikey, the answer can be buying two of them, enrolling both everywhere you need them, then storing one safely as a backup. For something protected by a TPM, like on-disk encryption, you can use normal backup methods for the data while the system is unlocked and accessible, with whatever encryption policy you want for the backup, separate from the host full-disk encryption, knowing that if the hardware breaks, or the key is intentionally destroyed, the data is inaccessible even if the disk/NVMe itself is "fine".
I would guess that for things like the TPM you don't have to create the key on the TPM; I think you can create a private key on the host and push it into the device one time as a write-only operation, similar to how a Yubikey can store some number of TOTP/HOTP tokens. I thought passkeys worked the same way, but it seems that there are mechanisms to sync the actual private keys between devices, or transfer them during device upgrades, rather than enrolling each device under your control with a separate hardware-locked key into each service doing key-based auth. The security properties of a key embedded in a physical object are easier to reason about, though; no OS-level RCE bug can remotely access a key sitting on my desk, it would take literal ninjas coming through my walls, which is a whole different thing.
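The key-splitting scheme mentioned above (shares distributed to multiple people) can be illustrated with the simplest n-of-n variant; real deployments typically use Shamir secret sharing to get k-of-n thresholds. A toy Python sketch, not production code:

```python
import functools, os

def split(secret: bytes, n: int) -> list[bytes]:
    # n-of-n XOR secret sharing: n-1 shares are uniformly random, and the
    # last share is the XOR of the secret with all of them. ALL n shares
    # are required to reconstruct; any smaller subset is statistically
    # indistinguishable from random and reveals nothing.
    shares = [os.urandom(len(secret)) for _ in range(n - 1)]
    last = bytes(functools.reduce(lambda a, b: a ^ b, col)
                 for col in zip(secret, *shares))
    return shares + [last]

def combine(shares: list[bytes]) -> bytes:
    # XOR all shares back together byte-by-byte to recover the secret.
    return bytes(functools.reduce(lambda a, b: a ^ b, col)
                 for col in zip(*shares))

key = os.urandom(32)
shares = split(key, 3)
assert combine(shares) == key          # all three shares recover the key
assert combine(shares[:2]) != key      # any two of three are useless
```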
If there is anything to worry about, Secure Boot is the issue people should be worried about
Who holds the keys?
...
> in more complex security architectures working together with TPM.
Who holds the keys?
Who holds the keys?
Who holds the keys?
Who holds the keys?
Wol
Secure boot
Passkeys
FIDO2 Is Now Available on Trezor Model T
https://blog.trezor.io/make-passwords-a-thing-of-the-past...
What a coincidence
Suse
This is also how I understand systemd-pcrlock's manpage:
lock-kernel-initrd FILE, unlock-kernel-initrd
But like I said, I wasn't able to try it out, so I'm not really qualified to say if it works that way.
> Raw: grub_cmd: insmod all_video\000
> Raw: grub_cmd: set gfxpayload=keep\000
This blog post describing an early draft of the feature also doesn't mention NV. It seems to me the requirement came as an implementation detail based on how the systemd folks chose to implement it. I can understand that it's likely easier to argue about from a security perspective that way, and given that newly developed features tend to mostly be deployed on rather newer hardware than the old stuff that doesn't support NV, I can see why they did it that way.
Effectively just for secure boot?
Yes for DRM
IMO this technique is fundamentally flawed.
Adding an anti-cheat driver can only weaken the system as a whole if it contains a bug.
The problem is that such a log could be truncated by the rogue driver - you remove the last entry when you load (which you can do, since you're in kernel space - albeit you may need a helper app to tell you what the log "should" look like without you), and now it's impossible to distinguish this attested log from one on a system without the rogue driver.
2. Remote: Ok, let's see. Tell me who you are and show me your TPM logs. Here's a challenge nonce.
3. Client: I'm a TPM by FooCorp. Here's the certificate from FooCorp for my public key. And here's your challenge nonce to prove that this is not a replay. Here's the TPM logs. I have attached a signed attestation with your nonce from the TPM that my PCRs correspond to what the log says.
4. Remote: Excellent, looks clean. You are welcome.
If the cheat system really wants to push the limits, you have two systems; one is clean, and can answer all the TPM queries you want answered. The second runs the dirty driver, which redirects your TPM log request across to the clean system, gets you a verified answer (which passes, since the cheat code is in userspace on the second system) and returns it as-if it came from the dirty system.
2. I started it so that its memory is not accessible from outside
3. The TPM state is $state (signed by the TPM, nonced by the nonce)
Yes for DRM (but possibly not on Windows, at least in the current state)
- Crucially, I think it *doesn't* routinely measure post-boot driver loads, which obviously breaks the chain if you can load your own driver (so you can only reliably attest to what happened in early boot).
- Device Health Attestation (DHA) apparently can attest to: 1) if Secure Boot is on or off; 2) BitLocker status; 3) Code integrity policy (whether kernel-mode driver signing is enforced); but it does not give you a hash of every loaded driver.
- Anti-cheat vendors rely more on the driver signing enforcement and their own kernel driver scanning, not PCR measurements (and that obviously can be subverted by drivers loaded before the anti-cheat driver, even if not easy), plus hardware identity (from TPM) for bans.
The core problem is that, having loaded a "bad" driver, the bad driver can lie through its teeth to the game client. Once the bad driver is loaded, it is, for all reasonable purposes, part of the kernel, and can (for example) redirect the "execute this binary, protecting its memory from everyone else, and attest to it" API to let it launch the binary on the clean system, and attest to the signed attestation from the clean system.
The userspace on the clean system is under attacker control; kernelspace (and thus userspace) on the dirty system is under attacker control. How do you propose to allow the userspace code on the dirty system to ask for a secret in such a way that the kernelspace on the dirty system can't proxy the request to a userspace process under its control on the clean system, and proxy the result back?
Future of DRM
a meaningful subject line
Use case for TPM-tied credentials
> its use demonstrates that a user has access to a specific piece of hardware