
OLS: Linux and trusted computing

The term "trusted computing" tends to elicit a suspicious response in the free software community. It has come to be associated with digital restrictions management schemes, locked-down systems, and similar, untrustworthy mechanisms. At the 2005 Ottawa Linux Symposium, Emily Ratliff and Tom Lendacky discussed the state of trusted computing support for Linux and tried to show how this technology can be a good thing to have. Trusted computing does not have to be evil.

At the lowest level, trusted computing is implemented by a small chip called the "trusted platform module" or TPM. The Linux kernel has had driver support for TPM chips since 2.6.12; a couple of chips are supported now, with drivers for others in the works. Many systems - laptops in particular - are currently equipped with TPM chips, so this is a technology which Linux users can play with today.

A TPM provides a number of features to the host system. It includes a protected memory area, and a restricted set of commands which can operate on that area. "Platform configuration registers" (PCRs) are a special sort of hashed accumulator which can be used to track the current hardware and software configuration of the system. The TPM also includes a cryptographic processor with a number of basic functions: a random number generator, SHA hash calculator, etc. And there is some non-volatile RAM for holding keys and such.
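The "hashed accumulator" behavior of a PCR comes from the extend operation: a PCR can never be written directly, only extended. A minimal sketch (TPM 1.x chips use SHA-1; the component names here are purely illustrative):

```python
import hashlib

def pcr_extend(pcr: bytes, measurement: bytes) -> bytes:
    """Extend a PCR: the new value is the hash of the old value
    concatenated with the measurement.  Because order matters, the
    final PCR value encodes the entire sequence of measurements."""
    return hashlib.sha1(pcr + measurement).digest()

# PCRs start at all zeros on reset.
pcr = bytes(20)
for component in (b"BIOS", b"bootloader", b"kernel"):
    pcr = pcr_extend(pcr, hashlib.sha1(component).digest())
```

Reordering, changing, or omitting any single measurement yields a completely different final PCR value, which is what makes the register useful for tracking configuration.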

A TPM-equipped system requires support in the BIOS. Before the system boots, the BIOS will "measure" the current hardware state, storing the result in a PCR. The boot loader will also be checksummed, with the result going into another PCR. The boot loader is then run; its job is to stash a checksum of the kernel into yet another register before actually booting that kernel. Once the kernel is up, the "trusted software stack" takes charge of talking to the TPM, providing access to its services and keeping an eye on the state of the system. Systems which provide a TPM typically also include the needed BIOS support; this support could also be added by projects like FreeBIOS and OpenBIOS. There are versions of the Grub bootloader which can handle the next step; LILO patches also exist. Once the kernel is booted, the TPM driver takes over, with the user-space being handled by the TrouSerS TSS stack.

TrouSerS makes a number of TPM capabilities available to the system. If the TPM has RSA capabilities, TrouSerS can perform RSA key pair generation, along with encryption and decryption. There is support for remote attestation functionality (more about that momentarily). The TSS can be used to "seal" data; such data will be encrypted in such a way that it can only be decrypted if certain PCRs contain the same values. This capability can also be used to bind data to a specific system; move an encrypted file to another host, and that host's TPM will simply lack the keys it needs to decrypt that file. Needless to say, if you make use of these features, you need to give some real thought to recovery plans; there are various sorts of key escrow schemes and such which can be used to get your data back should your motherboard (with its TPM chip) go up in flames.
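The sealing idea can be modeled with a toy sketch. (A real TPM does this internally with RSA and never reveals the chip-resident secret; the hash-derived XOR keystream below is only a stand-in to show why unsealing fails when the PCR state differs.)

```python
import hashlib
from itertools import cycle

def derive_key(chip_secret: bytes, pcr_state: bytes) -> bytes:
    # The decryption key depends on both a secret that never leaves
    # the chip and the PCR values recorded at seal time.
    return hashlib.sha256(chip_secret + pcr_state).digest()

def xor_crypt(data: bytes, key: bytes) -> bytes:
    return bytes(d ^ k for d, k in zip(data, cycle(key)))

chip_secret = b"never leaves the TPM"
good_pcrs = b"measured boot state at seal time"

sealed = xor_crypt(b"disk encryption key", derive_key(chip_secret, good_pcrs))

# Same chip, same PCR state: unsealing recovers the data.
recovered = xor_crypt(sealed, derive_key(chip_secret, good_pcrs))

# A modified boot chain (different PCRs) or a different chip
# derives the wrong key, and garbage comes out.
garbage = xor_crypt(sealed, derive_key(chip_secret, b"tampered boot state"))
```

This also shows why moving sealed data to another host fails: the other host's TPM holds a different chip secret, so no PCR state on that machine can reproduce the key.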

The TrouSerS package also provides a set of tools for TPM configuration tasks. However, a number of BIOS implementations will lock down the TPM before invoking the boot loader, so TPM configuration is often best done by working directly with the BIOS. There is also a PKCS#11 library; PKCS#11 is a standard API for working with cryptographic hardware.

At the next level is the integrity measurement architecture (IMA) code. IMA was covered on the LWN Kernel Page last May; look there for the details. In short: IMA uses a PCR to accumulate checksums of every application and library run on the system since boot; this checksum, when signed by the TPM, can be provided to another system to prove that the measured system is running a specific list of software, that the programs have not been modified, and that nothing which is not on the list has been run. If the chain of trust (starting with the BIOS) holds together, a remote system can have a high degree of confidence that the list is accurate and complete.
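The measure-log-extend pattern IMA uses can be sketched as follows (a simplified model, not the kernel code; SHA-1 as in TPM 1.x):

```python
import hashlib

measurement_log = []    # the readable list of measurements the kernel exports
aggregate = bytes(20)   # the PCR holding the running aggregate

def ima_measure(path: str, contents: bytes) -> None:
    """Hash a program before it runs, append the hash to the log,
    and extend the aggregate PCR with the same hash."""
    global aggregate
    h = hashlib.sha1(contents).digest()
    measurement_log.append((path, h))
    aggregate = hashlib.sha1(aggregate + h).digest()

for path, contents in [("/sbin/init", b"init code"), ("/bin/sh", b"shell code")]:
    ima_measure(path, contents)

def replay(log) -> bytes:
    """A remote verifier replays the log; the result must equal the
    TPM-signed PCR value, or the log has been edited."""
    acc = bytes(20)
    for _, h in log:
        acc = hashlib.sha1(acc + h).digest()
    return acc
```

Because the aggregate is held in a PCR and signed by the TPM, an attacker cannot drop an entry from the list or substitute a different hash without the replayed aggregate failing to match.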

Since last May, the IMA code has been significantly reworked (it took a fair amount of criticism on the kernel list). Among other things, it no longer hooks in as a Linux security module. The next step, however, will be a security module; it is called the "extended verification module." It includes a fair amount of security enforcement policy. This module can, for example, check that the extended attributes on files have not been changed by any third party. SELinux makes heavy use of extended attributes; with this mechanism in place, an SELinux system can remain secure even if somebody moves the disk to a different system and makes changes to the SELinux labels. Once back on the original system, those changes will be detected.

So why would a Linux user care about all of this? Some of the things that can be done with the TPM include:

  • Key protection. A user can store GPG keys (or others) in the TPM and not have to worry about those keys being extracted and disclosed by a compromised application.

  • System integrity checking. The measurement capabilities can be used to ensure that the binaries on the system have not been tampered with; it is a sort of Tripwire with hardware support.

  • In the corporate environment, the remote attestation features provided by IMA can be used to keep compromised systems from affecting the company network. Simply require systems to provide their "measurement" before giving them access to the network, and any system which has, say, been infected with malware at a conference will be detected and locked out.

  • Similarly, a conference attendee using an "email garden" terminal to access a mail server could, in the future, require that terminal to verify itself to the server before any sort of access is allowed.

  • Attestation could be used in electronic voting machines to verify that they are running the proper (hopefully open source) software.

And so on. The point is that there are legitimate uses for a hardware-based mechanism which can, with a reasonable level of confidence, verify that a system's software has not been compromised.

On the other hand, this same technology has a number of other potential uses. It could be used by company IT cops to ensure that employees are not running "unapproved" software, be it games, unlicensed copies of proprietary software, or Linux. Remote attestation is a boon for companies like TiVo, which can use it to ensure that the remote system is running current software and has not been cracked. Providers of web services could be sure that you really are running Internet Explorer. It does not take much imagination to come up with several unpleasant scenarios involving trusted computing and locked-down systems.

What it comes down to is that "trusted computing," like computing itself, is a tool which can be used in many ways. One does not have to look very far to find people using Linux in ways that one, personally, might not approve of. The TPM hackers feel that, given that the technology is available, let's use it. Properly used, this hardware can help to ensure that we remain in charge of our systems, and that much, certainly, is a good thing.



OLS: Linux and trusted computing

Posted Jul 22, 2005 17:21 UTC (Fri) by smitty_one_each (subscriber, #28989) [Link]

I can foresee government applications going for this in a big way.
It's important to know; one could as reasonably blow off Unicode.

OLS: Linux and trusted computing

Posted Jul 22, 2005 19:12 UTC (Fri) by jwb (guest, #15467) [Link]

I find the calcification of software practices implied by the TPM abhorrent. The whole architecture precludes self-modification and algorithm learning. Shall we be stuck in our current state forever? Will 4KiB quanta of unwritable jump tables be the ultimate state of the art in software development? That's pretty stupid, if you ask me.

Ask?

Posted Jul 22, 2005 22:44 UTC (Fri) by ncm (subscriber, #165) [Link]

That's kind of the point of the whole process: no one will ask. Once this stuff is in place, mandatory key escrow will be easy to impose.

OLS: Linux and trusted computing

Posted Jul 22, 2005 23:11 UTC (Fri) by allenp (guest, #5654) [Link]

It's important to note that TPM does not force the thing you abhor. It enables it, along with a bunch of other useful stuff. Technology has enabled other bad things, like the various failed copy protection schemes and the DVD CSS contraption. The 'Net considers stuff like that "damage" and routes around it. (Who said that? Spaf?)

OLS: Linux and trusted computing

Posted Jul 24, 2005 23:48 UTC (Sun) by havardk (subscriber, #810) [Link]

> The 'Net considers stuff like that "damage" and routes around it. (Who said that? Spaf?)

That quote is by John Gilmore. The context at the time was Usenet, though.

OLS: Linux and trusted computing

Posted Jul 23, 2005 15:27 UTC (Sat) by zblaxell (subscriber, #26385) [Link]

I'm having trouble parsing this... Who is "self" and who is learning? If "self" is the machine, then you're talking about AI software running algorithms that learn. If "self" is the machine's owner, then you're talking about a user exercising free software rights and learning some algorithms.

I don't see why it wouldn't be possible to run an AI under TPM, as long as the AI's bootloader doesn't change (or the AI is smart enough to have its SHA1 sums recertified before trying to do remote attestation ;-). Exercise of free software rights depends on who is signing the code, and what code they are willing to sign.

There are contexts where calcification is quite desirable. If I had one of these on my laptop, I'd use TPM to verify my initrd with the PCRs, then provide part of the key material that protects root and swap (why swap? software suspend). This turns dictionary attacks against the key into an exercise in hardware modification--without access to the data in the TPM, a significant part of the key material is missing. My key material in the TPM would be combined with a passphrase taken from the keyboard by initrd, so that a compromised TPM isn't sufficient to get the disk keys. I have no way to know whether a compromised TPM has permitted a trojan initrd to execute--but that's no different from the status quo.
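The two-factor derivation described above can be sketched like so (the function names, KDF choice, and iteration count are illustrative, not actual initrd code):

```python
import hashlib

def disk_key(tpm_material: bytes, passphrase: bytes) -> bytes:
    # Combine the secret the TPM releases (only when the PCRs match
    # the measured initrd) with a passphrase typed at the console.
    # Neither factor alone is sufficient to derive the disk key.
    return hashlib.pbkdf2_hmac("sha256", passphrase, tpm_material, 100_000)

key = disk_key(b"secret unsealed by the TPM", b"console passphrase")
```

An attacker who extracts the TPM material still faces the passphrase, and one who shoulder-surfs the passphrase still needs the chip, which is the point of combining the two.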

Once this verification had been done and the key material retrieved, I'd have no other use for the TPM, so I'd put it in untrusted mode (which means I'd need to reboot to use it again). Once the machine is booted, connected to a network, and running application software, there's nothing I could store in the TPM chip (or on disk or in RAM) that couldn't be compromised by a malware attack or OS security breach.

Note that I might still use some parts of the trusted security stack even after booting--e.g. I might personally sign all of the executables in all of the packages I install, and configure the kernel to only execute executables signed by me--but no TPM is needed in this case, since the OS can (and due to limited space in TPM, must) do its own certificate management. That should slow some of the viruses down, if anyone bothers to write some. OTOH, as a user, if it makes OpenOffice.org load any more slowly, I'd probably just turn the whole feature right off. ;-)

To upgrade the kernel, I would reboot the machine, tell my initrd to go into single-user mode with the TPM still authenticated, verify the kernel is the same as the kernel I built, install it, certify it, put the TPM back in untrusted mode, and reboot (or kexec) with the new kernel. Hopefully the kernel's build-time environment or source code wasn't compromised.

Anyone capable of an attack on the physical hardware is probably beyond my ability to defend against them. To me, TPM is a device to prevent data extraction by common thieves, and little more.

TPM provides little or no protection against most malware as long as today's typical general-purpose OS architecture is used. TPM assumes that the OS is capable of enforcing access controls--TPM only ensures that the OS hasn't been modified, not that the OS is actually secure.

Signed binaries

Posted Jul 25, 2005 6:17 UTC (Mon) by Ross (subscriber, #4065) [Link]

I suspect they would load more slowly... at least the first time; every time if there is no caching. However, the problem with such things is that an attacker need only find a bug in the kernel or any of the signed binaries which allows running of unsigned code. Even worse, consider that the only protected "code" in this situation is machine code binaries and libraries. Any language implemented at a higher level (scripts, macros, etc.) would not be checked. If you want to be able to run bash, you are suddenly trusting all the scripts written in bash (though the commands they call may not be allowed unless they are built-ins). 99% of interpreters do not have any way to even know which actions should be allowed.

Signed scripts

Posted Jul 26, 2005 1:12 UTC (Tue) by xoddam (subscriber, #2322) [Link]

> Even worse consider that the only protected "code" in this situation are
> machine code binaries and libraries.

Shebang scripts (starting with a line like #!/usr/bin/perl) can have their signatures checked by the kernel's binfmt_script executable loader in exactly the same way as it's done for ELF binaries.

Checking signatures on scripts loaded in other ways (including modules) would need interpreter support. A large job, but not insurmountable.

Integrity checking vs patching

Posted Jul 23, 2005 14:24 UTC (Sat) by ayeomans (subscriber, #1848) [Link]

I have a sneaking suspicion that the software integrity checking will never be made to work with a general-purpose OS. It just about works with a closed games machine (not forgetting the MechWarrior Xbox hack), but as soon as users install software, or the OS and application vendors need to install patches, this breaks. Not because it has to technically, but because the processes around installation and patching, including updating checksums for all language versions, are simply not that controlled.

Don't believe me? Have a look at the issues with Windows SFC. The Win98 version was unusable once you patched anything (there were so many changes to accept manually that you had to assume all updates were correct; WinXP only scans a few system files). I've also had experience with AIX Unix - its file checks never updated properly after patches.

Sure, trusted integrity checking has potential, but only if manually driven by someone who understands the process. In which case they might as well use Tripwire or similar.

American English

Posted Jul 26, 2005 1:15 UTC (Tue) by xoddam (subscriber, #2322) [Link]

> (more about that momentarily).

American English really is a different language from the English I grew up with, albeit subtly.

American English

Posted Sep 1, 2005 5:00 UTC (Thu) by sitaram (subscriber, #5959) [Link]

The first few times I heard "United Airlines Flight <whatever> will be landing momentarily", I wondered if the passengers would come running out of the plane, and what would happen to those who were too slow!

Then I'm afraid I got used to it :-)

OLS: Linux and trusted computing

Posted Jul 28, 2005 10:57 UTC (Thu) by anonymous21 (guest, #30106) [Link]

Virtually every single argument in support of Trusted Computing falls apart on the exact same grounds. You can still get all of the same benefits from an essentially identical system where you DO know your master key that controls the security on your computer. If you have a printed copy of your key, perhaps kept in a safety deposit box if you like, all of the security functions on your computer still work for you. You can still seal your data and you can still control what software may and may not run on your computer and any unauthorized system alterations will still be detected and locked out.

Trusted Computing is not merely a tool that can be used for good or bad. Trusted Computing is like a nutritious apple containing a poison pill. The Trust chip is designed to keep secrets against its owner, designed to be secure against the owner. Advertising the vitamins a poisoned apple contains does not justify the poison pill. All of the talk of vitamins just means that you want to buy an apple without the poison pill.

The TPM is specifically designed to forbid the owner to know his own key and be secure against its owner. The arguments supposedly supporting Trusted Computing are simply invalid when they list all of these examples that do not justify nor require forbidding the owner to know his own key. If people want to argue those benefits and argue for new hardware, fine, then they should argue for new hardware with these exact same capabilities where the owner has the additional benefit of being allowed to know his own keys, not an anti-owner system designed to be secure against the owner. The fact that you know your own key does not prevent your computer from protecting you. Knowing your key allows you full control over your computer and the ability to unlock your files if and when you need to do so. Knowing your key allows you to avoid being locked out or locked in to anything.

An additional issue is that Trusted Computing defeats the GPL. Under Trusted Computing source code often becomes entirely useless. If you attempt to modify Trusted Computing GPL software then the Trust chip will detect this modification and the chip will forbid you to read any 'secured' files. The Trust chip will also attest that the software is 'corrupt', and interoperability and internet connection attempts can and will fail. The modified software may technically run, but it simply will not work. Trusted Computing defeats the GPL and can make the source code useless because it forbids the owner to know his own key to unlock his own computer and unlock his own data.

Not only does Trusted Computing defeat the GPL, but it will also begin to strangle Linux development if there is a move to Trusted Linux. Under such a Trust system much software will only run on a certified and unmodified Trusted Linux, various files will only be readable on a certified and unmodified Trusted Linux, and various websites and other network protocols will not work if you do not have a certified and unmodified Trusted Linux. It becomes almost impossible for most people to develop and test and contribute improvements and fixes for Linux if any attempt to modify and recompile causes most of your system to break. Trusted Linux is an evolutionary dead end, with most contributors locked out.

Another major issue is Trusted Network Connect (TNC), a new specification documented on the Trusted Computing Group's website. Microsoft has issued a press release that they are implementing this system under the name Network Access Protection (NAP). This is a system that first checks whether your computer has a Trust chip, then checks the exact operating system you have, and then checks exactly what software you are running. If you are not running an authorized and unmodified operating system then you are quarantined. Note that "quarantined" is the exact word used in the documentation; it means you can be denied any network connection at all. If you are not running certain mandatory software, specifically authorized and unmodified software, then you can again be quarantined and denied any internet connection at all.

The proper response to Trusted Computing is "I want to know my own key. No key, no sale".

OLS: Linux and trusted computing

Posted Jul 28, 2005 19:19 UTC (Thu) by Fats (subscriber, #14882) [Link]

> You can still get all of the same benefits from an essentially identical system where you DO know your master key that controls the security on your computer.

You need to hide the master key when you want to be able to do something only on your machine. If you do know your master key, other people can know it too and replicate it on other machines. That way they can steal things from your machine that you wanted locked to your machine.

> An additional issue is that Trusted Computing defeats the GPL. Under Trusted Computing source code often becomes entirely useless. If you attempt to modify Trusted Computing GPL software then the Trust chip will detect this modification and the chip will forbid you to read any 'secured' files.

They can forbid you to run the modified code on the same machine, but they cannot forbid you to adapt the code to run on machines without a TPM chip. So yes, they take away one of the reasons for the existence of the GPL, i.e. being able to bug-fix code for the machine, but they cannot lock down the code itself. Staf.

OLS: Linux and trusted computing

Posted Aug 17, 2005 18:33 UTC (Wed) by dmag (subscriber, #17775) [Link]

> You can still get all of the same benefits from an essentially identical system where you DO know your master key that controls the security on your computer.

No. Any ordinary system must have the keys to decrypt the data on disk. Popping out the hard drive will let you decrypt all the data. TPM allows the data to be encrypted/decrypted without storing the key on disk.

> The Trust chip is [..] designed to be secure against the owner.

Yes and no. See http://trousers.sourceforge.net/faq.html#3.4

> Under Trusted Computing source code often becomes entirely useless.

No. You don't understand how the TPM works. In "Trusted computing", all software (bootloader, OS, etc) must constantly talk to the TPM. The TPM contains *no* code. The TPM makes no decisions, only reports checksums and the like.

All "trusted computing" platforms will boot existing software just fine. You can decide not to run TPM software. You can always take GPL software and re-compile it for your own computer.

> it will also begin to strangle Linux development if there is a move to Trusted Linux.

No. Remember, if you have a "trusted computer", you can still pop in your favorite Linux distro and start hacking. Worst case, you have to pop out the hard drive to reformat. Trusted computing is not designed to prevent that. (If it was, nobody could boot Windows!)

> any attempt to modify and recompile causes most of your system to break

If someone sells a complete "Trusted Linux Kiosk certified by the maker", you won't be able to 'simply' modify it. On the other hand, you will be able to wipe the hard drive and make a Trusted Linux Kiosk certified by you.

> Under such a Trust system much software will only run on a certified and unmodified Trusted Linux,

An application vendor who wishes their software to only run on a TPM machine will have to weigh the pros and cons of the market. They may find that very few Linux users will want to run in TPM mode, using only certified (read: expensive) software.

> varius files will only be readable on a certified and unmodified Trusted Linux,

Again, this requires application support. Don't buy applications that use TPM if you don't want to. And those GPL programs that do use TPM, you can just comment out a few lines and recompile for your system.

> various websites and other network protocals will not work if you do not have a certified and unmodified Trusted Linux.

Here's how that would work: Microsoft releases Windows Trusted 1.0. The website requests a (signed) checksum of all running software on the machine. The website has a list of all valid checksums (program x running, program y running, program x + program y running). If your checksum isn't on the list, they complain and don't let you in.

But then Microsoft releases Windows Trusted 1.1 and 1.1a hotfix and 2.0 and 3.11 and 6.9... Every website will have to keep up with *all* the valid checksums for all possible combinations of software, or risk ire from their users. Suddenly, it's a full-time job because the list of good checksums will explode combinatorially. And anytime a flaw is discovered, the checksum has to be taken off the list.
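The combinatorial explosion is easy to illustrate with a sketch (component names invented; the aggregate here mimics PCR-style extend hashing):

```python
import hashlib
from itertools import combinations

components = ["windows_1.1", "hotfix_1.1a", "browser", "antivirus"]

# Each distinct subset of installed software yields a distinct
# attested aggregate, so a site's allowlist grows as 2**n with
# n independently-updatable components.
allowlist = set()
for r in range(1, len(components) + 1):
    for combo in combinations(components, r):
        agg = bytes(20)
        for name in combo:
            agg = hashlib.sha1(agg + name.encode()).digest()
        allowlist.add(agg)

print(len(allowlist))  # 15 aggregates for only four components
```

Four components already need 15 entries; a realistic software population, multiplied across versions and hotfixes, quickly becomes unmanageable, which is the point of the argument above.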

Banks would love to use this, but they will find it unworkable. It certifies the computer software, but not the user. And a 'certified' executable with a remote buffer exploit is still a 'certified' program until it's taken off the list. Oops.

Corporations will use this to prevent bad stuff running on their corporate laptops, and to certify that everything is still ok when they dial-in.

Linux will support TPM as an additional (optional) security module. It's about as dangerous as SELinux.

OLS: Linux and trusted computing

Posted Nov 21, 2007 21:16 UTC (Wed) by toad (guest, #49198) [Link]

So what you're saying is that Trusted Network Connect is harmless because it's impractical. And then you're saying that corporations will be able to make it work anyway. Contradiction! Clearly Microsoft will maintain the list of allowed hashes, or get an impartial industry body to do it for them. If TNC doesn't work then MS has spent many, many years on this for no purpose: they will make it work. How? By only certifying core parts of the system, which include the anti-malware system, which does the rest. The list of allowed hashes won't be that big anyway, because they'll require you to install the latest security patch within a short period of its being released - immediately if it's not too intrusive. And then we'll be one big happy family, with your user-modified Linux PC not able to connect to your bank, your hardware retailers of choice, your webmail provider, and eventually the internet itself. And of course, China will love it: total control of cyberspace, once and for all! It might even bring them back to Microsoft, but more likely they'll grow their own.

Copyright © 2005, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds