Protecting systems with the TPM
Verifying software integrity at boot time
The TPM is a 28-pin chip found in most desktop and server-class systems; it is offered by a number of manufacturers, and those offerings are all interchangeable. Most deployed TPMs adhere to the 1.2 standard, but 2.0 is out now and is quite different. Some TPMs are minimal in functionality and slow, while others are essentially ARM cores running at several hundred MHz. They feature a small amount of nonvolatile RAM storage; "small amount" meaning hundreds of bytes in this case. They can do some cryptographic operations, such as signing or encrypting data, but they do it quite slowly. Some operations can take multiple seconds; it would be almost as fast to do things by hand, Matthew said. In other words, the TPM is worse than the CPU in every single way, so why do manufacturers bother installing them in so many machines? Answering that question was the focus of much of the rest of the talk.
A core feature of a TPM is the platform configuration registers, or PCRs.
These are used to perform measurement of the bootstrap process, where
"measurement" means verification of the software that is run. The TPM
cannot do this work on its own, though, since it is unable to perform DMA
or to ask the operating system to perform actions on its behalf. So the
bootstrap code must feed
data to the TPM by way of explicit "extend" operations on the PCRs.
An interesting feature of the PCRs is that their value cannot be set directly (other than initializing them to zero when the system resets). Instead, they support an "extend" operation that calculates a cryptographic hash of a PCR's current state combined with the new data and uses the result as the new value. More precisely, the extend operation appends the new data to the 20-byte value already stored in the PCR, performs a hash of the 40-byte result, and stores the output of the hash back into the PCR. TPM 1.2 uses SHA1; version 2.0 moves on to more secure hashing algorithms.
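The hashing scheme described above can be modeled in a few lines of Python. This is a sketch of the arithmetic only; in a real TPM the extend happens inside the chip and the register cannot be written any other way:

```python
import hashlib

def pcr_extend(pcr: bytes, measurement: bytes) -> bytes:
    # Append the 20-byte measurement to the 20-byte PCR value and
    # store the SHA1 of the 40-byte result back into the PCR.
    return hashlib.sha1(pcr + measurement).digest()

# PCRs are initialized to 20 zero bytes at reset.
pcr = b"\x00" * 20
# Each boot stage hashes the next stage's code and extends a PCR with it.
pcr = pcr_extend(pcr, hashlib.sha1(b"first-stage firmware image").digest())
pcr = pcr_extend(pcr, hashlib.sha1(b"bootloader image").digest())
```

The final value depends on every measurement made and on the order in which they were made, which is what makes a PCR useful as a tamper-evident record of the boot process.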
In a typical trusted boot, things start at power-on time, when the special "management engine" starts running within the CPU. It measures the first-stage firmware by reading and hashing the binary code, using the result to extend the first PCR. Each subsequent stage of the process (second-stage firmware, option-card firmware, the bootloader) measures the stage to follow before executing it; by the time the kernel is measured and booted, the TPM will have a set of PCRs describing the software that has run so far. If those PCRs do not contain the expected values, then somebody has tampered with something somewhere in the process and the system cannot be trusted.
Quotes, sealing, and more
The TPM is not able to block the bootstrap process if the PCRs do not end up with the expected values; somebody must ask it whether all is well or not. The TPM is a device, accessed via a device driver, so any process wanting to query the TPM must do so by way of the kernel. There is an obvious potential problem here: if the kernel has been corrupted, it can lie about the values stored in the PCRs, thus defeating the entire measurement process.
The designers of the TPM specification have thought about this particular problem, though; the result is the remote attestation mechanism. The TPM supports a "quote" operation, which provides a list of current PCR values signed by a private key hidden within the device. This operation also includes a nonce value provided with the request, preventing quotes from being reused by hostile software. A system wanting to verify a quote can verify the signature using a well-known public key and ensure that the nonce matches what it provided; if things check out, then the PCR values can be trusted as having been provided by the TPM.
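To see why the nonce matters, here is a toy model of the quote operation. The HMAC below stands in for the RSA signature a real TPM produces with its hidden private key (and the shared key stands in for the key pair); the point is only that the quote covers both the PCR values and the caller's nonce, so a captured quote cannot be replayed against a fresh challenge:

```python
import hashlib
import hmac
import os

# Stand-in for the TPM's hidden attestation key; a real TPM signs with RSA
# and the verifier checks with the corresponding public key.
TPM_KEY = os.urandom(32)

def tpm_quote(pcr_values: list, nonce: bytes) -> bytes:
    # "Sign" the concatenation of the PCR values and the caller's nonce.
    payload = b"".join(pcr_values) + nonce
    return hmac.new(TPM_KEY, payload, hashlib.sha256).digest()

def verify_quote(pcr_values: list, nonce: bytes, quote: bytes) -> bool:
    expected = hmac.new(TPM_KEY, b"".join(pcr_values) + nonce, hashlib.sha256).digest()
    return hmac.compare_digest(expected, quote)

pcrs = [hashlib.sha1(b"stage-%d" % i).digest() for i in range(3)]
nonce = os.urandom(20)
quote = tpm_quote(pcrs, nonce)
```

A verifier that supplies a fresh nonce with each request will accept `verify_quote(pcrs, nonce, quote)` but reject the same quote presented against any later nonce.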
This quote can be passed back to a remote attestation server, which can check the PCR values and be sure that your machine is running trusted software. This works well for granting (or denying) access to the network or some other resource, but it is rather less helpful for a user trying to decide whether they can trust their own system. That's because answering that question requires asking the remote-attestation server — by sending it packets through the kernel. Once again, a malicious kernel is in a position to lie to the user.
There is some help for this problem as well, based on the TPM's ability to encrypt data. In particular, it can "seal" data, which cannot subsequently be decrypted unless the PCRs contain the expected values. So the solution is to encrypt the disk and seal the key with the TPM. If the system has been tampered with, the PCR values will not match, the disk cannot be decrypted, and the system will fail to boot. If it boots successfully, the software has not been messed with.
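The sealing idea can be sketched as follows, assuming a secret of at most 32 bytes: the unseal key is derived from the PCR values, so a system whose measurements differ cannot recover the data. A real TPM enforces this internally with its own storage keys; the XOR-plus-HMAC construction here is purely illustrative:

```python
import hashlib
import hmac

def seal(secret: bytes, pcrs: list) -> tuple:
    # Derive the sealing key from the expected PCR state (illustrative only;
    # secret must be at most 32 bytes for this toy XOR scheme).
    key = hashlib.sha256(b"seal" + b"".join(pcrs)).digest()
    ciphertext = bytes(s ^ k for s, k in zip(secret, key))
    tag = hmac.new(key, ciphertext, hashlib.sha256).digest()
    return ciphertext, tag

def unseal(ciphertext: bytes, tag: bytes, pcrs: list) -> bytes:
    key = hashlib.sha256(b"seal" + b"".join(pcrs)).digest()
    if not hmac.compare_digest(tag, hmac.new(key, ciphertext, hashlib.sha256).digest()):
        raise ValueError("PCR state does not match; refusing to unseal")
    return bytes(c ^ k for c, k in zip(ciphertext, key))

good_pcrs = [hashlib.sha1(b"trusted bootloader").digest()]
ct, tag = seal(b"disk encryption key", good_pcrs)
```

With the expected PCR values, `unseal(ct, tag, good_pcrs)` returns the key; with any other values, it refuses, which is exactly the behavior that makes a tampered system fail to boot.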
This kind of encryption will protect the contents of the disk if somebody removes it from the laptop, but is not helpful if somebody steals the whole box. The obvious answer is to add a passphrase to the bootstrap process, but that just leads to another question: how does the user know that the passphrase prompt is legitimate? An attacker (of the evil maid variety, perhaps) could install malware that would put up a fake password prompt, then fake a crash and reboot. Matthew noted that few of us would be seriously surprised by a crash-and-reboot cycle; we are, he said, not very good at doing computers in general.
One possible anti-evil-maid tactic would be to have the TPM encrypt a secret (a phrase, perhaps) and display it at boot time; if the phrase shows up, the user knows that the software running up to that point has not been modified. But an attacker could simply observe the phrase and replicate it. That problem could be addressed by putting the encrypted phrase onto a USB stick and booting from it anytime the system has been out of the user's control. This tactic requires discipline, though; the user has to remember to use it whenever there might be trouble.
A promising alternative (also described here) is to encrypt a time-based one-time password (TOTP) seed and seal that; the seed would also be put onto a second device (a smartphone, for example). When the system boots, it decrypts the seed, calculates the current one-time password, and displays it on the screen. That value should match what is shown on the second device. If the two numbers match, the system has not been changed. Either that, Matthew said, or both devices have been tampered with; it would be a good idea to not leave them both unattended in the same place.
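The one-time-password computation itself is standard TOTP (RFC 6238); both the machine (from the unsealed seed) and the phone (from its copy) run the same calculation. A minimal stdlib sketch:

```python
import hashlib
import hmac
import struct
import time

def totp(seed: bytes, t: float = None, step: int = 30, digits: int = 6) -> str:
    # RFC 6238 TOTP: HOTP over the current 30-second time step, HMAC-SHA1.
    counter = int((time.time() if t is None else t) // step)
    msg = struct.pack(">Q", counter)
    digest = hmac.new(seed, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

seed = b"12345678901234567890"  # the RFC test secret, standing in for the sealed seed
# totp(seed, t=59) == "287082" (RFC 4226 test vector, counter 1)
```

If the value shown on the screen at boot matches the one on the phone, the seed was unsealed successfully, which in turn means the measured boot chain matched.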
Other issues
All of this seems like a reasonable solution, but there are still a few potential issues worth pointing out. The point of this whole exercise is that, if somebody has modified the system, it will fail to boot. But there can be legitimate reasons to modify the system, including installing a new bootloader or kernel, firmware upgrades, and more. Supporting upgrades is "awkward." Beyond that, once the TPM has decrypted the disk-encryption key, that key will be sitting in RAM where a hostile device could copy it via a DMA operation. An I/O memory management unit (IOMMU) can address that threat, but it must be enabled to do so; most distributions leave the IOMMU turned off, since it has an unpleasant habit of breaking Intel graphics.
Then there is the issue of the management engine. It runs before the CPU starts and performs the initial firmware verification. If the management engine can be made to run arbitrary code, the whole chain of trust fails before it even starts. This processor runs encrypted code that cannot be audited, so, Matthew said, he has no idea how secure it really is.
What about after the kernel boots? Modified user-space software can be just as bad. That's where the kernel's integrity measurement architecture comes into play. Every binary run by the system can be measured, with the resulting value used to extend a PCR; that PCR can then be used to verify that the binaries have not been modified. The only problem is that getting to a specific PCR value not only requires that a specific set of binaries is run; it also requires that they are run in a specific order. Guaranteeing that order during the bootstrap process is not easy. Working around that problem is a matter of obtaining the "event log" from the TPM; this log contains a record of each individual measurement event. Interested code can examine the log, verifying the trustworthiness of each individual binary that has been run.
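Replaying the event log amounts to folding the extend operation over the recorded measurements; if the final value matches what the TPM reports for the PCR, the per-binary entries in the log can be trusted. A sketch, with hypothetical log entries:

```python
import hashlib

def replay_event_log(events) -> bytes:
    """Recompute a TPM 1.2-style PCR from a log of (name, SHA1 digest) pairs."""
    pcr = b"\x00" * 20  # PCRs start as 20 zero bytes at reset
    for name, digest in events:
        pcr = hashlib.sha1(pcr + digest).digest()
    return pcr

log = [  # hypothetical entries for illustration
    ("/sbin/init", hashlib.sha1(b"init binary").digest()),
    ("/usr/sbin/sshd", hashlib.sha1(b"sshd binary").digest()),
]
computed = replay_event_log(log)
# Compare `computed` against the PCR value the TPM reports; on a match, each
# entry's digest can then be checked against a list of known-good hashes,
# regardless of the order in which the binaries happened to run.
```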
Going one step further and measuring containers is really just a matter of measuring their disk images. This, too, can produce an event log, yielding a list of every container that has been launched. If a specific container turns out to be hostile (or to contain a bad vulnerability), it is possible to determine which systems it has run on. The just-announced Rkt 1.0 release has the ability to do this kind of measurement.
For those interested in the code: Rkt is available on this page. Matthew has put up code for shim (for the bootstrap process) and for GRUB. His tpmtotp repository has code to make the one-time password checks work. Much of this code will eventually go upstream to the relevant projects but, for now, it must be obtained separately.
An audience member asked about what pieces are missing still; Matthew replied that there is currently no way to measure the firmware running in the disk drive (or other peripherals). A hostile drive could provide correct code for a critical binary once to pass the IMA test, then provide corrupt code thereafter. We need, he said, a way for users to verify that the rest of the platform is also trustworthy.
When asked about the trustworthiness of the TPM himself, Matthew said that he hasn't really looked into it; he is afraid of what might happen. He knows of nobody who has tried to do any sort of serious fuzz-testing of TPM chips. There is, he said, a push to move TPM functionality into the firmware, running it on the management engine; that would allow manufacturers to remove the separate TPM chip from the board. If that were done, though, any flaws in the TPM implementation might enable code to be executed on the management engine itself — not a pleasing prospect.
The video for this talk is available on the LCA site.
[Your editor thanks LCA for assisting with his travel expenses.]
| Index entries for this article | |
|---|---|
| Conference | linux.conf.au/2016 |
Posted Feb 9, 2016 22:54 UTC (Tue)
by PaXTeam (guest, #24616)
[Link] (12 responses)
how's that possible if the block device itself is encrypted?
Posted Feb 10, 2016 0:35 UTC (Wed)
by mjg59 (subscriber, #23239)
[Link] (11 responses)
Posted Feb 10, 2016 1:02 UTC (Wed)
by zx2c4 (subscriber, #82519)
[Link] (1 responses)
Posted Feb 10, 2016 2:24 UTC (Wed)
by luto (guest, #39314)
[Link]
On the pessimistic side, the rowhammer exploit is a great example of a way that random poorly-controlled corruption can sometimes lead to privilege escalation.
Posted Feb 11, 2016 6:34 UTC (Thu)
by rahvin (guest, #16953)
[Link] (8 responses)
Firmware is one of the scary things that came out of the Snowden revelations about the NSA: the information indicated that the NSA is not only capable of replacing the firmware on various components but is extremely adept at it, and that it used such tactics frequently. It's the ideal spy tool; hardware firmware can run on full processors with non-volatile memory and even have DMA access, and there is essentially no way to verify a firmware image that isn't dynamically loaded at boot. In fact it would appear your only security against firmware exploits is to have un-serviceable firmware and a trusted manufacturer that validates all flashed firmware before it leaves the factory, or to have only loadable firmware where you have a way to validate the image. But even with un-serviceable firmware you are still at risk that the NSA will divert the package and physically replace the firmware chips with compromised ones.
I'd love to see you keep up your research in these areas and help FOSS keep going in this direction to close more and more exploit avenues. TPMs have been around for years and I've read a lot about them, but your talk is the first real analysis I've ever seen of just how they operate and what some of their strengths and weaknesses are.
Posted Feb 11, 2016 8:03 UTC (Thu)
by Cyberax (✭ supporter ✭, #52523)
[Link] (7 responses)
Posted Feb 11, 2016 8:24 UTC (Thu)
by dlang (guest, #313)
[Link] (6 responses)
Except if you need high performance, at which point you get Self Encrypting Disks, SSDs that do the encryption on the drive itself.
Posted Feb 11, 2016 16:11 UTC (Thu)
by nybble41 (subscriber, #55106)
[Link] (3 responses)
If you need such high performance that you can't do the encryption on the CPU, you might as well do without encryption entirely. You can't prove that the black box encryption built into the disk is actually any more *effective* at protecting your data than no encryption. Even assuming that it doesn't include deliberate backdoors, the odds are good that there are bugs in the implementation. As it's closed-source and proprietary, you have no way to audit the security of the firmware.
Posted Feb 11, 2016 23:38 UTC (Thu)
by dlang (guest, #313)
[Link] (1 responses)
This is what is presented to upper management as being the state of the art in data security.
Posted Feb 11, 2016 23:59 UTC (Thu)
by jhhaller (guest, #56103)
[Link]
Posted Feb 12, 2016 13:09 UTC (Fri)
by james (subscriber, #1325)
[Link]
If you're trying to hide personal data from the NSA, GCHQ, or whoever, then you're absolutely right. If you're trying to hide top-secret corporate data, then you may have to worry about nation-state-sponsored corporate espionage.
But if you're storing credit card details or personally-identifying information, and the threat is random thieves, then the chances that they can blow the encryption on the disk is pretty minimal. Anyone who could get the data out of the disk would have easier ways to do it, but you can honestly tell customers that the data on the stolen disk was fully encrypted.
Posted Feb 12, 2016 8:49 UTC (Fri)
by jezuch (subscriber, #52988)
[Link]
Well, the drive has to run the encryption on *something* too. I guess the controller has some specialized hardware for that but still...
That said, I don't think I trust the on-drive encryption. It may be better in that the /boot will be encrypted as well, but since nobody can lift the hood and look inside the firmware, I think I'll stay with LUKS :)
Posted Feb 12, 2016 12:25 UTC (Fri)
by ksandstr (guest, #60862)
[Link]
This is the same issue as with TCP offload engines on Ethernet cards: the CPU usually does better, and for the cost difference between dumb and smart hardware one can simply buy more CPU. Generally the CPU has better utilization than smart peripherals, which yields higher efficiency per dollar. (hence multitasking on commodity hardware, and the death of channel architectures.)
Storage has the additional issue of trust: in the case of self-encrypting storage devices, it's unverifiable[0] that the manufacturer's firmware doesn't simply use the key-setting interface as an unlocking password and store all data verbatim, or encrypt the data with a session key different from what the host provides. Firmware can also fuck the user at the NSA's request where a CPU's verifiable AES implementation would need more than National Security Letters pointed at a small number of CTOs' foreheads.
[0] besides desoldering the flash chips & having a highly specialized poke at them
Posted Feb 9, 2016 23:02 UTC (Tue)
by dowdle (subscriber, #659)
[Link]
Posted Feb 10, 2016 6:47 UTC (Wed)
by alison (subscriber, #63752)
[Link] (2 responses)
Matthew also presented an excellent talk at 32c3, whose video is available:
Anyone who cares about this topic should also watch Joanna Rutkowska's talk:
Some questions that I've never seen answered are: how much storage do TPMs have? And what happens when it's filled? Presumably the oldest values are just rolled off?
While we can all agree that 'remote attestation' has some problems, when over-the-air updates are considered for embedded devices, there is no more appealing alternative. Is there?
Posted Feb 10, 2016 11:38 UTC (Wed)
by k3ninho (subscriber, #50375)
[Link] (1 responses)
From The Fine Article:
PCRs are SRAM, I guess. Regarding your question, the NVRAM is irrelevant and may be overwritable firmware instructions, but it's not part of the cycle of state attestation that mjg59 describes here. The old PCR values always get overwritten each time you calculate a new state of the platform configuration and store it in the registers. PCRnew = hash(append(PCRold, new input data)), it seems. I'd be curious to know how much info leaks in the "event log" that might help you reconstruct the rainbow and create a preimage to subvert the PCRs.
K3n.
Posted Feb 10, 2016 17:08 UTC (Wed)
by mjg59 (subscriber, #23239)
[Link]
Posted Feb 10, 2016 9:07 UTC (Wed)
by fratti (guest, #105722)
[Link] (5 responses)
Even if you ignore real-world usability of this entire mechanism, this here seems like it's essentially a show-stopper. The TPM is a black box and the management engine is a black box as well, and I'm not sure whether I'm willing to trust embedded programmers with security (or any human, for that matter), and this is ignoring the kind of damage a government could do.
Also, I'm assuming the private key hidden within the device is not re-writeable by the user and is instead set by the manufacturer; I don't like that idea.
Posted Feb 10, 2016 17:05 UTC (Wed)
by mjg59 (subscriber, #23239)
[Link] (4 responses)
Posted Feb 10, 2016 22:12 UTC (Wed)
by luto (guest, #39314)
[Link] (3 responses)
I seem to recall that Windows 10 was going to encourage this to change. I don't remember what happened.
One thing I really dislike about the TPM design is that there aren't any clear security properties defined for communication between TPM and host. If I ask my TPM to unseal a secret, I have to convince the TPM that I'm the correct software stack, but I see nothing clearly designed to allow the TPM to prove back to me that it's the TPM I think it is. It doesn't help that trousers (the standard Linux TPM software stack) is garbage, too.
Posted Feb 10, 2016 22:19 UTC (Wed)
by mjg59 (subscriber, #23239)
[Link] (2 responses)
You can perform a dance around the EK to verify that you're communicating with the TPM that you think you are, and once that's established you can get it to certify that it controls a key. That should be sufficient - generate a non-migrateable key, get the TPM to certify it, seal the secret with it. Next time round, hand over the encrypted key blob, ensure that you get back a certification signed by the same EK, hand over the encrypted secret and ask for it to be unsealed.
Posted Feb 10, 2016 23:39 UTC (Wed)
by luto (guest, #39314)
[Link] (1 responses)
Is this secure against a MITM between host and TPM? The unsealed secret is protected by the authorization session, but there's so much gobbledygook and semi-home-brewed crypto in the authorization session stuff that I can't tell whether a MITM would have to know the authorization key, know the SRK, both, or neither.
If I actually trusted trousers at all, I would care as much about this issue.
Posted Feb 10, 2016 23:49 UTC (Wed)
by mjg59 (subscriber, #23239)
[Link]
Posted Feb 10, 2016 12:24 UTC (Wed)
by jem (subscriber, #24231)
[Link] (7 responses)
Posted Feb 11, 2016 23:40 UTC (Thu)
by dlang (guest, #313)
[Link] (6 responses)
you missed this section of the article.
> Some TPMs are minimal in functionality and slow, while others are essentially ARM cores running at several hundred MHz.
I'll bet that they are far closer to the off-the-shelf chips than you would like to think.
Posted Feb 12, 2016 8:14 UTC (Fri)
by jem (subscriber, #24231)
[Link] (5 responses)
>I'll bet that they are far closer to the off-the-shelf chips than you would like to think.
The TPM chips should be able to detect and resist all kinds of physical attacks, including altering the supply voltage, voltage spikes, light, dissolving with acid, etc. The chips may contain up to dozens of special hardware security features like sensors, shields, and obfuscation of the circuitry. Protection against Differential Power Analysis, which involves both the hardware and the chip's firmware, is also important. As I mentioned earlier, the TPM chips typically also contain hardware blocks to accelerate crypto operations like RSA and AES. On the other hand, the chips should be stripped of all unnecessary hardware features, especially I/O blocks.
These requirements mean that these chips have totally different part numbers than the off-the-shelf microcontrollers. Data sheets are available only on request, and these parts are not for sale to the public on Digi-Key. There is no reason, however, that these chips can't have a CPU core implementing a well known instruction set, like ARM or 8051. It might even be an advantage because of better (less buggy) tooling.
You can, of course, program a run-of-the-mill microcontroller to behave like a TPM chip. However, note that there is a certification program for these products and you would have a hard time getting your design to pass evaluation.
https://www.trustedcomputinggroup.org/certification/certi...
Posted Feb 12, 2016 13:07 UTC (Fri)
by ksandstr (guest, #60862)
[Link] (4 responses)
Why would the mass-market TPM in Joe Random's laptop be this ludicrously high-tech? Most likely a discrete component will have been manufactured in China[0] using masks generated years ago from a design originally produced for generic purposes, and packaged in el cheapo 28-pin BGA just like everything else that's got very modest thermal requirements.
As a reminder, about a decade ago the plan was that from the second generation on, TPMs would be integrated into motherboard chipsets, e.g. the south bridge, to save on costs and to put Hollywood's Bespoke Magic behind the hardware veil. Certainly a dormant functional block in a mandatory chipset is cheaper than having a specially-wired bus and an extra package on the actual board just to add a feature which no-one uses.
Consequently the best way to hack a TPM is to build one, or to wield real-world power over people who do. Given that this is the case, and has been for a dog's lifetime, the avenues of compromise that're cost-effective to guard against will be at the other end of the TPM-to-CPU bus. Other solutions don't make economic sense given that software running on the CPU is always the first thing to suffer compromise -- as in various "born secure" game consoles, where manufacturer controls are overridden by microcontrollers hooked up to a USB port and CPLD gadgets hot-glued to the motherboard.
[0] complete with the chinese manufacturer's own backdoor, and a backdoor for every layer of China's security establishment
Posted Feb 12, 2016 18:41 UTC (Fri)
by jem (subscriber, #24231)
[Link] (3 responses)
Maybe because otherwise the manufacturer wouldn't be allowed to call it a Trusted Platform Module? Besides, Joe Random's mass-market laptops rarely contain TPMs; they are more likely to be found in high-end business laptops and servers.
TPMs are closely related to smartcard chips, and a lot of the groundwork has been done already. Smartcard chips have been used for years in ID cards, payment cards, phone SIMs, and TV set-top-box smart cards, so these "ludicrously high-tech" chips are in fact mass-market products at this stage, and are not overly expensive anymore.
Posted Feb 13, 2016 2:00 UTC (Sat)
by dlang (guest, #313)
[Link] (2 responses)
given the vulnerabilities that we are seeing in such systems (rm -rf / deletes everything on the TPM, for example), I really doubt that there is the certification process you think exists.
Posted Feb 13, 2016 5:29 UTC (Sat)
by mjg59 (subscriber, #23239)
[Link]
rm --no-preserve-root / will delete all the runtime-accessible UEFI variables on a system, but it won't touch the TPM in any way
Posted Feb 18, 2016 3:04 UTC (Thu)
by rahvin (guest, #16953)
[Link]
Posted Feb 11, 2016 15:25 UTC (Thu)
by anton (subscriber, #25547)
[Link] (2 responses)
So if locked-down Windows-only PCs have not come about, it seems to me that they just have not come about yet.
Posted Feb 11, 2016 17:44 UTC (Thu)
by mjg59 (subscriber, #23239)
[Link]
Posted Feb 11, 2016 18:07 UTC (Thu)
by pjones (guest, #31722)
[Link]
Aside from political problems with doing that, which they're keenly aware of, one immediate result would be that vendors wouldn't tell MS when there's a security problem, because having a loader blacklisted would hurt too much. That means vulnerable loaders would rarely, if ever, be blacklisted. This is a scenario which significantly weakens Microsoft's ability to use Secure Boot to protect their OS and their customers' systems.
The cat is out of the bag on signing our bootloaders. It's not a thing they can just stop doing.
It's not happening.
What may happen is vendors whose products are in markets that don't require Windows compatibility could implement Secure Boot, with an entirely different set of trusted keys. But that's the scenario we've already got - it's exactly like phone bootloader locking, just with a different mechanism.
Posted Feb 16, 2016 7:50 UTC (Tue)
by ras (subscriber, #33059)
[Link] (12 responses)
It's a pity it's taken 10 years for their usefulness to dawn on the community. As it is, this dawning has been nicely timed to coincide with the version we use (TPM 1.2) becoming obsolete. TPM 2.0 apparently attempts to fix the "brittle sealed storage" problem in TPM 1.2, which would be nice.
But I gather 2.0 isn't backward compatible with 1.2. Is there any planned upgrade path? For example, I presume there is no reason both can't exist in the system at the same time? Or maybe people make devices that support both interfaces.
Posted Feb 16, 2016 8:18 UTC (Tue)
by mjg59 (subscriber, #23239)
[Link]
People have been using TPMs for good for a while, including LUKS and SSH support. But we've done a bad job at obtaining wider support for this, and that's definitely a failure on our side. Even so, this is still an area where free operating systems can do a good job of competing against Windows (which still basically uses TPMs for disk encryption or corporate VPN support) and Apple (which doesn't ship TPMs), so there's plenty of work to do. I don't think your characterisation is entirely fair - we've been paying attention, just not advertising it.
Posted Feb 16, 2016 13:01 UTC (Tue)
by corbet (editor, #1)
[Link] (5 responses)
Posted Feb 16, 2016 23:58 UTC (Tue)
by ras (subscriber, #33059)
[Link] (4 responses)
After reading Matthew's reply, it occurred to me that ssh + TPMs are a match made in heaven. Not the ssh implementations we have now that unlock client keys - they seem rather pointless. But an sshd daemon that used TPM 2.0 to guard the host key would be awesome. It would give me a guarantee that the machine I am ssh'ing into is still running the BIOS signed by the manufacturer, a kernel signed by my distro, and user space signed by whoever I trusted. The machine is in the same state it was when I built it, in other words - uncompromised.
It's one of the most basic usage scenarios one can imagine for a TPM, but it can't really be done with TPM 1.2 due to the brittleness problem. It could be done with TPM 2.0. Maybe in another 10 years our software stack will have adapted to 2.0 and will finally be able to use TPMs to do the things they were designed for on a regular basis.
It will have only taken 20 years. For an industry that prides itself on moving quickly, sometimes we are god-awfully slow. We needed this stuff to be working 5 years ago, when the cloud broke our assumption that the machines we look after are under our personal supervision. It would be nice if TPMs were a standard feature in IoT deployments (which effectively add a few noughts to the number of machines in the cloud, scaling the security problem by a similar factor). But AFAICT we don't even have user-space libraries that talk 2.0 yet, so I bet we miss that boat too.
Posted Feb 17, 2016 3:40 UTC (Wed)
by mjg59 (subscriber, #23239)
[Link]
Posted Feb 17, 2016 9:20 UTC (Wed)
by paulj (subscriber, #341)
[Link] (2 responses)
That the fears haven't been realised isn't because there was nothing to fear; it's simply that it has taken the leading player - MS - a long time to be able to start _requiring_ the constituent bits of technology, because of backward-compatibility reasons. Which were exacerbated by a misstep in a major OS release MS made. There were also anti-trust issues to be careful not to step into.
You can't dismiss the fears when the major players still haven't finished the long game they are playing in setting up the required environment.
Posted Feb 17, 2016 11:44 UTC (Wed)
by ras (subscriber, #33059)
[Link]
Maybe they can, but the TPM doesn't give them anything they didn't have before.
It is true that after boot the TPM can uniquely identify the software that is running. However the TPM doesn't magically "gather" this information. It is gathered by the software as it boots (CPU Boot code, BIOS, Boot Loader, kernel, ...) and given to the TPM as a series of hashes (SHA-1 for TPM 1.2) for each lump of code loaded. So there is no "new" information being created here - anyone with access to the system can run SHA-1 and come to the same result.
And in this case at least, if they have access to the TPM then they have access to the entire system, so they could do it themselves. This is because the TPM is no different from any other local device, in that no one outside of the system can access it any more than they can access the disk drives or clock. *I* can certainly choose to give someone access to the information in the TPM, and that information could be used to verify whatever I want to let them verify regarding the software running, but this is no different from, say, me choosing to provide access to some of the content on my disk drives via a web server. But as it happens I don't have to provide any external access to the TPM for this ssh magic to work, so no new potential channels that might leak information need be created.
My SSH client verifies the host is unchanged by verifying the host key just as it does now, whatever information that host key reveals about my system is unchanged by the presence of the TPM. The only difference is the host key is "sealed" by the TPM, which effectively means it is encrypted by the TPM and the TPM will refuse to decrypt it if the software (BIOS, Boot loader, kernel, ssh daemon, and whatever else you think might be important) changes. It doesn't do that by "knowing" what the right software is. It does that by verifying the hashes it was given through the boot process match the hashes you gave it when you asked it to seal the ssh host key.
Posted Feb 17, 2016 17:49 UTC (Wed)
by mjg59 (subscriber, #23239)
[Link]
Posted Feb 18, 2016 3:25 UTC (Thu)
by rahvin (guest, #16953)
[Link] (4 responses)
With Windows 8 and future versions, Microsoft is making an effort to integrate the TPM controls directly into the OS to make use of the TPM less expensive, more standard, and more secure. As a result the TPM has seen some mild expansion in use. But I wouldn't argue open source is much behind the curve here. Honestly, FOSS could leapfrog Microsoft overnight if the TPM actually became popular enough that you could count on a module being in any random computer you purchased, instead of having to spec an expensive enterprise model.
That's why I'm glad the same people who developed the FOSS Secure Boot shim are looking at this. Secure Boot and the TPM are complementary technologies. And if Microsoft actually gets enough support built into Windows that the TPM becomes standard, then FOSS will be well placed to take advantage of it.
Posted Feb 18, 2016 4:58 UTC (Thu)
by ras (subscriber, #33059)
[Link] (3 responses)
My last two Dell laptops (Precision and now XPS) do have TPMs, and I gather that is usually the case for "work"-oriented Dells.
You are right in saying that it is a rare person who uses them (certainly I didn't), but that has to come with a big qualification: it only applies to PCs. My phone has a TPM-like thingy, and it is switched on by default. That isn't unusual. In fact, in a few years I'd say most people will use the hardware protection provided by a TPM-like device every day of their lives.
When I ask myself why I am happy to use it on my phone but not my PC, the answer seems to be that I am pretty confident I won't lose data on my phone because of the TPM. At least I haven't yet, and it's gone through a fair few firmware upgrades. On the other hand, I have a friend who did turn on disk encryption on his Mac and, one mishap later, lost everything stored there. (It was backed up, but it turns out Time Machine encrypted the backup with the sealed key.) It happened to contain his wife's photo collection from an overseas holiday, so he wasn't a popular boy. Colour me skeptical, but if I ever get around to turning on full disk encryption on Debian testing and sealing the key with the TPM, I also fully expect to lose all the data on the disk; repeatedly.
If that expectation changes to me believing it works as well on my PC as it does on my phone, I would enable the TPM and full disk encryption as a matter of course, and I'd hope my distro would do that by default.
I guess the point I'm trying to make is that people do find TPMs useful; they only avoid them because they are too hard to use. Microsoft, Apple, and Google are now doing an excellent job of making it obvious they don't have to be hard to use. It seems we in the open source world are learning from them how to deploy TPMs, not the other way around.
Posted Feb 19, 2016 2:11 UTC (Fri)
by rahvin (guest, #16953)
[Link] (1 responses)
Posted Feb 19, 2016 3:01 UTC (Fri)
by ras (subscriber, #33059)
[Link]
It definitely was working in the Precision. The XPS has TPM 1.2, and with 1.2 on the way out due to SHA-1, my interest has flagged. The kernel on the XPS seems to know about it, though.
> Until the PC side figures out how to do TPM right like on the Phone side I would avoid them.
Yes, well, my take is that TPM 1.2 is just too hard to use. Matthew says the brittleness problem can be solved by resealing, but getting that right for software you control sounds hard, and for software that changes underneath you (BIOS, host software in a VM) it sounds impossible. Maybe TPM 2.0 will fix that, but given that you can't buy one yet, that's a long way out. And besides, I care more about keeping the data on my laptop accessible than I care about keeping it secure. So maybe it will never be good enough.
However, my laptop isn't where my interest lies. I am responsible for keeping customer data secure on VMs managed by others. My "responsible" approach is to keep the data to a minimum and cross my fingers. If it were a joke it might even be funny. It almost brought an audible sigh of relief from me when I heard Matthew say he was working on extending the TPM into containers. The interesting thing is that I don't even care that it's brittle and so will probably lose my data. That's mostly because VMs are already very good at losing my data.
One thing I don't understand is why they don't put the TPM on the CPU die. Why everyone thinks putting the TPM on an external bus, where it can be reset at will with a minimal amount of hardware, is a good idea is a complete mystery to me. The TPM's integrity depends on being able to provide a tamper-evident audit trail from boot. If you can reset it without resetting the rest of the system, you can re-program that audit trail to be whatever you want. So if it's not on the die, it can only provide a very moderate level of protection.
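The attack described here can be illustrated with a toy version of the extend operation (again a Python sketch, not real TPM tooling; the measurement values are invented): the extend-only interface stops an attacker from writing an arbitrary PCR value, but an out-of-band reset lets them rebuild any value by replaying known-good measurements.

```python
import hashlib

def extend(pcr: bytes, measurement: bytes) -> bytes:
    # TPM 1.2 style extend: SHA1(old 20-byte PCR || SHA1(measurement))
    return hashlib.sha1(pcr + hashlib.sha1(measurement).digest()).digest()

def replay(measurements):
    pcr = bytes(20)  # a reset returns the PCR to all zeroes
    for blob in measurements:
        pcr = extend(pcr, blob)
    return pcr

clean_chain = [b"BIOS", b"bootloader", b"kernel"]
tampered_chain = [b"BIOS", b"evil bootloader", b"kernel"]

# After a tampered boot, the PCR provably differs from the clean value;
# no TPM operation sets it back directly.
assert replay(tampered_chain) != replay(clean_chain)

# But if the chip alone can be reset without resetting the CPU, an
# attacker on the tampered system can replay the clean measurements
# and reproduce the "trusted" value exactly, forging the audit trail.
pcr = bytes(20)  # bus-level reset while the system keeps running
for blob in clean_chain:
    pcr = extend(pcr, blob)
assert pcr == replay(clean_chain)
```

This is why the comment argues the protection is only moderate when the TPM sits on an external bus: the security of the audit trail rests on the assumption that the chip and the CPU can only be reset together.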
Posted Feb 19, 2016 18:05 UTC (Fri)
by foom (subscriber, #14868)
[Link]
Whatever problem your friend had with disk encryption in MacOS can't be blamed on a TPM...
Posted Feb 18, 2016 8:05 UTC (Thu)
by ssokolow (guest, #94568)
[Link] (3 responses)
Unfortunately, while we were all up in arms about TPMs, they decided to add on-die ARM microcontrollers with more privilege than the OS (commonly described as "Ring -3"), like the Intel Management Engine and the AMD Platform Security Processor, and then use those for proprietary DRM-enforcement code instead.
https://libreboot.org/faq/#amd
Given that one of the earlier (pre-ARM) versions of Intel ME had a proof-of-concept exploit, I'm seriously concerned about what I'm going to do when my current AMD CPU and BIOS-based mobo need to be replaced. (Perhaps do my work on a RasPi 2 with the reverse-engineered video firmware and do my gaming on an airgapped Intel/AMD box via a KVM? I certainly can't justify the Libreboot D16 or the Talos Secure Workstation on my current budget.)
Posted Feb 18, 2016 21:36 UTC (Thu)
by yuhong (guest, #57183)
[Link] (2 responses)
Posted Feb 19, 2016 11:43 UTC (Fri)
by ssokolow (guest, #94568)
[Link]
That aside, I still don't like having a "Ring -3" core that I can't audit.
Posted Feb 23, 2016 15:37 UTC (Tue)
by ssokolow (guest, #94568)
[Link]
http://www.alexrad.me/discourse/why-rosyna-cant-take-a-mo...
That article explains how it's done with the ME, and AMD's own press releases make it clear that the primary reason for adding the PSP was to woo those who want hardware-level DRM enforcement.
It depends what your threat model is, surely?
Secure storage is pure marketing
https://media.ccc.de/v/32c3-7343-beyond_anti_evil_maid
https://media.ccc.de/v/32c3-7352-towards_reasonably_trust...
>> They feature a small amount of nonvolatile RAM storage; "small amount" meaning hundreds of bytes in this case.
>> ...
>> A core feature of a TPM is the platform configuration registers, or PCRs. ... [PCRs'] value cannot be set directly (other than initializing them to zero when the system resets).
> In other words, the TPM is worse than the CPU in every single way, so why do manufacturers bother installing them in so many machines? Answering that question was the focus of much of the rest of the talk.
> A core feature of a TPM is the platform configuration registers, or PCRs.
I would say the core feature of the TPM, or at least the reason it is implemented as a separate piece of hardware, is that it focuses on "doing one thing, and doing it well". A TPM is not a general-purpose computer, but a combination of custom software and specialized hardware whose raison d'être is to keep secrets (encryption keys) safe and to operate on data with those stored secrets through a narrow and well-defined interface.
The chips used in TPMs are not off-the-shelf microcontrollers. They are designed to be as tamper-proof as possible and typically contain special hardware to accelerate cryptographic operations.
Of course, nothing is perfect and every claim should be met with a healthy dose of scepticism, but I think the TPM wins in comparison with the huge attack surface of the kernel.
The scenario of locked-down systems where the owner is not in control is very much a reality for users of iPhones and game consoles. With UEFI Secure Boot, PCs can be configured to boot only signed OSes; isn't that required by some Windows versions nowadays? And while the Windows 8 logo requirements still demanded that Secure Boot could be turned off, that is no longer the case for Windows 10. I guess that, likewise, at some point Microsoft will stop signing Linux boot loaders.
I guess you weren't reading back in 2005 :)
The first talk?
> My last two Dell laptops (Precision and now XPS) do have TPMs, and I gather that is usually the case for "work"-oriented Dells.
Yeah, but were they accessible? There's been a shocking tendency with TPMs to build them in because they are so cheap and then just disable them in the BIOS with no way to activate them; that is, unless you pay extra for the model with the BIOS that enables them.
Until the PC side figures out how to do TPM right, like the phone side has, I would avoid them. My experience with the TPM on Windows was honestly terrible. But the technology is quite powerful if you use it right. The FOSS community has really smart people looking at this. I can actually see this being a key feature some day, and I can see FOSS leading the way here. Microsoft's support of the TPM has always been pretty terrible, and I don't see that changing very soon.
