Security is a much-discussed subject at the moment; it has become clear
that security needs to be improved throughout our community - and, indeed,
in the industry as a whole. But anybody who has lived through the last
decade does not need to be told that many actions carried out in the name
of improving security are, at best, intended to give control to somebody
else. At worst, they can end up reducing security at the same time. A
couple of examples from the hardware world show how "security" often
conflicts with freedom - and with itself.
UEFI secure boot
LWN first wrote about the UEFI "secure boot"
feature last June. At that point, the potential for trouble was clear,
but it was also mostly theoretical. More recently, it has been revealed that Microsoft intends to require the
enabling of secure boot for any system running the client version of
Windows 8. That makes the problem rather more immediate and concrete.
The secure boot technology is not without its value. If an attacker is
able to corrupt the system's firmware, bootloader, or kernel image, no
amount of good practice or security code will be able to remedy the
situation; that system will be owned by the attacker. Secure boot makes
such attacks much harder to carry out; the system will detect the corrupted
code and refuse to run it. An automated teller machine should almost
certainly have this kind of feature enabled, for example. Many LWN readers
would find that the amount of time they have to put into family technical
support dropped considerably if certain family members had their systems
protected in this way.
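As a rough illustration of the idea - not of how UEFI actually implements
it, which involves Authenticode-signed PE binaries checked against a
firmware key database - here is a minimal sketch in Python using the
third-party cryptography package; the key names and image contents are
invented.

    # Sketch: refuse to run a boot image unless its signature verifies
    # against one of the enrolled ("trusted") public keys.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey, Ed25519PublicKey,
    )

    def boot_allowed(image: bytes, signature: bytes,
                     trusted_keys: list[Ed25519PublicKey]) -> bool:
        for key in trusted_keys:
            try:
                key.verify(signature, image)
                return True          # signed by an enrolled key: run it
            except InvalidSignature:
                continue             # try the next enrolled key
        return False                 # unsigned or tampered: refuse to boot

    vendor_key = Ed25519PrivateKey.generate()
    trusted = [vendor_key.public_key()]
    kernel = b"pretend kernel image"
    sig = vendor_key.sign(kernel)

    print(boot_allowed(kernel, sig, trusted))               # True
    print(boot_allowed(kernel + b"rootkit", sig, trusted))  # False

The contentious part is not this check itself, but who gets to decide what
ends up in the list of trusted keys.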
Secure boot requires trust in whatever agency applies its signature to the
code. A better name for the feature might be "restricted boot," since it
restricts the system to booting code that has been signed by a trusted
key. The idea is sound enough, except for one little problem: who decides
which keys are trusted? Hardware vendors seeking Microsoft certification
will create a secure boot implementation that trusts Microsoft's keys.
They need not trust any other keys - not even those from other hardware
vendors selling Windows-compatible hardware.
Secure boot would not be a big problem if users were guaranteed the right
to install their own keys or to disable the feature altogether. The owner
of a specific computer may well want to restrict the system to booting
kernels signed by Red Hat, SUSE, or OpenBSD. They might also want to declare
that Windows is not a trusted system - though that only works as long as any
firmware drivers needed to boot the hardware are signed by somebody other
than Microsoft. Owners may want to build their own kernels and sign them with
their own keys. Or they may decide that secure boot is
a pain that they would rather do without. With this freedom, secure boot
could be a beneficial feature indeed.
But nobody is guaranteeing that freedom. The ability to disable secure
boot, at least, may come standard on traditional "desktop PC"
systems, but the role of those systems in the market is declining.
Microsoft very much wants to push Windows into tablets, handsets,
refrigerators, and other new systems. Such machines do not have a stellar
record with regard to enabling owner control even now; it does not seem
likely that Microsoft's certification requirements will improve that
situation. Just as things seemed to be getting better in that area, we may
be about to see them get worse again.
That said, loss of control over our systems is not a foregone conclusion.
Microsoft will have to be very careful about monopoly concerns in the areas
where it is dominant. In the areas where Microsoft has failed to gain
dominance, there is no guarantee that it ever will. And, even then, users
have been clear enough about their desire for access to their own systems
to gain the attention of some big handset manufacturers. Lockdown via
secure boot is not inevitable; in fact, it looks like a battle we should be
able to win. But we must certainly keep our eyes on the situation.
Cloaking malware with the TPM
The pointer to this paper by
Alan Dunn et al [PDF] came via
Alan Cox. These investigators have figured out a way to use the
trusted platform module (TPM) found in most systems to hide malware from
anybody trying to investigate it. In essence, the TPM can be used to
create a trusted botnet capable of resisting attempts to determine what the
hostile code is actually doing.
The TPM provides a number of cryptographic functions along with a set of
"platform configuration registers" (PCRs) that can be used to make
guarantees about the state of the system. As long as the boot path is
trusted, the TPM can sign a message containing PCR values proving that a
specific set of software is running on the system. Fears that this "remote
attestation" capability would be used to lock down systems from afar have
not generally come true - so far. The TPM can also perform encryption and
decryption of data, optionally tied to specific PCR values.
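To make the PCR mechanism a bit more concrete, here is a minimal sketch
(standard-library Python) of the TPM 1.2-style extend operation, in which
each boot stage's measurement is hashed into the running PCR value; the
stage names are purely illustrative.

    # Each PCR starts at zero and can only be "extended": the new value is a
    # hash of the old value concatenated with the measurement of the next
    # component, so the final value summarizes the entire boot chain.
    import hashlib

    def extend(pcr: bytes, component: bytes) -> bytes:
        measurement = hashlib.sha1(component).digest()
        return hashlib.sha1(pcr + measurement).digest()

    pcr = b"\x00" * 20                      # PCR state at reset
    for stage in (b"firmware", b"bootloader", b"kernel"):
        pcr = extend(pcr, stage)
    print(pcr.hex())

A remote party that trusts the TPM's signature over such a value (a "quote")
can conclude exactly which software chain booted, and sealed data will only
be decrypted while the PCRs hold the expected values.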
One other TPM-supported feature is "late launch," a mechanism by which code
can be executed in an uninterruptible and unobservable manner. Late launch
is used to enable mechanisms like Intel
TXT; it is another way of ensuring that only "trusted" code can run on a
system.
The attack described in the paper requires gaining control of the TPM, an
act which may or may not be easy (even after the system itself has been
compromised) depending on how the TPM is being used. Once that has been
done, the compromised software will be able to attest to a remote controlling
node that it is in full control of the system. That node can then send
down encrypted code to be run in the late launch mode. This code is
limited in what it can do - it cannot call into the host operating system
for anything, for example - but it can make important policy decisions
controlling how the malware will operate.
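The scheme in the paper is considerably more involved, but the
controller-side decision it depends on can be sketched roughly as follows;
the quote is modeled here as a simple HMAC with a shared key rather than the
TPM's actual asymmetric signature, and all names and values are invented.

    # Toy model: the controlling node releases the hidden payload only when
    # the bot's (simulated) TPM quote shows the expected late-launch PCR
    # state; an analyst's sandbox, measuring differently, gets nothing.
    import hashlib, hmac

    TPM_KEY = b"stand-in for the TPM's signing key"
    EXPECTED_PCR = hashlib.sha1(b"measured late-launch environment").digest()

    def tpm_quote(pcr: bytes, nonce: bytes) -> bytes:
        return hmac.new(TPM_KEY, pcr + nonce, hashlib.sha256).digest()

    def controller_release(payload: bytes, pcr: bytes, nonce: bytes,
                           quote: bytes):
        if pcr == EXPECTED_PCR and hmac.compare_digest(quote,
                                                       tpm_quote(pcr, nonce)):
            return payload    # in the real attack this is also encrypted
        return None           # unknown environment: keep the code hidden

    nonce = b"fresh nonce"
    print(controller_release(b"policy code", EXPECTED_PCR, nonce,
                             tpm_quote(EXPECTED_PCR, nonce)) is not None)
    sandbox_pcr = hashlib.sha1(b"analyst sandbox").digest()
    print(controller_release(b"policy code", sandbox_pcr, nonce,
                             tpm_quote(sandbox_pcr, nonce)))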
Understanding - and defeating - malware often depends on the ability to
observe it in action and reverse engineer its decision making. If it
proves impossible to observe malware in operation or to run it in a
virtualized mode, that malware will be harder to stop. The attack is not
easy, but experience has shown that the world does not lack for capable,
motivated, and well-funded attackers who might just take up the challenge.
That would not bode well for the future security of the net as a whole.
Needless to say, the protection of botnets seems counter to the objectives
that led to the creation of the TPM in the first place. It has always been
clear that technology imposed in the name of "security" has the potential
to cost us control over our own systems. Now it seems that technology
could even hand control over to overtly hostile organizations. That does
not seem like a more secure situation, somehow.