Security
Integrity and embedded devices
David Safford's talk for the 2013 Linux Security Summit was in two parts—with two separate sets of slides. That's because the US Department of Homeland Security (DHS), which sponsored IBM's work on hardware roots of trust for embedded devices—part one of the talk—was quite clear that it didn't want to be associated with any kind of device cracking. So part two, which concerned circumventing "verified boot" on a Samsung ARM Chromebook, had to be a completely separate talk. The DHS's misgivings notwithstanding, the two topics are clearly related; understanding both leads to a clearer picture of the security of our devices.
The DHS is interested in what can be done to verify the integrity of the code that is running on all types of systems. For servers, desktops, and mobile devices, there are a variety of existing solutions: crypto hardware, secure boot, Trusted Platform Module (TPM) hardware, and so on. But for low-cost embedded devices like home routers, there is no support for integrity checking. The DHS is also interested in integrity checking for even lower cost sensors, Safford said, but those don't run Linux so they weren't part of his investigations.
Because routers and the like are such low-cost and low-margin systems, the researchers focused on what could be done for "zero cost" to the manufacturer. The team looked at four devices: the venerable Linksys WRT54G router (which spawned projects like OpenWrt and DD-WRT), the Pogoplug local network attached storage (NAS) cache, and two other router devices, the TP-Link MR3020 and D-Link DIR-505. All of those typically retail for $50 or less.
The MR3020, which was the focus for much of the talk, retails for $30. It has very limited space in its 4M flash chip, so the challenge to adding integrity features is "not so much technical" as it is in "squeezing things in" to the flash. For example, the MR3020 only has 64K of space for U-Boot, 1M for the kernel, and 2.8M for the root filesystem.
Safford gave a handful of examples of router vulnerabilities over the last few years. Beyond just theoretical examples, he noted that 4.5 million home routers were actually compromised in Brazil in 2011. While the vulnerabilities he listed were at the higher levels (typically the web interface), they do show that these embedded devices are most certainly targets.
So, without increasing the cost of these devices, what can be done to ensure that the firmware is what it is expected to be? In the supply chain, the wrong firmware could be added when the system is built or changed somewhere along the way. Safford said that IBM had some (unnamed) customers who had run into just this kind of problem.
So there needs to be a way to "measure" the firmware's integrity in hardware and then to lock the firmware down so that rootkits or other malware cannot modify it. In addition, these devices typically do not have support for signed updates, so malicious update files can be distributed and installed. There is also no ability for the system to refuse to boot if the code has been changed (i.e. secure or trusted boot).
Providing those capabilities was the goal for the project, he said. He showed a table (also present in his slides [PDF] and associated paper [PDF]) outlining the abilities of each of the devices in four separate integrity categories: "Measure BIOS?", "Lock BIOS?", "Secure local updates?", and "Secure Boot?". All of the boxes were "No", except that both the Pogoplug and WRT54G had a way to measure—verify—the firmware (by reading it using SATA via an immutable boot ROM and JTAG, respectively). By the end of the talk, those boxes had all been changed to "Yes" by the changes Safford and his team had made.
The traditional approaches for integrity revolve around either attestation (e.g. trusted boot) or a trusted chain of signed code as in UEFI secure boot. Attestation means that a system uses a TPM to measure everything read and executed, then sends that information to a trusted system for verification before being allowed to continue. There are several National Institute of Standards and Technology (NIST) standards that govern parts of the integrity puzzle, including trusted boot, but there are none, at least yet, that govern secure boot. Safford is working with NIST to get that process started.
Since a TPM chip is "expensive" ($0.75), it violates the zero-cost constraint. But in order to verify the firmware, it must be read somehow. The firmware itself cannot be involved in that step as it may lie about its contents to avoid malware detection. The Serial Peripheral Interface (SPI) bus provides a mechanism to read the contents of the flash for devices lacking other means (e.g. JTAG). That bus can be shared if it has proper buffering, but both the MR3020 and DIR-505 lack the resistors needed.
![[Bus Pirate on MR3020]](https://static.lwn.net/images/2013/lss-buspirate-sm.jpg)
Enter the Bus Pirate—a device that can be used to read the SPI bus. Using it requires adding three buffering resistors to the Atheros System-on-Chip (SoC) used by the devices, but that adds less than $0.01 to the cost of the device, which is close enough to zero cost that device makers can probably be convinced. That means that users (or device makers) can verify the contents of the flash fairly inexpensively (a Bus Pirate costs around $30).
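Once a dump of the flash has been read out over the SPI bus (with a Bus Pirate, for instance), the "measurement" itself reduces to a hash comparison against a known-good image. A minimal sketch in Python, assuming a dump file and a golden hash are already in hand (the filenames and hash value would come from the user; they are placeholders here):

```python
import hashlib

def measure_firmware(dump_path, golden_sha256):
    """Hash a raw SPI flash dump and compare it to a known-good value."""
    h = hashlib.sha256()
    with open(dump_path, "rb") as f:
        # Read in chunks; a 4M flash image easily fits in memory,
        # but this pattern also works for larger dumps.
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest() == golden_sha256
```

On a real device the dump itself might be produced by a tool such as flashrom with its Bus Pirate programmer (e.g. `flashrom -p buspirate_spi:dev=/dev/ttyUSB0 -r dump.bin`), though the exact invocation depends on the setup.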
Once the contents of the flash are verified, there needs to be a way to lock it down so that it can only be modified by those verified to be physically present. The SPI flash chips used by all of the devices have a status register that governs which addresses in the flash can be written, along with an overall write-disable bit. That register can be locked from any updates by holding the chip's write-protect (!WP) pin low. Physical presence can be proved by holding down a button at boot to drive !WP high.
Safford showed the modifications made to the MR3020 and DIR-505 to support the physical presence test. The WPS (Wireless Protected Setup) button was repurposed on the MR3020, while an unused sliding switch position was used on the DIR-505. The paper indicates that similar changes were made on the other two devices. Both the slides and paper have pictures of the modifications made to the devices. In addition, U-Boot was modified so that it locks the entire flash on each boot, but if !WP is held high when power is applied, U-Boot will unlock the flash.
Adding secure boot support to U-Boot was the next step. Once the root of trust is extended into the kernel, the kernel's integrity subsystem can take over to handle integrity verification from there. So it is a matter of verifying the kernel itself. The modified U-Boot will use a public key that is stored at the end of its partition to verify the signature of the kernel. That signature is stored at the end of the kernel partition.
As mentioned earlier, the trick is in getting that code (and key) to fit into the 64K U-Boot partition. Using code derived from PolarSSL, with everything unneeded removed, the modified U-Boot weighed in at 62K. Though Safford was never very specific, the U-Boot modifications must also provide RSA signature checking for updates to the firmware. Support for signed updates is one of the integrity requirements that were successfully tackled by the project.
Through some effectively zero-cost modifications, and some changes to the firmware, the team was able to achieve its integrity goals. All of the devices now support all four of the integrity requirements they set out to fulfill.
Breaking verified boot
Moving on to the DHS-unapproved portion of the talk, Safford showed how one can take control of a Samsung ARM Chromebook. The work on that was done in his spare time, he said, but many of the tools used for adding secure boot for embedded devices are the same as those for removing and altering a system with secure boot. The Chromebook is a "very secure" system, but the verified boot (VB) mechanism does not allow users to take control of the boot process.
However, a fairly simple hardware modification (removing a washer to change the !WP signal) will allow the owner to take control of the device, as Safford found. Beyond the hardware change, it also requires some scripts and instructions [tar.gz] that Safford wrote. Unlike the embedded devices described above, there is a full 4M flash just for U-Boot on the Chromebook, so there is "an embarrassment of riches" for adding code on those systems. VB has been added to the U-Boot upstream code, incidentally, but it is way too large (700K) for use in routers, he said.
In the normal VB operation, there is no way to write to the upper half of the SPI flash, which contains a copy of U-Boot and Google's root key. That key is used to verify two keys (firmware and kernel) stored in the read-write half of the SPI flash. The firmware key is used to verify another copy of U-Boot that lives in the modifiable portion of the flash. That U-Boot is responsible for verifying the kernel (which actually lives in a separate MMC flash) before booting it.
Holding down ESC and "refresh" while powering on the system will boot whatever kernel is installed, without checking the signatures. That is the "developer mode" for the system, but it circumvents secure boot, which is not what Safford set out to do. He wants to use secure boot but control the keys himself. In addition, developer mode must be enabled each time the system boots and you get a "scary screen" that says "OS verification is OFF".
A less scary approach is to use a non-verifying U-Boot that gets installed in place of the kernel. That U-Boot is signed, but does no verification of the kernel (installed in a different part of the MMC flash) before booting it. That way you don't have to invoke developer mode, nor do you get the scary screen, but you still don't get secure boot.
![[ARM Chromebook washer]](https://static.lwn.net/images/2013/lss-washer-sm.jpg)
Removing the washer is the way forward as it allows read-write access to the entire SPI flash. Once that is done, Safford has a set of scripts that can be run from a developer-mode kernel to create new key pairs, sign the read-write U-Boot, sign the kernel, and verify all of the signatures. If any of that goes wrong, one may end up at the "Chrome OS is missing or damaged" screen, which actually means the device hasn't been "bricked" and can be restored from a USB device. Even in the event of bricking, one can recover the device using Bus Pirate as the SPI flash is properly buffered, he said (seemingly from a fair amount of experience).
As part of his demo, he wanted an easy way to show that he had gained control of the low-level boot code in the SPI flash. He decided to change the "chrome" text in the upper left of the "scary screen" to "DaveOS", which actually turned out to be one of the harder ways to demonstrate it. Because of the format of the logo and where it was stored in the flash, it turned out to be rather painful to change, he said with a chuckle.
As Kees Cook pointed out, the washer removal trick was a deliberate choice in the design of the system. Google and Samsung wanted people to be able to take control of the keys for the device, but didn't want an attacker to be able to do so quickly while the user's attention was momentarily distracted. Safford agreed that it was a reasonable compromise, but that it is important for users to be able to set their own keys.
The slides [PDF] for the second half of the talk are instructive as well, with a number of pictures of the infamous washer, scary and DaveOS screens, the Bus Pirate in action, and so on. Seeing the problem from both angles, adding and subtracting secure boot functionality, was useful to help better understand integrity verification. Techniques like secure boot can certainly be used in user-unfriendly ways to lock down devices, but they can also provide some amount of peace of mind. As long as users can provide their own keys, or disable the feature entirely, secure boot is likely to be a boon for many.
[I would like to thank LWN subscribers for travel assistance to New Orleans for LSS.]
Brief items
Security quotes of the week
It means the cert is probably accurate, or about as accurate as you can possibly get, without going over to the server certing it yourself. If those three parties are conspiring to disrupt your Amazon order, then I'm afraid you're not going to get your package, no matter what crypto you use. :-)
New vulnerabilities
chicken: code execution
Package(s): chicken
CVE #(s): CVE-2013-4385
Created: September 30, 2013
Updated: February 10, 2014
Description: From the Red Hat bugzilla:
Chicken, a compiler for the Scheme programming language, has a buffer-overrun flaw in the read-string! procedure from the "extras" unit when it is used in a particular way. A missing check for the case where NUM is #f (the Scheme value for false) means the buffer's size is not used as a bound, so the procedure reads beyond the buffer until the input port is exhausted. This may result in a DoS or remote code execution. Though all current stable releases are vulnerable to this flaw, there is a simple workaround for code that uses read-string!: convert all (read-string! #f buf ...) invocations to (read-string! (string-length buf) buf ...) or, if possible, use the non-destructive read-string procedure from the same unit.
Alerts:
davfs2: privilege escalation
Package(s): davfs2
CVE #(s): CVE-2013-4362
Created: September 27, 2013
Updated: December 2, 2016
Description: From the Debian advisory:
Davfs2, a filesystem client for WebDAV, calls the function system() insecurely while it is setuid root. This might allow a privilege escalation.
Alerts:
firefox: denial of service
Package(s): firefox
CVE #(s): CVE-2013-1723
Created: September 27, 2013
Updated: October 2, 2013
Description: From the CVE entry:
The NativeKey widget in Mozilla Firefox before 24.0, Thunderbird before 24.0, and SeaMonkey before 2.21 processes key messages after destruction by a dispatched event listener, which allows remote attackers to cause a denial of service (application crash) by leveraging incorrect event usage after widget-memory reallocation.
Alerts:
glibc: multiple vulnerabilities
Package(s): glibc
CVE #(s): CVE-2013-4788 CVE-2013-4332
Created: September 30, 2013
Updated: October 11, 2013
Description: From the OpenWall advisories [1; 2]:
I recently discovered three integer overflow issues in the glibc memory allocator functions pvalloc, valloc, and posix_memalign/memalign/aligned_alloc. These issues cause a large allocation size to wrap around, resulting in a wrong-sized allocation and heap corruption. (CVE-2013-4332)
This bug was discovered in March 2013 while we were developing the RAF SSP technique. The glibc bug makes it easy to take advantage of common errors such as buffer overflows, allowing, in these cases, redirection of the execution flow and potential execution of arbitrary code. All statically linked applications compiled with glibc and eglibc are affected, independent of the operating system distribution. Note that this problem is not solved by only patching eglibc; it is also necessary to recompile all static executables. As far as I know, there are a lot of routers, embedded systems, and so on that use statically linked applications. Since the bug dates from the beginning of the PTR_MANGLE implementation (2005-2006), there are a ton of vulnerable devices. (CVE-2013-4788)
Alerts:
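The pvalloc case is plain modular arithmetic: rounding a near-SIZE_MAX request up to a page boundary wraps around to a tiny value, so the allocator returns a much smaller buffer than was requested. The wraparound can be reproduced in Python with C's 64-bit truncation made explicit (a 4096-byte page size is assumed):

```python
PAGE = 4096
MASK = 2**64 - 1  # truncate to 64 bits, as C size_t arithmetic would

def pvalloc_rounded_size(req):
    """Round a request up to a page boundary with 64-bit wraparound."""
    return (req + PAGE - 1) & ~(PAGE - 1) & MASK
```

A request of `2**64 - 1` bytes rounds to zero, which is how a huge request can turn into a heap-corrupting tiny allocation.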
gpg2: information disclosure
Package(s): gpg2
CVE #(s): CVE-2013-4351
Created: September 27, 2013
Updated: November 13, 2013
Description: From the openSUSE bug report:
RFC 4880 permits OpenPGP keyholders to mark their primary keys and subkeys with a "key flags" packet that indicates the capabilities of the key. These are represented as a set of binary flags, including things like "This key may be used to encrypt communications." If a key or subkey has this "key flags" subpacket attached with all bits cleared (off), GnuPG currently treats the key as having all bits set (on). While keys with this sort of marker are very rare in the wild, GnuPG's misinterpretation of this subpacket could lead to a breach of confidentiality or a mistaken identity verification.
Alerts:
kernel: denial of service
Package(s): kernel
CVE #(s): CVE-2013-4205
Created: September 27, 2013
Updated: October 2, 2013
Description: From the Ubuntu advisory:
A memory leak was discovered in the user namespace facility of the Linux kernel. A local user could cause a denial of service (memory consumption) via the CLONE_NEWUSER unshare call.
Alerts:
kernel: information leak
Package(s): linux-2.6
CVE #(s): CVE-2013-2239
Created: September 30, 2013
Updated: October 2, 2013
Description: From the Debian advisory:
Jonathan Salwan discovered multiple memory leaks in the openvz kernel flavor. Local users could gain access to sensitive kernel memory.
Alerts:
kernel: off by one error
Package(s): kernel
CVE #(s): CVE-2013-4345
Created: October 1, 2013
Updated: October 23, 2013
Description: From the Red Hat bugzilla:
A flaw was found in the way the ANSI CPRNG implementation in the Linux kernel processed non-block-size-aligned requests. If several small requests are made that are less than the instance's block size, the remainder-handling loop does not increment rand_data_valid in the last iteration, meaning that the last bytes in the rand_data buffer get reused on the subsequent smaller-than-a-block request for random data.
Alerts:
monkeyd: multiple vulnerabilities
Package(s): monkeyd
CVE #(s): CVE-2013-2163 CVE-2013-3724 CVE-2013-3843
Created: September 26, 2013
Updated: October 2, 2013
Description: From the CVE entry:
The mk_request_header_process function in mk_request.c in Monkey 1.1.1 allows remote attackers to cause a denial of service (thread crash and service outage) via a '\0' character in an HTTP request. (CVE-2013-3724)
From the Gentoo bug report:
The vulnerability is caused by a signedness error in the "mk_request_header_process()" function (src/mk_request.c) when parsing the request, and can be exploited to cause a stack-based buffer overflow. (CVE-2013-3843)
From the monkeyd bug report:
The ranges parser did not properly validate the maximum offset allowed, so if a requester set the limit offset equal to the file size, processing continued; internally, sendfile(2) did not fail, always returning zero. This condition was not handled, so those connections keep running without ending, which could lead to a DoS. (CVE-2013-2163)
Alerts:
mozilla: privilege escalation
Package(s): firefox, thunderbird, seamonkey
CVE #(s): CVE-2013-1726
Created: September 30, 2013
Updated: October 2, 2013
Description: From the CVE entry:
Mozilla Updater in Mozilla Firefox before 24.0, Firefox ESR 17.x before 17.0.9, Thunderbird before 24.0, Thunderbird ESR 17.x before 17.0.9, and SeaMonkey before 2.21 does not ensure exclusive access to a MAR file, which allows local users to gain privileges by creating a Trojan horse file after MAR signature verification but before MAR use.
Alerts:
MRG Grid: denial of service
Package(s): MRG Grid
CVE #(s): CVE-2013-4284
Created: October 2, 2013
Updated: October 2, 2013
Description: From the Red Hat advisory:
A denial of service flaw was found in the way cumin, a web management console for MRG, processed certain Ajax update queries. A remote attacker could use this flaw to issue a specially crafted HTTP request, causing excessive use of CPU time and memory on the system.
Alerts:
nas: multiple vulnerabilities
Package(s): nas
CVE #(s): CVE-2013-4256 CVE-2013-4257 CVE-2013-4258
Created: September 27, 2013
Updated: June 26, 2014
Description: From the Fedora advisory:
CVE-2013-4258: format string flaw in a syslog call
CVE-2013-4256: flaw in parsing the display number
CVE-2013-4257: heap overflow when processing the AUDIOHOST variable
Alerts:
openstack-keystone: incorrect token revocation
Package(s): openstack-keystone
CVE #(s): CVE-2013-4294
Created: September 26, 2013
Updated: November 8, 2013
Description: From the Red Hat advisory:
It was found that Keystone did not correctly handle revoked PKI tokens, allowing users with revoked tokens to retain access to resources they should no longer be able to access. This issue only affected systems using PKI tokens with the memcache or KVS token back ends.
Alerts:
squid: information disclosure
Package(s): squid
CVE #(s): CVE-2009-0801
Created: September 27, 2013
Updated: October 2, 2013
Description: From the CVE entry:
Squid, when transparent interception mode is enabled, uses the HTTP Host header to determine the remote endpoint, which allows remote attackers to bypass access controls for Flash, Java, Silverlight, and probably other technologies, and possibly communicate with restricted intranet sites, via a crafted web page that causes a client to send HTTP requests with a modified Host header.
Alerts:
sudo: privilege escalation
Package(s): sudo
CVE #(s): CVE-2013-2776
Created: October 1, 2013
Updated: October 2, 2013
Description: From the CVE entry:
sudo 1.3.5 through 1.7.10p5 and 1.8.0 through 1.8.6p6, when running on systems without /proc or the sysctl function with the tty_tickets option enabled, does not properly validate the controlling terminal device, which allows local users with sudo permissions to hijack the authorization of another terminal via vectors related to connecting to the standard input, output, and error file descriptors of another terminal. NOTE: this is one of three closely-related vulnerabilities that were originally assigned CVE-2013-1776, but they have been SPLIT because of different affected versions.
Alerts:
tpp: code execution
Package(s): tpp
CVE #(s): CVE-2013-2208
Created: September 26, 2013
Updated: February 12, 2014
Description: From the Gentoo advisory:
TPP templates may contain a --exec clause, the contents of which are automatically executed without confirmation from the user. A remote attacker could entice a user to open a specially crafted file using TPP, possibly resulting in execution of arbitrary code with the privileges of the user.
Alerts:
txt2man: file overwrite
Package(s): txt2man
CVE #(s): CVE-2013-1444
Created: October 1, 2013
Updated: October 2, 2013
Description: From the CVE entry:
A certain Debian patch for txt2man 1.5.5, as used in txt2man 1.5.5-2, 1.5.5-4, and others, allows local users to overwrite arbitrary files via a symlink attack on /tmp/2222.
Alerts:
vino: denial of service
Package(s): vino
CVE #(s): CVE-2013-5745
Created: October 1, 2013
Updated: November 7, 2013
Description: From the Ubuntu advisory:
Jonathan Claudius discovered that Vino incorrectly handled closing invalid connections. A remote attacker could use this issue to cause Vino to consume resources, resulting in a denial of service.
Alerts:
zabbix: man-in-the-middle attacks
Package(s): zabbix
CVE #(s): CVE-2012-6086
Created: September 30, 2013
Updated: October 15, 2013
Description: From the Red Hat bugzilla:
A security flaw was found in the way Zabbix, an open-source monitoring solution for IT infrastructure, used (lib)cURL's CURLOPT_SSL_VERIFYHOST option when doing certificate validation (a value of '1', meaning only check for the existence of a common name, was used instead of the value '2', which also checks that the common name matches the requested hostname of the server). A rogue service could use this flaw to conduct man-in-the-middle (MiTM) attacks.
Alerts:
Page editor: Jake Edge