Security
Practical security for 2014
2013 was an interesting year for security, said Matthew Garrett in his linux.conf.au 2014 keynote talk; 2014, he added, may prove more interesting yet. Securing our systems in ways that preserve freedom is a challenge, but one that we must accept if we are to have either security or freedom. Along the way, we'll have to ask some hard questions of those who provide us with our computing services.
What happened in 2013
One of the things that made 2013 interesting, Matthew said, was the deployment of UEFI secure boot. At this point, if you buy a computer — from anybody but Apple — it will ship with secure boot enabled by default; this is a huge change for the PC industry. Naturally, since secure boot is implemented in firmware, several of those implementations were shown to be vulnerable within months. Most of those problems have been fixed, though Matthew stopped short of saying that all systems were shipping with the fixed firmware.
Many of these problems can be explained by the fact that we're dealing with firmware authors, but there is more to it than that: a system's firmware has not traditionally been part of its security model. Suddenly the firmware has been put into an important position of trust, despite the fact that it was not written with that kind of security in mind. There is a lesson here: always think about security when writing code, even if it is just a shell script intended for personal use. One never knows which code will suddenly become security-critical in the future.
Another bit of big news from 2013, of course, was the flood of revelations resulting from the Snowden leaks. While many in the security community had long thought about the kinds of attacks that might be happening, we now have confirmation that theoretically possible attacks actually are happening. Governments, it seems, really are engaged in advanced technical attacks on their own populations — all for the purpose of increasing national security, of course. Many had known all this was possible, but nearly everyone was surprised by the extent of what is happening.
Finally, the defacement of the OpenSSL web site in 2013 raised a number of eyebrows. It was originally believed to be the result of a VMware hypervisor vulnerability, something which would raise concerns about the security of vast numbers of cloud hosting providers. In truth, it was instead the result of easily guessable credentials for the hypervisor, Matthew said, which was not as bad. But it got people thinking about the kinds of things that could happen.
Evaluating the threats
In any setting, people who are concerned about security have to start by asking themselves who they are trying to defend themselves against. The US National Security Agency (NSA) has become a significant factor in this kind of discussion, naturally. The problem is that we still don't really know what their capabilities are; we have only seen a subset of what they can do. What we should do, Matthew said, is to assume the worst in this regard; that leads to the immediate conclusion that we should give up on computers and return to subsistence farming or similar activities.
What about hosting providers? The NSA revelations have shown that some companies are more than willing to hand data about their customers to government agencies. They have worked out established procedures for how to do this; we have to assume that it will happen. It is all for good causes, allegedly, but these hosting providers may have employees who are a little too focused on their own enrichment, and who may sell customer data to others as well. Unfortunately, Matthew said, we really don't know how to protect ourselves against such people.
There is also the ongoing threat of opportunistic attackers; the most likely scenario is that you will be attacked by somebody from another country who doesn't speak your language, but who does understand what credit card numbers look like and how to use them. Even if we cannot achieve perfect security, we can't give up entirely; imperfect security is better than none. It's worth protecting credit cards, even if we remain defenseless against national governments.
Returning to the NSA: it is easy to assume the worst: that they can control systems at the firmware level and extract data from systems in an undetectable manner. But, Matthew said, the leaked materials don't support that view. Instead, they show an extensive series of exploits against specific models of specific devices; there aren't even exploits that work against full vendor product lines. Some of these exploits require the installation of additional hardware. And, in general, they are aimed at the products sold by top-tier vendors.
There is, in other words, no evidence that the entire stack has been subverted; there is no "generic attack" that works against everything. It is plausible that vendors are not actively cooperating with the NSA in the compromising of their products; if that were happening, one would expect that there would be more commonality between the various types of attacks. So it seems more likely that the NSA is taking advantage of bugs that it is finding in these products to develop its own attacks.
That said, some kinds of passive involvement seem likely. A government may, for example, order a large number of systems with the requirement that the source for the firmware be supplied. That source is then passed on to the relevant agencies for analysis.
Would it, Matthew asked, be in anybody's interest to develop a generic exploit? The United States depends heavily on its technology exports. If a generic exploit for US-made systems were ever to come out, it would wreck exports, causing huge economic and diplomatic damage. This possibility could easily be seen as too big a risk to take on. Any such exploit would have to be highly secure, and could almost never be used, lest it be revealed to the world. That would limit the effectiveness of such an exploit.
So, he said, worrying about intelligence agencies may not be the best use of time in the end. In the real world, most system compromises are still driven by profit or political reasons. What can we do to protect our users against that kind of attack?
Defending against real-world attacks
To protect our users, Matthew said, we have to protect the entire software chain. So, for example, boot-time software verification, as implemented by UEFI secure boot, is an absolute requirement. Operating systems are too big to be perfect; they will be compromised over time. But the worst case is a compromise that can become persistent. Verification of system software before booting it can protect against that possibility.
At the same time, though, user freedom is also vital. We can't find ourselves in a situation where somebody else has to approve your software before you can run it. We can't block users from building and installing their own kernels and other system components — including, ideally, the system's firmware. Two years ago there was a lot of concern about the secure boot mechanism. But, in the end, things did not go as badly as many had feared. Any system shipping with the Microsoft logo must allow users to replace the system's keys, though doing so is not always straightforward. There is not, unfortunately, a requirement that users must be able to replace the firmware as well.
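On a Linux system, the secure boot state can be read from the EFI variable filesystem. The sketch below (an illustration, not part of the talk) parses the `SecureBoot` variable as exposed by efivarfs, whose raw format is four attribute bytes followed by the variable's value; the GUID in the path is the standard EFI global-variable namespace.

```python
# Minimal sketch: determine whether UEFI secure boot is enabled by
# reading the SecureBoot EFI variable from efivarfs on Linux.

from pathlib import Path

SECUREBOOT_VAR = Path(
    "/sys/firmware/efi/efivars/"
    "SecureBoot-8be4df61-93ca-11d2-aa0d-00e098032b8c"
)

def parse_efivar(data: bytes) -> bool:
    """Parse raw efivarfs data: 4 attribute bytes, then the value.

    For SecureBoot the value is a single byte: 1 = enabled."""
    if len(data) < 5:
        raise ValueError("truncated EFI variable data")
    return data[4] == 1

def secure_boot_enabled() -> bool:
    if not SECUREBOOT_VAR.exists():
        # No such variable: legacy BIOS boot, or EFI variables unavailable.
        return False
    return parse_efivar(SECUREBOOT_VAR.read_bytes())
```

Replacing the platform keys themselves is firmware-specific; tools such as `efi-updatevar` or the shim's MOK mechanism are the usual starting points.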
The situation with Android devices, which are increasingly widespread, is not quite as good. Some of these systems allow replacement of the operating system, while others don't. But none of them allow the replacement of verification keys or the low-level firmware. So users, who are unable to boot personally signed software in a secure mode, must choose between freedom and security. We need to push vendors to move away from that model, he said. It is ironic that Microsoft is the only company that is not forcing this particular choice; in this case, Microsoft is the one that has done the right thing for user freedom.
Chromebooks have the same problem; the software can be replaced, but not in a secure mode. But at least they are not Apple systems, which provide no way to replace anything at all.
Trusting our systems
Moving on, he asked: how much can we trust the systems we are using for our computing? Might there be backdoors in our operating systems, for the use of security agencies or others? Matthew dryly noted that, given the level of security of most operating systems, there is no particular need for explicit backdoors. If teenagers working in their bedrooms can work out ways to gain root access, government agencies can probably do it too.
What about firmware-level backdoors? They may be unlikely, but it's hard to tell; in the absence of a demonstration, they are hard to find. Still, some opportunities to check do occasionally come along. Last year, he said, there was a leak of the source for AMI's BIOS on the site of motherboard maker Jetway. Why, he asked, has nobody audited it? It should be possible to build this firmware and see if it matches what is actually shipped, then look at the source and see what's there to be found. Matthew stated that, naturally, he has not looked at this code, since doing so would constitute copyright infringement. Thus, he said, there is no way that he could say whether anybody might be able to find several easily exploited vulnerabilities in that code.
The Intel Active Management Technologies controller is an area of possible concern. It has access to the system, and may be powered up even when the system as a whole is off. These controllers have been shown to be able to leak data out of systems in the past. One might also worry about CPU microcode updates; some of those updates may contain deliberately introduced vulnerabilities. There is not much to do about these threats other than to insist that vendors provide the sources for their firmware and microcode.
In general, he said, it is not easy to prove the security of a computing system. In the end, you simply cannot trust hardware; you can't prove that it has not been compromised. If you really want trust, he said, consider switching to working with sheep: get out of computers altogether and you'll be much happier afterward.
Cloud concerns
The discussion so far has all been about client computing. But everything is moving to the cloud now. In theory, that means that the attack surface is much smaller, since there is little of interest on client devices when everything is in the cloud. All we have to do is to trust the cloud to be secure. Matthew made it clear that he, personally, was not feeling that trust all that strongly. In general, he said, giving your data to somebody requires you to trust them not to lose or steal it. History shows that this might not be a particularly wise choice.
Running your own server requires that you trust all of the software you have on that server. But if you're running a virtual server, you have to trust somebody else's software too. In particular, you have to trust both the hypervisor and all of the software run by any other guests that might happen to be sharing the same hardware. And you have to trust that the cloud provider is taking security seriously. To try to assess how much a provider can be trusted, he said, one should ask a few questions:
- What technologies does the provider use to provide isolation between guests? Just running guests under a hypervisor is not enough, he said. At a minimum, providers should be using a security mechanism like SELinux or AppArmor to further confine guests in case they are able to break out of the hypervisor.
- How do they manage updates for hypervisor-related security issues? If the provider is not able to migrate running guests off a vulnerable system, patch it, and migrate the guests back, then they are asking the customer to trade off downtime against security.
- What mechanisms does the provider have to detect hypervisor compromises? This, he said, is a hard question, one that he has no answer for. But it is the kind of question that customers need to be asking.
- What is the provider's response to the possibility that some of its hardware has been compromised? Are they willing to throw away hardware that they are unsure of, or will they leave it in place and run customer systems on it anyway?
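The first question above can be checked from the host side: under libvirt's sVirt scheme, SELinux-confined guests run in the `svirt_t` domain, which is visible in the process context (e.g. via `ps -eZ`). A small illustrative parser, with hypothetical context strings and process names:

```python
# Flag qemu/kvm guest processes that are not confined by SELinux.
# Input lines have the form '<selinux context> <command>', as produced
# by something like `ps -eZ -o label,comm` on an SELinux host.

def unconfined_guests(ps_lines):
    """Return commands of qemu/kvm processes whose SELinux type
    is not svirt_t (i.e. not sVirt-confined)."""
    suspect = []
    for line in ps_lines:
        context, _, command = line.strip().partition(" ")
        command = command.strip()
        if "qemu" in command or "kvm" in command:
            # SELinux context: user:role:type:level[:categories]
            stype = context.split(":")[2] if context.count(":") >= 3 else ""
            if stype != "svirt_t":
                suspect.append(command)
    return suspect
```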
In general, cloud computing adds a number of potential security issues. Introspection of data on a bare-metal server is relatively hard for a provider to do; they would likely need to bring the system down, which tends to attract attention. In contrast, introspection of a system running under a hypervisor is trivially easy; cloud providers can thus do a great deal of damage in an undetectable manner. Whoever owns the hypervisor owns the guests, he said; anybody who is running systems on cloud providers must be aware that they are subjecting themselves to a wider range of potential attacks.
Achieving security
If we are going to build secure systems in 2014, he concluded, we have to be more aggressive about it at every layer of the stack. Verified boot is important, and similar mechanisms should be pushed up the stack, but it must be done in a way that is mindful of user freedom. Cloud providers have to be made to answer a number of hard questions; it is not acceptable to have no stated security policy at this point. In the end, security and freedom are inseparable from each other. We have to be prepared to give users both and not allow conversations to be about restricting freedom to provide more security.
[Your editor would like to thank linux.conf.au for funding his travel to Perth].
Brief items
Security quotes of the week
That probably still wouldn't work very well, but it would at least show willing to prevent overblocking. To do any less is to tacitly admit that the whole thing is a publicity stunt from a government that is totally depraved in its indifference to the consequences of unaccountable censorship.
Security and the "Internet of Things"
Two recent articles look at embedded devices and the "Internet of Things" with an eye toward the security problems that abound in that space. Bruce Schneier worries about updates, especially for devices like internet routers: "We have to put pressure on embedded system vendors to design their systems better. We need open-source driver software -- no more binary blobs! -- so third-party vendors and ISPs can provide security tools and software updates for as long as the device is in use. We need automatic update mechanisms to ensure they get installed." Peter Bright at ars technica is more focused on smart TVs, refrigerators, and cars, but sees the same basic problem: "
As such, there are only two ways in which smart devices make sense. Manufacturers either need to commit to a lifetime of updates, or the devices need to be very cheap so they can be replaced every couple years. If manufacturers won't commit to providing a lifetime of updates—and again, the experience with smartphones is, I think, instructive here—then these smart devices are a liability." Food for thought on a quiet Thursday.
FFmpeg and a thousand fixes (Google Online Security Blog)
Over on the Google Online Security Blog, Mateusz Jurczyk and Gynvael Coldwind describe the results of a few years of fuzzing FFmpeg, which is a cross-platform solution for handling audio and video. FFmpeg is used by numerous other projects including Google Chrome/Chromium, MPlayer, VLC, and xine. "We started relatively small by making use of trivial mutation algorithms, some 500 cores and input media samples gathered from readily available sources such as the samples.mplayerhq.hu sample base and FFmpeg FATE regression testing suite. Later on, we grew to more complex and effective mutation methods, 2000 cores and an input corpus supported by sample files improving the overall code coverage." Over 1000 bugs (including lots of security bugs) have been fixed in FFmpeg (and 400+ in Libav, which is a fork of FFmpeg).
New vulnerabilities
bind9: denial of service
Package(s): bind9
CVE #(s): CVE-2014-0591
Created: January 14, 2014
Updated: October 14, 2014
Description: From the Ubuntu advisory:

Jared Mauch discovered that Bind incorrectly handled certain queries for NSEC3-signed zones. A remote attacker could use this flaw with a specially crafted query to cause Bind to stop responding, resulting in a denial of service.
gnome-chemistry-utils: denial of service
Package(s): gnome-chemistry-utils
CVE #(s): CVE-2013-6836
Created: January 13, 2014
Updated: February 24, 2014
Description: From the CVE entry:

Heap-based buffer overflow in the ms_escher_get_data function in plugins/excel/ms-escher.c in GNOME Office Gnumeric before 1.12.9 allows remote attackers to cause a denial of service (crash) via a crafted xls file with a crafted length value.
graphviz: multiple vulnerabilities
Package(s): graphviz
CVE #(s): CVE-2014-0978 CVE-2014-1235 CVE-2014-1236
Created: January 14, 2014
Updated: February 13, 2017
Description: From the Debian advisory:

CVE-2014-0978: It was discovered that user-supplied input used in the yyerror() function in lib/cgraph/scan.l is not bounds-checked before being copied into an insufficiently sized memory buffer. A context-dependent attacker could supply a specially crafted input file containing a long line to cause a stack-based buffer overflow, resulting in a denial of service (application crash) or potentially allowing the execution of arbitrary code.

CVE-2014-1236: Sebastian Krahmer reported an overflow condition in the chkNum() function in lib/cgraph/scan.l that is triggered as the used regular expression accepts an arbitrarily long digit list. With a specially crafted input file, a context-dependent attacker can cause a stack-based buffer overflow, resulting in a denial of service (application crash) or potentially allowing the execution of arbitrary code.
java-1.7.0-openjdk: multiple vulnerabilities
Package(s): java-1.7.0-openjdk
CVE #(s): CVE-2013-5878 CVE-2013-5884 CVE-2013-5893 CVE-2013-5896 CVE-2013-5907 CVE-2013-5910 CVE-2014-0368 CVE-2014-0373 CVE-2014-0376 CVE-2014-0411 CVE-2014-0416 CVE-2014-0422 CVE-2014-0423 CVE-2014-0428
Created: January 15, 2014
Updated: May 29, 2014
Description: From the Red Hat advisory:

An input validation flaw was discovered in the font layout engine in the 2D component. A specially crafted font file could trigger Java Virtual Machine memory corruption when processed. An untrusted Java application or applet could possibly use this flaw to bypass Java sandbox restrictions. (CVE-2013-5907)

Multiple improper permission check issues were discovered in the CORBA, JNDI, and Libraries components in OpenJDK. An untrusted Java application or applet could use these flaws to bypass Java sandbox restrictions. (CVE-2014-0428, CVE-2014-0422, CVE-2013-5893)

Multiple improper permission check issues were discovered in the Serviceability, Security, CORBA, JAAS, JAXP, and Networking components in OpenJDK. An untrusted Java application or applet could use these flaws to bypass certain Java sandbox restrictions. (CVE-2014-0373, CVE-2013-5878, CVE-2013-5910, CVE-2013-5896, CVE-2013-5884, CVE-2014-0416, CVE-2014-0376, CVE-2014-0368)

It was discovered that the Beans component did not restrict processing of XML external entities. This flaw could cause a Java application using Beans to leak sensitive information, or affect application availability. (CVE-2014-0423)

It was discovered that the JSSE component could leak timing information during the TLS/SSL handshake. This could possibly lead to disclosure of information about the used encryption keys. (CVE-2014-0411)
kernel: information leak
Package(s): kernel
CVE #(s): CVE-2013-4579
Created: January 14, 2014
Updated: January 27, 2014
Description: From the CVE entry:

The ath9k_htc_set_bssid_mask function in drivers/net/wireless/ath/ath9k/htc_drv_main.c in the Linux kernel through 3.12 uses a BSSID masking approach to determine the set of MAC addresses on which a Wi-Fi device is listening, which allows remote attackers to discover the original MAC address after spoofing by sending a series of packets to MAC addresses with certain bit manipulations.
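The masking scheme behind this flaw is easy to illustrate (the helper names below are hypothetical). The mask has a 1 in every bit position where all of the device's configured MAC addresses agree; the hardware then accepts a frame when `(destination & mask) == (own_mac & mask)`. An attacker who probes addresses with individual bits flipped can observe which positions the device ignores, and thereby recover bits of the original, pre-spoofing address:

```python
# Sketch of BSSID-mask computation as used by ath9k-style hardware
# filtering, and of the acceptance test an attacker can probe.

from functools import reduce

def mac_to_int(mac: str) -> int:
    return int(mac.replace(":", ""), 16)

def bssid_mask(macs) -> int:
    """48-bit mask with bits set where every configured MAC agrees."""
    ints = [mac_to_int(m) for m in macs]
    differing = reduce(lambda acc, m: acc | (m ^ ints[0]), ints, 0)
    return (~differing) & ((1 << 48) - 1)

def accepted(dest: str, own: str, mask: int) -> bool:
    """Would the hardware accept a frame sent to `dest`?"""
    return (mac_to_int(dest) & mask) == (mac_to_int(own) & mask)
```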
kernel: multiple vulnerabilities
Package(s): kernel
CVE #(s): CVE-2013-7266 CVE-2013-7267 CVE-2013-7268 CVE-2013-7269 CVE-2013-7270 CVE-2013-7271 CVE-2013-7263 CVE-2013-7264 CVE-2013-7265 CVE-2013-7281
Created: January 13, 2014
Updated: March 28, 2014
Description: From the CVE entries:

The mISDN_sock_recvmsg function in drivers/isdn/mISDN/socket.c in the Linux kernel before 3.12.4 does not ensure that a certain length value is consistent with the size of an associated data structure, which allows local users to obtain sensitive information from kernel memory via a (1) recvfrom, (2) recvmmsg, or (3) recvmsg system call. (CVE-2013-7266)

The atalk_recvmsg function in net/appletalk/ddp.c in the Linux kernel before 3.12.4 updates a certain length value without ensuring that an associated data structure has been initialized, which allows local users to obtain sensitive information from kernel memory via a (1) recvfrom, (2) recvmmsg, or (3) recvmsg system call. (CVE-2013-7267)

The ipx_recvmsg function in net/ipx/af_ipx.c in the Linux kernel before 3.12.4 updates a certain length value without ensuring that an associated data structure has been initialized, which allows local users to obtain sensitive information from kernel memory via a (1) recvfrom, (2) recvmmsg, or (3) recvmsg system call. (CVE-2013-7268)

The nr_recvmsg function in net/netrom/af_netrom.c in the Linux kernel before 3.12.4 updates a certain length value without ensuring that an associated data structure has been initialized, which allows local users to obtain sensitive information from kernel memory via a (1) recvfrom, (2) recvmmsg, or (3) recvmsg system call. (CVE-2013-7269)

The packet_recvmsg function in net/packet/af_packet.c in the Linux kernel before 3.12.4 updates a certain length value before ensuring that an associated data structure has been initialized, which allows local users to obtain sensitive information from kernel memory via a (1) recvfrom, (2) recvmmsg, or (3) recvmsg system call. (CVE-2013-7270)

The x25_recvmsg function in net/x25/af_x25.c in the Linux kernel before 3.12.4 updates a certain length value without ensuring that an associated data structure has been initialized, which allows local users to obtain sensitive information from kernel memory via a (1) recvfrom, (2) recvmmsg, or (3) recvmsg system call. (CVE-2013-7271)

The Linux kernel before 3.12.4 updates certain length values before ensuring that associated data structures have been initialized, which allows local users to obtain sensitive information from kernel stack memory via a (1) recvfrom, (2) recvmmsg, or (3) recvmsg system call, related to net/ipv4/ping.c, net/ipv4/raw.c, net/ipv4/udp.c, net/ipv6/raw.c, and net/ipv6/udp.c. (CVE-2013-7263)

The l2tp_ip_recvmsg function in net/l2tp/l2tp_ip.c in the Linux kernel before 3.12.4 updates a certain length value before ensuring that an associated data structure has been initialized, which allows local users to obtain sensitive information from kernel stack memory via a (1) recvfrom, (2) recvmmsg, or (3) recvmsg system call. (CVE-2013-7264)

The pn_recvmsg function in net/phonet/datagram.c in the Linux kernel before 3.12.4 updates a certain length value before ensuring that an associated data structure has been initialized, which allows local users to obtain sensitive information from kernel stack memory via a (1) recvfrom, (2) recvmmsg, or (3) recvmsg system call. (CVE-2013-7265)

The dgram_recvmsg function in net/ieee802154/dgram.c in the Linux kernel before 3.12.4 updates a certain length value without ensuring that an associated data structure has been initialized, which allows local users to obtain sensitive information from kernel stack memory via a (1) recvfrom, (2) recvmmsg, or (3) recvmsg system call. (CVE-2013-7281)
libspring-java: denial of service
Package(s): libspring-java
CVE #(s): CVE-2013-4152
Created: January 13, 2014
Updated: March 4, 2014
Description: From the Debian advisory:

Alvaro Munoz discovered an XML External Entity (XXE) injection in the Spring Framework which can be used for conducting CSRF and DoS attacks on other sites.
lightdm-gtk-greeter: denial of service
Package(s): lightdm-gtk-greeter
CVE #(s): CVE-2014-0979
Created: January 15, 2014
Updated: February 12, 2014
Description: From the bug report:

lightdm/X crashes on simply hitting ENTER (without supplying a username)
movabletype-opensource: cross-site scripting
Package(s): movabletype-opensource
CVE #(s): CVE-2014-0977
Created: January 13, 2014
Updated: January 15, 2014
Description: From the Debian advisory:

A cross-site scripting vulnerability was discovered in the rich text editor of the Movable Type blogging engine.
python-libcloud: information leak
Package(s): python-libcloud
CVE #(s): CVE-2013-6480
Created: January 13, 2014
Updated: February 7, 2014
Description: From the Red Hat bugzilla:

DigitalOcean recently changed the default API behavior from scrub to non-scrub when destroying a VM. Libcloud doesn't explicitly send the "scrub_data" query parameter when destroying a node. This means nodes which are destroyed using Libcloud are vulnerable to later customers stealing data contained on them. Only users of the DigitalOcean driver are known to be affected by this issue. The issue is said to be fixed in version 0.13.3.
Page editor: Jake Edge