
Leading items

Toward healthy paranoia

By Jonathan Corbet
September 11, 2013
The mainstream news has been dominated in recent months by the revelation of the scope of the surveillance carried out by the US National Security Agency (NSA). This activity has troubling implications that cover just about every aspect of modern life. But discussion of the implications for free software has been relatively muted. Perhaps it is time to start thinking about that aspect of the situation in a more direct way. We live in a time of interesting challenges, but also interesting opportunities.

Some of the recent leaks have made it clear that the NSA has worked actively to insert weaknesses into both cryptographic standards and products sold by vendors. There is, for example, some evidence that the NSA has inserted weaknesses into some random-number generation standards, to the point that the US National Institute of Standards and Technology has felt the need to reopen the public comment period for the SP 800-90A/B/C random-number standards, in which there is now little confidence. While no compromised commercial products have yet been named, it seems increasingly clear that such products must exist.

It is tempting to believe that the inherent protections that come with free software — open development processes and code review — can protect us from this kind of attack. And to an extent, that must be true. But it behooves us to remember just how extensively free software is used in almost every setting from deeply embedded systems to network routers to supercomputers. How can such a software system not be a target for those bent on increasing the surveillance state? Given the resources available to those who would compromise our systems, how good are our defenses?

In that context, this warning from Poul-Henning Kamp is worth reading:

Open source projects are built on trust, and these days they are barely conscious of national borders and largely unaffected by any real-world politics, be it trade wars or merely cultural differences. But that doesn't mean that real-world politics are not acutely aware of open source projects and the potential advantage they can give in the secret world of spycraft.

To an intelligence agency, a well-thought-out weakness can easily be worth a cover identity and five years of salary to a top-notch programmer. Anybody who puts in five good years on an open source project can get away with inserting a patch that "on further inspection might not be optimal."

Given the potential payoff from the insertion of a vulnerability into a widely used free software project, it seems inevitable that attempts have been made to do just that. And, it should be noted, the NSA is far from the only agency that would have an interest in compromising free software. There is no shortage of well-funded intelligence agencies worldwide, many of which operate with even less oversight than the NSA does. Even if the NSA has never caused the submission of a suspect patch to a free software project, some other agency almost certainly has.

Some concerns about this kind of compromise have already been expressed; see, for example, the various discussions about the use of Intel's RDRAND instruction to add entropy to the kernel's pool of random data (and Linus Torvalds responding to those concerns in typical Linus style). A lengthy Google+ discussion on random-number generation is also worth reading; along with a lot of detail on how that process works, it covers other concerns, such as whether the NSA has forced companies like Red Hat to put backdoors into their Linux distributions. As people think through the implications of all that has been going on, expect a lot more questions to be raised about the security of our software.
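
Linus's rebuttal rests on a simple property: the kernel never uses RDRAND as a sole source; it only mixes the instruction's output into data derived from the rest of the entropy pool. Since XOR-combining two independent values yields a result at least as unpredictable as the less predictable of the two, a backdoored RDRAND cannot make the output worse. Here is a minimal C sketch of that principle, not the kernel's actual code: pool_extract() is a hypothetical stand-in for extraction from the software entropy pool, and the example assumes an x86 processor and compiler with RDRAND support.

    #include <stdint.h>
    #include <immintrin.h>   /* _rdrand64_step(); build with -mrdrnd */

    /* Hypothetical stand-in for bytes extracted from the software entropy pool. */
    extern uint64_t pool_extract(void);

    uint64_t get_random_u64(void)
    {
        uint64_t val = pool_extract();
        unsigned long long hw;

        /*
         * Fold the hardware RNG output into the pool-derived value with
         * XOR.  If either input is unpredictable to an attacker, so is
         * the combination; a compromised RDRAND can only improve, never
         * degrade, the result.
         */
        if (_rdrand64_step(&hw))     /* returns 1 on success */
            val ^= hw;

        return val;
    }

The XOR is the crux: for the hardware bits to bias the final value, an attacker would also have to predict everything else that went into the pool.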

Predicting an increased level of concern about security is easy; figuring out how to respond is rather harder. Perhaps the best advice comes from The Hitchhiker's Guide to the Galaxy: don't panic. Beyond anything else, we need to resist any temptation to engage in witch hunts. While it is entirely possible that somebody — perhaps even a trusted community figure — has deliberately inserted a vulnerability into a free software project, the simple truth remains that most bugs are simply bugs. If developers start to come under suspicion for having made a mistake, we could find ourselves driving some of our best contributors out of the community, leaving us weaker than before.

That said, we do need to start looking at our code more closely. We have a huge attack surface — everything from the kernel to libraries to network service daemons to applications like web browsers — and, with no external assistance at all, we succeed in adding far too many security bugs across that entire surface. There is clearly a market for the location and exploitation of those bugs, and there is quite a bit of evidence that governments are major buyers in that market. It is time that we got better at reviewing our code and reduced the supply of raw materials to the market for exploitable vulnerabilities.

Much of our existing code base needs to be looked at again, and quite a bit of it is past due for replacement. The OpenSSL code is an obvious target, for example; it is also widely held to be incomprehensible and unmaintainable, making auditing it for security problems that much harder. There are projects out there that are intended to replace OpenSSL (see Selene, for example), but the job is not trivial. Projects like this could really use more attention, more contributors, and more auditors.

Another challenge is the proliferation of systems running old software. Enterprise Linux distributions are at least supported with security updates, but old, undisclosed vulnerabilities can persist there for a long time. Old handsets (for values of "old" that are often less than one year) that no longer receive updates are nearly impossible to fix. Far worse, though, are the millions of old Linux-based routers. Those devices tend to be deployed and forgotten about; there is usually no mechanism for distributing updates even if the owners are aware of the need to apply them. Even projects like OpenWrt tend to ignore the security update problem. Given that spy agencies are understandably interested in attacking routers, we should really be paying more attention to the security of this kind of system.

While many in the community have long believed that a great deal of surveillance was going on, the current revelations have still proved to be shocking, and they have severely undermined trust in our communications systems. Future disclosures, including, one might predict, disclosures of activities by agencies that are in no way allied with the US, will make the problem even worse. The degree of corporate collaboration in this activity is not yet understood, but even now there is, unsurprisingly, a great deal of suspicion that closed security-relevant products may have been compromised. There is not a lot of reason to trust what vendors are saying (or not saying) about their products at this point.

This setting provides a great opportunity for free software to further establish itself as a secure alternative. The maker of a closed product can never effectively respond to suspicions about that product's integrity; free software, at least, can be inspected for vulnerabilities. But to take advantage of this opening, and, incidentally, help to make the world a more free place, we need to ensure that we have our own act together. And that may well require that we find a way to become a bit more paranoid while not wrecking the openness that makes our communities work.


Intel and XMir

By Jake Edge
September 11, 2013

Reverting a patch, at least one that isn't causing a bug or regression, is often controversial. Normally, the patch has been technically vetted before it was merged, so there is—or can be—a non-technical reason behind its removal. That is the case with the recent reversion of a patch to add XMir support to the Intel video driver. As might be guessed, rejecting support for the X compatibility layer of the Mir display server resulted in a loud hue and cry—with conspiracy theories aplenty.

The patch adding support for XMir was merged into the xf86-video-intel driver tree on September 4 by maintainer Chris Wilson. That driver is the user-space X.org code for supporting Intel GPUs; it is code that Intel has developed and maintains. The commit message noted that the XMir API had likely been frozen, so support for it was being added to the driver. The patch consists of less than 300 lines of code, most of it confined to a new sna_xmir.c file. Based on the commit and its message, Wilson clearly didn't see any reason not to merge the patch.

All of that changed sometime before the revert on September 7, which also prompted the release of the 2.99.902 snapshot. In the NEWS file for the snapshot was the following message:

We do not condone or support Canonical in the course of action they have
chosen, and will not carry XMir patches upstream.
-The Management

There are a number of possible interpretations for that statement, but, however it was meant, it was certain to raise the ire of Canonical and/or Mir fans—and it did. When asked about the removal of XMir support, Wilson pointed to Intel management for answers. I contacted Dirk Hohndel, CTO of the Intel Open Source Technology Center, who answered the main question at hand: Intel's "engineering team and the senior technical people made the decision that we needed to continue to focus our efforts on X and Wayland", he said. It was a question of focus, he said, "adding more targets to our QA and validations needs, having to check more environments for regressions [...] would require us to cut somewhere else".

So removing support for XMir was requested by Intel management, but seemingly did not sit very well with Wilson. One suspects the NEWS file entry did not get approved, for example. But it's hard to see that any reversion (or outright rejection) of the XMir support would have led to a different outcome. Ubuntu has a legion of fans, who can often be quite vocal when they believe their distribution is being treated unfairly.

Michael Hall, a Canonical employee on the Community team, obliquely referenced the XMir removal in a post to Google+: "You will not make your open source project better by pulling another open source project down." The argument that Hall and others make is that because Intel supports Wayland, it is hamstringing Mir by removing support for it, and, in effect, helping to keep Mir as a single-distribution display server. "This just strikes me as trying to win the race by tripping the competition, not by running faster", Hall said in the comments.

But accepting any code into a codebase you maintain is a burden at some level. Supporting a new component, like a display server, also requires a certain amount of testing. All of those things need to be weighed before taking on that maintenance. As Matthew Garrett put it (also in the comments to Hall's post):

Intel commit to supporting the code that they ship, even if that would require them to write or fix large amounts of code to keep it working. Keeping the XMir code comes at a cost to Intel with (at present) zero benefit to Intel. As long as XMir is a single-distribution solution, it's unsurprising that they'd want to leave that up to the distribution.

Certainly Canonical can continue to carry the XMir patches for the Intel video driver. It is, after all, carrying its own display server code in addition to its Unity user interface and various other Ubuntu-specific components. But Hall sees the "single-distribution solution" as a self-fulfilling prophecy:

Upstream won't take patches because other distros don't use it. Other distros don't use it because other DE's don't use it. Other DE's don't use it because it requires upstream patches that haven't been accepted. Upstream won't accept the patches because other distros don't use it.

Since its initial attempt—with less-than-stellar results—Canonical has not really tried to make any kind of compelling technical argument about Mir's superiority or why any other distribution (or desktop environment) would want to spend time working on it (as opposed to, say, Wayland). The whole idea is to have a display server that serves Unity's needs and will run on multiple form factors in a time frame that Canonical requires. That's not much of an argument for other projects to jump on board. As Garrett points out, Canonical has instead chosen the route of "winning in the market", which is going to require that it shoulder most or all of the burden until that win becomes apparent. Casting the rejection of XMir as an attack of some kind is not sensible, Garrett said:

Refusing to adopt code that doesn't benefit your project in any way isn't a hostile act, any more than Canonical's refusal to adopt code that permitted moving the Unity launcher was a hostile act or upstream Linux's refusal to adopt the Android wakelock code was a hostile act. In all cases the code in question simply doesn't align with the interests of the people maintaining the code.

Other comment threads (on Reddit, for example) followed a similar pattern. Intel focusing on Wayland and X is seen as Mir (or Canonical) bashing, with some positing that it really was an attempt to prop up Tizen vs. Ubuntu Touch (or some other Canonical mobile initiative), or that Intel believes Wayland is so badly broken that it needs to stop the Mir "momentum" any way it can. Most of that seems fairly far-fetched.

One can understand Intel's lack of interest in maintaining support for XMir without resorting to convoluted reasons—though the size of the patch and how self-contained it is do lead some to wonder a bit. There is a risk for Intel in dropping the patch, however. As Luc Verhaegen, developer of the Lima driver for ARM Mali GPUs, pointed out in a highly critical blog post, Intel could actually end up harming its own interests:

By not carrying this patch, Intel forces Ubuntu users to only report bugs to Ubuntu, which then means that only few bug reports will filter through to the actual driver developers. At the same time, Ubuntu users cannot simply test upstream code which contains extra debugging or potential fixes. Even worse, if this madness continues, you can imagine Intel stating to its customers that they refuse to fix bugs which only appear under Mir, even though there is a very very high chance of these bugs being real driver bugs which are just exposed by Mir.

At this point, though, Intel may well be waiting to see the "proof of the pudding". If Canonical is successful at getting Mir onto the desktops of lots of Intel customers in the next year or two, one suspects that any needed changes for Mir or XMir will be cheerfully added to the Intel video driver. For now, the company loses little, and gains some maintenance and testing time, by waiting for it all to play out.

In the end, there is an element of a "tempest in a teapot" to the whole affair. We are talking about 300 lines of code that, evidently, won't need much in the way of changes in the future (since the API is frozen). Intel is almost certainly embarrassed by how the whole thing played out, and Ubuntu fans will undoubtedly see it as yet another diss of their favorite distribution. But in the final analysis, the impact on Mir users will be minimal to non-existent, at least in the short term and probably the long as well.


Page editor: Jonathan Corbet

Copyright © 2013, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds