Two high-profile kernel bugs—with publicly released
exploits—have recently been making news. Both can be used by
local attackers to gain root privileges, which makes them quite dangerous
for systems that allow untrusted users to log in, but they might also be
used in conjunction with other flaws to produce a remote root exploit. They are,
in short, just the kinds of vulnerabilities that most system administrators
would want to patch quickly, so a look at how distributions have responded is in order.
The vulnerabilities lie in the x86_64 compatibility layer that allows
32-bit binaries to be run on 64-bit systems (see the Kernel page article for more
details). In particular, that code allows 32-bit programs to make system
calls on a 64-bit kernel. One of the bugs, CVE-2010-3301, was
reintroduced into the kernel in April 2008, seven months after being fixed as
CVE-2007-4573. The second, CVE-2010-3081, was discovered in the process of
fixing the first, and had been in the kernel since 2.6.26, which was released in July 2008.
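The mistake behind CVE-2010-3301 is easy to illustrate outside the kernel: the 32-bit entry path range-checked only the low 32 bits of the user-supplied syscall number, but then indexed the syscall table with the full 64-bit register. The following Python sketch models that pattern; it is illustrative only (the table size and function names are invented, and this is not kernel code):

```python
# Illustrative model of the CVE-2010-3301 pattern (not kernel code).
NR_SYSCALLS = 338          # invented size of a 32-bit syscall table
MASK32 = 0xFFFFFFFF

def index_buggy(rax):
    """Range-check only the low 32 bits (like a 'cmpl' on %eax),
    but return the full 64-bit value as the table index."""
    if (rax & MASK32) >= NR_SYSCALLS:
        return -1          # rejected
    return rax             # full 64-bit value escapes the check

def index_fixed(rax):
    """Zero-extend first (the fix truncates %rax to its low 32 bits),
    then check and index with the same value."""
    rax &= MASK32
    if rax >= NR_SYSCALLS:
        return -1
    return rax

crafted = 0x1_0000_0001    # low 32 bits look like syscall 1
print(index_buggy(crafted))   # 4294967297 -- out-of-bounds index accepted
print(index_fixed(crafted))   # 1
```

With a register value an attacker can control (via ptrace, in the real bug), the buggy check lets a far out-of-bounds table index through, which is what made the flaw exploitable.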
Obviously, these are long-standing kernel holes that may have been
available to attackers for as long as two-and-a-half years. In fact, a posting on the
full-disclosure mailing list claims that CVE-2010-3081 was known by a group
called "Ac1db1tch3z" since the code causing the bug was committed in April
2008. Included in the
post was a working exploit.
Ben Hawkes found and reported both of the vulnerabilities, and fixes for
both were quickly committed to the mainline kernel. Both were committed on
September 14, but it is clear that at least CVE-2010-3081 was known a week
earlier. Stable kernels were released on September 20, while
some distributions already had fixes out on September 17.
While enterprise kernels (e.g. RHEL, SLES) tend to be based on fairly old
kernels (RHEL 5 is 2.6.18-based, while SLES 11 is 2.6.27-based), the
distributors often backport features from newer kernels. Unsurprisingly,
that can sometimes lead to backporting bugs along with the features. For
RHEL, that meant that it was vulnerable to
CVE-2010-3081 even though that code came into the kernel long after
2.6.18. On September 21, Red Hat issued updates for RHEL 5 and 5.4 to fix
that problem. CentOS, which necessarily lags RHEL by at
least a few hours, now has a fix for CVE-2010-3081 available for CentOS 5.
For SLES, the situation is a little less clear. Based on its kernel
version, it should be vulnerable to both flaws, but no updates have been
issued as of this writing. In a separate advisory on September 21, SUSE
noted (in the "Pending Vulnerabilities ..." section) that it was working on
fixes for both.
For the more community-oriented distributions (Debian, Fedora, openSUSE,
Ubuntu, and others), the response has been somewhat mixed. Ubuntu, Debian,
and Fedora had fixes out on September 17 for both bugs (or, in the case of
Debian, just one, as its stable distribution ("Lenny") is based on 2.6.26 and thus
not vulnerable to CVE-2010-3301). openSUSE has yet to release a fix, and none
of the secondary distributions that we track (Gentoo, Mandriva, Slackware,
etc.) has put out a fix either.
How quickly can users and administrators expect security fixes? The
enterprise vendors, who are typically more cautious before issuing an
update, took a week or more to get fixes in the hands of their users.
Meanwhile, exploits have been published and have been used in the wild.
That has to make a lot of system administrators very nervous. Those
running SUSE-based systems must be even more worried.
A one-week delay (or more depending on where you start counting) may not
seem like a lot of time, but for critical systems, with lots of sensitive
data, it probably seems pretty long.
Local privilege escalation flaws are often downplayed because they require
a local user on the system to be useful. But that thinking has some flaws
of its own. On even a locked-down system, with only trusted users being
able to log in, there may be ways for local exploits to be turned into
remote exploits. Compromised user accounts might be one way for an
attacker to access the system, but there is a far more common route:
One badly written, or
misconfigured, web application, for example, might provide just the hole
that an attacker needs to get their code running on the system. Once that
happens, they can use a local privilege escalation to compromise the
entire system—and all the data it holds. Since many servers sit on
the internet and handle lots of web and other network traffic, compromising
a particular, targeted system may not be all that challenging to a
dedicated attacker. Using a "zero day" vulnerability in a widely deployed
web application might make a less-targeted attack (e.g. by script
kiddies) possible as well.
While most of the "big four" community distributions were quick to release
updates for these problems, they still left a window that attackers could
have exploited. That is largely unavoidable unless embargoes were
enforced on sensitive patches flowing into the mainline
kernel—something that Linus Torvalds has always been opposed to.
Then, of course, there is the (much larger) window available to those who
closely track kernel development and notice these bugs as they are fixed.
That is one of the unfortunate side-effects of doing development in the
open. While it allows anyone interested to look at the code, and find
various bugs—security or otherwise—it does not, cannot, require
that those bugs get reported. We can only hope that there are enough "white hat" eyes
focused on the code to help outweigh the "black hat" eyes that are clearly
looking at it as well.
For distributors, particularly Red Hat and Novell, it may seem like these
flaws are not so critical that the fixes needed to be fast-tracked. Since
there are, presumably, no known, unpatched network service flaws in the
packages they ship, a local privilege escalation can be thwarted by a more
secure configuration (e.g. no untrusted users on
critical systems). While that is true in some sense, it may not make those
customers very happy.
There are also plenty of systems out there with some
possibly untrusted users. Perhaps they aren't the most critical systems,
but that doesn't mean their administrators want to turn them over to
attackers. It really seems like updates should have come
more quickly in this case, at least for the enterprise distributions. As we
have seen, a
reputation for being slow to fix
is a hard one to erase; hopefully it's not a reputation that Linux is getting.
Comments (14 posted)
Fusion Garage, makers of the Linux-based joojoo
tablet, are still out of compliance with
the GPL, but have responded to Linux kernel
developer Matthew Garrett regarding his complaint this
month to US Customs and Border Protection.
Until Garrett filed a so-called "e-allegation"
form with CBP, the company had refused his requests
for corresponding source for the GPL-covered software
on the joojoo. After the filing, company spokesperson
Megan Alpers said that "Fusion Garage is discussing
the issue internally."
The joojoo concept
began as the "CrunchPad," proposed in an essay
by TechCrunch founder Michael Arrington.
He wanted a "dead simple and dirt
cheap touch screen web tablet to surf the
web." After a joint development project between Fusion Garage and
TechCrunch fell apart last year, Fusion Garage went on
to introduce the tablet on its own.
Garrett said he is not currently planning on taking
further action. "The company seems to be at least
considering the issue at the moment, so I wasn't
planning on doing anything further just yet," he said.
Bradley Kuhn, president of the Software Freedom
Conservancy, which carries out GPL enforcement, praised the effort
on his blog:
However, it's really important that we try
lots of different strategies for GPL enforcement; the
path to success is often many methods in parallel. It
looks like Matthew already got the attention of the
violator. In the end, every GPL enforcement strategy
is primarily to get the violator's attention so they
take the issue seriously and come into compliance
with the license.
In June, Garrett checked
out a joojoo tablet and
mailed the company's support address asking
for the source to the modified kernel. He posted
the company's reply to his blog: "We will make the
source release available once we feel we are ready
to do so and also having the resources to get this
sorted out and organized for publication."
Although the device is fairly simple, Garrett said
he would like to see some of the kernel changes.
"They seem to be exposing ACPI events directly through
the embedded controller. I'm interested to see how
they're doing this and how drivers bind to it," he said.
The US government offers several tools for
enforcing copyright at the border, which are
cheaper and simpler for copyright holders than an
infringement case in a Federal court. Beyond the
"e-allegation" form that Garrett filed, other
options include "e-recordation," which allows
the holder of a US trademark or copyright to
request that US Customs and Border Protection
stop infringing goods at the border, and a Section
337 complaint, which can result in the US
International Trade Commission assigning government
attorneys to work on the US rights-holder's case.
"Customs will seize something if it's a clear
knockoff product but they don't like to wade
into disputes where it's not clear," said attorney Jeffery
Norman of Kirkland & Ellis, a law firm that
represents clients in Section 337 cases. "The ITC
has jurisdiction to litigate disputes involving
copyright, trademark and patent, but 99% of cases
involve patents," he said.
While most of the ITC's copyright cases involve
product packaging or manuals, Norman said,
the ITC has taken action to exclude infringing
copies of arcade game software. In a 1981 case, "Coin-Operated Audio Visual Games and Components
Thereof," the ITC unanimously voted to
exclude video games whose code infringed both
copyright and trademarks of US-based Midway. In a 1982
case, also brought by Midway, the ITC not only
voted to exclude infringing Pac-Man games, but issued
a cease and desist order against the importers while
the case was in progress.
In an LWN
comment, Norman added, "Customs will only
enforce either an ITC order or 'Piratical' [obviously
infringing] copies." He advocated the ITC approach:
Therefore I think
the ITC action is the way to go. You can get a quick
exclusion order potentially, and I would expect you
might be able to get a law firm to take this on pro
bono as it will generate a lot of publicity.
Open Source consultant Bruce Perens said that
retailers in the USA could
be a reason for manufacturers in other countries
to check license compliance for their embedded software:
[Retailers] need to get a
particular income from their shelf space, and they've
dedicated an area for it that they might have used for
other merchandise. They've sunk part of their capital
into this opportunity. They have advertising already
in process. And then the U.S. Customs impounds or
destroys their merchandise, and they're stuck. Not
only the capital they laid out is gone, but the
potential to make the income. For the retailer,
this should be a big deal. So, they absolutely must
be assured that their manufacturers won't be getting
them into this situation.
Although the prospect of Customs enforcement
because of code they've never heard of might scare
some vendors away from open-source-based products,
Perens said, "the good news is that products like
Android are so important to the market that retailers
must learn to deal with Open Source or let their
competitors have the business."
No GPL licensor has yet filed a complaint with
the ITC, and even if a complaint is filed, the ITC
can decide whether or not to act on it. However,
import-based enforcement, including temporary
orders enforced at the Customs level, can move
much faster than ordinary infringement lawsuits.
Whether the resulting uncertainty is enough to make
device vendors double-check their license compliance
remains to be seen.
Comments (1 posted)
Debian's "testing" distribution is where Debian developers prepare the next
stable distribution. While this is still its main purpose, many users have
adopted this version of Debian because it offers them a good trade-off between
stability and freshness. But there are downsides to using the testing
distribution, so the "Constantly Usable Testing" (CUT) project aims to
reduce or eliminate those downsides.
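Adopting testing is just a matter of pointing apt at it. A minimal /etc/apt/sources.list for tracking testing might look like this (the mirror URL is only an example):

```
deb http://ftp.debian.org/debian/ testing main
deb-src http://ftp.debian.org/debian/ testing main
```

After an apt-get update, upgrades then follow whatever currently flows into testing, which is exactly where the trade-offs discussed below come from.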
About Debian unstable & testing
Debian "unstable" is the distribution where developers upload new versions of
their packages. But, frequently, some packages in unstable are not installable due
to changes in other packages or transitions in libraries
that have not yet been completed.
Debian testing, in contrast, is managed by a tool that ensures the
consistency of the whole distribution: it picks updates from unstable only if
the package has been tested enough (10 days usually), is free of new
release-critical bugs, is available on all supported architectures, and it
doesn't break any other package already present in testing. The release
team controls this tool and provides "hints" to help it find a set
of packages that can flow from unstable to testing.
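Those rules can be modeled as a simple predicate. The toy Python sketch below is only a loose model of the migration tool's checks (the real tool, known as "britney", is far more involved, and all field names here are invented):

```python
# Toy model of the testing-migration rules (not the real "britney" tool).

REQUIRED_AGE = 10  # usual days in unstable before a package may migrate

def may_migrate(pkg, supported_arches):
    """Return True if a package version in unstable may enter testing."""
    if pkg["age_days"] < REQUIRED_AGE:
        return False                      # not tested long enough
    if pkg["new_rc_bugs"]:
        return False                      # introduces release-critical bugs
    if not supported_arches <= set(pkg["built_on"]):
        return False                      # missing builds on some architectures
    if pkg["breaks_testing"]:
        return False                      # would break packages already in testing
    return True

arches = {"i386", "amd64", "armel"}
ok = {"age_days": 12, "new_rc_bugs": [],
      "built_on": ["i386", "amd64", "armel"], "breaks_testing": False}
young = dict(ok, age_days=3)
print(may_migrate(ok, arches), may_migrate(young, arches))  # True False
```

The real tool additionally has to find a consistent *set* of packages that can migrate together, which is where the release team's "hints" come in.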
Those rules also ensure that the packages that flow into testing are
reasonably free of show-stopper bugs (like a system that doesn't boot, or
X that doesn't work at all). This makes it very attractive to users
who like to regularly get new upstream versions of their software without
dealing with the biggest problems associated with them. Yet several
Debian developers advise people not to use testing. Why is that?
Known problems with testing
Disappearing software:
The release team uses the testing distribution to prepare the next
stable release and from time to time they remove packages from it. That is
to ensure that other packages can migrate from
unstable to testing, or because a package has long-standing release-critical
bugs without progress towards a resolution. The team will also
remove packages on request if the maintainers believe
that the current version of the software cannot be supported
(security-wise) for 2 years or more. The security team also regularly
issues such requests.
Long delays for security and important fixes:
Despite the 10-day delay in unstable, there are always some annoying
bugs (and security bugs are no exceptions) that are only discovered when
the package has already migrated to testing. The maintainer might be quick to
upload a fixed package in unstable, and might even raise the urgency to
allow the package to migrate sooner, but if the packages get entangled in a
large ongoing transition, it will not migrate before the transition is
completed. Sometimes it can take weeks for that to happen.
The delay can be avoided by doing direct uploads to testing (through
testing-proposed-updates) but that mechanism is almost never used except
during a freeze, where targeted bug fixes are the norm.
Not always installable:
With testing evolving daily, updates sometimes break
the last installation images available (in particular netboot images
that get everything from the network). The debian-installer (d-i) packages
are usually quickly fixed but they don't move to testing automatically
because the new combination of d-i packages has not necessarily been
validated yet. Colin Watson summed
up the problem:
Getting new installer code into testing takes too long, and problems
remain unfixed in testing for too long. [...] The problem with d-i
development at the moment is more that we're very slow at producing new
d-i *releases*. [...] Your choices right now are to work with stable (too
old), testing (would be nice except for the way sometimes it breaks and
then it tends to take a week to fix anything), unstable (breaks all the time).
CUT has its roots in an old proposal by Joey
Hess. That proposal introduced the idea that the stable release is not Debian's
sole product and that testing could become—with some work—a suitable
choice for end-users. Nobody took on that work and there has been no visible
progress in the last 3 years.
But recently Joey brought up
CUT again on the debian-devel mailing list, and Stefano
Zacchiroli (the Debian project leader) challenged him to set up a BoF on
CUT for Debconf10. It turned out to be one of the most heavily
attended BoFs (video recording is here), so
there is clearly a lot of interest in the topic. There's now a dedicated wiki and an
Alioth project with a mailing list.
The ideas behind CUT
Among all the ideas, there are two main approaches that have been
discussed. The first is to regularly snapshot testing at points where it
is known to work reasonably well (those snapshots would be named "cut").
The second is to build an improved testing distribution tailored to the
needs of users who want a working distribution with daily updates; its
name would be "rolling".
Regular snapshots of testing
There's general agreement that regular snapshots of testing are required:
it's the only way to ensure that the generated installation media will
continue to work until the next snapshot. If tests of the snapshot do not
reveal any major problems, then it becomes the latest "cut". For clarity,
the official codename would be date based: e.g. "cut-2010-09" would be the
cut taken during September 2010.
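Deriving such a codename is mechanical; a tiny Python sketch (the function itself is hypothetical, only the "cut-YYYY-MM" format comes from the proposal):

```python
import time

def cut_codename(t=None):
    """Date-based codename for a snapshot, e.g. 'cut-2010-09'
    for a cut taken during September 2010."""
    if t is None:
        t = time.localtime()
    return "cut-" + time.strftime("%Y-%m", t)

print(cut_codename(time.strptime("2010-09-20", "%Y-%m-%d")))  # cut-2010-09
```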
While the frequency has not been fixed yet, the goal is clearly to be
on the aggressive side: at the very least every 6 months, but every month
has been suggested as well. In order to reach a decision, many aspects
have to be balanced.
One of them (and possibly the most important) is the security support.
Given that the security team is already overworked, it's difficult to
put more work on their shoulders by declaring that cuts will be supported
like any stable release. No official security support sounds bad, but it's
not necessarily as problematic as one might imagine.
Testing's security record is generally better than stable's is (see
the security tracker) because
fixes flow in naturally with new upstream versions. Stable still gets fixes
for very important security issues earlier than testing, but on the whole
there are fewer known security-related problems in testing than in stable.
Since it's only a question of time until the fixed version comes naturally
from upstream, more frequent cut releases mean that users get security
fixes sooner. But Stefan Fritsch, who used to be involved in the Debian testing security team,
has also experienced the downside for anyone who tries to contribute security fixes:
The updates to testing-security usually stay useful only for a few weeks, until
a fixed version migrates from unstable. In stable, the updates stay around for
a few years, which gives a higher motivation to spend time on preparing them.
So if it's difficult to form a dedicated security team, the work of
providing security updates must be done by the package maintainer. They are
usually quite quick to upload fixed packages in unstable but tend to not monitor
whether the packages migrate to testing. They can't be blamed for that because
testing was created to prepare the next stable release and there is thus no
urgency to get the fix in as long as it makes it before the release.
CUT can help in this regard precisely because it changes this assumption:
there will be users of the testing packages and they deserve to get security
fixes much like the stable users.
Another aspect to consider when picking a release frequency is the amount of
associated work that comes with any official release: testing upgrades from the
previous version, writing release notes, and preparing installation images. It
seems difficult to do this every month. With this frequency it's also
impossible to have a new major kernel release for each cut (since they
tend to come out only every 2 to 3 months) and the new hardware support
that it brings is something worthwhile to many users.
In summary, regular snapshots address the "not always installable" problem
and may change the perception of maintainers toward testing so that hopefully
they care more about security updates in that distribution (and in cuts).
But it does not solve the problem of disappearing packages. Something else
is needed to fix that problem.
A new "rolling" distribution?
Lucas Nussbaum pointed
out that regular snapshots of Debian are not really a new concept:
How would this differentiate from other distributions doing 6-month
release cycles, and in particular Ubuntu, which can already be seen as
Debian snapshots (+ added value)?
In Lucas's eyes, CUT becomes interesting if it can provide a
rolling distribution (like testing) with a "constant flux of new
upstream releases". For him, that would be "something quite unique in
the Free Software world". The snapshots would be used as a starting
point for the initial installation, but the installed system would point
to the rolling distribution and users would then upgrade as often as they
want. In this scenario, security support for the snapshots is not so important;
what matters is the state of the rolling distribution.
If testing were used as the rolling distribution, the problem of disappearing
packages would not be fixed. But that could be solved with a new rolling
distribution that would work like testing but with adapted
rules, and the cuts would then be snapshots of rolling instead of testing.
The basic proposal is to make a copy of testing and to re-add the packages
which have been removed because they are not suited for a long term release
while they are perfectly acceptable for a constantly updated release (the most
recent example being Chromium).
Then it's possible to go one step further: during a freeze, testing is no
longer automatically updated, which makes it inappropriate to feed the
rolling distribution. That's why rolling would be reconfigured to grab
updates from unstable (but using the same rules as testing).
Given the frequent releases, it's likely that only a subset of architectures
would be officially supported. This is not a real problem because the
users who want bleeding edge software tend to be desktop users on mainly
i386/amd64 (and maybe armel for tablets and similar mobile products).
This choice—if made—opens up the door to even more possibilities:
if rolling is configured exactly like testing but with only a subset of the
architectures, it's likely that some packages would migrate to rolling before
testing where non-mainstream architectures are lagging in terms of auto-building
(or have toolchain problems).
While being ahead of testing can be positive for the users, it's also
problematic on several levels. First, managing rolling becomes much more
complicated because the transition management work done by the release
team can't be reused as-is. Then it introduces competition between both
distributions which can make it more difficult to get a stable release
out, for example if maintainers stop caring about the migration to testing
because the migration to rolling has been completed.
The rolling distribution is certainly a good idea but the rules governing it
must be designed to avoid any conflict with the process of releasing a stable
distribution. Lastly, the mere existence of rolling would finally fix the marketing
problem plaguing testing: the name "rolling" does not suggest that the
software is not yet
ready for prime time.
Whether CUT will be implemented remains
to be seen, but it's off to a good start: ftpmaster Joerg Jaspert said
that the new archive server can cope with a new distribution, and there's
now a proposal shaping up. It may get going quickly, as there is already an implementation
plan for the snapshot side of the project. The rolling
distribution can always be introduced later, once it is ready. Both approaches
can complement each other and provide something useful to different kinds of users.
The global proposal is certainly appealing: it would address
the concerns of obsolescence of Debian's stable release by making
intermediary releases. Anyone needing something more recent for hardware
support can start by installing a cut and follow the subsequent releases
until the next stable version. And users who always want the latest
version of all software could use rolling after having installed a cut.
From a user point of view, there are similarities with the mix of normal
and long-term releases of Ubuntu. But from the development side, the process
followed would be quite different, and the constraints imposed by having a
constantly usable distribution are stronger. With CUT, any wide-scale
change must be designed so that it can happen progressively, in a manner that is
transparent to users.
Comments (49 posted)
Page editor: Jonathan Corbet