A rather nasty security hole in the Debian OpenSSL package has generated a
lot of interest—along with a fair amount of controversy—amongst
Linux users. The bug has been lurking for up to two years in Debian and other
distributions, like Ubuntu, based on it. There are a number of lessons to
be learned here about distributions and projects working together or, as in
this case, failing to work together.
Back in April 2006, a Debian user reported a
problem using the OpenSSL library with valgrind, a tool that can check
programs for memory access problems. It was reporting that OpenSSL was
using uninitialized memory in parts of the random number generator (RNG)
code. Using memory before it is initialized to a known value is a well
known way to create hard-to-find bugs, so it is not surprising that the
valgrind report caused some consternation.
Debian hacker Kurt Roeckx tracked the problem down to what he thought were
two offending lines of code and posted a question on
the openssl-dev mailing list:
What I currently see as best option is to actually comment out
those 2 lines of code. But I have no idea what effect this
really has on the RNG. The only effect I see is that the pool
might receive less entropy. But on the other hand, I'm not even
sure how much entropy some unitialised data has.
What do you people think about removing those 2 lines of code?
There were few responses, but they were not opposed to removing the lines,
including one from Ulf
Möller using an openssl.org email address: "If it helps
with debugging, I'm in favor of removing them." Unfortunately, as
was discovered recently, removing one of the two lines was harmless; the
other essentially crippled the RNG so that OpenSSL-generated cryptographic
keys were easy to predict.
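The scale of the problem can be illustrated with a small sketch. This is hypothetical code, not OpenSSL's; the names and the generator are stand-ins. The point it demonstrates is real, though: once the process ID is effectively the only entropy reaching the pool, every generated key is determined by the PID alone, and an attacker need only enumerate the PID range.

```python
import random

# Hypothetical illustration only -- not OpenSSL code.  If the process ID
# is the only entropy that reaches the pool, each key is a deterministic
# function of the PID, so the whole keyspace fits in the PID range.
PID_MAX = 32768  # default pid_max on Linux

def weak_key(pid):
    # Stand-in generator seeded solely by the PID.
    return random.Random(pid).getrandbits(128)

# Enumerate every key such a generator could ever produce.
keyspace = {weak_key(pid) for pid in range(1, PID_MAX + 1)}
print(len(keyspace))  # at most 32768 distinct keys: trivially brute-forceable
```

A 128-bit key should have on the order of 2^128 possible values; here the effective keyspace is at most 2^15, which is why precomputed lists of all vulnerable keys appeared so quickly.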
(For more technical details on the bug and what should be done to respond to
it, see our article on this
week's Security page.)
It turns out, at
least according to OpenSSL core team member Ben Laurie, that openssl-dev is not for discussing
development of OpenSSL. That may be true in practice, but the OpenSSL support web page describes
it as: "Discussions on development of the OpenSSL library. Not for
application development questions!" In addition, the address
suggested by Laurie (openssl-team-AT-openssl.org) does not appear in any of
the documentation or web pages. If it wasn't the right place, it would seem
that the OpenSSL developers could have provided a helpful pointer to the right address, but that did not occur.
It probably was not clear that Roeckx was asking the questions in an
official Debian capacity, nor that he was planning to change the
Debian package based on the answer to his questions. As Laurie rightly points out, he should
have submitted a patch, proposing that it be accepted into the upstream
OpenSSL codebase. That probably would have garnered more attention, even
if it was only posted to openssl-dev. Even so, it seems very unlikely
that the patch in question would ever have made it into an OpenSSL release.
It is in the best interests of everyone, distributions, projects, and
users, for changes made downstream to make their way back upstream. In
order for that to work, there must be a commitment by downstream
entities—typically distributions, but sometimes users—to push
their changes upstream. By the same token, projects must actively
encourage that kind of activity by helping patch proposals make their way
through review. First and foremost, of course, it must be absolutely clear
where such communications should take place.
Another recently reported security vulnerability also came about
because of a lack of cooperation between the project and distributions. It
is vital, especially for core system security packages like OpenSSH and
OpenSSL, that upstream and downstream work very closely together. Any
changes made in these packages need to be scrutinized carefully by the
project team before being released as part of a distribution's
package. It is one thing to let some kind of ill-advised patch be made to
a game or even an office application package that many use; SSH and SSL form the
basis for many of the tools used to protect systems from attackers, so they
need to be held to a higher standard.
Another of Laurie's points, which also bears out the need for a higher
standard, is the timing of the check-in
to a public repository when compared to that of the advisory.
Any alert attacker could have made very good use of the five- or six-day
head start they could have gotten by monitoring the repository to exploit
the vulnerability. While it is certainly possible that some with malicious
intent already knew about the flaw (though no exploits have been reported),
alerting potential attackers to this kind of hole well in advance of
vulnerable users is unbelievably bad security protocol.
This is the kind of problem that could have been handled quickly and
quietly by all concerned. All affected distributions—though it might
be difficult to list all of the Debian-derived distributions out
there—could have been contacted so that the advisory and updates to
affected packages could have been coordinated. One of these days, one of
these problems is going to give Linux a security black eye unless the
community can do a better job of working together.
Comments (18 posted)
For those who have not seen it, Mark Shuttleworth's recent The Art of Release
posting is worth a look. He starts with some rather self-congratulatory
talk about the Ubuntu 8.04 release, saying:
To the best of my knowledge there has never been an "enterprise
platform" release delivered exactly on schedule, to the day, in any
proprietary or Linux OS.
One could quibble with this claim in a number of ways, but it is true that
Ubuntu got out a release designed to be supported for a number of years,
and they did it when they said they would. That, of course, is only part
of the job; now they have to follow through on that little promise of
supporting this distribution into 2011. The initial signs
are good: Ubuntu's support thus far has been solid, and it would appear
that the distribution will not be going away anytime soon.
One might well question whether the timely release of 8.04 is noteworthy.
As a community we are increasingly spoiled; an increasingly large number of
projects and distributions manage to get out regular releases on a
reasonably predictable schedule. Even kernel releases, once known to slip
for a year or more, are now predictable to within a couple of weeks. Now
that free software releases are rather more predictable and reliable than,
say, airline departures, why is the Ubuntu 8.04 release noteworthy?
The answer is the long-term support commitment. Theoretically, a
distribution intended for this sort of long lifetime will have had a degree
of extra care put into its preparation. Important components will have
been given extra time to stabilize so that the distribution will be more
reliable from the outset. Some thought will have gone into the selection
of packages shipped with an emphasis on supportability over the long term.
The whole process requires more effort and a higher degree of assurance
that all of the pieces are truly ready.
The degree to which Ubuntu has done all of that work should become clear
over time. Certainly the software selected for this release is rather less
seasoned than the packages found in a Red Hat or SUSE enterprise release.
But "older" does not necessarily mean "better" or "more stable," so the
real proof will be in how well this distribution holds up over the next
few years.
Meanwhile, Mr. Shuttleworth has already stated that the next long-term
support release will be happening in April, 2010. Ubuntu's success with
8.04, he says, allows this commitment to be made almost two years in
advance. There is, however, a possibility that things could change:
There's one thing that could convince me to change the date of the
next Ubuntu LTS: the opportunity to collaborate with the other,
large distributions on a coordinated major / minor release
cycle. If two out of three of Red Hat (RHEL), Novell (SLES) and
Debian are willing to agree in advance on a date to the nearest
month, and thereby on a combination of kernel, compiler toolchain,
GNOME/KDE, X and OpenOffice versions, and agree to a six-month and
2-3 year long term cycle, then I would happily realign Ubuntu's
short and long-term cycles around that.
This idea is not new, but Mr. Shuttleworth seems to be particularly
attached to it. There is no doubt that there would be advantages to
aligning schedules in this way. The kernel developers, who have been known
to make a special effort for a release destined to be used by a major
enterprise distributor, could focus especially hard on a stable release
knowing that it would be widely used. Higher-level projects could do the
same. The distributors could also, perhaps, find a way to collaborate on
the long-term maintenance of these components, rather than duplicating the
effort of backporting patches into older code. Perhaps they could even get
together for a joint release party, saving even more money.
Or perhaps this is all a nice idea which fails to survive its encounter
with reality. Enterprise distribution releases tend to be
highly-publicized events. Ubuntu might be happy to share its limelight
with the larger distributors, but that feeling might not be reciprocated on
the other side. It is hard to imagine Red Hat or Novell wanting to have
their big enterprise distribution release be just one of many happening
during the same month.
It is also hard to see Ubuntu making an agreement with the
enterprise distributors which specifies both a release date
and the versions of the major components. 8.04 released with the 2.6.24
kernel, which was almost exactly three months old at the time. Red Hat
Enterprise Linux 5 released in mid-March, 2007, when the 2.6.20 kernel
was current - but Red Hat shipped the six-month-old 2.6.18 kernel instead.
Aligning schedules would require more than picking a date; it would also
require adopting similar stabilization periods. It is far from clear that
Ubuntu would want to fall that far behind the leading edge for the sake of
alignment.
And, frankly, it's hard to imagine Debian making a credible commitment
(within one month) to a release date at all.
So aligned schedules for enterprise distributions seem like a hard
sell. A better approach might be to try to wean these distributions off
the "freeze and backport" model of support; this model is expensive to
sustain, brings risks of its
own, and doesn't always fit
the needs of enterprise customers. If the enterprise distributors
were able to track more current software - rather than backporting pieces
of it into older software - better alignment of releases might just come
about on its own.
Comments (8 posted)
It is fair to say that distributed source code management systems are
taking over the world. There are plenty of centralized systems still in
use, but it is a rare project which would choose to adopt a centralized SCM
in 2008. Developers have gotten too used to the idea that they can carry
the entire history of their project on their laptop, make their changes,
and merge with others at their leisure.
But, while any developer can now commit changes to a project while strapped
into a seat in a tin can flying over the Pacific Ocean, that developer
generally cannot simultaneously work with the project's bug database.
Committing changes and making bug tracker changes are activities which
often go together, but bug tracking systems remain strongly in the
centralized mode. Our ocean-hopping developer can commit a dozen fixes,
but updating the related bug entries must wait until the plane has landed
and network connectivity has been found.
There are a number of projects out there which are trying to change this
situation through the creation of distributed bug tracking systems. These
developments are all in a relatively early state, but their potential
- and limitations - can be seen.
One of the leading projects in this area is Bugs Everywhere, which has recently
moved to a new home with Chris Ball as its new maintainer. Bugs
Everywhere, like the other systems investigated by your editor, tries to
work with an underlying distributed source code management system to manage
the creation and tracking of bug entries. In particular, Bugs Everywhere
creates a new directory (called .be) in the top level of the
project's directory. Bugs are stored as directories full of text files
within that directory, and the whole collection is managed with the
underlying SCM.
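The storage scheme can be pictured with a short sketch. The layout and file names below are hypothetical; Bugs Everywhere's actual on-disk format differs in its details. What it shows is the general idea: each bug is a directory of plain-text files, named with a random UUID so that bugs created on separate branches do not collide at merge time.

```python
import os
import tempfile
import uuid

# Hypothetical sketch of bug-in-the-tree storage; the real .be layout
# used by Bugs Everywhere differs in its details.
def new_bug(repo_root, summary, severity="minor"):
    # A random UUID keeps two branches from picking the same directory
    # name, so independently-filed bugs merge without conflict.
    bug_id = str(uuid.uuid4())
    bug_dir = os.path.join(repo_root, ".be", "bugs", bug_id)
    os.makedirs(bug_dir)
    # Each field of the bug is its own small text file.
    with open(os.path.join(bug_dir, "summary"), "w") as f:
        f.write(summary + "\n")
    with open(os.path.join(bug_dir, "severity"), "w") as f:
        f.write(severity + "\n")
    return bug_id

repo = tempfile.mkdtemp()
bug = new_bug(repo, "RNG uses uninitialized memory", severity="critical")
print(sorted(os.listdir(os.path.join(repo, ".be", "bugs", bug))))
# ['severity', 'summary']
```

Because the bug database is just files in the tree, committing it, branching it, and merging it all come for free from the SCM.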
The advantages to an approach like this are clear. The bug database can
now be downloaded along with the project's code itself. It can be branched
along with the code; if a particular branch contains a fix for a bug, it
can also contain the updated bug tracker entry. That, in turn, ensures
that the current bug tracking information will be merged upstream at
exactly the same time as the fix itself. Contemporary projects are
characterized by large numbers of repositories and branches, each of which
can contain a different set of bugs and fixes; distributing the bug
database into these repositories can only help to keep the code and its bug
information consistent everywhere.
There are also some disadvantages to this scheme, at least in its current
form. Changes to bug entries don't become real until they are committed
into the SCM. If a bug is fixed, committing the fix and the bug tracker
update at the same time makes sense; in cases where one is trying to add
comments to a bug as part of an ongoing conversation, the required commit is
just more work to do. The fact that, in git at least, one must explicitly
add any new files created by the bug tracker does not help the situation.
Beyond that, tracking bugs this way creates two independent sets of
metadata - the bug information itself, and whatever the developer added
when committing changes. There is currently no way of tying those two
metadata streams together. Then, there is the issue of merging. Bugs
Everywhere appears to reflect some thought about this problem; most changes
involve the creation of new, seemingly randomly-named files which will not
create conflicts at merge time. It did not take long, however, for your
editor to prove that changing the severity of a bug in two branches and
merging the result creates a conflict which can only be resolved by
hand-editing the bug tracker's files. Said files are plain text, but that
is less comforting than one might think.
All of this can make distributed bug tracking look like a source of more
work for developers, which is not the path to world domination. What is
needed, it seems, is a combination of more advanced tools and better
integration with the underlying SCM. Bugs Everywhere, by trying to work
with any SCM, risks not being easily usable with any of them.
A project which is trying for closer integration is ticgit, which, as one
might expect, is based on git. Ticgit takes a different approach, in that
there are no files added to the project's source tree, at least not
directly; instead, ticgit adds a new branch to the SCM and stores the bug
information there. That allows the bug database to travel with the source
(as long as one is careful to push or pull the ticgit branch!) while keeping the
associated files out of the way. Ticgit operations work on the git object
database directly, so there is no need for separate commit operations. On
the other hand, this approach loses the ability to have a separate view of
the bug database in each branch; the connection between bug fixes and bug
tracker changes has been made weaker. This is something which can be
fixed, and it would appear (from comments in the source) that dealing with
branches is on the author's agenda.
Ticgit clearly has potential, but even closer integration would be
worthwhile. Wouldn't it be nice if a git commit command would
also, in a single operation, update the associated entry in the bug
database? Interested developers could view a commit which is alleged to
fix a bug without the need for anybody to copy commit IDs back and forth.
Reverting a bugfix commit could automatically reopen the bug. And so on.
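A minimal sketch of what that integration might look like follows. It is entirely hypothetical; neither git nor ticgit provides this behavior, and the message convention and data structures are invented for illustration. The idea is simply that a commit operation could scan its own message for a bug reference and update the tracker entry in the same logical step.

```python
import re

# Hypothetical sketch: neither git nor ticgit works this way today.
# A commit operation (or hook) could scan the commit message for a
# "Fixes: <bug-id>" line and close the referenced bug in one step.
bugs = {"a1b2": {"summary": "crash on empty input", "state": "open"}}

def commit(message, tracker):
    # Look for a bug reference in the commit message.
    match = re.search(r"fixes:\s*(\w+)", message, re.IGNORECASE)
    if match and match.group(1) in tracker:
        tracker[match.group(1)]["state"] = "closed"
        return match.group(1)
    return None

commit("Check for empty input before parsing\n\nFixes: a1b2", bugs)
print(bugs["a1b2"]["state"])  # closed
```

Reverting such a commit could run the inverse operation and reopen the bug, keeping the tracker and the history in step automatically.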
In the long run, it is hard to see how a truly integrated, distributed bug
tracker can be implemented independently of the source code management
system.
There are some other development projects in this area, including:
- Scmbug is a relatively
advanced project which aims "to solve the integration problem once and
for all." It is not truly a distributed bug tracker, though; it
depends on hooks into the SCM which talk to a central server.
Regardless, this project has done a significant amount of thinking
about how bug trackers and source code management systems should work
together.
- DisTract is a
distributed bug tracker which works through a web interface. To that
end, it relies on a set of tools, written
in Haskell, which manipulate bug entries stored in a Monotone
repository. Your editor confesses that he did not pull together all
of the pieces needed to make this tool work.
- DITrack is a set of Python
scripts for manipulating bug information within a Subversion
repository. It is meant to be distributed (and, eventually,
"backend-agnostic"), but its use of Subversion limits how distributed
it can be for now.
- Ditz is a set of Ruby scripts
for manipulating bug information within a source code management
system; it has no knowledge of the SCM itself.
As can be seen, there is no shortage of work being done in this area,
though few of these projects have achieved a high level of usability. Only
Scmbug has been widely deployed so far. A few of these projects have the
potential to change the way development is done, though, once various
integration and user interface issues are addressed.
There is one remaining problem, though, which has not been touched upon
yet. A bug tracker serves as a sort of to-do list for developers, but
there is more to it than that. It is also a focal point for a conversation
between developers and users. Most users are unlikely to be impressed by a
message like "set up a git repository and run these commands to file or
comment on a bug." There is, in other words, value in a central system
with a web interface which makes the issue tracking system accessible to a
wider community. Any distributed bug tracking system which does not
facilitate this wider conversation will, in the end, not be successful.
Creating a distributed tracker which also works well for users could be the
biggest challenge of them all.
Comments (43 posted)
Page editor: Jonathan Corbet