By now most LWN readers will be well aware of the compromise of the systems
running kernel.org. The services provided by kernel.org have been offline
since the compromise was discovered, with the result that the flow of changes into the 3.1-rc
kernel has slowed considerably. Kernel.org will eventually come back,
perhaps with some significant policy changes. But the real effect may be a
wider discussion of security within the development community, which can
only be a good thing.
This compromise is far from the first that we have seen in our community.
Numerous projects and companies have had their systems broken
into at times; in some of those incidents, the attackers have replaced
distributed code with versions containing trojans or backdoors. Think back
to the OpenSSH and sendmail compromises, for example. Kernel.org
suffered a compromise (smaller in extent)
in 2010; there was also an attempt to insert a
backdoor into the kernel source back in 2003. In general, these
attempts have been caught quickly, and there is little (known) history of
compromised code being distributed to users. Cases where backdoors and
other misfeatures have actually been distributed have typically not been
the result of attacks; some readers will remember the InterBase backdoor,
for example, which predated that project's release as open source.
So, while the history of attacks is unnerving, the actual results in terms
of compromised systems have not been all that bad. So far.
Whether this attack on kernel.org has had a worse outcome is not yet
known. As your editor wrote
in a different venue, it is quite unlikely that the mainline Linux source
repository has been corrupted; git makes it almost certain that any such
attempt would be detected quickly. But that article was deliberately
limited in scope; there are many possible attack vectors that do not
involve a direct attempt to corrupt Linus's repository. Ruling out these
other attacks will be harder than verifying the integrity of the mainline repository.
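The reason git provides that assurance is that every object is named by a cryptographic hash of its contents. A minimal Python sketch (an illustration of the scheme, not git itself) shows how git computes an object name:

```python
import hashlib

def git_object_id(obj_type: str, data: bytes) -> str:
    """Name an object the way git does: SHA-1 over a type/size
    header followed by the raw content."""
    header = f"{obj_type} {len(data)}\0".encode()
    return hashlib.sha1(header + data).hexdigest()

# The well-known name of the empty blob:
print(git_object_id("blob", b""))
# → e69de29bb2d1d6434b8b29ae775ad8c2e48c5391
```

Because a commit's content includes the hashes of its tree and its parents, silently altering any historical file would change every descendant commit ID, and thousands of cloned repositories would notice the mismatch on their next fetch.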
For example, kernel.org distributes tarballs and flat-file
patches that are not as easy to verify. For obvious reasons, comparing
those files against the checksums stored in the same directory is not
considered to be adequate at this point. Kernel.org also serves as a
mirror site for a wide range of other projects and distributions.
Verifying all of those mirrored files will not be a quick exercise.
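A sounder approach is to compare each file against a digest obtained over a channel the attacker does not control, such as a PGP-signed announcement or a copy held on separate infrastructure. A minimal sketch (the helper function here is ours, not a kernel.org tool):

```python
import hashlib

def sha256_of(path: str) -> str:
    """Stream a file through SHA-256 so that large tarballs
    need not fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

# Compare the result against a digest obtained out-of-band; a
# checksum file sitting next to the tarball on the same (possibly
# compromised) server proves nothing.
```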
There is also concern about kernel repositories maintained by other
developers that feed patches into the mainline. It is not uncommon to
create a throwaway branch for merging; that branch is often deleted (or
simply forgotten about) after the pull is done. It is possible that
changes to a throwaway branch between its creation and its pulling into the
mainline could go undetected. There are ways to avoid this possibility -
simply including the commit ID for the head of that branch in the pull
request, for example - but that is not routinely done now.
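Quoting the head commit ID works because that single hash transitively pins the branch's entire history. A simplified Python model of git's commit hashing (real commits also carry author and committer lines, omitted here) shows why a rewritten ancestor cannot go unnoticed:

```python
import hashlib

def commit_id(tree: str, parent: str, message: str) -> str:
    """Simplified git-style commit name; real commits also record
    author/committer metadata."""
    body = f"tree {tree}\n"
    if parent:
        body += f"parent {parent}\n"
    body += f"\n{message}\n"
    data = body.encode()
    return hashlib.sha1(f"commit {len(data)}\0".encode() + data).hexdigest()

c1 = commit_id("a" * 40, "", "add feature")
c2 = commit_id("b" * 40, c1, "merge branch")

# If an attacker rewrites the first commit, the head ID quoted in
# the pull request no longer matches anything in the tampered branch:
c1_evil = commit_id("a" * 40, "", "add feature (plus a backdoor)")
assert commit_id("b" * 40, c1_evil, "merge branch") != c2
```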
All recently used branches on all kernel.org-hosted repositories should be
checked for tampering, but to focus on that threat is to miss the bigger
picture. Every tree feeding into the mainline is a possible way for
malicious code to get into the kernel, but the compromise of kernel.org has
not changed the situation much, for a couple of reasons:
- All of those trees originate outside of kernel.org, so each one lives
on at least one other system which is also a target for attack. Often
that other system is a developer laptop. Anybody who has attended a
few developer conferences has seen a long line of laptop bags against
the wall at meals and receptions; it would not be all that hard to
borrow one for the time it takes to drink a beer or two. Those
systems and their owners are also subject to all the usual forms of
remote attack: corrupt PDF files, social engineering, etc. In many of
these cases, a successful attack is less likely to be detected than it
is on a site like kernel.org.
- In our normal development process, with proper code review and no
compromised systems, we still insert security vulnerabilities into the
kernel - and most other projects as well. So it is a bit of a stretch
to say that we would detect an attempt to deliberately add vulnerable
code through the normal patch submission process. We just don't have
enough people to review code in general; people who are willing and
able to do a proper security review are even harder to come by. The
community could have 100% secure infrastructure and still be
vulnerable to attack.
Kernel.org will be back soon, possibly in a more secure mode. It might
make sense to ask, for example, whether it is really necessary to have 450
shell accounts on such an important system. But it seems clear that a stronger
kernel.org, as important as that is, will not make our security worries go
away. Given the incentives that exist, there will certainly be more
attacks, and some of those attacks will originate in highly competent, well
funded organizations. Those attacks might overwhelm even a reinforced
kernel.org, but attackers need not focus their attention on just that one site.
What is needed is to make the entire system more robust. A discussion
started at the Linux Plumbers Conference centers around the creation of a
"compilation sandbox" to defend developers (and users) against malicious
code inserted into makefiles or configuration scripts, for example.
Defending against malicious kernels will be rather harder, but it merits
some thought. Someday, perhaps, we'll have static analysis tools that can
find an increasing variety of security problems before they are distributed
to users. There's a lot that can be done to block future attacks.
But, as Bruce Schneier has often said, security efforts focused exclusively
on prevention are doomed to fail; there is a strong need for detection and
mitigation efforts as well. We are not as good in those areas as we should
be; the fact that the kernel.org compromise went unnoticed for days, even
when the system was experiencing unexplained kernel crashes, makes that clear.
We need to improve our ability to detect successful attacks and reduce the
damage that those attacks can cause. Because there will be more attacks,
and some of them will succeed.
This isn't about script kiddies anymore; it hasn't been for a while now.
The compromise of kernel.org needs to be seen as part of a wider pattern of
attacks on high-profile sites - Google, DigiNotar, RSA Security, etc.
At the minimum, large amounts of money are involved; it is not an
exaggeration to say that, in some cases, lives are at stake. The continued
success of free software depends on our ability to deal with this threat,
and to do so without compromising the openness on which our community
depends. It is a hard problem, but not an impossible one. We have solved
many hard problems to get as far as we have; we can deal with this one as well.
There is a wide diversity in development models used by free and open source
software projects. Allison Randal explored that diversity in her keynote talk at
the Linux Plumbers
Conference. As one would guess, there is no model that works for all projects and, in fact, projects can learn from
the models used by others—which can help as the project evolves over time.
Since "the dawn of time", humans have been searching for a
mythical "one true development model", but it does not exist,
Randal said. This is not the fault of the computers, but is, instead, a
"human problem". There has been much hope over the years that
artificial intelligence will help to solve these kinds of problems, but
that hasn't happened and may not, though she hopes things don't turn out
that way. In the
meantime, "if the code doesn't do what you wanted it to do, it's your fault".
There are various limitations that projects have that stem from these "human
problems". The first is time, because there is never enough of it to
implement all the different ideas that project participants might have.
Knowledge is another limitation because there is always more to learn:
"If you haven't learned anything this week, you aren't trying hard
enough", she said.
We also repeat the same mistakes over and over
again because projects are limited by the memory of their participants. Not
only do individuals repeat their own mistakes, but new project members
repeat mistakes made earlier in the lifetime of the project. Distraction
causes people to drop out of projects because of real life changes
(families, jobs, and so on) or due to finding something new that interests
them. In addition, participants may find their patience wearing thin,
because they are "tired of answering the same user questions"
repeatedly or are weary of the friction between various developers. All of
these things get in the way of producing good code and sometimes lead to
projects with great ideas that "never make it", she said.
In some ideal world, projects could predict the future perfectly, which
would allow them to plan for all of the obstacles that they will face. That
prescience would also allow projects to anticipate exactly what their users
want and to implement it so that it "maps exactly" to those wants. The
project documentation would make it obvious why the project is relevant,
how to use the project's code, and how to immediately start pitching in to
help. Likewise, the testing, deployment, and maintenance phases of the
project would proceed without a hitch.
There are several mainstream development models whose names are often
heard, Randal said. One is the "waterfall" model, which essentially relies
on planning perfectly in advance, whereas the "agile" model requires
that you plan perfectly in small steps. Neither is very commonly used in
FLOSS projects, at least partly because they are "very process
heavy". Those and other development models require a set of people
in the project whose only focus is to keep the process itself going, which
doesn't work very well for volunteer-staffed FLOSS projects. Even in projects
where there is a good bit of funding behind them, we "rarely see
heavyweight development models", she said.
There are multiple influences on how FLOSS projects choose development
models, but perhaps the largest is that the developers are the ones who set
the direction for the project. There is typically a large group of
geographically distributed participants, some of whom are volunteers, while
others may be contributing for various
different companies. It's not the companies that set the direction,
though; developers choose the way by being interested in "working on this
piece or that one", Randal said.
Projects also have various goals with regard to innovation. Some set out
to "keep the state of the art moving ahead", while others
focus on incremental improvements. In addition, some projects are driven
by competition with other projects or proprietary software offerings. The
competition for success is often what drives innovation within those projects.
Project forks can also influence the development model. Forks often get a
"bad rap", Randal said, but they should really be seen as
exploring another evolutionary path. A fork does not necessarily create a
"permanent chasm" between the two paths. Aspects that are
successful in the fork, both of the feature and development model variety,
can and often do merge back to the mainline.
A "derivative" is essentially a different way to look at a fork, when
everyone recognizes that there is value in exploring the different ways to
achieve a shared "baseline goal", Randal said. Like with a
fork, the experience
gained by the derivative can make its way back to the original project.
Conflicts and governance
Some projects are "conflict-productive", while others are
"conflict-averse". Either can work, but a project needs to
recognize which fits it. While many projects work completely in the open,
none are really totally public, she said. Discussions happen and decisions
are often made informally between participants, in conversations or meetings
that are not recorded. The important part is that the majority of those discussions
happen in a way that allows the information in them to flow to the
public. A mailing list is a good method for doing so, though the traffic
can grow to the point where it is too large for people to use, and then some kind
of summary is needed.
The "benevolent dictator" model is very popular in FLOSS
projects, partly because many projects start with just one person who
becomes the obvious choice. A benevolent dictator typically has one of two
roles, either a "tie-breaker" or a "trailblazer",
she said. A tie-breaker can sit back and listen to the discussion and
determine a way forward when there is a conflict among the project
participants. The trailblazer, on the other hand, takes an active role in
leading the project in the direction they think it should go. Elements of
both roles are often seen in benevolent dictators of FLOSS projects, she said.
One challenge that some projects have already faced and others will have to deal with
in the future is "what happens when the benevolent dictator
retires?" It is a natural process that we already see with project
contributors moving on, and project leaders will eventually want to
do the same. The tie-breaker role tends to trickle down to
other experienced project members even before a benevolent dictator steps
down. The trailblazer role may be harder to replace.
Other projects don't have a single leader and will strive to reach
decisions by way of consensus. The GNOME project is a good example of
this, she said. There are challenges with that model because it can take a
longer time to make decisions that way. In projects like that, there are
often different people that take on the trailblazing role at different
times for various pieces
of the project.
Projects with a strict hierarchy are not seen very often, because FLOSS
projects tend to be more egalitarian, she said. The "JFDI" (just f* do
it) model is
fairly common because it allows developers to scratch their itch
immediately. If the idea doesn't work out, or others don't like it, it can
either be fixed or become an evolutionary branch that just dies off. That
model is somewhat challenging as it requires participants to "embrace
the chaos" that it can bring.
Any way that there is to structure a project can be found somewhere in the
FLOSS universe, and "they all work to one extent or another",
Randal said. In addition, the development model is not a good predictor of
the success of a project. There are successful projects out there with bad code, bad
organization, and/or bad structure, she said.
After a while, projects tend to reach a certain equilibrium where longtime
members have set up a model and structure in a way that works. But as new
people come on board, those things may need to change. The "model
that works for your project today may not work in ten years", Randal
said; in fact, it "almost certainly won't".
With that in mind, Randal challenged the audience to think about their
projects and to try to determine the best features of their development
models. Then she suggested that the audience look around for a
project that does things in exactly the opposite way and try to learn how that
project makes that work. It is important for projects and people to keep
learning, adapting, and evolving; seeing what else is out there is part of that.
Randal answered several interesting audience questions after she finished her talk.
She noted that she has worked with many different models along the way, and that
there is none that stands out as "best"—or even necessarily
"better"—it depends on the project and participants. There are some
combinations that don't work very well together, as well as some models
that will drive some people off.
The Linux kernel "is one of the best examples of a benevolent
dictator model that works", she said, because Linus Torvalds
"does such a good job of embracing chaos". Git is a big part
of that because it encourages branching. But that also leads to challenges for
those who want to use the kernel. Kernel integration teams have to be
large to keep up with the chaos of tracking patches and branches.
Randal sees Torvalds as mostly being in the tie-breaker role, as it would
be difficult to be a trailblazer for the kernel as a whole because it is
moving so quickly. She doesn't foresee any major problems when Torvalds
decides to retire as there are a number of kernel hackers that have
established themselves as tie-breakers in various areas, and one or more of
those folks could step up, though "some people will be nervous"
when that day comes.
Debian and Perl were two examples that she gave of projects that have made significant shifts in their
development model over time. Debian has always been
consensus-based—and still is—but recently Debian project leader
Stefano Zacchiroli has been taking on a trailblazing role, which has been
good for the project, she said. Randal took on the benevolent dictator
role for the Perl Foundation some time ago in order to move it from that
model to a
committee-based one. Essentially, there was too much for a benevolent
dictator to do, which was creating a bottleneck that might have seriously
harmed the project, but a committee-oriented governance model solved that problem.
She also pointed to Parrot as a project that was using a bad model for its
development. Parrot switched to Git, but uses a centralized scheme with a
branching and merging strategy, rather than something more like the
Linux kernel strategy. It's not Git's fault, she said, "tools can be
misused". While Parrot is getting things done, it is not operating
optimally, which makes it hard for new developers and in general gets in
the way of the project's development.
[ I would like to thank LWN subscribers for supporting my travel to LPC. ]
Page editor: Jonathan Corbet