By Jonathan Corbet
February 1, 2010
Recently, Google announced that its operations in China (and beyond) had
been subject to sophisticated attacks, some of which were successful; a
number of other companies have been attacked as well. The source of these
attacks may never be proved, but it is widely assumed that they were
carried out by government agencies. There are also
allegations
that the East Anglia email leak was a government-sponsored operation.
While at LCA, your editor talked with a developer who has recently found
himself at Google; according to this developer, incidents like these
demonstrate that the security game has changed in significant ways, with
implications that the community can ignore only at its peril.
Whenever one talks about security, one must do so in the context of a
specific threat model: what are we trying to defend ourselves against?
Different threat models lead to very different conclusions. For years, one
of the most pressing threats has been script kiddies and others using
well-known vulnerabilities to break into systems; initially these break-ins
were mostly for fun, but, over time, these attackers have increasingly had
commercial motivations. In response, Linux distributors have created
reasonably secure-by-default installations and effective mechanisms for the
distribution of
updates. As a result, we are, by default, quite well defended against this
class of attack when carried out remotely, and moderately well defended
against canned local attacks.
Attackers with more determination and focus are harder to defend against;
somebody who intends to break into a specific system in pursuit of a
well-defined goal has a better chance of success. Chances are, only the
most hardened of systems can stand up against focused attackers with local
access. When these attackers are at the far end of a network connection,
we still stand a reasonable chance of keeping them out.
Often, those concerned with security simply throw up their hands when
confronted with the problem of defending a system against an attacker who
is working with the resources available to national governments. Most of
us assume that we'll not be confronted with such an attack, and that
there's little that we could do about one if we were. When governmental
attackers can obtain physical access, there probably is little to be done,
but remote (foreign) governmental attackers may not be able to gain that
sort of access.
[PULL QUOTE:
What the attacks on Google (and others) tell
us is that we've now entered an era where we need to be concerned about
attacks from national governments.
END QUOTE]
What the attacks on Google (and others) tell us is that we've now entered
an era where we need to be concerned about attacks from national
governments. We have probably been in such an era for a while now, but the
situation has become increasingly clear. It makes sense to think through the
implications.
A look at updates from distributors shows that we still have a steady
stream of vulnerabilities in image processing libraries, PDF viewers, Flash
players, and more. Some of these problems (yet another PNG buffer
overflow, say) may seem to merit a relatively low priority, but they
shouldn't be dismissed. Media-based attacks can only become more common over time; it's
easy to get a victim to look at a file or go to a specific web page.
Properly targeted phishing (easily done by a national government) may be
the method of choice for compromising specific systems for some time to
come. Browsers, file viewers, and media players will play an unfortunate
role in the compromise of many systems.
What may be even more worrisome, though, is the threat of back doors,
trojan horses, or (perhaps most likely) subtle vulnerabilities inserted
into our software development and distribution channels. This could happen
at just about any stage in the chain.
On the development side, we like to think that code review would find
deliberately coded security weaknesses. But consider this: kernel code
tends to be reviewed more heavily than code in many other widely-used
programs, and core kernel code gets more review than driver code. But none
of that was able to prevent the vmsplice()
vulnerability - caused by a beginner-level programming error - from
getting into the mainline kernel. Many more subtle bugs are merged in
every development cycle. We can't ever catch them all; what are our
chances against a deliberately-inserted, carefully-hidden hole?
Source code management has gotten more robust in recent years; the
widespread use of tools like git and mercurial effectively guarantees that
an attempt to corrupt a repository somewhere will be detected. But that
nice guarantee only holds as long as the hash algorithms used to identify
commits remain resistant to brute-force collision attacks. One should be
careful about such assumptions when the
computing resources of a national government can be brought to bear. We
might still detect an attempt to exploit a hash collision - but our chances
are not as good.
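The tamper-evidence described above comes from content addressing: git names every object by a hash of its bytes, so altering anything in history changes the identifiers that everything downstream depends on. A minimal sketch of git's actual blob-hashing scheme (SHA-1 over a "blob <size>\0" header plus the file content) shows why an attacker must produce a hash collision, not merely edit a file:

```python
import hashlib

def git_blob_hash(content: bytes) -> str:
    # Git identifies a file's contents by SHA-1 over a small header
    # ("blob <size>\0") followed by the raw bytes.  Commits in turn
    # hash the tree IDs beneath them, so changing even one byte of one
    # file ripples up and changes the commit ID - unless the attacker
    # can find a colliding input with the same hash.
    header = b"blob %d\x00" % len(content)
    return hashlib.sha1(header + content).hexdigest()

# The empty blob has a well-known ID in every git repository:
print(git_blob_hash(b""))  # e69de29bb2d1d6434b8b29ae775ad8c2e48c5391
```

This is why corrupting a mirror is detectable - the substituted bytes no longer match their advertised ID - while an attacker who can compute collisions against the underlying hash could, in principle, swap in different content under the same identifier.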
In any case, the software that ends up on our systems does not come
directly from the source
repositories; distributors apply changes of their own and build binary
packages from that source. The building of packages is, one hopes,
relatively robust; distributors have invested some significant resources
into package signing and verification mechanisms. The Fedora and Red Hat
intrusions show that this link in the chain is indeed subject to attack,
but it is probably not one of the weakest links.
A weaker point may be the source trees found on developer laptops and the
patches that those developers apply. A compromise of the right developer's
system could render the entire signing mechanism moot; it would simply sign
code that had already been corrupted. Community distributions, which
(presumably) have weaker controls, could be especially vulnerable to this
attack vector. In that context, it's worth bearing in mind that
distributions like Debian and Gentoo - at least - are extensively used in a
number of sensitive environments. Enterprise distributions might be
better defended against the injection of unwanted code, but the payback for
the insertion of a hole into an enterprise distribution could be high.
Users of community rebuilds of enterprise distributions (LWN being one of
those) should bear in mind that they have added one more link to the chain
of security that they depend on.
Then again, all of that may be unnecessary; perhaps ordinary bugs are
enough to open our systems to sufficiently determined attackers. We
certainly have no shortage of them. One assumes that no self-respecting,
well-funded governmental operation would be without a list of undisclosed
vulnerabilities close at hand. They have the resources to look for unknown
bugs, to purchase the information from black-hat crackers, and to develop
better static analysis tools than we have.
All told, it is a scary situation, one which requires that we rethink the
security of our systems and processes from one end to the other. Otherwise
we risk becoming increasingly vulnerable to well-funded attackers. We also
risk misguided and destructive attempts to secure the net through
heavy-handed regulation; see this ZDNet article for a
somewhat confusing view of how that could come about.
The challenge is daunting, and it may be insurmountable. But, then, we as
a community have overcome many other challenges that the world thought we
would never get past, and the attacks seem destined to happen regardless of
whether we try to improve our defenses. If we could achieve a higher level
of security while preserving the openness of our community and the vitality
of our development process, Linux would be even closer to World Domination
than it is now. Even in the absence of other minor concerns - freedom, the
preservation of fundamental civil rights, and the preservation of an open
network, for example - this goal would be worth pursuing.