
On the security of our processes and infrastructure

By Jonathan Corbet
September 8, 2011
By now most LWN readers will be well aware of the compromise of the systems running kernel.org. The services provided by kernel.org have been offline since that time, with the result that the flow of changes into the 3.1-rc kernel has slowed considerably. Kernel.org will eventually come back, perhaps with some significant policy changes. But the real effect may be a wider discussion of security within the development community, which can only be a good thing.

This compromise is far from the first that we have seen in our community. Numerous projects and companies have had their systems broken into at times; in some of those incidents, the attackers have replaced distributed code with versions containing trojans or backdoors. Think back to the OpenSSH and sendmail compromises, for example. Kernel.org suffered a compromise (smaller in extent) in 2010; there was also an attempt to insert a backdoor into the kernel source back in 2003. In general, these attempts have been caught quickly, and there is little (known) history of compromised code being distributed to users. Cases where backdoors and other misfeatures have actually been distributed have typically not been the result of attacks; some readers will remember the InterBase backdoor, for example, which predated that project's release as open source.

So, while the history of attacks is unnerving, the actual results in terms of compromised systems have not been all that bad. So far.

Whether this attack on kernel.org has had a worse outcome is not yet known. As your editor wrote in a different venue, it is quite unlikely that the mainline Linux source repository has been corrupted; git makes it almost certain that any such attempt would be detected quickly. But that article was deliberately limited in scope; there are many possible attack vectors that do not involve a direct attempt to corrupt Linus's repository. Ruling out these other attacks will be harder than verifying the integrity of the mainline git tree.
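
Git's resistance to this kind of tampering comes from the way it names things: every object is identified by the SHA-1 hash of its contents, and every commit names its tree and its parent commits by those hashes, so altering any byte of history changes every descendant commit ID - something the many existing clones of the repository would immediately disagree with. As a minimal sketch of that naming scheme (the file contents below are made up), this is how git computes the ID of a blob:

    import hashlib

    def git_object_id(obj_type: str, data: bytes) -> str:
        # git names every object by the SHA-1 of "<type> <size>\0" followed
        # by the raw content; the result matches what `git hash-object`
        # reports for a file containing the same bytes.
        header = f"{obj_type} {len(data)}".encode() + b"\x00"
        return hashlib.sha1(header + data).hexdigest()

    original = b'printk("hello\\n");\n'
    tampered = b'printk("hello\\n"); call_backdoor();\n'

    print(git_object_id("blob", original))   # one ID
    print(git_object_id("blob", tampered))   # a completely different ID

Since commit objects are hashed the same way, and include their parents' hashes, a modified file ripples up into different commit IDs all the way to the branch head - a change that would be glaringly visible on the next fetch.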

For example, kernel.org distributes tarballs and flat-file patches that are not as easy to verify. For obvious reasons, comparing those files against the checksums stored in the same directory is not considered to be adequate at this point. Kernel.org also serves as a mirror site for a wide range of other projects and distributions. Verifying all of those mirrored files will not be a quick exercise.
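
A more meaningful check is against a reference that did not come from the (possibly compromised) server at all: a digest published through a separate channel, or a detached signature made with a key that never lived on kernel.org. The following is a minimal sketch of such a check; the tarball name and the reference digest are placeholders rather than real kernel.org release data:

    import hashlib

    def sha256_of(path: str) -> str:
        # Hash the file in chunks so that large tarballs need not fit in memory.
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                digest.update(chunk)
        return digest.hexdigest()

    # A digest obtained out of band - from a signed release announcement, a
    # distributor's package metadata, a maintainer's own records - rather than
    # from a checksum file sitting next to the tarball on the same server.
    TRUSTED_DIGEST = "placeholder-digest-obtained-out-of-band"

    if sha256_of("linux-3.0.4.tar.bz2") != TRUSTED_DIGEST:
        raise SystemExit("tarball does not match the out-of-band checksum")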

There is also concern about kernel repositories maintained by other developers that feed patches into the mainline. It is not uncommon to create a throwaway branch for merging; that branch is often deleted (or simply forgotten about) after the pull is done. Changes made to such a branch between its creation and the time it is pulled into the mainline could go undetected. There are ways to avoid this possibility - simply including the commit ID for the head of that branch in the pull request, for example - but that is not routinely done now.
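
As a rough illustration of that suggestion - not a description of any existing kernel.org workflow - a maintainer handling a pull request that quoted the commit ID at the head of the branch could check it mechanically before merging. The repository URL, branch name, and advertised ID below are all placeholders:

    import subprocess

    def remote_head(url: str, branch: str) -> str:
        # Ask the remote repository which commit the branch currently points to.
        out = subprocess.check_output(
            ["git", "ls-remote", url, "refs/heads/" + branch])
        if not out:
            raise SystemExit("no such branch: " + branch)
        return out.split()[0].decode()

    # The commit ID quoted in the pull request (placeholder value).
    ADVERTISED = "commit-id-quoted-in-the-pull-request"

    actual = remote_head("git://example.org/subsystem.git", "for-linus")
    if actual != ADVERTISED:
        raise SystemExit("branch head is %s, not the advertised %s"
                         % (actual, ADVERTISED))

A mismatch would not prove malice - the branch might simply have been rebased - but it would at least force the question to be asked.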

All recently used branches on all kernel.org-hosted repositories should be checked for tampering, but to focus on that threat is to miss the bigger picture. Every tree feeding into the mainline is a possible way for malicious code to get into the kernel, but the compromise of kernel.org has not changed the situation much, for a couple of reasons:

  • All of those trees originate outside of kernel.org, so each one lives on at least one other system which is also a target for attack. Often that other system is a developer laptop. Anybody who has attended a few developer conferences has seen a long line of laptop bags against the wall at meals and receptions; it would not be all that hard to borrow one for the time it takes to drink a beer or two. Those systems and their owners are also all subject to all the usual forms of remote attack: corrupt PDF files, social engineering, etc. In many of these cases, a successful attack is less likely to be detected than it is on a site like kernel.org.

  • In our normal development process, with proper code review and no compromised systems, we still insert security vulnerabilities into the kernel - and most other projects as well. So it is a bit of a stretch to say that we would detect an attempt to deliberately add vulnerable code through the normal patch submission process. We just don't have enough people to review code in general; people who are willing and able to do a proper security review are even harder to come by. The community could have 100% secure infrastructure and still be vulnerable to attack.

Kernel.org will be back soon, possibly in a more secure mode. It might make sense to ask, for example, whether it is really necessary to have 450 shell accounts on such an important system. But it seems clear that a stronger kernel.org, as important as that is, will not make our security worries go away. Given the incentives that exist, there will certainly be more attacks, and some of those attacks will originate in highly competent, well-funded organizations. Those attacks might overwhelm even a reinforced kernel.org, but attackers need not focus their attention on just that one target.

What is needed is to make the entire system more robust. A discussion started at the Linux Plumbers Conference centers on the creation of a "compilation sandbox" to defend developers (and users) against malicious code inserted into makefiles or configuration scripts, for example. Defending against malicious kernels will be rather harder, but it merits some thought. Someday, perhaps, we'll have static analysis tools that can find an increasing variety of security problems before they are distributed to users. There's a lot that can be done to block future attacks.
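
To make the sandbox idea slightly more concrete - and this is only a sketch of one possible mechanism, not the design being discussed at the conference - a build could be run with no network access inside throwaway user and PID namespaces, so that a hostile makefile or configure script has nowhere to phone home to and no privileges to abuse. The sketch assumes a Linux system whose unshare(1) supports unprivileged user namespaces:

    import subprocess
    import sys

    def sandboxed_build(source_dir: str) -> int:
        # Run make inside fresh namespaces: a throwaway user namespace, an
        # empty network namespace (nothing to phone home to), and a private
        # PID namespace. This contains some mischief; it is not a complete
        # sandbox by itself.
        cmd = [
            "unshare",
            "--user", "--map-root-user",
            "--net",
            "--pid", "--fork",
            "make", "-C", source_dir,
        ]
        return subprocess.call(cmd)

    if __name__ == "__main__":
        sys.exit(sandboxed_build(sys.argv[1] if len(sys.argv) > 1 else "."))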

But, as Bruce Schneier has often said, security efforts focused exclusively on prevention are doomed to fail; there is a strong need for detection and mitigation efforts as well. We are not as good in those areas as we should be; the fact that the kernel.org compromise went unnoticed for days, even when the system was experiencing unexplained kernel crashes, makes that clear. We need to improve our ability to detect successful attacks and reduce the damage that those attacks can cause. Because there will be more attacks, and some of them will succeed.

This isn't about script kiddies anymore; it hasn't been for a while now. The compromise of kernel.org needs to be seen as part of a wider pattern of attacks on high-profile sites - Google, DigiNotar, RSA Security, etc. At a minimum, large amounts of money are involved; it is not an exaggeration to say that, in some cases, lives are at stake. The continued success of free software depends on our ability to deal with this threat, and to do so without compromising the openness on which our community depends. It is a hard problem, but not an impossible one. We have solved many hard problems to get as far as we have; we can deal with this one too.



On the security of our processes and infrastructure

Posted Sep 9, 2011 2:08 UTC (Fri) by koverstreet (subscriber, #4296)

I think the value of GPG signing every commit (wherever we would sign off on a commit today) ought to be clear now...

On the security of our processes and infrastructure

Posted Sep 9, 2011 2:46 UTC (Fri) by rweir (subscriber, #24833)

that doesn't really help with the "compromised dev laptop" attack, though.

On the security of our processes and infrastructure

Posted Sep 9, 2011 8:07 UTC (Fri) by Klavs (subscriber, #10563)

If the dev used a PKCS#11 interface to sign (using a smartcard with a PIN or whatever), for example - that would help with that :)

But still - even if commits were signed, it would just take more patience and a keylogger.

On the security of our processes and infrastructure

Posted Sep 9, 2011 16:39 UTC (Fri) by JoeBuck (subscriber, #2330)

No, it wouldn't help. If the developer's system is compromised, the rootkit could see and intercept every action. The rootkit would simply wait for the developer to sign a commit, and then apply that signature to a different commit. The fact that the developer also had to enter a token from a smartcard or get her iris scanned is no defense if someone else owns the developer's machine.

On the security of our processes and infrastructure

Posted Sep 9, 2011 7:44 UTC (Fri) by geofft (subscriber, #59789)

"Anybody who has attended a few developer conferences has seen a long line of laptop bags against the wall at meals and receptions"

Just... stop doing that.

There are a couple of approaches. I'm that weird guy who keeps my laptop bag with me at all times (when I don't leave it at home, and if people are breaking into my home I have bigger problems). It's a little awkward to have a bag with you / under your table at dinners and such, but hey, we're all programmers, we can be awkward.

If you set out with the approach that your laptop is dangerous if it isn't in your hands, you quickly adapt to carrying it everywhere, or leaving it in advance in safe places like locked to your office table.

You can also set an admin password in your BIOS and disable booting to external drives, set a GRUB password, and lock your screen when you walk away. While it's not enough to deter a determined "Evil Maid"-style attacker who's willing to open your laptop, it's probably good enough. (This worked better on my netbook, which didn't have an easily removable internal drive, even if you opened the case.)

Finally, we could as a community figure out how the heck you're supposed to use the TPM and trusted boot and all this fun stuff. I would really like the ability to create a trusted container/VM on my laptop, and I know the hardware technology exists, but I can't figure out how to use any of the free software support for it. It should get built into the desktop the way things like NetworkManager are.

On the security of our processes and infrastructure

Posted Sep 9, 2011 10:31 UTC (Fri) by NAR (subscriber, #1313)

"Anybody who has attended a few developer conferences has seen a long line of laptop bags against the wall at meals and receptions"
Just... stop doing that.

At a conference I never dared to leave my laptop alone. Not because I was afraid that someone would break the screensaver lock, but because I was afraid someone might simply steal it...

On the security of our processes and infrastructure

Posted Sep 9, 2011 12:29 UTC (Fri) by jengelh (subscriber, #33263)

In that regard, the grandparent poster's laptop must be quite vintage, or thoroughly defaced with stickers, for it not to be stolen when left unattended :)

On the security of our processes and infrastructure

Posted Sep 10, 2011 3:23 UTC (Sat) by geofft (subscriber, #59789)

Oh, certainly, at a hotel or at certain university buildings like libraries, I'd worry about theft primarily.

At other university buildings I'd worry more about pranksters. :)

On the security of our processes and infrastructure

Posted Sep 15, 2011 16:37 UTC (Thu) by slashdot (guest, #22014)

Just encrypt the whole hard drive with cryptsetup-luks and turn off the laptop when you leave it unattended.

An attacker can still corrupt the hard drive or steal the machine, but almost surely won't achieve anything beyond forcing you to buy a new machine and restore backups.

On the security of our processes and infrastructure

Posted Sep 18, 2011 0:36 UTC (Sun) by ccurtis (guest, #49713)

> You can also set an admin password in your BIOS and disable booting to external drives, [...]

Or, as I do (out of necessity, really), remove the hard drive from the laptop and only boot from external drives. It's a lot easier to carry around a portable hard drive than a laptop anyway.

On the security of our processes and infrastructure

Posted Sep 23, 2011 17:27 UTC (Fri) by oak (guest, #2786)

In this case you could make the OS on the internal hard drive do "interesting" things if somebody ever happens to boot it, like:
* Log anything the user does (URLs, passwords, etc.), take photos with the webcam
* Scan for WLAN networks & connect to one
* If that fails, use a few-dollar/euro prepaid SIM to make a cellular connection instead
* "Call home" with the nearby WLAN & cellular base station info needed to locate the laptop and identify its thief
* If "home" says the device should do something, first disable the volume & power-off keys
* Then start blinking the screen & blasting from the tinny speakers something like "I'm stolen, please call the police"

Whoever steals that device will probably remember it for a while and maybe even avoid geek conferences in the future...

On the security of our processes and infrastructure

Posted Sep 9, 2011 14:50 UTC (Fri) by malor (subscriber, #2973)

> This isn't about script kiddies anymore; it hasn't been for a while now. The compromise of kernel.org needs to be seen as part of a wider pattern of attacks on high-profile sites - Google, DigiNotar, RSA Security, etc.

I wonder if this will do anything to convince the kernel devs that the security community is not just theater? Their determined insistence on hiding security fixes is probably going to end up causing people's deaths.

It would be particularly notable if the kernel.org compromise turned out to stem from a vulnerability whose fix wasn't rolled out because it wasn't correctly labeled as a security fix.

On the security of our processes and infrastructure

Posted Sep 13, 2011 11:29 UTC (Tue) by mpr22 (subscriber, #60784)

My impression is that the kernel devs' position is approximately "if you rely on a magic flag to tell you whether a given fix is a security fix, you have a security problem and you should investigate it".

On the security of our processes and infrastructure

Posted Sep 15, 2011 16:39 UTC (Thu) by slashdot (guest, #22014)

You are just supposed to "roll out" ALL fixes.

On the security of our processes and infrastructure

Posted Sep 17, 2011 4:28 UTC (Sat) by malor (subscriber, #2973)

Which means you inevitably must also accept a bunch of new features that haven't been completely thought out or tested, resulting in yet more patches resulting in yet more untested features resulting in yet more patches. It's a never-ending stream of 'which insecurity do I have this week?'

At this point, if I had data that would threaten my livelihood or life if it leaked, I would never, never not EVER put it on a Linux box.

On the security of our processes and infrastructure

Posted Sep 17, 2011 20:00 UTC (Sat) by jrn (subscriber, #64214)

> Which means you inevitably must also accept a bunch of new features

Have you looked at a linux-stable (i.e., 3.x.y) kernel recently?

On the security of our processes and infrastructure

Posted Sep 19, 2011 12:31 UTC (Mon) by mpr22 (subscriber, #60784)

I certainly know that at this point, if I were the maintainer of an OS kernel I would flag all fixes to the kernel as security fixes, simply because (a) selectively flagging fixes is subject to human error (b) there are too many people out there who can't be trusted to know the difference between "A implies B" and "not-A implies not-B". (In this case, A is "the fix is flagged as a security fix" and B is "omitting the fix has negative implications for the security of the system".)

Out of interest, what OS would you trust to keep such information safe? (For my part, I think the right solution there is to keep the information strongly encrypted, and never let the keys reside - even in volatile storage - on a network-connected device.)

On the security of our processes and infrastructure

Posted Sep 10, 2011 2:08 UTC (Sat) by fuhchee (subscriber, #40059)

"as Bruce Schneier has often said, security efforts focused exclusively on prevention are doomed to fail; there is a strong need for detection and mitigation efforts as well."

This is just a recurrent straw man that comes up when Bruce is busy raging at the TSA. In reality, people do not focus exclusively on prevention.

On the security of our processes and infrastructure

Posted Sep 11, 2011 20:03 UTC (Sun) by juhah (subscriber, #32930)

Well worth a read: Protecting a Laptop from Simple and Sophisticated Attacks.

Copyright © 2011, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds