Infrastructural attacks on free software
Less attention has been paid to the cost of having the Debian servers be unavailable for the better part of a week. Your editor, waiting for a working version of psycopg to be uploaded to unstable, was merely inconvenienced. Other users, who may have planned significant installations or upgrades, or who were trying to discuss problems with Debian developers, will have been rather more inconvenienced. Debian developers, trying to get 3.0r2 out the door, were stopped dead for a while. These consequences are costly enough by themselves, but consider what could happen. Had a major security incident broken out while the Debian servers were unavailable, it would have been difficult or impossible for the project to respond quickly.
Linux systems are living things; even the most stable systems need
occasional updates to stay secure. Linux users depend on the availability
of their distributions' supporting infrastructure to keep their systems up
to date. This sort of attack, by making that infrastructure unavailable,
hurts users worldwide, and could leave them unable to respond quickly to
serious security problems. Once again, we have been warned that our
infrastructure is too fragile and insufficiently secure.
Posted Nov 26, 2003 3:21 UTC (Wed)
by smoogen (subscriber, #97)
[Link] (9 responses)
Posted Nov 26, 2003 6:30 UTC (Wed)
by proski (subscriber, #104)
[Link] (6 responses)
Posted Nov 26, 2003 8:55 UTC (Wed)
by walles (guest, #954)
[Link] (3 responses)
I have to disagree with the part about not necessarily running your own distro. If the distro isn't good enough for the distributor, how could it be good enough for anybody else? If a distro is too insecure, the distributor should fix it, not avoid it. Dogfood is an excellent principle. I agree on everything else you wrote though.
Posted Nov 26, 2003 9:20 UTC (Wed)
by Robin.Hill (subscriber, #4385)
[Link] (2 responses)
Posted Nov 26, 2003 10:36 UTC (Wed)
by stuart (subscriber, #623)
[Link] (1 responses)
But Debian is (or wants to be) the "Universal Operating System" and hence dogfood is very much in. Stu.
Posted Dec 5, 2003 3:26 UTC (Fri)
by eread (guest, #1918)
[Link]
Perhaps they might consider using a special high-security distro for critical systems. Thanks.
Posted Nov 26, 2003 15:14 UTC (Wed)
by freethinker (guest, #4397)
[Link] (1 responses)
Your other points are good, but I have to take issue with this one. While running the Debian servers on, say, OpenBSD would be somewhat more secure, it would also force Debian maintainers to keep up with OpenBSD developments and be expert users of two very different systems. It would also make a critical part of the Debian infrastructure dependent on an entirely separate project. Neither is desirable. Besides, while no one can match OpenBSD's record, Debian is no slouch. The distro has an excellent reputation for security. I very much doubt anyone will crack these machines again, now that they've had this wake-up call.
Posted Nov 27, 2003 19:43 UTC (Thu)
by NAR (subscriber, #1313)
[Link]
I disagree. Only the sysadmins of the servers should be "OpenBSD experts" - a simple user might not even notice if he's on OpenBSD instead of Debian. The basic UNIX tools are the same everywhere (and the GNU tools can be installed, if needed) and I don't think a package maintainer needs any more than that on the main servers.
Posted Nov 26, 2003 18:30 UTC (Wed)
by Ross (guest, #4065)
[Link]
It's hard to offer advice with so few details about what happened. But we can always try and give generic advice :)

Besides the typical system security steps (disabling extra services, applying patches in a timely manner, using good passwords, etc.) I can think of five actions which can help prevent problems like this: minimize access, separate services, reduce dependencies, diversify implementations, and increase redundancy.

The same people should not be responsible for all critical systems, and the number of people with access to any of those systems should be kept low.

The servers should be limited to one task. The Debian crew might already be doing a good job of this. I'm not sure.

The servers should avoid dependencies. Dependencies mean that a single problem will cause outages on all systems.

This one is controversial. The systems shouldn't all be running the same software. The problem with running the same software on every system is that the chances they will all be vulnerable to an attack at the same time are much higher. The downside is that managing the systems becomes more difficult.

Each system should have a backup, probably in a different physical location, connected to a different network, which can serve as a replacement while the main system is being restored, examined, replaced, etc.

One final piece of advice is to make installation simple and repeatable. This may require careful documentation of what changes are made after the system is installed, or creating special install images. This can help you restore a system quickly in many situations. Having an identical spare system can be even more useful because you don't have to worry about erasing evidence or valuable information on the original hard drive(s).
Posted Nov 27, 2003 23:58 UTC (Thu)
by daniels (subscriber, #16193)
[Link]
Posted Nov 26, 2003 15:34 UTC (Wed)
by mmarsh (subscriber, #17029)
[Link]
Posted Nov 27, 2003 8:28 UTC (Thu)
by gleef (guest, #1004)
[Link] (3 responses)
The author suggests:

> Had a major security incident broken out while the Debian servers were unavailable, it would have been difficult or impossible for the project to respond quickly.

I disagree. The Debian project had control over DNS throughout the crisis, and had volunteers with servers and bandwidth willing to help. Had a major security issue hit, it would have been easy for them to set up a security repository, with a patch for the issue on it, on a volunteer's machine (or several), and point DNS for security.debian.org at that machine.

For an added bit of trust, Martin Schulze (or Matt Zimmerman, or another trusted and visible Debian developer) could have posted the MD5 sums of the updated packages onto the debian-security-announce list in a PGP/GPG signed email, so you can validate that a trusted person is saying that these are the signatures of the trusted packages. Actually, that's standard operating procedure in Debian, system compromise or not.

I, for one, feel that the Debian developers performed spectacularly in this crisis. They showed that not only do they have the infrastructure to keep things under control (and fix them quickly) when inevitable problems arise, but they have engineered room to spare if an even worse problem, or series of problems, were to come by.
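The verification step described here boils down to recomputing a package's MD5 sum locally and comparing it against the sum quoted in the signed announcement. A minimal sketch — the package name and contents below are invented for illustration, and verifying the GPG signature on the announcement itself is elided:

```python
import hashlib

def md5_hex(path):
    """Compute a file's MD5 digest, as md5sum would print it."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Simulate a package fetched from a volunteer mirror (hypothetical name/content).
pkg = "psycopg_1.1.10-1_i386.deb"
with open(pkg, "wb") as f:
    f.write(b"not a real package\n")

# The sum the signed announcement email would have listed for this file.
announced = hashlib.md5(b"not a real package\n").hexdigest()

# The check: recompute locally and compare against the announced sum.
ok = (md5_hex(pkg) == announced)
print("MD5 matches signed announcement:", ok)
```

The trust here comes entirely from the signature on the email carrying the sums; the download host itself need not be trusted.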
Posted Nov 27, 2003 12:04 UTC (Thu)
by ajk (guest, #6607)
[Link] (2 responses)
> For an added bit of trust, Martin Schulze (or Matt Zimmerman, or another trusted and visible Debian developer) could have posted the MD5 sums of the updated packages onto the debian-security-announce list in a PGP/GPG signed email

The mailing list server machine was among those compromised, and thus the lists were out of order for several days. It would have taken a lot of time and effort to set up a temporary lists.debian.org, assuming that a backup system was not already set up. Therefore, your idea would not have worked.
Posted Nov 27, 2003 23:57 UTC (Thu)
by daniels (subscriber, #16193)
[Link]
Posted Dec 4, 2003 9:11 UTC (Thu)
by jaalwn (guest, #17500)
[Link]
I think this hits the crux of the matter. Irrespective of how the attack occurred, and what had to be done to restore the compromised servers, the real issue was that key Debian services went down, and some services are still down today. I believe that the folks involved reacted rapidly and appropriately in dealing with the compromise; however, there appears to have been no resource allocated to maintaining continuous service, and hence the disruption to the community.

Imagine an alternative stream of events:
* some hosts discovered to be compromised
* these hosts are isolated / frozen / services go down
> integrity of services are verified
> redundant hosts are activated, service resumes
* forensics are performed on compromised hosts
* compromised hosts are purged / rebuilt
* rebuilt hosts are brought back on-line
* service is switched back to primary hosts

I feel that the biggest disruption was in the communication channels - many folks did not receive the 21st Nov announcement until 25th Nov - it was sent via a host that was then taken down shortly after being posted. There was also extreme pressure on those performing the forensics to diagnose and cure the problem - because services were down until the diagnosis and cure could be implemented.

I don't think it is possible to guarantee publicly exposed internet hosts as secure. But it IS possible to minimise disruption and provide continuity of service, with some forethought and planning.

All in all, I believe the Debian effort was commendable, and I will be sticking with the dist. I do hope that next time [yes - it will happen!] the communication channels will be maintained - they are an essential part of the community's security.

Jeff
Posted Dec 3, 2003 16:21 UTC (Wed)
by Klavs (guest, #10563)
[Link] (1 responses)
A good way to fix such an issue - and allow users to interact/develop software - would be to use vservers (virtual servers; see linux-vserver.org), or perhaps UserMode Linux (which runs separate kernels for each virtual server) would do the trick - and avoid kernel exploits from compromising the system. This would also make an attempted compromise a lot more easily detectable.
Posted Dec 5, 2003 4:38 UTC (Fri)
by khim (subscriber, #9252)
[Link]
> ...and avoid kernel exploits from compromising the system...

Huh? What crack are you smoking? No, really? A kernel exploit is a kernel exploit. It's the end of the story, period. Once you have a kernel exploit you can easily break out of a vserver, and it's only marginally harder to break out of UserMode Linux. And currently the kernel is big and monolithic. That's why the HURD is not abandoned, you know.
And how should we fix it? It is easy to say 'things are too fragile and insecure', but it is much harder to come up with sustainable ways to improve the situation.
Don't put more than one service on the same machine.
Don't put too many users with shell access on the same machine.
Separate users into groups with different permissions to make it easier to find compromised accounts.
Use remote logging so that the logs cannot be erased by successful attackers.
Don't choose software for servers based on marketing or political considerations (i.e. we just have to run our distribution).
Monitor critical files.
Insulate development machines from the key infrastructure (web server, FTP).
Use security features of the OS when possible (capabilities, ACLs, chroot, system levels).
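The "remote logging" item has a classic one-line implementation: traditional syslogd can forward a copy of every message to a separate log host, so an intruder who gains root cannot quietly erase the only copy of the evidence. A minimal /etc/syslog.conf fragment - the host name loghost.example.org is a placeholder:

```
# /etc/syslog.conf -- also send everything to a dedicated log host,
# so local tampering cannot erase the only copy of the logs
*.*	@loghost.example.org
```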
> Don't choose software for servers based on marketing or political
> considerations (i.e. we just have to run our distribution).
That depends to a large extent on what market the distro is aimed at. If it's targeting the desktop/games market then you should definitely be picking another distro for the server.
Dogfood
Quite.
I would say that Debian wants to be general purpose.
> Don't choose software for servers based on marketing or
> political considerations (i.e. we just have to run our
> distribution).
While running the Debian servers on, say, OpenBSD would be somewhat more secure, it would also force Debian maintainers to keep up with OpenBSD developments and be expert users of two very different systems.
Take people out of the loop?
No, seriously, if I had to choose between my family/SO/mates, and my GnuPG
passphrase, I know which one I'd be giving up. People suck, and anything
involving people will never be secure, ever.
If we really want to run with the "systems are living things" metaphor, perhaps we should start playing up the multiplicity of Linux distributions as a good thing. After all, mixing and matching the best of several distributions to make a new and better distribution is what evolution is all about, right? The really nice thing is that software packages with compatible licenses can be similarly "inter-bred". What becomes important then is having a good set of standards and an easy upgrade/downgrade/crossgrade path from distro to distro or app to app. I'm not sure to what extent the LSB addresses this, though my impression was that it was more concerned with standardizing configurations rather than mechanisms.
To carry a metaphor further...
Err, a few security team members are also DSA (admin team), so it wouldn't have been too difficult for them to extract the debian-security-announce subscription list from murphy. They wouldn't have had to have murphy up to send it out; therefore the idea could well have worked.
I trust the security team and the DSA enough to believe that they would've come
up with a more-than-satisfactory solution to the hypothetical problem, had it
arisen. People don't get to DSA/security because they just went and drank with
the right people - they're there for a reason.
IMHO the big mistake was having users on a system that does anything other than serve users.
A more precise setup description would depend on the needs of these users, and indeed it's a good idea to divide them into groups (based on access needs) and give these groups separate vservers.
I know of many people running linux-vserver.org on Debian servers (and Gentoo and RedHat :)