The backdooring of SquirrelMail
It only took one day, though, before Uwe Schindler pointed out that, in fact, the changes made to the source opened a remote-execution back door into deployed SquirrelMail systems. Somewhere along the way, the project discovered that the 1.4.11 release had also been tampered with. The SquirrelMail developers released version 1.4.13 to close the vulnerabilities.
There have not been any public reports of systems being compromised by way of this vulnerability. Additionally, it would appear that all of the distributors that shipped the affected versions got their copies of the code prior to the attack. So the episode seems to have ended reasonably well - as far as we know. There are some lessons that one can take from this attack, though.
The initial downplaying of the problem was a potentially fatal mistake. If somebody has been tampering with the sources, there is no excuse not to go into red-alert mode immediately, even if the developers involved do not understand the attack. When a project has been compromised at such a fundamental level, one must assume the worst.
The compromise was discovered after a user noticed that the tarballs on the download site did not match the posted MD5 checksums. Your editor suspects that very few of us actually verify the checksums of the packages we take from the net. Doing so more often would be a good exercise in software hygiene for all of us.
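For those inclined to start, the check is easy to script. The following is a minimal sketch - the file name and the expected value are placeholders, not real SquirrelMail data - that recomputes a tarball's MD5 checksum and compares it with the published one:

    # Recompute a downloaded tarball's MD5 checksum and compare it to the
    # published value. The file name and expected value are hypothetical.
    import hashlib
    import sys

    def file_md5(path, chunk_size=1 << 20):
        digest = hashlib.md5()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    published = "..."   # paste the checksum published by the project here
    actual = file_md5("squirrelmail-1.4.13.tar.gz")
    if actual != published:
        sys.exit("checksum mismatch: expected %s, got %s" % (published, actual))
    print("checksum OK")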
That said, the project got lucky this time around. A smarter attacker would have replaced the checksums after adding the back door, making the changes harder to detect. Longer-term, the increasing doubts about the security of MD5 suggest that relying on it to detect changes to tarballs might not be entirely safe. Far better to use public-key signatures; they should have a longer shelf life, and, if the keys are managed properly, they are impossible to replace. It seems that the project has posted GPG signatures for 1.4.13, though the Wayback Machine suggests that this is a recent practice. Your editor was unable to find the public key needed to verify the signatures.
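Checking such a signature is just as scriptable once the project's public key has been obtained over some independent, trusted channel and imported into the local keyring. A rough sketch, again with hypothetical file names, simply wrapping the ordinary gpg command:

    # Verify a detached GPG signature on a release tarball by calling the
    # gpg command-line tool. File names are hypothetical; the signer's
    # public key must already be in the local keyring.
    import subprocess

    result = subprocess.run(
        ["gpg", "--verify", "squirrelmail-1.4.13.tar.gz.sig",
         "squirrelmail-1.4.13.tar.gz"],
        capture_output=True, text=True)
    if result.returncode != 0:
        raise SystemExit("signature did NOT verify:\n" + result.stderr)
    print("good signature:\n" + result.stderr)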
The modifications to the tarballs were done using a compromised developer's account. The specific changes made were not put into the SquirrelMail source repository. The project has said nothing, though, about what has been done to ensure that no other changes were made there. Some sort of statement from the project along these lines would be most reassuring to SquirrelMail's users.
Perhaps the most encouraging conclusion, though, is this: there have been several attempts to compromise source distributions over the years. Many of them have succeeded in getting bad code into high-profile packages. But none of these attacks - so far as we know - have escaped detection for any significant period of time, and none of them have led to any sort of wide-scale exploit. As a whole, we would appear to be reasonably resistant to this kind of attack, even when the front-line defenses fail. With luck, and continued vigilance, that trend will continue. Both will be required, though: there is no doubt that the attackers will keep trying.
Index entries for this article
Security: Backdoors
Security: Web application flaws
Far better to use public-key signatures
Posted Dec 20, 2007 5:39 UTC (Thu) by khim (subscriber, #9252)
[Link] (4 responses)
I can only say "huh?". It's certainly true that public-key signatures are impossible to replace if you don't have access to the private key. It's very much not true that they have a longer shelf life! If you try to sign a multi-megabyte archive using RSA or DSS directly, the process will take minutes if not hours, and verification will be just as slow - thus ALL public-key cryptography depends on "normal" hashes (usually SHA1 today) in practice! Of course, if MD5 or SHA1 is broken, a public-key signing scheme based on MD5 or SHA1 is broken as well...
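That is indeed how signing tools such as GnuPG behave: the expensive public-key operation is applied only to a small, fixed-size digest of the data. A simple illustration of the hashing half of that "hash then sign" arrangement (the file name is, again, only an example):

    # Illustration of the "hash then sign" pattern described above: the
    # archive is reduced to a small digest, and only that digest would be
    # fed to the (slow) RSA/DSA operation. The file name is hypothetical.
    import hashlib

    digest = hashlib.sha1()
    with open("squirrelmail-1.4.13.tar.gz", "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)

    print("archive reduced to %d bytes: %s"
          % (digest.digest_size, digest.hexdigest()))
    # A signing tool would now sign this 20-byte value rather than the
    # multi-megabyte archive - which is why the strength of the hash
    # matters as much as the strength of the public-key algorithm.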
Far better to use public-key signatures
Posted Dec 20, 2007 7:45 UTC (Thu) by anselm (subscriber, #2796)
[Link] (3 responses)
One short-term way of alleviating this problem could be by publishing (and signing) both an MD5 and an SHA-1 checksum of the archive(s) in question. Even if an ambitious attacker managed to find a way to compromise an archive such that its MD5 or SHA-1 checksum stayed the same while the modified code still made sense, finding such a compromise that kept both hashes identical would be that much more difficult. (For extra credit, use two hash functions that are not as closely related as MD5 and SHA-1, or add a third one.)
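A sketch of that kind of belt-and-suspenders check - computing both digests in a single pass over a (hypothetical) tarball and insisting that each match its published value - might look like this:

    # Check both the MD5 and SHA-1 digests of a downloaded archive against
    # published values; a tampered file would have to collide in both at
    # once. The file name and expected values are placeholders.
    import hashlib

    expected = {
        "md5": "...",    # paste the published MD5 checksum here
        "sha1": "...",   # paste the published SHA-1 checksum here
    }

    hashes = {name: hashlib.new(name) for name in expected}
    with open("squirrelmail-1.4.13.tar.gz", "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            for h in hashes.values():
                h.update(chunk)

    for name, h in hashes.items():
        if h.hexdigest() != expected[name]:
            raise SystemExit("%s mismatch: got %s" % (name, h.hexdigest()))
    print("all checksums match")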
Far better to use public-key signatures
Posted Dec 20, 2007 18:03 UTC (Thu) by hmh (subscriber, #3838)
[Link]
And, also add the size of the files. Might as well make things even a little more difficult for the attacker by reducing even more the set of possible streams he can use...
Far better to use public-key signatures
Posted Dec 20, 2007 19:19 UTC (Thu) by rise (guest, #5045)
[Link] (1 response)
Sadly it's been shown that using both hashes doesn't increase the work factor by very much.
Far better to use public-key signatures
Posted Dec 20, 2007 19:30 UTC (Thu) by smoogen (subscriber, #97)
[Link]
One has to take into account that, most times, when people say it doesn't increase the work load, they are talking about order-of-magnitude things... and that it fails to increase the factor only if certain assumptions hold. Finding a match between SHA1 and some unrelated 'pulled out of my butt' hash might only extend the time to see it by months or years versus decades... and is not trivial - it may only be 'trivial' to the mathematician who was testing it against a theoretical 10^20 years to find a match.
The backdooring of SquirrelMail
Posted Dec 20, 2007 9:23 UTC (Thu) by hickinbottoms (subscriber, #14798)
[Link] (4 responses)
Wouldn't it be helpful if the <a> tag could include a hash/signature (I'll refrain from suggesting which one), that the browser could use to verify the download automatically? Whilst that wouldn't plug the hole completely (the attacker may be able to compromise both the web site and the tarball), from the reading of this article it would have meant all downloaders would have been alerted to the compromise.
The backdooring of SquirrelMail
Posted Dec 20, 2007 16:21 UTC (Thu) by gerv (guest, #3376)
[Link] (2 responses)
I've been proposing this for some years now - http://www.gerv.net/security/link-fingerprints/ - and we even got as far as a draft RFC but it received a chilly reception from the IETF. :-(
Gerv
link fingerprints
Posted Jan 4, 2008 23:32 UTC (Fri) by roelofs (guest, #2599)
[Link] (1 response)
> I've been proposing this for some years now - http://www.gerv.net/security/link-fingerprints/ - and we even got as far as a draft RFC but it received a chilly reception from the IETF. :-(
Why stop with the IETF? This clearly falls equally under the W3C's purview--at least, if you consider implementing it as additional attributes to the anchor tag rather than welding it to URI syntax. It seems like an almost ideal XHTML or HTML4.x addition.
Greg
link fingerprints
Posted Jan 5, 2008 12:18 UTC (Sat) by gerv (guest, #3376)
[Link]
I wanted to make it part of the URI syntax because then it could be used even in non-HTML contexts - for example, in plain-text emails. But yes, perhaps if that's not going to be achievable, we could get a significant proportion of the benefits by going via WHAT-WG or W3C and adding a new attribute to HTML.
Gerv
link fingerprints
Posted Dec 20, 2007 20:44 UTC (Thu) by zooko (guest, #2589)
[Link]
I uploaded a package this morning -- zfec v1.3.1. If you give the following hyperlink to the "easy_install" tool, as in:
easy_install http://pypi.python.org/packages/source/z/zfec/zfec-1.3.1....
then easy_install will check that the md5 fingerprint of the resulting tarball matches the one in the URL fragment and stop with an error message if they don't match.
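The mechanism is easy enough to sketch; something along these lines (an illustration only, not easy_install's actual code) is all a download tool needs in order to honor an "#md5=..." fragment of the sort shown above:

    # Rough sketch of the link-fingerprint idea: download a URL carrying an
    # "#md5=..." fragment and refuse the file if the digest does not match.
    import hashlib
    import urllib.request
    from urllib.parse import urldefrag

    def fetch_with_fingerprint(url):
        # The fragment never reaches the server, so it cannot be rewritten
        # by whoever controls the download site - it travels with the link.
        clean_url, fragment = urldefrag(url)      # fragment e.g. "md5=1234abcd..."
        algo, _, expected = fragment.partition("=")
        data = urllib.request.urlopen(clean_url).read()
        actual = hashlib.new(algo, data).hexdigest()
        if actual != expected.lower():
            raise ValueError("fingerprint mismatch: expected %s, got %s"
                             % (expected, actual))
        return data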
The backdooring of SquirrelMail
Posted Dec 20, 2007 12:17 UTC (Thu) by scarabaeus (guest, #7142)
[Link] (3 responses)
> there have been several attempts to compromise source distributions over the years. Many of them have succeeded in getting bad code into high-profile packages. But none of these attacks - so far as we know - have escaped detection for any significant period of time
Well, yes - how do you know that no such thing exists?? Anybody who has done it will surely be careful not to cause alarm when exploiting it.
BTW, it is also possible and likely that some developer somewhere has done a similar thing. I dimly remember one occasion a few years ago when such a developer backdoor was detected, can't remember any details though...
The backdooring of SquirrelMail
Posted Dec 20, 2007 15:30 UTC (Thu) by NAR (subscriber, #1313)
[Link] (1 response)
Exactly. How do we know that someone didn't crack the workstation of an apache or firefox developer, didn't slip a backdoor into the code and currently isn't waiting for the highest bidder to sell the access to these computers? Yes, I know, there is peer review, but it obviously didn't work in the case of SquirrelMail...
How do we know?
Posted Dec 20, 2007 15:43 UTC (Thu) by corbet (editor, #1)
[Link]
Peer review did work with SquirrelMail - somebody reviewed the checksum and raised the alarm. There was no possibility for review to happen any earlier - that code did not go through the ordinary process. The fact that almost all backdoor attempts have targeted the distribution point (the final tarball) rather than some point earlier in the process suggests that getting a backdoor in that way is hard.
In other cases where backdoors have actually made it into source repositories (interbase, for example, or the mICQ incident), peer reviewers have caught the problem. The interbase backdoor lasted for a year and a half, but I do not think it was being exploited. It was something the developers left in by mistake. I do not know of a case where a trojan was introduced into a free software project, then was exploited for any significant period of time before being found.
That, of course, does not say that no such compromise exists. But I would be more concerned about long-term backdoors if there had been some cases of compromises which lasted for an intermediate period of time.
The backdooring of SquirrelMail
Posted Jan 4, 2008 23:39 UTC (Fri) by roelofs (guest, #2599)
[Link]
> BTW, it is also possible and likely that some developer somewhere has done a similar thing. I dimly remember one occasion a few years ago when such a developer backdoor was detected, can't remember any details though...
Some Debian machines were compromised via a developer's account, if that's what you mean. There was also a case of the kernel getting backdoored, but only via a CVS "mirror" of the main git or bitkeeper repository, not the master copy itself.
Those are the only two I recall offhand. Then again, the old brain cell ain't what she used ter be...
Greg
Worrying initial conclusion?
Posted Dec 20, 2007 13:37 UTC (Thu) by gnb (subscriber, #5132)
[Link]
The quoted section of the initial announcement is a little worrying: compromising the release packages involved effort on someone's part and, as a motivation for that effort, introducing an exploitable vulnerability is a far, _far_ likelier goal than adding a random bug. So the initial position should probably be to assume that whoever made the changes intended them to be exploitable and therefore to act as though there were a compromise introduced until those changes are fully understood. That is, the healthy initial reaction is "what have I missed?" rather than "this doesn't seem to do anything".
Checksums are next to the tar.gz files
Posted Dec 20, 2007 18:56 UTC (Thu) by addw (guest, #1771)
[Link]
The trouble is that the checksums and source files tend to be on the same server -- so if you can 'fix' one, you can fix the other.
However, as Jon says: most people don't bother to check checksums; if they were held on another machine, then even fewer would bother. Whatever is done has to be easy to use.
Technology support
Posted Dec 21, 2007 7:05 UTC (Fri) by geertj (guest, #4116)
[Link] (2 responses)
The more I read about source code being compromised, the more I am convinced we need technology support (as opposed to procedural support) to prevent any such modifications from being trusted and/or used by anyone. Projects have become so big that manual audits of the source code to look for backdoors become increasingly less effective. For example, if in the case of this SquirrelMail compromise the attacker had also updated the checksum, then it may have taken even longer for this to come out.
A good place for implementing such support would in my view be the version control system. The Monotone version control system identifies files and trees with cryptographically secure fingerprints, and it uses digital signatures to assert arbitrary statements about versions and changes (such as: author so-and-so created this change on so-and-so date). The way I understand it, an attacker could add whatever he wants to a Monotone repository; as long as the primary developers' public keys are not compromised, this will be completely harmless. A similar situation exists for a developer: you can pull whatever changes you want into your local repository database from whatever dodgy site on the net; as long as you have not assigned trust to a particular public key, those changes will be harmless.
I am thinking of switching a few of my open source projects over to Monotone just to see how it works.
Technology support
Posted Dec 21, 2007 20:00 UTC (Fri) by giraffedata (guest, #1954)
[Link] (1 response)
But that wouldn't be effective against what happened with Squirrelmail, since the code was changed after it came out of the source repository.
And it may not be effective against hackers who put code into source repositories either, because if you can get commit privilege on a Subversion server, you can probably also add a public key to a Monotone server or sign code as some authorized developer.
Technology support
Posted Dec 21, 2007 20:27 UTC (Fri) by geertj (guest, #4116)
[Link]
> But that wouldn't be effective against what happened with Squirrelmail, since the code was changed after it came out of the source repository.
It would be effective if the users would pull the code directly from a Monotone netsync server using "mtn sync". There is indeed no protection against modifying a tarfile after it is released from a monotone repository.
> And it may not be effective against hackers who put code into source repositories either, because if you can get commit privilege on a Subversion server, you can probably also add a public key to a Monotone server or sign code as some authorized developer.
Being a distributed version control system, a typical way to deploy monotone is for all developers to have their own repository on their private workstations, and in addition to this one central netsync server connected to the Internet to which everybody synchronises. The development workstations can be behind a firewall and do not need to accept any incoming connection. The developers would normally store their private keys only on their workstations, hopefully protected by a passphrase. In this setup, a compromise of the netsync server (which is more likely than a compromise of a developer workstation because it is a public server) would not impact the security of the monotone sources hosted on it. The attacker can add anything to the repository he wants, but he has no access to a key that is trusted by the other developers with which he can certify the new revision.
The backdooring of SquirrelMail
Posted Jan 6, 2008 7:15 UTC (Sun) by voluspa (guest, #49821)
[Link]
I was just in the process of updating a little project of mine, and had decided on an extra bit of security for the published hashes, when I took a break and read lwn.
What I'll do with my project is to run a daily script comparing the remote pages (the ones containing hashes) with the local copies. Should a "diff" happen, all hell will break loose here and the remote tarballs pulled asap.
Mats Johannesson
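A minimal version of that sort of watchdog - with a made-up URL and local path - could be as simple as fetching the published page and complaining when it differs from the locally kept copy:

    # Sketch of a daily check comparing the remote page that carries the
    # published hashes against a locally kept copy. The URL and local path
    # are hypothetical placeholders.
    import sys
    import urllib.request

    REMOTE_URL = "http://example.org/myproject/checksums.html"
    LOCAL_COPY = "checksums.local"

    remote = urllib.request.urlopen(REMOTE_URL).read()
    with open(LOCAL_COPY, "rb") as f:
        local = f.read()

    if remote != local:
        # A real deployment would mail the maintainer and pull the
        # tarballs; here it just fails loudly (useful under cron).
        sys.exit("published checksum page has changed - investigate!")
    print("checksum page unchanged")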