
Study: Attacks on package managers

The University of Arizona is publishing a study on security problems with package management systems. The core problem would appear to be that tools like yum and apt will happily install versions of packages with known vulnerabilities if they think that's the most recent version available. And feeding such packages to the package managers is not a big challenge: "To give an example of how easy it is for a malicious party to obtain a mirror, we ran an experiment where we created a fake administrator and company name and leased a server from a hosting provider. We were able to get our mirror listed on every distribution we tried (Ubuntu, Fedora, OpenSuSE, CentOS, and Debian) and our mirrors were contacted by thousands of clients, even including military and government computers!"


How to fix it

Posted Jul 14, 2008 17:09 UTC (Mon) by epa (subscriber, #39769) [Link] (29 responses)

Surely not hard to fix.  Instead of asking an untrusted mirror to get the list of updated
packages, the package index should itself be cryptographically signed and timestamped, so the
client won't use one more than say 24 hours old.  Then it won't rest until it has all the
packages it wants to update and their hashes match those in the index (in addition to the
normal signature check).
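
The scheme proposed above can be sketched in a few lines. This is a minimal illustration, not any real apt/yum format: the function names and the 24-hour cutoff are assumptions taken from the comment, and signature verification itself is presumed to have happened already.

```python
import hashlib
import time

MAX_INDEX_AGE = 24 * 60 * 60  # seconds: reject indexes older than, say, 24 hours

def index_is_fresh(index_timestamp, now=None, max_age=MAX_INDEX_AGE):
    """Accept a (signature-verified) index only if its embedded timestamp is recent."""
    now = time.time() if now is None else now
    return (now - index_timestamp) <= max_age

def package_matches_index(package_bytes, expected_sha256):
    """Check a downloaded package against the hash recorded in the signed index."""
    return hashlib.sha256(package_bytes).hexdigest() == expected_sha256
```

With both checks in place, a mirror can neither serve a stale index (the timestamp gives it away) nor substitute a package that the index does not vouch for.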

Indeed, it is strange to even have mirror sites in this day and age when we have Bittorrent.

How to fix it

Posted Jul 14, 2008 17:17 UTC (Mon) by madscientist (subscriber, #16861) [Link] (2 responses)

Bittorrent should definitely be an option, but it cannot be the only one.  Far too many
corporate sites, and even some educational ones, ban or heavily restrict bittorrent (and any
other peer-to-peer protocol).

Small change can fix it

Posted Jul 14, 2008 19:07 UTC (Mon) by khim (subscriber, #9252) [Link] (1 responses)

Bittorrent works well for transferring large files; it does not work well with small packages. I would think the torrent nightmare of having a separate torrent for each updated package would make pre-yum rpm/pre-apt .deb hell look like a picnic.

That's why you need a SINGLE .torrent for ALL updates. Then you add padding and voila: you can download ANY file from peers (if there are any peers) or you can wait for a single slow seed to deliver the data. Anyone can participate, but only the central server can issue new files to the system.

In short: yes, bittorrent needs some work to be usable as an update system, but it's MUCH better than what we currently have...

Small change can fix it

Posted Jul 15, 2008 0:31 UTC (Tue) by beoba (guest, #16942) [Link]

But then, every time any package is updated in that set, you'd need to generate a new torrent
file for it to be included. Additionally, the torrent file itself would be huge, so you
wouldn't want to send it out too often.

How to fix it

Posted Jul 14, 2008 17:30 UTC (Mon) by jengelh (guest, #33263) [Link] (2 responses)

>Indeed, it is strange to even have mirror sites in this day and age when we have Bittorrent.

The nearest local mirror here (4 hops away according to traceroute) delivers files faster
than any BT does. Well, that's the purpose of mirrors in the first place...

How to fix it

Posted Jul 14, 2008 21:37 UTC (Mon) by epa (subscriber, #39769) [Link] (1 responses)

Indeed, but if the server admin set up a Bittorrent node instead of a traditional mirror site,
you'd get the same speed by BT.  And I imagine that starting up a Bittorrent client, telling
it to seed URLs x,y,z and use 500Mb/s maximum bandwidth and 500Gbyte maximum disk space is a
lot easier for the server administrator than the traditional 'mirror' script.

How to fix it

Posted Jul 17, 2008 5:02 UTC (Thu) by dirtyepic (guest, #30178) [Link]

In many cases, in the time it takes the BT client to contact the tracker, receive the
seed/peer data, and then establish connections to each of these seeds/peers, you could have
downloaded the file directly several times over.  And you never get high transfer rates right
off the bat with BT; it usually takes a few minutes to get up to a decent speed.

How to fix it

Posted Jul 14, 2008 17:39 UTC (Mon) by tetromino (guest, #33846) [Link] (4 responses)

> Indeed, it is strange to even have mirror sites in this day and age when we have Bittorrent.

Well, *some* of us live in North America, and are forced to use an ISP that throttles p2p
protocols.

How to fix it

Posted Jul 14, 2008 19:32 UTC (Mon) by drag (guest, #31333) [Link] (3 responses)

Well.. encryption helps out with that.

Plus having legal uses for bittorrent is a good thing when it comes to complaining to your ISP
about its behavior.


How to fix it

Posted Jul 14, 2008 20:24 UTC (Mon) by tetromino (guest, #33846) [Link] (2 responses)

Encryption on its own doesn't help. Comcast's throttler checks for anything that looks like a
torrent user, even if all the packets are encrypted. The only thing I've found that really
works is SOCKS5-over-ssh to a server outside Comcast's network. I'm guessing their algorithm
says that "a user who opens an encrypted connection to only one other machine is OK".

How to fix it

Posted Jul 15, 2008 18:51 UTC (Tue) by drag (guest, #31333) [Link] (1 responses)

I know it sounds stupid, but are you sure that they are throttling bittorrent?

I've seen that Comcast had an investigation opened on them and all that, but in most cases I
think the 'throttling' is actually caused by bad bandwidth management on the part of end
users.

For example: at my house I use Cox Cable for Internet. For a long, long time I was able to run
bittorrent quite well, but after some changes they made I started running into big issues with
lost packets and huge increases in the latency of my network connection. I was only able to
get a third or a fourth of the download speed that I used to get, and this was after a
'network upgrade'. And upload speeds were very poor since other peers were getting
disconnected.

At first blush it seemed very obvious that this was due to some sort of throttling or network
performance shaping on the part of Cox. But in fact it wasn't.

----------------------------------

The deal, if I understand it correctly, is that Bittorrent is a TCP-oriented network
protocol. That is, when you send a packet, or at least initiate a data transfer, you have to
have 'ack' packets acknowledging the connection. TCP is a connection-oriented protocol.

Home-style network connections are hugely asymmetrical. The download speed vastly outperforms
the upload speeds.

So Bittorrent uploads and downloads simultaneously. As these ISPs increase their download
speeds dramatically, they do little to increase their upload speeds.

So with BT the upload becomes a huge bottleneck since even if you rarely saturate the download
it's very very likely that you saturate the upload. 

And what happens then is that you end up with a huge queue of outgoing packets. Among the
packets stuck in router limbo are those TCP ack packets; when they don't make it back to
their destination in time, you start having disconnects and all sorts of nasty network
performance problems.

I found that the controls offered by bittorrent clients were very inadequate. This is due to
the fact that there are probably multiple BT clients running at the same time as well as other
network traffic.

So the solution is to have quality-of-service (QoS) rules on your home's router. You can't
control what is being sent to you, but you can control the priority of packets leaving your
network. So you have to throttle the ports you're using for bittorrent _yourself_ and then
give very high priority to ack packets. Usually you're going to have to limit your BT traffic
to 80-90% of maximum network bandwidth to get the best performance.

There is a 'wondershaper' set of scripts that can do this, but I found that a modified
version of it designed for the Shoreline firewall/router works best.

Now my bittorrent downloads are at least 4-5 times as fast as they used to be and I have no
network latency problems.
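
The recipe described above (cap bulk upload at 80-90% of the link, give small packets such as TCP acks strict priority) can be sketched as a script that emits Linux `tc` commands. The interface name and rates are illustrative assumptions, and the filters that actually steer packets into the classes are omitted; real tools like wondershaper do considerably more.

```python
def shaping_commands(iface="eth0", uplink_kbit=512, bt_fraction=0.85):
    """Build tc commands that cap bulk (BitTorrent) upload below the link rate
    and reserve a high-priority class for acks.  A sketch only: filters to
    classify traffic (e.g. by port or packet size) are not shown."""
    bulk = int(uplink_kbit * bt_fraction)  # e.g. 85% of the uplink for bulk data
    return [
        # HTB root qdisc; unclassified traffic falls into the bulk class 1:20
        f"tc qdisc add dev {iface} root handle 1: htb default 20",
        f"tc class add dev {iface} parent 1: classid 1:1 htb rate {uplink_kbit}kbit",
        # high-priority class for acks and other small interactive packets
        f"tc class add dev {iface} parent 1:1 classid 1:10 htb rate {uplink_kbit - bulk}kbit prio 0",
        # bulk class: BitTorrent upload is throttled below the physical uplink
        f"tc class add dev {iface} parent 1:1 classid 1:20 htb rate {bulk}kbit prio 1",
    ]

for cmd in shaping_commands():
    print(cmd)
```

The point of keeping the bulk rate below 100% of the uplink is exactly the one made above: the queue then builds in your own router, where the priority class can jump it, rather than in the ISP's equipment where you have no control.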


--------------------------------


So I figure that if you're experiencing not only slow BT downloads, but also huge increases
in ping latency and some limited packet loss, then it's not your ISP's fault (except for the
excessively asymmetrical nature of their network). It's yours, and your router needs to be
configured to do QoS.

Of course, if your ISP is actually doing throttling then none of this will help you.
Hopefully you can find another company to go through.

How to fix it

Posted Jul 16, 2008 14:29 UTC (Wed) by epa (subscriber, #39769) [Link]

Makes you wonder why Bittorrent isn't built on top of UDP.

(Indeed, HTTP GET requests might as well have been UDP instead of TCP... but that is a
discussion for another day.)

How to fix it

Posted Jul 14, 2008 17:40 UTC (Mon) by smoogen (subscriber, #97) [Link] (6 responses)

Bittorrent works well for transferring large files; it does not work well with small packages.
I would think the torrent nightmare of having a separate torrent for each updated package
would make pre-yum rpm/pre-apt .deb hell look like a picnic.

How to fix it

Posted Jul 14, 2008 19:22 UTC (Mon) by bronson (subscriber, #4806) [Link] (3 responses)

There's no need to have a separate torrent per package.  Just use a single torrent with all
packages.  Any decent bittorrent client will allow you to download just the files that you
want and ignore everything else.

Of course, you will have to create a new torrent any time a package changes.  That might be a
show-stopper.  :)


How to fix it

Posted Jul 14, 2008 20:12 UTC (Mon) by dlang (guest, #313) [Link] (1 responses)

But people don't pull all the packages (pulling all Debian packages is several hundred gigs),
so a single torrent of everything won't work.

Now, it would be very possible to make a new torrent-like protocol that pulls individual
packages as needed from a torrent, but none of the package managers can do that today.

Not a new protocol, but a slight modification of the existing one

Posted Jul 14, 2008 20:40 UTC (Mon) by khim (subscriber, #9252) [Link]

Most torrent managers I know about can download just a few files from a huge torrent. Sure, it doesn't work as well as when you download the whole torrent, but when there are thousands of peers it still works great. You only need to pad the .torrent with zero-filled files and voila: no need for a new protocol.
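
The padding trick can be sketched like this: insert zero-filled pad files so that every real file starts on a piece boundary, which lets a client fetch one package without needing pieces shared with its neighbours. The piece size and file names here are illustrative assumptions, not any distribution's actual layout.

```python
PIECE_SIZE = 256 * 1024  # illustrative piece size

def pad_length(file_size, piece_size=PIECE_SIZE):
    """Zero bytes to append so the next file starts on a piece boundary."""
    remainder = file_size % piece_size
    return 0 if remainder == 0 else piece_size - remainder

def layout(files, piece_size=PIECE_SIZE):
    """Given (name, size) pairs, return (name, offset) pairs within the
    torrent's byte stream; with padding, every offset is piece-aligned."""
    entries, offset = [], 0
    for name, size in files:
        entries.append((name, offset))
        offset += size + pad_length(size, piece_size)
    return entries
```

Since each package occupies its own whole pieces, downloading one file never forces a client to fetch (or verify) bytes belonging to an unrelated package.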

How to fix it

Posted Jul 16, 2008 17:48 UTC (Wed) by bronson (subscriber, #4806) [Link]

You might have missed this line in my post?

> Any decent bittorrent client will allow you to download just the files that you want and
ignore everything else.

Bittorrent and small files

Posted Jul 15, 2008 9:38 UTC (Tue) by epa (subscriber, #39769) [Link] (1 responses)

What makes you say that Bittorrent does not work well with small files?

Bittorrent and small files

Posted Jul 15, 2008 18:45 UTC (Tue) by nix (subscriber, #2304) [Link]

Say rather that it doesn't work well with small transfers. My impression 
is that it has a very slow start, so for transfers taking less than a few 
minutes, ordinary FTP is going to be noticeably faster.

How to fix it

Posted Jul 14, 2008 17:45 UTC (Mon) by i3839 (guest, #31386) [Link] (2 responses)

Considering the current topology of the Internet, mirrors will keep making 
sense and will be more efficient than peer-to-peer systems. Throw in the huge
difference between clients' upload and download bandwidth and mirrors make 
even more sense.

Better to experiment with multicast...

How to fix it

Posted Jul 15, 2008 15:30 UTC (Tue) by salimma (subscriber, #34460) [Link] (1 responses)

Multicast would be useful for a sysadmin wanting to update multiple computers all at once, but
otherwise, data still has to be transmitted multiple times.

How to fix it

Posted Jul 17, 2008 14:41 UTC (Thu) by i3839 (guest, #31386) [Link]

I was thinking more of distros multicasting package updates and everyone being able to pick
them up. Sending a package update ten times a day for ten days means almost everyone has a
chance to pick it up. Uploading a file a hundred times instead of once per user uses much
less bandwidth if you have more than a few hundred users. Bandwidth usage can be controlled
on the client side by choosing how many streams are read simultaneously.

Throw in a traditional client-server mechanism to retrieve lost packets or missed updates,
and to support people with crappy ISPs or routers which don't support multicast, and you have
a complete solution.

Of course multicast has its downsides and troubles, so you'd want to use Source Specific
Multicast, and IGMPv3 support is needed for that. No idea how well (home/ISP) routers do that
though.

How to fix it

Posted Jul 14, 2008 18:21 UTC (Mon) by jwb (guest, #15467) [Link] (1 responses)

From the user's perspective, a mirror is better than bittorrent.  Mirrors are known-good
hosts with lots of bandwidth.

The point of bittorrent is that you can distribute a file which you otherwise would not be
able to distribute.  People seem to forget that central element of bittorrent, and turn it
around to mean that users should expect extremely fast download rates.  That's not true.
Bittorrent allows you to download a file that otherwise would not be available.  That it is
sometimes fast is simply an artifact of the implementation.

How to fix it

Posted Jul 15, 2008 8:12 UTC (Tue) by zeridon (guest, #46234) [Link]

Hmm, the points raised are a bit interesting, but so far everyone has forgotten that there is
ibiblio and its siblings Osprey & Permaseed.

Basically, with that software they can implement a mirror and BT, with seeds directly from
the storage and on the big pipe.

Last month it was pretty useful: a 2-3 hour delay from the official eclipse release, and
pushing hard for the torrents. It made some difference. The server wasn't so bogged down,
and as eclipse is just sw nobody kills the torrent, so we have more seeds :)

How to fix it

Posted Jul 14, 2008 22:32 UTC (Mon) by tzafrir (subscriber, #11501) [Link]

Getting closer to that:

http://debtorrent.alioth.debian.org/

How to fix it

Posted Jul 14, 2008 23:49 UTC (Mon) by kornak (guest, #17589) [Link]

I had similar thoughts which were trashed at the outset. It would be ideal to have a more decentralized, and at the same time, more secure distribution mechanism...

http://lwn.net/Articles/222639/

Why not torrent

Posted Jul 15, 2008 11:56 UTC (Tue) by job (guest, #670) [Link] (1 responses)

I can think of a few reasons. Bittorrent is more complex (which would make the install program
larger, and might not fit on a netboot ram image). It is slower. (If it's faster for you,
switch mirror.) It uses uplink capacity for the end users (which some may have to pay for).
Distribution of torrent metadata is a single point of failure.

So in short, it's not robust, it's slow, and it's complicated.

You make it sound like more modern equals better, which is obviously not true.  Otherwise we
would not use a web browser when we have 3D games.  Different uses, different protocols.

Why not torrent

Posted Jul 16, 2008 14:43 UTC (Wed) by epa (subscriber, #39769) [Link]

I think you answered your own argument: "If it's faster for you, switch mirror."

The point is that users should not have to care about switching mirrors, and (with somewhat
less importance) administrators should not have to care about setting up mirrors and fixing
the various things that can go wrong with them.  Bittorrent is a more complex protocol, but it
is often simpler to use and maintain.  Let the computer do the hard work of figuring out which
site is the nearest and which site has which data.

(I have two Fedora machines on my local network; I could spend disk space and bandwidth
setting up a full mirror but it would be a hassle and download more than I need.  An
alternative is to set up a caching http proxy - but if Bittorrent were used, the machines
would just share the downloaded data automatically.)

It need not necessarily use uplink capacity for the end users.  A leeching Bittorrent client
would be fine for downloading package updates.  I proposed setting up Bittorrent servers
instead of traditional mirror sites - such a server would be a regular Bittorrent node
configured to participate in the package torrents.  Since the server is just there to provide
speeded-up downloads to the users, it can run in generous mode where it will happily provide
data to leeching clients.

Distribution of torrent metadata is indeed a single point of failure.  But so is distributing
a list of mirror sites.  Whatever update method you use, you must start with a single point
somewhere.

How to fix it

Posted Jul 16, 2008 0:39 UTC (Wed) by motk (guest, #51120) [Link]

Bittorrent uses a ridiculous amount of bandwidth, which is not always cheap.

How to fix it

Posted Jul 17, 2008 16:48 UTC (Thu) by justincappos (guest, #52950) [Link]

There is an additional problem with BitTorrent or other P2P solutions that hasn't been
mentioned in the discussion here.   When you download the current version of a package, you
are commonly doing so because you are upgrading an old version.   So when downloading a
package from an untrusted party (like a mirror) you disclose that you are running outdated
software to that party.   This is obviously bad because they may be able to root you, etc.
Using something like BitTorrent increases the effect because now a much larger group of people
with a lower barrier to entry are aware of you requesting a package.

I don't think this is a good trade-off given the current status quo.

Study: Attacks on package managers

Posted Jul 14, 2008 17:14 UTC (Mon) by DeletedUser32991 ((unknown), #32991) [Link] (3 responses)

Debian uses signed package files and includes security.d.o (administered by Debian itself) in
the default configuration. The latter seems to exclude the possibility of keeping things
insecure by using old package files on the mirror. And the suggestion to use HTTPS... Bad
research at its best.

Study: Attacks on package managers

Posted Jul 15, 2008 10:18 UTC (Tue) by nhippi (subscriber, #34640) [Link] (1 responses)

To put it more plainly:

The attacker cannot use a malicious mirror to inject old content for security.debian.org,
since security.debian.org isn't mirrored by third parties.

Testing users could become vulnerable. Mitigating this would be relatively easy to implement,
as the signed "Release" file already has a "Date" field: just check that it isn't older than
X days. As an added bonus, users will start noticing if their mirror has problems getting
updates.
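
A sketch of that freshness check, parsing the RFC-2822-style "Date" field a Debian Release file carries (the field name is real; the X-day threshold and function name are illustrative assumptions):

```python
from datetime import datetime, timedelta, timezone
from email.utils import parsedate_to_datetime

def release_is_stale(date_field, max_age_days=7, now=None):
    """True if the signed Release file's Date header is older than X days."""
    signed_at = parsedate_to_datetime(date_field)  # e.g. "Mon, 14 Jul 2008 17:09:00 +0000"
    now = now or datetime.now(timezone.utc)
    return now - signed_at > timedelta(days=max_age_days)
```

Since the Date field sits inside the signed file, a mirror cannot refresh it without breaking the signature; all it can do is serve an honestly old Release file, which this check rejects.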

One option the attacker has is transparent proxies, but then again you are in big trouble
anyway (mmm.. cookies..) if a cracker manages to root your ISP's transparent proxy.


Study: Attacks on package managers

Posted Jul 15, 2008 18:49 UTC (Tue) by nix (subscriber, #2304) [Link]

`X' days doesn't work for any fixed value of X. A better check is that 
the package date is not much older than the last time you downloaded 
a set of updates which should have included that package (`much' 
introduced to allow time for the package to be uploaded, inter-mirror 
propagation delays, et al).

Downside: this means that after Debian's ftpmasters sit on a package for 
five hundred years they have to get it re-signed before putting it into 
the repo ;) and I'm not sure what implications it has for 
automatically-promoted repositories such as Debian testing: perhaps the 
Date header should be updated, and the signing repeated, by the (trusted) 
software with a silly name which does the promotion (I can't remember that 
name right now, it always drops out of my head). If attackers take *that* 
over, we're all dead anyway.

(sorry for the jab at ftpmasters gone, I couldn't resist ;} )

Debian security

Posted Jul 17, 2008 17:20 UTC (Thu) by justincappos (guest, #52950) [Link]

(Disclosure: I'm an author of the study)

I agree with you that using a security repository significantly reduces the vulnerability to
attack.   This doesn't protect against the "endless data" attack we describe that can be used
by a mirror to crash clients, but this is not as big of a threat as compromising the client.
(Does Debian need to contact so many mirrors by default?)

There are several other minor issues that remain and may impact Debian users that we didn't
see discussed here.   First, there is no authentication that you are talking with the security
repo, so a MITM attacker can still launch attacks by masquerading as the repo.   HTTPS with
correct certificate checking would prevent this, hence the HTTPS ("Bad research at its best")
suggestion.  :)

Second if the security repo fails or is not contactable from a client (non-transitive
connectivity, etc.) then the mirrors can attack clients by replaying content from the security
repo.

Third, clients who use netselect-apt, etc. should be aware that they are likely to remove the
security repo from their list of mirrors and thus become vulnerable.


In general we found that Debian's practice of using a security repository is effective in
protecting their users from replay / freeze attacks in the majority of cases for users with
default configurations.

There is another issue with Debian that we didn't bring up on the web pages because we felt
there was already too much loosely connected content.   We briefly looked at the developer
update process and if I understand correctly from reading the documentation any developer can
update any package (they are encouraged not to except in extreme situations but have the
ability to).   Furthermore, if I understand correctly there are thousands of keys in the
developer database so really this means that a compromise of any key allows an attacker to
upload any package.   Also there are keys as short as 768 bits and as old as 1993.   I'm not a
crypto expert so I don't really know how to quantify risk, but both of those numbers trigger a
mental alarm.   Anyways, I was hoping that you could also clarify / correct / confirm any of
this information as well.

Thanks,
Justin Cappos

Kind of weak

Posted Jul 14, 2008 17:14 UTC (Mon) by JoeBuck (subscriber, #2330) [Link] (1 responses)

The idea behind this attack is for a mirror to contain older, known-vulnerable packages, but still signed by the vendor because they were once current. One problem with it is that apt and yum won't downgrade a package that already exists; furthermore, with yum at least the mirror that you get is "randomly" selected. So only users who are installing a package for the first time, and who are unlucky enough to be assigned your mirror, are vulnerable, meaning that you're unlikely to be able to replace an essential package with a broken one in this way.

Mirrors could also be audited after security updates go out, to verify that they contain essential security updates; those that don't could be blackballed.

Kind of weak

Posted Jul 15, 2008 15:35 UTC (Tue) by salimma (subscriber, #34460) [Link]

There is a 'yum-fastestmirror' plugin, so if you know the target computer is using it, you can
set up a mirror a small number of hops away and have a good chance of being preselected.
Signing and dating the package list is still the safest solution.

Nastier attack

Posted Jul 14, 2008 18:16 UTC (Mon) by rgmoore (✭ supporter ✭, #75) [Link] (5 responses)

Someone on Slashdot pointed out a much nastier potential attack. The process is simple:

  1. Set up a mirror.
  2. Wait for the distro you're mirroring to send out a security update for a package with a remotely exploitable hole.
  3. Root the box of everybody who starts to download the updated package.

The mirror can look completely legitimate, because it just passively harvests the IDs of vulnerable computers. You probably want to pass off the job of rooting vulnerable computers to a separate botnet to keep your mirror looking squeaky clean.

Nastier attack

Posted Jul 14, 2008 20:30 UTC (Mon) by dskoll (subscriber, #1630) [Link] (2 responses)

That is a very nasty attack.  To defend against this, an organization should have a dedicated
"mirroring" computer that runs almost nothing.  This computer does all the downloads and then
serves the updated packages to other machines.

By decoupling the machine doing the downloading from the machine being updated, you can
mitigate against evil mirrors.  (You can't completely block the attack because the downloader
machine itself might happen to require a package that is found to have a vulnerability.
That's really hard to protect against other than by upgrading the downloader machine manually.)

One solution to this problem

Posted Jul 14, 2008 21:03 UTC (Mon) by jmorris42 (guest, #2203) [Link] (1 responses)

> You can't completely block the attack because the downloader
> machine itself might happen to require a package that is found
> to have a vulnerability. That's really hard to protect against
> other than by upgrading the downloader machine manually.

Use an OS where security updates come from a trusted source for the key machine running your
local mirror.  Since you control the local mirror it should be trusted, thus solving the real
problem here.  This is a basic information disclosure attack, where an evil mirror can
convince machines/users to disclose their vulnerability to an untrusted entity.

So run the one mirror machine on Debian (their low installed base allows all security updates
to originate from security.debian.org) or use a paid support distro (SuSE, RHEL) where all
updates come from the OS vendor itself.

This closes the flaw for larger sites that can set up a dedicated local mirror, but the lone
Fedora user at home is still pretty much boned.  And until all communication to the
mirrors/master repo is via https with server keys precached during the OS install (or via
package updates which are signed), there is still some non-zero potential for DNS poisoning,
man-in-the-middle attacks, etc.

One solution to this problem

Posted Jul 15, 2008 12:22 UTC (Tue) by jmm (subscriber, #34596) [Link]

security.debian.org isn't a single machine, but a round-robin setup of several hosts
administered by Debian. The last time someone published the stats they were serving ~30 MB/s
(and that was two days after the last DSA was published; I suppose the peaks are higher).

Nastier attack

Posted Jul 15, 2008 4:46 UTC (Tue) by dvdeug (guest, #10998) [Link]

Why is this a nasty attack? Compare to:

1. Portscan a lot of computers; save the results
2. When there's a security update, hit the computers running that program

It doesn't require you to have a mirror (and hence a large traceable presence) and hits all
targets, not just one distro. It's less targeted, but how often has that been a problem in
Internet attacks?

Nastier attack

Posted Jul 15, 2008 9:18 UTC (Tue) by tzafrir (subscriber, #11501) [Link]

 ...
Alternative (3)
"Root" a whole bunch of NAT routers.

Study: Attacks on package managers

Posted Jul 14, 2008 18:18 UTC (Mon) by mdomsch (guest, #5920) [Link] (5 responses)

Fedora's MirrorManager software scans each public mirror several times a day, to be sure they
have current and correct content.  If they do not, the mirror is "dropped" from the mirror
list - not returned to clients, until they _do_ have current and correct content.

-Matt Domsch
Fedora MirrorManager author

MirrorManager

Posted Jul 14, 2008 18:33 UTC (Mon) by corbet (editor, #1) [Link]

Ah, but how hard would it be to detect those scans and return a rather different set of results than everybody else gets?

Study: Attacks on package managers

Posted Jul 14, 2008 23:54 UTC (Mon) by arjan (subscriber, #36785) [Link] (3 responses)

It would be nice if yum got a digest of the package list from the master (together with the
mirror list etc.) which it could then use to reject anything other than the real package
list... If a mirror doesn't have the right one, the client could then report back to MM
(possibly triggering a premature rescan) and get another mirror...

Study: Attacks on package managers

Posted Jul 15, 2008 0:35 UTC (Tue) by mdomsch (guest, #5920) [Link] (2 responses)

Yes; Seth Vidal, James Antill and I discussed this today and we will see what can be added in
yum & mirrormanager to reduce the problem surface.  It seems adding a digest of the repomd.xml
file to be returned in the mirrorlist query is one part.  Returning the mirrorlist over https
(assuming the python urlgrabber code does cert checking) is another.  Dealing with slightly
stale mirrors (e.g. all except the masters, for a period while the content syncs out) will be
more of a challenge, so we need to find a way to mitigate problems from the user's perspective
(content that was valid 5 minutes ago might not be anymore, but to the user it's still OK...)

Study: Attacks on package managers

Posted Jul 15, 2008 2:20 UTC (Tue) by jmorris42 (guest, #2203) [Link] (1 responses)

Is there anything going to be done about the information disclosure problem?  Is there
anything that CAN be done about the information disclosure?

HTTPS connections can stop random points from noticing a host asking for an update, but that
won't stop a mirror site itself from realizing that by asking for a package the requester is
revealing that it is running a previous version and is vulnerable.  Even a mirror on a
'reputable' network can itself be compromised.  In the end the whole concept of mirrors
depends on trusting unknown machines.  Crypto can mitigate some of the more gross dangers but
leaves a false sense of security regarding more subtle risks.

Study: Attacks on package managers

Posted Jul 15, 2008 11:01 UTC (Tue) by tzafrir (subscriber, #11501) [Link]

This seems to be a high level of paranoia.

apt-tor, anybody?

Study: Attacks on package managers

Posted Jul 14, 2008 19:24 UTC (Mon) by rrdharan (subscriber, #41452) [Link] (2 responses)

It boggles the mind that there isn't more verification required in order to get listed as a
mirror.


Study: Attacks on package managers

Posted Jul 14, 2008 21:12 UTC (Mon) by rahulsundaram (subscriber, #21946) [Link]

What kind of verification would you suggest for a voluntary mirror? If you add too much
overhead, good mirrors will just walk away and you will lose. That isn't the gateway where you
should be adding security. You should assume malicious mirrors are already present and work to
mitigate that within the distribution. 

Study: Attacks on package managers

Posted Jul 15, 2008 9:43 UTC (Tue) by epa (subscriber, #39769) [Link]

That is the wrong approach.  You are suggesting there should be verification so that only
trustworthy people (by some measure) can set up a mirror site.  But it will always be possible
for bad guys to slip through the net.  Even the US nuclear weapons programme, with the
strictest possible vetting of participants, contained spies.

And even a well-meaning mirror site can be taken over by an attacker.

Better to make sure the update system is secure so that even with total control of one or more
mirrors an attacker cannot push out bad packages or cause a denial of service for more than a
few minutes.

Study: Attacks on package managers

Posted Jul 14, 2008 19:37 UTC (Mon) by msmeissn (subscriber, #13641) [Link] (4 responses)

For SUSE the whole repository is integrity protected by GPG signatures 
from top down.

So at most you can replay old repository states, but never smuggle in bad 
packages.

Also we use a central download redirector, which serves the meta data 
directly, but for the RPMs refers to the mirrors. So you get the latest 
repository state.

If you take over its DNS record, or act as the man in the middle, then due to 
the checking you can only replay old states.

So yes, their mirror was contacted, but the downloads were checked 
afterwards and would have been discarded if bad.

Also, old repository states will not cause old packages to be installed, 
since the >= version conditions still apply.


So they were able to get a mirror on the mirrorlist, but malicious 
attacks would not be possible.
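
The ">= version" conditions described above can be sketched as a simple downgrade guard. This is an illustration only: `parse_version` and `should_install` are hypothetical names, and real RPM version comparison (rpmvercmp, with epochs and release tags) is considerably more involved.

```python
# Sketch of a downgrade guard: never install a package older than what the
# client already has, even if a replayed (old) repository state advertises it.
# Plain integer tuples stand in for real RPM version comparison.

def parse_version(v):
    """Turn '1.2.10' into (1, 2, 10) for tuple comparison."""
    return tuple(int(part) for part in v.split("."))

def should_install(installed_version, offered_version):
    """Accept the offered package only if it is not a downgrade."""
    return parse_version(offered_version) >= parse_version(installed_version)

# A replayed repository offering 0.9.8 cannot displace an installed 0.9.9:
print(should_install("0.9.9", "0.9.8"))   # False: downgrade rejected
print(should_install("0.9.9", "0.9.10"))  # True: genuine update
```

This is why, as the comment notes, replaying an old repository state can at worst freeze a client at its current versions; it cannot push the client backwards.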

Study: Attacks on package managers

Posted Jul 15, 2008 5:55 UTC (Tue) by afalko (guest, #37028) [Link] (3 responses)

All files downloaded from Gentoo have SHA1 and SHA256 sums associated with them. If a file does
not match the one the developer was using, the user will receive a digest error and the
package manager will not continue. Does anyone see any loopholes in this scheme?
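
A minimal sketch of the digest check described above (file contents and hashes are invented for illustration; this is not Portage's actual code):

```python
# Sketch of a manifest digest check: the manifest records the expected
# SHA-256 of each file, and a mismatch aborts before anything is installed.
import hashlib

def verify_digest(data, expected_sha256):
    """Return True only if the file matches the recorded SHA-256 digest."""
    return hashlib.sha256(data).hexdigest() == expected_sha256

payload = b"example distfile contents"
recorded = hashlib.sha256(payload).hexdigest()    # what the manifest records

print(verify_digest(payload, recorded))                  # True
print(verify_digest(payload + b"tampered", recorded))    # False: digest error
```

Note that unless the digests themselves are signed, this only catches accidental corruption: a mirror that can rewrite the manifest can regenerate matching digests for a tampered file.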

Study: Attacks on package managers

Posted Jul 15, 2008 11:17 UTC (Tue) by Zenith (guest, #24899) [Link]

Quoting rgmoore further up in the discussion:

Someone on Slashdot pointed out a much nastier potential attack. The process is simple: 1. Set up a mirror. 2. Wait for the distro you're mirroring to send out a security update for a package with a remotely exploitable hole. 3. Root the box of everybody who starts to download the updated package. The mirror can look completely legitimate, because it just passively harvests the IDs of vulnerable computers. You probably want to pass off the job of rooting vulnerable computers to a separate botnet to keep your mirror looking squeaky clean.

So yes, a sort of loophole, but not one you can do much about I would think, besides from the whole "trusted mirrors only" scheme mentioned here in the discussion.

Study: Attacks on package managers

Posted Jul 15, 2008 11:46 UTC (Tue) by job (guest, #670) [Link] (1 responses)

Are all the ebuilds cryptographically signed now? Last time I checked they were not. So the
reason you needn't worry about the attacks described in the article is that the verifications
aren't there in the first place.

Study: Attacks on package managers

Posted Jul 17, 2008 8:08 UTC (Thu) by hickinbottoms (subscriber, #14798) [Link]

No, they're not signed. This means the SHA1/MD5 checks only protect you from corruption during
the download (or subsequently on disk, before the package has been built).

You can prove this yourself: modify the downloaded package and regenerate those hashes with
an "ebuild ... digest" command, so there's no secret to it.

Portage is, I believe, quite vulnerable to compromised mirrors at present. I believe the
groundwork for GPG-signing (not sure whether that covers the package only, or also includes
the metadata) was done some time ago, but it hasn't progressed to the point where it's
actually used yet.

Study: Attacks on package managers

Posted Jul 15, 2008 0:07 UTC (Tue) by MattPerry (guest, #46341) [Link] (13 responses)

I'm not surprised at all.  About a third of the time when I try to update Debian or Ubuntu
machines I get an error about GPG signatures being invalid.  I have to do an update over and
over before it doesn't complain.  I don't have a lot of faith in the security of the system
and this article doesn't help that.

Study: Attacks on package managers

Posted Jul 15, 2008 0:11 UTC (Tue) by MattPerry (guest, #46341) [Link] (12 responses)

Here's the error I'm getting at this very moment as I try to update my Ubuntu 8.04 system:
W: GPG error: http://security.ubuntu.com hardy-security Release: The following signatures were invalid: BADSIG 40976EAF437D05B5 Ubuntu Archive Automatic Signing Key <ftpmaster@ubuntu.com>
That doesn't inspire confidence.

Study: Attacks on package managers

Posted Jul 15, 2008 3:45 UTC (Tue) by JoeBuck (subscriber, #2330) [Link] (9 responses)

And why doesn't it inspire confidence? The invalid signature protected you from a corrupt download (my guess is that these are usually truncated or partially transferred files).

Study: Attacks on package managers

Posted Jul 15, 2008 6:29 UTC (Tue) by k8to (guest, #15413) [Link] (5 responses)

The reason it doesn't inspire confidence is that this error occurs during normal operation.

Sometimes this error indicates the problem of grabbing an inconsistent set of files from a
round-robin type situation.  Yes, the system has saved me from data corruption, theoretically.
Realistically, it is an embarrassment that these inconsistencies can be encountered in normal
configurations.  For example, choosing a host such as http.us.debian.org will often result in
data inconsistency during updates.  Why is this advertised as a viable mirror selection if it
does not work reliably?

Sometimes, however, this error indicates a "problem" such as not bothering to run update for a
few weeks and the key has expired.  Once this error indicated that the key had expired before
the new key was even made available, and so the web of trust simply did not extend from one
administration key to the next.  Every user of debian testing encountered this during one
transition.

Perhaps you begin to see the problem?  This thing pops up all the time to suggest a problem
which is not caused by incorrectly signed or unsigned files.  How will you identify a real
security issue in the noise?

Study: Attacks on package managers

Posted Jul 15, 2008 7:02 UTC (Tue) by MattPerry (guest, #46341) [Link]

> Sometimes, however, this error indicates a "problem" such as not
> bothering to run update for a few weeks and the key has expired.

Are the keys really being regenerated that quickly?  What is the reason for doing that rather
than keeping a key for a long time?

I wonder if that might have something to do with my problem.  I usually don't update my
servers unless I see a post on the security-announce lists indicating that there's an update
for a package that I use.  I can sometimes go for a month or two (or more) before running
apt-get update.  The Ubuntu system that I was attempting to upgrade today was last powered on
sometime in May.

> How will you identify a real security issue in the noise?

I agree.  Right now it seems like "the boy who cried wolf."

Study: Attacks on package managers

Posted Jul 15, 2008 9:45 UTC (Tue) by epa (subscriber, #39769) [Link] (3 responses)

The existence of flaky, corrupted mirror sites is another argument in favour of dropping
old-style mirrors and using Bittorrent or some other protocol that handles the mirroring
automatically and is robust against misbehaving nodes.

Study: Attacks on package managers

Posted Jul 15, 2008 12:07 UTC (Tue) by job (guest, #670) [Link] (1 responses)

No, it is not.

The system of signatures just prevented you from downloading data from a "misbehaving node"
(i.e. corrupted mirror), and you blame the system?

The mirroring IS handled automatically, AND you are protected from bad data. What would be
good would be failover handling in the package manager so you didn't need to see that message
at all.

It would also be desirable to protect from the attack described in the article, perhaps using
timestamped and signed package indexes?

Study: Attacks on package managers

Posted Jul 16, 2008 14:49 UTC (Wed) by epa (subscriber, #39769) [Link]

Checking the signature is a good thing and I'm not blaming that at all.  I am kvetching about
the corrupted mirror site existing in the first place.  Removing the signature check,
obviously, would not improve things.  Better error reporting of 'the download failed and the
file was truncated' before even attempting the signature check would be helpful, but not
essential.

> What would be good would be failover handling in the package manager so you didn't need
> to see that message at all.

Yes.  Some kind of client library that automatically handles selecting an upstream server (or
more than one, if the download is to be parallelized), checks for data consistency, and
restarts or switches servers if the consistency check fails.  Bittorrent is one example of a
protocol that handles all this, with the added bonus that nodes can share data between each
other (as when two machines on the same network both need to update), and that setting up a
traditional mirror site using cron jobs and perl scripts is not necessary (just start up the
Bittorrent program and tell it how much bandwidth and disk space to use).  Some kind of
intelligent http frontend would also do the job.  Of course you still need to check package
signatures after the download has completed successfully.
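
The failover behaviour described above could be sketched roughly like this (the `fetch` callable, mirror names, and file contents are stand-ins for a real HTTP client and configuration, not any actual apt or yum API):

```python
# Sketch of silent mirror failover: try each mirror in turn, verify the
# download against a trusted hash, and move on quietly if a mirror is
# unreachable or serves bad data.
import hashlib

def download_verified(mirrors, path, expected_sha256, fetch):
    """Return verified file contents, trying mirrors until one checks out."""
    for mirror in mirrors:
        try:
            data = fetch(mirror, path)
        except OSError:
            continue                    # unreachable mirror: try the next
        if hashlib.sha256(data).hexdigest() == expected_sha256:
            return data
        # hash mismatch: corrupt or malicious mirror, fail over silently
    raise RuntimeError("no mirror served a file matching the trusted hash")

# Demo with in-memory "mirrors"; the first one serves tampered data.
CONTENT = {"evil.example": b"trojaned bytes", "good.example": b"real package"}

def fake_fetch(mirror, path):
    return CONTENT[mirror]

trusted = hashlib.sha256(b"real package").hexdigest()
data = download_verified(["evil.example", "good.example"], "pkg.deb",
                         trusted, fake_fetch)
print(data)   # b'real package': the bad mirror was skipped without a prompt
```

The point is that the consistency check and the retry logic live in one place, so the user never sees a scary signature error for a transient mirror problem.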

Study: Attacks on package managers

Posted Jul 16, 2008 0:44 UTC (Wed) by motk (guest, #51120) [Link]

Bittorrent is not a hammer, and not everything is a nail.

Study: Attacks on package managers

Posted Jul 15, 2008 6:35 UTC (Tue) by jamesh (guest, #1159) [Link]

Occasionally it also indicates a badly behaved almost-transparent proxy sitting between you
and the mirror.

Study: Attacks on package managers

Posted Jul 15, 2008 6:44 UTC (Tue) by MattPerry (guest, #46341) [Link] (1 responses)

> And why doesn't it inspire confidence?  The invalid signature protected
> you from a corrupt download (my guess is that these are usually truncated
> or partially transferred files).

It doesn't inspire confidence because I'm not a cryptography expert, nor do I desire to be
one.  As an end user, all I see is an error that I do not understand.  I don't know why the
signature is invalid and the error doesn't give me any guidance on what the significance is
nor how to correct it.  I know that signed packages and package lists are supposed to protect
me, which is why I sit up and take notice when I see the error.

The best that I've been able to do in this situation is to try the update again and hope the
error goes away.  Usually the error will not happen when I update the package list a second
time.  Occasionally, the error will persist no matter how many times I update and I just try
again later.  That is what happened with Ubuntu today. I ran the "check updates" from the
update manager five times over about 15 minutes and I continued to receive the same error.  If
I try the updates tomorrow, I expect that it will be fine.

It's using TCP, not UDP, to download the data.  Shouldn't TCP ensure that I'm getting
the correct data?  I wouldn't expect the transfer to be corrupt several times in a row.  I
could understand if I only saw this error once, but I see it often enough that I don't think a
corrupted download is the problem.  I also see it with Debian and Ubuntu, so it's not
something restricted to one distribution.

Study: Attacks on package managers

Posted Jul 15, 2008 8:50 UTC (Tue) by jond (subscriber, #37669) [Link]

> It doesn't inspire confidence because I'm not a cryptography expert,
> nor do I desire to be one.  As an end user, all I see is an error
> that I do not understand.  I don't know why the signature is invalid
> and the error doesn't give me any guidance on what the significance
> is nor how to correct it.

I think I agree with you here that the UI side needs work.

> It's using TCP, not UDP, to download the data.  Shouldn't TCP
> ensure that I'm getting the correct data?

TCP would protect you against the data being corrupted in transit from the mirror to yourself.
This looks like corruption at the mirror end or (in the case of a bad transparent proxy) stale
data being served up from a cache that doesn't correspond to the package index.

Study: Attacks on package managers

Posted Jul 16, 2008 18:17 UTC (Wed) by MattPerry (guest, #46341) [Link] (1 responses)

Today I'm getting this error:
GPG error: http://security.ubuntu.com hardy-security Release: The following signatures were invalid: BADSIG 40976EAF437D05B5 Ubuntu Archive Automatic Signing Key <ftpmaster@ubuntu.com>
Failed to fetch http://us.archive.ubuntu.com/ubuntu/dists/hardy-updates/main/binary-i386/Packages.bz2 Hash Sum mismatch

Some index files failed to download, they have been ignored, or old ones used instead.

If I try to move ahead and install anyway, I get a bold warning that packages can't be authenticated. No suggestions are provided on how to fix the problem. I don't know what I can do except back up my files and reinstall Ubuntu.

Study: Attacks on package managers

Posted Jul 21, 2008 9:35 UTC (Mon) by mdz@debian.org (guest, #14112) [Link]

This is typically due to a broken transparent proxy, or similar network anomaly, between you
and your chosen package mirror.

Surely package signing already solves this?

Posted Jul 15, 2008 1:53 UTC (Tue) by PaulWay (guest, #45600) [Link] (3 responses)

I may be missing something, but surely the package signatures that are already in place make
this "black mirror" an unlikely attack vector?  AFAICS they're not creating a new repository
with their own package signing key, so you're still using your core package signatures to
verify that the RPM or DEB that you've downloaded hasn't been tampered with.  Unless they can
break that security, they can't introduce new vulnerabilities or back doors.

So the only real possibility is for them to keep only the packages that have known
vulnerabilities, which would mean that unless your victims had listed your mirror first in an
in-order (i.e. not round-robin or random) list, you would only have a small window of
opportunity to trace the victim that downloaded the vulnerable package and exploit it before
they did their next update from someone else and got an updated package with the vulnerability
fixed.  Otherwise you have a much smaller chance that they pick your mirror when you're
holding the newer-but-insecure package.  And if you're holding out-of-date packages or
fiddling with package names then the repository maintainers will probably fairly quickly spot
this from their own surveys and block you anyway.

This sounds similar to someone saying "wow, I'm so leet, I found this uber warez site and
downloaded Linux for free!"

Surely package signing already solves this?

Posted Jul 15, 2008 3:52 UTC (Tue) by JoeBuck (subscriber, #2330) [Link] (2 responses)

The attack is for the mirror to serve up old (but digitally signed) versions of the packages that have known vulnerabilities. But the problem is that apt and yum won't downgrade packages, so this isn't much of an attack.

Surely package signing already solves this?

Posted Jul 15, 2008 22:11 UTC (Tue) by njs (subscriber, #40338) [Link] (1 responses)

One could sign packages in such a way that a replay attack was impossible... assuming that the
client has an accurate clock.

Have a root-level signature re-issued each day over the entire repository, that also includes
the date that it is generated.  When talking to a mirror, you can check this root file, and if
it's more than a day or two old, that's a bad mirror and should be ignored.  (This also
catches mirrors that are merely stale, which can be a problem sometimes for entirely
non-malicious reasons...)  Because the date is signed, the mirror can't pretend to be
up-to-date.

Of course, the signature has to also cover the packages in the repository or it's no use -- a
malicious mirror could serve up a fresh root cert and stale packages, and verifying a
signature made in the naive way over the entire repository would be prohibitively expensive.
(Heck, *generating* such a signature would be prohibitively expensive...)  But one could create
a tree of signatures to mitigate this -- the root signs a list of package/version/hash, and
then you just have to download the root, the list of package versions (which you usually
download anyway), and the actual packages you want...

If clients don't have accurate clocks, you could instead put the current root signature on a
single centralized server run by a trusted party (i.e. the distribution itself), and have
everyone fetch it directly; this would be reasonably scalable, since the central box would
only have to serve one few-hundred-byte file instead of being a full mirror.

Not that I expect anyone to do this, but it's fun to think about.
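
The scheme above might be sketched as follows. To keep the example self-contained, the signature is mocked with a shared-secret HMAC; a real design would use public-key signatures (GPG or similar), and all names here are hypothetical.

```python
# Sketch of njs's freshness check: the repository root metadata carries a
# signed issue date plus the package hashes; a client with an accurate clock
# refuses metadata more than MAX_AGE old, so a mirror cannot replay stale
# (vulnerable) repository states.
import hashlib
import hmac
import json
from datetime import datetime, timedelta, timezone

MAX_AGE = timedelta(days=2)
REPO_KEY = b"demo-signing-key"  # stand-in for the distribution's signing key

def sign_root(issued, package_hashes):
    """Produce dated, 'signed' root metadata covering all package hashes."""
    body = json.dumps({"issued": issued.isoformat(),
                       "packages": package_hashes}, sort_keys=True)
    sig = hmac.new(REPO_KEY, body.encode(), hashlib.sha256).hexdigest()
    return {"body": body, "sig": sig}

def verify_root(root, now):
    """Check the signature, then reject metadata older than MAX_AGE."""
    expected = hmac.new(REPO_KEY, root["body"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, root["sig"]):
        raise ValueError("bad signature: tampered metadata")
    meta = json.loads(root["body"])
    issued = datetime.fromisoformat(meta["issued"])
    if now - issued > MAX_AGE:
        raise ValueError("stale mirror: signed metadata is too old")
    return meta["packages"]

now = datetime.now(timezone.utc)
fresh = sign_root(now, {"openssl.rpm": "ab12cd34"})
print(verify_root(fresh, now))      # fresh metadata: package hashes returned

stale = sign_root(now - timedelta(days=10), {"openssl.rpm": "ab12cd34"})
try:
    verify_root(stale, now)         # a replayed old state is rejected
except ValueError as e:
    print(e)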

Surely package signing already solves this?

Posted Jul 16, 2008 17:04 UTC (Wed) by tzafrir (subscriber, #11501) [Link]

I would like the CDs to contain signed repositories as well.

A valid mirror and an up-to-date mirror are not necessarily the same thing.

Deficiencies wrt. CentOS

Posted Jul 17, 2008 7:54 UTC (Thu) by dag- (guest, #30207) [Link] (1 responses)

The study (and especially the way it is being advertised) is seriously lacking any insight into how yum works (on CentOS). The impact of 'being in the mirrorlist' is really reduced by a number of things, which makes the motives of the author(s) of the paper somewhat questionable. (Attention grabbing?)

We have blogged about this, as it appeared on Planet CentOS.

Deficiencies wrt. CentOS

Posted Jul 17, 2008 16:42 UTC (Thu) by justincappos (guest, #52950) [Link]

I've posted a few corrections on the original blog entry.   I'll post here as well.



Hello, I'm one of the authors of the study.   I wanted to first of all thank you for
commenting on our research.   One of the major benefits we hoped would come from making this
public is that the Linux community would become more aware and interested in fixing the
problems we point out.


I also wanted to respond to several of the issues you brought up in your blog post.   First, I
appreciate you pointing out that many distributions check to see if their mirrors are current
and try to remove mirrors that are not.   We under-emphasized this in the webpage and other
documents because we did not view this as a mechanism used to detect a malicious party (we
thought the intent was to detect negligent administrators and broken scripts).   As I'm sure
you and the savvy reader are aware, it is possible for a web server to serve different content
to different users.   We examined our web request logs from our CentOS mirror and I believe we
can identify the "checking bot" IP addresses.   If we were malicious, we could serve "good"
content to your checking bot and "malicious" content to other users.   I would be happy to
provide what I believe to be the IPs used to check if a mirror is current to you offline for
verification / rebuttal.   However, since you view this information as important to the
security of your users, I will not list the information here.

Additionally, I wanted to mention that we found significant security problems with Fedora's
MirrorManager (our FAQ talks about how it can be used to target attacks).   However, other
redirectors we looked at (like Download Redirector for OpenSUSE) do improve security in a
similar manner to what you describe.   I was wondering if we could talk more offline about how
your mirror list redirection works so we can discuss the potential for abuse? 


I also wondered if you might want to look in detail at the other attacks page of the web site
and the technical report which mentions detailed information about flaws in YUM.   We would be
happy to discuss the feasibility of attacks that target these issues with you.   However, I
will point out one attack that is extremely simple that I hope illustrates there is a real
danger to your users.   If I control a mirror and you attempt to retrieve a file from my
mirror, I can return an endless stream of data which (on YUM) will fill the disk and crash the
client system (stopping logging, corrupting databases, etc.).   This is obviously a real threat
to all of your users regardless of any mirror redirection strategies you perform.
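
A client-side mitigation for this endless-stream attack is straightforward to sketch: cap the accepted download size using the length recorded in trusted metadata, and abort past that limit (illustrative only; not YUM's actual code).

```python
# Sketch of a bounded download: refuse to accept more bytes for a file than
# the (signed) repository metadata says the file should contain, so a hostile
# mirror streaming endless data cannot fill the client's disk.
import io

def bounded_read(stream, expected_size, slack=4096):
    """Read at most expected_size + slack bytes; more means a hostile mirror."""
    limit = expected_size + slack
    data = stream.read(limit + 1)
    if len(data) > limit:
        raise OSError("mirror sent more data than the metadata promises")
    return data

# A well-behaved mirror: the file is delivered in full.
ok = bounded_read(io.BytesIO(b"x" * 100), expected_size=100)
print(len(ok))   # 100

# An "endless" stream is cut off instead of filling the disk.
try:
    bounded_read(io.BytesIO(b"x" * 10_000_000), expected_size=100)
except OSError as e:
    print(e)
```

This presumes the expected size is itself part of the signed metadata; otherwise the attacker simply lies about the size too.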

Anyways, we thank you for taking a look at our research and hope to hear more rebuttal /
confirmation in the future.

Thanks,
Justin Cappos


Copyright © 2008, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds