By Jake Edge
August 31, 2011
A rather potent denial of service (DoS) vulnerability in the Apache HTTP
server has dominated the security news over the last week. It was first
reported by way of a proof-of-concept
posted to the full-disclosure mailing list on August 20. The problem
itself is due to a bug in the way that Apache implemented HTTP Range
headers, but there is also an underlying problem in the way those HTTP
headers are defined.
The proof-of-concept (posted by "Kingcope") is a fairly simple Perl script
that makes multiple connections to a web host, each specifying a long and,
arguably, nonsensically redundant Range header.
Only a small number
of these connections can cause enormous memory and CPU usage on the web
host. This is a classic "amplification" attack, where a small amount of
resources on the attacker side can lead to consuming many more victim-side
resources. In some sense, normal HTTP requests are amplifications, because
a few bytes of request can lead to a multi-megabyte response (a large PDF
for example), but this attack is different. A single request can lead to
multiple partial responses, each with their own header and, importantly,
server overhead. It is the resources required to create the responses that
lead to the DoS.
The Range header is meant to be used to request just a portion of
the resource, but, as we see here, it can be abused. The idea is that a
streaming client or other application can request chunks of the resource as
it plays or displays them. An HTTP request with the following:
Range: bytes=512-1023
would ask for 512 bytes starting at the 513th byte of the file (offsets are
zero-based and both ends of the range are inclusive).
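For the curious, here is a minimal sketch (in Python, with a hypothetical
host and file name) of what making such a request looks like from the
client side:

    import http.client

    conn = http.client.HTTPConnection("www.example.com")
    conn.request("GET", "/video.bin", headers={"Range": "bytes=512-1023"})
    resp = conn.getresponse()

    # A server that honors the range replies "206 Partial Content" and
    # describes what it actually sent in the Content-Range header.
    print(resp.status)                      # 206
    print(resp.getheader("Content-Range"))  # e.g. "bytes 512-1023/4096"
    data = resp.read()                      # the 512 requested bytes
    conn.close()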
But it is not just a single range that can be specified:
Range: bytes=512-1023,1024-2047
would request two ranges, which would be returned either as separate
responses (each with its own Content-Range header) or as a single multipart
response (i.e. with a multipart/byteranges Content-Type).
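From the client side a multi-range request is no harder; in another quick
sketch (same hypothetical host and file), the interesting part is the
Content-Type of the response:

    import http.client

    conn = http.client.HTTPConnection("www.example.com")
    conn.request("GET", "/video.bin",
                 headers={"Range": "bytes=512-1023,1024-2047"})
    resp = conn.getresponse()

    # For multiple ranges, a server typically sends a single 206 response
    # whose body is a multipart/byteranges document: one part per range,
    # each part carrying its own Content-Range header.
    print(resp.status)                     # 206
    print(resp.getheader("Content-Type"))  # multipart/byteranges; boundary=...
    conn.close()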
Each of those examples is fairly benign. The problem stems from requests
that look like:
Range: bytes=0-,5-0,5-1,...,5-6,5-7,5-8,5-9,5-10,...
which requests the whole file (0-) along with several nonsensical ranges
(5-0, 5-1, ...) as well as a bunch of overlapping ranges.
The example is taken from the proof-of-concept code (which creates 1300
ranges for each request), and other kinds of
lengthy range requests will also cause the DoS.
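To get a feel for the scale involved, the following sketch (in Python
rather than the PoC's Perl, and only building the header string rather
than sending anything) produces a header of the same shape:

    # Construct a Range header shaped like the one above; the
    # proof-of-concept builds roughly 1300 ranges per request.
    ranges = ["0-"] + ["5-%d" % i for i in range(1300)]
    header = "Range: bytes=" + ",".join(ranges)

    print(header[:40] + "...")   # Range: bytes=0-,5-0,5-1,5-2,5-3,5-4,5-5,...
    print(len(ranges), "ranges in a header of", len(header), "bytes")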
When it receives range requests, Apache dutifully creates pieces of a multipart
response for each range specified. That eats up both memory and CPU on the
server, and doing so tens or hundreds of times for multiple attacker
connections is enough to exhaust the server resources and cause the DoS.
The range requests in the attack are obviously not reasonable, but they are
legal according to the HTTP specification. There is discussion
of changing the specification, but that doesn't help right now.
An obvious solution would be to sort the ranges (since they don't have to be
in any specific order) and coalesce those that are adjacent or overlapping.
If the coalesced ranges turned out to cover the entire file, the server could
just send the whole file instead. Unfortunately, (arguably) badly written applications or
browsers may be expecting to get multipart responses in the same order, and
with the same lengths, as
specified in the request. In addition, the HTTP specification does not allow
that kind of reordering.
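The coalescing step itself is straightforward; here is a sketch of the idea
in Python (treating ranges as inclusive, zero-based pairs, and ignoring
open-ended forms like 0- that a real server would resolve against the file
size first):

    def coalesce(ranges):
        # Merge sorted ranges that overlap or are directly adjacent.
        merged = []
        for first, last in sorted(ranges):
            if merged and first <= merged[-1][1] + 1:
                merged[-1] = (merged[-1][0], max(merged[-1][1], last))
            else:
                merged.append((first, last))
        return merged

    # The attack's pile of overlapping ranges collapses to almost nothing:
    print(coalesce([(5, 6), (5, 7), (5, 8), (5, 9)]))  # [(5, 9)]
    print(coalesce([(512, 1023), (1024, 2047)]))       # [(512, 2047)]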
So the Apache solution is to look at the sum of the lengths of the ranges
in a request and, if that is greater than the size of the requested file,
just send the
whole file. That will defeat the proof-of-concept and drastically reduce
the amplification factor that any particular request can cause. It doesn't
completely solve the problem, but it alleviates the worst part. Attackers
can still craft nonsensical range requests, but the number of responses
they can generate is vastly reduced.
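In rough terms, the check amounts to something like the following sketch
(a paraphrase of the fix as described, not Apache's actual code):

    def should_send_whole_file(ranges, file_size):
        # If the requested ranges add up to more than the file itself,
        # treat the request as unreasonable and ignore the ranges.
        total = sum(last - first + 1 for first, last in ranges)
        return total > file_size

    # 1300 copies of a four-byte range against a 4096-byte file:
    print(should_send_whole_file([(5, 8)] * 1300, 4096))  # True
    print(should_send_whole_file([(0, 1023)], 4096))      # False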
While Apache's implementation of range requests is fairly resource-hungry,
which makes it easier to cause the DoS,
the HTTP protocol bears some of the blame here too. Allowing multiple
overlapping ranges does not really make any sense, and not allowing servers
to reorder and coalesce adjacent ranges seems poorly thought-out as well.
No matter how efficient the implementation, allowing arbitrary ranges of
that kind is going to lead to some amplification effect.
Apache's fix is for the stable 2.2 series, so it is necessarily fairly
conservative. There are ongoing discussions in the Apache dev mailing list
indicating that a more robust fix (probably following the proposed changes
to the HTTP specification) is in the works for the 2.3
development branch (which will eventually become the stable 2.4 release).
As of this writing, only Debian has released a fix for the problem (basing
its update on Apache's patch while it was still being tested, before Apache
announced its fix). Other distributions are sure to follow. Since it is trivially
easy to make an unpatched server unresponsive, it probably makes sense to
use one of the mitigation techniques
suggested by
Apache until a server update is available.
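Most of those workarounds are small configuration changes; one variant,
roughly along the lines of what the advisory suggests (the exact directives
there may differ), simply discards Range headers that contain more than a
handful of ranges:

    # Flag requests whose Range header contains more than five ranges
    # (i.e. five or more commas), then drop the header entirely.
    SetEnvIf Range (,.*?){5,} bad-range=1
    RequestHeader unset Range env=bad-range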
[ A word of warning to those who may be tempted to try the proof-of-concept
code: while limiting the number-of-forks command-line parameter to 1 may
seem like a good idea for testing purposes, it doesn't actually work in
practice. If that parameter is <= 1, the code sets it to 50, which is
enough to DoS a server—trust me on that last part. ]