By Jake Edge
August 31, 2011
A rather potent denial of service (DoS) vulnerability in the Apache HTTP
server has dominated the security news over the last week. It was first
reported by way of a proof-of-concept
posted to the full-disclosure mailing list on August 20. The problem
itself is due to a bug in the way that Apache implemented HTTP Range
headers, but there is also an underlying problem in the way those HTTP
headers are defined.
The proof-of-concept (posted by "Kingcope") is a fairly simple Perl
script that makes multiple connections to a web host, each with a long
and, arguably, invalid Range header full of redundant ranges.
Only a small number of these connections can cause enormous memory and
CPU usage on the web host. This is a classic "amplification" attack,
where a small amount of resources on the attacker side can consume far
more resources on the victim side. In some sense, normal HTTP requests
are amplifications, because a few bytes of request can lead to a
multi-megabyte response (a large PDF, for example), but this attack is
different. A single request can lead to multiple partial responses, each
with its own headers and, importantly, server overhead. It is the
resources required to create the responses that lead to the DoS.
The Range header is meant to be used to request just a portion of
the resource, but, as we see here, it can be abused. The idea is that a
streaming client or other application can request chunks of the resource
as it plays or displays them. An HTTP request with the following:
Range: bytes=512-1023
would be asking for 512 bytes starting at the 513th byte (offsets are
zero-based).
But it is not just a single range that can be specified:
Range: bytes=512-1023,1024-2047
would request two ranges, each of which would be sent either in a
separate response (each with a Content-Range header) or in a multipart
response (i.e. with a multipart/byteranges Content-Type).
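For the second example, a multipart response would look something like
the following sketch (the boundary string is chosen by the server, and
the 8000-byte total file length is just a placeholder):

    HTTP/1.1 206 Partial Content
    Content-Type: multipart/byteranges; boundary=SEPARATOR

    --SEPARATOR
    Content-Type: application/pdf
    Content-Range: bytes 512-1023/8000

    ... 512 bytes of data ...
    --SEPARATOR
    Content-Type: application/pdf
    Content-Range: bytes 1024-2047/8000

    ... 1024 bytes of data ...
    --SEPARATOR--

Note that each part carries its own set of headers; generating all of
those parts is where the per-range overhead on the server comes from.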
Each of those example requests is fairly benign. The problem stems from
requests that look like:
Range: bytes=0-,5-0,5-1,...,5-6,5-7,5-8,5-9,5-10,...
which requests the whole file (0-) along with several nonsensical
ranges (5-0, 5-1, ...) as well as a bunch of overlapping ranges. The
example is taken from the proof-of-concept code (which creates 1300
ranges for each request), and other kinds of lengthy range requests will
also cause the DoS.
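For illustration only, a header of that shape is trivial to construct;
this Python sketch (the actual proof-of-concept is a Perl script) just
builds the string:

    # Build a Range header shaped like the proof-of-concept's: the
    # whole file (0-) followed by 1300 mostly nonsensical, heavily
    # overlapping ranges.
    ranges = ["0-"] + ["5-%d" % i for i in range(1300)]
    header = "Range: bytes=" + ",".join(ranges)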
When it receives range requests, Apache dutifully creates a piece of a
multipart response for each range specified. That eats up both memory
and CPU on the server, and doing so tens or hundreds of times across
multiple attacker connections is enough to exhaust the server's
resources and cause the DoS.
The range requests in the attack are obviously not reasonable, but they
are legal according to the HTTP specification. There is discussion of
changing the specification, but that doesn't help right now.
An obvious solution would be to sort the ranges (since they don't have
to be in any specific order) and coalesce those that are adjacent or
overlapping. If the coalesced ranges turned out to cover the entire
file, the server could just send the whole file instead. Unfortunately,
(arguably) badly written applications or browsers may be expecting to
get multipart responses in the same order, and with the same lengths, as
specified in the request. In addition, the HTTP specification does not
allow that kind of reordering.
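A minimal sketch of that kind of sorting and coalescing, in Python for
illustration (Apache itself is written in C):

    def coalesce(ranges):
        """Merge a list of inclusive (start, end) byte ranges."""
        merged = []
        for start, end in sorted(ranges):
            if merged and start <= merged[-1][1] + 1:
                # Overlaps or abuts the previous range: extend it.
                merged[-1] = (merged[-1][0], max(merged[-1][1], end))
            else:
                merged.append((start, end))
        return merged

    # coalesce([(512, 1023), (0, 511), (600, 700)]) -> [(0, 1023)]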
So the Apache solution is to look at the sum of the lengths of the
ranges in a request and, if that's greater than the size of the
requested file, just send the whole file. That will defeat the
proof-of-concept and drastically reduce the amplification factor that
any particular request can cause. It doesn't completely solve the
problem, but it alleviates the worst part. Attackers can still craft
nonsensical range requests, but the number of responses they can
generate is vastly reduced.
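In rough terms, the check amounts to something like the following sketch
(not Apache's actual C code):

    def oversized(ranges, file_size):
        """True if the requested ranges sum to more than the file."""
        total = sum(end - start + 1 for start, end in ranges)
        return total > file_size

    # When oversized() is true, the server ignores the Range header
    # and sends the complete file in a single response instead.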
While Apache's implementation of range requests is fairly
resource-hungry, which makes it easier to cause the DoS, the HTTP
protocol bears some of the blame here too. Allowing multiple overlapping
ranges does not really make any sense, and not allowing servers to
reorder and coalesce adjacent ranges seems poorly thought out as well.
No matter how efficiently implemented, allowing arbitrary ranges of that
sort is going to lead to some amplification effect.
Apache's fix is for the stable 2.2 series, so it is necessarily fairly
conservative. Discussions on the Apache dev mailing list indicate that a
more robust fix, probably following the proposed changes to the HTTP
specification, is in the works for the 2.3 development branch (which
will eventually become the stable 2.4 release).
As of this writing, only Debian has released a fix for the problem
(which it did based on the patch while it was still being tested, before
Apache announced its fix). Other distributions are sure to follow. Since
it is trivially easy to make an unpatched server unresponsive, it
probably makes sense to use one of the mitigation techniques suggested
by Apache until a server update is available.
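One of Apache's suggested mitigations uses mod_setenvif and mod_headers
to drop the Range header from requests containing more than a handful of
ranges; it looks roughly like this (check the advisory itself for the
exact recommended rules):

    # Drop the Range header when more than 5 ranges are requested.
    SetEnvIf Range (,.*?){5,} bad-range=1
    RequestHeader unset Range env=bad-range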
[ A word of warning to those who may be tempted to try the proof-of-concept
code: while limiting the number-of-forks command-line parameter to 1 may
seem like a good idea for testing purposes, it doesn't actually work in
practice. If that parameter is <= 1, the code sets it to 50, which is
enough to DoS a server—trust me on that last part. ]
Brief items
On July 19th 2011, DigiNotar detected an intrusion into its Certificate Authority (CA) infrastructure, which resulted in the fraudulent issuance of public key certificate requests for a number of domains, including Google.com.
Once it detected the intrusion, DigiNotar has acted in accordance with all relevant rules and procedures.
At that time, an external security audit concluded that all fraudulently issued certificates were revoked. Recently, it was discovered that at least one fraudulent certificate had not been revoked at the time. After being notified by Dutch government organization Govcert, DigiNotar took immediate action and revoked the fraudulent certificate.
-- DigiNotar 'fesses up
Diginotar indeed was hacked, on the 19th of July, 2011. The attackers were able to generate several fraudulent certificates, including possibly also EVSSL certificates. But while Diginotar revoked the other rogue certificates, they missed the one issued to Google. Didn't Diginotar think it's a tad weird that Google would suddenly renew their SSL certificate, and decide to do it with a mid-sized Dutch CA, of all places? And when Diginotar was auditing their systems after the breach, how on earth did they miss the Iranian defacement discussed above?
-- F-Secure is not so sure we have the full DigiNotar story
None of the recipients were people who would normally be considered high-profile or high-value targets, such as an executive or an IT administrator with special network privileges. But that didn't matter. When one of the four recipients clicked on the attachment, the attachment used a zero-day exploit targeting a vulnerability in Adobe Flash to drop another malicious file — a backdoor — onto the recipient's desktop computer. This gave the attackers a foothold to burrow farther into the network and gain the access they needed.
-- Wired on an RSA phishing attack that may have led to the SecurID
disclosure
I remember back at the government fear mongering after 9/11. How there were hundreds of sleeper cells in the U.S. How terrorism would become the new normal unless we implemented all sorts of Draconian security measures. You'd think that -- if this were even remotely true -- we would have seen more attempted terrorism in the U.S. over the past decade.
-- Bruce Schneier
The Mozilla Security Blog carries an advisory that DigiNotar has revoked
a fake digital certificate it issued for Google's domain. "Users on a
compromised network could be directed to sites using a fraudulent
certificate and mistake them for the legitimate sites. This could
deceive them into revealing personal information such as usernames and
passwords. It may also deceive users into downloading malware if they
believe it's coming from a trusted site. We have received reports of
these certificates being used in the wild."
Updates to Firefox, Thunderbird, and SeaMonkey are being released in response.
Update: see this EFF release for a lot more information; it does not
look good. "Certificate authorities have been caught issuing fraudulent
certificates in at least half a dozen high-profile cases in the past two
years and EFF has voiced concerns that the problem may be even more
widespread. But this is the first time that a fake certificate is known
to have been successfully used in the wild. Even worse, the certificate
in this attack was issued on July 10th 2011, almost two months ago, and
may well have been used to spy on an unknown number of Internet users in
Iran from the moment of its issuance until it was revoked earlier
today."
The Apache project has updated its advisory on the recently-disclosed
denial-of-service vulnerability. The news is not good: the scope of the
vulnerability has grown, the workarounds have become more complex, and
there is still no fix available. "There are two aspects to this
vulnerability. One is new, is Apache specific; and resolved with this
server side fix. The other issue is fundamentally a protocol design
issue dating back to 2007."
Apache has released an update to its HTTP server that fixes the denial
of service problem that was reported on August 24 (and updated on August
26). We should see updates from distributions soon, though it should be
noted that Debian put out an update on August 29. "Fix handling of
byte-range requests to use less memory, to avoid denial of service. If
the sum of all ranges in a request is larger than the original file,
ignore the ranges and send the complete file."
New vulnerabilities
apache2: denial of service
Package(s): apache2
CVE #(s): CVE-2011-3192
Created: August 30, 2011
Updated: October 14, 2011
Description: From the Debian advisory:
A vulnerability has been found in the way the multiple overlapping
ranges are handled by the Apache HTTPD server. This vulnerability
allows an attacker to cause Apache HTTPD to use an excessive amount of
memory, causing a denial of service.
apache-commons-daemon: remote access to superuser files/directories
Package(s): apache-commons-daemon
CVE #(s): CVE-2011-2729
Created: August 29, 2011
Updated: December 12, 2011
Description: From the CVE entry:
native/unix/native/jsvc-unix.c in jsvc in the Daemon component 1.0.3
through 1.0.6 in Apache Commons, as used in Apache Tomcat 5.5.32 through
5.5.33, 6.0.30 through 6.0.32, and 7.0.x before 7.0.20 on Linux, does
not drop capabilities, which allows remote attackers to bypass read
permissions for files via a request to an application.
hplip: remote code execution
Package(s): hplip
CVE #(s): CVE-2004-0801
Created: August 25, 2011
Updated: August 31, 2011
Description: From the Novell vulnerability entry:
Unknown vulnerability in foomatic-rip in Foomatic before 3.0.2 allows
local users or remote attackers with access to CUPS to execute
arbitrary commands.
pidgin: possible buffer overflows
Package(s): pidgin
CVE #(s): (none)
Created: August 31, 2011
Updated: August 31, 2011
Description: The pidgin 2.10.0 release features the removal of a lot of
calls to unsafe string functions, closing a number of potential buffer
overflows. See the changelog for details.
selinux-policy: policy updates
Package(s): selinux-policy
CVE #(s): (none)
Created: August 25, 2011
Updated: August 31, 2011
Description: From the Scientific Linux advisory:
* Prior to this update, the SELinux policy package did not allow the
RHEV agent to execute. This update adds the policy for RHEV agents, so
that they can be executed as expected.
* Previously, several labels were incorrect and rules for creating new
389-ds instances were missing. As a result, access vector caches (AVC)
appeared when a new 389-ds instance was created through the 389-console.
This update fixes the labels and adds the missing rules. Now, new 389-ds
instances are created without further errors.
* Prior to this update, AVC error messages occurred in the audit.log
file. With this update, the labels causing the error messages have been
fixed, thus preventing this bug.
vpnc: remote command injection
Package(s): vpnc
CVE #(s): CVE-2011-2660
Created: August 31, 2011
Updated: August 31, 2011
Description: The modify_resolvconf_suse script packaged with vpnc
contains a flaw that could enable command injection attacks via
specially-crafted DNS entries.
xen: denial of service
Package(s): xen
CVE #(s): CVE-2011-3131
Created: August 31, 2011
Updated: September 1, 2011
Description: A Xen virtual machine given control of a PCI device can
cause it to issue invalid DMA requests, potentially overwhelming the
host with interrupts from the IOMMU. See this advisory for details.