
Using HTTP POST for denial of service

By Jake Edge
December 1, 2010

While it isn't altogether new, the implications of a recently reported denial of service (DoS) attack method against web servers are somewhat eye-opening. The attack itself is similar in many ways to the Slowloris technique. Depending on how the web server is configured, a DoS against it may only require a fairly small number of slow-moving connections. In addition, since it exploits a weakness in HTTP, it is difficult to work around.

Open Web Application Security Project (OWASP) researchers Wong Onn Chee and Tom Brennan presented [PDF] on the flaw at the AppSec DC 2010 conference in mid-November. As they point out, most DoS (and distributed DoS or DDoS) attacks target lower layers in the network stack, typically the transport layer, which is layer four in the OSI model. More recent DoS attacks have moved up the stack, with things like Slowloris attacking layer seven, the application layer (e.g. HTTP, FTP, SMTP).

ISPs and internet carriers have gotten much better at thwarting attacks at the transport layer, but for a number of reasons, attacks against applications are more difficult to deal with. It can be hard to distinguish legitimate application traffic from a DoS attack, and applications tend to have a larger footprint and require more server resources. That means that fewer attacking resources may be needed to perform a DoS at the application layer than at the transport layer.

Previously, Slowloris and other techniques used the HTTP GET method in conjunction with sending the HTTP headers very slowly to monopolize a connection to the web server. If enough of those "clients" were running, they would consume all of the sockets or other resources at the server end, thus denying any other clients (i.e. legitimate traffic) access to the server. Since that time, various workarounds have been found to reduce the problem caused by slow HTTP GETs in the Apache web server. Interestingly, Microsoft's IIS web server uses a different mechanism to handle incoming requests and was not vulnerable to the GET-based DoS.

What Chee and his team found in September 2009 was that the HTTP POST method could also be used to perform a DoS. By sending a very large value in the Content-Length header, then very slowly sending that amount of data, one byte at a time, a client can consume a connection on the server for a very long time. In addition, because all of the headers have been sent, the mechanism that allowed IIS to avoid the GET-based attack was bypassed, so IIS and Apache are both vulnerable to these POST-based attacks.
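
To make the mechanism concrete, here is a rough sketch (in Python, against a server you control) of what a single slow POST connection looks like. The host, path, and timing values are made up for illustration; this is not taken from the researchers' tooling.

    # Illustrative single slow-POST connection; host, path, and delays
    # are hypothetical. One connection ties up one server slot; the
    # attack simply opens many of these in parallel.
    import socket
    import time

    HOST = "victim.example.com"   # hypothetical test host you control
    PORT = 80

    sock = socket.create_connection((HOST, PORT))
    request_headers = (
        "POST /any-form HTTP/1.1\r\n"
        f"Host: {HOST}\r\n"
        "Content-Type: application/x-www-form-urlencoded\r\n"
        "Content-Length: 1000000\r\n"    # promise a very large body...
        "\r\n"                           # ...and end the headers normally
    ).encode()
    sock.sendall(request_headers)

    # ...then deliver that body one byte every ten seconds, holding the
    # connection (and the thread or slot servicing it) open for ages.
    for _ in range(1000000):
        sock.sendall(b"x")
        time.sleep(10)

Because the headers arrive complete and well-formed, the request looks legitimate to the server right up until the body fails to show up in a timely way.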

So, Apache and IIS web servers that accept any kind of forms—that is to say, nearly all of them—are vulnerable. In addition, the attacks don't even have to reference a valid form on the server, as most servers don't check until after the request is received. The number of attacker connections required to shut down a server varies with its configuration. The presentation mentions 20,000 connections for IIS and fewer for Apache because of client or thread limits in httpd.conf. An Acunetix blog post notes that 256 connections can be enough for Apache 1.3 in its default configuration. In any case, neither 256 nor 20,000 is a significant hurdle for an interested attacker.

Both Apache and Microsoft were contacted about this problem, but neither plans to "fix" it, because it is inherent in the protocol. HTTP is meant to allow for slow and intermittent connections, which don't look very different from this kind of attack. Apache has two potential workarounds: the experimental mod_reqtimeout module, which allows timeouts to be set on receiving the request headers and body, and the LimitRequestBody directive, which allows a maximum request size to be set (by default it is 2GB). Those may provide band-aids, but there will be collateral damage as folks with slower, dodgier connections—perhaps from a mobile device—may suffer. It seems likely that most servers could live with maximum request sizes significantly smaller than 2GB, however.
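
For illustration, those two knobs might be set in httpd.conf along the following lines; the specific timeouts and size limit are guesses for a typical form-driven site, not values from the presentation or the Apache documentation.

    # Sketch only; requires the (then experimental) mod_reqtimeout module.
    <IfModule mod_reqtimeout.c>
        # Allow 20 seconds (stretching to 40) to receive the request
        # headers and 30 seconds for the body, as long as data keeps
        # arriving at 500 bytes/second or better.
        RequestReadTimeout header=20-40,MinRate=500 body=30,MinRate=500
    </IfModule>

    # Cap request bodies far below the 2GB maximum; 1MB is shown here,
    # but the right value depends on the largest upload the site accepts.
    LimitRequestBody 1048576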

Chee and Brennan also report that botnet operators have started incorporating application layer DDoS into their bag of tricks, so we will likely be seeing more of these kinds of attacks. It may mostly be GET-based attacks for the moment, but the botnet "herders" will eventually get around to incorporating POST-based attacks as well. The researchers predict that application layer DDoS will supplant transport layer DDoS sometime in the next ten years.

DoS attacks are probably less of a problem for small-time web site operators, as the likely targets are deep-pocketed online retailers and the like. Criminals often target those sites at particularly important points in the calendar, like when holiday shoppers are likely to visit. It isn't too difficult to extract a large payment from such a retailer when it is faced with losing most of its sales at such a critical time. Those sites should probably be gearing up—hopefully have already geared up—for such attacks over the next month and beyond.

Index entries for this article
Security: Vulnerabilities/Denial of service



Using HTTP POST for denial of service

Posted Dec 2, 2010 16:41 UTC (Thu) by adamgundy (subscriber, #5418) [Link] (3 responses)

seems to me the traditional solution for slowloris would solve this ('put nginx in front of your vulnerable server'). it has configurable limits on HTTP body size (post size), which can be configured per-server or per-page, and buffers all of the request before sending it on to the backend server (ie: slow GETs or POSTs get absorbed in their entirety before being handed off to the backend server as fast as possible). you can also set request timeouts. I'm guessing other C10K web servers or proxies would provide the same protection (lighttpd, Cherokee, pound, varnish, etc etc).
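
For the curious, such an nginx front end might look roughly like the following; the timeouts, size limit, and backend address are illustrative only, not a vetted configuration.

    # Sketch of an nginx reverse proxy absorbing slow requests before
    # they reach the backend; values and addresses are illustrative.
    server {
        listen 80;
        server_name www.example.com;

        client_header_timeout 10s;   # drop clients that send headers slowly
        client_body_timeout   10s;   # ...or that dribble out the body
        client_max_body_size  1m;    # reject oversized POST bodies outright

        location / {
            # nginx reads and buffers the whole request first, so the
            # backend only ever sees complete requests delivered quickly.
            proxy_pass http://127.0.0.1:8080;
        }
    }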

Using HTTP POST for denial of service

Posted Dec 2, 2010 18:45 UTC (Thu) by skorgu (subscriber, #39558) [Link] (1 responses)

Not to mention the use of CDNs or other proxies as the first point of connection. Do slowloris et al. persist through an Akamai or similar cache?

Using HTTP POST for denial of service

Posted Dec 2, 2010 19:19 UTC (Thu) by adamgundy (subscriber, #5418) [Link]

same thing, but generally CDNs are used for static content (lots of GET requests). some support POST for uploading large files (images, video, etc), but that's less common.

you don't generally put a CDN in front of your 'dynamic' web server domain (because for dynamic content it can't usually help with caching, and just adds another layer of indirection and delay)

layer 7 load balancing proxies will almost certainly 'fix' slowloris AND this 'slowpost' attack - that's what nginx or any of the other servers I listed are doing - DNS round robin etc (poor man's balancing) obviously won't help.

Using HTTP POST for denial of service

Posted Dec 2, 2010 19:34 UTC (Thu) by adamgundy (subscriber, #5418) [Link]

I guess I should add the typical reason people DON'T want to put a smart proxy in front of their apache or IIS server - it breaks their 'fancy upload' script.

most sites that accept content uploads have to work around the fact that browsers don't display any useful progress messages when uploading a file (why? historical I think). they usually end up providing a flash plugin (sigh), then a non-flash solution for flash-refusers (including the vast crowd using any device named iXXXX).

the non-flash solution typically involves polling the server every few seconds to ask 'how much now?' over and over (recent browser enhancements let you get this info client side, but you have to support IE6+ of course - sigh again).

SO: nginx (or any 'caching' proxy) thoughtfully collects all the data before handing it off to the backend server (PHP, ruby, ASP, whatever), and the answer to your poll requests is always 'zero' until it's suddenly '100%'. most of these servers (nginx included) now provide builtin 'upload monitoring' modules that let you poll a particular page to retrieve the answer direct from the proxy instead of your backend script... but that means rewriting your polling code (hiring a web developer, checking that it works, blah, blah - instead of just getting your sysadmin to stick a proxy in front of your web server).

Using HTTP POST for denial of service

Posted Dec 2, 2010 18:29 UTC (Thu) by mcmanus (guest, #4569) [Link] (2 responses)

I did some work a while back on using pessimistically constructed SACKs to DoS any webserver that needed to serve up largeish responses. It uses bog standard simple HTTP requests. In addition to tying up fixed server resources for a long time you can severely tax the server CPU over the same period.

http://www.ibm.com/developerworks/linux/library/l-tcp-sac...

I haven't been following SACK all that closely since, but I'm not aware of anything that would have changed it. But there are a good dozen active DoS vectors at any given time, so the whole category is a hard one to get too worked up about.

Using HTTP POST for denial of service

Posted Dec 3, 2010 20:31 UTC (Fri) by bradfitz (subscriber, #4378) [Link] (1 responses)

This is news? When I wrote Perlbal (http://www.danga.com/perlbal/) I explicitly "defended" against this, not because of attacks but because I didn't want backends wasting time & memory reading requests from slow clients (where slow == not 1 Gbps).

Perlbal can buffer POSTs in memory up to a given time/space threshold, then spill to disk until the request is fully received, and then blast it away at the backend in one go.
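
As a toy illustration of that buffer-then-forward idea (a Python sketch, not Perlbal's actual code): read the whole body from the possibly slow client, spill to disk past a size threshold, and only open a fast connection to the backend once everything has arrived.

    # Toy sketch of buffer-then-forward proxying; not Perlbal code.
    # The backend address and spill threshold are hypothetical.
    import socket
    import tempfile

    SPILL_THRESHOLD = 256 * 1024    # keep small bodies in memory
    BACKEND = ("127.0.0.1", 8080)   # hypothetical backend address

    def read_body(client_sock, content_length):
        """Read the full request body from a (possibly slow) client,
        spilling to a temporary file once it outgrows the threshold."""
        buf = tempfile.SpooledTemporaryFile(max_size=SPILL_THRESHOLD)
        remaining = content_length
        while remaining > 0:
            chunk = client_sock.recv(min(remaining, 65536))
            if not chunk:
                raise ConnectionError("client hung up early")
            buf.write(chunk)
            remaining -= len(chunk)
        buf.seek(0)
        return buf

    def forward(request_headers, body_file):
        """Blast the fully buffered request at the backend in one go."""
        with socket.create_connection(BACKEND) as backend:
            backend.sendall(request_headers)
            while True:
                chunk = body_file.read(65536)
                if not chunk:
                    break
                backend.sendall(chunk)

The slow client still ties up a connection, but only to the event-driven proxy, which can hold thousands of mostly idle connections cheaply.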

So just put Perlbal in front of it. (It's in front of LiveJournal, TypePad, etc...)

Using HTTP POST for denial of service

Posted Dec 3, 2010 20:32 UTC (Fri) by bradfitz (subscriber, #4378) [Link]

Whoops, I fail at replying on LWN, it seems. Also at reading previous replies. Double fail.


Copyright © 2010, Eklektix, Inc.
This article may be redistributed under the terms of the Creative Commons CC BY-SA 4.0 license
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds