The email you link to doesn't seem complacent to me. Saying "We all know our architecture is wrong" and admitting they've been rewriting it to fix this but simply haven't had time to finish isn't complacency. Overload, perhaps, but not complacency.
(Even that wouldn't fix a related problem, which is that it isn't hard to hit a system with so many incoming sockets that it can't accept any more, either because of dstport saturation or simple kernel memory exhaustion from socket buffers. To fix this properly, obvious sluggards can't just be handed off to a process that isn't DoSed; they need to be actively kicked off. Doing that without penalizing people behind slow or overloaded network links is... an interesting problem. Note that the attack also works the other way: send a normal request for a valid page, then read the result very slowly so that the writes block all the way back into the webserver. It needs to be a big page, but those aren't hard to find.)
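The "kick off the sluggards" idea can be sketched with plain sockets: a per-connection send timeout turns a stalled slow reader into an error the server can act on, instead of a worker blocked indefinitely on a write. This is a minimal illustration under assumed buffer sizes, not how any particular server does it (real servers use event loops and track per-connection progress); the payload size and timeout are arbitrary.

```python
import socket
import threading

def serve_once(listener, payload, timeout):
    """Accept one connection and try to send a big response.
    If the peer reads too slowly, the blocked send times out and
    we kick the connection instead of tying up a worker forever."""
    conn, _ = listener.accept()
    conn.settimeout(timeout)   # per-connection write deadline
    kicked = False
    sent = 0
    try:
        while sent < len(payload):
            sent += conn.send(payload[sent:])
    except socket.timeout:
        kicked = True          # sluggard detected: drop it
    finally:
        conn.close()
    return kicked, sent

listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
port = listener.getsockname()[1]

# A "big page": large enough to overflow both kernel socket buffers.
payload = b"x" * (32 * 1024 * 1024)
result = {}

def server():
    result["kicked"], result["sent"] = serve_once(listener, payload, 1.0)

t = threading.Thread(target=server)
t.start()

# The slow-read attack: connect, send a valid request, never read the body.
client = socket.socket()
client.connect(("127.0.0.1", port))
client.send(b"GET /big HTTP/1.0\r\n\r\n")
t.join()      # server's send stalls once buffers fill, then times out
client.close()
listener.close()
print(result["kicked"], result["sent"] < len(payload))
```

The hard part the comment points at is choosing that timeout: too short and a user on a congested link looks identical to an attacker; too long and each attacker still pins kernel buffer memory for the full interval.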