Whilst I've seen far too many database administrators blame "the network", it is also true that many of the points made by this article apply just as much to networks as to databases.
Particularly the lack of instrumentation, especially of problematic middleboxes such as application (de)accelerators and firewalls. Even basic monitoring is poor, with application-performance-killing high error rates on links often creeping under the radar of monitoring tools like Nagios.
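Those error counters aren't even hard to get at. As a rough illustration, on Linux the per-interface error counts sit in `/proc/net/dev`; a minimal sketch (assuming the standard column layout described in proc(5), with `rx_errs` and `tx_errs` at fixed positions):

```python
def interface_errors(path="/proc/net/dev"):
    """Return per-interface RX/TX error counters from the Linux proc interface."""
    errors = {}
    with open(path) as f:
        for line in f.readlines()[2:]:  # skip the two header lines
            name, stats = line.split(":", 1)
            fields = stats.split()
            # per proc(5): receive errs is column 2, transmit errs is column 10
            errors[name.strip()] = {"rx_errs": int(fields[2]),
                                    "tx_errs": int(fields[10])}
    return errors

for iface, counts in interface_errors().items():
    print(iface, counts)
```

A monitoring check that alerts on the *rate of change* of these counters, rather than link up/down alone, catches exactly the kind of slow performance-killer described above.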
It's rare to see routing designed with good choices and configured correctly. There's a simple tell-tale test: try to connect to an unassigned IP address on the corporate network. Does it error immediately, or time out?
The poor state of corporate networks isn't helped by networking equipment vendors, who often ship equipment with near-essential settings turned off by default in the name of "backward compatibility".
Finally, many sysadmins and applications are their own worst enemies. Using IP addresses rather than DNS names (they're going to regret that, come IPv6). Disabling Ethernet autonegotiation. Assuming link-layer connectivity for high-availability schemes. Refusing to deal with authentication and authorisation within the application, instead pushing them into VLANs and VPNs, thus turning the corporate network into a flat layer-two network, with resulting poor behaviour under fault conditions.
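The IPv6 point deserves spelling out: code that connects by name gets handed whatever address families the resolver knows about, while a hard-coded IPv4 literal can never pick up an AAAA record. A minimal sketch, using "localhost" purely as a stand-in for a real hostname:

```python
import socket

def addresses(host, port=443):
    """Resolve a name and report every address family the resolver returns."""
    return [("IPv6" if fam == socket.AF_INET6 else "IPv4", sockaddr[0])
            for fam, _, _, _, sockaddr in
            socket.getaddrinfo(host, port, proto=socket.IPPROTO_TCP)]

# A dual-stack host typically yields both 127.0.0.1 and ::1 here.
for label, addr in addresses("localhost"):
    print(label, addr)
```

An application that iterates over `getaddrinfo()` results transitions to IPv6 for free; one with `192.168.x.y` scattered through its config files does not.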