LPC: An update on bufferbloat
Posted Sep 22, 2011 5:05 UTC (Thu) by fest3er
Parent article: LPC: An update on bufferbloat
I'm two years into exploring Linux Traffic Control (LTC), trying to create a nice web-based UI to configure it; I'm still learning what it can and cannot do. A lot of the documentation isn't very good, and some of it is flat-out wrong. The UI is finally working fairly well, even though it is still very wrong in a couple of places.
So far, I have found:
- When using HTB with Stochastic Fairness Queueing (SFQ) on each leaf, LTC can do a very good job of balancing data streams.
- I have to measure the actual sustained throughput of each interface to ensure that I don't try to push too much data through it. For example, my 100Mbit links generally max out at around 92Mbit/s and my internet uplink maxes out at around 2.8Mbit/s.
- Priority should only ever be given to data streams that use limited, well-defined bandwidth, such as DNS and VoIP; prioritizing too much traffic produces avoidable latency for everything else.
- All other streams should equally share the bandwidth and be allowed to use 100% of it when there is no other contention. HTB does this very well.
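The second finding above, capping each interface below its measured sustained rate, amounts to a simple margin calculation. A minimal sketch, where the 2800kbit/s measurement comes from the uplink figure in this comment and the 97% margin is my own assumption, not a recommended constant:

```shell
#!/bin/sh
# Derive an HTB ceiling from a measured sustained rate, leaving a small
# margin so the queue builds in our qdisc rather than in the modem.
measured_kbit=2800   # measured sustained uplink throughput, kbit/s
margin_pct=97        # assumed safety margin

ceil_kbit=$(( measured_kbit * margin_pct / 100 ))
echo "${ceil_kbit}kbit"
```

The point of shaving a few percent off is that the bottleneck queue then lives in a qdisc you control instead of in an opaque device buffer downstream.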
With that in hand, I identified the most common sets of data streams (FTP, HTTP, SSH, DNS, et al.) and designed an HTB scheme that fairly shares bandwidth among them, while allowing all but DNS, VoIP, and IM to use 100% of the available bandwidth in the absence of contention. DNS, VoIP, and IM are allowed limited bandwidth, based on educated guesses as to the most they could need. I also set DNS and VoIP to priority 0 (zero), IM to priority 2, and all others to priority 1. DNS and VoIP have priority, but they won't starve other streams because their average bit rates are fairly low; nor will other streams see much change in latency. I have not given ACK/NAK/etc. packets any special treatment. (For what it's worth, I also limited unidentified traffic to 56kbit/s. It's bitten me a few times, but it has also forced me to learn more.)
Before deploying this scheme, I would almost always see uploads start at a nice high rate, then plummet to 1Mbit/s and stay there, and I would see different data streams fighting each other, causing choppy throughput. After deploying it, throughput is smooth, the uplink is stable at 2.8Mbit/s, and the various data streams share the available bandwidth very nicely. Interactive SSH proceeds as though the 500MiB tarball uploads and downloads weren't happening. Even wireless behaves nicely, with its bandwidth limit set to match the actual wireless rate. The "uncontrollable downlink" problem? It flows fairly smoothly even though my only point of control is the high-speed outbound interfaces: by forcing data streams to share proportionately, none of them can take over the limited downlink bandwidth, and don't forget that their uplink ACKs/NAKs/etc. are also shared proportionately. The "shared cable" problem? I don't remember the last time I noticed a bothersome slowdown during the normal 'overload' periods.
But that is only part of the solution: the outbound part. Inbound control is a whole other problem, one that LTC doesn't handle well. And when you add NAT to the equation, LTC all but falls apart; after outbound NAT, the source IPs (and possibly ports) are 'obscured', which requires classifying in iptables' mangle table because 'tc filter' can no longer match them. When I have time, I am going to rewrite my UI to use IMQ devices to handle all traffic control. Then I'll be able to share a limited link's bandwidth (read: the internet link) fairly among the other links in the router, and properly control traffic whether or not NAT is involved. It will even be much easier to control VPNs that terminate on the firewall.
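The mangle-table workaround exploits netfilter ordering: mangle POSTROUTING runs before nat POSTROUTING (where SNAT rewrites the source address), so the pre-NAT LAN address is still visible there, while tc's own filters on the egress device only ever see the rewritten packet. A sketch, where eth0 (the WAN device), the addresses, and the 1:10/1:20 class IDs are hypothetical and would have to match an existing HTB setup:

```shell
#!/bin/sh
# Classify by pre-NAT source address using the CLASSIFY target in the
# mangle table, since 'tc filter' on the WAN device sees only the
# post-SNAT source. Device, addresses, and class IDs are assumptions.

# A specific host (say, a VoIP phone) into the priority class.
iptables -t mangle -A POSTROUTING -o eth0 -s 192.168.1.10 \
         -j CLASSIFY --set-class 1:10

# Everything else from the LAN into the bulk class.
iptables -t mangle -A POSTROUTING -o eth0 -s 192.168.1.0/24 \
         -j CLASSIFY --set-class 1:20
```

An equivalent approach is `-j MARK --set-mark N` in mangle plus a `tc filter ... handle N fw` classifier, which keeps the class numbering on the tc side.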
In summary, I've found that it is possible to avoid flooding neighbors without horribly disfiguring latencies and without unduly throttling interfaces. Steady state transfers are clean. Bursty transfers are clean (even though they'll use some buffer space) as long as their average bit rate remains reasonable. But as has been pointed out several times and is the point of this series, controlling my end of the link is only half of the solution.
TCP/IP needs a way to query interfaces to determine their maximum sustainable throughput, and it needs a way to transmit that info to the other end (the peer), possibly in the manner of path MTU discovery. ISPs need to tell their customers their maximum sustainable throughput; the exact number, not their usual misleading 'up to' marketing BS. Packets should be dropped as close to the source as possible when necessary; dropping a packet at the first router on a 100Mb or Gb link is far better than dropping it after it's traversed the slow link on the other side of that router.
The effects of bufferbloat can be minimized, even when only one side can be controlled. But good Lord! Understanding LTC is not easy. The faint of heart and those with hypertension, anger management issues, difficulty understanding semi-mangled English, ADD, ADHD and other ailments should consult their physicians before undertaking such a trek. It is an arduous journey. Every time you think you've reached the end, you discover you've only looped back to an earlier position on the trail. Here, patience is a virtue; the patience of Job is almost required.