Or do I only have to think about bufferbloat? Are there any guides telling me how to fix my Ubuntu Linux-based server/router so that it is not part of the bufferbloat problem?
The CHOKe packet scheduler
Posted Jan 13, 2011 21:07 UTC (Thu) by jhhaller (subscriber, #56103)
One could do something similar by emulating a slightly slower link after receipt from your ISP. For example, if you have an 8 Mbps link from your ISP, you could put an artificial link of 7 Mbps out of your router (method for this left as an exercise for the reader). Then, if the queues start growing on that 7 Mbps link, packets can be discarded there. This would tend to keep your 8 Mbps link from becoming congested, at the expense of never getting your full 8 Mbps. But it would be much better for the ISP to install the CHOKe scheduler, so packets would only start being dropped when the average rate exceeded 8 Mbps.
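One way to implement the "artificial 7 Mbps link" left as an exercise above is with a token bucket filter qdisc via `tc`. This is a sketch only: the interface name `eth0` and the burst/latency values are assumptions to be tuned for your setup, and the commands need root.

```shell
# Shape egress on the LAN-facing interface to just under the ISP rate
# (8 Mbps uplink, shaped to 7 Mbps) so the queue builds up on a link
# we control instead of inside the ISP's equipment.
tc qdisc add dev eth0 root tbf rate 7mbit burst 10kb latency 50ms

# Inspect queue and drop statistics to see whether shaping is taking effect:
tc -s qdisc show dev eth0

# Remove the shaping again:
tc qdisc del dev eth0 root
```

With the bottleneck moved onto your own box, drops (and hence TCP's congestion signals) happen before the upstream buffers fill.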
Posted Jan 13, 2011 22:26 UTC (Thu) by paulj (subscriber, #341)
- the size of the ring buffer used to transfer packets from the kernel to the NIC. Query it with 'ethtool -g <dev>' and set it with 'ethtool -G <dev> tx N' (note: lowercase -g queries, capital -G sets)
- the number of packets the kernel's network layer will queue; it is 1000 by default on ethernet interfaces, which is far too large if your ethernet has a wifi segment in the middle of it. Set it to something lower with 'ip link set dev <dev> txqueuelen N'
Posted Jan 14, 2011 9:28 UTC (Fri) by marcH (subscriber, #57642)
A queue size matters only when it becomes a bottleneck. When a queue is empty its maximum size obviously does not matter.
If your traffic goes first over a gigabit wire and then through wifi, the queue size on the gigabit link will never matter, because the wifi queue will become the bottleneck first, filling up and dropping packets before the gigabit queue ever does.
It does not hurt to fine tune the queue size of every link just in case. But if you only want a quick fix you just need to look at your usual bottlenecks.
By the way, is the TX ring size in Linux finally adjusted according to the link speed? This should be a very basic thing to implement...
Posted Jan 14, 2011 9:54 UTC (Fri) by paulj (subscriber, #341)
There's no way for Linux to know just from your local link-speed what the correct size is. Ethernets in the past tended to be relatively homogeneous wrt segments, but these days they can be extraordinarily mismatched - with GigE segments often bridged together with 802.11 links whose reliability, latency and bandwidth make you yearn fondly for the thinnet of 20+ years ago.
Posted Jan 14, 2011 10:50 UTC (Fri) by marcH (subscriber, #57642)
For sure the NIC *alone* would, but TCP (or DCCP) does NOT.
TCP is ACK-clocked on the bottleneck. Just try it! The name of this feature is "congestion avoidance", google it.
> There's no way for Linux to know just from your local link-speed what the correct size is.
> Ethernets in the past tended to be relatively homogeneous wrt segments, but these days they can be extraordinarily mismatched
It does not matter: every link needs to adjust according to its *own* speed.
Posted Jan 14, 2011 15:31 UTC (Fri) by njs (guest, #40338)
Nope. (And it's "ring sizes", not "ring size"; every driver has its own TX queue handling code, plus there's the TX queue in the network layer itself.) I do have some patches to auto-tune the iwlwifi driver's queues, but haven't had a chance to benchmark them properly...
Copyright © 2013, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds