
Bufferbloat: the summary

Posted Feb 26, 2011 12:20 UTC (Sat) by hmh (subscriber, #3838)
In reply to: Bufferbloat: the summary by jg
Parent article: The debloat-testing kernel tree

Actually, the answer is, and has always been, AQM. You can and should have a dynamically-sized queue, even on hosts (NOTE: socket buffers often need to be rather large; that has nothing to do with the queues).

The queue should be able to grow large, but only for flows where the bandwidth-delay product requires it. And it should early-drop.
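
Something like this toy sketch is what I mean by a dynamically-sized, early-dropping queue (the thresholds and names are invented purely for illustration; this is not any real qdisc):

    import random
    from collections import deque

    class EarlyDropQueue:
        """Toy queue: allowed to grow deep (for high bandwidth-delay
        product bursts), but packets are dropped probabilistically once
        the backlog passes a soft threshold, instead of waiting for
        tail drop when the queue is full."""

        def __init__(self, soft_limit=50, hard_limit=1000):
            self.q = deque()
            self.soft_limit = soft_limit    # start early-dropping here
            self.hard_limit = hard_limit    # never grow past this

        def enqueue(self, pkt):
            backlog = len(self.q)
            if backlog >= self.hard_limit:
                return False                # forced tail drop
            if backlog > self.soft_limit:
                # Drop probability rises linearly as the queue grows.
                p = (backlog - self.soft_limit) / (self.hard_limit - self.soft_limit)
                if random.random() < p:
                    return False            # early drop (or ECN-mark instead)
            self.q.append(pkt)
            return True

        def dequeue(self):
            return self.q.popleft() if self.q else None

In practice you would count bytes and queueing time rather than packets, but the shape is the same: let the queue grow when the flow needs it, and signal the endpoints well before it is full.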

And the driver DMA ring-buffer size really should be considered part of the queue for any calculations, although you probably have to consider that part of the queue a "done deal" and not drop/reorder anything there. Otherwise, you can get even fast-ethernet to feel like a very badly behaved LFN (long fat network). However, reducing DMA ring-buffer size can have several drawbacks on high-throughput hosts.

Using latency-aware, priority-aware AQM (even if it is not flow-aware) should fix the worst issues, without downgrading throughput on bursty links or long fat networks. Teaching it about the hardware buffers would let it autotune better.
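
To make the "hardware buffers count too" point concrete, here is a rough back-of-the-envelope sketch (the byte counts and line rate are invented for illustration) of the latency a new packet sees once you include what is already sitting in the driver ring:

    def queueing_delay_est(qdisc_bytes, tx_ring_bytes, link_bps):
        """Seconds a newly enqueued packet waits before reaching the wire:
        everything still in the qdisc plus everything already committed to
        the driver's DMA ring (which we can't drop or reorder anymore)."""
        return (qdisc_bytes + tx_ring_bytes) * 8 / link_bps

    # 60 KB in the qdisc plus a 256-descriptor ring full of 1500-byte
    # frames, on 100 Mbit/s fast ethernet: roughly 35 ms before the
    # packet even leaves the host.
    print(queueing_delay_est(60_000, 256 * 1500, 100_000_000))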



Bufferbloat: the summary

Posted Feb 26, 2011 13:37 UTC (Sat) by jg (guest, #17537)

Yes, AQM is the answer, including on hosts.

What AQM algorithm is a different question.

Van Jacobson says that RED is fundamentally broken, and has no hope of working in the environments we have to operate in. And Van was one of the inventors of RED...

SFB may or may not hack it. Van has an algorithm he is finishing up the write-up of that he thinks may work; hopefully it will be available soon. We have a fundamentally interesting problem here. And testing this is going to be much more work than implementing it, by orders of magnitude.

It isn't clear the AQM needs to be priority aware; wherever the queues are building, you are more likely to choose a packet from them to drop (literally drop, or ECN mark) just by running one algorithm across all the queues. I haven't seen arguments that make me believe the AQM must be per-queue (that doesn't mean there aren't any! just that I haven't seen them).

And there are good reasons why the choice of packet to drop should have randomness in it; time-based congestion can occur if you don't. Different packet types also have different leverage to them (ACKs vs. data vs. SYNs, etc.).
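
As a toy illustration of what I mean by running one algorithm across all the queues (not a proposed algorithm, just the shape of it): pick the victim at random over the combined backlog, so the drops and ECN marks land wherever the queue is actually building, with no per-queue AQM state at all.

    import random

    def pick_victim(queues):
        """queues: list of per-interface/per-class packet lists.  Pick one
        packet to drop (or ECN-mark) uniformly at random over the combined
        backlog, so a queue's chance of being hit is proportional to how
        much it currently has buffered."""
        total = sum(len(q) for q in queues)
        if total == 0:
            return None
        n = random.randrange(total)
        for q in queues:
            if n < len(q):
                return q.pop(n)
            n -= len(q)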


Bufferbloat: the summary

Posted Feb 26, 2011 16:14 UTC (Sat) by hmh (subscriber, #3838)

You're likely not going to get anywhere above "acceptable" using a simple AQM, even if it is SFB. It is not going to get to "good" or "excellent" marks.

The Diffserv model got it right, in the sense that even on a simple host there are flows for which you do NOT want to drop packets (DNS, NTP) if you can help it, and that there is a natural hierarchy of priorities determining which services you'd rather have suffer more packet drops than others during congestion.
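
For instance, nothing stops a resolver or NTP client from setting a DSCP class selector on its socket today; a minimal example (the class chosen here is just for illustration, which class a real resolver should use is a policy question):

    import socket

    # Mark a UDP socket with DSCP class selector CS6.  A DSCP-aware
    # queue can then leave these packets alone and take its early
    # drops from the bulk classes instead.
    DSCP_CS6 = 48
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, DSCP_CS6 << 2)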

I've also found that "socializing" the available bandwidth among flows of the same class is a damn convenient thing (SFQ). SFB does this well, AFAIK.

So, I'd say that what we should aim for on hosts is an auto-tuned, flow-aware AQM that at least pays attention to the bare minimum of priority ordering (802.1p/DSCP class selectors) and does a good job of keeping latency under control without killing throughput on high bandwidth-delay product flows. Such a beast could be enabled by default on a distro [for desktops] with little fear.
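
To make that a bit more concrete, here is a toy sketch of the shape I have in mind (the band count, bucket count and thresholds are all invented for illustration; this is not SFB or any existing qdisc):

    import random
    from collections import deque

    NUM_PRIORITIES = 4       # e.g. DSCP class selectors collapsed to 4 bands
    BUCKETS_PER_PRIO = 16    # per-flow "socialization" within a band

    def flow_bucket(src, dst, sport, dport):
        """Hash the flow tuple into a stable bucket, SFQ-style."""
        return hash((src, dst, sport, dport)) % BUCKETS_PER_PRIO

    class PrioFlowAQM:
        """Toy queue: band 0 is the highest priority; dequeue strictly by
        band, round-robin among flow buckets inside a band, and early-drop
        from the fattest bucket of the lowest busy band once the total
        backlog passes a soft limit."""

        def __init__(self, soft_limit=128, hard_limit=1024):
            self.bands = [[deque() for _ in range(BUCKETS_PER_PRIO)]
                          for _ in range(NUM_PRIORITIES)]
            self.rr = [0] * NUM_PRIORITIES      # round-robin scan position
            self.soft_limit, self.hard_limit = soft_limit, hard_limit
            self.backlog = 0

        def enqueue(self, pkt, prio, flow):
            if self.backlog >= self.hard_limit:
                return False
            if self.backlog > self.soft_limit:
                p = ((self.backlog - self.soft_limit)
                     / (self.hard_limit - self.soft_limit))
                if random.random() < p:
                    self._drop_one()            # early drop (or ECN mark)
            self.bands[prio][flow_bucket(*flow)].append(pkt)
            self.backlog += 1
            return True

        def _drop_one(self):
            # Take the hit in the lowest-priority band that has anything
            # queued, from whichever flow bucket is hogging the most.
            for band in reversed(self.bands):
                victim = max(band, key=len)
                if victim:
                    victim.pop()
                    self.backlog -= 1
                    return

        def dequeue(self):
            for prio, band in enumerate(self.bands):
                start = self.rr[prio]
                for i in range(BUCKETS_PER_PRIO):
                    bucket = band[(start + i) % BUCKETS_PER_PRIO]
                    if bucket:
                        self.rr[prio] = (start + i + 1) % BUCKETS_PER_PRIO
                        self.backlog -= 1
                        return bucket.popleft()
            return None

A real implementation would of course count bytes and queueing time rather than packets, and would feed the hardware queues mentioned below.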

This doesn't mean you need multiple queues. However, you will want multiple queues in many cases because that's how hardware-assisted QoS works, such as what you find on any 802.11n device or non-el-cheap-o gigabit ethernet NIC.

Routers are a different deal altogether.

