

Power impact of debufferbloating

Posted Sep 14, 2011 6:26 UTC (Wed) by njs (subscriber, #40338)
In reply to: Power impact of debufferbloating by Cyberax
Parent article: LPC: An update on bufferbloat

You miss the point. Yes, in principle that is the solution. But right now it just doesn't work. I turned on traffic shaping, and it had no effect whatsoever. That's because traffic shaping only applies to packets sitting in the qdisc buffer, and packets only pile up in the qdisc buffer when the device buffer is full -- but device buffers are so large that they never fill up unless you have a very high-bandwidth connection or have made the buffers smaller by hand. After I hacked my kernel to reduce the device buffer sizes, traffic shaping started working. That makes reducing buffer sizes a reasonable short-term fix, but in the long term it produces other bad effects, as Arjan points out.
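(For the record, on wired NICs whose drivers expose their ring sizes the same experiment can be tried without patching the kernel. A rough sketch -- eth0 and the value 64 are only placeholders, and many drivers, wifi ones especially, don't offer this knob at all:)

    # show the driver's current RX/TX ring sizes, if the driver supports the query
    ethtool -g eth0
    # shrink the TX ring so the device buffer fills -- and backs up into the qdisc -- much sooner
    ethtool -G eth0 tx 64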

Actually, there is one other way to make traffic shaping work -- if you throttle your outgoing bandwidth, then that throttling is applied in between the qdisc and the device buffers, so the device buffers stop filling up. Lots of docs on traffic shaping say that this is a good idea to work around problems with your ISP's queue management, but in fact it's needed just to work around problems within your own kernel's queue management. Also, you can't really do it for wifi since you don't know what the outgoing bandwidth is at any given time, and in the case of 802.11n, this will actually *reduce* that bandwidth because hiding packets from the device driver will screw up its ability to do aggregation.
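For a wired link with a known uplink rate, the throttling recipe usually looks something like the sketch below (the interface name and the rates are made up; the only point is that the HTB ceiling sits a bit below the real link rate, so that queueing happens in the qdisc where something like SFQ can reorder packets):

    # root HTB shaper; unclassified traffic falls into class 1:10
    tc qdisc add dev eth0 root handle 1: htb default 10
    # cap egress slightly below the real uplink rate so the device buffer never fills
    tc class add dev eth0 parent 1: classid 1:10 htb rate 900kbit ceil 900kbit
    # attach a fair-queueing qdisc to reorder the packets that now queue here
    tc qdisc add dev eth0 parent 1:10 handle 10: sfq perturb 10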



Power impact of debufferbloating

Posted Sep 14, 2011 14:34 UTC (Wed) by Cyberax (✭ supporter ✭, #52523) [Link]

Uhm. As far as I understand, the qdisc sees all the packets (see http://www.docum.org/docum.org/kptd/ ). Now, ingress shaping policies are somewhat lacking in Linux, but egress shaping works perfectly fine.

In my own tests, I was able to segregate heavy BitTorrent traffic from VoIP traffic and get good results on a puny 2MBit line.

I did lose some bandwidth, but not that much.
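(A sketch of the kind of setup that achieves this on a ~2Mbit uplink is below; the interface name, the rates, and the SIP-port filter are purely illustrative, not the exact configuration used here:)

    # shape to a bit under the 2Mbit uplink; unclassified traffic defaults to 1:20
    tc qdisc add dev eth0 root handle 1: htb default 20
    tc class add dev eth0 parent 1: classid 1:1 htb rate 1800kbit
    # small high-priority class for VoIP; bulk traffic (e.g. BitTorrent) shares the rest
    tc class add dev eth0 parent 1:1 classid 1:10 htb rate 400kbit ceil 1800kbit prio 0
    tc class add dev eth0 parent 1:1 classid 1:20 htb rate 1400kbit ceil 1800kbit prio 1
    # send SIP signalling (port 5060) to the VoIP class; RTP media would need further filters
    tc filter add dev eth0 parent 1: protocol ip prio 1 u32 match ip dport 5060 0xffff flowid 1:10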

Power impact of debufferbloating

Posted Sep 14, 2011 16:37 UTC (Wed) by njs (subscriber, #40338) [Link]

Yes, every packet goes through the qdisc. But here's a typical sequence of events in the situation I'm talking about:
- packet 1 enters qdisc
- packet 1 leaves qdisc
- packet 2 enters qdisc
- packet 2 leaves qdisc
- ...

There isn't much the qdisc can do here, prioritization-wise -- even if packet 2 turns out to be high priority, it can't "take back" packet 1 and send packet 2 first. Packet 1 is already gone.

In your tests you used bandwidth throttling, which moved the chokepoint so that it fell in between the qdisc buffer and your device buffer. You told the qdisc to hold onto packets and only let them dribble out at a fixed rate, and chose that rate so that it would be slower than the rate the device buffer drained. So the device buffer never filled up, and the qdisc actually had multiple packets visible at once and was able to reorder them.
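One way to tell which regime you are in is to watch the qdisc statistics while the uplink is saturated; if the backlog counter stays at zero, the queue is building up in the device buffer rather than in the qdisc (the interface name is just an example):

    # per-qdisc counters; watch the "backlog" figure while the link is loaded
    tc -s qdisc show dev eth0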

Power impact of debufferbloating

Posted Sep 29, 2011 0:02 UTC (Thu) by marcH (subscriber, #57642) [Link]

Indeed: you cannot shape traffic unless/until you are the bottleneck. In other words, policing an empty queue does not achieve much. This makes fixing bufferbloat (or QoS more generally) even more difficult, since bottlenecks come and go.

