
Essential, but not the first line

Posted Sep 23, 2011 0:52 UTC (Fri) by njs (subscriber, #40338)
In reply to: Essential, but not the first line by cmccabe
Parent article: LPC: An update on bufferbloat

> I don't think short connection = low latency, long connection = high throughput is a good idea.

I don't think anyone does? (I've had plenty of multi-day ssh connections; they were very low throughput...)

I think the idea is that if a connection is using *less* than its fair share of the available bandwidth, then it's reasonable to give it priority latency-wise. If it could have sent a packet 100 ms ago without being throttled, but chose not to, then it's pretty reasonable to let the packet it sends now jump ahead of all the other packets that have arrived in the last 100 ms; it ends up in the same place it would have if the flow had been more aggressive. So it should work okay, and it naturally gives latency priority to at least some of the connections that need it more.
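This is roughly the idea behind fair-queueing schedulers like fq_codel, which serve "sparse" flows (those that have been idle or under their byte budget) ahead of bulk flows. A toy Python sketch of that two-tier scheme, just to illustrate the mechanism (the class, the quantum value, and the tier names are my own, not from any kernel code):

```python
from collections import deque

class FairQueue:
    """Toy two-tier scheduler: flows currently under their byte budget
    (the "sparse" tier) are served before flows streaming at full rate
    (the "bulk" tier). Illustrative only, not a real qdisc."""

    def __init__(self, quantum=1500):
        self.quantum = quantum
        self.queues = {}       # flow id -> deque of (packet, size)
        self.credit = {}       # per-flow byte budget
        self.sparse = deque()  # flows that were idle or under budget
        self.bulk = deque()    # flows that exhausted their budget

    def enqueue(self, flow, packet, size):
        if flow not in self.queues:
            # A flow that was idle re-enters at the priority tier,
            # so its packet jumps ahead of queued bulk traffic.
            self.queues[flow] = deque()
            self.credit[flow] = self.quantum
            self.sparse.append(flow)
        self.queues[flow].append((packet, size))

    def dequeue(self):
        for tier in (self.sparse, self.bulk):
            while tier:
                flow = tier[0]
                q = self.queues[flow]
                if not q:
                    # Flow drained: forget it, so a later packet
                    # re-enters as sparse and gets priority again.
                    tier.popleft()
                    del self.queues[flow]
                    del self.credit[flow]
                    continue
                pkt, size = q[0]
                if self.credit[flow] < size:
                    # Budget spent: demote to the end of the bulk tier.
                    tier.popleft()
                    self.credit[flow] += self.quantum
                    self.bulk.append(flow)
                    continue
                q.popleft()
                self.credit[flow] -= size
                return flow, pkt
        return None
```

With three full-size packets queued for a bulk flow, a small packet arriving on a quiet flow is dequeued second rather than last, which is exactly the "jump ahead of the last 100 ms of packets" behavior described above.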



Essential, but not the first line

Posted Sep 23, 2011 9:39 UTC (Fri) by cmccabe (guest, #60281) [Link] (1 responses)

Penalizing high bandwidth users is kind of an interesting heuristic. It's definitely better than penalizing long connections, at least!

However, I think you're assuming that all the clients are the same. This is definitely not the case in real life. Also, not all applications that need low latency are low bandwidth. For example, video chat can suck up quite a bit of bandwidth.

Just to take one example: if I'm the cable company, I might have some customers with a 1.5 Mbit/s download and others with 6.0 Mbit/s. Assuming they all go through one big router at some point, the 6.0 Mbit/s customers will obviously be using more than their "fair share" of the uplink from that box. Maybe I can be super clever and account for this, but what about the next router in the chain? It may not even be owned by my cable company, so it's not going to know the exact reason some connections are using more bandwidth than others.

Maybe there's something I'm not seeing, but this still seems problematic...

Essential, but not the first line

Posted Sep 24, 2011 1:30 UTC (Sat) by njs (subscriber, #40338) [Link]

Well, that's why we call it a heuristic :-) It can be helpful even if it's not perfect. A really tough case is flows that can scale their bandwidth requirements but value latency over throughput -- for something like VNC or live video you'd really like to use all the bandwidth you can get, but latency is more important. (I maintain a program like this, bufferbloat kicks its butt :-(.) These should just go ahead and set explicit QoS bits.
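For reference, "setting explicit QoS bits" usually means marking the DSCP field in the IP header through a socket option. A minimal Python sketch, assuming a UDP socket and the EF (Expedited Forwarding) code point; whether any router along the path actually honors the mark is entirely up to the network:

```python
import socket

# DSCP occupies the top six bits of the old TOS byte, so the code
# point is shifted left by two. EF (Expedited Forwarding) is 46.
EF_TOS = 46 << 2  # 0xb8

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# Ask the kernel to mark this socket's outgoing packets as low-delay.
s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, EF_TOS)
```

This only tags the packets; it's a hint, not a guarantee, which is why the marking is no substitute for fixing the queues themselves.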

Obviously the first goal should be to minimize latency in general, though.


Copyright © 2025, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds