Essential, but not the first line
Essential, but not the first line
Posted Sep 14, 2011 17:58 UTC (Wed) by cmccabe (guest, #60281)
In reply to: Essential, but not the first line by tialaramex
Parent article: LPC: An update on bufferbloat
There are a lot of applications where you really just don't care about latency at all, like downloading a software update or retrieving a large file. And then there are applications like instant messaging, Skype, and web browsing, where latency is very important.
If bulk really achieved higher throughput, and interactive really got reasonable latency, I think applications would fall in line pretty quickly and nobody would "optimize" by setting the wrong class.
The problem is that there's very little real competition in the broadband market, at least in the US. The telcos tend to regard any new feature as just another cost for them. Even figuring out "how much bandwidth will I get?" or "how many gigs can I download per month?" is often difficult. So I don't expect to see real end-to-end QoS any time soon.
Posted Sep 15, 2011 13:18 UTC (Thu)
by joern (guest, #22392)
[Link] (10 responses)
TCP actually has a good heuristic. If a packet gets lost, the connection is too fast for the available bandwidth and has to back off a bit. If no packets get lost, it uses a bit more bandwidth. With this simple mechanism, it can adjust to any network speed, track changing network speeds fairly rapidly, etc.
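Roughly, the rule is "additive increase, multiplicative decrease". A toy sketch of its shape (just the mechanism described above, nothing like a real TCP stack):

    # Toy additive-increase/multiplicative-decrease loop: the shape of the
    # heuristic, not a faithful model of any real TCP implementation.
    class TcpLikeSender:
        def __init__(self):
            self.cwnd = 1.0              # congestion window, in packets

        def on_ack(self):
            # No loss seen: cautiously use a bit more bandwidth
            # (roughly +1 packet per round trip).
            self.cwnd += 1.0 / self.cwnd

        def on_loss(self):
            # A drop means we overshot the available bandwidth: back off.
            self.cwnd = max(1.0, self.cwnd / 2.0)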
Until you can come up with a similarly elegant heuristic that doesn't involve decisions like "ssh, but not scp, unless scp is really small", consider me unconvinced. :)
Posted Sep 16, 2011 16:14 UTC (Fri)
by sethml (guest, #8471)
[Link] (9 responses)
Unfortunately my scheme requires routers to track TCP connection state, which might be prohibitively expensive in practice on core routers.
Posted Sep 16, 2011 21:03 UTC (Fri)
by piggy (guest, #18693)
[Link]
Posted Sep 22, 2011 5:28 UTC (Thu)
by cmccabe (guest, #60281)
[Link] (7 responses)
Due to the 3-way handshake, TCP connections which only transfer a small amount of data have to pay a heavy latency penalty before sending any data at all. It seems pretty silly to ask applications that want low latency to spawn a blizzard of tiny TCP connections, all of which will have to do the 3-way handshake before sending even a single byte. Also, spawning a blizzard of connections tends to short-circuit even the limited amount of fairness that you currently get from TCP.
This problem is one of the reasons why Google designed SPDY. The SPDY web page explains that it was designed "to minimize latency" by "allow[ing] many concurrent HTTP requests to run across a single TCP session."
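Back-of-envelope, with a made-up 100 ms round trip:

    # Latency cost of the 3-way handshake for a small request; the 100 ms
    # RTT is an invented number, purely for illustration.
    rtt = 0.100   # seconds per round trip

    # Fresh TCP connection: one RTT for SYN/SYN-ACK before the request can
    # even be sent, then one more RTT for the request/response itself.
    fresh_connection = 2 * rtt       # ~200 ms to the first response byte

    # Same request multiplexed onto an already-open connection (the SPDY
    # approach): only the request/response round trip remains.
    reused_connection = 1 * rtt      # ~100 ms

    print(fresh_connection, reused_connection)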
Routers could do deep packet inspection and try to put packets into a class of service that way. This is a dirty hack, on par with flash drives scanning the disk for the FAT header. Still, we've been stuck with even dirtier hacks in the past, so who knows.
I still feel like the right solution is to have the application set a flag in the header somewhere. The application is the one who knows. Just to take your example, ssh does know whether the input it's getting is coming from a tty (interactive) or a file that's been catted to it (non-interactive). And scp should probably always be non-interactive. You can't deduce this kind of information at a lower layer, because only the application knows.
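Concretely, the knob already exists: an application can set the TOS/DSCP byte on its socket. A sketch of what I mean (the AF21/CS1 code points are just conventional choices on my part, not anything mandated):

    import os
    import socket

    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

    # DSCP AF21 ("low-latency data") vs. CS1 ("bulk"), shifted left two bits
    # because IP_TOS takes the whole TOS byte, ECN bits included.
    INTERACTIVE_TOS = 0x12 << 2
    BULK_TOS = 0x08 << 2

    # Only the application knows: mark the flow interactive when stdin is a tty.
    tos = INTERACTIVE_TOS if os.isatty(0) else BULK_TOS
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos)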
I guess there is this thing in TCP called "urgent data" (aka OOB data), but it seems to be kind of a vermiform appendix of the TCP standard. Nobody has ever been able to explain to me just what an application might want to do with it that is useful...
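For what it's worth, urgent data is just a one-byte side channel flagged in the TCP header; a sketch of sending and catching it on a loopback connection (whether it's ever *useful* is another question):

    import select
    import socket

    # Minimal loopback demo of TCP urgent ("out-of-band") data.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)

    client = socket.create_connection(srv.getsockname())
    conn, _ = srv.accept()

    client.send(b"normal data")
    client.send(b"!", socket.MSG_OOB)    # one byte flagged urgent in the TCP header

    # Urgent data shows up as an "exceptional condition" on the socket.
    select.select([], [], [conn])
    print(conn.recv(1, socket.MSG_OOB))  # b'!', read outside the normal stream
    print(conn.recv(64))                 # b'normal data', unaffected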
Posted Sep 22, 2011 8:23 UTC (Thu)
by kevinm (guest, #69913)
[Link]
Posted Sep 22, 2011 17:20 UTC (Thu)
by nix (subscriber, #2304)
[Link] (2 responses)
Posted Sep 23, 2011 6:44 UTC (Fri)
by salimma (subscriber, #34460)
[Link] (1 responses)
Posted Sep 23, 2011 10:57 UTC (Fri)
by nix (subscriber, #2304)
[Link]
Posted Sep 23, 2011 0:52 UTC (Fri)
by njs (subscriber, #40338)
[Link] (2 responses)
I don't think anyone does? (I've had plenty of multi-day ssh connections; they were very low throughput...)
I think the idea is that if some connection is using *less* than its fair share of the available bandwidth, then it's reasonable to give it priority latency-wise. If it could have sent a packet 100 ms ago without being throttled, but chose not to, then it's pretty reasonable to let the packet it sends now jump ahead of all the other packets that have arrived in the last 100 ms; it ends up in the same place it would have been if the flow had been more aggressive. So it should work okay, and it naturally gives latency priority to at least some of the connections that need it more.
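Something like this toy scheduler captures the idea (made-up names, a sketch of the policy rather than any real qdisc):

    from collections import defaultdict, deque

    class SparseFlowFriendlyQueue:
        """Toy scheduler: the flow that has sent the fewest bytes recently is
        dequeued first, so a quiet flow's occasional packet jumps ahead of a
        bulk flow's backlog."""

        def __init__(self):
            self.queues = defaultdict(deque)      # flow id -> queued packets (bytes)
            self.recent_bytes = defaultdict(int)  # flow id -> bytes sent recently

        def enqueue(self, flow, packet):
            self.queues[flow].append(packet)

        def dequeue(self):
            backlogged = [f for f, q in self.queues.items() if q]
            if not backlogged:
                return None
            # Pick the flow that has been using the least bandwidth lately.
            flow = min(backlogged, key=lambda f: self.recent_bytes[f])
            packet = self.queues[flow].popleft()
            self.recent_bytes[flow] += len(packet)
            return packet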
Posted Sep 23, 2011 9:39 UTC (Fri)
by cmccabe (guest, #60281)
[Link] (1 responses)
However, I think you're assuming that all the clients are the same. This is definitely not the case in real life. Also, not all applications that need low latency are low-bandwidth. For example, video chat can suck up quite a bit of bandwidth.
Just to take one example: if I'm the cable company, I might have some customers with 1.5 Mbit/s downloads and others with 6.0 Mbit/s. Assuming they all go into one big router at some point, the 6.0 Mbit/s customers will obviously be using more than their "fair share" of the uplink from this box. Maybe I can be super clever and account for this, but what about the next router in the chain? It may not even be owned by my cable company, so it's not going to know the exact reason why some connections are using more bandwidth than others.
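(Being "super clever" at that first box would presumably mean something like weighted fair sharing, where each customer's weight is their plan rate; toy numbers:)

    # Toy weighted fair share on an oversubscribed uplink; every number and
    # name here is invented.
    uplink = 10.0                                     # Mbit/s on the shared box
    plans = {"a": 6.0, "b": 6.0, "c": 1.5, "d": 1.5}  # provisioned rates

    total = sum(plans.values())
    shares = {cust: uplink * rate / total for cust, rate in plans.items()}
    print(shares)   # {'a': 4.0, 'b': 4.0, 'c': 1.0, 'd': 1.0}

But the next router upstream doesn't know those weights, so to it the 6.0 Mbit/s flows just look greedy.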
Maybe there's something I'm not seeing, but this still seems problematic...
Posted Sep 24, 2011 1:30 UTC (Sat)
by njs (subscriber, #40338)
[Link]
Obviously the first goal should be to minimize latency in general, though.
Posted Sep 29, 2011 21:40 UTC (Thu)
by marcH (subscriber, #57642)
[Link]
Interesting, but never going to happen. The main reason TCP/IP is successful is that QoS is optional in theory and non-existent in practice.
The end-to-end principle states that the network should be as dumb as possible. This is at the core of the design of TCP/IP. Notably, it allows interconnecting any network technologies, including the least demanding ones. The problem with this approach is that as soon as you have the cheapest and dumbest technology somewhere in your path (think: basic Ethernet), there is a HUGE incentive to align your other network sections on this lowest common denominator (think... Ethernet), because the advanced features and effort you paid for in the more expensive sections are wasted.
Suppose you have perfect QoS settings implemented in only a few sections of your network path (as many posts in this thread propose). As soon as the traffic changes and your current bottleneck (= non-empty queue) moves to another, QoS-ignorant section, all your QoS dollars and configuration effort become instantly wasted. Policing empty queues has no effect.
An even more spectacular way to waste time and money with QoS in TCP/IP is to have different network sections implementing QoS in ways not really compatible with each other.
The only case where TCP/IP QoS can be made to work is when a *single* entity has tight control over the entire network; think, for instance, of VoIP at the corporate or ISP level. And even there I suspect it does not come cheap. In any other case, bye-bye QoS.
Essential, but not the first line
Essential, but not the first line
Essential, but not the first line
Essential, but not the first line
(See http://www.chromium.org/spdy/spdy-whitepaper)
Essential, but not the first line
Essential, but not the first line
I still feel like the right solution is to have the application set a flag in the header somewhere. The application is the one who knows. Just to take your example, ssh does know whether the input it's getting is coming from a tty (interactive) or a file that's been catted to it (non-interactive). And scp should probably always be non-interactive. You can't deduce this kind of information at a lower layer, because only the application knows.
And SSH can do just this: if DISPLAY is unset and SSH is running without a terminal, it sets the QoS bits for a bulk transfer; otherwise, it sets them for an interactive transfer. Unfortunately scp doesn't unset DISPLAY, so if you run scp from inside an X session I suspect it always gets incorrectly marked as interactive... but that's a small thing.
Essential, but not the first line
Essential, but not the first line
Essential, but not the first line
Essential, but not the first line
Essential, but not the first line
The QoS lost cause
