
TCP Fast Open: expediting web services

Posted Aug 2, 2012 15:57 UTC (Thu) by gdt (subscriber, #6284)
Parent article: TCP Fast Open: expediting web services

"Router latencies" isn't really an issue, as they are easily enough solved by increasing the bandwidth (which reduces the time to receive and transmit a packet). (And yeah, I'm avoiding the "bufferbloat" overprovisioning of buffers at the network edge here, because when that exists RTO is not much help -- saving one RTT when you have multiple RTT in the queue ahead of you isn't a huge win.)

The speed of light in optical fiber is the major contributor to latency. Light travels through fiber at roughly 200 km per ms, much slower than the speed of light in a vacuum. The speed of light in a fiber can be improved, but at the cost of narrower bandwidth. This tradeoff isn't available to users; it is settled during ITU standards-making. Moreover the tradeoff isn't huge, on the order of 5% of latency. But it does have a major effect on the cost of Forward Error Correction ASICs.
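To put that in concrete terms, here is a rough propagation-delay calculation. The rule-of-thumb figures below (an effective refractive index of about 1.5, giving ~200 km/ms, and a ~6,000 km trans-Atlantic path) are my own illustrative assumptions, not numbers from the comment:

```python
# Back-of-envelope propagation delay over optical fiber.
C_VACUUM_KM_PER_MS = 300.0   # speed of light in vacuum, ~300 km/ms
REFRACTIVE_INDEX = 1.5       # typical effective index for silica fiber (assumed)

c_fiber = C_VACUUM_KM_PER_MS / REFRACTIVE_INDEX  # ~200 km/ms in fiber

def one_way_ms(km):
    """One-way propagation delay in milliseconds over `km` of fiber."""
    return km / c_fiber

def rtt_ms(km):
    """Round-trip propagation delay in milliseconds."""
    return 2 * one_way_ms(km)

# A rough trans-Atlantic fiber path of ~6,000 km (assumed) costs ~60 ms of
# RTT from propagation alone -- no router, queue, or server time included.
print(rtt_ms(6000))
```

Nothing in the protocol stack can reduce this component; it is why saving round trips (as TFO does) matters more than shaving per-hop processing time.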

Once you get off the tail links, the lowest speed you'll encounter on an ISP backbone is 1Gbps. You'd need well more than 1,000 router hops before you accumulated 1ms of ingress play-in, cell switching and egress playout. The other devices that can add significant latency are the middleboxes at the far end of the link: firewalls, load balancers and so on.
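The per-hop arithmetic behind that claim can be sketched as follows. The cell size and the cut-through (cell-switched) assumption here are mine, chosen to illustrate the point, not figures from the comment:

```python
# Per-hop switching delay on a fast backbone link, assuming a cell-switched
# fabric that forwards small fixed-size cells rather than whole frames.
LINK_BPS = 1e9      # 1 Gbps, the backbone lower bound cited above
CELL_BYTES = 64     # assumed cell size for the switching fabric

# Time to clock one cell onto the wire: 64 * 8 / 1e9 = 512 nanoseconds.
cell_time_s = CELL_BYTES * 8 / LINK_BPS

# Hops needed before per-hop switching adds up to 1 ms: roughly 1,950.
hops_for_1ms = 1e-3 / cell_time_s
print(cell_time_s, hops_for_1ms)
```

At ~512 ns per hop, it indeed takes on the order of two thousand hops to accumulate a single millisecond, which is why backbone switching latency is dwarfed by fiber propagation delay.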



TCP Fast Open: expediting web services

Posted Aug 2, 2012 23:44 UTC (Thu) by Lennie (subscriber, #49641)

So when will we see companies building vacuum tubes to speed up the light when crossing large stretches of land, or maybe the Atlantic?

That is the thing I care about. ;-)

Is anyone doing research on that yet?

TCP Fast Open: expediting web services

Posted Aug 4, 2012 12:20 UTC (Sat) by nix (subscriber, #2304)

Warning: work in this area can be dangerous. See e.g. the series, ahem I mean feasibility study, bookended by <http://www.amazon.com/The-Collapsium-Wil-McCarthy/dp/0553...>, <http://www.amazon.com/To-Crush-Moon-Wil-McCarthy/dp/05535...>.

(Though admittedly the Queendom did go to rather more extreme lengths to increase the speed of light than mere vacuum tubes, and displayed a cavalier degree of carelessness, indeed insouciance, regarding the fact that keeping trillions of black holes in your solar system in very large arrays moving at well below orbital velocity is insanely dangerous.)

TCP Fast Open: expediting web services

Posted Aug 4, 2012 13:34 UTC (Sat) by Jannes (subscriber, #80396)

Actually, TFO should be a huge improvement in a bufferbloated situation. If there is one second of bufferbloat, then "saving one RTT" means saving one second.

Not saying it's a solution to bufferbloat of course.

TCP Fast Open: expediting web services

Posted Aug 4, 2012 23:12 UTC (Sat) by drdabbles (guest, #48755)

This may be true in theory (I'm not sure), but in practice it's completely wrong. Bandwidth tells you only how much data can be passed through a link in a given time period. Saying a link is capable of 1Gbit/sec means that if you consume every possible bit for every possible cycle for one second, you'll have transferred 1Gbit of data over the wire. Many links have a frames-per-second limit, so if your frames aren't completely full, you've wasted bandwidth and decreased the utilization of the link.
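The frames-per-second ceiling is easy to quantify for Ethernet. The sketch below uses the standard minimum frame size plus the per-frame preamble and inter-frame gap; the numbers are mine, added for illustration:

```python
# Maximum frame rate of a 1 Gbps Ethernet link for minimum-size frames.
LINK_BPS = 1_000_000_000
MIN_FRAME_BYTES = 64   # minimum Ethernet frame
OVERHEAD_BYTES = 20    # preamble + start delimiter (8) + inter-frame gap (12)

# Each minimum-size frame occupies (64 + 20) * 8 = 672 bit times on the wire.
wire_bits_per_frame = (MIN_FRAME_BYTES + OVERHEAD_BYTES) * 8

# ~1.488 million frames/s -- the classic line-rate figure for GigE.
max_frames_per_sec = LINK_BPS / wire_bits_per_frame
print(max_frames_per_sec)
```

A router that cannot process ~1.49 million packets per second will therefore fall short of line rate under a small-packet load even though the raw bandwidth is nominally available.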

Router latency is caused by many factors: router CPU shortages, memory resource shortages, the time it takes to transfer a frame from "the wire" to the internal hardware and vice versa, how quickly a packet can be processed, whether packet inspection is happening, etc. This, relatively speaking, can be a very long time. Typically it's microseconds, but certainly not always. Either way, it represents a minimum time delay with only a practical ceiling (IP timeout / retransmission). So increasing bandwidth to an already slow router only makes the problem worse.

Also, if you have a link that passes through 1000 routers, it's bound to hit a dozen that are oversubscribed and performing horribly. This is especially true as your distance from the "core" increases and your distance to the "edge" decreases. This is why major datacenters are next to or house the major peering points of the Internet.


Copyright © 2025, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds