TCP Fast Open: expediting web services
Posted Aug 2, 2012 15:57 UTC (Thu) by gdt
Parent article: TCP Fast Open: expediting web services
"Router latencies" aren't really an issue, as they are easily enough solved by increasing the bandwidth (which reduces the time to receive and transmit a packet). (And yes, I'm avoiding the "bufferbloat" overprovisioning of buffers at the network edge here, because when that exists TFO is not much help: saving one RTT when you have multiple RTTs' worth of queue ahead of you isn't a huge win.)
The speed of light in optical fiber is the major contributor to latency. Light travels through fiber at roughly 150 km per ms, much slower than the speed of light in a vacuum. The speed of light in a fiber can be improved, but at the cost of narrowing bandwidth. That tradeoff isn't available to users; it is determined during ITU standards-making. Moreover the tradeoff isn't huge, on the order of 5% of latency. But it does have a major effect on the cost of Forward Error Correction ASICs.
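As a back-of-the-envelope check on why propagation dominates, here's a sketch using the ~150 km/ms figure above (the example distance is illustrative):

```python
# Back-of-the-envelope fiber propagation delay, using the rough
# 150 km per ms figure quoted above. Values are illustrative.
FIBER_KM_PER_MS = 150.0

def one_way_delay_ms(path_km: float) -> float:
    """Propagation delay for a one-way fiber path of path_km kilometres."""
    return path_km / FIBER_KM_PER_MS

def rtt_ms(path_km: float) -> float:
    """Round-trip propagation time over the same path, out and back."""
    return 2 * one_way_delay_ms(path_km)

# Sydney to Los Angeles is roughly 12,000 km great-circle distance;
# real fiber routes are longer, so this is a lower bound.
print(round(rtt_ms(12000), 1))  # 160.0 ms of RTT from propagation alone
```

That's 160 ms of RTT before any router, queue, or server has done anything, which is the budget TFO's saved round trip comes out of.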
Once you get out of the tail links, the lowest speed you'll encounter on an ISP backbone is 1Gbps. You've got to have well more than 1,000 router hops before you'll accumulate 1ms of ingress play-in, cell switching, and egress playout. The other devices which can add significant latency are the middleboxes at the other end of the link: firewalls, load sharers and so on.
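To make the per-hop figure concrete, here's a sketch of the serialization delay a store-and-forward hop adds (paid once on ingress play-in and once on egress playout). The 64-byte minimum-size packet is my assumption for how the hop-count figure above works out:

```python
# Illustrative per-hop serialization delay: the time to clock a packet
# onto a link at a given speed. A store-and-forward hop pays this twice,
# once for ingress play-in and once for egress playout.
def serialization_us(packet_bytes: int, link_bps: float) -> float:
    """Microseconds to transmit packet_bytes at link_bps."""
    return packet_bytes * 8 / link_bps * 1e6

# A minimum-size 64-byte packet on a 1 Gbps backbone link:
per_side = serialization_us(64, 1e9)
print(round(2 * per_side, 3))  # ~1 microsecond per hop, in and out
```

At roughly a microsecond per hop for small packets, it does take on the order of a thousand hops to accumulate 1 ms; full-size 1500-byte packets serialize in about 12 µs per link, still tiny next to the propagation numbers above.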