LWN: Comments on "TCP Fast Open: expediting web services" https://lwn.net/Articles/508865/ This is a special feed containing comments posted to the individual LWN article titled "TCP Fast Open: expediting web services". en-us Thu, 09 Oct 2025 07:30:46 +0000 Thu, 09 Oct 2025 07:30:46 +0000 https://www.rssboard.org/rss-specification lwn@lwn.net Examples for the speed of light https://lwn.net/Articles/519575/ https://lwn.net/Articles/519575/ Lennie <div class="FormattedComment"> You can see the routes here:<br> <p> <a href="http://www.cablemap.info/">http://www.cablemap.info/</a><br> </div> Fri, 12 Oct 2012 12:04:19 +0000 Examples for the speed of light https://lwn.net/Articles/518298/ https://lwn.net/Articles/518298/ ncm <div class="FormattedComment"> Do fibers follow great-circle routes already? I expected that to take longer to happen.<br> </div> Tue, 02 Oct 2012 08:17:41 +0000 Examples for the speed of light https://lwn.net/Articles/510519/ https://lwn.net/Articles/510519/ paulproteus <div class="FormattedComment"> Interesting point!<br> <p> <a href="http://blog.advaoptical.com/speed-light-fiber-first-building-block-low-latency-trading-infrastructure/">http://blog.advaoptical.com/speed-light-fiber-first-build...</a> suggests that one should expect approximately a 33% increase in these times for the fiber optics.<br> <p> (Additionally, they should be doubled due to round-trip time, as per my follow-up comment.)<br> </div> Fri, 10 Aug 2012 01:30:47 +0000 Examples for the speed of light (Correction) https://lwn.net/Articles/510518/ https://lwn.net/Articles/510518/ paulproteus <div class="FormattedComment"> One correction to the above note: One should *double* these numbers for *round*-trip time. 
These are one-way times.<br> </div> Fri, 10 Aug 2012 01:28:44 +0000 Examples for the speed of light https://lwn.net/Articles/510517/ https://lwn.net/Articles/510517/ dlang <div class="FormattedComment"> it's actually a bit longer than these times as the speed of light you are listing is the speed of light through a vacuum, going through fiber is noticeably slower.<br> </div> Fri, 10 Aug 2012 01:27:11 +0000 Examples for the speed of light https://lwn.net/Articles/510509/ https://lwn.net/Articles/510509/ paulproteus The article says: <blockquote>At intercontinental distances, this physical limitation means that—leaving aside router latencies—transmission through the medium alone requires several milliseconds</blockquote> To be more concrete: <ul><li>1 mile is 5 microseconds</li> <li>6 milliseconds from New York to Florida (1152 miles)</li> <li>15 milliseconds from New York to San Francisco (2917 miles)</li> <li>36 milliseconds from New York to Tokyo (6735 miles)</li> </ul> Source for time conversion: GNU Units. <blockquote>You have: 1 mile<br> You want: light second<br> * 5.3681938e-06<br> / 186282.4 </blockquote> Fri, 10 Aug 2012 00:28:22 +0000 TCP Fast Open: expediting web services https://lwn.net/Articles/510397/ https://lwn.net/Articles/510397/ jthill I think just allowing a server to detect whether the connection is still in fast-open state should be enough. If the server sees a non-idempotent request it can make sure the initial <tt>ACK</tt> has arrived before proceeding. Make it detectable by say making <tt>{aio_,}fsync</tt> not complete until it's arrived, or fabricating a redundant <tt>EPOLLOUT</tt> edge. 
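The distance figures in the speed-of-light thread above can be cross-checked with a few lines of Python. These are one-way times at the vacuum speed of light; as the sibling comments note, fiber adds roughly a third and round trips double the figures.

```python
# Cross-check of the mileage-to-milliseconds figures quoted above.
# One-way times at the speed of light in vacuum; multiply by ~1.5 for
# fiber and by 2 for a round trip, per the comments in this thread.
C_MILES_PER_SEC = 186_282.4  # speed of light in vacuum, miles per second

def one_way_ms(miles):
    """Milliseconds for light in vacuum to travel `miles` miles."""
    return miles / C_MILES_PER_SEC * 1000.0

for route, miles in [("New York-Florida", 1152),
                     ("New York-San Francisco", 2917),
                     ("New York-Tokyo", 6735)]:
    print(f"{route}: {one_way_ms(miles):.1f} ms one-way")
```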
Thu, 09 Aug 2012 13:48:32 +0000 TCP Fast Open: expediting web services https://lwn.net/Articles/510347/ https://lwn.net/Articles/510347/ cvrebert <a href="http://www.imdb.com/title/tt0584424/quotes">You're gonna have to wait until 2208 for that.</a> Thu, 09 Aug 2012 06:11:06 +0000 TCP Fast Open: expediting web services https://lwn.net/Articles/510199/ https://lwn.net/Articles/510199/ kevinm <div class="FormattedComment"> ...and you could request fastopen by setting TCP_CORK before connect(); write(); then uncorking.<br> </div> Wed, 08 Aug 2012 14:45:06 +0000 TCP Fast Open: expediting web services https://lwn.net/Articles/510133/ https://lwn.net/Articles/510133/ johill <div class="FormattedComment"> I don't think it would affect the effectiveness a lot -- presumably the backend server and LB are close by each other, so the latency between them matters less than the latency between the LB &amp; client.<br> </div> Wed, 08 Aug 2012 08:02:23 +0000 TCP Fast Open: expediting web services https://lwn.net/Articles/510098/ https://lwn.net/Articles/510098/ raven667 <div class="FormattedComment"> Doing so would presumably add latency and reduce the effectiveness of fast open. It would make more sense for the LB to just do NAT rather than proxying the connection. Is there any special handling in conntrack needed for this? <br> </div> Wed, 08 Aug 2012 03:41:30 +0000 TCP Fast Open: expediting web services https://lwn.net/Articles/510076/ https://lwn.net/Articles/510076/ butlerm <div class="FormattedComment"> That depends on the design of the load balancer. A well designed one should have no problem converting a TCP Fast Open connection between the client and the LB and a standard connection between the LB and the servers behind it. Assuming the servers and the load balancer(s) are colocated, there should be very little penalty for doing so. 
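For readers wondering about the concrete client-side API: what eventually shipped in Linux (3.6 and later for clients) is a sendto() flag rather than the TCP_CORK trick kevinm suggests above. A minimal sketch in Python, assuming a Linux kernel with client TFO enabled; the host, port, and payload are illustrative only:

```python
import socket

def fastopen_request(host, port, payload):
    """Send `payload` in the SYN when a TFO cookie is cached.

    sendto() with MSG_FASTOPEN performs the connect() implicitly; on
    first contact with a server (no cookie cached yet) the kernel
    silently falls back to a regular three-way handshake and sends the
    data once it completes, so the call is safe either way.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.sendto(payload, socket.MSG_FASTOPEN, (host, port))
    return sock
```

A load balancer of the kind butlerm describes would terminate TFO like this on the client side while opening ordinary connections to its backends.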
<br> </div> Wed, 08 Aug 2012 00:05:52 +0000 TCP Fast Open: expediting web services https://lwn.net/Articles/509772/ https://lwn.net/Articles/509772/ drdabbles <div class="FormattedComment"> How do you perceive this working with load balancing hardware in the future? Vendors will have to get behind TFO and patch firmwares, but hardware behind the LBs will also need to be patched. Do you expect a chicken-and-egg situation here, or do you know something that perhaps you aren't sharing or can't share with us?<br> </div> Sat, 04 Aug 2012 23:16:55 +0000 TCP Fast Open: expediting web services https://lwn.net/Articles/509770/ https://lwn.net/Articles/509770/ drdabbles <div class="FormattedComment"> This may be true in theory (I'm not sure), but in practice it's completely wrong. Bandwidth tells you only how much data can be passed through a link in a given time period. Saying a link is capable of 1Gbit/sec means that if you consume every possible bit for every possible cycle for 1 second, you'll have transferred 1Gbit of data over the wire. Many links have a frames-per-second limit, so if your frames aren't completely full, you've wasted bandwidth and decreased the utilization of the link.<br> <p> Router latency is caused by many factors. Some are router CPU shortages, memory resource shortages, the time it takes to transfer a frame from "the wire" to the internal hardware and vice versa, how quickly a packet can be processed, whether packet inspection is happening, etc. This, relatively speaking, can be a very long time. Typically it's microseconds, but certainly not always. Either way, it represents a minimum time delay with only a practical ceiling (IP timeout / retransmission). So increasing bandwidth to an already slow router only makes the problem worse.<br> <p> Also, if you have a link that passes through 1000 routers, it's bound to hit a dozen that are oversubscribed and performing horribly.
This is especially true as your distance from the "core" increases and your distance to the "edge" decreases. This is why major datacenters are next to or house the major peering points of the Internet.<br> </div> Sat, 04 Aug 2012 23:12:41 +0000 TCP Fast Open: expediting web services https://lwn.net/Articles/509739/ https://lwn.net/Articles/509739/ Jannes <div class="FormattedComment"> actually TFO should be a huge improvement in a bufferbloated situation. If there is 1 second of bufferbloat, then 'Saving one RTT' means saving 1 second.<br> <p> Not saying it's a solution to bufferbloat of course.<br> </div> Sat, 04 Aug 2012 13:34:46 +0000 TCP Fast Open: expediting web services https://lwn.net/Articles/509732/ https://lwn.net/Articles/509732/ nix <div class="FormattedComment"> Warning: work in this area can be dangerous. See e.g. the series, ahem I mean feasibility study, bookended by &lt;<a href="http://www.amazon.com/The-Collapsium-Wil-McCarthy/dp/055358443X">http://www.amazon.com/The-Collapsium-Wil-McCarthy/dp/0553...</a>&gt;, &lt;<a href="http://www.amazon.com/To-Crush-Moon-Wil-McCarthy/dp/055358717X">http://www.amazon.com/To-Crush-Moon-Wil-McCarthy/dp/05535...</a>&gt;.<br> <p> (Though admittedly the Queendom did go to rather more extreme lengths to increase the speed of light than mere vacuum tubes, and displayed a cavalier degree of carelessness, indeed insouciance, regarding the fact that keeping trillions of black holes in your solar system in very large arrays moving at well below orbital velocity is insanely dangerous.)<br> <p> </div> Sat, 04 Aug 2012 12:20:34 +0000 TCP Fast Open: expediting web services https://lwn.net/Articles/509656/ https://lwn.net/Articles/509656/ dlang <div class="FormattedComment"> That depends on how you define REST<br> <p> many groups make everything a GET request, especially for APIs that are not expected to be used from browsers, but rather called from other applications, especially in B2B type situations.<br> </div> Fri, 03 Aug 
2012 21:16:40 +0000 TCP Fast Open: expediting web services https://lwn.net/Articles/509578/ https://lwn.net/Articles/509578/ epa <div class="FormattedComment"> I thought that part of making a RESTful web service was deciding which operations are read-only and which affect the state; and splitting them into GET and POST requests accordingly.<br> </div> Fri, 03 Aug 2012 15:35:19 +0000 TCP Fast Open: expediting web services https://lwn.net/Articles/509569/ https://lwn.net/Articles/509569/ cesarb <div class="FormattedComment"> <font class="QuotedText">&gt; Surely you could get around that by including the server's boot time in the MAC key used to generate the cookie?</font><br> <p> A better option would be /proc/sys/kernel/random/boot_id (see <a href="http://0pointer.de/blog/projects/ids.html">http://0pointer.de/blog/projects/ids.html</a>).<br> </div> Fri, 03 Aug 2012 14:04:48 +0000 TCP Fast Open: expediting web services https://lwn.net/Articles/509499/ https://lwn.net/Articles/509499/ ras <div class="FormattedComment"> <font class="QuotedText">&gt; Being unaware of the reboot, the client will time out and retransmit SYNs. </font><br> <p> For that to happen the server must accept the cookie. Surely you could get around that by including the server's boot time in the MAC key used to generate the cookie?<br> </div> Fri, 03 Aug 2012 04:36:51 +0000 TCP Fast Open: expediting web services https://lwn.net/Articles/509481/ https://lwn.net/Articles/509481/ butlerm <div class="FormattedComment"> Yes, both client and server need support for TCP Fast Open. That support amounts to a change to both the TCP stack and the TCP socket API, both of which are implemented by the kernel.
Without kernel support (or a user space TCP implementation and the privileges necessary to use it) neither endpoint can make use of TFO.<br> </div> Fri, 03 Aug 2012 02:22:31 +0000 TCP Fast Open: expediting web services https://lwn.net/Articles/509480/ https://lwn.net/Articles/509480/ butlerm <div class="FormattedComment"> If I am not mistaken, most modern TCP stacks do not hold data sent with a SYN, and do not send it either. For most applications, there would be relatively little advantage if they did. Requests usually fit in an MTU (or MSS) worth of data, and in the absence of something like TCP Fast Open, the target endpoint has to wait for an acknowledgement anyway, and that acknowledgement can carry the full sub-MSS-sized request without a problem. Holding the data, on the other hand, simply makes it easier to conduct SYN attacks.<br> </div> Fri, 03 Aug 2012 02:17:19 +0000 TCP Fast Open: expediting web services https://lwn.net/Articles/509472/ https://lwn.net/Articles/509472/ butlerm <div class="FormattedComment"> The normal technique is to mark all non-idempotent requests with a unique id, and keep track of which ones have already been processed. That is practically the only way to get once-only execution semantics.<br> <p> This could presumably be done (up to a point) at the transport layer with TCP Fast Open by having the initiating endpoint assign a unique identifier to a given user space connection request, attaching the identifier as a TCP option, caching the identifier for some reasonable period on the target endpoint, and throwing away SYN packets with connection identifiers that have already been satisfied.
The more general way to do that, of course, is to do it at the application layer, in addition to anything the transport layer may or may not do.<br> </div> Fri, 03 Aug 2012 02:02:51 +0000 TCP Fast Open: expediting web services https://lwn.net/Articles/509460/ https://lwn.net/Articles/509460/ Lennie <div class="FormattedComment"> So when will we see companies building vacuum tubes used to speed up the light when crossing large parts of land, or maybe the Atlantic?<br> <p> That is the thing I care about. ;-)<br> <p> Is anyone doing research on that yet?<br> </div> Thu, 02 Aug 2012 23:44:43 +0000 TCP Fast Open: expediting web services https://lwn.net/Articles/509456/ https://lwn.net/Articles/509456/ Lennie <div class="FormattedComment"> For HTTP the fix is a SPDY-/HTTP/2-like protocol; the performance of HTTP is currently limited because it does not do multiplexing.<br> <p> SPDY/2 is currently supported by Firefox and Chrome. There is an Apache module from Google, a beta patch from the nginx developers, a node.js module, a Java server implementation, and some beta C client and server libraries and implementations.<br> </div> Thu, 02 Aug 2012 23:28:17 +0000 TCP Fast Open: expediting web services https://lwn.net/Articles/509451/ https://lwn.net/Articles/509451/ nix <div class="FormattedComment"> Quite.
The set of errors in the manpages (and, indeed, in POSIX) is not exhaustive -- the kernel is allowed to return other errors, though it is perhaps unwise to expect callers to anticipate errors outside the documented set.<br> </div> Thu, 02 Aug 2012 22:56:27 +0000 TCP Fast Open: expediting web services https://lwn.net/Articles/509421/ https://lwn.net/Articles/509421/ dlang <div class="FormattedComment"> You are making the assumption that the RESTful web service is read-only.<br> <p> How can you do a RESTful web service where the client is sending information to the server without causing these sorts of problems?<br> </div> Thu, 02 Aug 2012 20:42:08 +0000 TCP Fast Open: expediting web services https://lwn.net/Articles/509403/ https://lwn.net/Articles/509403/ jengelh <div class="FormattedComment"> <font class="QuotedText">&gt;Google [...] to implement TCP Fast Open</font><br> <p> Great, now GTFO is going to get a new subentry in the Urban Dictionary.<br> </div> Thu, 02 Aug 2012 19:06:57 +0000 TCP Fast Open: expediting web services https://lwn.net/Articles/509395/ https://lwn.net/Articles/509395/ pr1268 <p>Perhaps I'm not totally understanding what's going on here (high-level): TFO seems more like a user-space fix (i.e. Apache HTTPD, Microsoft IIS, etc.). Are changes to the system calls <tt>socket(2)</tt>, <tt>connect(2)</tt>, etc. the reason this article is on the LWN Kernel Development page?</p> <p>Also, do I assume correctly that both client and server have to support TFO to realize the speed-up mentioned in the article?</p> <p>I certainly don't mean to criticize this article (or its placement here on LWN), just curious instead. Great article, thanks!</p> Thu, 02 Aug 2012 18:46:39 +0000 TCP Fast Open: expediting web services https://lwn.net/Articles/509376/ https://lwn.net/Articles/509376/ ycheng-lwn <div class="FormattedComment"> Our draft creates the confusion that HTTP on TFO will break if the request is not idempotent, because we use the phrase "idempotent transactions".
But today the client may send a non-idempotent request twice already with standard TCP. For example, the link may fail after the server receives a non-idempotent request. The client will then retry the request on another connection, since the original was never acknowledged.<br> <p> TFO makes such a case possible in the SYN stage: the server reboots between when it receives the request in SYN data and when it sends the SYN-ACK. Being unaware of the reboot, the client will time out and retransmit SYNs. If the server comes back and accepts the SYN, the client will repeat the request. But IMO the risk is minimal, especially if the server defers enabling TFO until a reasonable connection timeout after reboot, e.g., 5 min.<br> <p> Cheers,<br> <p> -yuchung (tfo developer)<br> </div> Thu, 02 Aug 2012 17:33:42 +0000 TCP Fast Open: expediting web services https://lwn.net/Articles/509367/ https://lwn.net/Articles/509367/ tialaramex <div class="FormattedComment"> Maybe I don't understand. Internally we have a great many RESTful web services which all work roughly like this:<br> <p> The client asks if this particular combination of data (provided as GET query parameters) is found in the database overseen by that web service. The server looks in its database (e.g. maybe it's the collection of all voter registration records for a particular country) and if there is a matching record it replies saying what was found and where.<br> <p> You can run that same request again and get the same exact answer, and running it many times or not at all changes nothing‡, so that seems to meet your requirement entirely.<br> <p> ‡ In practice some of the services do accounting: they increment a counter somewhere for every query run, and then we use that counter to determine the payment to a third party for the use of their data.
But this is no greater deviation from the concept than the usual practice of logging GET requests, and anyway most of the services don't do that.<br> </div> Thu, 02 Aug 2012 16:32:42 +0000 TCP Fast Open: expediting web services https://lwn.net/Articles/509366/ https://lwn.net/Articles/509366/ man_ls That is not required in the RFC, and I am not sure that RESTful, stateless web services would even be possible. After all, web services are supposed to change state in the server; otherwise what is the point? Thu, 02 Aug 2012 16:11:02 +0000 TCP Fast Open: expediting web services https://lwn.net/Articles/509361/ https://lwn.net/Articles/509361/ gdt <p>"Router latencies" isn't really an issue, as they are easily enough solved by increasing the bandwidth (which reduces the time to receive and transmit a packet). (And yeah, I'm avoiding the "bufferbloat" overprovisioning of buffers at the network edge here, because when that exists TFO is not much help -- saving one RTT when you have multiple RTTs in the queue ahead of you isn't a huge win.)</p> <p>The speed of light in optical fiber is the major contributor to latency. The speed of light in fiber is roughly 150 km per ms; this is much slower than the speed of light in a vacuum. The speed of light in a fiber can be improved, but at the cost of narrowing bandwidth. This tradeoff isn't available to users, but is determined during ITU standards-making. Moreover the tradeoff isn't huge, on the order of 5% of latency. But the tradeoff does have a major effect on the cost of Forward Error Correction ASICs.</p> <p>Once you get out of the tail links, the lowest speed you'll encounter on an ISP backbone is 1Gbps. You've got to have well more than 1,000 router hops before you'll get 1ms of ingress playout, cell switching and egress playout.
The other devices which can add significant latency are the middleboxes at the other end of the link: firewalls, load sharers and so on.</p> Thu, 02 Aug 2012 15:57:39 +0000 TCP Fast Open: expediting web services https://lwn.net/Articles/509360/ https://lwn.net/Articles/509360/ epa <div class="FormattedComment"> Yes, I was confusing idempotent (makes no difference whether requested once, or many times) with stateless (makes no difference whether requested zero, one, or many times). Ideally GET requests would be not merely idempotent but stateless, which is a stronger property. But that is not needed for TFO.<br> </div> Thu, 02 Aug 2012 15:19:41 +0000 TCP Fast Open: expediting web services https://lwn.net/Articles/509354/ https://lwn.net/Articles/509354/ alankila <div class="FormattedComment"> Although to be fair, multiple GETs that all attempt to delete the same user aren't a problem here. So maybe the latter request crashes if the website developer wasn't careful enough, but the delete itself was idempotent.<br> <p> I guess cautious people can't turn TFO on unless they validate the software they run, but now there is going to be a good reason why you want to ensure that software is idempotent-GET safe. I imagine that the vast majority of software is, actually.<br> </div> Thu, 02 Aug 2012 14:04:19 +0000 TCP Fast Open: expediting web services https://lwn.net/Articles/509344/ https://lwn.net/Articles/509344/ epa <div class="FormattedComment"> Long ago I worked at ArsDigita where the rule was to avoid form submit buttons as much as possible. So the user management page on a site would have 'delete user' not as a POST form submission, but simply a hyperlink. This was held to make the website look cleaner and feel faster. Then a customer using a 'web accelerator' which eagerly follows links ended up trashing their whole site.
The programmer's fault, of course - GET requests should never be used for strongly state-changing operations like deleting data.<br> </div> Thu, 02 Aug 2012 13:16:12 +0000 TCP Fast Open: expediting web services https://lwn.net/Articles/509333/ https://lwn.net/Articles/509333/ colo <div class="FormattedComment"> Ah, well yes, in an ideal world... :)<br> <p> I've seen enough GET-requests with at times far-reaching side effects in the wild that I'm not convinced this will (or should) see widespread adoption, at least not for "ordinary" HTTP servers that aren't serving up static content only or something like that.<br> </div> Thu, 02 Aug 2012 12:21:07 +0000 TCP Fast Open: expediting web services https://lwn.net/Articles/509331/ https://lwn.net/Articles/509331/ iq-0 <div class="FormattedComment"> Many 'GET' operations are not necessarily idempotent. The RFC's language is 'SHOULD NOT', which is about the mildest way requirements are described.<br> <p> The problem is that often different applications run in a big webserver (especially true for massive virtual hosting setups) and that "partially enabling TFO" is not really an option.<br> <p> The other way round would be for the application to signal that the request is not allowed using TFO, but that would require a retry using non-TFO which is not specced and introduces more latency than is gained using TFO.<br> <p> The only real solution that is safe would be to invent (yet another) HTTP header that signifies which methods for which paths under the current vhost may be done using TFO initialized connections.<br> For systems that support TFO for all requests (because they have higher-level guards against duplicate requests) one could simply hint '* /' or something. 
Only the first connection to such a site must in that case always be made using non-TFO requests.<br> </div> Thu, 02 Aug 2012 12:16:44 +0000 TCP Fast Open: expediting web services https://lwn.net/Articles/509328/ https://lwn.net/Articles/509328/ Los__D <div class="FormattedComment"> But if it is already possible to send data with a normal SYN, and that data gets delivered to the application later, what do you gain from throwing it away when you use TCP Fast Open? If the feature can be used for SYN attacks, they would just do it without Fast Open.<br> <p> I'm probably missing something, but it doesn't really make sense to me.<br> </div> Thu, 02 Aug 2012 11:28:25 +0000 TCP Fast Open: expediting web services https://lwn.net/Articles/509326/ https://lwn.net/Articles/509326/ dan_a <div class="FormattedComment"> I would think that the problem is the resources in the OS which you could consume by doing this - especially since the handshake has already failed one trustworthiness test.<br> </div> Thu, 02 Aug 2012 10:59:52 +0000
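Picking up pr1268's question from earlier in the thread: the server-side kernel interface is similarly small, a single setsockopt() before listen(). A hedged Python sketch, assuming Linux 3.7 or later; the option value bounds the number of pending Fast Open connections, which is the OS resource limit dan_a alludes to:

```python
import socket

# TCP_FASTOPEN is option number 23 on Linux; older Python builds may
# not export the constant, so fall back to the raw value.
TCP_FASTOPEN = getattr(socket, "TCP_FASTOPEN", 23)

def fastopen_listener(port, qlen=16):
    """Listening socket that will accept data-carrying SYNs.

    `qlen` bounds how many Fast Open connections may sit in the queue
    before being accept()ed; SYNs with data beyond that are downgraded
    to the ordinary handshake, limiting what a SYN-with-data flood can
    pin in kernel memory.
    """
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.setsockopt(socket.IPPROTO_TCP, TCP_FASTOPEN, qlen)
    srv.bind(("", port))
    srv.listen(16)
    return srv
```

Whether data-carrying SYNs are honored also depends on the net.ipv4.tcp_fastopen sysctl; the setsockopt() itself succeeds either way, and clients without cookies simply get the ordinary handshake.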