TCP Fast Open: expediting web services

Posted Aug 4, 2012 23:16 UTC (Sat) by drdabbles (guest, #48755)
In reply to: TCP Fast Open: expediting web services by ycheng-lwn
Parent article: TCP Fast Open: expediting web services

How do you perceive this working with load balancing hardware in the future? Vendors will have to get behind TFO and patch their firmware, but the hardware behind the LBs will also need to be patched. Do you expect a chicken-and-egg situation here, or do you know something that perhaps you aren't sharing or can't share with us?


TCP Fast Open: expediting web services

Posted Aug 8, 2012 0:05 UTC (Wed) by butlerm (guest, #13312)

That depends on the design of the load balancer. A well-designed one should have no problem bridging a TCP Fast Open connection between the client and the LB with a standard connection between the LB and the servers behind it. Assuming the servers and the load balancer(s) are colocated, there should be very little penalty for doing so.
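
For concreteness, here is a minimal sketch (not from the article; the backend address is a placeholder and the response path back to the client is omitted) of what that bridging might look like on a Linux front end: TFO is enabled on the listening socket with the TCP_FASTOPEN socket option so the client's request can ride in the SYN, and the request is then forwarded over an ordinary connect() to a colocated backend. Depending on the kernel version, the net.ipv4.tcp_fastopen sysctl may also need its server-side bit set.

    /* Minimal sketch of a TFO-terminating front end, assuming Linux.
     * The backend address 10.0.0.10:8080 is a placeholder, and the
     * response path back to the client is omitted for brevity. */
    #include <stdio.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    #ifndef TCP_FASTOPEN
    #define TCP_FASTOPEN 23          /* for older userspace headers */
    #endif

    int main(void)
    {
        int lfd = socket(AF_INET, SOCK_STREAM, 0);
        if (lfd < 0) { perror("socket"); return 1; }

        int one = 1;
        setsockopt(lfd, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one));

        /* Enable TFO on the listener; the value is the maximum number
         * of pending TFO requests the kernel will queue. */
        int qlen = 128;
        if (setsockopt(lfd, IPPROTO_TCP, TCP_FASTOPEN, &qlen, sizeof(qlen)) < 0)
            perror("TCP_FASTOPEN");

        struct sockaddr_in addr = { 0 };
        addr.sin_family = AF_INET;
        addr.sin_port = htons(80);
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        if (bind(lfd, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||
            listen(lfd, 128) < 0) {
            perror("bind/listen");
            return 1;
        }

        for (;;) {
            /* With TFO, the client's request may already be queued on the
             * socket when accept() returns, so it is readable immediately. */
            int cfd = accept(lfd, NULL, NULL);
            if (cfd < 0)
                continue;

            /* Ordinary (non-TFO) connection to the colocated backend. */
            int bfd = socket(AF_INET, SOCK_STREAM, 0);
            struct sockaddr_in backend = { 0 };
            backend.sin_family = AF_INET;
            backend.sin_port = htons(8080);
            inet_pton(AF_INET, "10.0.0.10", &backend.sin_addr);

            if (bfd >= 0 &&
                connect(bfd, (struct sockaddr *)&backend, sizeof(backend)) == 0) {
                char buf[4096];
                ssize_t n = read(cfd, buf, sizeof(buf));
                if (n > 0)
                    write(bfd, buf, n);  /* forward the early data */
            }
            if (bfd >= 0)
                close(bfd);
            close(cfd);
        }
    }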

TCP Fast Open: expediting web services

Posted Aug 8, 2012 3:41 UTC (Wed) by raven667 (subscriber, #5198)

Doing so would presumably add latency and reduce the effectiveness of Fast Open. It would make more sense for the LB to just do NAT rather than proxying the connection. Is there any special handling needed in conntrack for this?
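
To illustrate the NAT variant: if the LB only rewrites addresses, the TFO cookie exchange happens end to end between the client and the backend, so the client side looks exactly as it would without a load balancer. Below is a minimal sketch, assuming a Linux client and a placeholder server address; depending on the kernel version, the client bit of the net.ipv4.tcp_fastopen sysctl may also need to be enabled.

    /* Minimal sketch of a TFO client, assuming Linux; the server address
     * 192.0.2.1:80 and the request are placeholders.  If the LB merely
     * NATs the connection, this is identical to talking to the backend
     * directly, since the TFO cookie is negotiated end to end. */
    #include <stdio.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    #ifndef MSG_FASTOPEN
    #define MSG_FASTOPEN 0x20000000  /* from linux/socket.h */
    #endif

    int main(void)
    {
        const char req[] = "GET / HTTP/1.0\r\n\r\n";

        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        struct sockaddr_in srv = { 0 };
        srv.sin_family = AF_INET;
        srv.sin_port = htons(80);
        inet_pton(AF_INET, "192.0.2.1", &srv.sin_addr);

        /* sendto() with MSG_FASTOPEN performs the connect; with a cached
         * cookie the request goes out in the SYN, otherwise the kernel
         * falls back to a regular three-way handshake. */
        if (sendto(fd, req, sizeof(req) - 1, MSG_FASTOPEN,
                   (struct sockaddr *)&srv, sizeof(srv)) < 0) {
            perror("sendto(MSG_FASTOPEN)");
            return 1;
        }

        char buf[4096];
        ssize_t n;
        while ((n = read(fd, buf, sizeof(buf))) > 0)
            fwrite(buf, 1, n, stdout);

        close(fd);
        return 0;
    }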

TCP Fast Open: expediting web services

Posted Aug 8, 2012 8:02 UTC (Wed) by johill (subscriber, #25196)

I don't think it would affect the effectiveness much; presumably the backend server and the LB are close to each other, so the latency between them matters less than the latency between the LB and the client.

