LWN: Comments on "Upstreaming multipath TCP"
https://lwn.net/Articles/800501/
This is a special feed containing comments posted to the individual LWN article titled "Upstreaming multipath TCP".

Upstreaming multipath TCP (https://lwn.net/Articles/801420/)
Posted by kevincox on Mon, 07 Oct 2019 10:22:39 +0000

TCP is very much alive.

QUIC is a very cool protocol and effectively supports multipath out of the box, since IP address and port aren't used to identify connections. In fact, roaming works a lot better than with MPTCP, because you don't need to announce your new IP and port before the old connection breaks; you can migrate between two WiFi access points (or similar) even when there is never a moment in which you are connected to both networks.

However, TCP is far from dead, and MPTCP remains very useful until QUIC displaces it (if that ever happens).

Upstreaming multipath TCP (https://lwn.net/Articles/801349/)
Posted by flussence on Sat, 05 Oct 2019 03:17:50 +0000

Software that works tends to outlive CADT fads.

Upstreaming multipath TCP (https://lwn.net/Articles/801286/)
Posted by foom on Fri, 04 Oct 2019 13:07:34 +0000

And you can still write new software in COBOL, too. That doesn't mean it's not dead.

Upstreaming multipath TCP (https://lwn.net/Articles/801261/)
Posted by flussence on Thu, 03 Oct 2019 23:03:46 +0000

TCP seems pretty alive to me, as I was able to read your post sent via it.

Upstreaming multipath TCP (https://lwn.net/Articles/801257/)
Posted by rand0m$tring on Thu, 03 Oct 2019 20:15:01 +0000

TCP is dead. Non-encrypted connections are dead (except perhaps within a data center).

I feel the best course of action must be to fold all these wonderful efforts into QUIC. No?

Upstreaming multipath TCP (https://lwn.net/Articles/801250/)
Posted by dps on Thu, 03 Oct 2019 18:25:32 +0000

IMHO the simplest way to make regular applications use multipath TCP without recompiling them might be an LD_PRELOAD object which automagically replaces regular TCP with the multipath version. An object like this could be smart enough not to use multipath TCP when it is inappropriate, for example for connections to the local host. It would not require any additional kernel support and would work for almost all applications.

Some very high performance networking products ship LD_PRELOAD objects which make regular applications exploit their hardware, often entirely in user space. Applications include stock trading, where the fact that c is finite matters. That hardware is seriously expensive, because high-stakes bond trading and supercomputers can both justify expensive hardware.
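As a rough illustration of the LD_PRELOAD idea above, here is a minimal sketch of such a shim, assuming the upstream socket API in which an application asks for MPTCP by passing IPPROTO_MPTCP to socket(). The file name and the fallback policy are illustrative, not taken from any existing tool.

/* mptcp_preload.c: a minimal sketch of the LD_PRELOAD shim described in the
 * comment above, assuming the upstream API where MPTCP is requested with
 * IPPROTO_MPTCP.
 * Build: gcc -shared -fPIC -o mptcp_preload.so mptcp_preload.c -ldl
 * Use:   LD_PRELOAD=./mptcp_preload.so some_tcp_program
 */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <netinet/in.h>
#include <sys/socket.h>

#ifndef IPPROTO_MPTCP
#define IPPROTO_MPTCP 262   /* protocol number used by the upstream implementation */
#endif

int socket(int domain, int type, int protocol)
{
    /* Look up the C library's real socket() once. */
    static int (*real_socket)(int, int, int);
    if (!real_socket)
        real_socket = (int (*)(int, int, int))dlsym(RTLD_NEXT, "socket");

    /* Only rewrite plain TCP stream sockets; leave everything else alone. */
    if ((domain == AF_INET || domain == AF_INET6) &&
        (type & ~(SOCK_NONBLOCK | SOCK_CLOEXEC)) == SOCK_STREAM &&
        (protocol == 0 || protocol == IPPROTO_TCP)) {
        int fd = real_socket(domain, type, IPPROTO_MPTCP);
        if (fd >= 0)
            return fd;
        /* Kernel without MPTCP: fall through to a regular TCP socket. */
    }
    return real_socket(domain, type, protocol);
}

Deciding per destination (for example skipping loopback, as suggested above) would also need a connect() wrapper, since the destination is not yet known at socket() time.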
Upstreaming multipath TCP (https://lwn.net/Articles/800826/)
Posted by Herve5 on Sun, 29 Sep 2019 13:33:11 +0000

Now I dream of the day someone will add ping tunnels as an extra alternate path, keeping us connected *forever*, even if s o m e t i m e s _ v e r y _ s l o w . . .

Upstreaming multipath TCP (https://lwn.net/Articles/800727/)
Posted by obonaventure on Fri, 27 Sep 2019 20:07:40 +0000

You are right, Multipath TCP can be tuned to better support load balancers and anycast. The trick for anycast is very simple: the client sends a SYN to the anycast address; it reaches one of the servers, which replies and returns its regular IP address using the ADD_ADDR option of Multipath TCP. The client can then either create a new subflow towards the server's real IP address, or wait until a routing change breaks the initial subflow.

This technique was proposed and evaluated in "Making Multipath TCP Friendlier to Load Balancers and Anycast"; see https://inl.info.ucl.ac.be/publications/making-multipath-tcp-friendlier-load-balancers-and-anycast.html
It also works for load balancers and has been included in RFC 6824bis.

Upstreaming multipath TCP (https://lwn.net/Articles/800711/)
Posted by kevincox on Fri, 27 Sep 2019 17:47:45 +0000

You can definitely do it this way. DNS is commonly used as that anycast service. However, there are a number of reasons this isn't quite ideal, including staleness and added latency.

Doing it via MPTCP means that you don't add any latency, in exchange for no guarantee that the "connection" doesn't get migrated to a different target before the "handover" to the server's IP.

Upstreaming multipath TCP (https://lwn.net/Articles/800708/)
Posted by raven667 on Fri, 27 Sep 2019 17:42:41 +0000

I don't know if this would make sense to implement in the OS kernel, but you could build this exact behavior into an application by having an anycast service that just returns the IP (and port?) you want the client to load-balance to, followed by an MPTCP connection to that target. One could probably turn that into a client library and server daemon so it would be easy to integrate into any software that wanted to behave this way, but the OS already provides all the primitives necessary to make this work, and there probably isn't any benefit in abstracting it away into the kernel as opposed to having userspace control all the knobs and build on top.

Upstreaming multipath TCP (https://lwn.net/Articles/800680/)
Posted by kevincox on Fri, 27 Sep 2019 07:22:04 +0000

I wonder whether multipath TCP could improve using TCP with an anycast address. Right now the problem is that if different packets of a TCP stream hit different servers sharing an IP, the connection breaks. This is currently solved with fancy routing to create session stickiness.

I imagine that with multipath TCP it would be possible to accept the connection on the anycast address and then quickly fail over to a subflow on an IP address that is specific to the host the packets landed on.

It probably doesn't completely remove the need for sticky routing, but it should reduce the (already rare) case where the session gets rerouted outside of your sticky-routing domain (for example, landing on a different PoP).
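For concreteness, here is a minimal sketch of what the client side of the anycast handover described by obonaventure could look like with the upstream IPPROTO_MPTCP socket API. The address 192.0.2.1 stands in for a hypothetical anycast address; the interesting part (the server advertising its real unicast address with ADD_ADDR and an extra subflow being opened towards it) is handled by the kernel's path manager rather than by the application.

/* anycast_mptcp_client.c: a sketch of a client connecting to a hypothetical
 * anycast address over MPTCP, assuming the upstream IPPROTO_MPTCP API.
 * The ADD_ADDR handover described in the comments above happens inside the
 * kernel, so the application code is the same as for any TCP client.
 */
#include <stdio.h>
#include <unistd.h>
#include <errno.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

#ifndef IPPROTO_MPTCP
#define IPPROTO_MPTCP 262
#endif

int main(void)
{
    /* Ask for an MPTCP socket; fall back to plain TCP on older kernels. */
    int fd = socket(AF_INET, SOCK_STREAM, IPPROTO_MPTCP);
    if (fd < 0 && (errno == EPROTONOSUPPORT || errno == EINVAL))
        fd = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in anycast = {
        .sin_family = AF_INET,
        .sin_port   = htons(80),
    };
    inet_pton(AF_INET, "192.0.2.1", &anycast.sin_addr); /* hypothetical anycast address */

    /* The initial subflow goes to the anycast address; if the server
     * advertises its unicast address via ADD_ADDR, later subflows can be
     * opened towards that address and survive routing changes. */
    if (connect(fd, (struct sockaddr *)&anycast, sizeof(anycast)) < 0) {
        perror("connect");
        close(fd);
        return 1;
    }

    const char req[] = "GET / HTTP/1.0\r\n\r\n";
    write(fd, req, sizeof(req) - 1);
    /* ... read the response exactly as with a plain TCP socket ... */
    close(fd);
    return 0;
}

Whether the client actually acts on ADD_ADDR advertisements and opens extra subflows depends on the path-manager configuration on the host (for example, the limits configurable with ip mptcp on recent kernels), not on anything in the application code.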