BBR congestion control
Posted Jan 6, 2019 21:26 UTC (Sun) by bircoph (guest, #117170)
Parent article: BBR congestion control
I ran tests between two endpoints (a desktop and a laptop) connected via OpenVPN.
With CUBIC on the sending side I get ~3 MB/s initially, which steadily rises to ~5 MB/s and holds there; at 5 MB/s the laptop hits a CPU limit because of the heavyweight encryption in use.
With BBR I get an initial speed of ~5 MB/s, which steadily drops to ~400 KB/s and holds there.
So maybe BBR is good for specific datacenter setups or lab environments, but it is a failure for real-life end-user hardware on common wired or wireless networks, at least for now. Maybe there is a bug in the kernel?
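For tests like this, the algorithm can also be selected per connection rather than system-wide, via the Linux-specific TCP_CONGESTION socket option; it takes the algorithm name as a string, and the matching module (e.g. tcp_bbr) must already be loaded. A minimal sketch in C:

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <netinet/tcp.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        /* Ask for BBR on this socket only; leaves the system default alone. */
        const char *algo = "bbr";   /* or "reno", "bic", "cubic" */
        if (setsockopt(fd, IPPROTO_TCP, TCP_CONGESTION, algo, strlen(algo)) < 0)
            perror("setsockopt TCP_CONGESTION");

        /* Read back what the kernel actually selected. */
        char buf[16];
        socklen_t len = sizeof(buf);
        if (getsockopt(fd, IPPROTO_TCP, TCP_CONGESTION, buf, &len) == 0)
            printf("congestion control: %.*s\n", (int)len, buf);

        close(fd);
        return 0;
    }

Setting it per socket makes it possible to A/B-test algorithms on one machine without touching the net.ipv4.tcp_congestion_control default.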
Posted Jan 6, 2019 22:43 UTC (Sun) by zlynx (guest, #2285)
Today I needed to copy a disk image around anyway, so I tried it from my laptop over WiFi to my NAS. Both machines are running Fedora 29 with BBR enabled.
Something you should know about FQ, though: the WiFi stack has dropped it. It no longer uses qdiscs at all; queuing is internal to the WiFi subsystem and is an fq_codel variant designed for WiFi. It is supposed to work the same as FQ for BBR, however.
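For reference, the knobs being discussed are plain files under /proc, so a setup like this can be checked quickly; a small read-only C sketch using the standard Linux paths:

    #include <stdio.h>

    /* Print one sysctl-backed /proc file, if readable. */
    static void show(const char *path)
    {
        char buf[256];
        FILE *f = fopen(path, "r");
        if (f && fgets(buf, sizeof(buf), f))
            printf("%s: %s", path, buf);
        if (f)
            fclose(f);
    }

    int main(void)
    {
        show("/proc/sys/net/core/default_qdisc");
        show("/proc/sys/net/ipv4/tcp_congestion_control");
        show("/proc/sys/net/ipv4/tcp_available_congestion_control");
        return 0;
    }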
Doing my transfer using rsync with --progress, it averaged right around 20 MB/s (about 160 Mbps). The speed went up and down but never dropped through the floor as in your example.
One thing I just thought of, though: you are definitely not using OpenVPN in TCP mode, right? Running any VPN through a TCP tunnel is a horrible idea, and especially bad over WiFi: TCP's problems compound as packet loss and delay cause both TCP sessions to react, overreact, and fail miserably.
Posted Jan 7, 2019 16:36 UTC (Mon) by bircoph (guest, #117170)
I tested OpenVPN in exactly the same environment with four different TCP congestion control algorithms on the sender's side: Reno, BIC, CUBIC, and BBR. (I also tried changing the algorithm on the receiver's side, but that changes almost nothing.) The first three behave much alike, achieving 4.6-5.0 MB/s (CUBIC the best of the three), while BBR steadily drops to ~400 KB/s. That is absolutely unacceptable, and it is unlikely to be an application problem, since the same application works well with the other congestion control algorithms. So something is very wrong with BBR or its implementation.
> One thing I just thought of, though: you are definitely not using OpenVPN in TCP mode, right?
Of course I'm using TCP; it is pointless to test TCP congestion control with a UDP application. Why am I using OpenVPN over TCP? Because that's how the server is configured and I can't change it: both endpoints are under my control, but the server is not. That's the reality I have to work with.
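For completeness: flipping the sender-side default between runs, as in the test above, amounts to writing the algorithm name to one sysctl file. A minimal C equivalent of sysctl -w net.ipv4.tcp_congestion_control=... (needs root; the algorithm's module must already be loaded):

    #include <stdio.h>

    int main(int argc, char **argv)
    {
        /* Algorithm name comes from the command line, e.g. "bbr". */
        const char *algo = (argc > 1) ? argv[1] : "cubic";
        FILE *f = fopen("/proc/sys/net/ipv4/tcp_congestion_control", "w");
        if (!f) { perror("tcp_congestion_control"); return 1; }
        fprintf(f, "%s\n", algo);
        fclose(f);
        return 0;
    }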
Posted Jan 7, 2019 16:54 UTC (Mon) by zlynx (guest, #2285)
Use TCP sessions inside a UDP OpenVPN tunnel.
Proxy tunnels like SSH and SOCKS are different because they do not tunnel the actual TCP packets: the proxy terminates the incoming TCP sessions and builds new ones.
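To illustrate the difference: a proxy terminates the client's TCP session and opens a brand-new one toward the target, copying bytes (not packets) between the two, so each session runs its own congestion control independently. A toy relay sketch in C; the listen port and target address are made-up examples, and error handling is mostly omitted:

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <sys/select.h>
    #include <sys/socket.h>
    #include <unistd.h>

    /* Copy bytes between two sockets until either side closes. */
    static void relay(int a, int b)
    {
        char buf[4096];
        for (;;) {
            fd_set rd;
            FD_ZERO(&rd);
            FD_SET(a, &rd);
            FD_SET(b, &rd);
            if (select((a > b ? a : b) + 1, &rd, NULL, NULL, NULL) < 0)
                return;
            int from = FD_ISSET(a, &rd) ? a : b;
            int to   = (from == a) ? b : a;
            ssize_t n = read(from, buf, sizeof(buf));
            if (n <= 0)
                return;                     /* one side closed: stop */
            if (write(to, buf, n) != n)
                return;
        }
    }

    int main(void)
    {
        struct sockaddr_in lst = { .sin_family = AF_INET,
                                   .sin_port = htons(1080),  /* example port */
                                   .sin_addr.s_addr = htonl(INADDR_ANY) };
        struct sockaddr_in dst = { .sin_family = AF_INET,
                                   .sin_port = htons(80) };
        inet_pton(AF_INET, "192.0.2.10", &dst.sin_addr);  /* example target */

        int srv = socket(AF_INET, SOCK_STREAM, 0);
        bind(srv, (struct sockaddr *)&lst, sizeof(lst));
        listen(srv, 1);

        int client = accept(srv, NULL, NULL);        /* inner TCP ends here */
        int upstream = socket(AF_INET, SOCK_STREAM, 0);
        connect(upstream, (struct sockaddr *)&dst, sizeof(dst)); /* new TCP */

        relay(client, upstream);                     /* bytes, not packets */
        close(client); close(upstream); close(srv);
        return 0;
    }

Because the inner TCP session ends at the proxy, a loss on one leg does not trigger retransmissions on both stacked sessions, which is exactly the TCP-over-TCP failure mode described above.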