LWN: Comments on "BBR congestion control" https://lwn.net/Articles/701165/ This is a special feed containing comments posted to the individual LWN article titled "BBR congestion control". en-us Thu, 18 Sep 2025 11:42:46 +0000 Thu, 18 Sep 2025 11:42:46 +0000 https://www.rssboard.org/rss-specification lwn@lwn.net BBR congestion control https://lwn.net/Articles/776164/ https://lwn.net/Articles/776164/ zlynx <div class="FormattedComment"> Sorry for the confusion there, but TCP does not work when tunneled inside TCP. It may appear to work, but its feedback mechanisms are fundamentally broken.<br> <p> Use TCP sessions inside a UDP OpenVPN tunnel.<br> <p> Proxy tunnels like SSH and SOCKS are different because they do not tunnel the actual TCP packets. Proxies unwrap the TCP sessions and build new ones.<br> </div> Mon, 07 Jan 2019 16:54:41 +0000 BBR congestion control https://lwn.net/Articles/776163/ https://lwn.net/Articles/776163/ bircoph <div class="FormattedComment"> <font class="QuotedText">&gt; You may be having an OpenVPN problem.</font><br> <p> I tested OpenVPN in exactly the same environment with 4 different TCP congestion control algorithms on the sender's side: RENO, BIC, CUBIC and BBR. (I also tested changing the algorithm on the receiver's side, but this changes almost nothing.) The first three behave rather alike: RENO, BIC and CUBIC achieve 4.6-5.0 MB/s (with CUBIC the best of the three), but BBR shows a steady drop to ~400 KB/s. This is absolutely unacceptable. And this is unlikely to be an application problem, since the application works well with the other congestion control algorithms. So something is very wrong with BBR or its implementation.<br> <p> <font class="QuotedText">&gt; One thing I just thought of though. You are definitely not using OpenVPN in TCP mode, right?</font><br> <p> Of course I'm using TCP. It is pointless to test TCP congestion control with a UDP application. Why am I using OpenVPN over TCP? Because that's how the server is configured and I can't change that: both endpoints are under my control, but the server is not. That's the reality I have to face and work with.<br> <p> </div> Mon, 07 Jan 2019 16:36:58 +0000 BBR congestion control https://lwn.net/Articles/776091/ https://lwn.net/Articles/776091/ zlynx <div class="FormattedComment"> You may be having an OpenVPN problem. I honestly haven't tried it that way.<br> <p> But today I needed to copy a disk image around anyway, so I tried it from my laptop over WiFi to my NAS. Both are running Fedora 29, both with BBR enabled.<br> <p> Something you should know about FQ, though: the WiFi system has dropped it. It no longer uses qdiscs at all; queuing is internal to the WiFi system and is an fq_codel variant designed for WiFi. It is supposed to work the same as FQ for BBR, however.<br> <p> Doing my transfer using rsync with --progress, it averaged right around 20 MB/s, aka just over 200 Mbps. The speed went up and down but never dropped to the floor as in your example.<br> <p> One thing I just thought of, though: you are definitely not using OpenVPN in TCP mode, right? Using any VPN through a TCP tunnel is a horrible idea, and especially bad on WiFi. You get all of TCP's problems compounded, as packet losses and delay cause both TCP sessions to react, overreact, and miserably fail.<br> </div> Sun, 06 Jan 2019 22:43:48 +0000
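An aside for anyone reproducing per-algorithm comparisons like bircoph's above: on Linux the congestion control algorithm can be selected per socket via the TCP_CONGESTION socket option, rather than only system-wide. A minimal sketch, assuming a Linux host with the relevant modules (e.g. tcp_bbr) available; this is illustrative, not any commenter's actual test harness:

```python
import socket

# Python >= 3.6 exposes socket.TCP_CONGESTION on Linux; fall back to
# the raw value from <netinet/tcp.h> otherwise.
TCP_CONGESTION = getattr(socket, "TCP_CONGESTION", 13)

def current_cc(sock: socket.socket) -> str:
    """Read back the congestion control algorithm a socket is using."""
    raw = sock.getsockopt(socket.IPPROTO_TCP, TCP_CONGESTION, 16)
    return raw.split(b"\x00", 1)[0].decode()

for algo in ("reno", "cubic", "bbr"):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # Unprivileged processes may only pick algorithms listed in
    # /proc/sys/net/ipv4/tcp_allowed_congestion_control.
    s.setsockopt(socket.IPPROTO_TCP, TCP_CONGESTION, algo.encode())
    print(algo, "->", current_cc(s))
    s.close()
```

Each transfer in a comparison can then be run over a socket configured this way, without touching the box's global sysctl settings between runs.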
BBR congestion control https://lwn.net/Articles/776090/ https://lwn.net/Articles/776090/ bircoph <div class="FormattedComment"> I gave BBR a try on a 4.19.13 kernel; the FQ traffic scheduler is enabled as well.<br> Tests were made on two endpoints (desktop and laptop) connected via OpenVPN.<br> <p> With CUBIC on the sending side I get ~3 MB/s initially, which steadily rises to ~5 MB/s and holds there. 5 MB/s hits a CPU limit on the laptop due to the sophisticated encryption used.<br> <p> With BBR I get an initial speed of ~5 MB/s, which drops steadily to ~400 KB/s and holds there.<br> <p> So maybe BBR is good for specific datacenter setups or lab environments, but it is a failure for real-life end-user hardware on common wired or wireless networks. At least for now. Maybe some bug in the kernel?<br> <p> </div> Sun, 06 Jan 2019 21:26:42 +0000 BBR congestion control https://lwn.net/Articles/731447/ https://lwn.net/Articles/731447/ flussence <div class="FormattedComment"> BBR is a sender-side change, so all end users get the benefit if a server upgrades. If it required client changes it'd be nearly worthless; Android vendors never keep their kernels up to date.<br> </div> Sat, 19 Aug 2017 16:30:59 +0000 BBR congestion control https://lwn.net/Articles/731262/ https://lwn.net/Articles/731262/ plasma-tiger <div class="FormattedComment"> With this algorithm in place, will a handheld mobile user see any difference in how fast the web works?<br> </div> Thu, 17 Aug 2017 13:18:44 +0000 BBR congestion control https://lwn.net/Articles/709529/ https://lwn.net/Articles/709529/ dps <div class="FormattedComment"> A while ago I looked at a *lot* of research about bandwidth, delay, etc. measurement. Some of the results were impressive, but the tools developed, and much else, are no longer accessible. The only major exception is nttcp, which Debian has but which is hard to find elsewhere.<br> <p> BBR seems to be using techniques similar to tools which measured response times for two back-to-back ICMP echo request packets, some of which claimed good results. Very asymmetric links and 100:1 contention have made the problem harder.<br> </div> Sat, 17 Dec 2016 00:04:22 +0000 data rate, not bandwidth https://lwn.net/Articles/702643/ https://lwn.net/Articles/702643/ giraffedata <blockquote> "Throughput" is the shorter word you're looking for. </blockquote> <p> In this case, I think "throughput" would have been almost as confusing as "bandwidth", because we're dealing with the special case in which the data rate into the pipe is greater than the data rate out of it (because of that meddlesome buffering by routers). I think of throughput as a steady-state thing covering the whole pipe. Wed, 05 Oct 2016 18:03:39 +0000 data rate, not bandwidth https://lwn.net/Articles/702356/ https://lwn.net/Articles/702356/ marcH <div class="FormattedComment"> <font class="QuotedText">&gt; It's supposed to say, "the actual rate of data...</font><br> <p> "Throughput" is the shorter word you're looking for.<br> </div> Sun, 02 Oct 2016 04:26:41 +0000 Where is one supposed to run this? https://lwn.net/Articles/702353/ https://lwn.net/Articles/702353/ marcH <div class="FormattedComment"> Video isn't just YouTube - it's also Skype, Hangouts etc. The latter are much more vulnerable than the former to network issues, especially latency issues like bufferbloat. When reading the description of BBR above, I felt like someone in the back kept screaming "adaptive video codecs".<br> </div> Sun, 02 Oct 2016 04:23:54 +0000
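The rate-versus-bandwidth distinction being drawn in the "data rate, not bandwidth" subthread is easiest to see operationally: keep raising the send rate and watch the delivery rate, and the point where the two diverge is the pipe's bandwidth (as giraffedata puts it further down the feed). A toy sketch with invented numbers; real BBR samples delivery rates continuously from ACKs rather than stepping through discrete probes:

```python
def discovered_bandwidth(samples):
    """Toy illustration. 'samples' is a list of (send_rate,
    delivered_rate) pairs in Mbit/s, taken while the sender keeps
    raising its rate. The path's bandwidth is the delivery rate at
    the first point where sending outpaces delivery."""
    for send_rate, delivered_rate in samples:
        if send_rate > delivered_rate:
            # Excess data is now queueing in router buffers.
            return delivered_rate
    return None  # the path was never saturated

# Invented measurements: delivery tracks sending until ~50 Mbit/s.
print(discovered_bandwidth([(10, 10), (25, 25), (50, 50),
                            (80, 50), (120, 50)]))  # -> 50
```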
BBR congestion control https://lwn.net/Articles/702324/ https://lwn.net/Articles/702324/ simoncion <div class="FormattedComment"> By relying on the fact that IPv6 requires admins to permit at least _some_ ICMP packets through their firewalls? (And maybe the -somewhat morbid- hope that sysadmins of the "Hackers can't be allowed to _ever_ see any of my ports, ever!!1" persuasion are leaving the field to enjoy their retirement?)<br> </div> Sat, 01 Oct 2016 00:27:42 +0000 BBR congestion control https://lwn.net/Articles/702262/ https://lwn.net/Articles/702262/ pjardetzky <div class="FormattedComment"> Reminds me of this rate-based work from the early 90s: <a href="http://ccr.sigcomm.org/archive/1995/jan95/ccr-9501-keshav91.pdf">http://ccr.sigcomm.org/archive/1995/jan95/ccr-9501-keshav...</a><br> <p> <p> </div> Fri, 30 Sep 2016 05:51:12 +0000 BBR congestion control https://lwn.net/Articles/702231/ https://lwn.net/Articles/702231/ mtaht <div class="FormattedComment"> I don't see how to get past this for a newer ICMP message:<br> <p> "Widespread deployment of ICMP filtering makes it impossible to<br> rely on ICMP Source Quench messages for congestion control."<br> </div> Thu, 29 Sep 2016 19:34:39 +0000 BBR congestion control https://lwn.net/Articles/702226/ https://lwn.net/Articles/702226/ forthy <div class="FormattedComment"> Net2o doesn't use round-trip delay for its rate (though it keeps measuring it); the sender sends out bursts, and the receiver measures how fast those bursts arrive. The return path is not included in the measurement, so the measured time is from the first to the last packet of a burst (plus the timestamp of the first packet in the burst, to measure delay).<br> <p> Note that net2o can do a lot of things TCP can't, because it is redesigned from scratch, and therefore its ack packets can contain more information than a TCP ack.<br> </div> Thu, 29 Sep 2016 18:45:12 +0000 BBR congestion control https://lwn.net/Articles/702153/ https://lwn.net/Articles/702153/ Cyberax <div class="FormattedComment"> Source quench was meant to be a request, not a hint. And it was deployed way back when TCP stacks were exceedingly primitive.<br> <p> A semi-authenticated ICMP message that is treated as a hint might be a much better idea now.<br> </div> Thu, 29 Sep 2016 09:13:32 +0000 BBR congestion control https://lwn.net/Articles/702145/ https://lwn.net/Articles/702145/ mtaht <div class="FormattedComment"> See the history of ICMP source quench:<br> <p> <a href="https://tools.ietf.org/html/rfc6633">https://tools.ietf.org/html/rfc6633</a><br> </div> Thu, 29 Sep 2016 05:10:40 +0000 Where is one supposed to run this? https://lwn.net/Articles/702043/ https://lwn.net/Articles/702043/ gdt <p>Recall that it is the transmitter that makes the decisions in TCP congestion control; the receiver simply ACKs. Where the handset or Chromebook is predominantly a consumer of data, its choice of TCP congestion control algorithm isn't going to make a difference.</p> Wed, 28 Sep 2016 00:22:58 +0000 BBR congestion control https://lwn.net/Articles/701971/ https://lwn.net/Articles/701971/ nnovoice <div class="FormattedComment"> Does BBR not care about the tcp_rmem and tcp_wmem values? I have been trying to set small values for the tcp_rmem and tcp_wmem buffers so as to counter bufferbloat issues on 3G and 4G links. I am also trying to use fq_codel.<br> Also, <a href="http://blog.cerowrt.org/post/bbrs_basic_beauty/">http://blog.cerowrt.org/post/bbrs_basic_beauty/</a> says bandwidth is better and the sawtooth is dead too, which is really good if it happens. Itching to try BBR, but I have to go from 4.1 to 4.9. Long way to go!<br> </div> Tue, 27 Sep 2016 09:45:53 +0000
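forthy's receiver-side measurement a few comments up reduces to simple arithmetic: the bytes of a burst divided by the spread in its arrival timestamps, with the return path playing no part. A toy sketch of that calculation with invented numbers (not net2o's actual code):

```python
def burst_rate(arrivals):
    """Receiver-side rate estimate in the style forthy describes:
    time one burst from its first packet's arrival to its last.
    arrivals: list of (timestamp_s, nbytes), in arrival order."""
    if len(arrivals) < 2:
        return None
    t_first, _ = arrivals[0]
    t_last, _ = arrivals[-1]
    # The first packet only starts the clock; the bytes that arrive
    # during the timed interval are those of packets 2..N.
    nbytes = sum(n for _, n in arrivals[1:])
    return nbytes / (t_last - t_first)   # bytes per second

# A hypothetical 4-packet burst of 1500-byte packets, 1 ms apart:
burst = [(0.000, 1500), (0.001, 1500), (0.002, 1500), (0.003, 1500)]
print(burst_rate(burst))   # 1,500,000 B/s, i.e. ~12 Mbit/s
```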
BBR congestion control https://lwn.net/Articles/701877/ https://lwn.net/Articles/701877/ anton <a href="https://net2o.de/internet-2.0.html">Net2o</a> also estimates the available bandwidth through the round-trip time, as described in these <a href="https://net2o.de/net2o-tl2.pdf">slides from 2012</a>. It would be interesting to see a more detailed comparison between BBR and net2o flow control. Mon, 26 Sep 2016 12:20:57 +0000 Out-competed by more aggressive algorithms? https://lwn.net/Articles/701856/ https://lwn.net/Articles/701856/ mtaht <div class="FormattedComment"> That is a very good explanation. I will be writing more on my blog after I run a few thousand more tests and think about things more. I've already mildly misinterpreted one dataset, publicly, and I'd like to "do it right" - and, for that matter, wait for the paper.<br> <p> That said, BBR and cubic CAN cohabit in a conventional drop-tail system; this plot - showing BBR, cubic, BBR, and cubic flows starting 3 seconds apart, in sequence - shows that.<br> <p> <a href="http://blog.cerowrt.org/flent/bbr-comprehensive/latecomer_advantage.svg">http://blog.cerowrt.org/flent/bbr-comprehensive/latecomer...</a><br> <p> (my interpretation - I welcome corrections!)<br> <p> The first BBR flow gets a reasonable RTT estimate, and when cubic hits the link, BBR falls off to what it thinks is a reasonably "fair share" of the link, while cubic grabs what it can, and then gets crushed by the second BBR flow entering the link (which does not get an accurate early RTT estimate) - BUT - eventually it does, and all flows of both CCs eventually achieve something close to parity.<br> <p> There's a lot of carnage going on in the background to get to this point - and this is a nasty test, intentionally so! And I did most of the data collection in the wee hours of the morning (and, in the interest of science, made all my data public).<br> <p> I keep the blog and a goodly portion of the flent data (including, rarely, packet captures) on github. I track queue depth, emulate several RTTs and multiple kinds of devices (notably cable modems), and also put multiple AQM and fq technologies in the path.<br> <p> <a href="https://github.com/dtaht/blog-cerowrt">https://github.com/dtaht/blog-cerowrt</a><br> <p> I think - based on what I've just seen - I need to go add conventional policers to the test matrix. Sigh. And then there's, you know, making wifi fast, which is what I should be finishing up instead of going gaga over BBR.<br> </div> Sun, 25 Sep 2016 18:55:40 +0000 Out-competed by more aggressive algorithms? https://lwn.net/Articles/701842/ https://lwn.net/Articles/701842/ Cyberax <div class="FormattedComment"> There's another effect here. Lots of routers try to share bandwidth fairly between flows, so BBR and Cubic flows will get the same bandwidth if both are 100% loaded. But in this case Cubic will oscillate back and forth while BBR will remain mostly stable.<br> <p> </div> Sun, 25 Sep 2016 04:29:56 +0000 Out-competed by more aggressive algorithms? https://lwn.net/Articles/701836/ https://lwn.net/Articles/701836/ flussence <div class="FormattedComment"> Let's see if I understand all this right (mtaht will probably correct me if not :)<br> <p> Cubic is aggressive in the sense that it has permanent acceleration, and the only thing keeping its velocity in check is a simple negative feedback loop: it always tries to stuff more data into the network, until *after* the network itself becomes congested and starts rejecting (or dropping) packets.<br> <p> Due to that design it spends most of its time significantly under maximum capacity, ramping up until a congestion spike hits. And due to *that* effect being in the wild for so long, network middleboxes have evolved to buffer tons of data even when they can't forward it right away, because a higher initial rate spike for a short-lived connection (e.g. HTTP) would be perceived as "faster".<br> <p> BBR uses a lot of cleverness to guess the network path's effective bandwidth based on ACK delays and then self-regulates slightly below that, updating its estimate over time. If a Cubic stream is sharing the same pipe, it'll still be oscillating between over- and under-loaded, but because it's mostly under, the BBR stream will see "spare" capacity and gradually eat the difference until it has significantly more than 50% of the total throughput.<br> </div> Sun, 25 Sep 2016 02:20:21 +0000
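flussence's summary maps fairly directly onto BBR's two core estimators: a windowed maximum of the measured delivery rate (the bottleneck bandwidth) and a windowed minimum of the measured round-trip time (the propagation delay), from which BBR derives a pacing rate and a cap on data in flight. A minimal model in that spirit; the window sizes and gains here are simplified stand-ins loosely based on the published design, not the kernel implementation:

```python
import collections
import math

class BBRModel:
    """Sketch of BBR's two core filters; illustrative only."""

    def __init__(self):
        # BtlBw: windowed max of delivery-rate samples (~10 RTTs).
        self.bw_samples = collections.deque(maxlen=10)
        # RTprop: windowed min of RTT samples (a ~10 s window in BBR).
        self.rtt_samples = collections.deque(maxlen=100)

    def on_ack(self, delivered_bytes, interval_s, rtt_s):
        """Feed one ACK's worth of measurement into the filters."""
        self.bw_samples.append(delivered_bytes / interval_s)
        self.rtt_samples.append(rtt_s)

    def btl_bw(self):
        """Bottleneck bandwidth estimate, bytes/second."""
        return max(self.bw_samples)

    def rt_prop(self):
        """Round-trip propagation delay estimate, seconds."""
        return min(self.rtt_samples)

    def pacing_rate(self, gain=1.0):
        # BBR paces at gain * BtlBw; the gain cycles above and below
        # 1.0 to probe for more bandwidth and then drain the queue.
        return gain * self.btl_bw()

    def cwnd_packets(self, gain=2.0, mss=1448):
        # Cap data in flight at a small multiple of the
        # bandwidth-delay product.
        bdp = self.btl_bw() * self.rt_prop()
        return max(4, math.ceil(gain * bdp / mss))

m = BBRModel()
m.on_ack(delivered_bytes=145000, interval_s=0.010, rtt_s=0.040)
m.on_ack(delivered_bytes=150000, interval_s=0.010, rtt_s=0.042)
print(m.pacing_rate())    # ~15 MB/s bandwidth estimate
print(m.cwnd_packets())   # roughly 2x the BDP, in packets
```

The key design point, visible even in this sketch, is that neither estimator reacts to packet loss at all; the send rate comes from measurement rather than from a loss-triggered sawtooth.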
BBR congestion control https://lwn.net/Articles/701839/ https://lwn.net/Articles/701839/ Cyberax <div class="FormattedComment"> Realistically, though, the global network topology doesn't usually change drastically during the life of a typical TCP connection. It's not possible to obtain an accurate guaranteed throughput number, but simple estimates might help.<br> <p> Something like the ICMP "Packet Too Big" message, but for bandwidth. If a router sees an "Advised Throughput Capacity" message with a value greater than it can sustain, it re-sends the message with a lower value. Kinda like extended ECN.<br> <p> Of course, endpoints will be free to ignore this hint.<br> </div> Sun, 25 Sep 2016 01:59:22 +0000 data rate, not bandwidth https://lwn.net/Articles/701835/ https://lwn.net/Articles/701835/ giraffedata <p>I had a hard time understanding this at first because of the incorrect use of the word "bandwidth." Bandwidth is capacity, so phrases such as "the actual bandwidth of data delivered to the far end" don't make sense. <p> It's supposed to say "the actual rate of data delivered to the far end," and BBR is simply about comparing the rate of data delivered to the far end with the rate sent from the near end; when you've raised the latter to the point that it's greater than the former, you've discovered the bandwidth of the connection. <p> To be even clearer, I would say "the actual rate of data arriving at the far end." Sun, 25 Sep 2016 00:57:12 +0000 Out-competed by more aggressive algorithms? https://lwn.net/Articles/701834/ https://lwn.net/Articles/701834/ giraffedata <blockquote> No, BBR crushes cubic in its earlier phases, in my preliminary tests. </blockquote> <p> I'm not sure what you mean by this. Do you mean a node gets more bandwidth if it uses BBR than if it uses CUBIC, in a typical network? Or that in a network in which one node uses BBR and another uses CUBIC, the BBR node gets more bandwidth? Or something else? <p> The actual question is about a node using BBR in a network in which other nodes use an older, more aggressive policy. That may be a moot question. The older policies one finds in a network aren't more aggressive; CUBIC in particular is not terribly aggressive. Even older policies do things like cut their sending rate by half when they see the first dropped packet, which is not aggressive at all. Sun, 25 Sep 2016 00:44:42 +0000
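For reference, the backoff sizes being compared here are visible in CUBIC's textbook window function from RFC 8312 (the constants below are the RFC defaults; Linux's implementation differs in detail): on loss the window drops to beta * W_max, i.e. a 30% cut rather than classic Reno's 50%, then regrows along a cubic curve. A short sketch:

```python
def cubic_window(t, w_max, C=0.4, beta=0.7):
    """CUBIC's window regrowth after a loss event, per RFC 8312.
    t: seconds since the loss; w_max: window (packets) at the loss."""
    # K is the time needed to climb back to w_max after the window
    # has been reduced to beta * w_max.
    K = ((w_max * (1 - beta)) / C) ** (1.0 / 3.0)
    return C * (t - K) ** 3 + w_max

w_max = 100.0
for t in range(0, 9):
    print(t, round(cubic_window(t, w_max), 1))
# t=0 gives 70.0: the 30% cut. The curve then flattens as it nears
# w_max (the concave region), crosses it at t=K (~4.2 s here), and
# accelerates past it (the convex region) - the "permanent
# acceleration" described above, checked only by the next loss.
```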
BBR congestion control https://lwn.net/Articles/701828/ https://lwn.net/Articles/701828/ hechacker1 <div class="FormattedComment"> Thanks,<br> <p> it was a good post. I'm hoping to see it in Ubuntu Server soon, since I use a Squid proxy for my own network.<br> </div> Sat, 24 Sep 2016 21:23:53 +0000 BBR congestion control https://lwn.net/Articles/701822/ https://lwn.net/Articles/701822/ shemminger <div class="FormattedComment"> End-to-end bandwidth information does not exist in the Internet. Circuit-switched networks died years ago.<br> Only in restricted research networks and closed networks has it ever been possible. Even then, the dynamic nature of available bandwidth means that any data given to endpoints would be out of date.<br> </div> Sat, 24 Sep 2016 17:13:52 +0000 Out-competed by more aggressive algorithms? https://lwn.net/Articles/701811/ https://lwn.net/Articles/701811/ mtaht <div class="FormattedComment"> No, BBR crushes cubic in its earlier phases, in my preliminary tests.<br> </div> Sat, 24 Sep 2016 12:51:52 +0000 Out-competed by more aggressive algorithms? https://lwn.net/Articles/701693/ https://lwn.net/Articles/701693/ runekock <div class="FormattedComment"> From reading this, I would fear that sharing bandwidth with an older, more aggressive algorithm will result in the BBR flow getting squeezed out. Sort of a networking parallel to Gresham's law.<br> </div> Fri, 23 Sep 2016 10:43:01 +0000 BBR congestion control https://lwn.net/Articles/701688/ https://lwn.net/Articles/701688/ Cyberax <div class="FormattedComment"> ATM (and, really, other synchronous networks) was the first step. The next one was RSVP ( <a href="https://en.wikipedia.org/wiki/Resource_Reservation_Protocol">https://en.wikipedia.org/wiki/Resource_Reservation_Protocol</a> ).<br> <p> I worked for a telecom that tried to deploy it. It actually looked awesome when it worked - you could get a channel across a network with guaranteed throughput and no jitter. And because telecoms usually build networks in rings, it could even survive individual fiber cuts without losing a single packet.<br> <p> The catch, of course, was "when it worked". RSVP-capable hardware cost a LOT more than regular hardware, and when it broke, finding the cause often involved restarting large parts of the network.<br> <p> Eventually it was abandoned and they simply overprovisioned networks, segregating pure data flows into low-priority buckets.<br> </div> Fri, 23 Sep 2016 09:37:46 +0000 BBR congestion control https://lwn.net/Articles/701686/ https://lwn.net/Articles/701686/ diegor <div class="FormattedComment"> This is what they tried to do in the past, and what they came up with was ATM. Before a connection is opened, ATM verifies that every router has enough bandwidth for your connection, and that bandwidth is allocated to your connection.<br> <p> ATM was pushed by many big telecommunication companies, but in the end it failed. Without routers pre-allocating bandwidth, there is no real improvement to be had from estimated "bandwidth information". <br> <p> Note that in a resilient network, where every packet of a connection can be routed a different way, it is just delusional to think that you can track bandwidth allocation for every connection better on the routers than you can on the endpoints.<br> <p> Every complex problem has a nice solution, easy to understand, that doesn't work.<br> </div> Fri, 23 Sep 2016 09:22:24 +0000
BBR congestion control https://lwn.net/Articles/701670/ https://lwn.net/Articles/701670/ gdt <p><i>at least ten such algorithms in the Linux Kernel</i></p> <p>Having all the congestion control algorithms available on a platform allows simple reasoning about the real-world performance of the algorithms: the only difference is the algorithm; the operating system, hardware and network are constant.[1] Stephen and others' implementation of pluggable congestion control modules was by itself responsible for an appreciable advance in the state of the art, as it made in-the-field comparisons of algorithm performance easy to do: you didn't additionally have to reason about the comparative networking attributes of FreeBSD and Windows. This doesn't mean that all the algorithms provided by Linux are a good choice, far from it. However, it is still useful to ship these algorithms as points of reference to identify regressions in newer algorithms.</p> <p>As for user complexity, the default algorithm is a reasonable choice. There's no reason for a user to know other choices exist. If you wish to make a different choice, then having all the alternatives available is a good thing, as whichever alternative is "best" often depends upon your circumstances. The default algorithm has been changed in the past, and I expect it will be again. However, given the possibility of a fault causing congestion collapse, all maintainers of widely-used operating systems take pains to ensure that algorithms and their implementations have proven themselves before they are used as a new default.</p> <p>--------</p> <p>[1] "all", as far as patent law allows.</p> Fri, 23 Sep 2016 03:32:00 +0000 BBR congestion control https://lwn.net/Articles/701668/ https://lwn.net/Articles/701668/ jcm <div class="FormattedComment"> What is being done to solve the real problem (of providing estimated bandwidth information to endpoints on a network)? Sure, there are all kinds of things that could go wrong and all kinds of lies (equipment needs to propagate this, and provide updates in a timely fashion, etc.), but what is actually being done to try?<br> <p> Too often, I see problems in this world being solved by clever technical workarounds because "utopia will never happen, so we'll just do this...". It might take a decade to provide a nicer solution that works at Internet scale, which is all the more reason to get that moving now. And it's amazing how you can get people to adopt standards if you have big enough players pushing them through to help.<br> </div> Fri, 23 Sep 2016 02:03:45 +0000 BBR congestion control https://lwn.net/Articles/701659/ https://lwn.net/Articles/701659/ Sesse <div class="FormattedComment"> Sure, but the question was about CoDel, so I used that as an example.<br> <p> In a sense, it's a bit icky that sch_fq uses the qdisc hooks to do TCP pacing, but given how many failed attempts there were at implementing paced TCP in Linux before Eric Dumazet came along and just did it, and how amazingly well it works (even without BBR, it's a great win), I'll survive that it takes up the "slot" from fq_codel. (Also, it's a bit confusing that it's called sch_fq; it should really be called sch_packet_pacing, because I can't imagine another reason to run it.)<br> </div> Thu, 22 Sep 2016 23:46:24 +0000
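The pacing Sesse credits to sch_fq can also be exercised per socket: Linux exposes SO_MAX_PACING_RATE, a rate cap that fq enforces, and which is useful even without BBR (BBR itself computes its own pacing rate internally). A small sketch, assuming a Linux host; the option's numeric value is copied from the kernel headers because Python's socket module does not export it:

```python
import socket

# Linux-only; value from include/uapi/asm-generic/socket.h.
SO_MAX_PACING_RATE = 47

def paced_socket(rate_bytes_per_sec):
    """Create a TCP socket whose send rate is capped; with sch_fq
    installed as the qdisc, fq enforces the pacing."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, SO_MAX_PACING_RATE,
                 rate_bytes_per_sec)
    return s

s = paced_socket(1_250_000)  # ~10 Mbit/s
print(s.getsockopt(socket.SOL_SOCKET, SO_MAX_PACING_RATE))
s.close()
```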
BBR congestion control https://lwn.net/Articles/701654/ https://lwn.net/Articles/701654/ shemminger <div class="FormattedComment"> From Eric Dumazet:<br> <p> fq_codel is stochastic, so it won't work very well on hosts with 1,000,000 flows or more...<br> <p> fq_codel is aimed at routers, while sch_fq targets hosts, implementing pacing at minimal cost (one high-resolution timer per qdisc).<br> </div> Thu, 22 Sep 2016 23:18:56 +0000 BBR congestion control https://lwn.net/Articles/701641/ https://lwn.net/Articles/701641/ mtaht <div class="FormattedComment"> Sesse - *fq*_codel is for the routers along the way, IMHO. :)<br> <p> Given BBR's characteristics, the "FQ" portion helps most with legacy CC mechanisms, but I did just run a long string of tests showing that BBR + fq_codel is very good - much better than either alone.<br> <p> I'll try to write up those results in the next week or so, but the flent dataset is here: <a href="http://blog.cerowrt.org/flent/bbr-comprehensive.tgz">http://blog.cerowrt.org/flent/bbr-comprehensive.tgz</a> if you want to browse and compare.<br> </div> Thu, 22 Sep 2016 20:43:22 +0000 Where is one supposed to run this? https://lwn.net/Articles/701544/ https://lwn.net/Articles/701544/ farnz <p>Any TCP endpoints - desktops, servers etc. Routers don't need to if they're just routing, but they'll benefit for things like SSH. Thu, 22 Sep 2016 15:28:41 +0000 Where is one supposed to run this? https://lwn.net/Articles/701542/ https://lwn.net/Articles/701542/ blitzkrieg3 <div class="FormattedComment"> Based on its being developed by Google, and the fact that it's supposed to work better with WiFi-connected devices, I'm thinking Android handsets and Chromebooks.<br> </div> Thu, 22 Sep 2016 15:17:19 +0000 Where is one supposed to run this? https://lwn.net/Articles/701511/ https://lwn.net/Articles/701511/ giggls <div class="FormattedComment"> Which machines should run this, other than Linux-based servers?<br> <p> My (Linux-based) router or desktop?<br> <p> Sven<br> </div> Thu, 22 Sep 2016 12:39:31 +0000 BBR congestion control https://lwn.net/Articles/701491/ https://lwn.net/Articles/701491/ Sesse <div class="FormattedComment"> BBR isn't a traffic control algorithm (like CoDel is); it's a TCP congestion control algorithm (like CUBIC is).<br> <p> Think of it this way: BBR is for your endpoints. CoDel is for every router along the way. (The waters get slightly muddied if your server doubles as a router, though.)<br> </div> Thu, 22 Sep 2016 10:38:47 +0000
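Sesse's division of labor translates directly into configuration: on a TCP endpoint, pair BBR with the fq qdisc (which supplies the pacing BBR relies on, at least on the 4.9-era kernels discussed here), and leave fq_codel on the routers in between. A hedged sketch of the endpoint side, assuming Linux 4.9+ with the tcp_bbr module available and root privileges:

```python
# Endpoint setup for BBR: fq as the default qdisc, bbr as the
# congestion control. Routers would instead keep fq_codel.

def sysctl_write(key, value):
    """Write a sysctl via its /proc/sys path."""
    with open("/proc/sys/" + key.replace(".", "/"), "w") as f:
        f.write(value)

def sysctl_read(key):
    with open("/proc/sys/" + key.replace(".", "/")) as f:
        return f.read().strip()

if __name__ == "__main__":
    sysctl_write("net.core.default_qdisc", "fq")
    sysctl_write("net.ipv4.tcp_congestion_control", "bbr")
    for key in ("net.core.default_qdisc",
                "net.ipv4.tcp_congestion_control"):
        print(key, "=", sysctl_read(key))
```

The same two keys can of course be set with sysctl directly or persisted in /etc/sysctl.d; the script form just makes the before/after check explicit.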