LWN: Comments on "Multiqueue networking" https://lwn.net/Articles/289137/ This is a special feed containing comments posted to the individual LWN article titled "Multiqueue networking". en-us Wed, 05 Nov 2025 18:45:25 +0000 Wed, 05 Nov 2025 18:45:25 +0000 https://www.rssboard.org/rss-specification lwn@lwn.net Multiqueue networking https://lwn.net/Articles/388074/ https://lwn.net/Articles/388074/ softarts <div class="FormattedComment"> Hi,<br> Has anything changed in user space (the socket system calls)?<br> How can an application read from a specific queue?<br> <p> </div> Tue, 18 May 2010 15:29:29 +0000 Multiqueue networking https://lwn.net/Articles/291877/ https://lwn.net/Articles/291877/ sylware <div class="FormattedComment"><pre> Basically we are taking the path of hardware-accelerated traffic classes. Indeed, the layer above would now feed the queues according to the QoS bandwidth allocated to each of them... might we, in the near future, have hardware QoS bandwidth on those queues too? That would offload traffic-class management work from the IPv6 stack. </pre></div> Tue, 29 Jul 2008 15:59:47 +0000 Multiqueue networking https://lwn.net/Articles/291374/ https://lwn.net/Articles/291374/ efexis <div class="FormattedComment"><pre> Virtual machine networking performance can be improved by virtualising the TCP offloading functions often found in hardware network cards. Instead of the virtual machine doing all the TCP and Ethernet work itself and passing the packet to the host machine, which then has to do some processing on it as well, the virtual machine asks its hardware accelerator functions to take care of it... this can get the packet into the host's actual hardware more quickly and save a chunk of double processing. I'm not sure about multiple hardware queues. 
The hardware, drivers, and virtual machine would all have to be designed for it, allowing each virtual machine direct access to a hardware queue, with the hardware potentially having to do virtual memory mapping so that a virtual machine can't use its queue to grab data from the host's memory, and with the hardware then deciding how to schedule packets from the different machines against each other. Aside from all of this, giving a virtual machine direct access to the real hardware somewhat defeats the purpose of virtualisation. </pre></div> Thu, 24 Jul 2008 00:10:02 +0000 Multiqueue networking https://lwn.net/Articles/290997/ https://lwn.net/Articles/290997/ eliezert <div class="FormattedComment"><pre> I think that the main point was to push the tx lock down into the queue. This was a major shortcoming of the previous implementation. </pre></div> Mon, 21 Jul 2008 10:48:42 +0000 Multiqueue networking https://lwn.net/Articles/290665/ https://lwn.net/Articles/290665/ willp <B>(Virtualization)</B>&#160; Will this help widen the networking bottleneck that exists for a host OS running many virtual machines (since it is currently a many:one sharing of a single queue)? <BR> This is discussed elsewhere at ACM: <A rel="nofollow" HREF="http://www.acmqueue.org/modules.php?name=Content&pa=showpage&pid=527" target=_new>http://www.acmqueue.org/modules.php?name=Content&pa=showpage&pid=527</A> as a problem that needs a solution... &quot;The use of multiqueue network interfaces in this way should improve network virtualization performance by eliminating the multiplexing and copying overhead inherent in sharing I/O devices in software among virtual machines.&quot; for instance...<BR><BR> Perhaps this is a path for KVM development? 
Fri, 18 Jul 2008 09:56:30 +0000 Multiqueue networking https://lwn.net/Articles/289462/ https://lwn.net/Articles/289462/ gdt <p>I don't know about DaveM's work, but real routers offer differing QoS policies per class-of-traffic queue (e.g., a limited queue length with tail drop for a class carrying voice traffic, a long queue with RED drop for classes carrying TCP traffic). Since Linux implements scheduling and queueing policies using qdiscs, a qdisc per queue would make sense.</p> Thu, 10 Jul 2008 14:05:03 +0000 Multiqueue networking https://lwn.net/Articles/289401/ https://lwn.net/Articles/289401/ walken <div class="FormattedComment"><pre> Hmmm, thanks for the link :) I'm still very confused, though. Both in his blog and in the patch series' initial comment, David mentions duplicating the qdiscs so that he'd have one per queue rather than one per device. I'm confused about whether this is supposed to be just an implementation detail to reduce locking somehow, or whether it would be exposed in the user-visible traffic shaping interface (in which case I don't understand how you'd use it :) </pre></div> Thu, 10 Jul 2008 08:03:22 +0000 Multiqueue networking https://lwn.net/Articles/289380/ https://lwn.net/Articles/289380/ johill Yes, there is indeed already a multiqueue implementation. See <a href="http://vger.kernel.org/~davem/cgi-bin/blog.cgi/2008/07/09#netdev_tx_peeling">davem's blog</a> for a better explanation than I could give. Thu, 10 Jul 2008 05:46:03 +0000 Multiqueue networking https://lwn.net/Articles/289367/ https://lwn.net/Articles/289367/ walken <div class="FormattedComment"><pre> I'm confused; I thought there was already a multiqueue implementation in the kernel? (see Documentation/networking/multiqueue.txt in 2.6.25...) </pre></div> Thu, 10 Jul 2008 03:26:12 +0000
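On the question raised above about user-space changes: applications keep using ordinary sockets, and the kernel transparently picks a transmit queue per flow, typically by hashing the flow's addresses and ports so that all packets of one connection stay on one queue (which avoids reordering). The sketch below is a toy model of that idea only, using CRC32 as a stand-in hash; it is not the kernel's actual hash function, and `select_tx_queue` is a hypothetical name.

```python
import zlib

def select_tx_queue(src_ip, src_port, dst_ip, dst_port, num_queues):
    """Toy model of per-flow tx queue selection: hash the flow's 4-tuple
    and reduce it modulo the queue count, so every packet of a given flow
    maps to the same hardware queue while distinct flows spread out."""
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    return zlib.crc32(key) % num_queues

# Packets of the same flow always land on the same queue...
q1 = select_tx_queue("10.0.0.1", 40000, "10.0.0.2", 80, 8)
q2 = select_tx_queue("10.0.0.1", 40000, "10.0.0.2", 80, 8)
assert q1 == q2 and 0 <= q1 < 8
# ...while a different flow may (and usually does) hash elsewhere.
q3 = select_tx_queue("10.0.0.1", 40001, "10.0.0.2", 80, 8)
assert 0 <= q3 < 8
```

Because the queue choice happens below the socket layer, no new system calls are needed; the per-queue locking and per-queue qdiscs discussed in the other comments are purely kernel-internal.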