LWN: Comments on "Zero-copy network transmission with io_uring" https://lwn.net/Articles/879724/ This is a special feed containing comments posted to the individual LWN article titled "Zero-copy network transmission with io_uring". Zero-copy network transmission with io_uring https://lwn.net/Articles/883240/ https://lwn.net/Articles/883240/ eximius <div class="FormattedComment"> What is the reason behind the hoop-jumping with extra notifications and generations, and userspace informing the kernel when it can move on to the next generation?<br> <p> With the completion notifications, it would seem that the simplest, most misuse-resistant API would be to delay the completion notification until the operation was *actually* done - which is what we wanted? (Unless there is some semantic distinction between the two that I&#x27;m missing.)<br> </div> Mon, 31 Jan 2022 05:59:36 +0000 Zero-copy network transmission with io_uring https://lwn.net/Articles/882515/ https://lwn.net/Articles/882515/ Funcan <div class="FormattedComment"> Cows are gassy and slow - even spherical cows in a vacuum.<br> <p> A language like golang, which already has magic in the compiler to promote local variables to the heap automatically where needed, might be able to optimise this away behind the scenes (not necessarily mainline golang, since that would require baking the semantics of networking into the compiler, which would be odd - but a language with similar properties), but it is probably better to provide a less leaky abstraction to programmers. Things done behind the scenes to make one interface look like an older, simpler one are rarely optimal, and this code is about getting the last drop of network performance.<br> </div> Tue, 25 Jan 2022 14:37:31 +0000 Zero-copy network transmission with io_uring https://lwn.net/Articles/881256/ https://lwn.net/Articles/881256/ farnz <p>This does also reduce the number of copies when using kTLS. "Zero copy" is a bit of a misnomer - it's only there to eliminate memcpys from user-owned memory to kernel-owned memory, not all copies. <p>The point of "zero copy" is that in a normal transfer, data is copied from the user buffer to a kernel buffer, then the network card does DMA from the kernel buffer to its own transmit buffer. "Zero copy" reduces that to a copy from the user buffer to the NIC's transmit buffer. <p>With kTLS, "zero copy" is a win with or without expensive NICs: <ol> <li>With expensive NICs, the NIC can do the encryption during DMA from CPU memory to the transmit buffer. You thus avoid copying the data into the kernel, and just have the NIC read and encrypt during DMA. <li>With cheap NICs, the kernel has to do a copy. Without zero copy, it copies plain text from user buffer to kernel buffer and then encrypts from kernel buffer to network buffer. With zero copy, it encrypts from user buffer to network buffer. In either case, the NIC will then DMA the network buffer into the on-chip transmit buffer. 
</ol> Thu, 13 Jan 2022 13:55:09 +0000 Zero-copy network transmission with io_uring https://lwn.net/Articles/881228/ https://lwn.net/Articles/881228/ Lawless-M <div class="FormattedComment"> <font class="QuotedText">&gt; io_uring&#x27;s zero-copy operations can perform more than 200% better than MSG_ZEROCOPY.</font><br> <p> The maximum speedup posted was 2.27x, which is 127% better than MSG_ZEROCOPY.<br> <p> 200% better would be a 3x speedup.<br> </div> Thu, 13 Jan 2022 09:56:43 +0000 Zero-copy network transmission with io_uring https://lwn.net/Articles/881193/ https://lwn.net/Articles/881193/ neilbrown <div class="FormattedComment"> <font class="QuotedText">&gt; My question is what&#x27;s the benefit of zero-copy data when the decrypt/encrypt step is in between.</font><br> <p> &quot;Zero copy&quot; is a marketing term. A more accurate term would be &quot;reduced copy&quot;.<br> You might imagine a naive protocol stack where a copy happens when moving from each level to the next. Then the data is copied onto the network fabric, copied off at the destination, and copied back up the stack.<br> <p> At any stage there is a potential benefit in avoiding the copy (and also a cost, so small messages are likely to be copied anyway).<br> <p> Encrypt/decrypt may require a copy that would not otherwise be needed - though it may be possible to encrypt in place, or to encrypt-and-copy as part of one of the unavoidable copies (like copying onto the networking fabric). But that doesn&#x27;t mean there aren&#x27;t opportunities to reduce copying when encryption is used.<br> <p> And also, encryption is not always used, even though it should always be available. On the open Internet, or in the public cloud, encryption is a must-have. In a private machine-room with a private network, there is minimal value in encryption, and there may be great value in reducing latency. In that case, it may be possible and beneficial to eliminate all the memory-to-memory copies ... particularly when an RDMA network fabric is used, which allows the receiver to tell the sender where in memory to place different parts of an incoming message.<br> <p> </div> Thu, 13 Jan 2022 01:40:13 +0000 Zero-copy network transmission with io_uring https://lwn.net/Articles/881089/ https://lwn.net/Articles/881089/ al4711 <div class="FormattedComment"> <font class="QuotedText">&gt; What exactly is the question?</font><br> <p> My question is what&#x27;s the benefit of zero-copy data when the decrypt/encrypt step is in between.<br> <p> Maybe I misunderstand the benefit, so please let me draw a picture.<br> <p> client -&gt; data -&gt; nic -&gt; kernel -&gt; server reads data and writes data to nic buffer -&gt; client<br> <p> When we now look at the decrypt/encrypt step, this is my understanding:<br> <p> client -&gt; data -&gt; nic -&gt; kernel -&gt; server reads data -&gt; decrypt/encrypt -&gt; writes data to nic buffer -&gt; client<br> <p> Could kTLS help in this case?<br> </div> Wed, 12 Jan 2022 12:25:35 +0000 Zero-copy network transmission with io_uring https://lwn.net/Articles/881024/ https://lwn.net/Articles/881024/ farnz <p>Two things here: <ol> <li>Available to everyone; in the process of encrypting using the CPU, I end up with a copy of the data in encrypted form. I don't need to copy that again into kernel buffers, I can just point the kernel at the already encrypted data. Same reasoning applies if I'm not processing the plain text, just forwarding pre-encrypted data (E2E use cases). 
<li>For places with big pockets; there exist expensive network cards from brands like Mellanox capable of doing encryption as part of the scatter-gather DMA to the card. kTLS and IPSec both take advantage of this to have the network card encrypt the payload as it copies it out of main memory and onto the wire. This means that I can use kTLS or IPSec and have the communication encrypted on the wire without using CPU time to do the crypto; if I don't have one of the fancy cards, then the kernel encrypts for me, directly copying from my userspace buffer to the kernel's encrypted send buffer. </ol> Tue, 11 Jan 2022 17:09:24 +0000 Zero-copy network transmission with io_uring https://lwn.net/Articles/880970/ https://lwn.net/Articles/880970/ smurf <div class="FormattedComment"> Simple. You don&#x27;t need to copy the encrypted data.<br> <p> What exactly is the question?<br> </div> Tue, 11 Jan 2022 14:26:30 +0000 Zero-copy network transmission with io_uring https://lwn.net/Articles/880966/ https://lwn.net/Articles/880966/ al4711 <div class="FormattedComment"> As we see today, almost everything is switching to encrypted communication; how does this zero-copy concept fit with encrypting and decrypting traffic?<br> </div> Tue, 11 Jan 2022 13:19:26 +0000 Zero-copy network transmission with io_uring https://lwn.net/Articles/880661/ https://lwn.net/Articles/880661/ kpfleming <div class="FormattedComment"> &#x27;meticulously undocumented&#x27; has to be my favorite Corbet-ism. It conjures a mental image of a developer spending hours and hours poring over the patches to ensure that no accidental documentation has leaked through. Thanks :-)<br> </div> Fri, 07 Jan 2022 12:28:24 +0000 Zero-copy network transmission with io_uring https://lwn.net/Articles/880524/ https://lwn.net/Articles/880524/ farnz <p>I had to use mutexes because of the way the rest of the application was structured - the socket was already being polled elsewhere via edge-triggered epoll (not my decision, and in a bit of code I had no control over), and I needed to somehow feed the notifications from the main loop to the client threads. <p>It was not a nice project to work on, and the moment I got the chance, I rewrote it as a single-threaded application that used less CPU (but the same amount of memory) and was much easier to read. Unfortunately, this meant arguing with management that the "framework" from the contracted developers wasn't worth the money they'd paid for it. Thu, 06 Jan 2022 10:24:08 +0000 Zero-copy network transmission with io_uring https://lwn.net/Articles/880518/ https://lwn.net/Articles/880518/ developer122 <div class="FormattedComment"> I&#x27;m really starting to warm to the concept of an exokernel as a high-performance solution. Just do away with all the kernel abstractions and system calls and even the message passing of a microkernel.<br> <p> Two threads, one with access to the networking hardware and one from the application, communicating simultaneously through a lockless ring buffer in shared memory. If you ignore the fact that the former here is in kernelspace (in an exokernel, userspace processes can be given direct hardware access), then this starts to look a lot like that. 
Real examples exist.<br> </div> Thu, 06 Jan 2022 09:13:07 +0000 Zero-copy network transmission with io_uring https://lwn.net/Articles/880516/ https://lwn.net/Articles/880516/ NYKevin <div class="FormattedComment"> <font class="QuotedText">&gt; And I can (and did) emulate this by using zero-copy send and mutexes in userspace, but it&#x27;s not exactly easy to maintain code - and yet it&#x27;s just doing stuff that the kernel already knows how to do well.</font><br> <p> I don&#x27;t see why you would need to use mutexes. The sample code in <a href="https://www.kernel.org/doc/html/v4.15/networking/msg_zerocopy.html">https://www.kernel.org/doc/html/v4.15/networking/msg_zero...</a> uses poll(2) to wait for the send to complete, and I tend to assume you could also use select(2) or epoll(2) instead if you find those easier or more familiar (until now, I had never heard of poll). Just write a five-line wrapper function that calls send(2) and then waits on the socket with one of those syscalls, and (as far as I can tell) you should be good to go.<br> <p> Frankly, the kernel is not a library. If we *really* need to have a wrapper function for this, it ought to live in libc, not in the kernel.<br> </div> Thu, 06 Jan 2022 06:47:24 +0000 Zero-copy network transmission with io_uring https://lwn.net/Articles/880431/ https://lwn.net/Articles/880431/ foom <div class="FormattedComment"> It&#x27;s not the time it blocks that&#x27;s gonna be the problem, but the impact on buffering.<br> <p> Waiting until step 4 effectively removes local kernel buffering -- when one send completes there must be no further data available to send, thus outgoing packets will pause. And, like TCP_NODELAY, would cause extra partially filled packets to be sent at the tail of each send. If you&#x27;re trying to send a stream of data as fast as the network allowed, this will all be counterproductive, unless all the data is provided in a single send. Sometimes that may be possible, but it seems like a very limited use case.<br> <p> And, if your primary goal is not scalability or cpu, but rather memory usage, it seems like it&#x27;d be far simpler and just as effective to reduce SO_SNDBUF and use a regular blocking send loop.<br> </div> Wed, 05 Jan 2022 13:41:09 +0000 Zero-copy network transmission with io_uring https://lwn.net/Articles/880425/ https://lwn.net/Articles/880425/ farnz <p>Setting stack sizes is trivial to do - you don't have to stick to the default, and when you're on a small system, you do tune the stack size down to a sensible level for your memory. Plus, those megabytes are VA space, not physical memory; there's no problem having a machine with 256 MiB physical RAM, no swap and 16 GiB of VA space allocated, as long as you don't actually try to use all your VA space. <p>And you're focusing on speed again, not simplicity of programming a low memory usage system; I want to be able to call send, have the kernel not need to double my buffer (typically an entire compressed video frame in the application I was working on) by copying it into kernel space, and then poll the kernel until the IP stack has actually sent the data. I want to be able to call send, and know that when it returns, the video frame has been sent on the wire, and I can safely reuse the buffer for the next encoded frame. <p>It's not that send is slow - it's that doing a good job of keeping memory use down on a system with reasonable CPU (in order to keep the final BOM down while still having enough grunt to encode video) requires co-operation. 
And I can (and did) emulate this by using zero-copy send and mutexes in userspace, but it's not exactly easy to maintain code - and yet it's just doing stuff that the kernel already knows how to do well. Wed, 05 Jan 2022 10:06:55 +0000 Zero-copy network transmission with io_uring https://lwn.net/Articles/880423/ https://lwn.net/Articles/880423/ NYKevin <div class="FormattedComment"> <font class="QuotedText">&gt; but not in terms of memory usage, </font><br> <p> Each thread has a stack, whose default size seems to be measured in megabytes (by cursory Googling, anyway). If you spawn way too many threads, you are going to use way too much memory just allocating all of those stacks.<br> <p> <font class="QuotedText">&gt; nor in terms of small servers handling a few tens of clients at peak</font><br> <p> I find it difficult to believe that blocking send(2) is too slow yet you only have tens of clients. You are well off the beaten path if that&#x27;s really the shape of your problem. So I guess you get to build your own solution.<br> </div> Wed, 05 Jan 2022 09:46:50 +0000 Zero-copy network transmission with io_uring https://lwn.net/Articles/880419/ https://lwn.net/Articles/880419/ farnz <p>You've explained why it's inadequate in terms of CPU time given a large number of clients, but not in terms of memory usage, nor in terms of small servers handling a few tens of clients at peak; different optimization targets for different needs. <p>For the general case, io_uring and async is the "best" option, but it brings in a lot of complexity managing the state machines in user code rather than simply relying on thread per client. Zero-copy reduces memory demand as compared to current send syscalls, and having a way to do simple buffer management would be useful for the subset of systems that don't actually care about CPU load that much, don't have many clients at a time to multiplex (hence not many threads), but do want a simple "one thread per client" model that avoids cross-thread synchronisation fun. <p>Not everything is a Google/Facebook/Netflix level problem, and on embedded systems I've worked on, a zero-copy blocking until ACK send would have made the code smaller and simpler; we emulated it in userspace via mutexes, but that's not exactly a high performance option. Wed, 05 Jan 2022 09:39:11 +0000 Zero-copy network transmission with io_uring https://lwn.net/Articles/880403/ https://lwn.net/Articles/880403/ NYKevin <div class="FormattedComment"> In order to get to 4, you must first get to 2, and as I explained, there&#x27;s no way to make that perform acceptably in the general case, so you might as well give up on zero-copy and go with regular old blocking, copying send(2) instead.<br> </div> Wed, 05 Jan 2022 04:37:57 +0000 Zero-copy network transmission with io_uring https://lwn.net/Articles/880305/ https://lwn.net/Articles/880305/ farnz <p>Getting to 4 is, however, useful in terms of RAII or other stack-based buffer management. Until you get to 4, the kernel may need access to the data again so that it can resend if not ACKed; once you get to 4, the kernel will never look at the buffer again, even if the remote application doesn't receive the data. <p>Basically, getting to 1 is the minimum useful state, but it makes zero-copy hard, because I now have to keep the buffer alive in user-space until the kernel gets to 4. Getting to 4 or clearly indicating that we will never get to 4 is useful because it means that when send returns, the kernel is promising to never look at the buffer again. 
Tue, 04 Jan 2022 11:19:09 +0000 Zero-copy network transmission with io_uring https://lwn.net/Articles/880247/ https://lwn.net/Articles/880247/ schuyler_t <div class="FormattedComment"> The overall bandwidth matters a lot too. With high bandwidth (handwave at &gt;10 gbps), it&#x27;s extremely easy to begin running into CPU&lt;-&gt;memory bandwidth limitations, especially for non-server class big metal CPUs. With a dummy NIC it&#x27;s hard to tell, like you said. But once you run into that cliff, ZC is pretty much the only way around it.<br> </div> Mon, 03 Jan 2022 16:39:20 +0000 Zero-copy network transmission with io_uring https://lwn.net/Articles/880201/ https://lwn.net/Articles/880201/ taladar <div class="FormattedComment"> While that is true the kernel can not really recover from a high level rejection of the message using just the buffer content anyway.<br> </div> Mon, 03 Jan 2022 08:25:15 +0000 Zero-copy network transmission with io_uring https://lwn.net/Articles/880167/ https://lwn.net/Articles/880167/ luto <div class="FormattedComment"> That would be *stupendously* slow. It requires changing the PTE (no big deal, although locking might be nasty), broadcasting a TLB flush to all threads (might as well throw out your fancy server and run on a single CPU from 1999), and then eventually handling a page fault (embarrassingly slow on x86).<br> </div> Sun, 02 Jan 2022 20:35:24 +0000 Zero-copy network transmission with io_uring https://lwn.net/Articles/880158/ https://lwn.net/Articles/880158/ NYKevin <div class="FormattedComment"> It depends on what you mean by &quot;the data has been sent.&quot; There are multiple possible definitions of that phrase, in chronological order:<br> <p> 0. The kernel has validated the operation (i.e. checked the arguments), and is preparing to push the data we care about into the TCP send queue, but it has not actually done that yet (i.e. it might still be sitting in a userspace buffer). This is probably not what you meant, but I include it for completeness.<br> 1. The data we care about is in the TCP send queue and is enqueued for sending, but due to Nagle&#x27;s algorithm and/or network congestion, we can&#x27;t send it yet. Under zero-copy networking, this state is (or should be, anyway) functionally equivalent to state 0, and the application cannot safely free the buffer until it gets a completion notification. Time since previous state: Probably no more than a few hundred microseconds, possibly a lot less in a well-optimized implementation, especially if the data is small or we are async/zero-copy. Might be slower if the data is huge and we have to copy it.<br> 2. The data we care about is &quot;next in line,&quot; i.e. it will be transmitted in the next TCP segment, and we are not waiting on any ACKs. Time since previous: Could take multiple seconds if the network is badly congested, or rarely more than that. We might never reach this state, if the network drops altogether or we receive a RST packet. In a well-performing network, tens or hundreds of milliseconds would be typical depending on ping. Or this could be instant, if the send queue was already empty.<br> 3. The data we care about has been packed into one or (rarely) more IP datagrams, and those datagrams have been sent. IP is an unreliable, connectionless protocol, so at the IP layer, sending is an event, not a process. This probably takes no more than a few milliseconds, but I&#x27;m not very familiar with this sort of low-level hardware timing, so I might be completely wrong there.<br> 4. 
The data we care about has been ACKed. At this point, we can be somewhat confident that a well-behaved receiving application on the peer will eventually get a copy of the data, assuming it does not crash or abort before then. Time since previous: At least one ping round-trip, possibly forever if the network drops before we receive an ACK.<br> 5. There has been some sort of application-level acknowledgement of the data we care about, such as an HTTP response code. This may or may not happen at all depending on the protocol and the server/client roles, and the kernel is obviously not in a position to figure that out anyway, so this is a non-starter.<br> <p> You probably don&#x27;t mean 0 or 1, because 1 is (I think) when regular old send(2) returns (and 0 is pretty much the exact opposite of &quot;the data has been sent&quot;). But even getting to (2) is potentially multiple seconds and might never happen at all (in which case, I assume you just fail with EIO or something?). If you want that to perform well, you had better not be doing one-thread-per-operation, or you will either spawn way too many threads, or take far too long to accomplish anything. Both of those are bad, so now we&#x27;re in the realm of async networking and io_uring, or at least the realm of thread pools and epoll, so no matter what, you&#x27;re going to be doing some sort of async, event-driven programming. There&#x27;s no way for the kernel to provide the illusion of synchronous networking short of actually doing things synchronously, and that just doesn&#x27;t scale in any kind of reasonable way. I dunno, maybe your language/OS can fake it using async/await and such? But that&#x27;s still event-driven programming, it&#x27;s just dressed up to look like it&#x27;s synchronous (and the kernel has no part in this chicanery anyway).<br> <p> Even getting to (4) is not really enough to have any assurances, because you have to treat every element of the system as unreliable, including the remote host. The peer could still crash after (4) and never see your data (because it was sitting in the receive queue). Until you see (5), you simply cannot know whether the peer received or will receive the data. And the kernel can&#x27;t figure out whether (5) has happened for you, because (5) is application-specific.<br> </div> Sun, 02 Jan 2022 10:32:02 +0000 Zero-copy network transmission with io_uring https://lwn.net/Articles/880159/ https://lwn.net/Articles/880159/ smurf <div class="FormattedComment"> That&#x27;s easy: as soon as the kernel is sure it doesn&#x27;t need the buffer any more, i.e. when the TCP ACK has arrived for its last byte, or when the network interface tells the kernel that it has transmitted the data (UDP).<br> <p> There is no kernel buffer. The whole point of this is to enable Zero Copy.<br> </div> Sun, 02 Jan 2022 09:56:45 +0000 Zero-copy network transmission with io_uring https://lwn.net/Articles/880157/ https://lwn.net/Articles/880157/ Sesse <div class="FormattedComment"> The mmap packet socket has something like this for raw packets.<br> </div> Sun, 02 Jan 2022 09:32:27 +0000 Zero-copy network transmission with io_uring https://lwn.net/Articles/880155/ https://lwn.net/Articles/880155/ jezuch <div class="FormattedComment"> Yes, but we&#x27;re talking about io_uring and managed buffers. 
Ideally the user space would not touch the buffers at all; there would just be an op that says &quot;gimme a (handle to a) buffer, any buffer, I can write to&quot;, give it to the device that fills it, then pass it along and never think of it again. The ring would handle the completion signals and return the buffer to the pool when it&#x27;s really, truly done with it. This way the buffer would be &quot;attached&quot; not to the read or write, but to the entire sequence of operations. I think the media subsystem has something similar.<br> <p> My guess is I don&#x27;t really understand how managed buffers work :)<br> <p> (The difficulty then is in making sure that the userspace does not reuse the buffer while it&#x27;s &quot;busy&quot;. Rust&#x27;s ownership model looks like it was designed just with this in mind ;) )<br> </div> Sun, 02 Jan 2022 07:59:06 +0000 Zero-copy network transmission with io_uring https://lwn.net/Articles/880146/ https://lwn.net/Articles/880146/ james <blockquote>Or as soon as the remote side acknowledges receipt?</blockquote> And "acknowledgement" can happen at different levels. For example, a mail server might indicate that TCP packets containing an email have been received, but you shouldn't consider that email "sent" until the receiving side sends a "250 OK" message (which might be after SpamAssassin has run, so potentially many milliseconds later). Sat, 01 Jan 2022 23:38:07 +0000 Zero-copy network transmission with io_uring https://lwn.net/Articles/880141/ https://lwn.net/Articles/880141/ jafd <div class="FormattedComment"> There is a small problem with this &quot;synchronous send&quot; idea. How do you define &quot;sent&quot;? Does it get sent as soon as it reaches the kernel&#x27;s buffer? Or does it get sent as soon as it reaches the NIC? Or does it get sent as soon as the NIC has put it onto the wire? Note that the NIC has its own buffers, too. Or maybe when it reaches the nearest router? Or as soon as the remote side acknowledges receipt? But what about raw IP, UDP, and who knows what other stateless protocols?<br> <p> This is a small problem, but it sure opens a huge can of worms.<br> <p> Years ago I wanted this too. But then I read more about this problem (in the context of a rather good article about buffer bloat and how it actually harms both disk and network performance; who knows, it may have been here at LWN, years ago), and I don&#x27;t want it anymore. We can only sensibly agree that the packet is sent as soon as the postman (the kernel) has found no immediate problems with it and taken it for processing.<br> </div> Sat, 01 Jan 2022 20:27:51 +0000 Zero-copy network transmission with io_uring https://lwn.net/Articles/880135/ https://lwn.net/Articles/880135/ shemminger <div class="FormattedComment"> Many years ago there were experiments with COW and networking; see the Intel paper at the Ottawa Linux Symposium.<br> The experiments concluded that COW was slower because the cost of acquiring locks to invalidate the TLB entries on other CPUs exceeded the cost of the copy. The parameters might be different now with larger sends (64K or more) and huge pages. Definitely worth investigating, but the VM overhead is significant.<br> </div> Sat, 01 Jan 2022 16:55:03 +0000 Zero-copy network transmission with io_uring https://lwn.net/Articles/880134/ https://lwn.net/Articles/880134/ smurf <div class="FormattedComment"> That depends on how much data you have to copy and how you set up your threads.<br> <p> Assume, for instance, that you are a server. You have a thread per client. 
You get a request, assemble the response, send it, then you free the data structure. An end-to-end-blocking send would obviate the need for copying the data to the kernel while not affecting anything else.<br> <p> I do agree that doing the same thing is way more useful in an io_uring setup.<br> </div> Sat, 01 Jan 2022 12:30:13 +0000 Zero-copy network transmission with io_uring https://lwn.net/Articles/880133/ https://lwn.net/Articles/880133/ Sesse <div class="FormattedComment"> But how is the behavior more predictable? What are you trying to achieve by waiting for the call to return? (Even more so if you put it in a thread; the context switch cost is going to be way higher than the data copy cost.)<br> </div> Sat, 01 Jan 2022 11:14:56 +0000 Zero-copy network transmission with io_uring https://lwn.net/Articles/880132/ https://lwn.net/Articles/880132/ smurf <div class="FormattedComment"> So? Networks are unpredictable, period. Put the call in a thread and deal with it.<br> <p> Nothing new here. File systems (particularly networked ones, but also physical disks when they are under heavy load and/or start going bad) may or may not behave like this also, depending on whether you mount them synchronously and/or open a file with O_SYNC and/or O_DIRECT turned on.<br> </div> Sat, 01 Jan 2022 11:04:54 +0000 Zero-copy network transmission with io_uring https://lwn.net/Articles/880131/ https://lwn.net/Articles/880131/ Sesse <div class="FormattedComment"> How would that behavior be predictable? It would mean that your call could return in 10 ms or in 30 seconds depending on network conditions.<br> </div> Sat, 01 Jan 2022 09:58:57 +0000 Zero-copy network transmission with io_uring https://lwn.net/Articles/880130/ https://lwn.net/Articles/880130/ epa <div class="FormattedComment"> Can there be a synchronous mode for send() so it doesn&#x27;t return until the data has been sent? It would be a bit like turning off write-behind caching for disk I/O. And similarly, it would hit performance in some single-threaded programs, but lead to more predictable behaviour, and in programs with a separate network thread it might perform fine.<br> </div> Sat, 01 Jan 2022 08:17:18 +0000 Zero-copy network transmission with io_uring https://lwn.net/Articles/880125/ https://lwn.net/Articles/880125/ roc <div class="FormattedComment"> That&#x27;s really expensive because when you make the page COW, you have to tell all CPUs that might have a TLB entry for that page to invalidate that TLB entry so that the next time they try to write to it, they get a fault. For a process with many threads using many CPUs, that is slow and scales poorly.<br> </div> Fri, 31 Dec 2021 22:07:23 +0000 Zero-copy network transmission with io_uring https://lwn.net/Articles/880124/ https://lwn.net/Articles/880124/ NYKevin <div class="FormattedComment"> In order for GC to work, the application has to somehow know whether the kernel is still using the buffer (since the GC cannot descend into kernel memory and trace pointers directly). Therefore, you need some sort of completion mechanism like the one described in this patch in order to notify the GC of that fact. Once that mechanism exists, sure, you can hook it up to a GC or anything else you want, because userspace can manage these buffers in whatever manner it likes.<br> <p> Alternatives:<br> <p> * The kernel calls free or something similar to free for you. 
But malloc (or tcmalloc, or whatever you&#x27;re using to allocate memory) is a black box from the kernel&#x27;s perspective, because it lives in userspace, and the only way the kernel can plausibly invoke it is to either inject a thread (shudder) or preempt an existing thread and borrow its stack. You end up with all of the infelicities of signal handlers, which notoriously cannot allocate or free memory because malloc and free are usually not reentrant. That means your &quot;something similar to free&quot; just ends up being a more elaborate and complicated version of exactly the same completion mechanism, and userspace still has to do the actual memory management itself.<br> * The buffer is mmap&#x27;d, and the kernel unmaps it for you when it&#x27;s done. There are performance issues here; mmap simply cannot compete with a highly optimized userspace malloc, assuming the workload is reasonably compatible with the latter. However, this does at least have the advantage of being *possible* without ridiculous &quot;let&#x27;s inject a thread&quot; tricks. But since the whole point of zero-copy is to improve performance, that&#x27;s probably not enough.<br> </div> Fri, 31 Dec 2021 21:47:06 +0000 Zero-copy network transmission with io_uring https://lwn.net/Articles/880116/ https://lwn.net/Articles/880116/ MattBBaker <div class="FormattedComment"> It&#x27;s a matter of application latency and reasoning about how the application behaves. If the sent data is small enough you can get better application performance by copying the data into the kernel and giving the application the completed signal, then let the buffer work through the network stack at the kernel&#x27;s pace. <br> There is also a problem in that the logic gets a lot harder if the application has to manage separate &quot;message send begin&quot; and &quot;message send complete&quot; signals. People have a hard enough time writing network programs that don&#x27;t deadlock and zero copy makes it more difficult. So it makes a lot of sense to write an API where the simple and easy case is default, and if they need the power tools they have to ask.<br> </div> Fri, 31 Dec 2021 20:01:19 +0000 Improvement percentage: ?? https://lwn.net/Articles/880109/ https://lwn.net/Articles/880109/ corbet I didn't write "better time", I wrote "perform more than 200% better". The amount of data transmitted per unit time can indeed improve by 200%. Fri, 31 Dec 2021 18:03:57 +0000 Improvement percentage: ?? https://lwn.net/Articles/880072/ https://lwn.net/Articles/880072/ smurf <div class="FormattedComment"> There is no such thing as &quot;200% better&quot; time. If an operation takes a second, &quot;100% better&quot; means that it now completes instantly, thus with 200% the operation completes a second before it&#x27;s started. Congratulations, you have invented a time machine.<br> <p> While you can be 200% worse, i.e. take three times as long, the inverse of that would be 67% better.<br> </div> Fri, 31 Dec 2021 17:17:50 +0000 Zero-copy network transmission with io_uring https://lwn.net/Articles/880107/ https://lwn.net/Articles/880107/ Sesse <div class="FormattedComment"> FreeBSD (possibly some of the other BSDs?) have a zerocopy mechanism that flags the page as COW in the MMU (relieving userspace of the need to know, so you can essentially always use zerocopy), but of course then you pay a price if you don&#x27;t leave it alone for long enough afterwards.<br> </div> Fri, 31 Dec 2021 17:07:16 +0000
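Several of the comments above circle around one pattern: issue a MSG_ZEROCOPY send, then wait for the kernel's completion notification before reusing the buffer (farnz's "blocking until the kernel is done" send, NYKevin's pointer to the msg_zerocopy documentation, epa's synchronous-send question). What follows is a minimal sketch of such a wrapper in C, based on Documentation/networking/msg_zerocopy.rst, assuming a connected TCP socket on which SO_ZEROCOPY has already been enabled; the helper names and the caller-maintained sequence counter are illustrative, and partial sends, counter wraparound, and most error handling are glossed over.
<pre>
/* Sketch only: "blocking" zero-copy send that returns once the kernel has
 * signalled, via the socket error queue, that it no longer needs the buffer.
 * Requires Linux >= 4.14 headers and a socket with SO_ZEROCOPY enabled. */
#include <errno.h>
#include <poll.h>
#include <stdint.h>
#include <string.h>
#include <sys/socket.h>
#include <linux/errqueue.h>   /* struct sock_extended_err, SO_EE_ORIGIN_ZEROCOPY */

/* Wait until the zero-copy completion covering sequence number 'seq' arrives. */
static int wait_zc_completion(int fd, uint32_t seq)
{
    for (;;) {
        struct pollfd pfd = { .fd = fd, .events = 0 };
        if (poll(&pfd, 1, -1) < 0)
            return -1;
        if (!(pfd.revents & POLLERR))
            continue;                         /* not an error-queue wakeup */

        char control[128];
        struct msghdr msg = { .msg_control = control,
                              .msg_controllen = sizeof(control) };
        if (recvmsg(fd, &msg, MSG_ERRQUEUE) < 0) {
            if (errno == EAGAIN)
                continue;
            return -1;
        }
        for (struct cmsghdr *cm = CMSG_FIRSTHDR(&msg); cm;
             cm = CMSG_NXTHDR(&msg, cm)) {
            struct sock_extended_err serr;
            memcpy(&serr, CMSG_DATA(cm), sizeof(serr));
            if (serr.ee_origin != SO_EE_ORIGIN_ZEROCOPY)
                continue;                     /* some other error-queue message */
            /* ee_info..ee_data is the inclusive range of completed sends. */
            if (serr.ee_info <= seq && seq <= serr.ee_data)
                return 0;
        }
    }
}

/* 'seq' must count MSG_ZEROCOPY sends on this socket, starting from 0. */
static ssize_t send_zc_blocking(int fd, const void *buf, size_t len, uint32_t seq)
{
    ssize_t n = send(fd, buf, len, MSG_ZEROCOPY);
    if (n < 0)
        return -1;
    if (wait_zc_completion(fd, seq) < 0)
        return -1;
    return n;                                 /* buffer is now safe to reuse */
}
</pre>
The io_uring zero-copy send discussed in the article follows the same shape, except that the "buffer is free" notification is delivered as an additional completion on the ring rather than as an error-queue message, so the wait can be folded into the normal CQE-handling loop instead of a poll()/recvmsg() pair.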