LWN: Comments on "Toward non-blocking asynchronous I/O" https://lwn.net/Articles/724198/ This is a special feed containing comments posted to the individual LWN article titled "Toward non-blocking asynchronous I/O". en-us Tue, 04 Nov 2025 11:30:41 +0000 Tue, 04 Nov 2025 11:30:41 +0000 https://www.rssboard.org/rss-specification lwn@lwn.net Toward non-blocking asynchronous I/O https://lwn.net/Articles/724444/ https://lwn.net/Articles/724444/ oever Alas, this plan cannot be executed while there is no <tt>LIO_GETDENTS</tt> in <tt>aiocb</tt>. Fri, 02 Jun 2017 12:53:24 +0000 Toward non-blocking asynchronous I/O https://lwn.net/Articles/724425/ https://lwn.net/Articles/724425/ peter-b <div class="FormattedComment"> I think the problem is that older kernels ignore those bits entirely, rather than forcing them to be zeroed. So there's no reliable way to find out whether the flag is having any effect.<br> </div> Fri, 02 Jun 2017 07:06:51 +0000 Toward non-blocking asynchronous I/O https://lwn.net/Articles/724422/ https://lwn.net/Articles/724422/ ssmith32 <div class="FormattedComment"> I'm assuming older kernels throw an error if they check the new flag and it's set? Otherwise I can't understand why that matters...<br> </div> Fri, 02 Jun 2017 06:19:47 +0000 Toward non-blocking asynchronous I/O https://lwn.net/Articles/724367/ https://lwn.net/Articles/724367/ oever I appreciate the work on asynchronous I/O very much. There is a lot of potential to improve software with AIO. A good example I came across recently is the venerable <tt>find</tt>. <p> <tt>find</tt> is a single-threaded application that uses blocking I/O. For each directory that it reads, it does one or more <tt>getdents</tt> calls. These calls are done sequentially. <tt>find</tt> could be sped up by doing many calls at the same time. <p> Consider the extreme case where each subsequent directory is on the opposite side of the disk. The disk head would travel to the other side of the disk for each directory. 
The kernel I/O scheduler cannot help because it only knows about the next location. <p> If 100 parallel requests were made instead of one, the I/O scheduler would handle them efficiently: it would read the nearest entries first. It is possible to send parallel <tt>getdents</tt> requests with threads, but this requires one thread per parallel request: quite an overhead. <a href="http://docs.libuv.org/en/v1.x/threadpool.html">libuv</a> does this with a thread pool. This approach can roughly double the speed of <tt>find</tt> for a cold cache. <p> If this were implemented with <tt>libaio</tt>, the thread overhead could be eliminated. Thu, 01 Jun 2017 16:22:06 +0000 Would still be beneficial to have async fsync even with O_DIRECT https://lwn.net/Articles/724308/ https://lwn.net/Articles/724308/ sitsofe O_DIRECT implies that the I/O won't be left rolling around in the OS's cache but says nothing about whether it is still in the disk device's volatile cache. You <em>could</em> send all I/Os down with O_SYNC too but speeds will plummet. Thus it's still desirable to be able to send down an fsync (and it would have been preferable if submitting it didn't have to block)... Wed, 31 May 2017 20:20:25 +0000 Toward non-blocking asynchronous I/O https://lwn.net/Articles/724218/ https://lwn.net/Articles/724218/ ringerc <div class="FormattedComment"> Last time I checked it seemed like we couldn't even trust the AIO layer to provide a reliable fsync() for buffered AIO, so it seems like it's pretty useless for anything except O_DIRECT.<br> </div> Wed, 31 May 2017 06:07:58 +0000