The multiqueue block layer
The basic structure of the block layer has not changed a whole lot since it was described for 2.6.10 in Linux Device Drivers. It offers two ways for a block driver to hook into the system, one of which is the "request" interface. When run in this mode, the block layer maintains a simple request queue; new I/O requests are submitted to the tail of the queue and the driver receives requests from the head. While requests sit in the queue, the block layer can operate on them in a number of ways: they can be reordered to minimize seek operations, adjacent requests can be coalesced into larger operations, and policies for fairness and bandwidth limits can be applied, for example.
This request queue turns out to be one of the biggest bottlenecks in the entire system. It is protected by a single lock which, on a large system, will bounce frequently between the processors. It is a linked list, a notably cache-unfriendly data structure especially when modifications must be made — as they frequently are in the block layer. As a result, anybody who is trying to develop a driver for high-performance storage devices wants to do away with this request queue and replace it with something better.
The second block driver mode — the "make request" interface — allows a driver to do exactly that. It hooks the driver into a much higher part of the stack, shorting out the request queue and handing I/O requests directly to the driver. This interface was not originally intended for high-performance drivers; instead, it is there for stacked drivers (the MD RAID implementation, for example) that need to process requests before passing them on to the real, underlying device. Using it in other situations incurs a substantial cost: all of the other queue processing done by the block layer is lost and must be reimplemented in the driver.
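By way of illustration, here is a minimal sketch of how a driver hooks into each of these two existing modes, using the pre-multiqueue APIs of the 3.x kernels; the sbd_* names are invented, and error handling and real data transfer are omitted:

```c
/* Sketch only: the two pre-multiqueue hook points, 3.x-era APIs. */
#include <linux/bio.h>
#include <linux/blkdev.h>
#include <linux/gfp.h>
#include <linux/spinlock.h>

static DEFINE_SPINLOCK(sbd_lock);	/* the single queue lock discussed above */

/* Request mode: the block layer queues, sorts, and merges; the driver drains. */
static void sbd_request_fn(struct request_queue *q)
{
	struct request *req;

	while ((req = blk_fetch_request(q)) != NULL) {
		/* ... transfer the data described by req ... */
		__blk_end_request_all(req, 0);	/* complete with success */
	}
}

/* "Make request" mode: each bio goes straight to the driver, no request queue. */
static void sbd_make_request(struct request_queue *q, struct bio *bio)
{
	/* ... process the bio; any sorting or merging is up to the driver ... */
	bio_endio(bio, 0);			/* 3.x-era two-argument form */
}

static struct request_queue *sbd_setup(bool use_request_mode)
{
	struct request_queue *q;

	if (use_request_mode)
		return blk_init_queue(sbd_request_fn, &sbd_lock);

	q = blk_alloc_queue(GFP_KERNEL);
	if (q)
		blk_queue_make_request(q, sbd_make_request);
	return q;
}
```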
The multiqueue block layer work tries to fix this problem by adding a third mode for drivers to use. In this mode, the request queue is split into a number of separate queues:
- Submission queues are set up on a per-CPU or per-node basis. Each CPU
submits I/O operations into its own queue, with no interaction with the
other CPUs. Contention for the submission queue lock is thus
eliminated (when per-CPU queues are used) or greatly reduced (for
per-node queues).
- One or more hardware dispatch queues simply buffer I/O requests for the driver.
While requests are in the submission queue, they can be operated on by the block layer in the usual manner. Reordering of requests for locality offers little or no benefit on solid-state devices; indeed, spreading requests out across the device might help with the parallel processing of requests. So reordering will not be done, but coalescing requests will reduce the total number of I/O operations, improving performance somewhat. Since the submission queues are per-CPU, there is no way to coalesce requests submitted to different queues. With no empirical evidence whatsoever, your editor would guess that adjacent requests are most likely to come from the same process and, thus, will automatically find their way into the same submission queue, so the lack of cross-CPU coalescing is probably not a big problem.
The block layer will move requests from the submission queues into the hardware queues up to the maximum number specified by the driver. Most current devices will have a single hardware queue, but high-end devices already support multiple queues to increase parallelism. On such a device, the entire submission and completion path should be able to run on the same CPU as the process generating the I/O, maximizing cache locality (and, thus, performance). If desired, fairness or bandwidth-cap policies can be applied as requests move to the hardware queues, but there will be an associated performance cost. Given the speed of high-end devices, it may not be worthwhile to try to ensure fairness between users; everybody should be able to get all the I/O bandwidth they can use.
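As a rough illustration of that split, here is a deliberately simplified, hypothetical pair of structures; the real implementation (the blk_mq_ctx and blk_mq_hw_ctx structures that eventually reached the mainline) carries far more state, so this only conveys the shape of the design:

```c
/*
 * Illustrative only: a simplified picture of the queue split described
 * above.  The names and fields here are invented for clarity.
 */
#include <linux/list.h>
#include <linux/spinlock.h>

struct sw_submission_queue {			/* one per CPU (or per NUMA node) */
	spinlock_t		lock;		/* taken only by the local CPU */
	struct list_head	rq_list;	/* staged requests; merging happens here */
};

struct hw_dispatch_queue {			/* one per driver-provided hardware queue */
	struct sw_submission_queue **sw_queues;	/* the submission queues feeding this one */
	unsigned int		nr_sw_queues;
	struct list_head	dispatch;	/* requests buffered for the driver */
};
```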
This structure makes the writing of a high-performance block driver (relatively) simple. The driver provides a queue_rq() function to handle incoming requests and calls back to the block layer when requests complete. Those wanting to look at an example of how such a driver would work can see null_blk.c in the new-queue branch of Jens's block repository:
git://git.kernel.dk/linux-block.git
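For a rough idea of what such a driver looks like, here is a heavily trimmed sketch; the sketch_* names are invented, and the registration details shown (blk_mq_tag_set, the blk_status_t return codes) come from the form the API eventually took in later mainline kernels rather than from the 2013 patch set itself:

```c
/* Sketch only: a trimmed-down multiqueue driver skeleton. */
#include <linux/blk-mq.h>
#include <linux/blkdev.h>
#include <linux/err.h>
#include <linux/module.h>
#include <linux/numa.h>

static struct blk_mq_tag_set sketch_tag_set;
static struct request_queue *sketch_queue;

/* Called by the block layer to hand one request to the driver. */
static blk_status_t sketch_queue_rq(struct blk_mq_hw_ctx *hctx,
				    const struct blk_mq_queue_data *bd)
{
	struct request *rq = bd->rq;

	blk_mq_start_request(rq);
	/*
	 * A real driver would issue the I/O to the hardware here and
	 * complete the request from its completion path; completing it
	 * immediately makes this behave like a null device.
	 */
	blk_mq_end_request(rq, BLK_STS_OK);
	return BLK_STS_OK;
}

static const struct blk_mq_ops sketch_mq_ops = {
	.queue_rq = sketch_queue_rq,
};

static int __init sketch_init(void)
{
	int ret;

	sketch_tag_set.ops = &sketch_mq_ops;
	sketch_tag_set.nr_hw_queues = 1;	/* hardware dispatch queues provided */
	sketch_tag_set.queue_depth = 64;
	sketch_tag_set.numa_node = NUMA_NO_NODE;
	sketch_tag_set.flags = BLK_MQ_F_SHOULD_MERGE;

	ret = blk_mq_alloc_tag_set(&sketch_tag_set);
	if (ret)
		return ret;

	sketch_queue = blk_mq_init_queue(&sketch_tag_set);
	if (IS_ERR(sketch_queue)) {
		blk_mq_free_tag_set(&sketch_tag_set);
		return PTR_ERR(sketch_queue);
	}
	/* ... attach the queue to a gendisk and call add_disk() ... */
	return 0;
}

static void __exit sketch_exit(void)
{
	blk_cleanup_queue(sketch_queue);
	blk_mq_free_tag_set(&sketch_tag_set);
}

module_init(sketch_init);
module_exit(sketch_exit);
MODULE_LICENSE("GPL");
```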
In the current patch set, the multiqueue mode is offered in addition to the existing two modes, so current drivers will continue to work without change. According to this paper on the multiqueue block layer design [PDF], the hope is that drivers will migrate over to the multiqueue API, allowing the eventual removal of the request-based mode.
This patch set has been significantly reworked in the last month or so; it
has gone from a relatively messy series into something rather
cleaner.
Merging into the mainline would thus appear to be on the agenda for the
near future. Since use of this API is optional, existing drivers should
continue to work and this merge could conceivably happen as early as 3.11.
But, given that the patch set has not yet been publicly posted to any
mailing list and does not appear in linux-next, 3.12 seems like a more
likely target. Either way, Linux seems likely to have a much better block
layer by the end of the year or so.
Index entries for this article:
Kernel: Block layer
Hypothetical programs might desire cross-CPU coalescing of IO
Posted Jun 6, 2013 5:38 UTC (Thu) by faramir (subscriber, #2327)
If we consider multiple processes with synchronization (perhaps via shared memory), or multi-threaded programs, I'm not so sure. Imagine some kind of processing of file data which can be done a block at a time (say, a block-based cipher or non-adaptive compression). A multi-threaded or multi-process version of such a program may in fact be running code on multiple CPUs but reading from and writing to the same files (and therefore generating coalescible I/O requests). Reads from the input file could come from any of the threads or processes engaged in the task.
In the case of compression, due to variable-length output chunks, the writer side will have to be coalesced into a single stream in the program itself in order to put the output in the correct order. That might be done, though, by having a management thread simply inform each compression thread when to write, so the actual write calls might still come from different CPUs.
A block-based cipher program could probably use lseek() on multiple fds opened to the same output file to maintain correct ordering from each thread.
In either case, it would appear that coalescing across CPUs would be useful, at least if the actual processing required is negligible relative to I/O time. It may be that CPUs aren't fast enough to do this for anything beyond ROT13 encryption or simple RLE compression, so it might not matter for now. But it would seem to be at least a theoretical issue.
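As a rough user-space sketch of the pattern described above (entirely hypothetical code; it uses pwrite() rather than per-thread lseek(), since pwrite() carries the offset with each call and so sidesteps any file-position coordination):

```c
/*
 * Sketch only: worker threads each cipher one block and write it at the
 * block's own offset in a shared output file.
 */
#include <fcntl.h>
#include <pthread.h>
#include <string.h>
#include <unistd.h>

#define BLOCK_SIZE 4096

struct job {
	int fd;
	off_t offset;
	unsigned char block[BLOCK_SIZE];
};

static void cipher_block(unsigned char *b, size_t n)
{
	for (size_t i = 0; i < n; i++)	/* stand-in for a real block cipher */
		b[i] ^= 0x5a;
}

static void *worker(void *arg)
{
	struct job *j = arg;
	ssize_t n;

	cipher_block(j->block, BLOCK_SIZE);
	/* Adjacent blocks may be written from different CPUs here. */
	n = pwrite(j->fd, j->block, BLOCK_SIZE, j->offset);
	(void)n;	/* a real program would handle short writes and errors */
	return NULL;
}

int main(void)
{
	int fd = open("out.bin", O_CREAT | O_WRONLY, 0644);
	pthread_t tid[4];
	struct job jobs[4];

	if (fd < 0)
		return 1;
	for (int i = 0; i < 4; i++) {
		jobs[i].fd = fd;
		jobs[i].offset = (off_t)i * BLOCK_SIZE;
		memset(jobs[i].block, i, BLOCK_SIZE);	/* dummy plaintext */
		pthread_create(&tid[i], NULL, worker, &jobs[i]);
	}
	for (int i = 0; i < 4; i++)
		pthread_join(tid[i], NULL);
	close(fd);
	return 0;
}
```

The writes from the different workers target adjacent offsets but may be submitted from different CPUs, which is exactly the cross-CPU coalescing opportunity being discussed.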
Posted Jun 6, 2013 7:08 UTC (Thu) by dlang (guest, #313)
And of those who are doing so, how much do they care whether their files get processed one at a time, using every CPU for that one file, or many files at a time, with each CPU processing a different file (and therefore not needing the combined I/O logic)?
Yes, there are going to be some, but is it really worth crippling the more common cases to help this rare case?
Posted Jun 6, 2013 7:28 UTC (Thu) by dlang (guest, #313)
In https://lwn.net/Articles/553086/ I talk about how I/O coalescing should only be done when the I/O is busy (among other things).
Posted Jun 6, 2013 12:06 UTC (Thu) by ejr (subscriber, #51652)
But... Parallel HDF5, etc. interfaces handle some coalescing in the "middleware" layer. They've found that relying on the OS leads to, um, sufficient performance variation across different systems and configurations. Yet another standard parallel I/O layer in user-space could help more than trying to jam the solution into the OS.
But relying on user-space won't help the case multiqueue is attacking.
Posted Jun 7, 2013 2:27 UTC (Fri) by dlang (guest, #313)
It may happen, but it's not going to be the common case.
Posted Jun 7, 2013 10:06 UTC (Fri) by ejr (subscriber, #51652)
But this case is often better handled outside the OS. There could be a different interface where clients post their read buffers and the OS fills them in some hardware-optimal order, but that's been considered for >20 years and has no solution yet.
Posted Jun 7, 2013 18:18 UTC (Fri) by dlang (guest, #313)
Also, in HPC aren't you dealing with many different systems accessing your data over the network rather than multiple processes on one machine?
What we are talking about here is the chance that things running on different CPUs in a single system are generating disk writes that the OS could combine into a single write before sending it to the drives.
For reads this isn't as big a deal; readahead should go a long way towards making the issue moot.
Posted Jun 6, 2013 14:22 UTC (Thu) by axboe (subscriber, #904)
So in general it's not a huge problem. For "legacy" devices that benefit a lot from merging, we can help them out a little bit. They will typically be single queue anyway, and merging at dispatch time on that queue would be trivial to do.
Posted Feb 21, 2018 0:10 UTC (Wed) by dusol (guest, #122669)
I'm stuck on understanding the kernel multi-queue block layer's I/O scheduling. If my task wants to submit bios, it uses the function generic_make_request(bio). I understand that generic_make_request(bio) submits bios to the submitting CPU's own software staging queue (one software staging queue per core). This function gets the block device driver's queue (bdev->bd_disk->queue) through bdev_get_queue(bio->bi_bdev) and then adds bios through a recursive call to generic_make_request(). This article says 'the request queue is split into a number of separate queues'. Are 'request queue' and bdev->bd_disk->queue the same thing? I am using kernel version linux-4.8.17.
Posted Feb 21, 2018 1:22 UTC (Wed) by neilbrown (subscriber, #359)
You are always free to ask :-)
I suggest that you read https://lwn.net/Articles/736534/ and https://lwn.net/Articles/738449/ as they go into quite a bit more detail. If something is not clear after reading those, do ask again, either here or in the comments of the relevant other article.
Posted Jan 22, 2020 14:58 UTC (Wed) by Bobby999 (guest, #136127)
I have a question regarding multi-queue (MQ) in the SCSI layer. I have read articles and blogs on multi-queue in the Linux block layer, including your brilliant article. According to my understanding, since Linux kernel 3.13 (2014) the Linux block layer has had multi-queue support, a.k.a. blk-mq, and after that the SCSI I/O submission path had to be updated as well; as a result, SCSI multi-queue (scsi-mq) has been functional since Linux kernel 3.17.
My question is: how is multi-queuing actually achieved in SCSI? AFAIK, the SCSI mid-level layer traditionally submits commands through queuecommand(). Now that multi-queuing is implemented in SCSI, does it actually mean having multiple queuecommand() paths? I am struggling to understand the multi-queue mechanism in the context of SCSI. I mean, where can one see multi-queue in the SCSI code? Please help me understand. Thanks :-)
Posted Jan 22, 2020 14:46 UTC (Wed) by Bobby999 (guest, #136127)
Hi all, great article!
I have a question regarding multi-queue (MQ) in the SCSI layer. I have read articles and blogs on multi-queue in the Linux block layer. Since Linux kernel 3.13 (2014) the Linux block layer has had multi-queue support, a.k.a. blk-mq, and after that the SCSI I/O submission path had to be updated as well; as a result, SCSI multi-queue (scsi-mq) has been functional since Linux kernel 3.17.
My question is: how is multi-queuing actually achieved in SCSI? AFAIK, the SCSI mid-level layer traditionally submits commands through queuecommand(). Now that multi-queuing is implemented in SCSI, does it actually mean having multiple queuecommand() paths? I am struggling to understand the multi-queue mechanism in the context of SCSI. I mean, where can one see multi-queue in the SCSI code?
Please help me understand. Thanks :-)