The idea behind plugging is to allow a buildup of requests to better utilize the hardware and to allow merging of sequential requests into a single larger request. The latter is an especially big win on most hardware; writing or reading bigger chunks of data at a time usually yields good improvements in bandwidth.
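To make the merging half of that claim concrete, here is a minimal sketch (not kernel code; the function name and request representation are hypothetical, invented for this illustration) of how a plug-style queue can coalesce adjacent requests before they are issued:

```python
# Illustrative only: a request is a hypothetical (start_sector, sector_count)
# pair; real block-layer requests carry much more state.

def coalesce(requests):
    """Merge requests whose sector ranges become adjacent after sorting."""
    merged = []
    for start, count in sorted(requests):
        if merged and merged[-1][0] + merged[-1][1] == start:
            # Sequential with the previous request: grow it instead of
            # issuing a separate I/O.
            merged[-1][1] += count
        else:
            merged.append([start, count])
    return [tuple(r) for r in merged]

# Four small sequential writes collapse into one large request; the
# isolated request at sector 100 stays separate.
print(coalesce([(0, 8), (8, 8), (16, 8), (100, 8), (24, 8)]))
# → [(0, 32), (100, 8)]
```

The point of letting requests build up behind a plug is exactly to give a pass like this something to work with: requests that arrive one at a time can only merge if they wait for their neighbors.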
I have never understood this analysis.
First of all, plugging doesn't improve utilization -- it decreases it. It causes the device to be idle more than it otherwise would be for a given workload. Improved utilization would mean eliminating wasted hardware so you could get by with less of it. Plug-free I/O doesn't waste hardware; it uses only time that, in a plugged scenario, would go unused.
"Bandwidth" here means the amount you can shove down the pipe to the disk, and is meaningful only in a situation where you drive the disk as fast as it will go -- it is never idle. In that case, plugging plays no role.
I saw plugging help once, when the device was poorly designed so that it sucked up all the I/Os at interface speed into a large buffer, then proceeded to do the mechanical processing without reordering or coalescing. In that case, defeating that in-device queue via Linux plugging was a win. The device should have simply pushed back as soon as it had one turnaround time's worth of I/O in its queue, then the block layer would have coalesced and reordered without any need for plugging.
I've also imagined some carefully constructed burst patterns where response time (not bandwidth or utilization) improves with plugging. But in more common cases, plugging hurts response time for the obvious reason that it lets the device be idle for a brief period while the user is waiting for I/O.
Copyright © 2017, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds.