I've always been a fierce opponent of queue plugging. I'm not saying there's no case where it's good, but everywhere I see it, it's based on the misconception that capacity matters when you're not using it all. I'm talking about the principle that a 10,000 liter tank is no better than a 5,000 liter tank for an application that never stores more than 2,000 liters.
Sending small scattered I/Os to a disk drive is not a problem as long as the drive is keeping up with it, and if the drive isn't, then your queue is building up anyway, without a plug.
I've seen plugging used to overcome a defect in the thing serving the queue, wherein it improperly speed-matches. I think this is what's going on with the network "bufferbloat" issue. I saw it more simply in a disk storage server that thought it was doing its client a favor by accepting I/Os as fast as the client could send them and sticking them in a buffer, then passing them one by one, FIFO, to the disk arms. The server was essentially lying and saying it had capacity when it was really overloaded.
This was fixed with queue plugging in the client, but later fixed better just by making the client send ahead enough work to overwhelm the server's buffer and make it admit that it couldn't keep up.
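The effect described above can be sketched in a toy model (this is a hypothetical illustration, not the actual storage server or client in question): a server with an unbounded input queue accepts every request immediately and hides its overload, while a server with a small bounded queue fills up and forces the client to block, i.e. it "admits" it can't keep up.

```python
import queue
import threading
import time

def client(q, n_items, results):
    """Send n_items requests, counting how many were accepted instantly."""
    accepted_instantly = 0
    for i in range(n_items):
        try:
            q.put_nowait(i)          # succeeds only if the queue has room
            accepted_instantly += 1
        except queue.Full:
            q.put(i)                 # blocks: backpressure reaches the client
    results['accepted_instantly'] = accepted_instantly

def slow_server(q, n_items):
    """Consume requests slower than the client produces them."""
    for _ in range(n_items):
        q.get()
        time.sleep(0.001)            # simulated per-request service time
        q.task_done()

def run(maxsize, n_items=50):
    q = queue.Queue(maxsize=maxsize) # maxsize=0 means unbounded
    results = {}
    server = threading.Thread(target=slow_server, args=(q, n_items))
    server.start()
    client(q, n_items, results)
    server.join()
    return results['accepted_instantly']

print(run(maxsize=0))   # unbounded: all requests accepted instantly, overload hidden
print(run(maxsize=4))   # bounded: far fewer, the client sees the server fall behind
```

With `maxsize=0` the server "lies" in the sense above: every request is swallowed immediately regardless of how far behind it is. With a small bound, the queue fills and the client stalls, which is exactly the honest signal that made the later fix work.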
dlang, in your defense of an application of queue plugging:

"the trade-off is that the output is doing far more work than it would need to do if the work was batched more"

you omit an important factor: what is wrong with the output doing more work than it otherwise would? In many cases, that doesn't make any difference.
Copyright © 2018, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds