Viewed as a feedback system, the only negative feedback currently limiting writing at all is the amount of memory (or a fraction of it, thanks to tunables, but in the end it is memory that bounds it). In practice this means you can have a huge number of dirty pages outstanding, all waiting on one slow device.
Now combine the fact that you can't get rid of dirty memory quickly with the fact that those pages probably won't be used again (rewrites don't happen that often, I think), and you've just managed to throw away all your free memory, slowing everything down. So essentially you're using most of your memory to buffer writes, even though write speed doesn't increase significantly beyond a certain buffer size.
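To make that concrete, here's a toy model of my own (not kernel code, and the numbers are made up): a writer dirties pages far faster than a slow device can clean them, and the only brake is a global cap on dirty pages, standing in for the memory-based limit (what vm.dirty_ratio controls in Linux).

```python
DIRTY_LIMIT = 1000      # pages; stand-in for the memory-based cap
DIRTY_RATE = 100        # pages the writer can dirty per tick
DEVICE_RATE = 5         # pages the slow device writes back per tick

def simulate(ticks):
    """Return the number of dirty pages outstanding after `ticks` ticks."""
    dirty = 0
    for _ in range(ticks):
        # The writer dirties pages until it hits the global limit...
        dirty = min(dirty + DIRTY_RATE, DIRTY_LIMIT)
        # ...while the slow device drains only a trickle.
        dirty = max(dirty - DEVICE_RATE, 0)
    return dirty

# The backlog quickly saturates and stays pinned near the cap:
print(simulate(100))  # → 995
```

The point is that the backlog settles just under the limit and stays there: nearly all of the "buffer" memory ends up tied to the one slow device, regardless of how large the cap is.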
What is throttled here is the rate at which pages are dirtied, not the rate of writeout, in order to keep the number of dirty pages bounded. (Although the dirtying is throttled, the long-run average must in both cases equal the write speed of the device, so perhaps "throttling" isn't quite the right word.)
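A small sketch of why the average comes out the same either way (again my own toy model, not kernel code): once the number of dirty pages is bounded, the writer can only dirty, on average, as many pages as the device drains, no matter how fast it would like to go.

```python
DIRTY_LIMIT = 100   # max dirty pages allowed (the throttle threshold)
DEVICE_RATE = 3     # pages written back per tick by the device
WRITER_RATE = 50    # pages the writer *wants* to dirty per tick

def average_dirty_rate(ticks):
    """Average pages actually dirtied per tick over `ticks` ticks."""
    dirty = 0
    dirtied_total = 0
    for _ in range(ticks):
        # Throttle: the writer may only dirty up to the remaining headroom.
        can_dirty = min(WRITER_RATE, DIRTY_LIMIT - dirty)
        dirty += can_dirty
        dirtied_total += can_dirty
        # Writeback drains at device speed.
        dirty = max(dirty - DEVICE_RATE, 0)
    return dirtied_total / ticks

# Over a long run the writer averages the device's speed (about 3 pages
# per tick here), not its own desired rate of 50.
print(average_dirty_rate(10000))
```

So the throttle shapes *when* dirtying happens, but the average rate is dictated by the device either way.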
The deadlock mentioned is, AFAIK, the situation where the system is under heavy memory pressure and wants to free some memory. It tries to do so by writing out dirty pages, but doing that may itself require allocating memory, which can't be done because memory is exactly what's lacking: hence a deadlock. I don't know whether that's the LVM example mentioned, or a different case.
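The circular dependency can be sketched like this (a purely hypothetical illustration; the function names are mine, not kernel APIs, and a recursion guard makes the cycle visible instead of hanging):

```python
free_pages = 0          # the system is completely out of free memory

def allocate(depth=0):
    """Try to get a page; under pressure, fall back to reclaim."""
    global free_pages
    if free_pages > 0:
        free_pages -= 1
        return "allocated"
    if depth > 3:
        return "deadlock"   # in a real kernel this would just hang
    # No free memory: try to reclaim some by writing out dirty pages.
    return reclaim(depth + 1)

def reclaim(depth):
    # Writing out a dirty page needs a buffer, i.e. another allocation...
    return allocate(depth)

print(allocate())  # → deadlock
```

Each step is individually reasonable; it's the allocate → reclaim → allocate cycle with no free memory anywhere that makes it unresolvable.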
Copyright © 2017, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds