Thanks, Jon, for once again expressing this all more succinctly than I ever could. A subtle point:
"The protocol code... is expected to check each packet to see whether it comes from a device which is currently using reserve memory. If so, and the packet does not belong to a suitably-marked socket, that packet is to be dropped immediately."
...and the tasks driving the competing traffic are likely to be blocked waiting for memory to be freed, or soon will be. So the competing traffic problem is self-correcting.
Of course we need to worry not just about whether the system avoids deadlock under load, but also about whether it keeps running smoothly. The vm system's highwater/lowwater scheme lets it get out of the way for a while after relieving memory pressure, a natural way of sharing network bandwidth with normal tasks.
However, we are not quite out of the woods yet. As soon as atomic allocations start to fail, the current patch begins dropping non-blockio packets, which could cause user-visible protocol stalls. But the machine does not exist solely to write out dirty memory; we want everything to keep running smoothly, not just block IO. After all, under load these low-memory conditions are the rule, not the exception. Fortunately, this behavior is easily tunable: the threshold at which non-blockio packets begin to be dropped can be adjusted, effectively giving non-vm traffic access to part of the reserve. When tuned well, vm-related throttling should cause non-blockio network traffic to taper off just as the vm writeout traffic begins to rise, and no packets will ever have to be dropped.
Copyright © 2017, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds