Smarter write throttling
Posted Aug 17, 2007 17:17 UTC (Fri) by giraffedata
Parent article: Smarter write throttling
"A lot of I/O through the LVM level could create large numbers of dirty pages destined for the physical device. Should things hit the dirty thresholds at the LVM level, however, the process could block before the physical drive starts writeback. In the worst case, the end result here is a hard deadlock of the system."
I don't see any deadlock in this description, but maybe I'm supposed to know something more about how LVM works. (It doesn't make sense to me, for example, that a physical drive starts writeback; I expect a physical drive to be the target of writeback, not its initiator.)
But I've seen the pageout-based memory deadlock in action plenty of times: a process that has to make progress in order for memory to get clean (because, say, it holds a lock on some dirty file) blocks on a memory allocation. At least at one time, these deadlocks were possible in Linux in various ways, with network-based block devices and filesystems being the worst offenders, and people addressed them with tuning policies that sought to make it unlikely that the amount of clean, allocatable memory ever shrank to zero.
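To make that cycle concrete, here is a toy reconstruction in plain C with POSIX threads. It is entirely hypothetical -- no real kernel interfaces appear, and "memory" is just a counter of clean pages -- but it hangs for exactly the reason described above: the only thread that can produce a clean page needs a lock held by the thread waiting for one.

    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    static pthread_mutex_t file_lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_mutex_t mem_lock  = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  mem_cond  = PTHREAD_COND_INITIALIZER;
    static int clean_pages;           /* the "allocatable memory" */

    /* The "memory allocation": blocks until a clean page exists. */
    static void alloc_page(void)
    {
        pthread_mutex_lock(&mem_lock);
        while (clean_pages == 0)
            pthread_cond_wait(&mem_cond, &mem_lock);
        clean_pages--;
        pthread_mutex_unlock(&mem_lock);
    }

    /* The process that must make progress for memory to get clean. */
    static void *dirtier(void *arg)
    {
        pthread_mutex_lock(&file_lock);   /* holds the dirty file's lock */
        alloc_page();                     /* blocks: no clean pages yet  */
        pthread_mutex_unlock(&file_lock);
        return NULL;
    }

    /* "Writeback": the only producer of clean pages, but it needs
     * file_lock first -- closing the cycle. */
    static void *flusher(void *arg)
    {
        sleep(1);                         /* let the dirtier take the lock */
        pthread_mutex_lock(&file_lock);   /* never acquired: deadlock      */
        pthread_mutex_lock(&mem_lock);
        clean_pages++;
        pthread_cond_signal(&mem_cond);
        pthread_mutex_unlock(&mem_lock);
        pthread_mutex_unlock(&file_lock);
        return NULL;
    }

    int main(void)
    {
        pthread_t a, b;
        pthread_create(&a, NULL, dirtier, NULL);
        pthread_create(&b, NULL, flusher, NULL);
        pthread_join(a, NULL);            /* hangs forever */
        pthread_join(b, NULL);
        puts("never printed");
        return 0;
    }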
"Throttling" usually refers to that sort of tuning -- a more or less arbitrary limit placed on how fast something can go. I hope that's not what this is about, because it is possible actually to fix deadlocks -- make them mathematically impossible while still making the full resources of the system available to be apportioned any way you like. It's not easy, but the techniques are well known (never wait for a Level N resource while holding a Level N+1 resource; have pools of memory at various levels).
I hate throttling. Throttling is waiting for something artificial. In a well-designed system, you wait only for real resources and never suffer deadlock or unfairness.