Posted Sep 29, 2005 8:52 UTC (Thu) by ringerc
Parent article: Predictive per-task write throttling
This has driven me nuts for years. Anything that improves the situation is wonderful in my book. Currently, running a backup on my dual Xeon server at work (on an 8-disk RAID) will cause everything else to slow to a crawl for hours. Worse, I have no way to tell the kernel "this is a low priority process, please don't let it monopolize memory or disk resources if someone else wants them," nor can I tell the kernel "this process is just doing a once-off copy, please don't shove everything it reads into your IO cache."
A `disknice' command would make me want to send a few cases of nice beer to whoever wrote it. A `memnice' command in the same style, for tuning the amount of system cache memory given to a process, would probably demand a container of beer if only I could provide it. Someone who fixes the kernel so neither is required at all ....
Consider a system with a large pile of GUI apps running. It needs the binaries for these apps in RAM, along with things like mailbox index files, various common config files, etc. Now imagine that a backup has pushed all of that out of RAM in favour of data that's only going to be read once, so the GUI processes need to read it back in from disk every time they use it. That includes the binary images themselves. That's bad, but what's worse is that the backup is slowing disk I/O right down with no way to throttle it, so each access those GUI apps make takes forever. You now have the recipe for almost DoSing a Linux terminal server just by running a backup.
I still don't understand why `tar' and `cp' can't just tell the kernel "this read should not be cached" and "I'm doing bulk I/O".