Some notes from the BFS discussion

Posted Sep 10, 2009 13:08 UTC (Thu) by michaeljt (subscriber, #39183)
Parent article: Some notes from the BFS discussion

This has actually made me wonder - what benefit is there in trying to adjust a process's scheduling based on how much CPU time it got in the past? It seems to me that unless you are giving very hard QoS guarantees, or something is wrong with your scheduler algorithm, any deviations from the time the process should have had will be blips which can be ignored in the longer run, and trying to compensate for them is likely to introduce unnecessary complexity.

(That is, the blips should be irrelevant for throughput once you average things out over a period of time, and as far as the responsiveness is concerned, they have already happened and you can no longer take them back.)



Some notes from the BFS discussion

Posted Sep 10, 2009 15:22 UTC (Thu) by anton (subscriber, #25547) [Link]

what benefit is there in trying to adjust a process's scheduling based on how much CPU time it got in the past?
If a process is I/O-bound (e.g., waiting for the user most of the time rather than computing), it can be useful to prefer it, because it will give a faster response to the user (or, for disk-bound processes, it will come up with its next disk request sooner, increasing disk utilization and hopefully reducing total run-time), whereas a CPU-bound process usually does not benefit from getting its timeslice now rather than later.
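
A minimal sketch of the idea (the names and structures below are made up for illustration, not the kernel's actual code): if the scheduler always picks the runnable task with the least accumulated weighted CPU time, tasks that sleep a lot naturally end up at the front of the queue when they wake.

    /* Toy model: pick the runnable task with the least accumulated
     * virtual runtime.  A task that spends most of its time sleeping
     * (waiting for the user or the disk) falls behind the CPU hogs
     * and therefore wins the next pick when it wakes up.  All names
     * here are hypothetical. */
    #include <stddef.h>
    #include <stdio.h>

    struct toy_task {
        const char *name;
        unsigned long long vruntime_ns;  /* weighted CPU time consumed so far */
        int runnable;                    /* 1 if ready to run */
    };

    static struct toy_task *toy_pick_next(struct toy_task *tasks, size_t n)
    {
        struct toy_task *best = NULL;
        for (size_t i = 0; i < n; i++) {
            if (!tasks[i].runnable)
                continue;
            if (!best || tasks[i].vruntime_ns < best->vruntime_ns)
                best = &tasks[i];
        }
        return best;
    }

    int main(void)
    {
        struct toy_task tasks[] = {
            { "oggenc (CPU hog)",       900000000ULL, 1 },
            { "editor (mostly asleep)",  30000000ULL, 1 },  /* just woke up */
        };
        printf("next: %s\n", toy_pick_next(tasks, 2)->name);
        return 0;
    }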

Some notes from the BFS discussion

Posted Sep 10, 2009 16:18 UTC (Thu) by mingo (subscriber, #31122) [Link]

Beyond IO bound tasks, there's also a general quality argument behind rewarding sleepers:

Lighter, leaner tasks get an advantage. They run less and consequently sleep more.

Tasks that do intelligent multi-threading with a nice, parallel set of tasks get an advantage too.

CPU hogs that slow down the desktop and eat battery like the end of the world is nigh should take a back seat compared to lighter, friendlier, 'more interactive' tasks.

So the Linux scheduler has always tried to reward tasks that are more judicious with CPU resources. An app can get 10% snappier by using 5% less CPU time.

Some notes from the BFS discussion

Posted Sep 10, 2009 19:08 UTC (Thu) by michaeljt (subscriber, #39183) [Link]

Hm, I'm not sure that either of those arguments quite convinces me :) As far as "CPU-bound processes don't need good latencies" is concerned, well, it is a heuristic, and heuristics are only as good as the set of usage cases their author thought of. Does a process rendering animation, or mixing music that is played back as it is rendered/mixed, fare well enough here? You are of course much better qualified than me to think of the possible edge cases of the heuristic...

And regarding the rewarding of processes, it sounds a bit like the scheduler wanting to know better than the user what the user wants. It would be much less heavy-handed to just let the user know that a thread was not behaving nicely, and to let them deal with it. They might have a good reason for running it after all.

Just my thoughts, not to be given more weight than they deserve.

Some notes from the BFS discussion

Posted Sep 10, 2009 19:20 UTC (Thu) by mingo (subscriber, #31122) [Link]

I agree with your observations - these are the basic tradeoffs to consider.

Note that the reward for tasks is limited (an unlimited reward would open up a starvation hole).

But you are right to suggest that the scheduler should not be guessing about the purpose of tasks.

So this capability was always kept optional, and was turned on/off during the fair scheduler's evolution, mainly driven by user feedback and by benchmarks. We might turn it off again - there are indications that it's causing problems.
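
(One way to picture the cap; the constant and function names below are invented, not the kernel's: when a sleeper wakes it is credited by being placed a bounded distance behind the other tasks, so it gets a quick turn without being able to starve them.)

    /* Toy model of a bounded sleep reward: the woken task is placed
     * at most TOY_MAX_SLEEP_CREDIT_NS behind the current minimum,
     * however long it actually slept.  Without the clamp, a task
     * that slept for hours could monopolise the CPU while it
     * "caught up", starving everybody else. */
    #define TOY_MAX_SLEEP_CREDIT_NS (20ULL * 1000 * 1000)   /* 20 ms, made up */

    unsigned long long
    toy_wakeup_vruntime(unsigned long long min_vruntime,
                        unsigned long long slept_ns)
    {
        unsigned long long credit = slept_ns;

        if (credit > TOY_MAX_SLEEP_CREDIT_NS)
            credit = TOY_MAX_SLEEP_CREDIT_NS;               /* the cap */

        /* Just behind the pack, never arbitrarily far behind it. */
        return min_vruntime > credit ? min_vruntime - credit : 0;
    }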

Some notes from the BFS discussion

Posted Sep 10, 2009 21:11 UTC (Thu) by anton (subscriber, #25547) [Link]

Does a process rendering animation, or mixing music that is played back as it is rendered/mixed, fare well enough here?
Such processes normally won't use all of the CPU (unless the CPU is too slow for them), because they are limited by the speed at which they want to play back the content, so a scheduler preferring sleepers over CPU hogs will prefer them over, say, oggenc. Of course, a browser might get even more preferential treatment, which you may not want; and clock scaling will tend to make a task that consumes a significant, mostly-constant amount of CPU look almost CPU-bound if it is alone on the CPU (but then it does not really matter).
It would be much less heavy handed to just let the user know that a thread was not behaving nicely, and to let the user deal with it.
Traditionally Unix had nice for that. I'm not sure that this still works properly with current Linux schedulers; the last time I tried it, it did not work well.
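
(As a concrete example of that traditional mechanism, and leaving aside whether the scheduler honours it well, a process can be deprioritised with the standard setpriority(2) call, which is what renice(1) does underneath; the PID below is made up.)

    /* Lower the priority of a CPU hog, the traditional Unix way.
     * Roughly equivalent to "renice -n 10 -p 12345" from the shell. */
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/resource.h>
    #include <sys/types.h>

    int main(void)
    {
        pid_t hog = 12345;     /* hypothetical PID of the offending task */
        int niceness = 10;     /* positive values mean lower priority */

        if (setpriority(PRIO_PROCESS, hog, niceness) == -1) {
            fprintf(stderr, "setpriority: %s\n", strerror(errno));
            return 1;
        }
        printf("reniced pid %d to nice %d\n", (int)hog, niceness);
        return 0;
    }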

Some notes from the BFS discussion

Posted Sep 11, 2009 5:18 UTC (Fri) by michaeljt (subscriber, #39183) [Link]

A process like gcc, which alternates between being I/O-bound and CPU-bound, may also get more than its share under this algorithm - perhaps that is why people always give build processes as examples of what negatively affects their interactivity?

Some notes from the BFS discussion

Posted Sep 11, 2009 5:20 UTC (Fri) by michaeljt (subscriber, #39183) [Link]

s/algorithm/heuristic/. And of course, since CFS considers the behaviour of the process over a long period of time, this effect should be somewhat limited.

