
BFS vs. mainline scheduler benchmarks and measurements

Posted Sep 8, 2009 11:58 UTC (Tue) by mingo (subscriber, #31122)
In reply to: BFS vs. mainline scheduler benchmarks and measurements by epa
Parent article: BFS vs. mainline scheduler benchmarks and measurements

One difference is that nice levels are relative: "nice +5" still makes sense from within a nice +10 workload, whereas latency values tend to be absolute. Relative makes more conceptual sense IMO, as workloads are fundamentally hierarchical and a sub-workload of some larger workload might not be aware of the larger entity it is running in.

Also, a practical complication is that there's not much of a culture of setting latencies and it would take years to build them into apps and to build awareness.

Also, latencies are hardware dependent and change with time. 100 msecs on an old box is very different from 100 msecs on a newer box.

Maybe for media apps it would make sense to specify some sort of deadline (a video app that wants to display at a fixed frequency, or an audio app that knows its precise buffering hard limit) - but in practice these apps tend not to know their precise latency target. For example, the audio pathway could be buffered in the desktop environment, in the sound server and in the kernel too.

Nor would it solve much: most of the latencies that people notice, and which cause skipping, dropped frames and the like, are bugs: they are unintended and need fixing.

Nevertheless this has come up before and could be done to a certain degree. I still hope that we can just make things behave by default, out of the box, without any extra tweaking needed.



BFS vs. mainline scheduler benchmarks and measurements

Posted Sep 8, 2009 12:41 UTC (Tue) by epa (subscriber, #39769)

I agree that relative niceness levels make the most sense in a batch processing environment or in a 'lightly interactive' environment such as a Unix shell, where it should respond quickly when you type 'ls', but there is no firm deadline.

I think they make a bit less sense for multimedia applications or even ordinary desktop software (where users nowadays expect smooth scrolling and animations). You are right that in the Unix world there isn't much culture of setting quantifiable targets for latency or CPU use; we are accustomed to mushy 'niceness' values, where setting a lower niceness somehow makes it go faster, but only the most greybearded of system administrators could tell you exactly how much.

One reason to specify a latency target in milliseconds is just to have something quantifiable. A lot of discussions on LKML and elsewhere about scheduling seem to suffer from a disconnect between one side running benchmarks such as kernel compiles, which give hard numbers but aren't typical of desktop usage, and another side who just talk in qualitative terms about how much faster it 'feels'.

I expect that if a 'max latency' option were added to the kernel, even one that did almost nothing at all to start with, it would still provide a framework for improvements to take place - a latency of 110ms when 100ms was requested would now be a quantifiable performance regression, and people could benchmark their kernel against a promised performance target rather than just trying to assess how it feels. (You yourself have provided such a latency benchmark - the 'load enormous JPEG in Firefox' test suite :-).


Copyright © 2026, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds