BFS vs. mainline scheduler benchmarks and measurements
Posted Sep 8, 2009 11:58 UTC (Tue) by mingo (subscriber, #31122)
In reply to: BFS vs. mainline scheduler benchmarks and measurements by epa
Parent article: BFS vs. mainline scheduler benchmarks and measurements
One difference is that nice levels are relative - that way "nice +5" makes relative sense even from within a nice +10 workload. Latency values tend to be absolute. Relative makes more conceptual sense IMO, as workloads are fundamentally hierarchical: a sub-workload of some larger workload might not be aware of the larger entity it is running in.
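The relative semantics show up in the POSIX nice() interface itself, which Python exposes as os.nice(): the argument is an increment on top of the current level, not an absolute value. A minimal sketch (the particular numbers are just for illustration, and the result can clamp at the maximum niceness of 19):

```python
import os

# os.nice() takes a *relative* increment and returns the new niceness,
# so "nice +5" means the same thing whether the process started at
# niceness 0 or inside an already-niced workload.
base = os.nice(0)   # an increment of 0 just reads the current level
new = os.nice(5)    # bump niceness by 5 relative to wherever we are

# Away from the clamp at 19, the new level is simply base + 5.
assert new == base + 5
```

An absolute latency setting has no analogous composition rule: a sub-workload asking for "10 ms" overrides rather than refines its parent's constraint.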
Also, a practical complication is that there's not much of a culture of setting latencies, and it would take years to build that support into apps and to build awareness.
Also, latencies are hardware dependent and change with time. 100 msecs on an old box is very different from 100 msecs on a newer box.
Maybe for media apps it would make sense to specify some sort of deadline (a video app if it wants to display at a fixed frame rate, or an audio app if it knows its precise buffering hard limit) - but in practice these apps tend not to know their precise latency target. For example, the audio pathway could be buffered in the desktop environment, in the sound server and in the kernel too.
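A toy sketch of why the app-side number is hard to pin down: the effective deadline is the sum of the buffering added by every layer in the pathway, and an app typically only knows its own share. All the buffer sizes below are made-up assumptions, not measurements of any real stack:

```python
# Hypothetical audio pathway: each layer adds its own buffering, so the
# real latency budget is larger than any single layer's buffer suggests.
sample_rate = 48000   # Hz (assumed)
kernel_buf = 256      # frames buffered in the kernel driver (assumed)
server_buf = 512      # frames added by the sound server (assumed)
desktop_buf = 1024    # frames added by the desktop environment (assumed)

total_frames = kernel_buf + server_buf + desktop_buf
deadline_ms = total_frames / sample_rate * 1000
print(f"effective latency budget: {deadline_ms:.1f} ms")  # ~37.3 ms
```

The app that only sees its own 256-frame buffer would set a deadline several times tighter than the pathway actually requires.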
Nor would it solve much: most of the latencies that people notice, and which cause skipping, dropped frames and so on, are bugs - they are unintended and need fixing.
Nevertheless this has come up before and could be done to a certain degree. I still hope that we can just make things behave well by default, out of the box, without any extra tweaking needed.
