
Morton's Fork

Posted Sep 7, 2009 6:35 UTC (Mon) by quotemstr (subscriber, #45331)
In reply to: BFS vs. mainline scheduler benchmarks and measurements by MattPerry
Parent article: BFS vs. mainline scheduler benchmarks and measurements

First of all, the burden of proof is on BFS advocates to provide a better test. Ingo's test was well described and performed under reasonable conditions; Kolivas provided no comparably rigorous numbers. Your suggestion, to test what users actually use, puts kernel developers in an unreasonable dilemma. On the one hand, kernel developers can test the tasks that "users would perform", but because the results of such tests are hard to quantify numerically, they are meaningless without an expensive, inconvenient double-blind satisfaction study. (And really, the onus is on BFS advocates to provide one if that's what it takes.)

On the other hand, kernel developers can use contrived tests like the pipe example that are easily quantified, but that only approximate user workloads. These tests can be improved, but one will always be able to claim that they don't measure what users "really" do. Either way, the claim that BFS is superior will have been made unfalsifiable and unscientific.


Morton's Fork

Posted Sep 7, 2009 12:57 UTC (Mon) by Lennie (guest, #49641)

Let's start with 'frames skipped' in mplayer or vlc or something.

Morton's Fork

Posted Sep 11, 2009 1:51 UTC (Fri) by Spudd86 (guest, #51683)

Ingo mentioned further up that he tests exactly this on low-end machines.


Copyright © 2026, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds