Morton's Fork
Posted Sep 7, 2009 6:35 UTC (Mon) by quotemstr (subscriber, #45331)
In reply to: BFS vs. mainline scheduler benchmarks and measurements by MattPerry
Parent article: BFS vs. mainline scheduler benchmarks and measurements
First of all, the burden of proof is on BFS advocates to provide a better test. Ingo's test was well-described and performed under reasonable conditions. Kolivas provided no comparably rigorous numbers. Your suggestion, to test what users actually use, puts kernel developers in an unreasonable dilemma. On the one hand, kernel developers can test the tasks that "users would perform", but because the results of such tests are not easily quantified, they are meaningless without an expensive, inconvenient double-blind satisfaction study. (And really, the onus is on BFS advocates to provide one if that's what it takes.)
On the other hand, kernel developers can use contrived tests like the pipe example that are easily quantified, but that only approximate user workloads. These tests can be improved, but one will always be able to claim that they don't measure what users "really" do. Either way, the claim that BFS is superior will have been made unfalsifiable and unscientific.
