BFS vs. mainline scheduler benchmarks and measurements
Posted Sep 7, 2009 12:09 UTC (Mon) by kragil (guest, #34373)
In reply to: BFS vs. mainline scheduler benchmarks and measurements by bvdm
Parent article: BFS vs. mainline scheduler benchmarks and measurements
When I read:
"So the testbox i picked fits into the upper portion of what i
consider a sane range of systems to tune for - and should still fit
into BFS's design bracket as well according to your description:
it's a dual quad core system with hyperthreading."
Tune the scheduler for a 16-core machine? Thank you very much. I know nobody with more than a quad core, and those are spanking new.
And it is really, really unfair to test a scheduler that wants to enhance interactivity for pure performance on a system that is clearly at the upper limit of what the scheduler was designed for.
What I take from this discussion is that kernel devs live in a world where Intel's fastest chips in multi-socket systems are low end, and they will cater only to the enterprise bullcrap that pays their bills.
Despite what Linus says, Linux is not intended to be used on the desktop (at least not in the real world).
Posted Sep 7, 2009 13:18 UTC (Mon) by aigarius (subscriber, #7329)
Cory needs to show quantifiable tests so that performance of different versions of schedulers can actually be compared. How can we know that a patch improves on the code if there is no quantifiable number showing that conclusively?
Scientific approach, please. Insulting people does not win arguments in technical communities. Facts, tests and numbers do.
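For example, even a repeated hackbench run yields a number that two schedulers can actually be compared on. A minimal sketch, assuming the hackbench benchmark tool is installed (the group count of 50 is an arbitrary choice):

# average ten runs so the comparison rests on a number, not on feel
$ for i in $(seq 1 10); do hackbench 50; done | \
    awk '/^Time:/ { sum += $2; n++ } END { print "avg:", sum/n, "s over", n, "runs" }'

Run the same loop under each scheduler and compare the averages; with ten samples, most of the run-to-run noise washes out.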
Posted Sep 7, 2009 16:13 UTC (Mon) by andreashappe (subscriber, #4810)
4 cores plus ht.
Still makes me smile when I see the htop output.
Posted Sep 8, 2009 7:48 UTC (Tue) by epa (subscriber, #39769)
It might help to see some numbers. Take Fedora's smolt data, which is from people who have clicked 'yes' when installing Fedora and have reported what hardware they use.
This shows that more than half of Fedora systems are dual-processor, with another 38% having a single CPU. So based on hardware that's in use now, a one- or two-processor test would be more reasonable. Of course it's useful to test on 16-processor monsters as well, but that is not the typical desktop and won't be for some time. (And by the time it is, all sorts of other assumptions will have changed too.)
Posted Sep 8, 2009 8:34 UTC (Tue) by branden (guest, #7029)
How about we bench based on the profiles of the machines people bring to Debconf?
Posted Sep 7, 2009 13:37 UTC (Mon) by mingo (guest, #31122)
"What I take from this discussion is that kernel devs live in a world where Intel's fastest chips in multi-socket systems are low end, and they will cater only to the enterprise bullcrap that pays their bills."
I certainly don't live in such a world and i use a bog standard dual-core system as my main desktop. I also have an 833 MHz Pentium-3 laptop that i booted into a new kernel 4 times today alone:
#0, d5f8b495, Mon_Sep__7_08_39_36_CEST_2009: 0 kernels/hour
#1, b9e808ca, Mon_Sep__7_09_19_47_CEST_2009: 1 kernels/hour
#2, b9e808ca, Mon_Sep__7_10_26_28_CEST_2009: 1 kernels/hour
#3, b9e808ca, Mon_Sep__7_14_58_48_CEST_2009: 0 kernels/hour

$ head /proc/cpuinfo
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 8
model name : Pentium III (Coppermine)
stepping : 10
cpu MHz : 846.242
cache size : 256 KB

$ uname -a
Linux m 2.6.31-rc9-tip-01360-gb9e808c-dirty #1178 SMP Mon Sep 7 22:38:18 CEST 2009 i686 i686 i386 GNU/Linux
And that test-system does that every day - today isn't a special day. Look at the build count in the uname output: #1178. This means that i have booted more than a thousand development kernels on this system already.
Now, to reply to your suggestion: for scheduler performance i picked the 8-core system because that's where i do scheduler tests: it allows me to characterise that system _and_ also allows me to characterise lower-performance systems to a fair degree.
Check out the updated jpgs with quad-core results.
See how similar the single-socket quad results are to the 8-core results i posted initially? People who do scheduler development do this trick frequently: most of the "obvious" results can be downscaled as a ballpark figure.
(the reason for that is very fundamental: you don't see new scheduler limitations pop up as you go down with the number of cores. The larger system already includes all the limitations the scheduler has on 4, 2 or 1 cores and reflects those properties already, so there are no surprises. Plus, testing is a lot faster: it took me 8 hours today to get all the results from the quad system. And this is right before the 2.6.32 merge window opens, when Linux maintainers like me are very busy.)
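A related trick - just a sketch, and it assumes a kernel with CPU hotplug support enabled - is to offline cores via sysfs, which approximates a smaller box on the same hardware without a reboot:

# as root: take logical cpus 4-7 offline, leaving a quad
$ for c in 4 5 6 7; do echo 0 > /sys/devices/system/cpu/cpu$c/online; done
$ grep -c ^processor /proc/cpuinfo   # /proc/cpuinfo now lists only the online cpus
# bring them back online when the benchmark run is done
$ for c in 4 5 6 7; do echo 1 > /sys/devices/system/cpu/cpu$c/online; done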
Certainly there are borderline graphs and also trickier cases that cannot be downscaled like that - and in general 'interactivity', i.e. all things latency-related, comes out on smaller systems in a more pronounced way.
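Latency can be quantified too - for instance with cyclictest from the rt-tests suite (again only a sketch, assuming that tool is available):

# one SCHED_FIFO-priority-80 thread, woken every 1000 usecs via
# clock_nanosleep for 10000 loops; prints min/avg/max wakeup
# latency in microseconds (run as root)
$ cyclictest -t1 -p80 -n -i1000 -l10000

The max column roughly corresponds to the 'hiccups' a desktop user actually feels.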
But when it comes to scheduler design and merge decisions that will trickle down and affect users 1-2 years down the line (once it gets upstream, once distros use the new kernels, once users install the new distros, etc.), i have to "look ahead" quite a bit (1-2 years) in terms of the hardware spectrum.
Btw., that's why the Linux scheduler performs so well on quad-core systems today - the groundwork for that was laid two years ago, when scheduler developers were testing on quads. If we discovered fundamental problems on quads _today_, it would be way too late to help Linux users.
Hope this explains why kernel devs are sometimes seen to be ahead of the hardware curve. It's really essential, and it does not mean we are detached from reality.
In any case - if you see any interactivity problems, on any class of systems, please do report them to lkml and help us fix them.
Posted Sep 8, 2009 8:46 UTC (Tue) by kragil (guest, #34373)
I think our major disagreement here is the "look ahead".
I strongly believe that computers have reached the point where this relentless upgrade cycle should stop - and has stopped. If you bought a P4 with HT and 1 GB of RAM in 2003, it is still perfectly capable of running the newest software that 95% of desktop users need. Machines like that will soon turn 7 YEARS old. People will look for computers that use less energy and don't have moving parts that just break after a few years.
I think faster ARM, MIPS and Atom CPUs are the architectures most desktop Linux kernels will run on, and the relative percentage of X-core x86 monsters will decline (maybe even rapidly).
And no, I don't think Fedora's smolt data is any good here. Fedora users are technical people and are unlikely to run really old hardware like my sister's, for example.
I also don't think Linux will ever have problems with the fastest computers; its dominance in the HPC area will make sure of that.
PCs will be like old TV sets and work for many, many years (10 to 15 years). The software has to adapt. That is the "look ahead" I see, but I can understand why Red Hat plans for something different.
Posted Sep 8, 2009 9:30 UTC (Tue) by mingo (guest, #31122)
"And no, I don't think Fedora's smolt data is any good here. Fedora users are technical people and are unlikely to run really old hardware like my sister's, for example."
That's all fine, and i have a Fedora Core 6 box on old hardware too - very old hardware.
I wouldn't upgrade the kernel on it though - and non-technical users are even less likely to do that. Software and hardware live in a single unit, and for similar reasons that it is hard to upgrade hardware, it is difficult to upgrade software as well. Yes, you pick up security fixes, etc. - but otherwise main components like the kernel tend to be cast into stone at install time. (And no, if you are reading this on LWN.net then your box probably does not qualify ;-)
Which means that most of the 4-year-old systems have a 4-year-old distribution on them, with a 4-year-old kernel. That kernel was developed 5 years ago, and any deep scheduler decisions in it were made 6 years ago or even earlier.
So yes, i agree that the upgrade treadmill has to stop eventually, but _I_ cannot make it stop - i just observe reality and adapt to it. I see what users do, i see what vendors do, and i try to develop the kernel in the best possible technical way, matching those externalities.
What i'm seeing right now, as the scheduler and x86 co-maintainer, is that the hardware side shows no signs of slowing down, and that users who are willing to install new kernels show eagerness to buy shiny new hardware. Quads yesterday, six-cores today, octo-cores in a year or two.
Most of the new kernel installs go to fresh new systems, so that's an important focus of the upstream kernel - and of any distribution maker. That is the space where we realistically _can_ do something, and if we did something else we'd be ignoring our users.
I could certainly be wrong about all that in some subtle (or not so subtle) way - but right now the fact is that most of the bug reports i get against the development code we release are filed on relatively new hardware.
That is natural to a certain degree - new hardware triggers new, previously unknown limitations and bottlenecks, and new hardware has its own problems too, which get mixed into kernel problems, etc. Old hardware is also already settled into its workload, so there's little reason to upgrade an old, working box in general. There's also the built-in human excitement factor that shiny new hardware triggers on a genetic level ;-)
There's an easy way out though: please report bugs on old hardware and make old hardware count. The mainline kernel can only recognize and consider people who are willing to engage. The upstream kernel process is a fundamentally auto-tuning and auto-correcting mechanism and it is mainly influenced by people willing to improve code.
Posted Sep 9, 2009 11:41 UTC (Wed) by nix (subscriber, #2304)
I suspect your argument is pretty much only true for corporate uses of Linux (i.e. 'just work with *this* set of software', as opposed to other uses which often involve installation of new stuff). But perhaps those are the only uses that matter to you...
Posted Sep 7, 2009 16:19 UTC (Mon) by einstein (guest, #2052)
"Despite what Linus says, Linux is not intended to be used on the desktop (at least not in the real world)."
Speak for yourself. I've been using Linux on the desktop in the real world for years, as have a number of other people I know, your snarky little jabs notwithstanding.
Posted Sep 7, 2009 19:38 UTC (Mon) by leoc (guest, #39773)
For a system not intended to be used in the "real world" it is doing pretty well considering it has around 1/4 the market share of OS X.