
BFS vs. mainline scheduler benchmarks and measurements


Posted Sep 7, 2009 20:55 UTC (Mon) by paragw (guest, #45306)
In reply to: BFS vs. mainline scheduler benchmarks and measurements by niner
Parent article: BFS vs. mainline scheduler benchmarks and measurements

"I keep wondering why people seem to have completely forgotten about nice values and instead expect the scheduler to guess what are the important processes for them, when they can simply tell it."

Did that ever work satisfactorily in practice, though? If it did, why are people still cranking out different schedulers for desktops?

The thing is, usability-wise we have come a long way on the Linux desktop, and I think people are starting to expect the OS to do the right thing without having to do work and make decisions themselves. (About renicing Xorg: what about its clients? Every time I start a program, should I renice it if it is an Xorg client? If the desktop scheduler instead boosted interactivity for all Xorg client programs, that would be much easier for the user.)
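For reference, doing this by hand today looks something like the sketch below. The PID lookup via pgrep and the specific nice values are illustrative only; lowering niceness below zero requires root, while raising it does not:

```shell
# Boost the X server (negative niceness needs root; values illustrative)
sudo renice -n -10 -p "$(pgrep -o Xorg)"

# Starting a client at a lower (nicer) priority needs no privileges:
nice -n 10 ./some-batch-job &

# Check the niceness the job actually got:
ps -o pid,ni,comm -p $!
```

The asymmetry (root needed only to go below 0) is exactly why per-client renicing is awkward for ordinary desktop users.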

And I was saying that we can afford to do such things in a desktop scheduler if its sole objective is interactivity. A single scheduler that must handle interactivity, throughput, and everything else quickly becomes complex and thus ineffective. With a pluggable scheduler we could, for one thing, simplify a lot of code, and for another, let people choose what fits their needs best.



BFS vs. mainline scheduler benchmarks and measurements

Posted Sep 8, 2009 3:55 UTC (Tue) by fest3er (guest, #60379)

When I was going to do something that might get away from me (forking lots of
processes), I would try to remember to run 'nice --20 sh' in another window,
because I would often get a boundary condition wrong and generate 20,000 to
40,000 processes running BTTW. That one niced shell would save me almost
every time. I've done this on my AT&T UNIXPC, and on systems running
SysV/68, SysV/88, Irix, SunOS[345], Linux, BeOS, BSD, and others.
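The trick is easy to reproduce: keep one shell at the highest priority so it stays responsive even when a runaway loop floods the system. A minimal sketch (the script name is hypothetical, and the negative nice value requires root):

```shell
# In a spare terminal, BEFORE starting anything risky.
# Niceness -20 is the highest priority; setting it needs root:
sudo nice -n -20 sh

# ...later, when a buggy loop has spawned thousands of processes,
# this shell still gets CPU time, so cleanup is possible:
pkill -f runaway-script.sh
```

Without the pre-niced shell, even typing the pkill command can take minutes on a saturated single-CPU machine.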

There have been times in the past when nice'ing the X server improved
performance on my single-proc PIII-866; for the past 5-10 years, though,
only two or more CPUs have let X run smoothly.

There have been times in the past when nothing would smooth out the
choppiness of the EXT2/3 driver under heavy R/W load, whether I had two
PII-266s or a PIII-866. I solved that problem by switching to ReiserFS.

In recent years and on two completely different systems, I've noticed a
tendency for the kernel to do weird things with the PS/2 drivers (system
slows down, gets choppy, and even silently resets). This last time, I
pulled the plugs for the PS/2 ports and the system returned to normal.
(The chipset fan was overworking itself, so I had *some* clue where to
look.)

There can be many reasons why a system is 'choppy', and it's not always
the scheduler. Sometimes it's the interrupt handler dealing with some
device that's gone haywire. Sometimes it's the block layer not doing disk
I/O very nicely or a server process being very inefficient. Sometimes it's
an application that's gone braindead. And if a scheduler can be developed
that smooths out the choppiness in single- and dual-core systems, great!
Go for it! An older single-CPU system may never be fast, but it ought to
run smoothly under normal user operations.

The scheduler has gotten better over the past 15 years. And it will
continue to improve. But apps have to improve as well and not always
assume the 'system' will take care of everything.

As Ingo says, 8-core systems aren't mainstream. But they will be. Perhaps
Con is looking to improve today's mainstream systems, not tomorrow's. Is
this apples v. oranges? Or isn't it? Mayhap never the twain shall meet.
But all parties involved should strive to keep the discourse civil and
positive.

BFS vs. mainline scheduler benchmarks and measurements

Posted Sep 8, 2009 6:39 UTC (Tue) by mingo (subscriber, #31122)

Did that ever work satisfactorily in practice though?

Yes. (See my other post about nice levels in this discussion.) If it does not, it's a bug and needs to be reported to lkml.

There's also the /proc/sys/kernel/sched_latency_ns control in the upstream scheduler. It is global, and if you set it to a very low value, like 1 msec:

    echo 1000000 > /proc/sys/kernel/sched_latency_ns
you'll get very fine-grained scheduling. This tunable has been upstream for 7-8 kernel releases already.
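For completeness, the same tunable can be read back and managed via sysctl; a sketch, assuming the kernel exposes the file (it is only present when the scheduler debug knobs are compiled in):

```shell
# Read the current value (in nanoseconds)
cat /proc/sys/kernel/sched_latency_ns

# Equivalent to the echo above, via sysctl (needs root)
sysctl -w kernel.sched_latency_ns=1000000

# To persist across reboots, add to /etc/sysctl.conf:
#   kernel.sched_latency_ns = 1000000
```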

If it did, why are people still cranking out different schedulers for desktops?

Primarily because it's fun to do. Also, in no small part because it's much easier to do than to fix an existing scheduler (with all its millions of current users and workloads) :-)

BFS vs. mainline scheduler benchmarks and measurements

Posted Sep 8, 2009 12:30 UTC (Tue) by i3839 (guest, #31386)

Weird, I don't see /proc/sys/kernel/sched_latency_ns. After reading
the code, it's clear it depends on CONFIG_SCHED_DEBUG; any reason for
that? It has nothing to do with debugging, and the code saved is minimal.
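A quick way to check whether a running kernel was built with that option is to look at its exposed config; a sketch, since which of the two config locations exists varies by distro:

```shell
#!/bin/sh
# Check a kernel config option in the usual places it is exposed.
check_opt() {
    opt="$1"
    if [ -r /proc/config.gz ]; then
        zcat /proc/config.gz | grep -q "^${opt}=y"
    elif [ -r "/boot/config-$(uname -r)" ]; then
        grep -q "^${opt}=y" "/boot/config-$(uname -r)"
    else
        return 2    # config not exposed; can't tell
    fi
}

if check_opt CONFIG_SCHED_DEBUG; then
    echo "CONFIG_SCHED_DEBUG enabled"
    ls /proc/sys/kernel/sched_latency_ns
fi
```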

BFS vs. mainline scheduler benchmarks and measurements

Posted Sep 8, 2009 12:37 UTC (Tue) by mingo (subscriber, #31122)

Please send a patch; I think we could make it generally available, and the other granularity options as well. CONFIG_SCHED_DEBUG defaults to y and most distros enable it (alongside CONFIG_LATENCYTOP).

BFS vs. mainline scheduler benchmarks and measurements

Posted Sep 9, 2009 8:42 UTC (Wed) by realnc (guest, #60393)

I've tried those tweaks. They don't really help much.

BFS vs. mainline scheduler benchmarks and measurements

Posted Sep 10, 2009 9:53 UTC (Thu) by mingo (subscriber, #31122)

Thanks for testing it. It would be helpful (to keep reply latency low ;-) to move this to email and Cc: lkml.

You can test the latest upstream scheduler development tree via:

http://people.redhat.com/mingo/tip.git/README

BFS vs. mainline scheduler benchmarks and measurements

Posted Sep 9, 2009 11:50 UTC (Wed) by nix (subscriber, #2304)

I thought CONFIG_LATENCYTOP had horrible effects on the task_struct size and people were being encouraged to *disable* it as a result?

BFS vs. mainline scheduler benchmarks and measurements

Posted Sep 10, 2009 9:56 UTC (Thu) by mingo (subscriber, #31122)

It shouldn't have too big a cost unless you are really RAM-constrained (read: running a 32 MB system or so). So it's a nice tool if you want a general categorization of latency sources in your system.

latencytop is certainly useful enough that several distributions enable it by default. It has a size impact on task_struct, but otherwise the runtime cost should be near zero.

BFS vs. mainline scheduler benchmarks and measurements

Posted Sep 10, 2009 19:35 UTC (Thu) by i3839 (guest, #31386)

I'll try to send a patch against tip later this week, not feeling too well at the moment.


Copyright © 2026, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds