From: Alex Shi <email@example.com>
Subject: [patch v7 0/8] sched: using runnable load avg in balance
Date: Thu, 30 May 2013 15:01:56 +0800
Cc: Jason Low <email@example.com>, Changlong Xie <firstname.lastname@example.org>
Thanks for the comments from Peter, Paul, Morten, Michael and Preeti.
The most important change in this version is rebasing on the latest
I tested on Intel Core2, NHM, SNB, IVB, 2- and 4-socket machines with the
benchmarks kbuild, aim7, dbench, tbench, hackbench, oltp, and netperf.
On an SNB EP 4-socket machine, hackbench increased about 50%, and the
results became stable. On other machines, hackbench increased about
2~10%. oltp increased about 30% on an NHM EX box. netperf loopback also
increased on the SNB EP 4-socket box. No clear changes on the other
machines.
Michael Wang got better pgbench performance on his box with this
patchset. And Morten tested a previous version and saw better power
consumption. Changlong found the ltp cgroup stress tests got faster on SNB EP:
        3.10-rc1      patch1-7      patch1-8
        duration=764  duration=754  duration=750
        duration=764  duration=754  duration=751
        duration=763  duration=755  duration=751
duration is the time, in seconds, that the testing cost.
Jason also found that a java server workload benefited on his 8-socket machine:
When using a 3.10-rc2 tip kernel with patches 1-8, there was about a 40%
improvement in performance of the workload compared to when using the
vanilla 3.10-rc2 tip kernel with no patches. When using a 3.10-rc2 tip
kernel with just patches 1-7, the performance improvement of the
workload over the vanilla 3.10-rc2 tip kernel was about 25%.
We also tried to include the blocked load avg in balance, but found that
many benchmarks' performance dropped a lot! So it seems accumulating the
current blocked_load_avg into the cpu load isn't a good idea.
The blocked_load_avg is decayed the same as the runnable load, and is
sometimes far bigger than the runnable load; that drives tasks to other
idle or lightly loaded cpus, which causes both performance and power
issues. But if the blocked load is decayed too fast, it loses its effect.
Another issue with the blocked load is that when waking up a task, we
cannot know the task's proportion of the blocked load on the cpu. So the
blocked load is meaningless in the wake-affine decision.
Given the above problems, I cannot figure out a way to use
blocked_load_avg in balance now.
Anyway, using the runnable load avg in balance brings much benefit in
performance and power, and this patchset has been reviewed for a long
time. So maybe it's time to let it be picked up by some sub-maintainer
tree, like tip or linux-next. Any comments?
[patch v7 1/8] Revert "sched: Introduce temporary FAIR_GROUP_SCHED
[patch v7 2/8] sched: move few runnable tg variables into CONFIG_SMP
[patch v7 3/8] sched: set initial value of runnable avg for new
[patch v7 4/8] sched: fix slept time double counting in enqueue
[patch v7 5/8] sched: update cpu load after task_tick.
[patch v7 6/8] sched: compute runnable load avg in cpu_load and
[patch v7 7/8] sched: consider runnable load average in move_tasks
[patch v7 8/8] sched: remove blocked_load_avg in tg
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to email@example.com
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/