Re: [RFC patch 1/2] sched: dynamically adapt granularity with nr_running
[Posted September 14, 2010 by corbet]

From: Peter Zijlstra <peterz-AT-infradead.org>
To: Mathieu Desnoyers <mathieu.desnoyers-AT-efficios.com>
Subject: Re: [RFC patch 1/2] sched: dynamically adapt granularity with nr_running
Date: Sat, 11 Sep 2010 20:57:50 +0200
Message-ID: <1284231470.2251.52.camel@laptop>
Cc: LKML <linux-kernel-AT-vger.kernel.org>,
    Linus Torvalds <torvalds-AT-linux-foundation.org>,
    Andrew Morton <akpm-AT-linux-foundation.org>,
    Ingo Molnar <mingo-AT-elte.hu>,
    Steven Rostedt <rostedt-AT-goodmis.org>,
    Thomas Gleixner <tglx-AT-linutronix.de>,
    Tony Lindgren <tony-AT-atomide.com>,
    Mike Galbraith <efault-AT-gmx.de>
On Sat, 2010-09-11 at 13:37 -0400, Mathieu Desnoyers wrote:
It's not at all clear what you're doing, or why, exactly.
What we used to have is:
period -- time in which each task gets scheduled once
This period was adaptive: we had an ideal period
(sysctl_sched_latency), but keeping to it means that each task
gets latency/nr_running time, which is undesirable because busy
systems would over-schedule due to tiny slices. Hence we also had a
minimum slice (sysctl_sched_min_granularity).
This yields:
period := max(sched_latency, nr_running * sched_min_granularity)
[ where we introduce the intermediate:
nr_latency := sched_latency / sched_min_granularity
in order to avoid the multiplication where possible ]
Now you introduce a separate preemption measure, sched_gran, as:

                { sched_std_granularity                                  ; nr_running <= 8
  sched_gran := {
                { max(sched_min_granularity, sched_latency / nr_running) ; otherwise
Which doesn't make any sense at all, because the result will always be
at least as large as the current sched_min_granularity.
And you break the above definition of period by replacing nr_latency by
8.
Not at all charmed; this looks like random changes without conceptual
integrity.