From: Linus Torvalds <torvalds-AT-linux-foundation.org>
To: "H. Peter Anvin" <hpa-AT-zytor.com>
Subject: Re: Re-tune x86 uaccess code for PREEMPT_VOLUNTARY
Date: Sat, 10 Aug 2013 09:43:33 -0700
Cc: Mike Galbraith <bitbucket-AT-online.de>, Andi Kleen <andi-AT-firstfloor.org>, Linux Kernel Mailing List <linux-kernel-AT-vger.kernel.org>, "the arch/x86 maintainers" <x86-AT-kernel.org>, Ingo Molnar <mingo-AT-kernel.org>
On Sat, Aug 10, 2013 at 9:09 AM, H. Peter Anvin <hpa-AT-zytor.com> wrote:
> Do you have any quantification of "munches throughput?" It seems odd
> that it would be worse than polling for preempt all over the kernel, but
> perhaps the additional locking is what costs.
Actually, the big thing for true preemption is not so much the preempt
count itself, but the fact that when the preempt count goes back to
zero we have that "check if we should have been preempted" thing.
And in particular, the conditional function call that goes along with it.
The thing is, even if that is almost never taken, just the fact that
there is a conditional function call very often makes code generation
*much* worse. A function that is a leaf function with no stack frame
without preemption often turns into a non-leaf function with a stack
frame when you enable preemption, just because it had an RCU read-side
section which disabled preemption.
It's similar to the kind of code generation issue that Andi's patches
are trying to work on.
Andi did the "test and jump to a different section to call the
scheduler with registers saved" as an assembly stub in one of his
patches in this series exactly to avoid the cost of this for the
might_sleep() case, and generated that GET_THREAD_AND_SCHEDULE asm
macro for it. But look at that asm macro, and compare it to
I have often wanted to have access to that kind of thing from C code.
It's not unusual. Think lock failure paths, not Tom Jones.