From: Heiko Carstens <firstname.lastname@example.org>
To: Andrew Morton <email@example.com>
Subject: [patch 0/3] Allow inlined spinlocks again V4
Date: Fri, 14 Aug 2009 14:58:01 +0200
Cc: Linus Torvalds <firstname.lastname@example.org>,
    Peter Zijlstra <email@example.com>,
    Ingo Molnar <firstname.lastname@example.org>, email@example.com,
    Martin Schwidefsky <firstname.lastname@example.org>,
    Heiko Carstens <email@example.com>,
    Arnd Bergmann <firstname.lastname@example.org>,
    Horst Hartmann <email@example.com>,
    Christian Ehrhardt <firstname.lastname@example.org>,
    Nick Piggin <email@example.com>
This patch set allows spinlocks to be inlined again.
The rationale behind this is that function calls are expensive, at
least on s390. Considering that server kernels are usually compiled
with !CONFIG_PREEMPT, a simple spin_lock is just a compare-and-swap
loop, so the extra overhead of a function call is significant.
With inlined spinlocks overall cpu usage gets reduced by 1%-5% on s390.
These numbers were taken with some network benchmarks. However, I expect
any workload that frequently calls into the kernel and grabs a few
locks to perform better.
The implementation is straightforward: move the function bodies of the
locking functions to static inline functions and place them in a header
file. By default all locking code remains out-of-line. An architecture
can set a define in arch/<whatever>/include/asm/spinlock.h to force
inlining of a locking function.
V2: rewritten from scratch - now also with readable code
V3: removed the macro that generated the out-of-line spinlock variants,
    since it would break ctags. As requested by Arnd Bergmann.
V4: allow architectures to specify for each lock/unlock variant if
it should be kept out-of-line or inlined.