What about nested qspinlocks?

Posted Apr 7, 2014 14:35 UTC (Mon) by giltene (guest, #67734)
Parent article: MCS locks and qspinlocks

Limiting qspinlock to only 4 per-CPU slots assumes no logical nesting of the locks. While the 4 slots provide for concurrent use of locks in different interrupt contexts ("...normal, software interrupt, hardware interrupt, and non-maskable..."), nesting within the same context will still be a problem, as the same CPU would want to hold multiple qspinlocks at the same time. A good example of this need with regular ticket spinlocks can be found in move_ptes (http://lxr.free-electrons.com/source/mm/mremap.c#L90).
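As a concrete illustration, the double-lock pattern in move_ptes() looks roughly like this (a simplified sketch, not the exact kernel source; error handling and surrounding details are elided):

    /* Two page-table locks held at once by the same CPU. */
    spinlock_t *old_ptl = pte_lockptr(mm, old_pmd);
    spinlock_t *new_ptl = pte_lockptr(mm, new_pmd);

    spin_lock(old_ptl);
    if (new_ptl != old_ptl)
            spin_lock_nested(new_ptl, SINGLE_DEPTH_NESTING);

    /* ... move the page-table entries between the two ranges ... */

    if (new_ptl != old_ptl)
            spin_unlock(new_ptl);
    spin_unlock(old_ptl);

If these locks were qspinlocks, the CPU would be holding one lock while acquiring a second in the same context.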

This can probably be addressed by limiting the nesting level within each interrupt context to some reasonable but generous limit (4?) and providing more slots in each per-CPU mcs_spinlock array. Each CPU would probably need to keep a per-interrupt-context current_nesting_level to figure out the right slot to use and make sure no overflows occur.
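A sketch of what that slot selection might look like (all names and sizes here are hypothetical, not the actual qspinlock code):

    /* Hypothetical: slots per context and contexts per CPU. */
    #define MAX_NESTING_LEVELS  4
    #define NR_CONTEXTS         4   /* task, softirq, hardirq, NMI */

    static DEFINE_PER_CPU(struct mcs_spinlock,
                          mcs_nodes[NR_CONTEXTS * MAX_NESTING_LEVELS]);
    static DEFINE_PER_CPU(int, nesting_level[NR_CONTEXTS]);

    /* Pick the queue node for a new acquisition attempt in ctx. */
    static struct mcs_spinlock *grab_mcs_node(int ctx)
    {
            int level = this_cpu_read(nesting_level[ctx]);

            BUG_ON(level >= MAX_NESTING_LEVELS);  /* overflow check */
            this_cpu_inc(nesting_level[ctx]);
            return this_cpu_ptr(&mcs_nodes[ctx * MAX_NESTING_LEVELS + level]);
    }

The counter would be decremented again once the acquisition completes and the slot is freed.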



What about nested qspinlocks?

Posted Apr 7, 2014 15:31 UTC (Mon) by corbet (editor, #1)

Remember that the per-CPU array is only used during the lock acquisition process. When nested spinlocks are held, all but (perhaps) the last are already acquired. Since the CPU is no longer trying to acquire them, it need not place an entry into the queue and, thus, does not need to use an element from the per-CPU array.
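The acquisition path makes that concrete: the per-CPU node is claimed on entry to the slowpath and released before it returns, so a lock that is merely held ties up no slot. Roughly (a simplified sketch of the slowpath's slot bookkeeping, not the full kernel code):

    void queued_spin_lock_slowpath(struct qspinlock *lock)
    {
            struct mcs_spinlock *node = this_cpu_ptr(&mcs_nodes[0]);
            int idx = node->count++;    /* claim the next free slot */

            node += idx;
            /* ... join the MCS queue, spin until the lock is ours ... */

            /*
             * The lock is now held and the queue node is no longer
             * needed, so the slot is released before returning.
             */
            this_cpu_dec(mcs_nodes[0].count);
    }

Additional slots are thus consumed only when one acquisition attempt interrupts another that is still spinning, which is why one slot per interrupt context suffices.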

What about nested qspinlocks?

Posted Aug 21, 2014 13:44 UTC (Thu) by luto (subscriber, #39314)

Why is the number 4 correct?

Process context, software interrupt, hardware interrupt, kprobe breakpoint, kprobe, NMI, MCE. If all of these can nest, that's seven.

