
Ticket spinlocks and MP-guest kernels

Posted Feb 12, 2008 19:51 UTC (Tue) by eSk (guest, #45221)
In reply to: Ticket spinlocks by Nick
Parent article: Ticket spinlocks

There is also the case of multi-vCPU guests running in a VM.  With regular spinlocks you have
the problem that a guest vCPU may be preempted while holding a spinlock, thereby potentially
causing large delays for other vCPUs (i.e., think spinlock wait times on the order of hundreds
of milliseconds rather than nanoseconds).  A guest vCPU0 is preempted while holding a
spinlock.  Guest vCPU1 is scheduled and spends its whole timeslice waiting for the usually
shortly held spinlock to be released, not knowing that the holder of the spinlock is not
actually running at the time.
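
As a rough illustration (a minimal C11 sketch, not the kernel's actual spinlock code), this is
where that timeslice goes: with a plain test-and-set lock the waiter just spins in a loop that
can only terminate once the preempted holder runs again and releases the lock.

    /*
     * Minimal test-and-set spinlock sketch using C11 atomics; purely
     * illustrative, not the kernel's implementation.  If the vCPU of the
     * current lock holder has been descheduled by the hypervisor, the
     * loop in simple_spin_lock() spins for the waiter's entire timeslice.
     */
    #include <stdatomic.h>

    typedef struct {
            atomic_flag locked;
    } simple_spinlock_t;

    #define SIMPLE_SPINLOCK_INIT { ATOMIC_FLAG_INIT }

    static void simple_spin_lock(simple_spinlock_t *lock)
    {
            /* Busy-wait until the flag was clear and we managed to set it. */
            while (atomic_flag_test_and_set_explicit(&lock->locked,
                                                     memory_order_acquire))
                    ;  /* nothing useful happens here while the holder is preempted */
    }

    static void simple_spin_unlock(simple_spinlock_t *lock)
    {
            atomic_flag_clear_explicit(&lock->locked, memory_order_release);
    }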

One might argue that it's stupid to run an MP guest system on a single physical processor, but
the above scenario is also present for MP guests running on multiple physical processors.  You
only get around the problem if you perform strict gang-scheduling of all the vCPUs.  The other
option is to add heuristics that avoid preempting guest vCPUs while they hold spinlocks.

With ticket spinlocks, the problem of preempting lock holders gets worse.  Not only is
preempting a lock holder bad for performance, but since all the waiters must be granted the
lock in FIFO order, one had better make sure that the different lock contenders are scheduled
in the proper order.  Failure to do so will massively increase the lock waiting time.  With
regular spinlocks there is a chance that any one of the waiting vCPUs can grab the lock and be
done with it quickly.  With ticket spinlocks you don't have that option.
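
To make the FIFO constraint concrete, here is a simplified ticket-lock sketch (again plain C11
atomics, not the actual Linux implementation): each waiter draws a ticket and spins until the
owner counter reaches it, so no waiter can jump the queue.

    /*
     * Simplified ticket-spinlock sketch using C11 atomics; illustrative only.
     */
    #include <stdatomic.h>

    typedef struct {
            atomic_uint next;   /* next ticket to hand out */
            atomic_uint owner;  /* ticket currently allowed to hold the lock */
    } ticket_spinlock_t;

    static void ticket_spin_lock(ticket_spinlock_t *lock)
    {
            unsigned int ticket =
                    atomic_fetch_add_explicit(&lock->next, 1, memory_order_relaxed);

            /* Strict FIFO: spin until it is exactly this ticket's turn. */
            while (atomic_load_explicit(&lock->owner, memory_order_acquire) != ticket)
                    ;  /* busy-wait */
    }

    static void ticket_spin_unlock(ticket_spinlock_t *lock)
    {
            /* Hand the lock to whoever holds the next ticket in line. */
            atomic_fetch_add_explicit(&lock->owner, 1, memory_order_release);
    }

If the vCPU that drew ticket N is preempted, the vCPUs holding tickets N+1, N+2, and so on burn
their timeslices even though the lock itself is free, which is exactly the ordering problem
described above.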

I would expect that anyone trying to run an MP guest kernel will run into this performance
problem rather quickly (subject to the number of vCPUs and the kernel workload, of course).



Ticket spinlocks and MP-guest kernels

Posted Feb 14, 2008 7:53 UTC (Thu) by goaty (guest, #17783)

You could get a lovely cascade going. VCPU0 grabs the spinlock, then is pre-empted. VCPU1
spends its whole time slice waiting for the spinlock. VCPU2 is scheduled, and starts queuing
for the spinlock. VCPU0 is scheduled, releases the spinlock, goes to sleep. The host scheduler
looks for another thread to schedule. VCPU1 and VCPU2 have just been busy-waiting so they get
penalised. VCPU3, VCPU4, VCPU5, etc., each get scheduled in turn, each run until they hit the
spinlock in question, and start busy waiting. The cascade can continue until we run out of
virtual CPUs. If the rate at which CPUs manage to acquire the lock is slower than the rate at
which CPUs attempt to acquire the lock, the cascade can continue forever!

Although on a general-purpose system the likelihood that all the CPUs would keep trying to
acquire the same spinlock is pretty small, virtual guests are often used to run specific
workloads. Since all the virtual processors are doing the same kind of work, it is quite
likely that they would all keep contending for the same spinlock, which is exactly the situation
that ticket spinlocks are supposed to help with!

Probably anyone who's running MP guest kernels has already got each virtual CPU pinned to a
different real CPU, or is running with realtime scheduling, precisely to avoid this kind of
problem. If not, then ticket spinlocks should provide them with all the motivation they need!
:-)

Ticket spinlocks and MP-guest kernels

Posted Feb 14, 2008 16:38 UTC (Thu) by PaulMcKenney (✭ supporter ✭, #9624)

Gang scheduling within the hypervisor would be one way to avoid this issue, and without the
need to lock the VCPUs to real CPUs.

