Ticket spinlocks and MP-guest kernels
Posted Feb 14, 2008 7:53 UTC (Thu) by goaty (guest, #17783)
In reply to: Ticket spinlocks and MP-guest kernels by eSk
Parent article: Ticket spinlocks
You could get a lovely cascade going. VCPU0 grabs the spinlock, then is pre-empted. VCPU1 spends its whole time slice waiting for the spinlock. VCPU2 is scheduled, and starts queuing for the spinlock. VCPU0 is scheduled, releases the spinlock, goes to sleep. The host scheduler looks for another thread to schedule. VCPU1 and VCPU2 have just been busy-waiting, so they get penalised. VCPU3, VCPU4, VCPU5, etc., each get scheduled in turn, each run until they hit the spinlock in question, and start busy-waiting. The cascade can continue until we run out of virtual CPUs. If the rate at which CPUs manage to acquire the lock is slower than the rate at which CPUs attempt to acquire it, the cascade can continue forever!

Although on a general-purpose system the likelihood that all the CPUs would keep trying to acquire the same spinlock is pretty small, virtual guests are often used to run specific workloads. Since all the virtual processors are doing the same kind of work, it is quite likely that they would all keep contending for the same spinlock - which is exactly the situation that ticket spinlocks are supposed to help with!

Probably anyone who's running MP guest kernels has already locked each virtual CPU to a different real CPU, or is running with realtime scheduling, precisely to avoid this kind of problem. If not, then ticket spinlocks should provide them with all the motivation they need! :-)
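For reference, here is a minimal C11 sketch of the FIFO behaviour a ticket spinlock imposes (illustrative only, not the kernel's actual x86 implementation): each waiter takes a ticket and spins until the "now serving" counter reaches it, so no later arrival can slip past a preempted lock holder - which is why every queued VCPU burns its time slice in the scenario above.

    #include <stdatomic.h>

    struct ticket_lock {
            atomic_uint next;       /* next ticket to hand out */
            atomic_uint serving;    /* ticket currently allowed to hold the lock */
    };

    static void ticket_lock_acquire(struct ticket_lock *l)
    {
            unsigned int my_ticket = atomic_fetch_add(&l->next, 1);

            /* Strict FIFO: we cannot jump the queue, even if the current
             * holder has been preempted by the host scheduler. */
            while (atomic_load(&l->serving) != my_ticket)
                    ;       /* busy-wait; burns the VCPU's time slice */
    }

    static void ticket_lock_release(struct ticket_lock *l)
    {
            atomic_fetch_add(&l->serving, 1);
    }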
Ticket spinlocks and MP-guest kernels
Posted Feb 14, 2008 16:38 UTC (Thu) by PaulMcKenney (✭ supporter ✭, #9624)
Gang scheduling within the hypervisor would be one way to avoid this issue, and without the need to lock the VCPUs to real CPUs.
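For illustration, a toy C sketch of the gang-scheduling idea (a hypothetical scheduler loop, not any real hypervisor's code): when a guest is chosen for a time slice, all of its runnable VCPUs are dispatched together, so a VCPU holding a spinlock is never left preempted while its ticket-holding waiters spin.

    #include <stdio.h>

    #define MAX_VCPUS 8

    struct guest {
            int id;
            int nr_vcpus;
            int vcpu_runnable[MAX_VCPUS];
    };

    /* Stand-in for handing a VCPU to a physical CPU for this time slice. */
    static void dispatch_vcpu(const struct guest *g, int vcpu, int pcpu)
    {
            printf("slot: guest %d VCPU %d -> physical CPU %d\n",
                   g->id, vcpu, pcpu);
    }

    /* Gang scheduling in miniature: every runnable VCPU of the chosen
     * guest runs in the same slot, or the guest waits for a slot wide
     * enough to hold them all. */
    static void gang_schedule_slot(const struct guest *g)
    {
            int pcpu = 0;

            for (int v = 0; v < g->nr_vcpus; v++)
                    if (g->vcpu_runnable[v])
                            dispatch_vcpu(g, v, pcpu++);
    }

    int main(void)
    {
            struct guest g = { .id = 0, .nr_vcpus = 4,
                               .vcpu_runnable = { 1, 1, 1, 1 } };
            gang_schedule_slot(&g);
            return 0;
    }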