LWN: Comments on "The big kernel lock strikes again"
https://lwn.net/Articles/281938/

This is a feed of the comments posted to the LWN article "The big kernel lock strikes again".

Comment by vonbrand (https://lwn.net/Articles/282873/), Sun, 18 May 2008 01:18:50 +0000:

For simple queueing models at least (Poisson arrivals, exponential service times), the only thing that matters is the number of customers in the queue, not the order in which they are served.

Comment by giraffedata (https://lwn.net/Articles/282763/), Fri, 16 May 2008 17:48:00 +0000:

> I can't make my mind up whether comparable systems in real life (e.g. bars or food kiosks) have greater throughput when there's an ordered queue or a mess of people waiting.

How could there possibly be higher throughput with the ordered queue? Because it takes time to figure out who's next in the mess of people? That doesn't have an analog in these lock mechanisms.

> I suspect that having people wait in line allows them to think of other things and not be ready when their turn comes -- a phenomenon with parallels to the cold caches mentioned above.

This analogy suggests a sophisticated, near-optimal way to address the issue. At the kiosk, I don't think this effect actually happens with the ordered queue, because you can see your turn coming up and you get ready. If a waiter for a lock could, shortly before the lock becomes available, transform from waiting on a semaphore to spinning, he would be ready to use the lock the moment it becomes available but not be able to jump much ahead of his fair ordering. Now if the dispatcher could just reload caches at each dispatch, it would be great.
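A rough sketch of that last idea -- not the kernel's actual semaphore code, just a toy userspace illustration in C -- is a FIFO ticket lock in which a waiter far back in the queue sleeps, but the next-in-line waiter spins, so it is cache-hot and can take the lock the instant it is released without being able to jump its place in line. The names and the 100-microsecond nap are invented for the sketch.

    /*
     * Toy illustration of "sleep while far back in the queue, spin once
     * you are next in line".  Userspace sketch, not kernel code.
     */
    #define _POSIX_C_SOURCE 199309L
    #include <stdatomic.h>
    #include <time.h>

    struct ticket_lock {
        atomic_uint next_ticket;   /* ticket handed to each new waiter */
        atomic_uint now_serving;   /* ticket currently allowed to hold the lock */
    };

    static void nap(void)
    {
        struct timespec ts = { .tv_sec = 0, .tv_nsec = 100 * 1000 }; /* 100 us */
        nanosleep(&ts, NULL);
    }

    static void ticket_lock_acquire(struct ticket_lock *lock)
    {
        unsigned int me = atomic_fetch_add(&lock->next_ticket, 1);

        for (;;) {
            unsigned int serving = atomic_load(&lock->now_serving);

            if (serving == me)
                return;        /* our turn: we now hold the lock */
            if (me - serving == 1)
                continue;      /* next in line: spin so we are ready immediately */
            nap();             /* far back in the queue: sleep instead of burning CPU */
        }
    }

    static void ticket_lock_release(struct ticket_lock *lock)
    {
        atomic_fetch_add(&lock->now_serving, 1);
    }

Fairness is preserved because only the lock holder advances now_serving; the sleep-versus-spin choice changes only how quickly the next waiter reacts when its turn arrives, which is the "get ready as your turn approaches" behaviour described in the comment above.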
Comment by dmag (https://lwn.net/Articles/282497/), Thu, 15 May 2008 14:25:38 +0000:

I think a better analogy would be a restaurant that gives you a buzzer that alerts you when your table is ready. Only in this case, the buzzer is really a pager that works anywhere in the world, so customers will go home or run errands while waiting. This causes a lot of latency (tables sit unoccupied for huge stretches because the 'next' customer is not close by). You can solve the problem by not having a long-range buzzer (i.e. lock the waiting programs in memory to prevent them from being swapped out -- but this would waste memory, since it could be hours before the resource is ready, and programs that don't need the resource could use the extra memory), or you could simply use the buzzer to say "the next table is free; if you can't come quickly, we'll give it to someone else and buzz you later".

Comment by k3ninho (https://lwn.net/Articles/282468/), Thu, 15 May 2008 11:55:27 +0000:

> [T]he thread at the head of the queue only gets the semaphore once it starts running and actually claims it; if it's too slow, somebody else might get there first. In human interactions, this sort of behavior is considered impolite (in some cultures, at least), though it is far from unknown.

I can't make my mind up whether comparable systems in real life (e.g. bars or food kiosks) have greater throughput when there's an ordered queue or a mess of people waiting. I suspect that having people wait in line allows them to think of other things and not be ready when their turn comes -- a phenomenon with parallels to the cold caches mentioned above.

K3n.

Comment by dwheeler (https://lwn.net/Articles/282377/), Thu, 15 May 2008 02:17:23 +0000:

I like Ingo Molnar's approach to the BKL: http://lwn.net/Articles/282319/
Basically, it's REALLY hard right now to eliminate the BKL, so Ingo's first step is to make it much easier.
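Returning to the behaviour quoted in k3ninho's comment and to dmag's "buzz you later" suggestion: the key point is that waking a sleeper is only an invitation, not a grant of ownership. A minimal userspace sketch of such a lock, written here with C11 atomics and pthreads rather than the kernel's own semaphore implementation (all names are made up for illustration), might look like this:

    /*
     * Toy "stealable" lock: release marks the lock free and buzzes one
     * sleeper, but ownership goes to whichever thread wins the
     * compare-and-swap, so a slow waiter can be overtaken and simply
     * goes back to sleep.  Userspace sketch, not kernel code.
     */
    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdbool.h>

    struct stealable_lock {
        atomic_bool locked;            /* true while somebody holds the lock */
        pthread_mutex_t wait_mutex;    /* protects the condition variable below */
        pthread_cond_t  wait_cond;     /* sleepers park here */
    };

    static struct stealable_lock demo_lock = {   /* example initialisation */
        .locked     = false,
        .wait_mutex = PTHREAD_MUTEX_INITIALIZER,
        .wait_cond  = PTHREAD_COND_INITIALIZER,
    };

    static bool try_grab(struct stealable_lock *l)
    {
        bool expected = false;
        return atomic_compare_exchange_strong(&l->locked, &expected, true);
    }

    static void stealable_lock_acquire(struct stealable_lock *l)
    {
        while (!try_grab(l)) {
            /* Lost the race, perhaps to a thread that never slept at all:
             * park until the next release and then try again. */
            pthread_mutex_lock(&l->wait_mutex);
            if (atomic_load(&l->locked))
                pthread_cond_wait(&l->wait_cond, &l->wait_mutex);
            pthread_mutex_unlock(&l->wait_mutex);
        }
    }

    static void stealable_lock_release(struct stealable_lock *l)
    {
        atomic_store(&l->locked, false);     /* mark free: anyone may grab it now */
        pthread_mutex_lock(&l->wait_mutex);
        pthread_cond_signal(&l->wait_cond);  /* buzz one sleeper; no guarantee it wins */
        pthread_mutex_unlock(&l->wait_mutex);
    }

Because release only clears the flag and signals, a newcomer that never slept can take the lock before the woken thread gets scheduled; the woken thread then fails its compare-and-swap and parks again -- exactly the "if it's too slow, somebody else might get there first" case the article describes.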