LWN: Comments on "Split PMD locks" (http://lwn.net/Articles/568076/)
This is a special feed containing comments posted to the individual LWN article titled "Split PMD locks".

Split PMD locks — corbet, 2013-09-27 06:24 (http://lwn.net/Articles/568557/)

"Many fewer mutexes than pages pointing to them" is pretty much the situation. Remember, these locks are only needed for pages holding page tables, not for pages in general.

Split PMD locks — jzbiciak, 2013-09-27 05:53 (http://lwn.net/Articles/568555/)

I had the same thought: it must be a much smaller pool of locks that struct page points to; otherwise the indirection only buys you the cost (in both time and space) of the indirection.

This was one place I was hoping for a link to an LWN article or other thread that explained what was on the other side of that pointer. (I admit, because I didn't want to try to decode the code myself.)

Split PMD locks — ncm, 2013-09-26 21:30 (http://lwn.net/Articles/568520/)

A pointer in struct page to a mutex elsewhere makes the problem worse, unless there are many fewer mutexes than pages pointing to them.

Instead of a lock for each struct page, it should suffice to have a global, fixed-size table of mutexes, with the mutex for a particular page identified by hashing the page identifier. The mutex table just needs to be large compared to the number of CPUs, not the number of pages. Yes, sharing a mutex among multiple pages increases contention, but that can be tuned.

Growing struct page invites apocalypse.

Split PMD locks — ejr, 2013-09-26 16:13 (http://lwn.net/Articles/568453/)

*Some* highly threaded workloads slow down, while others (e.g. Graph500's BFS) run faster with transparent huge pages. The benchmark cited no longer seems to be at the URL, so it's difficult to tell who will be hit.
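A minimal user-space sketch of the hashed-lock-table scheme ncm describes: a global, fixed-size array of mutexes, with a page's lock chosen by hashing its address. All names here (PTL_TABLE_SIZE, ptl_for_page, the hash constant) are illustrative assumptions, not kernel code; the kernel's actual split page table locks work differently.

```c
#include <pthread.h>
#include <stdint.h>

/* Hypothetical fixed-size lock table: sized relative to CPU count,
 * not page count, so struct page need not grow at all. */
#define PTL_TABLE_SIZE 256  /* power of two, large compared to nr_cpus */

static pthread_mutex_t ptl_table[PTL_TABLE_SIZE] = {
    [0 ... PTL_TABLE_SIZE - 1] = PTHREAD_MUTEX_INITIALIZER  /* GNU extension */
};

/* Map a page address to one of the shared mutexes. The same page
 * always hashes to the same lock; distinct pages may share one,
 * trading some contention for constant space. */
static pthread_mutex_t *ptl_for_page(const void *page)
{
    uint64_t h = (uint64_t)(uintptr_t)page >> 12; /* drop in-page offset bits */
    h *= 0x9e3779b97f4a7c15ULL;                   /* Fibonacci-style mixing */
    return &ptl_table[(h >> 32) & (PTL_TABLE_SIZE - 1)];
}
```

Tuning, as the comment notes, is just a matter of enlarging PTL_TABLE_SIZE until the collision-induced contention is acceptable.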