Subject: [PATCH 0/6] pseudo-interleaving for automatic NUMA balancing
Date: Fri, 17 Jan 2014 01:17:30 -0500
Cc: firstname.lastname@example.org, email@example.com, firstname.lastname@example.org, email@example.com, firstname.lastname@example.org
The current automatic NUMA balancing code base has issues with workloads that do not fit on one NUMA node. Page migration is slowed down, but memory distribution between the nodes where the workload runs is essentially random, often resulting in a suboptimal amount of memory bandwidth being available to the workload.

In order to maximize performance of workloads that do not fit in one NUMA node, we want to satisfy the following criteria:

1) keep private memory local to each thread
2) avoid excessive NUMA migration of pages
3) distribute shared memory across the active nodes, to maximize
   memory bandwidth available to the workload

This patch series identifies the NUMA nodes on which the workload is actively running, and balances (somewhat lazily) the memory between those nodes, satisfying the criteria above.

As usual, the series has had some performance testing, but it could always benefit from more testing, on other systems.

Some performance numbers, with two 40-warehouse specjbb instances on an 8 node system with 10 CPU cores per node, using a pre-cleanup version of these patches, courtesy of Chegu Vinod:

numactl manual pinning
spec1.txt: throughput = 755900.20 SPECjbb2005 bops
spec2.txt: throughput = 754914.40 SPECjbb2005 bops

NO-pinning results (Automatic NUMA balancing, with patches)
spec1.txt: throughput = 706439.84 SPECjbb2005 bops
spec2.txt: throughput = 729347.75 SPECjbb2005 bops

NO-pinning results (Automatic NUMA balancing, without patches)
spec1.txt: throughput = 667988.47 SPECjbb2005 bops
spec2.txt: throughput = 638220.45 SPECjbb2005 bops

No Automatic NUMA and NO-pinning results
spec1.txt: throughput = 544120.97 SPECjbb2005 bops
spec2.txt: throughput = 453553.41 SPECjbb2005 bops

My own performance numbers are not as relevant, since I have been running with a more hostile workload on purpose, and I have run into a scheduler issue that caused the workload to run on only two of the four NUMA nodes on my test system...
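To make the intended placement policy concrete, here is a minimal userspace sketch of the idea described above, not the actual kernel implementation: private pages stay on the faulting thread's node, while shared pages are pseudo-interleaved across the set of nodes the workload actively runs on. The names (task_numa_state, page_target_node, MAX_NODES) are invented for this illustration.

/*
 * Illustrative sketch only -- not the kernel code in this series.
 * Models the placement criteria: keep private memory local, spread
 * shared memory over the workload's active NUMA nodes.
 */
#include <stdbool.h>
#include <stdio.h>

#define MAX_NODES 8

struct task_numa_state {
	bool active_nodes[MAX_NODES];	/* nodes the workload actually runs on */
	int nr_active;
};

/* Pick a destination node for a page at NUMA hinting fault time. */
static int page_target_node(const struct task_numa_state *ts,
			    unsigned long pfn, bool page_is_shared,
			    int faulting_node)
{
	/* Criterion 1: private memory stays local to the faulting thread. */
	if (!page_is_shared || ts->nr_active == 0)
		return faulting_node;

	/*
	 * Criterion 3: shared memory is spread across the active nodes.
	 * Hashing the pfn onto the active set gives a stable, roughly
	 * even distribution, maximizing usable memory bandwidth.
	 */
	int idx = pfn % ts->nr_active;
	for (int node = 0; node < MAX_NODES; node++) {
		if (ts->active_nodes[node] && idx-- == 0)
			return node;
	}
	return faulting_node;	/* fallback: leave the page where it is */
}

int main(void)
{
	struct task_numa_state ts = {
		.active_nodes = { [2] = true, [5] = true },
		.nr_active = 2,
	};

	/* A shared page is steered to one of the active nodes (2 or 5)... */
	printf("shared page  -> node %d\n",
	       page_target_node(&ts, 0x1234, true, 0));
	/* ...while a private page stays on the node that faulted on it. */
	printf("private page -> node %d\n",
	       page_target_node(&ts, 0x1234, false, 0));
	return 0;
}

Criterion 2 (avoiding excessive migration) is not shown here; in practice that would mean rate-limiting and only moving pages whose current node is outside the active set.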