From: Tejun Heo <firstname.lastname@example.org>
Subject: [PATCHSET percpu#for-next] implement and use sparse embedding first chunk allocator
Date: Tue, 21 Jul 2009 19:25:59 +0900
This patchset teaches the percpu allocator how to manage very sparse
units, teaches vmalloc how to allocate congruent sparse vmap areas,
and combines the two to extend the embedding allocator so that it can
embed sparse unit addresses. This basically implements Christoph's
sparse embedding idea. It allows NUMA configurations to use
bootmem-allocated memory directly, as non-NUMA machines do with the
embedding allocator.
Setting up the first chunk basically consists of allocating memory for
each cpu and then building a percpu configuration so that the first
chunk is composed of those memory areas. This means there can be huge
holes between units, and chunks may overlap each other.
When further chunks are necessary, pcpu_get_vm_areas() is called with
parameters specifying how many areas are necessary, how large each
should be and how far apart they are from each other. The function
scans the vmalloc address space top-down looking for matching holes
and returns an array of vmap areas. As the newly allocated areas have
exactly the same offsets as the first chunk, the rest is pretty
straightforward.
This has the following benefits.
* No special remapping necessary. Arch code doesn't need to change
its address mapping or anything; it just needs to inform the percpu
allocator how the percpu areas end up laid out. The percpu allocator
will take any layout.
* No additional TLB pressure. Both page and large page remapping add
TLB pressure. With embedding there's no overhead; whatever
translations are used for the linear mapping are used as-is.
* Removes dup-mapping. Large page remapping ends up mapping the same
page twice. This causes subtle problems on x86 when page attributes
need to be changed: the maps need to be looked up and split into page
mappings, which is a bit fragile. As embedding doesn't remap
anything, this problem doesn't exist.
The only restriction is that the vmalloc area needs to be huge - at
least orders of magnitude larger than the distances between NUMA
nodes. For 64bit machines this isn't a problem, but on 32bit NUMA
machines address space is a scarce resource, so for x86_32 NUMA the
page mapping allocator is used instead. The reason for choosing page
over large page is that page is far simpler and the advantage of
large page isn't very clear.
0001 fixes a locking bug on the reclaim path which was introduced by
an earlier change.
0002-0007 are misc changes: the 4k allocator is renamed to page,
messages are made prettier and more informative, unused first chunk
allocators are no longer built, and so on. Nothing really drastic,
just small cleanups to ease further changes.
0008-0009 prepare for later changes: @align is added to
pcpu_fc_alloc and functions are relocated.
0010 changes how the first chunk configuration is passed to
pcpu_setup_first_chunk(). All information is collected into the
pcpu_alloc_info struct, including the unit grouping information which
used to be lost in the process. This change gives the percpu
allocator enough information to allocate congruent vmap areas.
0011-0012 prepare percpu for sparse groups and the units in them.
Offset information is added and used to calculate addresses.
0013-0014 implement pcpu_get_vm_areas(), which allocates congruent
vmap areas.
0015-0016 teach percpu how to use multiple vm areas to allow sparse
groups and extend the embedding allocator so that it knows how to
embed sparse first chunks.
0017 converts x86_64 NUMA to use the embedding allocator and x86_32
NUMA to use the page allocator.
0018 kills the now unused lpage allocator and the related page
attribute handling code.
0019 converts sparc64 to use the embedding allocator.
0020 converts powerpc64 to the dynamic percpu allocator using the
embedding allocator.
After this series, only ia64 is left with the static allocator. I
have the patch but don't have a machine to verify it on; I will post
it once it can be tested.
This patchset is on top of
+ percpu-fix-sparse-possible-cpu-map-handling patchset
+ pulled into percpu#for-next (457f82bac659745f6d5052e4c493d92d62722c9c)
and available in the following git tree. Please note that the
following tree is temporary and will be rebased.
Diffstat follows. Only 112 lines added. :-)
Documentation/kernel-parameters.txt | 11
arch/powerpc/Kconfig | 4
arch/powerpc/kernel/setup_64.c | 61 +
arch/sparc/Kconfig | 3
arch/sparc/kernel/smp_64.c | 124 ---
arch/x86/Kconfig | 6
arch/x86/kernel/setup_percpu.c | 201 +-----
arch/x86/mm/pageattr.c | 20
include/linux/percpu.h | 105 +--
include/linux/vmalloc.h | 6
mm/percpu.c | 1139 +++++++++++++++++-------------------
mm/vmalloc.c | 338 ++++++++++
12 files changed, 1065 insertions(+), 953 deletions(-)
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to firstname.lastname@example.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/