From: KAMEZAWA Hiroyuki <firstname.lastname@example.org>
To: "email@example.com" <firstname.lastname@example.org>
Subject: [PATCH 0/4] big chunk memory allocator v4
Date: Fri, 19 Nov 2010 17:10:33 +0900
Cc: "email@example.com" <firstname.lastname@example.org>,
	email@example.com, Bob Liu <firstname.lastname@example.org>,
	email@example.com, firstname.lastname@example.org, email@example.com
Hi, this is an updated version.
No major changes from the last one except for the page allocation function.
The order of patches is:
[1/4] move some functions from memory_hotplug.c to page_isolation.c
[2/4] search physically contiguous range suitable for big chunk alloc.
[3/4] allocate big chunk memory based on memory hotplug(migration) technique
[4/4] modify page allocation function.
I hear there are requirements to allocate a chunk of pages larger than
MAX_ORDER. Now, some (embedded) devices use a big memory chunk. To use the memory,
they hide some memory range with a boot option (mem=) and use the hidden memory
for their own purposes. But this seems to be a missing feature in memory management.
This patch adds
alloc_contig_pages(start, end, nr_pages, gfp_mask)
to allocate a chunk of pages whose length is nr_pages from the [start, end)
physical address range. This uses logic similar to memory unplug, which tries to
offline the pages in [start, end). With this, drivers can allocate a 30M or 128M or
much bigger memory chunk on demand. (I allocated a 1G chunk in my test.)
But yes, because of fragmentation, this cannot guarantee 100% allocation success.
If alloc_contig_pages() is called at system boot or against a movable zone,
the allocation succeeds at a high rate.
I tested this on x86-64, and it seems to work as expected. But feedback from
embedded guys is appreciated because I think they are the main users of this feature.
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to firstname.lastname@example.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/