From: Lee Schermerhorn <firstname.lastname@example.org>
To: email@example.com, firstname.lastname@example.org
Subject: [PATCH 0/3] hugetlb: V2 constrain allocation/free based on task mempolicy
Date: Wed, 08 Jul 2009 15:24:30 -0400
Cc: <email@example.com>, Mel Gorman <firstname.lastname@example.org>,
	Nishanth Aravamudan <email@example.com>,
	David Rientjes <firstname.lastname@example.org>,
	Adam Litke <email@example.com>,
	Andy Whitcroft <firstname.lastname@example.org>, <email@example.com>
PATCH 0/3 hugetlb: constrain allocation/free based on task mempolicy
Against: 25jun09 mmotm atop the "hugetlb: balance freeing..."
This is V2 of a series of patches to constrain the allocation and
freeing of persistent huge pages using the NUMA mempolicy of the
task modifying "nr_hugepages". This series is based on Mel
Gorman's suggestion to use task mempolicy.
V2 addresses review comments from Mel Gorman and Andrew Morton.
See the patch description of patch 2/3.
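As a usage sketch (node numbers and page counts here are hypothetical,
and numactl is assumed as the way to set the task mempolicy), the idea
is that the mempolicy of the task writing "nr_hugepages" constrains
which nodes the pool adjustment touches:

```shell
# Hypothetical example: with this series applied, run the write to
# nr_hugepages under a bind mempolicy so that persistent huge pages
# are allocated only on the allowed nodes (0 and 1 here):
numactl --membind=0,1 sh -c 'echo 32 > /proc/sys/vm/nr_hugepages'

# Similarly, freeing under a mempolicy restricted to node 1 shrinks
# the pool only on that node:
numactl --membind=1 sh -c 'echo 16 > /proc/sys/vm/nr_hugepages'
```

Without a mempolicy, the write behaves as before and the pages are
interleaved across all online nodes.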
I have some concerns about a subtle change in behavior [see patch
2/3 and the updated documentation] and the fact that
this mechanism ignores some of the semantics of the mempolicy
mode [again, see the doc]. However, this method seems to work
fairly well. And, IMO, the resulting code doesn't look all that bad.
A couple of limitations in this version:
1) I haven't implemented a boot time parameter to constrain the
boot time allocation of huge pages. This can be added if
anyone feels strongly that it is required.
2) I have not implemented a per node nr_overcommit_hugepages as
David Rientjes and I discussed earlier. Again, this can be
added and specific nodes can be addressed using the mempolicy
as this series does for allocation and free. However, after
some experience with the libhugetlbfs test suite, specifically
attempting to run the test suite constrained by mempolicy and
a cpuset, I'm thinking that per node overcommit limits might
not be such a good idea. This would require an application
[or the library] to sum the per node limits over the allowed
nodes and possibly compare to global limits to determine the
available resources. Per cpuset limits might work better.
This area requires more investigation, but this patch series
doesn't seem to make things worse than they already are.
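To illustrate the bookkeeping burden described in limitation 2 above, a
minimal sketch of what an application [or the library] would have to do
if overcommit limits were per node (all names and numbers here are
hypothetical; no such per node limits exist in this series):

```python
# Hypothetical sketch: sum per-node overcommit limits over the nodes
# allowed by the task's mempolicy/cpuset, then clamp by any global
# limit, to determine the overcommit pages actually available.

def available_overcommit(per_node_limit, allowed_nodes, global_limit=None):
    """Return the overcommit huge pages usable by a task restricted
    to allowed_nodes, given hypothetical per-node limits."""
    total = sum(per_node_limit.get(n, 0) for n in allowed_nodes)
    if global_limit is not None:
        total = min(total, global_limit)
    return total

# Example: nodes 0 and 2 allowed, 16 pages per node, global cap of 24
# pages -> only 24 pages are available despite a per-node sum of 32.
print(available_overcommit({0: 16, 1: 16, 2: 16}, {0, 2}, global_limit=24))
```

A per-cpuset limit would avoid this summation entirely, since the
task could read a single value for its allowed set.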
To unsubscribe from this list: send the line "unsubscribe linux-numa" in
the body of a message to firstname.lastname@example.org
More majordomo info at http://vger.kernel.org/majordomo-info.html