
Beware fragmentation


Posted Sep 11, 2022 0:09 UTC (Sun) by jreiser (subscriber, #11027)
In reply to: The transparent huge page shrinker by WolfWings
Parent article: The transparent huge page shrinker

Often I get fewer huge pages than requested, even with 32GB of RAM on a (now) lightly-loaded system that has been up for a week or so:

    system("echo \"requested 32MB of anon huge pages:\";\
            grep -i hugepage /proc/meminfo");
    //printf( "Go run: grep -i hugepage /proc/meminfo\nPausing for 60 seconds.\n" );
    //sleep( 60 );

requested 32MB of anon huge pages:
AnonHugePages:      6144 kB
ShmemHugePages:        0 kB
FileHugePages:         0 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB

Also remember: #include <stdio.h> if printf remains in the source.



Beware fragmentation

Posted Sep 12, 2022 8:25 UTC (Mon) by WolfWings (subscriber, #56790) [Link]

Sorry, minor oversight on my part, since you do need to fault the RAM in before it exists. The program was literally typed from memory as an example, sorry if I missed an include. :P
#ifdef MADV_HUGEPAGE
    if ( x != NULL ) {
        madvise( x, size, MADV_HUGEPAGE );
        for ( int i = 0; i < size; i += alignment ) {
            ((char *)x)[i] = 0;
        }
    }
#endif
If you fault the pages in before the madvise, you have to wait for the THP scanner (khugepaged) to circle back around and collapse them. But if you madvise before faulting, each fault can be satisfied with a THP directly. You also need far fewer page faults to actually commit all the RAM you allocated, since Linux is (too) aggressive about overcommit until pages are actually accessed.

