Beware fragmentation
Posted Sep 11, 2022 0:09 UTC (Sun) by jreiser (subscriber, #11027)
In reply to: The transparent huge page shrinker by WolfWings
Parent article: The transparent huge page shrinker
Often I get fewer huge pages than requested, even with 32 GB of RAM on a (now) lightly loaded system that has been up for a week or so:
system("echo \"requested 32MB of anon huge pages:\"; "
       "grep -i hugepage /proc/meminfo");
/* printf("Go run: grep -i hugepage /proc/meminfo\nPausing for 60 seconds.\n"); */
/* sleep(60); */
requested 32MB of anon huge pages:
AnonHugePages: 6144 kB
ShmemHugePages: 0 kB
FileHugePages: 0 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
Also remember: #include <stdio.h> (and <unistd.h> for sleep) if the commented-out lines are restored.
