Am I right in believing that if this infrastructure were in place, the kernel could in theory move towards using huge pages wherever possible in the VM? In other words, instead of hacks like hugetlbfs, the kernel could aim to allocate chunks of 4MB (or whatever the large page size is) as single pages, only using 4KB pages for smaller allocs?
Obviously, I'm glossing over issues like memory fragmentation, allocation efficiency, how to split a 4MB page into 4KB pages and merge them back when possible, and whether it's worth defragmenting physical memory to assemble 4MB pages (less of an issue if you have a block-move engine).
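
To make the contrast concrete, here is a rough sketch (my own illustration, not anything from the kernel source) of what the explicit hugetlbfs-style path looks like from user space, assuming a kernel with MAP_HUGETLB support and some huge pages already reserved via /proc/sys/vm/nr_hugepages; the question above is whether the VM could end up doing this transparently for ordinary allocations instead:

    /* Sketch: explicitly requesting a huge-page-backed anonymous mapping.
     * Assumes MAP_HUGETLB support and reserved huge pages; the actual huge
     * page size is architecture-dependent. */
    #define _GNU_SOURCE
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/mman.h>

    #define MAP_SIZE (4UL * 1024 * 1024)    /* one 4MB huge page */

    int main(void)
    {
        /* Ask the kernel for a huge-page-backed anonymous mapping. */
        void *buf = mmap(NULL, MAP_SIZE, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
        if (buf == MAP_FAILED) {
            perror("mmap(MAP_HUGETLB)");    /* fails if no huge pages are reserved */
            return EXIT_FAILURE;
        }

        memset(buf, 0, MAP_SIZE);           /* touch the mapping to fault it in */
        printf("huge-page mapping at %p\n", buf);

        munmap(buf, MAP_SIZE);
        return EXIT_SUCCESS;
    }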