LWN: Comments on "Transparent huge pages in 2.6.38" http://lwn.net/Articles/423584/ This is a special feed containing comments posted to the individual LWN article titled "Transparent huge pages in 2.6.38". Transparent huge pages in 2.6.38 http://lwn.net/Articles/434411/rss 2011-03-20T20:29:10+00:00 pfefferz <div class="FormattedComment"> Hugepages are a big deal for Mobile SoCs. Designers preallocate large chunks of physical memory to ensure that their encode/decode blocks operate on contiguous memory. This large memory gets locked out of the system forever. This leads to increased system costs because manufacturers need to put down more memory than they'd like to. <br> <p> Some SoC manufacturers have started using IOMMUs to map memory, but they're running up against TLB depth, which they solve by using hugepages instead of regular pages. This support should allow these IOMMUs to map memory at runtime with hugepages and theoretically allow manufacturers to use less memory. Of course that won't happen since hugepages are very scarce. <br> <p> I wrote an IOMMU prototype that used its own allocator, The Virtual Contiguous Memory Manager, and presented it at OLS in 2010. I think the Samsung guys put together something based on its ideas. <br> </div> Hugetlbfs pages are dynamically allocate-able http://lwn.net/Articles/424074/rss 2011-01-21T15:35:14+00:00 emunson <div class="FormattedComment"> The presence of contiguous memory will be entirely dependent on the system and workload. You are correct that allocating huge pages becomes more difficult as memory is fragmented. My reply was to the section of the article that said hugetlbfs-based huge pages must be set aside at boot time, which is not correct for all page sizes.
On systems that support them, 1GB and 16GB pages must be reserved at boot, but 2MB, 4MB, and 16MB pages can be allocated any time there is contiguous space.<br> </div> Hugetlbfs pages are dynamically allocate-able http://lwn.net/Articles/424048/rss 2011-01-21T09:31:08+00:00 jthill <p> I think the <a href="http://lwn.net/Articles/368869/">memory compaction patch</a> is intended to fix that: <blockquote><i>Mel ran some simple tests showing that, with compaction enabled, he was able to allocate over 90% of the system's memory as huge pages while simultaneously decreasing the amount of reclaim activity needed. </i></blockquote> Hugetlbfs pages are dynamically allocate-able http://lwn.net/Articles/424035/rss 2011-01-21T06:38:36+00:00 Tuna-Fish <div class="FormattedComment"> Only if there is contiguous real memory available. Under real-world situations, there rarely is.<br> <p> Just try allocating space on hugetlbfs after running an active web server for a few hours.<br> </div> Hugetlbfs pages are dynamically allocate-able http://lwn.net/Articles/423939/rss 2011-01-20T17:12:28+00:00 emunson <div class="FormattedComment"> Your description of using huge pages via hugetlbfs is not quite correct. Most modern kernels and architectures support dynamically allocating huge pages after boot.<br> </div> Transparent huge pages in 2.6.38 http://lwn.net/Articles/423917/rss 2011-01-20T15:36:03+00:00 Tuna-Fish <div class="FormattedComment"> Transparent hugepage support is very interesting at the moment -- especially because both main x86 vendors are beefing up the support for them in their processors. 
Intel just added real support for 1GiB pages, but AMD takes the jackpot with the DTLB in the upcoming Bulldozer -- 72 L1 entries and 1024 L2 entries, holding any combination of 4KiB, 2MiB or 1GiB pages.<br> <p> </div> Transparent huge pages in 2.6.38 http://lwn.net/Articles/423880/rss 2011-01-20T12:27:35+00:00 rfrancoise Andrea gave a talk on THP at the <a href="http://www.linux-kvm.org/page/KvmForum2010">KVM Forum 2010</a> with some interesting benchmark results: <a href="http://www.linux-kvm.org/wiki/images/9/9e/2010-forum-thp.pdf">slides</a>, <a href="http://vimeo.com/15224470">video</a>. Transparent huge pages in 2.6.38 http://lwn.net/Articles/423844/rss 2011-01-20T06:40:31+00:00 jreiser <div class="FormattedComment"> Huge pages can increase data cache performance by making aliasing (the mapping of address to cache set) more uniform within the huge page, in contrast to the mappings for many equivalent collections of small pages. The difference can be several percent or more.<br> </div>
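None of the comments above include code, but the transparent-huge-page interface they discuss can be exercised from userspace. A minimal sketch, assuming Linux and Python 3.8+ (where `mmap.madvise()` and the `MADV_HUGEPAGE` constant are exposed); the function name is illustrative, not from any comment:

```python
import mmap

def touch_with_thp_hint(size=4 * 1024 * 1024):
    """Map `size` bytes anonymously and hint the kernel to back them with THP.

    MADV_HUGEPAGE is only a hint: the kernel promotes the region to 2 MiB
    pages when THP is enabled ("always" or "madvise" in
    /sys/kernel/mm/transparent_hugepage/enabled) and contiguous physical
    memory can be found or compacted.
    """
    m = mmap.mmap(-1, size)  # anonymous, private, page-aligned mapping
    if hasattr(mmap, "MADV_HUGEPAGE"):  # Linux only, Python >= 3.8
        try:
            m.madvise(mmap.MADV_HUGEPAGE)
        except OSError:
            pass  # THP unavailable; the mapping still works with 4 KiB pages
    m[:] = b"\0" * size  # touch every page so the region is actually faulted in
    return m
```

After faulting the region in, the AnonHugePages counter in /proc/meminfo (or /proc/self/smaps) shows whether huge pages actually backed it. Note this is the transparent path; the hugetlbfs path in emunson's comments is separate, with pages reserved via /proc/sys/vm/nr_hugepages and consumed through a mounted hugetlbfs.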