Hugepages are a big deal for mobile SoCs. Designers preallocate large chunks of physically contiguous memory at boot so that their encode/decode blocks can operate on contiguous buffers, and that memory stays locked away from the rest of the system forever. This drives up system cost because manufacturers have to fit more memory than they'd like to.
Some SoC manufacturers have started using IOMMUs to map memory for these blocks, but they're running up against limited TLB depth, which they work around by mapping with hugepages instead of regular pages. This support should allow those IOMMUs to map memory at runtime with hugepages and, in theory, let manufacturers ship less memory. Of course that won't happen, since hugepages are very scarce.
I wrote an IOMMU prototype that used its own allocator and presented it at OLS in 2010: The Virtual Contiguous Memory Manager. I think the Samsung guys put together something based on its ideas.