
An Introduction to Full Virtualization With Xen (Linux.com)


Posted Oct 23, 2012 20:29 UTC (Tue) by butlerm (subscriber, #13312)
Parent article: An Introduction to Full Virtualization With Xen (Linux.com)

The article doesn't really get into what "PVH" is supposed to be, unfortunately, and it is not at all obvious what the difference between "PVHVM" and "PVH" is, other than that these appear to be Xen-specific terms.

There is a little bit more information here:
http://blog.xen.org/index.php/2012/09/21/xensummit-sessio...



An Introduction to Full Virtualization With Xen (Linux.com)

Posted Oct 23, 2012 21:26 UTC (Tue) by dvrabel (subscriber, #9500)

The post on blog.xen.org includes a summary mentioning a part 2, which will cover PVHVM and PVH.

An Introduction to Full Virtualization With Xen (Linux.com)

Posted Oct 24, 2012 22:47 UTC (Wed) by LarsKurth (guest, #87439)

There will be a second part next week, which will cover the missing bits.

An Introduction to Full Virtualization With Xen (Linux.com)

Posted Oct 24, 2012 15:08 UTC (Wed) by aliguori (subscriber, #30636)

I think it's primarily about guest paging modes. A guest page table would normally have a mapping of guest virtual addresses (GVA) to guest physical addresses (GPA).

But the hardware (generally) doesn't know how GPAs map to host physical addresses (HPAs) so the hypervisor needs to somehow generate a GVA->HPA mapping.

The traditional approach is shadow paging. This involves tricks in the hypervisor to watch the guest's GVA->GPA tables and generate GVA->HPA tables on the fly. This means maintaining two copies of the page tables, and it is pretty slow.
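A toy sketch of what shadow paging amounts to (all page-frame numbers here are made-up illustrative values, not anything Xen actually uses):

```python
# Toy model of shadow paging. The guest maintains a GVA->GPA table;
# the hypervisor keeps a private GPA->HPA table and, whenever it
# catches the guest updating its tables, recomputes a shadow
# GVA->HPA table that the MMU actually walks.

def build_shadow(gva_to_gpa, gpa_to_hpa):
    """Compose the two mappings into the shadow table."""
    return {gva: gpa_to_hpa[gpa] for gva, gpa in gva_to_gpa.items()}

guest_pt = {0: 2, 1: 0, 2: 3}    # guest-maintained GVA->GPA
p2m = {0: 7, 1: 5, 2: 6, 3: 4}   # hypervisor-private GPA->HPA

shadow = build_shadow(guest_pt, p2m)
print(shadow)  # {0: 6, 1: 7, 2: 4}
```

The expensive part is that `build_shadow` has to be re-run (and the guest's table writes trapped) every time the guest touches its page tables.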

Xen PV took a different approach to this problem, called direct paging. It exposed a GPA->HPA mapping to the guest (this is the pfn2mfn table) and let the guest be responsible for creating GVA->HPA tables. The details are tricky, but it was much faster than shadow paging.
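The same toy model, but with the composition moved to the guest side, is roughly what direct paging looks like (page numbers are again purely illustrative):

```python
# Toy model of Xen PV direct paging. The hypervisor exposes the real
# GPA->HPA mapping (the pfn->mfn table) to the guest, and the guest
# itself writes machine frame numbers into its page tables, producing
# a GVA->HPA table the hardware can walk with no shadow copy.

def guest_build_pt(gva_to_pfn, pfn_to_mfn):
    """The guest composes the mappings itself; in real Xen each
    resulting table write is validated by the hypervisor."""
    return {gva: pfn_to_mfn[pfn] for gva, pfn in gva_to_pfn.items()}

pfn_to_mfn = {0: 7, 1: 5, 2: 6, 3: 4}  # exposed by the hypervisor
guest_pt = guest_build_pt({0: 2, 1: 0}, pfn_to_mfn)
print(guest_pt)  # {0: 6, 1: 7}
```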

But this was state of the art in 2005. Hardware came along and solved this problem by providing a way to tell the MMU about the GPA->HPA mapping directly, so that the hypervisor no longer needed to do any of this. This is known as EPT or NPT, depending on the vendor.
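A rough sketch of the nested walk the hardware performs with EPT/NPT (the table contents are illustrative):

```python
# Toy model of a nested (EPT/NPT) page walk: on each translation the
# hardware first walks the guest's own GVA->GPA table, then the
# hypervisor's GPA->HPA table. Neither side maintains shadow copies.

def nested_translate(gva, guest_pt, ept):
    gpa = guest_pt[gva]   # first level: guest-controlled
    return ept[gpa]       # second level: hypervisor-controlled

guest_pt = {0: 2, 1: 0}          # GVA->GPA, owned by the guest
ept = {0: 7, 1: 5, 2: 6, 3: 4}   # GPA->HPA, owned by the hypervisor
print(nested_translate(0, guest_pt, ept))  # 6
```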

And since then, direct paging has actually been *slower* than using EPT/NPT.

So this new Xen mode exposes a fake GPA->HPA table to the guest (pfn2mfn) that is an identity mapping (so GPA->GPA).

That means the guest is really creating a GVA->GPA table and Xen can take advantage of the EPT/NPT hardware support.
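Putting those pieces together, a sketch of the identity-mapping trick (values again illustrative):

```python
# Toy model of the identity pfn->mfn table: the "GPA->HPA" map handed
# to the guest is the identity, so the table the guest builds really
# maps GVA->GPA, and EPT/NPT supplies the real GPA->HPA step.

identity_p2m = {pfn: pfn for pfn in range(4)}

gva_to_gpa = {0: 2, 1: 0}
# The guest thinks it is writing machine frames, but gets GPAs back.
guest_pt = {gva: identity_p2m[gpa] for gva, gpa in gva_to_gpa.items()}
assert guest_pt == gva_to_gpa  # identity: the table holds GPAs

# Hardware then applies the hypervisor's EPT/NPT table per access.
ept = {0: 7, 1: 5, 2: 6, 3: 4}
print(ept[guest_pt[0]])  # GVA 0 -> GPA 2 -> HPA 6
```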

But you already get this with HVM mode (and KVM has done it forever). So why even bother? There are tons of xenpv guests out there already.

Poor design choices can create a huge amount of work later, which is what has happened here. In the early days of KVM, the lack of direct paging was always thrown around as a major disadvantage.

Funny how these things work out in the long run :-)


Copyright © 2025, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds