An Introduction to Full Virtualization With Xen (Linux.com)
At XenSummit 2012 in San Diego, Mukesh Rathor from Oracle presented his work on a new virtualization mode, called "PVH". With this addition, there is now a rather dizzying array of different terms thrown about -- "HVM", "PV", "PVHVM", "PVH" -- what do they all mean? And why do we have so many? The reason we have all these terms is that virtualization is no longer binary; there is a spectrum of virtualization, and the different terms are different points along that spectrum.
Posted Oct 23, 2012 20:29 UTC (Tue)
by butlerm (subscriber, #13312)
[Link] (3 responses)
There is a little bit more information here:
Posted Oct 23, 2012 21:26 UTC (Tue)
by dvrabel (subscriber, #9500)
[Link] (1 response)
The post on blog.xen.org includes a summary which mentions part 2 which will cover PVHVM and PVH.
Posted Oct 24, 2012 22:47 UTC (Wed)
by LarsKurth (guest, #87439)
[Link]
Posted Oct 24, 2012 15:08 UTC (Wed)
by aliguori (subscriber, #30636)
[Link]
But the hardware (generally) doesn't know how GPAs map to host physical addresses (HPAs) so the hypervisor needs to somehow generate a GVA->HPA mapping.
The traditional approach is shadow paging. This involves tricks in the hypervisor to watch the GVA->GPA tables and generate GVA->HPA tables on the fly. This means keeping two copies of the page tables and is pretty slow.
Xen PV had a different approach to solve this problem called direct paging. It exposed a GPA->HPA mapping to the guest (this is the pfn2mfn table) and let the guest be responsible for creating GVA->HPA tables. The details are tricky but it was much faster than shadow paging.
But this was state of the art in 2005. Hardware came along and solved this problem by having a way to tell hardware the GPA->HPA mapping so that the hypervisor no longer needed to do any of this. This is known as EPT or NPT depending on the vendor.
And since then, direct paging has actually been *slower* than using EPT/NPT.
So this new Xen mode exposes a fake GPA->HPA table to the guest (pfn2mfn) that is an identity mapping (so GPA->GPA).
That means the guest is really creating a GVA->GPA table and Xen can take advantage of the EPT/NPT hardware support.
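The three schemes described above can be sketched in miniature. This is a hedged toy model (page tables as dicts; all addresses are invented numbers, not real layouts): shadow paging composes the mappings in the hypervisor, direct paging lets the guest do the composition using the real pfn2mfn table, and PVH hands the guest an identity table so its page tables stay GVA->GPA while EPT/NPT hardware applies the real GPA->HPA mapping as a second stage.

```python
# Toy model of shadow paging, direct paging, and the PVH identity p2m.
guest_gva_to_gpa = {0x10: 0x2, 0x11: 0x3}   # the guest's own page table
p2m = {0x2: 0x7, 0x3: 0x9}                   # real GPA->HPA (pfn2mfn) table

# 1. Shadow paging: the hypervisor watches the guest table and builds a
#    combined GVA->HPA table behind the guest's back (a second table copy).
shadow = {gva: p2m[gpa] for gva, gpa in guest_gva_to_gpa.items()}
assert shadow[0x10] == 0x7

# 2. Direct paging (classic Xen PV): the guest sees the real p2m table and
#    writes GVA->HPA entries itself; no second copy to keep in sync.
direct = {gva: p2m[gpa] for gva, gpa in guest_gva_to_gpa.items()}
assert direct == shadow  # same final mapping, but built by the guest

# 3. PVH: the guest is handed an *identity* p2m, so the table it builds is
#    really GVA->GPA; the EPT/NPT hardware then applies the true GPA->HPA
#    mapping as a second translation stage.
identity_p2m = {gpa: gpa for gpa in p2m}
pvh_table = {gva: identity_p2m[gpa] for gva, gpa in guest_gva_to_gpa.items()}
assert pvh_table == guest_gva_to_gpa          # guest table is plain GVA->GPA
hw_view = {gva: p2m[gpa] for gva, gpa in pvh_table.items()}
assert hw_view == shadow                      # hardware completes GVA->HPA
```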
But you already get this with HVM mode (and KVM has done it forever). So why even bother? There are tons of xenpv guests out there already.
Poor design choices can create a huge amount of work later which is what has happened here. In the early days of KVM, the lack of direct paging was always thrown around as a major disadvantage.
Funny how these things work out in the long run :-)
Posted Oct 23, 2012 21:01 UTC (Tue)
by cyanit (guest, #86671)
[Link] (37 responses)
Why haven't they all switched to KVM already?
Posted Oct 23, 2012 21:22 UTC (Tue)
by lutchann (subscriber, #8872)
[Link]
Posted Oct 23, 2012 21:28 UTC (Tue)
by sytoka (guest, #38525)
[Link]
Xen is in mainline Linux now. It's very easy to change from Xen to KVM or LXC... so why change a good solution?
Posted Oct 23, 2012 22:52 UTC (Tue)
by cesarb (subscriber, #6266)
[Link] (10 responses)
AFAIK, KVM needs HVM.
Posted Oct 24, 2012 4:53 UTC (Wed)
by stefanha (subscriber, #55072)
[Link] (9 responses)
No, KVM has always done HVM. HVM means using the hardware virtualization extensions (Intel VMX or AMD SVM). This allows unmodified guest operating systems to run.
Speaking of KVM, there is a project called Xenner to run Xen PV guests on KVM. More info here:
http://kraxel.fedorapeople.org/xenner/
[Disclaimer: I work on KVM]
Posted Oct 24, 2012 7:35 UTC (Wed)
by drago01 (subscriber, #50715)
[Link] (8 responses)
Your answer does not make sense. What he said is that KVM needs (as in requires) hardware virtualization extensions to work, while Xen does not (for paravirtualized guests).
Posted Oct 24, 2012 12:32 UTC (Wed)
by cesarb (subscriber, #6266)
[Link] (7 responses)
Yes, that is what I meant. I did not notice that "needs" could be read in an alternate way (as in "should have" or "is missing"); sorry for the confusion.
The problem is that not all systems can do hardware virtualization. You have older machines from before the virtualization extensions existed, newer machines where for some reason the virtualization extensions are disabled by the BIOS, and Intel processors where the same model may or may not have virtualization extensions (unless you know to check http://ark.intel.com/ before buying).
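One way to check the point above on a given Linux machine is to look at the CPU feature flags in /proc/cpuinfo: "vmx" marks Intel VT-x and "svm" marks AMD-V. A small hedged sketch (the helper name and sample string are mine, not from the thread):

```python
# Check a /proc/cpuinfo "flags" line for hardware virtualization support.
def hw_virt_support(cpuinfo_text):
    flags = set()
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags.update(line.split(":", 1)[1].split())
    if "vmx" in flags:
        return "Intel VT-x"
    if "svm" in flags:
        return "AMD-V"
    return None  # no extensions advertised: KVM's HVM mode unavailable

# On a real system: hw_virt_support(open("/proc/cpuinfo").read())
sample = "flags\t\t: fpu vme de pse tsc msr pae vmx ht"
assert hw_virt_support(sample) == "Intel VT-x"
```

Note that the BIOS can disable the extensions even when the flag is present in the CPU model, so a missing flag here is conclusive but a present one is not always sufficient.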
Posted Oct 24, 2012 14:38 UTC (Wed)
by raven667 (subscriber, #5198)
[Link] (4 responses)
Posted Oct 24, 2012 15:08 UTC (Wed)
by gnb (subscriber, #5132)
[Link] (3 responses)
http://ark.intel.com/products/69669/Intel-Pentium-Process...
is a plausible laptop/low-end desktop CPU, 64-bit, came out this year, no VT-x.
Posted Oct 24, 2012 16:38 UTC (Wed)
by drag (guest, #31333)
[Link]
Intel intentionally disables features to create market segmentation. AMD does not do this and as such AMD is a superior processor for Linux desktop users that don't want to spend lots of money.
The idea of having the possibility of using Xen-style paravirtualized systems is lovely, but in practice it leaves a lot to be desired.
Two of the biggest reasons for using virtualization are dealing with legacy software that requires a specific configuration and running Windows systems on Linux. Neither of those is possible with Xen without VT hardware support.
And if you are in a position to use Xen PV without changing kernels or anything like that, then you will almost always get better performance from something like LXC instead.
Posted Oct 26, 2012 21:47 UTC (Fri)
by jond (subscriber, #37669)
[Link] (1 response)
Posted Oct 26, 2012 22:16 UTC (Fri)
by dlang (guest, #313)
[Link]
at one time Xen was "the way to do virtualization" on Linux, and enterprises that setup their networks at that time aren't willing to change.
Actually, I strongly suspect that most of those organizations are still running the OS versions they originally installed on the systems. But because of the 'installed base', having Xen updates in new versions is 'important' even for companies that aren't running the new versions (after all, they may want to someday, and it shows that they made the right decision way back when).
Posted Oct 26, 2012 21:46 UTC (Fri)
by jond (subscriber, #37669)
[Link] (1 responses)
Posted Oct 27, 2012 1:25 UTC (Sat)
by ixs (subscriber, #47170)
[Link]
Posted Oct 23, 2012 23:55 UTC (Tue)
by Lennie (subscriber, #49641)
[Link] (5 responses)
http://wiki.xen.org/wiki/Remus is like the VMware feature which allows the state of a VM to be replicated over a high-speed link to another machine for failover.
And I keep reading Xen is faster than KVM, but I haven't tested that in my environment yet.
Posted Oct 24, 2012 8:44 UTC (Wed)
by robert_s (subscriber, #42402)
[Link] (1 responses)
From what I've seen, Xen is faster than KVM about half the time, and vice versa the other half.
And it's hard to predict which will be faster for a particular workload.
Posted Oct 24, 2012 9:59 UTC (Wed)
by Lennie (subscriber, #49641)
[Link]
Judging by some of the other things I've seen online, KVM has gotten better; Xen and KVM seem to be getting closer in performance, and I think I've even seen them supporting each other's paravirtualisation APIs.
Posted Oct 24, 2012 13:20 UTC (Wed)
by cas (guest, #52554)
[Link] (2 responses)
For KVM, there's plain old migration.
It's not the same as VM mirroring, but if you don't have the hardware (or the need) for completely transparent VM failover, you can do something similar with virsh save and virsh restore of a currently running VM.
The VM is paused for as long as it takes to save, transfer to another machine, and restore the VM's state.
Works well enough with shared storage (like NFS), and (I haven't tried this) it might even work if you save to stdout, pipe over ssh, and then restore from stdin.
Otherwise if the VM or the server it's running on has died, DRBD or iscsi volumes or even qcow2 on NFS can be used to boot a VM on another server.
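The save/restore flow described above can be sketched as a pair of CLI commands. This is a hedged fragment, not tested here: the guest name `myguest` and the mount point `/mnt/nfs` are invented, and both hosts are assumed to share the NFS mount and the guest's disk image.

```shell
# On the source host: pause the guest and dump its state to shared storage.
virsh save myguest /mnt/nfs/myguest.state

# On the destination host (same NFS mount, same disk image visible):
virsh restore /mnt/nfs/myguest.state
```

If you do have shared storage and want to avoid most of the pause, `virsh migrate --live myguest qemu+ssh://otherhost/system` performs a live migration instead of a save/restore round trip.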
Posted Oct 24, 2012 13:50 UTC (Wed)
by Lennie (subscriber, #49641)
[Link] (1 response)
If you want some form of failover, it is usually better to have two VMs in a proper failover configuration, in a way that fits the software involved.
But the question was why Xen, so I thought I'd mention it. :-)
Posted Oct 24, 2012 15:45 UTC (Wed)
by Lennie (subscriber, #49641)
[Link]
Posted Oct 24, 2012 8:17 UTC (Wed)
by man_ls (guest, #15091)
[Link] (5 responses)
Posted Oct 24, 2012 15:20 UTC (Wed)
by aliguori (subscriber, #30636)
[Link] (4 responses)
But what most people don't understand about Xen is that it's not "part of Linux". The bits that were merged into the kernel in recent years are guest-enablement features only. Xen is a full-blown operating system that has no relationship to Linux at all. It's a microkernel design based on an old research project (search for the Nemesis microkernel if you're interested). Linux only runs as a guest under Xen.
Xen has its own scheduler, its own MMU, its own set of device drivers. By contrast, there is no such thing as the "KVM scheduler". KVM is just a small layer that adds virtualization support to Linux. *Linux is the hypervisor*.
I prefer KVM over Xen for virtualization for the same reason I prefer Linux over FreeBSD for running Apache, or Linux over <insert custom OS> for whatever workload you can think of.
History has shown that collaborating on a general purpose OS wins time and time again over special purpose boutique OSes. That's why many of our cell phones and DVD players run Linux along with most of the Top 500 supercomputers.
You can always make the argument "but you can build a better scheduler for XYZ workloads". It's a short sighted world view that almost always loses over time.
Posted Oct 24, 2012 23:14 UTC (Wed)
by LarsKurth (guest, #87439)
[Link]
The Xen Hypervisor delegates a lot of functionality to the Dom0 kernel (typically Linux, but can also be NetBSD). And although there are Xen specific drivers for the PV interface in the kernel, these are essentially just shims that call the device drivers in the Dom0 kernel and are part of the PV interface.
Posted Oct 26, 2012 11:06 UTC (Fri)
by dunlapg (guest, #57764)
[Link] (2 responses)
Both Xen and qemu-kvm have interesting parts in Linux and interesting parts outside. So if KVM is Linux, then Xen is Linux; if Xen is not Linux, then KVM (at least qemu-kvm) is not Linux.
Posted Oct 27, 2012 12:39 UTC (Sat)
by pbonzini (subscriber, #60935)
[Link]
On Xen you have two levels of scheduling and two levels of memory management. Xen distributes CPU shares to all domains (including dom0), and dom0 distributes CPU shares among its processes. Xen assigns memory to all domains (including dom0), and dom0 distributes memory among its processes. It's much harder to use a Xen dom0 for non-VM-related tasks, because you can only partly control the resources that dom0 receives.
For power management, Xen has to ask dom0 to process ACPI tables and basically summarize them for the hypervisor. It's even more complicated when it comes to paging, because Xen doesn't do paging on its own; it asks dom0 to page stuff in and out.
Sure, the Xen architecture seems simpler (because you "just" have to handle VCPUs, not arbitrary tasks, and because you "just" have to share memory among a few dozen domains rather than thousands of arbitrary tasks). And to some extent it is, because inventing a new scheduler or memory manager trick is much easier on Xen than on Linux. But in the end I think KVM's performance and simplicity prove that it is not worthwhile, especially since every improvement done to favor KVM (think of Andrea Arcangeli's transparent huge pages and AutoNUMA) will benefit every workload, and will effectively pay off twice if you can use it to speed up both the host and the guest.
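The two-level scheduling point above can be made concrete with a toy model (all shares are invented numbers): a dom0 process's effective CPU share is the product of the share Xen gives dom0 and the share dom0's own scheduler gives the process, which is why resources inside dom0 can only be partly controlled.

```python
# Toy model of two-level CPU scheduling under Xen (invented shares).
xen_shares = {"dom0": 0.25, "domU1": 0.5, "domU2": 0.25}  # hypervisor level
dom0_shares = {"sshd": 0.1, "backup": 0.9}                # inside dom0

def effective_share(domain, process=None):
    """CPU fraction a domain, or a process inside dom0, actually receives."""
    share = xen_shares[domain]
    if process is not None:
        share *= dom0_shares[process]   # second scheduling level applies
    return share

# A dom0 process only ever sees a slice of a slice:
assert effective_share("dom0", "backup") == 0.25 * 0.9
# Under KVM there is a single Linux scheduler: guests and ordinary
# processes are all tasks at the same level, so no such product appears.
```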
Posted Oct 27, 2012 12:40 UTC (Sat)
by pbonzini (subscriber, #60935)
[Link]
You knew already what I wrote, which makes me so much more eager to read what you think about it...
Posted Oct 24, 2012 14:47 UTC (Wed)
by ibukanov (subscriber, #3942)
[Link] (1 responses)
Posted Oct 24, 2012 15:11 UTC (Wed)
by aliguori (subscriber, #30636)
[Link]
In fact, it was recently completely rewritten as a generalized Linux feature (VFIO) that could also be used to write userspace device drivers protected by an IOMMU.
There was even an LWN article: http://lwn.net/Articles/474088/
Posted Oct 24, 2012 19:13 UTC (Wed)
by pbonzini (subscriber, #60935)
[Link] (3 responses)
Posted Oct 25, 2012 0:28 UTC (Thu)
by nix (subscriber, #2304)
[Link] (2 responses)
Posted Oct 28, 2012 16:16 UTC (Sun)
by nix (subscriber, #2304)
[Link] (1 response)
Posted Oct 28, 2012 22:11 UTC (Sun)
by pbonzini (subscriber, #60935)
[Link]
Anyhow, qemu is shared between Xen and KVM, so that part of the feature set is shared (especially since both Xen and KVM can now use upstream qemu rather than their own forks).
Posted Oct 25, 2012 1:41 UTC (Thu)
by Tobu (subscriber, #24111)
[Link] (5 responses)
Posted Oct 25, 2012 10:55 UTC (Thu)
by rwmj (subscriber, #5474)
[Link] (4 responses)
In the KVM world, you can already write a virtual machine that is entirely self-contained and requires no operating system. It's called .. erm .. a *process*, and Linux has had them for rather a long time.
KVM virtual machines are just regular processes, and you can run ordinary processes alongside them.
In a real-world case, say that your virtualized Apache server isn't getting the performance you need running under KVM. Well, just run the Apache server on the host instead.
Rich.
Posted Oct 25, 2012 12:08 UTC (Thu)
by Tobu (subscriber, #24111)
[Link] (3 responses)
Posted Oct 25, 2012 13:00 UTC (Thu)
by rwmj (subscriber, #5474)
[Link] (2 responses)
Posted Oct 25, 2012 13:37 UTC (Thu)
by Tobu (subscriber, #24111)
[Link] (1 response)
Posted Oct 25, 2012 14:00 UTC (Thu)
by rwmj (subscriber, #5474)
[Link]
If you mean that it's better to program directly against the Xen hypervisor or some other exokernel, instead of using Linux at all, then Mirage is certainly an argument for doing that. (Also loving it myself because it's written largely in OCaml ...)
But at some point I just know that my program is going to want to write to a file or ask the user a question, and then having Linux around, and improving its support for low-level ops, starts to look like a better long-term option.
http://blog.xen.org/index.php/2012/09/21/xensummit-sessio...
Even with PV drivers, KVM is not even playing in the same league whenever we test this.
What is wrong with Xen? Amazon uses Xen, most people use Amazon, therefore most people use Xen.
Xen is pretty powerful and flexible. But Mirage is the coolest thing: a framework for building applications in a safe language that, once compiled, will run on the bare metal with no OS involved.
You mean a process with a kernel, a scheduler, a page allocator, etc, underneath?
That's not the same level of safety and implementation control at all.
You did, but Apache doesn't illustrate it very well. It targets POSIX, and just about every operation it performs (network, memory, storage) goes through the syscall boundary. It also has multiple processes, which means another large chunk of IPC, scheduling, and resource management happens outside of it. These abstractions are rigid boundaries that it cannot cross.