Xen: finishing the job
Posted Mar 4, 2009 16:41 UTC (Wed) by martinfick (subscriber, #4455)
Perhaps I'm not the target audience for Xen -- having used it for a number of research projects -- but it is a royal pain to have to deal with back- or forward-ported Xen Dom0's.
Posted Mar 4, 2009 20:05 UTC (Wed) by jmorris42 (subscriber, #2203)
KVM is basically QEMU with a kernel module to speed it up. KQEMU is a kernel module to speed up QEMU that doesn't depend on hardware virtualization. So is Xen on old hardware enough faster than QEMU+KQEMU to justify keeping around yet another virtualization platform? That is the billion dollar question Xen is hoping they can answer yes to. Because if they can't, Citrix is going to feel really dumb after throwing big sacks 'o cash to own Xen.
Posted Mar 4, 2009 20:10 UTC (Wed) by martinfick (subscriber, #4455)
Posted Mar 4, 2009 21:33 UTC (Wed) by drag (subscriber, #31333)
Using Qemu there are several different ways to set up networking... you can use the default 'userspace tcp stack' which provides easy tcp networking (not the entire tcp/ip stuff though..). Or you can set up a virtual ethernet switch, connect your virtual ethernet ports to that, and then use iptables to create a NAT firewall that gives that virtual ethernet network a gateway to the outside network. Or you can combine the virtual ethernet ports with the physical external port and use a bridge to connect them.
Of course as you can imagine the default is rather limited. On my Fedora laptop virt-manager sets up a virtual ethernet switch and then connects that to the external world using a NAT firewall. That works with Network-Manager and dnsmasq, so my virtual machines have access to the network regardless of how my laptop is connected and can adapt to changing network topologies.
By default Qemu (and modified versions) uses an emulated 100Mbit ethernet connection. The fastest emulated ethernet card you can use would be an Intel 1000Mbit ethernet card.
However, if you want very good performance you need to use PV network drivers. I had a 300% improvement in throughput, more consistent performance, and reduced cpu load from using those over the emulated nic devices.
But I guess that PV drivers are only available to people using KVM and not Kqemu/Qemu?
Now I don't know exactly what Xen uses for networking stuff. But I know that its performance is similar when using full virtualization. I don't know about its paravirtualization mode.
Posted Mar 4, 2009 22:17 UTC (Wed) by aliguori (subscriber, #30636)
> But I guess that PV drivers are only available to people using KVM and not Kqemu/Qemu?

PV drivers (a la VirtIO) are now available in upstream QEMU.
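For context, selecting virtio (PV) devices instead of emulated ones is just a matter of the QEMU command line. A hypothetical invocation might look like this (guest.img, the memory size, and the exact flag spellings are illustrative and vary by QEMU version):

```shell
# Sketch of a QEMU command line using virtio (PV) devices instead of
# fully emulated ones. guest.img and -m 512 are placeholders.
CMD="qemu-system-x86_64 -enable-kvm -m 512 \
  -drive file=guest.img,if=virtio \
  -net nic,model=virtio -net user"
echo "$CMD"
```

The guest then needs virtio block and net drivers, which mainline Linux kernels ship.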
Posted Mar 5, 2009 2:55 UTC (Thu) by bluefoxicy (guest, #25366)
The problem is, with KQemu/VMware/Qemu/KVM, you're running through an emulated network interface; whereas with Xen, you are not.
With KVM or Qemu, non-paravirtualized, the network hardware is emulated. The hard disk is emulated too. You make some system calls to write raw Ethernet frames or spin up TCP/IP connections; the kernel plays with some MMIO registers or does PIO IN/OUT instructions, and there's a piece of code (a reverse driver, pretty much) that tracks the state of the "hardware" and determines what exactly you're trying to do. Then it relays your intent to the host OS, which then encodes all this to... games with MMIO or PIO, through a hardware driver, into real hardware.
With Xen paravirtualization, the hardware isn't emulated. You make some system calls to emit a raw Ethernet frame or open a TCP/IP connection. The kernel calls a Xen function and says, "On this device, emit this to the network." Xen passes this to a hook in the Dom0 OS, which then looks at the virtual device in a map to find the physical device and does all the hardware magic of MMIO/PIO games to actually send it out to the network.
In other words, the kernel and the hypervisor do a hell of a lot less work when you're paravirtualized. Hardware drivers for virtual devices are essentially "Tell the hypervisor I need to write this data to this device," instead of "Do a crazy, complicated rain dance to get this device to perform this function." Even better, the hypervisor doesn't have to interpret this crazy, complicated rain dance; it's handed exactly what you want in simple, easy to read instructions which don't have to be decoded and passed to the kernel and then re-encoded for a different hardware device etc.
This means it's faster.
No it does not.
Posted Mar 6, 2009 7:22 UTC (Fri) by khim (subscriber, #9252)
Xen may be faster today but this is not an intrinsic advantage.
The story with KVM:
1. Userspace asks the kernel to send the packet.
2. Context switch to kernel.
3. Kernel asks "hardware" to send the packet.
4. "Reverse driver" asks the outer kernel to send the packet.
5. Context switch to outer kernel.
6. Outer kernel talks to real hardware.
The story with Xen:
1. Userspace asks the kernel to send the packet.
2. Context switch to kernel.
3. Kernel asks Xen to send the packet.
4. Context switch to Xen.
5. Xen asks the Dom0 kernel to send the packet.
6. Context switch to Dom0 kernel.
7. Dom0 kernel talks to real hardware.
Context switches are expensive (equal to a hundred simple operations or so) and Xen uses one additional context switch over KVM. This can easily cancel out the simpler interface without a "reverse driver". That's why there is a push to create drivers directly for Xen - this way it'll be faster than KVM... as long as KVM does not use paravirtualization. I fail to see why it cannot use paravirtualization for all devices except the CPU (where it has hardware support and so is fast enough already).
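A back-of-envelope check of that claim, with made-up round numbers (roughly 1000 cycles per extra context switch, 100,000 packets per second, a 3 GHz core -- all figures hypothetical), can be sketched as:

```shell
# Rough arithmetic: one extra context switch per packet on the Xen path.
# All figures are hypothetical round numbers for illustration only.
CYCLES_PER_SWITCH=1000
PACKETS_PER_SEC=100000
CPU_HZ=3000000000
EXTRA_CYCLES=$(( PACKETS_PER_SEC * CYCLES_PER_SWITCH ))
PCT=$(( EXTRA_CYCLES * 100 / CPU_HZ ))
echo "extra overhead: ~${PCT}% of one core"   # ~3% with these numbers
```

Even a few percent of a core per network interface matters at high packet rates, which is why the extra switch is worth arguing about.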
In the end Xen can become a fast specialized OS, but so can KVM - and which way is faster? Drepper's words are still relevant: neither Xen nor VMware has any real advantage which cannot be surmounted by giving KVM more time to catch up, i.e., granting it the same time to develop the features.
And if so, then why should we include an interim solution? It depends on the timeframe: everyone agrees btrfs will do everything ext4 does, yet ext4 was included anyway, because btrfs will not be ready for a few more years. If KVM needs a few more years to catch up then maybe Dom0 support is worth having in the kernel, but if it's only a matter of months - the story will be
Posted Mar 5, 2009 11:57 UTC (Thu) by danpb (subscriber, #4831)
Posted Mar 7, 2009 7:29 UTC (Sat) by mab (subscriber, #314)
Posted Mar 8, 2009 8:08 UTC (Sun) by rahulsundaram (subscriber, #21946)
Posted Mar 4, 2009 16:50 UTC (Wed) by drag (subscriber, #31333)
However I've moved on to using KVM for most everything. Having the ability to simply _have_ a hypervisor by default with no effort, no patching, no rebooting, no 'lifting' my system kernel out of Ring 0, etc etc is a wonderful thing.
And the other thing is that no special or weird configurations are needed. While Fedora with virt-manager provides a nice gui and other tools... for many of my tasks simply being able to launch qemu with screen and serial output to my terminal is quite convenient.
That being said, if people are using Xen and finding it useful, and there are cases where it would be superior, then it would be nice to get support into the kernel.
Posted Mar 5, 2009 10:06 UTC (Thu) by dw (subscriber, #12017)
Hey I heard about this really neat new free OS by communists called DEBIAN LINUX which has all this stuff built in. Sure beats that Slackware nonsense you appear to be running. :)
apt-get install xen-linux-system && grub-install /dev/sda && reboot
Posted Mar 5, 2009 17:29 UTC (Thu) by drag (subscriber, #31333)
It's still not that easy.
With KVM... "modprobe kvm-intel" (or -amd or whatever). That will work on any recent Linux distribution. The difference is that KVM is already there. Having to install a modified qemu is all I need to do, and that is _still_ quite a bit simpler and less problem-prone than what you pasted there.
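The "already there" point can be sketched as a one-off shell check; `virt_flavor` is a helper made up for this example, while `kvm-intel`/`kvm-amd` are the real module names:

```shell
# Sketch: pick the right KVM module based on CPU flags.
# virt_flavor is a hypothetical helper, not a standard tool.
virt_flavor() {
  case " $1 " in
    *" vmx "*) echo intel ;;   # Intel VT-x
    *" svm "*) echo amd ;;     # AMD-V
    *) echo none ;;
  esac
}
flavor=$(virt_flavor "$(grep -m1 '^flags' /proc/cpuinfo 2>/dev/null | cut -d: -f2)")
echo "detected: $flavor"
# then, as root: modprobe kvm-$flavor   (if flavor is not "none")
```

No patched host kernel, no reboot: the module loads into whatever kernel the distribution already ships.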
With my laptop, for example, which I make heavy use of virtualization for small development and documentation projects, I run Fedora 10 for various reasons (my preferred distribution is Debian, btw). I have an Intel GMA X3100 video card and wifi. For various other reasons I like to have DRI2 enabled. This requires having a very new kernel (along with newer X stuff).
Also I like having good power management stuff. Being able to suspend my laptop and such is very handy as I move around quite a bit.
All of this sort of stuff makes life for a Xen user much much more difficult.
Also, all the benefits of running Xen seem to stem from its paravirtualization features. For what I do I need full virtualization... Having to muck around with the kernel of the guest systems in addition to the kernel of the host system is just not worth the trouble and is frequently not really even practical.
It is still not the same
Posted Mar 7, 2009 1:19 UTC (Sat) by gwolf (subscriber, #14632)
Copyright © 2013, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds