Xen: finishing the job

Posted Mar 4, 2009 21:33 UTC (Wed) by drag (subscriber, #31333)
In reply to: Xen: finishing the job by martinfick
Parent article: Xen: finishing the job

The problem with that is that even with Xen you're still going to run through an emulated network interface.

Using Qemu there are several different ways to set up networking... you can use the default 'userspace TCP stack', which provides easy TCP networking (though not the entire TCP/IP feature set). Or you can set up a virtual ethernet switch, connect your virtual ethernet ports to it, and then use iptables to create a NAT firewall that gives that virtual ethernet network a gateway to the outside network. Or you can combine the virtual ethernet ports with the physical external port and connect them with a bridge.
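
Roughly, those three setups look something like this on the command line (illustrative only: the disk image and tap/bridge device names are made up, and the exact flags vary between QEMU versions):

# Sketch of the three networking styles described above. The image name and
# the tap0/br0/eth0 device names are placeholders; flags vary by QEMU version.

disk = "guest.img"   # hypothetical guest image

# 1. Default userspace ("slirp") stack: easy NAT-style TCP/UDP, no full IP stack.
user_mode = ["qemu", "-net", "nic", "-net", "user", disk]

# 2. Tap device plugged into a host-side virtual switch; the host then NATs
#    that virtual network out to the world with iptables.
nat_mode = ["qemu", "-net", "nic",
            "-net", "tap,ifname=tap0,script=no", disk]

# 3. Same tap device, but bridged onto the physical NIC so guests sit on the
#    external LAN directly. The QEMU side is identical; on the host you would
#    do roughly: brctl addbr br0; brctl addif br0 eth0; brctl addif br0 tap0
bridged_mode = nat_mode

for cmd in (user_mode, nat_mode, bridged_mode):
    print(" ".join(cmd))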

Of course, as you can imagine, the default is rather limited. On my Fedora laptop virt-manager sets up a virtual ethernet switch and then connects that to the external world using a NAT firewall. That works with NetworkManager and dnsmasq, so my virtual machines have access to the network irregardless of how my laptop is connected and can adapt to changing network topologies.

By default Qemu (and its modified versions) uses an emulated 100Mbit ethernet card. The fastest emulated ethernet card you can use would be an Intel gigabit card.

However, if you want very good performance you need to use the PV network drivers. I saw a 300% improvement in performance, with more consistent results and reduced CPU load, compared to the emulated NIC devices.
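
For what it's worth, picking the NIC model is just a command-line switch; a rough sketch (the model names are real QEMU models, everything else is made up for illustration):

# Sketch: selecting the guest NIC model on the QEMU/KVM command line.
# rtl8139 (100Mbit), e1000 (Intel gigabit) and virtio (PV) are real model
# names; the image name and user-mode networking are just placeholders.

def nic_args(model):
    return ["-net", "nic,model=%s" % model, "-net", "user"]

emulated_100mbit = nic_args("rtl8139")   # a typical emulated 100Mbit card
emulated_fastest = nic_args("e1000")     # emulated Intel gigabit card
paravirtual      = nic_args("virtio")    # PV NIC; needs virtio drivers in the guest

for args in (emulated_100mbit, emulated_fastest, paravirtual):
    print(" ".join(["qemu", "guest.img"] + args))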

But I guess that PV drivers are only available to people using KVM and not Kqemu/Qemu?

-----------------------------

Now I don't know exactly what Xen uses for networking. But I know that its performance is similar when using full virtualization. I don't know about its paravirtualization mode.


Xen: finishing the job

Posted Mar 4, 2009 22:17 UTC (Wed) by aliguori (subscriber, #30636) [Link]

> However, if you want very good performance you need to use the PV network
> drivers. I saw a 300% improvement in performance, with more consistent
> results and reduced CPU load, compared to the emulated NIC devices.

> But I guess that PV drivers are only available to people using KVM and not Kqemu/Qemu?

PV drivers (a la VirtIO) are now available in upstream QEMU.

Xen: finishing the job

Posted Mar 4, 2009 22:17 UTC (Wed) by aliguori (subscriber, #30636) [Link]

Ugh, sorry for the ugly post.

Xen: finishing the job

Posted Mar 5, 2009 2:55 UTC (Thu) by bluefoxicy (guest, #25366) [Link]

> The problem with that is that even with Xen you're still going to run
> through an emulated network interface.

The problem is that with KQemu/VMware/Qemu/KVM you're running through an emulated network interface, whereas with Xen you are not.

With KVM or Qemu, non-paravirtualized, the network hardware is emulated. The hard disk is emulated too. You make some system calls to write raw Ethernet frames or open TCP/IP connections; the kernel plays with some MMIO registers or issues PIO IN/OUT instructions, and there's a piece of code (a reverse driver, pretty much) that tracks the state of the "hardware" and figures out what exactly you're trying to do. Then it relays your intent to the host OS, which encodes all of this back into MMIO or PIO games, through a real hardware driver, onto real hardware.
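
A toy sketch of what that "reverse driver" ends up doing (nothing like real QEMU code; the register layout is entirely made up):

# Toy device model: every trapped register write has to be interpreted, and
# only once the guest's "rain dance" is complete can a frame go to the host.

class ToyEmulatedNic:
    TX_ADDR, TX_LEN, TX_GO = 0x00, 0x04, 0x08   # made-up register offsets

    def __init__(self, guest_ram, host_send):
        self.ram = guest_ram        # bytearray standing in for guest memory
        self.host_send = host_send  # callback into the host network stack
        self.regs = {}

    def mmio_write(self, offset, value):
        # One trap out of the guest per register write.
        self.regs[offset] = value
        if offset == self.TX_GO:    # intent only becomes clear at the end
            addr, length = self.regs[self.TX_ADDR], self.regs[self.TX_LEN]
            self.host_send(bytes(self.ram[addr:addr + length]))

ram = bytearray(4096)
ram[0:5] = b"hello"
nic = ToyEmulatedNic(ram, host_send=lambda frame: print("host sends", frame))
nic.mmio_write(ToyEmulatedNic.TX_ADDR, 0)   # three separate trapping writes
nic.mmio_write(ToyEmulatedNic.TX_LEN, 5)    # just to send one frame
nic.mmio_write(ToyEmulatedNic.TX_GO, 1)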

With Xen paravirtualization, the hardware isn't emulated. You make some system calls to emit a raw Ethernet frame or open a TCP/IP connection. The kernel calls a Xen function and says, "On this device, emit this to the network." Xen passes this to a hook in the Dom0 OS, which then looks up the virtual device in a map to find the physical device and does all the MMIO/PIO hardware magic to actually send it out to the network.
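
The paravirtual path, in the same toy style (not real Xen code; the ring and "hypercall" below are stand-ins for the shared-memory ring and event channel):

# Toy PV frontend/backend: the frame is handed over whole, with one explicit
# notification, instead of being reconstructed from register state.

from collections import deque

shared_ring = deque()        # stands in for the shared-memory I/O ring

def hypercall_notify():      # stands in for the hypercall / event channel
    backend_poll()

def frontend_send(frame):    # guest kernel side
    shared_ring.append(frame)
    hypercall_notify()       # one trap, carrying explicit intent

def backend_poll():          # Dom0 side, next to the real driver
    while shared_ring:
        print("Dom0 driver transmits", shared_ring.popleft())

frontend_send(b"hello")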

In other words, the kernel and the hypervisor do a hell of a lot less work when you're paravirtualized. Hardware drivers for virtual devices are essentially "Tell the hypervisor I need to write this data to this device," instead of "Do a crazy, complicated rain dance to get this device to perform this function." Even better, the hypervisor doesn't have to interpret this crazy, complicated rain dance; it's handed exactly what you want in simple, easy to read instructions which don't have to be decoded and passed to the kernel and then re-encoded for a different hardware device etc.

This means it's faster.

No it does not.

Posted Mar 6, 2009 7:22 UTC (Fri) by khim (subscriber, #9252) [Link]

Xen may be faster today, but this is not an intrinsic advantage.

The story with KVM:
1. Userspace asks the kernel to send the packet.
2. Context switch to the kernel.
3. The kernel asks the "hardware" to send the packet.
4. The "reverse driver" asks the outer kernel to send the packet.
5. Context switch to the outer kernel.
6. The outer kernel talks to the real hardware.

The story with Xen:
1. Userspace asks the kernel to send the packet.
2. Context switch to the kernel.
3. The kernel asks Xen to send the packet.
4. Context switch to Xen.
5. Xen asks the outer kernel to send the packet.
6. Context switch to the Dom0 kernel.
7. The Dom0 kernel talks to the real hardware.

Context switches are expensive (equal to a hundred simple operations or so) and Xen uses one additional context switch over KVM. This can easily cancel out the benefit of the simpler interface without a "reverse driver". That's why there is a push to create drivers directly for Xen - that way it'll be faster than KVM... if KVM doesn't use paravirtualization. I fail to see why it can't use paravirtualization for all devices except the CPU (where it has hardware support and so is fast enough already).
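
Back-of-the-envelope, using the step lists above (the numbers are illustrative, not measurements):

# Count the context switches in each story and price them at roughly a
# hundred simple operations apiece, as argued above.

switch_cost = 100                      # ~simple ops per context switch (hand-wavy)

kvm_switches = 2                       # userspace -> kernel -> outer kernel
xen_switches = 3                       # userspace -> kernel -> Xen -> Dom0 kernel

print("KVM switch overhead per packet:", kvm_switches * switch_cost)
print("Xen switch overhead per packet:", xen_switches * switch_cost)
print("extra cost Xen has to win back:",
      (xen_switches - kvm_switches) * switch_cost, "simple ops")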

In the end Xen can become a fast, specialized OS, but so can KVM - and which way is faster? Drepper's words are still relevant: neither Xen nor VMware has any real advantage that cannot be surmounted by giving KVM more time to catch up, i.e., granting it the same time to develop the features.

And if so, then why should we include an interim solution? It depends on the timeframe: everyone agrees btrfs will do everything ext4 does, yet ext4 was included anyway, because btrfs will not be ready for a few more years. If KVM needs a few more years to catch up then maybe Dom0 support is worth having in the kernel, but if it's only a matter of months, the story will be different...

Xen: finishing the job

Posted Mar 5, 2009 11:57 UTC (Thu) by danpb (subscriber, #4831) [Link]

The PV driver backends for all the VirtIO devices are present in the mainline QEMU codebase. The VirtIO backends don't care about the type of virtualization - all they need is the ability to provide emulated PCI devices. So QEMU, KQEMU and KVM all work just fine in this regard, though obviously the best performance comes when you combine them with KVM. You could probably also make the VirtIO backends work under Xen fullvirt too..
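
Concretely, the same VirtIO NIC arguments should work with or without the KVM accelerator, since QEMU just sees another PCI device model (a sketch; flags are illustrative and vary by version, and the image and tap device are placeholders):

# Sketch: identical VirtIO network setup under plain QEMU (TCG) and under KVM.

virtio_net = ["-net", "nic,model=virtio",
              "-net", "tap,ifname=tap0,script=no"]

plain_qemu = ["qemu", "guest.img"] + virtio_net                  # slow CPU, PV I/O
with_kvm   = ["qemu", "-enable-kvm", "guest.img"] + virtio_net   # same backend, fast CPU

print(" ".join(plain_qemu))
print(" ".join(with_kvm))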

Xen: finishing the job

Posted Mar 7, 2009 7:29 UTC (Sat) by mab (subscriber, #314) [Link]

Is irregardless the opposite of regardless?

Xen: finishing the job

Posted Mar 8, 2009 8:08 UTC (Sun) by rahulsundaram (subscriber, #21946) [Link]

