
Let's step back a bit

Posted Jun 3, 2009 19:03 UTC (Wed) by Thue (guest, #14277)
In reply to: Let's step back a bit by BrucePerens
Parent article: Xen again

Unlike KVM, Xen does not require hardware virtualization support.



Let's step back a bit

Posted Jun 3, 2009 19:21 UTC (Wed) by BrucePerens (guest, #2510) [Link] (18 responses)

OK. But DomU is already in the kernel, and isn't that part already coded to not require hardware virtualization support?

So the important part of Xen - the part that provides something KVM doesn't have - is already in the kernel. KVM has a hypervisor already in the kernel. The Xen hypervisor is inelegant.

So, is it possible to make the KVM hypervisor support Dom0?

Let's step back a bit

Posted Jun 3, 2009 20:09 UTC (Wed) by nevets (subscriber, #11875) [Link] (5 responses)

KVM only works on hardware that has virtualization support. Of my 12 boxes, I have three that do. One is crap, another is OK, and the third is my latest laptop.

The KVM developers have no interest in (nor have they designed KVM for) paravirtualization - the thing a guest OS needs in order to run on hardware without virtualization support. That said, I do believe KVM can make use of virtio, but that's another story.

We have enough in the kernel to support a DomU. That is, a true guest.

But Dom0 is a special guest with Xen. The Xen hypervisor hands the work of the drivers off to Dom0. And this interface between Dom0 and the hypervisor is a bit more intrusive than the interface needed by DomU (which already exists).

The issue is that once we add this Dom0 interface, we will forever need to support it, because any changes we make will break Xen. This is why I suggested having Linux host the Xen source code. Then we can freely change the Dom0<->hypervisor interface without worrying about breaking an external ABI.

Note, my suggestion is not about putting Xen inside Linux. It would still be a microkernel loaded first, but the two would ship as a single vmlinuz image: first we load the Xen hypervisor, and then we load Dom0. This would couple the two tightly, and the user would not need to worry about incompatibilities.

Let's step back a bit

Posted Jun 4, 2009 8:42 UTC (Thu) by rwmj (subscriber, #5474) [Link] (4 responses)

Bruce, this is an interesting and valid point, but it's also a bit like the discussion of 3D rendering that happened in the mid-90s. Sure, 3D graphics cards were rare and expensive at first, and that meant there was a place for software rendering.

Nowadays, though, no serious 3D program (i.e. no game!) comes with a software renderer, because 3D hardware is everywhere: on motherboards, in open handhelds like the GP2x-Wiz, and even on experimental boards like the ARM-based BeagleBoard.

Hardware virt support is in just about every new x86-64 processor that comes out. A few 32-bit netbooks don't have it right now, but it'll come to those too.

Also don't overlook the fact that KVM does have software emulation. OK, it's slow, it's in userspace, and it relies on qemu. Nevertheless, just running qemu-kvm will transparently fall back to software emulation if the hardware doesn't support virtualization.
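
For example, here's a rough sketch of what that fallback looks like in practice; the binary name varies by distribution (qemu-kvm on Fedora/RHEL, kvm on Debian), and the disk image name is just an example:

    # Does the CPU advertise hardware virtualization at all?
    egrep -c '(vmx|svm)' /proc/cpuinfo

    # Start a guest; if /dev/kvm is missing (no hardware support, or the
    # module isn't loaded), qemu-kvm warns and falls back to slow
    # software emulation rather than refusing to run.
    qemu-kvm -m 512 -hda guest-disk.img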

Rich.

Let's step back a bit

Posted Jun 4, 2009 8:43 UTC (Thu) by rwmj (subscriber, #5474) [Link]

s/Bruce/nevets/ ...

Let's step back a bit

Posted Jun 4, 2009 12:18 UTC (Thu) by nye (subscriber, #51576) [Link]

While I mostly agree, there are still a number of new mainstream CPUs which don't support hardware virtualisation, mostly aimed at the budget or mobile market. I know if I could use KVM on this laptop it would make my life a little easier. I've never used Xen so I don't know if it would be worth the effort for my purposes, but I'd be a lot more likely to try if it were in the kernel already.

Let's step back a bit

Posted Jun 4, 2009 17:44 UTC (Thu) by buchanmilne (guest, #42315) [Link] (1 responses)

> Hardware virt support is in just about every new x86-64 processor that comes out.

But, an 18-month-old 16-core (8*"Dual Core AMD Opteron(tm) Processor 885") server (Sun X4600-M1) doesn't have it. With another 5 years of lifetime on these boxes, it really would be nice to keep Xen (which is what they are currently running). There's no way I would migrate this (with heavily utilised VMs) to qemu-kvm ...

Let's step back a bit

Posted Jun 4, 2009 21:28 UTC (Thu) by jimparis (guest, #38647) [Link]

But when you were purchasing a box 18 months ago, with a plan for a 5-year lifetime, wasn't it a huge mistake to overlook the hardware virtualization feature? I mean, KVM was merged into the kernel some 28 months ago.
I agree your situation sucks, but it seems more of a purchasing mistake than a reason to not move the world towards proper hardware virtualization.

Let's step back a bit

Posted Jun 3, 2009 20:10 UTC (Wed) by dtlin (subscriber, #36537) [Link]

xenner is a utility which is able to run xen paravirtualized kernels as guests on linux hosts, without the xen hypervisor, using kvm instead.

I haven't tried it out, but running Xen DomU on KVM seems perfectly possible.  In any case, KVM and Xen+HVM are about equal in terms of guest support.

KVM's "Dom0" is the unmodified Linux kernel, running on bare hardware — there's nothing special about it.  I'm not sure why you'd even want Xen's Dom0 there?

         HVM                     No HVM
KVM      Supports many guests    Not possible
Xen      Supports many guests    Supports paravirtualized guests

The "not possible" (unless you're satisfied with QEMU) is what the Xen supporters are really focusing on.

No, it's completely unrelated.

Posted Jun 3, 2009 20:35 UTC (Wed) by gwolf (subscriber, #14632) [Link] (10 responses)

Xen and KVM are similar in that both can be used to run _hardware-assisted_ virtual machines. The strategies are, yes, completely different - KVM uses Linux as the "uppermost" piece, and each virtual machine is just a process as far as the host Linux is concerned.

KVM is great, say, if you want to run Windows instances - none of them will know (well, except for the hardware self-description strings) that they are running virtualized. The same, yes, can be done with Xen.

However, Xen's paravirtualization functionality is completely unmatched by KVM - Xen can run DomU (guest) kernels that are explicitly aware they are running under a paravirtualized environment. This, of course, excludes non-free software, as it would have to be ported to the Xen pseudo-architecture. However, it is a very popular way to run completely independent Linux systems.

Why do you want to paravirtualize? Because the performance impact is way lower. You don't have to emulate hardware at all - in a regular virtualization setup, the guest OS will still shuffle bits around to hand them to, say, the ATA I/O interface, possibly aligning them to cylinder/head/sector - on a hard disk that just does not exist, being merely a file on another filesystem or whatever. When it is paravirtualized, the guest OS just signals the host OS to do its magic.

My favorite way out, for most of the cases where I would otherwise be forced to use Xen for this kind of need, is vserver - which is _not_ formally a virtualization technology, but a compartmentalization/isolation technology (akin to what was introduced as BSD jails around 2000), where many almost-independent hosts share a single kernel, but live within different security contexts.

No, it's completely unrelated.

Posted Jun 4, 2009 1:59 UTC (Thu) by drag (guest, #31333) [Link] (9 responses)

> My favorite way out, for most of the cases where I would otherwise be forced to use Xen for this kind of need, is vserver - which is _not_ formally a virtualization technology, but a compartmentalization/isolation technology (akin to what was introduced as BSD jails around 2000), where many almost-independent hosts share a single kernel, but live within different security contexts.

Well, things like BSD jails, Vserver, OpenVZ, etc. are all very much virtualization technologies in a very real sense. They just are not hardware virtualization.

> Why do you want to paravirtualize? Because the performance impact is way lower. You don't have to emulate hardware at all - in a regular virtualization setup, the guest OS will still shuffle bits around to hand them to, say, the ATA I/O interface, possibly aligning them to cylinder/head/sector - on a hard disk that just does not exist, being merely a file on another filesystem or whatever. When it is paravirtualized, the guest OS just signals the host OS to do its magic.

Heh. KVM has paravirt drivers that are built into the kernel right now.

virtio-blk = block driver
virtio-rng = random number generator
virtio-net = ethernet network driver
virtio-balloon = used for reclaiming memory from VMs
virtio-pci = pci driver
9pnet_virtio = plan9 networking

And that works fine with updated versions of Qemu also. So you should be able to take advantage of them if you're using Kqemu + Qemu for your virtualization. I think. But virtio is a standardized way of doing things. It should probably work with Qemu-dm for Xen stuff too.
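
For instance, a minimal sketch of pointing a KVM guest at virtio devices from the qemu command line (the image name is made up; the guest kernel needs virtio-blk and virtio-net available):

    # The guest sees a virtio block device and a virtio NIC instead of
    # emulated IDE and e1000 hardware.
    qemu-system-x86_64 -enable-kvm -m 1024 \
        -drive file=guest.img,if=virtio \
        -net nic,model=virtio -net user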

I think there are Windows drivers for virtio network. I am not sure about virtio block or balloon, though...

I don't know how well KVM + Virtio compares to Xen PV..

Then on top of that you can use AMD's IOMMU or Intel's VT-d to map real hardware directly to virtualized hosts, which would be the fastest possible since you're handing off direct access to the hardware.

No, it's completely unrelated.

Posted Jun 4, 2009 7:03 UTC (Thu) by sf_alpha (guest, #40328) [Link] (1 responses)

If KVM + virtio still needs processor support, it would be very slow compared to Xen when running on an unsupported processor.

No, it's completely unrelated.

Posted Jun 4, 2009 12:20 UTC (Thu) by drag (guest, #31333) [Link]

Yes.

You need to have Intel or AMD's virtualization support to take advantage of KVM.

Even with the virtualization support, KVM will be slower than PV. Xen's PV is far superior in terms of performance in almost all situations.

KVM's advantages over Xen are:

* Cleaner design. I am guessing that the KVM hypervisor code is between 20k-30k lines with all the architectures it supports, whereas Xen's hypervisor code is easily 10x that much.

* Much easier to administer and deal with. Does not require patches, does not require rebooting or anything of that nature. It's "just there". Does not require special console software or management tools beyond just qemu, if that is all you want. You can use top to monitor VMs and ctrl-z to pause them if you started them from a terminal, for example.

* Does not require your OS to be "lifted" into a Dom0... The way Linux interacts with the hardware does not change. This means (with the latest kernels) I can suspend my laptop while running VMs and it just works.

* Heavily leverages Linux's existing features. Instead of having to write various pieces of hardware support into the hypervisor, KVM gets all that and more by default. When Linux improves, say, memory management, people using KVM directly benefit from that work.
(this is not a huge advantage over Xen; it's more of a big improvement compared to VMware ESX... no restrictions on hardware, network block protocols, SATA or anything like that... if Linux supports it, you can use it in KVM)

* It is already installed and set up on your machine. All you have to do is install the qemu portion, and the virt-manager or libvirt stuff if you want a nice and easy way to manage them. All Linux distributions have KVM support... its modules are enabled by default in everything I've looked at (a quick check is sketched below).

disadvantages:

* PV on Xen is still easily performance king.

* Requires hardware virtualization support.
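
The "it's just there" part, as a quick sketch (module names depend on your CPU vendor; paths are the usual ones on current distros):

    # Is KVM already loaded?
    lsmod | grep kvm

    # If not, load the module matching your CPU (Intel VT or AMD SVM)
    modprobe kvm_intel    # or: modprobe kvm_amd

    # The userspace side uses /dev/kvm for hardware acceleration
    ls -l /dev/kvm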

No, it's completely unrelated.

Posted Jun 4, 2009 7:06 UTC (Thu) by bronson (subscriber, #4806) [Link] (4 responses)

> 9pnet_virtio

Wow, people are still writing 9p code? Given the sad state of http://sourceforge.net/projects/v9fs and http://sourceforge.net/projects/npfs I thought that these projects were stone dead.

I'd really like a network filesystem that is easier to administer than NFS and CIFS... Tried DRBD but didn't like it much. Is v9fs worth a look?

No, it's completely unrelated.

Posted Jun 4, 2009 12:03 UTC (Thu) by drag (guest, #31333) [Link] (2 responses)

No clue about plan9.

But DRBD is a way of keeping volumes in sync, not so much a file system.

The easiest FS to administer that I know of is sshfs. I use it heavily and it is stable and actually very fast. It can even beat NFS sometimes... And all you need is an OpenSSH server running and FUSE support on the client. The SSH server is the real gauge of how well sshfs works. Anything other than a relatively recent version of OpenSSH and I doubt the results will be that good.
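
As a sketch, the whole "administration" amounts to something like this (host name and paths are just examples):

    # Mount a remote directory over ssh; needs FUSE on the client and
    # sshd on the server, nothing else.
    sshfs user@fileserver:/export/data /mnt/data

    # Unmount when done
    fusermount -u /mnt/data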

But if DRBD was even being considered, then your needs are going to be specialized. Other alternatives to look at could possibly be Red Hat's GNBD from GFS, or iSCSI.

No, it's completely unrelated.

Posted Jun 4, 2009 19:32 UTC (Thu) by bronson (subscriber, #4806) [Link] (1 responses)

Tried sshfs 5 or so years ago, rejected it because the crypto overhead prevented me from filling a 100 MBit link. I should probably try it again since that won't be a problem nowadays.

I only mentioned DRBD to illustrate how desperate I've become! It was actually pretty good except that I couldn't get the split brain recovery to work the way I wanted. So close and yet so far. Haven't gotten desperate enough to try AFS yet!

Why doesn't 9p or webdav or some simple protocol take off? It's amazing to me that NFS and CIFS are still state of the art. I guess I don't understand the trade-offs very well.

No, it's completely unrelated.

Posted Jun 4, 2009 20:20 UTC (Thu) by drag (guest, #31333) [Link]

For sshfs, if you want good performance, you need to disable compression. If you think the crypto has too much overhead, then change the encryption method to RC4.

Very likely you were running something like 3DES, which has very high overhead. And like I said, you need to have a relatively recent version of OpenSSH (say, a version from the past 2 years or so) for reliable service.

You can set these on a per-server basis in your ~/.ssh/config
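
Something like this, for example (the host name is made up; "arcfour" is OpenSSH's name for RC4, and sshfs picks these settings up because it runs ssh underneath):

    # in ~/.ssh/config
    Host fileserver
        Compression no
        Ciphers arcfour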

I have had no problem personally beating NFS when it comes to my personal usage at home over wireless and gigabit links... although of course this sort of thing is not suitable for large numbers of users.

:)

No, it's completely unrelated.

Posted Jun 4, 2009 14:08 UTC (Thu) by sbergman27 (guest, #10767) [Link]

My understanding is that the main thrust of the 9p virtio stuff is to implement shared volumes without all the ugly network guts being exposed to the administrator. And hopefully, at lower latency than the rather significant local latencies one sees even using a virtio network driver.

I have an ugly situation where I have a (proprietary) Cobol C/ISAM <-> SQL gateway to some Cobol accounting files. Due to the brain-deadness of the proprietary vendor (political concerns, their licensing with their Cobol runtime supplier, yadda, yadda, yadda...), I have to run it virtualized in an old distro, and it sees the C/ISAM files via NFS4. It's written to do a lot of fsync'ing and doesn't seem to make any use of any sort of NFS caching, so latency absolutely kills its performance. I can't use any of the virtio stuff because the guest kernel is too old to support it, and even virtio has latencies in the hundreds of microseconds. So I'm using the software-emulated E1000 driver, which is almost as efficient as virtio.

However, if I could use the 9p shared volume stuff, I suspect, but am not sure, that latency would be much improved. As it stands, it is still over twice as fast as running on a separate machine via NFS4 over 1000baseT.

So far as I know, the 9p-virtio thing is still an active project, but not yet in mainline KVM. Or, at least, it does not seem to be in Ubuntu 9.04 server.

No, it's completely unrelated.

Posted Jun 4, 2009 12:48 UTC (Thu) by gwolf (subscriber, #14632) [Link] (1 responses)

> Heh. KVM has paravirt drivers that are built into the kernel right now.

Yes, and that's good - I use KVM with paravirt network and disk devices for Windows guests. Still, many things (i.e. memory access, real CPU mapping, even the kind of architecture the guests report as having) have to be emulated. Paravirt devices are a great boost, though - and by being much simpler than, say, hardware-specific drivers, I am also reducing the most common cause of Windows' instability.

Now, both with Xen and with KVM (and I'd expect with any other virtualization technology) you can forward a real device - just remove support for it in the host (or Dom0) kernel and ask the virtualizer to forward the needed interrupts/mapped memory space/bus address, and you have it natively inside. Of course, you lose the ability to perform live migrations - but you cannot always win! :)
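
The host-side half of that is roughly just detaching the device from its host driver, as sketched below; the sysfs interface is generic, while the option that actually hands the device to the guest differs between Xen and KVM (and between versions), so it is omitted here. The PCI address is an example.

    # Find the device's PCI address
    lspci | grep -i ethernet

    # Detach it from the host driver so the virtualizer can claim it
    echo 0000:03:00.0 > /sys/bus/pci/devices/0000:03:00.0/driver/unbind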

No, it's completely unrelated.

Posted Jun 10, 2009 17:37 UTC (Wed) by tmassey (guest, #52228) [Link]

You say you have virtualized *disk* drivers for Windows for KVM? I'm aware of the paravirt network drivers, but I've looked repeatedly for block drivers. They've always been 'planned for the future', but I've not been able to find them.

Where would I get paravirt Windows drivers for KVM?

Let's step back a bit

Posted Jun 3, 2009 20:13 UTC (Wed) by ncm (guest, #165) [Link] (7 responses)

Does it really matter any more whether a new release of Xen requires hardware virtualization support? Doesn't all the current hardware where people want to run Xen have such support already? This seems akin to compilers supporting funny x86 memory models long after everybody already had a 386. (There were lots of 286s still around, but their owners weren't buying new software.) How many of these 500,000 servers running Xen can't run KVM? And aren't those on a schedule to be retired, for other reasons (e.g. power consumption, increasing failure rate, etc.) soon?

Let's step back a bit

Posted Jun 4, 2009 12:56 UTC (Thu) by gwolf (subscriber, #14632) [Link] (5 responses)

> How many of these 500,000 servers running Xen can't run KVM? And aren't those on a schedule to be retired, for other reasons (e.g. power consumption, increasing failure rate, etc.) soon?

When I bought my laptop, in January 2008, I shopped explicitly for one with virtualization capability. However, for a long time I just was not able to use it as such - because of the lack of support in Xen for core features I want a laptop to support, such as ACPI (which is mainly useful for laptops, granted, but which could very well be used everywhere, leading to sensible power savings). Virtualization does not only work at the server farm, it can also be very useful at desktops.

Let's step back a bit

Posted Jun 4, 2009 15:42 UTC (Thu) by TomMD (guest, #56998) [Link] (3 responses)

> Virtualization does not only work at the server farm, it can also be very useful at desktops.

YES! And it's not just for x86 anymore - there are architectures that don't have VT or SVM hackery and are perfectly viable users of Xen. I'd love to run Xen on the (ARM-based) BeagleBoard or a BB-based laptop.

Let's step back a bit

Posted Jun 4, 2009 20:29 UTC (Thu) by drag (guest, #31333) [Link] (2 responses)

The VT and SVM CPU extensions are only needed on the x86 platform because the x86 ISA design is such a huge pile of shit.

KVM works fine on other architectures (like PowerPC), so that is all a bit of a red herring.

For x86 systems that do not have VT/SVM you can use Kqemu and get similar functionality and speed.

Let's step back a bit

Posted Jun 9, 2009 2:11 UTC (Tue) by xyzzy (guest, #1984) [Link]

I migrated my Xen DomUs to kqemu VMs a year ago. I didn't rigorously benchmark, but the performance drop was noticeable -- I went from being able to fill a 100Mbps link to not being able to fill even half of it. And this was with wget and apache and static files, so mostly an I/O performance issue.

Let's step back a bit

Posted Jun 9, 2009 7:50 UTC (Tue) by paulj (subscriber, #341) [Link]

Kqemu is long unmaintained. The Qemu developers are discussing ripping it out. Kqemu guest-kernel-space is very buggy and nearly always unusable. So any deployment of Kqemu will run the guest kernel under emulation, which obviously leads to very poor performance for all applications except those which are almost completely userspace CPU-bound.

Let's step back a bit

Posted Jun 7, 2009 10:41 UTC (Sun) by djao (guest, #4263) [Link]

> When I bought my laptop, in January 2008, I shopped explicitly for one with virtualization capability. However, for a long time I just was not able to use it as such - because of the lack of support in Xen for core features I want a laptop to support, such as ACPI

This is a fatal flaw in Xen, sure, but I don't understand why it would have stopped you from using KVM. You mention that you specifically bought a laptop with support for hardware virtualization, and KVM works fine with ACPI or any other core laptop feature, since KVM is just Linux.

I bought my laptop in April 2008 and I've been using it with KVM almost from day one. Everything works great, including ACPI.

Let's step back a bit

Posted Jun 4, 2009 13:28 UTC (Thu) by ESRI (guest, #52806) [Link]

I know we have a LOT of Dell PE 2850's and newer still with a lot of life and horsepower in them... perfect for running Xen, but not at all good for running KVM.

Let's step back a bit

Posted Jun 3, 2009 22:21 UTC (Wed) by nix (subscriber, #2304) [Link] (2 responses)

But KVM is just a speedup component for qemu, really. If you don't have KVM, qemu still works, only slower (much slower if you don't load kqemu).

If you don't have VT support, my understanding is that Xen similarly works, just slower.

So what's the substantive difference?

Let's step back a bit

Posted Jun 3, 2009 23:01 UTC (Wed) by nevets (subscriber, #11875) [Link] (1 responses)

I believe that a paravirtualized guest runs much faster than a qemu guest. But I have not taken any benchmarks.

I also think the issue is that Xen is still quite ahead of KVM in features, but this too is slowing down.

Let's step back a bit

Posted Jun 4, 2009 2:34 UTC (Thu) by drag (guest, #31333) [Link]

> I believe that a paravirtualized guest runs much faster than a qemu guest. But I have not taken any benchmarks.

YES, PV is massively faster than just plain Qemu. Massively faster in all respects. The overhead of Xen PV vs. naked hardware is going to be just a few percent.

Of course this requires modification to the guest.

