
KVM 15

Progress in the virtualization world sometimes seems slow. Xen has been the hot topic in the paravirtualization area for some years now - the first "stable" release was announced in 2003 - but the code remains outside of the mainline Linux kernel. News from that project has been relatively scarce of late - though the Xen hackers are certainly still out there working on the code.

On the other hand, KVM appears to be on the fast path. This project first surfaced in October 2006; it found its way into the 2.6.20 kernel a few months later. On February 25, KVM 15 was announced; this release has an interesting new feature: live migration. The speed with which the KVM developers have been able to add relatively advanced features is impressive; equally impressive is just how simple the code which implements live migration is.

KVM starts with a big advantage over other virtualization projects: it relies on support from the hardware, which is only available in recent processors. As a result, KVM will not work on the bulk of currently-deployed systems. On the other hand, designing for future hardware is often a good idea - the future tends to come quickly in the technology world. By focusing on hardware-supported virtualization, KVM is able to concentrate on developing interesting features to run on the systems that companies are buying now.

The migration code is built into the QEMU emulator; the relevant source file is less than 800 lines long. The live migration task comes down to the following steps:

  • A connection is made to the destination system. This can currently be done with a straight TCP connection to an open port on the destination (which would not be the most secure way to go) or by way of ssh.

  • The guest's memory is copied to the destination. This process is just a matter of looping through the guest's physical address space (which is just virtual memory on the host side) and sending it, one page at a time, to the destination system. As each page is copied, it is made read-only for the guest.

  • The guest is still running while this copy process is happening. Whenever it tries to modify a page which has already been copied, it will trap back into QEMU, which restores write access and marks the page dirty. Copying memory thus becomes an iterative process; once the entire range has been done, the migration code loops back to the beginning and re-copies all pages which have been modified by the guest. The hope is that the list of pages which must be copied shrinks with each pass over the space.

  • Once the number of dirty pages goes below a threshold, the guest system is stopped and the remaining pages are copied. Then it's just a matter of transmitting the current state of the guest (registers, in particular) and the job is done; the migrated guest can be restarted on its new host system.
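The iterative pre-copy scheme above can be sketched in a few lines of Python. This is a simulation, not the QEMU code: `send` and `get_dirty` stand in for the real page-transmission and write-protect trap machinery, and every name below is invented for illustration.

```python
# Sketch of the iterative pre-copy loop described above (a simulation,
# not QEMU code).  `send` copies one page to the destination; `get_dirty`
# stands in for the write-protect trap that records guest writes.
def live_migrate(pages, send, get_dirty, threshold=4, max_rounds=30):
    to_copy = set(pages)                  # first pass: every page is "dirty"
    rounds = 0
    while len(to_copy) > threshold and rounds < max_rounds:
        for pfn in sorted(to_copy):
            send(pfn, pages[pfn])         # copy, then write-protect the page
        to_copy = get_dirty()             # pages the guest wrote meanwhile
        rounds += 1
    # "Stop and copy": the guest is paused, the last dirty pages (and the
    # CPU state, omitted here) are sent, and the guest restarts remotely.
    for pfn in sorted(to_copy):
        send(pfn, pages[pfn])
    return rounds

# Tiny demo: 16 pages, and a guest whose dirty set shrinks each round.
pages = {n: bytes([n]) for n in range(16)}
dirty_per_round = iter([{1, 2, 3, 4, 5, 6}, {2, 3}])
copied = []
rounds = live_migrate(pages, lambda pfn, data: copied.append(pfn),
                      lambda: next(dirty_per_round, set()))
# rounds == 2; 16 + 6 + 2 = 24 page copies in total
```

The `max_rounds` cap matters in practice: a guest that dirties memory faster than the network can carry it would otherwise never converge, so at some point the algorithm must give up and fall back to stop-and-copy.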

As it happens, guest systems can be moved between Intel and AMD processors with no problems at all. Moving a 64-bit guest to a 32-bit host remains impossible; the KVM developers appear uninterested in fixing this particular limitation anytime soon. A little more information can be found on the KVM migration page.

The other feature of note is the announced plan to freeze the KVM interface for 2.6.21. This interface has been evolving quickly, despite the fact that it is a user-space API; this flexibility has been allowed because KVM is new, experimental, and has no real user base yet. The freezing of the API suggests that the KVM developers think things are reaching a stable point where KVM can be put to work in production systems. Perhaps that means that, soon, we'll find out how Qumranet, the company which has been funding the KVM work, plans to make its living.


KVM 15

Posted Mar 1, 2007 4:31 UTC (Thu) by aliguori (subscriber, #30636) [Link]

As it happens, guest systems can be moved between Intel and AMD processors with no problems at all. Moving a 64-bit guest to a 32-bit host remains impossible; the KVM developers appear uninterested in fixing this particular limitation anytime soon.

KVM migration is being developed in parallel to QEMU migration. It will actually be possible to migrate a 64-bit KVM guest to a 32-bit host running qemu-system-x86_64. In fact, it will be possible to migrate a 64-bit x86 KVM guest to qemu-system-x86_64 running on PowerPC (or any platform that QEMU supports).

Of course, you are moving from a virtualized host to an emulated host so performance will suffer. Even if a host is 64-bit capable, if it's running in 32-bit mode, supporting a 64-bit guest is just too much of a pain for virtualization.

Correction

Posted Mar 1, 2007 7:01 UTC (Thu) by avik (guest, #704) [Link]

The guest is still running while this copy process is happening. Whenever it tries to modify a page which has already been copied, it will trap back into QEMU [...]
No, assuming the guest is running under kvm (and not pure qemu), the guest will trap into the kernel (which marks the page dirty) and then resume execution.

KVM 15

Posted Mar 1, 2007 7:13 UTC (Thu) by avik (guest, #704) [Link] (4 responses)

In addition to relying on hardware virtualization, kvm has two additional advantages:

  • it relies on the kernel for the stuff the kernel is good at: scheduling, memory management, security, I/O, power management; the list goes on and on.
  • it relies on qemu for the stuff qemu is good at: emulation. kvm only uses the chipset and I/O emulation (and not the cpu emulation), but a world of work was saved by using qemu. Live migration, for example, is actually a qemu project which was adapted to also support kvm.
By relying on the kernel and qemu, kvm is able to focus firmly on virtualization issues. That is what makes the fast development pace possible.

[I'm the kvm maintainer]

KVM 15

Posted Mar 1, 2007 10:50 UTC (Thu) by ekj (guest, #1524) [Link] (2 responses)

Ah, I see what you're trying -- trying to give all those *other* projects the honour.

It won't work. We'll still consider you a cool potato. This is the kinda thing that makes my Windows-using co-workers go, "Linux can do *what*?" - which happens with increasing frequency lately.

KVM 15

Posted Mar 1, 2007 11:31 UTC (Thu) by avik (guest, #704) [Link] (1 response)

Ah, I see what you're trying -- trying to give all those *other* projects the honour.

Well, er, yes.

It could also be interpreted as a mean and underhanded swipe at other virtualization projects which have written their own kernel. I'm sure no one on LWN would suggest that I'd make such an insinuation, though.

It won't work. We'll still consider you a cool potato. This is the kinda thing that makes my Windows-using co-workers go, "Linux can do *what*?" - which happens with increasing frequency lately.

It's all part of the master plan. I get to be a cool potato (potato?!) *and* appear to be a generous soul whose only wish is to see the credit go where it really belongs.

KVM 15

Posted Mar 9, 2007 17:04 UTC (Fri) by slamb (guest, #1070) [Link]

Perhaps some day you can aspire to being "poa mchizi com ndizi", or "cool crazy like a banana".

KVM 15

Posted Mar 5, 2007 13:03 UTC (Mon) by joern (guest, #22392) [Link]

Actually, I consider relying on hardware virtualization the smallest advantage of all. Even if the paravirtualization approach makes sense - and in many cases it does - there is no reason to re-implement a scheduler, memory management, hardware-bug workarounds, etc.

The ultimate hypervisor is the Linux kernel and kvm is the first widely-available project to take advantage of it.

KVM 15

Posted Mar 1, 2007 22:17 UTC (Thu) by marduk (subscriber, #3831) [Link]

Have the KVM changes to qemu made their way into qemu proper, or do they still use a forked version?

Intel <=> AMD

Posted Mar 3, 2007 9:36 UTC (Sat) by addw (guest, #1771) [Link] (1 responses)

''guest systems can be moved between Intel and AMD processors with no problems at all.''

What about those programs that use different instructions depending on the CPU? These generally detect the CPU type when they start, but after a migration they will fail - since they don't redetect.

Intel <=> AMD

Posted Mar 4, 2007 6:04 UTC (Sun) by avik (guest, #704) [Link]

Programs detect cpu capabilities by means of the cpuid instruction. Since that instruction is itself virtualized and controlled by the host userspace, one can virtualize a processor with the least capabilities in use on the server farm. Given that, programs will only use instructions that are present on all processors that can be a migration target.
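The masking described above amounts to intersecting feature sets across the farm. A minimal sketch - the feature names and hosts below are invented for illustration, and real code would mask bits in the CPUID leaves rather than compare strings:

```python
# Expose to the guest only the features present on every possible
# migration target, so it never comes to depend on an instruction that
# some target lacks.  Feature names and hosts are made up.
def common_features(hosts):
    feats = set(hosts[0])
    for host in hosts[1:]:
        feats &= set(host)    # drop anything a target is missing
    return feats

farm = [
    {"sse", "sse2", "sse3", "ssse3", "sse4_1"},  # newer processor
    {"sse", "sse2", "sse3"},                     # oldest box in the farm
]
guest_features = common_features(farm)   # {'sse', 'sse2', 'sse3'}
```

A program running in such a guest that probes cpuid at startup will therefore see only the lowest common denominator, and migration can never strand it on a host missing an instruction it already decided to use.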


Copyright © 2007, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds