LWN: Comments on "Ten years of KVM" https://lwn.net/Articles/705160/ This is a special feed containing comments posted to the individual LWN article titled "Ten years of KVM". en-us Tue, 16 Sep 2025 09:40:33 +0000 https://www.rssboard.org/rss-specification lwn@lwn.net
Ten years of KVM https://lwn.net/Articles/718661/ https://lwn.net/Articles/718661/ zenaan <div class="FormattedComment"> <font class="QuotedText">&gt; faster disk and network I/O is always an area of research</font><br> <p> Snabb seems to have hit the throughput jackpot:<br> <a href="https://lwn.net/Articles/713918/">https://lwn.net/Articles/713918/</a><br> <p> Anyone know if this would be a good approach for KVM/virtio?<br> </div> Sat, 01 Apr 2017 08:49:17 +0000
Ten years of KVM https://lwn.net/Articles/707424/ https://lwn.net/Articles/707424/ samyan <div class="FormattedComment"> So clearly explained! Thanks!<br> </div> Sun, 27 Nov 2016 04:48:44 +0000
Ten years of KVM https://lwn.net/Articles/705987/ https://lwn.net/Articles/705987/ pbonzini <div class="FormattedComment"> Aha, now I remember Stefano telling me about XenClient! Unfortunately, while writing that comment I mistakenly recalled the name as XenDesktop, which is actually something completely different, so I didn't mention it.<br> </div> Wed, 09 Nov 2016 11:36:24 +0000
Ten years of KVM https://lwn.net/Articles/705776/ https://lwn.net/Articles/705776/ dunlapg <p><blockquote>George, I think your assessment of the benefits of Xen vs KVM is fair. Of course the cost/benefit ratio of improving the hypervisor vs. improving the kernel is different for Red Hat and Citrix!</blockquote> <p>Glad I managed to get close to the target then. :-) <p><blockquote>I would only add that unfortunately (except for QubesOS!!) usage of Xen's more security-oriented features such as driver domains is very limited. So while Xen aims at strong isolation of the hypervisor, in the end the attack surface for HVM guests is going to be similar, because KVM runs QEMU in a strongly confined SELinux setup, and attacking the support code for hardware virtualization extensions is similar for Xen and KVM.</blockquote> <p>It's true that the average distro user at the moment will have a hard time taking advantage of Xen's extra security features. Driver domains don't integrate well with distro networking setups; QEMU stub domains take up extra memory; and setting up XSM to do anything other than the default is quite complicated. Nobody who is primarily selling into that space is actively developing solutions like that for Xen (as opposed to, say, Red Hat, which developed sVirt). <p>But there are actually lots of other projects that use domain disaggregation, XSM, driver domains, and other features of Xen besides QubesOS. <a href="http://openxt.org/">OpenXT</a> (formerly XenClient) is one of them -- they have a very committed community behind them. But in all cases they tend to be more "embedded"-style all-in-one products, where control of Xen's configuration is tightly managed by the developers to achieve a specific end; and so they're less visible. <p><blockquote>More important, you're underestimating the mess that Linux support was in around 2008. The official kernel remained stuck at 2.6.18 for years and used Mercurial like the rest of Xen, rather than git like Linux; also, upstream support for Dom0 was limited or nonexistent until IIRC 2.6.36 (possibly later for some of the pv backends?)
and even for DomU it wasn't clear whether to use SUSE's forward port of Xenolinux or the upstream pv-ops code.</blockquote> <p>Full Dom0 backend support wasn't available until Linux 3.0. (I remember because we joked among ourselves that Dom0 support was what Linus had been waiting for to switch the major version number.) <p>But yeah, it certainly was a mess for a long time; and one of the reasons was the more extensive changes required to run Linux as a control domain. I didn't want to deny that; I mainly wanted to try to clarify what the original article meant when it said, "[Xen] needed to run a modified guest kernel in order to boot virtual machines". Someone might have read that as meaning that one of the motivations for developing KVM was because Xen couldn't run Windows guests, which is incorrect. Mon, 07 Nov 2016 12:42:41 +0000
Ten years of KVM https://lwn.net/Articles/705721/ https://lwn.net/Articles/705721/ pbonzini <div class="FormattedComment"> George, I think your assessment of the benefits of Xen vs KVM is fair. Of course the cost/benefit ratio of improving the hypervisor vs. improving the kernel is different for Red Hat and Citrix!<br> <p> I would only add that unfortunately (except for QubesOS!!) usage of Xen's more security-oriented features such as driver domains is very limited. So while Xen aims at strong isolation of the hypervisor, in the end the attack surface for HVM guests is going to be similar, because KVM runs QEMU in a strongly confined SELinux setup, and attacking the support code for hardware virtualization extensions is similar for Xen and KVM.<br> <p> More important, you're underestimating the mess that Linux support was in around 2008. The official kernel remained stuck at 2.6.18 for years and used Mercurial like the rest of Xen, rather than git like Linux; also, upstream support for Dom0 was limited or nonexistent until IIRC 2.6.36 (possibly later for some of the pv backends?) and even for DomU it wasn't clear whether to use SUSE's forward port of Xenolinux or the upstream pv-ops code. This is all in the remote past now, but at the time of RHEL5.4, which is when I started working on Xen and virtualization, it was quite a pain in the neck. :-)<br> </div> Sat, 05 Nov 2016 13:54:48 +0000
Ten years of KVM https://lwn.net/Articles/705635/ https://lwn.net/Articles/705635/ dunlapg <p>Nice article on the history of KVM. Just a couple of comments related to statements about Xen: <p><blockquote>Since Xen was introduced before the virtualization extensions were available on x86, it had to use a different design. First, it needed to run a modified guest kernel in order to boot virtual machines. </blockquote> <p>I'm not sure exactly what this is supposed to mean. By the time KVM came out in 2006, Xen had had support for unmodified guests <a href="https://lwn.net/Articles/162841/">for a year already</a>. Running in that mode requires QEMU to do device emulation, but so does KVM. <p>Perhaps it means that the interface for "domain 0", which is where you run the toolstack used to boot VMs on a Xen system, was designed before virtualization extensions were available; so the changes required to Linux to run the <b>control stack</b> on Xen are more extensive than those required to run KVM. That is certainly true. <p>(As an aside, the Xen community have been working on an update to this interface to allow dom0 to take advantage of the virtualization extensions. That should greatly reduce the footprint of Xen changes in Linux.)
<p><blockquote>Second, Xen took over the role of the host kernel, relegating Linux to only manage I/O devices as part of Xen's special "Dom0" virtual machine. This meant that the system couldn't truly be called a Linux system — even the guest operating systems were modified Linux kernels with (at the time) non-upstream code.</blockquote> <p>Again, support for unmodified guest operating systems had already been in place in Xen for a year by the time KVM was released. If you wanted to use a modified version of Linux for a normal guest, you could, but it wasn't required. <p>"Not truly a Linux system" had me confused for a bit. It is certainly true that Xen uses its own CPU scheduler rather than Linux's, and that it has another layer of protection around memory and hardware management. That's the point, really. Linux's scheduler is designed for processes (primarily kernel compilations), and Xen's is designed for VMs. Linux provides a large, rich interface, which makes it difficult to provide strong isolation; Xen provides a much narrower interface, which makes it easy to provide strong isolation. <p>The fact that you're not getting Linux's scheduler also means you're not getting Linux's power management; the fact that you're not getting Linux's memory manager means that you don't get Linux's NUMA balancer or swap system. Xen has its own power management, NUMA memory balancer, and swap system, while KVM re-uses the ones in Linux. <p>Both models have advantages and disadvantages. In Xen, you can tailor your algorithms to focus purely on virtual machines; in KVM, the algorithms have to handle both processes and VMs, and processes tend to get first consideration. On the other hand, in KVM, any advancement in power management or NUMA support in Linux is automatically inherited by KVM, whereas Xen has to duplicate all that effort, and will inevitably be behind in some areas. Which one you think is more important depends largely on your individual use case and your taste. Fri, 04 Nov 2016 15:56:42 +0000
Ten years of KVM https://lwn.net/Articles/705639/ https://lwn.net/Articles/705639/ rvfh <div class="FormattedComment"> Very nice article, thanks!<br> </div> Fri, 04 Nov 2016 15:11:13 +0000
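As a concrete illustration of the KVM model dunlapg describes above, where a guest is just an ordinary Linux process whose vCPUs the kernel schedules and whose memory it manages, here is a minimal sketch against the /dev/kvm ioctl interface. It follows the shape of the well-known "Using the KVM API" example: a 16-bit real-mode guest, a hard-coded I/O port 0x3f8 for output, and error handling omitted. It is only an illustration of the interface, not how QEMU or any of the commenters' code actually works.

```c
/*
 * Minimal sketch of the /dev/kvm ioctl interface (x86, error handling
 * omitted for brevity). The guest is just anonymous memory in this
 * process plus a vCPU file descriptor, so Linux's scheduler and memory
 * manager treat it like any other process.
 */
#include <fcntl.h>
#include <linux/kvm.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    /* 16-bit real-mode guest code: write 'K' to port 0x3f8, then halt. */
    const uint8_t code[] = {
        0xba, 0xf8, 0x03,   /* mov $0x3f8, %dx */
        0xb0, 'K',          /* mov $'K', %al   */
        0xee,               /* out %al, (%dx)  */
        0xf4,               /* hlt             */
    };

    int kvm = open("/dev/kvm", O_RDWR | O_CLOEXEC);
    int vmfd = ioctl(kvm, KVM_CREATE_VM, 0);

    /* Guest "RAM" is ordinary anonymous memory owned by this process. */
    void *mem = mmap(NULL, 0x1000, PROT_READ | PROT_WRITE,
                     MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    memcpy(mem, code, sizeof(code));
    struct kvm_userspace_memory_region region = {
        .slot = 0,
        .guest_phys_addr = 0x1000,
        .memory_size = 0x1000,
        .userspace_addr = (uint64_t)(uintptr_t)mem,
    };
    ioctl(vmfd, KVM_SET_USER_MEMORY_REGION, &region);

    /* The vCPU is a file descriptor; KVM_RUN executes on this thread. */
    int vcpufd = ioctl(vmfd, KVM_CREATE_VCPU, 0);
    int mmap_size = ioctl(kvm, KVM_GET_VCPU_MMAP_SIZE, NULL);
    struct kvm_run *run = mmap(NULL, mmap_size, PROT_READ | PROT_WRITE,
                               MAP_SHARED, vcpufd, 0);

    /* Point the vCPU at the code: flat real mode, rip = 0x1000. */
    struct kvm_sregs sregs;
    ioctl(vcpufd, KVM_GET_SREGS, &sregs);
    sregs.cs.base = 0;
    sregs.cs.selector = 0;
    ioctl(vcpufd, KVM_SET_SREGS, &sregs);
    struct kvm_regs regs = { .rip = 0x1000, .rflags = 0x2 };
    ioctl(vcpufd, KVM_SET_REGS, &regs);

    /* Run until the guest halts; I/O exits come back to userspace,
     * which is where QEMU would normally do its device emulation. */
    for (;;) {
        ioctl(vcpufd, KVM_RUN, NULL);
        if (run->exit_reason == KVM_EXIT_IO && run->io.port == 0x3f8)
            putchar(*((const char *)run + run->io.data_offset));
        else if (run->exit_reason == KVM_EXIT_HLT)
            break;
    }
    putchar('\n');
    close(vcpufd);
    close(vmfd);
    close(kvm);
    return 0;
}
```

Run on an x86 host by a user with access to /dev/kvm, this prints the single 'K' that the guest wrote with an out instruction; scheduling the vCPU, backing the guest RAM, and swapping it if needed are all left to the ordinary Linux machinery, which is exactly the trade-off discussed in the comments above.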