Linux team tells VMware and Xen to get their acts together (Register)

"" Morton, for example, said his preference for VMI has been "overstated"... VMware have proposed an implementation that would allow, in theory, different kinds of hypervisors to run beneath the kernel," Morton said, in an interview with The Register. "It is, if you like, a hypervisor-neutral interface. The question remains if we want to have a hypervisor neutral interface... ""

"" Xen, by contrast, wants to make the most of its open source ties and create the tightest possible bonds with Linux. Behind closed doors, some Xen backers say that Sun, Microsoft and Novell will refuse to support VMI. Such political manoeuvering shows how seriously Xen backers take this debate. ""

link is to page two

Posted Apr 20, 2006 18:54 UTC (Thu) by xtifr (guest, #143) [Link] (1 responses)

For no apparent reason that I can see, the provided link points to page two of the referenced article. Even though I do know something about the issues and the players, I found this a little confusing until I realized what was going on.

Here's the URL for page one: http://www.theregister.co.uk/2006/04/20/vmware_linux_xen/

link is to page two

Posted Apr 20, 2006 21:32 UTC (Thu) by cook (subscriber, #4) [Link]

The link has been fixed.

Posted Apr 21, 2006 14:46 UTC (Fri) by mmarq (guest, #2332) [Link] (4 responses)

Isn't that a similar paradox to the device driver interface issue? Doesn't a "neutral" interface in the kernel stifle the capability to innovate? Don't all hypervisors under Linux have to be GPLed, if they are derived works?

IMO there are valid points to each of the visions, but if a "neutral" interface is adopted for hypervisors, why can't the same be done for the relevant device driver interfaces?

Posted Apr 21, 2006 16:46 UTC (Fri) by dlang (guest, #313) [Link] (3 responses)

the VMI interface that's been proposed has quite a few advantages.

1. by plugging in the appropriate thing to the interface, the same kernel can run either virtualized or on raw hardware.

2. it can increase portability of virtual clients (for example, Xen2 and Xen3 are incompatible with each other; with the VMI interface they have run Xen3 clients on Xen2 hosts and vice versa).

as for the concerns that it will be lacking something and therefore limit things in the future: they addressed this by pointing out that they define basically everything that could be remotely called privileged in this layer (most of them with simple passthrough calls to the real thing), so it's unlikely that something would be needed that isn't going through the layer.

I don't have a handy link to the thread that discussed this; it would probably be worthwhile for someone who really understands kernel programming to summarize that thread, and especially this portion of it.

Posted Apr 21, 2006 20:04 UTC (Fri) by mmarq (guest, #2332) [Link] (2 responses)

"" 1. by plugging in the appropriate thing to the interface, the same kernel can run either virtualized or on raw hardware. ""

I believe here the "thing" means a VMM, and the interface (VMI) allows for "on-the-fly"(?) mode switching, making it, the interface, the most important software layer in any OS kernel that adopts it, correct?... but isn't the paravirtualization technique already an optimized mix of virtual/raw mode?

Well, I'm not a software engineer, and I haven't read the specifications for VMI. From the information I could gather, it "sounds" similar in some of its purposes to an exokernel, but somehow inferior in flexibility and protection ability. (http://tunes.org/wiki/Exokernel)

The point I'm trying to make is: isn't it about time to think about real advanced "killer" features?... perhaps a new kernel design would only mean basically adding a layer that is not already there. For example, Apache running as part of a LibOS on top of an exokernel would allow Apache to specialize heavily in virtual memory and file system access, and run several times faster. And if hosted on a VM, upon migration between virtual machines it has the advantage of only migrating itself and that tiny exokernel, making migration much faster too. Yet that doesn't mean that substantial parts of VMI can't be found inside an exokernel as "downloaded code".

"" 2. it can increase portability of virtual clients (for example, Xen2 and Xen3 are incompatible with each other; with the VMI interface they have run Xen3 clients on Xen2 hosts and vice versa). ""

Doesn't it heavily impact performance and complexity to run a host inside another host?... I can't see the benefit here! Migration, yes; cascading...?

"" as for the concerns that it will be lacking something and therefore limit things in the future: they addressed this by pointing out that they define basically everything that could be remotely called privileged in this layer ""

I believe what that means is that there isn't the remotest possibility of it (VMI) being a derivative work of any sort, because the API is for a separate communication and control layer. So, again: is VMI the best *kernel of kernels* for all purposes, or does it only rock for VMMs?... is it wise?

Posted Apr 23, 2006 21:22 UTC (Sun) by dlang (guest, #313) [Link] (1 responses)

1. VMware and Xen are by definition running a host inside another host. this does mean that there is a performance hit in doing so; the quest is to minimize the hit while providing the same isolation.

there are other projects that are aiming at providing additional isolation without the overhead of running a separate kernel (I don't have their names at the top of my head right now)

because VMware and Xen do run separate kernels and request services from the host OS, they are not going to be any higher in performance than the host OS. even if you have a nifty new VM strategy, the client kernel still needs to request its memory from the host kernel, and so is subject to the VM strategy limitations of the host kernel.

2. you misunderstood me. when you run Xen you run two different versions of the kernel:

A. the host kernel
B. the client kernel

a Xen 2.x host kernel cannot run a Xen 3.x client kernel, and vice versa (let alone a VMware guest, etc.)

what the VMI interface does is allow a single kernel to be used as either a host kernel or a client kernel, and eliminate incompatibilities between different host kernels and the clients they run.

so with the VMI stuff you could have a Xen 3.x host run Xen 3.x clients AND Xen 2.x clients, something that Xen by itself is not able to do.

if this works as advertised (something that I am not technically qualified to evaluate) then it allows for a separation of client and host development. the host (hypervisor) side can work on adding fairness and prioritization features without having to change the client side at the same time. they also claim that the interface layer lets the hypervisor (host) layer implement additional speedups over time without having to change/rebuild the clients (this claim was questioned heavily when it was raised, and it seems as if the VMI folks either convinced the doubters or just wore them out :-)

Posted May 2, 2006 22:43 UTC (Tue) by dps (guest, #5725) [Link]

The last time I heard about this subject, VMware emulated a PC right down to the hardware level (presumably using x86 virtual mode). The performance hit for things like I/O was not trivial. I think the VMware people claim to have made things faster here, but they also said nobody could do "unapproved" benchmarks, so trustworthy data is non-existent.

Xen, on the other hand, requires some fairly drastic kernel changes, especially in the memory management area. They claim that this makes the hit for virtualisation much smaller, and they provide data to back this claim up. Random people can repeat their benchmarks.

A merger sounds mildly unlikely. The Xen people could credibly argue that VMI buys them nothing... why would anyone *expect* a Xen 2.x client to run on a Xen 3.x microkernel (sorry, hypervisor) box?

I do not see Xen and VMware as competing products. If you want M$ Windows, or anything else with no source changes, you need VMware. If source changes are not a problem, then Xen might be a higher-performance solution.