News and updates from DockerCon 2015
Posted Jul 2, 2015 5:56 UTC (Thu) by kleptog (subscriber, #1183)
In reply to: News and updates from DockerCon 2015 by b7j0c
Parent article: News and updates from DockerCon 2015
This isn't going to put ops out of a job, but it is going to change the way they look at services. Rather than managing machines they'll be managing services directly, which I think will actually make everyone happier. In a sense we're removing a layer of indirection: the VM host.
We're not going to run stuff in the cloud, though; there are limits. Our customers might.
Posted Jul 2, 2015 14:03 UTC (Thu) by raven667 (subscriber, #5198)
It was never anyone's goal to virtualize the machine such that you needed to run nested kernels. The whole point of the kernel and memory protection is to provide separation between applications; the problem is that the state of the art in process separation lagged far behind what was needed to actually run separate programs with shared libraries on the same hardware. Now that the decade-plus effort of adding namespaces and separation within the kernel is bearing some fruit, we can remove the layer of indirection, so that you have one kernel which handles both the interface for applications and the interface for hardware, and which has all the available information to make the best decisions about how to service the applications' requests.
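To make that concrete, here is a minimal sketch of the kernel primitive involved (nothing Docker-specific, and it assumes root or CAP_SYS_ADMIN): a process can detach into its own UTS namespace with unshare() and change the hostname it sees, while every other process on the same kernel keeps the original one.

/* Minimal sketch: one kernel, two views of the hostname.
 * Needs root (or CAP_SYS_ADMIN); error handling kept terse. */
#define _GNU_SOURCE
#include <sched.h>      /* unshare(), CLONE_NEWUTS */
#include <stdio.h>
#include <string.h>
#include <unistd.h>     /* sethostname(), gethostname() */

int main(void)
{
    char name[64];

    /* Detach this process into a private UTS namespace. */
    if (unshare(CLONE_NEWUTS) == -1) {
        perror("unshare");
        return 1;
    }

    /* The new hostname is visible only inside this namespace;
     * the rest of the system is untouched. */
    if (sethostname("container", strlen("container")) == -1) {
        perror("sethostname");
        return 1;
    }

    gethostname(name, sizeof(name));
    printf("hostname in the new namespace: %s\n", name);
    return 0;
}

Mount, PID, network and user namespaces follow the same pattern, and composing those is what container runtimes do instead of booting a second kernel.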
Posted Jul 2, 2015 16:43 UTC (Thu) by rriggs (guest, #11598)
Huh? How would one run a Windows OS on an Apple laptop without nested kernels? It is certainly a reasonable goal to do that. And with VMWare, there is no nesting of kernels -- just a hypervisor and non-nested OS peers. With Docker, it seems that one gives up OS flexibility for a little hardware efficiency.
Posted Jul 2, 2015 16:47 UTC (Thu) by jberkus (guest, #55561)
So to rephrase: "giving up some flexibility for order-of-magnitude better hardware efficiency," which seems like a reasonable tradeoff. Sometimes you need a full VM, but often you don't.
Posted Jul 7, 2015 4:17 UTC (Tue) by Gnep (guest, #102586)
The flexibility tradeoff is not made by Docker; it is containers vs. hypervisors. For a public CaaS platform, BYOK (bring-your-own-kernel) is necessary. Read more: https://hyper.sh/blog/post/2015/06/29/docker-hyper-and-th...
Posted Jul 3, 2015 2:14 UTC (Fri) by raven667 (subscriber, #5198)
I'm not sure how that's relevant to a discussion about Docker, which is largely about servers, especially servers running Linux, where it solves the same software deployment problem as full machine virtualization but with lower overhead.
> And with VMWare, there is no nesting of kernels -- just a hypervisor and non-nested OS peers.
I don't think that's how it works. The vmkernel hypervisor kernel is the primary kernel; all of the other OS kernels are subordinate to it and nested inside the interface which the vmkernel controls and provides. This performs well because the interface is often provided directly by hardware that can segment itself, such as an IOMMU, VT instructions, and an extra layer of page tables, with that segmentation controlled by the vmkernel. The vmkernel is the only kernel with a full and complete view of the hardware; the OS kernels which run under it are the only ones privy to the userspace processes and syscall API state.
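vmkernel itself is proprietary, but Linux's KVM exposes the same shape of interface, so as a rough illustration (a minimal sketch on a host that has /dev/kvm, not a description of VMware internals): the host kernel is the one that creates and owns the virtual machine, and a guest kernel only ever runs inside the interface handed out here.

/* Minimal sketch: the host kernel (Linux/KVM here, standing in for
 * vmkernel) creates and owns the VM; a guest kernel would run entirely
 * inside the interface set up through these file descriptors. */
#include <fcntl.h>
#include <linux/kvm.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(void)
{
    int kvm = open("/dev/kvm", O_RDWR | O_CLOEXEC);
    if (kvm < 0) {
        perror("open /dev/kvm");
        return 1;
    }

    /* The hypervisor interface is versioned by the host kernel. */
    printf("KVM API version: %d\n", ioctl(kvm, KVM_GET_API_VERSION, 0));

    /* Create an empty VM; memory slots, vCPUs and the extra layer of
     * page tables are all configured through this fd, under host control. */
    int vmfd = ioctl(kvm, KVM_CREATE_VM, 0);
    if (vmfd < 0) {
        perror("KVM_CREATE_VM");
        close(kvm);
        return 1;
    }
    printf("created VM fd %d (no guest kernel loaded yet)\n", vmfd);

    close(vmfd);
    close(kvm);
    return 0;
}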
> With Docker, it seems that one gives up OS flexibility for a little hardware efficiency.
Docker only targets Linux, and allows you to migrate from a bunch of Linux VMs on Xen, KVM, or VMware (I guess Hyper-V too) to running the same software on bare metal using namespaces, swapping one management framework for another and removing a layer of abstraction, which yields a performance benefit.
Posted Jul 3, 2015 10:34 UTC (Fri) by niner (subscriber, #26151)
Posted Jul 3, 2015 11:54 UTC (Fri) by kleptog (subscriber, #1183)
But there are plenty of cases where this isn't really a huge consideration, for example when you have a large number of applications that all need to communicate with each other and use the same data. The isolation of containers is more than enough here; stricter isolation is needed when you're dealing with multiple customers or different levels of data sensitivity.
I don't think anyone is proposing replacing every VM with a container, but there are lots of situations where containers are a huge improvement over what there is now.
Posted Jul 6, 2015 15:43 UTC (Mon) by drag (guest, #31333)
Containers vs. VMs is not an either-or situation.
When I run something like Docker on my desktop, I run it in a VM. Previously I would have about a dozen VMs and would have to start and stop them individually, because I could only run half a dozen at the same time at most. When I ran into applications that needed lots of RAM or high disk I/O speed, that reduced the number of things I could run even further.
Now I just kick off one (relatively) huge VM with a lot of RAM and CPU and run containers in that. It has its own dedicated drive (USB 3) and its own network interface, separate from the one I use on my desktop. That way, when I run applications that have unique I/O needs, I can run them at the same time as the rest of the software I want to run in the VM.
All in all, this has resulted in a massive improvement in resource utilization and day-to-day ease of use.