LWN: Comments on "An introduction to Clear Containers" https://lwn.net/Articles/644675/ This is a special feed containing comments posted to the individual LWN article titled "An introduction to Clear Containers". en-us Mon, 13 Oct 2025 08:40:38 +0000 Mon, 13 Oct 2025 08:40:38 +0000 https://www.rssboard.org/rss-specification lwn@lwn.net An introduction to Clear Containers https://lwn.net/Articles/689484/ https://lwn.net/Articles/689484/ Sam_Smith <div class="FormattedComment"> Yeah, I completely agree with you on this one. That didn't cross my mind, but when I read your comment the light went on in my head for JEOS.<br> <p> --<br> Sam_Smith<br> Web Developer and Aspiring Chef<br> Large file transfers<br> www.innorix.com/en/DS<br> </div> Thu, 02 Jun 2016 09:52:40 +0000 An introduction to Clear Containers https://lwn.net/Articles/674209/ https://lwn.net/Articles/674209/ PradeepJagadeesh <div class="FormattedComment"> Hi all,<br> <p> I am new to Clear Containers. I am experimenting with the memory footprint of the VMs. It is mentioned in the article that the memory footprint per container is 18-20MB. Can someone please help me understand? Even if I use the demo images which are part of this article I can't get those numbers. I always get &gt; 60MB per image. Even if I launch 100 instances it will not be less than 50MB. Please help me to understand. 
Am I missing something here?<br> <p> When you say overhead, is that hypervisor+guest?<br> <p> Also please let me know which kernel you used to come to this number (18MB) and the CLI options you used for running the container.<br> <p> Thanks in advance.<br> <p> Regards,<br> Pradeep<br> </div> Mon, 01 Feb 2016 13:09:49 +0000 An introduction to Clear Containers https://lwn.net/Articles/668988/ https://lwn.net/Articles/668988/ gdamjan <div class="FormattedComment"> (maybe someone still reads this)<br> <p> 1) Is there some checklist for building a kernel without the legacy stuff, and with the necessary stuff for kvmtool/lkvm?<br> for ex. is:<br> # CONFIG_PCI is not set<br> CONFIG_NET_9P_VIRTIO=y<br> CONFIG_VIRTIO_BLK=y<br> CONFIG_VIRTIO_NET=y<br> CONFIG_VIRTIO_CONSOLE=y<br> <p> ok? enough? is more needed, or less?<br> <p> 2) Also, what does userspace need to do to initialize the network and the 9pfs shared directory?<br> </div> Wed, 23 Dec 2015 02:23:07 +0000 An introduction to Clear Containers https://lwn.net/Articles/658735/ https://lwn.net/Articles/658735/ einstein <div class="FormattedComment"> <font class="QuotedText">&gt; The current Docker wrapper around the container primitives is OK. It's not great, but it's a start (and definitely better than lxc - yeech). It's still a bit thick as containers go, but it's far thinner than a VM.</font><br> <p> Docker has a completely different focus than lxc, or openvz. Docker seems, to me, primarily a way to launch a single application, so it's basically this little wrapper around an executable. In stark contrast, a typical use case for e.g. openvz is to run a full-blown, multi-user, multi-function server, and lxc has the same sort of capabilities. 
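[Ed: on the unanswered 9pfs question a few comments up, the guest-side setup can be sketched roughly as below. This is a hedged illustration, not kvmtool documentation: the interface name (eth0), the addresses, and the 9p mount tag (hostshare) are all assumptions that depend on how lkvm/qemu was invoked.]

```shell
# Guest-side sketch. In addition to the config fragment quoted above, a 9p
# mount also needs CONFIG_NET_9P=y and CONFIG_9P_FS=y in the guest kernel.
ip link set lo up
ip link set eth0 up
ip addr add 192.168.33.15/24 dev eth0      # or run a DHCP client instead
ip route add default via 192.168.33.1
# Mount the shared directory exported by the host under the assumed tag:
mount -t 9p -o trans=virtio,version=9p2000.L hostshare /mnt
```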
Each approach has its supporters and legitimate use cases.<br> </div> Tue, 29 Sep 2015 19:03:27 +0000 An introduction to Clear Containers https://lwn.net/Articles/658244/ https://lwn.net/Articles/658244/ philipsbd <div class="FormattedComment"> This is now merged upstream in rkt: <a href="https://coreos.com/blog/rkt-0.8-with-new-vm-support/">https://coreos.com/blog/rkt-0.8-with-new-vm-support/</a><br> </div> Thu, 24 Sep 2015 05:10:45 +0000 An introduction to Clear Containers https://lwn.net/Articles/656663/ https://lwn.net/Articles/656663/ bmullan <div class="FormattedComment"> Not sure LXC deserves your "yeech". It's not Docker and isn't intended to be.<br> <p> With the lxc 1.x release last year, it supports both unprivileged &amp; privileged containers, pre-built container templates for CentOS, Debian, Oracle, Ubuntu and other Linuxes (so I can have, say, an Ubuntu host &amp; any other Linux in an LXC container), CRIU, "nested" containers, and security with AppArmor, SELinux &amp; seccomp.<br> <p> With the introduction of LXD ("lex-dee") to manage LXC containers locally or remotely, LXC gained a RESTful API.<br> <p> There's now an LXD/LXC plugin for OpenStack (nclxd) so OpenStack can spin up local/remote LXC containers as "VMs" instead of KVM etc. VMs.<br> <p> You can today already use Canonical's Juju to spin up a complete OpenStack on your laptop, all running in LXC.<br> <p> LXC is also dead simple to use from the CLI perspective.<br> <p> Just thought I'd highlight that not all innovation is limited to Docker.<br> <p> Stephane Graber is one of the core LXC developers and he wrote a great 10-part series last year to introduce all the new LXC features:<br> <p> <a rel="nofollow" href="https://www.stgraber.org/2013/12/20/lxc-1-0-blog-post-series/">https://www.stgraber.org/2013/12/20/lxc-1-0-blog-post-ser...</a><br> <p> </div> Fri, 04 Sep 2015 16:09:02 +0000 Virtfs https://lwn.net/Articles/656257/ https://lwn.net/Articles/656257/ nix <div class="FormattedComment"> FWIW last time I tried 9pfs (using qemu) a couple of years back it was a couple of orders of magnitude slower than NFS onto the host. This was extremely surprising to me and may well be a bug that's been fixed since then.<br> </div> Tue, 01 Sep 2015 08:02:02 +0000 Virtfs https://lwn.net/Articles/655955/ https://lwn.net/Articles/655955/ Jonno <div class="FormattedComment"> <font class="QuotedText">&gt; my layman's understanding was that VirtFS (the paravirtualized 9p fs used) did have zero copy.</font><br> 9pfs over virtio is zero-copy in the networking sense, not in the memory-management sense.<br> <p> E.g. data goes directly from the page cache to the virtio bus, and then directly from the virtio bus to the page cache on the other side, without having to copy everything to and from some intermediary protocol packet. There are still separate data copies in the host and guest page caches, and obviously all changes to one have to be synced to the other...<br> </div> Fri, 28 Aug 2015 13:25:39 +0000 Virtfs https://lwn.net/Articles/655937/ https://lwn.net/Articles/655937/ rektide <div class="FormattedComment"> It sounds like you've gone deep into due diligence, but my layman's understanding was that VirtFS (the paravirtualized 9p fs used) did have zero copy. I suppose the guest would still be doing all the normal file buffering in this case though - I'd expect it would perform near native for systems not under memory pressure. 
<br> [1] <a rel="nofollow" href="https://www.linuxplumbersconf.org/2010/ocw/system/presentations/597/original/VirtFS_LPC_2010_Final.pdf">https://www.linuxplumbersconf.org/2010/ocw/system/present...</a><br> </div> Fri, 28 Aug 2015 10:43:58 +0000 An introduction to Clear Containers https://lwn.net/Articles/647045/ https://lwn.net/Articles/647045/ ras <div class="FormattedComment"> <font class="QuotedText">&gt; for me fixing it is "the container system becoming update/distro aware".</font><br> <p> That gets hard, because your container now has to know a lot about the packaging system the distro uses. In Debian this means it would have to run dpkg itself, which is possible because dpkg does take a --root parameter. But that means the container would have to handle dependency resolution. All of which is possible of course, and if we were only talking about Debian probably even easy, for some definition of easy. [0] But we are talking about tracking every packaging system out there - including things like pypi.<br> <p> They are not going to do that. Their success so far has been built on avoiding doing it. Instead the user writes a script, and the script uses some magic to build an image. The container software's role starts in earnest after the image is built - it can deploy images across server farms, start them, stop them and even provide tools like etcd so they can configure themselves. It all works because the icky details of how to build and extend an image are all held inside the image itself. In that 140MB. That's why it's never going away without something changing.<br> <p> If you are going to get rid of that 140MB there is one place I am pretty sure it isn't going to migrate to - and that is into the container software, e.g. docker. Debian providing tools that manipulate packages inside of a container, and the user running those tools from the existing docker script, sounds like a much saner route to me. Of course this means the docker script would only work when run on a Debian host. Which is how we get to containers being tied to a particular distribution - while the container software (e.g. Docker) remains distribution agnostic. In principle the built containers could be distribution agnostic, but since Debian built it, it's not difficult for the Debian host to figure out which containers are affected by a security patch and notify the container software to do something about it. And thus you get to the containers being distribution specific too.<br> <p> So we get back to my original point. All the changes that must happen to make this work are in Debian, or whatever distro is being used. The container software just continues to do what it does now. Thus my conclusion that the next step in the evolution of containerisation must come from the distros - not the container software.<br> <p> [0] The recent discussion on the Debian development lists over how poorly aptitude does dependency resolution compared to apt provides a hint. "Easy" here means it could be done by someone - but even software written by Debian itself has trouble getting it right.<br> </div> Wed, 03 Jun 2015 02:56:25 +0000 An introduction to Clear Containers https://lwn.net/Articles/647043/ https://lwn.net/Articles/647043/ dlang <div class="FormattedComment"> If you are taking the approach that containers should be replaced instead of upgraded, you don't need all that infrastructure in the container because you aren't going to use it.<br> <p> I think we differ mostly in that as far as you are concerned, fixing this is "the distros becoming container aware" while for me fixing it is "the container system becoming update/distro aware". The difference being which side is responsible for making the changes.<br> </div> Wed, 03 Jun 2015 00:34:24 +0000 An introduction to Clear Containers https://lwn.net/Articles/647029/ https://lwn.net/Articles/647029/ ras <div class="FormattedComment"> Once the container is running I have no doubt very little of the installed stuff is used. But that isn't because it isn't needed. It's for the same reason my employer hires me instead of just my hands - my hands need a support system.<br> <p> The 140MB [0] that debootstrap installs maintains the Debian distribution that lives inside of the container. The way things are done now it's a necessary part of the container. Dockerfiles generally start with "take one minimal Debian installation; apt-get install these packages ...". That can't happen without that 140MB. If you get your containers to install their own security patches, that 140MB is going to be needed for the life of the container. Even if you don't, Debian's policy of not having explicit dependencies on "required" packages means it's very difficult to figure out what you can remove without writing your own software to follow the reverse dependencies (which I have done).<br> <p> Part of the reason I say the distros have to change is I agree this stuff shouldn't be in the container. If the distros become container aware, the host can use its copy of dpkg and so on to build and later maintain containers. If that happens you get the added benefit of security patches being applied automagically by the host, as happens now in the non-container world, rather than having to do this manual rebuilding rubbish.<br> <p> This is where my statement above, that the next step in the move to containers is for the distros to change, comes from. At the moment what we have is half baked.<br> <p> [0] I've only recently realised that a Debian minimal install is 140MB. That's huge - and that's after I've pruned the caches debootstrap creates. 
Damn Small Linux for example crams an entire distribution (kernel, GUI environment, 2(!) browsers, a plethora of editors) into 120MB.<br> </div> Wed, 03 Jun 2015 00:09:28 +0000 An introduction to Clear Containers https://lwn.net/Articles/646904/ https://lwn.net/Articles/646904/ dlang <div class="FormattedComment"> If you do it a package at a time, I easily believe you. If you go down to the file level, the story is very different.<br> </div> Mon, 01 Jun 2015 17:16:28 +0000 An introduction to Clear Containers https://lwn.net/Articles/646825/ https://lwn.net/Articles/646825/ Cyberax <div class="FormattedComment"> <font class="QuotedText">&gt; The vast majority of files in those "Debian essential" packages (and actually quite a few of the full packages) are actually not going to be needed inside the container.</font><br> We tried that here as an experiment. It turns out that unless your application is an almost statically linked pure-C app, you can't really remove that much. You still likely need glibc and all of its crap, libstdc++, OpenSSL, libz and so on.<br> <p> About the only significant redundant piece is python-minimal, which is needed for apt. Well, and apt itself, of course.<br> <p> In the end, we simply decided to use the official base images, since several megabytes worth of dead disk space per container (no RAM overhead unless apt/python are actually used) are not worth maintaining our own images.<br> </div> Mon, 01 Jun 2015 10:08:27 +0000 An introduction to Clear Containers https://lwn.net/Articles/646816/ https://lwn.net/Articles/646816/ kleptog <div class="FormattedComment"> It'd be really nice if there was an easy way to make things smaller. For example your application might not need /bin/sh, but you need it if you want to use the packaging system. So if you want to deploy a Python application you need the packaging system to deploy Python, and a whole lot of support stuff to run the packaging system. There is AFAIK no easy way to jettison the unneeded stages after the fact. By hand you could do a lot, but you need something largely automatable. As it is, it only costs some disk space.<br> <p> There is the point made further up about how containers are less useful for deploying individual applications that you don't manage yourself, like a single WordPress install. In our case we build two or three images but then deploy them a few hundred times with slightly different configurations. This changes the balance significantly and is vastly easier to manage than a few hundred VMs.<br> </div> Mon, 01 Jun 2015 07:39:14 +0000 An introduction to Clear Containers https://lwn.net/Articles/646800/ https://lwn.net/Articles/646800/ dlang <div class="FormattedComment"> <font class="QuotedText">&gt; for Debian every container will contain all the Debian essential packages</font><br> <p> You are correct for how containers are being built right now.<br> <p> I am saying that this needs to change.<br> <p> The vast majority of files in those "Debian essential" packages (and actually quite a few of the full packages) are actually not going to be needed inside the container.<br> <p> If you create a container, run it for a while (ideally exercising every feature in the software you installed), and then look at what files have an atime newer than when you started up the container, you would find that the vast majority of the files on the system were never accessed.<br> <p> There is a lot more software needed for a 'minimal' system that's running completely self-contained than is needed to run a piece of software inside a container that doesn't need to do lots of other things that you need to do on a full system (boot software, daemons, etc). If the software you are running is statically linked, you may not need anything beyond the one binary (in a 'best case' simplified example). 
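[Ed: the atime audit described above can be sketched as follows. This is an illustration with invented file names, not dlang's actual tooling, and it assumes GNU find plus a filesystem where atime updates are enabled (i.e. not mounted noatime).]

```shell
# Record a "container start" marker, exercise the workload, then list the
# files whose atime is newer than the marker, i.e. the files actually used.
root=$(mktemp -d)                       # stand-in for the container rootfs
echo conf > "$root/used.conf"
echo conf > "$root/unused.conf"
sleep 1
touch "$root/.container-start"          # marks the container start time
sleep 1
cat "$root/used.conf" > /dev/null       # the workload only reads this file
# -newerat compares each file's atime against the marker's timestamp:
accessed=$(find "$root" -type f -newerat "$root/.container-start" \
                ! -name .container-start)
printf '%s\n' "$accessed"
```

Everything the find does not print (here, unused.conf) is a candidate for removal from the image.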
Even a lot of the stuff that's typically done inside the container today could actually be done externally (job control, monitoring and logging are pretty obvious wins); the question is at what point the value of splitting things out of the container is outweighed by the value of having everything bundled together inside the container.<br> <p> Most of the container contents being created today are full distro installs (or pretty close to that), almost the exact same things that would be in a VM image or an image running on bare metal.<br> </div> Mon, 01 Jun 2015 02:29:09 +0000 An introduction to Clear Containers https://lwn.net/Articles/646797/ https://lwn.net/Articles/646797/ ras <div class="FormattedComment"> <font class="QuotedText">&gt; Containers need to contain only the software actually required to run the software, and that is FAR less than just about anyone is putting in a container.</font><br> <p> True. But it creates a different problem. Whereas before you had one installation to manage, now you have many. So while it is true each individual container contains fewer packages, for Debian every container will contain all the Debian essential packages. Or to put it another way, containerisation doesn't cause the total number of packages to drop. If you needed apache2, varnish, ntp and whatever else in the old setup, you will still need them in the containerised setup - albeit not installed in every container.<br> <p> The net result is that while the total number of packages used doesn't change, the number of deployments of them you have to manage (read: configure and ensure they are security patched) increases - in fact it is multiplied by the number of containers you use in the worst case. On the up side I imagine the configuration of each container is much simpler, but on the down side you now have extra configuration to do - setting up virtual NICs, allocating them IPs, mounting file systems inside the container, broadcasting the IPs so they can talk to each other. My guess is that on balance the work involved in configuration isn't much different either way.<br> <p> But this explosion in deployments is a big deal if the sysadmin has to update and patch all of the containers, which is the case now. If the distro looked after it the workload reduces to what it was and it doesn't matter so much. And you get the security benefits for free.<br> <p> In the long term this will be solved, and what I suspect is the real benefit containers have will make itself felt. Containers bring the principle of "private by default" modularisation to system building. The number of "running on the same system" assumptions will drop as a consequence, interdependencies will drop (despite the dbus mob's valiant efforts to make everything talk to everything else), and things like huge apache2 config files managing hundreds of sites will be a thing of the past. But that's a long way away.<br> </div> Mon, 01 Jun 2015 01:40:24 +0000 An introduction to Clear Containers https://lwn.net/Articles/646794/ https://lwn.net/Articles/646794/ dlang <div class="FormattedComment"> <font class="QuotedText">&gt; So containers and distros are like oil and water. They don't mix very well in most situations (...) If they are going to mix something has to change. I can't see it being the containers </font><br> <p> Actually I see exactly the opposite. 
I think that the current mentality of people building containers, where they install large chunks of a distro and run what's close to a full machine's worth of software in each container, is what's going to change.<br> <p> Containers need to contain only the software actually required to run the software, and that is FAR less than just about anyone is putting in a container.<br> <p> A decade+ ago I was working to simplify management of software using chroot sandboxes, setting them up so that they only contained files that were actually used by the software in question (not just the packages listed as dependencies). The result is a much smaller footprint than any of the container definitions I've seen so far. Minimizing the container contents like this does wonders for your security requirements (you don't need to patch things that aren't there).<br> <p> But containers need to evolve away from "install full packages and much of the OS" and toward something that is much more tailored for the job in question. Figuring out how to build such systems cleanly will help figure out how to build updated versions, but there is still going to be the question of how you update anything that contains enough state that you don't just replace it.<br> <p> The idea of using a CoW image as the base of many containers is trying to sidestep this bloat by spreading its cost across many running containers (even if different ones use different subsets of the image), but it doesn't at all address the upgrade question. Saying that you layer filesystems so that you can replace lower levels in the stack only works until you need to change something higher up to work with a newer version of a lower layer.<br> </div> Mon, 01 Jun 2015 01:02:00 +0000 An introduction to Clear Containers https://lwn.net/Articles/646792/ https://lwn.net/Articles/646792/ dlang <div class="FormattedComment"> <font class="QuotedText">&gt; Yes, when you are deploying software you developed that makes perfect sense, and I'm guessing that's how it worked in the company that pioneered this technology - Google.</font><br> <p> At Google they don't build "containers" and deploy them. They think in terms of "jobs" that need X instances with Y cores and Z RAM. The fact that the implementation of this is in containers is not something that the normal Google developer or Site Reliability Engineer (the closest they have to sysadmins) ever thinks about. It's really far closer to the mainframe job-submission mentality than it is to the 'traditional' server (even VM) model.<br> </div> Mon, 01 Jun 2015 00:18:51 +0000 An introduction to Clear Containers https://lwn.net/Articles/646789/ https://lwn.net/Articles/646789/ ras <div class="FormattedComment"> <font class="QuotedText">&gt; You're going to rebuild the entire image every time you make a release that needs to be deployed.</font><br> <p> Yes, when you are deploying software you developed that makes perfect sense, and I'm guessing that's how it worked in the company that pioneered this technology - Google.<br> <p> To me, a lone wolf who must deploy a variety of stuff I didn't develop, it makes far less sense. I inherited a WordPress instance for example, and it's not the only one - I run many of these packages. If I tried to keep track of all the security vulnerabilities in them and all their dependencies and updated them manually, I'd have no time for anything else. The only thing that makes sense time-wise for me is to rely on my distro to keep it patched. 
Which it does, and I'm guessing it does it more reliably than you updating your packages at irregular intervals.<br> <p> I suspect it's the little guys like me who are continually popping up and asking "what good does this newfangled containerisation thing do for me". The answer is not much. In the short term the only real positive it brings is security. The mental model you need to reason about the isolation imposed by containers is far simpler than the alternatives.<br> <p> The other observation I have is that the way containerisation is done now is at odds with how the distros work. Distros like Debian are large collections of little guys, each working on their own packages mostly in isolation. This is necessarily the case because we (I'm a DD) only have so many hours in a day. Thus if it were not possible to divide the large workload into thousands of tiny bite-sized chunks, Debian wouldn't exist. Deploying the Debian "container" - i.e. a new release - is a huge amount of work, which is why you see so few of them. Releasing a new one every time a new version of a package comes along (which is effectively what you are doing) is completely out of scope.<br> <p> So containers and distros are like oil and water. They don't mix very well in most situations - yours being a notable exception. If they are going to mix, something has to change. I can't see it being the containers - at the packaging level there isn't much to them. So it has to be the distros. The first approach that springs to mind is the distro that is hosting the containers automagically keeping them patched. That requires both the host and container to be running the same distro - but I suspect that usually is the case. If that happened it would remove the major impediment to containerising everything for small guys like me.<br> </div> Sun, 31 May 2015 22:57:12 +0000 An introduction to Clear Containers https://lwn.net/Articles/646760/ https://lwn.net/Articles/646760/ raven667 <div class="FormattedComment"> I think an underlying assumption that is not often enough stated is that this deployment model is easier for front-end and middle-tier software, which has little state, than for back-end databases, where you have to put a little more design thought into how you manage upgrades and redundancy. If you just grab a random MySQL or PostgreSQL Docker image and never take it down, never upgrade it, never replace it, like you treat uptime on a more traditional server, even if you rebuild your app servers on every source control commit, you will have a world of ancient unpatched software pain.<br> </div> Sun, 31 May 2015 15:55:56 +0000 An introduction to Clear Containers https://lwn.net/Articles/646757/ https://lwn.net/Articles/646757/ dmarti <div class="FormattedComment"> You can build out an application as a perfect set of clean packages, but one of the components wants to use the same port as a component of another application, or requires a different version of some dependency, or whatever.<br> <p> Containers can let you have *parallel stacks of clean packages*. First write your RPM specfile (or use your package manager of choice) to make a clean, repeatable install of known software. Then wrap a simple Dockerfile (or config for whatever container flavor is hot at deploy time) around that.<br> <p> Sometimes you see containers used for parallel stacks of "curl | sh", which is a monster time-suck ( <a href="http://blog.neutrondrive.com/posts/235158-docker-toxic-hellstew">http://blog.neutrondrive.com/posts/235158-docker-toxic-he...</a> ) but they don't have to be that way.<br> <p> Packages for clean, repeatable installs. 
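[Ed: that division of labour might look like the minimal sketch below; the base image, package name and binary path are all invented for illustration. All of the real install logic lives in the RPM, so the Dockerfile stays a thin wrapper.]

```dockerfile
# Thin wrapper around a hypothetical, cleanly packaged application.
FROM centos:7
COPY myapp-1.0-1.x86_64.rpm /tmp/
RUN yum -y install /tmp/myapp-1.0-1.x86_64.rpm && yum clean all
CMD ["/usr/sbin/myapp"]
```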
Wrapped in containers for when you need multiple trees of dependencies on the same box.<br> </div> Sun, 31 May 2015 14:24:15 +0000 An introduction to Clear Containers https://lwn.net/Articles/646755/ https://lwn.net/Articles/646755/ kleptog <div class="FormattedComment"> <font class="QuotedText">&gt; But rebuilding is a far heavier operation - so much so that they provide tools to avoid it by persisting it as a .tar.gz. It can be done offline, but then how do you know when to do it? If you don't know you are up for rebuilding and restarting every container at least once a day.</font><br> <p> You're going to rebuild the entire image every time you make a release that needs to be deployed. You deploy all the latest OS updates at the same time, so in practice it's no extra work.<br> <p> Besides, we have buildbots that do nothing but build Docker images on every commit. It takes a few minutes per image, sure, but the result can be thrown into the test environment to see if it works, and if it does you can use the same image to deploy to production.<br> <p> I would love it if it were possible to create VMs as easily. I'm hoping someone will make a Docker-to-VM converter. live-build is good, but relatively slow.<br> </div> Sun, 31 May 2015 13:54:25 +0000 An introduction to Clear Containers https://lwn.net/Articles/646719/ https://lwn.net/Articles/646719/ dlang <div class="FormattedComment"> Used badly, it can be exactly that.<br> <p> Used properly, containers can be something very different.<br> <p> One way of looking at containers is that they give datacenter management similar capabilities to what mainframes had, in that they can just submit 'jobs' to be run, and the different jobs can be scheduled to run as best benefits the datacenter. The different 'jobs' can be shuffled from machine to machine as needed for load, failures, maintenance etc., and the 'job owner' isn't going to care, as long as the job is running somewhere.<br> <p> Yes, VMs can do the same thing, but at a significant cost in overhead (CPU, memory allocation inefficiency, etc.)<br> </div> Sat, 30 May 2015 22:01:47 +0000 An introduction to Clear Containers https://lwn.net/Articles/646714/ https://lwn.net/Articles/646714/ toyotabedzrock <div class="FormattedComment"> Is it just me, or are containers just a lazy way to avoid creating a proper application packaging and dependency versioning system for Linux? It also seems to be a lazy way to not have to configure proper application security. Maybe Linux needs GUI tools to get people used to using iptables and SELinux?<br> </div> Sat, 30 May 2015 20:23:55 +0000 An introduction to Clear Containers https://lwn.net/Articles/645974/ https://lwn.net/Articles/645974/ dlang <div class="FormattedComment"> The "don't update, rebuild" approach makes a huge amount of sense if you are running lots of instances of the software. Instead of disrupting each one in turn, you create a new version, start copies of the new version and stop copies of the old version (and it doesn't matter if these are containers or VMs).<br> <p> Building from scratch instead of upgrading to create the new gold copy is a good idea because it means that you can't have 'quick hacks' work their way into your system that you don't find for years (until the next major upgrade, when you do have to recreate them from scratch), but it is significantly more expensive to recreate the entire image from scratch than to just upgrade one piece of software.<br> <p> I take the middle ground: I create a base image that has most of the system in it and just add the application-specific parts when creating the per-function images.<br> <p> If you only have one or two instances of each type of thing, and you are creating them completely from scratch, then it really does waste quite a bit of time (both CPU time and wall clock time).<br> </div> Wed, 27 May 2015 
02:07:43 +0000 An introduction to Clear Containers https://lwn.net/Articles/645972/ https://lwn.net/Articles/645972/ ras <div class="FormattedComment"> <font class="QuotedText">&gt; You do not 'apply updates' to containers. You recompile their templates with fixed versions of packages and then restart the affected container instances.</font><br> <p> Sounds like the promised land. But it doesn't quite jar with reality. As a point of comparison, the "old way" of doing this was to install something like unattended-upgrades and let the system handle it itself. It's completely automated, with stuff all down time.<br> <p> To do the same job in a container you say you rebuild it. But rebuilding is a far heaver operation - so much so that they provide tools to avoid it by persisting it as a .tar.gz. It can be done offline, but then how do you know when to do it? If you don't know you are up for rebuilding and restarting every container at least once day.<br> <p> These kernel visualisation containers were born in Google. In Google I suspect none of this mattered because the software in the container was produced by them, and distributed in a container format. The rest of us run mostly software maintained by upstream distro's, distributed as packages that have to be individually installed and configured. Yes, Docker provides a bridge between the two worlds - producing container images from a distro's packages. But it damned primitive bridge. Doing of deboostrap and by a zillion apt-get install's every time you apply a security update just doesn't cut it.<br> <p> We need the next step - something that marries the roles of distro and container. I suspect the next big move will have to be from the distro's. 
It would allow (say) a Debian host to build a Debian container from Debian packages in a second or so, or alternatively allow the Debian host to transparently maintain (e.g., apply security patches to) all Debian containers under its control.<br> </div> Wed, 27 May 2015 01:57:57 +0000 MirageOS and rump kernels https://lwn.net/Articles/645953/ https://lwn.net/Articles/645953/ mato <div class="FormattedComment"> MirageOS[1] is not a microkernel. It is a "unikernel" or "Library operating system". Compared to traditional operating systems, your application and the kernel functionality needed to run it are linked together and run in a single address space.<br> <p> I would also like to point out our work (disclaimer: I'm one of the core developers) on rump kernels[2] and the rumprun unikernel stack[3], which allows you to run existing, unmodified, POSIX applications as unikernels on KVM, Xen and bare metal.<br> <p> I like to think of our (Mirage and rump kernels) approach as doing away with the traditional operating system altogether; it's the ultimate in minimalism. Only include the functionality required to get your application to run and nothing else.<br> <p> This has several interesting advantages:<br> <p> - We've all seen the various bugs found in the industry-standard TLS stack. The Mirage folks have developed green-field type-safe implementations of the entire TCP, HTTP and TLS stack in OCaml. They've put up a bounty in the form of the BTC Piñata[4]. If you can break their stack, you get to keep the bitcoin.<br> - Containers (and Clear Containers) still include an entire operating system, accessible to the application running on it, and thus potentially exploitable. Compare that to running your application on rumprun, which has no concept of exec(). If there's no shell to exec() then there's nothing to break into.<br> - A combination of Mirage and rumprun paves the way to the best of both worlds. 
Run a Mirage frontend serving HTTP and TLS, and talk to a rumprun unikernel running (for example) your legacy PHP application.<br> <p> [1] <a href="https://mirage.io/">https://mirage.io/</a><br> [2] <a href="http://rumpkernel.org/">http://rumpkernel.org/</a><br> [3] <a href="http://repo.rumpkernel.org/rumprun">http://repo.rumpkernel.org/rumprun</a><br> [4] <a href="http://ownme.ipredator.se/">http://ownme.ipredator.se/</a><br> </div> Tue, 26 May 2015 21:10:09 +0000 An introduction to Clear Containers https://lwn.net/Articles/645784/ https://lwn.net/Articles/645784/ dgm <div class="FormattedComment"> <font class="QuotedText">&gt; It's a different way of thinking. The old way is the "big monolithic server", where each server is hand-installed, hand-maintained, and hand-updated, with an uptime in the decades range.</font><br> <p> This is still how *some* stuff is going to be handled in the foreseeable future. The difference is less an old/new thinking dichotomy than the fact that now there's a new option, where previously you could only do things the old way. The old ways still offer advantages for some scenarios. One example is that server that has been working in the corner for years, just chugging along by itself, without the need for constant attention.<br> <p> Other services do not lend themselves to being containerized: disk, for instance, but also routing or specialized hardware access.<br> <p> All in all, containers seem like a great tool for flexibility, and surely they will replace "monolithic servers" where it makes sense. But not everywhere.<br> </div> Mon, 25 May 2015 09:30:44 +0000 An introduction to Clear Containers https://lwn.net/Articles/645732/ https://lwn.net/Articles/645732/ misc <div class="FormattedComment"> Provided of course that you verify that the Dockerfile doesn't suddenly start doing a curl | bash or that kind of stuff, as we tend to see on the Docker registry and all over the place. Or pip install, etc. 
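One quick way to catch such patterns is to grep candidate Dockerfiles before building them. A rough sketch, where the function name and the regexes are purely illustrative, not a real audit tool or an exhaustive list of suspicious patterns:

```shell
#!/bin/sh
# audit_dockerfile: flag suspicious lines in a Dockerfile.
# Illustrative sketch only; the patterns are examples, not exhaustive.
audit_dockerfile() {
    f="$1"
    # downloads piped straight into a shell
    if grep -nE '(curl|wget)[^|]*\|[[:space:]]*(ba|z)?sh' "$f"; then
        echo "WARN: pipe-to-shell found in $f"
    fi
    # pip installs with no pinned version (no '==' on the line)
    if grep -nE 'pip[23]? install' "$f" | grep -v '=='; then
        echo "WARN: unpinned pip install in $f"
    fi
}
```

Run it as `audit_dockerfile path/to/Dockerfile`; non-empty output is a prompt to go read the file, not proof of malice.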
<br> <p> And of course, provided the containers do not require a schema change or any kind of upgrade to the DB or any data store ( storage that you also likely need to handle, potentially with containers too, if possible, in a shared cluster way, which opens all kinds of fun problems ). Those are problems that can be solved, but it's not as easy as people seem to imply.<br> <p> ( there are a few other issues to solve, like logging of containers, proper isolation, and the inherent dependencies on the host kernel which make practice != theory ). Secret distribution is also an interesting one: how do you give your WordPress containers access to the MySQL DB somewhere in a clean way? ( again, doable and not an insanely hard issue, but it requires a bit more than just vanilla Docker and a workflow that is well defined )<br> </div> Sun, 24 May 2015 03:16:36 +0000 An introduction to Clear Containers https://lwn.net/Articles/645671/ https://lwn.net/Articles/645671/ motk <div class="FormattedComment"> Need to respond to this:<br> <p> <font class="QuotedText">&gt;You end up with far, far less to manage. And all of it provides apis and tools that make it far more </font><br> <font class="QuotedText">&gt;scriptable so you can automate away loads of tasks. And yes, that does require sysadmins who can </font><br> <font class="QuotedText">&gt;write code. But then I never quite understood why we started having sysadmins who couldn't code in </font><br> <font class="QuotedText">&gt;the first place.</font><br> <p> Oh, I can code, but I could no longer by any means call myself a developer. 
The right tool for the right job, and if I need systems development programming done to glue stacks together, it's time for a real developer.<br> </div> Fri, 22 May 2015 22:42:14 +0000 An introduction to Clear Containers https://lwn.net/Articles/645644/ https://lwn.net/Articles/645644/ Cyberax <div class="FormattedComment"> Well, nsenter had not been a documented way to enter a running container until the most recent Docker. And it's still discouraged.<br> </div> Fri, 22 May 2015 16:38:03 +0000 An introduction to Clear Containers https://lwn.net/Articles/645643/ https://lwn.net/Articles/645643/ Cyberax <div class="FormattedComment"> <font class="QuotedText">&gt; So we don't update containers, we re-create them with updated templates. But how _are_ these templates updated?</font><br> Using the "docker build" command ( <a rel="nofollow" href="https://docs.docker.com/reference/builder/">https://docs.docker.com/reference/builder/</a> ) or its equivalent.<br> <p> <font class="QuotedText">&gt; Where do the security updates to the templates come from?</font><br> The usual repositories and software installation channels.<br> <p> <font class="QuotedText">&gt; How does an admin know that a template needs updating?</font><br> Using the usual channels. For example, just like with real machines, an admin might periodically try to do 'apt-get update; apt-get upgrade' with only the security-updates repository enabled on a test container.<br> </div> Fri, 22 May 2015 16:36:53 +0000 An introduction to Clear Containers https://lwn.net/Articles/645595/ https://lwn.net/Articles/645595/ niner <div class="FormattedComment"> Still, this thread contains mostly hand-waving. You are at least somewhat mentioning updates, so I'm answering your posting.<br> <p> So we don't update containers, we re-create them with updated templates. But how _are_ these templates updated? Where do the security updates to the templates come from? 
How does an admin know that a template needs updating?<br> </div> Fri, 22 May 2015 07:25:51 +0000 An introduction to Clear Containers https://lwn.net/Articles/645582/ https://lwn.net/Articles/645582/ ghane <div class="FormattedComment"> <font class="QuotedText">&gt; But then I never quite understood why we started having sysadmins who couldn't code in the first place</font><br> <p> For much the same reason, I suppose, that we have sysadmins who cannot build a PC from scratch.<br> <p> It is a "layers" thing, or "abstraction", or some such. Each team handles its own layer in the stack.<br> <p> </div> Fri, 22 May 2015 04:22:12 +0000 An introduction to Clear Containers https://lwn.net/Articles/645576/ https://lwn.net/Articles/645576/ lyda <div class="FormattedComment"> Google has been using containers to manage processes for over a decade. Literally millions are created and destroyed every day on hundreds of thousands of machines.<br> <p> So yes, your sysadminly worries have been addressed.<br> <p> The current Docker wrapper around the container primitives is OK. It's not great, but it's a start (and definitely better than lxc - yeech). It's still a bit thick as containers go, but it's far thinner than a VM.<br> <p> There's less to manage. In the Docker world you specify the container with a Dockerfile. Want to update a container? Rebuild it from the Dockerfile and then restart it.<br> <p> That's for a single container. Once you start getting more you can use a CI system to launch new versions to test and then deploy. Eventually you can move to a system like Kubernetes or Mesos to manage the containers.<br> <p> You end up with far, far less to manage. And all of it provides APIs and tools that make it far more scriptable so you can automate away loads of tasks. And yes, that does require sysadmins who can write code. 
But then I never quite understood why we started having sysadmins who couldn't code in the first place.<br> </div> Fri, 22 May 2015 03:14:46 +0000 An introduction to Clear Containers https://lwn.net/Articles/645575/ https://lwn.net/Articles/645575/ lyda <div class="FormattedComment"> Er, no. If you had a current version of the util-linux package you had nsenter. For docker containers they finally made an exec command, but nsenter always worked.<br> </div> Fri, 22 May 2015 02:53:45 +0000 An introduction to Clear Containers https://lwn.net/Articles/645521/ https://lwn.net/Articles/645521/ Cyberax <div class="FormattedComment"> VMs are a completely different beast, as they emulate the real hardware and are used as such.<br> <p> Docker containers were used completely differently from the start. For example, for a long time it had not been possible to run a shell inside an already running container. <br> </div> Thu, 21 May 2015 20:32:19 +0000 An introduction to Clear Containers https://lwn.net/Articles/645501/ https://lwn.net/Articles/645501/ dlang <div class="FormattedComment"> that's the theory anyway<br> <p> In theory this is no different than the way VMs should be handled, you don't update them, you create new ones with the updated software.<br> <p> In practice....<br> <p> (Quote from somewhere)<br> In theory, theory and practice are the same, in practice they are not<br> </div> Thu, 21 May 2015 19:27:38 +0000 An introduction to Clear Containers https://lwn.net/Articles/645493/ https://lwn.net/Articles/645493/ cesarb <div class="FormattedComment"> <font class="QuotedText">&gt; With a Virtual Machine, as with a physical one, you can update the software to apply security updates and other bug fixes easily. But, how easy is that if you have dozens or more containers to track software versions on, and apply updates to these?</font><br> <p> It's a different way of thinking. 
The old way is the "big monolithic server", where each server is hand-installed, hand-maintained, and hand-updated, with an uptime in the decades range.<br> <p> The new way of thinking is "every server is discardable". You don't update a server, you discard it and spin up a fresh one with all relevant updates already applied. Having a load spike because your server was mentioned on some popular site? Spin up a few more servers. After the storm passes, simply discard the excess servers. This is all made possible by lightweight virtual machines, or containers.<br> <p> And you might have thousands of servers, but they are all clones. The number of different server types to manage is significantly smaller.<br> </div> Thu, 21 May 2015 19:14:50 +0000
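The "discard and re-create" cycle described in these comments comes down to a handful of docker commands. A minimal sketch, assuming a Dockerfile in the current directory; the name "webapp" and the old/new container names are placeholders, and the DOCKER variable is only there so the steps can be dry-run (e.g. DOCKER=echo):

```shell
#!/bin/sh
# Sketch of the rebuild-and-replace update workflow discussed above.
# DOCKER can be overridden (e.g. DOCKER=echo) to dry-run the steps.
DOCKER="${DOCKER:-docker}"

rebuild_and_replace() {
    name="$1"
    # 1. Rebuild from scratch: --pull refreshes the base image and
    #    --no-cache re-runs every step, picking up security updates.
    $DOCKER build --pull --no-cache -t "$name:new" . || return 1
    # 2. Start a replacement instance from the fresh image.
    $DOCKER run -d --name "$name-new" "$name:new" || return 1
    # 3. Discard the old instance instead of updating it in place.
    $DOCKER stop "$name-old"
    $DOCKER rm "$name-old"
}
```

Scaled up, this is exactly the spin-up/discard pattern: the same fresh image serves however many clones you need, so a load spike means more `docker run` calls, not more hand-maintained servers.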