
The Rocket containerization system

By Nathan Willis
December 3, 2014

The field of software-container options for Linux expanded again this week with the launch of the Rocket project by the team behind CoreOS. Rocket is a direct challenger to the popular Docker containerization system. The decision to split from Docker was, evidently, driven by CoreOS developers' dissatisfaction with several recent moves within the Docker project. Primarily, the CoreOS team's concern is Docker's expansion from a standalone container format to a larger platform that includes tools for additional parts of the software-deployment puzzle.

There is no shortage of other Linux containerization projects apart from Docker already, of course—LXC, OpenVZ, lmctfy, and Sandstorm, to name a few. But CoreOS was historically a big proponent of (and contributor to) Docker.

The idea behind CoreOS was to build a lightweight and easy-to-administer server operating system, on which Docker containers can be used to deploy and manage all user applications. In fact, CoreOS strives to be downright minimalist in comparison to standard Linux distributions. The project maintains etcd to synchronize system configuration across a set of machines and fleet to perform system initialization across a cluster, but even that set of tools is austere compared to the offerings of some cloud-computing providers.
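For a flavor of what that tooling looks like, here is a minimal sketch (the key and unit names are hypothetical; the commands are the basic etcdctl and fleetctl invocations):

    # Store a value in etcd; it is replicated to every machine in the cluster
    etcdctl set /services/web/port 8080

    # Read it back from any other node
    etcdctl get /services/web/port

    # Hand a systemd unit to fleet, which schedules it somewhere in the cluster
    fleetctl start web.service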

Launch

On December 1, the CoreOS team posted an announcement on its blog, introducing Rocket and explaining the rationale behind it. Chief among its stated justifications for the new project was that Docker had begun to grow from its initial concept as "a simple component, a composable unit" into a larger and more complex deployment framework:

Unfortunately, a simple re-usable component is not how things are playing out. Docker now is building tools for launching cloud servers, systems for clustering, and a wide range of functions: building images, running images, uploading, downloading, and eventually even overlay networking, all compiled into one monolithic binary running primarily as root on your server.

The post also highlighted the fact that, early on in its history, the Docker project had published a manifesto that argued in favor of simple container design—and that the manifesto has since been removed.

The announcement then sets out the principles behind Rocket. The various tools will be independent "composable" units, security primitives "for strong trust, image auditing and application identity" will be available, and container images will be easy to discover and retrieve through any available protocol. In addition, the project emphasizes that the Rocket container format will be "well-specified and developed by a community." To that end, it has published the first draft of the App Container Image (ACI) specification on GitHub.

As for Rocket itself, it was launched at version 0.1.0. There is a command-line tool (rkt) for running an ACI image, as well as a draft specification describing the runtime environment and facilities needed to support an ACI container, and the beginnings of a protocol for finding and downloading an ACI image.
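Usage is roughly as follows (a sketch; the image URL is hypothetical, and the exact invocation accepted by the prototype may differ):

    # Retrieve an ACI image over plain HTTPS
    rkt fetch https://example.com/hello-0.0.1-linux-amd64.aci

    # Launch the application it contains in a new container
    rkt run https://example.com/hello-0.0.1-linux-amd64.aci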

Rocket is, for the moment, certainly a lightweight framework in keeping with what one might expect from CoreOS. Running a containerized application with Rocket involves three "stages."

Stage zero is the container-preparation step; the rkt binary generates a manifest for the container, creates the initial filesystem required, then fetches the necessary ACI image file and unpacks it into the new container's directory. Stage one involves setting up the various cgroups, namespaces, and mount points required by the container, then launching the container's systemd process. Stage two consists of actually launching the application inside its container.
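Schematically, the stages nest at runtime roughly as follows (a sketch that assumes the systemd-nspawn-based stage one shipped with the prototype; the image and application names are made up):

    # After 'rkt run hello.aci', viewed from the host:
    #
    #   systemd-nspawn       stage one: cgroups, namespaces, mount points
    #     └─ systemd         the container's init process
    #          └─ hello      stage two: the application itself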

What's up with Docker

The Docker project, understandably, did not view the announcement of Rocket in quite the same light as CoreOS. In a December 1 post on the Docker blog, Ben Golub defends the decision to expand the Docker tool set beyond its initial single-container roots:

While Docker continues to define a single container format, it is clear that our users and the vast majority of contributors and vendors want Docker to enable distributed applications consisting of multiple, discrete containers running across multiple hosts.

We think it would be a shame if the clean, open interfaces, anywhere portability, and robust set of ecosystem tools that exist for single Docker container applications were lost when we went to a world of multiple container, distributed applications. As a result, we have been promoting the concept of a more comprehensive set of orchestration services that cover functionality like networking, scheduling, composition, clustering, etc.

But the existence of such higher-level orchestration tools and multi-container applications, he said, does not prevent anyone from using the Docker single-container format. He does acknowledge that "a small number of vendors disagree with this direction", some of whom have "technical or philosophical differences, which appears to be the case with the recent announcement regarding Rocket."

The post concludes by noting that "this is all part of a healthy, open source process" and by welcoming competition. It also, however, notes the "questionable rhetoric and timing of the Rocket announcement" and says that a follow-up post addressing some of the technical arguments from the Rocket project is still to come.

Interestingly enough, the CoreOS announcement of Rocket also goes out of its way to reassure users that CoreOS will continue to support Docker containers in the future. Less clear is exactly what that support will look like; the wording says to "expect Docker to continue to be fully integrated with CoreOS as it is today", which might suggest that CoreOS is not interested in supporting Docker's newer orchestration tools.

In any case, at present, Rocket and its corresponding ACI specification make use of the same underlying Linux facilities employed by Docker, LXC containers, and most of the other offerings. One might well ask whether or not a "community specification" is strictly necessary as an independent entity. But as containerization continues to make its way into the enterprise market, it is hardly surprising to see more than one project vie for the privilege of defining what a standard container should look like.



The Rocket containerization system

Posted Dec 4, 2014 5:39 UTC (Thu) by dlang (guest, #313) [Link] (9 responses)

re: docker vs rocket
The way I read the Rocket statement is that they will be able to run the Docker containers.

That in no way says that they will support the Docker orchestration tools, current or future.

If they talked about supporting Docker configurations, that would imply support for layered networks, etc. But just saying that they will support the containers could (and it sounds to me like it does) mean that you would be able to run the container image, but things external to the container (network definitions, for example) would be configured with separate components.

This also doesn't mean that rocket won't support layered networks, just that if they do it will be as a separate layer, in separate binaries than the part that starts the container.

I hope this goes well.

The Rocket containerization system

Posted Dec 4, 2014 11:13 UTC (Thu) by ms (subscriber, #41272) [Link] (8 responses)

I hope this goes well too.

From a pure design PoV, there are many things that are fairly horrible about Docker, from the inability to support deployments across several machines (though pieces like (Zettio|Weaveworks)/Weave address this to a large extent), to the mess of declarative and imperative actions in the Dockerfile, to the general mess of images and inability to reason about what's inside them, or how to update them, let alone compose them.

Is it really great that there are 65,000 images in Docker Hub? Do we really need 900 images just running Redis? From a management and maintenance perspective, it's pretty much a disaster: overnight, heartbleed happened; which of your images do you need to update, rebuild, redeploy?

I would love to see even a subset of these issues addressed. For people who care about the long term ability to maintain and deploy services, the dev-ops hipster BS tends to not contribute a great deal. It's easy to get the impression Docker is chasing that market rather than the rather less sexy but crucial former market. If Rocket at the very least prompts some slightly more considered thinking about this sort of thing, that's all to the good.

(An aside: I have never understood why the libvirt-lxc people have not made more noise about their solution: IME, their stuff is more flexible and powerful.)

The Rocket containerization system

Posted Dec 4, 2014 14:35 UTC (Thu) by epa (subscriber, #39769) [Link] (5 responses)

overnight, heartbleed happened; which of your images do you need to update, rebuild, redeploy?
Isn't it enough to run yum upgrade (or its moral equivalent) inside each image? Or is that not how it's done?

The Rocket containerization system

Posted Dec 4, 2014 18:43 UTC (Thu) by cortana (subscriber, #24596) [Link] (4 responses)

Images should be immutable. You build a new image with a fixed version of openssl and push that out to your servers. That way you can always deploy the older version again if you need to roll back.
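Concretely, that workflow looks something like this (registry and image names are made up):

    # Rebuild the image against the patched openssl and give it a new tag
    docker build -t registry.example.com/myapp:1.0.1 .
    docker push registry.example.com/myapp:1.0.1

    # Deploy the new tag on the servers
    docker run -d registry.example.com/myapp:1.0.1

    # Rolling back is just running the previous, still-available tag
    docker run -d registry.example.com/myapp:1.0.0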

The Rocket containerization system

Posted Dec 4, 2014 20:23 UTC (Thu) by epa (subscriber, #39769) [Link] (3 responses)

Hmm, immutable images make some sense but in that case you need a reproducible procedure to build each image you use. So when a bug like heartbleed occurs you pull down the latest packages from Debian or wherever, type 'make' and get your new image ready to deploy. If you have anything less than that level of automation, it seems that using immutable images adds more difficulty to system administration than it removes.

The reproducible procedure could include doing a full package update from the upstream distribution before freezing the image, so you can still take advantage of tools like apt and yum.
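For example, the recipe can be as small as a Dockerfile that refreshes the package set at build time (a hypothetical sketch; 'myapp' stands in for whatever the image actually ships):

    # Write a build recipe that pulls current packages before freezing
    cat > Dockerfile <<'EOF'
    FROM debian:wheezy
    RUN apt-get update && apt-get -y dist-upgrade
    COPY myapp /usr/local/bin/myapp
    CMD ["/usr/local/bin/myapp"]
    EOF

    # --no-cache forces the apt-get step to rerun, picking up today's fixes
    docker build --no-cache -t registry.example.com/myapp:1.0.2 .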

The Rocket containerization system

Posted Dec 4, 2014 20:43 UTC (Thu) by dlang (guest, #313) [Link] (2 responses)

you need that reproducibility anyway.

doing an install and then upgrading it works for a while, but eventually you will need to create the image again (even with debian you can't always upgrade forever), and if you don't have the build reproducible, you will be in trouble.

been there, done that, have the scars :-)

once you have reproducible builds, immutable images become much easier to manage than having to upgrade each image. If you have resilient services, you can bring up the new image as your failover, and fail over to it using the same process you would use if you had a system failure. This should be a clean failover, and it makes sure that your failover mechanism actually works, because it's not something that never gets tested.
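For instance (host names are made up, and vip-move stands in for whatever failover mechanism a site already has):

    # Bring the rebuilt image up on the standby node...
    ssh standby docker run -d registry.example.com/myapp:1.0.1

    # ...then fail over to it exactly as you would after a hardware failure
    ssh router vip-move myapp standby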

The Rocket containerization system

Posted Dec 5, 2014 6:55 UTC (Fri) by epa (subscriber, #39769) [Link] (1 responses)

Makes sense. That said, for security vulnerabilities you often do want to patch absolutely as soon as possible, so there may still be a case for 'yum upgrade' inside each container as a stopgap measure until the newly built one is pushed out.
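Something like the following, say (the container name is hypothetical; docker exec appeared in Docker 1.3):

    # Stopgap: patch the running container in place until the rebuilt,
    # tested image is pushed out
    docker exec myapp-prod yum -y upgrade openssl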

The Rocket containerization system

Posted Dec 5, 2014 8:02 UTC (Fri) by dlang (guest, #313) [Link]

sometimes, but usually not.

even with a security update, you need to test the fix before you put it in production to make sure that it doesn't break anything else, and if you are doing things correctly, the image you test on is what you deploy to production. If the risk is bad enough, you take the service down in the meantime.

The Rocket containerization system

Posted Dec 4, 2014 19:38 UTC (Thu) by dlang (guest, #313) [Link] (1 responses)

> From a pure design PoV, there are many things that are fairly horrible about Docker, from the inability to support deployments across several machines

This shouldn't be part of the container definition; this should be an added layer of management above that (which is what it sounds like Rocket is intending to do).

> Is it really great that there are 65,000 images in Docker hub? Do we really need 900 images just running Redis?

Why in the world would you trust an image from a website that anyone can upload to? That's worse than just downloading and executing random binaries.

Now, a recipe for building an image from distro X would be reasonable (I think Fedora calls this a kickstart definition).

I agree that Docker seems to be trying to cash in on the dev-ops hype.

The Rocket containerization system

Posted Dec 11, 2014 6:08 UTC (Thu) by Mook (subscriber, #71173) [Link]

> Why in the world would you trust an image from a website that anyone can upload to? That's worse than just downloading and executing random binaries.

> Now, a recipe for building an image from distro X would be reasonable (I think Fedora calls this a kickstart definition).

My understanding is that that's pretty much how the docker hub thing works; it grabs a recipe possibly with associated files, runs it on their servers, and exposes the result.

Of course, that means you should probably read that recipe and figure out if the associated files (and actions in the recipe, any downloads it does, etc.) might be dangerous before actually grabbing the image. Their last release was about vulnerabilities when pulling evil images...

The Rocket containerization system

Posted Dec 4, 2014 11:24 UTC (Thu) by fishface60 (subscriber, #88700) [Link]

I'd like to see the file system magic of Docker split out.
Snapshotting and sharing of filesystem trees with pluggable backends would be useful for the project I work on, where we do isolated software builds.
You could probably also build a nifty DVCS on top of it which handles binaries better than git.

The Rocket containerization system

Posted Dec 4, 2014 16:30 UTC (Thu) by raven667 (subscriber, #5198) [Link] (3 responses)

> Primarily, the CoreOS team's concern is Docker's expansion from a standalone container format to a larger platform that includes tools for additional parts of the software-deployment puzzle.

Well yes, CoreOS sells tools and additional parts for software deployment using containers, so having Docker standardize on something other than what CoreOS wrote is a major competitive problem for them; they need to funnel people into their ecosystem to be able to extract revenue. So this seems primarily a business decision rather than a technical one: they don't want to compete on a platform that they don't control, so they are creating an incompatible platform that they do control and competing with that.

In some ways this shows how much competition really isn't affected by having the software be Free or Open Source; having software be proprietary is a disservice to the customers, and you can be just as competitive without it.

The Rocket containerization system

Posted Dec 4, 2014 19:41 UTC (Thu) by dlang (guest, #313) [Link] (2 responses)

Taking what's been posted here at face value, I think it's less not wanting to compete on a platform they don't control and more not wanting to try and compete where they can't just replace one part of things and are stuck with a monolithic component that they have to work around rather than being able to replace a layer.

I will refrain from drawing parallels with the systemd discussion ;-)

The Rocket containerization system

Posted Dec 4, 2014 22:06 UTC (Thu) by raven667 (subscriber, #5198) [Link] (1 responses)

> I think it's less not wanting to compete on a platform they don't control and more not wanting to try and compete where they can't just replace one part of things and are stuck with a monolithic component that they have to work around rather than being able to replace a layer.

I'm having a hard time parsing the sentence because it seems to hold two opposing ideas simultaneously but I think that saying that Docker is a monolithic component that can't be replaced is logically mutually exclusive of the fact that they have replaced it.

> I will refrain from drawing parallels with the systemd discussion ;-)

Also two opposing ideas in the same sentence, nice. 8-)

CoreOS heavily relies on systemd and the D-Bus API; that's how fleet controls services. A better parallel would be if systemd were a single-vendor project rather than representing a consortium of the major distros and device makers, and if this mythical systemd vendor came out with a competing system with its own HA and config synchronization built into systemd directly. Project Atomic is the closest to that, but there is a much more level playing field between Atomic and CoreOS: both participate in systemd, and neither has authority over the other.

The Rocket containerization system

Posted Dec 4, 2014 22:11 UTC (Thu) by dlang (guest, #313) [Link]

> I'm having a hard time parsing the sentence because it seems to hold two opposing ideas simultaneously but I think that saying that Docker is a monolithic component that can't be replaced is logically mutually exclusive of the fact that they have replaced it.

they can't replace part of it because it's monolithic, so they are replacing all of it.


Copyright © 2014, Eklektix, Inc.
This article may be redistributed under the terms of the Creative Commons CC BY-SA 4.0 license
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds