
GNOME and application sandboxing revisited

By Nathan Willis
January 21, 2015

The benefits of application containerization have become a near-constant refrain in the server world, but that by no means implies that there is less development of similar ideas for desktop systems. Recently, the GNOME project has placed a renewed emphasis on the idea, aiming to support containerized applications as an alternative to traditional RPM or Debian packages.

On the server side, containers—meaning any of several application-isolation technologies, such as Docker, OpenVZ, LXC, or lmctfy—are usually touted for their ability to simplify application management. When applications are isolated from each other, they can be rapidly deployed, monitored for resource usage, and migrated between nodes—in addition, of course, to the added security that comes with isolating each application in its own virtual environment.

Migrating an application from one desktop system to another is not a particularly sought-after feature, though. But the same underlying principles (confining each application to a sandbox via namespaces, control groups, and similar mechanisms) could allow a single application image to be installed on multiple Linux distributions, without worrying about the incompatibilities that typically make packages built for one distribution unusable on another. Each container can bundle in not just the application itself, but any unusual or version-specific libraries and other dependencies on which it relies.
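
To get a feel for the kernel primitives involved, here is a minimal sketch in C that detaches a process from the host's mount, PID, IPC, and network namespaces before running a command. It is an illustration of the underlying mechanism only, not code from any of the projects discussed here:

    /* sandbox-sketch.c: illustration only.
     * Build: gcc -o sandbox-sketch sandbox-sketch.c
     * Run as root; creating these namespaces requires CAP_SYS_ADMIN. */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/mount.h>
    #include <sys/wait.h>

    int main(void)
    {
        /* Leave the host's mount, PID, IPC, and network namespaces. */
        if (unshare(CLONE_NEWNS | CLONE_NEWPID | CLONE_NEWIPC | CLONE_NEWNET) < 0) {
            perror("unshare");
            return 1;
        }

        /* The first child forked after CLONE_NEWPID becomes PID 1 in
         * the new PID namespace. */
        pid_t child = fork();
        if (child == 0) {
            /* Keep mount changes from propagating back to the host. */
            mount("none", "/", NULL, MS_REC | MS_PRIVATE, NULL);
            execlp("sh", "sh", "-c", "echo \"PID inside sandbox: $$\"", (char *)NULL);
            perror("execlp");
            exit(1);
        }
        waitpid(child, NULL, 0);
        return 0;
    }

The shell reports itself as PID 1 and can see neither the host's network interfaces nor any mounts made after the unshare() call; a desktop sandbox along the lines described in this article layers control groups, an SELinux policy, and a brokered IPC channel on top of these primitives.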

And, again, users and administrators would benefit from the added security of restricting each application to the sandbox. In fact, the argument typically goes, a secure containerization facility might even make it easier for users to install third-party applications on Linux systems. There would be no need to grant an outside software repository full access to install and update system packages. When adding a third-party repository, users must be on guard for unexpected changes that could accompany any new package update.

Lennart Poettering first floated the idea of using application containers in GNOME at GUADEC 2013. He then resurrected the idea in September 2014. Poettering's proposal was more far-reaching than just application containers: it called for restructuring the way entire distributions are defined and packaged, using layers of Btrfs sub-volumes to separate out the operating system, various distribution releases, large software frameworks, and even individual applications.

Alexander Larsson (who works on Red Hat's Docker support) noted at the time that he was not sold on the use of Btrfs, but that he agreed with the proposed approach as it concerned application containers. Larsson has been pursuing a GNOME implementation of the concept in the months since.

In that email, he also said that building a GNOME implementation of application containers would involve creating several new pieces of infrastructure: a definition of the base platform (or "runtime") that a container developer could rely on, a set of APIs for applications to access the host system (e.g., files, hardware, and basic services), and an interprocess communication (IPC) mechanism for communication between the container and the host system. There would also need to be a tool set for building and testing the container packages, and GNOME would need to actually implement and ship the agreed-upon runtime.

Larsson followed up with an initial implementation that included a GNOME runtime target for the GNOME Continuous build system and a gnome-sdk tool for creating application containers. Several iterations of both pieces followed; the most recent update arrived on January 12.

The current application runtime is based on GNOME 3.15. Kdbus provides a secure IPC mechanism, with sandboxing done using control groups, namespaces, and SELinux. Due to the inherent insecurities in X, Wayland is used as the display protocol. Larsson has also written a utility called xdg-app that users can use to install runtimes and application containers, and to launch the containers themselves.

Runtimes and containers can be installed on a per-user or a system-wide basis. Per-user containers are placed in $HOME/.local/share/xdg-app/ and system-wide containers in /usr/share/xdg-app/. A D-Bus–like naming scheme is used to identify containers; the sample Builder container is org.gnome.Builder. Other samples are available in Larsson's repository, including GEdit, glxgears, and PulseAudio's paplay. There is also a patched version of rpmbuild that can be used to generate application containers from existing RPM spec files or source RPMs.
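
The lookup order for those two locations is not spelled out in the material published so far, but one would expect a per-user installation to take precedence. A hypothetical helper that resolves a container by its D-Bus-style name under that assumption might look like this (the exact layout beneath the xdg-app directories is likewise a guess):

    /* find-app.c: hypothetical lookup; the subdirectory layout under
     * the xdg-app directories is an assumption made for illustration. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    /* Return a malloc'd path to the named container, or NULL. */
    static char *find_app(const char *name)
    {
        char path[4096];
        const char *home = getenv("HOME");

        /* Check the per-user installation first... */
        if (home) {
            snprintf(path, sizeof(path),
                     "%s/.local/share/xdg-app/%s", home, name);
            if (access(path, F_OK) == 0)
                return strdup(path);
        }
        /* ...then fall back to the system-wide location. */
        snprintf(path, sizeof(path), "/usr/share/xdg-app/%s", name);
        if (access(path, F_OK) == 0)
            return strdup(path);
        return NULL;
    }

    int main(void)
    {
        char *path = find_app("org.gnome.Builder");
        printf("%s\n", path ? path : "not installed");
        free(path);
        return 0;
    }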

Each container has access to a /self directory where it should place the majority of its installable files, plus an isolated /usr (where it can place any files that need to be mounted in an overlay on top of the runtime) and a /var for various state, log, and temporary files.

xdg-app sets up the container environment for each app, mounting the filesystems and establishing the namespaces and IPC connection. Worth noting, however, is that the sandboxing feature is—for the moment—only partially implemented. It is useful for exploring how the final product might work, but it does not offer the security features that will ultimately be expected.
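
One plausible shape for that setup, sketched here under stated assumptions rather than taken from xdg-app's actual code, is a fresh mount namespace with the runtime bound read-only at /usr, the application's files at /self, and a writable /var (the source paths are hypothetical):

    /* mount-sketch.c: hypothetical container mount setup. The target
     * directories are assumed to already exist in the container root. */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <sys/mount.h>

    static int bind_ro(const char *src, const char *dst)
    {
        if (mount(src, dst, NULL, MS_BIND, NULL) < 0)
            return -1;
        /* Making a bind mount read-only requires a second, remounting step. */
        return mount(src, dst, NULL, MS_REMOUNT | MS_BIND | MS_RDONLY, NULL);
    }

    int main(void)
    {
        if (unshare(CLONE_NEWNS) < 0) {
            perror("unshare");
            return 1;
        }
        /* Keep these mounts invisible to the rest of the system. */
        mount("none", "/", NULL, MS_REC | MS_PRIVATE, NULL);

        if (bind_ro("/path/to/runtime/usr", "/usr") < 0)
            perror("bind /usr");
        if (mount("/path/to/app/files", "/self", NULL, MS_BIND, NULL) < 0)
            perror("bind /self");
        if (mount("/path/to/app/var", "/var", NULL, MS_BIND, NULL) < 0)
            perror("bind /var");
        return 0;
    }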

Furthermore, although Larsson has periodically released runtime images intended for testing purposes, at the moment xdg-app builds runtimes from GNOME OSTree, which can be a bottleneck. Ultimately, the deployment plan would be for GNOME to release runtime images to the public—as could individual Linux distributions or other software projects.

Also still to come are a formal specification for precisely what the sandboxed environment will provide and documentation for the layout of the container format. The project's tracking page on the GNOME wiki includes an example metadata file that showcases the basic ideas. The expected runtime is listed, and a set of "environment" parameters specifies the functionality required by the application (network access, host filesystem access, IPC, etc.).
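
As a sketch of how a launcher might consume such a file, here is a short reader built on GLib's GKeyFile parser. The group and key names used below ("Application", "runtime", "Environment", "network") are invented for illustration and may not match the format shown on the wiki:

    /* read-metadata.c: hypothetical metadata reader; group and key
     * names are invented, not taken from any specification.
     * Build: gcc read-metadata.c $(pkg-config --cflags --libs glib-2.0) */
    #include <glib.h>

    int main(int argc, char **argv)
    {
        GKeyFile *kf = g_key_file_new();
        GError *err = NULL;

        if (argc < 2 ||
            !g_key_file_load_from_file(kf, argv[1], G_KEY_FILE_NONE, &err)) {
            g_printerr("cannot load metadata: %s\n",
                       err ? err->message : "no file given");
            g_key_file_free(kf);
            return 1;
        }

        char *runtime = g_key_file_get_string(kf, "Application", "runtime", NULL);
        gboolean network = g_key_file_get_boolean(kf, "Environment", "network", NULL);

        g_print("runtime: %s, network access: %s\n",
                runtime ? runtime : "(unset)", network ? "yes" : "no");

        g_free(runtime);
        g_key_file_free(kf);
        return 0;
    }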

Nevertheless, this is still a work in progress, and the details are subject to change. But the idea has gained considerable traction since September. Christian Schaller listed application sandboxing in his write-up of planned changes for Fedora 22. If development continues at this pace, users could get their first taste of desktop application containers within a few months.



GNOME and application sandboxing revisited

Posted Jan 22, 2015 9:20 UTC (Thu) by alexl (subscriber, #19068) [Link]

I'd like to point out some minor issues with the article:

Currently we support both X11 and wayland, but yes, long term wayland is the only thing that can support a securely sandboxed model.

/usr is a strict read-only mount of the runtime. There is no overlaying happening there.

As for the OSTree bottleneck, the problem there is not during building but during download. It is slow because it makes a new HTTP connection for each file, which is especially slow if HTTP keepalive isn't supported. This is being worked on in ostree by supporting delta files (similar in some sense to the packfiles in git).

GNOME and application sandboxing revisited

Posted Jan 22, 2015 18:39 UTC (Thu) by mm7323 (subscriber, #87386) [Link] (1 responses)

> in addition, of course, to the added security that comes with isolating each application in its own virtual environment.
Isn't that also a problem when it comes to patching? I thought static linking is frowned upon because code from a vulnerable library may end up copied into lots of binaries and not easily patched. This sounds potentially worse from that standpoint, though other aspects are clearly a boon.

GNOME and application sandboxing revisited

Posted Jan 31, 2015 5:35 UTC (Sat) by kleptog (subscriber, #1183) [Link]

I think the argument is that if the containers are also secure, then it's not as big of a deal. So a program uses OpenSSL, but if it has no internet connection then it's not a problem.

Currently every application on a desktop has access to the internet, while only a really small portion actually needs it. Remove the internet access and most vulnerabilities become uninteresting.

GNOME and application sandboxing revisited

Posted Jan 22, 2015 19:02 UTC (Thu) by flussence (guest, #85566) [Link]

I'd much prefer a set of simple command line tools to make per-program seccomp and namespacing as straightforward as doing a chroot, which projects like this could then build on top of. I've looked but still haven't found anything that fits the bill...

GNOME and application sandboxing revisited

Posted Jan 23, 2015 1:36 UTC (Fri) by jschrod (subscriber, #1646) [Link] (9 responses)

> Each container can bundle in not just the application itself, but any
> unusual or version-specific libraries and other dependencies on which it
> relies.
> ...
> There would be no need to grant an outside software repository full
> access to install and update system packages.

And then the next Heartbleed-style bug comes around and all those containers (which have packaged version-specific libraries) must be updated.

In all those reports and descriptions about "application-isolation technologies", be it Docker, LXC, or now these application containers, I miss information about the assumed operational principles that will be used to fix security issues and apply security patches. Please, with special focus on standard resource limits for the general public (e.g., the download bandwidth and time needed by people who don't have 100Mb/s connections).

I see the use case for real cloud systems, where installation and deployment is highly automated. I don't see such automation (which is needed to create and deploy a multitude of new containers) available on Linux desktops, at least not at the moment.

Good security practices must be part of a good container design right from the start. It frightens me that no article reports on them. IMNSHO, the time when one could ignore security issues in new major designs has passed quite some time ago. The friendly Internet of our '80s and early '90s, where I could telnet into the FSF's systems, is gone.

GNOME and application sandboxing revisited

Posted Jan 23, 2015 4:38 UTC (Fri) by Cyberax (✭ supporter ✭, #52523) [Link] (8 responses)

Why should incremental updates be complicated? Imagine that your application looks like this:
         BASE
          |
LIB1     LIB3
|         |
LIB2      |
 --------LIB4
          |
         APP
Suppose that there's a change in LIB2. The application environment will simply use all the copy-on-write and snapshot techniques to build something like:
         BASE
          |
LIB1     LIB3
|         |
LIB2*     |
 --------LIB4*
          |
         APP*
And dedup will further remove some redundancy.

GNOME and application sandboxing revisited

Posted Jan 23, 2015 8:48 UTC (Fri) by dgm (subscriber, #49227) [Link] (7 responses)

I think you missed the problem jschrod is talking about. The problem is that you may end up with 10 different *versions* of LIB2 (or the kernel, or Java), and you need to patch them all! And for every person running containers (think of an office full of developer workstations and laptops)!

It adds up quickly.

GNOME and application sandboxing revisited

Posted Jan 23, 2015 10:34 UTC (Fri) by NAR (subscriber, #1313) [Link] (1 responses)

Do you really need to patch? For example, I used to work on a product that used OpenSSL solely to convert certificates. That particular product does not have to be updated. I also work on a Java product that does not connect to the network at all; do I need to update the JDK running this product because some scary "security warning" was reported? Of course not.

There is always a trade-off: the "security fix" might break other stuff, the bug may not actually be exploitable in the environment or may not be relevant at all, etc. The "always upgrade, at all costs" mentality is like driving a Hummer in normal traffic. Surely safer, but not better.

GNOME and application sandboxing revisited

Posted Jan 23, 2015 10:51 UTC (Fri) by nim-nim (subscriber, #34454) [Link]

The analysis needed to determine what needs patching and what doesn't is quite often a lot more expensive than just patching everything. And that's assuming the owners of the component to be patched actually document all their mistakes (they don't) and correctly assess the problem perimeter (quite often, they make mistakes there too). The people able to perform this analysis will be a lot more qualified and expensive than the people needed to deploy a single patch everywhere.

And the next time you have a vulnerability, instead of assessing its diff against the last fully patched system, you need to assess the diff against lots of partially patched variants. Matrix explosion.

Really, it's the same problem space as backports. Only entities like Red Hat have the resources to triage what can and cannot be backported, and *they* only publish a very small set of runtimes (distro versions), not the multitude of runtimes this change would enable.

And you need to assume wrongdoers will see exploit scenarios you are missing. They don't have the same limitations as you have.

And to take your JDK example: Sun tried this approach for Java 1.6, and it ended in an unholy mess of parallel intertwined development branches that totally crumbled when security analysts started looking at the JDK. The first thing Oracle did after taking over Java's stewardship was to kill all the parallel custom branches and focus on a single unified tree. And they haven't finished cleaning up the mess yet.

GNOME and application sandboxing revisited

Posted Jan 23, 2015 18:24 UTC (Fri) by Cyberax (✭ supporter ✭, #52523) [Link] (3 responses)

No, I haven't missed it. If patching is automated with easy rollback support then it's no big deal.

However, the inverse (just one library version for all software) is just as bad. Imagine that you have to delay your patch for EVERYONE until ALL your vendors test it for compatibility with their applications.

GNOME and application sandboxing revisited

Posted Jan 23, 2015 22:11 UTC (Fri) by dlang (guest, #313) [Link] (2 responses)

well, if you wait for EVERYBODY to test EVERY patch before you deploy it, you aren't going to be deploying very many patches.

The real solution to the problem is for the library developers to take backwards compatibility seriously (some do, far too many don't). If they do, you can deploy the patch with good confidence that it's not going to break things.

And this isn't just a linux problem. How many people have experienced Microsoft patches breaking things (specifically including Microsoft software)? It's very common.

No matter what testing your vendors do with patches, you need to do your own testing to make sure it works in your environment. In which case, why wait for the vendor?

GNOME and application sandboxing revisited

Posted Jan 23, 2015 22:49 UTC (Fri) by Cyberax (✭ supporter ✭, #52523) [Link] (1 responses)

> well, if you wait for EVERYBODY to test EVERY patch before you deploy it, you aren't going to be deploying very many patches.
The company where I worked before did just that. They tested all the Windows patches individually for all the major deployment roles.

> The real solution to the problem is for the library developers to take backwards compatibility seriously (some do, far too many don't). If they do, you can deploy the patch with good confidence that it's not going to break things.
Not going to happen. Developers are not going to change fundamentally in just a couple of years, and that will probably be the timeframe for the widespread deployment of a 'containerized apps' ecosystem.

> And this isn't just a linux problem. How many people have experienced Microsoft patches breaking things (specifically including Microsoft software)? It's very common.
And Microsoft actually cares a lot about backwards compatibility. Probably more than anybody else in the industry. Yet even they foul it up from time to time.

GNOME and application sandboxing revisited

Posted Jan 23, 2015 23:24 UTC (Fri) by dlang (guest, #313) [Link]

>> well, if you wait for EVERYBODY to test EVERY patch before you deploy it, you aren't going to be deploying very many patches.

> The company where I worked before did just that. They tested all the Windows patches individually for all the major deployment roles.

But your company didn't wait for all of the companies that produced any software you are running to certify that it worked with each patch before you did your own testing.

In fact, I'll bet that you didn't wait for them to certify that their software would work with a patch before you installed the patch.

That's what I'm talking about.

GNOME and application sandboxing revisited

Posted Jan 26, 2015 17:35 UTC (Mon) by drago01 (subscriber, #50715) [Link]

> or the kernel [...]

... no, the kernel is shared ... this is not virtualization.


Copyright © 2015, Eklektix, Inc.
This article may be redistributed under the terms of the Creative Commons CC BY-SA 4.0 license
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds