Docker image "verification"
One might be forgiven for expecting that a message stating that a download has been "verified" would actually indicate some kind of verification. But, as Jonathan Rudenberg discovered, getting that message when downloading a Docker image is, at best, misleading—at worst it is flat-out wrong. Worse still, perhaps, is that an image file that is definitely corrupt only provokes a warning, though Rudenberg was unable to make even that happen. All told, his post should serve as an eye-opener for those Docker users who are concerned about the security of the images they run.
After downloading an official container image using the Docker tools, Rudenberg saw the following message: "ubuntu:14.04: The image you are pulling has been verified". At the time, he believed it was the result of a feature described in the Docker 1.3 release announcement, which touted a "tech preview" of digital-signature verification for images. Subsequently, however, he had reason to look a bit deeper and was not impressed with what he found:
> Docker's report that a downloaded image is "verified" is based solely on the presence of a signed manifest, and Docker never verifies the image checksum from the manifest. An attacker could provide any image alongside a signed manifest. This opens the door to a number of serious vulnerabilities.
Beyond that, the processing pipeline for images also suffers from a number of flaws: it does three separate processing steps using the unverified (potentially malicious) image. To begin with, the image is decompressed using one of three different algorithms: gzip, bzip2, or xz. The first two use the memory-safe Go language library routines, which should provide resilience against code-execution flaws, he said, but xz decompression is a different story.
To decompress an image that uses the xz algorithm, Docker spawns the xz binary, as root. That binary is written in C, so it has none of the memory safety provided by Go and could well harbor unknown code-execution vulnerabilities. That means that a simple "docker pull" command could potentially lead to full system compromise, which is probably not quite what the user expected.
Docker uses TarSum to deterministically generate a checksum/hash from a tar file, but doing so means that the tar file must be decoded. The program calculates a hash for specific portions of the tar file, but that is done before any verification step. So an attacker-controlled tar file could potentially exploit a TarSum vulnerability to evade the hashing process. That might allow additions or subtractions to a tar file without changing its TarSum-calculated hash.
The final step in the processing pipeline is to unpack the tar file into the "proper" location. Once again, this is done pre-verification, so any path traversal or other vulnerability in the unpacking code (Rudenberg points to three vulnerabilities that have already been found there) could be exploited. All three of those problems could be alleviated by verifying the entire image before processing it.
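Path-traversal bugs of the kind Rudenberg cites arise when a tar entry name such as `../../etc/passwd` is joined onto the extraction directory unchecked. A minimal guard, assuming entries are extracted under a destination directory (illustrative, not Docker's actual unpacking code):

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// safeJoin resolves a tar entry name under destDir and rejects any
// entry that would escape it via ".." components or absolute paths.
func safeJoin(destDir, name string) (string, error) {
	target := filepath.Join(destDir, name) // Join also cleans the path
	prefix := filepath.Clean(destDir) + string(filepath.Separator)
	if !strings.HasPrefix(target, prefix) {
		return "", fmt.Errorf("tar entry %q escapes extraction directory", name)
	}
	return target, nil
}

func main() {
	fmt.Println(safeJoin("/tmp/img", "usr/bin/ls"))       // /tmp/img/usr/bin/ls <nil>
	fmt.Println(safeJoin("/tmp/img", "../../etc/passwd")) // non-nil error
}
```

As the article notes, though, the more fundamental fix is ordering: verify the whole image first, so hostile archives never reach the unpacking code.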
Unfortunately, even after those three processing steps have been done, Docker does not actually verify much of anything before emitting its "verified" message. In fact, Rudenberg reported that the presence of a signed manifest that passes libtrust muster is enough to trigger the message. No checking is done to see if the manifest corresponds to the rest of the image. In addition, the public key that is used to sign the manifest is retrieved each time an image is pulled, rather than provided as part of the Docker tool suite, for example.
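Fetching the signing key anew on every pull means that whoever controls the key endpoint effectively controls "verification". Pinning a known-good key fingerprint in the client, as many package managers do, closes that hole. A hedged sketch (the pinned value and key bytes here are made up for illustration):

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// pinnedFingerprint would ship with the Docker tools themselves rather
// than being fetched at pull time. (The value here is illustrative.)
var pinnedFingerprint = sha256.Sum256([]byte("vendor public key bytes"))

// keyTrusted accepts a fetched key only if its fingerprint matches the
// one compiled into the client, so a compromised key server cannot
// substitute its own key.
func keyTrusted(fetchedKey []byte) bool {
	return sha256.Sum256(fetchedKey) == pinnedFingerprint
}

func main() {
	fmt.Println(keyTrusted([]byte("vendor public key bytes"))) // true
	fmt.Println(keyTrusted([]byte("attacker key")))            // false
}
```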
Overall, the image-verification feature is, so far, sloppy work that is likely to mislead Docker users. In a thread on Hacker News, Docker founder and CTO Solomon Hykes complained that Rudenberg's analysis did not quote the "work in progress" disclaimer in the Docker announcement. Notably, though, he did not argue with any of the technical points made in the analysis.
Rudenberg's post makes several suggestions for improving Docker image verification. Verifying the entirety of the image, rather than just parts of it using TarSum, is one. Another is to employ privilege separation so that tasks like decompression are not run as root. Furthermore, he suggested adopting The Update Framework rather than using the largely undocumented libtrust for signature verification.
Perhaps the biggest mistake made by Docker here was to enable the feature by default when it was clearly not even close to ready. As pointed out by Red Hat, there are other ways to get Docker images that are more secure, so just avoiding the docker pull command until image verification is fully baked may be the right course for security-conscious users.
| Index entries for this article | |
|---|---|
| Security | Integrity management |
| Security | Signing code |
| Security | The Update Framework (TUF) |
Posted Jan 8, 2015 3:00 UTC (Thu)
by thoughtpolice (subscriber, #87455)
Has any other package update framework fully implemented what's been described by TUF, or anything like it? AFAIK, most (packaging) systems either punt the problem to TLS or just implement basic signing without any defined threat model against things like rollback attacks, etc. I know they've secured the Python package framework, but I don't think this is actually how the official PyPI etc work today, is it?
