
Images are a false simplification

Posted Nov 5, 2025 18:32 UTC (Wed) by nim-nim (subscriber, #34454)
Parent article: A security model for systemd

Images are no easier to check or audit than packages. A giant store of isolated images is just a giant store of isolated things, much like Amazon or Alibaba is a giant store of isolated gadgets: you have no clue whether anything is genuine or safe to use, most often it is neither, and getting an idea of whether it is requires someone poking inside the product to see what it is built from. Which is more or less equivalent to decomposing an app system into packages.

The disconnect is expecting too much of package validation and too little of image validation, because we wish for things to be simpler than they are.

That being said, containment of app systems is clearly worthwhile, however you assemble those app systems.



Images are a false simplification

Posted Nov 5, 2025 20:28 UTC (Wed) by ebee_matteo (subscriber, #165284) [Link] (10 responses)

I think the point being made is rather that, by virtue of using immutable images that can be verified before being started, you can trust that they were not tampered with from the point where they were signed.

Of course, you are right that this only solves the integrity problem and does not prove authenticity.

The trust boundary, however, is pushed a bit further back: to the point where you can validate that an image was signed by the right people with the right keys.

Compare with a Debian package, whose files can be modified on disk after installation by a malicious user. And of course, an image also often implies a reproducible environment (e.g. controlled environment variables), which makes it a bit harder to exploit.
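To make the contrast concrete, here is a minimal sketch, assuming a hypothetical manifest of SHA-256 digests recorded at install time (similar in spirit to the checksum lists dpkg stores and tools like debsums can check): this style of check only notices tampering at the moment someone re-runs it, whereas a verity-protected image is re-checked on every read.

    import hashlib
    from pathlib import Path

    def sha256_file(path: Path) -> str:
        """Hex SHA-256 digest of a file's current contents."""
        h = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    def changed_files(manifest: dict[str, str]) -> list[str]:
        """Paths whose on-disk contents no longer match the digests that
        were recorded at install time.  Nothing prevents modification in
        between runs of this check; it only detects it after the fact."""
        return [path for path, digest in manifest.items()
                if sha256_file(Path(path)) != digest]

    # Hypothetical manifest, as it might have been recorded at install time:
    # manifest = {"/usr/bin/foo": "ab12...", "/usr/lib/foo.so": "cd34..."}
    # print(changed_files(manifest))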

Images are a false simplification

Posted Nov 5, 2025 21:05 UTC (Wed) by bluca (subscriber, #118303) [Link] (8 responses)

> Of course, you are right that this does solve just the integrity problem and does not prove authenticity.

It provides authenticity too, as the point being made was about signed dm-verity images. The signature is verified by the kernel keyring, so both authenticity and integrity are covered.

Of course this is not the case when using more antiquated image formats such as Docker, where it's just a bunch of tarballs in a trenchcoat, but systemd has been supporting better image formats for a long time now.
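For readers unfamiliar with dm-verity, here is a rough sketch of the underlying idea, not the actual on-disk format: a hash tree over fixed-size blocks collapses to a single root hash, the kernel re-checks blocks against it as they are read, and it is that root hash whose signature is verified against the kernel keyring, which is where authenticity comes in on top of integrity.

    import hashlib

    BLOCK_SIZE = 4096  # dm-verity's default data block size

    def verity_root_hash(image: bytes) -> bytes:
        """Hash every block, then hash the concatenation of those block
        hashes into one root hash.  Real dm-verity builds a multi-level
        tree so single blocks can be checked lazily as they are read;
        this flat version only shows how one small root value commits
        to the whole image."""
        blocks = [image[i:i + BLOCK_SIZE]
                  for i in range(0, len(image), BLOCK_SIZE)]
        level0 = b"".join(hashlib.sha256(b).digest() for b in blocks)
        return hashlib.sha256(level0).digest()

    image = b"\x00" * (8 * BLOCK_SIZE)
    trusted_root = verity_root_hash(image)   # this is the value that gets signed

    tampered = bytearray(image)
    tampered[5 * BLOCK_SIZE] ^= 0x01          # flip one bit in one block
    assert verity_root_hash(bytes(tampered)) != trusted_root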

Images are a false simplification

Posted Nov 5, 2025 22:06 UTC (Wed) by nim-nim (subscriber, #34454) [Link] (2 responses)

Either way the value is limited.

We’ve known for quite a long time that it is useless to install genuine foo if genuine foo cannot resist exploitation as soon as it is put online (as companies deploying golden Windows images discovered once networking became common). We’ve also known for quite a long time that attackers rarely bother altering components in flight: they typosquat and trick you into installing genuine, authenticated malware (no different from the counterfeits that flood marketplaces and that Amazon or Alibaba will happily ship you in their genuine state).

Security comes from the ability to rebuild and redistribute stuff when it has a hole (honest mistakes), and from poking inside stuff that will be redistributed to check it actually is what it pretends to be (deliberate poisoning). And then you can sign the result and argue about whether your signing is solid, but signing is only worthwhile if the two previous steps have been done properly.

Images are a false simplification

Posted Nov 6, 2025 1:22 UTC (Thu) by Nahor (subscriber, #51583) [Link] (1 responses)

> And then you can sign the result and argue if your signing is solid or not, but signing is only worthwhile if the two previous steps have been done properly.

If what you built and distributed can easily be replaced without you knowing, then those two steps are of limited value too.

And continuing your line of thought, even if signing/immutability/rebuild/distribution are all done right, they are useless if you don't verify the source code you're using.

And even if the source code verification is done right, it is useless if the person doing the verification and signing of the code can be corrupted or coerced with a $5 wrench.

And even if [...]

TL;DR: what you're arguing is that security is pointless and of limited value because there will always be a point where you have to trust something or someone. There will always be a weak link. All we can do is ensure that as many links as possible are safe, to reduce the attack surface. Using images is one step in that direction.

Images are a false simplification

Posted Nov 6, 2025 7:32 UTC (Thu) by nim-nim (subscriber, #34454) [Link]

That’s the “trust the vendor” line of thought, but in our world we definitely do not trust the vendor.

We trust a system, in which maximum transparency, accountability and absence of lockdown keep vendors honest. We trust the regulator, which forces vendors to provide a minimum of information on their pretty boxes; we trust consumer associations, which check that the regulator has not been captured by vendors; we trust the people who perform reviews, tests and disassembly of products, all the more so when they are diverse, independent and unable to collude with one another; and we trust competition, along with the regulations that enforce it and prevent vendors from cornering and locking down some part of the market.

And then you can add a certificate of genuine authenticity to the mix, but most of the things you'll buy in real life don’t come with one, because that’s icing on the cake, nothing more. Trusting the vendor produces 737 MAXes. That is not a fluke but human nature. You're usually better served by checking other things, such as the quality of materials and assembly.

Performing third-party checks is hard and slow, and those checks are usually incomplete, be it by distributions or in the real world, while printing authenticity certificates is easy. It is very tempting to slap shiny certificates on an opaque box and declare the mission accomplished, but it is not. More so if the result is reduced vendor accountability and a disincentive to doing things right (I’m not saying that’s the case here, but it is the usual ending of “let’s trust the vendor” initiatives).

Images are a false simplification

Posted Nov 5, 2025 22:17 UTC (Wed) by ebee_matteo (subscriber, #165284) [Link] (4 responses)

> It provides authenticity too, as the point being made was about signed dm-verity images. The signature is verified by the kernel keyring, so both authenticity and integrity are covered.

Yes, authenticity against a digital signature.

But trust has to start somewhere. You need to trust the signing keys, or somebody who transitively approved the key, e.g. a CA.

In other words, you can prove an image was signed against a key, but if I manage to convince you to trust my public key, I can still run malicious software on your machine.

I still haven't seen the problem of supply-chain attacks being solved (by anybody, regardless of the technology employed).

Images are a false simplification

Posted Nov 5, 2025 22:23 UTC (Wed) by bluca (subscriber, #118303) [Link] (3 responses)

> But trust has to start somewhere.

Yes, and this is a solved problem on x86-64: you trust the vendor who sold you your CPU. You have to anyway, since it's your CPU, and it's silly to pretend otherwise.
That CPU verifies the firmware signature, which verifies the bootloader signature, which verifies the UKI signature, which verifies the dm-verity signature.
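As an illustration only, here is a minimal sketch of that hand-off, assuming a toy boot chain where each stage's signed payload also covers the public key it will trust for the next stage. The stage names and keys are hypothetical, and Ed25519 via the Python cryptography package stands in for whatever the real firmware uses:

    from dataclasses import dataclass
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey, Ed25519PublicKey,
    )

    @dataclass
    class Stage:
        name: str                  # "firmware", "bootloader", "UKI", ...
        payload: bytes             # what the previous stage verifies
        next_pubkey: bytes | None  # key this stage trusts for the next stage
        signature: bytes           # signature over payload + next_pubkey

    def raw(key: Ed25519PrivateKey) -> bytes:
        """Raw public-key bytes, for embedding into the previous stage."""
        return key.public_key().public_bytes(
            serialization.Encoding.Raw, serialization.PublicFormat.Raw)

    def sign_stage(signer: Ed25519PrivateKey, name: str, payload: bytes,
                   next_pubkey: bytes | None) -> Stage:
        data = payload + (next_pubkey or b"")
        return Stage(name, payload, next_pubkey, signer.sign(data))

    def verify_chain(root_pubkey: bytes, stages: list[Stage]) -> bool:
        """Walk the chain: each stage must verify against the key handed
        down by the previous stage (the CPU vendor's key for stage 0)."""
        trusted = root_pubkey
        for stage in stages:
            try:
                Ed25519PublicKey.from_public_bytes(trusted).verify(
                    stage.signature,
                    stage.payload + (stage.next_pubkey or b""))
            except InvalidSignature:
                print(f"refusing to boot: {stage.name} failed verification")
                return False
            trusted = stage.next_pubkey  # trust extends one link at a time
        return True

    # Hypothetical keys, one per link in the chain.
    vendor, fw, boot, uki = (Ed25519PrivateKey.generate() for _ in range(4))
    chain = [
        sign_stage(vendor, "firmware",       b"firmware blob", raw(fw)),
        sign_stage(fw,     "bootloader",     b"sd-boot blob",  raw(boot)),
        sign_stage(boot,   "UKI",            b"kernel+initrd", raw(uki)),
        sign_stage(uki,    "dm-verity root", b"<root hash>",   None),
    ]
    assert verify_chain(raw(vendor), chain)

The real chain uses X.509/Authenticode signatures and the kernel keyring rather than raw Ed25519 keys, but the hand-off structure is the same: each link refuses to proceed unless the next link verifies against a key it already trusts.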

Images are a false simplification

Posted Nov 6, 2025 1:32 UTC (Thu) by Nahor (subscriber, #51583) [Link] (2 responses)

> this is a solved problem on x86-64

Not really. It's more that it is an unsolvable problem (or at least impractical to solve), so we choose to stop there.

> you trust the vendor who sold you your CPU

Plenty of people will argue you can't ("blabla manufacturing blabla China blabla" and "blabla NSA blabla backdoor blabla")

Images are a false simplification

Posted Nov 6, 2025 2:46 UTC (Thu) by intelfx (subscriber, #130118) [Link] (1 responses)

> Plenty of people will argue you can't ("blabla manufacturing blabla China blabla" and "blabla NSA blabla backdoor blabla")

That's the point of the GP, which I believe you have missed.

If you don't trust your CPU vendor enough to believe that their root of trust implementation is not subverted by your malicious actor of choice, then why would you trust *anything* that comes out of that CPU against the same malicious actor? The only logical course of action would be to throw the CPU away immediately.

And if you haven't done that, then it necessarily follows that you *do* trust the CPU vendor, so it's fine if they implement a root of trust too.

Images are a false simplification

Posted Nov 6, 2025 10:29 UTC (Thu) by excors (subscriber, #95769) [Link]

I think there are subtly different definitions of "trust". In normal English it means "I believe this person won't harm me", but in a computer security context it often means "I am allowing this person to harm me". Under the first definition, it's best if you can correctly trust as many people as possible. Under the second, it's best to trust as few as possible.

Ideally the people you trust under the second definition are also trusted under the first definition, but in practice you can rarely have that level of belief in anyone, so you're knowingly opening yourself up to some risk of harm.

You can't prevent your CPU vendor harming you, so you do trust them under the second definition. The best you can do is minimise risk by ensuring they're the only people who can harm you.

Images are a false simplification

Posted Nov 6, 2025 8:10 UTC (Thu) by taladar (subscriber, #68407) [Link]

On the other hand, images are one step further away from the actual source of the code (as in the dev team, not the files with the lines), which means there is one more layer that knows and cares less about the details, one more layer to go outdated, one more layer you have to penetrate to figure out which open security issues exist, and one more layer you need to rebuild once a security issue is fixed.

Images have quite frankly left me totally unconvinced that those who build them do actually care about security issues enough to even check for open issues, much less rebuild them every single time one gets fixed.

What good is having the authentic image if the image contains a mere few hundred open security holes of various (but not just low) severity?

