Images are a false simplification
Posted Nov 5, 2025 20:28 UTC (Wed) by ebee_matteo (subscriber, #165284)
In reply to: Images are a false simplification by nim-nim
Parent article: A security model for systemd
Of course, you are right that this only solves the integrity problem and does not prove authenticity.
The trust boundary, however, is pushed a bit further: to the point where you can validate that an image was signed by the right people with the right keys.
Compare with a Debian package, whose files can be modified on disk after installation by a malicious user. And of course, an image also often implies a reproducible environment (e.g. controlled environment variables), which makes it a bit harder to exploit.
Posted Nov 5, 2025 21:05 UTC (Wed)
by bluca (subscriber, #118303)
It provides authenticity too, as the point being made was about signed dm-verity images. The signature is verified by the kernel keyring, so both authenticity and integrity are covered.
Of course this is not the case when using more antiquated image formats such as Docker, where it's just a bunch of tarballs in a trenchcoat, but systemd has been supporting better image formats for a long time now.
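A minimal, hypothetical sketch of the idea behind dm-verity (stdlib only, and simplified to a single level of hashing rather than the real multi-level hash tree): every block of the image feeds into one root hash, so a signature over just that root hash authenticates the whole image, and no block can be tampered with undetected.

```python
import hashlib

BLOCK = 4096  # dm-verity operates on fixed-size blocks; 4 KiB is typical


def root_hash(data: bytes) -> str:
    """Collapse an image into one root hash, dm-verity style (simplified:
    a single level of hashing stands in for the full hash tree)."""
    blocks = [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]
    leaf_hashes = b"".join(hashlib.sha256(b).digest() for b in blocks)
    return hashlib.sha256(leaf_hashes).hexdigest()


image = b"\x00" * 16384  # stand-in for an image file
good = root_hash(image)

# Flipping a single byte anywhere in the image changes the root hash,
# so a signature over the root hash covers every block of the image.
tampered = bytearray(image)
tampered[12345] ^= 0xFF
assert root_hash(bytes(tampered)) != good
```

In the real implementation, the kernel verifies each block against the tree lazily at read time, so tampering is caught before the modified data is ever returned.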
Posted Nov 5, 2025 22:06 UTC (Wed)
by nim-nim (subscriber, #34454)
We’ve known for quite a long time that it is useless to install genuine foo if genuine foo cannot resist exploitation as soon as it is put online (as companies deploying golden Windows images discovered once networking became common). And we’ve known for quite a long time that attackers rarely bother altering components in flight; they typosquat and trick you into installing genuine, authenticated malware (no different from the counterfeits that flood marketplaces and that Amazon or Alibaba will happily ship you in their genuine state).
Security comes from the ability to rebuild and redistribute stuff when it has a hole (honest mistakes), and from poking inside stuff that will be redistributed to check that it actually is what it pretends to be (deliberate poisoning). Then you can sign the result and argue over whether your signing is solid, but signing is only worthwhile if the two previous steps have been done properly.
Posted Nov 6, 2025 1:22 UTC (Thu)
by Nahor (subscriber, #51583)
If what you built and distributed can easily be replaced without you knowing, then those two steps are of limited value too.
And continuing your line of thought: if signing/immutability/rebuild/distribution are all done right, they are useless if you don't verify the source code you're using.
And even if the source code verification is done right, it is useless if the person doing the verification and signing of the code can be corrupted or coerced with a $5 wrench.
And even if [...]
TL;DR: what you're arguing is that security is of limited value because there will always be a point where you have to trust something or someone. There will always be a weak link. All we can do is ensure that most links are safe, to reduce the attack surface. Using images is one step in that direction.
Posted Nov 6, 2025 7:32 UTC (Thu)
by nim-nim (subscriber, #34454)
We trust a system, where maximum transparency, accountability, and absence of lockdown keep vendors honest. We trust the regulator, which forces vendors to provide a minimum of information on their pretty boxes. We trust consumer associations, which check that the regulator is not captured by vendors. We trust people who perform reviews, tests, and disassembly of products, all the more so when they are diverse, independent, and unable to collude with one another. We trust competition, and the regulations that enforce this competition and prevent vendors from cornering and locking down some part of the market.
And then you can add a certificate of genuine authenticity to the mix, but most of the things you'll buy in real life don’t come with one, because that’s icing on the cake, no more. Trusting the vendor produces 737 MAXes. This is not a fluke but human nature. You're usually better served by checking other things, such as the quality of materials and assembly.
Performing third-party checks is hard and slow, and those checks are usually incomplete, be it by distributions or in the real world, while printing authenticity certificates is easy. It is very tempting to slap shiny certificates on an opaque box and declare the mission accomplished, but it is not. More so if the result is reduced vendor accountability and a disincentive to doing things right (I’m not saying that’s the case here, but it is the usual ending of let's-trust-the-vendor initiatives).
Posted Nov 5, 2025 22:17 UTC (Wed)
by ebee_matteo (subscriber, #165284)
Yes, authenticity against a digital signature.
But trust has to start somewhere. You need to trust the signing keys, or somebody who transitively approved the key, e.g. a CA.
In other words, you can prove an image was signed with a key, but if I manage to convince you to trust my public key, I can still run malicious software on your machine.
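This point can be made concrete with a toy sketch (not real cryptography: an HMAC key stands in for a signing key pair, and a plain set stands in for a CA store or kernel keyring). Verification only proves the artifact was signed by *some* trusted key; it says nothing about whether that key deserved to be trusted.

```python
import hashlib
import hmac

# Toy trust store: the set of keys the verifier accepts (hypothetical names).
trusted_keys = {b"vendor-key"}


def sign(key: bytes, image: bytes) -> bytes:
    """Toy 'signature': an HMAC tag standing in for a real asymmetric signature."""
    return hmac.new(key, image, hashlib.sha256).digest()


def verify(key: bytes, image: bytes, sig: bytes) -> bool:
    """Accept the image only if the signing key is in the trust store
    and the signature checks out."""
    return key in trusted_keys and hmac.compare_digest(sign(key, image), sig)


malware = b"evil payload"
attacker_key = b"attacker-key"

# The attacker's signature is rejected while their key is untrusted...
assert not verify(attacker_key, malware, sign(attacker_key, malware))

# ...but once you are convinced to trust the attacker's key,
# the same malware verifies perfectly.
trusted_keys.add(attacker_key)
assert verify(attacker_key, malware, sign(attacker_key, malware))
```

The cryptography never fails here; it is the trust decision, made before any verification runs, that determines what the machine will happily execute.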
I still haven't seen the problem of supply-chain attacks being solved (by anybody, regardless of the technology employed).
Posted Nov 5, 2025 22:23 UTC (Wed)
by bluca (subscriber, #118303)
Yes, and this is a solved problem on x86-64: you trust the vendor who sold you your CPU. You have to anyway, since it's your CPU, and it's silly to pretend otherwise.
Posted Nov 6, 2025 1:32 UTC (Thu)
by Nahor (subscriber, #51583)
Not really. It's more that it is an unsolvable problem (or at least impractical to solve), so we choose to stop there.
> you trust the vendor who sold you your CPU
Plenty of people will argue you can't ("blabla manufacturing blabla China blabla" and "blabla NSA blabla backdoor blabla")
Posted Nov 6, 2025 2:46 UTC (Thu)
by intelfx (subscriber, #130118)
That's the point of the GP, which I believe you have missed.
If you don't trust your CPU vendor enough to believe that their root of trust implementation is not subverted by your malicious actor of choice, then why would you trust *anything* that comes out of that CPU against the same malicious actor? The only logical course of action would be to throw the CPU away immediately.
And if you haven't done that, then it necessarily follows that you *do* trust the CPU vendor, so it's fine if they implement a root of trust too.
Posted Nov 6, 2025 10:29 UTC (Thu)
by excors (subscriber, #95769)
Ideally the people you trust under the second definition are also trusted under the first definition, but in practice you can rarely have that level of belief in anyone, so you're knowingly opening yourself up to some risk of harm.
You can't prevent your CPU vendor harming you, so you do trust them under the second definition. The best you can do is minimise risk by ensuring they're the only people who can harm you.
Posted Nov 6, 2025 8:10 UTC (Thu)
by taladar (subscriber, #68407)
Images have, quite frankly, left me totally unconvinced that those who build them care about security issues enough even to check for open issues, much less rebuild every single time one gets fixed.
What good is an authentic image if it contains a few hundred open security holes of various (and not just low) severity?
That CPU verifies the firmware signature, which verifies the bootloader signature, which verifies the UKI signature, which verifies the dm-verity signature.
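The chain described above can be sketched as a few lines of Python (hypothetical stage names and a single hash per link standing in for real signature verification): only the first link is trusted axiomatically, because it is pinned by the CPU, and each stage then pins the next.

```python
import hashlib


def digest(blob: bytes) -> str:
    return hashlib.sha256(blob).hexdigest()


# Hypothetical boot chain: each stage carries the expected hash of the
# next stage. Only the anchor (pinned by the CPU) is trusted up front.
firmware = b"firmware"
bootloader = b"bootloader"
uki = b"unified kernel image"
verity_img = b"dm-verity protected rootfs"

chain = [
    (firmware, digest(bootloader)),   # firmware pins the bootloader
    (bootloader, digest(uki)),        # bootloader pins the UKI
    (uki, digest(verity_img)),        # UKI pins the dm-verity root hash
]


def boot(chain, cpu_pinned_hash: str) -> bool:
    expected = cpu_pinned_hash        # root of trust from the CPU vendor
    for stage, next_hash in chain:
        if digest(stage) != expected:  # any broken link halts the boot
            return False
        expected = next_hash
    return True


assert boot(chain, digest(firmware))
# Tampering with any stage breaks the chain from that point onward.
assert not boot([(b"evil", digest(uki))] + chain[1:], digest(firmware))
```

The structural point is that verification flows one way: each link can only vouch for what comes after it, which is why the whole argument keeps returning to whether you trust the anchor.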
