Poettering: Authenticated Boot and Disk Encryption on Linux
So, does the scheme so far implemented by generic Linux distributions protect us against the latter two scenarios? Unfortunately not at all. Because distributions set up disk encryption the way they do, and only bind it to a user password, an attacker can easily duplicate the disk, and then attempt to brute force your password. What's worse: since code authentication ends at the kernel — and the initrd is not authenticated anymore —, backdooring is trivially easy: an attacker can change the initrd any way they want, without having to fight any kind of protections.
The article contains a lot of suggestions for how to do things better.
Posted Sep 23, 2021 15:50 UTC (Thu)
by Xiol (guest, #87394)
[Link] (5 responses)
Posted Sep 23, 2021 16:32 UTC (Thu)
by Sesse (subscriber, #53779)
[Link] (4 responses)
Posted Sep 23, 2021 18:27 UTC (Thu)
by joib (subscriber, #8541)
[Link]
Posted Sep 23, 2021 19:21 UTC (Thu)
by fwyzard (subscriber, #90840)
[Link] (2 responses)
Posted Sep 24, 2021 8:09 UTC (Fri)
by marcH (subscriber, #57642)
[Link] (1 responses)
I tried quite hard to find some connection between this and the comment it was answering but came to the conclusion there's none.
Posted Sep 26, 2021 21:53 UTC (Sun)
by rodgerd (guest, #58896)
[Link]
Posted Sep 23, 2021 15:53 UTC (Thu)
by martin.langhoff (guest, #61417)
[Link] (39 responses)
I don't quite grok how this can happen. Can you mount/modify/unmount an encrypted drive without the key/password? Or is /boot unencrypted _and bits in it read and trusted without checking against signatures_?
Posted Sep 23, 2021 15:59 UTC (Thu)
by azumanga (subscriber, #90158)
[Link] (25 responses)
Posted Sep 23, 2021 18:35 UTC (Thu)
by ericonr (guest, #151527)
[Link] (24 responses)
Dracut can even do the whole thing automatically, you just need to use `--uefi` in the invocation and add the key paths to your config.
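For readers who want to try it, a minimal sketch of such a setup (the certificate/key paths, the output path and the kernel command line are placeholders, not anything dracut mandates):

# /etc/dracut.conf.d/90-uki.conf -- build a signed unified kernel image
uefi="yes"
kernel_cmdline="rd.luks.uuid=<your-luks-uuid> root=/dev/mapper/root rw"
uefi_secureboot_cert="/etc/keys/db.crt"
uefi_secureboot_key="/etc/keys/db.key"

$ dracut --force --uefi --kver "$(uname -r)" /boot/efi/EFI/Linux/linux.efi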
Posted Sep 23, 2021 19:20 UTC (Thu)
by kreijack (guest, #43513)
[Link] (3 responses)
If you want to store "your" key with EFI, you have to do it interactively in the EFI BIOS and/or shim. So this is not something that you can do automatically (or programmatically) at installation time.
This is the reason why it is not widely adopted.
TPM is more complex than UEFI, and I suppose that it solves this issue easily.
Posted Sep 25, 2021 0:30 UTC (Sat)
by timrichardson (subscriber, #72836)
[Link] (2 responses)
Posted Nov 1, 2021 7:54 UTC (Mon)
by mcortese (guest, #52099)
[Link] (1 responses)
The article is about what distros could or should do. If you're compiling your own kernel, just get rid of initrd.
Posted Nov 1, 2021 17:11 UTC (Mon)
by jem (subscriber, #24231)
[Link]
I don't compile my own kernel; I use the standard Arch kernels. However, each time pacman (the Arch Linux package manager) installs a new kernel and rolls a new initrd, I run a small script that creates an EFI unified kernel image containing both the kernel and the initrd image. I then sign this image with my own private key.
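For illustration, such a script usually glues the pieces onto the systemd EFI stub and then signs the result with sbsign; this is only a sketch (paths, section offsets and key names are illustrative, not the actual script described above):

#!/bin/sh
# build the unified kernel image from kernel + cmdline + initrd
objcopy \
  --add-section .osrel=/etc/os-release --change-section-vma .osrel=0x20000 \
  --add-section .cmdline=/etc/kernel/cmdline --change-section-vma .cmdline=0x30000 \
  --add-section .linux=/boot/vmlinuz-linux --change-section-vma .linux=0x2000000 \
  --add-section .initrd=/boot/initramfs-linux.img --change-section-vma .initrd=0x3000000 \
  /usr/lib/systemd/boot/efi/linuxx64.efi.stub /boot/efi/EFI/Linux/arch.efi
# sign it with the locally enrolled Secure Boot key
sbsign --key /etc/keys/db.key --cert /etc/keys/db.crt \
  --output /boot/efi/EFI/Linux/arch.efi /boot/efi/EFI/Linux/arch.efi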
Posted Sep 23, 2021 19:23 UTC (Thu)
by developer122 (guest, #152928)
[Link] (19 responses)
One solution is fuses that are blown once and encode the signing key. But that doesn't work so well with resale, which is bad for the planet.
(PS: oh, yeah, you also need to fuse a decryption key that only signed code is allowed to access)
The only other alternative I've heard of seems to be TPM+measurement. The TPM, if given a magic measurement value, will give back a key to decrypt the user's data. Thus, if anything gets messed with, the user's data is gone, which makes the available attacks equivalent to "smash computer with hammer" (DoS).
HOWEVER, we have to stop people from lying to the TPM. That means the code that does the measurement has to be signed by a hardware-recognized key (in other words, it comes from a distro). This means you can't hook into the early boot process to get that measurement value just to echo it to the TPM yourself. The channel to the TPM also needs to be encrypted, or the TPM needs to be on-die, so that its measurements can't be intercepted.
Once the TPM gives up the decryption key your signed-by-a-key-the-hardware-recognizes-and-provided-by-a-distro boot code knows two things:
1) the user's stuff hasn't been tampered with (it was actually put there by the user not an attacker)
and 2) how to decrypt and access the user's stuff (which could include keys to decrypt later stages)
Now, that's how *I understand* it's supposed to work, though I'm certainly not a competent expert. All this jumping back and forth with a TPM that could be lied to seems like a huge expansion of the attack surface.
It also doesn't answer the question for me of why the attacker couldn't just measure the state of everything at rest and provide the magic measurement value to the TPM themselves, without starting the laptop. Then get the key back and decrypt. I'm guessing the answer is that there needs to be an additional assumption: the TPM needs to know it's talking to the main CPU. It could be on-die, or there could be an assumption that "nobody could possibly attach themselves to the traces going to the TPM", which seems really weak to me. Basically, just having the measurement doesn't mean you're a trusted party.
Ugh. This is becoming a real rat's nest.
Is it not possible in hardware to have fuses that could accept a bootloader signing key, and be reset, but only if a decryption key is erased too? That way, if you're an attacker who wants to run different boot software (your own software, to read out the decryption key), you would need to replace the signing key to run it... which also wipes the key you want to extract. And just put that on the main boot CPU and skip the rest of the TPM stuff?
Unless A) you can't build hardware that can securely/reliably do that or B) there's some other subtle flaw in that simplification I can't think of. Both of those are pretty likely.
Posted Sep 23, 2021 20:13 UTC (Thu)
by mjg59 (subscriber, #23239)
[Link] (12 responses)
But that still leaves the problem of how to handle someone physically lying to the TPM. There's two approaches:
1) TPMs support setting up an encrypted channel. Unfortunately no firmware actually does this at present, so.
2) Use a firmware-based TPM rather than a discrete hardware one. Intel support running one in the management engine, AMD support running one in the PSP.
[1] Although I don't know of any systems that are actually shipped in this configuration
Posted Sep 23, 2021 22:46 UTC (Thu)
by developer122 (guest, #152928)
[Link] (8 responses)
1) We want the system to only run code we trust. It could be signed by a vendor key, or there could be some mechanism for users to add their own authentication later. If they can add keys later, you need to prevent attackers from doing the same.
2) There are certain secrets we need to protect. These include the keys used to decrypt user data.
Thing is, you can have situations where you have 1) but not 2)
It does no good to only run verified code if all of the secrets needed can be extracted at rest. For example, emulating the boot process to discover the correct measurements and then feeding them to the TPM to extract the root partition decryption key.
TPMs may set up an encrypted channel... but with whom? If I can read the key the firmware uses from the BIOS SPI chip, then it's game over. Having the TPM be on the CPU/SoC/etc (like with the ME, unless you can spoof the PCH) at least helps with this identity problem.
Posted Sep 23, 2021 23:38 UTC (Thu)
by mjg59 (subscriber, #23239)
[Link] (4 responses)
You ask the TPM to generate a session key and attest to it, then you use that - every modern TPM has a certificate[1] that ties its root endorsement key to its manufacturer, so in theory the firmware can verify that the attestation data comes from the expected TPM.
[1] Although firmware-based TPMs frequently don't actually store the certificate and you have to obtain it from some sort of web API, but the benefits from doing this dance on firmware-based TPMs are minimal anyway so that doesn't seem like a major issue
Posted Sep 24, 2021 2:04 UTC (Fri)
by developer122 (guest, #152928)
[Link] (3 responses)
I don't see how measurement values can work as an attestation mechanism, and a key stored in the BIOS rom seems easy to retrieve. Maybe you could fuse a TPM-specific key into the main processor, and only run signed code on it*, preventing the key from being read out?
Otherwise you just need to hold the main processor in reset and feed measurements of the rom contents to the TPM over the shared bus.
*signed by someone recognized by the silicon vendor, like a distro
Posted Sep 24, 2021 3:48 UTC (Fri)
by mjg59 (subscriber, #23239)
[Link] (2 responses)
Posted Sep 24, 2021 23:22 UTC (Fri)
by developer122 (guest, #152928)
[Link] (1 responses)
Posted Sep 25, 2021 7:43 UTC (Sat)
by mjg59 (subscriber, #23239)
[Link]
Posted Sep 23, 2021 23:40 UTC (Thu)
by mjg59 (subscriber, #23239)
[Link] (2 responses)
> If they can add keys later, you need to prevent attackers from doing the same.
If we can detect that attackers have added new keys, is that sufficient? The secure boot key that's used during boot is measured into PCR 7, and we can extend that with any available user keys as well. In that scenario, an attacker who adds new keys will cause boot to fail since the measurements will now be different.
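As a concrete (if hedged) illustration: systemd-cryptenroll can seal a LUKS2 key slot against PCR 7, so a change in the enrolled Secure Boot keys makes the TPM refuse to unseal and boot falls back to asking for a recovery passphrase instead of unlocking silently. The device path here is a placeholder:

$ systemd-cryptenroll --tpm2-device=auto --tpm2-pcrs=7 /dev/nvme0n1p2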
Posted Sep 24, 2021 23:19 UTC (Fri)
by developer122 (guest, #152928)
[Link] (1 responses)
Posted Sep 25, 2021 7:40 UTC (Sat)
by mjg59 (subscriber, #23239)
[Link]
No, something signed by the system vendor is fine here. The firmware measures the key databases into PCR 7 before executing anything signed with those keys.
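To see what the firmware actually measured there, the PCR can be read back with tpm2-tools (assuming a TPM 2.0 device and the tools installed):

$ tpm2_pcrread sha256:7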
Posted Sep 24, 2021 21:35 UTC (Fri)
by jhoblitt (subscriber, #77733)
[Link] (2 responses)
I suspect that from a theoretical point of view it is impossible to create a device with a self-contained root-of-trust chain that can be completely trusted once it has passed out of the owner's physical control.
Is there such a thing as FDE which covers the initrd and uses a physical smart-card-like device to unlock, such as a YubiKey? If we are going to trust the firmware, it seems like that shouldn't be too difficult to implement in UEFI land.
Posted Sep 24, 2021 21:37 UTC (Fri)
by mjg59 (subscriber, #23239)
[Link] (1 responses)
Posted Sep 27, 2021 8:54 UTC (Mon)
by bluca (subscriber, #118303)
[Link]
Posted Sep 23, 2021 20:29 UTC (Thu)
by kreijack (guest, #43513)
[Link]
The answer was told by Lennart
> What's also important to mention is that the secrets are not only protected by these PCR values but encrypted with a "seed key" that is generated on the TPM chip itself, and cannot leave the TPM (at least so goes the theory). The idea is that you cannot read out a TPM's seed key, and thus you cannot duplicate the chip
So the measurement values are different even between two identical computers.
Posted Sep 24, 2021 16:17 UTC (Fri)
by artefact (guest, #154379)
[Link] (4 responses)
If you don't use tpm2-totp, anyone can compromise your kernel/initramfs by resetting the CMOS, tweaking the secure boot keys and putting their own files in the ESP. If you use the TPM to store the dm-crypt secret, unlocking your main disk would trigger an error, and you'd be asked for the rescue passphrase.
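With systemd's TPM2 support that failure mode largely comes for free. A hedged sketch of the corresponding /etc/crypttab entry (the UUID is a placeholder): if the PCR policy no longer matches, unsealing fails and systemd-cryptsetup falls back to prompting for the passphrase or recovery key.

root  UUID=00000000-0000-0000-0000-000000000000  none  tpm2-device=auto,discard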
Posted Sep 24, 2021 20:07 UTC (Fri)
by bluca (subscriber, #118303)
[Link] (3 responses)
Posted Sep 24, 2021 21:48 UTC (Fri)
by mjg59 (subscriber, #23239)
[Link] (2 responses)
Posted Sep 25, 2021 10:37 UTC (Sat)
by bluca (subscriber, #118303)
[Link] (1 responses)
Posted Sep 25, 2021 17:29 UTC (Sat)
by mjg59 (subscriber, #23239)
[Link]
Posted Sep 23, 2021 16:00 UTC (Thu)
by dskoll (subscriber, #1630)
[Link] (3 responses)
Presumably you'd backdoor the initrd, which AFAIK is not signed or encrypted.
Posted Sep 23, 2021 18:20 UTC (Thu)
by martin.langhoff (guest, #61417)
[Link] (2 responses)
Posted Sep 27, 2021 14:23 UTC (Mon)
by smurf (subscriber, #17840)
[Link] (1 responses)
Posted Oct 2, 2021 21:15 UTC (Sat)
by spotter (guest, #12199)
[Link]
Yes, if someone is able to hack the machine and gain root on it, they can muck with the content that will be placed in the initrd, but they are already root; this is just a way to make their root access more sticky (i.e. bad, but not the problem that people seem to be worried about here, which is an "offline" hands-on hardware attack where the unsigned initrd is mucked with, without any need for compromising the machine to begin with).
I.e. at OS setup time, we generate a public/private key pair that has its public portion "securely" stored (i.e. not easy to replace) and is used to sign the initrd every single time it's regenerated. Why wouldn't this work transparently for most users? The inability to store such a key securely? Something else?
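A minimal sketch of that idea using openssl (file locations and names are made up for illustration; the hard part the comment asks about, namely where the verifying side keeps the public half so it can't simply be replaced, is not solved here):

# one-time, at OS setup: generate the local signing key pair
$ openssl req -new -x509 -newkey rsa:4096 -nodes -days 3650 \
    -subj "/CN=local initrd signing key" \
    -keyout /etc/keys/initrd.key -out /etc/keys/initrd.crt
# every time the initrd is regenerated (e.g. from a package manager hook):
$ openssl dgst -sha256 -sign /etc/keys/initrd.key \
    -out /boot/initramfs-linux.img.sig /boot/initramfs-linux.img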
Posted Sep 23, 2021 19:53 UTC (Thu)
by birdie (guest, #114905)
[Link] (1 responses)
I haven't read the article though, so I might be completely wrong.
Posted Sep 25, 2021 17:56 UTC (Sat)
by Wol (subscriber, #4433)
[Link]
Cheers,
Wol
Posted Sep 23, 2021 19:54 UTC (Thu)
by artefact (guest, #154379)
[Link] (1 responses)
Posted Sep 23, 2021 20:20 UTC (Thu)
by mjg59 (subscriber, #23239)
[Link]
Posted Sep 24, 2021 0:38 UTC (Fri)
by gerdesj (subscriber, #5446)
[Link] (4 responses)
You'll need to do quite a bit of work here and know your adversary rather well to carry out this type of attack.
You could insert a routine that passes the text entered after a password: prompt to somewhere.
Posted Sep 24, 2021 1:49 UTC (Fri)
by mjg59 (subscriber, #23239)
[Link] (2 responses)
How would the hard drive get that text?
Posted Sep 24, 2021 18:15 UTC (Fri)
by smoogen (subscriber, #97)
[Link] (1 responses)
In the end, nothing is absolutely secure... but it seems the usual human tendency with computers is 'well, it's impossible, so why have any?' [It's like the people who never lock their doors because, well, it's simple enough to break in through 100 different ways.]
Posted Sep 25, 2021 7:01 UTC (Sat)
by NYKevin (subscriber, #129325)
[Link]
Seriously, though: What's your threat model? Who wants to compromise your device, and how badly do they want to compromise it? Do they even know how to modify USB firmware? Do they have physical access to any of your peripherals? Assuming you said "yes," why are you encrypting your devices, but not taking commensurate precautions in terms of physical security?
Your threat model must bear some relation to reality. The average person is not a character in a Bond movie, and does not have nation-state adversaries. If you are one of the few people who really do have a serious, nontrivial threat model, then you need to take a holistic, layered approach to your security precautions, and focus on making your adversary expend more effort than your secrets are worth to that adversary. You are right to proclaim that there's no magic bullet, but that's hardly the end of the conversation.
Posted Sep 23, 2021 17:03 UTC (Thu)
by madscientist (subscriber, #16861)
[Link] (8 responses)
What do you do about data that doesn't live in /usr, /etc, /var, or /home/$USER? The nice thing about FDE is that it ensures all data, no matter where you put it, is encrypted.
I see the problem being solved with initrd but I don't see a lot of advantage to doing more than fixing the initrd loophole, then applying FDE. Maybe I missed the reason for all the additional complexity.
Posted Sep 23, 2021 18:33 UTC (Thu)
by perennialmind (guest, #45817)
[Link] (6 responses)
The proposal discusses /usr and /home as distinct volumes, but /etc and /var as key examples on the root volume. Unlocking personal stuff under /opt/data is left as an exercise for the end user.
> Let's now look at the OS configuration and state, i.e. the stuff in /etc/ and /var/. It probably makes sense to not consider these two hierarchies independently but instead just consider this to be the root file system.
Posted Sep 23, 2021 19:46 UTC (Thu)
by madscientist (subscriber, #16861)
[Link] (5 responses)
Your comment doesn't address my question: why we should go to all the extra hassle rather than just using FDE. What's the advantage we get for all that extra complexity?
Posted Sep 24, 2021 8:01 UTC (Fri)
by bernat (subscriber, #51658)
[Link]
Posted Sep 25, 2021 18:06 UTC (Sat)
by Wol (subscriber, #4433)
[Link] (3 responses)
But WHY do you want to encrypt it?
You want to encrypt /var - it contains sensitive stuff like /var/mail and /var/spool.
You want to encrypt /home/user - that is blatantly sensitive ...
But /opt? /etc? /usr? You don't care if the contents are leaked - they're identical across heaven knows how many unix/linux systems, there's nothing sensitive. What MATTERS is that they haven't been compromised, that they haven't been altered. A completely different problem, for which you want to SIGN the contents, to be sure it hasn't been compromised.
Cheers,
Wol
Posted Sep 25, 2021 19:36 UTC (Sat)
by mpr22 (subscriber, #60784)
[Link] (2 responses)
I infer from this that you, Wol, apparently don't care if the contents of your /opt or /etc get leaked.
However, you, Wol, probably shouldn't assert that other people don't (need to) care if the contents of their /opt or /etc get leaked.
Like, if there's nobody out there with Should-Not-Be-Leaked data in /opt or /etc, I'll eat my hat.
Posted Sep 25, 2021 22:47 UTC (Sat)
by Wol (subscriber, #4433)
[Link] (1 responses)
But /etc is supposed to contain system state, right? Which is rarely sensitive?
And /opt likewise contains locally installed programs, which shouldn't be sensitive?
So I'm not saying people DON'T have sensitive data there, but in the normal course of events, they SHOULDN'T.
Cheers,
Wol
Posted Sep 25, 2021 23:10 UTC (Sat)
by pizza (subscriber, #46)
[Link]
There's a lot of system-specific configuration in /etc, which includes sensitive stuff like ssh host keys, PKI stuff (/etc/pki/tls), system kerberos or ldap credentials, and much more.
> And /opt likewise contains locally installed programs, which shouldn't be sensitive?
Under /opt you often find complete mini-filesystem hierarchies, with their own /etc /var, etc.
Posted Sep 26, 2021 13:43 UTC (Sun)
by brukernavn (guest, #154435)
[Link]
Posted Sep 23, 2021 17:25 UTC (Thu)
by gmgod (guest, #143864)
[Link] (7 responses)
Only one caveat/aspect that isn't quite covered: disk corruption. One bit flip somewhere and nothing boots.
Yes, data can be recovered (recovery methods presented) and yes a bit flip should not go unnoticed!
I am just saying that when it happens, being able to somehow rollback/reinstall/restore would be of tremendous help. Then all that sounds like a plan. :)
Posted Sep 23, 2021 18:54 UTC (Thu)
by perennialmind (guest, #45817)
[Link]
My read led me to think you'd get an IO error on just the relevant sector/block, just as you would if the internal ECC recovery on the drive failed. In which case, it's a different flavor of bad block: a problem for ddrescue. If that bad block affects the kernel, then yeah: boot fail. Given the alternative of booting a kernel with a hole in its guts, I think I'd rather reach for a rescue disk.
Posted Sep 23, 2021 19:04 UTC (Thu)
by zuki (subscriber, #41808)
[Link] (2 responses)
No, things are not so bad. If the bit flips in the kernel image or the initrd image and those are verified, then indeed they will not pass verification. But if the bit flips in the file system image protected by dm-verity or dm-integrity, then only that one sector will be rejected, and then you get IO errors on some file.
Considering that the kernel+compressed initrd are on the order of 100MB, while the whole disk is at least 100s of GB, the chances that the bit flips in there is rather small. And usually you have multiple kernels available anyway, so the overall chance of total failure to boot is very small.
Also note that the initrd is compressed, so even a single bit flip could "expand" into quite a big change in the unpacked contents, and OTOH, file systems like btrfs do checksumming internally, so a bit flip would cause IO failures anyway. So in summary, verification makes things a bit more likely to fail, but the total probability is still small.
Posted Sep 23, 2021 20:06 UTC (Thu)
by Wol (subscriber, #4433)
[Link]
So look at how many errors you observe, note that most of them will be cured by a reboot, and how likely is it again that you will have a real problem?
Cheers,
Wol
Posted Sep 23, 2021 20:19 UTC (Thu)
by gmgod (guest, #143864)
[Link]
That would almost eliminate the problem and provide a very nice way of making everything more resilient.
Posted Sep 25, 2021 4:40 UTC (Sat)
by flussence (guest, #85566)
[Link] (2 responses)
For a bit flip in the drive crypt header containing the key nothing happens, for the same reason nothing happens from a bit flip in a superblock (encrypted or not) or disk GPT: there's multiple redundant checksummed copies because they're basically free to have (a few kilobytes).
In a corrupted filesystem sector somewhere: you get one encrypted block scrambled instead of 1 bit, which is usually 64 bytes or so depending on the cipher. They don't use stream cipher block-chaining as that would make random seeks impossible (you could do it on tape, but I wouldn't recommend it). The disk will often (but not always) catch bad sectors as they show up, and a checksumming filesystem or RAID will have you covered too.
If all those layers fail and you have an unencrypted disk you get the worst possible outcome; a single incorrect bit on persistent storage can cause insidious problems for a very long time, whereas encrypted noise will usually make something go bang promptly.
Posted Sep 25, 2021 8:48 UTC (Sat)
by Wol (subscriber, #4433)
[Link]
Cheers,
Wol
Posted Sep 30, 2021 9:55 UTC (Thu)
by federico3 (guest, #101963)
[Link]
This does not increase resilience against data corruption. I'd rather have a bit flip in a text file, a CSV, a movie or even a script than a whole scrambled block.
Checksumming, early detection and automated recovery from backups is what we need.
Posted Sep 23, 2021 18:52 UTC (Thu)
by IanKelling (subscriber, #89418)
[Link] (3 responses)
> I'd try to ditch the Shim, and instead focus on enrolling the distribution vendor keys directly in the UEFI firmware certificate list.
He's missing the obvious. Afaik, the only free software UEFI just runs in a VM. TPM is completely nonfree. There's also Intel ME / AMD PSP. You have no security from the developer of any of that software; they could easily have backdoors, or bugs that can serve as backdoors. Putting a key into a nonfree blob and expecting it to only unlock when it gets that key is a serious, real flaw in security.
> Some corners of the community tried (unfortunately successfully to some degree) to paint TPMs/Trusted Computing/SecureBoot as generally evil technologies that stop us from using our systems the way we want.
What corners? FSF certainly criticized Secure Boot as /potentially/ bad, but not generally evil: https://www.fsf.org/campaigns/campaigns/secure-boot-vs-re... And about this quote: "the way SecureBoot/TPMs are defined puts you in the driver seat if you want." That is just so wrong. Microsoft DID help ship arm computers where SecureBoot prevented non-windows OSes from booting: https://softwarefreedom.org/blog/2012/jan/12/microsoft-co... . Stating the obvious danger helped!
> Yes, the way SecureBoot/TPMs are defined puts you in the driver seat if you want — and you may enroll your own certificates to keep out everything you don't like.
No, nonfree software does not "put you in the driver seat" in an essential, fundamental way. I'm fairly confident we would already have cryptographically verified boot on GNU/Linux if it weren't for all this firmware being nonfree.
Posted Sep 23, 2021 20:18 UTC (Thu)
by mjg59 (subscriber, #23239)
[Link] (2 responses)
No? You can run Tiano as a Libreboot payload.
> TPM is completely nonfree.
Define "nonfree" in this case? There's nothing whatsoever stopping someone from running the reference TPM code on free hardware that has appropriate security properties.
> Putting a key into a nonfree blob and expecting it only unlock when it gets that key is a serious real flaw in security.
Why restrict this to TPMs? How do you know that your CPU isn't backdoored in a way that will detect certain sequences of instructions and then encode anything that goes through AES-NI through carefully modulated clock changes that generate remotely detectable RF output?
> I'm fairly confident we would already have a cryptographically verified boot on GNU/Linux if it weren't for all this firmware being nonfree.
There are older TPM 1.2 devices that don't support firmware updates, so would meet the Respect Your Freedom requirements. There's hardware that can run 100% free system firmware, including a UEFI payload. I think your confidence is misplaced.
Posted Sep 23, 2021 21:14 UTC (Thu)
by IanKelling (subscriber, #89418)
[Link] (1 responses)
I forgot about that. Thanks for pointing out my error.
> > TPM is completely nonfree.
> Define "nonfree" in this case? There's nothing whatsoever stopping someone from running the reference TPM code on free hardware that has appropriate security properties.
I should have said the TPM firmware to be more clear. I didn't know there was a free reference implementation. I'm not sure what you mean by free hardware, the only related thing I know is hardware with a free design. I hope someone makes a free design TPM chip that uses free software.
> Why restrict this to TPMs? How do you know that your CPU isn't backdoored in a way that will detect certain sequences of instructions and then encode anything that goes through AES-NI through carefully modulated clock changes that generate remotely detectable RF output?
Well, people should be working on that too.
It is a somewhat different situation for two reasons. First, the issue can practically and relatively easily be eliminated in the case of software. When there is an unethical practice that also creates a needless security issue, it deserves to be identified differently. Second, since the developer of the nonfree firmware can, in the name of security, update the software to add a backdoor whenever they want, or replace any backdoor someone discovers with another backdoor they don't know about, it is more practical for the developer to intentionally create a backdoor than to bake it into the hardware.
> I think your confidence is misplaced.
Ok, I'm less confident, but developers tend to focus on widely available hardware. If the most widely available hardware had a free bios, I think it would be much more likely. I'm not saying the ideas of implementation are wrong, just that there is an obvious room for improvement that deserves to be mentioned.
Posted Sep 26, 2021 2:00 UTC (Sun)
by thwalker3 (subscriber, #89491)
[Link]
> > Define "nonfree" in this case? There's nothing whatsoever stopping someone from running the reference TPM code on free hardware that has appropriate security properties.
> I should have said the TPM firmware to be more clear. I didn't know there was a free reference implementation. I'm not sure what you mean by free hardware, the only related thing I know is hardware with a free design. I hope someone makes a free design TPM chip that uses free software.
You might want to check out lpnTPM (https://nlnet.nl/project/lpnTPM/). I just learned about it at Piotr Król's tpm.dev miniconf talk this week. Open hardware and firmware.
Posted Sep 23, 2021 19:54 UTC (Thu)
by artefact (guest, #154379)
[Link] (4 responses)
* The initramfs/cmdline is not part of the secure boot trust chain. That can be solved easily by generating a Unified Kernel Image[1], that combines the kernel, cmdline and initramfs in one .efi executable, all of which gets picked up by Secure Boot and measured by TPM. Few distros use it as far as I know, but it's easy enough with eg. package manager hooks.
* The data on disk is not authenticated. That can again be easily solved by using btrfs (or any other filesystem which checksums everything) over dm-crypt.
1: https://wiki.archlinux.org/title/Systemd-boot#Preparing_a...
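On the second point: if CRC32-level checksumming feels too weak (see the discussion below), LUKS2 can also do authenticated (AEAD) encryption in dm-crypt itself. A hedged sketch, still flagged as experimental by cryptsetup, with a placeholder device:

$ cryptsetup luksFormat --type luks2 --cipher aes-gcm-random --integrity aead /dev/nvme0n1p3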
Posted Sep 23, 2021 20:45 UTC (Thu)
by bluca (subscriber, #118303)
[Link]
Posted Sep 24, 2021 7:34 UTC (Fri)
by ibukanov (subscriber, #3942)
[Link] (2 responses)
Posted Sep 24, 2021 11:59 UTC (Fri)
by hkario (subscriber, #94864)
[Link] (1 responses)
Also, even if the attacker is trying to subvert CRC32, they need to do that while modifying encrypted blocks, which, while it doesn't provide rock-solid security, is far from trivial.
Posted Sep 25, 2021 7:54 UTC (Sat)
by ibukanov (subscriber, #3942)
[Link]
Posted Sep 23, 2021 20:50 UTC (Thu)
by jebba (guest, #4439)
[Link]
Posted Sep 24, 2021 0:43 UTC (Fri)
by gerdesj (subscriber, #5446)
[Link]
Posted Sep 24, 2021 1:36 UTC (Fri)
by bjartur (guest, #67801)
[Link]
Posted Sep 24, 2021 4:19 UTC (Fri)
by JanC_ (guest, #34940)
[Link] (6 responses)
It seems like having to pre-partition your disks into so many partitions (and leave space for extra partitions in case you want to add more users?) is going to waste lots of disk space, especially as most of the partitions will likely be more than half empty.
It would probably be better to build this on a file system that can do its own data encryption & authentication on subtrees/subvolumes…
(I know you could use LVM to _partially_ solve that, but it's not exactly user-friendly/automatic, and shrinking would be an issue too.)
Posted Sep 24, 2021 9:48 UTC (Fri)
by lindi (subscriber, #53135)
[Link] (5 responses)
Posted Sep 25, 2021 10:40 UTC (Sat)
by bluca (subscriber, #118303)
[Link] (4 responses)
Posted Oct 12, 2021 2:46 UTC (Tue)
by JanC_ (guest, #34940)
[Link] (3 responses)
Posted Oct 19, 2021 11:08 UTC (Tue)
by nix (subscriber, #2304)
[Link] (2 responses)
Posted Nov 2, 2021 12:22 UTC (Tue)
by JanC_ (guest, #34940)
[Link] (1 responses)
Posted Nov 2, 2021 12:40 UTC (Tue)
by JanC_ (guest, #34940)
[Link]
What the user expects: this works without problems, as all those files were removed and there is lots of free space available.
What the user gets: The system gives a very strange error about not enough disk space being available while in the file manager it says there is plenty available.
Posted Sep 24, 2021 5:27 UTC (Fri)
by pabs (subscriber, #43278)
[Link] (1 responses)
https://pulsesecurity.co.nz/articles/TPM-sniffing
Posted Sep 25, 2021 10:44 UTC (Sat)
by bluca (subscriber, #118303)
[Link]
Posted Sep 24, 2021 9:50 UTC (Fri)
by oldtomas (guest, #72579)
[Link] (3 responses)
So I've whole-except-boot partition encryption (with an LVM inside, but that part is irrelevant). This protects my data at rest (the most probable scenario: I leave my laptop in the metro and someone can try to extract the data at their leisure).
Passably strong passphrase: pwgen -n 16.
The day the "hacked initrd" scenario becomes more important, I'll have my boot partition on an SD card I keep separate from my laptop. Perhaps I can leave an initrd partition on disk, as a honeypot. But this day ain't today.
Note that whenever your opponent's dedication reaches that level, they can as well rubberhose-cryptanalyse [1] you. Obligatory XKCD and things. Plausibly deniable encryption would be a really cool thing here.
The whole secure "bios" thing? Sorry. Too complex for security. I understand people geeking off about "trusted roots" and things, but that gives your (hardware and OS) providers so much control over you [2] that I won't touch it. Not with antiseptic gloves.
And, oh, PS: it's impressive how the very first post nearly derailed this discussion into a systemd flamefest. Why, oh, why?
[1] Just a metaphor. Laws like the ones in the UK where they can detain you practically indefinitely are on a similar level.
[2] And it is being used for things like Netflix et al making sure you don't cheat on them. Well I don't cheat on them either. They just don't exist for me.
Posted Sep 24, 2021 16:32 UTC (Fri)
by lindi (subscriber, #53135)
[Link] (2 responses)
$ pwgen -n 16 -1 200000000 > passwords.txt
$ sort passwords.txt > passwords-sorted.txt
$ uniq -d passwords-sorted.txt
ohsaeMooghaith5a
Posted Sep 25, 2021 7:16 UTC (Sat)
by oldtomas (guest, #72579)
[Link] (1 responses)
Pwgen generates passwords with bigram frequencies resembling human (most probably English-biased) languages, for easier memorisation. So your observation that some combinations are more/less common than others is essentially right; the default distribution isn't uniform.
Check option `-s', though, if you "feel lucky" ;-)
So I expect it to produce roughly 3 bits of randomness per character; 16 characters lead thus to (roughly) 48 bits of randomness. This table [1] would yield roughly (I'm interpolating geometrically, yes, I'm bold here) 2*10^7 tries for a collision with p=0.5. Your result (ten times as many tries for one collision) seems to suggest that three bits per char is somewhat underestimated. Definitely less than 4, though.
[1] https://en.wikipedia.org/wiki/Birthday_attack#Mathematics
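A rough sanity check of that interpolation (assuming the 48 bits of entropy estimated above): the expected number of draws for a 50% collision chance is about 1.18*sqrt(2^48), i.e. roughly 2*10^7.

$ python3 -c 'import math; print(round(1.1774 * math.sqrt(2**48)))'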
Posted Sep 25, 2021 8:57 UTC (Sat)
by Wol (subscriber, #4433)
[Link]
With 30 people in a room, it's odds-on two of them will share a birthday ...
So if you select 10% of your search space, at random, you should EXPECT a collision.
Thing is, to break crypto, you need to produce desired collisions to order; to the best of my knowledge there is no way to take advantage of random "birthday" collisions.
Cheers,
Wol
Posted Sep 24, 2021 13:17 UTC (Fri)
by edeloget (subscriber, #88392)
[Link] (8 responses)
A regular guy whose laptop is stolen by a regular thief will NOT get hacked by that thief, as long as he has a reasonable password (i.e. not 123456). The laptop will be formatted and sold. Or maybe it'll be sold without being formatted.
Even if you're not a regular guy, because some of your hobby or work involves handling sensitive information (which you shall not put on a non-secure computer, and it can only be secure if you know what you're doing, not if you let a distro do what it wants to do), there is a good chance that the thief is not interested in your data.
If he does care about your data then it's far easier for him to "nicely ask for your help" than to hack you in the first place (XKCD style: https://xkcd.com/538).
And if he's not in a position to ask you, then you shall make sure the data is safe. Distributions can only give you a false sense of security - it's not /their/ job to correctly secure /your/ hardware. You shall use whatever suits you (HSM, TPM2, full-disk encryption, biometric sensors...). It's quite easy to set up and has been available for ages; a Lenovo W520 (2011) offers hardware-based hard drive encryption that can be protected at boot by the scan of your fingerprint; you can get one for $200 to $400 on eBay.
The forever elusive "Harry the Hacky Hacker" who will steal your PC to check on your internet search history in order to set up a wildly complex plan to take on the U.N. does not *really* exist outside movies.
Posted Sep 25, 2021 6:46 UTC (Sat)
by gfernandes (subscriber, #119910)
[Link]
Lennart's analysis is fantastic, and at least I'm glad _someone_ is thinking about making things better.
Posted Sep 25, 2021 6:50 UTC (Sat)
by gfernandes (subscriber, #119910)
[Link]
It is _not_ an arbitrary fictional scenario painted out of a movie.
Posted Sep 25, 2021 7:20 UTC (Sat)
by oldtomas (guest, #72579)
[Link] (2 responses)
Hm. I wonder whether there is a regular black market for stolen storage media (with content, of course).
I'd expect secret services to have some "start ups" in this area.
Posted Sep 26, 2021 11:43 UTC (Sun)
by johannbg (guest, #65743)
[Link] (1 responses)
Despite common misconception secret service agencies are not filled with complete morons.
In the spy vs spy game, going through garbage, collecting drives or obtaining them from "black markets" is not worth the effort.
In essence you want people to come to you, so you set up shop and simply flip the sign that says "VPN Service Provider" to the other side, and now the sign says "A hard drive destruction service company", and all the people/companies/agencies with the sensitive data come to you to dispose of their "sensitive" drives...
Depending on time, target and the objective, there can even be an easier and cheaper and less traceable way to obtain the needed information which does not involve having to set up shop, cracking a computer, obtaining and cloning or collecting data from existing hard drives these days...
Posted Sep 29, 2021 8:11 UTC (Wed)
by oldtomas (guest, #72579)
[Link]
I don't know how your observation relates in any way to my text.
What I was hinting at is that a semi-illegal to illegal entity which buys storage media in grey and black markets to try to scoop up the data (and perhaps try to gunshot-decrypt [1] those whenever encrypted) might share interests with secret service agencies (and thus perhaps resources).
Have a look at NSO Group [2] of Pegasus fame or Fancy Bear [3] to see the pattern I'm thinking of.
They aren't morons. They just outsource, like everyone else these days, for a variety of reasons.
[1] meaning: not investing too much into every single instance.
[2] https://en.wikipedia.org/wiki/NSO_Group
[3] https://en.wikipedia.org/wiki/Fancy_Bear
Posted Sep 25, 2021 22:55 UTC (Sat)
by HenrikH (subscriber, #31152)
[Link] (1 responses)
Posted Sep 26, 2021 7:34 UTC (Sun)
by oldtomas (guest, #72579)
[Link]
It seems that more and more operations are favouring a combination of those over genuinely covert action, perhaps because they are less costly -- or perhaps because they can be conveniently combined with other synergetic effects.
Not that any of those techniques were new. But with the current uprising of all shades of autocracies, it seems we are in for more of it.
[1] e.g. because you are drowned in a twisty maze of conspiracy theories, all alike.
Posted Sep 29, 2021 9:54 UTC (Wed)
by eduperez (guest, #11232)
[Link]
1) Hard drive is stolen and brute-forced.
2) Hard drive is tampered and the password sniffed.
The article also proposes an improvement for the second scenario, but this situation is IMHO very unlikely to happen in the real world. However, the first scenario, which seems more realistic, remains open.
Posted Sep 24, 2021 13:18 UTC (Fri)
by walters (subscriber, #7396)
[Link]
That's probably fine for `/etc`, but likely to be quite noticeable for `/var` in any nontrivial setup. Most data center deployments that already have physical security are going to want to keep performance, I'd guess. Probably this would also be a notable hit for "local workstation" flows where you're doing e.g. compilation: cloning git repos, pulling build containers, etc. Splitting some of this stuff into separate encrypted-but-not-authenticated volumes gets into a lot of nuanced tradeoffs.
In the end, I would agree this type of setup probably deserves to be an available checkbox in most general-purpose Linux system installers. And with my CoreOS hat on, installer = ignition config applied on firstboot, so it *can* work in IaaS clouds too, even if I doubt most people would want it there since "evil maid" types attack in IaaS generally either require root on the hypervisor or root account credentials, in which case you have much bigger problems.
Anyways, on the larger topic I have this blog entry which I think still applies:
https://blog.verbum.org/2017/06/12/on-dm-verity-and-opera...
Basically it's important to keep in mind here that this proposal does *not* change the fact that if your system is compromised remotely (e.g. a web browser exploit), it's still easy for an attacker to persist code in all the usual places (`~/.bashrc`, `/etc/systemd/system`, `~/.local/share/containers`, `/var/lib/containers`, any VM images you have, etc.).
You didn't mention e.g. Fedora CoreOS which *is* shipping image based updates, but not authenticated (e.g. dm-verity) right now because to me it's really important to preserve the ability to e.g. roll back just the kernel on a specific machine because you hit an issue after an update. ( https://github.com/coreos/fedora-coreos-tracker/issues/940 is just one recent example )
Having a dm-verity for the rootfs in theory can make it easier to reprovision if you suspect compromise, but the thing is if you *do* suspect compromise I think it's generally always going to be better to just re-install the entire thing from trusted USB key or equivalent. (And then of course, for general purpose PCs and equivalent there's the whole problem of UEFI malware and flaws in vendor firmware...)
What makes a lot of sense to me is supporting a flow where (as my previous blog mentions) *everything* (OS, OS configuration, apps) are rolled into a big dm-verity or equivalent and there is no mechanism to persist code across reboot. There are people that are doing this for embedded devices. But I have difficulty imagining this for a workstation (dealing with things like `~/.bashrc`), and for the datacenter use case it's nuanced; the hard part there again I think is things like "OK we need to test this kernel on this specific server" and how the signing process works and validating that it's easy and predictable to build a new image in the exact state, just with that kernel override and such.
Posted Sep 25, 2021 15:59 UTC (Sat)
by anton (subscriber, #25547)
[Link] (3 responses)
Poettering's point about first having to type a password for disk decryption and then for login makes no sense to me:
I have set up my home machine to log in as me, no password needed. lightdm has this as an option, and IIRC so has gdm, but not xdm.
For my laptop I use two users, and I have set up both with passwords (against evil maids and the like), plus they want the password after coming out of suspend. I rarely shut the laptop down, and suspend it instead, so the double password after boot hardly matters.
Posted Sep 27, 2021 18:02 UTC (Mon)
by jccleaver (guest, #127418)
[Link] (2 responses)
> I have set up my home machine to log in as me, no password needed. lightdm has this as an option, and IIRC so has gdm, but not xdm.

Most of Poettering's work makes more sense when you consider that he interprets all problems solely in terms of his own experience -- what works for him on his own laptop -- and tends to be obliviously dismissive of other needs or paradigms.
Posted Sep 27, 2021 18:32 UTC (Mon)
by mpr22 (subscriber, #60784)
[Link]
The same can be said of some of his most ardent attackers – just replace "Lennart's laptop" with "their 'pet' (as opposed to 'livestock') servers that can't be safely reconfigured by someone with less than fifteen years' Unix sysadmin experience" :)
Posted Sep 27, 2021 20:03 UTC (Mon)
by rodgerd (guest, #58896)
[Link]
Posted Sep 25, 2021 22:34 UTC (Sat)
by amarao (guest, #87073)
[Link]
If this scenario is not ruled out, I can't understand why we should regularly brick our systems into an unbootable state just to prevent a nonexistent threat.
Posted Sep 26, 2021 12:03 UTC (Sun)
by scientes (guest, #83068)
[Link]
At a Linux Plumbers conference you asked other core developers if they would trust putting a private key on a shared machine, but if you study Gernot Heiser's work on seL4 at CSIRO you will learn that Linux cannot be secure without being rewritten, because with the single big C-language Turing machine that Linux lives in, it is impossible to analyze what talks to what.
Posted Sep 28, 2021 17:54 UTC (Tue)
by anatolik (guest, #73797)
[Link]
One interesting way that booster uses for encrypted partition key computation is network binding. See clevis and tang: https://github.com/latchset/clevis
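For reference, the clevis side of such a binding looks roughly like this (device path and URL are placeholders); the volume then only unlocks automatically while the tang server is reachable:

$ clevis luks bind -d /dev/sda2 tang '{"url": "http://tang.example.com"}'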