Ubuntu to add TPM-backed full-disk encryption
To deliver these benefits, the implementation of TPM-backed FDE relies on two main design principles. First, it seals the FDE secret key to the full EFI state, including the kernel command line. Second, access to the decryption key is only permitted if and when the device boots software that has been defined as authorised to access the confidential data; the key is then unsealed at boot time by the initrd code in the secure-boot-protected kernel.efi.
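For readers who want to experiment with the underlying idea outside of Ubuntu's snap-based implementation, a rough sketch of the same principle using systemd-cryptenroll might look like the following; the device path and PCR selection are assumptions for illustration, not Canonical's actual tooling:

    # Bind an existing LUKS2 volume to the TPM so it only unlocks when the
    # measured boot state (Secure Boot policy in PCR 7, UKI contents in PCR 11)
    # matches what was present at enrollment time.
    sudo systemd-cryptenroll --tpm2-device=auto --tpm2-pcrs=7+11 /dev/nvme0n1p3
    # (Binding to PCR 11 means re-enrolling, or using a signed PCR policy,
    # after kernel updates.)

    # Ask the initrd to try the TPM at boot via /etc/crypttab:
    #   root  UUID=<luks-uuid>  none  tpm2-device=auto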
Posted Sep 7, 2023 17:14 UTC (Thu)
by cthart (guest, #4457)
[Link] (64 responses)
Posted Sep 7, 2023 17:21 UTC (Thu)
by abbe (subscriber, #137089)
[Link] (5 responses)
Posted Sep 7, 2023 21:30 UTC (Thu)
by WolfWings (subscriber, #56790)
[Link] (3 responses)
Honestly, this looks more like a first move towards Ubuntu turning the whole OS into a strictly walled-garden, Apple App Store-style offering.
Posted Sep 8, 2023 7:55 UTC (Fri)
by mfuzzey (subscriber, #57966)
[Link] (1 responses)
Yes I don't really understand it.
I suspect there is a significant subset of users who would like TPM based FDE for the security benefits but would prefer to stay with DEBs.
Posted Sep 15, 2023 1:16 UTC (Fri)
by raof (subscriber, #57409)
[Link]
I don't think so, other than that they have apparently already done it that way for Ubuntu Core.
Additionally, there are other good reasons why this is harder with debs. To do this (as the article mentions) you need to distribute the kernel + the initrd + the kernel command line, but in the existing deb-based system only the kernel is distributed; the initrd is generated on each machine (potentially multiple times), and the kernel command line is also generated on each machine, from user-modifiable configuration.
Now, you could do a whole lot of work to replace the existing bootloader/kernel/initrd package infrastructure with a single .deb package, but why would you? The result would be a .deb that looks almost exactly like the snap, but with worse management capabilities.
Ubuntu is somewhat behind the curve, here. Ubuntu Core Desktop is still in heavy development; Fedora Silverblue is broadly the same concept, but with Flatpaks, and has been usable for a while.
Posted Sep 11, 2023 0:15 UTC (Mon)
by arsen (subscriber, #161285)
[Link]
Note that not a single PCR even measures programs that come after the kernel anyway.
Posted Sep 10, 2023 8:37 UTC (Sun)
by gmgod (guest, #143864)
[Link]
Btw, if someone wants something similar, i.e. trusted boot for themselves, custom-key Secure Boot + LUKS-protected data with systemd-cryptenroll works very nicely.
It also supports FIDO hardware keys for any partition (so it would be a good idea for /home on a single-user system).
I'm only mentioning this because that solution has existed for a long time, it works very well (actually systemd-cryptenroll is basically a C script), and it does not involve snapd.
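A minimal sketch of that kind of setup, assuming a LUKS2-formatted /home partition on /dev/sda3 and a FIDO2 token with the hmac-secret extension (both assumptions for the example):

    # Enroll the hardware token as an additional LUKS key slot for /home.
    sudo systemd-cryptenroll --fido2-device=auto /dev/sda3

    # Let it be used at unlock time via /etc/crypttab:
    #   home  /dev/sda3  none  fido2-device=auto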
Posted Sep 7, 2023 17:33 UTC (Thu)
by philipstorry (subscriber, #45926)
[Link] (35 responses)
You have backups/multiple copies of your data.
This is the position taken by most organisations' IT departments, and it's the correct one. Yes, they could try to perform all kinds of miracle working. But honestly, if your machine dies, they will get you up and running by replacing your machine with a new one. So have all of your data saved securely on their remote systems (whether on-prem or on-cloud), not locally on the machine.
Those remote systems are backed up or replicated for high availability, and your new machine will be able to access your data on those systems.
Having backups/remote copies is the correct way to handle this issue.
If your data retention strategy involves access to a specific hard disk, then your data retention strategy is equivalent to this: "I will lose data, and I will have to come to terms with that loss."
In the case of moving the HDD as you've mentioned, you treat it as though it were the total loss of the previous computer and do a fresh build and restore to the new machine. If that's a problem, you either have bad backups or insufficient copies of your data. Or you've never given adequate thought to restore timescales.
I fully understand that the old shortcut of moving the drive between systems saves time, but it also encourages people to make bad assumptions about how available that disk will be. That then leads them to make bad decisions about how many backups/copies of their data they will need.
So I really don't see this as a problem. From a data security/availability point of view, it's a good thing...
Posted Sep 7, 2023 17:44 UTC (Thu)
by mb (subscriber, #50428)
[Link] (24 responses)
But it is a problem.
The time saving between swapping a disk from one machine to another and re-installing everything is significant.
I personally keep all my disks swappable between machines and I do full-system backups, including operating systems, even though I could just "easily" re-install the OS. Just swapping the disk to another machine or (if the disk itself dies) restoring the whole system backup is *much* easier.
The loss of an OS and all installed programs is not a data loss. It's a time loss.
Posted Sep 7, 2023 18:15 UTC (Thu)
by NightMonkey (guest, #23051)
[Link] (23 responses)
Posted Sep 7, 2023 18:22 UTC (Thu)
by mb (subscriber, #50428)
[Link] (18 responses)
Oh come on. Have you ever seen such a system for a workstation computer?
I have never seen that. Not even close. Except for things like whole system backups, where it restores to the last backup checkpoint, at least. Still not 100%, though.
Install automation certainly will not do that. It will never do that, if people make any changes to the installed workstation.
Posted Sep 7, 2023 18:40 UTC (Thu)
by somlo (subscriber, #92421)
[Link] (2 responses)
Posted Sep 8, 2023 7:45 UTC (Fri)
by abo (subscriber, #77288)
[Link] (1 responses)
Posted Sep 9, 2023 21:40 UTC (Sat)
by salimma (subscriber, #34460)
[Link]
Posted Sep 7, 2023 19:13 UTC (Thu)
by Cyberax (✭ supporter ✭, #52523)
[Link] (2 responses)
Yes. This is what happens when you restore a computer from a backup in Windows and Mac OS. You will even have the desktop icons in the same spots.
Posted Sep 14, 2023 14:48 UTC (Thu)
by ms-tg (subscriber, #89231)
[Link] (1 responses)
Heavy +1 to this. It amazes me that the Mac OS example of how seamless the Time Machine backup and restore process is doesn't seem to have spread far and wide even after many years.
When a Mac OS machine dies, and you have back-ups, you do indeed "Press one button and install everything within one hour to get to 100% state as it was before the disaster".
However, I have seen it take a bit more than 1 hour the last time it happened to me, more like 90 minutes if I recall correctly. But it does work!
Posted Sep 15, 2023 9:56 UTC (Fri)
by farnz (subscriber, #17727)
[Link]
The important thing is not so much the time taken, as the user time spent on it. If you have a Time Machine backup of your Mac, you can restore to a replacement Mac in just a few minutes operator time (plus potentially hours of machine time restoring the backup). As long as the machine time is reasonable (overnight, say), this is fine because you can go and do something else while you wait for the backup to restore.
Posted Sep 7, 2023 19:30 UTC (Thu)
by geofft (subscriber, #59789)
[Link] (5 responses)
A practical real-world case is Chrome OS - everything is in "the cloud" i.e. Google's redundant, replicated, configuration-managed computers, not just your files but also your configuration. Get a new Chrome OS machine, log in with the same account, and everything is back. And Chrome OS uses TPM-locked disks, and is very aggressive about wiping the local data partition when you look at it wrong.
We also do this at my workplace. Developers do not have root on their machines. We have a fairly extensive system for building all the software we depend on, from GCC and Python on down, in a giant monorepo. If you want to install anything you can install it in the monorepo; the entire company has push access to it (with mandatory reviews etc.). If you've got some weird combination of this version of libcurl plus that version of graphviz plus these three configuration files, you produce that weird combination inside a git repo, where you can run "git diff", not via doing stuff via sudo, where you can't. If you want to deploy that weird combination, you can take a git commit ID and tell our infrastructure to deploy that code, instead of trying to replicate the same manual steps on a prod server.
And then we just back up people's home directories. If something goes wrong with their machine, we just install a fresh one, install their home directory from backup, and they're ready to go. If they had their work checked in and pushed they may not even need their home directory restored, strictly speaking. This also helps with situations like people leaving - we never have intern managers wondering what dev stack their intern installed on their machine, because they didn't install anything. (It also helps with initial onboarding: the entirety of the "set up your dev environment" step is just cloning the monorepo and then setting up whatever IDEs you like; you're not asking your coworkers what Node.js versions they have installed.)
We have a team of folks who are willing to help people get things building in the monorepo. This is in large part my current job. My previous job was doing the same high-level function, but in the form of doing OS packaging whenever someone wanted to use a neat new Python or C library, so we didn't have machines where people just ran "sudo make install". I much prefer the current approach for quite a few reasons: in addition to simply avoiding the host of "works on my machine" problems, it means that there isn't a machine-wide concept of "the GCC version" or "the Python version" as far as developers' code goes. So different branches and perhaps even different binaries on the same branch can be on different versions of these things, and also we can upgrade the actual OS without having to upgrade code dependencies in lockstep.
There are a few public systems that share the same philosophy. We're taking a very close look at Nix, which can be coinstalled in the /nix directory of any other Linux distro, and which safely lets unprivileged users install anything; if it had existed before we built our in-house system, we would have probably just used it. There's also Spack, which has more of a specific focus on scientific computing/HPC and compilation options.
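For a taste of what that model looks like with Nix, an unprivileged user on any distro with /nix present can create an ad-hoc environment without touching the host's packages; the package names below are only examples:

    # Drop into a shell providing specific tools without installing anything
    # system-wide; everything lives under /nix/store and is easily reproduced.
    nix-shell -p graphviz python3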
Posted Sep 7, 2023 19:49 UTC (Thu)
by smoogen (subscriber, #97)
[Link] (2 responses)
The reality for most programmers and shops I have seen in the last 30 years is that the 'team' of people trying to somehow replicate that is usually 1 person who is probably also juggling a helpdesk and why is the CEO's phone not working. As the company grows, it seems that this staff grows by 1 person for every 400 to 1000 employees. And then you find yourself increasingly dealing with the latest plan to outsource the job to some cloud vendor who end of lifes the service right after the transition.
So yes, it is possible, but most people only hear about it in posts and TED talks about how, if only your finance department decided to actually spend on IT instead of outsourcing it again, you could have it too. You instead deal with backup systems which are broken, configuration management which is behind, and deadlines which are moved up.
Posted Sep 8, 2023 1:04 UTC (Fri)
by geofft (subscriber, #59789)
[Link] (1 responses)
What I'm advocating for is a system where the team that officially maintains your computers, no matter how big or small that team is, doesn't feel like they have to choose between blocking short-term productivity or creating long-term risk. There usually will be a few people who know enough to install software and cobble things together even if that isn't their job on paper (and they therefore don't have root). Give them the ability to cobble things together, but also ensure their cobbling is recorded somewhere, and isn't just some hackery in their home directory. And if that team is one person, or even zero, there's still a way forward.
And I'm also posting this not to boast about our in-house system but to lament that we had to build one (and did not open source it). I think it might be relatively close to possible to get there with Nix these days, though it's both a fairly steep learning curve as well as an involved conversion from basically any existing system. I think there could be really good FOSS tools for this. I think these tools could be good enough that the average home user - who by definition has a corporate sysadmin staff of zero - can get their setup for installing the right graphics drivers and workarounds recorded in exactly the same way.
I think we (the FOSS community) actually sort of lost our lead: up until maybe the early '00s, Windows and Mac users basically did not have privilege separation at all, and were running everything as effectively root. Installing stuff was just copying files, uninstalling was hoping for the best, and "DLL hell" was a Windows problem. The Linux distros and the BSDs were the ones who said, even if this is your personal computer, run as a non-admin user and use well-defined packaging systems. Now, as another commenter alluded to, Windows and Mac OS have moved towards a model where the OS is read-only, applications are in their own private directories (and often sandboxed), and it is absolutely possible to restore the state of a Windows or Mac machine just by restoring user-level files and config. We haven't kept up, and I would bet there is much more "DLL hell" in practice on Linux machines than Windows ones today.
A few projects like NixOS and Spack are going in the right direction for specific use cases, but they're not commonplace. The Ubuntus and Fedoras of the world should do this too - and in a way that empowers users to try stuff out as opposed to just locking them out of the system and indeed makes them more confident about trying things that might not work.
Posted Sep 8, 2023 11:57 UTC (Fri)
by smoogen (subscriber, #97)
[Link]
I expect that if I were a full-time Windows admin I would have been able to get around this, but I am not, so it ended up being a reinstall from scratch. Having had this happen three times this year now, I really should learn.
I have been impressed with the Mac on this, because it does seem that Time Machine and other things will allow for most things to be restorable and comparable. It is what I consider the killer app for self-administration, as it has solved a lot of little issues. It's not perfect, but it is a lot better than anything I have dealt with recently on Linux or Windows.
Also I didn't take your comments as bragging. I took them as 'this is possible' which can be helpful for us sysadmins who tend to get in a rut and also think nothing can be better than the pig sty we live in :)
Posted Sep 7, 2023 20:09 UTC (Thu)
by mb (subscriber, #50428)
[Link] (1 responses)
And I don't think your approach works well with proprietary software that does all sorts of weird things.
Posted Sep 7, 2023 20:32 UTC (Thu)
by rahulsundaram (subscriber, #21946)
[Link]
In corporate workstations, they are already paying these costs anyway. If you are a solo user doing ad-hoc things, maybe the latter is more useful for you. You should have backups anyway, so a hardware failure shouldn't affect you even in that case.
Posted Sep 7, 2023 19:53 UTC (Thu)
by sjj (guest, #2020)
[Link]
Getting the system to a known state: feed the package list to dnf/apt and clone the /etc repo. Then restore user backups and the UI settings are back with user data. It’s not 100.00% but it’s only 2-3 steps.
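Spelled out for an Ubuntu/Debian-style system, the replay step is roughly the following; the file names and backup remote are assumptions for the example:

    # Record the installed package set (e.g. nightly from cron):
    dpkg --get-selections > packages.txt

    # On the fresh install: replay the selection, then pull the /etc history.
    sudo dpkg --set-selections < packages.txt
    sudo apt-get dselect-upgrade
    git clone <backup-remote>/etc.git etc-history   # etckeeper repo, merge by hand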
Posted Sep 7, 2023 21:10 UTC (Thu)
by adam820 (subscriber, #101353)
[Link] (1 responses)
Posted Sep 7, 2023 23:08 UTC (Thu)
by sjj (guest, #2020)
[Link]
Posted Sep 8, 2023 9:19 UTC (Fri)
by magi (subscriber, #4051)
[Link]
Posted Sep 9, 2023 14:54 UTC (Sat)
by kreijack (guest, #43513)
[Link]
Yes, I have seen these, and they work well. It is not very complicated to do a similar setup that can satisfy 90% of users.
In my company the PC arrives already configured with the most common software, and the files are already in the cloud, so replacing a PC only requires logging in to the new one (and waiting for the files to download from the cloud).
The key is the "90%" above. This works very well for the vast majority of people, who have low requirements.
It doesn't work for people who rely heavily on complex tools (3D CAD, HW CAD, software development, or a computer which acts as a server ...), where the setup is not standardized (even though it could be done easily, it would require specific setup from IT and isn't worth it).
So even though you can't solve every problem for all the people, you can massively reduce the load on the IT people.
And, even though I have never gone deeply into this topic, my understanding is that for every PC the key to unlock the disk is stored in the TPM, but IT has another key to unlock the system when (e.g.) an upgrade doesn't work and leaves the system un-bootable (which is the major risk when you put the key inside the TPM).
Posted Sep 14, 2023 8:08 UTC (Thu)
by highvoltage (subscriber, #57465)
[Link]
Posted Sep 7, 2023 20:58 UTC (Thu)
by dmoulding (subscriber, #95171)
[Link] (3 responses)
Posted Sep 7, 2023 21:59 UTC (Thu)
by mbunkus (subscriber, #87248)
[Link]
Posted Sep 9, 2023 14:38 UTC (Sat)
by HenrikH (subscriber, #31152)
[Link]
Posted Sep 9, 2023 21:09 UTC (Sat)
by zblaxell (subscriber, #26385)
[Link]
We can do all that to the host system as well. The host system is essentially an EFI application with a big database (usually consisting of a number of block devices with partitions containing filesystems, sometimes complicated by encryption). Why would we expect a backup+restore cycle of a physical host to be less complete than our virtual ones?
Posted Sep 7, 2023 17:45 UTC (Thu)
by pm215 (subscriber, #98099)
[Link] (1 responses)
I don't think that disk encryption moving to TPM is going to cause anybody to think "hmm, maybe I should have backups, or do them differently". It's just going to increase the number of hardware-failure scenarios which cause users who don't have backups to end up in the "sorry, your data is permanently lost" situation.
Posted Sep 7, 2023 21:16 UTC (Thu)
by gerdesj (subscriber, #5446)
[Link]
If you want to use the feature, you will have to have backups or potentially lose data. Most non-techies realise that files saved locally get lost when a disc dies. That's why you use NASs, Nextcloud, *shudder* OneDrive etc. Most folks get that. Here we are debating an Ubuntu user who is capable of installing an OS. Surely they can repeat that feat and retrieve their files again?
Think about how most people treat their mobiles (cell phones). That includes long-term sysadmins who should know better (me)! The palaver come upgrade time is hilarious, as you find out which settings don't transfer because you haven't logged into the correct combination of half-forgotten Google/Samsung/Satan accounts. My data is safe as houses, following the 3-2-1 rule.
At the risk of sounding like an evangelist: Veeam offer "community" ie free as in beer editions of their backup products. There's a Linux agent which will happily backup to a NAS or a cloudy offering. You still need Windows to run the console and server, if you use them. A Win10/11 VM can do that. Postgres is now supported for the backend DB, so no more MS SQL or the horrific Vis Studio thing to manage it \o/.
Posted Sep 7, 2023 21:19 UTC (Thu)
by hailfinger (subscriber, #76962)
[Link] (6 responses)
Variant 1: You have TPM-backed encryption. You move the disk, you lose the data. That's equivalent to intentionally destroying one of the copies of your data. If your backup can't be restored for some reason, you have no data.
Variant 2: You have encrypted data not bound to a specific machine. You move the disk, you keep the data. No intentional destruction of data, your redundancy stays the same. If your backup dies for some reason, you still have your data.
If you intentionally destroy your primary copy, your backup is not a backup anymore, it becomes your not-backed-up remaining data.
Posted Sep 8, 2023 0:44 UTC (Fri)
by Cyberax (✭ supporter ✭, #52523)
[Link] (5 responses)
You can store the direct disk encryption key or recovery keys outside of the TPM.
The problem is that they are large and unwieldy. You can't remember them.
At the same time, it's also difficult to remember a truly secure passphrase from which you can derive a disk encryption key.
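That is the usual argument for generating a one-off recovery key to print or escrow alongside the TPM binding, rather than relying on memory at all. A sketch with systemd-cryptenroll, device path assumed:

    # Add a machine-independent recovery key (to be printed or escrowed,
    # never memorized)...
    sudo systemd-cryptenroll --recovery-key /dev/nvme0n1p3
    # ...in addition to the convenient TPM-bound slot used for normal boots.
    sudo systemd-cryptenroll --tpm2-device=auto /dev/nvme0n1p3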
Posted Sep 8, 2023 20:15 UTC (Fri)
by ibukanov (subscriber, #3942)
[Link] (2 responses)
Posted Sep 9, 2023 13:02 UTC (Sat)
by Cyberax (✭ supporter ✭, #52523)
[Link] (1 responses)
Posted Sep 10, 2023 15:33 UTC (Sun)
by mpg (subscriber, #70797)
[Link]
IMHO if an organisation is able and willing to spend such a computing effort in order to get to your data, then they probably have other, more cost-efficient means of achieving that goal.
For the record, I find that for a small number of passwords that you have to type daily (and I don't have that many "root" passwords - by which I mean those that are not stored in a password manager), memorization is a non-issue. Personally I use some variant of `head -c 7 /dev/urandom | base64` (or `xxd -p | tr '0123456789abcdef' 'sdfghjklertyuiop'` which I find makes things easier to type) and never had any memorization issue.
(I'm happy with 56 bits of entropy, because the encryption scheme uses a purposefully slow key derivation function, as others have already mentioned - and if it doesn't, then again it probably has bigger flaws as well.)
Posted Sep 18, 2023 0:03 UTC (Mon)
by ras (subscriber, #33059)
[Link] (1 responses)
If you store the encryption key locally in the TPM you can do unattended reboots. As far as I can tell, that's its prime advantage, and it's a very nice thing to have in VMs. If your threat scenario is someone stealing or copying the disk, maybe just storing the key on some networked storage your hosting provider only allows that VM to access is a poor man's solution.
In any case, if merely remembering a good password is the problem, there are multiple solutions. For example, you could enable the network stack / USB during boot and supply it from a phone or similar device. But I don't think that's the real problem - it's having to supply the password that's the issue, and the other problems that go along with that - like knowing you're sending the password to the right device, and not to something controlled by an attacker.
Posted Sep 18, 2023 6:15 UTC (Mon)
by Cyberax (✭ supporter ✭, #52523)
[Link]
Not really? I have my encryption keys printed on a paper and stored in a safe deposit box in my local bank. So if my NAS has a motherboard failure, I'll still be able to recover all the data.
Of course, TPM allows me to avoid entering any credentials for the NAS during the boot.
Posted Sep 10, 2023 8:46 UTC (Sun)
by gmgod (guest, #143864)
[Link]
Alternatives could involve registering a key file that is centrally managed like any other secret by the organisation, or a long, provably random "password" automatically generated by a machine.
Those solutions don't reduce security, for all intents and purposes.
Also, the TPM lends itself very well to being used for OS protection. User data is probably better served by a hardware token the user controls (like a YubiKey), if it has to be hardware-based at all; that's yet another way to gain flexibility while not being stranded if the way to unlock the user's data is unavailable.
Posted Sep 7, 2023 17:36 UTC (Thu)
by jgg (subscriber, #55211)
[Link] (5 responses)
Windows has a special flow where a BIOS update triggered from within Windows effectively disables the PCR checks and the next boot (presumably into the new BIOS version) will automatically reseal the TPM to the new measurement and enable the checks. Even going into the BIOS and changing certain settings can trigger PCR changes and no boot.
These days Windows auto installs BIOS updates from Windows Update. If you dual boot Windows/Linux then, well, one of the two OSs will need a recovery key to unlock..
Posted Sep 7, 2023 18:47 UTC (Thu)
by willy (subscriber, #9762)
[Link] (1 responses)
Posted Sep 7, 2023 21:24 UTC (Thu)
by WolfWings (subscriber, #56790)
[Link]
I have one machine with two separate Linux installs (Debian + Arch) that I dual-boot between to test stuff on both distros on physical hardware, but the practice does seem to be dying out at this point.
Posted Sep 8, 2023 16:32 UTC (Fri)
by demfloro (guest, #106936)
[Link] (2 responses)
The sane way to seal a LUKS secret is PCR7+11 if systemd-stub is used, or PCR7+11+14 if systemd-stub and shim are used. You then seal the secret specifically to a known combination of PK+KEK+db+MokList+SecureBoot state and kernel+initrd+kernel cmdline.
PCR7 - SecureBoot state (+ Shim measurements if it's used)
https://www.freedesktop.org/software/systemd/man/systemd-...
Bitlocker uses no magic, just PCRs other than 0 and 1 for sealing: https://github.com/tianocore-docs/edk2-TrustedBootChain/b...
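With systemd-cryptenroll that combination is expressed roughly as below; whether to include PCR 14 depends on whether shim is in the boot path, and the device path is an assumption:

    # Seal a LUKS key slot to the Secure Boot state (PCR 7) plus the UKI
    # measurement that systemd-stub extends into PCR 11.
    sudo systemd-cryptenroll --tpm2-device=auto --tpm2-pcrs=7+11 /dev/nvme0n1p3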
Posted Sep 9, 2023 17:47 UTC (Sat)
by faramir (subscriber, #2327)
[Link]
"Any sufficiently advanced technology is indistinguishable from magic."
Posted Sep 16, 2023 0:26 UTC (Sat)
by jschrod (subscriber, #1646)
[Link]
If you folks want people to use that TPM stuff (and we're interested) - well, then we, as a community, need better documentation and better packaging.
Posted Sep 7, 2023 18:57 UTC (Thu)
by iabervon (subscriber, #722)
[Link]
Posted Sep 7, 2023 19:20 UTC (Thu)
by Cyberax (✭ supporter ✭, #52523)
[Link]
Windows also can escrow this key with Microsoft (or with your domain controller in companies), so you can use your Microsoft account to get it. I don't think Ubuntu supports this for now, but it's easy to add.
Posted Sep 8, 2023 6:01 UTC (Fri)
by eduperez (guest, #11232)
[Link] (11 responses)
I have seen many disks die, but I have never seen a motherboard die (anecdotal evidence, I know). The point is, I would always have a back-up of all the data on that disk, not because it is encrypted and I could lose all the data if the motherboard dies, but because the disk could die, encryption or not.
Posted Sep 8, 2023 13:04 UTC (Fri)
by Wol (subscriber, #4433)
[Link] (2 responses)
If it's your personal experience DON'T CALL IT AN ANECDOTE. A court of law would call it evidence. As a scientist, I've known of about 3 hard drives failing, and nary a motherboard, so if I'm doing an experiment I'd call that a reliable data point.
"My experience is" is an indisputable fact. "A lot of people have told me" is where we get into the realms of anecdotes and hearsay. But even then, if you make a point of rigorously recording what people tell you, "nobody told me they'd had a motherboard fail, lots of people said they'd had hard drives fail" is circumstantial evidence more than sufficient to drive a high-probability conclusion.
There's nothing wrong with other peoples' reports, provided (a) you're careful to make clear where your data came from, and (b) you are careful about the trustworthiness of the reports. But a report of your own experience is a hard fact - FULL STOP! (It might be a mistaken report, but that's another kettle of fish entirely.)
Cheers,
Wol
Posted Sep 11, 2023 10:52 UTC (Mon)
by paulj (subscriber, #341)
[Link] (1 responses)
Posted Sep 11, 2023 10:53 UTC (Mon)
by paulj (subscriber, #341)
[Link]
Posted Sep 8, 2023 14:03 UTC (Fri)
by geert (subscriber, #98403)
[Link] (3 responses)
Posted Sep 8, 2023 18:34 UTC (Fri)
by somlo (subscriber, #92421)
[Link] (2 responses)
I guess the moral of the story is: If you're worried about your data on a stolen/lost laptop, and are using disk encryption as a countermeasure against that, you've presumably *already* thought about how you're going to regain access to your data after such an event (i.e., backups)!
Otherwise simply do *not* enable disk encryption: in the absence of a threat model like the one described above, it's merely a foot gun waiting to go off :)
Posted Sep 8, 2023 18:36 UTC (Fri)
by somlo (subscriber, #92421)
[Link]
Posted Sep 8, 2023 21:01 UTC (Fri)
by rgmoore (✭ supporter ✭, #75)
[Link]
Yes, I think this should be the key lesson. If you're worried about data loss, protect yourself against data loss. Being able to transplant the hard drive into another computer when something else fails is at most a convenience feature. You still need to be able to recover from a hard drive failure, so you should probably think about how to do it as efficiently as possible anyway.
Posted Sep 8, 2023 20:28 UTC (Fri)
by ibukanov (subscriber, #3942)
[Link]
On the other hand, I have a Dell XPS laptop from 2016. It recently refused to boot. As the error happened very early in the boot process, I initially assumed it was my first non-accidental motherboard failure. It turned out to be a failed SSD. The UEFI firmware changed its state into a buggy one when the disk misbehaved, and then refused to boot with that state. Fortunately it was still possible to reset the state; after changing the SSD everything was fine.
Posted Sep 8, 2023 21:05 UTC (Fri)
by rgmoore (✭ supporter ✭, #75)
[Link] (2 responses)
Motherboard failure is definitely a thing. The big one you'll hear about is bad capacitors, though it sounds as if that more frequently causes stability problems rather than catastrophic failure.
Posted Sep 8, 2023 21:41 UTC (Fri)
by Wol (subscriber, #4433)
[Link] (1 responses)
There was a whole spate of articles about how to tell if you had a dodgy motherboard. Then all the dodgy mobos died or were retired, and it's no longer a problem.
Same with hard drives. I think it was the Thailand floods or something: there was a massive shortage of disks and a lot of dodgy components, and - particularly with Seagate - a huge number of dodgy 3TB hard drives flooded the market. They disappeared over a few years, and things were back to their reliable norm.
At the end of the day, be it hard drives, NVMe, RAM, or mobos, these things all have an expected lifetime. And mobos seem to be either one of the most reliable components, or at least seriously prone to being obsolete before they fail. You never really heard of mobo problems apart from that (admittedly very large) dodgy batch.
Cheers,
Wol
Posted Sep 9, 2023 11:56 UTC (Sat)
by mpr22 (subscriber, #60784)
[Link]
The general consensus appears to be "incompetent industrial espionage".
Posted Sep 8, 2023 6:05 UTC (Fri)
by marcH (subscriber, #57642)
[Link]
> And you might imagine, that this always happens at the worst time possible when that work (tm) has to be finished by tomorrow evening.
Err... no. I've seen many disks die. I remember about just one motherboard becoming flaky and it was still booting most of the time.
I'm not saying you shouldn't prepare for this situation; you should. A recovery key is probably the best option and in fact my IT department has one escrowed for every device. Also useful when you forget your password :-)
But this situation should definitely not be at the top of the list. There are other, much better reasons to do backups.
Posted Sep 9, 2023 2:21 UTC (Sat)
by jwarnica (subscriber, #27492)
[Link]
Posted Sep 8, 2023 17:55 UTC (Fri)
by lunaryorn (subscriber, #111088)
[Link] (1 responses)
The blog article doesn't mention prior art such as clevis or systemd-cryptenroll, so it's unclear to me what's really new with Ubuntu's approach here.
Posted Sep 15, 2023 1:31 UTC (Fri)
by raof (subscriber, #57409)
[Link]
From reading the article it seems you can either have TPM based FDE with SNAP or non TPM based FDE with DEB (which they promise isn't going away...)
But is there a good reason to link TPM and SNAP?
I don't think so, other than that they have apparently already done it that way for Ubuntu Core.
I always thought SNAPs were for add on applications and the core OS was supposed to stay with the distribution package manager.
But it doesn't get much more core than the bootloader and kernel...
> But is there a good reason to link TPM and SNAP?
I get that it's cool to hate on snaps, but isn't “the work has already been done for Snap-based installs” a good reason?
> I suspect there is a significant subset of users who would like TPM based FDE for the security benefits but would prefer to stay with DEBs.
Why, particularly, would they want to stay with Debian packages for the bootloader and kernel? This is not changing the rest of the system management. Indeed, this is backporting a feature which already exists on the snap-only Ubuntu Core system to a traditional apt-managed system.
> I always thought SNAPs were for add on applications and the core OS was supposed to stay with the distribution package manager.
> But it doesn't get much more core than the bootloader and kernel...
This is an artifact of the focus of your attention. Ubuntu Core, the snaps-only transactional OS, is almost a decade old now - first public release was Ubuntu Core 16, based on 16.04, and that emerged from the work on the Ubuntu Touch phone OS dating back to 2011. You've probably only noticed snaps relatively recently, as using them for desktop-y things is a more recent evolution.
It is a big problem to spend lots of working hours to get a system into the same working state that it was in before the disaster.
And you might imagine, that this always happens at the worst time possible when that work (tm) has to be finished by tomorrow evening.
It is 1 hr vs. 1 week.
Press one button and install everything within one hour to get to 100% state as it was before the disaster?
Disk swap in case of anything but disk breakage is trivial, fast and is a 100% restore. Guaranteed. Unless you break it with TPM voodoo, of course.
A carefully maintained kickstart (e.g., this Fedora 38 example), combined with up-to-date (rsync-based) backups of /home, can get you *most* of the way there.
Or you can just allow a simple disk swap.
- etckeeper backed by (remote) git
- cron job to keep and update list of installed packages in /etc
- backup /usr/local, /root if needed
- backup of user account(s) or just all of /home.
This is because 90% of users have very low requirements: a web + email client + {Libre}Office is enough. And likely in the near future only a browser will be enough.
Not entirely correct. A BIOS update can change PCR0 (firmware code) and PCR1 (firmware settings), but that only matters if the distro or machine owner decided to seal the TPM secret to PCR0+1, among others.
PCR11 - systemd-stub UKI measurements
PCR14 - Shim measurements
At least one significant difference between this and systemd-cryptenroll is that Ubuntu manages the policy. To use systemd-cryptenroll you need to be able to generate & sign a policy that applies to the PCR measurements, which means either you've got a key capable of bypassing this protection on your system, or your system is centrally managed and that key exists wherever central management happens.
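For the curious, the do-it-yourself version of that flow looks roughly like the following with the systemd tooling, under the assumption that you build and sign your own UKIs; all paths and key names here are placeholders:

    # Pre-compute and sign the expected PCR 11 value for a UKI you build:
    /usr/lib/systemd/systemd-measure sign \
        --linux=vmlinuz --initrd=initrd.img --cmdline="root=UUID=... quiet" \
        --private-key=pcr-key.pem --public-key=pcr-key.pub.pem \
        --bank=sha256 > tpm2-pcr-signature.json

    # Enroll the disk against the *public* key instead of a fixed PCR value,
    # so any future UKI whose measurements are signed with pcr-key.pem can
    # still unlock it:
    sudo systemd-cryptenroll --tpm2-device=auto \
        --tpm2-public-key=pcr-key.pub.pem /dev/nvme0n1p3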