Ubuntu to add TPM-backed full-disk encryption
Posted Sep 7, 2023 18:22 UTC (Thu)
by mb (subscriber, #50428)
In reply to: Ubuntu to add TPM-backed full-disk encryption by NightMonkey
Parent article: Ubuntu to add TPM-backed full-disk encryption
Oh come on. Have you ever seen such a system for a workstation computer?
Press one button and reinstall everything within one hour, getting back to 100% of the state as it was before the disaster?
I have never seen that. Not even close. Except for things like whole-system backups, which at least restore to the last backup checkpoint. Still not 100%, though.
A disk swap, in case of anything but disk breakage, is trivial, fast, and a guaranteed 100% restore. Unless you break it with TPM voodoo, of course.
Install automation certainly will not do that. It never will, if people make any changes to the installed workstation.
Posted Sep 7, 2023 18:40 UTC (Thu)
by somlo (subscriber, #92421)
[Link] (2 responses)
A carefully maintained kickstart (e.g., this Fedora 38 example), combined with up-to-date (rsync-based) backups of /home, can get you *most* of the way there.
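For illustration only, a minimal rsync invocation along those lines might look like this (the backup host name and destination path here are assumptions, not from the comment):

    # Incremental mirror of /home to a backup host; -a preserves
    # permissions and timestamps, --delete propagates removals.
    rsync -a --delete /home/ backuphost:/srv/backup/home/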
Posted Sep 8, 2023 7:45 UTC (Fri)
by abo (subscriber, #77288)
[Link] (1 responses)
Posted Sep 9, 2023 21:40 UTC (Sat)
by salimma (subscriber, #34460)
[Link]
Posted Sep 7, 2023 19:13 UTC (Thu)
by Cyberax (✭ supporter ✭, #52523)
[Link] (2 responses)
Yes. This is what happens when you restore a computer from a backup in Windows and Mac OS. You will even have the desktop icons in the same spots.
Posted Sep 14, 2023 14:48 UTC (Thu)
by ms-tg (subscriber, #89231)
[Link] (1 responses)
> Yes. This is what happens when you restore a computer from a backup in Windows and Mac OS. You will even have the desktop icons in the same spots.
Heavy +1 to this. It amazes me that the Mac OS example of how seamless the Time Machine backup-and-restore process is seems not to have spread far and wide, even after many years.
When a Mac OS machine dies, and you have back-ups, you do indeed "Press one button and install everything within one hour to get to 100% state as it was before the disaster".
However, I have seen it take a bit more than 1 hour the last time it happened to me, more like 90 minutes if I recall correctly. But it does work!
Posted Sep 15, 2023 9:56 UTC (Fri)
by farnz (subscriber, #17727)
[Link]
The important thing is not so much the time taken, as the user time spent on it. If you have a Time Machine backup of your Mac, you can restore to a replacement Mac in just a few minutes operator time (plus potentially hours of machine time restoring the backup). As long as the machine time is reasonable (overnight, say), this is fine because you can go and do something else while you wait for the backup to restore.
Posted Sep 7, 2023 19:30 UTC (Thu)
by geofft (subscriber, #59789)
[Link] (5 responses)
A practical real-world case is Chrome OS - everything is in "the cloud" i.e. Google's redundant, replicated, configuration-managed computers, not just your files but also your configuration. Get a new Chrome OS machine, log in with the same account, and everything is back. And Chrome OS uses TPM-locked disks, and is very aggressive about wiping the local data partition when you look at it wrong.
We also do this at my workplace. Developers do not have root on their machines. We have a fairly extensive system for building all the software we depend on, from GCC and Python on down, in a giant monorepo. If you want to install anything you can install it in the monorepo; the entire company has push access to it (with mandatory reviews etc.). If you've got some weird combination of this version of libcurl plus that version of graphviz plus these three configuration files, you produce that weird combination inside a git repo, where you can run "git diff", not by doing stuff via sudo, where you can't. If you want to deploy that weird combination, you can take a git commit ID and tell our infrastructure to deploy that code, instead of trying to replicate the same manual steps on a prod server.
And then we just back up people's home directories. If something goes wrong with their machine, we just install a fresh one, install their home directory from backup, and they're ready to go. If they had their work checked in and pushed they may not even need their home directory restored, strictly speaking. This also helps with situations like people leaving - we never have intern managers wondering what dev stack their intern installed on their machine, because they didn't install anything. (It also helps with initial onboarding: the entirety of the "set up your dev environment" step is just cloning the monorepo and then setting up whatever IDEs you like; you're not asking your coworkers what Node.js versions they have installed.)
We have a team of folks who are willing to help people get things building in the monorepo. This is in large part my current job. My previous job was doing the same high-level function, but in the form of doing OS packaging whenever someone wanted to use a neat new Python or C library, so we didn't have machines where people just ran "sudo make install". I much prefer the current approach for quite a few reasons: in addition to simply avoiding the host of "works on my machine" problems, it means that there isn't a machine-wide concept of "the GCC version" or "the Python version" as far as developers' code goes. So different branches and perhaps even different binaries on the same branch can be on different versions of these things, and also we can upgrade the actual OS without having to upgrade code dependencies in lockstep.
There are a few public systems that share the same philosophy. We're taking a very close look at Nix, which can be coinstalled in the /nix directory of any other Linux distro, and which safely lets unprivileged users install anything; if it had existed before we built our in-house system, we would have probably just used it. There's also Spack, which has more of a specific focus on scientific computing/HPC and compilation options.
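As a rough illustration of that Nix property (the package names are arbitrary examples, not anything from the comment), an unprivileged user can get an ad-hoc environment without touching the host system:

    # Drop into a temporary shell with specific tools, materialized
    # under /nix; no root needed, nothing written to /usr or /etc.
    nix-shell -p curl graphviz

    # Exiting the shell leaves the host system exactly as it was.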
Posted Sep 7, 2023 19:49 UTC (Thu)
by smoogen (subscriber, #97)
[Link] (2 responses)
The reality for most programmers and shops I have seen in the last 30 years is that the 'team' of people trying to somehow replicate that is usually one person, who is probably also juggling a helpdesk and "why is the CEO's phone not working?". As the company grows, it seems that this staff grows by one person for every 400 to 1000 employees. And then you find yourself increasingly dealing with the latest plan to outsource the job to some cloud vendor who end-of-lifes the service right after the transition.
So yes, it is possible, but most people only hear about it in posts and TED talks about how, if only your finance department decided to actually spend on IT instead of outsourcing it again, you could have it too. You instead deal with backup systems which are broken, configuration management which is behind, and deadlines which are moved up.
Posted Sep 8, 2023 1:04 UTC (Fri)
by geofft (subscriber, #59789)
[Link] (1 responses)
What I'm advocating for is a system where the team that officially maintains your computers, no matter how big or small that team is, doesn't feel like they have to choose between blocking short-term productivity or creating long-term risk. There usually will be a few people who know enough to install software and cobble things together even if that isn't their job on paper (and they therefore don't have root). Give them the ability to cobble things together, but also ensure their cobbling is recorded somewhere, and isn't just some hackery in their home directory. And if that team is one person, or even zero, there's still a way forward.
And I'm also posting this not to boast about our in-house system but to lament that we had to build one (and did not open source it). I think it might be relatively close to possible to get there with Nix these days, though it involves both a fairly steep learning curve and an involved conversion from basically any existing system. I think there could be really good FOSS tools for this. I think these tools could be good enough that the average home user - who by definition has a corporate sysadmin staff of zero - can get their setup for installing the right graphics drivers and workarounds recorded in exactly the same way.
I think we (the FOSS community) actually sort of lost our lead: up until maybe the early '00s, Windows and Mac users basically did not have privilege separation at all, and were running everything as effectively root. Installing stuff was just copying files, uninstalling was hoping for the best, and "DLL hell" was a Windows problem. The Linux distros and the BSDs were the ones who said, even if this is your personal computer, run as a non-admin user and use well-defined packaging systems. Now, as another commenter alluded to, Windows and Mac OS have moved towards a model where the OS is read-only, applications are in their own private directories (and often sandboxed), and it is absolutely possible to restore the state of a Windows or Mac machine just by restoring user-level files and config. We haven't kept up, and I would bet there is much more "DLL hell" in practice on Linux machines than Windows ones today.
A few projects like NixOS and Spack are going in the right direction for specific use cases, but they're not commonplace. The Ubuntus and Fedoras of the world should do this too - and in a way that empowers users to try stuff out as opposed to just locking them out of the system and indeed makes them more confident about trying things that might not work.
Posted Sep 8, 2023 11:57 UTC (Fri)
by smoogen (subscriber, #97)
[Link]
I expect that if I were a full-time Windows admin I would have been able to get around this, but I am not, so it ended up being a reinstall from scratch. Having had this happen three times this year now, I really should learn.
I have been impressed with the Mac on this, because it does seem that Time Machine and other things will allow for most things to be restorable and comparable. It is what I consider the killer app for self-administration, as it has solved a lot of little issues. It's not perfect, but it is a lot better than anything I have dealt with recently on Linux or Windows.
Also I didn't take your comments as bragging. I took them as 'this is possible' which can be helpful for us sysadmins who tend to get in a rut and also think nothing can be better than the pig sty we live in :)
Posted Sep 7, 2023 20:09 UTC (Thu)
by mb (subscriber, #50428)
[Link] (1 responses)
Or you can just allow a simple disk swap.
And I don't think your approach works well with proprietary software that does all sorts of weird things.
Posted Sep 7, 2023 20:32 UTC (Thu)
by rahulsundaram (subscriber, #21946)
[Link]
> Or you can just allow a simple disk swap.
In corporate workstations, they are already paying these costs anyway. If you're a solo user who is doing ad-hoc things, maybe the latter is more useful for you. You should have backups anyway, so a hardware failure shouldn't affect you even in that case.
Posted Sep 7, 2023 19:53 UTC (Thu)
by sjj (guest, #2020)
[Link]
- etckeeper backed by (remote) git
- cron job to keep and update list of installed packages in /etc
- backup /usr/local, /root if needed
- backup of user account(s) or just all of /home.
Getting the system to a known state: feed the package list to dnf/apt and clone the /etc repo. Then restore user backups and the UI settings are back with user data. It’s not 100.00% but it’s only 2-3 steps.
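A minimal sketch of the package-list half of that setup (the file path and the dnf-specific flags are my assumptions, not from the comment):

    # Cron job: record explicitly installed packages under /etc,
    # where etckeeper versions the list along with the rest of /etc.
    dnf repoquery --userinstalled --qf '%{name}' > /etc/package-list.txt

    # On the fresh install: replay the list, then clone the /etc repo.
    xargs -a /etc/package-list.txt dnf install -y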
Posted Sep 7, 2023 21:10 UTC (Thu)
by adam820 (subscriber, #101353)
[Link] (1 responses)
Posted Sep 7, 2023 23:08 UTC (Thu)
by sjj (guest, #2020)
[Link]
Posted Sep 8, 2023 9:19 UTC (Fri)
by magi (subscriber, #4051)
[Link]
Posted Sep 9, 2023 14:54 UTC (Sat)
by kreijack (guest, #43513)
[Link]
Yes, I have seen these, and they work well. It is not very complicated to do a similar setup that can satisfy 90% of users.
In my company the PC arrives already configured with the most common software, and the files are already in the cloud, so replacing a PC only requires logging in to the new one (and waiting for the files to download from the cloud).
The key is the "90%" above. This works very well because the vast majority of people have very low requirements: a web+email client plus {libre}office is enough. And likely in the near future only a browser will be enough.
This doesn't work for people who rely massively on complex tools (3D CAD, HW CAD, software development, or a computer which acts as a server ...), where the setup is not canonicalized (even though it could be done easily, it would require a specific setup from IT, and that isn't worth it).
So even though you can't solve all the problems for all the people, you can massively reduce the load on the IT people.
And, even though I have never gone deeply into this topic, my understanding is that for every PC the key to unlock the disk is stored in the TPM, but IT has another key to unlock the system when (e.g.) an upgrade doesn't work and leaves the system unbootable (which is the major risk when you put the key inside the TPM).
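A hedged sketch of that dual-key arrangement, using systemd-cryptenroll (a tool the comment does not name; the device path is an assumption):

    # Bind the LUKS volume to the TPM so the machine unlocks unattended:
    systemd-cryptenroll --tpm2-device=auto /dev/nvme0n1p3

    # Also enroll a recovery key for IT to escrow, usable when a bad
    # upgrade changes the measured boot state and the TPM won't unseal:
    systemd-cryptenroll --recovery-key /dev/nvme0n1p3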
Posted Sep 14, 2023 8:08 UTC (Thu)
by highvoltage (subscriber, #57465)
[Link]