Still though, you're putting your faith in a security technology whose effectiveness depends on an arms race where the "bad" guys are routinely way ahead. We simply do not know how to make software secure, and so there's no way we can make a large base OS secure. You *will* regularly lose, if/when you're attacked.
To have any hope of winning the arms race, your faith must go into software that a) is minimal in size, b) is minimal in the services it presents to the software that uses it, and c) limits the scope of the software using it. Userspace sandboxing and VMs, basically. The Android userspace is, I think, the state of the art among real-world, widely-deployed systems when it comes to security and isolation of code. And note that that security model does NOT rely on "Secure Boot".
Note that this is not the same as running an OS under a hypervisor: if you're simply running the same OS in the hypervisor, you still have the same problem inside your virtualised OS. You have a more secure OS underneath from which to do integrity checks, I'd agree, but that gives you no additional reason to trust the software you're using at runtime. You are still just as prone to runtime subversion, because you still have no isolation/sandboxing between the code and data you don't trust and the rest of the software. Also, things like the QEMU hardware emulation (which seems to be used a lot elsewhere) can have relatively complex interfaces, and it wasn't necessarily written in a "security first" way. There have been a number of exploits in that code over the years.