I see your point, but I think we can circumvent some of this.
I think that at some point - probably when the host architecture changes - we have to switch from VM-as-supervisor to straight emulation.
I've mentioned this in another reply, so apologies if you've already read it - but basically, when your VM solution finally stops supporting your version of the client OS, it's time to switch to emulating the entire machine, QEMU-style.
The advantage is that full emulation is likely to last much longer, albeit running somewhat slower.
The Intel/AMD 64-bit chips are (I believe) incapable of running 16-bit code when in 64-bit mode. They can run 32-bit, just not 16-bit.
So we're already at the point where VM systems are unable to run some old OSes or apps without resorting to emulation behind the scenes.
Rather than rely on that behind-the-scenes emulation, I think we should build in a stage where we simply say "all 16-bit code is emulated". We should also prepare for the idea that 128-bit processors in a decade or two might force us to add 32-bit code to the "emulated by default" pile.
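To make that stage concrete, here's a minimal sketch of the policy as a launcher script. It assumes QEMU is the emulator of choice; the `guest_bits` variable and the idea of selecting TCG emulation versus KVM acceleration per guest are my illustration, not anything from an existing tool.

```shell
# Sketch: pick full emulation vs. hardware-assisted VM by guest word size.
# Policy: 16-bit (and eventually 32-bit) guests always get pure emulation,
# so we never depend on the host CPU being able to execute the guest's code.
guest_bits=16

if [ "$guest_bits" -le 16 ]; then
    # Pure emulation (QEMU's TCG): slower, but survives host-architecture changes.
    runner="qemu-system-i386"
else
    # Hardware-assisted virtualization: fast, but tied to the host CPU's abilities.
    runner="qemu-system-x86_64 -accel kvm"
fi

echo "would launch with: $runner"
```

The point of routing the decision through one place is that when 128-bit hosts arrive, moving 32-bit guests to the "emulated by default" pile is a one-line threshold change rather than a rework of every VM definition.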
That saves us from having to do VMs within VMs, as you describe. (And if the chip won't run the code, and there's no emulator, I'm not sure VMs within VMs would work anyway.)