I think you've missed their point. Switching to, say, QEMU and software virtualisation doesn't solve the problem. One day QEMU will no longer be maintained. Some time after that, the systems QEMU runs on (both software and hardware) will be obsolete. You will then have to run a VM of the system QEMU ran on, in order to run QEMU, in order to run the VM containing the software you need to view the document you're interested in. Eventually that new system will become obsolete too, necessitating another layer of VMs. You end up with VM turtles all the way down.
Further, you are assuming that for every system that becomes obsolete, a VM of it will exist on the newer system. This is far from guaranteed. If that assumption ever fails, access to the document is lost from that point on. For the assumption to always hold, every system needs to be documented well enough by its maker that someone in the future can emulate it. In other words, it assumes your chain of VMs will never come to depend on an undocumented, wholly proprietary system.
So, given that your VM approach also ultimately relies on open specifications, wouldn't it be much better & simpler to just work towards ensuring documents are stored in openly specified formats in the first place? That seems far more future-proof to me.