I am not sure why running 'untrusted' native code is considered so dangerous or novel. Since the seventies or earlier, time-sharing systems have allowed different users to run their own code on the same machine, with each user or process isolated from the others. Modern hardware such as the 386 family was designed specifically to support this. Each process runs in its own virtual address space set up by the operating system and cannot touch memory belonging to other processes or to the kernel; the only access it gets is whatever the operating system explicitly provides through its system call interface.
Why, then, is it necessary to go to all this trouble of verifying binaries? Surely it would be far simpler for the operating system to provide a bit of help: set up a new process with its own memory space, a CPU quota, and a limited set of system calls (perhaps just read() and write() on a pair of pipes that already exist). Then you could execute any native code you want, and if it tries to do something naughty, the CPU's built-in protection triggers a fault and the OS kills the process.
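As it happens, Linux already has a primitive very close to this: seccomp "strict" mode, in which the only system calls a process may make are read(), write(), _exit(), and sigreturn(); anything else is answered with an immediate SIGKILL. A minimal sketch of the fork-plus-pipes arrangement described above might look like the following, where untrusted() is just a stand-in for whatever native code you want to run:

```c
#define _GNU_SOURCE
#include <stdio.h>
#include <unistd.h>
#include <sys/prctl.h>
#include <sys/syscall.h>
#include <sys/wait.h>
#include <linux/seccomp.h>

/* Stand-in for the untrusted native code: it can only talk over the
 * file descriptors it was handed.  Anything beyond read()/write()
 * (opening files, making sockets, even exit_group()) gets SIGKILL. */
static void untrusted(int in_fd, int out_fd)
{
    char buf[64];
    ssize_t n = read(in_fd, buf, sizeof buf);    /* allowed */
    if (n > 0)
        write(out_fd, buf, (size_t)n);           /* allowed */
    /* Raw exit(2): glibc's _exit() uses exit_group(2), which strict mode forbids. */
    syscall(SYS_exit, 0);
}

int main(void)
{
    int to_child[2], from_child[2];
    if (pipe(to_child) == -1 || pipe(from_child) == -1) {
        perror("pipe");
        return 1;
    }

    pid_t pid = fork();
    if (pid == 0) {
        /* Child: restrict ourselves to read/write/exit, then run the code. */
        if (prctl(PR_SET_SECCOMP, SECCOMP_MODE_STRICT, 0, 0, 0) == -1) {
            perror("prctl");
            _exit(1);
        }
        untrusted(to_child[0], from_child[1]);
    }

    /* Parent: feed the sandboxed child some input and read its reply. */
    write(to_child[1], "hello\n", 6);

    char buf[64];
    ssize_t n = read(from_child[0], buf, sizeof buf);
    if (n > 0)
        fwrite(buf, 1, (size_t)n, stdout);

    waitpid(pid, NULL, 0);
    return 0;
}
```

Strict mode is deliberately crude; the later seccomp-BPF interface lets you allow an arbitrary set of system calls instead of this fixed short list, but the principle is the same.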
We only think this is exotic because today's popular OSes do not give you much control over what resources a process can have. Typically file access is governed by permission bits, but any process can open TCP/IP connections. And where an OS does provide capabilities, jails, masking out of system calls, and so on, there is no single dominant API and model, nor the base of knowledgeable people needed to make good use of it.
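OpenBSD, for example, expresses roughly the same syscall-masking idea through pledge(2): the process promises to use only certain groups of system calls and is killed if it breaks the promise. The API, the granularity, and the failure mode are all different from Linux's seccomp, which is the fragmentation problem in miniature. A sketch (OpenBSD-only, not portable):

```c
#include <unistd.h>

int main(void)
{
    /* OpenBSD only: keep the "stdio" group of system calls (read, write,
     * exit, and a handful of relatives) and give up everything else. */
    if (pledge("stdio", NULL) == -1)
        return 1;

    write(1, "still allowed\n", 14);
    /* An open() for writing or a socket() here would now kill the
     * process with SIGABRT rather than returning an error. */
    return 0;
}
```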