Glauber Costa is quite right that hypervisors are a response to the failure of the operating system to provide enough isolation between different processes or different users on the same machine. But there is another important way in which the operating system has failed, and that is in providing an interface which is wide enough to be usable and yet narrow enough to be completely specified and dependable.
Often a large application will specify a particular Linux distribution or Windows version it is 'certified' to run on. The vendor may even insist that its application be the only thing running on the machine, if you want to get support. It may require particular versions of system libraries because those were the ones it was tested with. And yes, I am talking about big companies here, where stupid things are done for stupid big-organization reasons, and if you use free software and compile from source you are free of this nonsense, blah blah. But bear with me and assume that at least some of the time there is a legitimate reason to require an exact operating system version for running an application. (If you have ever worked on a support desk, you will find this reality easier to accept.)
So what we start to see are 'appliances', where the application is packaged up with its operating system, ready to load into a virtual machine. Instead of supplying a program which calls the complex interface provided by the kernel, C library, and other system libraries, the vendor supplies one which expects the 'ABI' of an idealized x86-compatible computer. It has proved easier to agree on that than to agree on the higher-level interfaces. Even though, somewhat absurdly, it means that TCP/IP and filesystems and virtual memory are all being reimplemented inside the 'appliance', it turns out to be more robust this way.
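To make the 'wide interface' concrete, here is a minimal sketch (my illustration, not from the original argument) of just a few of the version-specific facts a running application can observe and quietly come to depend on. Every one of these is part of the surface the vendor would have to certify against; the idealized virtual machine's hardware ABI exposes none of them.

```python
# Illustrative only: a handful of the many environment details visible
# through the operating system's wide interface. An application tested
# against one combination of these may misbehave under another.
import platform

kernel_release = platform.release()            # kernel version string
libc_name, libc_version = platform.libc_ver()  # C library name/version, if detectable
machine = platform.machine()                   # CPU architecture, e.g. 'x86_64'

print("kernel:", kernel_release)
print("libc:  ", libc_name, libc_version)
print("arch:  ", machine)
```

This barely scratches the surface: add system library versions, distribution-specific file layouts, and kernel configuration, and the combinatorial space a vendor must test becomes enormous, which is exactly the pressure that makes shipping the whole operating system inside the appliance attractive.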