someone still has to predict the future load and build out accordingly. That someone has to charge you for the systems they built that nobody is using, and they also get to charge you enough to make a profit.
Now, if your load is bursty enough that you can turn off a large percentage of your systems for a large percentage of the time, it can be far cheaper to pay this way and let someone else, whose bursts of load come at different times than yours, pay for another chunk of the same systems.
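To make that concrete, here's a rough back-of-the-envelope sketch in Python (every price and hour below is a made-up round number, not anyone's actual rate):

    # Toy cost comparison for a bursty load: own the peak capacity 24/7,
    # or rent by the hour and turn most of it off outside the burst.
    # All numbers are invented for illustration only.
    peak_servers     = 100    # servers needed during the daily burst
    baseline_servers = 10     # servers needed the rest of the day
    burst_hours      = 4      # hours per day at peak
    hours_per_day    = 24

    owned_cost_per_server_hour  = 0.10   # amortized hardware + power, invented
    rented_cost_per_server_hour = 0.25   # cloud list price, invented (note: higher)

    # Owning: you pay for the peak capacity around the clock, used or not.
    owned = peak_servers * hours_per_day * owned_cost_per_server_hour

    # Renting: you pay for peak capacity only during the burst.
    rented = (peak_servers * burst_hours
              + baseline_servers * (hours_per_day - burst_hours)) * rented_cost_per_server_hour

    print(f"owned:  ${owned:.2f}/day")    # $240.00/day
    print(f"rented: ${rented:.2f}/day")   # $150.00/day, cheaper despite the markup

Flatten the load (or raise the per-hour markup enough) and the comparison flips the other way, which is exactly why this only works for genuinely bursty loads.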
And for a small business, it gives you the ability to scale up rapidly based on usage. So if you create something that's wildly popular, you can use the income from your first users to pay for the servers for your next users, and ramp up in a matter of minutes to hours rather than the weeks it takes to order and rack new systems.
Of course, this assumes that you have done the extra engineering work to make sure that your application can actually scale horizontally. That is not trivial for anything other than the most trivial applications (but fortunately there really are a lot of those :-), so it's very possible to end up in a situation where you would like to add another hundred systems but doing so won't actually help, due to things like syncing data between them or bottlenecks in back-end resources.
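One way to see why "just add another hundred systems" can stop helping: if even a small fraction of each request has to go through some shared resource (data syncing, a lock, a single back-end database), Amdahl's law puts a hard ceiling on the speedup. A quick sketch, using a 5% serial fraction that is purely an invented example:

    # Amdahl's law: if a fraction s of the work is serialized on a shared
    # resource, the best possible speedup from n machines is 1 / (s + (1-s)/n).
    def max_speedup(n_nodes: int, serial_fraction: float) -> float:
        return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_nodes)

    for n in (1, 10, 100, 1000):
        print(n, round(max_speedup(n, 0.05), 1))
    # 1 1.0
    # 10 6.9
    # 100 16.8
    # 1000 19.6   <- the extra 900 machines barely helped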
Also, running many small systems can be less efficient than running fewer larger systems.
With lots of small systems, you pay for the memory and CPU to run a copy of the kernel and the system daemons on every node.
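A rough illustration of that overhead (the 1 GB per-node figure is just a convenient round number, adjust for your own OS image):

    # Fixed per-node overhead (kernel, system daemons, agents) eats a larger
    # share of small nodes.  All figures are invented for illustration.
    overhead_per_node_gb = 1
    total_ram_gb = 256

    for node_size_gb in (2, 8, 64, 256):
        nodes = total_ram_gb // node_size_gb
        wasted = nodes * overhead_per_node_gb
        print(f"{nodes:3d} x {node_size_gb:3d} GB nodes: "
              f"{wasted:3d} GB ({wasted / total_ram_gb:.0%}) lost to per-node overhead")
    # 128 x   2 GB nodes: 128 GB (50%) lost to per-node overhead
    #  32 x   8 GB nodes:  32 GB (12%) lost to per-node overhead
    #   4 x  64 GB nodes:   4 GB (2%) lost to per-node overhead
    #   1 x 256 GB nodes:   1 GB (0%) lost to per-node overhead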
You can also run into problems keeping all of the nodes consistent (when someone changes something, do you have to distribute that change to all the other nodes?). This is a larger version of the locking problem you can have between many threads on a single huge node, except that all of your communication is much slower.
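A crude way to picture the difference (push_to_node below is a hypothetical placeholder for whatever RPC mechanism you use, not any particular library):

    # Toy sketch: propagating one change, single big node vs. many nodes.
    import threading

    # On one big node: many threads share memory; "distributing" a change
    # is a mutex-protected assignment measured in nanoseconds.
    config = {"feature_flag": False}
    config_lock = threading.Lock()

    def update_local(key, value):
        with config_lock:
            config[key] = value            # done; every thread sees it

    # Across many nodes: the same change becomes one network call per node,
    # each costing milliseconds, and you have to decide what to do about
    # the nodes that don't answer.
    def update_cluster(nodes, key, value, push_to_node):
        failed = []
        for node in nodes:
            try:
                push_to_node(node, key, value)
            except ConnectionError:
                failed.append(node)        # now what? retry? mark stale? block writes?
        return failed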
Cloud computing has some places where it's a huge win, but unfortunately claims like "for every $1 consumed in cloud services, there is $4 not being spent on data centers" can be very misleading and lead people to make bad decisions based on bad information.
I've seen companies look at an internal IT budget that includes salaries for sysadmins, security, audit, etc., compare it with Amazon's list prices for the hardware, and decide that they can save a fortune by switching everything to AWS. But that comparison ignores the fact that they are still going to need the sysadmins, security people, auditors, and so on.
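To put made-up numbers on that comparison (all three dollar figures below are invented for illustration):

    # The bogus comparison: (hardware + people) in-house vs. the AWS bill alone.
    hardware_and_power = 2_000_000   # per year in your own datacenter (invented)
    staff_salaries     = 3_000_000   # sysadmins, security, audit, etc. (invented)
    aws_bill           = 2_500_000   # estimated cloud spend for the same load (invented)

    bogus_savings = (hardware_and_power + staff_salaries) - aws_bill  # what the slide deck shows
    real_savings  = hardware_and_power - aws_bill                     # you still pay the people

    print(f"savings the slide deck shows: {bogus_savings:,}")   # 2,500,000
    print(f"savings you actually get:     {real_savings:,}")    # -500,000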
Yes, changing how you build systems so that builds are automated, all of your systems are identical, and so on, is a win (in some cases a gigantic win compared to how companies are building and managing servers now), but you don't have to switch to the cloud to get those wins; you can change your processes in your existing datacenters and get the same benefit.
A few years ago, everyone was talking about what a huge win virtualizing your systems was, and companies spent millions doing so. But if they just switched to running on virtual servers without changing how they built and administered those servers, they found that their problems got larger instead of smaller.
For example, if you are building and patching each system manually, going from 100 bare-metal systems to 100 bare-metal systems hosting 1000 virtual machines makes your job MUCH harder. You need to change your processes so that you aren't building and patching systems by hand, and once you've done that, the difference between doing it on bare metal and doing it on virtual machines really isn't that large.
Cloud computing is the next step in this process.
The hype also ignores the additional complexity (and therefore cost) that comes with cloud computing:
The fact that security is more complicated when you're running in someone else's datacenter than when you're running in your own, where you can protect networks instead of individual systems.
The fact that individual components are significantly less reliable.
The fact that there are things you are simply not allowed to do (broadcasts, for example).
And the fact that there are frequently hidden performance issues (disk and network latency tend to be significantly higher in a cloud environment than in most datacenters).
None of these things means that you can't run something in a cloud environment; it just means that it may take more effort and/or more systems to do so.
Given the number of shortcuts and "we'll implement that later" delays that happen in normal software development, it's easy to get burned if you rush in without the experience.
Early adopters tend to be the cream of the crop, able to deal with all sorts of special cases and adapt rapidly, but that doesn't mean that the main body of companies will be as successful as the early adopters were, even with the benefit of better tools. They just don't have the experience and skill to deal with everything the early adopters could.