> It's good for servers where high uptime is needed,
Being a bit picky here:
Servers don't need high uptime; the services they provide need high availability. This is not the same thing (even though it sounds like it should be).
Pushing for high uptime on a single server will get you quite a ways towards high availability, but then you hit a wall and you need to move to clusters of machines. Once you move to clusters, the need for high uptime on each individual server drops dramatically.
While clustering adds complexity, it makes managing the individual servers much simpler, since you are no longer racing the clock on every change. This (usually) results in a very dramatic improvement in service availability, and a dramatic _decrease_ in _unplanned_ server downtime. There is more _planned_ server downtime, but it's the unplanned downtime that your customers notice.
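To make that concrete, here's a minimal toy sketch (not any particular load balancer, and `Replica`/`serve` are made-up names): requests are retried across replicas, so taking one box down for planned maintenance doesn't touch service availability, and the service only fails when _every_ replica is down at once.

```python
import random

class Replica:
    """Toy stand-in for one server in the cluster."""
    def __init__(self, name):
        self.name = name
        self.up = True  # flip to False for planned or unplanned downtime

    def handle(self, request):
        if not self.up:
            raise ConnectionError(f"{self.name} is down")
        return f"{self.name} served {request}"

def serve(replicas, request):
    """Try replicas in random order; the service fails only if all are down."""
    for replica in random.sample(replicas, len(replicas)):
        try:
            return replica.handle(request)
        except ConnectionError:
            continue  # one server down -> just go to the next replica
    raise RuntimeError("service unavailable: every replica is down")

cluster = [Replica("web1"), Replica("web2"), Replica("web3")]
cluster[0].up = False          # take web1 down for a planned OS upgrade
print(serve(cluster, "GET /")) # still answered, by web2 or web3
```

The point of the toy: `web1`'s uptime stat just got worse, but the availability of the service didn't move at all.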
There is a small (and shrinking) pool of application types that are really hard to cluster and really do need high uptime, but it's far smaller than you would think.