The true costs of hosting in the cloud
Should we host in the cloud or on our own servers? This question was at the center of Dmytro Dyachuk's talk, given during KubeCon + CloudNativeCon last November. While many services simply launch in the cloud without the organizations behind them considering other options, large content-hosting services have actually moved back to their own data centers: Dropbox migrated in 2016 and Instagram in 2014. Because such transitions can be expensive and risky, understanding the economics of hosting is a critical part of launching a new service. Actual hosting costs are often misunderstood, or secret, so it is sometimes difficult to get the numbers right. In this article, we'll use Dyachuk's talk to try to answer the "million dollar question": "buy or rent?"
Computing the cost of compute
So how much does hosting cost these days? To answer that apparently trivial question, Dyachuk presented a detailed analysis made from a spreadsheet that compares the costs of "colocation" (running your own hardware in somebody else's data center) versus those of hosting in the cloud. For the latter, Dyachuk chose Amazon Web Services (AWS) as a standard, reminding the audience that "63% of Kubernetes deployments actually run off AWS". Dyachuk focused only on the cloud and colocation services, discarding the option of building your own data center as too complex and expensive. The question is whether it still makes sense to operate your own servers when, as Dyachuk explained, "CPU and memory have become a utility", a transition that Kubernetes is also helping push forward.
Another assumption of his talk is that server uptime isn't that critical anymore; there used to be a time when system administrators would proudly brandish multi-year uptime counters as proof of server stability. As an example, Dyachuk performed a quick survey in the room and the record was an uptime of five years. In response, Dyachuk asked: "how many security patches were missed because of that uptime?" The answer was, of course, "all of them". Kubernetes helps with security upgrades, in that it provides a self-healing mechanism to automatically re-provision failed services or rotate nodes when rebooting. This changes hardware designs; instead of building custom, application-specific machines, system administrators now deploy large, general-purpose servers that use virtualization technologies to host arbitrary applications in high-density clusters.
When presenting his calculations, Dyachuk explained that "pricing is complicated" and, indeed, his spreadsheet includes hundreds of parameters. However, after reviewing his numbers, I can say that the list is impressively exhaustive, covering server memory, disk, and bandwidth, but also backups, storage, staffing, and networking infrastructure.
For servers, he picked a Supermicro chassis with 224 cores and 512GB of memory from the first result of a Google search. Once amortized over an aggressive three-year rotation plan, the $25,000 machine ends up costing about $8,300 yearly. To compare with Amazon, he picked the m4.10xlarge instance as a commonly used standard, which currently offers 40 cores, 160GB of RAM, and 4Gbps of dedicated storage bandwidth. At the time he did his estimates, the going rate for such a server was $2 per hour or $17,000 per year. So, at first, the physical server looks like a much better deal: half the price and close to quadruple the capacity. But, of course, we also need to factor in networking, power usage, space rental, and staff costs. And this is where things get complicated.
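As a quick back-of-the-envelope check, the headline comparison works out as follows. This is not Dyachuk's full spreadsheet, just the two figures quoted above, and the AWS rate is the 2017 on-demand price, which has changed since:

```python
# Rough comparison of the two headline figures quoted above.
# These are the talk's 2017 numbers, not current prices.

server_price = 25_000              # Supermicro chassis, USD
amortization_years = 3             # aggressive rotation plan
colo_server_yearly = server_price / amortization_years
print(f"Colo server, hardware only: ${colo_server_yearly:,.0f}/year")  # ~$8,300

aws_hourly = 2.00                  # m4.10xlarge on-demand rate at the time
aws_yearly = aws_hourly * 24 * 365
print(f"AWS m4.10xlarge, on-demand: ${aws_yearly:,.0f}/year")          # ~$17,500
```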
First, colocation rates will vary a lot depending on location. While bandwidth costs are often much lower in large urban centers because of proximity to fast network links, real estate and power prices there are often much higher.
For the purpose of his calculation, Dyachuk picked a real-estate figure of $500 per standard cabinet (42U). His calculations yielded a monthly power cost of $4,200 for a full rack, at $0.50/kWh. Those rates seem rather high compared to my local data center, where the rate is closer to $350 for the cabinet and $0.12/kWh for power. Dyachuk took into account that power is usually not "metered billing", where you pay for the actual power usage, but "stepped billing", where you pay for a circuit with a (say) 25-amp breaker regardless of how much power you draw on that circuit. This accounts for some of the discrepancy, but the estimate still seems rather high according to my calculations.
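To get a feel for how much the billing model matters, here is a small illustration of metered versus stepped billing. The 25-amp circuit comes from Dyachuk's example; the 208-volt supply and the 60% average utilization are assumptions I made up for the sake of the example:

```python
# Illustration of metered vs. stepped power billing for a single circuit.
# The 208 V supply and 60% average utilization are assumed values; only
# the 25 A breaker and the two $/kWh rates come from the text above.

amps, volts = 25, 208
utilization = 0.60                      # average fraction of the circuit actually drawn
hours_per_month = 24 * 30

provisioned_kw = amps * volts / 1000    # 5.2 kW circuit
used_kwh = provisioned_kw * utilization * hours_per_month

for rate in (0.50, 0.12):               # $/kWh: the talk's figure vs. a cheaper local rate
    metered = used_kwh * rate                           # pay only for what you draw
    stepped = provisioned_kw * hours_per_month * rate   # pay for the whole circuit
    print(f"${rate:.2f}/kWh: metered ${metered:,.0f}/month,"
          f" stepped ${stepped:,.0f}/month")
```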
Then there's networking: all those machines need to connect to each other and to an uplink. This means finding a bandwidth provider, which Dyachuk pinned at a reasonable average cost of $1/Mbps. But the most expensive part is not the bandwidth; the cost of managing network infrastructure includes not only installing switches and connecting them, but also tracing misplaced wires, dealing with denial-of-service attacks, and so on. Cabling, a seemingly innocuous task, actually accounts for the majority of network hardware expenses in data centers, as previously reported. From networking, Dyachuk went on to detail the remaining cost estimates, including storage and backups, where the physical world is again cheaper than the cloud. All this is, of course, assuming that crafty system administrators can figure out how to glue all the hardware together into a meaningful package.
Which brings us to the sensitive question of staff costs; Dyachuk described those as "substantial". These costs are for the system and network administrators who are needed to buy, order, test, configure, and deploy everything. Evaluating those costs is subjective: salaries, for example, will vary between countries. He fixed the yearly cost per person at $250,000 (an actual $150,000 salary plus overhead) and accounted for three people on staff. Those costs may also vary with the colocation service; some will include remote hands and networking, but he assumed in his calculations that the costs would end up being roughly the same because providers will charge extra for those services.
Dyachuk also observed that staff costs are the majority of the expenses in a colocation environment: "hardware is cheaper, but requires a lot more people". In the cloud, it's the opposite; most of the costs consist of computation, storage, and bandwidth. Staff also introduce a human factor of instability in the equation: in a small team, there can be a lot of variability in ability levels. This means there is more uncertainty in colocation cost estimates.
In our discussions after the conference, Dyachuk pointed out a social aspect to consider: cloud providers are operating a virtual oligopoly, and he worries about the impact of Amazon's growing power over different markets.
Demand management
Once the extra costs described above are factored in, colocation still would appear to be the cheaper option. But that doesn't take into account the question of capacity: a key feature of cloud providers is that they pool together large clusters of machines, which allows individual tenants to scale up their services quickly in response to demand spikes. Self-hosted servers need extra capacity to cover future demand. That means paying for hardware that sits idle waiting for usage spikes, while cloud providers are free to re-provision those resources elsewhere.
Satisfying demand in the cloud is easy: allocate new instances automatically and pay the bill at the end of the month. With colocation, provisioning is much slower and hardware must be systematically over-provisioned. Those extra resources might be used for preemptible batch jobs in certain cases, but workloads are often "transaction-oriented" or "realtime", which requires extra resources to deal with spikes. So the spike-to-average ratio is an important metric to evaluate when making the decision between the cloud and colocation.
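To see why that ratio matters, consider a toy model; the per-unit costs and the 20% headroom below are invented for illustration and are not Dyachuk's numbers:

```python
# Toy model of why the spike-to-average ratio matters.
# The unit costs and the 20% headroom are made-up, illustrative numbers.

average_load = 100                  # capacity units needed on average
spike_to_average = 3.0              # peak demand is three times the average
headroom = 1.2                      # safety margin on top of the peak, colo only

colo_unit_cost = 1.0                # cost per *provisioned* unit (cheap hardware)
cloud_unit_cost = 2.0               # cost per *used* unit (pricier, pay-as-you-go)

colo_cost = average_load * spike_to_average * headroom * colo_unit_cost
cloud_cost = average_load * cloud_unit_cost    # roughly tracks actual usage

print(f"colo, provisioned for the peak: {colo_cost:.0f}")
print(f"cloud, tracking average usage:  {cloud_cost:.0f}")
# With a spike-to-average ratio of 3, the "cheaper" hardware ends up
# costing more, because most of it sits idle between spikes.
```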
Cost reductions are possible by improving analytics to reduce over-provisioning. Kubernetes makes it easier to estimate demand; before containerized applications, estimates were made per application, each with its own margin of error. When all applications are pooled together in a cluster, the problem is generalized: individual workloads still fluctuate, but they balance out in aggregate. Dyachuk therefore recommends using the cloud when future growth cannot be forecast, to avoid the risk of under-provisioning. He also recommended "The Art of Capacity Planning" as a good forecasting resource; even though the book is old, the basic math hasn't changed, so it is still useful.
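The smoothing effect is easy to demonstrate with a toy simulation. The demand curves below are synthetic random numbers, not data from the talk, but they show the principle: the peak of the pooled load is much smaller than the sum of the individual peaks.

```python
# Toy simulation of the smoothing effect of pooling workloads in one cluster.
# Purely illustrative: synthetic random demand, no real workload data.
import random

random.seed(42)
n_apps, n_samples = 20, 1000

# Each application's demand fluctuates around a mean of 10 units.
demands = [[max(0.0, random.gauss(10, 5)) for _ in range(n_samples)]
           for _ in range(n_apps)]

# Provisioning per application means covering each application's own peak.
per_app_provisioning = sum(max(app) for app in demands)

# Provisioning the pooled cluster only needs to cover the peak of the sum.
pooled = [sum(app[t] for app in demands) for t in range(n_samples)]
pooled_provisioning = max(pooled)

print(f"sum of individual peaks: {per_app_provisioning:.0f}")
print(f"peak of the pooled load: {pooled_provisioning:.0f}")
# Individual spikes rarely line up, so the aggregate needs far less
# spare capacity than the sum of its parts.
```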
The golden ratio
Once extra capacity and staff costs are added, colocation ends up overshooting cloud prices for smaller deployments. In closing, Dyachuk identified the crossover point where colocation becomes cheaper at around $100,000 per month, or 150 Amazon m4.2xlarge instances, which can be seen in the graph below. Note that he picked a different instance type for the actual calculations: instead of the largest instance (m4.10xlarge), he chose the more commonly used m4.2xlarge instance. Because Amazon pricing scales linearly, the math works out to about the same once reserved instances, storage, load balancing, and other costs are taken into account.

![Crossover point](https://static.lwn.net/images/2018/aws-vs-colo.png)
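The shape of that crossover comes from a simple structure: the cloud bill grows roughly linearly with size, while the colocation bill has a large fixed component (staff) plus a smaller per-instance cost. The sketch below uses invented coefficients, chosen only so that the curves cross near the figures quoted; the one number taken from the talk is the staff cost of three people at $250,000 per year, or about $62,500 per month.

```python
# Minimal sketch of the crossover model: a linear cloud bill versus a
# colocation bill with a big fixed staff cost plus a smaller per-instance
# cost. The per-instance rates are invented for illustration; only the
# staff figure (3 x $250k/year, about $62.5k/month) comes from the talk.

def cloud_monthly(instances, per_instance=660):
    return instances * per_instance          # roughly linear with size

def colo_monthly(instances, staff=62_500, per_instance=240):
    return staff + instances * per_instance  # fixed staff + amortized hardware

for n in (25, 50, 100, 150, 300, 500):
    print(f"{n:4d} instances: cloud ${cloud_monthly(n):>9,.0f}"
          f"   colo ${colo_monthly(n):>9,.0f}")
# With these made-up rates the curves cross near 150 instances, at around
# $100,000 per month of cloud spend, mirroring the shape of the graph.
```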
He also added that the figure will change based on the workload; Amazon is more attractive with more CPU and less I/O. Conversely, I/O-heavy deployments can be a problem on Amazon; disk and network bandwidth are much more expensive in the cloud. For example, bandwidth can sometimes cost more than triple what you can easily find in a data center.
Your mileage may vary; those numbers shouldn't be taken as absolutes. They are a baseline that needs to be tweaked according to your situation, workload, and requirements. For some, Amazon will be cheaper; for others, colocation is still the best option.
He also emphasized that the graph stops at 500 instances; beyond that lies another "wall" of investment due to networking constraints. At around the equivalent of 2000-3000 Amazon instances, networking becomes a significant bottleneck and demands larger investments in networking equipment to upgrade internal bandwidth, which may make Amazon affordable again. It might also be that application design should shift to a multi-cluster setup, but that implies increases in staff costs.
Finally, we should note that some organizations simply cannot host in the cloud. In our discussions, Dyachuk specifically expressed concerns about Canada's government services moving to the cloud, for example: what is the impact on state sovereignty when confidential data about its citizens ends up in the hands of private contractors? So far, Canada's approach has been to only move "public data" to the cloud, but Dyachuk pointed out that this already includes sensitive departments like correctional services.
In Dyachuk's model, the cloud offers significant cost reductions over traditional hosting in small clusters, at least until a deployment reaches a certain size. However, different workloads significantly change that model and can make colocation attractive again: I/O- and bandwidth-intensive services with well-planned growth rates are clear colocation candidates. His model is just a start; any project manager would be wise to make their own calculations to confirm that the cloud really delivers the cost savings it promises. Furthermore, while Dyachuk wisely avoided political discussions surrounding the impact of hosting in the cloud, data ownership and sovereignty remain important considerations that shouldn't be overlooked.
A YouTube video and the slides [PDF] from Dyachuk's talk are available online.
[We would like to thank LWN's travel sponsor, the Linux Foundation, for
travel assistance to attend KubeCon + CloudNativeCon.]
| Index entries for this article | |
|---|---|
| GuestArticles | Beaupré, Antoine |
| Conference | KubeCon + CloudNativeCon NA/2017 |
Posted Feb 28, 2018 19:21 UTC (Wed)
by flussence (guest, #85566)
[Link] (7 responses)
Posted Feb 28, 2018 19:57 UTC (Wed)
by anarcat (subscriber, #66354)
[Link]
Thank you and have a nice day! :)
Posted Mar 1, 2018 4:38 UTC (Thu)
by Cyberax (✭ supporter ✭, #52523)
[Link]
IPv6 is also fully supported in most regions.
Posted Mar 1, 2018 18:35 UTC (Thu)
by sjfriedl (✭ supporter ✭, #10111)
[Link] (3 responses)
Which may be true only while it's running: what happens when that bare metal breaks and you're down for 2 hours waiting for remote hands or 4 days waiting for a new part?
It may be hard to put a dollar value on downtime, but it's probably not $0.
Posted Mar 2, 2018 1:47 UTC (Fri)
by k8to (guest, #15413)
[Link]
Posted May 3, 2018 23:38 UTC (Thu)
by nicram (guest, #124147)
[Link] (1 responses)
They are not IT gods, just highly trained monkeys with low salaries.
Posted May 4, 2018 20:54 UTC (Fri)
by Cyberax (✭ supporter ✭, #52523)
[Link]
Posted Mar 8, 2018 19:29 UTC (Thu)
by seneca6 (guest, #63916)
[Link]
Concerning communities and small-scale servers: Some years ago, tutorials for self-hosting your web site popped up all over the net; right now, where dedicated servers - but without RAID disks or service level agreement - are available for ridiculous prices, I'd love to see the same wave of tutorials for fail-over database clusters or distributed filesystems! Of course they exist, but it's still not trivial to set up such systems. Self-hosting -> self-clustering!
Posted Feb 28, 2018 19:42 UTC (Wed)
by admalledd (subscriber, #95347)
[Link] (1 responses)
Here, we have a few rows of racks in a co-location DC. These support all our 24/7 "base load", then if we have any large spike that cannot be processed within the DC it is shipped to our cloud environments where they scale out to a few thousand within a few minutes. Of course the "magic" to do this is platform/application specific, but they *do* exist for most anything in one form or another.
At least for our estimations, the staffing cost of cloud-vs-dc was near-to-nothing different for our size, and we only need one/two who specialize in pure hardware, the rest of OPS don't need to particularly care between. As mentioned it is really only when we start hitting that wall of "Intricate high-speed inter-node networking at $SIZE" that we then move excess burst to the cloud.
By no means is this a perfect answer, but from what little I have seen of others and talked to, it is an attractive solution if it can be made to fit.
(Disclaimer: I have not yet had a chance to follow any of the links for further details, so if this hybrid concept was already mentioned I apologize.)
Posted Feb 28, 2018 20:02 UTC (Wed)
by anarcat (subscriber, #66354)
[Link]
That said, one big problem with the cloud is when you start using custom extensions like Amazon's serverless stuff or Google's large datasets. Those are "heavy" in that they are a "gravity center" that pull services towards them and make it hard to find the "escape velocity" to leave the service again when you need to. You become dependent on those APIs or large datasets that cannot be abstracted away. So that's also something to be careful about when considering the cloud.
Posted Feb 28, 2018 21:55 UTC (Wed)
by dskoll (subscriber, #1630)
[Link] (2 responses)
The problem with cloud services is that while they have a low cost to get started, they have relatively high incremental costs as you add users compared to your own colocated hardware.
We looked at Amazon, but even for our relatively small company, we concluded it made more sense to use our own colocated hardware.
I do agree with the other poster who said emergency scaling up onto cloud instances can be a good way to handle sudden spikes in capacity. We haven't needed this yet, but we have the capability in our code to do that.
Posted Mar 1, 2018 8:02 UTC (Thu)
by Cyberax (✭ supporter ✭, #52523)
[Link] (1 responses)
The main attraction is that hardware management is Someone Else's Problem. This can not be overstated. If your AWS instance has a problem then simply stop it and then resume it to get it moved to a different hardware node. No messing around with remote hands in a DC or waiting for a replacement part to arrive.
Then there's a question of disaster preparedness. It's easy to run servers in several AWS regions. You'd be hard-pressed to do that using colocated servers.
And there are other goodies, like you can use EC2 Spot or Google Preemptible VMs to get dirt-cheap capacity if you need some number crunching. Or you can use AWS Lightsail as a replacement for dodgy VPS providers.
Now, AWS does have an Achilles heel - it's the high price of outbound data transfer. If you need to serve a lot of content, you might be better off running your own hosts and making your own interconnect agreements with tier ISPs.
Posted Mar 1, 2018 12:24 UTC (Thu)
by dskoll (subscriber, #1630)
[Link]
Posted Feb 28, 2018 22:42 UTC (Wed)
by yokem_55 (subscriber, #10498)
[Link] (1 responses)
Posted Mar 1, 2018 0:51 UTC (Thu)
by rahulsundaram (subscriber, #21946)
[Link]
Sure. Look at the salary range for an experienced sysadmin in the coastal areas of the U.S. Keep in mind, these are HCOL areas, so the numbers are inflated in general.
Posted Mar 1, 2018 9:39 UTC (Thu)
by madhatter (subscriber, #4665)
[Link]
Posted Mar 1, 2018 10:14 UTC (Thu)
by mjthayer (guest, #39183)
[Link] (8 responses)
Posted Mar 1, 2018 17:35 UTC (Thu)
by Cyberax (✭ supporter ✭, #52523)
[Link] (7 responses)
Do you think you'll be able to beat the economy of scale that AWS (or Azure) enjoys?
Posted Mar 2, 2018 7:07 UTC (Fri)
by mjthayer (guest, #39183)
[Link] (5 responses)
Doesn't that question also apply to doing it yourself and employing people full-time to look after your infrastructure exclusively, as the article describes? I would expect this way to be less expensive than that, whereby "you get what you pay for" applies.
And of course, having people you know physically in charge of the infrastructure has certain trust implications. Not to mention that you are then generally less of a target than AWS (though also less expert at dealing with attackers, though you can also choose how much expertise you need to pay for).
Posted Mar 2, 2018 19:51 UTC (Fri)
by Cyberax (✭ supporter ✭, #52523)
[Link] (4 responses)
I would say that $100k per month is too low to even _start_ thinking about moving to your own infrastructure. The article's author, for example, doesn't mention multi-region and multi-AZ (Availability Zone) deployments.
On AWS it's trivially easy to launch instances in multiple datacenters (AZs) or multiple geographic regions. Just click a button on the console and you're done.
But if you're spending $100k per month then you'll have to physically dispatch your engineer to a remote DC to set up your (likely) multi-rack infrastructure. Then you'll have to worry about your supply chain. If a server in Frankfurt fails, do you have a local contact there that can supply a replacement within at most 24 hours?
> And of course, having people you know physically in charge of the infrastructure has certain trust implications.
> Not to mention that you are then generally less of a target than AWS (though also less expert at dealing with attackers, though you can also choose how much expertise you need to pay for).
Posted Mar 4, 2018 23:40 UTC (Sun)
by giraffedata (guest, #1954)
[Link] (3 responses)
The only thing I think is missing from this product spectrum is renting you space in an IBM data center to place your own equipment.
I've heard many times that people are willing to pay a premium to have their own data center because of the risk that some other tenant of a public cloud will hack them. It seems like a low risk to me, but then we have news like Spectre/Meltdown where ostensibly a program running in Company A's AWS virtual machine can see the data in Company B's AWS virtual machine, and I can believe people are willing to pay that premium.
Posted Mar 5, 2018 2:18 UTC (Mon)
by Cyberax (✭ supporter ✭, #52523)
[Link] (2 responses)
And if you go to IBM for your software then you truly deserve what you'll get.
Posted Mar 6, 2018 16:49 UTC (Tue)
by gfernandes (subscriber, #119910)
[Link] (1 responses)
Hear! Hear!
Posted Mar 8, 2018 14:48 UTC (Thu)
by Wol (subscriber, #4433)
[Link]
:-)
Cheers,
Posted Mar 8, 2018 14:46 UTC (Thu)
by Wol (subscriber, #4433)
[Link]
For somebody like the Canadian Government, setting up their own private cloud would probably be a very good idea ... :-)
Cheers,
Posted Mar 1, 2018 16:15 UTC (Thu)
by david.a.wheeler (guest, #72896)
[Link]
I am tired of people just following the latest fad (because it's the latest fad), or holding onto an old way (just because it's the old way). There are usually pros and cons to different approaches, and people should approach technical decisions as an engineering trade-off (looking at issues like cost, time-to-develop, functionality, execution performance, reliability, security, maintainability, etc.). Look at the pros and cons, and then make the best decision for that particular situation.
Thanks for the article.
Posted Mar 1, 2018 21:30 UTC (Thu)
by csd (subscriber, #66784)
[Link] (1 responses)
Having worked in this space for a few years, the equation does tend to favor in-house if you compare machines-vs-machines (i.e. taking planned capacity instead of true utilization as the factor) but things change quickly if you take machines-vs-used-capacity - which is a truer comparison to make because one you buy and own while the other you rent as necessary. And if you take the mix of changing your 'lease period' and the discounts that cloud providers give you, you can move capacity you know you'll be using with a high degree of confidence to a longer lease (reserved instances in AWS-speak) at a much lower hourly rate while keeping the bursty capacity on a more flexible per-hour lease, while still having the option to wait-and-see until the demand actually shows up. So you end up with something like 30% on long-term lease, 30% on mid-term lease and the remaining 40% no lease (pay-as-you go). And revise the numbers and ratios constantly.
Posted Mar 4, 2018 23:08 UTC (Sun)
by giraffedata (guest, #1954)
[Link]
With cloud, all that is included in the hourly rate and because it's combined with the same margins for a thousand other tenants, it should increase the hourly rate by less than it increases the colo costs.
Posted Mar 2, 2018 23:44 UTC (Fri)
by kpfleming (subscriber, #23250)
[Link]
Posted Mar 4, 2018 23:14 UTC (Sun)
by giraffedata (guest, #1954)
[Link] (1 responses)
The fastest network cables (which include transceivers) cost over a thousand dollars, so it's easy to believe that when you have lots of them for every switch the cable costs are the majority of the networking cost. But they don't outweigh the costs of the servers and other hardware in the data center.
Posted Mar 5, 2018 1:26 UTC (Mon)
by anarcat (subscriber, #66354)
[Link]
That said, I wasn't referring to the cost of actual cabling hardware, but more the human cost of managing all those cables and faults. Most cables are actually pretty cheap, really, unless you go to end-to-end fiber or something. The real cost is, again, labor: labeling, documentation, coloring, DoS attacks and who knows what the heck is happening on that network... ;)
Posted May 29, 2019 17:47 UTC (Wed)
by yshemesh (guest, #132327)
[Link]
The true costs of hosting in the cloud
So before people drive themselves up the wall about Amazon here, I would invite everyone to keep in mind the guidelines here: "Please try to be polite, respectful, and informative, and to provide a useful subject line".
Or, in the words of the speaker:
"!holy wars"
I'm not here to start a holy war. There are true believers of the cloud, there are true believers of bare metal. We are not here to engage in heated discussions about pros and cons of those. I'm here to talk about money. That is the main objective of this talk: to estimate how much it's going to cost. And all the technical advantages and disadvantages, well there are other talks where those [have been covered].
Also, nitpickers will certainly argue with the math Dyachuk has come up with: that would also be missing the point. This is just one model: if you prefer Dell servers instead of Supermicro or Linode instead of AWS, yes, those prices will change. It's an example, take it with a grain of salt.
The true costs of hosting in the cloud
- System slowed down to an almost unusable state: 2 days for resolution (very fast contact, but a slow fix)
- Some ports became inaccessible (changes in routers and other infrastructure): 4 days for resolution, again with fast contact
The true costs of hosting in the cloud
hybrid setups are interesting
[snipped description of a hybrid cloud/colo setup] (Disclaimer: I have not yet had a chance to follow any of the links for further details, so if this hybrid concept was already mentioned I apologize.)
It has not, thanks for bringing this up. It's definitely something that was brought up in other talks at KubeCon, but it's something lots of people are still struggling with. In other articles about KubeCon, I mentioned how Kubernetes is one way to standardize those applications and allow cross-cloud migrations, or at least make those possible. I think it's why large cloud providers like Google, Amazon and Microsoft are embracing it: it provides an on-ramp to their services. And I think having the possibility of hybrid infrastructures like what you are proposing is probably the best, as it resolves the main problem with colocation, which is when the plan fails and you run out of capacity or you have catastrophic outages. The possibility of rebuilding in the cloud is an amazing fallback.
The true costs of hosting in the cloud
Which brings us to the sensitive question of staff costs; Dyachuk described those as "substantial". These costs are for the system and network administrators who are needed to buy, order, test, configure, and deploy everything. Evaluating those costs is subjective: for example, salaries will vary between different countries. He fixed the person yearly salary costs at $250,000 (counting overhead and an actual $150,000 salary) and accounted for three people on staff. Those costs may also vary with the colocation service; some will include remote hands and networking, but he assumed in his calculations that the costs would end up being roughly the same because providers will charge extra for those services.
Is there really anyone outside of the San Francisco Bay area making this much doing datacenter networking and sysadmin work? I mean, really?
The true costs of hosting in the cloud
>
> Do you think you'll be able to beat the economy of scale that AWS (or Azure) enjoys?
The true costs of hosting in the cloud
I work at AWS but I'd drunk the Cloud Computing brand of Kool-Aid long before joining.
I'd argue that the cost/benefit of Amazon abusing AWS to access competitors' infrastructure is simply ridiculous. Getting caught at it would result in a multi-billion-dollar loss of business. And there aren't that many secrets that important to steal.
AWS is huge. Even if someone hacks their way into the internal control plane then they probably won't be interested in targeting your company. There are more juicy targets out there.
I believe that is within IBM's range of products. IBM will sell you hardware and software to set up your own data center, will supply the labor to construct and/or operate your data center (running your own cloud or not), will rent you bare metal machines in an IBM data center, will rent you virtual machines in an IBM data center, or will sell you services such as a database running in an IBM data center.
The true costs of hosting in the cloud
:)
The true costs of hosting in the cloud
Wol
The true costs of hosting in the cloud
Wol
Correct answer for almost all engineering questions: It depends
The true costs of hosting in the cloud
- you are a smaller company, and you are peddling your product/solution to get an uptick in traffic. A physical deployment needs 1-6 months of advance planning, so you have to dream of how successful your product will be 6 months from now and start buying today, as opposed to turning on the VMs as demand actually knocks on your door. This alone can account for an absurd amount of unnecessary or too-early capital expenses, or, even worse, an under-capacity server farm that can't keep up with quickly growing demand and provides poor service to users.
- Seasonal changes - this is particularly true for storefronts which see a spike of 100% in Nov/Dec over the other 10 months of the year. A few years back I heard from someone from Ebay that 50% of their servers were pretty much off Jan-Oct...
For companies that run so big that burstiness becomes more like line noise, I can see this being true. It took Dropbox 8 years of strong growth before they decided to switch...
The true costs of hosting in the cloud
The author touched briefly on "demand management" but that turns out to be a crucial deciding factor in favor of the cloud
I thought the author stated this as a crucial factor, in that it is what pushed the cost of colo, as he developed it, over the cost of cloud. You have to buy bigger servers than you need because a) you might have underestimated your need; b) you might grow; and c) a server might break.
The true costs of hosting in the cloud
The article says cable costs are the majority of the hardware costs in a data center. I think that's a mistake - it refers to an article that says the majority of network hardware cost is cables.
The true costs of hosting in the cloud - cable cost
The article says cable costs are the majority of the hardware costs in a data center. I think that's a mistake - it refers to an article that says the majority of network hardware cost is cables.
True, that's a typo on my part.
Comment about the human-operation cost
One more important point is that with Kubernetes and the CNCF ecosystem, you can keep the number of high-touch SRE engineers to a minimum, so you can gain an even better competitive advantage with clouds.
