ECS is worth a mention
Posted Aug 23, 2022 20:18 UTC (Tue) by dw (guest, #12017)
Parent article: The container orchestrator landscape
I stopped looking at or caring about alternatives; ECS has just the right level of complexity, and it's a real shame nobody has found the time to do a free-software clone of its control plane.
Posted Aug 23, 2022 21:22 UTC (Tue) by beagnach (guest, #32987)
Posted Aug 23, 2022 22:34 UTC (Tue) by k8to (guest, #15413)
It's funny: when "open source" meant Linux and Samba to me, it seemed like a world of down-to-earth implementations that might be clunky in some ways but were focused on comprehensible goals. Now, in a world of Kubernetes, Spark, and Solr, I associate it more with engineer-created balls of hair that need specialists to keep them working. More necessary evils than amplifying enablers.
Posted Aug 23, 2022 23:14 UTC (Tue) by dw (guest, #12017)
As for ECS lock-in, the time saved by a one-liner SSM deploy of on-prem nodes easily covers the risk of having to port container definitions to pretty much any other system at some future date. Optimistically, assuming three days of one person's time to set up a local k8s, ECS offers about 450 node-months before reaching breakeven (450 / 5-node cluster = 90 months, much longer than many projects last before reaching the scrapheap). Of course, ECS setup isn't completely free either, but relatively speaking it may as well be.
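A minimal sketch of that breakeven arithmetic in Python; the day rate and per-node-month ECS premium below are hypothetical figures, chosen only to show how a number in the neighborhood of 450 node-months could fall out. Substitute your own costs.

    # Breakeven sketch: how long a cluster must run before the time saved by
    # skipping a self-hosted Kubernetes setup is eaten by ECS's running premium.
    # Both dollar figures are assumptions for illustration, not quoted prices.
    setup_days = 3            # person-days to stand up a local k8s cluster
    day_rate = 1200           # assumed cost of one person-day, in dollars
    ecs_premium = 8           # assumed extra ECS cost per node-month, in dollars

    setup_cost = setup_days * day_rate                    # 3600
    breakeven_node_months = setup_cost / ecs_premium      # 450.0

    cluster_nodes = 5
    breakeven_months = breakeven_node_months / cluster_nodes   # 90.0
    print(f"{breakeven_node_months:.0f} node-months, "
          f"{breakeven_months:.0f} months for a {cluster_nodes}-node cluster")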
Posted Aug 25, 2022 1:59 UTC (Thu) by milesrout (subscriber, #126894)
For most people it still is. People who just run things normally, the way they always have, simply carry on. You don't hear from them because there's nothing to blog about; it's business as usual. People think that Kubernetes and Docker and that whole "ecosystem" are far more prevalent than they really are, because when you use such overcomplicated enterpriseware you inevitably have issues, and those get talked about. There's just nothing to blog about when it comes to running a few servers with nginx reverse-proxying some internet daemon. It Just Works.
Posted Aug 24, 2022 1:12 UTC (Wed) by rjones (subscriber, #159862)
One of the problems with self-hosting Kubernetes is that the typical, naive approach mixes the Kubernetes API components (API server, scheduler, etcd, etc.) with infrastructure components (networking, storage, ingress controllers, etc.) and with applications, all on the same set of nodes.
So you have all these containers operating at different levels mixed together, which means the "blast radius" for the cluster is very bad. If you mess up a network controller configuration, you can take your Kubernetes API offline. If an application freaks out, it can take your cluster offline. A bad deploy or a misbehaving application can exhaust memory, which then takes out your storage, and so on.
This makes upgrades irritating, difficult, and full of pitfalls, and leaves the cluster very vulnerable to misconfiguration.
You can mitigate these issues by separating out 'admin' nodes from 'etcd', 'storage', and 'worker' nodes. This greatly reduces the chances of outages and makes management easier, but it also adds a lot of extra complexity and setup. That is a lot of configuring and messing around if you just want to host a 1-5 node Kubernetes cluster for a personal lab or a specific project.
With k0s (and similar approaches with k3s and RancherOS) you have a single Unix-style service that provides the Kubernetes API components. You can cluster it if you want, but the simplest setup just uses SQLite as the backend, which works fine for small or single-use clusters. This runs in a VM or small machine separate from the rest of the cluster. Even if it's a single point of failure, it's not too bad: the cluster will happily hum right along while you reboot your k0s controller node.
In this way, managing the cluster is much more like how an AWS EKS or Azure AKS cluster works: there, the API services are managed by the cloud provider, separately from what you manage.
This is a massive improvement over what you may have experienced with something like OpenShift, Kubespray, or even really simple kubeadm-based deploys, and most other approaches. It may not seem like a big deal, but for what most people want from a self-hosted Kubernetes cluster, I think it is.
Also, I think that having numerous smaller k8s clusters is preferable to having very large multi-tenant clusters. Just having things split up solves a lot of potential issues.
Posted Aug 24, 2022 6:30 UTC (Wed) by dw (guest, #12017)
The problem with Kubernetes starts and ends with its design: it's horrible to work with in concept, never mind any particular implementation.
Posted Aug 24, 2022 16:05 UTC (Wed) by sbheinlein (guest, #160469)
That's enough of a mention for me.
Posted Aug 25, 2022 4:45 UTC (Thu) by samuelkarp (subscriber, #131165)
That would cover EKS, Amazon's hosted Kubernetes offering. ECS isn't Kubernetes.
