
ECS is worth a mention

Posted Aug 23, 2022 20:18 UTC (Tue) by dw (guest, #12017)
Parent article: The container orchestrator landscape

There are a few of us out there who'd prefer ECS at all costs after experiencing the alternatives. Much simpler control plane, AWS-grade backwards compatibility, no inscrutable hypermodularity, and you can run it on your own infra so long as you're happy forking over $5/mo. per node for the managed control plane.

I stopped looking at or caring about alternatives; ECS has just the right level of complexity, and it's a real shame nobody has found the time to do a free software clone of its control plane.



ECS is worth a mention

Posted Aug 23, 2022 21:22 UTC (Tue) by beagnach (guest, #32987) [Link]

Agreed. I feel our team dodged a bullet by opting for ECS over K8S for our fairly straightforward web application.

ECS is worth a mention

Posted Aug 23, 2022 22:34 UTC (Tue) by k8to (guest, #15413) [Link] (2 responses)

This is a rough one. Getting locked into the Amazon ecosystem, which is full of overcomplicated and difficult services, could hurt in the long term. But container orchestration is often also a huge tarpit of wasted time struggling with overcomplexity.

It's funny: when "open source" meant Linux and Samba to me, it seemed like a world of down-to-earth implementations that might be clunky in some ways but were focused on comprehensible goals. Now, in a world of Kubernetes, Spark, and Solr, I associate it more with engineer-created balls of hair that need specialists to keep them working. More necessary evils than amplifying enablers.

ECS is worth a mention

Posted Aug 23, 2022 23:14 UTC (Tue) by dw (guest, #12017) [Link]

"Open source" stratified long ago to incorporate most of what we used to consider enterprise crapware as the default style of project that gets any exposure. They're still the same teams of 9-5s pumping out garbage, it's just that the marketing and licenses changed substantially. Getting paid to "work on open source" might have had some edge 20 years ago, but I can only think of 1 or 2 companies today doing what I'd consider that to have meant in the early 2000s.

As for ECS lock-in, the time saved by a one-liner SSM deploy of on-prem nodes easily covers the risk of having to port container definitions to pretty much any other system at some future date. Optimistically, assuming 3 days of one person's time to set up a local k8s, ECS offers about 450 node-months before reaching breakeven (450 node-months / a 5-node cluster = 90 months, much longer than many projects last before reaching the scrapheap). Of course ECS setup isn't completely free, but relatively speaking it may as well be considered free.
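
To spell out the arithmetic behind that breakeven figure: the labour rate isn't stated above, and roughly $750/day is the assumption that makes the numbers line up with the $5/node/month control-plane fee mentioned earlier in the thread.

    3 days of setup time at ~$750/day     ≈ $2,250 up-front cost for a local k8s
    $2,250 at $5 per node per month       = 450 node-months of managed control plane
    450 node-months on a 5-node cluster   = 90 months (7.5 years) to break even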

ECS is worth a mention

Posted Aug 25, 2022 1:59 UTC (Thu) by milesrout (subscriber, #126894) [Link]

>it seemed like a world of down to earth implementations that might be clunky in some ways but were focused on comprehensible goals

For most people it still is. People who just run things the way they always have carry on as before; you don't hear from them because there's nothing to blog about. It's business as usual. People think that kubernetes and docker and that whole "ecosystem" are far more prevalent than they really are, because when you use such overcomplicated enterpriseware you inevitably have issues, and those get talked about. There's just nothing to blog about when it comes to running a few servers with nginx reverse-proxying some internet daemon. It Just Works.
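
For scale, the kind of setup described there (nginx in front of some local daemon) is roughly this much configuration; the server name and upstream port are placeholders:

    server {
        listen 80;
        server_name example.org;               # placeholder host name

        location / {
            # hand everything to the local daemon; the port is a placeholder
            proxy_pass http://127.0.0.1:8080;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }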

ECS is worth a mention

Posted Aug 24, 2022 1:12 UTC (Wed) by rjones (subscriber, #159862) [Link] (1 responses)

If you don't want to be joined at the hip to AWS, a possibly better solution is k0s.

One of the problems with self-hosting Kubernetes is that the typical approach naively mixes the Kubernetes API components (API server/scheduler/etcd/etc.) with infrastructure components (networking/storage/ingress controllers/etc.) and with applications, all on the same set of nodes.

So you have all these containers operating at different levels all mixing together, which means that the "blast radius" for the cluster is very bad. If you mess up a network controller configuration you can take your Kubernetes offline. If an application freaks out, it can take your cluster offline. Memory could be exhausted by a bad deploy or a misbehaving application, which then takes out your storage, and so on.

This makes upgrades irritating, difficult, and full of pitfalls, and it leaves the cluster very vulnerable to misconfiguration.

You can mitigate these issues by separating out 'admin' nodes from 'etcd', 'storage', and 'worker' nodes. This greatly reduces the chances of outages and makes management easier, but it also adds a lot of extra complexity and setup. That is a lot of configuring and messing around if all you want is a 1-5 node Kubernetes cluster for a personal lab, a specific project, or whatever.
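
As a rough sketch of what that separation means in practice (node names and the "dedicated" label are hypothetical; the control-plane taint key is the standard one), it comes down to tainting and labelling nodes and then adding matching tolerations and nodeSelectors to everything allowed to land on them:

    # Keep ordinary workloads off the control-plane node
    kubectl taint nodes admin-1 node-role.kubernetes.io/control-plane=:NoSchedule

    # Reserve a node for storage components; only pods that carry the matching
    # toleration and nodeSelector will be scheduled there
    kubectl taint nodes storage-1 dedicated=storage:NoSchedule
    kubectl label nodes storage-1 dedicated=storage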

With k0s (and similar approaches like k3s and RancherOS) you have a single Unix-style service that provides the Kubernetes API components. You can cluster it if you want, but the simplest setup just uses SQLite as the backend, which works fine for small or single-use clusters. That service runs in a separate VM or small machine from the rest of the cluster, and even as a single point of failure it's not too bad: the cluster will happily hum right along while you reboot your k0s controller node.
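
For reference, a minimal sketch of that layout could look like the following. The config keys and flags are from memory of the k0s docs and should be double-checked; in particular, the kine/SQLite backend is an assumption about how the "simplest setup" is wired up, and the file paths and token handling are placeholders.

    # /etc/k0s/k0s.yaml on the small controller VM -- API components only
    apiVersion: k0s.k0sproject.io/v1beta1
    kind: ClusterConfig
    spec:
      storage:
        type: kine      # kine defaults to a local SQLite database instead of etcd

    # On the controller VM:
    k0s install controller --config /etc/k0s/k0s.yaml
    k0s start
    k0s token create --role=worker > worker.token

    # On each worker node (token copied over by whatever means you like);
    # the controller can later be rebooted without taking running workloads down:
    k0s install worker --token-file /path/to/worker.token
    k0s start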

In this way, managing the cluster is much more like how an AWS EKS or Azure AKS cluster works: the API services are managed by the cloud provider, separately from what you manage.

This is a massive improvement over what you may have experienced with something like OpenShift, Kubespray, even really simple kubeadm-based deploys, and most other approaches. It may not seem like a big deal, but for what most people want from a self-hosted Kubernetes cluster I think it is.

Also, I think that having numerous smaller k8s clusters is preferable to having one very large multi-tenant cluster. Just having things split up solves a lot of potential issues.

ECS is worth a mention

Posted Aug 24, 2022 6:30 UTC (Wed) by dw (guest, #12017) [Link]

I've spent enough time de-wtfing k3s installs that had gone without a reboot just long enough for something inscrutable to break that I'd assume k0s is a non-starter for much the same reason. You can't really fix a stupid design by jamming all the stupid components together more tightly, although I admit it at least improves the sense of manageability.

The problem with Kubernetes starts and ends with its design; it's horrible to work with in concept, never mind in any particular implementation.

ECS is worth a mention

Posted Aug 24, 2022 16:05 UTC (Wed) by sbheinlein (guest, #160469) [Link] (1 responses)

> "seemingly every tech company of a certain size has its own distribution and/or hosted offering to cater to enterprises"

That's enough of a mention for me.

ECS is worth a mention

Posted Aug 25, 2022 4:45 UTC (Thu) by samuelkarp (subscriber, #131165) [Link]

That would cover EKS, Amazon's hosted Kubernetes offering. ECS isn't Kubernetes.

