The container orchestrator landscape
Posted Aug 24, 2022 6:55 UTC (Wed) by dw (guest, #12017)
In reply to: The container orchestrator landscape by rjones
Parent article: The container orchestrator landscape
Take the network abstraction as a simple example: it's maybe 20%+ of the whole Kubernetes conceptual overhead. K8s more or less mandates some kind of mapping at the IP and naming layers, so at a minimum you usually have some variation of a custom DNS server and a few hundred ip/nf/xdp rules or whatnot to implement routing. Docker's solution to the same problem was simply a convention for dumping network addresses into environment variables. No custom DNS, no networking nonsense.
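To make the contrast concrete, here is a minimal sketch of the two discovery styles. The variable name follows Docker's old links convention (`<ALIAS>_PORT_<port>_<proto>_ADDR`); the injected address is simulated rather than coming from a real daemon, and the service name in the comment is just an illustrative example.

```python
import os

# Docker's legacy "links" convention: the daemon injected peer addresses
# straight into the environment at container start. Simulated here.
os.environ["REDIS_PORT_6379_TCP_ADDR"] = "172.17.0.2"

# The consuming application just reads the variable -- no resolver involved.
addr = os.environ["REDIS_PORT_6379_TCP_ADDR"]

# Kubernetes instead has the application resolve a service name (e.g.
# "redis.default.svc.cluster.local") through the cluster DNS server, which
# points at a virtual IP implemented by kube-proxy's packet-rewrite rules.
print(addr)
```

The trade-off is that the env-var approach is frozen at container start, while DNS can track a moving backend; that difference is much of what the rest of this thread argues about.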
It's one of a thousand baked-in choices made in k8s that really didn't need to be that way. The design itself is bad.
No conversation of Kubernetes complexity is complete without mention of their obsolescent-by-design approach to API contracts. We've just entered a period where Ingresses went from marked beta, to stable, to about-to-be-deprecated by gateways. How many million lines of YAML toil across all k8s users needed trivial updates when the interface became stable, and how many million more will be wasted by the time gateways are fashionable? How long will gateways survive? That's a meta-design problem, and a huge red flag. Once you see it in a team you can expect it time and time again. Not only is it overcomplicated by design, it's also quicksand, and nothing you build on it can be expected to have any permanence.
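The "trivial updates" in question looked roughly like this when Ingress graduated from `networking.k8s.io/v1beta1` to `v1`: the backend reference was restructured and `pathType` became mandatory, so every manifest needed mechanical edits (resource names here are hypothetical).

```yaml
# Before (networking.k8s.io/v1beta1):
#         backend:
#           serviceName: web
#           servicePort: 80
# After (networking.k8s.io/v1):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix        # newly required field
        backend:
          service:
            name: web
            port:
              number: 80
```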
Posted Aug 25, 2022 18:44 UTC (Thu) by Depereo (guest, #104565) [Link]
It's quite frustrating to go from the infrastructure world of VMs, which are extremely backwards- and forwards-compatible, to Kubernetes, where the necessary major upgrades every few months will break several deployment pipelines, deprecate APIs, or do various other things that force your clients to scramble endlessly to 'keep up'. And you're right, it's usually to do with network requirements (or sometimes storage, which is somewhat related to network design anyway).
Committing to deployment on k8s is a commitment to a much higher degree of ongoing updates, and probably more unexpected deployment issues, than I'm used to with, for example, virtual machine orchestration. Unless you're at a certain and very large size, I have come to think it's not worth it at all.
Posted Aug 26, 2022 1:38 UTC (Fri) by thockin (guest, #158217) [Link]
Last I looked in depth, Docker had a DNS server built in, too. Publishing IPs via env vars is a TERRIBLE solution for a bunch of reasons. DNS is better, but still has a lot of historical problems (and yeah, kube sort of tickles it wrong sometimes). DNS + VIP is much better, which is what k8s implements. Perfect? No. But pretty functional.
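The DNS + VIP combination being defended here is an ordinary ClusterIP Service; a minimal sketch (selector and name are hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: redis
spec:
  selector:
    app: redis          # pods carrying this label back the service
  ports:
  - port: 6379          # port exposed on the virtual IP
    targetPort: 6379    # port on the backing pods
```

The cluster DNS publishes a stable name for this service (in the default namespace, `redis.default.svc.cluster.local`) that resolves to a stable virtual IP, and kube-proxy's rules rewrite traffic for that VIP to whichever pods currently match the selector, so backends can come and go without clients re-resolving.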
> No conversation of Kubernetes complexity is complete without mention of their obsolescent-by-design approach to API contracts. We've just entered a period where Ingresses went from marked beta, to stable, to about-to-be-deprecated by gateways.
I know of no plan to formally deprecate Ingress, and I would be the approver of that, so....FUD. Also, deprecate != EOL. We have made a public statement that we have NO PLANS to remove GA APIs. Perhaps some future circumstance could cause us to re-evaluate that, but for now, no.
> How many million lines of YAML toil across all k8s users needed trivial updates when the interface became stable
The long-beta of Ingress is a charge I will accept. That sucked and we have taken action to prevent that from ever happening again.
> and how many million more will be wasted by the time gateways are fashionable?
Nobody HAS to adopt gateway, but hopefully they will want to. It's a much more functional API than Ingress.
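For comparison, the equivalent of the earlier Ingress rule under the Gateway API splits routing out into an HTTPRoute attached to a shared Gateway; a sketch with hypothetical names:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: web
spec:
  parentRefs:
  - name: shared-gateway   # Gateway owned by the cluster operator
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /
    backendRefs:
    - name: web
      port: 80
```

The split lets cluster operators own the Gateway (listeners, TLS, addresses) while application teams own their routes, which is the main functional gain over Ingress.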
> How long will gateways survive? That's a meta-design problem, and a huge red flag.
APIs are forever. That's how long. Once it hits GA, we will keep supporting it. No FUD required.
> nothing you build on it can be expected to have any permanence.
We have a WHOLE LOT of evidence to the contrary. If you have specific issues, I'd love to hear them.
I don't claim kubernetes is perfect or fits every need, but you seem to have had a bad experience that is not quite the norm.