LWN: Comments on "The container orchestrator landscape" https://lwn.net/Articles/905164/ This is a special feed containing comments posted to the individual LWN article titled "The container orchestrator landscape". en-us Tue, 02 Sep 2025 09:31:59 +0000 Tue, 02 Sep 2025 09:31:59 +0000 https://www.rssboard.org/rss-specification lwn@lwn.net The container orchestrator landscape https://lwn.net/Articles/910252/ https://lwn.net/Articles/910252/ Klavs <div class="FormattedComment"> Personally the operator concept in k8s is a HUGE thing for me.<br> It allows me to deliver, to ALL users, standard apps such as databases (postgresql, mongodb etc.) and other types of services, which can be very complicated to deliver in a scalable and highly-available manner, while enabling me to ensure that backup and recovery is HANDLED - and I need only have ONE set of procedures/documentation for handling this - and it works for ALL users of this (as we then use this operator for ALL places where we need f.ex. postgresql).<br> <p> It saves me and my colleagues so much time - and actually gives huge peace of mind, knowing that our growing infrastructure is not beyond us: we can actually do a recovery test that we have a decent chance of believing will work for all services we operate. And we&#x27;re a small company. I&#x27;ve consulted for many larger corps - and k8s, to me, enables the delivery of a &quot;pre-determined but flexible enough&quot; solution, to enable automatic consumption of &quot;operations services&quot; by development teams - where the ops team has an actual chance of ensuring the ops quality is maintained.<br> <p> As opposed to the old world of just handing out VMs and really &quot;hoping for the best and otherwise blaming the developer teams&quot;.<br> <p> It is definitely complex though, and you should definitely be aware of your choices and their cost in complexity.<br> <p> </div> Tue, 04 Oct 2022 09:21:13 +0000 Does anybody use `kubectl apply --kustomize`? https://lwn.net/Articles/907687/ https://lwn.net/Articles/907687/ Lennie <div class="FormattedComment"> Seems like we still need another approach because we haven&#x27;t gotten the nuance exactly right.<br> <p> The newest approach seems to be kpt. Any idea whether they are on the right track?<br> </div> Sat, 10 Sep 2022 16:47:02 +0000 The container orchestrator landscape https://lwn.net/Articles/907110/ https://lwn.net/Articles/907110/ kleptog <div class="FormattedComment"> I was pretty enthusiastic about Swarm in the beginning as it has a good model for managing containers. But if you&#x27;re deploying production applications with it you get to a point that others have noted: all the management of permanent resources is done in Swarm itself. And the API is just the Docker API with no authentication.<br> <p> So if you have a complicated application where the resources (say networks, or services) depend on configuration settings, you have to write a kind of wrapper which reads the configuration and then uses that to update the Swarm configuration. And that configuration is stored separately from Swarm itself. This is annoying and error-prone. Because the tool to do this is complex, you get the situation where you distribute the tool in a container and then start it up passing the Docker control socket in.<br> <p> So Swarm can work well if your application is simple enough to deploy via a Docker Compose file (see the sketch just below). 
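A minimal Compose/stack file of the kind referred to here might look like the following sketch; it is an illustration only, and the service names, images, network, and secret are invented. A file like this would be deployed onto a Swarm cluster with `docker stack deploy -c docker-compose.yml mystack`.

```yaml
# Hypothetical two-service stack; names and images are placeholders.
version: "3.8"
services:
  web:
    image: example/web:1.4
    ports:
      - "80:8080"
    deploy:
      replicas: 3              # Swarm spreads three replicas across the cluster
      restart_policy:
        condition: on-failure
    networks:
      - app-net
  db:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD_FILE: /run/secrets/db_password
    secrets:
      - db_password
    networks:
      - app-net
networks:
  app-net:
    driver: overlay            # spans all Swarm nodes
secrets:
  db_password:
    external: true             # created beforehand with `docker secret create`
```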
But if you&#x27;re getting to the point where you&#x27;re thinking &quot;I need to make a tool to generate the Compose file for me&quot; you&#x27;re basically at the point where you need something more powerful than Swarm can offer.<br> <p> That said: for our CI environment, and local testing Swarm works fine. But for production it&#x27;s too weak. Fortunately for containers, they don&#x27;t care what tool they&#x27;re running under.<br> </div> Sun, 04 Sep 2022 11:35:47 +0000 what actually is orchestration? https://lwn.net/Articles/907096/ https://lwn.net/Articles/907096/ rra <div class="FormattedComment"> We do some very basic automated pull-request testing that mostly amounts to installing the whole environment on minikube inside CI and making sure everything installs and there are no syntax errors. Ideally we would then run some end-to-end tests on that environment; right now, we don&#x27;t, and instead rely on a cloud-based integration cluster and automated integration testing.<br> <p> Mostly for development beyond the basic unit test sort of stuff we use a dev cluster in the cloud (on Google Kubernetes Engine to be precise). It&#x27;s just easier and less fiddly than a local install, and GKE is rock-solid. That of course comes with a monetary cost, although IMO it&#x27;s pretty small compared to the cost of developers. But not being able to test locally easily is a bit of a gap that does occasionally cause problems, and while minikube is in theory an answer to this, in practice it&#x27;s tricky to get all the pieces working happily on a laptop for typical local development (particularly on modern macOS, which a lot of people like to use but which adds the wrinkle of not being x86-based).<br> <p> In terms of reference material, honestly I mostly just read the Kubernetes reference and tutorial pages on kubernetes.io (and of course implementation-specific guidance for specific cloud providers), plus the Helm documentation. But I joined a team that was already doing Kubernetes, so a lot of my training was from watching what other people were doing and asking questions, so I&#x27;m maybe not the best person to ask about initial reference material.<br> <p> We use Argo CD to automate our Kubernetes deployment and maintenance, and I cannot recommend it highly enough. It makes it so much easier to automate the deployment process end-to-end and then be able to easily see what the cluster is doing and debug problems (and upgrade things, which is very important since we have a very fast development pace and are usually updating five or more times a week). I&#x27;ll fall back on kubectl for some specific problems, but the Argo CD interface is usually more useful, and I say this as someone who almost always prefers command lines to any graphical tools.<br> </div> Sat, 03 Sep 2022 22:37:36 +0000 what actually is orchestration? https://lwn.net/Articles/907091/ https://lwn.net/Articles/907091/ rorycl <div class="FormattedComment"> <font class="QuotedText">&gt; Kubernetes abstracts away the differences in hosting environments, so that we can develop the hosting platform targeting Kubernetes and anyone who can deploy Kubernetes can deploy a copy of it. 
It works exactly the same on a cloud Kubernetes environment as it does in a private data center, with only minor changes required to customize things like underlying storage methods.</font><br> <p> <font class="QuotedText">&gt; [Kubernetes provides an] orchestration layer [that] lets you define very complex ecosystems of related applications in declarative code and deploy it in a hosting-agnostic way...</font><br> <p> Thank you for these very helpful descriptions of the benefits of Kubernetes, particularly its use across heterogeneous environments at scale and your comments about the time it has saved your team.<br> <p> I would be grateful to know how your team deals with local development and if it uses automated testing with Kubernetes, possibly as part of continuous integration workflows. It would also be great to know what reference material you and your team have found most useful in your implementation of Kubernetes, particularly from a conceptual perspective.<br> <p> <p> <p> </div> Sat, 03 Sep 2022 21:53:26 +0000 Does anybody use `kubectl apply --kustomize`? https://lwn.net/Articles/907085/ https://lwn.net/Articles/907085/ brianeray <div class="FormattedComment"> Exactly the background and advice I needed. Many thanks.<br> </div> Sat, 03 Sep 2022 18:29:03 +0000 what actually is orchestration? https://lwn.net/Articles/907081/ https://lwn.net/Articles/907081/ rra <div class="FormattedComment"> <font class="QuotedText">&gt; Based on my admittedly inexpert research it is difficult to see how the concept of devops orchestration brings together the idea of creating a performance from containers in a way that makes sense in different cloud environments and equally in one&#x27;s own racks.</font><br> <p> It&#x27;s interesting that you would say this because this is exactly the problem that my job solves with Kubernetes.<br> <p> Our mission is to provide a reusable platform for scientific astronomy, initially targeted at the needs of our specific project, but hopefully generalizable to similar problems. This is a complex set of interrelated services and, perhaps more importantly, underlying infrastructure that handles such things as authentication and authorization and makes it easy to deploy additional astronomy services. And, vitally, we have to be able to run copies of the entire platform both in the cloud and in private data centers. The team I&#x27;m part of currently maintains six separate deployments, half in the cloud and half on prem, in addition to developing the infrastructure for the platform as a whole, and the same underlying infrastructure is deployed in three other on-prem data centers by other groups.<br> <p> We went all in for Kubernetes and it was the best decision we ever made and the only way in which any of this is possible. Kubernetes abstracts away the differences in hosting environments, so that we can develop the hosting platform targeting Kubernetes and anyone who can deploy Kubernetes can deploy a copy of it. It works exactly the same on a cloud Kubernetes environment as it does in a private data center, with only minor changes required to customize things like underlying storage methods. It gives us a fairly tight interface and set of requirements for any new hosting environment: we can just say &quot;give us Kubernetes of at least this version&quot; with a few other requirements, and then we know our entire platform will deploy and work. 
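To make the point about "only minor changes required to customize things like underlying storage methods" concrete, here is a minimal sketch of the kind of declarative object involved; the names are hypothetical, and the storageClassName value is exactly the sort of per-site knob that differs between a cloud provider and an on-prem cluster.

```yaml
# Hypothetical PersistentVolumeClaim; typically only storageClassName
# needs to change between hosting environments.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: catalog-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: standard   # e.g. a cloud SSD class on GKE, Ceph/RBD on prem
  resources:
    requests:
      storage: 100Gi
```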
There is absolutely no way that we could have done this as quickly or consistently, with a very tiny team, while trying to deploy directly on Debian, or even using something like Terraform. We need the additional layer of abstraction and it saves us an absolutely IMMENSE amount of work and debugging.<br> <p> I&#x27;m saying this as someone who has been in this industry for approaching 30 years now and has done just about every type of system administration from hand-compiled GNU software trees in shared file systems through hand-rolled configuration management systems, Puppet, Chef, AWS at scale, and proprietary container orchestration systems; I&#x27;m not some neophile who has no experience with other ways of doing things. Kubernetes has its problems to be sure, and sometimes can be quite frustrating, but that orchestration layer lets you define very complex ecosystems of related applications in declarative code and deploy it in a hosting-agnostic way and that solves a critical problem for us.<br> </div> Sat, 03 Sep 2022 17:38:13 +0000 Does anybody use `kubectl apply --kustomize`? https://lwn.net/Articles/907080/ https://lwn.net/Articles/907080/ rra <div class="FormattedComment"> We use kustomize for some things because it&#x27;s much simpler, but it has major limitations due to that simplicity.<br> <p> The way I would explain it is that, when using kustomize, you write your Kubernetes manifests directly, and then you use kustomize to &quot;poke&quot; changes into them. It&#x27;s akin to maintaining a core set of resources and then a set of diffs that you layer on top. As such, it has the problem of all diff systems: it&#x27;s great and very convenient and easy to understand if the diffs you need are small, but it quickly becomes unwieldy if there are a lot of differences between deployments.<br> <p> Because of that, if you&#x27;re maintaining a big wad of flexible open source software (think Grafana, Redis, InfluxDB, that sort of thing), you are not going to have your downstream use kustomize; it would be a nightmare.<br> <p> Helm can be used the same way, but I think it&#x27;s best thought of as having an entirely different philosophy: you write a Helm chart that deploys your thing, you pick and choose exactly where that deployment can be customized, and you present an API to the consumers of your chart. (This API is in the form of your values.yaml file, which enumerates all of the supported customization points). Then, your downstream provides their own values.yaml to selectively override the default values, and Helm assembles the result. This has all the advantages that an API always has: you can hide complexity and separate concerns, which is much harder to do with kustomize (and any other patch system). And it has the disadvantages that any API has: more flexibility means more complexity, you have to learn the templating system (which is moderately annoying and tends to produce hideously confusing error messages), and you have to think hard about the API to provide a good one (and mostly people provide bad APIs with too many untested options).<br> <p> Overall, having used both extensively, I went all in for Helm and haven&#x27;t regretted it. I really like the clean separation of concerns of a proper API. 
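As a rough illustration of the two philosophies being compared (all file paths, names, and values below are invented for the example): with kustomize, a downstream deployment keeps the base manifests and layers patches on top of them, while with Helm the chart exposes a values.yaml API and the consumer overrides only the supported knobs.

```yaml
# kustomize style: an overlay kustomization.yaml that patches a base Deployment
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
patches:
  - target:
      kind: Deployment
      name: example-app
    patch: |-
      - op: replace
        path: /spec/replicas
        value: 3
---
# Helm style: the consumer's values.yaml override, interpreted by the chart's
# templates (which expose replicaCount and image.tag as the chart's API)
replicaCount: 3
image:
  tag: "1.2.3"
```

The first would be applied with `kubectl apply -k overlays/prod/`, the second with something like `helm install example-app ./chart -f my-values.yaml`; choosing between them is essentially the diff-versus-API tradeoff described in this comment.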
But using kustomize is not wrong, and for smaller-scale projects than the fairly complex Kubernetes-based ecosystem I work on it may be the right choice.<br> </div> Sat, 03 Sep 2022 17:22:58 +0000 The container orchestrator landscape https://lwn.net/Articles/907037/ https://lwn.net/Articles/907037/ Cyberax <div class="FormattedComment"> Integration with load balancers for traffic ingress and management of stateful resources are among the most problematic.<br> </div> Sat, 03 Sep 2022 05:00:54 +0000 what actually is orchestration? https://lwn.net/Articles/907010/ https://lwn.net/Articles/907010/ Cyberax <div class="FormattedComment"> <font class="QuotedText">&gt; For a small company turning over less than, say, $10m and til now able to work quite comfortably running services on dedicated machines without a dedicated sysadmin/devops team and enjoying the simplicity and stability of Debian, the &quot;orchestration&quot; component seems to be the fly in the ointment of containerisation.</font><br> <p> There is no simplicity in Debian on bare metal if you want to deploy complicated applications there. Especially for deployments for more than one machine (e.g a clustered server).<br> <p> Containers, first and foremost, simplify _deployment_ of applications. And this very much includes small companies.<br> </div> Sat, 03 Sep 2022 03:22:15 +0000 The container orchestrator landscape https://lwn.net/Articles/907004/ https://lwn.net/Articles/907004/ rorycl <div class="FormattedComment"> <font class="QuotedText">&gt; Because Swarm is too simplistic</font><br> <p> ...for what, for example?<br> <p> (I&#x27;ve made a longer comment below, by the way.)<br> </div> Fri, 02 Sep 2022 14:47:45 +0000 what actually is orchestration? https://lwn.net/Articles/906999/ https://lwn.net/Articles/906999/ rorycl <div class="FormattedComment"> On further thought I think there are some missing elements to this article.<br> <p> When I think of &quot;orchestration&quot; as a word outside of its devops usage I think of scoring music for band or orchestra, with the implicit idea that the resulting performance will be conducted by a Herbert von Karajan type figure who helps balance the strings with the brass, percussion with woodwind.<br> <p> Based on my admittedly inexpert research it is difficult to see how the concept of devops orchestration brings together the idea of creating a performance from containers in a way that makes sense in different cloud environments and equally in one&#x27;s own racks.<br> <p> For a small company turning over less than, say, $10m and til now able to work quite comfortably running services on dedicated machines without a dedicated sysadmin/devops team and enjoying the simplicity and stability of Debian, the &quot;orchestration&quot; component seems to be the fly in the ointment of containerisation.<br> <p> Containerisation itself offers considerable benefits through modularisation and automated testing and deployment. What is very alluring about orchestration tech is that it would allow us to turn a group of servers into a virtual box using overlay networks, with neat scaling features. But that isn&#x27;t so different from running our proxies to address certain servers rather than others. The overlay tech would allow us to more easily drop and add servers, for example to upgrade machine firmware or OS, but there seem few other advantages at the cost of considerably more complexity. 
Features often seen as orchestration features, such as secrets or configuration management, can be managed fine in the &quot;spinning rust&quot; environment we currently use (ironically often using tools such as vault or etcd).<br> <p> Another major issue that hasn&#x27;t been discussed is how data is handled. What is sometimes called &quot;persistent storage&quot; in the containerisation world, as if it was a side issue rather than the main point of providing SaaS in the first place, seems to have an uneasy relationship with orchestration. Does Herbert ensure that we didn&#x27;t just mount the postgres 15 container on the postgres 12 mount point? The article doesn&#x27;t cover this aspect.<br> <p> So to this luddite it seems that orchestration is really just different approaches to using largely proprietary systems in the way those proprietary systems were made to be sold to you. It makes about as much sense as the software programmer I was interviewing when I asked him about his python skills and he responded &quot;I don&#x27;t know python, but I&#x27;m good with django&quot;.<br> </div> Fri, 02 Sep 2022 14:27:30 +0000 Does anybody use `kubectl apply --kustomize`? https://lwn.net/Articles/906862/ https://lwn.net/Articles/906862/ brianeray <div class="FormattedComment"> Thanks for the useful article and the comments.<br> <p> k8s neophyte here, courtesy of &quot;GitOps and Kubernetes: Continuous Deployment [..]&quot; (Yuen, Matyushentsev, et al) a few years ago.<br> <p> At one point the book pitched --kustomize as an alternative to at least some of the functionality provided by Helm. I was pressed for time so skipped the Helm content and stuck with the --kustomize content since hey, it&#x27;s right there in `kubectl`.<br> <p> Does --kustomize obviate the need for Helm? Is it widely used?<br> </div> Thu, 01 Sep 2022 21:40:41 +0000 The container orchestrator landscape https://lwn.net/Articles/906750/ https://lwn.net/Articles/906750/ Cyberax <div class="FormattedComment"> <font class="QuotedText">&gt; Apart from concerns such as @kleptog&#x27;s, it isn&#x27;t clear to me why many more businesses aren&#x27;t using Swarm.</font><br> <p> Because Swarm is too simplistic. It&#x27;s kinda like writing in BASIC. 
It&#x27;s OK for beginners, but you quickly reach its limits once you start using it seriously.<br> <p> So people avoid it and jump straight into a more complex solution.<br> </div> Thu, 01 Sep 2022 13:01:06 +0000 The container orchestrator landscape https://lwn.net/Articles/906739/ https://lwn.net/Articles/906739/ zoobab <div class="FormattedComment"> &quot;Nomad eschews YAML in favor of HashiCorp Configuration Language (HCL), which was originally created for another HashiCorp project for provisioning cloud resources called Terraform&quot;<br> <p> Well, having done some Terraform with their HCL language, I will happily stay with Yaml :-) <br> </div> Thu, 01 Sep 2022 11:22:21 +0000 The container orchestrator landscape https://lwn.net/Articles/906659/ https://lwn.net/Articles/906659/ rorycl <div class="FormattedComment"> <font class="QuotedText">&gt; I&#x27;ve used a lot of Docker in production over the years, most of it with Swarm and while it&#x27;s a lot better than it was there seem to be some core structural issues which make it unreliable</font><br> <p> Our SaaS outfit is considering moving from a traditional Linux environment across around a few 10s of servers to use containerisation predominantly to allow a better development experience and testing, particularly for groups of covalent apps, but also to help divorce os and machine maintenance from app deployment.<br> <p> Having built our business on reading the classic O&#x27;Reilly texts to pick up both concepts and implementation details, that combination seems difficult to find in books about orchestration. That is probably the fault of old age, but perhaps the proprietary beginnings of some of these technologies means marketing has confused purpose.<br> <p> A guru pointed me to the Poulton &quot;Docker Deep Dive&quot; book (I read the May 2020 edition) and the last few chapters are devoted to Swarm. Despite the curious dissimilarities between Compose and Swarm, Swarm seems perfect for our sort of environment and a reasonable translation from our familiar linux setup in production, but where the Swarm manager acts to make hosts act like one large host by utilizing overlay networks on which apps can conveniently be scaled.<br> <p> For a smallish outfit the benefits of Swarm seems straight-forward. Poulton summarises the situation like this: &quot;Docker Swarm competes directly with Kubernetes -- they both orchestrate containerized applications. While it&#x27;s true that Kubernetes has more momentum and a more active community and ecosystem, Docker Swarm is an excellent technology and a lot easier to configure and deploy. It&#x27;s an excellent technology for small to medium businesses and application deployments&quot;.<br> <p> Apart from concerns such as @kleptog&#x27;s, it isn&#x27;t clear to me why many more businesses aren&#x27;t using Swarm.<br> <p> <p> <p> <p> </div> Wed, 31 Aug 2022 20:54:59 +0000 The container orchestrator landscape https://lwn.net/Articles/906441/ https://lwn.net/Articles/906441/ kleptog <div class="FormattedComment"> I&#x27;ve used a lot of Docker in production over the years, most of it with Swarm and while it&#x27;s a lot better than it was there seem to be some core structural issues which make it unreliable, at least for us. The overlay network corrupts itself often enough (maybe due to having 150+ of them) and while restarting services seems to fix it usually, it&#x27;s just frustrating to deal with. 
It&#x27;s unfortunate, because as a conceptual design Swarm is very nice (well, except for secrets), just the implementation lets it down.<br> <p> Our next projects will not use Swarm. We&#x27;ve experimented with K8s (on EKS) and you can make it do amazing things. But ECS is really easy to use and basically does what you want, just like Swarm. Nomad is something to look into.<br> </div> Tue, 30 Aug 2022 12:57:17 +0000 The container orchestrator landscape https://lwn.net/Articles/906270/ https://lwn.net/Articles/906270/ bartoc <div class="FormattedComment"> When I was setting this up I found the supporting infrastructure for just using real ipv6 was really poor. You at least need something to read out whatever prefix you got from DHCP-PD and set up forwarding rules, and either set addresses on the VMs, or start a copy of radvd facing the containers and tell it about said prefix.<br> <p> Or you could use a virtual switch, that would probably &quot;just work&quot;<br> </div> Fri, 26 Aug 2022 23:18:46 +0000 The container orchestrator landscape https://lwn.net/Articles/906242/ https://lwn.net/Articles/906242/ paulj <div class="FormattedComment"> Systemd would be used for setting up the containers / cgroups yes. But, it&#x27;s a very small, localised implementation detail really.<br> <p> Twine (the external name, but more often called &#x27;Tupperware&#x27; - probably the better name to use in searches) would be hard-to-impossible to make available to non-FB use, and probably mostly pointless. It is very heavily integrated in with lots of other Facebook infrastructure, from the CI system, to the automated fleet roll-out system of services, to the service discovery and routing system, etc., etc.<br> <p> <p> <p> </div> Fri, 26 Aug 2022 15:46:37 +0000 Yikes... https://lwn.net/Articles/906241/ https://lwn.net/Articles/906241/ flussence <div class="FormattedComment"> I for one don&#x27;t mind that my problems aren&#x27;t big enough to need any of this.<br> <p> It would be nice if there was a consistent definition of what a &quot;container&quot; is though so I can copy the interesting bits. My entire motivation for that is getting better pretty-printed output in things like htop/atop/glances; those have to use a bunch of ad-hoc detection heuristics for all these competing container formats which is unfortunate.<br> </div> Fri, 26 Aug 2022 15:38:00 +0000 The container orchestrator landscape https://lwn.net/Articles/906240/ https://lwn.net/Articles/906240/ mathstuf <div class="FormattedComment"> How close to fleet[1] do you think this might turn out to be?<br> <p> [1]<a href="https://github.com/coreos/fleet">https://github.com/coreos/fleet</a><br> </div> Fri, 26 Aug 2022 15:20:36 +0000 The container orchestrator landscape https://lwn.net/Articles/906233/ https://lwn.net/Articles/906233/ mdaverde <div class="FormattedComment"> With systemd-nspawn, systemd-sysext &amp; portable services, it really does feel like there&#x27;s a space to be explored for a new systemd-based orchestrator. <br> <p> I believe Facebook/Meta&#x27;s infra is heavily systemd-based with their in-house Twine cluster manager but I don&#x27;t know how much of the internals are available. 
<br> </div> Fri, 26 Aug 2022 14:37:36 +0000 The container orchestrator landscape https://lwn.net/Articles/906136/ https://lwn.net/Articles/906136/ smitty_one_each <div class="FormattedComment"> At work, where we run AWS, the Good Idea Fairies are all: &quot;Hey, let&#x27;s use Kubernetes&quot;, as though it were a magic wand.<br> <p> Qualitatively, it seems that we&#x27;re basically eating all of the networking and orchestration capability that the cloud provider handles. We&#x27;re trading the &quot;cloud&quot; for the &quot;puff&quot;.<br> <p> Analogies are all like something that sucks, but I use this to curb the enthusiasm of those who think some Magic Wand Of Technical Debt Retirement exists. No, dudes: we&#x27;re going to have to put in the hard work of un-jacking the architecture.<br> <p> Paraphrasing Zawinski: `Some people, when confronted with a problem, think &quot;I know, I&#x27;ll use Kubernetes.&quot; Now they have two problems.`<br> </div> Fri, 26 Aug 2022 03:18:57 +0000 The container orchestrator landscape https://lwn.net/Articles/906131/ https://lwn.net/Articles/906131/ thockin <div class="FormattedComment"> This is FUD. In general you need one flexible network or one node network plus a cluster-centric overlay system.<br> <p> You DO need to think about addressing and how you want your cluster(s) to interact with everything else.<br> </div> Fri, 26 Aug 2022 01:41:44 +0000 The container orchestrator landscape https://lwn.net/Articles/906129/ https://lwn.net/Articles/906129/ thockin <div class="FormattedComment"> <font class="QuotedText">&gt; K8 more or less mandates some kind of mapping at the IP and naming layers, so you usually have at a minimum some variation of a custom DNS server and a few hundred ip/nf/xdp rules or whatnot to implement routing. Docker&#x27;s solution to the same problem was simply a convention for dumping network addresses into environment variables. No custom DNS, no networking nonsense.</font><br> <p> Last I looked in depth, docker had a DNS server built in, too. Publishing IPs via env vars is a TERRIBLE solution for a bunch of reasons. DNS is better, but still has a lot of historical problems (and yeah, kube sort of tickles it wrong sometimes). DNS + VIP is much better, which is what k8s implements. Perfect? No. But pretty functional.<br> <p> <font class="QuotedText">&gt; No conversation of Kubernetes complexity is complete without mention of their obsolescent-by-design approach to API contracts. We&#x27;ve just entered a period where Ingresses went from marked beta, to stable, to about-to-be-deprecated by gateways.</font><br> <p> I know of no plan to formally deprecate Ingress, and I would be the approver of that, so....FUD. Also, deprecate != EOL. We have made a public statement that we have NO PLANS to remove GA APIs. Perhaps some future circumstance could cause us to re-evaluate that, but for now, no.<br> <p> <font class="QuotedText">&gt; How many million lines of YAML toil across all k8s users needed trivial updates when the interface became stable</font><br> <p> The long-beta of Ingress is a charge I will accept. That sucked and we have taken action to prevent that from ever happening again.<br> <p> <font class="QuotedText">&gt; and how many million more will be wasted by the time gateways are fashionable? </font><br> <p> Nobody HAS to adopt gateway, but hopefully they will want to. It&#x27;s a much more functional API than Ingress.<br> <p> <font class="QuotedText">&gt; How long will gateways survive? 
That&#x27;s a meta-design problem, and a huge red flag. </font><br> <p> APIs are forever. That&#x27;s how long. Once it hits GA, we will keep supporting it. No FUD required.<br> <p> <font class="QuotedText">&gt; nothing you build on it can be expected to have any permanence.</font><br> <p> We have a WHOLE LOT of evidence to the contrary. If you have specific issues, I&#x27;d love to hear them.<br> <p> I don&#x27;t claim kubernetes is perfect or fits every need, but you seem to have had a bad experience that is not quite the norm.<br> </div> Fri, 26 Aug 2022 01:38:11 +0000 The container orchestrator landscape https://lwn.net/Articles/906128/ https://lwn.net/Articles/906128/ thockin <div class="FormattedComment"> <font class="QuotedText">&gt; I don&#x27;t know how mature K8s IPv6 support is nowadays</font><br> <p> Should work fine.<br> </div> Fri, 26 Aug 2022 01:26:47 +0000 The container orchestrator landscape https://lwn.net/Articles/906127/ https://lwn.net/Articles/906127/ thockin <div class="FormattedComment"> <font class="QuotedText">&gt; One thing that always really annoyed me about k8s is the whole networking stack and networking requirements. My servers have real ipv6 addresses, that are routable from everywhere and I really, really do not want to deal with some insane BGP overlay. Each host can good and well get (at least) a /60 that can be further subdivided for each container.</font><br> <p> You don&#x27;t need an overlay if you already have a decent sized range of IPs per node. Just use those IPs.<br> <p> I don&#x27;t know where the idea that you NEED an overlay comes from. If you have IPs, just use those. That&#x27;s what it was designed for.<br> </div> Fri, 26 Aug 2022 01:25:19 +0000 The container orchestrator landscape https://lwn.net/Articles/906105/ https://lwn.net/Articles/906105/ Depereo <div class="FormattedComment"> Having lived some of the issues with maintaining an inhouse distribution of kubernetes (non-certified), I would agree with the shifting sands analogy.<br> <p> It&#x27;s quite frustrating to go from the infrastructure world of VMs, which are extremely backwards and forwards compatible, to kubernetes, where the necessary major upgrades every few months will break several deployment pipelines, or deprecate APIs, or do various other things that require your clients to endlessly scramble to &#x27;keep up&#x27;. And you&#x27;re right, it&#x27;s usually to do with network requirements (or sometimes storage which is somewhat related to network design anyway).<br> <p> Committing to deployment on k8s is a commitment to a much higher degree of required ongoing updates for and probably unexpected issues with deployment than I&#x27;m used to with for example virtual machine orchestration. Unless you&#x27;re at a certain and very large size I have come to think it&#x27;s not worth it at all.<br> </div> Thu, 25 Aug 2022 18:44:49 +0000 ECS is worth a mention https://lwn.net/Articles/905946/ https://lwn.net/Articles/905946/ samuelkarp That would cover EKS, Amazon's hosted Kubernetes offering. ECS isn't Kubernetes. Thu, 25 Aug 2022 04:45:25 +0000 ECS is worth a mention https://lwn.net/Articles/905940/ https://lwn.net/Articles/905940/ milesrout <div class="FormattedComment"> <font class="QuotedText">&gt;it seemed like a world of down to earth implementations that might be clunky in some ways but were focused on comprehensible goals</font><br> <p> For most people it still is. People that just run things normally, the way they always have, just carry on as normal. 
You don&#x27;t hear from them because there&#x27;s nothing to blog about it. It&#x27;s business as normal. People think that kubernetes and docker and that whole &quot;ecosystem&quot; is far more prevalent than it really is, because when you use such overcomplicated enterpriseware you inevitably have issues and they get talked about. There&#x27;s just nothing to blog about when it comes to just running a few servers with nginx reverse proxying some internet daemon. It Just Works.<br> </div> Thu, 25 Aug 2022 01:59:49 +0000 Yikes... https://lwn.net/Articles/905916/ https://lwn.net/Articles/905916/ dskoll <p>I don't have much to add, but reading this hurt my brain and I now understand a second meaning of the term "Cluster****" <p>I am so glad I'm nearing the end of my career and not starting out in tech today. Wed, 24 Aug 2022 17:58:20 +0000 The container orchestrator landscape https://lwn.net/Articles/905915/ https://lwn.net/Articles/905915/ jordan It's worth noting that, while Docker's website is largely devoid of any mention of Swarm, Mirantis <a href="https://www.mirantis.com/blog/mirantis-is-committed-to-swarm">reaffirmed their commitment to Swarm</a> in April of this year. It seems like it will continue to be supported in Mirantis's product, but it's unclear to me what that might mean for users of the freely-available version of Docker, which is developed and distributed by an entirely different company. Wed, 24 Aug 2022 17:40:18 +0000 The container orchestrator landscape https://lwn.net/Articles/905912/ https://lwn.net/Articles/905912/ schmichael <div class="FormattedComment"> <font class="QuotedText">&gt; I&#x27;m wondering if nomad has a similar functionality?</font><br> <p> No, Nomad has chosen not to implement CRDs/Controllers/Operators as seen in Kubernetes. Many users use the Nomad API to build their own service control planes, and the Nomad Autoscaler - <a href="https://github.com/hashicorp/nomad-autoscaler/">https://github.com/hashicorp/nomad-autoscaler/</a> - is an example of a generic version of this: it&#x27;s a completely external project and service that runs in your Nomad cluster to provide autoscaling of your other Nomad-managed services and their infrastructure. Projects like Patroni also work with Nomad, so similar projects to controllers do exist: <a href="https://github.com/ccakes/nomad-pgsql-patroni">https://github.com/ccakes/nomad-pgsql-patroni</a><br> <p> The reason (pros) for this decision is largely that it lets Nomad focus on core scheduling problems. Many of our users build a platform on top of Nomad and appreciate the clear distinction between Nomad placing workloads and their higher level platform tooling managing the specific orchestration needs of their systems using Nomad&#x27;s APIs. This should feel similar to the programming principles of encapsulation and composition.<br> <p> The cons we&#x27;ve observed are: (1) you likely have to manage state for your control plane ... somewhere ... this makes it difficult to write generic open source controllers, and (2) your API will be distinct from Nomad&#x27;s and require its own security, discovery, UI, etc.<br> <p> I don&#x27;t want to diminish the pain of forcing our users to solve those themselves. 
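For readers less familiar with the Kubernetes side of that comparison, the object a CRD/operator pair consumes looks roughly like the sketch below; the group, kind, and fields are invented, but this is the same pattern the earlier comment about database operators was describing. In the Nomad model described above, the equivalent logic instead lives in an external service talking to Nomad's API.

```yaml
# Hypothetical custom resource handled by a (hypothetical) Postgres operator.
# The operator's controller watches objects of this kind and creates the
# underlying workloads, services, and backups to match the declared spec.
apiVersion: databases.example.com/v1
kind: PostgresCluster
metadata:
  name: billing-db
spec:
  version: "15"
  replicas: 3
  storage:
    size: 50Gi
  backup:
    schedule: "0 3 * * *"   # nightly base backup
    retentionDays: 14
```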
I could absolutely see Nomad gaining CRD-like capabilities someday, but in the short term you should plan on having to manage controller state and APIs yourself.<br> <p> Disclaimer: I am the HashiCorp Nomad Engineering Team Lead<br> </div> Wed, 24 Aug 2022 17:09:16 +0000 The container orchestrator landscape https://lwn.net/Articles/905911/ https://lwn.net/Articles/905911/ schmichael <div class="FormattedComment"> <font class="QuotedText">&gt; the &quot;enterprise&quot; version locking in some very useful features like multi-region support.</font><br> <p> <p> Quick point of clarification: Multi-region federation is open source. You can federate Nomad clusters to create a single global control plane.<br> <p> Multi-region deployments (where you can deploy a single job to multiple regions) are enterprise. Single-region jobs and deployments are open source.<br> <p> Disclaimer: I&#x27;m the HashiCorp Nomad Engineering Team Lead<br> </div> Wed, 24 Aug 2022 16:42:19 +0000 ECS is worth a mention https://lwn.net/Articles/905876/ https://lwn.net/Articles/905876/ sbheinlein <div class="FormattedComment"> <font class="QuotedText">&gt; &quot;seemingly every tech company of a certain size has its own distribution and/or hosted offering to cater to enterprises&quot;</font><br> <p> That&#x27;s enough of a mention for me. <br> </div> Wed, 24 Aug 2022 16:05:39 +0000 The container orchestrator landscape https://lwn.net/Articles/905831/ https://lwn.net/Articles/905831/ jezuch <div class="FormattedComment"> My favorite &quot;orchestrator&quot; is actually testcontainers. It turns integration tests from a horrible nightmare into something that&#x27;s almost pleasant ;) The biggest downside is that they&#x27;re usually somewhat slow to start, but everyone at my $DAYJOB is more than willing to pay that cost (which is also monetary, since those tests are executed in CI in the cloud).<br> </div> Wed, 24 Aug 2022 11:15:45 +0000 The container orchestrator landscape https://lwn.net/Articles/905817/ https://lwn.net/Articles/905817/ bartoc <div class="FormattedComment"> <font class="QuotedText">&gt; The idea is that you needed to have a way for Kubernetes to easily adapt to a wide variety of different cloud architectures. The people that are running them don&#x27;t have control over the addresses they get, addresses are very expensive, and they don&#x27;t have control over any of the network infrastructure. Ipv6 isn&#x27;t even close to an option for most of these types of setup.</font><br> <p> Well, I don&#x27;t care about any cloud architectures except mine :). More seriously, though, the people running clouds absolutely do have control over the addresses they get! And tunneling works just as well if you want to provide access to the ipv6 internet on container hosts that only have ipv4, except in that situation you have some hope of getting rid of the tunnels once you no longer need ipv4.<br> <p> <font class="QuotedText">&gt; Generally speaking you&#x27;ll want to have 3 LANs. One for the pod network, one for the service network, and one for the external network. 
More sophisticated setups might want to have a dedicated network for storage on top of that, and I am sure that people can find uses for even more than that.</font><br> <p> IMO this is _nuts_, I want _ONE_ network and I want that network to be the internet (with stateful firewalls, obviously).<br> </div> Wed, 24 Aug 2022 07:58:23 +0000 The container orchestrator landscape https://lwn.net/Articles/905815/ https://lwn.net/Articles/905815/ dw <div class="FormattedComment"> I don&#x27;t mean to keep jumping into your replies, but I feel I can see what stage in the cycle you&#x27;re at with Kubernetes and it&#x27;s probably worth pointing out something that might not immediately be obvious: in all the rush to absorb the design complexity of the system, it&#x27;s very easy to forget that there are numerous ways to achieve the flexibility it offers, and the variants Kubernetes chose to bake in are only one instantiation, and IMHO usually far from the right one.<br> <p> Take as a simple example the network abstraction: it&#x27;s maybe 20%+ of the whole Kubernetes conceptual overhead. K8 more or less mandates some kind of mapping at the IP and naming layers, so you usually have at a minimum some variation of a custom DNS server and a few hundred ip/nf/xdp rules or whatnot to implement routing. Docker&#x27;s solution to the same problem was simply a convention for dumping network addresses into environment variables. No custom DNS, no networking nonsense.<br> <p> It&#x27;s one of a thousand baked-in choices made in k8s that really didn&#x27;t need to be that way. The design itself is bad.<br> <p> No conversation of Kubernetes complexity is complete without mention of their obsolescent-by-design approach to API contracts. We&#x27;ve just entered a period where Ingresses went from marked beta, to stable, to about-to-be-deprecated by gateways. How many million lines of YAML toil across all k8s users needed trivial updates when the interface became stable, and how many million more will be wasted by the time gateways are fashionable? How long will gateways survive? That&#x27;s a meta-design problem, and a huge red flag. Once you see it in a team you can expect it time and time again. Not only is it overcomplicated by design, it&#x27;s also quicksand, and nothing you build on it can be expected to have any permanence.<br> </div> Wed, 24 Aug 2022 06:55:46 +0000 ECS is worth a mention https://lwn.net/Articles/905814/ https://lwn.net/Articles/905814/ dw <div class="FormattedComment"> I&#x27;ve spent enough time de-wtfing k3s installs that hadn&#x27;t been rebooted just long enough for something inscrutable to break that I&#x27;d assume k0s was a non-starter for much the same reason. You can&#x27;t really fix a stupid design by jamming all the stupid components together more tightly, although I admit it at least improves the sense of manageability.<br> <p> The problem with kubernetes starts and ends with its design; it&#x27;s horrible to work with in concept, never mind any particular implementation.<br> </div> Wed, 24 Aug 2022 06:30:56 +0000 The container orchestrator landscape https://lwn.net/Articles/905808/ https://lwn.net/Articles/905808/ Cyberax <div class="FormattedComment"> K8s is more flexible than Nomad, but at the cost of complexity. There&#x27;s a nice page with a description here: <a href="https://www.nomadproject.io/docs/nomad-vs-kubernetes">https://www.nomadproject.io/docs/nomad-vs-kubernetes</a><br> <p> I personally would avoid Nomad right now. 
It&#x27;s an &quot;open core&quot; system, with the &quot;enterprise&quot; version locking in some very useful features like multi-region support. <br> <p> With K8s you can also use EKS on AWS or AKS on Azure to offload running the control plane to AWS/Azure. It&#x27;s still very heavy on infrastructure that you need to configure, but at least it&#x27;s straightforward and needs to be done once.<br> </div> Wed, 24 Aug 2022 03:27:29 +0000
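As a footnote to the EKS/AKS point in that last comment: even with a managed control plane, the cluster itself still has to be described somewhere. A minimal sketch, assuming the commonly used eksctl tool (the cluster name, region, version, and node sizes are placeholders, not recommendations), might look like this and would be provisioned with `eksctl create cluster -f cluster.yaml`.

```yaml
# Hypothetical eksctl configuration: a managed EKS control plane plus one
# small managed worker node group.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: example-cluster
  region: eu-west-1
  version: "1.23"
managedNodeGroups:
  - name: workers
    instanceType: m5.large
    desiredCapacity: 3
    minSize: 3
    maxSize: 6
```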