The plan for merging CoreOS into Red Hat
Since Red Hat’s acquisition of CoreOS was announced, we have received questions on the fate of Container Linux. CoreOS’s first project, and initially its namesake, pioneered the lightweight, 'over-the-air', automatically updated, container-native operating system that quickly rose to popularity running the world’s containers. With the acquisition, Container Linux will be reborn as Red Hat CoreOS, a new entry in the Red Hat ecosystem. Red Hat CoreOS will be based on Fedora and Red Hat Enterprise Linux sources and is expected to ultimately supersede Atomic Host as Red Hat’s immutable, container-centric operating system. Some information can also be found in this Red Hat press release.
Posted May 10, 2018 14:41 UTC (Thu)
by lsl (subscriber, #86508)
[Link]
Interesting. Can't wait to take a closer look at (some early development version of) OpenShift on top of Fedora-based Container Linux (and a reliable automatic update mechanism!).
Posted May 10, 2018 21:31 UTC (Thu)
by SEJeff (guest, #51588)
[Link] (10 responses)
This seems like the best possible outcome.
Posted May 10, 2018 22:07 UTC (Thu)
by JamesErik (subscriber, #17417)
[Link] (9 responses)
Can you elaborate why you think the Kubernetes within the OpenShift packaging is a fork? Red Hat's way of working has commonly been to base their efforts upon a less-than-bleeding-edge version of an upstream project, do testing and bugfixes (which get upstreamed), and to release a known solid "enterprise" product they can support for many years. Eventually as mainline evolves (frequently with Red Hat contributors), they'll base a newer "enterprise" version on a newer upstream (but still not bleeding edge) version and support that one, too, for many years. That's how it has been with the Kubernetes within OpenShift, too.
I'm not sure how that qualifies as "forking" any more than upstream's "maintenance" kernels by Greg Kroah-Hartman are "forks" of Linus's main line of development, for example. Is there more to your assertion I'm missing?
Posted May 11, 2018 3:38 UTC (Fri)
by SEJeff (guest, #51588)
[Link] (8 responses)
We had a long call with your most amazing Jeremy Eder, and I believe Dan W at one point, going over some technical concerns, but overall it is/was solid tech. The only part I strongly disliked was the many thousands of lines of Ansible playbooks, which took ~25 minutes to build a bare-metal HA cluster (the router in HA mode had some limitations related to the VRRP bits, if memory serves). We already work pretty closely with Red Hat (high-touch beta, we employ upstream kernel devs who work with your kernel devs and hardware vendors, etc.), but we got rubbed a bit raw by the entire interaction with everyone who wasn't an engineer on the OpenShift side of things.
Ultimately, we decided we wanted to stay closer to open source Kubernetes, where we could directly send patches upstream ourselves and then deploy them to our clusters, so we wanted to start with a commercial "closer to OSS" version of k8s as a known-good platform. Also, the OpenShift pricing (even for large existing Red Hat customers) is silly, but that isn't your fault. Going vanilla k8s gave us more options for both support and the general open source community, so we started looking into that. Tectonic as a k8s distro is almost entirely OSS, sans tectonic-console, some of the authentication bits that wrap Prometheus + Alertmanager with Dex, and perhaps some of the update bits. It fit in better with our model, and we went with them because they wanted to work with us. We ended up being the first people with EL7 Tectonic kubelets in pre-prod, as far as Alex Polvi told me.
Hopefully this small novel answered your question, but if you'd like even more detail than this, feel free to take this offline: jeffschroeder@computer.org
Posted May 11, 2018 10:01 UTC (Fri)
by nim-nim (subscriber, #34454)
[Link] (6 responses)
Go 1. does not use dynamic libraries (so every project needs to manage a private copy of the whole Go software universe it depends on) and 2. mostly identifies every bit it depends on by commit hash. The end result is that it is not possible to pay someone to write and upstream a fix for a bug in component A and expect the fix to be picked up by, and present in, all users of component A. They may (and will) use private copies of component A without the fix indefinitely (years, in software land). So upstreaming in Go land means writing a fix, upstreaming it in its original project, and then upstreaming an update to the new commit hash in every other Go project that uses the original one.
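To make the mechanics concrete, here is what that pinning looks like in a consumer's go.mod under the vgo scheme (a minimal sketch; the module paths and pseudo-version are invented for illustration):

    module example.com/app

    require (
        // liba is pinned to one exact upstream commit via a pseudo-version.
        // A fix merged upstream stays invisible to this project until this
        // line is bumped, and the same is true in every other consumer.
        example.com/liba v0.0.0-20180510123456-abcdef123456
    )

Every consumer of liba carries a line like that, which is why a single upstream fix fans out into N separate hash-bump commits.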
The delays in negotiating the use of the fixed version in all the potential users of a software component are so huge that they’re completely incompatible with any kind of software support warranty. So in practice you *will* have to use a private fork as the basis for anything you ship to customers. The more support work you do, the more private fixes accumulate; the more fixes accumulate, the harder it becomes to upstream anything more; and the more “modern” and “modular” your code base is, the harder it is to push anything upstream, due to the myriad of interlocked subcomponents that all need their code references fixed. The natural result is cutting the cord and becoming completely autonomous, with sporadic attempts to pick up major fixes from the original project, without trying to keep anything in sync anymore.
And Go developers are so addicted to not propagating fixes that the next-gen Go system for managing version state¹ will “innovate” by automatically choosing the most ancient dependency version possible unless forced otherwise. Integrating fixes is work; better not do it, and no one will be the wiser. Forking is a built-in Go property. Sharing fixes and changes, not so much.
The end result is that the Go model only works for the entity managing the original software project (it calls its fork “upstream”), and only insofar as it does not commit to any particular level of support (Google perpetual beta) or its customers do not look too closely at the state of the third-party code the project depends on (which will typically have bitrotted to death).
Of course, you can also blindly ship upstream state without committing to any ability to fix it. Many entities do, and are happy to be paid for blind rote code pushing.
The whole situation could be a clever, sick master plan to neuter free/libre open source mechanisms while ostensibly abiding by its licensing. But I think it’s actually the natural, organic result of forcing many devs who didn’t believe, and still do not believe, in FLOSS to abide by FLOSS licenses for commercial reasons (because more and more customers who do not want lock-in require it). So the Go devs took the parts of FLOSS workflows they did like (copying the code of other projects with no restriction) and sabotaged the parts proprietary devs typically do not understand or care about (making it easy to upstream and share fixes so there is a working and sane common codebase everyone uses; proprietary/BSD-ish people always hate this part).
Posted May 12, 2018 1:01 UTC (Sat)
by lsl (subscriber, #86508)
[Link] (5 responses)
It's a bit more nuanced than that. For direct module imports, vgo will actually choose the most recent version available. It's only for transitive dependencies that the Minimal Version Selection (MVS) algorithm developed for vgo may choose a version old enough that you could call it "most ancient".
The actual innovation in vgo is somewhere else, though: not that long ago, no one thought you could get away with a package manager without tackling the NP-complete problem that lurks within version selection and causes package managers to employ weird heuristics, full-blown SAT solvers, or (most likely) both.
The MVS algorithm implemented by vgo does (naturally) impose some constraints on programmers, but they're workable ones. Its simplicity is very refreshing: it is simple enough to perform in your head. Iterate over the list of declared dependencies, keeping track, for each module, of the newest version that was actually referenced.
This predictability is a really nice property. There's also an obvious way to get a newer version of some module: require it. Contrary to most other systems, this is guaranteed not to result in unsatisfiable version requirements.
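As a rough illustration (not vgo's actual code: the module graph and single-character "versions" are invented, and real vgo compares proper semver strings rather than raw strings), the whole algorithm fits in a few lines of Go:

    package main

    import "fmt"

    // modVer names one version of one module.
    type modVer struct{ mod, ver string }

    // reqs maps a module version to the (minimum) versions it declares as
    // requirements. Everything here is hypothetical.
    var reqs = map[modVer][]modVer{
        {"app", "v1"}:  {{"liba", "v1"}, {"libb", "v2"}},
        {"liba", "v1"}: {{"libc", "v2"}},
        {"libb", "v2"}: {{"libc", "v1"}}, // asks for the older libc
    }

    // buildList visits every module version reachable from root and keeps,
    // per module, the newest version that was actually referenced anywhere.
    func buildList(root modVer) map[string]string {
        visited := map[modVer]bool{root: true}
        chosen := map[string]string{root.mod: root.ver}
        queue := []modVer{root}
        for len(queue) > 0 {
            cur := queue[0]
            queue = queue[1:]
            for _, dep := range reqs[cur] {
                if !visited[dep] {
                    visited[dep] = true
                    queue = append(queue, dep)
                }
                // Plain string comparison stands in for semver comparison.
                if dep.ver > chosen[dep.mod] {
                    chosen[dep.mod] = dep.ver
                }
            }
        }
        return chosen
    }

    func main() {
        fmt.Println(buildList(modVer{"app", "v1"}))
    }

Running this selects libc v2: libb's older requirement is overridden by the newest version actually referenced (liba's), but nothing is ever silently upgraded to a version no module asked for.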
Posted May 12, 2018 4:19 UTC (Sat)
by nevyn (guest, #33129)
[Link] (4 responses)
Eh, this is a weird apples/oranges comparison, IMO. I understand why pip/gems decide to call themselves package managers, even though they do very different jobs than normal package managers... but Go just isn't one. Very few people are building N different Go projects at the same time, and any that are should be using containers anyway at this point. This is hardly the same task as having to install httpd and sshd on the same machine (even leaving aside all the main parts of package managers that pip etc. don't even bother with).
Posted May 12, 2018 12:55 UTC (Sat)
by lsl (subscriber, #86508)
[Link]
The vgo solution only works in the context of language-specific dependency installation tools, where you don't have to deal with arbitrary software written by people who don't give a shit about the conventions you'd like to establish.
Things are easier on the language-specific side: Python programmers generally care whether their code works well with pip, just as most Go programmers want their code to play well with vgo. They are going to adjust their versioning and release practices to make that happen.
Posted May 15, 2018 9:21 UTC (Tue)
by nim-nim (subscriber, #34454)
[Link] (2 responses)
The container + private copy thing means you can avoid dealing with changes to the code of those other projects.
But avoiding dealing with code changes also means avoiding propagating fixes.
Posted May 15, 2018 12:31 UTC (Tue)
by liw (subscriber, #6379)
[Link] (1 responses)
In other words, I agree with nim-nim about embedded code copies in containers. We have a problem, and we need to solve it.
The same problem occurs in other contexts as well, wherever embedded dependencies are used. There's a clear need for a solution that makes sure security fixes, and fixes for other sufficiently grave bugs, get smoothly, swiftly, and securely distributed to all the embedded copies.
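For Go binaries specifically, the "finding the copies" half of this has at least become mechanical: toolchains since Go 1.18 (well after this thread) record the full module list in every binary, and the debug/buildinfo package reads it back. A sketch of a scanner, with a hypothetical module path and vulnerable version:

    package main

    import (
        "debug/buildinfo"
        "fmt"
        "os"
    )

    func main() {
        // Hypothetical advisory: example.com/liba is vulnerable before v1.2.3.
        const vulnModule = "example.com/liba"

        // Each argument is a path to a Go binary to inspect.
        for _, path := range os.Args[1:] {
            info, err := buildinfo.ReadFile(path)
            if err != nil {
                fmt.Fprintf(os.Stderr, "%s: %v\n", path, err)
                continue
            }
            // Report which version of the suspect module is embedded.
            for _, dep := range info.Deps {
                if dep.Path == vulnModule {
                    fmt.Printf("%s embeds %s %s\n", path, dep.Path, dep.Version)
                }
            }
        }
    }

Pointing it at a directory of binaries tells you which embedded copies still need the fix; actually getting the fix into them remains the hard part.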
In a way, it's similar to what distributions like Debian do for their stable releases: packaged software is not updated to every new upstream version; instead, fixes are backported to the versions in the stable release. I fear the way Debian does this is highly labour-intensive and probably doesn't scale.
Note that technical solutions are not enough. There is also a need for the software developers, sysadmins, and users to understand the issues, and to use whatever solutions there are.
I don't have the solution, but I do see the problem.
Posted May 15, 2018 23:29 UTC (Tue)
by pabs (subscriber, #43278)
[Link]
The Debian security team has automated some aspects of their patch backporting:
Posted May 11, 2018 16:26 UTC (Fri)
by JamesErik (subscriber, #17417)
[Link]