
Alternative to modules?

Posted Oct 25, 2024 10:26 UTC (Fri) by dsommers (subscriber, #55274)
Parent article: Kadlčík: Copr Modularity, the End of an Era

Since I've used dnf/yum modules in RHEL8 and RHEL9 (nginx, nodejs, python, etc) ... I'm wondering what the alternative will be? I'm sorry if I've missed any discussions about where to look next.

I'm less concerned about the module feature implementation/user interaction itself. Having access to alternative versions of packages otherwise in the standard repos is the most important aspect. And modules seemed like a saner and better-implemented approach than the SCL approach used for GCC (gcc-toolset-*).



Alternative to modules?

Posted Oct 25, 2024 15:10 UTC (Fri) by zdzichu (subscriber, #17118) [Link] (6 responses)

I saw modularity as a workaround for RHEL shipping old software, a way to install newer packages in an isolated way. This was shoehorned into Fedora where it never made sense. Fedora is about shipping the latest software.

This doesn't mean you need to port your software to the newest versions constantly. First, each Fedora release is supported for over one year. Second, there are legacy packages available. For example, at this moment on my F41 system: `nodejs` is at version 22.8, but there are packages for `node18` and `node20`, too.

Python is at 3.13 and nginx at 1.26.2, no alternate versions available, but I believe the compatibility story is much better for them than for nodejs.

So the "alternative" is to use the latest runtime, update for each new version, or use compat packages.
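For the compat-package route, a quick way to see what parallel runtimes Fedora offers is to query the repos. This is only a sketch; the exact package names (`nodejs20` here) and the versioned binary name vary by release, so check the query output first:

```shell
# List every nodejs-related package the enabled repos provide
dnf list 'nodejs*'

# Install an older runtime next to the default one
# (package name is an assumption; confirm it in the list output above)
sudo dnf install nodejs20

# Find where the compat package puts its binaries
# (versioned names differ between releases)
rpm -ql nodejs20 | grep bin/
```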

Alternative to modules?

Posted Oct 25, 2024 15:54 UTC (Fri) by barryascott (subscriber, #80640) [Link] (2 responses)

> Python is at 3.13 and nginx at 1.26.2, no alternate versions available,

You can install many Python versions side by side in Fedora, I think python3.6 through python3.14 (shortly).
Try `dnf install python3.10` as an example.
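A minimal sketch of how the parallel interpreters coexist; the version numbers and the venv path are arbitrary examples, not a prescription:

```shell
# Install an older interpreter next to the system default
sudo dnf install python3.10

# Each interpreter is a separate binary with its own site-packages
python3 --version       # the distro default, e.g. 3.13 on F41
python3.10 --version    # the parallel install

# Pin a project to the older interpreter with a venv
python3.10 -m venv ~/venvs/legacy
~/venvs/legacy/bin/python -c 'import sys; print(sys.version)'
```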

Alternative to modules?

Posted Oct 25, 2024 16:21 UTC (Fri) by dsommers (subscriber, #55274) [Link]

Some of these can indeed be installed in parallel on Fedora. But running Fedora is not applicable in some situations, as the platform stability RHEL provides is superior to Fedora's, and you don't need to upgrade to the latest release every year or so.

Fedora does not have the same stability guarantees RHEL provides. And that's where the dnf modules in RHEL were very handy: the BaseOS packages have a long-term stability guarantee, while the modules themselves can have a shorter lifetime, which is fine for most of the modules.

I can also agree that modules in Fedora are not necessarily a good match. But for LTS/enterprise distributions it's a very different story.

Alternative to modules?

Posted Oct 27, 2024 21:55 UTC (Sun) by jafd (subscriber, #129642) [Link]

But you cannot build system packages against any Python other than the one provided by /usr/bin/python. With modularity this becomes an increasingly hard problem: what if the software you need also needs some Python tooling, but must run against a specific Python version which is not what the system ships with? Also, if you need several modules, you may suddenly run into conflicts if said tooling is present in all or some of them, each needing a different Python version...

Alternative to modules?

Posted Oct 25, 2024 17:37 UTC (Fri) by intelfx (subscriber, #130118) [Link] (2 responses)

> This was shoehorned into Fedora where it never made sense. Fedora is about shipping the latest software.

The quote in the article says that modularity is also going to be dropped from RHEL, so the GP’s concern is still valid, or am I missing something here?

Alternative to modules?

Posted Oct 27, 2024 19:53 UTC (Sun) by smoogen (subscriber, #97) [Link] (1 responses)

I think that in the end, the alternative to modules is going to be home-grown containers. They are easier to build yourself the way you want them. Where they don't work, it will be up to sites to compile/build/deploy software in the old-fashioned ways of making their own RPMs or tarballs or /nfs-cluster/bin.
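As a sketch of the home-grown-container route: a site can pin whatever runtime it needs in its own image, independent of what the host's BaseOS ships. The base image tag below is an assumption; substitute whatever base your site standardizes on:

```shell
# Build a small image that carries its own runtime version
cat > Containerfile <<'EOF'
# Base image tag is an assumption; any pinned base works
FROM registry.access.redhat.com/ubi9/python-311
COPY app.py .
CMD ["python3", "app.py"]
EOF

podman build -t mysite/app .
podman run --rm mysite/app
```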

The problems with modularity come down to the fact that it never got a community of people who wanted to work on the code which builds and maintains modules. Things like rpm, mock, and a couple of other tools have people outside of Red Hat who will patch, fix, and deploy their own versions of the systems to fix and build things themselves. Things like SCLs and modularity never got a large enough group of people who were interested in the nuts and bolts to keep something going.

Alternative to modules?

Posted Oct 30, 2024 13:46 UTC (Wed) by raven667 (subscriber, #5198) [Link]

> Things like SCLs and modularity never got a large enough group of people who were interested in the nuts and bolts to keep something going.

As someone who has used SCL, I wonder if the need for it ebbs and flows as a popular RHEL release ages and various language ecosystems deprecate old releases which RHEL still ships. This was a problem acutely felt by RHEL7 users as upstream projects dropped python3.6 support and required python3.9, and the same with PHP, Redis and other runtimes/services. Now that (most) people have upgraded to RHEL8/9, those are new enough to be broadly compatible with upstream projects again, but in five or so years, if there are major compatibility breaks, the desire for an SCL-like system will come back.

Maybe it's another way of saying that the replacement cycle of hardware/baseOS doesn't seem to be evenly distributed across time; there is a certain synchronicity which may have developed organically across orgs (like a rogue wave), and the desire for SCL/modularity may come back if hardware is kept in service longer.

Alternative to modules?

Posted Oct 28, 2024 12:43 UTC (Mon) by gdt (subscriber, #6284) [Link]

One alternative is project-specific packages, as the scientific computing community does with the Anaconda family of package managers and software, particularly miniforge. Although the scientific computing community has concerns which are different from those of business IT, and thus RHEL.
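A sketch of the project-specific-environment approach with miniforge/conda; the environment name and the pinned versions are arbitrary examples:

```shell
# Create an isolated environment with a pinned interpreter,
# independent of the distro's Python
conda create -n analysis python=3.11 numpy

# Activate it; binaries in the environment shadow the system ones
conda activate analysis
python --version   # the environment's 3.11, not /usr/bin/python3
```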

Alternative to modules?

Posted Oct 28, 2024 13:53 UTC (Mon) by nim-nim (subscriber, #34454) [Link] (6 responses)

Any technical "solution" will hit the same wall as modules:
1. users like a lot of version choices
2. developers like to be able to use whatever version is available without any constraint
3. but the people who actually do the work (integrators/packagers/modulers) absolutely hate version explosion; integrating software cleanly is a lot of work, and the more versions to integrate, the more work there is

In the end you either have to accept that the people who do the work call the shots and impose the versions they integrate on the rest of the ecosystem, or integrate yourself (with a lot of compromises and approximations and blind shots, because if you were doing the work at the QA level of distributions, you would not be asking why those annoying packagers cannot package every version you might want).

Containers are an excellent example where integration quality was given up to increase version coverage; container image audits are pretty depressing to read, and the more a container image uses roll-your-own component versions, the nastier the result.

Alternative to modules?

Posted Oct 29, 2024 10:47 UTC (Tue) by taladar (subscriber, #68407) [Link] (5 responses)

It is really a general problem that also affects other ways to avoid runtime dependencies like static linking or "vendoring" libraries into your own codebase.

Part of why it reappears is that many developers and stakeholders in companies fundamentally misunderstand the problem. They think the problem is "the system won't allow me to use an old version" when the actual problem is that the explosion in the number of interactions between versions means we literally don't have the people on Earth to test all the combinations of versions.

Alternative to modules?

Posted Oct 29, 2024 11:10 UTC (Tue) by dsommers (subscriber, #55274) [Link] (4 responses)

These are all valid viewpoints, as I see it.

From an ISV point of view, the challenge is the gap between competing wants. A customer installing a package from an ISV for use in a production environment wants to run a long-term stable system. They want to be sure their production environment is stable and does not require a lot of effort regularly putting out fires. RHEL provides 10 years of support for each major release.

An ISV delivers a product; let's say it's built on a Python stack. This product will need to work on a broad set of distributions and versions. If for this example we reduce the span to RHEL, it can be as broad as RHEL-7 (which went EOL in June this year, yes) to RHEL-9. There are three different Python versions across this span in the default distro Python stack, and a lifetime span of approximately 15+ years, since RHEL major releases may overlap by 4-5 years or so.

The customers want the latest and greatest features in the ISV product, and the ISV will at some point need to say "your distribution is too old", because it becomes too hard to provide the same feature set across three different Python version stacks. It becomes hard because the newer features the ISV product depends on need a newer Python stack.

And that's where the modules filled a real gap for ISVs. It enabled them to just tell customers to "enable the Python 3.11 module before installing our product". The customer would then have a long term stable system with fully supported packages from the ISV for the life time of the distribution.
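On RHEL 8, that workflow looked roughly like this. The stream name (`python39`) and the product package name are examples only; stream names vary by release, and RHEL 9 ships extra interpreters as plain packages instead:

```shell
# See which streams this module offers
dnf module list python39

# Enable the stream the ISV product was built against
sudo dnf module enable python39

# Then install the product on top of it
# (our-isv-product is a hypothetical package name)
sudo dnf install our-isv-product
```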

Without modules, customers will need to upgrade the base distro much more frequently to get the new ISV features they want to make use of. And that implicitly gives the distribution itself a shorter lifespan, which means more maintenance time and cost for the customer.

I am not saying that providing this kind of module support comes for free. I know a lot of effort is required, especially in the enterprise distribution segment, with QA, testing, support and maintenance. But that does not invalidate the argument that distro users have use cases where modules make life a lot easier. The question is who is going to pay for this and how, because long-term stability does have a cost, no matter the product or project.

Alternative to modules?

Posted Oct 30, 2024 9:38 UTC (Wed) by taladar (subscriber, #68407) [Link] (3 responses)

Personally I consider that "stability" a lie anyway. A version with a backported fix is just as much a new version with new bugs as a new upstream version, only made by someone who has a significantly worse understanding of the code base and tested on a significantly lower number of systems and in a significantly lower number of situations.

I am not even convinced it lowers the maintenance costs. Supporting such a wide range of versions has a huge cost, both for all kinds of tools that have to work the same over the whole range of currently supported versions, and at the point of migration: instead of doing the same well-tested procedure you do regularly, you essentially have to catch up with all the changes from many years, and when something breaks you have a significantly harder time figuring out which of the <all packages on the system> you just updated caused the breakage. Not to mention the cost of backporting itself.

Alternative to modules?

Posted Oct 30, 2024 10:12 UTC (Wed) by dsommers (subscriber, #55274) [Link] (2 responses)

> Personally I consider that "stability" a lie anyway. A version with a backported fix is just as much a new version with new bugs as a new upstream version, only made by someone who has a significantly worse understanding of the code base and tested on a significantly lower number of systems and in a significantly lower number of situations.

May I ask you about your real-life experience running enterprise Linux distributions?

I've maintained public servers for about 25+ years, which have run everything from Slackware, Gentoo, Novell, Crux, Ubuntu, Fedora and RHEL (including clones), and probably a few more I've forgotten about. For the last 15+ years, it's mostly been RHEL and its clones. These servers have provided a broad variety of e-mail/collaboration services, web, database (pgsql, mariadb), VPN, proxy gateways, firewalls, VM hypervisors, etc.

My personal experience is that the enterprise Linux distributions (RHEL with clones, Novell SUSE) are quite hassle-free and stable compared to the rest. With auto-updating of security/bugfix updates enabled, those servers just run. I take them down for a reboot every 4-8 weeks, unless there are urgent/critical kernel/glibc updates. Not because I always need to reboot (one or two isolated internal hosts have many months of uptime), but to ensure they can boot cleanly and that the system configs are still correct. For the few situations where a reboot can't be done in a timely manner, kpatch can often do a reasonable job until the next reboot window.

The main difference between "ordinary" and enterprise distros is that the latter go through a pretty good QA cycle before updates hit their users. And even with QA, critical updates are made available within days of an issue being discovered. This QA step is what results in a stable experience.

I barely spend time on (system) maintenance these days, compared to the days with the non-enterprise distros. Currently I'm responsible for keeping about 15-20 machines running, and spend around 4-8 hours per month on pure system maintenance, which leaves more time to actually maintain the services these systems provide (mostly tackling various brute-force attacks, etc.). And while Ubuntu LTS is better than a lot of the other distros, in my experience it comes nowhere near the stability I experience with RHEL and clones.

That's my experience at least. YMMV.

Alternative to modules?

Posted Oct 30, 2024 14:00 UTC (Wed) by raven667 (subscriber, #5198) [Link]

> the stability I experience with RHEL and clones.
> That's my experience at least. YMMV.

My experience is that, except for the recent freeradius patch which did change behavior and required admin intervention, an RHEL update has never broken anything on a production system that required a response, so they are pretty safe and low-risk to apply. I suppose some of the difference might be between systems used to tinker and experiment with different organizing principles versus systems which only exist to run (commercial) workloads. You can use a more experimental system for work if you have the expertise and want to, but for most, the baseOS is boring and you don't want to have to think about it; you just want it to support your applications.

Alternative to modules?

Posted Oct 31, 2024 9:27 UTC (Thu) by taladar (subscriber, #68407) [Link]

> May I ask you about your real-life experience running enterprise Linux distributions?

About 20 years of maintaining various Linux server distros, mostly Debian, Ubuntu and RHEL/CentOS, out of which RHEL was by far the worst to support.


Copyright © 2025, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds