
Don't just vendor - rebuild the ecosystem and persuade the vendor to work on software management ...

Posted Jan 27, 2025 23:30 UTC (Mon) by pizza (subscriber, #46)
In reply to: Don't just vendor - rebuild the ecosystem and persuade the vendor to work on software management ... by rahulsundaram
Parent article: Vendoring Go packages by default in Fedora

> Distributions have increasingly little hold to influence anything like this beyond what upstream itself is directly doing already. All the pointless distro incompatibilities have resulted in language specific managers and various container and sandbox style solutions effectively bypassing the distro for anything beyond the core layer.

I disagree; all of this bespoke language ecosystem crap stems from the "need" to natively support MacOS and Windows.



Don't just vendor - rebuild the ecosystem and persuade the vendor to work on software management ...

Posted Jan 27, 2025 23:53 UTC (Mon) by Cyberax (✭ supporter ✭, #52523) [Link] (86 responses)

No it's not. Classical Unix package managers are just crap and are not useful for anything but the basic stable set of libraries.

The basic issue is that a package requires _years_ to become available, and probably person-months of work. And even then, dependency management quickly becomes impossible. APT simply dies if you try to make it resolve dependencies for 200k packages.

Don't just vendor - rebuild the ecosystem and persuade the vendor to work on software management ...

Posted Jan 28, 2025 0:22 UTC (Tue) by dbnichol (subscriber, #39622) [Link] (74 responses)

Regardless of the quality of distro package managers, the more important part is that you as a developer have no control over the system package manager or what's available to it. With the language package manager, you can specify exactly what's needed to make your program work and know exactly what that corresponds to. You can develop and test your program knowing exactly what your dependencies are.

If you're relying on the system package manager, you're just hoping for the best. Is the package much newer or much older than you've tested against? Is the package patched to do something you don't expect? Is the package even available? What do you do if there are problems with that? "Sorry my program is broken for you. Please go talk to this third party that I have no influence over."
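
For concreteness, this is roughly what that pinning looks like in the Go ecosystem the parent article is about; the module path and versions below are only illustrative, and the companion go.sum file records a cryptographic hash for every pinned module:

    module example.com/hypothetical/app  // illustrative module path

    go 1.22

    require (
        github.com/spf13/cobra v1.8.0  // exact versions recorded here, hash-pinned in go.sum
        golang.org/x/sys v0.18.0
    )

Running "go mod vendor" then copies exactly those versions into a vendor/ tree, which is the sort of vendored source the article's proposal builds from.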

Don't just vendor - rebuild the ecosystem and persuade the vendor to work on software management ...

Posted Jan 28, 2025 1:15 UTC (Tue) by bluca (subscriber, #118303) [Link] (62 responses)

> you as a developer have no control over the system package manager or what's available to it.

Yes, but that's a good thing. An extraordinarily good and fundamental thing, in fact. Because the vast majority of individual app developers have no idea what building a consistent, secure, stable and maintainable ecosystem even looks like, and they just want to get past the "boring bits" and get them out of the way. So it's very common to see these developers who ship their own binaries do insane things like bundle three-year-old, CVE-infested versions of OpenSSL, with no regard whatsoever for the security of users and their systems. Thanks, but no thanks!

Don't just vendor - rebuild the ecosystem and persuade the vendor to work on software management ...

Posted Jan 28, 2025 11:47 UTC (Tue) by mathstuf (subscriber, #69389) [Link] (61 responses)

> So it's very common to see these developers who ship their own binaries do insane things like bundle three-year-old, CVE-infested versions of OpenSSL, with no regard whatsoever for the security of users and their systems. Thanks, but no thanks!

What makes you think the distro would have any luck maintaining an app on their own? If upstream is so negligent as to do this…why trust them for anything? At least Rust has `cargo-audit` that one can run in CI to make sure that you're not depending on anything that is yanked or has CVEs filed against it. I run it nightly on my projects. IIRC, there's a similar tool for golang that can actually do line-based CVE detection (i.e., it won't bother you with CVEs in functions of packages you don't actually end up using), but this is probably far too sharp for distros, as vulnerability scanners tend to just seek "package-version" tuples and not care whether the vulnerable code is actually present or not (given that static compilation can drop code that isn't used anywhere).
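
For reference, both audits are one-line invocations from a project checkout; the Go tool alluded to is presumably govulncheck, which does the reachability-based filtering described (it only flags advisories whose affected symbols are actually called):

    cargo audit        # checks Cargo.lock against the RustSec advisory database
    govulncheck ./...  # Go's scanner; reports only vulnerabilities reachable from your code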

Don't just vendor - rebuild the ecosystem and persuade the vendor to work on software management ...

Posted Jan 28, 2025 12:55 UTC (Tue) by bluca (subscriber, #118303) [Link] (60 responses)

> What makes you think the distro would have any luck maintaining an app on their own?

Because we are actually doing it, and have been doing it for a while

> At least Rust has `cargo-audit` that one can run in CI to make sure that you're not depending on anything that is yanked or has CVEs filed against it.

Like everything else in the Rust ecosystem, this is geared toward a single application developer, mostly large corps, and screw everything and everybody else. It doesn't scale to do that for every single thing, all the time, forever, when you have exactly zero resources and can barely keep the lights on with volunteer work.
The only thing that works is forbidding bad behaviour by policy and construction, so that you _don't_ have to rely on expensive and clunky language-specific tools, hope they tell the truth and aren't broken themselves, and spend time and money you don't have chasing after them.

Don't just vendor - rebuild the ecosystem and persuade the vendor to work on software management ...

Posted Jan 28, 2025 13:55 UTC (Tue) by taladar (subscriber, #68407) [Link] (6 responses)

> The only thing that works is forbidding bad behaviour by policy and construction

I am glad you are in favor of forbidding the bad behavior of running outdated versions that are older than what the upstream project still supports. I agree that it is a waste of our limited resources to try to support this clearly broken use case, and to constantly waste energy, all the time, forever, attempting to keep up with the changing world by back-porting fixes with volunteer work.

What is actually geared towards large corps is the pretense that supporting their fantasy of freezing the world around them is even possible, or any more stable.

Don't just vendor - rebuild the ecosystem and persuade the vendor to work on software management ...

Posted Jan 28, 2025 14:25 UTC (Tue) by pizza (subscriber, #46) [Link] (2 responses)

> What is actually geared towards large corps is the pretense that supporting their fantasy of freezing the world around them is even possible, or any more stable.

Question. What do you think all of this software actually runs on?

...."LTS" hardware, of course.

Of which the most prevalent is a 26-year-old backwards-compatible extension of a design that dates to 1985, which in turn is a backwards-compatible extension of a design released in 1978, which itself is a non-backwards-compatible extension of a design from 1971.

_everything_ depends on _someone else_ "freezing the world".

Don't just vendor - rebuild the ecosystem and persuade the vendor to work on software management ...

Posted Jan 30, 2025 15:36 UTC (Thu) by anton (subscriber, #25547) [Link] (1 responses)

The first Opteron was released on April 22, 2003, so AMD64 is currently 21.

Don't just vendor - rebuild the ecosystem and persuade the vendor to work on software management ...

Posted Jan 30, 2025 17:47 UTC (Thu) by pizza (subscriber, #46) [Link]

> The first Opteron was released on April 22, 2003, so AMD64 is currently 21.

Closer to 22 than 21 (with the spec available three years before that). But eh, that doesn't change my point. Most of us are using (recently-compiled) binaries that will natively run on two-decade-old hardware. I'd wager that a majority are still using (still recently-compiled) binaries that will natively run on three-decade-old hardware (ie 'i686'), and in some cases, four-decade-old (ie 'i386') hardware.

...As it turns out, "freezing the world" is utterly necessary in practice.

Don't just vendor - rebuild the ecosystem and persuade the vendor to work on software management ...

Posted Jan 28, 2025 14:33 UTC (Tue) by bluca (subscriber, #118303) [Link] (2 responses)

> I am glad you are in favor of forbidding the bad behavior of running outdated versions that are older than what the upstream project still supports.

What the "upstream project" supports or doesn't doesn't matter one bit, because they largely have no concern whatsoever for compatibility. See for example the kernel, which breaks userspace API on every single damn release, and very often in allegedly "stable" releases too, because testing is for customers. And that's why people running real infrastructure with real workloads pay out of their nose to get RH or Canonical to provide targeted fixes in LTS releases for 5-10 years, instead of blindly throwing anything that kernel.org spits out and chucking it over the wall, praying nothing falls apart (hint: there's always something that does).

Preventing upstream regressions

Posted Jan 28, 2025 22:35 UTC (Tue) by DemiMarie (subscriber, #164188) [Link]

Could Red Hat prevent the upstream regressions by doing its QA on the upstream stable trees?

Don't just vendor - rebuild the ecosystem and persuade the vendor to work on software management ...

Posted Jan 30, 2025 14:10 UTC (Thu) by taladar (subscriber, #68407) [Link]

In my experience there are always many things that fall apart on RHEL in particular, especially towards the end of the life cycle when all upstream support for the included versions is just a distant memory.

Don't just vendor - rebuild the ecosystem and persuade the vendor to work on software management ...

Posted Jan 28, 2025 21:43 UTC (Tue) by mathstuf (subscriber, #69389) [Link] (52 responses)

> Because we are actually doing it, and have been doing it for a while

Are we discussing specific instances or the concept in general? I find it hard to believe that a distro would say "yes, we can maintain that" without seeing what it is first. See spot's marathon of shaping Chromium into a Fedora-ok-shaped package years ago.

> It doesn't scale to do that for every single thing, all the time, forever, when you have exactly zero resources and can barely keep the lights on with volunteer work.

I mean…nothing scales "for every single thing, all the time, forever", so I don't see what the argument is here. I think one wants to have some level of metrics for these kinds of maintenance tasks and to include that in `crev` so that one can even ask the question "am I built on a solid foundation?".

> forbidding bad behaviour by policy and construction

Great, we agree here :) . Set up `cargo-audit` in CI, require green CI, check the `crev` status of your dependency tree. I'd like to see C and C++ libraries start to do the equivalent, but the ecosystem is just not in a state to even allow these things effectively anyways.

Don't just vendor - rebuild the ecosystem and persuade the vendor to work on software management ...

Posted Jan 28, 2025 21:49 UTC (Tue) by bluca (subscriber, #118303) [Link] (51 responses)

> Are we discussing specific instances or the concept in general?

Distributions exist, so by definition this is possible

> "am I built on a solid foundation?"

Exactly, that's the question - and if the answer involves downloading random stuff from the internet at build time and bundling security-critical dependencies so that one doesn't have to be bothered with updating them (and they themselves don't have to be bothered with annoying details such as maintaining backward compatibility), it's obviously "no"

> Great, we agree here :)

Not at all - cargo is exactly the kind of stuff that needs to be avoided like the plague. The entire reason for its existence is cutting corners, embedding extraordinarily bad and damaging behaviours in normal practices and normalizing it. It's the epitome of all that is wrong with the Rust ecosystem.

Don't just vendor - rebuild the ecosystem and persuade the vendor to work on software management ...

Posted Jan 29, 2025 3:38 UTC (Wed) by Cyberax (✭ supporter ✭, #52523) [Link] (3 responses)

> Exactly, that's the question - and if the answer involves downloading random stuff from the internet at build time and bundling security-critical dependencies so that one doesn't have to be bothered with updating them (and they themselves don't have to be bothered with annoying details such as maintaining backward compatibility), it's obviously "no"

Go allows hermetically sealed builds with bit-for-bit reproducibility. With almost zero effort. Can C/C++ infrastructure provide the same level of assurance? Without having to tear out your hair?
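
As a sketch of what "almost zero effort" looks like in practice (the ./cmd/app package path is hypothetical, the flags are standard Go toolchain options, and a vendor/ tree produced by "go mod vendor" is assumed):

    # build only from the vendored, hash-verified module graph, with no network
    # access and no absolute build paths embedded in the binary
    CGO_ENABLED=0 GOFLAGS=-mod=vendor GOPROXY=off \
        go build -trimpath -o app ./cmd/app

With the module graph fixed by go.sum and paths trimmed, repeated builds of the same source tree on the same toolchain version should produce byte-identical binaries.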

Don't just vendor - rebuild the ecosystem and persuade the vendor to work on software management ...

Posted Jan 29, 2025 11:10 UTC (Wed) by bluca (subscriber, #118303) [Link] (2 responses)

Yes, distributions have been doing this for years

Don't just vendor - rebuild the ecosystem and persuade the vendor to work on software management ...

Posted Jan 29, 2025 19:23 UTC (Wed) by Cyberax (✭ supporter ✭, #52523) [Link] (1 responses)

Nope. Debian and Fedora are still not fully reproducible for any reasonable installation. C/C++ build systems require a careful dance to make them deterministic, while Go provides that out of the box.

Don't just vendor - rebuild the ecosystem and persuade the vendor to work on software management ...

Posted Jan 29, 2025 19:35 UTC (Wed) by bluca (subscriber, #118303) [Link]

The "careful dance" in 90% of the cases is setting the SOURCE_DATE_EPOCH and the build path to a fixed value. The is a long tail of unreproducible packages left in Debian, but it is absolutely not just because of C. Loads of those are due to either toolchain issues in LTO which affect all languages gcc supports (in fact one of the most gnarly repro issue left in gcc/binutils is when mixing static linking and LTO), or things like sphinx documentation. I mean this stuff is well documented and with lots of data, so not sure why there's any need to make stuff up.

https://tests.reproducible-builds.org/debian/reproducible...
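
For readers unfamiliar with the "dance", a hedged sketch of the two knobs mentioned, as they apply to a C build (file names are illustrative; SOURCE_DATE_EPOCH is honoured by gcc for __DATE__/__TIME__ and by many packaging tools):

    # pin the timestamp source and map the build directory to a fixed prefix
    export SOURCE_DATE_EPOCH=1735689600   # 2025-01-01 00:00:00 UTC
    gcc -ffile-prefix-map="$PWD"=/usr/src/build -o hello hello.c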

Don't just vendor - rebuild the ecosystem and persuade the vendor to work on software management ...

Posted Jan 29, 2025 13:24 UTC (Wed) by joib (subscriber, #8541) [Link] (11 responses)

> Exactly, that's the question - and if the answer involves downloading random stuff from the internet at build time and bundling security-critical dependencies so that one doesn't have to be bothered with updating them (and they themselves don't have to be bothered with annoying details such as maintaining backward compatibility), it's obviously "no"

That's a strawman argument, as there are obviously other ways of doing things than this and the traditional Linux distro model.

Distros do a lot of good work, but after screaming "my way or the highway" at developers, the developers have largely chosen the highway. Perhaps instead of screaming some more at the developers, berating how dumb and irresponsible they are, and hoping that they will see the light and come back to the true faith, the distros need to do some soul-searching on their end? Maybe there are ways of accommodating the workflows the developers have voted for with their feet while still having the distro advantages of policy auditing and enforcement, security tracking, etc.?

Just as a quick strawman proposal (gee, lots of strawmen here..):

- Distro runs its own registry (NOT an automatic mirror or proxy of upstream). Before first syncing a package from the upstream registry into the distro registry, the responsible packager(s) must check it, similar to adding a new package to a distro today (like the Debian WNPP process).
- After a package has been accepted, depending on distro wishes, new package versions are automatically synced into the distro registry from upstream, or new versions require manual action by the packager.
- The distro registry has the capability to add distro-specific patches on top of the upstream. This is done automatically as part of the syncing process.
- When building an application, the resulting binary is then packaged into a deb/rpm/whatever package (let's call it "distro package" as opposed to the source package that lives in the registry), but there is no corresponding "distro source package" (srpm/deb-src). Instead, with a Cargo.lock or equivalent file defining the SBOM, the distro package can be reproducibly rebuilt using the distro registry.
- A capability to download part of the distro registry to local disk for developers or organizations that wish to be able to work offline.
- The distro registry also allows adding versions of a package not existing in upstream, e.g. for backporting security fixes.
- A security update of a package automatically triggers a rebuild of all dependent distro packages. Yes, this needs a mechanism to blacklist known bad versions, and/or whitelist known good versions.
- A distro user can choose to use the distro registry instead of upstream, and thus work with the distro-vetted packages but otherwise use the normal language tooling and workflow (cargo/npm/CPAN/etc.) the user is used to. Perhaps even support a hybrid model, where one preferably pulls from the distro registry but can fall back to the upstream registry for packages not found in the distro registry?
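
As a sketch of that last bullet: Go's module proxy protocol already supports exactly this kind of ordered fallback, so a distro-run registry could slot into the developer's normal workflow without changes (the registry URL below is hypothetical):

    # prefer the distro-vetted registry, fall back to the public proxy, then to direct VCS fetches
    export GOPROXY=https://gomod.example-distro.org,https://proxy.golang.org,direct
    go build ./...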

Don't just vendor - rebuild the ecosystem and persuade the vendor to work on software management ...

Posted Jan 29, 2025 13:56 UTC (Wed) by bluca (subscriber, #118303) [Link] (2 responses)

> Distros do a lot of good work, but after screaming "my way or the highway" at developers, the developers have largely chosen the highway. Perhaps instead of screaming some more at the developers, berating how dumb and irresponsible they are, and hoping that they will see the light and come back to the true faith, the distros need to do some soul-searching on their end? Maybe there are ways of accommodating the workflows the developers have voted for with their feet while still having the distro advantages of policy auditing and enforcement, security tracking, etc.?

Or, just go with what anarcat said at the top of this thread:

> or abandon the language in both Debian and Fedora?

It's not like there's any law of physics that says golang and rustlang applications must absolutely be included. If the work distributions do is useless, and the language-specific stores are oh-so-much-better, surely they can just use those, and everyone will be happy?

> Just as a quick strawman proposal (gee, lots of strawmen here..):

That's a lovely list of requests (except it doesn't really solve most of the problems, like having to update one single package 19283819 times every time), where do we send the invoice to pay for it?

Don't just vendor - rebuild the ecosystem and persuade the vendor to work on software management ...

Posted Jan 29, 2025 14:32 UTC (Wed) by Wol (subscriber, #4433) [Link]

> That's a lovely list of requests (except it doesn't really solve most of the problems, like having to update one single package 19283819 times every time), where do we send the invoice to pay for it?

It works fine for Gentoo :-)

Cheers,
Wol

Don't just vendor - rebuild the ecosystem and persuade the vendor to work on software management ...

Posted Jan 30, 2025 8:27 UTC (Thu) by joib (subscriber, #8541) [Link]

> It's not like there's any law of physics that says golang and rustlang applications must absolutely be included. If the work distributions do is useless, and the language-specific stores are oh-so-much-better, surely they can just use those, and everyone will be happy?

That is of course true, and it might indeed be that distros will increasingly be relegated to providing the low level plumbing, and most apps will be provided via app stores, with bundled dependencies and only minimal automated oversight from the distro, etc. Which is roughly what many other operating systems (Android, macOS, iOS, Windows) are doing.

A shame, in a way, because I think there are pain points in that model as well that distros could be well placed to help out with. But if distros refuse to evolve, it's their choice I guess.

> That's a lovely list of requests (except it doesn't really solve most of the problems, like having to update one single package 19283819 times every time), where do we send the invoice to pay for it?

In my bubble the CPU and BW cost of build/distribution farms are a mere rounding error compared to engineer salaries, facilities etc., but maybe it's different for a volunteer run project with little such expenses.

Don't just vendor - rebuild the ecosystem and persuade the vendor to work on software management ...

Posted Jan 29, 2025 14:51 UTC (Wed) by LtWorf (subscriber, #124958) [Link] (6 responses)

Your solution is basically what distributions are currently doing right now, but instead of calling them "packages" you call them "distro own registry" (which we normally call "the repository").

Having said that, developers are not choosing the distribution way because 99% of them are on Windows or macOS and have absolutely no familiarity with the concept of a distribution. It's like explaining music to a deaf person. They lack the life experience to understand why one might like a distribution at all.

That's why they don't care, and that's why appeasing them is probably useless.

Don't just vendor - rebuild the ecosystem and persuade the vendor to work on software management ...

Posted Jan 29, 2025 20:04 UTC (Wed) by Cyberax (✭ supporter ✭, #52523) [Link] (5 responses)

> Your solution is basically what distributions are currently doing right now, but instead of calling them "packages" you call them "distro own registry" (which we normally call "the repository").

The crucial difference is that the distros won't require developers to package the code into their own special snowflake package format and jump through hoops to get package reviews.

Don't just vendor - rebuild the ecosystem and persuade the vendor to work on software management ...

Posted Jan 29, 2025 23:04 UTC (Wed) by LtWorf (subscriber, #124958) [Link] (4 responses)

Distros don't require developers to do anything at all. But developers shouldn't expect their software to be interesting enough for someone to learn whatever snowflake build system they decided to use instead of what everyone else uses, just to package their snowflake software that nobody cares about.

So if you think "use popular libraries" and "don't use some crappy tool nobody has heard of" is "jumping through hoops"… sure… However developers that do things like these probably write low quality software and nobody is worse off for it not being available on distribution repositories. The ego of the author might be a bit bruised but perhaps that can be a learning experience.

Keep in mind that prior to becoming a distribution maintainer, I had my software packaged in Debian by someone else. So I've experienced both sides. Have you?

Don't just vendor - rebuild the ecosystem and persuade the vendor to work on software management ...

Posted Jan 29, 2025 23:41 UTC (Wed) by Cyberax (✭ supporter ✭, #52523) [Link] (3 responses)

> So if you think "use popular libraries" and "don't use some crappy tool nobody has heard of" is "jumping through hoops"… sure…

No. The ask is now: "don't use any one of these new-fangled Rust and Go thingies, use C with the libraries we already have".

> The ego of the author might be a bit bruised but perhaps that can be a learning experience.

So you're literally saying that the author needs to debase themselves before the distro packaging gods. They need to follow their diktats and accept the wisdom of the ancients.

As a software author, I will just tell distro packagers to go and copulate with themselves in a highly unusual manner.

I'm already seeing the results of this. For example, there's a huge infrastructure of "self-hosting" projects. I'm using Immich for photos, FreshRSS for news, calibre-web for books, HomeAssistant, PeerTube, a self-hosted AI infrastructure, etc.

NOTHING out of this truly wonderful infrastructure is packaged in distros. Nothing. And the enabling technology for all of them were not distros, but Docker (and Docker-Compose) that allows them to not care about distros at all.

Don't just vendor - rebuild the ecosystem and persuade the vendor to work on software management ...

Posted Jan 30, 2025 1:34 UTC (Thu) by somlo (subscriber, #92421) [Link]

In my experience (as a Fedora packager), upstream developers find it useful that their project is available as an officially packaged RPM.

When they occasionally do something that makes packaging more difficult, I politely point it out, and ask for assistance. 100% of the time I did this, upstream either helpfully pointed out the detail I was missing, or made (small) changes to accommodate me in making downstream packaging less difficult.

> As a software author, I will just tell distro packagers to go and copulate with themselves in a highly unusual manner.

I'm sorry your experience was *this* negative. OTOH, if every distro packager you ever ran into was this much of a jerk, what do they all have in common? :D

Don't just vendor - rebuild the ecosystem and persuade the vendor to work on software management ...

Posted Jan 30, 2025 6:31 UTC (Thu) by LtWorf (subscriber, #124958) [Link] (1 responses)

Given the incredibly rude and dramatic tone of the response, and the lack of an answer to the final question ("Have you?"), I will presume I struck a nerve and the reply is "No, I don't have this experience and don't know what I'm talking about".

> No. The ask is now: "don't use any one of these new-fangled Rust and Go thingies, use C with the libraries we already have".

I guess you haven't checked what's in distribution repositories for the past 15 years?

> As a software author, I will just tell distro packagers to go and copulate with themselves in a highly unusual manner.

No worries, with that "style" of communication everyone will stay away from you regardless of how you write software.

Don't just vendor - rebuild the ecosystem and persuade the vendor to work on software management ...

Posted Mar 6, 2025 17:21 UTC (Thu) by nye (subscriber, #51576) [Link]

I know it's been over a month but I just can't keep silent about this any longer.

Corbet, can we please just get a permaban for this malignant troll? LtWorf poisons almost every thread with his mindless toxic bile, and it's ruining LWN.

Don't just vendor - rebuild the ecosystem and persuade the vendor to work on software management ...

Posted Jan 30, 2025 14:11 UTC (Thu) by dbnichol (subscriber, #39622) [Link]

I love this proposal. Obviously, setting up a proxy registry and processes for every language ecosystem would be a lot of work, but it would be a much more natural blending of system packages and language native packages.

Don't just vendor - rebuild the ecosystem and persuade the vendor to work on software management ...

Posted Jan 29, 2025 13:53 UTC (Wed) by josh (subscriber, #17465) [Link] (34 responses)

> Not at all - cargo is exactly the kind of stuff that needs to be avoided like the plague. The entire reason for its existence is cutting corners, embedding extraordinarily bad and damaging behaviours in normal practices and normalizing it. It's the epitome of all that is wrong with the Rust ecosystem.

This is incorrect, but I also don't expect you're interested in *discussing* this rather than announcing your position. So, for the benefit of others:

This is incorrect, and in fact, cargo goes to a *great deal of effort* to accommodate many different use cases, including Linux distributions, and Linux distributions manage to package many different things written in Rust and using Cargo. Please don't paint Rust with the same brush as Go/Node/etc. Rust is very well aware that it's in a particularly good position to be able to drop in where previously only C could, and because of that, Rust goes out of its way to make that as feasible as possible.

That said, we could always do better, and there are many efforts in progress to do better. But "better" and "do exactly what C does" are emphatically not the same thing.

Don't just vendor - rebuild the ecosystem and persuade the vendor to work on software management ...

Posted Jan 29, 2025 14:19 UTC (Wed) by bluca (subscriber, #118303) [Link] (33 responses)

"what C does" is irrelevant, and really nobody cares, especially as it can do everything and its opposite.
What matters is what _we have done with it_ in the distro world, which is: stable APIs, stable ABIs, dynamic linking, and sane transitions when occasionally things change. And Rust (as an ecosystem, not as a language, for the exact same reason) and/or Cargo simply does not do the same.

You can correct me if I'm wrong, but as of today the Rust stdlib still gets a different hashed SONAME on every build, so dynamic linking is de-facto impossible and unusable, so there's no stable ABI, and every upload is a new mass ABI transition. This cannot work sensibly.

So, in practice, Golang and Rustlang, as ecosystems, are in pretty much the same position: vendor the impossible, and static link everything and the kitchen sink.

C, as a language, was also in the exact same position, and in fact you can perfectly well still do exactly all that and then some, if you want to. We made it do something very different, in our ecosystem, for extremely good reasons that were as true 20 years ago as they are today, and substituting Golang or Rustlang for C doesn't change that one bit.
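
The soname point is easy to check on a typical system; a sketch, assuming a Debian-style multiarch layout and a Rust toolchain that ships the standard-library dylib:

    objdump -p /lib/x86_64-linux-gnu/libc.so.6 | grep SONAME   # SONAME libc.so.6, stable across releases
    ls "$(rustc --print sysroot)"/lib/libstd-*.so              # libstd-<hash>.so; the hash changes with each toolchain build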

Don't just vendor - rebuild the ecosystem and persuade the vendor to work on software management ...

Posted Jan 29, 2025 14:40 UTC (Wed) by Wol (subscriber, #4433) [Link] (3 responses)

> You can correct me if I'm wrong, but as of today the Rust stdlib still gets a different hashed SONAME on every build, so dynamic linking is de-facto impossible and unusable, so there's no stable ABI, and every upload is a new mass ABI transition. This cannot work sensibly.

And why would one WANT dynamic linking, apart from the fact Unix die-hards have been brainwashed into thinking it's the "one true way"?

Am I right in thinking that glibc is one of the few places where dynamic linking works as intended? Unfortunately the (then mainstream) linux software I would like to run is incompatible with glibc.

And those of us who are old enough to remember a world before DOS/Windows/Linux aren't sold on the balls-up that is Posix and its descendants. I for one spent the start of my career on a Multics derivative, not a Eunuchs derivative. A hell of a lot has been lost to that academic castrated mess that infected all those University students. "Those that forget history are doomed to re-live it".

Cheers,
Wol

Don't just vendor - rebuild the ecosystem and persuade the vendor to work on software management ...

Posted Jan 29, 2025 14:55 UTC (Wed) by bluca (subscriber, #118303) [Link] (2 responses)

> And why would one WANT dynamic linking

Because you install, update, manage and load a dependency ONCE, instead of 1928391238912 times, once for each user, in a slightly different version each time. This affects security, maintenance, bandwidth (not everyone lives in the western world with unlimited data and symmetric fiber), disk space, and memory (a shared library is loaded into memory ONCE and then just remapped for every process that uses it, and this makes a difference on small devices). And more, of course. All these are not really language-specific reasons, and apply just the same everywhere. C wasn't (used) like this at the beginning either; it was coerced into it, _despite_ its characteristics as a language, not because of them, and that's why people who work in this ecosystem don't really buy the excuses made by $newlanguageoftheweek.

> Am I right in thinking that glibc is one of the few places where dynamic linking works as intended?

You are very much not right in thinking that, no. By policy every library in most distros is used in the shared object format, and exceptions are extremely rare (and very much frowned upon).
Not everyone is as good as glibc at maintaining a stable ABI (but many are), which just means you have to do what in Debian we call 'ABI transitions'. The difference is that in this ecosystem, on a per-library basis, they are rare, as attention is paid to avoid breaking ABI needlessly. There are a few exceptions (*cough* libxml *cough*) but by and large this works. The effort required is anyway orders of magnitude smaller than "rebuild everything every time every minute of every hour of every day".

Don't just vendor - rebuild the ecosystem and persuade the vendor to work on software management ...

Posted Jan 29, 2025 20:05 UTC (Wed) by Cyberax (✭ supporter ✭, #52523) [Link]

> Because you install, update, manage and load a dependency ONCE, instead of 1928391238912 times

This has never been true.

Don't just vendor - rebuild the ecosystem and persuade the vendor to work on software management ...

Posted Jan 30, 2025 14:35 UTC (Thu) by taladar (subscriber, #68407) [Link]

The problem is that this dynamic runtime system, much like the one used in dynamic languages, is extremely brittle.

There is a reason people try to work around this with vendoring, static linking, app bundles on Mac, Docker containers, AppImage, Flatpak, and many other systems that ship a copy of everything for each application.

And that reason is that you can only maintain the brittle dynamic runtime environment if you literally touch it as little as possible which means you keep everything old, not just the core dependencies that benefit from being replaced once instead of multiple times.

That might be fine for you from the perspective of a single distro but out in the real world we have to deal with all the supported versions of your distro and other distros at the same time and keeping stuff old means we have to support an extremely wide range of versions for literally everything, from 10 year old stuff to the most recent.

This can e.g. lead to building elaborate systems to detect which config option is supported on which version when generating configuration for OpenSSH because the supported major versions span literally half the major version range of that software.

Don't just vendor - rebuild the ecosystem and persuade the vendor to work on software management ...

Posted Jan 29, 2025 14:46 UTC (Wed) by farnz (subscriber, #17727) [Link] (27 responses)

One important point - dynamic linking is not something the distros came up with - it's something that SunOS introduced in the 1980s, and distros copied over a decade later. The intent behind dynamic linking as present in ELF (and as used in distros) is to make it possible to supply a bug fixed version of a library to a user who only has binaries for the software they run, and has no access to rebuild from source, or to relink a partially linked object against a fixed library.

The idea was that libraries would carefully curate their ABI, such that users never experienced an ABI break - no "soname bumps" or anything like that - as part of justifying the prices they charge developers to buy access to those libraries. The work distros have backed themselves into doing for all projects is the ABI stabilization work - with a few notable exceptions (glibc, GLib and a small number of others), upstream developers on open source projects don't bother keeping their ABI stable, as long as their APIs are stable enough for their intended consumers.

But the original reason for dynamic linking is simply that consumers cannot recompile proprietary binaries, and want to receive binaries they can run without a "prepare" step. Good tooling can already rebuild all dependent packages on a version change (e.g. Open Build Service does this), and the other reason that people cared about dynamic linking back in the day is that downloading updates over dial-up was already painfully slow with statically linked packages.

Don't just vendor - rebuild the ecosystem and persuade the vendor to work on software management ...

Posted Jan 29, 2025 15:04 UTC (Wed) by bluca (subscriber, #118303) [Link] (15 responses)

> The idea was that libraries would carefully curate their ABI, such that users never experienced an ABI break - no "soname bumps" or anything like that - as part of justifying the prices they charge developers to buy access to those libraries. The work distros have backed themselves into doing for all projects is the ABI stabilization work - with a few notable exceptions (glibc, GLib and a small number of others), upstream developers on open source projects don't bother keeping their ABI stable, as long as their APIs are stable enough for their intended consumers.

That's most definitely not true - some libraries are bad, but most are good on that front, and ABI transitions are relatively easy, with few exceptions especially compared to the grand total of number of libraries shipped.
And of course - in case there's a library that is badly maintained and doesn't bother with keeping ABI stable, that is simply a bad library, and not a candidate for being included in the first place

> But the original reason for dynamic linking is simply that consumers cannot recompile proprietary binaries, and want to receive binaries they can run without a "prepare" step. Good tooling can already rebuild all dependent packages on a version change (e.g. Open Build Service does this), and the other reason that people cared about dynamic linking back in the day is that downloading updates over dial-up was already painfully slow with statically linked packages.

That might surely have been the issue originally, but it's not today. Constantly rebuilding everything all the time costs a lot of compute power (hello, climate change anyone?), and while we in the western countries are blessed with symmetric fiber connections with no data caps, that is most definitely not the case everywhere in the world, and in many many countries metered 2g/3g/4g cellular connections are the only option. And on top of that, disk space still matters in some cases: when building an initrd, you want that as small as possible even on a modern powerful laptop. And you want to save memory and latency by loading the DSO _once_ instead of every time for every program, which matters a lot on low-end/embedded devices.

Don't just vendor - rebuild the ecosystem and persuade the vendor to work on software management ...

Posted Jan 29, 2025 15:35 UTC (Wed) by farnz (subscriber, #17727) [Link] (14 responses)

And yet, despite all the costs you claim the distros are trying to avoid, they still do regular complete archive rebuilds to find FTBFS bugs caused by a dependency change (thus spending the compute time anyway), the size of a minimal install has skyrocketed (you used to be able to install Debian from 7 floppies, or 10 MiB; the "minimal" install image they offer is now 650 MiB, or 65x larger), and the use of ASLR means that the DSO is no longer loaded "just once", but multiple times, because distros don't (based on looking at a Debian 12 system) carefully arrange their DSOs to minimise the number of pages with relocations in them.

On top of that, you only have to follow a distribution development list to see both announced and unannounced soname bumps, where a package has deliberately broken its ABI, along with bug reports because a package has silently broken its ABI. Distros have even set up language-specific tooling for C to generate ABI change reports, so that they can catch broken ABIs before users notice.

All of this is a lot of work, and it's largely based on tooling distros have built for C. It's not hugely surprising that other languages don't have the tooling yet - it took literal decades to get it built for C, and integrated so transparently into distros that it's now possible for package maintainers to not even be aware that it exists, right up until the package they maintain does a bad thing and the tooling reports bugs to them.
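
One concrete example of that C-focused tooling is libabigail, whose abidiff compares two ELF binaries and reports added, removed, and incompatibly changed symbols; the library file names here are hypothetical:

    # exits non-zero and prints a change report if the new build breaks the old ABI
    abidiff old/libfoo.so.1.2.3 new/libfoo.so.1.3.0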

Don't just vendor - rebuild the ecosystem and persuade the vendor to work on software management ...

Posted Jan 29, 2025 18:37 UTC (Wed) by bluca (subscriber, #118303) [Link] (13 responses)

ABI transitions happen throughout a development cycle, sure, but they are a known quantity, and libraries that get included do not break ABI every 5 minutes, otherwise they'd simply not be included. Some of the C++ ones are the worst offenders, like Boost, but their dev cycle is slow enough that it doesn't make that much of a difference in practice.

The most common C libraries that are used in any base system are very stable. glibc last broke compat what, 20 years ago? A very commonly used one with the most recent change was OpenSSL with version 3.0.
libacl, libpam, various util-linux libs, compression libs, selinux libs - those are all at soname revision 0, 1 or at most 2, and they are decades old.

It does happen of course, and there are processes for it, well-regulated and understood. And slow.

Don't just vendor - rebuild the ecosystem and persuade the vendor to work on software management ...

Posted Jan 30, 2025 14:41 UTC (Thu) by taladar (subscriber, #68407) [Link] (10 responses)

So basically what you are saying is that the problem is manageable because most C and C++ libraries don't change much any more anyway.

That isn't really a solution, it just means people avoid change because it has proven to be so painful.

Don't just vendor - rebuild the ecosystem and persuade the vendor to work on software management ...

Posted Jan 30, 2025 14:46 UTC (Thu) by josh (subscriber, #17465) [Link]

Exactly. Language packaging ecosystems are capable of handling problems like "I sent a patch to one of my dependencies to add a new API, they merged it and published a new version, and now I'm publishing a new version of my library using that new API", with a turnaround time in minutes in the limiting case. That doesn't mean every such version needs to be packaged by a Linux distribution, but it does set baseline expectations for what a system is capable of when designed for it.

Don't just vendor - rebuild the ecosystem and persuade the vendor to work on software management ...

Posted Jan 30, 2025 15:05 UTC (Thu) by bluca (subscriber, #118303) [Link] (8 responses)

No, that is very much not true, and the objective data on that is so overwhelming that I have no idea why you would say it. Just check the changelog of glibc or libsystemd or any other system library if you don't believe me. The problem is manageable because the developers of said libraries have learnt to do updates and bug fixes and to add new features and interfaces without breaking backward compatibility, but apparently this looks like an alien concept in the golang/rustlang ecosystems.

Don't just vendor - rebuild the ecosystem and persuade the vendor to work on software management ...

Posted Jan 30, 2025 15:34 UTC (Thu) by Wol (subscriber, #4433) [Link] (1 responses)

> but apparently this looks like an alien concept in the golang/rustlang ecosystems.

Because they don't wear the same rose-tinted glasses as you? Because in their world view this particular problem just doesn't exist? (After all, isn't that one of the selling points of Rust - whole classes of major problems in C just disappear because the concept doesn't make sense in the Rust world.)

It's like a bird trying to explain to a fish how you spread your wings to soar in the wind - the fish (one or two excepted) will have no idea what you're talking about.

Cheers,
Wol

Don't just vendor - rebuild the ecosystem and persuade the vendor to work on software management ...

Posted Jan 30, 2025 16:12 UTC (Thu) by bluca (subscriber, #118303) [Link]

Breaking compatibility in public APIs affects all languages and ecosystems. What changes is just who pays the cost. Making sure it's not _developers_ who pay the cost of _developers_ breaking compat doesn't mean it's free.

Don't just vendor - rebuild the ecosystem and persuade the vendor to work on software management ...

Posted Jan 30, 2025 17:34 UTC (Thu) by Cyberax (✭ supporter ✭, #52523) [Link]

Debian 8 from 10 years ago had about 43k packages. Debian 12 has about 64k packages. That's not a particularly fast growth.

Don't just vendor - rebuild the ecosystem and persuade the vendor to work on software management ...

Posted Jan 30, 2025 17:59 UTC (Thu) by mb (subscriber, #50428) [Link] (4 responses)

>bug fixes and add new features and interfaces without breaking backward compatibility,
>but apparently this looks like an alien concept in the golang/rustlang ecosystems.

Just because Rust chose different solutions than you prefer for the problem of backward compatibility, doesn't mean that they don't care.
Just because somebody else does something different from what you think would be the correct solution doesn't mean they are doing it wrong. It's quite possible that you are wrong. Not everybody except you is an idiot. Do you get that?

If you think backwards compatibility in the C ecosystem is so far superior, then where are C Editions? Where is the common practice of using Semantic Versioning in the C community?
These things work _really_ well in the Rust ecosystem. And they are either completely absent (Editions) or used extremely rarely (SV) in C.

Apart from very few examples like glibc, there are few projects in the C community that really care about backward compatibility to the extent made possible by Rust Editions and Semantic Versioning. Most projects are stable because they are not changed much at the interface level anymore, glibc being a notable exception.
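
For readers who haven't seen them, this is how those two mechanisms surface in a crate's manifest; the crate name and versions are only illustrative:

    [package]
    name = "example-crate"   # hypothetical
    version = "1.4.2"        # SemVer: callers on ^1 expect no breaking changes until 2.0
    edition = "2021"         # editions are opt-in per crate; crates on older editions keep compiling unchanged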

Don't just vendor - rebuild the ecosystem and persuade the vendor to work on software management ...

Posted Jan 30, 2025 18:36 UTC (Thu) by bluca (subscriber, #118303) [Link] (3 responses)

> Most projects are stable because they are not changed much on the interface level anymore. glibc being a notable exception.

This is absolute nonsense. Again, there is data on this, just look at it. Compare symbol exports vs SONAME revisions of common system libraries. libsystemd0 alone added 213 new APIs just in the very latest release. The SONAME revision is still 0.
Just because rustlang/golang library maintainers can't or won't maintain backward compatibility, it doesn't mean nobody else will.

Don't just vendor - rebuild the ecosystem and persuade the vendor to work on software management ...

Posted Jan 30, 2025 18:48 UTC (Thu) by mb (subscriber, #50428) [Link] (2 responses)

>Just because rustlang/golang library maintainers can't or won't maintain backward compatibility

Saying that Rust people don't maintain backward compatibility is:

>This is absolute nonsense

They do it in a very different way.
Like it or not.

There are more solutions to a problem than the Luca-way, you know?

I do understand that you either don't like or don't know what Rust people do.
But I'm just asking you politely to stop spreading this "they don't have/care about backwards compatibility" FUD.
I have ignored you for over half a year here in LWN comments and you are still busy spreading that nonsense. Why? This annoys me just as much as anti-systemd-FUD annoys you.

If you want Rust to have a stable soname/ABI, please come up with a proposal about *how* to do that.
Be constructive rather than destructive.

Don't just vendor - rebuild the ecosystem and persuade the vendor to work on software management ...

Posted Jan 30, 2025 19:41 UTC (Thu) by bluca (subscriber, #118303) [Link] (1 responses)

> I have ignored you for over half a year here in LWN comments and you are still busy spreading that nonsense. Why?

Because it's true, whether or not you like it. The standard library changes soname on every build. Backward incompatibility is programmatically baked into the build process itself.

Don't just vendor - rebuild the ecosystem and persuade the vendor to work on software management ...

Posted Jan 30, 2025 19:44 UTC (Thu) by corbet (editor, #1) [Link]

So are we sure that this conversation is going anywhere but in circles? Might it be time to conclude that there isn't more to be said (again)?

Don't just vendor - rebuild the ecosystem and persuade the vendor to work on software management ...

Posted Feb 7, 2025 18:17 UTC (Fri) by TheBicPen (guest, #169067) [Link] (1 responses)

> glibc last broke compat what, 20 years ago

August 2022. https://lwn.net/Articles/904892/

Don't just vendor - rebuild the ecosystem and persuade the vendor to work on software management ...

Posted Feb 7, 2025 18:34 UTC (Fri) by bluca (subscriber, #118303) [Link]

Don't just vendor - rebuild the ecosystem and persuade the vendor to work on software management ...

Posted Jan 29, 2025 15:05 UTC (Wed) by pizza (subscriber, #46) [Link] (5 responses)

> The intent behind dynamic linking as present in ELF (and as used in distros) is to make it possible to supply a bug fixed version of a library to a user who only has binaries for the software they run, and has no access to rebuild from source, or to relink a partially linked object against a fixed library.

And that is the point I keep making again and again here -- Go & Rust's tooling for tracking/vendoring/pinning dependencies is quite powerful.. but is utterly useless unless you have the complete source code to _everything_. A single non-public component renders the whole thing unbuildable, even if you have the source to everything else.

This effectively means that you are forever at the mercy of your vendor to provide _any_ sort of updates.

Don't just vendor - rebuild the ecosystem and persuade the vendor to work on software management ...

Posted Jan 29, 2025 15:21 UTC (Wed) by farnz (subscriber, #17727) [Link] (1 responses)

Right - but that's a problem for proprietary software, where you don't get the source code for everything, not for Free Software, where you have source and can use it.

It definitely needs a solution in the long run - there's a reason why there are developers working on a stable ABI for Rust - but it's ahistorical to claim that anything other than "protect users of proprietary software" is the reason we have dynamic linking. We have dynamic linking in ELF systems because Sun were fed up of the delays caused by having to wait between sending a tape with an update to an ISV, and the ISV sending tapes with updates to the end users of Sun systems (at minimum two postal delays, assuming the ISV is on top of things).

Don't just vendor - rebuild the ecosystem and persuade the vendor to work on software management ...

Posted Jan 29, 2025 16:21 UTC (Wed) by bluca (subscriber, #118303) [Link]

> it's ahistorical to claim that anything other than "protect users of proprietary software" is the reason we have dynamic linking. We have dynamic linking in ELF systems because Sun were fed up of the delays caused by having to wait between sending a tape with an update to an ISV, and the ISV sending tapes with updates to the end users of Sun systems (at minimum two postal delays, assuming the ISV is on top of things).

Again, that might have very well been the original reason Sun introduced it - but it is not the (sole) reason it was picked up in Linux distros, and why it is still used today.

Don't just vendor - rebuild the ecosystem and persuade the vendor to work on software management ...

Posted Jan 29, 2025 20:07 UTC (Wed) by Cyberax (✭ supporter ✭, #52523) [Link] (2 responses)

> but is utterly useless unless you have the complete source code to _everything_

Go will save us! It _removed_ support for binary-only modules several years ago. You simply can't build Go software without having its full source code available.

Don't just vendor - rebuild the ecosystem and persuade the vendor to work on software management ...

Posted Jan 29, 2025 21:04 UTC (Wed) by pizza (subscriber, #46) [Link] (1 responses)

> but is utterly useless unless you have the complete source code to _everything_

> Go will save us! It _removed_ support for binary-only modules several years ago. You simply can't build Go software without having its full source code available.

Which... helps me, as an end-user holding a binary, how?

Don't just vendor - rebuild the ecosystem and persuade the vendor to work on software management ...

Posted Jan 29, 2025 21:09 UTC (Wed) by Cyberax (✭ supporter ✭, #52523) [Link]

We're talking about open-source stuff here. Of course, proprietary bits are proprietary. But at least Go makes it trivial to verify the source. You can't sneak a small .so into the binary by using a convoluted `configure` script, for example.
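
Concretely, every Go binary embeds its full module list and build settings, which can be dumped with the standard toolchain (the binary path is hypothetical):

    go version -m ./some-binary   # prints the module path, every dependency version and hash, and the build flags used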

Don't just vendor - rebuild the ecosystem and persuade the vendor to work on software management ...

Posted Jan 29, 2025 15:26 UTC (Wed) by madscientist (subscriber, #16861) [Link] (2 responses)

> But the original reason for dynamic linking is simply that consumers cannot recompile proprietary binaries

I really don't think this is true. The original reason for creating dynamic linking was to save memory and disk space. RAM and storage were scarce and expensive.

Don't just vendor - rebuild the ecosystem and persuade the vendor to work on software management ...

Posted Jan 29, 2025 15:42 UTC (Wed) by farnz (subscriber, #17727) [Link] (1 responses)

The original motivation as documented by Sun engineering was that if they had to fix a system library in SunOS 3, it took months for the fix to be reliably rolled out across the customer base; Sun would ship tapes with the fix to ISVs and customers, but the fix wouldn't be fully rolled out until ISVs had shipped tapes with new binaries, and this whole process could take months.

The goal for Sun of dynamic linking in SunOS 4 was to let them maintain system libraries without being at the mercy of ISVs rebuilding their binaries against the bugfixed system libraries. Saving memory and disk space came later, after dynamic linking had been implemented in the 1980s and shown to do just that.

Don't just vendor - rebuild the ecosystem and persuade the vendor to work on software management ...

Posted Jan 30, 2025 18:19 UTC (Thu) by anton (subscriber, #25547) [Link]

Where can I find this documentation of Sun engineering? Reducing RAM and disk consumption was certainly an issue at the time, and the name "shared object" used in Unix systems (including SunOS) points to this aspect.

On Ultrix, which did not have dynamic linking, a number of utilities were linked into one binary (and argv[0] was used to select the proper entry point), in order to have only one instance of the libraries used by these utilities on disk and in memory.

On Linux, the uselib() system call for loading shared libraries was added before ISVs became relevant for Linux, so fixing libraries under independently developed software obviously was not the motivation to do it.

Don't just vendor - rebuild the ecosystem and persuade the vendor to work on software management ...

Posted Jan 29, 2025 16:17 UTC (Wed) by dankamongmen (subscriber, #35141) [Link] (1 responses)

> with a few notable exceptions (glibc, GLib and a small number of others), upstream developers on open source projects don't bother keeping their ABI stable

this seems to call for data. even if a library author doesn't bother to learn the common semantics/practices of ABI stability themselves, they're likely to be pointed out as soon as packaging efforts start, or users begin deploying binaries expecting such stability. adding entirely new functionality is not generally abi breakage, if that's not obvious.

Don't just vendor - rebuild the ecosystem and persuade the vendor to work on software management ...

Posted Jan 29, 2025 20:04 UTC (Wed) by Wol (subscriber, #4433) [Link]

> this seems to call for data. even if a library author doesn't bother to learn the common semantics/practices of ABI stability themselves, they're likely to be pointed out as soon as packaging efforts start, or users begin deploying binaries expecting such stability. adding entirely new functionality is not generally abi breakage, if that's not obvious.

And here the ugly free-as-in-beer mentality raises its head. If I put software out as a "favour to the world" "I wrote this software for me, I hope you find it as useful as I did", I'm not going to take kindly to you telling me what to do - FOR FREE!

You want to use my software? Great! But it doesn't do exactly what you want? That's YOUR problem, and NOT mine!

Okay, maybe I get my kicks from other people using my software, which means it becomes my problem, but that's MY CHOICE, not YOUR DEMAND. That's my attitude at work - I want to help others, I want to improve things generally, and I get paid for it. But if all I cared about was just getting MY job done, the company wouldn't complain. Thing is, it's MY choice, and him as pays the piper is calling the tune. If you're not paying, you don't get a say.

Cheers,
Wol

Don't just vendor - rebuild the ecosystem and persuade the vendor to work on software management ...

Posted Mar 11, 2025 14:34 UTC (Tue) by ssokolow (guest, #94568) [Link]

I know it's a month and a half late, but I think it's important to point to this post:

The impact of C++ templates on library ABI by Michał Górny

TL;DR: C++ suffers from exactly the same problems that Rust does... it's just that C++ is more willing to emit a nearly empty .so when a library uses templates everywhere, and C++ code doesn't use templates as pervasively as Rust uses generics.

Dynamic linking in the face of templates/generics without making the compromises Swift does is an unsolved problem.

Don't just vendor - rebuild the ecosystem and persuade the vendor to work on software management ...

Posted Jan 29, 2025 13:52 UTC (Wed) by LtWorf (subscriber, #124958) [Link] (10 responses)

Uh? It's entirely possible to define that you need version >= x.y.z of something in most build systems available, and fail compilation if you get a different version.
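
In the Go ecosystem the article is about, for instance, a go.mod states exactly such a floor. A small sketch (the module paths are made up):

    module example.com/myapp

    go 1.22

    // The require directive declares a *minimum*: the build never silently
    // selects anything older than v1.4.2 of this (hypothetical) library,
    // and fails outright if no satisfying version can be resolved.
    require example.com/somelib v1.4.2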

If you don't do it then it's your bug really.

Don't just vendor - rebuild the ecosystem and persuade the vendor to work on software management ...

Posted Jan 29, 2025 14:19 UTC (Wed) by farnz (subscriber, #17727) [Link] (9 responses)

Unfortunately, distro policies mean that Debian (for one) will change your ">= x.y.z" to "=a.b.c" where a < x (because that's the version they ship), remove your checks that fail compilation if I try to use anything with major < x, and then consider it a bug in your code that you don't work with the version of the library that they packaged in their stable distro.

Don't just vendor - rebuild the ecosystem and persuade the vendor to work on software management ...

Posted Jan 29, 2025 16:52 UTC (Wed) by NYKevin (subscriber, #129325) [Link] (1 responses)

Then you WONTFIX it and redirect all bug reports to the Debian bug tracker. Their code change, their bug.

This isn't even rude. Debian explicitly encourages users to report bugs to Debian and not to upstreams.

Distros lacking power

Posted Jan 30, 2025 12:32 UTC (Thu) by farnz (subscriber, #17727) [Link]

Yep - but that then leads to the original complaint, of the distros not having the power to get upstream to care about stable ABIs and a solid dependency base. Why should I care, if the distro will handle it all for me internally, and do all that hard work for me?

And that's the real point - a stable ABI (in any language) involves a lot of hard work; the distros do this work for C programs (with re-education of significant upstreams), but aren't able and willing to do it for other languages. Further, because users already bypass the distro for programs that fit their needs but aren't packaged at the version they want, the distro has very little power to make upstreams comply with its policies; it has no leverage, since users can and do go around the distro to deploy what they want.

Don't just vendor - rebuild the ecosystem and persuade the vendor to work on software management ...

Posted Jan 29, 2025 19:34 UTC (Wed) by dilinger (subscriber, #2867) [Link] (6 responses)

A lot of times this is done specifically because upstreams will bump dependency versions for no reason. It's one thing if you hit a bug in an older version of a library or are using some new API from that library, and therefore require a newer version of said library. It's quite another if you test out a new version of the library, discover that it works just fine, and bump the version build dependency as sort of an I've-tested-this-library-version update. I've seen this a lot with node and go applications in particular. Doing this kind of thing sets the assumption that version dependencies are arbitrary, and therefore distribution maintainers should be free to downgrade version dependencies.

Don't just vendor - rebuild the ecosystem and persuade the vendor to work on software management ...

Posted Jan 29, 2025 19:36 UTC (Wed) by bluca (subscriber, #118303) [Link]

Yeah this happens so many times it's not even funny. It's especially bad in Python, where tons of developers just pin their dependencies in requirements.txt to whatever they have on their Windows machine at that particular point in time. But yeah let's completely trust these developers with embedding OpenSSL in their network-facing app, I'm sure it will be just fine.

Don't just vendor - rebuild the ecosystem and persuade the vendor to work on software management ...

Posted Jan 29, 2025 23:00 UTC (Wed) by LtWorf (subscriber, #124958) [Link] (4 responses)

They pin dependencies because that's what the bullshit corporate security scanners (Snyk, OpenSSF scorecards) encourage them to do: pin to a specific version and then constantly bump the pin to the latest available version. Why they don't just leave things unpinned is a mystery.

Don't just vendor - rebuild the ecosystem and persuade the vendor to work on software management ...

Posted Jan 30, 2025 8:08 UTC (Thu) by NYKevin (subscriber, #129325) [Link] (3 responses)

Pinning a given dependency usually means, at an absolute minimum, "I ran the unit tests and nothing is obviously broken with this version." In many cases, it also means "I ran the integration tests and nothing is obviously broken." For the bleeding edge version, that's probably about all you can expect from such pins.

For a stable release, it typically implies the same level of testing that you would expect for the release as a whole (hopefully a lot more than just running automated test suites). Obviously, any given distro's definition of "stable" will differ from the application developer's (unless the distro in question is e.g. Arch Linux), but frankly, that's the distro's problem to deal with.

Don't just vendor - rebuild the ecosystem and persuade the vendor to work on software management ...

Posted Jan 30, 2025 18:35 UTC (Thu) by LtWorf (subscriber, #124958) [Link] (2 responses)

Except that Go itself ships a command to pin everything to the latest version. So what most people do is have a bot that, whenever a CVE comes in, runs that command.

Effectively they're constantly using the latest version.
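
If it helps, the workflow being described boils down to commands that do exist in the standard toolchain (plus the separate govulncheck tool), roughly:

    go get -u ./...     # raise this module's dependencies to their latest releases
    go mod tidy         # re-sync go.mod and go.sum afterwards
    govulncheck ./...   # optional: report which known vulnerabilities are actually reachable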

Don't just vendor - rebuild the ecosystem and persuade the vendor to work on software management ...

Posted Jan 30, 2025 22:24 UTC (Thu) by Cyberax (✭ supporter ✭, #52523) [Link]

Dependabot on GitHub does only minimal upgrades. I also remember seeing a free version of it somewhere.

Don't just vendor - rebuild the ecosystem and persuade the vendor to work on software management ...

Posted Feb 1, 2025 7:30 UTC (Sat) by NYKevin (subscriber, #129325) [Link]

> Except that Go itself ships a command to pin everything to the latest version.

The intended use, to my understanding, is to repin everything to latest, run the automated tests, then commit if everything is green, else try to figure out what broke and why. It is not meant to be used in the automated fashion that you describe.

> Effectively they're constantly using the latest version.

The fact that some C developers write terrible code and complain that "the compiler" "broke" it, does not imply that all C developers are incapable of understanding the meaning of UB. By the same token, the fact that some Go or Rust developers do not care about backwards compatibility does not imply that all Go or Rust developers do not care.

Think about it: If this was *really* the intended use case, then pinning would not exist. It would just pull the latest every time, and you would never need to update your pins at all. Pinning, as a feature, only makes sense if you want to have some control over the version you are running.

Don't just vendor - rebuild the ecosystem and persuade the vendor to work on software management ...

Posted Jan 28, 2025 8:56 UTC (Tue) by taladar (subscriber, #68407) [Link] (10 responses)

Classic package managers are actually great.

What is crap, on both sides (distros and developers directly) is trying to pretend you can freeze the world around you. LTS, backporting, not updating your dependencies to the latest version, those are the issues we need to get rid of to simplify the problem because they blow up the space of versions that have to work together unnecessarily.

Don't just vendor - rebuild the ecosystem and persuade the vendor to work on software management ...

Posted Jan 28, 2025 13:09 UTC (Tue) by pizza (subscriber, #46) [Link] (4 responses)

> What is crap, on both sides (distros and developers directly) is trying to pretend you can freeze the world around you. LTS, backporting, not updating your dependencies to the latest version, those are the issues we need to get rid of to simplify the problem because they blow up the space of versions that have to work together unnecessarily.

This also conveniently leaves out the very real _cost_ to everyone of staying on the very latest of everything.

Every layer in the stack wants "LTS" for everything but _their_ stuff. Because _nobody_ likes stuff changing out from underneath them. Oh, and doesn't want to pay anything for it of course. (see the histrionics over CentOS)

Constant churn, and more often than not, the end-user sees zero (if not outright negative) benefit.

Don't just vendor - rebuild the ecosystem and persuade the vendor to work on software management ...

Posted Jan 28, 2025 13:35 UTC (Tue) by ms (subscriber, #41272) [Link] (2 responses)

> Every layer in the stack wants "LTS" for everything but _their_ stuff. Because _nobody_ likes stuff changing out from underneath them. Oh, and doesn't want to pay anything for it of course. (see the histrionics over CentOS)

I completely agree with this. It's no doubt not perfect, but I do suspect "minimal version selection" (https://go.dev/ref/mod#minimal-version-selection) is better behaved in this regard. I'd love to see some statistics (assuming they could be gathered) as to the level of churn caused by the different version selection algorithms.
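
For what it's worth, the low-churn property comes from MVS picking the maximum of the declared minimums rather than the newest available tag. A small sketch with hypothetical module paths:

    // go.mod of myapp
    require (
        example.com/otherdep v1.3.0
        example.com/lib      v1.2.0
    )

    // go.mod of example.com/otherdep
    require example.com/lib v1.4.0

    // Even if example.com/lib has already tagged v1.9.0, MVS builds myapp
    // against lib v1.4.0 - the highest version anyone actually declared as
    // a minimum - and only moves when a requirement is explicitly raised.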

Don't just vendor - rebuild the ecosystem and persuade the vendor to work on software management ...

Posted Jan 30, 2025 14:08 UTC (Thu) by taladar (subscriber, #68407) [Link] (1 responses)

Isn't that just the same nonsense you find on dotnet where it selects the version with the fewest security fixes and the maximum number of known bugs?

The golang people are smart folks

Posted Feb 2, 2025 11:09 UTC (Sun) by CChittleborough (subscriber, #60775) [Link]

No, Go's Minimal Version Selection feature is quite advanced, and was built to answer real problems found by large projects.

Don't just vendor - rebuild the ecosystem and persuade the vendor to work on software management ...

Posted Jan 28, 2025 14:05 UTC (Tue) by taladar (subscriber, #68407) [Link]

The idea that you can prevent stuff from changing by keeping it static is an illusion though.

You only have two choices: either you get a lot of changes all at once every once in a while AND some small changes all the time anyway, or you get somewhat more small changes all the time.

There is no "keep it this way forever" option, not even for the tiny subset of libraries where the underlying domain doesn't change (e.g. pure math); even those always need some changes for build systems, hardware, and so on. The vast majority of libraries and software additionally have to deal with laws that change, new cryptographic algorithms as old ones are broken, new network protocols to support, natural language drifting over time (not deliberate changes, just the natural evolution of language), new design guidelines,... and that is before you even count changes required for security holes and bugs.

If you choose the option of lots of changes occasionally, you still need to keep up with legal changes, security holes and critical bug fixes at the very least, so keeping things completely unchanged for any length of time is never really an option.

On the other hand, when that occasional big change comes up (e.g. a RHEL major update), you will have all kinds of unpredicted problems (in addition to known, predicted changes like new config options), and no clue which of the thousands of changes you made at the same time actually caused each issue.

Meanwhile, if you had chosen the other option, things would likely break a bit more often, but only ever in ways that are easy to track down to the few recent small changes. You also get the benefit of an entire community making those changes at roughly the same time, and of upstream developers who still remember which changes they made.

Don't just vendor - rebuild the ecosystem and persuade the vendor to work on software management ...

Posted Jan 29, 2025 3:34 UTC (Wed) by Cyberax (✭ supporter ✭, #52523) [Link] (4 responses)

> What is crap, on both sides (distros and developers directly) is trying to pretend you can freeze the world around you.

I don't see that attitude from Rust/Go/JavaScript developers. They have built tooling that manages the complexity of real-world development and helps to identify issues. Each time I install NPM packages, it immediately prints the number of packages with critical vulnerabilities (and the number of packages seeking funding).

If I work within Go/Rust infrastructure, it's trivially easy to fix the dependencies. Github even automates it for you.

It's the classic distros that are solving the problem of medium-speed-moving ecosystems by saying: "Whatever, we don't need you anyway. We'll stick to libz, libglib, and bash".

Don't just vendor - rebuild the ecosystem and persuade the vendor to work on software management ...

Posted Jan 29, 2025 14:23 UTC (Wed) by LtWorf (subscriber, #124958) [Link] (3 responses)

It's trivial but the work needs to be replicated by each and every project, constantly. If you add it all up can you still call it "trivial"?

Also since there are no stable APIs, the fact that it compiles doesn't really guarantee anything, if it even compiles.

How many projects will just not bump because they can't be bothered to do this constant churn? I suspect it's most of them.

Don't just vendor - rebuild the ecosystem and persuade the vendor to work on software management ...

Posted Jan 29, 2025 18:39 UTC (Wed) by Cyberax (✭ supporter ✭, #52523) [Link] (2 responses)

> It's trivial but the work needs to be replicated by each and every project, constantly. If you add it all up can you still call it "trivial"?

Just like you have to go through each of the dependent projects to make sure that a library update has not broken something.

> Also since there are no stable APIs, the fact that it compiles doesn't really guarantee anything, if it even compiles.

Most libraries do have a relatively stable API, and it's trivial to check if a dependency update breaks the build.

Don't just vendor - rebuild the ecosystem and persuade the vendor to work on software management ...

Posted Jan 29, 2025 22:45 UTC (Wed) by LtWorf (subscriber, #124958) [Link] (1 responses)

> Most libraries do have a relatively stable API, and it's trivial to check if a dependency update breaks the build.

It's trivial if you're a distribution with infrastructure to do that. If you are thousands of individual developers all having to do it manually, it's not as trivial as you might imagine I think.

Twenty minutes each, across thousands of individual developers, adds up to a lot of work globally.

Don't just vendor - rebuild the ecosystem and persuade the vendor to work on software management ...

Posted Jan 29, 2025 23:23 UTC (Wed) by Cyberax (✭ supporter ✭, #52523) [Link]

> It's trivial if you're a distribution with infrastructure to do that.

Erm... Look at the article in question?

> If you are thousands of individual developers all having to do it manually, it's not as trivial as you might imagine I think.

I disagree. A distribution with a semi-automated patching infrastructure can solve most of the issues without any undue burden on maintainers. GitHub already does that for developers: you get a stream of automated pull requests with minimal dependency changes for CVE fixes.

Something like this, but managed by distros directly, is not technically complicated. You just need to keep a reverse database of dependencies, consume the CVE feeds, and then file PRs with suggested fixes for maintainers, with something like a trybot to show if the upgrade causes compilation breakages.
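
A minimal sketch of that pipeline, with every name and type invented purely for illustration:

    // cvebot.go - hypothetical sketch of the pipeline described above: keep a
    // reverse map of dependencies, consume a CVE feed, and file a suggested-fix
    // PR per affected package. Nothing here maps to an existing service.
    package main

    import "log"

    // Advisory is one entry from a CVE feed: an affected module and the
    // version that fixes it.
    type Advisory struct {
        Module       string
        FixedVersion string
    }

    // reverseDeps maps a library module to the distro source packages that
    // (directly or indirectly) depend on it.
    var reverseDeps = map[string][]string{
        "example.com/somelib": {"srcpkg-foo", "srcpkg-bar"},
    }

    // fileFixPR would open a merge request against the packaging repo,
    // bumping the pinned dependency; here it only logs what it would do.
    func fileFixPR(pkg string, adv Advisory) {
        log.Printf("would file PR against %s: bump %s to %s (trybot then reports whether it still builds)",
            pkg, adv.Module, adv.FixedVersion)
    }

    func main() {
        // In a real system this would come from a CVE/OSV feed; here it is
        // a single hard-coded advisory.
        feed := []Advisory{{Module: "example.com/somelib", FixedVersion: "v1.4.3"}}

        for _, adv := range feed {
            for _, pkg := range reverseDeps[adv.Module] {
                fileFixPR(pkg, adv)
            }
        }
    }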

I can help with sponsoring this work if somebody wants to try it.

Don't just vendor - rebuild the ecosystem and persuade the vendor to work on software management ...

Posted Jan 28, 2025 1:22 UTC (Tue) by rahulsundaram (subscriber, #21946) [Link] (4 responses)

> I disagree; all of this bespoke language ecosystem crap stems from the "need" to natively support MacOS and Windows.

Mac and Windows don't even explain things like Flatpak and Snap. Developers cannot rely on distro package managers because distros follow their own schedules, routinely add patches that change behaviour, and package only a small portion of what's available; even then there is almost no chance they have packaged the versions one might want. The net result is that in almost every organization I have worked with, developers very rarely rely on native Linux packages beyond the base OS, and Linux distributions have lost the influence they once had. This is not a good thing, but I don't see it changing anytime soon.

Don't just vendor - rebuild the ecosystem and persuade the vendor to work on software management ...

Posted Jan 28, 2025 10:46 UTC (Tue) by farnz (subscriber, #17727) [Link]

Part of the problem is that users have demands that cannot be met by RPM or dpkg as designed for use in a distro. Users have a very wide definition of "regression", covering everything from "used to work, doesn't any more" through to "this pixel of the UI used to be #ffeedd, it's now #ffeede, but nothing else has changed", and user expectation is that if they encounter a regression of any form, there is a quick and easy workaround.

This puts distros in a bind; I need to be able to simultaneously install both the latest version of something (so that I get the bugfixes I need from it), and also to parallel-install the old version of it (so that I can avoid the perceived regression from the new version). And worse, distros don't particularly want to provide the old, known buggy, version, because, well, it's known buggy.

There is no easy solution here - not least because some regressions will be because a thing was buggy, and the user depended on side effects of that bug. There are only different tradeoffs to be made - and Linux distros have made tradeoffs focused on fewer bugs that result in users moving to other tooling (CPAN, CTAN etc) because it's easier to switch tooling than to ask distros to do what they want.

Don't just vendor - rebuild the ecosystem and persuade the vendor to work on software management ...

Posted Jan 29, 2025 14:31 UTC (Wed) by LtWorf (subscriber, #124958) [Link] (2 responses)

flatpak and snap are there mostly for proprietary applications. See also the push for wayland and isolation.

flatpak doesn't even allow CLI applications, so it's completely useless for go.

Completely different use cases.

Don't just vendor - rebuild the ecosystem and persuade the vendor to work on software management ...

Posted Feb 7, 2025 18:35 UTC (Fri) by TheBicPen (guest, #169067) [Link] (1 responses)

> flatpak and snap are there mostly for proprietary applications.

That's not true at all. I can't speak for snap, but on flathub, the vast majority of applications are FOSS.

Don't just vendor - rebuild the ecosystem and persuade the vendor to work on software management ...

Posted Mar 11, 2025 14:15 UTC (Tue) by ssokolow (guest, #94568) [Link]

*nod* For example, I install Inkscape from Flathub instead of through the distro repos because it lets me get the latest version on top of Kubuntu LTS when it works, while trusting that, with a single command, I can reliably downgrade my way around the occasional crash bug they ship.

