Vendoring Go packages by default in Fedora
The Go language is designed to make it easy for developers to import other Go packages and compile everything into a static binary for simple distribution. Unfortunately, this complicates things for those who package Go programs for Linux distributions, such as Fedora, that have guidelines which require dependencies to be packaged separately. Fedora's Go special interest group (SIG) is asking for relief and a loosening of the bundling guidelines to allow Go packagers to bundle dependencies into the packages that need them, otherwise known as vendoring. So far, the participants in the discussion have seemed largely in favor of the idea.
Discussions about vendoring and distribution packaging are neither new nor unique to Go or Fedora. LWN has covered the overlap between language and distribution package managers in 2017, vendoring and packaging Kubernetes for Debian in 2020, a discussion around iproute2 and libbpf vendoring also in 2020, and another Debian conversation about vendoring in 2021—and there's no doubt similar discussions have taken place in the interim. It is a recurring topic because it remains an unsolved problem and a perennial pain point for packagers at Linux distributions with policies that discourage bundling.
Those policies are not meant to frustrate packagers, however, but to benefit the other contributors and users of the distribution. If there is a bug or security flaw in a dependency, for example, it is easier to update a single package than if the dependency is bundled into several packages. Licensing and technical review of packages with bundled dependencies are more complicated, and there is a greater chance that non-compliant licenses and bugs will be missed. Bundling also leads to bloat: if five packages bring along their own copies of a dependency, that is a waste of disk space for users. While disk space is less precious than it was when distributions first developed these policies, there is no sense in wasting it. In short, there are good reasons Fedora and other distributions have developed packaging policies that discourage bundling.
Nevertheless, those policies do frustrate packagers—many of whom are volunteers with limited time and patience to dedicate to the task of splitting out a program's dependencies into multiple packages. Go programs are known to be particularly painful in this regard. Last year "Maxwell G" wrote a blog post about the problems with unbundling Go to meet Fedora's guidelines. They pointed out that the Go SIG maintains 556 application packages, with 1,858 library packages to support those applications:
Maintaining these library packages is not exactly straightforward. The Go packaging ecosystem is [non-ideal] for distributions that wish to maintain each library as a separate package—it was never designed that way. A fair number of libraries do not publish tagged releases, so each project depends on an arbitrary commit of those libraries. Other widely used libraries, such as the golang-x-* projects, only have 0.Y.Z beta releases, despite being depended on by a large number (494) of source packages. (This is a recent development; for a while, these packages did not have tagged releases either.) Furthermore, the Go module ecosystem lacks a way for packages to set compatible version ranges like, for example, the Python and Rust crate ecosystems. Each project just sets a single version in the go.mod file. We cannot rely on upstream metadata or stable versions to help us determine whether an update is incompatible with other packages in the distribution like other Fedora language ecosystems do.
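To make the go.mod point concrete, here is a minimal, hypothetical go.mod file; the module path and versions are invented for illustration. Each require line records a single minimum version, and there is no syntax for expressing a compatible range such as ">= 1.8, < 2.0":

    module example.com/sometool

    go 1.24

    require (
        github.com/spf13/cobra v1.8.1
        golang.org/x/sys v0.29.0 // indirect
    )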
To cope with the mismatch between Go and RPM packaging, Maxwell G has been working on the go-vendor-tools project, which creates reproducible vendor archives and improves license scanning for vendored dependencies. They said recently that the Go SIG has moved 25 packages to this new tooling, including Docker. Maxwell G recommends bundling by default for Go software.
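With the standard toolchain, vendoring itself is a one-command operation; a sketch of the generic workflow (go-vendor-tools layers reproducibility and license handling on top of this):

    go mod vendor    # copy every module listed in go.mod into ./vendor
    go build ./...   # since Go 1.14, an existing vendor/ directory is used automatically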
The proposal
The most recent discussion started when Álex Sáez, a member of the Go SIG, opened a ticket about the SIG's "challenges in maintaining and updating Go dependencies" with the Fedora Engineering Steering Council (FESCo) on January 13. There has been a "growing difficulty in managing Go dependencies in the container ecosystem", which is making it difficult to update packages that have new dependencies. Kubernetes, Docker, Podman, and many other applications popular in the container ecosystem are primarily written in Go.

Fedora's guidelines currently state that Go dependencies should be unbundled by default, but say that it "can be reasonable to build from bundled dependencies" for some projects. Sáez argued in the ticket that the complexity of unbundling dependencies is justification enough, and suggested a shift toward allowing bundled Go packages by default instead. FESCo member Kevin Fenzi recommended gathering feedback from the larger developer community.
Mikel Olasagasti sent a message to the fedora-devel mailing list to make the Go SIG's case and solicit feedback. Allowing vendoring by default would alleviate bottlenecks and improve the stability of Go programs in Fedora, he said. It is urgent now due to the inclusion of Go 1.24 in the upcoming Fedora 42 release, which has caused more than 200 packages to fail to build from source (FTBFS). If consensus on the proposal were to be reached, the Go SIG could implement the change as part of Fedora 43.
Why just Go?
One of the first and predictable reactions to the idea of loosening bundling restrictions on Go packages was that it would usher in new rules for other languages as well. Michael Gruber called the proposal "the can-opener to deviate from our distribution model in general".
I mean, if I can pull in a myriad of Go or Rust modules for my package to build (without individual review or re-review in case of changed dependencies), then why should I bother unbundling a library such as zlib, regex, openssl, ... ?
Gruber said that if developers were installing dependencies via language ecosystem tools instead of using dependencies from Fedora, it would fundamentally change Fedora to become merely a platform for applications rather than a development platform.
Daniel P. Berrangé replied that the logic should be applied consistently across languages, and suggested that it was time to rethink Fedora's approach to packaging for "non-C/non-ELF language modules". With languages such as Go, Python, Java, Rust, and more, dependencies are "fast moving with a somewhat more relaxed approach to API [compatibility] than traditional ELF libraries". Fedora's approach, he said, relies on "the heroics of small numbers of SIG members", and if those people decide to focus on other things, then support for those languages is at risk of falling apart. He mentioned Node.js, which was largely maintained solely by Stephen Gallagher until he stepped down from that in May 2024.
Fedora's review process has critical components, Berrangé said, particularly when it comes to license scanning. But Fedora and its upstreams are "completely non-aligned" when it comes to bundling dependencies, and all signs point to the two moving further apart "for non-infrastructure components of the OS in the non-C/ELF world". He did note that there would be security tradeoffs if Fedora allowed more packages to bundle dependencies, but suggested that there would be more time to work on security updates and improve automation if developers didn't have to do the busy work of unbundling dependencies to begin with.
Gerd Hoffmann pointed out that the current practice of unbundling Go and Rust dependencies does not make it easier to publish security updates, due to static linking: it is not enough to update the package with a security flaw; any packages depending on the updated package have to be rebuilt as well. A hefty side discussion followed after FESCo member Neal Gompa complained about how package rebuilds are handled in Fedora versus openSUSE.
FESCo member Zbigniew Jędrzejewski-Szmek disagreed with the reasoning that Fedora should allow bundling across the board for "consistency", and disagreed with those who opposed bundling because "that'll create a slippery slope to use it everywhere." Other language ecosystems, such as Rust and Python, deal well with unbundling dependencies, he said; he disagreed that modern languages were incompatible with traditional distribution packaging.
Jan Drögehoff disputed that Rust dealt well with unbundling dependencies, though he said Go's ecosystem was even more annoying than Rust's because its decentralized namespace allows linking to dependencies hosted anywhere. Rust, in comparison, uses the crates.io registry, which ensures that two dependencies do not share the same name. He said it should be a matter of when and how bundling was allowed for all languages, not if it is allowed. Unbundling dependencies causes too much work for packagers:
Lots of Fedora package maintainers are already spread across a dozen packages for which they often don't have the time to properly maintain and most of the time no one cares because there are only a handful of people using it or any programs that consume it hasn't been [updated] either.
Rings to unbind them
Many years ago, outgoing Fedora Project Leader (FPL) Matthew Miller put together a proposal to change how Fedora was developed. Specifically, he proposed the idea of package "rings". (LWN covered this in 2013.) Under that plan, Fedora would adopt a tiered system for packages. Packages in the zeroth tier would adhere to the Fedora Packaging Guidelines we know and love today. Packages in the first tier would have looser guidelines, such as allowing vendoring, while outer tiers would have looser guidelines still. That proposal was never adopted, but Miroslav Suchý suggested revisiting the idea and allowing bundling if a package was in the outer ring and no inner packages were dependent on it.
Jędrzejewski-Szmek said that the ring idea is dead, and should stay dead. The answer to whether bundling is appropriate depends on the implementation language and details of the individual package, such as whether that package is at the core of Fedora with many dependencies or a leaf package with no dependencies. "This does not sort into rings in any way."
Suchý countered that the Copr build system and repositories had become the unofficial outer ring of Fedora. Berrangé also argued that, while the ring idea never came to exist within Fedora, it had grown up outside of it:
An increasingly large part of the ecosystem is working and deploying a way that Fedora (and derivative distros) are relegated to only delivering what's illustrated as Ring 1. This is especially the case in the CoreOS/SilverBlue spins, but we see it in traditional installs too which only install enough of Fedora to bootstrap the outside world. Meanwhile ring 2 is the space filled by either language specific tools (pip, cargo, rubygems, etc), and some docker container images, while ring 3 is the space filled by Flatpaks and further docker container images.
As users adopt things from the outer rings, they lose the benefits that Fedora's direct involvement would have enabled around licensing and technical reviews, as well as security updates.
Should we stay or should we vendor?
At least one FESCo member seemed skeptical of loosening the rules for all languages, and not entirely convinced about the Go SIG proposal. Fabio Valentini asked how bundling would let the Go SIG update 200 FTBFS packages any faster, and said that patching vendored dependencies is really annoying, at least when it comes to Rust: "It *is* possible, with workarounds (disabling checksum checks, etc.) but it's very gnarly."

He also said, in reply to a comment about supply-chain security from Richard W.M. Jones, that bundling is not a cheat code to avoid review: "The responsibility to check that vendored dependencies contain only permissible content is still on the package maintainer".
Olasagasti explained that the number of required packages in Fedora can far exceed the number of dependencies defined in a project's go.mod (the file that describes a module's properties, including its dependencies). If a Go package had vendored dependencies, "it might not require these additional packages, as they wouldn't be part of the bundled application." The doctl package, which implements the DigitalOcean command-line interface, requires 122 dependencies in its go.mod but 752 Fedora packages to build—629 of them are development packages for Go.
For example, doctl's go.mod file defines 3 indirect dependencies in the containerd modules (console, containerd, and log). However, for Fedora, at least 16 containerd modules would need to be included (aufs, btrfs, cgroups, cni, console, containerd, continuity, fifo, fuse-overlayfs-snapshotter, imgcrypt, nri, runc, stargz-snapshotter, ttrpc, typeurl, and zfs).
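In go.mod terms, those three modules appear only as "// indirect" requirements; a hypothetical excerpt (versions invented for illustration):

    require (
        github.com/containerd/console v1.0.4 // indirect
        github.com/containerd/containerd v1.7.24 // indirect
        github.com/containerd/log v0.1.0 // indirect
    )

Packaged separately, each line roughly corresponds to a golang-github-containerd-* source package in Fedora, and those packages pull in the rest of the containerd family as build dependencies.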
FESCo member Fabio Alessandro Locati agreed with Valentini that the conversation should focus only on Go. However, he looked favorably on the idea of moving Go packages to a vendored approach.
At the moment, the debate continues without a clear indication that FESCo will approve or deny the Go SIG proposal. However, there seems to be less resistance to the idea than there might have been a year or three ago. If approved for Go, it seems likely that lobbying will begin shortly after for other languages to follow suit.
Posted Jan 27, 2025 15:41 UTC (Mon)
by amacater (subscriber, #790)
[Link] (110 responses)
The push is now on to produce reproducible builds, software sourcing, and SBOMs - meta-knowledge about where your code has come from, where it is used, and where it is going is now much more important than before.
FTBFS should mean things get dropped unconditionally. Fedora as upstream-ish for Red Hat Enterprise Linux (and therefore Almalinux, Rocky Linux, SUSE Liberty Linux and CentOS Stream)
Google - please step up to the plate and work out a dependency model that works. The death of the distributions will just extend to "How was this built, from where, WTF we can't use this and we can never have any clue as to security for it"
We invented distributions and dependency management from 1991-94 precisely to sort some of these problems. A devil may care "every programmer for themselves" attitude is absolutely fine until you want to USE code.
Posted Jan 27, 2025 18:25 UTC (Mon)
by Cyberax (✭ supporter ✭, #52523)
[Link] (3 responses)
Go actually helps a lot in this regard. Its build system produces bit-for-bit reproducible builds without any poking (if you don't use cgo). It would be nice if the distro switched from the "package the world" approach to "verify the world".
There's no need to fight the various build systems, work with them instead! The distro should somehow bubble up all the vendored dependencies and make it easy to look for vulnerable or defective dependencies.
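Go already records much of what a "verify the world" model would need: every binary built from modules embeds its full dependency list, which the stock toolchain can dump. A sketch (binary path, versions, and output are illustrative and abbreviated):

    $ go version -m /usr/bin/doctl
    /usr/bin/doctl: go1.24.0
        path  github.com/digitalocean/doctl/cmd/doctl
        mod   github.com/digitalocean/doctl  v1.120.0
        dep   github.com/spf13/cobra         v1.8.1  h1:...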
Posted Jan 27, 2025 20:50 UTC (Mon)
by Cyberax (✭ supporter ✭, #52523)
[Link]
It is! You can cryptographically safely go from a git reference (or a source code hash) to the resulting binary.
Posted Jan 27, 2025 22:26 UTC (Mon)
by rahulsundaram (subscriber, #21946)
[Link] (93 responses)
Distributions have increasingly little leverage to influence anything like this beyond what upstream itself is directly doing already. All the pointless distro incompatibilities have resulted in language-specific managers and various container and sandbox style solutions effectively bypassing the distro for anything beyond the core layer.
Posted Jan 27, 2025 23:30 UTC (Mon)
by pizza (subscriber, #46)
[Link] (92 responses)
I disagree; all of this bespoke language ecosystem crap stems from the "need" to natively support MacOS and Windows.
Posted Jan 27, 2025 23:53 UTC (Mon)
by Cyberax (✭ supporter ✭, #52523)
[Link] (86 responses)
The basic issue is that a package requires _years_ to become available, and probably person-months of work. And even then, dependency management quickly becomes impossible. APT simply dies if you try to make it resolve dependencies for 200k packages.
Posted Jan 28, 2025 0:22 UTC (Tue)
by dbnichol (subscriber, #39622)
[Link] (74 responses)
If you're relying on the system package manager, you're just hoping for the best. Is the package much newer or much older than you've tested against? Is the package patched to do something you don't expect? Is the package even available? What do you do if there are problems with that? "Sorry my program is broken for you. Please go talk to this third party that I have no influence over."
Posted Jan 28, 2025 1:15 UTC (Tue)
by bluca (subscriber, #118303)
[Link] (62 responses)
Yes, but that's a good thing. An extraordinarily good and fundamental thing, in fact. Because the vast majority of individual app developers have no idea what building a consistent, secure, stable and maintainable ecosystem even looks like, and they just want to get past the "boring bits" and get them out of the way. So it's very common to see these developers who ship their own binaries do insane things like bundle three-year-old and CVE-infested versions of OpenSSL, with no regard whatsoever for the security of users and their systems. Thanks, but no thanks!
Posted Jan 28, 2025 11:47 UTC (Tue)
by mathstuf (subscriber, #69389)
[Link] (61 responses)
What makes you think the distro would have any luck maintaining an app on their own? If upstream is so negligent as to do this…why trust them for anything? At least Rust has `cargo-audit` that one can run in CI to make sure that you're not depending on anything that is yanked or has CVEs filed against it. I run it nightly on my projects. IIRC, there's a similar tool for golang that can actually do line-based CVE detection (i.e., it won't bother you with CVEs in functions of packages you don't actually end up using), but this is probably far too sharp for distros as vulnerability scanners tend to just seek "package-version" tuples and not care whether the vulnerable code is actually present or not (given the static compilation being able to drop code that isn't used anywhere).
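The Go tool alluded to here is presumably govulncheck, which uses reachability analysis to report only vulnerabilities in code the program can actually call; a sketch (the binary name is a placeholder):

    go install golang.org/x/vuln/cmd/govulncheck@latest
    govulncheck ./...                  # source mode: call-graph-level filtering
    govulncheck -mode=binary ./myprog  # binary mode: coarser, symbol-level filtering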
Posted Jan 28, 2025 12:55 UTC (Tue)
by bluca (subscriber, #118303)
[Link] (60 responses)
Because we are actually doing it, and have been doing it for a while
> At least Rust has `cargo-audit` that one can run in CI to make sure that you're not depending on anything that is yanked or has CVEs filed against it.
As everything else in the Rust ecosystem, this is geared toward a single application developer, mostly large corps, and screw everything and everybody else. It doesn't scale to do that for every single thing, all the time, forever, when you have exactly zero resources and can barely keep the lights on with volunteer work.
Posted Jan 28, 2025 13:55 UTC (Tue)
by taladar (subscriber, #68407)
[Link] (6 responses)
I am glad you are in favor of forbidding the bad behavior of running outdated versions that are older than what the upstream project still supports. I agree that this is a waste of our limited resources to try to support this clearly broken use-case and to constantly waste energy all the time, forever to attempt to keep up with the changing world around you by back-porting fixes with volunteer work.
What is actually geared towards large corps is this pretense that supporting their fantasy that they can freeze the world around them is even possible or more stable.
Posted Jan 28, 2025 14:25 UTC (Tue)
by pizza (subscriber, #46)
[Link] (2 responses)
Question. What do you think all of this software actually runs on?
...."LTS" hardware, of course.
Of which the most prevalent is a 26-year-old backwards-compatible extension of a design that dates to 1985, which in turn is a backwards-compatible extension of a design released in 1978, which itself is a non-backwards-compatible extension of a design from 1971.
_everything_ depends on _someone else_ "freezing the world".
Posted Jan 30, 2025 17:47 UTC (Thu)
by pizza (subscriber, #46)
[Link]
Closer to 22 than 21 (with the spec available three years before that). But eh, that doesn't change my point. Most of us are using (recently-compiled) binaries that will natively run on two-decade-old hardware. I'd wager that a majority are still using (still recently-compiled) binaries that will natively run on three-decade-old hardware (ie 'i686'), and in some cases, four-decade-old (ie 'i386') hardware.
...As it turns out, "freezing the world" is utterly necessary in practice.
Posted Jan 28, 2025 14:33 UTC (Tue)
by bluca (subscriber, #118303)
[Link] (2 responses)
What the "upstream project" supports or doesn't doesn't matter one bit, because they largely have no concern whatsoever for compatibility. See for example the kernel, which breaks userspace API on every single damn release, and very often in allegedly "stable" releases too, because testing is for customers. And that's why people running real infrastructure with real workloads pay out of their nose to get RH or Canonical to provide targeted fixes in LTS releases for 5-10 years, instead of blindly throwing anything that kernel.org spits out and chucking it over the wall, praying nothing falls apart (hint: there's always something that does).
Posted Jan 28, 2025 21:43 UTC (Tue)
by mathstuf (subscriber, #69389)
[Link] (52 responses)
Are we discussing specific instances or the concept in general? I find it hard to believe that a distro would say "yes, we can maintain that" without seeing what it is first. See spot's marathon of shaping Chromium into a Fedora-ok-shaped package years ago.
> It doesn't scale to do that for every single thing, all the time, forever, when you have exactly zero resources and can barely keep the lights on with volunteer work.
I mean…nothing scales "for every single thing, all the time, forever", so I don't see what the argument is here. I think one wants to have some level of metrics for these kinds of maintenance tasks and to include that in `crev` so that one can even ask the question "am I built on a solid foundation?".
> forbidding bad behaviour by policy and construction
Great, we agree here :) . Set up `cargo-audit` in CI, require green CI, check the `crev` status of your dependency tree. I'd like to see C and C++ libraries start to do the equivalent, but the ecosystem is just not in a state to even allow these things effectively anyways.
Posted Jan 28, 2025 21:49 UTC (Tue)
by bluca (subscriber, #118303)
[Link] (51 responses)
Distributions exist, so by definition this is possible
> "am I built on a solid foundation?"
Exactly, that's the question - and if the answer involves downloading random stuff from the internet at build time and bundling security-critical dependencies so that one doesn't have to be bothered with updating them (and they themselves don't have to be bothered with annoying details such as maintaining backward compatibility), it's obviously "no"
> Great, we agree here :)
Not at all - cargo is exactly the kind of stuff that needs to be avoided like the plague. The entire reason for its existence is cutting corners, embedding extraordinarily bad and damaging behaviours in normal practices and normalizing it. It's the epitome of all that is wrong with the Rust ecosystem.
Posted Jan 29, 2025 3:38 UTC (Wed)
by Cyberax (✭ supporter ✭, #52523)
[Link] (3 responses)
Go allows hermetically sealed builds with bit-for-bit reproducibility. With almost zero effort. Can C/C++ infrastructure provide the same level of assurance? Without having to tear out your hair?
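That claim is straightforward to test; a minimal sketch, using the flags from the usual reproducible-builds recipes (CGO_ENABLED=0 keeps the host C toolchain out of the result):

    CGO_ENABLED=0 go build -trimpath -o build1/app .
    CGO_ENABLED=0 go build -trimpath -o build2/app .
    sha256sum build1/app build2/app   # same toolchain, same sources: identical hashes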
Posted Jan 29, 2025 19:35 UTC (Wed)
by bluca (subscriber, #118303)
[Link]
https://tests.reproducible-builds.org/debian/reproducible...
Posted Jan 29, 2025 13:24 UTC (Wed)
by joib (subscriber, #8541)
[Link] (11 responses)
That's a strawman argument, as there's obviously other ways of doing things than this and the traditional Linux distro model.
Distros do a lot of good work, but after screaming "my way or the highway" at developers, the developers have largely chosen the highway. Perhaps instead of screaming some more at the developers, berating how dumb and irresponsible they are, and hoping that they will see the light and come back to the true faith, the distros need to do some soul-searching on their end? Maybe there are ways of accommodating the workflows the developers have voted for with their feet while still having the distro advantages of policy auditing and enforcement, security tracking, etc.?
Just as a quick strawman proposal (gee, lots of strawmen here..):
- Distro runs its own registry (NOT an automatic mirror or proxy of upstream). Before first syncing a package from the upstream registry into the distro registry, the responsible packager(s) must check it, similar to adding a new package to a distro today (like the Debian WNPP process).
Posted Jan 29, 2025 13:56 UTC (Wed)
by bluca (subscriber, #118303)
[Link] (2 responses)
Or, just go with what anarcat said at the top of this thread:
> or abandon the language in both Debian and Fedora?
It's not like there's any law of physics that says golang and rustlang applications must absolutely be included. If the work distributions do is useless, and the language-specific stores are oh-so-much-better, surely they can just use those, and everyone will be happy?
> Just as a quick strawman proposal (gee, lots of strawmen here..):
That's a lovely list of requests (except it doesn't really solve most of the problems, like having to update one single package 19283819 times every time), where do we send the invoice to pay for it?
Posted Jan 29, 2025 14:32 UTC (Wed)
by Wol (subscriber, #4433)
[Link]
It works fine for Gentoo :-)
Cheers,
Posted Jan 30, 2025 8:27 UTC (Thu)
by joib (subscriber, #8541)
[Link]
That is of course true, and it might indeed be that distros will increasingly be relegated to providing the low-level plumbing, and most apps will be provided via app stores, with bundled dependencies and only minimal automated oversight from the distro, etc. Which is roughly what many other operating systems (Android, macOS, iOS, Windows) are doing.
A shame, in a way, because I think there are pain points in that model as well that distros would be well placed to help out with. But if distros refuse to evolve, it's their choice I guess.
> That's a lovely list of requests (except it doesn't really solve most of the problems, like having to update one single package 19283819 times every time), where do we send the invoice to pay for it?
In my bubble the CPU and BW cost of build/distribution farms are a mere rounding error compared to engineer salaries, facilities etc., but maybe it's different for a volunteer run project with little such expenses.
Posted Jan 29, 2025 14:51 UTC (Wed)
by LtWorf (subscriber, #124958)
[Link] (6 responses)
Having said that, developers are not choosing the distribution way because 99% of them are on Windows or macOS and have absolutely no familiarity with the concept of a distribution. It's like explaining music to a deaf person. They lack the life experience to understand why one might like a distribution at all.
That's why they don't care, and that's why appeasing them is probably useless.
Posted Jan 29, 2025 20:04 UTC (Wed)
by Cyberax (✭ supporter ✭, #52523)
[Link] (5 responses)
The crucial difference is that the distros won't require developers to package the code into their own special snowflake version of package format and jump through hoops to get package reviews.
Posted Jan 29, 2025 23:04 UTC (Wed)
by LtWorf (subscriber, #124958)
[Link] (4 responses)
So if you think "use popular libraries" and "don't use some crappy tool nobody has heard of" is "jumping through hoops"… sure… However, developers that do things like these probably write low-quality software, and nobody is worse off for it not being available in distribution repositories. The ego of the author might be a bit bruised but perhaps that can be a learning experience.
Keep in mind that prior to becoming a distribution maintainer, I had my software packaged in Debian by someone else. So I've experienced both sides. Have you?
Posted Jan 29, 2025 23:41 UTC (Wed)
by Cyberax (✭ supporter ✭, #52523)
[Link] (3 responses)
No. The ask is now: "don't use any one of these new-fangled Rust and Go thingies, use C with the libraries we already have".
> The ego of the author might be a bit bruised but perhaps that can be a learning experience.
So you're literally saying that the author needs to debase themselves before the distro packaging gods. They need to follow their diktats and accept the wisdom of the ancients.
As a software author, I will just tell distro packagers to go and copulate with themselves in a highly unusual manner.
I'm already seeing the results of this. For example, there's a huge infrastructure of "self-hosting" projects. I'm using Immich for photos, FreshRSS for news, calibre-web for books, HomeAssistant, PeerTube, a self-hosted AI infrastructure, etc.
NOTHING out of this truly wonderful infrastructure is packaged in distros. Nothing. And the enabling technology for all of them were not distros, but Docker (and Docker-Compose) that allows them to not care about distros at all.
Posted Jan 30, 2025 1:34 UTC (Thu)
by somlo (subscriber, #92421)
[Link]
When they occasionally do something that makes packaging more difficult, I politely point it out, and ask for assistance. 100% of the time I did this, upstream either helpfully pointed out the detail I was missing, or made (small) changes to accommodate me in making downstream packaging less difficult.
> As a software author, I will just tell distro packagers to go and copulate with themselves in a highly unusual manner.
I'm sorry your experience was *this* negative. OTOH, if every distro packager you ever ran into was this much of a jerk, what do they all have in common? :D
Posted Jan 30, 2025 6:31 UTC (Thu)
by LtWorf (subscriber, #124958)
[Link] (1 responses)
> No. The ask is now: "don't use any one of these new-fangled Rust and Go thingies, use C with the libraries we already have".
I guess you haven't checked what's in distribution repositories for the past 15 years?
> As a software author, I will just tell distro packagers to go and copulate with themselves in a highly unusual manner.
No worries, with that "style" of communication everyone will stay away from you regardless of how you write software.
Posted Mar 6, 2025 17:21 UTC (Thu)
by nye (subscriber, #51576)
[Link]
Corbet, can we please just get a permaban for this malignant troll? LtWorf poisons almost every thread with his mindless toxic bile, and it's ruining LWN.
Posted Jan 29, 2025 13:53 UTC (Wed)
by josh (subscriber, #17465)
[Link] (34 responses)
This is incorrect, but I also don't expect you're interested in *discussing* this rather than announcing your position. So, for the benefit of others:
This is incorrect, and in fact, cargo goes to a *great deal of effort* to accommodate many different use cases, including Linux distributions, and Linux distributions manage to package many different things written in Rust and using Cargo. Please don't paint Rust with the same brush as Go/Node/etc. Rust is very well aware that it's in a particularly good position to be able to drop in where previously only C could, and because of that, Rust goes out of its way to make that as feasible as possible.
That said, we could always do better, and there are many efforts in progress to do better. But "better" and "do exactly what C does" are emphatically not the same thing.
Posted Jan 29, 2025 14:19 UTC (Wed)
by bluca (subscriber, #118303)
[Link] (33 responses)
You can correct me if I'm wrong, but as of today the Rust stdlib still gets a different hashed SONAME on every build, so dynamic linking is de-facto impossible and unusable, so there's no stable ABI, and every upload is a new mass ABI transition. This cannot work sensibly.
So, in practice, Golang and Rustlang, as ecosystems, are in pretty much the same position: vendoring everything, and statically linking everything and the kitchen sink.
C, as a language, was also in the exact same position, and in fact you can perfectly well still do exactly all that and then some, if you want to. We made it do something very different, in our ecosystem, for extremely good reasons that were true 20 years ago as they are true today, and substituting Golang or Rustlang instead of C doesn't change that one bit.
Posted Jan 29, 2025 14:40 UTC (Wed)
by Wol (subscriber, #4433)
[Link] (3 responses)
And why would one WANT dynamic linking, apart from the fact Unix die-hards have been brainwashed into thinking it's the "one true way"?
Am I right in thinking that glibc is one of the few places where dynamic linking works as intended? Unfortunately the (then mainstream) linux software I would like to run is incompatible with glibc.
And those of who are old enough to remember a world before DOS/Windows/Linux aren't sold on the balls-up that is Posix and its descendants. I for one spent the start of my career on a Multics derivative, not a Eunuchs derivative. A hell of a lot has been lost to that academic castrated mess that infected all those University students. "Those that forget history are doomed to re-live it".
Cheers,
Posted Jan 29, 2025 14:55 UTC (Wed)
by bluca (subscriber, #118303)
[Link] (2 responses)
Because you install, update, manage and load a dependency ONCE, instead of 1928391238912 times, one for each user, in slightly different version each time. This affects security, maintenance, bandwidth (not everyone lives in the western world with unlimited data and symmetric fiber), disk space, and memory (a shared library is loaded in memory ONCE and then just remapped for every process that uses it, and this makes a difference on small devices). And more of course. All these are not really language-specific reasons, and apply just the same everywhere. C wasn't (used) like this at the beginning either, it was coerced into, _despite_ its characteristic as a language, and not because of it, and that's why people who work in this ecosystem don't really buy the excuses made by $newlanguageoftheweek.
> Am I right in thinking that glibc is one of the few places where dynamic linking works as intended?
You are very much not right in thinking that, no. By policy every library in most distros is used in the shared object format, and exceptions are extremely rare (and very much frowned upon).
Posted Jan 29, 2025 20:05 UTC (Wed)
by Cyberax (✭ supporter ✭, #52523)
[Link]
This has never been true.
Posted Jan 30, 2025 14:35 UTC (Thu)
by taladar (subscriber, #68407)
[Link]
There is a reason people try to work around this by vendoring, linking statically, app bundles on Mac, Docker containers, appimage, flatpak and many other systems that again do ship a copy of everything for each application again.
And that reason is that you can only maintain the brittle dynamic runtime environment if you literally touch it as little as possible which means you keep everything old, not just the core dependencies that benefit from being replaced once instead of multiple times.
That might be fine for you from the perspective of a single distro but out in the real world we have to deal with all the supported versions of your distro and other distros at the same time and keeping stuff old means we have to support an extremely wide range of versions for literally everything, from 10 year old stuff to the most recent.
This can e.g. lead to building elaborate systems to detect which config option is supported on which version when generating configuration for OpenSSH because the supported major versions span literally half the major version range of that software.
Posted Jan 29, 2025 14:46 UTC (Wed)
by farnz (subscriber, #17727)
[Link] (27 responses)
The idea was that libraries would carefully curate their ABI, such that users never experienced an ABI break - no "soname bumps" or anything like that - as part of justifying the prices they charge developers to buy access to those libraries. The work distros have backed themselves into doing for all projects is the ABI stabilization work - with a few notable exceptions (glibc, GLib and a small number of others), upstream developers on open source projects don't bother keeping their ABI stable, as long as their APIs are stable enough for their intended consumers.
But the original reason for dynamic linking is simply that consumers cannot recompile proprietary binaries, and want to receive binaries they can run without a "prepare" step. Good tooling can already rebuild all dependent packages on a version change (e.g. Open Build Service does this), and the other reason that people cared about dynamic linking back in the day is that downloading updates over dial-up was already painfully slow with statically linked packages.
Posted Jan 29, 2025 15:04 UTC (Wed)
by bluca (subscriber, #118303)
[Link] (15 responses)
That's most definitely not true - some libraries are bad, but most are good on that front, and ABI transitions are relatively easy, with few exceptions especially compared to the grand total of number of libraries shipped.
> But the original reason for dynamic linking is simply that consumers cannot recompile proprietary binaries, and want to receive binaries they can run without a "prepare" step. Good tooling can already rebuild all dependent packages on a version change (e.g. Open Build Service does this), and the other reason that people cared about dynamic linking back in the day is that downloading updates over dial-up was already painfully slow with statically linked packages.
That might surely have been the issue originally, but it's not today. Constantly rebuilding everything all the time costs a lot of compute power (hello, climate change anyone?), and while we in the western countries are blessed with symmetric fiber connections with no data caps, that is most definitely not the case everywhere in the world, and in many many countries metered 2g/3g/4g cellular connections are the only option. And on top of that, disk space still matters in some cases: when building an initrd, you want that as small as possible even on a modern powerful laptop. And you want to save memory and latency by loading the DSO _once_ instead of every time for every program, which matters a lot on low-end/embedded devices.
Posted Jan 29, 2025 15:35 UTC (Wed)
by farnz (subscriber, #17727)
[Link] (14 responses)
On top of that, you only have to follow a distribution development list to see both announced and unannounced soname bumps, where a package has deliberately broken its ABI, along with bug reports because a package has silently broken its ABI. Distros have even set up language-specific tooling for C to generate ABI change reports, so that they can catch broken ABIs before users notice.
All of this is a lot of work, and it's largely based on tooling distros have built for C. It's not hugely surprising that other languages don't have the tooling yet - it took literal decades to get it built for C, and integrated so transparently into distros that it's now possible for package maintainers to not even be aware that it exists, right up until the package they maintain does a bad thing and the tooling reports bugs to them.
Posted Jan 29, 2025 18:37 UTC (Wed)
by bluca (subscriber, #118303)
[Link] (13 responses)
The most common C libraries that are used in any base system are very stable. glibc last broke compat what, 20 years ago? A very commonly used one with the most recent change was OpenSSL with version 3.0.
It does happen of course, and there are processes for it, well-regulated and understood. And slow.
Posted Jan 30, 2025 14:41 UTC (Thu)
by taladar (subscriber, #68407)
[Link] (10 responses)
That isn't really a solution, it just means people avoid change because it has proven to be so painful.
Posted Jan 30, 2025 15:34 UTC (Thu)
by Wol (subscriber, #4433)
[Link] (1 responses)
Because they don't wear the same rose-tinted glasses as you? Because in their world view this particular problem just doesn't exist? (After all, isn't that one of the selling points of Rust - whole classes of major problems in C just disappear because the concept doesn't make sense in the Rust world.)
It's like a bird trying to explain to a fish how you spread your wings to soar in the wind - the fish (one or two excepted) will have no idea what you're talking about.
Cheers,
Posted Jan 30, 2025 17:59 UTC (Thu)
by mb (subscriber, #50428)
[Link] (4 responses)
Just because Rust chose different solutions than you prefer for the problem of backward compatibility, doesn't mean that they don't care.
If you think backwards compatibility in the C ecosystem is so far superior, then where are C Editions? Where is the common practice in the C community to use Semantic Versioning?
Except for a very few examples like glibc, there are few projects in the C community that really care about backward compatibility to the extent possible with Rust Editions and Semantic Versioning. Most projects are stable because they are not changed much on the interface level anymore, glibc being a notable exception.
Posted Jan 30, 2025 18:36 UTC (Thu)
by bluca (subscriber, #118303)
[Link] (3 responses)
This is absolute nonsense. Again, there is data on this, just look at it. Compare symbols exports vs SONAME revision of common system libraries. libsystemd0 alone added 213 new APIs just in the very latest release. SONAME revision is still 0.
Posted Jan 30, 2025 18:48 UTC (Thu)
by mb (subscriber, #50428)
[Link] (2 responses)
Saying that Rust people don't maintain backward compatibility is:
> This is absolute nonsense
They do it in a very different way.
There are more solutions to a problem than the Luca-way, you know?
I do understand that you either don't like or don't know what Rust people do.
If you want Rust to have a stable soname/ABI, please come up with a proposal about *how* to do that.
Posted Jan 30, 2025 19:41 UTC (Thu)
by bluca (subscriber, #118303)
[Link] (1 responses)
Because it's true, whether or not you like it. The standard library changes soname on every build. Backward incompatibility is programmatically baked in the build process itself.
Posted Jan 29, 2025 15:05 UTC (Wed)
by pizza (subscriber, #46)
[Link] (5 responses)
And that is the point I keep making again and again here -- Go & Rust's tooling for tracking/vendoring/pinning dependencies is quite powerful.. but is utterly useless unless you have the complete source code to _everything_. A single non-public component renders the whole thing unbuildable, even if you have the source to everything else.
This effectively means that you are forever at the mercy of your vendor to provide _any_ sort of updates.
Posted Jan 29, 2025 15:21 UTC (Wed)
by farnz (subscriber, #17727)
[Link] (1 responses)
It definitely needs a solution in the long run - there's a reason why there are developers working on a stable ABI for Rust - but it's ahistorical to claim that anything other than "protect users of proprietary software" is the reason we have dynamic linking. We have dynamic linking in ELF systems because Sun were fed up of the delays caused by having to wait between sending a tape with an update to an ISV, and the ISV sending tapes with updates to the end users of Sun systems (at minimum two postal delays, assuming the ISV is on top of things).
Posted Jan 29, 2025 16:21 UTC (Wed)
by bluca (subscriber, #118303)
[Link]
Again, that might have very well been the original reason Sun introduced it - but it is not the (sole) reason it was picked up in Linux distros, and why it is still used today.
Posted Jan 29, 2025 20:07 UTC (Wed)
by Cyberax (✭ supporter ✭, #52523)
[Link] (2 responses)
Go will save us! It had _removed_ support for binary-only modules several years ago. You simply can't build Go software without having its full source code available.
Posted Jan 29, 2025 21:04 UTC (Wed)
by pizza (subscriber, #46)
[Link] (1 responses)
> Go will save us! It had _removed_ support for binary-only modules several years ago. You simply can't build Go software without having its full source code available.
Which... helps me, as an end-user holding a binary, how?
Posted Jan 29, 2025 15:26 UTC (Wed)
by madscientist (subscriber, #16861)
[Link] (2 responses)
I really don't think this is true. The original reason for creating dynamic linking is to save memory and disk space. RAM and storage was scarce and expensive.
Posted Jan 29, 2025 15:42 UTC (Wed)
by farnz (subscriber, #17727)
[Link] (1 responses)
The goal for Sun of dynamic linking in SunOS 4 was to let them maintain system libraries without being at the mercy of ISVs rebuilding their binaries against the bugfixed system libraries. Saving memory and disk space came later, after dynamic linking had been implemented in the 1980s and shown to do just that.
Posted Jan 30, 2025 18:19 UTC (Thu)
by anton (subscriber, #25547)
[Link]
On Ultrix, which did not have dynamic linking, a number of utilities were linked into one binary (and argv[0] was used to select the proper entry point), in order to have only one instance of the libraries used by these utilities on disk and in memory.
On Linux, the uselib() system call for loading shared libraries was added before ISVs became relevant for Linux, so fixing libraries under independently developed software was obviously not the motivation to do it.
Posted Jan 29, 2025 16:17 UTC (Wed)
by dankamongmen (subscriber, #35141)
[Link] (1 responses)
this seems to call for data. even if a library author doesn't bother to learn the common semantics/practices of ABI stability themselves, they're likely to be pointed out as soon as packaging efforts start, or users begin deploying binaries expecting such stability. adding entirely new functionality is not generally abi breakage, if that's not obvious.
Posted Jan 29, 2025 20:04 UTC (Wed)
by Wol (subscriber, #4433)
[Link]
And here the ugly free-as-in-beer mentality raises its head. If I put software out as a "favour to the world" "I wrote this software for me, I hope you find it as useful as I did", I'm not going to take kindly to you telling me what to do - FOR FREE!
You want to use my software? Great! But it doesn't do exactly what you want? That's YOUR problem, and NOT mine!
Okay, maybe I get my kicks from other people using my software, which means it becomes my problem, but that's MY CHOICE, not YOUR DEMAND. That's my attitude at work - I want to help others, I want to improve things generally, and I get paid for it. But if all I cared about was just getting MY job done, the company wouldn't complain. Thing is, it's MY choice, and him as pays the piper is calling the tune. If you're not paying, you don't get a say.
Cheers,
Posted Mar 11, 2025 14:34 UTC (Tue)
by ssokolow (guest, #94568)
[Link]
The impact of C++ templates on library ABI by Michał Górny
TL;DR: C++ suffers from exactly the same problems that Rust does; dynamic linking in the face of templates/generics, without making the compromises Swift does, is an unsolved problem.
Posted Jan 29, 2025 13:52 UTC (Wed)
by LtWorf (subscriber, #124958)
[Link] (10 responses)
If you don't do it then it's your bug really.
Posted Jan 29, 2025 16:52 UTC (Wed)
by NYKevin (subscriber, #129325)
[Link] (1 responses)
This isn't even rude. Debian explicitly encourages users to report bugs to Debian and not to upstreams.
Posted Jan 30, 2025 12:32 UTC (Thu)
by farnz (subscriber, #17727)
[Link]
And that's the real point - a stable ABI (in any language) involves a lot of hard work; the distros do this work for C programs (with re-education of significant upstreams), but aren't able + willing to do it for other languages. Further, because users already bypass the distro for programs that fit their needs, but don't have the right version packaged by their distro, the distro has very little power to make upstreams comply with their policies, since they have no leverage - users can and do go round the distro to deploy what they want.
Posted Jan 30, 2025 8:08 UTC (Thu)
by NYKevin (subscriber, #129325)
[Link] (3 responses)
For a stable release, it typically implies the same level of testing that you would expect for the release as a whole (hopefully a lot more than just running automated test suites). Obviously, any given distro's definition of "stable" will differ from the application developer's (unless the distro in question is e.g. Arch Linux), but frankly, that's the distro's problem to deal with.
Posted Jan 30, 2025 18:35 UTC (Thu)
by LtWorf (subscriber, #124958)
[Link] (2 responses)
Effectively they're constantly using the latest version.
Posted Feb 1, 2025 7:30 UTC (Sat)
by NYKevin (subscriber, #129325)
[Link]
The intended use, to my understanding, is to repin everything to latest, run the automated tests, then commit if everything is green, else try to figure out what broke and why. It is not meant to be used in the automated fashion that you describe.
> Effectively they're constantly using the latest version.
The fact that some C developers write terrible code and complain that "the compiler" "broke" it, does not imply that all C developers are incapable of understanding the meaning of UB. By the same token, the fact that some Go or Rust developers do not care about backwards compatibility does not imply that all Go or Rust developers do not care.
Think about it: If this was *really* the intended use case, then pinning would not exist. It would just pull the latest every time, and you would never need to update your pins at all. Pinning, as a feature, only makes sense if you want to have some control over the version you are running.
Posted Jan 28, 2025 8:56 UTC (Tue)
by taladar (subscriber, #68407)
[Link] (10 responses)
What is crap, on both sides (distros and developers directly) is trying to pretend you can freeze the world around you. LTS, backporting, not updating your dependencies to the latest version, those are the issues we need to get rid of to simplify the problem because they blow up the space of versions that have to work together unnecessarily.
Posted Jan 28, 2025 13:09 UTC (Tue)
by pizza (subscriber, #46)
[Link] (4 responses)
This also conveniently leaves out the very real _cost_ to everyone to staying on the very latest of everything.
Every layer in the stack wants "LTS" for everything but _their_ stuff. Because _nobody_ likes stuff changing out from underneath them. Oh, and doesn't want to pay anything for it of course. (see the histrionics over CentOS)
Constant churn, and more often than not, the end-user sees zero (if not outright negative) benefit.
Posted Jan 28, 2025 13:35 UTC (Tue)
by ms (subscriber, #41272)
[Link] (2 responses)
I completely agree with this. It's no doubt not perfect, but I do suspect "minimal version selection" (https://go.dev/ref/mod#minimal-version-selection) is better behaved in this regard. I'd love to see some statistics (assuming they could be gathered) as to the level of churn caused by the different version selection algorithms.
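For readers unfamiliar with it, minimal version selection picks the highest of the minimum versions declared across the build graph, never the newest release available; a toy example with invented module names:

    # A's go.mod:   require example.com/log v1.2.0
    # B's go.mod:   require example.com/log v1.4.0
    # Your go.mod:  requires both A and B
    #
    # MVS selects example.com/log v1.4.0, the largest declared minimum,
    # even if v1.5.0 has been published. Nothing changes until some
    # go.mod explicitly asks for a newer version.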
Posted Jan 28, 2025 14:05 UTC (Tue)
by taladar (subscriber, #68407)
[Link]
You only have two choices: either you get lots of changes all at once every once in a while, plus some small changes all the time anyway, or you get somewhat more small changes all the time.
There is no "keep it this way forever" option, not even for the tiny subset of libraries where the underlying domain doesn't change (e.g. pure math). There are always some changes required for build systems, hardware,... even for those while the vast majority of libraries and software will have laws that change, new cryptographic algorithms as old ones are broken, new network protocols to support, changes in natural language over time (no, not talking about deliberate ones, even just the natural evolution of language over time), new design guidelines,... even if you ignore changes required for security holes and bugs.
If you choose the option to have lots of changes occasionally you still need to keep up with the legal changes and the security holes and critical bug fixes at the very least so keeping things completely unchanged for any amount of time is never even an option.
On the other hand when that occasional big change comes up (e.g. a RHEL major update) you will have all kinds of unpredicted problems (in addition to known, predicted changes like new config options) due to the change and no clue which of the thousands of changes you did at the same time actually caused the issue.
Meanwhile, if you had chosen the other option, things would likely break a bit more often but only ever in ways that are very easy to track down to the few recent small changes that happened. You will also have the benefit of an entire community making those changes at roughly the same time and upstream developers who still remember which changes they made.
Posted Jan 29, 2025 3:34 UTC (Wed)
by Cyberax (✭ supporter ✭, #52523)
[Link] (4 responses)
I don't see that attitude from Rust/Go/JavaScript developers. They have built tooling that manages the complexity of the real-world development, and helps to identify issues. Each time I install NPM packages, it immediately prints the number of packages with critical errors (and the number of packages seeking funding).
If I work within Go/Rust infrastructure, it's trivially easy to fix the dependencies. Github even automates it for you.
It's the classic distros that are solving the problem of medium-speed-moving ecosystems by saying: "Whatever, we don't need you anyway. We'll stick to libz, libglib, and bash".
Posted Jan 29, 2025 14:23 UTC (Wed)
by LtWorf (subscriber, #124958)
[Link] (3 responses)
Also since there are no stable APIs, the fact that it compiles doesn't really guarantee anything, if it even compiles.
How many projects will just not bump because they can't be bothered to do this constant churn? I suspect it's most of them.
Posted Jan 29, 2025 18:39 UTC (Wed)
by Cyberax (✭ supporter ✭, #52523)
[Link] (2 responses)
Just like you have to go through each of the dependent projects to make sure that a library update has not broken something.
> Also since there are no stable APIs, the fact that it compiles doesn't really guarantee anything, if it even compiles.
Most libraries do have a relatively stable API, and it's trivial to check if a dependency update breaks the build.
Posted Jan 29, 2025 22:45 UTC (Wed)
by LtWorf (subscriber, #124958)
[Link] (1 responses)
It's trivial if you're a distribution with infrastructure to do that. If you are thousands of individual developers all having to do it manually, it's not as trivial as you might imagine I think.
Twenty minutes each, multiplied across thousands of contributors, adds up to a lot of work globally.
Posted Jan 29, 2025 23:23 UTC (Wed)
by Cyberax (✭ supporter ✭, #52523)
[Link]
Erm... Look at the article in question?
> If you are thousands of individual developers all having to do it manually, it's not as trivial as you might imagine I think.
I disagree. A distribution with a semi-automated patching infrastructure can solve most of the issues without any undue burden on maintainers. GitHub does that for developers: you get a stream of automated pull requests with minimal dependency changes for CVE fixes.
Something like this, but managed by distros directly, is not technically complicated. You just need to keep a reverse database of dependencies, consume the CVE feeds, and then file PRs with suggested fixes for maintainers, with something like a trybot to show if the upgrade causes compilation breakages.
I can help with sponsoring this work if somebody wants to try it.
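A minimal sketch of the core of such a tool, in Go, under the assumptions above; the vulnerability-feed format, module path, and package names are all hypothetical:

    // Hedged sketch: keep a reverse map from Go modules to the distro
    // packages that vendor them, match incoming CVE records against it,
    // and emit the suggested fixes that a real system would turn into
    // pull requests plus trybot builds.
    package main

    import "fmt"

    // CVE is a simplified record from a hypothetical vulnerability feed.
    type CVE struct {
        ID      string
        Module  string // affected Go module path
        FixedIn string // first fixed version
    }

    func main() {
        // Reverse-dependency database: module path -> distro packages using it.
        reverseDeps := map[string][]string{
            "github.com/example/gorm": {"app-foo", "app-bar"},
        }

        feed := []CVE{{ID: "CVE-2025-0001", Module: "github.com/example/gorm", FixedIn: "v1.25.10"}}

        for _, cve := range feed {
            for _, pkg := range reverseDeps[cve.Module] {
                fmt.Printf("%s: bump %s to %s in package %s\n",
                    cve.ID, cve.Module, cve.FixedIn, pkg)
            }
        }
    }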
Posted Jan 28, 2025 1:22 UTC (Tue)
by rahulsundaram (subscriber, #21946)
[Link] (4 responses)
Mac and Windows don't even explain things like Flatpak and Snap. Developers cannot rely on distro package managers because distros follow their own schedules, routinely add patches that change things, and package only a small portion of what's available; even then there is almost no chance they have packaged the versions one might want. The net result is that, in almost every organization I have worked with, developers very rarely rely on Linux-native packages beyond the base OS, and Linux distributions have lost the influence they once had. This is not a good thing, but I don't see it changing anytime soon.
Posted Jan 28, 2025 10:46 UTC (Tue)
by farnz (subscriber, #17727)
[Link]
This puts distros in a bind; I need to be able to simultaneously install both the latest version of something (so that I get the bugfixes I need from it), and also to parallel-install the old version of it (so that I can avoid the perceived regression from the new version). And worse, distros don't particularly want to provide the old, known buggy, version, because, well, it's known buggy.
There is no easy solution here - not least because some regressions will be because a thing was buggy, and the user depended on side effects of that bug. There are only different tradeoffs to be made - and Linux distros have made tradeoffs focused on fewer bugs that result in users moving to other tooling (CPAN, CTAN etc) because it's easier to switch tooling than to ask distros to do what they want.
Posted Jan 29, 2025 14:31 UTC (Wed)
by LtWorf (subscriber, #124958)
[Link] (2 responses)
flatpak doesn't even allow CLI applications, so it's completely useless for go.
Completely different use cases.
Posted Feb 7, 2025 18:35 UTC (Fri)
by TheBicPen (guest, #169067)
[Link] (1 responses)
That's not true at all. I can't speak for snap, but on flathub, the vast majority of applications are FOSS.
Posted Mar 11, 2025 14:15 UTC (Tue)
by ssokolow (guest, #94568)
[Link]
Posted Jan 28, 2025 4:21 UTC (Tue)
by champtar (subscriber, #128673)
[Link] (2 responses)
Now with Go, binaries are bit-for-bit reproducible (if not using cgo), and if you record the Go version used in your package release, you get a reproducible RPM; it's that easy.
Also, here we are talking about vendoring because the RPM model is to have everything in tarballs, but you could have a Fedora GOPROXY and let go fetch during the build, since it verifies everything it downloads (go.sum).
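Such a GOPROXY needs very little machinery: the module-proxy protocol is just HTTP GETs of static paths (module/@v/list, .info, .mod, .zip), so serving a pre-populated directory is enough. A minimal sketch, assuming a directory in module-cache download layout (e.g. populated with GOMODCACHE=/srv/goproxy go mod download, after which the files live under /srv/goproxy/cache/download):

    // Hedged sketch of a distro-hosted Go module proxy: a plain static
    // file server over a directory laid out in the proxy protocol's format.
    package main

    import (
        "log"
        "net/http"
    )

    func main() {
        // Clients point at this with: GOPROXY=http://goproxy.example.org:8080
        http.Handle("/", http.FileServer(http.Dir("/srv/goproxy/cache/download")))
        log.Fatal(http.ListenAndServe(":8080", nil))
    }

For a build host with direct access to such a directory, even the server is optional: GOPROXY also accepts file:// URLs pointing at the same layout.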
Posted Jan 28, 2025 4:31 UTC (Tue)
by Cyberax (✭ supporter ✭, #52523)
[Link] (1 responses)
You don't need that! You can use `go mod download` to pre-download the dependencies into the local cache (packaged into the same tarball). And they can still be cryptographically verified during the build (`go mod verify`), so the integrity guarantees are not affected.
And all of this can be done for all the supported platforms from any supported platform, cross-compilation is a built-in feature of the toolchain. Go is really an example of how to make the ecosystem pleasant to use.
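A hedged sketch of that workflow, assuming a source tree with go.mod and go.sum already present (the cache directory name and target platform are illustrative):

    # populate a relocatable module cache next to the sources
    GOMODCACHE=$PWD/go-mod-cache go mod download
    # check that the cached modules have not been modified since download
    GOMODCACHE=$PWD/go-mod-cache go mod verify
    # later, build offline (here cross-compiling) from the cache alone
    GOMODCACHE=$PWD/go-mod-cache GOPROXY=off GOOS=linux GOARCH=arm64 go build ./...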
Posted Jan 28, 2025 22:30 UTC (Tue)
by ibukanov (subscriber, #3942)
[Link]
Posted Jan 30, 2025 14:14 UTC (Thu)
by davecb (subscriber, #1574)
[Link]
In fact, old OSs had been trying to solve an NP-complete problem, with the results you'd expect (:-))
The solution is to avoid having the problem. Multics, Unix, and GNU/Linux's glibc all use an algorithm that avoids it, as described in my (slightly tongue-in-cheek) experience report "Avoiding an NP-Complete Problem by Recycling Multics’ Answer", at https://leaflessca.wordpress.com/2018/09/03/avoiding-an-n...
IMHO, Fedora should encourage folks to use the Linux glibc technique, and not make more work for themselves. It should not encourage contributors to try to solve a known NP-complete problem, but rather note that avoidance is a solved problem in computer science.
Posted Jan 30, 2025 22:45 UTC (Thu)
by amacater (subscriber, #790)
[Link]
The discussion was initially predicated on the fact that distributions prefer to "de-vendor" code, attempting to keep single versions of shared libraries, and that Go and Rust do things differently. For Go, specifically, would there be a way to make it easier for the distributions?
I was unaware of the fact that Go, specifically, has means to build in reproducibility. It would be useful if that could work with the existing reproducible builds efforts and the distributions. That would mean effort on both sides and a willingness to see each others' point of view.
The comments produced a range of points of disagreement and non-understanding, especially on Rust and versioning.
The distributions are still useful and the need for traceability of software and issues is still there. There is utility in distributions enforcing a commonality of purpose so that programs work together and are maintainable. Distribution maintenance is sometimes a thankless task - but so is chasing down bugs you have no interest in because your work has moved on.
[Disclaimer: I am a Debian developer and may be biased here - I don't currently maintain a package but am very much in contact with those who do and I'm aware that the distributions more often work together than separately].
Software management and management of security issues - however you do it - is not going to go away and becomes ever more complex. Recognition of that fact and recognition of the intelligence and merit exhibited even by those you disagree with would be welcome. Some of the comments here were needlessly disparaging and ad hominem.
LWN is not a fight arena and disagreement can be civil, even if you feel that a remark touches you or your work. No parent ever wants to be told "You've got a *really* ugly baby there" - there are acceptable ways to express disapproval, surely, which would reflect well on all concerned.
Posted Aug 15, 2025 20:31 UTC (Fri)
by DemiMarie (subscriber, #164188)
[Link] (5 responses)
Open source developers are generally not paid to meet the needs of distributions. If distributions want them to start caring more, they should start paying for that or provide tooling that is a better fit for the developers’ needs - not just the distributions’ - than what the developers currently use.
Posted Aug 15, 2025 21:03 UTC (Fri)
by pizza (subscriber, #46)
[Link] (4 responses)
So are these "open source developers" going to pay the distributions for the stable, tested base platform their entire ecosystem depends upon?
If not, why do you think money deserves to flow in the other direction instead?
Posted Aug 15, 2025 21:26 UTC (Fri)
by DemiMarie (subscriber, #164188)
[Link] (2 responses)
I expect a lot of upstream developers do not see downstream unbundling as providing much if any value, and see language-specific package management as providing a very large amount of value. Furthermore, inclusion in distributions is no longer the only way for software to be widely adopted on Linux. This means that upstream developers have much less incentive to make their software packaging-friendly. Asking upstream developers to put in substantial additional work for a use-case they are not particularly interested in is not likely to be successful unless there is a financial incentive.
Additionally, cross-platform projects must bundle dependencies for Windows, macOS, and mobile *anyway*. This means that the bundled-dependencies case is the one upstreams need to make sure works, and supporting unbundling is optional.
Upstream developers really have all the power here. I expect that in the vast majority of cases being discussed here, not packaging something hurts distros far more than it hurts upstream developers. Therefore, it is really the distros that need to be making things work for upstreams, not the other way around.
Posted Aug 15, 2025 21:53 UTC (Fri)
by pizza (subscriber, #46)
[Link] (1 responses)
...You left out Ubuntu.
RHEL's users collectively make far more money using (and even developing for) RHEL than Red Hat themselves make.
Traditional distributions are "upstreams" of all of those application (and even language-specific ecosystem) developers, because that software is worthless without a stable foundation to run on.
Posted Aug 16, 2025 1:55 UTC (Sat)
by DemiMarie (subscriber, #164188)
[Link]
Posted Aug 16, 2025 2:41 UTC (Sat)
by Cyberax (✭ supporter ✭, #52523)
[Link]
That's an interesting question. My attitude for production deployment shifted entirely from "use as many distro packages as possible" to "get the distro away from anything important". These days, I'm using pretty much nothing out of distros outside of the bare minimum: libc, shells, standard utilities, ca-certificates, and a handful of standard libraries like zlib.
Posted Jan 28, 2025 23:01 UTC (Tue)
by ibukanov (subscriber, #3942)
[Link] (46 responses)
For this reason I do not understand why Fedora and other distributions still strongly prefer to package build-time dependencies as separate distribution-specific packages. The end user does not need those source packages. So the only argument for having them is to get a fully contained build that does not depend on anything outside the distro's servers. But then the distro could serve, say, its own Go proxy and just require that maintainers upload any dependencies there in the Go-native format and point the Go build at it.
Posted Jan 28, 2025 23:12 UTC (Tue)
by bluca (subscriber, #118303)
[Link] (45 responses)
This is extremely important. You absolutely do _not_ want to use any of the language's own servers or definitions with a proxy or anything like that, since there is absolutely zero quality control or gatekeeping: anybody can upload malware, and all you need is a valid email address. We've seen how well this works with NPM and PyPI, which are chock-full of typosquatters, malware, and so on.
Posted Jan 28, 2025 23:29 UTC (Tue)
by ibukanov (subscriber, #3942)
[Link] (44 responses)
Posted Jan 29, 2025 0:45 UTC (Wed)
by bluca (subscriber, #118303)
[Link] (43 responses)
Posted Jan 29, 2025 1:12 UTC (Wed)
by ibukanov (subscriber, #3942)
[Link] (42 responses)
My impression of the Go discussion in Fedora is that packaging Go dependencies into RPMs is precisely the pain point the maintainers complain about. The need to maintain separate copies of dependencies on distribution servers would, I suppose, be a much smaller issue if they could just use the native Go format.
Posted Jan 29, 2025 1:21 UTC (Wed)
by bluca (subscriber, #118303)
[Link] (41 responses)
Posted Jan 29, 2025 6:40 UTC (Wed)
by Cyberax (✭ supporter ✭, #52523)
[Link] (39 responses)
It's trivially easy to package a native Go package into an RPM. Completely, with all the dependencies:
`go mod vendor`
That's it. You can gzip the resulting directory and package it into an SRPM. During the build, it will not require any network access. Go even embeds the package versions into the resulting binary, giving easy access to an SBOM. This is enough to get Fedora/Debian users access to Go applications.
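That embedded information can be read back with go version -m /path/to/binary, or programmatically; a minimal sketch (prints its own build metadata when run):

    // Hedged sketch: dump the module list that the Go toolchain embeds
    // into every binary, which is the SBOM-like data mentioned above.
    package main

    import (
        "fmt"
        "runtime/debug"
    )

    func main() {
        info, ok := debug.ReadBuildInfo()
        if !ok {
            fmt.Println("no build info embedded in this binary")
            return
        }
        fmt.Println("main module:", info.Main.Path, info.Main.Version)
        for _, dep := range info.Deps {
            fmt.Println("dep:", dep.Path, dep.Version, dep.Sum)
        }
    }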
If you want _development_ to be easier, Fedora or Debian can provide a Go package mirror. Basically, a proxy that allows fetching of the module sources. It's entirely optional, though it will make life easier for developers.
And if Fedora/Debian want to uncrustify their distro and make it genuinely more useful, the SRPMs can install the dependent modules into the shared system-level cache instead of confining them to the source tree of the SRPM. It will require additional tooling from distros, as the same module can be installed by multiple packages, and deleting one package should leave the module available.
However, there's no problem with installing multiple _versions_ of a package. It's perfectly fine if one utility depends on `gorm@v1.9.19` and another one needs a much newer `gorm@v1.25.10`.
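(Go's "semantic import versioning" reinforces that last point: incompatible major versions get distinct import paths, such as a hypothetical github.com/example/gorm/v2, so different major versions can even coexist inside a single binary.)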
Posted Jan 29, 2025 7:00 UTC (Wed)
by gioele (subscriber, #61675)
[Link] (2 responses)
DEBs are pretty much reproducible.
What is the equivalent Go/Rust/any language page of Debian's https://tests.reproducible-builds.org/debian/reproducible... ?
96.4% of the Debian packages can be bit-for-bit reproduced (100% of the essential/core packages).
Posted Jan 29, 2025 19:53 UTC (Wed)
by Cyberax (✭ supporter ✭, #52523)
[Link] (1 responses)
Cgo builds depend on the underlying build system; the Go part is still automatically reproducible, but the linked C libraries are outside of Go's control.
Rust is more complicated, as it's often more low-level. Cargo guarantees that the build environment is bit-for-bit reproducible, but there are lots of small issues that can cause the resulting binaries to have host-specific bits. You absolutely can make the builds reproducible, and work is being done to make that automatic. The issues are tracked here: https://github.com/rust-lang/rust/issues/129080
Posted Jan 29, 2025 20:57 UTC (Wed)
by gioele (subscriber, #61675)
[Link]
In other words: after years of fixes to the toolchains, now most packages are reproducible by default, except some that are more problematic and need extra care.
Posted Jan 29, 2025 11:15 UTC (Wed)
by bluca (subscriber, #118303)
[Link] (31 responses)
Yeah except for the tiny problem that none of this is true
> `go mod vendor`
Yes, that's it, you have now ingested, vendored and locked down some malware, as it came from a cesspool with no gatekeeping whatsoever. Success!
> However, there's no problem with installing multiple _versions_ of a package. It's perfectly fine if one utility depends on `gorm@v1.9.19` and another one needs a much newer `gorm@v1.25.10`.
It is very much problematic, as every time gorm has a bug, it needs to be fixed 190238912381 times, instead of once, for the convenience of a handful of developers who don't care about backward compatibility. Again, it's the usual attitude with these language-specific package managers: "I care about me me me, screw everybody else". This is not how Linux distros work. This is not how a sustainable, stable and secure ecosystem gets built.
Posted Jan 29, 2025 12:52 UTC (Wed)
by Wol (subscriber, #4433)
[Link] (13 responses)
And this not how a friendly, sustainable, and popular ecosystem gets built.
For $DEITY's sake, stick your head out the window and smell the coffee. MOST users don't give a monkeys about all that stuff. They have their programs they want to use, and that's why they don't run Linux. Their programs just don't work on Linux because it's a DEVELOPER-friendly system, not a USER-friendly system. FFS most of the software *I* *want* to run typically either (a) doesn't work on *modern* linux (so much for your vaunted stable API), or requires an emulation layer like Wine or VirtualBox.
And that's because of the simple laws of economics - if the majority of people who want the software is a disjoint set from the people capable of writing it - then Linux is just not seen as a viable platform. And like it or not, the majority of FREE software that gets written is by developers scratching developer itches to make developer-friendly software. NOT USER-friendly software. Never forget that the Cathedral in "the cathedral and the bazaar" was not proprietary software, but FREE software.
Oh - and which is better - a system which produces prodigious amounts of buggy code and NEEDS to be updated regularly to fix it, or a system which despite being harder to update actually produces far less buggy code in the first instance! ESPECIALLY for security code I would much prefer the latter - almost certainly far fewer 0-days!
(And no, I'm not saying we necessarily want Linux to be wildly popular, but equally we don't want it occupying a niche where it's forever vulnerable to DRM (the nasty version), or TPMs, or Meta-lomaniacs, and the like.)
Cheers,
Posted Jan 29, 2025 12:57 UTC (Wed)
by pizza (subscriber, #46)
[Link]
This is true of *every* platform.
Or do you think that any but a tiny minority of iOS, Android, macOS, and/or Windows _users_ are capable of writing software for that platform?
Posted Jan 29, 2025 14:43 UTC (Wed)
by LtWorf (subscriber, #124958)
[Link] (11 responses)
What program written in Go does an end user who isn't a developer need? Can you cite one such program?
The reality is that Go is solely used for JSON-RPC or CLI kinds of things. So the argument here seems completely off-topic.
Posted Jan 29, 2025 16:02 UTC (Wed)
by Wol (subscriber, #4433)
[Link] (1 responses)
With respect to all the targeted users of linux, not at all. I would prefer to use linux for all of my stuff, but unfortunately, as a USER, I find most of the stuff I want to use needs some sort of emulation layer. I wish either (a) it didn't need that layer, or (b) I could *buy* a version that works on Linux.
I'd like to see far more software available for linux, but actually it's probably this LACK of a stable API (real or perceived) that puts vendors off packaging their stuff for linux, and it's all the assortment of distros that don't help. Can I build a binary, and copy it between distros, and have a reasonable expectation it will "just work" (tm)? Or do I have to run Red Hat, because that's the only thing that works?
When I was on the LSB, my biggest frustration was that all the emphasis was on enabling the distro to tell an application what the distro provided. Completely arse-about-face imho. What I wanted was a mechanism whereby the APPLICATION told the distro what it wanted! What it needed the distro to supply. Without that, it doesn't matter HOW good your distro is at managing all these assorted libraries, the application is going to vendor the lot because they can't tell whether the distro provides it or not. Dynamic linking is irrelevant if the application installs everything "to make sure I've actually got it".
There's far too much linux guys thinking they're doing the world a favour. The world is only too happy to look this gift horse in the mouth, because they can't see how it solves *their* problems. And as bluca said (in a completely different context, but) this gift horse eats too much hay for too little perceived benefit.
Cheers,
Posted Jan 29, 2025 22:53 UTC (Wed)
by LtWorf (subscriber, #124958)
[Link]
Posted Jan 29, 2025 19:21 UTC (Wed)
by Cyberax (✭ supporter ✭, #52523)
[Link] (3 responses)
Plenty of such programs exist. For example, I was only able to set up secure boot on my Fedora server by using `sbctl`, written in Go. Fedora provides `mokutil` but it was plain broken.
The UI experience with both of the tools is also incomparable. `sbctl` can do this:
> # sbctl verify
I have never been able to get this output before without strange incantations that I copy-paste from Google.
Posted Jan 29, 2025 22:48 UTC (Wed)
by LtWorf (subscriber, #124958)
[Link] (2 responses)
Posted Jan 29, 2025 23:13 UTC (Wed)
by Cyberax (✭ supporter ✭, #52523)
[Link] (1 responses)
So not a "user buying a laptop at Walmart", but still "end-user" functionality.
Posted Jan 29, 2025 23:17 UTC (Wed)
by LtWorf (subscriber, #124958)
[Link]
Posted Feb 7, 2025 19:04 UTC (Fri)
by TheBicPen (guest, #169067)
[Link] (2 responses)
A cursory look at GitHub shows plenty of examples. Syncthing, Alist, AdGuard Home, Photoprism, LocalAI, Croc, lux, filebrowser. I don't know why you think that go is only for writing developer tools.
Posted Feb 8, 2025 20:32 UTC (Sat)
by LtWorf (subscriber, #124958)
[Link] (1 responses)
Posted Feb 8, 2025 21:05 UTC (Sat)
by intelfx (subscriber, #130118)
[Link]
You are obviously arguing in bad faith.
Posted Feb 8, 2025 22:21 UTC (Sat)
by dskoll (subscriber, #1630)
[Link]
> What program written in Go does an end user who isn't a developer need? Can you cite one such program?

Oh, sure. Here are two, in fact:
I know you wrote "need" and you could argue that you don't need the above programs, but they are very nice and very useful.
Posted Jan 29, 2025 19:41 UTC (Wed)
by Cyberax (✭ supporter ✭, #52523)
[Link] (16 responses)
It is. Sorry to break your entire worldview. It's obsolete and is already showing its age by failing to keep up with the status quo of 10 years ago.
> Yes, that's it, you have now ingested, vendored and locked down some malware, as it came from a cesspool with no gatekeeping whatsoever. Success!
Nope. It will get the locked versions of packages that are specified in the dependency list (go.mod and go.sum). They are cryptographically verified to be the same (go.sum contains the hashes of packages). If you think that the package author is malicious, then don't package the software in the first place.
> It is very much problematic, as every time gorm has a bug, it needs to be fixed 190238912381 times
It's trivially easy to write a tool that goes through all the packages having `gorm`, tries to upgrade its version, and does a test compilation. Which is more than Debian/Fedora do, anyway. These tools already exist, btw.
> instead of once
This is a lie. If something like libffi has a bug or a new feature release, EVERY SINGLE application depending on it has to be checked. For bonus points, one application might depend on features (that are perfectly safe!) that are no longer available in a newer version.
And since C/C++ doesn't have a stable ABI, there's often no way to check that statically. As a result, every Fedora/Debian release is broken at least for some packages.
Posted Jan 29, 2025 19:54 UTC (Wed)
by bluca (subscriber, #118303)
[Link] (3 responses)
It is not. They will be around long after things like golang have faded into obscurity because the one corporation bankrolling it starts chasing the next squirrel instead. How many languages has Google killed by now? Are we in the double-digits already or not yet?
> Nope.
Yep. Because there is no gate keeping and an email address is all that's required, so anything goes. I mean just check Pypi ffs. Golang is just unpopular so it's not worth targeting it - yet.
> It's trivially easy to write a tool that goes through all the packages having `gorm`, tries to upgrade its version, and does a test compilation. Which is more than Debian/Fedora do, anyway. These tools already exist, btw.
Yes, and they are a waste of energy (hello, climate change?), time and effort, and a workaround to sidestep glaring design issues with the ecosystem of these languages. An entirely made up and artificial problem that should not have existed in the first place, had the people designing those ecosystems understood how these things need to work - or cared about it, anyway.
> This is a lie.
If you have to go pick something that is so far removed from the norm that is in fact a bridge to other language runtimes, then you surely understand that the point you are trying to put across doesn't have a leg to stand on.
Posted Jan 29, 2025 20:33 UTC (Wed)
by Cyberax (✭ supporter ✭, #52523)
[Link] (2 responses)
Like IBM (Red Hat) with Fedora? To be clear, Rust has survived the breakup with its parent corporation just fine, by switching to a foundation model. Go is also large enough to do that.
> How many languages has Google killed by now?
I don't think they have killed a single language outside of internal experiments? Even Dart (used pretty much only in Flutter) is still going strong.
> Yep. Because there is no gate keeping and an email address is all that's required, so anything goes. I mean just check Pypi ffs. Golang is just unpopular so it's not worth targeting it - yet.
Do I need anything more than an email address to contribute to Fedora or Debian? Python's infrastructure has, unfortunately, been compromised by Linux distros. For too long, people kept up the pretense that virtual envs are just a tool for development, and that distros should take care of packaging.
This is now slowly changing; uv is finally taking the Cargo/Go approach to dependency management.
> Yes, and they are a waste of energy
Go compiles source code about 10-100x faster than C/C++. A simple `configure` run often takes more time than building the whole of Kubernetes. And the Rust folks regularly rebuild the _entire_ set of open-source libraries to check for accidental compiler-induced breakage.
> If you have to go pick something that is so far removed from the norm that is in fact a bridge to other language runtimes, then you surely understand that the point you are trying to put across doesn't have a leg to stand on.
Why? I've been using traditional Linux distros for almost three decades, and I'm still not sure how they can rival the status quo that languages like Rust/Go have established: a reproducible and reliable build environment.
Posted Jan 29, 2025 20:45 UTC (Wed)
by bluca (subscriber, #118303)
[Link] (1 responses)
Sure? Fedora would face huge challenges if RH evaporated in a puff of smoke tomorrow morning. Don't use it if you don't want to take that very minimal but non-zero risk. A language ecosystem without its developers cannot possibly survive either. And unlike RH -> Fedora, the probability of Google killing a product that is not search/ads is at this point pretty much approaching 100%: https://killedbygoogle.com/
> I don't think they have killed a single language outside of internal experiments? Even Dart (used pretty much only in Flutter) is still going strong.
Let's not joke around - Dart, Flutter, and Angular (ok, not a language, a framework, but still) are for all intents and purposes dead. As with those stories about WW2 soldiers in the Pacific, you might still find someone on a desert island thinking otherwise, but it's delusion - Google killed them, and they are dead.
> Do I need anything more than email to contribute to Fedora or Debian?
Yes? You seriously don't know how those work?
> Python's infrastructure has, unfortunately, been compromised by Linux distros.
Yeah sure, it's distribution developers who are uploading malware to Pypi. WTF?
Given you are evidently just trolling at this point, I'll stop feeding.
Posted Jan 29, 2025 21:00 UTC (Wed)
by Cyberax (✭ supporter ✭, #52523)
[Link]
Go would also need to adapt, but it's not dependent on Google anymore. If you look at Go development, it has a robust stream of non-Google contributions. Its governance model is like that of systemd, a "Politburo" group with non-public meetings making decisions on RFCs, but otherwise it's healthy.
Rust is already independent. If I had to put my bet, I'd bet on Rust and Go being around longer than Fedora.
> Let's not joke around - Dart, Flutter, Angular (ok ok not a language, a framework, still) are for all intent and purposes dead.
Angular is a framework, not a language. Dart/Flutter are absolutely alive, and moving up in popularity. They are the most popular combo for mobile development at the moment.
BTW, Fedora/Debian also totally failed on mobile.
> Yes? You seriously don't know how those work?
Hi, my name is Jia Tan. Would you mind taking up this patch to add color output for apt?
> Yeah sure, it's distribution developers who are uploading malware to Pypi. WTF?
No. It's the distro developers who were pushing against robust package management. As a result, PIP is more like APT/RPM than Cargo.
Posted Feb 6, 2025 12:54 UTC (Thu)
by bluca (subscriber, #118303)
[Link] (11 responses)
> Nope. It will get the locked versions of packages that are specified in the dependency list (go.mod and go.sum). They are cryptographically verified to be the same (go.sum contains the hashes of packages). If you think that the package author is malicious, then don't package the software in the first place.
Aged like a fine wine:
https://arstechnica.com/security/2025/02/backdoored-packa...
> A mirror proxy Google runs on behalf of developers of the Go programming language pushed a backdoored package for more than three years
lol, lmao
Posted Feb 6, 2025 13:35 UTC (Thu)
by excors (subscriber, #95769)
[Link] (1 responses)
The novel part is that the attacker removed the malicious code from the corresponding GitHub repository after publishing the module. Since Go modules are immutable (unlike Git repositories), the mirror kept serving the original code that was first published, and everything was successfully cryptographically verified against that.
The lesson is that anyone wanting to audit a module's code needs to download the cryptographically verified version and read that, instead of downloading the same tag name from the same repository URL and assuming it's still the same code. And of course you also need to be careful of typosquatting, which is a non-trivial problem, but that's nothing new or Go-specific.
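The go tool itself will hand you that verified copy; a hedged sketch, run from within a module (the module path is illustrative):

    # download and verify one module version, printing its metadata as JSON;
    # the "Dir" field is the checksummed, extracted source to audit, not
    # whatever the upstream repository currently shows for that tag
    go mod download -json github.com/example/mod@v1.2.3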
Posted Feb 6, 2025 15:16 UTC (Thu)
by bluca (subscriber, #118303)
[Link]
How about "do not allow anyone to upload whatever they want with no gatekeeping whatsoever and requiring just an email address", instead? Like every Linux distro has been doing for 30 years? Just a thought!
Posted Feb 6, 2025 21:30 UTC (Thu)
by Cyberax (✭ supporter ✭, #52523)
[Link]
Posted Feb 7, 2025 17:50 UTC (Fri)
by Cyberax (✭ supporter ✭, #52523)
[Link] (7 responses)
The attack won't work unless you use the proxy service _and_ ignore the sumdb validation.
Posted Feb 7, 2025 18:38 UTC (Fri)
by excors (subscriber, #95769)
[Link] (6 responses)
Posted Feb 7, 2025 20:32 UTC (Fri)
by Cyberax (✭ supporter ✭, #52523)
[Link] (5 responses)
Posted Feb 8, 2025 0:54 UTC (Sat)
by excors (subscriber, #95769)
[Link] (4 responses)
Posted Feb 10, 2025 3:55 UTC (Mon)
by raven667 (subscriber, #5198)
[Link]
I think a lot of people hold the illusion that they wouldn't fall for this, that of course they'd know right away and have a good laugh, but I don't think this is true. A GitHub project with all the assets cloned from the original and just the URL changed is very difficult to spot if you have no prior familiarity and only a shallow interaction with the project. The hall of mirrors can be pretty comprehensive and convincing; how could a packager know for sure, in a way that would never fail, be missed, or be skipped, and that wouldn't be subject to human fallibility?
Posted Feb 10, 2025 19:22 UTC (Mon)
by Cyberax (✭ supporter ✭, #52523)
[Link] (2 responses)
I was thinking of a case where you would push malicious code to a project's repo, tag it, and let users download it. Then you force-push a "clean" version, erasing the trace. Go will detect this: once the next person tries to download the module, the Go module proxy will complain about different hashes for the same tag.
It won't help with pure typosquatting.
Posted Feb 11, 2025 15:47 UTC (Tue)
by mathstuf (subscriber, #69389)
[Link] (1 responses)
Posted Feb 11, 2025 15:53 UTC (Tue)
by Cyberax (✭ supporter ✭, #52523)
[Link]
Posted Feb 3, 2025 11:36 UTC (Mon)
by smoogen (subscriber, #97)
[Link] (3 responses)
Not sure you are still following this, but I am interested in your line:
> RPMs and DEBs are an obsolete way of packaging software, that doesn't provide atomicity or reproducibility.
What is a good way to package software these days? And, when you wrote this, what were your definitions of "atomicity" and "reproducibility", so I can better understand where you are coming from? [I am not going to argue against your view, I am just trying to get a grasp of what you meant.]
Thank you
Posted Feb 3, 2025 13:14 UTC (Mon)
by Wol (subscriber, #4433)
[Link]
_Thank_you_ for trying to "win" the discussion the right way.
In other words arrive at the CORRECT answer, not at the answer you want to hear.
Cheers,
Posted Feb 3, 2025 18:20 UTC (Mon)
by Cyberax (✭ supporter ✭, #52523)
[Link] (1 responses)
Not having the two properties below.
> atomicity
System updates are atomic: they either happen or they don't, and they don't leave the system in a broken state midway.
> reproducibility
Reproducibility means that I can recreate the state of the system, now, at any point in the future, or on other hardware.
The computing world is coalescing around small "immutable" distributions that run Docker/Podman/etc. It is even happening for desktop-based OSes.
The next level is reproducibility for containers. There isn't a good universal solution that works in the general case, mostly because of the "obsolete" status of classic distros. People work around that by using immutable tags for Docker base images. NixOS is also getting popular.
Posted Feb 4, 2025 4:39 UTC (Tue)
by raven667 (subscriber, #5198)
[Link]
For systems like how Go and Rust manage code reuse, the bridge tooling could be even more useful, providing a standard way to document build dependencies even when the result is a static binary with no dynamic linking to inspect. Ask the package manager what software in the repo was built with git@git.example.org/libfoo.git and what commits were used; because the package format can be made to save that info reliably, you can patch it yourself or help the distro automatically do what needs to be done. OpenBuildService already does […]
Posted Jan 29, 2025 16:12 UTC (Wed)
by ibukanov (subscriber, #3942)
[Link]
Except we are talking about Go build dependencies that are only relevant to the Go compiler. If the distro wants to put those dependencies into separate SRPMs, require installing them under /usr/src just for the build, and ignore all the mechanisms Go provides for managing packages (like the Go proxy protocol, which allows putting a copy of a package, with arbitrary extra metadata such as a maintainer signature, on any HTTP server capable of serving static files), then the distro puts an extra burden on the maintainers.
Posted Jan 29, 2025 5:04 UTC (Wed)
by carlosrodfern (subscriber, #166486)
[Link]
Sometimes in the software industry we err on the side of wanting to apply the same rules to all problems that look alike. But it is better to know the "rules" well, and their reason for being, so that we know when it is time to bend them. The benefits of unbundling in other ecosystems outweigh its inconveniences, but when it comes to golang, the scale tips in the other direction. This is unfortunate for the end user, but sometimes ecosystem popularity doesn't correspond with the best choices in the tech world, so we live in a world with many good golang apps out there waiting to be shipped in distros.
Posted Jan 29, 2025 21:11 UTC (Wed)
by corbet (editor, #1)
[Link] (2 responses)
Thank you.
Posted Jan 30, 2025 14:55 UTC (Thu)
by taladar (subscriber, #68407)
[Link] (1 responses)
Posted Jan 30, 2025 15:45 UTC (Thu)
by Wol (subscriber, #4433)
[Link]
There's a couple of improvements that could be made to the "unread comments" feature.
Firstly, at each level, sort comments in reverse date order so the newest comment is at the top of the list.
And secondly, I've noticed this a couple of times, but if post B is an old reply to A, and then there are replies to both A and B in the unreads, they appear as two separate subthreads. Is there any chance that it could recognise B is a reply to A and, rather than creating a new subthread, put it in its correct place as a reply to A (despite the fact it's a reply that's been read before). It just gives more context to the discussion.
Which brings up another request ... Any chance that the comment editor page could display the entire thread you are replying to? There's a reason I quote stuff a lot - the number of posts that rapidly go off the rails because the grandparent's context is not visible. I've had a few spats where I rapidly came to the conclusion that the other person either had an appalling memory or was deliberately wiping the context. I hate to say it but the current setup where all you can (easily) see is the post you are replying to, is quite conducive to a game of Chinese Whispers.
Cheers,
Posted Jan 30, 2025 4:01 UTC (Thu)
by firstyear (subscriber, #89081)
[Link] (2 responses)
Posted Jan 31, 2025 16:27 UTC (Fri)
by decathorpe (subscriber, #170615)
[Link] (1 responses)
Posted Feb 1, 2025 4:30 UTC (Sat)
by firstyear (subscriber, #89081)
[Link]
Posted Jan 30, 2025 20:34 UTC (Thu)
by taggart (subscriber, #13801)
[Link] (18 responses)
In addition to the disk-space savings mentioned in the article, using shared objects gives benefits in shared memory usage, latency, and disk I/O when loading those things into memory at link time (immediate or lazy), etc. This also works across containers (and maybe in some cases for VMs?). This allows for things like mass hosting of nearly identical containers on servers, etc.
What are the implications for these Go vendored things?
Posted Jan 30, 2025 21:12 UTC (Thu)
by Cyberax (✭ supporter ✭, #52523)
[Link] (1 responses)
This is not really the case. Alpine Linux, which prides itself on mostly doing static linking, is snappier than regular Linuxes. With the current state of hardware, the savings from shared text pages are generally insignificant, but the time spent on linking adds up.
This is not helped by the general design of the GNU linker, which is obsolete by 20 years. Newer linkers like `mold` help, but they still are not free.
Posted Feb 3, 2025 20:35 UTC (Mon)
by shiz (subscriber, #161017)
[Link]
As a former Alpine maintainer: Alpine does not pride itself on doing static linking; it is dynamically linked like most other distros.
> With the current state of hardware, the savings from shared text pages are generally insignificant,
Those savings can end up mattering more than you'd think, by their effect on the cache usage rather than simply the RAM footprint.
>but the time to do linking adds up. This is not helped by the general design of the GNU linker, it's obsolete by 20 years. Newer linkers like `mold` help, but they still are not free.
Linking (as in the part that would be done by a program like mold) is done only once in the build process, so it shouldn't matter with regard to snappiness. The dynamic loader (/lib/ld-*.so), as the parent comment mentions, can actually benefit from things like lazy loading to improve runtime performance; however, for security reasons that is disabled in most distributions (-z relro -z now), and I don't believe the musl dynamic loader supports it to begin with.
[1]: https://gitlab.alpinelinux.org/alpine/aports/-/blob/3.21-...
Posted Jan 30, 2025 21:18 UTC (Thu)
by mb (subscriber, #50428)
[Link]
I am not convinced that dynamic linking is a net win, if deployed massively.
Posted Jan 30, 2025 22:48 UTC (Thu)
by bluca (subscriber, #118303)
[Link] (14 responses)
Posted Jan 31, 2025 19:22 UTC (Fri)
by mathstuf (subscriber, #69389)
[Link]
Posted Jan 31, 2025 19:27 UTC (Fri)
by mb (subscriber, #50428)
[Link] (10 responses)
Sure.
Rust can be used on extremely small systems such as AVR8 with 128 bytes of RAM total.
Posted Jan 31, 2025 19:38 UTC (Fri)
by pizza (subscriber, #46)
[Link] (3 responses)
Rust-the-language may be used, but most existing Rust code (including most crates) won't be useful on targets that limited.
Posted Jan 31, 2025 20:34 UTC (Fri)
by Cyberax (✭ supporter ✭, #52523)
[Link] (2 responses)
There are a lot of crates, like serde, that are not affected at all.
Posted Jan 31, 2025 21:47 UTC (Fri)
by excors (subscriber, #95769)
[Link] (1 responses)
When you're trying to strictly limit stack usage, I'm not sure Rust is a good fit - something simple like assigning to a static struct will conceptually construct a temporary value on the stack and then memcpy it into the static, and you're relying on the optimiser to turn that into direct writes to the static's memory; sometimes the optimiser won't cooperate and you'll overflow your tiny stack.
If I remember correctly, I tried this a while ago and it stopped optimising away the stack allocation when the struct size exceeded maybe 8KB. If you neatly contain all your application state in one big struct that's more than half your RAM, and try to dynamically initialise it, that's a problem. You have to write ugly, unidiomatic code and then hope the optimiser cooperates. It's easier to avoid implicit stack allocations in C, and in such a small program with no dynamic allocation and no threading you don't need the memory safety benefits of Rust, so it doesn't seem a good tradeoff.
(By the time you get up to, say, 256KB of RAM, you're not going to be accounting so carefully for every byte and you'll probably have a real heap and an RTOS, and then I think the tradeoff flips the other way and Rust is much more appropriate.)
Posted Jan 31, 2025 21:57 UTC (Fri)
by mb (subscriber, #50428)
[Link]
As I said.
>something simple like assigning to a static struct
Assigning to a static struct is not simple. Neither in C nor in Rust.
Posted Jan 31, 2025 22:58 UTC (Fri)
by bluca (subscriber, #118303)
[Link] (5 responses)
Posted Feb 1, 2025 0:18 UTC (Sat)
by Cyberax (✭ supporter ✭, #52523)
[Link] (4 responses)
Posted Feb 1, 2025 18:13 UTC (Sat)
by bluca (subscriber, #118303)
[Link] (3 responses)
Posted Feb 1, 2025 18:49 UTC (Sat)
by Cyberax (✭ supporter ✭, #52523)
[Link] (2 responses)
Posted Feb 3, 2025 22:14 UTC (Mon)
by bluca (subscriber, #118303)
[Link] (1 responses)
Posted Feb 3, 2025 22:17 UTC (Mon)
by Cyberax (✭ supporter ✭, #52523)
[Link]
Posted Jan 31, 2025 20:35 UTC (Fri)
by Cyberax (✭ supporter ✭, #52523)
[Link] (1 responses)
Posted Jan 31, 2025 22:59 UTC (Fri)
by bluca (subscriber, #118303)
[Link]
Posted Feb 1, 2025 19:31 UTC (Sat)
by sfo (subscriber, #162195)
[Link]
The old Rock 'n' Roll fan that I am appreciates the pun! :D
Posted Feb 3, 2025 9:59 UTC (Mon)
by rhowe (subscriber, #102862)
[Link] (1 responses)
Bundling hides things from package management and users.
If this means 50 versions of a given library installed on a system, so what? It's no worse than bundling.
With go, the libraries tend to get compiled into the binary rather than dynamically loaded so they end up duplicated on the target system regardless.
Posted Feb 3, 2025 12:38 UTC (Mon)
by amacater (subscriber, #790)
[Link]
This also becomes much more difficult when there is a security problem / a severe bug and you need to know *exactly* which packages include the affected code.
For other languages or on the scale of a large project like Debian, the amount of extra archive space would become problematic. There's also no possibility of deduplication. That's one of the reasons why distributions try to de-vendor ... and that takes us round to a whole other part of this series of threads.
Posted Feb 6, 2025 4:11 UTC (Thu)
by Baylink (guest, #755)
[Link]
What has Go done wrong in its architecture design? 1/2 :-)
It's not uncommon for new entrants in a certain space not to realize what all the constraints are for being in that space, and that sounds like what's going (no pun intended) on here...
but the real pinch is if people are building distributions with *system programs* built in those languages which are making the distros hard to maintain.
This is vaguely similar to when SystemD (for no fathomable reason) killed off sysVinit, since packagers stopped making the necessary rc files to support it.
Posted Feb 6, 2025 10:25 UTC (Thu)
by callegar (guest, #16148)
[Link]
Posted Feb 7, 2025 14:16 UTC (Fri)
by dragonn (guest, #175805)
[Link] (1 responses)
Posted Feb 7, 2025 14:18 UTC (Fri)
by daroc (editor, #160859)
[Link]
Don't just vendor - rebuild the ecosystem and persuade the vendor to work on software management ...
- all of those need software traceability and sourcing evidence for anything up to ten years or longer.
I don't know whether that's cryptographically signable, but I would have thought that would go a long way towards SBOM etc.
The only thing that works is forbidding bad behaviour by policy and construction, so that you _don't_ have to rely on expensive and clunky language-specific tools, hope they tell the truth and aren't broken themselves, and spend time and money you don't have chasing after them.
The first Opteron was released on April 22, 2003, so AMD64 is currently 21.
- After a package has been accepted, depending on distro wishes, new package versions are automatically synced into the distro registry from upstream, or then requires manual action by the packager.
- The distro registry has the capability to add distro-specific patches on top of the upstream. This is done automatically as part of the syncing process.
- When building an application, the resulting binary is then packaged into a deb/rpm/whatever package (let's call it a "distro package", as opposed to the source package that lives in the registry), but there is no corresponding "distro source package" (srpm/deb-src). Instead, with a Cargo.lock or equivalent file defining the SBOM, the distro package can be reproducibly rebuilt using the distro registry.
- A capability to download part of the distro registry to local disk for developers or organizations that wish to be able to work offline.
- The distro registry also allows adding versions of a package not existing in upstream, e.g. for backporting security fixes.
- A security update of a package automatically triggers a rebuild of all dependent distro packages. Yes, needs a mechanism to blacklist known bad versions, and/or whitelist known good versions.
- A distro user can choose to use the distro registry instead of upstream, and thus work with the distro-vetted packages but otherwise use the normal language tooling and workflow (cargo/npm/CPAN/etc.) the user is used to. Perhaps even support a hybrid model, where one preferably pulls from the distro registry but can fallback to the upstream registry for packages not found in the distro registry?
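For what it's worth, the Go toolchain's GOPROXY setting already expresses exactly that hybrid fallback as a comma-separated list tried in order (falling back on "not found" responses; the distro URL here is hypothetical):

    GOPROXY=https://goproxy.distro.example.org,https://proxy.golang.org,direct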
What matters is what _we have done with it_ in the distro world, which is: stable APIs, stable ABIs, dynamic linking, and sane transitions when occasionally things change. And Rust (as an ecosystem, not as a language, for the exact same reason) and/or Cargo simply does not do the same.
Not everyone is as good as glibc at maintaining a stable ABI (though many are), but that just means you have to do what in Debian we call 'ABI transitions'. The difference is that in this ecosystem, on a per-library basis, they are rare, as attention is paid to avoid breaking ABI needlessly. There are a few exceptions (*cough* libxml *cough*) but by and large this works. The effort required is in any case orders of magnitude smaller than "rebuild everything every time, every minute of every hour of every day".
One important point - dynamic linking is not something the distros came up with - it's something that SunOS introduced in the 1980s, and distros copied over a decade later. The intent behind dynamic linking as present in ELF (and as used in distros) is to make it possible to supply a bug fixed version of a library to a user who only has binaries for the software they run, and has no access to rebuild from source, or to relink a partially linked object against a fixed library.
And of course - in case there's a library that is badly maintained and doesn't bother with keeping ABI stable, that is simply a bad library, and not a candidate for being included in the first place
And yet, despite all the costs you claim the distros are trying to avoid, they still do regular complete archive rebuilds to find FTBFS bugs caused by a dependency change (thus spending the compute time anyway), the size of a minimal install has skyrocketed (you used to be able to install Debian from 7 floppies, or 10 MiB, the "minimal" install image they offer is now 650 MiB or 65x larger), and the use of ASLR means that the DSO is no longer loaded "just once", but multiple times, because distros don't (based on looking at a Debian 12 system) carefully arrange their DSOs to minimise the number of pages with relocations in them.
libacl, libpam, various util-linux libs, compression libs, selinux libs - those are all at soname revision 0, 1 or at most 2, and they are decades old.
>but apparently this looks like an alien concept in the golang/rustlang ecosystems.
Just because somebody else does something different from what you think would be the correct solution doesn't mean they are doing it wrong. It's quite possible that you are wrong. Not everybody except you is an idiot. Do you get that?
These things work _really_ well in the Rust ecosystem. And they are either completely absent (Editions) or used extremely rarely (SV) in C.
Just because rustlang/golang library maintainers can't or won't maintain backward compatibility, it doesn't mean nobody else will.
Like it or not.
But I'm just asking you politely to stop spreading this "they don't have/care about backwards compatibility" FUD.
I have ignored you for over half a year here in LWN comments and you are still busy spreading that nonsense. Why? This annoys me just as much as anti-systemd-FUD annoys you.
Be constructive rather than destructive.
So are we sure that this conversation is going anywhere but in circles? Might it be time to conclude that there isn't more to be said (again)?
Right - but that's a problem for proprietary software, where you don't get the source code for everything, not for Free Software, where you have source and can use it.
The original motivation as documented by Sun engineering was that if they had to fix a system library in SunOS 3, it took months for the fix to be reliably rolled out across the customer base; Sun would ship tapes with the fix to ISVs and customers, but the fix wouldn't be fully rolled out until ISVs had shipped tapes with new binaries, and this whole process could take months.
Where can I find this documentation of Sun engineering? Reducing RAM and disk consumption was certainly an issue at the time, and the name "shared object" used in Unix systems (including SunOS) points to this aspect.
Don't just vendor - rebuild the ecosystem and persuade the vendor to work on software management ...
Don't just vendor - rebuild the ecosystem and persuade the vendor to work on software management ...
Don't just vendor - rebuild the ecosystem and persuade the vendor to work on software management ...
I know it's a month and a half late, but I think it's important to point to this post:
.so file when a library uses templates everywhere, and doesn't use them as pervasively as Rust does.
Unfortunately, distro policies mean that Debian (for one) will change your ">= x.y.z" constraint to "= a.b.c" where a < x (because that is the version they ship), remove your checks that fail compilation against anything with a major version below x, and then consider it a bug in your code that it doesn't work with the version of the library they packaged in their stable distribution.
Yep - but that then leads to the original complaint, of the distros not having the power to get upstream to care about stable ABIs and a solid dependency base. Why should I care, if the distro will handle it all for me internally, and do all that hard work for me?
Distros lacking power
The golang people are smart folks
Part of the problem is that users have demands that cannot be met by RPM or dpkg as designed for use in a distro. Users have a very wide definition of "regression", covering everything from "used to work, doesn't any more" through to "this pixel of the UI used to be #ffeedd, it's now #ffeede, but nothing else has changed", and user expectation is that if they encounter a regression of any form, there is a quick and easy workaround.
For SBOM purposes, all of the information is already in the binary; see "go version -m", as suggested by others.
With a bit of scripting, one could write an RPM automatic-provides generator to duplicate that information in the RPM metadata.
The distributions have succeeded in producing stable environments based on C libraries; the need for stability in distributions is sometimes at odds with the way that other language ecosystems are developing. Again, there is a need for understanding on all sides as to why things are the way they are before any progress can be made.
Language package managers work on all major platforms and are fun and easy to use. They also avoid having to wait for distros to ship a new version of a library you need, and ensure those building your software don’t need to chase down libraries their distro doesn’t provide. Finally, they make it much easier to ensure that what the user uses is the same as what the developer tested, avoiding entire classes of otherwise-unreproducible bugs and making it much easier for developers to provide support. When language-specific package managers are not an option, containers provide the same benefits.
Distro package managers don't meet upstream needs
Native dependency cache
Distributions want to be self-contained, and for very good reasons.
> That's it.
> Verifying file database and EFI images in /boot/efi...
> ✗ /boot/efi/5133c90315904c4ca2a6986372f9fea5/6.12.8-200.fc41.x86_64/linux is not signed
> ✗ /boot/efi/5133c90315904c4ca2a6986372f9fea5/6.12.9-200.fc41.x86_64/linux is not signed
> ✓ /boot/efi/EFI/BOOT/BOOTX64.EFI is signed
> ✓ /boot/efi/EFI/Linux/linux-6.12.8-200.fc41.x86_64-5133c90315904c4ca2a6986372f9fea5.efi is signed
> ✓ /boot/efi/EFI/Linux/linux-6.12.9-200.fc41.x86_64-5133c90315904c4ca2a6986372f9fea5.efi is signed
> ✓ /boot/efi/EFI/systemd/systemd-bootx64.efi is signed
So the question is not what happens to Golang _if_ Google kills it, but _when_.
obsolete
atomicity
reproducibility
> obsolete
> reproducibility
full CI and dependency tracking across multiple languages for an entire distro's worth of software, using the RPM spec as the CI language. You can do very slick things for one project in one language with the standard tools for that language, but the distro world has built great tooling for managing a whole runtime, and that tooling could be made even better, even if the end result is building an OSTree or Flatpak image.
Finally
While the conversation on this article has been, for the most part, on-topic for LWN, I would like to ask everybody to take a moment to think before posting further. Will your comment truly advance the discussion, or is it just going around in circles? Unless you have something new to add, perhaps the time has come to hold off.
A request
Rust is already vendored in SUSE/OpenSUSE
shared memory and loading savings
In fact, we carry patches to enforce dynamic linking for some packages[1] that assume being musl-based means being statically linked.
This was (back when I was active) one of the reasons we linked with -Os in Alpine; aside from fitting the goal of a small base system, in a surprising number of cases it actually led to speedups over -O2.
I am old enough to remember workarounds being deployed for that when computers were really slow. I think it was called "prelink" or something like that.
It probably makes sense for a couple of libraries (e.g. libc), but I doubt it brings net benefits when used for each and every dependency.
How much smaller do we need to get before you recognize that small systems are not a problem for Rust?
128 bytes of RAM total work just fine.
This is what actually works in real life.
But anyway, it has to grow *much* larger than anything possible with 128 bytes of RAM to become a memcpy problem.
I have seen this problem with a multi-megabyte ESP32, and there the solution is just to use heap allocation, which is not a real problem on the ESP32.
OpenWrt uses musl; it switched from uClibc in 2015, FWIW.
The Clash
Let it explode
how many library copies you have or where they all are. That is also the reason some folks dislike snaps, Flatpak, and similar systems: it is software bloat to some extent, and each flatpak is essentially an unknown black box.
Hmmm...
Questions
- should packages that use vendoring be immediately recognizable as such?
- should packages that bundle dependencies, and thus opt out of the traditional distribution dependency management, be allowed to have dependencies of their own? Should only a very restricted set of dependencies be allowed? Should other packages be allowed to depend on them?
- should distros bear the burden of updating the bundled dependencies when there is an issue with them, even if that breaks alignment with the upstream developer?
Based on the answers to these questions, there could be a simplified/different packaging scheme for code that uses bundling and vendoring, existing side by side with the traditional distro one. Which is, maybe, what is already done with flatpaks: users immediately recognize that it is a different type of packaging with a different type of dependency management. So maybe the point is just to extend flatpaks to make them more similar to regular packages with respect to isolation (or the lack thereof) and to fully support CLI tools.
Dynamic linking isn't compatible with modern languages
Rust didn't choose static linking "out of some hate for dynamic linking", but simply because that is the only option that works with the features the language has.
You could spend all the time and money in the world trying to package libraries like serde for dynamic linking, and it would simply not lead anywhere.