
APT Rust requirement raises questions

By Joe Brockmeier
November 24, 2025

It is rarely newsworthy when a project or package picks up a new dependency. However, changes in a core tool like Debian's Advanced Package Tool (APT) can have far-reaching effects. For example, Julian Andres Klode's declaration that APT would require Rust in May 2026 means that a few of Debian's unofficial ports must either acquire a working Rust toolchain or depend on an old version of APT. This has raised several questions within the project, particularly about the ability of a single maintainer to make changes that have widespread impact.

On October 31, Klode sent an announcement to the debian-devel mailing list that he intended to introduce Rust dependencies and code into APT as soon as May 2026:

This extends at first to the Rust compiler and standard library, and the Sequoia ecosystem.

In particular, our code to parse .deb, .ar, .tar, and the HTTP signature verification code would strongly benefit from memory safe languages and a stronger approach to unit testing.

If you maintain a port without a working Rust toolchain, please ensure it has one within the next 6 months, or sunset the port.

Klode added this was necessary so that the project as a whole could move forward, rely on modern technologies, "and not be held back by trying to shoehorn modern software on retro computing devices". Some Debian developers have welcomed the news. Paul Tagliamonte acknowledged that it would impact unofficial Debian ports but called the push toward Rust "welcome news".

However, John Paul Adrian Glaubitz complained that Klode's wording was unpleasant and that the approach was confrontational. In another message, he explained that he was not against adoption of Rust; he had worked on enabling Rust on many of the Debian architectures and helped to fix architecture-specific bugs in the Rust toolchain as well as LLVM upstream. However, the message strongly suggested there was no room for a change in plan: Klode had ended his message with "thank you for understanding", which invited no further discussion. Glaubitz was one of a few Debian developers who expressed discomfort with Klode's communication style in the message.

Klode noted, briefly, that Rust was already a hard requirement for all Debian release architectures and ports, except for Alpha (alpha), Motorola 680x0 (m68k), PA-RISC (hppa), and SuperH (sh4), because of APT's use of the Sequoia-PGP project's sqv tool to verify OpenPGP signatures. APT falls back to using the GNU Privacy Guard signature-verification tool, gpgv, on ports that do not have a Rust compiler. By depending directly on Rust, though, APT itself would not be available on ports without a Rust compiler. LWN recently covered the state of Linux architecture support, and the status of Rust support for each one.

None of the ports listed by Klode are among those officially supported by Debian today, or targeted for support in Debian 14 ("forky"). The sh4 port has never been officially supported, and none of the other ports have been supported since Debian 6.0. The actual impact on the ports lacking Rust is also less dramatic than it sounded at first. Glaubitz assured Antoni Boucher that "the ultimatum that Julian set doesn't really exist", but phrasing it that way "gets more attention in the news". Boucher is the maintainer of rust_codegen_gcc, a GCC ahead-of-time code generator for Rust. Nothing, Glaubitz said, stops ports from using a non-Rust version of APT until Boucher and others manage to bootstrap Rust for those ports.

Security theater?

David Kalnischkies, who is also a major contributor to APT, suggested that if the goal is to reduce bugs, it would be better to remove the code that is used to parse the .deb, .ar, and .tar formats that Klode mentioned from APT entirely. It is only needed for two tools, apt-ftparchive and apt-extracttemplates, he said, and the only "serious usage" of apt-ftparchive was by Klode's employer, Canonical, for its Launchpad software-collaboration platform. If those were taken out of the main APT code base, then it would not matter whether they were written in Rust, Python, or another language, since the tools are not directly necessary for any given port.

Kalnischkies also questioned the claim that Rust was necessary to achieve the stronger approach to unit testing that Klode mentioned:

You can certainly do unit tests in C++, we do. The main problem is that someone has to write those tests. Like docs.

Your new solver e.g. has none (apart from our preexisting integration tests). You don't seriously claim that is because of C++ ? If you don't like GoogleTest, which is what we currently have, I could suggest doctest (as I did in previous installments). Plenty other frameworks exist with similar or different styles.

Klode has not responded to those comments yet, which is a bit unfortunate given the fact that introducing hard dependencies on Rust has an impact beyond his own work on APT. It may well be that he has good answers to the questions, but it can also give the impression that Klode is simply embracing a trend toward Rust. He is involved in the Ubuntu work to migrate from GNU Coreutils to the Rust-based uutils. The reasons given for that work, again, are around modernization and better security—but security is not automatically guaranteed simply by switching to Rust, and there are a number of other considerations.

For example, Adrian Bunk pointed out that there are a number of Debian teams, as well as tooling, that will be impacted by writing some of APT in Rust. The release notes for Debian 13 ("trixie") mention that Debian's infrastructure "currently has problems with rebuilding packages of types that systematically use static linking", such as those with code written in Go and Rust. Thus, "these packages will be covered by limited security support until the infrastructure is improved to deal with them maintainably". Limited security support means that updates to Rust libraries are likely to only be released when Debian publishes a point release, which happens about every two months. The security team has specifically stated that sqv is fully supported, but there are still outstanding problems.

Due to the static-linking issue, any time one of sqv's dependencies, currently more than 40 Rust crates, has to be rebuilt because of a security issue, sqv (at least potentially) also needs to be rebuilt. There are also difficulties in tracking CVEs for all of those dependencies, and in understanding when a security vulnerability in a Rust crate requires updating a Rust program that depends on it.

Fabian Grünbichler, a maintainer of Debian's Rust toolchain, listed several outstanding problems Debian has with dealing with Rust packages. One of the largest is the need for a consistent Debian policy for declaring statically linked libraries. In 2022, Guillem Jover added a control field for Debian packages called Static-Built-Using (SBU), which would list the source packages used to build a binary package. This would indicate when a binary package needs to be rebuilt due to an update in another source package. For example, sqv depends on more than 40 Rust crates that are packaged for Debian. Without declaring the SBUs, it may not be clear if sqv needs to be updated when one of its dependencies is updated. Debian has been working on a policy requirement for SBU since April 2024, but it is not yet finished or adopted.
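To make the mechanism concrete, here is a small illustrative Python sketch, not actual Debian tooling, that parses a Static-Built-Using value (written in the usual `source (= version)` relationship syntax) and reports which binary packages would need a rebuild when a given source package is updated. The package names and versions below are hypothetical:

```python
import re

def parse_static_built_using(field):
    """Parse a Static-Built-Using value such as
    'rust-anyhow (= 1.0.86-1), rust-sha2 (= 0.10.8-1)'
    into a dict mapping source-package name to version."""
    deps = {}
    for entry in field.split(","):
        entry = entry.strip()
        if not entry:
            continue
        m = re.match(r"(\S+)\s*\(=\s*([^)]+)\)", entry)
        if m:
            deps[m.group(1)] = m.group(2).strip()
    return deps

def needs_rebuild(packages, updated_source):
    """Given {binary: static-built-using-field}, return the binaries
    that statically link code from updated_source and so must be
    rebuilt when that source package is updated."""
    return sorted(
        binary for binary, field in packages.items()
        if updated_source in parse_static_built_using(field)
    )

# Hypothetical data: which source packages each binary statically links.
packages = {
    "sqv": "rust-sequoia-openpgp (= 1.21.2-1), rust-anyhow (= 1.0.86-1)",
    "some-go-tool": "golang-x-crypto (= 1:0.25.0-1)",
}
print(needs_rebuild(packages, "rust-anyhow"))  # ['sqv']
```

This is the bookkeeping that the proposed policy would make reliable: without the field being declared, the archive has no systematic way to compute the "needs rebuild" set at all.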

The discussion sparked by Grünbichler makes clear that most of Debian's Rust-related problems are in the process of being solved. However, there's no evidence that Klode explored the problems before declaring that APT would depend on Rust, or even asked "is this a reasonable time frame to introduce this dependency?"

Where tradition meets tomorrow

Debian's tagline, or at least one of its taglines, is "the universal operating system", meaning that the project aims to run on a wide variety of hardware (old and new) and be usable on the desktop, server, IoT devices, and more. The "Why Debian" page lists a number of reasons users and developers should choose the distribution: multiple hardware architectures, long-term support, and its democratic governance structure are just a few of the arguments it puts forward in favor of Debian. It also notes that "Debian cannot be controlled by a single company". When a single developer who is employed by a company to work on Debian tools pushes a change that seems beneficial to that company, without discussion or debate, and that change impacts multiple hardware architectures and requires other volunteers to do unplanned work to meet an artificial deadline, it seems to go against many of the project's stated values.

Debian, of course, does have checks and balances that could be employed if other Debian developers feel it necessary. Someone could, for example, appeal to Debian's Technical Committee, or sponsor a general resolution to override a developer if they cannot be persuaded by discussion alone. That happened recently when the committee required systemd maintainers to provide the /var/lock directory "until a satisfactory migration of impacted software has occurred and Policy updated accordingly".

However, it also seems fair to point out that Debian can move slowly, even glacially, at times. APT added support for the DEB822 format for its source-information lists in 2015. Despite APT supporting that format for years, Klode faced resistance in 2021 when he pushed, unsuccessfully, for Debian to move to the new format ahead of the Debian 12 ("bookworm") release. It is now the default for trixie with the move to APT 3.0, though APT will continue to support the old format for years to come.

The fact is, regardless of what Klode does with APT, more and more free software is being written (or rewritten) in Rust. Making it easier to support that software when it is packaged for Debian is to everyone's benefit. Perhaps the project needs some developers who will be aggressive about pushing the project to move more quickly in improving its support for Rust. However, what is really needed is more developers lending a hand to do the work that is needed to support Rust in Debian and elsewhere, such as gccrs. It does not seem in keeping with Debian's community focus for a single developer to simply declare dependencies that other volunteers will have to scramble to support.




portable APT?

Posted Nov 24, 2025 16:42 UTC (Mon) by atai (subscriber, #10977) [Link] (12 responses)

Is it possible to fork APT so that the Rust components can be filled in with C/C++ implementations, keeping a portable APT at feature/functionality parity with the Rust-using APT, so that these ports can continue?

portable APT?

Posted Nov 24, 2025 16:53 UTC (Mon) by epa (subscriber, #39769) [Link] (5 responses)

How about a Rust to C converter? I vaguely expected that would exist somewhere. It wouldn't run the borrow checker or most of Rust's other compile time checks, but that doesn't matter, as long as the code previously built with a full Rust compiler. And it could generate code that always runs singlethreaded.

portable APT?

Posted Nov 24, 2025 17:14 UTC (Mon) by ojeda (subscriber, #143370) [Link]

For that, one may write a new backend for an existing Rust compiler. That way, the borrow checker and every other compile-time check still applies.

`rustc_codegen_clr` has such a mode, and there was also another start on a new C backend for `rustc`. Neither is "production ready", but it is a nice approach, and in fact it is not uncommon for languages to design their compilers that way.

portable APT?

Posted Nov 25, 2025 20:16 UTC (Tue) by hsivonen (subscriber, #91034) [Link] (1 responses)

Rust can target wasm and there’s wasm2c, which works well enough that RLBox is real and is used in Firefox with wasm coming from clang.

I don’t know about the level of effort required to get the pile of C to expose the same interface as the Rust input exposes via FFI.

portable APT?

Posted Nov 26, 2025 10:00 UTC (Wed) by moltonel (subscriber, #45207) [Link]

At that stage, you might as well compile to WASM, and run that binary using a wasm runtime on the platforms that lack Rust?

portable APT?

Posted Nov 27, 2025 14:29 UTC (Thu) by slanterns (guest, #173849) [Link]

portable APT?

Posted Nov 27, 2025 23:52 UTC (Thu) by jmalcolm (subscriber, #8876) [Link]

There is a "Rust to C converter", or rather a Rust compiler that emits C (mrustc). More importantly, there are two different projects to bring Rust to GCC (gccrs and rustc_codegen_gcc).

If you can compile Rust to any architecture that supports GCC, you certainly have enough portability for APT.

Rust is already portable to the Debian architectures that people actually use. The ones that it does not support are not up-to-date anyway so shipping with an older version of APT would not be the end of the world.

As above though, there is no reason you cannot compile APT, even with Rust in it, for any of these platforms.

portable APT?

Posted Nov 24, 2025 16:53 UTC (Mon) by jmm (subscriber, #34596) [Link] (4 responses)

There's no strict dependency on apt in Debian, the whole discussion was totally blown out of proportion in the first place.
The ports w/o a Rust toolchain could still use cupt, which is written in C++.

portable APT?

Posted Nov 26, 2025 14:01 UTC (Wed) by ATLief (subscriber, #166135) [Link] (3 responses)

The apt package is marked as "Essential: yes" and "Priority: required", so anything officially supported would need to include it. I guess those designations could be removed, but their intention is probably to ensure that packages can be installed on any system. I think having multiple lowest common denominators (like apt on most systems and cupt on others) would defeat the purpose of marking them as such.

portable APT?

Posted Nov 26, 2025 16:35 UTC (Wed) by smcv (subscriber, #53363) [Link] (2 responses)

apt specifically isn't Essential, but it has a special case to treat itself as pseudo-Essential while making dependency calculations, to make sure you can't accidentally use apt to uninstall apt and then have no way to reinstall it. Non-apt package managers can uninstall it. (Try it in a minimal Debian container: "apt remove apt" fails unless you add some --force options, but "dpkg --remove apt" works fine, and presumably cupt would also allow removing it.)

Priority has very little practical effect, and Priority: required is mostly about ensuring that debootstrap and similar minimal-system bootstrap tools will install it, so that you have a route to install additional packages.

portable APT?

Posted Nov 26, 2025 17:01 UTC (Wed) by Wol (subscriber, #4433) [Link] (1 responses)

Or you use the gentoo approach with virtual packages - something like "Pid1 is essential, can be satisfied by SysV, systemd, OpenRC, yada yada".

So you can't uninstall the only one available from the list - if two are installed there's no problem uninstalling either.

Cheers,
Wol

Debian virtual packages

Posted Nov 27, 2025 16:59 UTC (Thu) by timon (subscriber, #152974) [Link]

Debian does have virtual packages [1], e. g. “xserver” [2]; those you cannot install but they are provided by other packages. Debian also has metapackages [3], e. g. “init” [4] corresponding to your Gentoo example; those you can actually install, and they will then pull in relevant dependencies providing the actual functionality. The “apt” package [5] however doesn’t provide any relevant virtual package name.

[1]: https://packages.debian.org/stable/virtual/
[2]: https://packages.debian.org/stable/xserver
[3]: https://packages.debian.org/stable/metapackages/
[4]: https://packages.debian.org/stable/init
[5]: https://packages.debian.org/stable/apt

portable APT?

Posted Nov 24, 2025 16:56 UTC (Mon) by farnz (subscriber, #17727) [Link]

The question, as always, would be who's going to do the forking and keep up with upstream?

The other route is to contribute to things like gccrs or rust_codegen_gcc, so that Rust is available on these ports, too. This has the slight advantage that, once you have Rust support, any other packages in Debian that need Rust become buildable for that port.

Shared libraries

Posted Nov 24, 2025 17:02 UTC (Mon) by ballombe (subscriber, #9523) [Link] (316 responses)

Most of Debian was built around the need to support shared libraries and the challenges and conveniences that come with them.

I am very reticent to lose that by moving to Rust, especially since there are no strictly technical reasons:
C++ supports shared libraries, and Rust could in principle support them too. In fact, Rust shared libraries could fix most of the problems with C shared libraries by having well-defined ABI and API definitions in the library itself.

Rebuilding packages to update their dependencies is not sustainable for Debian.

Shared libraries

Posted Nov 24, 2025 17:26 UTC (Mon) by DemiMarie (subscriber, #164188) [Link] (293 responses)

Rebuilding packages when their dependencies change is the future.

Even in C++, you already need to do this to pull in fixes made to a template, because templates are located in the header file. Most other natively-compiled languages also require such rebuilding. When it comes to new languages, Swift and maybe Hare are the only exceptions I know of.

None of these languages are being developed or funded by distros. They are all developed and funded by companies that can and do rebuild their programs from source and link statically without any issues. Distros are complaining that there is a problem without doing a substantial fraction of upstream maintenance on Cargo, rustc, GHC, Go, or any of the other toolchains.

If distros want ecosystems to be more friendly to them, they need to put in the (large) amount of work to make that happen. It’s not impossible, but it is very difficult, and it has ecosystem-wide implications. Until they do, they get to use whatever the people who do do this work choose to make.

Shared libraries

Posted Nov 24, 2025 17:31 UTC (Mon) by fishface60 (subscriber, #88700) [Link]

> If distros want ecosystems to be more friendly to them, they need to put in the (large) amount of work to make that happen. It’s not impossible, but it is very difficult, and it has ecosystem-wide implications. Until they do, they get to use whatever the people who do do this work choose to make.

Hopefully the likes of Canonical, Red Hat or possibly Valve will step up to fund this, since it doesn't seem realistic to expect volunteer distributions like Debian to do the work.

Shared libraries

Posted Nov 24, 2025 17:49 UTC (Mon) by bluca (subscriber, #118303) [Link] (264 responses)

> Rebuilding packages when their dependencies change is the future.

Then the future is shite

Shared libraries

Posted Nov 24, 2025 18:07 UTC (Mon) by Wol (subscriber, #4433) [Link] (106 responses)

> > Rebuilding packages when their dependencies change is the future.

> Then the future is shite

Or you go back to what I was doing over 40 years ago, when a library was just that ...

Yes you'll need some thought about how to update it into the modern world, but you static link and your library is a bunch of .o's that get copied in.

Yes you need to rebuild your applications, but the compile load is so much lower.

And if you really want to sort-of-merge your compiler and linker, okay you won't be able to mix-n-match compilers in all likelihood, but instead of .o's you compile the library to intermediate compiler representation, optimise whatever hell you can out of it, and then dump that into a .lib file that the compiler can pull into the application.

Okay, you lose the ability to just drop in a new fixed library, that fixes all your apps in one hit, but how well does that really work in practice?

Cheers,
Wol

Shared libraries

Posted Nov 24, 2025 20:12 UTC (Mon) by ballombe (subscriber, #9523) [Link] (74 responses)

> Okay, you lose the ability to just drop in a new fixed library, that fixes all your apps in one hit, but how well does that really work in practice?

It works pretty well. For example, each time a new CVE is fixed in libtiff, the libtiff library is upgraded and there is no need to rebuild every piece of software that directly or indirectly processes TIFF files.

Making it very costly to apply a security fix does not increase security.

Shared libraries

Posted Nov 25, 2025 8:54 UTC (Tue) by taladar (subscriber, #68407) [Link] (18 responses)

Oddly enough none of the distros that advocate for this approach officially cover the part of that process that needrestart does, i.e. actually tell the user what needs to be restarted to get the benefit of that update.

Shared libraries

Posted Nov 25, 2025 9:45 UTC (Tue) by leromarinvit (subscriber, #56850) [Link] (14 responses)

Is that so? My Debian and Ubuntu systems will often restart services after installing updates. Does this not cover shared library updates - i.e. is a service only restarted after it itself was updated? Genuinely curious, I've always just assumed dependencies would also trigger restarts (at least if they're shared libs).

If they don't, just setting APT to auto-install security updates, without somehow restarting individual services or the whole system afterwards, is clearly not enough to at least keep a system free of known (and fixed) vulnerabilities.

Shared libraries

Posted Nov 25, 2025 10:03 UTC (Tue) by taladar (subscriber, #68407) [Link]

There is that "Daemons using outdated libraries" thing that sometimes pops up but it only covers a small subset of the running binaries or even systemd services actually using updated files.

Shared libraries

Posted Nov 25, 2025 10:05 UTC (Tue) by epa (subscriber, #39769) [Link] (12 responses)

Yes, that's the shameful secret of security updates in shared libraries. We've long mocked the Microsoft approach of rebooting the device for every change, but it's hard to beat if you want to guarantee that the vulnerable code is no longer running anywhere on the system.

In principle a program could be re-linked against the new shared library code while it stays running, but that requires an even stronger ABI stability guarantee than most libraries provide.

Shared libraries

Posted Nov 25, 2025 11:26 UTC (Tue) by SLi (subscriber, #53131) [Link] (10 responses)

Hmm. Why is this a shared vs static library problem? Doesn't the application need to be restarted in either case to get the benefits?

Shared libraries

Posted Nov 25, 2025 14:02 UTC (Tue) by farnz (subscriber, #17727) [Link] (9 responses)

The claim being made for shared libraries is that I can just update the library, and all the applications are immediately patched, which reduces admin effort as compared to static linking, where I have to update the binaries and then restart the applications.

The point is that it's not that big a reduction in effort - and it's a reduction in effort in the automated part, to boot.

Shared libraries

Posted Nov 25, 2025 14:59 UTC (Tue) by intelfx (subscriber, #130118) [Link] (8 responses)

> The claim being made for shared libraries is that I can just update the library, and all the applications are immediately patched, which reduces admin effort as compared to static linking, where I have to update the binaries and then restart the applications.

Nobody is making the claim for shared libraries to somehow obviate the need to *restart the applications*. You invented this claim out of thin air.

Shared libraries obviate the need to *update the binaries*, no more, no less.

Shared libraries

Posted Nov 25, 2025 15:12 UTC (Tue) by farnz (subscriber, #17727) [Link] (7 responses)

They don't even do that - you have to update the binaries that are supplied by the shared library, and the in-memory copies of the binaries, too.

The only thing they do is mean that you don't have to replace the executables - but replacing binaries (libraries or executables) is the bit of the update process that's simple to automate.

Shared libraries

Posted Nov 25, 2025 18:23 UTC (Tue) by intelfx (subscriber, #130118) [Link] (6 responses)

Clearly, "updating the binaries" in this argument means updating the _dependent_ binaries (the executables). I intended to include this clarification but omitted it due to sheer obviousness.

> but replacing binaries (libraries or executables) is the bit of the update process that's simple to automate.

It's not so simple (or cheap) to automate _creating_ those binaries in the first place. This is the part shared libraries help with: you do not need to rebuild and redistribute O(n) dependents when all you need is to make a compatible change in O(1) libraries.

Shared libraries

Posted Nov 25, 2025 19:14 UTC (Tue) by NYKevin (subscriber, #129325) [Link] (5 responses)

> It's not so simple (or cheap) to automate _creating_ those binaries in the first place.

The argument made by the newer languages is that it should be. And they have largely succeeded. When is the last time you saw a Rust or Go project that *couldn't* be built with cargo build, go build, etc.?

Yes, yes, the subsequent packaging step is frequently more complicated. But that step is entirely the distro's invention. They don't get to make it everyone else's problem.

Shared libraries

Posted Nov 28, 2025 3:16 UTC (Fri) by wtarreau (subscriber, #51152) [Link] (4 responses)

This full rebuild of applications, however, is a real problem for end users because they can no longer choose to update only the part that causes trouble. When application X has dependency Y and a full rebuild of X+Y has to be done to fix Y, most of the time it also comes with fixes (or even minor updates) of X that users have to accept just to fix Y. This is a real problem because regressions do happen (frequently), and the more dependencies are involved in a rebuild, the more likely unintended changes are visible to the end user, who doesn't always have the choice of running the software that works for them and only updating problematic dependencies.

Shared libraries

Posted Nov 28, 2025 5:35 UTC (Fri) by Cyberax (✭ supporter ✭, #52523) [Link]

I keep reminding people here that Gentoo exists. And it's not a new fad, it's been around for more than two decades. And Nix is a newer example.

Shared libraries

Posted Nov 28, 2025 8:33 UTC (Fri) by taladar (subscriber, #68407) [Link] (2 responses)

You mean they don't get a choice in the way they don't get a choice to keep the old version of a shared library (or indeed choose an arbitrary version of a shared library) for each of the other applications using that same library on the system that might not be affected by a fix?

Besides, updating all the dependencies to the same combination of dependency versions used by everyone else using the application actually reduces the number of truly different combinations in use, making it more likely that someone else is running the same version and has run into a bug before you do.

Shared libraries

Posted Jan 14, 2026 4:48 UTC (Wed) by wtarreau (subscriber, #51152) [Link] (1 responses)

> You mean they don't get a choice in the way they don't get a choice to keep the old version of a shared library (or indeed choose an arbitrary version of a shared library) for each of the other applications using that same library on the system that might not be affected by a fix?

No, I mean they don't have the choice of only updating the library without updating the application. The lifecycle of applications is not always the same as that of libraries. Take Firefox as an example: you might experience issues caused by one of the thousand libs it depends on, while it otherwise matches your needs. By only updating that broken lib you get a perfectly functional browser. When using static builds, it's less likely that your version will be rebuilt against an updated dependency, so you're more likely to be forced to upgrade to the next version, which probably doesn't please you at all anymore, or which introduces new issues. The alternative is to not update the static application, and that's an issue with security stuff (e.g. crypto libraries), which will overall degrade the security of what's effectively deployed in the field.

Shared libraries

Posted Jan 14, 2026 5:20 UTC (Wed) by Cyberax (✭ supporter ✭, #52523) [Link]

The corollary is that if you have a static version, it's usually easy to build a custom version, because the project will have tooling to easily manage the bundled libraries. Otherwise they'd go mad.

This absolutely applies to Rust and Go. I can make my own mini-fork within minutes for most of the projects, except for really hairy ones like Docker or K8s.

Shared libraries

Posted Dec 6, 2025 10:38 UTC (Sat) by jengelh (subscriber, #33263) [Link]

>We've long mocked the Microsoft approach of rebooting the device for every change, but it's hard to beat if you want to guarantee that the vulnerable code is no longer running anywhere on the system

The more evolved and hostile attached networks became between 1990 and now, the *less* often Windows versions needed to reboot. So it does not seem like the mocking was ever done because of (in)security concerns.

Shared libraries

Posted Nov 25, 2025 16:02 UTC (Tue) by draco (subscriber, #1792) [Link] (2 responses)

Officially, Fedora only supports offline updates (reboot, apply updates in constrained environment, then boot with updated software). You can do it online, but then it's up to you to pick up the pieces.

You keep saying distros don't handle this case, but they do.

Extra services during offline upgrade

Posted Nov 27, 2025 18:13 UTC (Thu) by DemiMarie (subscriber, #164188) [Link] (1 responses)

Is there a way to run extra services during an offline upgrade? Qubes OS requires that a VM be running certain services at all times.

Extra services during offline upgrade

Posted Nov 28, 2025 12:10 UTC (Fri) by neverpanic (subscriber, #99747) [Link]

AFAIR, the "offline" upgrade really is just a special systemd target that gets activated when a specific symlink exists in the rootfs, and a successful update deletes that symlink.

So, yes, you can run extra services by making sure your systemd service file is configured to start in that target.

Shared libraries

Posted Nov 25, 2025 12:17 UTC (Tue) by NAR (subscriber, #1313) [Link] (54 responses)

Don't forget that it goes the other way too: introduce one bug into libtiff and now all libtiff-using programs on your system have the bug. It's not unheard of for new versions of libraries to have (new) bugs... I'd even be inclined to think it happens more often that a shared-library update introduces a bug than that it fixes an actual, exploitable security vulnerability.

Shared libraries

Posted Nov 25, 2025 14:06 UTC (Tue) by farnz (subscriber, #17727) [Link] (53 responses)

And there's a particularly nasty subset of that, induced by the increased scope of feature unification.

Imagine a new version of libtiff which introduces a security-relevant bug into the decompressor for TIFF compression scheme 32809 (ThunderScan 4-bit RLE). Upstream's statically linked builds of the program are not vulnerable, because they don't enable the bits of libtiff needed to handle files from ancient Macs, but because your distro includes a utility that's supposed to analyse an ancient Mac disk image and convert all the data to modern formats that you can work with, your distro build of libtiff has this support enabled.

Hey presto, an application that was not vulnerable in the upstream configuration (and may not be vulnerable on other distros that don't support reading TIFF files from ancient Macs) is now vulnerable, because you're running a configuration of the code that's necessary for a different application.

Worst case, you've opened up a network-accessible vulnerability in an application that was unaware that you could build libtiff this way, in order to give more functionality to an application that's carefully sandboxed in case the files are corrupt and trigger a bug.

Shared libraries

Posted Nov 25, 2025 15:10 UTC (Tue) by paulj (subscriber, #341) [Link] (35 responses)

This one can easily be flipped the other way. Upstream statically links in libtiff with legacy, bug-ridden stuff enabled. When said bugs become known, the distro updates the system shared library. All the dynamically linked apps are now secured. Except, of course, your statically linked upstream.

Which scenario is the more common? Which has the better track record at quickly updating to fix bugs? The random statically linked upstream-packaged apps or the Linux distros? I'd say the distros.

But let's say Linux distros are just average. Say we have 100 upstream-packaged statically-linked apps, and 100 apps using the distro shared library... ~50 of the upstream apps will update before the distro, and ~50 after - with a long tail. So - even if distros are not very good at shipping security updates, the statically linked approach will still leave you with a number of vulnerable apps for a long time to come.
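The back-of-the-envelope arithmetic above can be turned into a toy simulation (purely illustrative; the exponential lag distribution and the 30-day mean are assumptions, not data):

```python
import random

random.seed(42)  # deterministic toy run

# Toy model: a library vulnerability is disclosed at t=0. The distro
# ships one fixed shared library after a random lag; each of 100
# statically linked apps ships its own rebuild after its own random
# lag. Both lags come from the same distribution, i.e. the distro is
# assumed to be merely "average" at shipping updates.
N_APPS = 100
MEAN_LAG_DAYS = 30  # assumed, purely illustrative

def lag() -> float:
    return random.expovariate(1 / MEAN_LAG_DAYS)  # long-tailed

distro_fix = lag()  # one fix covers every dynamically linked app
static_fixes = sorted(lag() for _ in range(N_APPS))

still_vulnerable = sum(1 for t in static_fixes if t > distro_fix)
print(f"distro fixed all dynamic apps after {distro_fix:.0f} days")
print(f"{still_vulnerable} static apps were still vulnerable then")
print(f"last static app fixed after {static_fixes[-1]:.0f} days")
```

The point the simulation makes concrete: one shared-library update closes the window for all dynamic apps at once, while the statically linked population empties out along the long tail of the lag distribution.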

Shared libraries

Posted Nov 25, 2025 15:18 UTC (Tue) by farnz (subscriber, #17727) [Link] (14 responses)

Oh yes - both ways round are possible.

Note that the distro is quite capable of using the dependency information it already has (BuildRequires and the like) to rebuild statically linked binaries - dynamically linked versus statically linked is more about how much automated work has to be done to get you a fixed version in place, rather than about which is "more secure".

And I don't believe anyone has done the study to determine which is actually more secure in practice - statically linked executables, with unused parts of libraries turned off, or dynamically linked executables sharing a library with more used components. Once you allow for things like the time to determine that an update is needed, it's quite a complex space to think about, and (like so much in computing), we're more going on "what feels right" than on hard data.

Shared libraries

Posted Nov 25, 2025 17:42 UTC (Tue) by paulj (subscriber, #341) [Link] (1 responses)

My point was that, even IF distros were not great at updating the shared library, i.e. just average, it would still result in the system applications generally ALL being 'fixed' in an average and bounded amount of time - if they use shared libraries. Whereas, with apps statically linking, assuming a decent number of apps, you will have a much longer and unbounded period of time where you have apps installed with a security problem.

I don't see any reason why apps would or would not choose more locked-down build options than the distro. Assume that varies randomly across apps, and assume it varies randomly with libraries. The point still stands: The 'average' Linux distro gives you a bounded, shorter window of time where you have vulnerable apps from a bad library.

Shared libraries

Posted Nov 26, 2025 8:51 UTC (Wed) by taladar (subscriber, #68407) [Link]

This is not quite true. If the shared-library solution works, there is a strong chance that rebuilding the unmodified binary will work too, so the distro could also do that; the builds might take a bit more CPU time, but overall this is definitely also bounded.

If, on the other hand, the binary needs to be modified to support the fixed library version there is a strong chance there is no ABI-compatibility for the shared library to take advantage of anyway.

Shared libraries

Posted Nov 25, 2025 17:56 UTC (Tue) by nim-nim (subscriber, #34454) [Link] (11 responses)

> Note that the distro is quite capable of using the dependency information it already has (BuildRequires and the like) to rebuild statically linked binaries

That’s a workaround, not a feature: it transforms updates into mass rebuilds, which require a lot more build power and add an economic barrier to entry to the distribution game (not that the big distributors will complain much about this part).

Nevertheless, the real major drawback of static building is that it removes any developer incentive to converge on specific component versions. With dynamic building you have to choose one of the handful of versions packaged by your distributors. With static building there is no reason to make the effort (which is why developers are in deep love with static building).

The consequence of this lack of version convergence is that static building is not only massively more wasteful of building power, it is massively more demanding of maintainer power. You need to tailor security patches (and security impact analysis) to each and every component version individual developers chose to vendor in their static builds. In effect you promote technical debt (increasing short-term benefits at the expense of long-term maintenance).

The problem does not exist FAANG side, because FAANGs force their dev teams to use the same golden versions. Guess how promoters of static building would welcome a distribution that told them “you can use static building, but only with the following vendored versions, because we do not have the resources to patch others”.

Shared libraries

Posted Nov 25, 2025 18:31 UTC (Tue) by Cyberax (✭ supporter ✭, #52523) [Link]

> That’s a workaround not a feature, it transforms updates in mass rebuilds, that require a lot more build power, and add an economic barrier to entry to the distribution game (not that the big distributors will complain much about this part).

Eh, no. Unless you're targeting something like m68k, rebuilding is cheap. We can use Gentoo as an example, rebuilding the world takes about 12 hours on a reasonably fast computer. And this is for a catastrophic bug, where everything needs to be updated.

In most cases, you're looking at maybe a handful of packages.

Shared libraries

Posted Nov 26, 2025 8:54 UTC (Wed) by taladar (subscriber, #68407) [Link] (4 responses)

On the other hand why is converging on a version that isn't the latest version a good thing? Shared library distro builds are essentially always stuck on the oldest version that some reverse dependency requires while static linking can just use the latest version, becoming part of the much, much larger group of people who aren't years behind.

This gets rid of all the effort required for distro specific bugs when developers don't want bug reports for ancient versions, all the backporting,...

Shared libraries

Posted Nov 28, 2025 9:37 UTC (Fri) by nim-nim (subscriber, #34454) [Link] (1 responses)

The latest version is not necessarily reliable; give the developers of other components a break, they also need to explore new approaches. Converging on a common version means a lot of people found this particular version reliable.

Static building is the opposite of using the latest version: people use the latest version *at the time they bother to check* and then lock it down and accumulate technical debt. People do not like making the effort to update, period. They can’t complain that using dynamic libraries forces them to update on a regular cadence (that would expose that they do not want to make this effort), so they complain that the update forced on them by distributions is not new and shiny enough.

Shared libraries

Posted Nov 29, 2025 16:20 UTC (Sat) by khim (subscriber, #9252) [Link]

> Converging on a common version means a lot of people found this particular version reliable;

That's only true for forges like PyPi or crates.io. Where people are free to pick any version they like.

With distros it is the exact opposite: a few packagers, often just one single person, decide what version of a library to package.

That's the opposite of the process that may lead to the outcome you describe.

> Static building is the opposite of using the latest version, people use the latest version *at the time they bother to check*

Still better than some random version that was picked by god-knows-who for god-knows-what reason and which wasn't even tested in conjunction with the app.

> they complain that the update forced on them by distributions is not new and shiny enough.

People complain when something doesn't work, period. It may be too new or too old or anything in between.

But with distros they are often in a position where they have no one to even complain to, because the crazy combination of libraries that breaks things is not the result of a conscious decision, but more the result of random dice throwing.

According to the distro makers every package should be ready to work with every version of library… but that's rarely a thing that any sane developer would accept: to know whether something works or not you need to test… and distro makers make that exceedingly difficult.

Shared libraries

Posted Nov 28, 2025 11:25 UTC (Fri) by intelfx (subscriber, #130118) [Link] (1 responses)

> Shared library distro builds are essentially always stuck on the oldest version that some reverse dependency requires while static linking can just use the latest version, becoming part of the much, much larger group of people who aren't years behind.

Yeah, that's just the exact opposite of how it happens in reality.

In reality, applications in ecosystems with vendor-controlled static linking/bundling/vendoring get locked to the first version of each dependency that happens to work and never upgraded again (until a vulnerability is exploited in the wild, or until maintenance of a suitably obsolete build environment becomes untenable through actions of others).

Maintainers of distributions, on the other hand, care about the health of the overall "ecosystem". Of course, the quality of said care may differ, but the overall concept is always there. You, as a user, may be briefly stuck on non-latest versions (or even on legacy branches) of some dependencies, but the entire distribution moves forward as fast as manpower allows.

Shared libraries

Posted Dec 1, 2025 9:59 UTC (Mon) by taladar (subscriber, #68407) [Link]

That is the way it works in C/C++ when they vendor dependencies, because the tooling to upgrade sucks. But in Rust, for example, upgrading dependencies is as simple as running cargo upgrade and cargo update and then fixing the minor compile issues; 99% of the time (i.e. every time some dependency didn't completely revamp their API, which is very rare) keeping up with dependency versions is incredibly easy.

Shared libraries

Posted Nov 26, 2025 9:22 UTC (Wed) by taladar (subscriber, #68407) [Link]

If rebuilding thousands of packages was such a big deal, why can the Rust project easily do https://rustc-dev-guide.rust-lang.org/tests/crater.html runs regularly to test if the compiler breaks any crate on crates.io?

ABI stability can block security updates

Posted Nov 27, 2025 18:14 UTC (Thu) by DemiMarie (subscriber, #164188) [Link] (3 responses)

Static linking allows security updates to libraries to be applied to applications that are ready for them, even when other applications are stuck on vulnerable versions. See gRPC in Fedora.

ABI stability can block security updates

Posted Dec 1, 2025 9:17 UTC (Mon) by nim-nim (subscriber, #34454) [Link] (2 responses)

Unfortunately baddies do not care if you have double-plated the main door with the latest uber-expensive alloy, if the service door right next to it still uses rotting medium (a very common situation in proprietary setups).

What matters for security is that *every* deployed software bit uses fully patched components, even when those components are slightly old because no security event required their update and a full OS update is expensive effort-wise.

Static linking as you wrote promotes fixing highly visible main doors while keeping service doors wide open.

ABI stability can block security updates

Posted Dec 1, 2025 10:50 UTC (Mon) by taladar (subscriber, #68407) [Link]

You are completely ignoring that dynamic linking keeps the entire distro on an old version (or rather, often a new, completely untested version somehow considered stable because it is called a "backported fix"), because the many less popular programs that most systems don't even install have not been updated yet.

ABI stability can block security updates

Posted Dec 1, 2025 10:59 UTC (Mon) by muase (subscriber, #178466) [Link]

But that’s not true, as different components have different attack surfaces. gRPC is a good example, because it makes a huge difference whether you have a public service that works with untrusted gRPC messages from the outside, or your coordinator just uses a fixed set of gRPC messages to distribute your physics simulation across multiple worker processes via stdin/stdout on your own desktop.

In the first case, that’s a real-world RCE, in the second case you probably cannot even exploit the bug because you can’t control the input in the first place (messages are generated by the coordinator, not you).

Different use cases have different attack scenarios and different attack surfaces. There are tons of applications where a gRPC vulnerability will be a serious incident; but there are also a lot of cases where – realistically speaking – it’s simply not exploitable.

Ideally, you should fix all bugs asap – but given the limited resources IRL, it sometimes is simply necessary to triage. That includes that if you cannot fix all packages at once, you can at least try to fix those where the bug has the most impact; i.e. users*impact (“reduce the bleeding”). It simply does not make sense to hold a fix back for the majority of users because you’re blocked by some obscure package nobody is using, or to wait for a package where the bug has no relevant real-world impact.

Shared libraries

Posted Nov 25, 2025 17:58 UTC (Tue) by JanC_ (guest, #34940) [Link] (19 responses)

There is also the issue of having to upgrade 100 packages instead of 1, which can make a big difference in time, network usage, required disk space, …

Shared libraries

Posted Nov 25, 2025 18:33 UTC (Tue) by Cyberax (✭ supporter ✭, #52523) [Link] (18 responses)

But why? I'd argue that this is more of a problem of existing packaging systems, they are inefficient by design.

There are no technical problems in storing updates as binary deltas. It's just that nobody cared to spin up infrastructure for this.

Shared libraries

Posted Nov 25, 2025 19:21 UTC (Tue) by bluca (subscriber, #118303) [Link] (17 responses)

That's because they are crap, and work only in theory. There's a reason stuff like deltarpm was abandoned. It just doesn't work in the real world, and it's effectively a workaround for a problem that doesn't need to exist in the first place.

Shared libraries

Posted Nov 26, 2025 2:34 UTC (Wed) by Cyberax (✭ supporter ✭, #52523) [Link] (16 responses)

deltarpm was just bad, and so were debdiffs. Not least because DEB/RPM themselves are inefficient.

Format-aware binary diffs can be incredibly compact. RedBend Software used to provide target-aware OTA diffs, and they could compress a simple binary patch to just about the source-code difference in bytes. Their differ used additional instructions to represent address shifts, and it could extract locations from the ELF relocation section.

And these days, not many people are going to care about update sizes anyway.

Shared libraries

Posted Nov 26, 2025 11:18 UTC (Wed) by bluca (subscriber, #118303) [Link] (15 responses)

So everything that actually exists was crap but it was just a coincidence, and the implementations that actually don't exist are the bee's knees - got it

Shared libraries

Posted Nov 26, 2025 18:37 UTC (Wed) by Cyberax (✭ supporter ✭, #52523) [Link] (14 responses)

OK. I'll bite.

Can you explain exactly why even simple brain-dead binary diffs are crap? I did an experiment, changed an "if" condition in a large binary, and did a diff. This is probably what most security patches are.

I used a simple `rsync --only-write-batch=diff file1 file2`, and for a 50MB binary the diff was 1MB. I have not tried bsdiff or xdelta.
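For the curious, the experiment is easy to reproduce without rsync; here is a naive fixed-block delta in Python (a toy sketch, not any real tool's format - rsync's rolling checksum additionally handles insertions that shift block alignment):

```python
import hashlib
import os

BLOCK = 4096

def make_delta(old: bytes, new: bytes):
    """Encode `new` as references to unchanged BLOCK-sized chunks of
    `old`, plus literal bytes for chunks that differ."""
    index = {hashlib.sha256(old[i:i + BLOCK]).digest(): i
             for i in range(0, len(old), BLOCK)}
    delta = []
    for i in range(0, len(new), BLOCK):
        chunk = new[i:i + BLOCK]
        ref = index.get(hashlib.sha256(chunk).digest())
        delta.append(("copy", ref) if ref is not None else ("literal", chunk))
    return delta

def apply_delta(old: bytes, delta) -> bytes:
    out = bytearray()
    for op, arg in delta:
        out += old[arg:arg + BLOCK] if op == "copy" else arg
    return bytes(out)

# a 5 MiB "binary" where a single byte flips, like a patched `if` condition
old = os.urandom(5 * 1024 * 1024)
new = bytearray(old)
new[1_000_000] ^= 0x01
new = bytes(new)

delta = make_delta(old, new)
literal_bytes = sum(len(arg) for op, arg in delta if op == "literal")
assert apply_delta(old, delta) == new
print(f"literal payload: {literal_bytes} of {len(new)} bytes")  # one 4 KiB block
```

Even this crude scheme reduces the transfer to a single changed block plus the (tiny) copy instructions; real differs like bsdiff do much better by exploiting the structure of compiled code.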

Shared libraries

Posted Nov 26, 2025 18:55 UTC (Wed) by bluca (subscriber, #118303) [Link] (13 responses)

Because operational reality down in the real world is very, very far away from experiments with spherical cows in a vacuum. It's the same with kernel live patching: great idea in theory, but in practice it's so damn expensive to make it work for real, on real systems, run by real people, with real random variations, that depend on them for real production use cases, that in reality you need an entire paid team to carefully shepherd them in a production scenario. And it happens for live patching because it's worth real money, as the only alternative for applying kernel security updates is rebooting and thus very long and measurable downtimes; it's also somewhat simpler because you have _one_ kernel on any given system to delta patch. So if you pay Canonical/RH/Oracle/SUSE you can get access to them, and then you can sort of manage it with enough engineering resources.

It's orders of magnitude worse for deltarpm and similar because instead of one kernel to build and manage patches for, you have N packages, and every node will have a different and unique combination. Combinatorial explosion. Complex, costly to manage, and the benefits on a well-designed system that ships critical components such as libc or libssl as shared libraries that can get easily updated are too small to notice.

Shared libraries

Posted Nov 26, 2025 19:20 UTC (Wed) by Cyberax (✭ supporter ✭, #52523) [Link] (11 responses)

I don't get it. We have a globe-spanning binary-diff-driven system right now. You're using it every day, in fact. It's called "git".

It works just fine, somehow. All you need is protocol-level diffing support that deltarpms just half-assed. Even with the most naïve implementation, the end result should be bit-for-bit identical to just transferring files.

You want more examples? OSM (OpenStreetMap) distributes up-to-the-minute incremental changes as diff files, and their full database is around 100GB. Yet they manage.

You want even more examples? Dockerhub and Docker images are represented as a set of deltas on top of a base image.

The real problem is, in fact, the RPMs and DEBs themselves. It routinely takes _longer_ to unpack and install Fedora or Debian updates than it takes to download them. Heck, the `apt-get update` step in my Dockerfiles is usually one of the slowest!

Shared libraries

Posted Nov 26, 2025 20:49 UTC (Wed) by bluca (subscriber, #118303) [Link] (9 responses)

> I don't get it. We have a globe-spanning binary-diff-driven system right now. You're using it every day, in fact. It's called "git".

Here's git in the real world: https://xkcd.com/1597/

> You want more examples? OSM

I'm not familiar with it, I suspect the bits are not user-facing pick-and-choose, but entirely behind the scenes, with no user control of installations and updates. And it's pure data, not running code. That makes it way, way easier and more manageable.

> You want even more examples? Dockerhub and Docker images are represented as a set of deltas on top of a base image.

Docker is a dumpster fire

Shared libraries

Posted Nov 27, 2025 0:15 UTC (Thu) by Cyberax (✭ supporter ✭, #52523) [Link] (8 responses)

> Docker is a dumpster fire

Yeah. Because it highlights just how terrible APT or RPM are. I can update 20 of my self-hosted apps _faster_ than it takes to update the host OS (Fedora). And to remind you, Docker is very much a user-facing, "pick-and-choose" system that is based on delta updates. A total impossibility, in other words.

And it works because it was architected correctly. Not _perfectly_, but correctly.

Why have the RPM diffs and APT deltas failed? It's because each package is installed separately. Apt has to pretend that each package is stand-alone, so it can't install them in parallel. And when you have thousands of packages, adding just a tenth of a second to the sequential critical path becomes painful, for not much gain for most people.

A more modern update system would dispense with all the individual package nonsense. It would fetch the updates (in parallel if possible) and in parallel pre-stage the changed files, resolving the deltas as soon as it gets the data. Then it would do atomic replacement of changed files (or as close to atomic as filesystems make it possible) and run whatever scripts needed to complete the update.
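The pre-stage-and-atomic-replace step described here can be sketched as follows (a minimal illustration; a real updater would also handle permissions, ownership, directory fsync, and rollback):

```python
import os
import tempfile

def stage_and_replace(target_path: str, new_contents: bytes) -> None:
    """Write the updated file into a temporary file in the same
    directory, then rename it over the old one. rename() within a
    single filesystem is atomic, so readers see either the old file
    or the new one, never a half-written mix."""
    dirname = os.path.dirname(target_path) or "."
    fd, tmp = tempfile.mkstemp(dir=dirname, prefix=".staged-")
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(new_contents)
            f.flush()
            os.fsync(f.fileno())      # make the staged copy durable first
        os.replace(tmp, target_path)  # the atomic swap
    except BaseException:
        os.unlink(tmp)
        raise

# demo: "update" a file without ever exposing a partial write
with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "libexample.so.1")
    with open(path, "wb") as f:
        f.write(b"old build")
    stage_and_replace(path, b"patched build")
    with open(path, "rb") as f:
        assert f.read() == b"patched build"
```

Because staging can happen for all files in parallel while downloads are still in flight, the final swap phase is short regardless of how many packages changed.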

Shared libraries

Posted Nov 27, 2025 0:37 UTC (Thu) by bluca (subscriber, #118303) [Link] (5 responses)

> Yeah. Because it highlights just how terrible APT or RPM are.

Nah, it's shite with pacman and everything else too

Shared libraries

Posted Nov 27, 2025 0:47 UTC (Thu) by Cyberax (✭ supporter ✭, #52523) [Link] (4 responses)

So, correct me if I'm wrong, everything that solves the users' problems with updates is bad? As long as it makes the updates fast, seamless, reliable, and atomic?

BTW, even the _actual_ user-facing versions of Fedora are switching away from RPMs. See: Bazzite.

Shared libraries

Posted Nov 27, 2025 1:06 UTC (Thu) by pizza (subscriber, #46) [Link] (1 responses)

> So, correct me if I'm wrong, everything that solves the users' problems with updates is bad?

Ladders are really, really useful for climbing to the top of a one- or two-floor building.

But for 3-4 floors, portable ladders are no longer an option. Beyond 5, ladders of any sort are worthless and a completely different solution must be employed.

In case you can't follow this analogy, something that is optimal for one application, or even a couple of applications, rapidly runs into fundamental scaling problems when you try to scale it to dozens or hundreds of applications.

But application writers don't care about bigger picture problems; they only care about *their* application.

Shared libraries

Posted Nov 27, 2025 6:58 UTC (Thu) by Cyberax (✭ supporter ✭, #52523) [Link]

> But application writers don't care about bigger picture problems; they only care about *their* application.

I agree with your analogy. Except that classical distros are these rickety wooden home-made ladders. And modern Docker or immutable distros are tower cranes.

Shared libraries

Posted Dec 2, 2025 21:06 UTC (Tue) by raven667 (subscriber, #5198) [Link] (1 responses)

I think it is a good fit for the OS distributor using tools like RPM to manage the internal complexity of their software, then roll that up into an image with deltas for distribution in the way that Bazzite and other rpm-ostree distros are built, same for building Flatpak and Docker runtimes and apps. You need a tool like RPM/deb to track how a complicated runtime gets built.

I've been working with package-based OSes since Redhat 4.2 (Colgate IIRC) and can see the limitations in scaling up a full desktop workstation using rpm/dpkg and yum/apt: update performance is slow (the Fedora MacPro I used to use would take *hours* to update between releases on spinning rust), the cost of de-duplicating all application libraries is that the local admin has to use the packaging tool to manage long dependency chains, and things get complicated if you want to install an app which the OS distributor hasn't packaged themselves.

For someone who just wants to _use_ the computer to do computer things, rather than making OS design and development their hobby, the image-based systems with container-based apps work a lot better and more reliably than the package-based model, at least in my experience. Heck, having a good way to rollback because you aren't trying to mutate the one-and-only live system comes in really handy and makes me sleep better at night that the computer isn't going to immolate itself if something goes wrong.

Shared libraries

Posted Dec 3, 2025 2:50 UTC (Wed) by Cyberax (✭ supporter ✭, #52523) [Link]

Yup. As many people noted, Linux works well for local development. And image-based systems allow that local environment to be used more easily for deployment. Package-based distros are also great for low-level tinkering and experimentation.

Although classic distros might also eventually change a bit. I think something like nix or guix might end up a better solution.

Shared libraries

Posted Nov 27, 2025 23:08 UTC (Thu) by intgr (subscriber, #39733) [Link] (1 responses)

> Docker is a [...] system that is based on delta updates.
> it was architected correctly.

I agree that compared to classical package managers it's impressively fast.

But no, Docker's deltas aren't even optimal for updating your application from one version to the next. They are a delta between one layer and the next one, so only efficient when the last few layers have changed.

Fast updates are often a symptom of applications not updating their base image with security patches as often as they should.

Docker could be a lot faster even, if it computed deltas between versions, or had a protocol like rsync or casync that allows figuring out the delta on the fly.

Shared libraries

Posted Dec 1, 2025 17:21 UTC (Mon) by mathstuf (subscriber, #69389) [Link]

There was an All Systems Go! talk years ago about an OCIv2 which did chunking for image delivery and storage, but I believe it is DOA.

Simple in theory, but .deb format makes it a little complicated.

Posted Dec 5, 2025 11:02 UTC (Fri) by gmatht (subscriber, #58961) [Link]

I did a bit of work on this back in the day.

In principle this is really simple. Grab the .gz files out of the .deb using ar, zsync [1] them against the .gz files stored on some server (using either a cached .deb file or just the installed files, via `cat $(dpkg -L myoldpackage)`). Then use ar to pack the .gz files back into a .deb. Optionally also store some binary diffs from particular versions the user might be likely to have lying around, as this is more efficient than zsyncing between arbitrary versions.

However, there are some issues with the .deb format. The compressed files are signed, so once you have updated your files you have to recompress them before you can feed them back into dpkg. Apart from slowing things down a bit, this also means that you have to use tricks to get the resulting .gz to be bit-for-bit identical to the official .gz files. OK, it is possible to create reproducible gzip files, but Debian seemed to be moving away from .gz compression.

If Debian decided to switch to an hsynz/zsync-friendly format it should be easy. For example, if Debian switched to using uncompressed tars in their .deb files and instead distributed .deb.zstd files, it should be way easier to support incremental updates than, say, rewriting apt in Rust.

[1] These days we would probably instead use https://github.com/sisong/hsynz
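The reproducible-gzip trick mentioned above is easy to demonstrate with Python's gzip module (illustrative only; matching an official .gz byte-for-byte additionally requires the same zlib version and encoder settings, which is the hard part):

```python
import gzip

data = b"contents of data.tar, reconstructed on the client by zsync"

# A gzip stream embeds a modification timestamp (and optionally a file
# name), so compressing identical input twice normally yields different
# bytes. Pinning mtime=0 and a fixed compression level makes the output
# deterministic -- the same idea as `gzip -n` in reproducible builds.
a = gzip.compress(data, compresslevel=9, mtime=0)
b = gzip.compress(data, compresslevel=9, mtime=0)

assert a == b                         # bit-for-bit identical
assert gzip.decompress(a) == data     # and still a valid gzip stream
```

With a deterministic recompression step, the client can verify that its reassembled .deb hashes identically to the one in the archive, so the existing signatures keep working.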

Delta updates work great with image-based OSs

Posted Nov 27, 2025 18:17 UTC (Thu) by DemiMarie (subscriber, #164188) [Link]

On image-based OSs, delta updates work great because there is only one old version.

Shared libraries

Posted Dec 5, 2025 2:20 UTC (Fri) by dvdeug (subscriber, #10998) [Link] (16 responses)

> Imagine a new version of libtiff which introduces a security-relevant bug into the decompressor for TIFF compression scheme 32809 (ThunderScan 4-bit RLE). Upstream's statically linked builds of the program are not vulnerable, because they don't enable the bits of libtiff needed to handle files from ancient Macs, but because your distro includes a utility that's supposed to analyse an ancient Mac disk image and convert all the data to modern formats that you can work with, your distro build of libtiff has this support enabled.

Imagine using a bunch of old TIFF files and finding that one program doesn't read them, despite the fact they all use libtiff. You're the type of person who reads LWN, so after an hour of work, you might figure out that the program uses some brain-damaged version of libtiff. It would be a lot more painful for less technically skilled users.

libtiff is not the most secure library in the world, but randomly turning off features is painful to users. TIFF is annoying as a complex standard that no one completely supports; breaking support for random subformats (which are not visible to the average user, and even if they were, that this program doesn't support ThunderScan 4-bit RLE is probably documented nowhere) just makes things more miserable.

To boot, nobody is messing with the code for that compression scheme. There's a good chance that the bugs that affect it have always affected it. Changes to libtiff are more likely to break compression schemes that are worth optimizing, or some combination of important features that the programmer wasn't thinking about in combination and nobody wants to disable.

Shared libraries

Posted Dec 5, 2025 2:38 UTC (Fri) by Cyberax (✭ supporter ✭, #52523) [Link] (1 responses)

So you convert from the ancient compressed TIFF to something more modern, like PNG. Problem solved in practice.

Shared libraries

Posted Dec 5, 2025 9:33 UTC (Fri) by dvdeug (subscriber, #10998) [Link]

Except that TIFF has rich metadata support that PNG doesn't, including a whole set of private tags, and it's likely anyone still using TIFF is depending on one or the other. Changing to PNG is likely going to involve exporting that data to some format, and fixing up your system to import that data alongside the PNG.

And what's stopping someone from making a local copy of libpng and removing, say, Adam7 interlacing from it? "Nobody uses it", and it's more code surface. TIFF is a little weird, but if you support a file format, you should support it as far as reasonably possible, not randomly turning off features you're not interested in.

Shared libraries

Posted Dec 5, 2025 11:11 UTC (Fri) by farnz (subscriber, #17727) [Link] (13 responses)

In the scenario I outlined, you'd read the TIFF with the software that knows about legacy Mac formats, and convert it to a modern TIFF in the process - you're already having to use that software to get the TIFF format out, and it can decompress ThunderScan 4-bit RLE and either leave the data uncompressed, or recompress with a more modern lossless compression scheme.

And I chose that format very, very deliberately; it was never commonly used with TIFFs, and the only time you're going to encounter it is if you're extracting files from a legacy Mac where you'd bought the ThunderScan attachment for your ImageWriter. Most people won't ever encounter a file with ThunderScan RLE encoding - very few programs outside Macs with the ThunderScan software installed could even read them, and if you opened it in (say) Photoshop, then saved as a TIFF, it'd compress it differently anyway.

While nobody may be messing with the code for it, there's also a good chance that until the compiler changes, the bug was latent - if it contains UB, then the compiler is perfectly capable of compiling it as the programmer intended, up until it changes and starts compiling it in a way that's exploitable (but that correctly decodes all valid ThunderScan compressed TIFFs). At this point, you've created a vulnerability in many programs, to support the one or two that still need to decode ThunderScan RLE compression.

Shared libraries

Posted Dec 5, 2025 21:49 UTC (Fri) by dvdeug (subscriber, #10998) [Link] (12 responses)

> In the scenario I outlined, you'd read the TIFF with the software that knows about legacy Mac formats, and convert it a modern TIFF in the process

There are a lot of times when the solution is to set VXTMLR=1 and it just works. A fact you learn six weeks after you finish redesigning the whole system with new software and custom code, or replacing a piece of hardware. You have a TIFF file that is read by TIFF-reading programs; the fact that it's not being read here will frustrate all non-technical users and many technical ones. Then you get the tech in, and they realize it's running libtiff like the rest of the system, so why isn't this working?

> And I chose that format very, very deliberately

And I stand by my case; not supporting certain files in a format because you don't think they're being used is a recipe for pain and annoyance. I'll go further and point out that the gain for disabling this is tiny; if you don't trust libtiff, then you should protect yourself in some way, not just disable one or two compression schemes that you don't feel are being used that have a tiny chance of being the angle someone uses to attack libtiff. TIFF, like various media schemes, is a format you'll never support all files in the wild, but you should either support some narrowly and clearly defined subset, or support everything that e.g. libtiff/ffmpeg does. Or don't support it at all, which means people know to run a different tool or convert it to a format you support.

I'm also not convinced that everyone will be so careful as you. It's easy to get carried away; most attempts at making an X clone with only the features that people really use fail because while people only use 20% of the functionality, they all use a different 20%.

Shared libraries

Posted Dec 6, 2025 21:24 UTC (Sat) by farnz (subscriber, #17727) [Link] (11 responses)

Most of the time, TIFF reading programs don't support ThunderScan RLE encoding. That's a big part of why I chose it - it's rarely supported unless you've deliberately enabled it, and you only need it if you're dealing with deeply legacy media.

I could, of course, have chosen Internet-facing streaming media software that uses its own build of ffmpeg in the upstream configuration, where only the formats that are expected to work are enabled, but where the distro has turned on support for all the weird and wonderful formats that ffmpeg supports.

In both cases, though, your argument is that it's important to introduce security issues into software whose upstreams don't have that issue because they run with cut-down dependencies, because there might be a rare user who actually wants to deal with ThunderScan RLE TIFF files, or upload LucasArts SANM videos for transcoding to AV1 for streaming, or otherwise do something deeply niche and weird. I would suggest that actually, what you want is more than one build of libtiff, or ffmpeg, or other functionality, where the local-only media player can decode LucasArts SANM, or the legacy 68k Mac (ThunderScan was not supported on PPC Macs) file reader can convert your scanned documents to a modern format, but the Internet-exposed one doesn't have the extra codec support by default, precisely because of the security risk.

Shared libraries

Posted Dec 6, 2025 21:48 UTC (Sat) by johill (subscriber, #25196) [Link] (6 responses)

On the flip side, when a distro chooses to have a single libtiff/ffmpeg/whatever build installed at a time, it could actually give users a better choice. Default to libtiff-simple, but let a user install libtiff-everything, and both provide a virtual libtiff package. Now all software can either do "ThunderScan RLE encoding" or not, depending on which libtiff you installed, and you don't even need to rebuild the software if you do have such a special case. (Though obviously the problem with this is making it discoverable.)

Which is basically the "multiple builds" approach, but as long as you don't need to do "Internet-facing server" and "legacy media" on the same machine, it'd be much simpler.

(You could argue that maybe a web browser should have a restricted version and the local viewer not, but realistically you might even need the browser to have the full version if you're working with such files since you might have a web-based organisation tool etc., so I don't think that really works)

Shared libraries

Posted Dec 8, 2025 9:28 UTC (Mon) by farnz (subscriber, #17727) [Link] (5 responses)

The trouble with that is that what you want is not "simple" and "everything", but a variety of different choices: there's faxes, medical imaging, scanned paperwork, prepress artwork and more, to name some common use cases for TIFFs.

In the extreme, you end up with a different libtiff for each use case, and you want them to be parallel installed and each only used by one program on a system - e.g. the fax handler has the fax subset (and doesn't have any others), while the paper document archive program supports the subsets used by faxes and scanned paperwork, but not medical imaging or prepress artwork.

And once you get there, where's the advantage of dynamic linking?

Shared libraries

Posted Dec 24, 2025 14:49 UTC (Wed) by sammythesnake (guest, #17693) [Link] (4 responses)

If there's a need for that kind of thing, then a plugin mechanism is the real answer - nothing to do with static linking or otherwise...

Shared libraries

Posted Dec 24, 2025 17:31 UTC (Wed) by farnz (subscriber, #17727) [Link] (3 responses)

A plugin mechanism doesn't help - it just rearranges the deckchairs.

Let's switch to video formats, and use GStreamer, which has a plugin mechanism. How does my "upload a video from your phone via the Internet" program know that a given plugin is not for it to use? In the upstream version, it's fine - they just don't install vulnerable plugins, so by definition, all plugins are fine for it to use.

But the distribution has a problem: a phone will never record in (say) LucasArts SANM format, but other software installed by the user might well have a legitimate reason to want the GStreamer plugin for LucasArts SANM to be available and working. Equally, though, the program might well benefit from you adding an AV2 plugin (so that phones that record AV2 video instead of HEVC are supported), even though it doesn't know about AV2.

This, however, is the same problem as before, just rearranged - you need parallel installable "profiles" of the library (in this case GStreamer), such that each program gets the profile it wants. And in the extreme, each program has its own profile.

Shared libraries

Posted Dec 24, 2025 18:22 UTC (Wed) by mathstuf (subscriber, #69389) [Link]

> This, however, is the same problem as before, just rearranged - you need parallel installable "profiles" of the library (in this case GStreamer), such that each program gets the profile it wants. And in the extreme, each program has its own profile.

So `/etc/pam.d`-like files but for plugins then? Sounds like fun ;) .

Shared libraries

Posted Dec 24, 2025 18:56 UTC (Wed) by excors (subscriber, #95769) [Link] (1 responses)

> This, however, is the same problem as before, just rearranged - you need parallel installable "profiles" of the library (in this case GStreamer), such that each program gets the profile it wants. And in the extreme, each program has its own profile.

That sounds like a pretty easy problem to solve: the application can have an allowlist of plugins. When a significant new codec like AV2 is released, maybe a couple of times per decade, the developers can add it to the list; maybe have a config file so users of old releases can add it without upgrading. This should only be needed for applications that process untrusted input on systems where they can't trust the administrator not to install insecure plugins, so it's going to be end-user things like web browsers - which have to be constantly upgraded for security reasons anyway, so it's minimal extra effort to maintain the plugin list.
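A minimal sketch of what such an application-side allowlist might look like; the plugin names and the `select_plugins` helper are hypothetical, not real GStreamer API:

```rust
use std::collections::HashSet;

// Hypothetical: filter the plugins the distro installed down to the
// short list of codecs this application actually expects to see.
fn select_plugins(
    available: &[&'static str],
    allow: &HashSet<&'static str>,
) -> Vec<&'static str> {
    available
        .iter()
        .copied()
        .filter(|name| allow.contains(name))
        .collect()
}

fn main() {
    // The distro may have installed far more plugins than we trust...
    let available = ["h264", "av1", "av2", "sanm"];
    // ...but the application ships its own allowlist of expected codecs.
    let allow: HashSet<&'static str> = ["h264", "av1", "av2"].into_iter().collect();
    let usable = select_plugins(&available, &allow);
    assert_eq!(usable, vec!["h264", "av1", "av2"]); // "sanm" is filtered out
}
```

Adding AV2 when it appears is then a one-line change to the allowlist (or a config-file entry), while niche plugins installed for other programs stay invisible to this one.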

(GStreamer already splits plugins into "good" (safe to use), "ugly" (good quality code but copyright/patent licensing issues), and "bad" (unsupported; maybe low quality, unmaintained, rarely used, etc), with separate Linux packages for each set, plus a number of single-plugin packages (for licensing or dependency reasons). On systems with a responsible administrator, that means you can already choose what profile of plugins are available, and there's no technical reason it couldn't be more fine-grained.)

Shared libraries

Posted Dec 25, 2025 19:14 UTC (Thu) by farnz (subscriber, #17727) [Link]

But this is the same mess as multiple shared libraries, just moved around - it's not actually a solution if you want to have all programs use a unified feature set, which is an explicit goal of how the distro mechanism works (see the earlier comment from dvdeug).

And that's the root of the problem I'm raising. There exist cases where unifying features introduces a vulnerability - indeed, the allowlist mechanism you've described also introduces one if an insecure plugin is added that upstream explicitly don't support because they know it opens a security hole in their software. As a consequence, a simple "statically link ffmpeg, configured with just the plugins upstream has reviewed and know are good" turns into "port to a plugin architecture, maintain allowlists, somehow ensure that downstream never adds something insecure to the allowlist" - a lot of extra complexity, and all so that downstream can "unify" features for things that don't benefit from the unification (since the allowlist promptly bans use of all the extra features so enabled).

This is the flip-side of distros spotting problems as they package - because they change things (they're effectively a friendly fork of upstream), they run the risk of introducing new bugs. It is not nearly as simple as "static linking bad, dynamic linking good"; it can also be "by unifying features, I've introduced a bug that does not exist upstream, because I de-vendored a library, linked to the shared version, and now we have problems".

Shared libraries

Posted Dec 7, 2025 8:56 UTC (Sun) by dvdeug (subscriber, #10998) [Link] (3 responses)

> your argument is that it's important to introduce security issues into software whose upstreams don't have that issue because they run with cut-down dependencies, because there might be a rare user who actually wants to deal with ThunderScan RLE TIFF files, or upload LucasArts SANM videos for transcoding to AV1 for streaming, or otherwise do something deeply niche and weird.

My argument is that, first, you aren't gaining much by stripping features because they're rare. Either you trust libtiff, or you don't. FFmpeg has more dependencies, and sometimes sketchier ones, so that's slightly different. But in either case, you're taking a small chunk of code in a much, much bigger library and treating it as a security win to disable it - not because it's known to have anything wrong with it, but just because you can. Don't; if you don't trust libtiff, sandbox it. If you find libtiff trustworthy enough, you're not gaining anything measurable by disabling random tiny parts of it.

Second, to you, a ThunderScan RLE TIFF file is an obscure Macintosh format. To a user, it's a TIFF file, a relatively common format. If I try to use an .snm file in a tool that plays movies, and it doesn't work, I look for a tool that supports .snm files. If I try to use a valid .png file in a tool that supports PNG files, and it doesn't work, I'm going to treat that tool as broken. I've read the TIFF standard, and understand how complete support is impossible, but the user is going to treat this as the PNG case, not the SNM case.

Third, I think this is a slippery slope that would make software less pleasant to use. ThunderScan RLE is a rare format, yes. But as I said, it's a small chunk of code, so what else are you going to take out? I have 16,000 TIFF files on my hard drive, and most of them are in CCITT Group 4 fax encoding, which is not in the TIFF baseline. They may be few, but I have 2, 3, and 4 files respectively in PackBits (baseline, but considered rare by Wikipedia), Deflate (PKZIP) (obsolete, uncommon), and LZW (not in the baseline - and who uses LZW any more?). Having to spend any time figuring out why the program that views TIFF files doesn't view these is a waste of my time, and for many users, that could mean wasted hours and possibly an expensive bill to call in a technician.

> I would suggest that actually, what you want is more than one build of libtiff, or ffmpeg, or other functionality,

No; I want something that processes TIFF files to actually process all TIFF files. That's not realistic, but approaching it is still important for the quality of user experience. If you don't trust libtiff, I expect you to not just disable a couple of minor compression schemes and call it done; you should protect the program from libtiff bugs. ffmpeg is a gnarlier mess, but I suspect there's even less real data about what video codecs people are using, and probably even more reason to sandbox it instead of trying to figure out which codecs are unused and blindly trusting the rest.

Shared libraries

Posted Dec 8, 2025 9:24 UTC (Mon) by farnz (subscriber, #17727) [Link] (2 responses)

Just to be clear, then: you want a fax handling program to have extra vulnerabilities over and above upstream, because it's possible that someone might want to reuse a fax handling program for legacy Macintosh scans.

There is no program out there that supports all TIFF files - libtiff certainly doesn't.

And coming back round to my initial point: by adding in the extra file format support to someone's fax handler, the distro has introduced a vulnerability that is not present upstream, and where upstream is quite likely to say "well, why did you add ThunderScan RLE decoding to my fax program? That makes no sense at all, since faxes are 1 bit (by definition), and ThunderScan RLE is 4 bit (by definition)". This is not a win for users, or for the upstream, and it's a security hole opened by the distro insisting that there is a shared libtiff.

Shared libraries

Posted Dec 8, 2025 11:16 UTC (Mon) by dvdeug (subscriber, #10998) [Link] (1 responses)

>Just to be clear, then: you want a fax handling program

Just to be clear, the idea that this was a fax handling program is present nowhere else in this discussion. At no point did you offer a reason ThunderScan RLE wouldn't be supported besides "it's rare".

> There is no program out there that supports all TIFF files - libtiff certainly doesn't.

There's no program out there that supports all C++ files; g++ certainly doesn't. Is that argument going to pacify you when it fails on your C++ file? I'm asking programs that claim to support TIFF files to try their best at supporting TIFF files, and not put extra work into disabling support for certain TIFF files.

>And coming back round to my initial point: by adding in the extra file format support to someone's fax handler, the distro has introduced a vulnerability that is not present upstream, and where upstream is quite likely to say "well, why did you add ThunderScan RLE decoding to my fax program? That makes no sense at all, since faxes are 1 bit (by definition), and ThunderScan RLE is 4 bit (by definition)". This is not a win for users, or for the upstream, and it's a security hole opened by the distro insisting that there is a shared libtiff.

No, the distro has not introduced a vulnerability that is not present upstream. It may have introduced a vulnerability, and the probability that it did so is key to judging whether or not the action should be taken. Again, it is a win for users to have as much support for their files as possible, and this idea that the program wouldn't accept ThunderScan RLE anyway is something new. The very fact that it only supports 1-bit files means that it should bail out of reading the file long before any ThunderScan-RLE-specific code is run.
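A sketch of that kind of early bail-out; the types here are made up for illustration, but the tag values (Compression 1-4 for uncompressed and CCITT fax encodings, 32809 for ThunderScan RLE) come from the TIFF 6.0 specification and libtiff's tiff.h:

```rust
// Illustrative only: a fax reader validating header tags before any
// compression-specific decode path can run.
struct TiffHeader {
    bits_per_sample: u16, // faxes are bilevel, so this must be 1
    compression: u16,     // 1 = none, 2/3/4 = CCITT, 32809 = ThunderScan RLE
}

fn accept_for_fax(h: &TiffHeader) -> Result<(), String> {
    if h.bits_per_sample != 1 {
        return Err(format!(
            "BitsPerSample {} unsupported: faxes are 1-bit",
            h.bits_per_sample
        ));
    }
    if !matches!(h.compression, 1..=4) {
        return Err(format!("Compression {} is not a fax encoding", h.compression));
    }
    Ok(())
}

fn main() {
    let g4_fax = TiffHeader { bits_per_sample: 1, compression: 4 };
    assert!(accept_for_fax(&g4_fax).is_ok());

    // A ThunderScan scan is 4-bit, so it is rejected by the first check,
    // before any ThunderScan-specific decoder could ever be reached.
    let thunderscan = TiffHeader { bits_per_sample: 4, compression: 32809 };
    assert!(accept_for_fax(&thunderscan).is_err());
}
```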

It's possible that just adding the support code will close a vulnerability; mishandling of an error state was what destroyed the first Ariane 5, one of the most expensive software errors ever, after all. The log4j problems, similarly, came from issuing errors to a log file and could have been blocked in some cases by just changing which error message was logged.

More importantly, linking the program to the shared libtiff avoids any vulnerabilities that could come from the window between the shared libtiff being fixed and a vendored copy in the program being fixed. That's why distros demand that programs be linked to shared libraries instead of vendored ones: security holes in the library get patched everywhere as soon as the library is fixed, instead of being fixed in every copy on the system individually.

Even in the fax case, it's not as simple as the distro introducing a vulnerability; the distro could be patching a vulnerability just by making the change, and they could be minimizing the time that an openly known vulnerability exists in the program. Even if the user support is irrelevant, how likely a vulnerability is to be created by a change and how likely it is to be fixed by a change is important.

Shared libraries

Posted Dec 8, 2025 11:25 UTC (Mon) by farnz (subscriber, #17727) [Link]

You made assumptions about what the program might be, and asserted that a bug introduced by ThunderScan RLE support would be worth having for the extra supported formats; I've introduced a bit more information, explaining why ThunderScan RLE support is not worth having.

And just to be completely and utterly clear: you are saying that a vulnerability that is not present in the upstream version of the project, or in their binary builds, but is only present in the distro build (where they've unbundled libtiff and linked a version that has a security issue that can be triggered just by loading a TIFF file) is not introduced by the distro?

And my argument is that the distro is not always fixing things by unbundling dependencies and unifying on a single version - it can be both fixing some problems and introducing new ones. Which tradeoff is better is not backed (in either direction) by data.

Shared libraries

Posted Nov 24, 2025 20:25 UTC (Mon) by ebee_matteo (subscriber, #165284) [Link] (30 responses)

> > > Rebuilding packages when their dependencies change is the future.

> > Then the future is shite

> Or you go back to what I was doing over 40 years ago, when a library was just that ...

You can also go back to the beginnings of UNIX and use IPC across small binaries to perform tasks. Many people here still like their pipes on the shell.

I see it as a good pattern: keep programs small and use IPC to make them communicate, via pipes / sockets and gRPC / varlink / DBus / anything.

That for me would be a better future...

Shared libraries

Posted Nov 24, 2025 20:37 UTC (Mon) by willy (subscriber, #9762) [Link] (29 responses)

Ok, but each of those Unix tools needs to parse its command line options, yes? So do we hand-roll an option parser in each tool, or do we share the code to do that in a library? If we share the code, do we dynamically or statically link it?

At this point I hope you realize you've merely restated the problem, not solved it.

ABI stability funding

Posted Nov 24, 2025 21:29 UTC (Mon) by DemiMarie (subscriber, #164188) [Link] (24 responses)

The problem is real. The funding to solve it is missing.

Server software is often shipped as containers nowadays, and containers don’t benefit much from dynamic linking. In fact, static linking is often considered a benefit in the server world due to ease of deployment.

Embedded systems do benefit from dynamic linking, and Android uses dynamic linking for its Rust crates. However, updates for embedded devices are usually complete images, so ABI stability is of very little value. The only advantage would be allowing binary dependencies to use Rust APIs.

The systems that benefit greatly from ABI stability are “traditional” distros with mutable root filesystems. However, none of them have been willing to fund the needed improvements. Furthermore, many of these distros are run by volunteers.

Like fishface60, I hope that Canonical, SUSE, Red Hat, or Valve steps up and funds a solution.

ABI stability funding

Posted Nov 24, 2025 23:17 UTC (Mon) by bluca (subscriber, #118303) [Link] (12 responses)

> Server software is often shipped as containers nowadays, and containers don’t benefit much from dynamic linking.

Except of course that's not really true, as proven by companies like Red Hat spending tons of dev time to implement very, very complex solutions to deduplicate said containers after the fact, because that whole docker mess doesn't really scale beyond a handful of instances. Storage, memory, and loading-time costs are through the roof because of the intense duplication.

ABI stability funding

Posted Nov 25, 2025 20:37 UTC (Tue) by jhoblitt (subscriber, #77733) [Link] (11 responses)

If this was true, kubernetes would have been dead upon arrival. Even an extreme case of needing a terabyte of OCI layers is an incremental cost per server on the order of $100, which is probably less than 1% of the acquisition cost of a new 1U server.

At my $day_job, I have increased the k8s per node pod limit on most clusters up to 250 from the default of 110 as nodes were routinely hitting the pod limit but could easily handle more load.

ABI stability funding

Posted Nov 25, 2025 20:49 UTC (Tue) by bluca (subscriber, #118303) [Link] (10 responses)

So with a million servers one has to spend an extra $100 million for no particular reason other than that kubernetes is crap? Sounds about right

ABI stability funding

Posted Nov 25, 2025 21:22 UTC (Tue) by khim (subscriber, #9252) [Link] (9 responses)

If you have a million servers then you would spend way more than $100 million on stuff not even remotely related to what you would pay for the servers themselves. I wouldn't be surprised to find out that just the building permits cost more.

Heck, with a million servers, one single outage caused by problems with shared-library compatibility may cost you more than $100 million!

An attempt to inflate the price of something by shouting "but what if there are a thousand, ten thousand, a million servers" will never work, because it's not just expenses that grow linearly - the cost of potential problems grows linearly too. The time when dynamic linking was feasible is long in the past for this very reason: in a world where human labor was cheap and hardware was expensive, saving one byte made sense. In today's world… not so much.

You may win some, in rare cases, if you have some component that's shared between hundreds or thousands of different programs (maybe some core OS library), but anything beyond that is cheaper not to share.

ABI stability funding

Posted Nov 25, 2025 21:48 UTC (Tue) by bluca (subscriber, #118303) [Link] (8 responses)

...and yet, tons of money is being spent developing various solutions to this very problem. How curious!

ABI stability funding

Posted Nov 25, 2025 21:52 UTC (Tue) by khim (subscriber, #9252) [Link] (7 responses)

Nothing curious, really. It's just a matter of priorities: an upgrade of one shared library on a server farm with a million servers that brings down your whole datacenter may cost you a lot more than $100 million, thus you install Kubernetes and don't do that. But, of course, $100 million is still $100 million - if you can, somehow, save it without exposing yourself to the instability caused by distro quicksand, then you will do that.

It's a matter of priorities.

ABI stability funding

Posted Nov 26, 2025 1:06 UTC (Wed) by bluca (subscriber, #118303) [Link] (6 responses)

Yeah because famously kubernetes runs on thin air, it most definitely doesn't run on "distros quicksand". Also it never needs to be updated, and never, ever breaks

ABI stability funding

Posted Nov 26, 2025 2:53 UTC (Wed) by Cyberax (✭ supporter ✭, #52523) [Link] (5 responses)

> Yeah because famously kubernetes runs on thin air

That's actually close to the truth. There was even a project to run K8s as PID 1. Most installations don't go _that_ far, and just limit themselves to something minimal like Alpine.

ABI stability funding

Posted Nov 26, 2025 3:18 UTC (Wed) by jhoblitt (subscriber, #77733) [Link] (1 responses)

There are compelling reasons to let kubelet use systemd to manage slices, so Alpine is probably not a popular host OS. However, it is incredibly popular as an OCI base layer.

ABI stability funding

Posted Nov 26, 2025 5:29 UTC (Wed) by Cyberax (✭ supporter ✭, #52523) [Link]

I was thinking more about the host for the control plane (kube-apiserver and such). Worker nodes are more diverse (there are even Windows nodes).

My desktop Docker host uses a cut-down Debian version without systemd running (there are just 4 processes: /initd services, /usr/bin/containerd-shim-runc-v2, /usr/bin/containerd, /usr/bin/rosetta-mount).

ABI stability funding

Posted Nov 26, 2025 11:15 UTC (Wed) by bluca (subscriber, #118303) [Link] (2 responses)

> That's actually close to truth.

In your fantasy land perhaps - back down here in the real world, everyone and their dog deploys on Ubuntu as the host, with RHEL and its derivatives distant contenders

> something minimalistic like Alpine

...which is also famously not a "distribution" but consists entirely of aether, right

ABI stability funding

Posted Nov 27, 2025 17:12 UTC (Thu) by ssmith32 (subscriber, #72404) [Link] (1 responses)

Er, actually, if we're going to go down the "well, back in the real world" route:

1) In the real world, most folks are going to use whatever OS their cloud provider uses for their k8s solution. So for EKS, Amazon Linux or Bottlerocket or something. And if it's GCP, they're probably rebuilding the whole world for funsies even if they use Ubuntu, because Monorepos Are (not) Awesome.

2) In the real world, people don't choose one or the other. Both have trade-offs, and most end up using both, depending on the situation. A base image of Ubuntu, with the occasional application installed via flatpak, etc.. is often the best solution for home use. k8s when deploying a large number of services at scale for commercial use.

ABI stability funding

Posted Nov 27, 2025 18:50 UTC (Thu) by bluca (subscriber, #118303) [Link]

For managed solutions sure, but the topic at hand here was custom hosts that one chooses and sets up to run their containers or VMs or whatevers.

ABI stability funding

Posted Nov 25, 2025 8:58 UTC (Tue) by taladar (subscriber, #68407) [Link] (10 responses)

I would argue that the funding isn't there because you can only lose in terms of performance and language capability when you remove any inlining and use of generics across crates. Most likely you would end up with the same "hope and pray it works" approach that C++ is using but it would work less reliably in languages like Rust that use generics and optimizations (e.g. struct field reordering) even more than C++ uses templates.

ABI stability funding

Posted Nov 25, 2025 13:35 UTC (Tue) by khim (subscriber, #9252) [Link] (8 responses)

The funding is not there because there is no actor who would benefit from that work and has money to spare.

Google and Microsoft don't have an incentive to fund anything like that because they are not providing Rust ABIs (at least not yet), and distros are not in a position to develop anything and don't even feel it's their responsibility to do so.

The story about "awful inlining" is entirely moot: you have the same thing with dyn Trait already. What this would do, in terms of the language, is bring dyn Trait to parity with impl Trait; if you want inlining then simply don't use dyn Trait and you are done.
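The trade-off being argued about reads like this in code; `Shape`, `Square`, and the two helper functions are illustrative, not anything from an actual proposal:

```rust
trait Shape {
    fn area(&self) -> f64;
}

struct Square(f64);

impl Shape for Square {
    fn area(&self) -> f64 { self.0 * self.0 }
}

// impl Trait: monomorphized per concrete type, so the call can be inlined.
fn area_static(s: &impl Shape) -> f64 { s.area() }

// dyn Trait: one compiled body; calls go through a vtable, no inlining.
fn area_dyn(s: &dyn Shape) -> f64 { s.area() }

fn main() {
    let sq = Square(3.0);
    assert_eq!(area_static(&sq), 9.0);
    assert_eq!(area_dyn(&sq), 9.0); // same result, dynamic dispatch
}
```

Code that wants inlining keeps using impl Trait; only calls crossing a stable-ABI boundary would pay the vtable cost.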

ABI stability funding

Posted Nov 30, 2025 17:11 UTC (Sun) by NYKevin (subscriber, #129325) [Link] (7 responses)

dyn Trait can never reach parity with impl Trait. impl Trait is Sized, and that implies a lot of API flexibility that dyn Trait is physically incapable of expressing under Rust's current data model.

Alternative data models, and why they do not solve this problem:

* dyn Trait in a Sized context becomes syntactic sugar for Box<dyn Trait> (autoboxing) - There is no guarantee that we even have a heap in the first place. It would create a bizarre inconsistency where T is not the pointee of &T. And there are several problems (see below) that it doesn't actually solve.
* Methods with a self receiver implicitly specialize to accept Box<dyn Trait> as the receiver - Does nothing for non-receiver Self arguments, since we cannot prove that they have the same concrete type as the receiver. In the general case, it is possible to impl Trait for Box<dyn Trait> (or write an inherent impl for dyn Trait), and then this is ambiguous. And it does nothing for associated types and non-method associated functions, which would remain dyn-incompatible anyway.
* Introduce &owned references that "confer ownership" and can be moved out of (unlike &mut, which can be swapped but not moved-from) - Similar to the previous bullet, this fixes the self receiver and does not help with much else. On top of that, it is yet another type of reference we would need to think about.
* Steal dynamic_cast from C++ - This has already been done, see std::any::Any (or you could reinvent your own with unsafe code, if for some reason Any is too inflexible for you). But that forces these APIs to become either fallible or unsafe when dispatched from dyn Trait, and that's much less ergonomic than impl Trait.
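Two of these limitations can be seen in a few lines of current Rust; the names here are made up for illustration:

```rust
use std::any::Any;

// A non-receiver `Self` argument must be fenced off with `where Self: Sized`;
// without that bound the trait would not be dyn-compatible at all, and with
// it, `merge` simply cannot be called on a `dyn Combine` value.
#[allow(dead_code)]
trait Combine {
    fn merge(&self, other: Self) -> Self
    where
        Self: Sized;
}

// The `Any` route mentioned above: dispatch from a type-erased value
// becomes fallible, returning `None` on a type mismatch.
fn increment_if_i32(x: &dyn Any) -> Option<i32> {
    x.downcast_ref::<i32>().map(|n| n + 1)
}

fn main() {
    assert_eq!(increment_if_i32(&41i32), Some(42));
    assert_eq!(increment_if_i32(&"not an i32"), None);
}
```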

ABI stability funding

Posted Nov 30, 2025 17:32 UTC (Sun) by khim (subscriber, #9252) [Link] (6 responses)

> impl Trait is Sized, and that implies a lot of API flexibility that dyn Trait is physically incapable of expressing under Rust's current data model

Where have you read the idea that it wouldn't change the "current data model"? It would, of course - the same way Swift did: by permitting !Sized types on the stack and so on. A large amount of work, sure, but nothing impossible.

> Alternative data models, and why they do not solve this problem:

Have you excluded the obvious - and, in Swift, already implemented - solution on purpose? Introduce dynamic, parametrised types, and that's it. Yes, it would change the language in subtle ways: str would no longer be one, single type, but would become a parametrised type with its length fixed at runtime… so a pointer or reference to it would need to carry both a pointer and a length… oh, right, that's how things already work, isn't it? That would actually simplify the language instead of making it more complex, and would remove lots of corner cases related to !Sized types.
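The "that's how things already work" point is easy to verify: a reference to an unsized type such as str is already a fat pointer carrying a runtime length. A tiny check (the 2x relation assumes `usize` has pointer width, which holds on mainstream targets):

```rust
use std::mem::size_of;

fn main() {
    // &str = data pointer + length: twice the size of a thin pointer.
    assert_eq!(size_of::<&str>(), 2 * size_of::<usize>());
    // A reference to a Sized type is a single machine word.
    assert_eq!(size_of::<&u8>(), size_of::<usize>());
}
```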

The big question is not whether that can be done, but how hard would it be. Probably lots of simple, tedious work… but nothing really crazy.

Backward compatibility could be problematic, though.

The worst thing that would happen: some panics that currently happen at compile-time would start happening at runtime… consider function that removes one element from [u8] (slice, not array!) and returns the result… what should it do if slice is already empty?

Currently all that zoo is kept out of stable Rust, but there are lots of struggles in attempting to make it all work… struggles that, ironically enough, become trivial if you introduce runtime-parametrised types and make dyn Trait identical to impl Trait.

It's more of a political decision than a technical one: no one would fund such work without a promise of it being included in the compiler - and no one would give such a promise while only ideas on paper exist and there is no working code. But maybe someone can do that as a Rust fork?

The biggest question is with functions that may return an unknown type (known type, unknown parameters: think of a function that removes duplicates from a slice). As a first step these may be forbidden outright: every type that leaves a function should be describable in terms of its inputs (but then the aforementioned function that removes one element from a slice is also impossible) - a bit like lifetimes are handled today.

ABI stability funding

Posted Dec 1, 2025 23:12 UTC (Mon) by NYKevin (subscriber, #129325) [Link] (5 responses)

> The same way Swift did: by permitting !Sized types on stack and so on. Large work, sure, but nothing impossible.

I refer the honorable gentleperson to the answer I gave some moments ago:

> Similar to the previous bullet, this fixes the self receiver and does not help with [associated types, associated functions, and various other dyn-incompatible things besides the self receiver].

ABI stability funding

Posted Dec 1, 2025 23:14 UTC (Mon) by NYKevin (subscriber, #129325) [Link] (4 responses)

(And yes, I can see that you wrote a bunch of stuff that boils down to "let's make method dispatch fallible." But that is so obviously absurd that it is undeserving of a serious response.)

ABI stability funding

Posted Dec 1, 2025 23:41 UTC (Mon) by khim (subscriber, #9252) [Link] (3 responses)

One may face a random compilation error today with impl even if all formal trait-level checks succeed. Naturally, if we want parity between impl and dyn, we couldn't skip that quirk; and because with dyn those checks couldn't be done at compile time, they would have to be done at runtime.

As I have said: that's not a big deal in practice; the majority of existing software is written in languages that have that "defect" and it's rarely a problem. Whether the Rust team wants to deliver something usable in a realistic timeframe or would prefer to punish users with what they have now is up to them, ultimately: using extern "C" doesn't solve that problem - in fact it makes it worse.

ABI stability funding

Posted Dec 2, 2025 8:44 UTC (Tue) by taladar (subscriber, #68407) [Link] (2 responses)

As a Rust user, anything that causes more failures at runtime that could instead be compile-time failures is something I would definitely strongly oppose.

ABI stability funding

Posted Dec 2, 2025 10:21 UTC (Tue) by khim (subscriber, #9252) [Link] (1 responses)

We are talking about Rust, not Haskell, here. Use of extern "C" definitely generates more problems at runtime than a higher-level interface ever could — only they are random crashes, not predictable panics.

In the past, for things like integer overflow or array access, Rust usually picked panics in place of UB, but with a stable ABI the approach is the opposite.

That looks a bit illogical to me, but then, people are not always rational.
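For what it's worth, the panic-instead-of-UB behavior mentioned above is easy to observe; a small sketch (black_box only hides the index from the compiler so the bounds check happens at runtime):

```rust
use std::hint::black_box;
use std::panic;

fn main() {
    let a = [1u32, 2, 3];
    // Out-of-bounds indexing is a defined, predictable panic in
    // Rust, not undefined behavior as in C.
    let idx = black_box(10usize);
    let result = panic::catch_unwind(move || a[idx]);
    assert!(result.is_err());
    println!("ok");
}
```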

ABI stability funding

Posted Dec 9, 2025 0:37 UTC (Tue) by gmatht (subscriber, #58961) [Link]

The current convention for static linking avoids this. If dynamic linking became more common in Rust, making a missing library a compile-time error may not be possible, but it would be nice for it to at least be a link-time error.

ABI stability funding

Posted Nov 25, 2025 16:52 UTC (Tue) by Wol (subscriber, #4433) [Link]

> but it would work less reliably in languages like Rust that use generics and optimizations (e.g. struct field reordering) even more than C++ uses templates.

And is there no possibility of declaring an interface as "extern", meaning that anything crossing that interface cannot be optimised in a way that would break an external app that doesn't know about the changes?

Of course, that then means a strict separation of declarations, inline definitions, and generics, but might that not be a good thing?

I can see that trying to turn generics into concretes might be a little tricky, but a dummy call for every generic you want made concrete, over an extern definition, would do it?

And just like with "unsafe", you could offload the responsibility to the programmer to make sure the use of the definition files is consistent. With automated traps as far as possible.

Cheers,
Wol

Shared libraries

Posted Nov 25, 2025 22:40 UTC (Tue) by jhoblitt (subscriber, #77733) [Link] (3 responses)

That is a very C without batteries included point of view. For example, most Go programs use the core flags package and don't require a runtime dependency for parsing arguments.

Shared libraries

Posted Nov 25, 2025 23:20 UTC (Tue) by intelfx (subscriber, #130118) [Link] (2 responses)

> That is a very C without batteries included point of view.

> For example, most Go programs use the core flags package and don't require a runtime dependency for parsing arguments.

And this is a very "I-don't-see-further-than-my-lang's-shiny-abstractions" point of view.

What is the "core flags package", exactly? It is a library. So if it's a library, where does it exist? Do we dynamically or statically link it? The GP's question stands unanswered.

Shared libraries

Posted Nov 26, 2025 3:24 UTC (Wed) by jhoblitt (subscriber, #77733) [Link] (1 responses)

It means that the flags package is part of Go and it is always available. Since parsing CLI arguments is extremely common, it is a no-brainer for it to be included as part of the language. The Rust equivalent is std::env::args.

Shared libraries

Posted Nov 26, 2025 9:28 UTC (Wed) by taladar (subscriber, #68407) [Link]

std::env::args is the Rust equivalent of the C argc and argv parameters, nothing more. The CLI-parsing functionality most commonly used in Rust is the clap crate, which is not part of the standard library because there is no good reason to put something in the standard library when it doesn't have close ties to the language version.
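To illustrate the distinction: std::env::args just hands back the raw argument strings, with no flag parsing. A minimal sketch:

```rust
use std::env;

fn main() {
    // Roughly C's argc/argv: an iterator over the raw arguments,
    // where the first element is the program name (argv[0]).
    let args: Vec<String> = env::args().collect();
    assert!(!args.is_empty());
    // Any actual flag parsing (e.g. "--verbose") is left to the
    // program, or to an external crate such as clap.
    let verbose = args.iter().any(|a| a == "--verbose");
    println!("verbose = {}", verbose);
}
```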

Shared libraries

Posted Nov 24, 2025 18:42 UTC (Mon) by keithp (subscriber, #5140) [Link] (70 responses)

I've come to accept the reality that compiling parametric polymorphic languages (like Rust and C++) inherently eliminates any notion of a strong ABI. To avoid just stuffing everything in boxes and using virtual dispatch for all operations, you have to codegen functions using the concrete type.

So, you either get responsible language design with actual type checking across interfaces, or you get shared libraries. I haven't seen any plan for getting both. It kinda sucks, but given that I have to make a choice, I know which I'm willing to accept.

At this point, I'd assume any update to a package using Rust anywhere should trigger a rebuild of its reverse dependencies, at least until policy tells us how to avoid that.

Shared libraries

Posted Nov 24, 2025 18:57 UTC (Mon) by ballombe (subscriber, #9523) [Link] (37 responses)

Then there should be a subset of Rust without parametric polymorphism.
This is not required to replace C code.

Shared libraries

Posted Nov 24, 2025 21:05 UTC (Mon) by DemiMarie (subscriber, #164188) [Link]

And nobody is going to use it. Too much code would have to be duplicated.

Shared libraries

Posted Nov 24, 2025 21:27 UTC (Mon) by mb (subscriber, #50428) [Link]

It's called extern "C".
You can do almost all the things you can do in C, including dynamic linking.
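A minimal sketch of what staying within that subset means: export a function with the C ABI and an unmangled symbol, which can then live in a shared library. The function name is invented:

```rust
// A C-compatible entry point: fixed ABI, no generics, no Rust-only
// types crossing the boundary. A C caller would declare it as
// `uint32_t add_u32(uint32_t, uint32_t);`.
#[no_mangle]
pub extern "C" fn add_u32(a: u32, b: u32) -> u32 {
    a.wrapping_add(b)
}

fn main() {
    // Callable from Rust as well; the point is only that the
    // symbol name and calling convention are stable.
    assert_eq!(add_u32(2, 3), 5);
    println!("ok");
}
```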

Shared libraries

Posted Nov 25, 2025 12:02 UTC (Tue) by farnz (subscriber, #17727) [Link] (33 responses)

The problem is more than just parametric polymorphism; it's things like defined constants, semantic meaning of functions and more.

Polymorphism is absolutely fine as long as you are aware that this means that the polymorphic parts of your library live in the caller's binary, not in your binary. Same with defined constants in a header, struct layout etc.

The thing that you need is something that tells you when you've modified something that will be in the caller's binary, not your binary, so that you can undo that breakage. Ideally, you'd also have a way to "shim" your new library, so that old binaries can still link against the new library, and go via the shim that fixes things up so that they continue to work without a rebuild.

But this is a really hard tool to develop; there's a lot hiding in those two sentences. Even just doing the "modified something that will cause breakage" for static linking is hard; and dynamic linking ups the difficulty a notch.

Shared libraries

Posted Nov 25, 2025 13:38 UTC (Tue) by khim (subscriber, #9252) [Link] (32 responses)

> Polymorphism is absolutely fine as long as you are aware that this means that the polymorphic parts of your library live in the caller's binary, not in your binary.

This would only work if your library provides an ABI without things like Option or Result… and an ABI that doesn't use these is almost as far from idiomatic Rust as C is.

Shared libraries

Posted Nov 25, 2025 13:56 UTC (Tue) by farnz (subscriber, #17727) [Link] (31 responses)

Why? Option and Result can be fully monomorphized in your API, in which case there's no polymorphic parts (even though pub struct Foo<T>(Option<T>) is polymorphic, pub struct Foo(Result<u32, MyError>) is not).

Second, I didn't say that you can't have polymorphism; I said that you have to be aware that your polymorphic components live outside your binary. You can have, for example, pub fn foo<P: AsRef<Path>>(path: P) -> u32 { foo_impl(path.as_ref()) }, as long as you are happy that foo is inlined into the caller's binary, while fn foo_impl(path: &Path) -> u32 is in your binary.

The important part is that you're aware of what's in your dynamic library, and what's outside it, and that you have a way to cope with the subset of your code that's in the caller not changing when your dynamic library changes. That might be shims and symbol versions like glibc, or not changing things once they've been exposed in a way that breaks the ABI.
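The split described above can be sketched as follows; the names path_len, path_len_impl, and MyError are invented for illustration:

```rust
use std::path::Path;

// Fully monomorphized types in the public API surface: no generic
// parameter escapes through them.
#[derive(Debug, PartialEq)]
pub enum MyError {
    Empty,
}

// Out-of-line implementation: this is the part that would live in
// the library's own binary.
fn path_len_impl(path: &Path) -> Result<u32, MyError> {
    let n = path.as_os_str().len();
    if n == 0 { Err(MyError::Empty) } else { Ok(n as u32) }
}

// Generic convenience wrapper: each caller gets its own
// monomorphized copy, so this part lives in the caller's binary.
pub fn path_len<P: AsRef<Path>>(path: P) -> Result<u32, MyError> {
    path_len_impl(path.as_ref())
}

fn main() {
    assert_eq!(path_len("/tmp"), Ok(4));
    assert_eq!(path_len(""), Err(MyError::Empty));
    println!("ok");
}
```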

Shared libraries

Posted Nov 25, 2025 14:32 UTC (Tue) by khim (subscriber, #9252) [Link] (30 responses)

> Option and Result can be fully monomorphized in your API

Yes. But not with Rust as it exists today.

> in which case there's no polymorphic parts (even though pub struct Foo<T>(Option<T>) is polymorphic, pub struct Foo(Result<u32, MyError>) is not).

Even pub struct Foo(Result<u32, MyError>) is polymorphic because it depends on the compiler version. The compiler is free to change the representation of pub struct Foo(Result<u32, MyError>) at any time; in fact, nightly has a flag to do that, and stable does it from time to time, too.

> That might be shims and symbol versions like glibc, or not changing things once they've been exposed in a way that breaks the ABI

Well… a compiler upgrade [potentially] breaks the ABI, which means you would have to specify precisely which version of the compiler defines it… and never upgrade.

RenderScript tried that and died as a result, Apple ended up in the exact same position, etc.

You couldn't build a stable platform on quicksand.

Shared libraries

Posted Nov 25, 2025 15:09 UTC (Tue) by farnz (subscriber, #17727) [Link] (29 responses)

Sure, you'd need the compiler to not break things that are marked as ABI - and you'd have to accept that the stable ABI is not necessarily as efficient as the unstable ABI.

Indeed, you might well end up with a v1, v2, v3 etc stable ABI, where v1 is what we thought was good enough next year, v2 is a decade later with all the small improvements that we've accumulated since v1 was marked stable, with downstream users deciding when it's worth moving to a new version of the ABI and breaking older binaries - or even provide a stable ABI v1 shim that uses the stable ABI v5 code to implement things, and does whatever is needed to get compatibility (copies of data structures etc).

But that's something the compiler team has to commit to. None of this works if the compiler team won't stabilize the ABI (replacing the compiler version dependency with a stable ABI version dependency).

Shared libraries

Posted Nov 25, 2025 15:18 UTC (Tue) by khim (subscriber, #9252) [Link] (28 responses)

> But that's something the compiler team has to commit to. None of this works if the compiler team won't stabilize the ABI (replacing the compiler version dependency with a stable ABI version dependency).

Then what's the point of limiting the whole thing to statically known types? Polymorphic ABIs work just fine with compiler buy-in: there are Swift, C#, Java, Ada… it's not rocket science, it's well-tested tech. Known for decades, not years.

Shared libraries

Posted Nov 25, 2025 15:33 UTC (Tue) by farnz (subscriber, #17727) [Link] (20 responses)

I don't know why you'd limit it to statically known types - that's not my proposal at all.

Mine is that the provider of a library with a stable ABI needs to be aware of what they're actually offering - what parts have to be kept stable (including because they're embedded in user code - e.g. user code knows this is a thin pointer, ergo you can't change it to a fat pointer, or user code knows this data structure is 108 bytes in size, so that can't change), and what parts are safe to change (e.g. user code calls into your library at this point, so you can change the implementation).

Note that you might want to allow some of your library code to be inlined for performance - e.g. Vec::len is something you'd want inlined, you might want to inline most of Vec::push, only calling out-of-line code if the Vec needs to grow, for two examples. And when you do that, you need to know that part of your stable ABI is ensuring that the inlined parts still work as designed, even if the parts that you've kept in your shared binary are changing.

Shared libraries

Posted Nov 25, 2025 17:06 UTC (Tue) by ssokolow (guest, #94568) [Link] (4 responses)

What you're describing sounds like what they aim to produce with CrABI.

An opt-in stable ABI for Rust that serves as a higher-level alternative to what you currently get from extern "C" without having to use the abi_stable crate to marshal your types through the C ABI.

Shared libraries

Posted Nov 25, 2025 17:18 UTC (Tue) by farnz (subscriber, #17727) [Link] (3 responses)

It's more than CrABI, but CrABI is a necessary component of it.

I want both CrABI, so that I have a compiler that can give me a stable ABI, and a way to cleanly identify which things are ABI, so that a future version of cargo-semver-checks can tell me not only that I've broken my API stability promises (leaving me to decide if I bump the major version, or if I fix my API), but also that I've broken my ABI stability promises.

I also want to be able to say that my stable ABI depends on you using a compatible stable API crate as part of the build process - just as I can't use a random header file version in C, and rely on it working with a random shared library version - that allows for things like carefully chosen inlining of part of my code into the caller, while still having chunks of my internals in my shared library (e.g. so that fast path code can be inlined, calling a slow path in my shared library).

On top of that, while I'm demanding perfect tooling, moon-onna-stick, and free unicorns for everyone, I want tooling that allows me to knowingly break ABI as long as I provide the necessary shims to let people who linked against the older ABI to continue to work.

Shared libraries

Posted Nov 25, 2025 19:16 UTC (Tue) by ssokolow (guest, #94568) [Link] (2 responses)

> just as I can't use a random header file version in C, and rely on it working with a random shared library version

For varying definitions of "rely on"

https://abi-laboratory.pro/index.php?view=tracker

> On top of that, while I'm demanding perfect tooling, moon-onna-stick, and free unicorns for everyone, I want tooling that allows me to knowingly break ABI as long as I provide the necessary shims to let people who linked against the older ABI to continue to work.

Knowing the Rust community, give it a decade and you'll probably have all that.

Shared libraries

Posted Nov 26, 2025 10:00 UTC (Wed) by farnz (subscriber, #17727) [Link] (1 responses)

My definition of "rely on" is "the tooling gives me an error if I mismatch the shared library version and the header file". Very few C and C++ libraries make that guarantee - the only library I'm aware of that does make that guarantee is glibc, and they do a lot of work with symbol versioning to make that happen.

Shared libraries

Posted Nov 26, 2025 10:15 UTC (Wed) by ssokolow (guest, #94568) [Link]

I missed that you said "can't" instead of "can". Sorry.

I suspect anything which satisfies both your goals and the Rust devs' goals will be paired with some kind of link-time hackery to ensure that what would otherwise have been statically linked will refuse to start if the subset of the provided library that's being used is not 100% ABI compatible.

"'Just don't make mistakes' doesn't scale" is, after all, a core element of Rust's design philosophy.

Shared libraries

Posted Nov 25, 2025 17:17 UTC (Tue) by khim (subscriber, #9252) [Link] (14 responses)

I understand what you are offering, but I don't understand why you would offer that.

C++ went this way because it could without asking compiler developers to cooperate. People just started providing binary libraries without asking anyone for permission — and that worked because C++ hadn't been breaking its ABI quickly enough.

Rust couldn't do that without asking compiler developers to cooperate… the breakage possibility is real enough… and if compiler developers have to be involved anyway… why not give developers the ability to make nice, easy-to-use, generic ABIs rather than restrict them to this strange subset?

Note that it only works in C++ because of the incredible efforts that compiler developers now have to provide… no one would have accepted such a burden had they not been put in the bind “provide backward compatibility or forget about users adopting new versions of your compiler”.

Rust developers may simply refuse to participate in that craziness (because, as you note, without them nothing would work) or, if they do agree — they can design something more useful and sane.

Shared libraries

Posted Nov 26, 2025 10:49 UTC (Wed) by farnz (subscriber, #17727) [Link] (13 responses)

Because it's not just about generic ABIs, and I'm not talking about restricting developers to a "strange subset". You're projecting your own prejudices on top of what I'm saying, and letting it blind you to what I'm talking about.

I'm saying that we need the tools to allow developers to decide what falls on the "shared binary" side of the library, and what falls on the "inlined into the user" side of the library; in C++, this got forced on compilers by the header file/object file split, where things in the header files are "inlined into the user", and things in the object files are not.

But there's no particular reason that this couldn't be well-designed, such that everything (even constants and data layouts) is fixed up by the dynamic linker at run time by default - but there are performance wins to be had from inlining other things. For example, doing a full-blown function call for Vec::len is an expensive way to read a single usize from a known location in the structure; if you export the layout of Vec and the definition of Vec::len such that they can both be inlined into the caller, then you need to ensure that the layout of Vec and the definition of Vec::len remain compatible into the future. Or you might decide that optimizations enabled by knowing that Vec is always going to be 24 bytes, 8 byte aligned, on this platform, are worth enabling, and thus export that layout.

And that's why I want to offer that - I want library authors (who presumably understand the library code) to be able to decide what is inlined into the calling binary, and what is put in the library's binary, with tooling support for stopping them from accidentally changing something from inlined into the caller's binary to part of the library's binary.
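The Vec::len example can be sketched with an invented type; the attributes only illustrate the intended split, they are not an ABI mechanism:

```rust
// Illustrative only: TinyVec, len, and push_slow are invented
// names, and #[inline]/#[inline(never)] merely hint at which side
// of the library boundary each piece would live on.
pub struct TinyVec {
    data: Vec<u64>,
}

impl TinyVec {
    // Hot path: a cheap accessor a library author might want
    // inlined into callers; doing so makes the layout of TinyVec
    // part of the de-facto ABI.
    #[inline]
    pub fn len(&self) -> usize {
        self.data.len()
    }

    // Cold path: fine to keep out-of-line in the library's binary,
    // behind a stable symbol.
    #[inline(never)]
    pub fn push_slow(&mut self, v: u64) {
        self.data.push(v);
    }
}

fn main() {
    let mut t = TinyVec { data: Vec::new() };
    t.push_slow(7);
    assert_eq!(t.len(), 1);
    println!("ok");
}
```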

Shared libraries

Posted Nov 26, 2025 13:42 UTC (Wed) by khim (subscriber, #9252) [Link] (12 responses)

> I'm saying that we need the tools to allow developers to decide what falls on the "shared binary" side of the library

Nope. We need tools that simply work. Developers shouldn't have to consult some large document to find out what would be embedded where, except in rare cases marked unsafe. C/C++ tried that “you are holding it wrong” approach — and it doesn't scale.

Developers have some idea of what it takes to have a stable API (and there are checkers for that); that should be enough. How exactly the compiler would ensure that a stable API becomes a stable ABI is, likewise, on the shoulders of the compiler.

> in C++, this got forced on compilers by the header file/object file split, where things in the header files are "inlined into the user", and things in the object files are not.

Yes. In C++ they had this split from day one — that's why it worked. Rust has no such split, thus it wouldn't work.

> And that's why I want to offer that - I want library authors (who presumably understand the library code) to be able to decide what is inlined into the calling binary, and what is put in the library's binary, with tooling support for stopping them from accidentally changing something from inlined into the caller's binary to part of the library's binary.

This could be an external addon, sure. But by default things should just work. Otherwise no one would adopt that solution.

Developers are not malicious, but lazy. Take that into account — and that would be enough to develop a sane approach.

Provide tools that allow some kind of perfection after you provide something that “just works”.

Shared libraries

Posted Nov 26, 2025 14:01 UTC (Wed) by farnz (subscriber, #17727) [Link] (11 responses)

You continue to misrepresent my position; I'm saying that we need the tooling to enable library developers to safely inline parts of their library into the caller, otherwise we're going to see C/C++ "you're holding it wrong" hacks because they're needed for performance.

The compiler cannot do this unaided - how does the compiler know whether a given small function is "hot path" in your users (and therefore needs to be inlined, with tooling assistance to prevent you from breaking the inlined paths), or "cold path" (and therefore can be out-of-line as a symbol in the shared binary)?

If you don't offer this as part of shared library support, then developers are going to hack around it in ways that tooling cannot check, because they're lazy - fixing the compiler to support this is a lot harder than hacking around it in interesting and unsupportable ways.

And the bit you keep ignoring is that I'm saying that we need tooling that's aware of this, and that can error out if you accidentally break your users. That is table stakes for this - otherwise you end up in the C++ situation.

Shared libraries

Posted Nov 26, 2025 14:15 UTC (Wed) by khim (subscriber, #9252) [Link] (10 responses)

> You continue to misrepresent my position; I'm saying that we need the tooling to enable library developers to safely inline parts of their library into the caller,

Why?

> otherwise we're going to see C/C++ "you're holding it wrong" hacks because they're needed for performance.

Why doesn't that happen with the bazillion other languages that offer similar capabilities?

> If you don't offer this as part of shared library support, then developers are going to hack around it in ways that tooling cannot check, because they're lazy - fixing the compiler to support this is a lot harder than hacking around it in interesting and unsupportable ways.

Not exposing “lightweight” objects in a way that requires such hacks is even easier.

We are talking about shared libraries that are used to offer some kind of stable ABI here. Supporting a stable ABI is hard. People wouldn't add crazy hacks when they couldn't guarantee whether they would work or not.

Look at Vulkan as an example of a stable ABI that has to be performant, too: no crazy hacks, no inline functions, just a bunch of non-opaque structures, with accessors to these structures in static libraries (that you may or may not want to use). Zero support from the compiler, zero magic, guaranteed to work, and easily replicated in Rust, if needed.

> That is table stakes for this - otherwise you end up in the C++ situation.

Nope, you wouldn't. The C++ situation arose because C++ already had that difference between .h and .c from day one; it already offered a way to have functions partially inlined in the .so and partially in the .h.

Don't give developers that capability — and they wouldn't use it. They would work with dynamic dispatch and the limitations of dynamic dispatch. It's as simple as that.

Sure, the optional availability of such a capability wouldn't hurt… but its lack is not a show-stopper at all. Developers can tolerate slow; they wouldn't tolerate “you need an entirely different API for shared libraries”, they would just link everything statically.
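The Vulkan-style approach described above, translated to Rust, is roughly: fixed-layout structs plus plain C-ABI accessors, no compiler magic. The names here are invented:

```rust
// #[repr(C)] freezes the field layout, so the struct can cross a
// shared-library boundary without depending on rustc's (unstable)
// default layout.
#[repr(C)]
pub struct ExtentInfo {
    pub width: u32,
    pub height: u32,
}

// Accessor with a fixed ABI; callers may use it or read the
// non-opaque fields directly, as with Vulkan's structs.
#[no_mangle]
pub extern "C" fn extent_area(e: &ExtentInfo) -> u64 {
    u64::from(e.width) * u64::from(e.height)
}

fn main() {
    let e = ExtentInfo { width: 640, height: 480 };
    assert_eq!(extent_area(&e), 307_200);
    println!("ok");
}
```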

Shared libraries

Posted Nov 26, 2025 15:28 UTC (Wed) by farnz (subscriber, #17727) [Link] (8 responses)

Because these hacks do happen with other languages - the exceptions are those with a JIT, where it doesn't matter. And compiler developers just have to get used to this - like they've been forced to with C and C++. Either you provide the tooling to do a good job, or people come up with hacks.

And if you don't allow exposure of lightweight objects from a library, then you're ruling out such things as Option<&str> in the library's ABI surface. That's a lightweight object being exposed - and thus needs a stable ABI.

Vulkan's a really good example, because it does expose all sorts of things that get inlined into the caller - there's inline functions, for example, and those struct definitions are inlined into the caller, such that Vulkan cannot redefine the structures to a different size and still work with older libraries.

In an ideal world, there would be tooling that told people working on Vulkan "hey, you've changed this structure in such a way that it now has different size/alignment. You can't do that!", so that they couldn't make that mistake - but as it is, the normal C rules apply of "you're holding it wrong".

Shared libraries

Posted Nov 26, 2025 15:40 UTC (Wed) by khim (subscriber, #9252) [Link] (7 responses)

> In an ideal world, there would be tooling that told people working on Vulkan "hey, you've changed this structure in such a way that it now has different size/alignment. You can't do that!"

That tool is called Libabigail. And yes, it's used when Vulkan is upgraded.

> That's a lightweight object being exposed - and thus needs a stable ABI.

It just needs a stable layout. But yes, there's an interesting dilemma there: if you just freeze the layout, someone still needs to ensure that said Option<&str> is valid when it crosses the library boundary.

Whether this needs to be exposed to developers as some kind of universally accessible tooling or kept to the standard library is an interesting question. Today there are a lot of things that the standard library does yet regular programs cannot do. Providing inlining for methods of Option<&str> without giving such ability to third-party libraries could be a good first step.

After all, it's not possible to take self: Foo<Bar<Self>> today as the target for a method, but only self: Rc<Self> or self: Arc<Self>… and the sky hasn't fallen.

Shared libraries

Posted Nov 26, 2025 16:01 UTC (Wed) by Wol (subscriber, #4433) [Link] (2 responses)

> It just needs stable layout. But yes, it's an interesting dilemma there: if you just freeze the layout someone still needs to ensure that said Option<&str> would be valid when it crosses library boundary.

So you allow Rust to use the "extern" keyword. Which freezes the layout according to certain rules.

One likely mechanism is to require (a) that extern tells the compiler to use a simplistic layout as declared (no optimisation) so it's guaranteed to be consistent across programs. And then (b) (copying Rust "Editions") you allow "extern "version"" so you CAN update the layout, and more importantly, you can tell the Rust compiler about both layouts so it can auto-generate a shim, or you can choose not to, so it generates a linking error.

But either way, you do have to say "extern" is similar to "unsafe", in that you are relying on the programmer to enforce invariants, but that the compiler will do its job properly if the programmer does theirs (primarily ensures version is updated correctly).

Cheers,
Wol

Shared libraries

Posted Nov 26, 2025 16:11 UTC (Wed) by khim (subscriber, #9252) [Link] (1 responses)

> So you allow Rust to use the "extern" keyword. Which freezes the layout according to certain rules.

It's not enough to just freeze a layout. Option<&str> contains either None or a reference to a valid UTF-8 string. Nothing else is allowed. If handling of it is inlined into both the binary and the library, then they should agree about upholding such invariants.

> But either way, you do have to say "extern" is similar to "unsafe", in that you are relying on the programmer to enforce invariants

This would never work. The big advantage of Rust is the fact that the compiler (and, most notably, not the developer) upholds many such invariants.

We need this guarantee kept across the shared-library boundary or it'll never work.
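The invariants in question are concrete: Option<&str> uses the null-pointer niche of the reference, so it is layout-compatible with &str itself, and any inlined code on both sides must agree on that encoding. A small check:

```rust
use std::mem::size_of;

fn main() {
    // None is represented by the null-pointer "niche" of &str, so
    // Option<&str> adds no space over &str; both sides of a shared
    // library boundary would need to agree on this encoding (and
    // on the "valid UTF-8" invariant, which is not visible in the
    // layout at all).
    assert_eq!(size_of::<Option<&str>>(), size_of::<&str>());
    println!("ok");
}
```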

Shared libraries

Posted Nov 26, 2025 16:55 UTC (Wed) by Wol (subscriber, #4433) [Link]

> We need this guarantee kept across shared libraries boundary or it'll never work.

Which is farnz' point. You need tooling.

Which is the point of extern. It says this is where it's likely to go wrong. It tells the compiler "here is a boundary where abstractions mustn't leak". It tells the programmer "here is a boundary where you need to be careful what goes across".

It's a major improvement on C / C++ where the header file is part of the library, but contains loads of stuff that ends up in the application binary - a major abstraction leak that C / C++ sprinkles heavily with "here be dragons" pixie dust. A Rust compiler could cleanly enforce most of that, for example by ensuring structs passed through an extern have to be declared as extern, and have the same version as the function they are passing through.

Yes it's not perfect. But it's a damn sight better than what we have in the C ecosystem. Given Rust's concern about pointer provenance and stuff, you could use the same approach to argument provenance across an extern. Doesn't stop a programmer cheating and feeding two different versions of the same header file into the two different halves - application and library - in order to fool the compiler - but that's a clear breach of his obligation to enforce integrity across that boundary.

You might even be able to get the compiler (when compiling a library) to output an Intermediate Language Description of the call interface, which the compiler (when compiling an application) imports to guarantee compatibility. Again, you're then heavily dependent on versioning, and the programmer doing it properly, but it'll only take a few crashes on linking as attempts to do so cause problems, and the programmers will learn to "get it right".

Cheers,
Wol

Shared libraries

Posted Nov 26, 2025 16:50 UTC (Wed) by farnz (subscriber, #17727) [Link] (3 responses)

Libabigail is part of the tooling I'd like to see in proper shared library tooling - it covers size and alignment, but not field ordering within the structure (which also ought to be checked by tools). And, of course, if you want something that "just works", you need the size and alignment of structures to not be hard-coded in the user, but rather filled in by the dynamic linker via a COPY or SIZE relocation (causing you to have to do arithmetic all over the place when a structure from the dynamic library is embedded in a user-supplied structure).

Basically, what I want is tooling that tells me if I've made any change that breaks either my API (which currently exists) or my ABI, and that I can have in my build process to tell me when I've made a mistake. And, as Vulkan shows, this is not just about having a way to express "this item has a fixed layout regardless of compiler", but also "you said this was supposed to be unchanging, but you've changed it in a way that's significant. Did you mean to do that?", complete with the ability to reserve space for future expansion (and not in the C-like hacky way of unused fields, but a very deliberate and well-thought through way to reserve sufficient space with good enough alignment for what you might want to fill in that space with, allowing for some fields being fixed in location because they're exposed ABI, and others being mobile because they're opaque to callers).

And yes, this tooling needs co-operation with the compiler - that's table stakes for doing a stable ABI properly. But it also needs a bunch of other things, so that I have to tell my tools that yes, I mean to break the stable ABI here, just as I have to tell my tools (via SemVer changes) that yes, I mean to break the stable API.

Shared libraries

Posted Nov 26, 2025 17:04 UTC (Wed) by khim (subscriber, #9252) [Link] (2 responses)

> And, of course, if you want something that "just works", you need the size and alignment of structures to not be hard-coded in the user, but rather filled in by the dynamic linker via a COPY or SIZE relocation (causing you to have to do arithmetic all over the place when a structure from the dynamic library is embedded in a user-supplied structure).

It's not “of course”, but “do we really need it?”. If a structure has the potential of changing size, then it's highly unlikely that it's a “trivial” structure that would be used somewhere in the critical path. More likely it's some descriptor that needs non-trivial work to handle. Swift handles such structures with runtime descriptors, and it may be enough to do that.

I would say that if you tried to do that, you'd end up with the “success” of the OSI protocols: lots of hype, lots of talks, zero working implementations.

I would rather see that marked as “maybe in 10 years, if we have the resources, we might attempt that” rather than used as justification not to do anything.

> But it also needs a bunch of other things

It needs some things, yes. You are trying to stuff it full of things that are not, strictly speaking, needed, but only desired. That's willful ignorance of RFC 1925: it is always possible to agglutinate multiple separate problems into a single complex interdependent solution, and in most cases this is a bad idea.

I would say that starting with dynamic dispatch everywhere except some “blessed” types from the standard library (which is essentially what Swift did) would get us 90% there (if not 99%), and the remaining cases could be discussed later (or handled with hacky solutions, if needed).

But yes, basic library types like Option&lt;&str&gt; need to be covered, somehow.

Shared libraries

Posted Nov 26, 2025 17:14 UTC (Wed) by farnz (subscriber, #17727) [Link] (1 responses)

Runtime descriptors get filled in by the dynamic linker - and all of the things I'm describing are things that are done semi-manually to maintain a stable ABI by Vulkan and glibc developers today.

I'm not arguing that these are required for a minimal solution - but rather that all of these problems have to be solved if you're going to have a stable ABI, and given that Rust needs tooling updates anyway to have a stable ABI, it would be better to have tooling that prevents accidental ABI breakage from day 1 of the Rust stable ABI for dynamic linking, rather than (as you seem to be suggesting) relying on "programmers don't make that mistake".

After all, the lesson of C++ UB is that "programmers don't make that mistake" is virtually always false.

Shared libraries

Posted Nov 26, 2025 17:23 UTC (Wed) by khim (subscriber, #9252) [Link]

> Runtime descriptors get filled in by the dynamic linker

Nope. Not even remotely close. Rather, it's an extension of the current dyn Trait scheme: the vtable carries the size of the object (so it can be allocated on the stack, alloca-style), a reference to the constructor (so you can create a vector), and so on. Then a generic may have one common code path for that thing and work, even without any dynamic linkage involved.

A very natural extension of the Rust language, not special code for dynamic libraries. It would be well-received even without dynamic libraries: the ability to have a truly polymorphic callback is something people often want.

> After all, the lesson of C++ UB is that "programmers don't make that mistake" is virtually always false.

Yes, but that doesn't mean we have to invent something grandiose that would take the next 10 or 20 years to develop. A simple solution may be better, because it's actually implementable… and we know, from the Swift example, what can be implemented in limited time.

Hint: anything that requires changes to the dynamic linker automatically makes the whole thing DOA, simply because on Windows the dynamic linker is part of the OS; if the scheme doesn't support Windows, nobody serious will use it.

Shared libraries

Posted Nov 26, 2025 16:09 UTC (Wed) by excors (subscriber, #95769) [Link]

> Look on Vulkan as an example of stable ABI that have to be performant, too

Vulkan's ABI is stable and extensible and C-based (allowing FFI from many languages), with the tradeoff that it's painful and bug-prone to use directly.

That's only acceptable because most application developers won't use it directly. They'll use something like https://github.com/KhronosGroup/Vulkan-Hpp which provides an easier-to-use and safer C++ API, but gives no promises about API stability. That's a header-only library, so effectively it's statically linked into every application. (Header-only libraries are popular because C++'s ABI support is so poor that even static linking of separately-compiled libraries is unreliable). Or they'll use some other wrapper or engine, or write their own, with the same issues.

So now the problem is: how could you implement something like Vulkan-Hpp as a shared library with a stable ABI? And the current answer is, you can't. Vulkan isn't solving the problem of safe ABIs; it's just shifting it to another layer which doesn't solve it either.

If you want shared libraries and safety (which I think was the premise of this thread), it seems worth trying to come up with a better solution than that.

(Vulkan is also a peculiar example because it has a complex architecture where applications link (dynamically or sometimes statically) to a Vulkan Loader which dynamically links to multiple independent driver implementations, and drivers export the whole Vulkan ABI through a GetProcAddr interface, and applications call a trampoline function in the Loader that dispatches to the appropriate driver. (If you have multiple GPUs, the application might use multiple drivers at once). Applications can avoid the trampoline cost by using a GetProcAddr to get a function pointer directly into the driver, though it may actually be a function pointer into a separate Layer library that can intercept and modify each call before the driver sees it (for debugging etc). I wouldn't call that "no crazy hacks". (https://github.com/KhronosGroup/Vulkan-Loader/blob/main/d...))

Shared libraries

Posted Nov 25, 2025 17:59 UTC (Tue) by paulj (subscriber, #341) [Link] (6 responses)

Java elides the parameterised type information from the binary ABI I thought. AFAIK linking to a jar won't catch an incompatible change; I'd have to experiment a bit to be sure. So... ICBW, or perhaps you're talking about a slightly different point than I am.

Shared libraries

Posted Nov 25, 2025 18:55 UTC (Tue) by Cyberax (✭ supporter ✭, #52523) [Link] (5 responses)

> Java elides the parameterised type information from the binary ABI I thought.

It doesn't. And the type information is available through reflection. It is simply ignored during bytecode interpretation (or JIT compilation).

Shared libraries

Posted Nov 26, 2025 10:44 UTC (Wed) by paulj (subscriber, #341) [Link] (4 responses)

Interesting... Hadn't considered reflection. But that's not part of the binary ABI as such, but the bolt-on reflection API. It's the equivalent of additional debug symbols - useful for introspection, but not used for link-time validation. What you say confirms my understanding, the runtime byte-code linking process does not take parameterised types into account.

Shared libraries

Posted Nov 26, 2025 13:33 UTC (Wed) by khim (subscriber, #9252) [Link]

It's an implementation detail. Developers don't care whether something is a “proper” part of the ABI or not. They can and do use foreign types with generics, and it just works.

Rust could do the same if it imposed that foreign types can only be used with dynamic dispatch.

Shared libraries

Posted Nov 26, 2025 18:19 UTC (Wed) by Cyberax (✭ supporter ✭, #52523) [Link] (2 responses)

> Interesting... Hadn't considered reflection. But that's not part of the binary ABI as such, but the bolt-on reflection API. It's the equivalent of additional debug symbols - useful for introspection, but not used for link-time validation.

Technically, there is no "linking" in Java. All the code is dynamically loaded at runtime (through ClassLoaders). You can do additional validation during loading and/or rewrite ("instrument") the bytecode entirely.

Type information available through reflection was used to implement reified generics by some languages targeting the JVM. Kotlin is one of them, for example. So in practice, the type erasure on the bytecode level is mostly an implementation detail.

Shared libraries

Posted Nov 27, 2025 16:54 UTC (Thu) by ssmith32 (subscriber, #72404) [Link] (1 responses)

Oh no. I've gotten bitten by type erasure (of generics) more times than I can remember.

You can definitely have code that does:

for (SomeTypeA a : someCollection)

where someCollection actually holds SomeTypeB elements at runtime, and you're stuck debugging the damnedest runtime error once you do something with them in the loop.

I had this happen on a nested, templated collection.

Granted, the thing that usually triggers this is the odd corner case of deserializing data back into Java.

Unfortunately, this odd corner is something that happens all over the place when you're working with a system that contains HTTP services, which, if you work with Java, is what you're usually paid to do, nowadays.

And, since you always need some sort of deserializer for JSON (or, if you're particularly unlucky, XML) in your HTTP service, oh, what the heck, we'll use it for config files as well, for any data we read in, for..

And Boom.

When working with Java on a daily basis, type erasure is anything but an implementation detail.

It is a very real pain point (granted, it was done by smart people with sound reasoning and good intentions [1], but you know what they say about good intentions...)

There are other cases I've run into at odd times, but I have trouble recalling the specifics, because the serde issue happens often enough that it looms large in the memory.

[1] https://openjdk.org/projects/valhalla/design-notes/in-def...

Shared libraries

Posted Nov 28, 2025 5:31 UTC (Fri) by Cyberax (✭ supporter ✭, #52523) [Link]

Oh yes. Back when I was writing tons of Java, I had my own deserializer for my homegrown RPC protocol that made sure that types of collections match.

I also distinctly remember collection wrappers that enforced the correct types using reflection. Maybe in apache-commons?

Shared libraries

Posted Dec 1, 2025 23:30 UTC (Mon) by NYKevin (subscriber, #129325) [Link]

There is not. You would need to remove or severely restrict all of the following:

* References (lifetimes are considered type parameters), but maybe we can make an exception since lifetime parameters do not result in true monomorphization.
* Slices and arrays (the element is considered a type parameter, and gives rise to monomorphization in the usual fashion).
* Async (Future<T>)
* for loops and iteration (Iterator<T>)
* Smart pointers (Deref<T>)
* Result<V, E> and Option<T>, and all APIs that return things in one of those two forms (i.e. every fallible function in std and core). That includes basically all I/O.

And probably a dozen other things I didn't think of. Needless to say, the resulting language would be so constrained as to make WUFFS look like a "normal" language in comparison.

Shared libraries

Posted Nov 24, 2025 21:16 UTC (Mon) by zyga (subscriber, #81533) [Link] (15 responses)

This is not quite true. It is "just" insanely expensive to do well.

Apple paid for that support in Swift so that apps for their platforms can benefit from base OS library updates without having to be rebuilt.

Rust and Go didn't have the money or the desire, respectively, to implement that.

I recommend reading about what Swift can do today on Linux. You can load a library with a type, load another with a container, and efficiently instantiate the container specialized with that type, all with dynamic libraries and stable ABIs. It is compiler voodoo, but it is not impossible.

I kind of think we are all doomed in the long run (e.g. imagine all of GTK and Qt written in Rust, requiring a complete world rebuild for every tiny update). IMO that is not scalable, and the trend of moving to Rust or another language like that will bounce at some point.

Either someone steps in and does the heavy lifting to solve this problem, or distributions will just grind down to a halt.

Shared libraries

Posted Nov 24, 2025 21:50 UTC (Mon) by zyga (subscriber, #81533) [Link] (3 responses)

Replying to myself. Apologies for not posting this above but it is well worth reading. "How Swift Achieved Dynamic Linking Where Rust Couldn't": https://faultlore.com/blah/swift-abi/

Shared libraries

Posted Nov 25, 2025 12:17 UTC (Tue) by paulj (subscriber, #341) [Link] (1 responses)

I think that's pretty much exactly what keithp had in mind with "To avoid just stuffing everything in boxes and using virtual dispatch for all operations". I read his comment as fairly explicitly referring to the "You can have (strong ABI AND expensive runtime) OR (limited dynlib ABI AND compile time concrete implementation of parametric types to obtain fast runtime)" dichotomy.

Shared libraries

Posted Nov 25, 2025 13:40 UTC (Tue) by khim (subscriber, #9252) [Link]

You can have both (just not at the same time). That's what Swift does.

Shared libraries

Posted Nov 27, 2025 10:15 UTC (Thu) by Karellen (subscriber, #67644) [Link]

Great link, thanks for posting!

Swift vs Rust ABI

Posted Nov 24, 2025 23:15 UTC (Mon) by DemiMarie (subscriber, #164188) [Link] (5 responses)

Part of how Swift achieved ABI stability is to implicitly heap-allocate data in certain situations. I don’t believe this is permitted in Rust. In particular, Rust doesn’t require that a heap even exists.

Swift vs Rust ABI

Posted Nov 25, 2025 2:11 UTC (Tue) by khim (subscriber, #9252) [Link] (4 responses)

That can be solved by declaring that thing an “std-only” feature.

There's nothing impossible there, but it's a lot of work, which means it's unlikely to happen without serious funding… who can provide it?

Swift vs Rust ABI

Posted Nov 25, 2025 6:52 UTC (Tue) by josh (subscriber, #17465) [Link] (3 responses)

It's not just funding, because we've got a strong interest in trying it. It's design, because the Swift design is a good source of inspiration but not *exactly* what would work for Rust, in various different ways.

We're working on it, though.

Good to know that Rust is working on ABI!

Posted Nov 27, 2025 18:23 UTC (Thu) by DemiMarie (subscriber, #164188) [Link]

I’m glad to read this!

Swift vs Rust ABI

Posted Nov 28, 2025 9:51 UTC (Fri) by farnz (subscriber, #17727) [Link]

It's a really hard problem to get right, and I am extremely grateful that the Rust team is thinking it through, and not getting stuck with the first design that solves 99% of the problem space.

Swift vs Rust ABI

Posted Nov 28, 2025 10:26 UTC (Fri) by bluca (subscriber, #118303) [Link]

Excellent news; looking forward to learning more about these developments.

Shared libraries

Posted Nov 25, 2025 9:01 UTC (Tue) by taladar (subscriber, #68407) [Link] (3 responses)

I mean, let's be honest: if Qt were a Rust library it would also be much, much smaller, because it wouldn't re-implement an entire ecosystem of dependencies inside the library itself the way it currently does in C++.

Shared libraries

Posted Nov 25, 2025 10:25 UTC (Tue) by intelfx (subscriber, #130118) [Link] (2 responses)

> I mean, let's be honest: if Qt were a Rust library it would also be much, much smaller, because it wouldn't re-implement an entire ecosystem of dependencies inside the library itself the way it currently does in C++.

It would have been smaller in source code, but not in binary, for obvious reasons: it might not need to reimplement an ecosystem of dependencies, but the object code generated from those dependencies would still have to exist somewhere.

Unless, of course, it was a hypothetical *shared* Rust library, linking to *shared* Rust libraries of those dependencies. Right.

Shared libraries

Posted Nov 26, 2025 16:25 UTC (Wed) by tialaramex (subscriber, #21167) [Link] (1 responses)

Qt is a "kitchen sink" library which tries to offer everything you might need, so it's hard to compare. Maybe a Rust Qt-alike would insist on offering all the container types Rust already provides, for example, just as Qt feels the need to supply QList (the growable array type), QStack (the same type with a more stack-flavoured API), and QQueue (the same type again, AFAIK, now used to back a ring-buffer abstraction presented as a FIFO queue).

C++ doesn't provide very good container types, but it does provide a reasonably good growable array type, with the expected amortized constant time growth, the basic reservation feature - and QList isn't actually better. So the fact QList was provided anyway suggests the same mindset would apply in Rust for a library in that mould.

But on the other hand, a library which didn't insist on offering its own version of things you already have might actually reduce the redundancy and the extra marshalling overhead from the duplication, so that might actually be a net win in size terms.

Shared libraries

Posted Nov 26, 2025 17:19 UTC (Wed) by excors (subscriber, #95769) [Link]

> C++ doesn't provide very good container types, but it does provide a reasonably good growable array type, with the expected amortized constant time growth, the basic reservation feature - and QList isn't actually better. So the fact QList was provided anyway suggests the same mindset would apply in Rust for a library in that mould.

Back in 1998 they explained: (https://github.com/KDE/qt1/blob/25d30943816da9c28cded9ac7...)

> Qt's containers are much more portable than the STL is. When we started writing Qt, STL was far away in the future, and when we released Qt 1.0, no widely-used compilers could compile the STL. For a library such as Qt, it is of course very important to compile on the widest possible variety of compilers.

They also justified it by having different space/time tradeoffs to STL containers (and still make that argument in today's documentation); but if the STL had been widely available with decent-quality implementations when they started, I imagine they would have simply used that and put up with its other limitations. And anyone starting today in Rust would simply use Rust's std containers.

Shared libraries

Posted Nov 26, 2025 1:58 UTC (Wed) by mathstuf (subscriber, #69389) [Link]

> I kind of think we are all doomed in the long run (e.g. imagine all of GTK and Qt are written in rust and require a complete world rebuild for every tiny update). IMO that is not scalable and the trend to move to Rust or another langue like that, will bounce at some point.

GTK has language-independent object descriptions (GObject). I don't see why those couldn't be provided from a Rust core as well. Qt relies more on C++ magic and would be a harder translation, but a QML interface could be provided from a Rust core.

Also, I suspect Rust implementations would provide "all" of these UI libraries/frameworks as more, smaller crates than we have today, so you might actually "win" if something can depend on a smaller footprint in the first place.

Shared libraries

Posted Nov 25, 2025 14:23 UTC (Tue) by gspr (subscriber, #91542) [Link] (10 responses)

Not knowing much about this, I've always wondered why a relatively closed set of packages – like those that constitute a distro – couldn't be transitively monomorphized *internally* in the set.

For example, take the directed graph of dependencies between Rust packages in Debian. Pick any package that is not a library (i.e. not a librust-foo-dev package). This package surely uses, from its dependencies, either monomorphized versions of functions and types, or dynamic dispatch. Note down all the monomorphized versions, and add them to a list for each dependency. Traverse the graph in topological order, building these monomorphization lists for all dependencies. Then build all library packages as shared objects with all of those monomorphic instances explicitly stamped out (I understand there's no compiler support for this at the moment, but it shouldn't be too hard to fake by generating stubs?). Would this not allow dynamic linking and bug-fixing in shared objects *within* Debian, at least? For a given compiler version, of course. Non-Debian software that uses the libraries is no better off than before (unless it happens to need the same monomorphizations), but it's also no worse off.

I'm sure I'm overlooking something here, but I'd love to learn :)

Shared libraries

Posted Nov 25, 2025 14:39 UTC (Tue) by farnz (subscriber, #17727) [Link] (8 responses)

The problem comes with updates. If you update (say) ripgrep to fix a bug, and it uses a new monomorphization, that new monomorphization can rely on a new monomorphization inside a library package, and so on.

You end up back at the rebuild problem, since you cannot determine ahead of time that no bug fix will require a new monomorphization. You will probably reduce the total number of rebuilds you need, but if you're unlucky, you won't.

Shared libraries

Posted Nov 25, 2025 14:43 UTC (Tue) by gspr (subscriber, #91542) [Link] (6 responses)

> The problem comes with updates. If you update (say) ripgrep to fix a bug, and it uses a new monomorphization, that new monomorphization can rely on a new monomorphization inside a library package, and so on.

Is that likely? Or is it any more likely than, say, a bugfix in a classical C library needing to break the ABI?

Shared libraries

Posted Nov 25, 2025 14:46 UTC (Tue) by farnz (subscriber, #17727) [Link] (4 responses)

If you're doing the change downstream, then yes it is quite likely - something as "trivial" to upstream as "add a new variant to an error enum" is a new monomorphization, with the resulting need to recompile everything that knows the layout of that enum.

Shared libraries

Posted Nov 25, 2025 14:51 UTC (Tue) by gspr (subscriber, #91542) [Link] (3 responses)

> If you're doing the change downstream, then yes it is quite likely - something as "trivial" to upstream as "add a new variant to an error enum" is a new monomorphization, with the resulting need to recompile everything that knows the layout of that enum.

Definitely. But a similar change in a classical C library would be to return a new error value. That wouldn't technically break the ABI, but it would sure require depending packages to acquire knowledge of the new error value. That would take *more* than just recompiling.

I guess what I'm saying is that this approach doesn't always work, but it's not much worse than the situation for classical C libraries.

Shared libraries

Posted Nov 25, 2025 14:59 UTC (Tue) by farnz (subscriber, #17727) [Link] (2 responses)

Returning a new error value that was previously impossible is an ABI break, in both C and Rust, unless it's clearly documented beforehand that other errors are possible.

For example, if I truncate the error value to 8 bits to make it fit an existing struct, because all known error values are under 255, and you introduce error value 256, I've got a problem in C. This gets worse in Rust, because enums aren't just a value, they can carry data, too, so the enum may get larger as a result of the change, and upstream won't care that the old enum compiled by Debian was 72 bytes, and the new one is 80 bytes - especially if compiled with a newer compiler, they're both 64 bytes.

Shared libraries

Posted Nov 25, 2025 15:34 UTC (Tue) by gspr (subscriber, #91542) [Link] (1 responses)

Ok, sure, we have an ABI break for both C and Rust. I'm just saying: the situation doesn't get worse, does it?

Shared libraries

Posted Nov 25, 2025 15:42 UTC (Tue) by farnz (subscriber, #17727) [Link]

It's slightly worse, because years of habit mean that C programmers are used to thinking about whether a change will break distro ABIs; Rust programmers aren't, and because Rust makes it easier to express things that are hard to express in C (sum types, for example), they're more likely to make changes to fix a bug that make the situation worse.

Remember that the state we're in with C is in part because the language requires programmers to get it right, or risk UB, and as a result, C programmers doing security fixes tend to be thinking about all the ways they can accidentally break someone; Rust programmers tend not to be doing that, because the result of breaking someone is a compiler error, not UB.

That cultural difference matters, and is part of why the aim on the Rust side is to have a state where swapping in an incompatible dynamic library is a dynamic linker failure, not UB as it is in C.

Shared libraries

Posted Nov 25, 2025 17:16 UTC (Tue) by ssokolow (guest, #94568) [Link]

Rust has automatic struct packing and niche optimization and, were it not for the need for reproducible builds, the support for randomizing struct layouts to help people catch Hyrum's law mistakes might be on by default.

Shared libraries

Posted Nov 25, 2025 17:05 UTC (Tue) by Wol (subscriber, #4433) [Link]

> The problem comes with updates. If you update (say) ripgrep to fix a bug, and it uses a new monomorphization, that new monomorphization can rely on a new monomorphization inside a library package, and so on.

Or you go back to the old FORTRAN libraries that I worked with. The linker pulled in all the modules it knew it needed (and if there were recursive dependencies, you had to link the same library several times to get them all). That also had the side effect that the functions your program didn't need didn't end up in the executable.

So we then have some fancy update tool (sorry) that takes two allegedly identical libraries and merges their modules into a new, updated library. If it finds the same module in both precursor libraries, it would need to check and enforce the rule that the extern definition is identical before choosing the version with the newest reference number (maybe defined as the most recent modification date; I think that might be a tricky problem?). Or maybe it just updates the original library with new modules that didn't originally exist.

Cheers,
Wol

Whole-image optimization

Posted Nov 27, 2025 22:48 UTC (Thu) by DemiMarie (subscriber, #164188) [Link]

I think this could work if one is shipping an image that is updated by replacing it.

Shared libraries

Posted Nov 25, 2025 20:46 UTC (Tue) by jhoblitt (subscriber, #77733) [Link] (4 responses)

The reality is that the burden of rolling out security fixes is shifting from distros to upstream projects. So instead of a shared library getting updated at the distro level, the upstream(s) tag a new release with the version dep(s) bumped. This isn't a big deal for most upstreams as, odds are, a bot will automatically open a pull/merge request when a dep patches a security issue.

What is probably needed to make this new model palatable for distros is standardized changelog data that distros can poll for security updates.

Shared libraries

Posted Nov 25, 2025 20:51 UTC (Tue) by mb (subscriber, #50428) [Link] (3 responses)

That's not what actually happens, though.

Distros don't care about locked upstream dependencies.
And I think that's fine, if they always pick the latest compatible semver.
That tends to work very well in the crates.io ecosystem with only extremely few exceptions.

But distros often try to use older dependencies than those locked upstream. Which is not OK, IMO.

Shared libraries

Posted Nov 25, 2025 22:35 UTC (Tue) by jhoblitt (subscriber, #77733) [Link] (2 responses)

I don't dispute that distros override Cargo.lock files, but I don't agree that this behavior is a feature. Changing a single dep version can also change transitive deps, and my preference is to run the exact version of a software package that passed the upstream project's CI pipeline.

Shared libraries

Posted Nov 26, 2025 8:22 UTC (Wed) by joib (subscriber, #8541) [Link] (1 responses)

Case in point being Debian breaking bcachefs-tools by changing one of the deps to an older one.

Maybe there's a position in between "only one version of each library" and "whatever is currently in Cargo.lock across a hundred different projects"? E.g. some algorithm that could calculate the minimum set of versions to satisfy all the version requirements in all the Cargo.toml files that are used in the distro?

Shared libraries

Posted Nov 26, 2025 8:59 UTC (Wed) by zdzichu (subscriber, #17118) [Link]

The Debian breakage should have been caught by unit tests during the package build. But the last time I checked, there were no such tests in bcachefs-tools. Not exactly confidence-inspiring.

Shared libraries

Posted Nov 24, 2025 19:03 UTC (Mon) by carlosrodfern (subscriber, #166486) [Link] (70 responses)

Imagine a world where every single program in a distro is statically linked. Imagine the logistics of delivering patches to all users, the memory consumption of binaries, program load times, configurability, etc. It might not be felt as much on servers, but it will definitely be felt on desktops.

Shared libraries

Posted Nov 24, 2025 22:25 UTC (Mon) by Cyberax (✭ supporter ✭, #52523) [Link] (17 responses)

Why would this be complicated? You just recompile stuff and ship it. It might force you to invest in a more efficient packaging system, if anything. There's no need to transmit 150MB of data if you need to change 20 bytes of code.

Android uses this for the OTA system updates.

Shared libraries

Posted Nov 24, 2025 23:39 UTC (Mon) by carlosrodfern (subscriber, #166486) [Link] (3 responses)

I'm actually pointing out the *problems* of doing all the things statically linked, not advocating for it.

The fact that statically linked programs are a good solution in containers doesn't mean the approach can be extrapolated to a whole Linux distro. A slight change in the nature or size of a problem can justify a very different solution. It is a typical mistake: people get excited about one technology or approach and want to apply it to everything that looks like a nail. Statically linked programs written in Go or Rust make a lot of sense for containers, where the pros are weighty and the cons are not that significant, but it is not a good approach for every program in a Linux distro.

Shared libraries

Posted Nov 25, 2025 0:49 UTC (Tue) by Cyberax (✭ supporter ✭, #52523) [Link] (2 responses)

> I'm actually pointing out the *problems* of doing all the things statically linked, not advocating for it.

But it's not really a problem, is it? Binary diffs for patch updates can negate the advantages of shared libraries.

> The fact that statically linked programs are a good solution in containers doesn't mean the approach can be extrapolated to a whole Linux distro.

But maybe it can? I actually tried a fully static distro a while ago ( https://github.com/oasislinux/oasis ), and it objectively felt _better_ than regular Debian.

I'm not at all convinced that shared libraries are worth all the hassle.

Shared libraries

Posted Nov 25, 2025 20:27 UTC (Tue) by Whyte (subscriber, #161914) [Link] (1 responses)

Did your test also include KDE or GNOME?

That's the part I don't ever see happening, since the libraries supporting those are huge.

Shared libraries

Posted Nov 26, 2025 2:38 UTC (Wed) by Cyberax (✭ supporter ✭, #52523) [Link]

No, it used its own display manager. That was pretty snappy but limited.

I think that these days the graphics-library problem could be solved by moving UIs into Electron-like shells with a shared browser layer, with custom native code called via IPC. In theory this could provide even more optimizations, like a pre-forked, pre-initialized shell that can be forked without waiting for it to load (like the 'zygote' process in Android).

BTW, the browser doesn't have to be Chromium. Servo would be great for this purpose.

Shared libraries

Posted Nov 25, 2025 7:54 UTC (Tue) by joib (subscriber, #8541) [Link]

I think Ubuntu optionally supports using zsync to distribute changes efficiently. Some other, non-distro-packaged software I use relies on it for self-updates as well: https://zsync.moria.org.uk/ I'm sure there are other things that do something similar.

So the tech to do this efficiently already exists in open source, it just needs to be integrated more deeply into distro package distribution tooling.

Shared libraries

Posted Nov 25, 2025 21:02 UTC (Tue) by jhoblitt (subscriber, #77733) [Link] (4 responses)

I'd also argue that 150MiB or even 1.5GiB package updates would not be a deal breaker. Those sizes are equivalent to streaming only a few minutes of 4K video.

Shared libraries

Posted Nov 26, 2025 5:46 UTC (Wed) by Heretic_Blacksheep (subscriber, #169992) [Link] (3 responses)

Cumulative. Imagine having to replace your entire 10-15 GB distro installation every couple of weeks. Then imagine multiplying that by hundreds of thousands of users and see how fast most distribution-infrastructure volunteers throw in their cards. We're not talking about streaming a movie from Netflix, a billion-dollar company with distributed media-cache servers on nearly every ISP's internal WAN. We're talking about Debian, which is volunteer-run with volunteered mirror resources. Backbone interconnect data service is a metered service for all ISPs. You want to waste your bandwidth, fine, but don't be so selfish as to volunteer the resources of others who are graciously sharing them, or who don't have resources to waste.

Not every end user is blessed with effectively unlimited 200+ Mbit service. Outside of the developed world, and even in some areas of the developed world, that's pretty rare. Even in the US, most residential ISP services have a soft 1 TB/month data cap, regardless of bandwidth, which has to be shared across all family members. The low-cost subsidized plans are usually capped at 25 Mb/sec with much lower data caps, and those are the very people Linux distributions would benefit the most, running recycled hardware because they can't afford fancy gear!

Also, people forget that the flash drives the vast majority of people use these days for storage have a finite lifetime, and that lifetime is steadily getting shorter as the number of bits per cell inflates. A typical drive rated for 300 TB of writes (the Samsung 970 EVO's rated lifetime) won't last nearly as long if everything were statically built as it would with a dynamically linked distro that only had to write a couple of gigabytes of updates a month. I've seen plenty of drives with even lower lifetime ratings on the consumer/enthusiast market, and there's no telling the rating on rebranded OEM units (Lenovo, Dell, etc). Dynamic linking leaves a lot more write cycles for user data, browser disk caches, system logging, and file indexing (bigger problems for SSD cell burn than rarely changed media files and such).

Shared libraries

Posted Nov 26, 2025 7:23 UTC (Wed) by Cyberax (✭ supporter ✭, #52523) [Link]

> Cumulative. Imagine having to replace your entire distro 10-15 GB installation every couple of weeks.

This is simply a ridiculously overblown comparison. The total size of non-kernel-related binaries on my desktop Linux computer is 1GB. And they don't _all_ need to change every time.

Assuming that ALL of them need to be replaced, that's already an order of magnitude less. But let's roll with it. An unmetered 10Gb/s port is around $200 a month, and an M.2 SSD can easily saturate it. If you want to serve static files, that's probably around a $300-a-month expense.

A 10Gb/s port can serve that 1GB in about a second. So 86,400 people daily, and over a month that's about 2.5 million people. This is more than the number of users of most small distros.
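Those figures can be sanity-checked with a few lines of arithmetic (assuming a 1 GB payload and a 10 Gbit/s port, as above):

```python
# Back-of-the-envelope check: a 1 GB set of binaries served over a
# 10 Gbit/s port, one full download per user.
port_bits_per_second = 10e9
update_bytes = 1e9

seconds_per_user = update_bytes * 8 / port_bits_per_second  # 0.8 s
users_per_day = 86_400 / seconds_per_user
users_per_month = users_per_day * 30

print(f"{seconds_per_user:.1f} s per user, "
      f"{users_per_month / 1e6:.1f} million users per month")
```

At 0.8 s per full download, one saturated port serves a bit over 100,000 full downloads per day, comfortably above the "2.5 million per month" figure.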

Shared libraries

Posted Nov 26, 2025 7:33 UTC (Wed) by mb (subscriber, #50428) [Link] (1 responses)

>10-15 GB installation every couple of weeks
>rated for 300 TB write/erase cycles

That's over 400 years of steady updates.

I've yet to see my first SSD failure due to the "small" lifetimes that one can read about everywhere all the time.

I've used them as system disks for about 15 years, and as disks that are continuously written to for the last 8. Not a single failure so far, except for one that was bad out of the box.

I don't say that failures can't happen, but so far SSDs are much *more* reliable than spinning rust for me.
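The estimate above is easy to sanity-check with quick arithmetic (taking the 15 GB-per-fortnight figure from the comment being replied to):

```python
# Rough check of the lifetime claim: a 300 TB write-endurance rating
# vs. rewriting a 15 GB installation every two weeks.
rated_writes_tb = 300
install_gb = 15
fortnights_per_year = 26

tb_written_per_year = install_gb / 1000 * fortnights_per_year  # 0.39 TB
years_to_wear_out = rated_writes_tb / tb_written_per_year

print(f"about {years_to_wear_out:.0f} years of updates")
```

Even this worst-case rewrite schedule lands comfortably past the "over 400 years" mark.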

Shared libraries

Posted Dec 4, 2025 17:00 UTC (Thu) by nye (subscriber, #51576) [Link]

Only actually ~500 full drive writes, but I've been impressed by Sandforce. Shame they went out of business. I've never seen an HDD last nearly this long.
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  5 Retired_Block_Count     0x0033   093   093   003    Pre-fail  Always       -       608
  9 Power_On_Hours_and_Msec 0x0032   099   099   000    Old_age   Always       -       132110h+35m+45.600s
 12 Power_Cycle_Count       0x0032   100   100   000    Old_age   Always       -       175
174 Unexpect_Power_Loss_Ct  0x0030   000   000   000    Old_age   Offline      -       81
231 SSD_Life_Left           0x0013   089   089   010    Pre-fail  Always       -       1
241 Lifetime_Writes_GiB     0x0032   000   000   000    Old_age   Always       -       32128
242 Lifetime_Reads_GiB      0x0032   000   000   000    Old_age   Always       -       79296

Shared libraries

Posted Dec 5, 2025 12:06 UTC (Fri) by ras (subscriber, #33059) [Link] (6 responses)

> Why would this be complicated? You just recompile stuff and ship it.

It's complicated because the upstream stuff isn't the same.

Debian Stable has one feature sysadmins of smaller places love: it's stable. If there is some security issue, Debian doesn't ship the new upstream with its new features and deprecations; they ship the same version with the security fix backported. That effectively means you can let unattended-upgrades do its thing in the background for the four-year lifetime of a typical Debian release without too many concerns. (You will get the occasional email about something that might require manual intervention, like a reboot for a kernel upgrade, but it's rare.)

Big shops probably don't see much value in this, as they have teams of programmers releasing new static binaries or docker images every few weeks. But for small shops it's a lifesaver. For example, it makes running a home mail/web/ssh/router server as a hobby feasible, because I only have to touch it once every four years. If you work in a place that has fewer than 10 people handling IT functions, that way of operating is the norm. I don't know much about LWN, for instance, but I'd bet that's how they manage their infrastructure. It's either that, or, god help you, you use microservices and outsource it all to a very well-paid provider.

To pull that off, Debian uses a small army of security volunteers watching incoming CVEs and doing the backporting work. I'm kinda amazed it works at all, but it does better than that: in my experience, Microsoft's security patches break far more servers than Debian's do.

One of the tricks Debian uses to make this work is insisting there is only one version of every library in use. Imagine what it would be like if you had 20 versions of glibc, gtk, libm, and every other library; the workload of the backporters would explode. Yet when I look in the Cargo.lock of your typical largish Rust program, that is exactly what I see. Worse, it's not just a case of different programs using different versions of the same library; it's often different versions of the same library used by one program! Kudos to Rust for making that work, I guess, but that wonderful experience of having "cargo build" just work most of the time looks to me to have laid the foundations for a security nightmare.

All that said, I'm not a huge fan of the current package model myself. I'd much prefer an immutable core, rather like macOS does (but much smaller), and default to complete isolation of the apps running on top of it, like Android does. It would be far more secure by default. But it's decades of work away, and in the meantime Rust's monomorphization model doesn't seem to be a good fit for how distros work today.

Shared libraries

Posted Dec 5, 2025 12:48 UTC (Fri) by Wol (subscriber, #4433) [Link] (1 responses)

> One of the tricks Debian uses to make this work is insisting there is only one version of every library in use. Imagine what it would be like if you had 20 versions of glibc, gtk, libm, and every other library; the workload of the backporters would explode. Yet when I look in the Cargo.lock of your typical largish Rust program, that is exactly what I see. Worse, it's not just a case of different programs using different versions of the same library; it's often different versions of the same library used by one program! Kudos to Rust for making that work, I guess, but that wonderful experience of having "cargo build" just work most of the time looks to me to have laid the foundations for a security nightmare.

Can/does Cargo complain when told to use several different versions of the same library?

Given the general "if it compiles it will work" nature of Rust, if a warning or error told you "this library is pulled in at several different versions in all these places", surely it would just take a bit of discipline to update the older references and a quick QA pass, and your "typical largish Rust program" would suddenly be rather smaller?

Cheers,
Wol

Shared libraries

Posted Dec 5, 2025 14:04 UTC (Fri) by farnz (subscriber, #17727) [Link]

Cargo itself will let you use multiple different versions of the same library in one program, without errors.

You can, however, use cargo-deny, which will tell you about multiple versions in use.

The issue, however, remains developer time. We all agree that using undermaintained versions of code is bad, but nobody is putting in the time to make sure that we're all on maintained versions. The only difference between the C and Rust ecosystems in that regard is that in Rust, the use of large numbers of small dependencies makes the problem easy to spot, whereas in C, spotting which parts of a big library (like glib) are undermaintained and which are cared for is hidden behind the fact that glib overall is cared for.
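For what it's worth, cargo-deny's duplicate-version check is driven by a config file; a minimal sketch might look like this (the commented-out skip entry is illustrative):

```toml
# deny.toml -- checked by running `cargo deny check bans`
[bans]
# Warn (or "deny") when the dependency graph contains more than one
# version of the same crate.
multiple-versions = "warn"
# Crates that are allowed to appear at several versions anyway.
skip = [
    # { name = "syn", version = "1" },
]
```

Run `cargo deny check bans` in the workspace to get a report of every crate that appears at more than one version in Cargo.lock.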

Shared libraries

Posted Dec 5, 2025 13:08 UTC (Fri) by Wol (subscriber, #4433) [Link] (1 responses)

> All that said, I'm not a huge fan of the current package model myself. I'd much prefer an immutable core, rather like macOS does (but much smaller), and default to complete isolation of the apps running on top of it, like Android does. It would be far more secure by default. But it's decades of work away, and in the meantime Rust's monomorphization model doesn't seem to be a good fit for how distros work today.

Unfortunately, I don't think that's a good fit with how linux development actually happens. Because everything happens independently, it's hard to do big releases. But yes, I agree with you.

The base OS - linux the kernel, systemd, basic utilities (similar in concept to MS-Dos).
The daemon set - sendmail and friends, MySQL and friends, etc etc
The windowing system - KDE/Plasma, Gnome, whatever
User applications.

And if each part were as self-contained as possible it would make life a whole lot easier. But it would require a "distro BDFL", and I don't think that's going to happen! It would probably take someone like Dell, or Red Hat, and the network effects are against it.

Cheers,
Wol

Shared libraries

Posted Dec 5, 2025 13:42 UTC (Fri) by pizza (subscriber, #46) [Link]

> Unfortunately, I don't think that's a good fit with how linux development actually happens. Because everything happens independently,

This point cannot be emphasized enough.

Shared libraries

Posted Dec 5, 2025 14:11 UTC (Fri) by daroc (editor, #160859) [Link]

As our HTTP headers state, LWN is currently hosted on CentOS Stream. But, as it happens, we've been looking into whether we need to change the site's hosting infrastructure due to some recent outages at our hosting provider. All the important configuration is theoretically kept in an ansible playbook that should allow us to change underlying systems without too much fuss — and if we do, Debian would certainly be a fine choice. But I believe the site has been on CentOS for the last several decades, and keeping to what is known to work also makes sense.

Shared libraries

Posted Dec 6, 2025 17:14 UTC (Sat) by ggreen199 (subscriber, #53396) [Link]

> All that said, I'm not a huge fan of the current package model myself. I'd much prefer an immutable core, rather like macOS does (but much smaller), and default to complete isolation of the apps running on top of it, like Android does. It would be far more secure by default. But it's decades of work away, and in the meantime Rust's monomorphization model doesn't seem to be a good fit for how distros work today.

There are new immutable distributions that work like this. For example, Fedora's Kinoite, which is what I use now: it has an immutable core, and you use flatpaks or containers for everything else. Still early days, but I've been using it all year with no problems.

Shared libraries

Posted Nov 25, 2025 6:39 UTC (Tue) by mb (subscriber, #50428) [Link] (50 responses)

>memory consumption of binaries

negligible

>program load time

Probably faster with statically linked binaries.

>configurability

What?

Shared libraries

Posted Nov 25, 2025 11:11 UTC (Tue) by euclidian (subscriber, #145308) [Link]

On memory and re-use: I did write a super-naive C linker/loader that would statically link all programs and share identical functions at load time (it took a hash over all the binary code in a function and all the other functions it called, and added them to an object store).

Theoretically, for basic cases, when a binary gets recompiled with the same static library you get the de-duplication of dynamic libraries plus inlining and versioning working (just losing the de-duplication where code was inlined).

I doubt it would ever work well enough for production use (the first load of a program), and I got sidetracked dealing with edge cases, but it might be something I should poke at again.

Shared libraries

Posted Nov 25, 2025 11:22 UTC (Tue) by LtWorf (subscriber, #124958) [Link] (47 responses)

Loading time and memory usage are larger with static linking, if we remember that one process per machine is hardly a common use case.

Shared libraries

Posted Nov 25, 2025 17:33 UTC (Tue) by ssokolow (guest, #94568) [Link]

You'll probably want to give "Do your installed programs share dynamic libraries?" a read if you haven't already.

Shared libraries

Posted Nov 25, 2025 18:01 UTC (Tue) by mb (subscriber, #50428) [Link] (39 responses)

>Loading time [..] are larger with static linking

Why would that be the case?

I remember the days twenty years ago when we had slow CPUs and we used prelink workarounds to reduce the runtime overhead of dynamic linking and noticeably improve startup time.
Has the dynamic-linking runtime overhead been reduced to almost zero since then? Yes, I know it has been reduced by some degree, so that it's not really noticeable anymore. But is it faster than static linking now? How can that be?

Shared libraries

Posted Nov 25, 2025 18:37 UTC (Tue) by joib (subscriber, #8541) [Link] (38 responses)

Perhaps faster in the sense it's more likely the pages from the shared library are already in memory and thus don't need to be paged in from disk? Particularly for some widely used library like libc; for the long tail of libraries used by a single or just a few applications less so.

Shared libraries

Posted Nov 25, 2025 19:16 UTC (Tue) by mb (subscriber, #50428) [Link] (36 responses)

dylibs are much bigger because they have to include everything, not only the bits that the application needs.
And they are scattered around the filesystem, which causes lookups and seeks.
I doubt that this can be faster on average.
Maybe it's faster for small applications where everything but the main binary is already in the page cache.
But I doubt it's true in a general sense.

Are there actual numbers from real-life examples that show that Rust startup times are slower due to static linking?
Are uutils slower than GNU coreutils, just because they are statically linked?

And libc is dynamically linked in Rust programs.

These days, in practice, probably neither dynamic nor static linking is slow, given extremely fast CPUs and SSDs.

Shared libraries

Posted Nov 26, 2025 4:35 UTC (Wed) by collinfunk (subscriber, #169873) [Link] (35 responses)

Actually, that is a great example. The way that Ubuntu builds them, at least, uutils has a much slower start-up time than GNU coreutils.

Here is an example:

$ podman run --rm -it ubuntu:25.10
$ time for i in $(seq 10000); do /lib/cargo/bin/coreutils/true; done
real 0m21.966s
$ time for i in $(seq 10000); do gnutrue; done
real 0m7.110s

This is quite important since these commands are executed very frequently. I believe they are looking into improving it.

Shared libraries

Posted Nov 26, 2025 5:42 UTC (Wed) by mathstuf (subscriber, #69389) [Link]

See this comment[1] where coreutils is "just as slow" when using a multi-call binary. The claim is that it is due to the monolithic library needing to load far more (usually unused) dynamic dependencies.

[1] https://lwn.net/Articles/1043239/

Shared libraries

Posted Nov 26, 2025 7:02 UTC (Wed) by mb (subscriber, #50428) [Link] (30 responses)

Yes, I know.
But the question was: Is this due to static linking?

Shared libraries

Posted Nov 26, 2025 11:21 UTC (Wed) by bluca (subscriber, #118303) [Link] (29 responses)

Yes, having huge binaries like Rust's means there is a large cost in loading them, as opposed to more efficient, smaller binaries using shared libraries that are already loaded in memory anyway. This is a well-known pitfall that, for some reason, a lot of people seem to have suddenly forgotten about.

Shared libraries

Posted Nov 26, 2025 17:08 UTC (Wed) by mb (subscriber, #50428) [Link] (27 responses)

We are talking about this:

>$ time for i in $(seq 10000); do /lib/cargo/bin/coreutils/true; done

Does this have a loading into memory cost of 1 or 10000?
I would be *very* surprised if this would read the binary from disk 10000 times.

Also please note that gnu-true doesn't use any shared library except for libc. Which is exactly the same in Rust.
Saying that gnu-true is faster than Rust-true due to dynamic linking is clear nonsense, because there is no difference w.r.t. dynamic linking between them.

If Rust-true is slower than gnu-true due to its bigger size, then this has nothing to do with dynamic or static linking.

Such small binaries are typically larger than their C counterparts, because the Rust std library is typically not rebuilt together with the program and therefore contains lots of unused code.

Shared libraries

Posted Nov 26, 2025 18:46 UTC (Wed) by bluca (subscriber, #118303) [Link] (26 responses)

> I would be *very* surprised if this would read the binary from disk 10000 times.

Depends. Do you have enough memory available and is the system otherwise idle? Or is it near capacity with no room to spare and higher priority processes running and saturating whatever cache is available?
It's the difference between real production systems and synthetic benchmarks.

> Also please note that gnu-true doesn't use any shared library except for libc. Which is exactly the same in Rust.

It is not, because in the Rust case you have the many-tentacled monster that is the Rust stdlib and whatever the cat, er, cargo dragged in that morning. So for coreutils most of the stuff really is in glibc, so it's all already mapped; in the uutils case you load most of it from scratch every time.

Shared libraries

Posted Nov 26, 2025 19:41 UTC (Wed) by farnz (subscriber, #17727) [Link] (25 responses)

Linux loads all ELF objects (binaries and libraries alike) on demand. If you run it in a loop, and it doesn't have enough memory to load the parts used for true and keep it in page cache, then that's the same whether it's dynamically linked or statically linked.

In both cases, all the binaries involved are simply mmaped into place, and normal demand paging takes care of reading them as needed.

Shared libraries

Posted Nov 26, 2025 20:46 UTC (Wed) by bluca (subscriber, #118303) [Link] (24 responses)

> In both cases, all the binaries involved are simply mmaped into place, and normal demand paging takes care of reading them as needed.

The point is, once again, that glibc and libssl and other core system libraries will always already be loaded on any given system. Your 100MB chunky rust binary won't.

Shared libraries

Posted Nov 26, 2025 20:51 UTC (Wed) by farnz (subscriber, #17727) [Link] (23 responses)

Not necessarily - any page that includes a relocation is not shared. Add in that any shared page of any core system library that's not actively in use can get dropped from the page cache (since the kernel knows it can reload it on demand), and you could well find (and indeed, I see on my laptop) that glibc pages get paged in when you start a dynamically linked binary because they're not in cache already.

Shared libraries

Posted Nov 26, 2025 21:15 UTC (Wed) by bluca (subscriber, #118303) [Link] (22 responses)

And? It's theoretically possible sure, but it's not a very interesting argument in the real world

Shared libraries

Posted Nov 26, 2025 21:17 UTC (Wed) by farnz (subscriber, #17727) [Link] (21 responses)

Well, you'd have to ask this commenter why they thought the Rust binary would be read in 10000 times, but the C binary would not.

Shared libraries

Posted Nov 26, 2025 21:54 UTC (Wed) by bluca (subscriber, #118303) [Link] (20 responses)

Because there are ~0% chances your 100MB rust binary is identical to any other binary/library already loaded. The chances of glibc being already loaded are ~100%. Of course if one wants to be pointlessly pedantic they can set up pointless artificial experiments triggering the opposite, but it doesn't really matter, spherical chickens in a vacuum are fun but nothing more beyond that

Shared libraries

Posted Nov 27, 2025 0:44 UTC (Thu) by Cyberax (✭ supporter ✭, #52523) [Link]

> Because there are ~0% chances your 100MB rust binary is identical to any other binary/library already loaded.

Unless it needs an nss module. Or it needs to be relocated.

Shared libraries

Posted Nov 27, 2025 9:06 UTC (Thu) by farnz (subscriber, #17727) [Link] (18 responses)

Firstly, as has already been said, I don't load the whole 100 MB at a time - the kernel demand pages the bits I need.

Secondly, and confirmed experimentally, significant chunks of glibc contain relocations, at least under Debian/aarch64. I find that with an eMMC, demand paging a 100 MB statically linked binary is faster than handling the relocations needed to dynamically link a library.

And note that this tradeoff would have been very different in the days when a 5400 RPM HDD was "fast". But technology has changed - and the speed with which I can demand page a binary is much higher than the speed with which my embedded CPU can process the relocations and associated page faults, along with also having to demand page in the unshared pages that it needs to do that.

Shared libraries

Posted Nov 27, 2025 10:21 UTC (Thu) by LtWorf (subscriber, #124958) [Link] (2 responses)

This sounds like a machine purpose built to win this benchmark honestly. Is it a device that one can buy? How many of this item exist? Should we optimise for this?

Shared libraries

Posted Nov 27, 2025 10:28 UTC (Thu) by farnz (subscriber, #17727) [Link]

It's an embedded Linux system - cheap parts throughout.

But, FWIW, this also applied to big servers at a hyperscaler - the cost of processing relocations outweighed the cost of paging in a bit more from the application binary from an SSD. Systems with HDDs only benefited speed-wise from dynamic linking, systems with SSDs benefited from static linking.

Shared libraries

Posted Nov 27, 2025 12:14 UTC (Thu) by malmedal (subscriber, #56172) [Link]

> Is it a device that one can buy?

Don't know which device farnz is talking about, but the description matches things like NanoKVM.

Shared libraries

Posted Nov 27, 2025 12:56 UTC (Thu) by bluca (subscriber, #118303) [Link] (14 responses)

It's the exact opposite. With an eMMC, going to the disk means you are dead in the water. There's no universe in which, with a slow eMMC (e.g. something half-duplex, single-channel), it's better to go to disk than to do memory operations. I have no idea what kind of weird device you have where loading stuff from disk is faster than loading stuff from RAM.

Shared libraries

Posted Nov 27, 2025 13:04 UTC (Thu) by farnz (subscriber, #17727) [Link] (13 responses)

This is measured on an Alliance eMMC device. Taking the same C program, and statically linking it, makes it faster to load than dynamically linking it, on the same processor.

I have no idea why you think that loading from eMMC (which is needed in the dynamically linked case, too, because the relocations have already been unshared, and the original data has to be reloaded), then doing the relocations, then doing more loading, is faster than just loading.

Shared libraries

Posted Nov 27, 2025 17:11 UTC (Thu) by bluca (subscriber, #118303) [Link] (12 responses)

Then... don't unload it? "Doctor, it hurts when I do this"

Shared libraries

Posted Nov 28, 2025 5:28 UTC (Fri) by Cyberax (✭ supporter ✭, #52523) [Link] (11 responses)

If you don't keep touching an executable (and multicall binaries are bad, mmkay?), then it'll likely be paged out by something. A multicall static binary is more likely to stay fully in cache, because it keeps getting touched.

Shared libraries

Posted Nov 28, 2025 10:25 UTC (Fri) by bluca (subscriber, #118303) [Link] (10 responses)

You _really_ need to go out of your way to get glibc and other core system libraries paged out on a system that is actually getting used. Of course, if it's all sitting idle doing nothing, then hardly anything matters.

Shared libraries

Posted Nov 28, 2025 10:43 UTC (Fri) by farnz (subscriber, #17727) [Link] (5 responses)

You really, really don't if you're RAM constrained (welcome to embedded!).

The kernel is not operating at the level of entire files; it's operating at the page level. Pages that contain relocations are only shared up until the relocation is overwritten by the dynamic linker; pages where the only data used at runtime is relocation metadata are only used up until all the relocations have been handled, at which point they become unused pages.

You can thus have 90% of glibc in RAM, but the critical parts for process startup are not, and you have to do small I/Os to get the missing pages.

Shared libraries

Posted Nov 28, 2025 11:01 UTC (Fri) by bluca (subscriber, #118303) [Link] (4 responses)

That can only happen if you somehow have a system that is both incredibly busy and incredibly idle, so that resources are fully saturated and things get aggressively evicted from the page cache, but the system is somehow completely static otherwise. I.e., you need to go out of your way to artificially create such a situation. On a normal, busy Linux system with socket-activated services, timer-activated services, dbus-activated services, etc., there are pretty much always processes being started and stopped. If new processes are started so _rarely_ that you don't even have glibc in the page cache, then obviously starting processes is not a bottleneck and it doesn't matter one way or the other anyway.

I'm not really sure why you are trying to conjure up such a contrived and unlikely example just to prove a point? It's not really working

Shared libraries

Posted Nov 28, 2025 11:08 UTC (Fri) by farnz (subscriber, #17727) [Link] (3 responses)

No - it happens when the system is continuously busy, but new process starting is on the "once every 15 minutes" scale, not "every few seconds".

And I do have glibc in the page cache - I just don't have the stuff that's only loaded on process starting in cache.

But I get it, my experience and yours don't match, so you're going to tell me I'm wrong, rather than address a real situation.

Shared libraries

Posted Nov 28, 2025 13:40 UTC (Fri) by LtWorf (subscriber, #124958) [Link] (2 responses)

I think that even if it does happen on some specially built devices with some very specific workloads, it would be counterproductive to optimise for that and make the common case slower for everyone else in the world instead.

Shared libraries

Posted Nov 29, 2025 20:53 UTC (Sat) by Cyberax (✭ supporter ✭, #52523) [Link] (1 responses)

But we still need to support obsolete hardware!

FWIW, static binaries help with responsiveness for most users. I encourage everyone here to try the fully static distro that I mentioned (oasis). It really feels more snappy.

Shared libraries

Posted Nov 29, 2025 22:08 UTC (Sat) by LtWorf (subscriber, #124958) [Link]

It's not "obsolete", it's just custom built. And it does work just fine with shared libraries for most workloads except whatever weird thing farnz is doing with it.

Feeling more snappy at 2 minutes after boot doesn't necessarily mean it's more snappy 5 days later.

Shared libraries

Posted Nov 30, 2025 1:28 UTC (Sun) by Cyberax (✭ supporter ✭, #52523) [Link] (3 responses)

> You _really_ need to go out of your way to get glibc and other core system libraries paged out

Nope. glibc is loaded at a random location, and when it needs to be linked into an executable, the dynamic linker needs to apply relocations to resolve the addresses. This information can easily be paged out, especially for rarely used binaries.

glibc by itself is not too large, but when you add other libraries like libstdc++, libz, libsystemd, and others, it starts adding up. This is compounded by libraries that use NSS plugins, because dlopen() requires new relocations each time (AFAIR?).

Shared libraries

Posted Dec 1, 2025 11:10 UTC (Mon) by paulj (subscriber, #341) [Link] (1 responses)

Does this imply the static binaries you advocate for are a lot less secure, because, other than perhaps the initial base address where the binary is loaded, all the other addresses are a known quantity? Does it not significantly reduce security by largely negating ASLR? (So far as ASLR provides security; I'm aware there is the odd bit of dissent on the merits.)

Shared libraries

Posted Dec 1, 2025 18:22 UTC (Mon) by Cyberax (✭ supporter ✭, #52523) [Link]

Not with the PIC (Position-Independent Code). The kernel itself does relocations when the binary is loaded.

Shared libraries

Posted Dec 1, 2025 13:16 UTC (Mon) by malmedal (subscriber, #56172) [Link]

> OS needs to do relocations

It would be good to properly measure how big this effect is. I did a very quick test running chrome under perf and immediately killing it when it finished starting up. On my machine with a hot cache ld-linux used 7% of cpu-time. This does not prove there is a problem, but it is indicative that it is worth investigating properly.

Shared libraries

Posted Nov 26, 2025 18:25 UTC (Wed) by Wol (subscriber, #4433) [Link]

That's assuming you load it!

Get the compiler/linker to segment object code into 4K blocks or whatever the figure is, map the file into ram, and only load those bits that are used.

Cheers,
Wol

Shared libraries

Posted Nov 26, 2025 8:34 UTC (Wed) by joib (subscriber, #8541) [Link]

true and false are probably the ones that show the largest difference in startup cost since they are so trivial, but OTOH they are both shell builtins so the actual /usr/bin/{true,false} are probably almost never used.

For more complex binaries I suppose it comes down to more open()'s, and more but smaller memory mappings for dynamic linking vs fewer bigger memory mappings for the static linking case.

Shared libraries

Posted Nov 26, 2025 8:37 UTC (Wed) by taladar (subscriber, #68407) [Link] (1 responses)

This can't be due to load times of the binary, since that would only apply to the first of your 10000 iterations, unless you want to claim that the filesystem cache discards it every time.

Shared libraries

Posted Nov 26, 2025 13:57 UTC (Wed) by mathstuf (subscriber, #69389) [Link]

As I pointed out in a sibling comment, it seems to be that a multi-call `true` loads up libraries needed for (say) `sha256sum` even if they're not used because even coreutils behaves this way with a multi-call binary.

Shared libraries

Posted Nov 25, 2025 19:32 UTC (Tue) by bluca (subscriber, #118303) [Link]

Precisely this - most likely you are only loading your process executable into memory; all its dependencies are already paged in. It's not something you notice on an x86 desktop with fast storage, but it's definitely something you notice on a resource-strapped ARM machine with a crappy eMMC. And even on x86, it's definitely something you notice when the resources you are using are resources you can't give to paying customers (e.g. a virtualization host). At that point, running a bunch of statically linked massive binaries, with all the extra memory they take, starts actually costing you money.

Shared libraries

Posted Nov 25, 2025 18:10 UTC (Tue) by Cyberax (✭ supporter ✭, #52523) [Link] (4 responses)

> 1 process per machine is hardly a common usecase.

I would argue that "few processes per machine" is one of the most common use-cases, because BusyBox systems are like that. And they probably outnumber all other Linuxes except Android.

Shared libraries

Posted Nov 25, 2025 21:52 UTC (Tue) by Wol (subscriber, #4433) [Link] (3 responses)

Given that the majority of computers are single-user, with said user running a single app full-screen, I'd say IN PRACTICE pretty much all computers are only running one app at a time from the user's PoV.

Cheers,
Wol

Shared libraries

Posted Nov 26, 2025 23:16 UTC (Wed) by LtWorf (subscriber, #124958) [Link] (2 responses)

Uh?

Processes in Linux keep running even if they don't own the window which has focus.

I don't understand what you're trying to say here.

Shared libraries

Posted Nov 27, 2025 8:03 UTC (Thu) by Wol (subscriber, #4433) [Link] (1 responses)

Speaking on behalf of the users - "What's a process?"

All the *user* cares about is the *single* application which is currently running.

I've lost the context, but IIRC it was something about how computers that only run one program at a time are very much a minority. From our PoV as computer guys, looking at all the stuff going on in the background, that may be true, but from the user's PoV pretty much *every* computer is *single use*.

It's glaringly obvious as soon as you realise, when you're talking about processes, that your typical computer user won't have a clue what you're talking about.

Cheers,
Wol

Shared libraries

Posted Nov 27, 2025 8:59 UTC (Thu) by LtWorf (subscriber, #124958) [Link]

They might not know what a process is, but they will know if their music stops when they aren't looking at the player, or if the chat notifications stop working.

Anyway, we all know here what a process is. Why do we care if some people who aren't participating in the discussion aren't aware of that? How is that relevant?

Multi-user vs. single-user

Posted Nov 28, 2025 10:01 UTC (Fri) by geert (subscriber, #98403) [Link]

It depends on the use case.
E.g. DEC Ultrix did not have shared libraries, but it did have demand paging and sharing of binaries. On a typical multi-user system at that time, all users ran sh, vi, matlab, and mosaic, which thus could be shared.
On a modern single-user system, the user runs a few large applications, which cannot share much, unless they use the same shared libraries.

Shared libraries

Posted Nov 26, 2025 0:43 UTC (Wed) by carlosrodfern (subscriber, #166486) [Link]

> negligible

Wouldn't that depend on the use case? On servers, probably not a big deal; on the desktop, I'm not sure.

>program load time

Overall, a larger number of major page faults.

> What?

Being able to replace implementations dynamically, with optimizations targeted to the use case.

Those reasons, and the other ones I mentioned, were the observations of many people back when the majority of systems were statically linked; they eventually motivated the OS field's switch to favoring DSOs.

Shared libraries

Posted Nov 26, 2025 1:59 UTC (Wed) by mathstuf (subscriber, #69389) [Link]

Wasn't there a "stali" distro that aimed to do this? IIRC, they actually claimed "smaller" binaries because you could drop the parts of libraries no one used.

Shared libraries

Posted Nov 24, 2025 19:27 UTC (Mon) by Cyberax (✭ supporter ✭, #52523) [Link] (14 responses)

Yeah, just imagine a project with hundreds of components that have no stable ABI between them, and even use dynamic loading to mask that mess. It's a versioning nightmare; you can't do pinpoint updates of individual components without recompiling everything.

Shared libraries

Posted Nov 24, 2025 23:14 UTC (Mon) by bluca (subscriber, #118303) [Link] (13 responses)

Sounds like a very well-thought-out project - of course, as long as most of the code is shared and the hundreds of components are small, so that memory usage, disk usage, and loading times are all minimized. In fact, that's how pretty much all Linux distributions of good quality are set up nowadays - many, many individual programs with little in common, sharing a common subset of dynamic libraries.

Shared libraries

Posted Nov 25, 2025 0:51 UTC (Tue) by Cyberax (✭ supporter ✭, #52523) [Link] (12 responses)

But how do you make sure that a "Component A" works if the "Library B" was replaced while "Component A" is running? Especially for daemons that can stay up for a long time.

It's so much better to precompile everything into "Component A", so that it need not care if anything on disk changes.

Shared libraries

Posted Nov 25, 2025 6:48 UTC (Tue) by koflerdavid (subscriber, #176408) [Link] (4 responses)

Is that really such a big issue with existing packaging? Library B should still be resident in memory and therefore Component A will keep running. If Component A is incompatible with the new version of Library B then Component A needs to be updated and restarted as well of course.

Atomic distributions handle this by creating a new file system image in the background, and the user boots into the updated system.

Shared libraries

Posted Nov 25, 2025 9:06 UTC (Tue) by taladar (subscriber, #68407) [Link] (3 responses)

From practical experience on both my desktop (Gentoo) and various servers (Debian and RHEL), crashes and misbehaving programs due to updated dependencies happen all the time; I encounter one at least 1-2 times a week. Granted, it is not always the library itself that is the problem; often it is files that weren't in memory at all at the time of the update, which the library assumes it can load. But the end result is the same.

Shared libraries

Posted Nov 25, 2025 14:37 UTC (Tue) by NightMonkey (subscriber, #23051) [Link] (1 responses)

From my experience, Gentoo does a great job of recompiling packages that need shared library updates. :shrug: Perhaps you need to change your upgrading patterns? :)

For example, I use this to upgrade religiously:

emerge -uDNv --with-bdeps=y @system @world --keep-going --jobs --load-average 8

Shared libraries

Posted Nov 26, 2025 9:08 UTC (Wed) by taladar (subscriber, #68407) [Link]

My update command is quite similar, only I don't use binary dependencies. But what does that have to do with the problem I talked about: programs not being restarted, and the distro tools themselves not mentioning the required restart?

Shared libraries

Posted Nov 25, 2025 15:26 UTC (Tue) by ballombe (subscriber, #9523) [Link]

This is very unusual; otherwise Debian would be flooded with bug reports.

Shared libraries

Posted Nov 25, 2025 6:53 UTC (Tue) by josh (subscriber, #17465) [Link] (6 responses)

> But how do you make sure that a "Component A" works if the "Library B" was replaced while "Component A" is running?

Whether you're dealing with a replacement of component A or of library B, either way you *always* write to a temporary file and rename it over the original, so that the old inode still exists as the source of the mmap'd code, and then restart A. Writing over the original in place would cause segfaults.
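
The pattern can be sketched in a few lines of shell (the function name is made up; package managers do the equivalent in C):

```shell
# Replace $dst with the contents of $src without ever truncating the
# old inode: write a temporary file in the same directory (rename(2)
# is only atomic within one filesystem), then rename it into place.
# Processes that already mmap'd the old file keep the old inode; only
# new open() calls see the new one.
replace_atomically() {
    src=$1 dst=$2
    tmp=$(mktemp "$(dirname "$dst")/.replace.XXXXXX") || return 1
    cp "$src" "$tmp" || return 1
    chmod 755 "$tmp"
    mv -f "$tmp" "$dst"   # rename(2) under the hood: atomic swap
}
```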

Shared libraries

Posted Nov 25, 2025 9:08 UTC (Tue) by taladar (subscriber, #68407) [Link] (2 responses)

Oddly enough, none of the distros (like Debian) include a core component that checks which "A" needs to be restarted after upgrades. We have been using needrestart for that for years, but it seems like a problem none of the distros even attempt to solve.

Shared libraries

Posted Nov 25, 2025 12:03 UTC (Tue) by draco (subscriber, #1792) [Link]

Fedora Server edition installs Cockpit, which uses needrestart. Though it still waits for the admin to invoke the restarts.

Shared libraries

Posted Nov 25, 2025 19:18 UTC (Tue) by bluca (subscriber, #118303) [Link]

Shared libraries

Posted Nov 25, 2025 18:12 UTC (Tue) by Cyberax (✭ supporter ✭, #52523) [Link] (2 responses)

Won't help. The binary of "Component A" might not have loaded the library at the time of the replacement.

This is not theoretical; that's how new systemd behaves: it loads libraries dynamically using dlopen().

Shared libraries

Posted Nov 26, 2025 9:14 UTC (Wed) by taladar (subscriber, #68407) [Link] (1 responses)

This is not just a library issue either, it can also happen with data files or helper binaries that communicate via entirely different interfaces (e.g. browser tab helper processes).

Depending on the design of the program, it might even happen when you make a config change: some more recently loaded part of the whole package (library, binary, ...) reads the new configuration while the longer-running parts use the old one.

Sometimes that is even deliberate, as part of a reload mechanism where a new binary loads the new config and then receives file handles for sockets and similar resources from the old binary; so even using something like a snapshot of the filesystem from the time the first part of the package started wouldn't be a universal solution.

Shared libraries

Posted Nov 26, 2025 18:43 UTC (Wed) by Cyberax (✭ supporter ✭, #52523) [Link]

Indeed. But we actually have pretty good "ABI layers" for IPC. Everything from OpenAPI to Protobuf/CapnProto descriptions.

Shared libraries

Posted Nov 24, 2025 19:22 UTC (Mon) by ibukanov (subscriber, #3942) [Link] (8 responses)

The big difference between C++ and Rust is that the former has a stable ABI while the latter lacks one. Of course, even with Rust one can expose things across shared libraries using the C ABI, but then Rust code calling such a C-based API will have to use unsafe even when the implementation is fully safe. With C++, if one avoids templates, one can use a class-based API, including support for virtual functions that can be called across shared-library boundaries.

Shared libraries

Posted Nov 24, 2025 20:20 UTC (Mon) by ojeda (subscriber, #143370) [Link]

There is no standard C++ ABI, though vendors try to help to some degree.

As for unsafe calls, that is the same as in C++, i.e. every call is unsafe.

By the way, in Rust you can easily specify nowadays that an external function is safe, e.g.

unsafe extern "C" {
    safe fn f();
}

Shared libraries

Posted Nov 24, 2025 20:20 UTC (Mon) by ebee_matteo (subscriber, #165284) [Link] (5 responses)

> the former has stable ABI

Except when it hasn't.

ARMv5 ABI changed after GCC 7 (we all love our -Wno-psabi).

C++11 also broke ABI in several ways. See GCC 5 and the libstdc++ versioning fiasco. `_GLIBCXX_USE_CXX11_ABI` for the win.

GCC 11 broke ABI with GCC 10 due to std::span.

jmp_buf has different ABI for s390 after glibc 2.19.

I can cite more.

Yes, C++ has slightly better ABI guarantees than Rust, but mostly just because its usage is widespread enough, across so many decades, that it came to be that way /de facto/ after people spent years fighting with ABI problems.
In fact, I am not aware of a standardised ABI for C++ at all.

And as other people have pointed out, you still have the issue of macros and templates to solve when you use the C++ headers.

C is the closest we have to a stable ABI, assuming the same macros are defined at the time of inclusion.

And you can write Rust programs exporting C mangled functions, and that works just fine also to produce shared libs. But that's the best you can do as of today.

I guess at some point the pressure will be enough for Rust to standardize something resembling an ABI, but the widespread use of monomorphization makes it extremely tricky to do. C++ already had enough problems with the infamous "export template" feature of C++98, and now with C++ modules, which, years after standardization, mostly still do not work.

Shared libraries

Posted Nov 24, 2025 22:48 UTC (Mon) by randomguy3 (subscriber, #71063) [Link] (1 responses)

Just to add to the list, we had cause to discover at work that appleclang had a subtle C++ ABI break between two minor versions of its compiler (I think it was between 14.0.0 and 14.0.3).

Shared libraries

Posted Nov 24, 2025 23:06 UTC (Mon) by ballombe (subscriber, #9523) [Link]

Given Apple's track record, at this point what would be noteworthy is an Xcode release that does not introduce new bugs. A subtle ABI breakage is the minimum to expect.

Shared libraries

Posted Nov 25, 2025 17:48 UTC (Tue) by ssokolow (guest, #94568) [Link] (2 responses)

> I guess at some point the pressure will be enough for Rust to standardize something resembling an ABI

Work is already in progress: see the tracking issue for the experimental crabi ABI.

extern "crabi" is intended to be a higher-level alternative to extern "C".

Shared libraries

Posted Nov 25, 2025 20:04 UTC (Tue) by khim (subscriber, #9252) [Link] (1 responses)

Is it really “in progress” if the last message is more than a year old?

I would say “there's a dream” at this point, not “there's progress”.

Shared libraries

Posted Nov 25, 2025 20:43 UTC (Tue) by ssokolow (guest, #94568) [Link]

Fair point.

Shared libraries

Posted Nov 25, 2025 17:55 UTC (Tue) by ssokolow (guest, #94568) [Link]

> Of course, even with Rust one can expose things across shared libraries using the C ABI, but then Rust code calling such a C-based API will have to use unsafe even when the implementation is fully safe.

I don't remember seeing unsafe use in the examples for the abi_stable crate.

It's a Rust-to-C-to-Rust binding generator for making things like .so-based plugin systems, similar to how PyO3 lets you write code to interface Rust and Python while it takes care of generating the unsafe bits and wrapping them up in invariant-enforcing abstractions.

There are a lot of binding helpers of that sort for Rust.

Shared libraries

Posted Nov 25, 2025 11:20 UTC (Tue) by nim-nim (subscriber, #34454) [Link] (15 responses)

> If distros want ecosystems to be more friendly to them, they need to put in the (large) amount of work to make that happen.

Why should they? The same developer-friendly argument was made for Java software, the same refusal to invest in a mechanism to share components and stabilise ABIs was advanced by Java developers, the same hostility to distribution best practices was trumpeted right and left.

Fast forward twenty years: the technical debt came due, and no one can leave the Java boat fast enough. Turns out, refactoring vast piles of vendored, forked, and obsolescent code, with no clear lines of demarcation because no one enforced ABI separation for a long time, is completely unappealing. You can ignore problems for a long time; they come back with a vengeance.

Shared libraries

Posted Nov 25, 2025 13:46 UTC (Tue) by khim (subscriber, #9252) [Link] (12 responses)

> Fast forward twenty years the technical debt come due and no one can leave the Java boat fast enough.

You live in some imaginary universe. In our universe, Java is the number three language, behind JavaScript and Python but ahead of PHP; it's used by the most popular OS, and no one thinks about abandoning it… sure, people like to grumble about Java problems… they use Java nonetheless.

> You can ignore problems a long time they come back with a vengeance.

Isn't that what you are doing here?

Shared libraries

Posted Nov 25, 2025 17:45 UTC (Tue) by ssokolow (guest, #94568) [Link] (11 responses)

I think it's just that, because Java originally shaped its APIs toward "write once, run everywhere" and away from tight platform integration, and originally had a restrictive license, it's not very visible on desktop Linux.

...in the same way that C# is the language people are most likely to reach for on Windows because it's very well suited to interfacing with Windows-specific APIs, people are likely to reach for C, Perl, Python, etc. on Linux because they're well suited to interfacing with POSIX and Linux-specific APIs.

Web servers, specialty applications, Android apps... none of these grab your attention on a Linux desktop. The only Java apps I can think of having encountered on Linux are Minecraft, JDownloader, Azureus, Trang, FreeMind, and the control software for my KryoFlux.

Shared libraries

Posted Nov 25, 2025 18:22 UTC (Tue) by nim-nim (subscriber, #34454) [Link] (10 responses)

> I think it's just that, because Java originally shaped its APIs toward "write once, run everywhere" and away from tight platform integration, and originally had a restrictive license, it's not very visible on desktop Linux.

It’s not “just that”. The restrictive licensing was lifted a long time ago. There are loads of available FOSS Java components. There are lots of people who know how to code in Java.

What kills Java as a general-purpose language, one that could be used to build generic components ending up in generic software deployed in generic distributions, is the lack of ABI enforcement, resulting in a lack of version convergence. Life is just too short to delve into the pile of specific component versions each and every Java project uses. Each project is effectively a walled garden, its own component universe, hostile to other component universes.

Shared libraries

Posted Nov 25, 2025 18:29 UTC (Tue) by khim (subscriber, #9252) [Link] (9 responses)

> What kills Java as a general purpose language that could be used to build generic components that could end up in generic software deployed in generic distributions is lack of ABI enforcement resulting in lack of version convergence.

Except it doesn't “kill Java”. It kills “generic distributions”. I'm not even sure how they are relevant, in the grand scheme of things: the only reason they survived till now was that people needed them to develop server software. But with Docker on macOS and WSL, they are no longer needed even for that.

> Each project is effectively a walled garden, its own component universe, hostile to other component universes.

You mean: exactly and precisely like a Linux distro? Pot, meet kettle, I would say.

Shared libraries

Posted Nov 28, 2025 11:24 UTC (Fri) by nim-nim (subscriber, #34454) [Link] (8 responses)

Yes, precisely like a Linux distro.

And you can say “well, I don't need your stinking distros”, but then just show how you can assemble a full system. Distros proved they could do it, from the bare metal to the apps above. Distro opponents have not managed to do it. (And when they attempt it, the result looks like a distro reinvention.)

Shared libraries

Posted Nov 28, 2025 11:38 UTC (Fri) by khim (subscriber, #9252) [Link] (7 responses)

> Distro opponents have not managed to do it.

Seriously? Can you find another, more believable, delusion? The most popular OS on the planet shows us how that can be done.

You may claim that there are a bazillion Linux distros and only one successful alternative, but that's not entirely true: there were others (Tizen, Maemo), they just proved not to be very interesting… ironically enough, one of the worst problems they had was precisely that they broke compatibility too much.

> And when they attempt to do it it looks like a distro reinvention

Not really. It's like saying that a car and a plane are the same thing because they both use a Rolls-Royce engine.

The critical difference between a distro's “steaming pile of packages” and a proper OS is an SDK: something that you can use to build your binary and, most importantly, to target different versions of the OS simultaneously. Here are downloads for Android. These are for Windows. macOS's are the shittiest ones, but yes, they exist.

Where can I find something like that for any popular distro? If the answer is “you should just install a bazillion versions in Docker on CI”… then there's your answer for why things work as badly as they do with Linux on the desktop.

Shared libraries

Posted Nov 29, 2025 16:32 UTC (Sat) by anselm (subscriber, #2796) [Link] (3 responses)

> The critical difference between distro's “steaming pile of packages” and proper OS is an SDK: something that you can use to build your binary and, most importantly, target different versions of OS simultaneously.

Chances are, if you're developing software for Linux, you probably want to target more than one Linux distribution anyway. At that point, relying on an “SDK” provided by any one distribution likely wouldn't be enough – it would be weird to expect the “Debian SDK” to spit out RPM packages for RHEL or openSUSE, after all, and vice versa – and using CI to build more than one flavour of your code doesn't look all that unreasonable anymore. Been there, done that; it's not exactly rocket science.

Shared libraries

Posted Nov 29, 2025 16:48 UTC (Sat) by khim (subscriber, #9252) [Link] (1 responses)

> Chances are, if you're developing software for Linux you probably want to target more than one Linux distribution, anyway.

Why is that a problem? Go back a quarter century… Windows 9x and Windows NT were significantly more different than Linux distros are from each other… yet creating one binary that worked on both was the norm. Supporting Windows 3.x (with Win32s) was harder, but still possible.

These days that support is often done with the help of Docker and, now, Flatpaks. I'm not sure how well it works with Flatpaks, but Docker for server packages works reasonably well.

> it would be weird to expect the “Debian SDK” to spit out RPM packages for RHEL or openSUSE, after all, and vice-versa

Yes, that's why people invented something that works for both. Many things, in fact. If GNU/Linux couldn't give them a usable SDK, then they would pick someone who would… Valve, e.g.

But then distros spent decades punishing them for that.

The only thing that achieved is the development of open source ecosystems that consider distros their enemy, or, at best, “the necessary evil”.

Rust has just turned into a reveal show, because distros came to the Rust developers with demands they needed fulfilled… and got back a question they hadn't expected at all: “why should we care?”.

They have gotten certain answers, ultimately, via the Rust-in-Linux route: certain developers want to use the distro-provided version of Rust, and that means certain features need to be included in Rust at certain times… that got acknowledgement, even if not full acceptance. Everything else? Distros have created the problem for themselves by introducing some strange arbitrary rules, so distros may decide how to resolve them… developers of most Rust packages are just too busy doing useful work with people who don't invent random crazy hoops; they don't have time for the distro-invented circus.

Shared libraries

Posted Nov 29, 2025 17:39 UTC (Sat) by pizza (subscriber, #46) [Link]

> The only thing that achieved is the development of open source ecosystems that consider distros their enemy, or, at best, “the necessary evil”.

Those ecosystems decided that support for non-Linux environments was required, so they had to take a different approach.

If you have to reinvent the wheel completely anyway, why not just use that new wheel for Linux systems?

The smarter ones incorporated the lessons learned from distros, but the not-so-smart ones (Python comes to mind) are still paying the price.

But as the saying goes, with sufficient thrust even a pig will fly. And thanks to the AI craze Python has a *lot* of well-funded thrust.


Shared libraries

Posted Dec 1, 2025 18:54 UTC (Mon) by mathstuf (subscriber, #69389) [Link]

Eh, we make tarballs that extract to a standard installation prefix. The binaries are relocatable, so we just need to make sure we build on a suitably old distro (currently CentOS 8) and document the libraries we assume exist (X11, some libxcb libraries, etc.). It works well enough. But yes, if you don't want to ship everything above the "bare bones", you'll need a per-distro package.

Shared libraries

Posted Dec 1, 2025 8:58 UTC (Mon) by nim-nim (subscriber, #34454) [Link]

The other solutions work when you have a rich monopolistic sugar daddy to pay for the base OS, because those solutions are totally ineffective for building that base OS and require the injection of loads of capital and management directives to achieve the same result.

A lot of us are not interested in hanging FLOSS decorations on a monopolistic closed system.

Shared libraries

Posted Dec 5, 2025 3:42 UTC (Fri) by dvdeug (subscriber, #10998) [Link] (1 responses)

> The critical difference between distro's “steaming pile of packages” and proper OS is an SDK: something that you can use to build your binary and, most importantly, target different versions of OS simultaneously.

It's relatively easy to do that on Windows, as Microsoft controls all the systems. It's not so easy with Linux or *BSD, as there are multiple different forks you have to deal with. Flatpaks ultimately do what Windows does: ship all dynamic libraries with the program. I don't see any reason not to accept that you can do the same with a Flatpak.

(Note that most people don't distribute native binaries for Android; the system compiles the APK into a native format. There are a bunch of "binary" formats, which may or may not take an installation step, that you can run on different versions of Linux, ranging from well-designed compiled code to any number of interpreters to APKs and JARs. Ren'Py programmers distribute one binary package for Windows and Linux all the time.)

None of those are proper OSes; a proper OS has no users, so there are no problems. Real systems have problems and make compromises. Distributing the same binary to versions of Linux from many eras and many vendors was not one of the driving goals behind Linux, so the compromises it demands weren't made. Note that Windows binaries, including ones built from post-2010 .NET, don't generally run across architectures; ARM Windows will emulate x86 binaries, but the x86-64 emulator is a lot more fragile and experimental, and x86-64 Windows won't run ARM Windows binaries.

So, yes, Linux can do it, using flatpaks or other options, no, Windows can't do it if you want to use a modern Windows SDK and target both ARM Windows and x86 Windows at the same time.

Shared libraries

Posted Dec 5, 2025 9:28 UTC (Fri) by taladar (subscriber, #68407) [Link]

If you really want to, you can run Linux binaries from other architectures relatively transparently using qemu-user and binfmt_misc. Of course, you still need a full set of libraries for them, just as you would if you ran the code on that platform. Mostly it is useful for things like chroots with mounted SD cards for low-power embedded devices, where the emulation on your PC is still much faster and has more RAM available to compile stuff.

Shared libraries

Posted Nov 25, 2025 18:47 UTC (Tue) by Cyberax (✭ supporter ✭, #52523) [Link] (1 responses)

> Why should they ? The same developer-friendly argument was made for Java software, the same refusal to invest in a mechanism to share components and stabilise ABIs was advanced by Java developers

Whut?!? Java is one of the _best_ examples of a stable API/ABI. I can launch software written for JVM 1.3 in 2001 on the most modern JVM, as long as it doesn't use stuff like "Unsafe". Even Swing and AWT are still available!

This absolutely helped with Java's popularity. Banks and enterprises _love_ not having to update the code every year.

Shared libraries

Posted Dec 1, 2025 9:08 UTC (Mon) by nim-nim (subscriber, #34454) [Link]

Correction: the integrators that work for banks *love* a system where they are paid to reinvent the wheel for every deployment, and love the maze of custom component versions that prevents their customers from auditing deliverables and noticing when they have been sold a leaky, obsolescent mess.

Of course, the same lack of ABI convergence that makes those languages perfect for reselling a mess at inflated prices to banks makes them totally inadequate for cooperative OS building, where inefficiencies are not an occasion to invoice more but a direct cost in volunteer effort.

Shared libraries

Posted Dec 5, 2025 1:46 UTC (Fri) by dvdeug (subscriber, #10998) [Link] (1 responses)

> None of these languages are being developed or funded by distros.

And vice versa: none of these distros are being developed or funded by languages. In many cases, there are people working on both a distribution and a programming language. Debian is an all-volunteer distribution that doesn't have money to really throw at other projects, and if it did, it would probably encourage you to donate to those other projects directly.

> If distros want ecosystems to be more friendly to them, they need to put in the (large) amount of work to make that happen.

If Debian picked up its toys and went home, a vast number of people and companies who never did work for or donated money to Debian would be left high and dry. If you want to make it antagonistic, ecosystems who want to continue using Debian for their servers should start paying for an OS license.

Shared libraries

Posted Dec 5, 2025 2:16 UTC (Fri) by Cyberax (✭ supporter ✭, #52523) [Link]

Distros don't have that much leverage, though. At this point, a vendor can probably switch to another distro reasonably quickly.

There are several options to choose from.

Shared libraries

Posted Nov 24, 2025 17:50 UTC (Mon) by farnz (subscriber, #17727) [Link]

In the short term, there are experiments like stabby and abi_stable looking at what it means to provide a well-defined ABI for a shared library written in Rust and intended to be consumed by other Rust programs.

There's also work coming from the other direction, of providing a way to deliberately indicate that you intend something to be ABI, and widening the number of things that have a stable ABI, which will hopefully meet the efforts to determine what a stable ABI definition "should" look like in the middle.

Unfortunately, all this takes time, motivation, and a lot of work; without more people helping, I could see it taking some time to get there.

Shared libraries

Posted Nov 24, 2025 18:48 UTC (Mon) by hunger (subscriber, #36242) [Link] (7 responses)

> C++ support shared libraries and rust could in principle support them too.

Does it? Yes, it works most of the time, but that is by luck, not by design.

The headers used to build a binary contain lots of code that gets baked into the binary (e.g. all templates). If any of that changes in the next version of the library, you can spend fun times debugging crashes as the code baked into the binary from the old version suddenly fails to use some symbol in the new library.

There is a reason why most distros rebuild binaries when the dependencies change.

Yes, Rust could do the same. Rust has a different culture, so it won't.

Shared libraries

Posted Nov 25, 2025 2:22 UTC (Tue) by Elv13 (subscriber, #106198) [Link] (3 responses)

The same is true for C, but the ABI surface is smaller. `enum SecurityMode {LEGACY, SECURE, DISABLED};` sometimes accidentally becomes `enum SecurityMode {LEGACY, SECURE, SUPER_STRICT, DISABLED};` in one of your dependencies' point releases. Some libraries have a vast number of headers, beyond what a human can review. The API is compatible, but you have just silently introduced a severe security regression in any package not rebuilt. Thankfully, this is a tooling issue, and `libabigail` exists for C/C++, so most distributions have the means to track what needs to be rebuilt if they integrate those tools. Template code makes the problem worse, but it is not solely a C++ problem.
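
The renumbering hazard is easy to show; the V1/V2 names below are illustrative stand-ins for the header before and after the point release:

```c
#include <assert.h>

/* The header an old, already-shipped binary was compiled against. */
enum SecurityModeV1 { V1_LEGACY, V1_SECURE, V1_DISABLED };

/* The header after the point release inserts SUPER_STRICT. */
enum SecurityModeV2 { V2_LEGACY, V2_SECURE, V2_SUPER_STRICT, V2_DISABLED };

/* An old binary passes V1_DISABLED (the value 2) across the ABI; the
 * new library decodes 2 as V2_SUPER_STRICT. The types still compile,
 * the API is unchanged, and the regression is silent. */
int v1_disabled_still_means_disabled(void)
{
    return (int)V1_DISABLED == (int)V2_DISABLED;
}
```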

I am not familiar with the tooling Rust has to track ABI breakages, but I assume it could be handled using tooling rather than trying to maintain a stable shared-library ABI across versions.

Shared libraries

Posted Nov 25, 2025 2:46 UTC (Tue) by khim (subscriber, #9252) [Link] (2 responses)

> I assume it could be handled using tooling rather than try to maintain a stable shared library ABI across versions.

Not really. One example: let's convert your enum SecurityMode {LEGACY, SECURE, DISABLED}; to Rust and add Option<…> wrapping. And now look at how different versions of Rust treat that. Nice, isn't it? The same effect you just described, but without any source changes, just a different compiler. And no, release notes wouldn't save you, either; there is nothing in them about this change.
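The layout behavior behind this point can be observed directly: the compiler packs Option<SecurityMode> into a single byte by reusing one of the enum's unused bit patterns (a "niche"), and the layout of any default-repr Rust type is explicitly not guaranteed to stay the same across compiler versions. A minimal sketch:

```rust
use std::mem::size_of;

// The three variants occupy discriminants 0..=2, leaving 253 unused bit
// patterns in the byte. The compiler uses one of them to encode None, so
// Option<SecurityMode> needs no separate tag byte. Which niche is picked,
// and the layout in general, is an unstable implementation detail.
enum SecurityMode { Legacy, Secure, Disabled }

fn main() {
    assert_eq!(size_of::<SecurityMode>(), 1);
    // Same size with the Option wrapper: niche optimization in action.
    assert_eq!(size_of::<Option<SecurityMode>>(), 1);
}
```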

> I am not familiar with the tooling Rust has to track ABI breakages

Easy: it doesn't exist. cargo-semver-checks is very thorough, but it only tracks source compatibility. Never binary. A stable ABI doesn't exist, period.

There was some interest in the development of such an ABI, but the effort has stalled.

Shared libraries

Posted Nov 25, 2025 9:12 UTC (Tue) by taladar (subscriber, #68407) [Link] (1 response)

Considering the number of edge cases the cargo-semver-checks author constantly documents on his blog even at the API level I doubt an approach that would satisfy Rust's high standard for correctness will ever exist at the ABI level.

It mostly works in C and C++ since those seem to have much lower standards for what they consider 'working'.

Shared libraries

Posted Nov 25, 2025 13:49 UTC (Tue) by khim (subscriber, #9252) [Link]

With the Swift approach (roughly: make dyn Trait as capable as impl Trait at the cost of execution speed) there would be no material difference between ABI stability checks and API stability checks.

Sure, it would be a bit of work to provide a stable ABI and most crates wouldn't bother, but if someone wants to create a "Rust platform" (similarly to how iOS and macOS are "Swift platforms") then it's perfectly doable, if costly.

Shared libraries

Posted Nov 25, 2025 11:37 UTC (Tue) by SLi (subscriber, #53131) [Link] (2 responses)

> There is a reason why most distros rebuild binaries when the dependencies change.

The claim that this is not sustainable for Debian also seems strange, given that a lot of distros do manage to do it (including non-commercial ones like NixOS).

Shared libraries

Posted Nov 25, 2025 15:02 UTC (Tue) by intelfx (subscriber, #130118) [Link] (1 response)

> The claim that this is not sustainable for Debian also seems strange, given that a lot of distros do manage to do it (including non-commercial ones like NixOS).

NixOS is only managing to do it because commercial sponsors dump relatively huge money into operation of their CI and binary cache.

Same also goes for other "non-commercial" distros — if you look closer, you'll find they all have commercial sponsors subsidizing the infrastructure.

Shared libraries

Posted Nov 25, 2025 18:49 UTC (Tue) by Cyberax (✭ supporter ✭, #52523) [Link]

Gentoo also exists, btw. And it doesn't appear to have many corporate sponsors.

Shared libraries

Posted Nov 25, 2025 0:34 UTC (Tue) by pabs (subscriber, #43278) [Link] (10 responses)

Rust already supports shared libraries (dylib), IIRC the ABI is not yet stable though.

https://doc.rust-lang.org/reference/linkage.html

The problem though is the culture of the Rust ecosystem; much of it prefers static linking, dislikes distros and probably would reject patches to introduce dylibs for each package.

Shared libraries

Posted Nov 25, 2025 4:14 UTC (Tue) by xnox (subscriber, #63320) [Link] (9 responses)

It is somewhat limited in its use.

It doesn't provide a stable ABI; one can use it to share code across multiple related binaries, think of a private .so.

It is also unsafe and removes type checking, which defeats the point of Rust to begin with.

Shared libraries

Posted Nov 25, 2025 10:55 UTC (Tue) by joib (subscriber, #8541) [Link] (8 responses)

If it's just across multiple related binaries, wouldn't a better approach be to use a multi-call binary like AFAIU uutils is using (or busybox for an example in C land)?
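The multi-call pattern mentioned above (as used by busybox and, as I understand it, uutils) can be sketched in a few lines of Rust: one binary is installed under several names via symlinks and dispatches on the name it was invoked as (argv[0]). The applet names here are illustrative:

```rust
use std::path::Path;

// Map the basename of argv[0] to an applet. In a real multi-call binary
// each arm would run the corresponding tool's main routine.
fn applet_for(argv0: &str) -> &'static str {
    match Path::new(argv0).file_name().and_then(|n| n.to_str()) {
        Some("cp") => "cp applet",
        Some("mv") => "mv applet",
        _ => "unknown applet",
    }
}

fn main() {
    // The same compiled code behaves differently depending on which
    // symlink name it was started through.
    assert_eq!(applet_for("/usr/bin/cp"), "cp applet");
    assert_eq!(applet_for("/usr/local/bin/mv"), "mv applet");
    assert_eq!(applet_for("/bin/ls"), "unknown applet");

    let argv0 = std::env::args().next().unwrap_or_default();
    println!("{}", applet_for(&argv0));
}
```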

Shared libraries

Posted Nov 25, 2025 18:00 UTC (Tue) by ssokolow (guest, #94568) [Link]

I believe it's intended for embedded use, where they can update the entire image in one go to keep everything consistent but they may want to add and remove modules depending on the context.

(e.g. I could see that being used for something similar to how, as explained by the me_cleaner docs, Intel's UEFI is made of modular blocks and the firmware is updated as one blob, but each motherboard model has different blocks present and absent... before Intel revised their signing to disallow it, me_cleaner would delete some of the modules.)

Shared libraries

Posted Nov 25, 2025 19:34 UTC (Tue) by bluca (subscriber, #118303) [Link] (3 responses)

Multicall binaries are a horrible hack. Anywhere there's any security policy in place, running one command means you can run any of them.

Shared libraries

Posted Nov 26, 2025 2:34 UTC (Wed) by Cyberax (✭ supporter ✭, #52523) [Link] (2 responses)

If your security depends on limiting which binaries you can run by name, then it's dead anyway.

Shared libraries

Posted Nov 26, 2025 11:21 UTC (Wed) by bluca (subscriber, #118303) [Link] (1 response)

Except of course "depends on" is something you have hallucinated all by yourself

Shared libraries

Posted Nov 26, 2025 18:38 UTC (Wed) by Cyberax (✭ supporter ✭, #52523) [Link]

I mean, you yourself said that. Also, you can still limit the available command set with multicall binaries, although it requires some support from the binary.

Shared libraries

Posted Nov 25, 2025 19:36 UTC (Tue) by willy (subscriber, #9762) [Link] (2 responses)

Ok, but going back to the libtiff example, we probably don't want a multi-call binary that includes every app that wants to process TIFFs.

I really wish something like Microsoft's OLE were more acceptable where we'd essentially have a bunch of methods we could invoke on a tiff object that was handled by a different process.

Shared libraries

Posted Nov 26, 2025 1:57 UTC (Wed) by pabs (subscriber, #43278) [Link] (1 response)

ISTR the Mill Computing (vaporware) ISA allows for efficient cross-context function calls.

https://millcomputing.com/topic/inter-process-communication/

Shared libraries

Posted Nov 26, 2025 4:06 UTC (Wed) by willy (subscriber, #9762) [Link]

Yeah, iAPX 432 also optimised for this use case ;-)

Shared libraries

Posted Nov 25, 2025 16:53 UTC (Tue) by ssokolow (guest, #94568) [Link]

C++ supports shared libraries for the parts which don't use templates.

See The impact of C++ templates on library ABI for details, but the gist is that a template needs to be customized and inlined for every type it's applied to.

Dynamically linking that sort of dispatch without the compromises Swift makes is an unsolved problem and Rust threads its equivalent of templates throughout the entire system. Most visibly, in the form of the generic type parameters on its alternatives to NULL and exception-throwing. (Option<T> and Result<T, E>)
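The distinction can be sketched in a few lines of Rust (a toy example, with a hypothetical Render trait): a generic function is monomorphized into each caller, exactly like a C++ template, while a `dyn Trait` call goes through a vtable, so there is one compiled body that could in principle sit behind a dynamic-library boundary.

```rust
trait Render {
    fn render(&self) -> String;
}

struct Tiff;
impl Render for Tiff {
    fn render(&self) -> String { "tiff".to_string() }
}

// Monomorphized: a fresh copy of this function is compiled into every
// caller for every concrete T, like a C++ template. This is what cannot
// cross a stable dynamic-library boundary.
fn show_generic<T: Render>(x: &T) -> String {
    x.render()
}

// Dynamic dispatch: one compiled body, reached through a vtable. This is
// roughly the shape Swift's stable-ABI machinery builds on.
fn show_dyn(x: &dyn Render) -> String {
    x.render()
}

fn main() {
    assert_eq!(show_generic(&Tiff), "tiff");
    assert_eq!(show_dyn(&Tiff), "tiff");
}
```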

Shared libraries

Posted Nov 26, 2025 22:20 UTC (Wed) by ralfj (subscriber, #172874) [Link]

> I am very reticent to lose that by moving to rust, especially since there is no strictly technical reasons,
> C++ support shared libraries and rust could in principle support them too.

Not sure if you are trolling or just ignorant, but the reasons are entirely technical. Supporting shared libraries properly is really, really hard. C++ barely supports them, but only if you avoid templates and are extremely careful -- doing the same in Rust (in particular, foregoing the use of generics in library API surfaces) would give up almost all the benefits Rust brings, so it's pointless. Swift supports them through a metric ton of engineering work: https://faultlore.com/blah/swift-abi/ .

I'd love to see a stable ABI that actually supports Rust-style APIs. I am certain it's the same for many of my colleagues on the Rust compiler team. But the engineering required for that far exceeds any single new feature that Rust has gained since version 1.0. So please don't go around claiming that Rust does not support shared libraries for non-technical reasons, implying that we somehow have a political agenda to oppose shared libraries or something like that.

Policy about the meaning of dropping a release architecture

Posted Nov 25, 2025 19:52 UTC (Tue) by hsivonen (subscriber, #91034) [Link] (1 responses)

> The sh4 port has never been officially supported, and none of the other ports have been supported since Debian 6.0.

Is there a stated policy that explains the implications of Debian dropping an architecture as an official release architecture?

It appears that dropping an architecture as a release architecture does not stop these architectures from factoring into debates about what’s OK to do on official release architectures.

What does dropping an architecture from the set of official release architectures mean policy-wise?

Policy about the meaning of dropping a release architecture

Posted Nov 26, 2025 12:36 UTC (Wed) by smcv (subscriber, #53363) [Link]

> What does dropping an architecture from the set of official release architectures mean policy-wise?

It means that the dropped architecture no longer prevents packages in unstable from migrating to testing (and therefore being included in the next stable release) without manual intervention from the release team, even if that results in previously-working packages becoming uninstallable or otherwise broken.

A fun article from Microsoft

Posted Nov 26, 2025 5:22 UTC (Wed) by Cyberax (✭ supporter ✭, #52523) [Link] (6 responses)

As if on cue, an interesting article from Microsoft about switching from a "sea of packages" model to a unified monolithic build: https://devblogs.microsoft.com/dotnet/reinventing-how-dot...

They had issues similar to a Linux distribution's in managing tons of fine-grained dependencies and making sure the set of versions stays "coherent". And they switched to doing essentially "static builds".

A fun article from Microsoft

Posted Nov 26, 2025 9:27 UTC (Wed) by cyperpunks (subscriber, #39406) [Link] (1 responses)

The shared libs approach was pioneered by SunOS / Solaris, which forbids static linking to this day. Linux copied many ideas from Solaris and this is one of them.

The best side effect of the shared-libs mantra is that responsibility is clear: it's up to the owner/maintainer of the problematic package to get the fix available in the repos.

In a static approach a completely new dependency graph and process would be needed; the Rust packaging in Fedora seems to be a step in this direction, where a more holistic approach is used.

For core services like sshd and httpd the dependency graph is not huge; however, extending to popular stacks like PHP and Python, it gets very complex very fast, due to shared libs but also due to the use of "modules" by the framework in question.

(Which was the reason for the Xubuntu breach lwn.net wrote about recently.)

Another related problem is maintainership upstream: if upstream doesn't create proper releases and follow a reasonable versioning scheme (like semver), how are downstream consumers supposed to be able to use the software in a safe manner?

There are lots of hard problems to solve here; it's far more complex than a simple fight between shared and static linking.


A fun article from Microsoft

Posted Nov 26, 2025 11:59 UTC (Wed) by Wol (subscriber, #4433) [Link]

> The best side effect of the shared libs mantra is that responsibility is clear, it's up to the owner/maintainer of the problematic package to get the fix available in repos.

And if the repo is an unresponsive 3rd party? The trouble is there are too many 3rd parties (upstream AND downstream) in the linux eco-system, and if you're unlucky enough to be dealing with one that doesn't have the bandwidth to deal with your particular problem (isn't that the norm?) ...

Cheers,
Wol

A fun article from Microsoft

Posted Dec 1, 2025 9:28 UTC (Mon) by nim-nim (subscriber, #34454) [Link] (3 responses)

Of course a single monopolistic team does not need packages, dynamic libraries and ABI assurances. It uses other mechanisms (called management and paycheck) to force every developer involved to converge and update to the same component versions. Giving up on packages is a net initial win in such an organization.

It remains to be seen if the other mechanisms manage to herd the cats effectively long term, past experience is not conclusive.

A fun article from Microsoft

Posted Dec 1, 2025 14:16 UTC (Mon) by Wol (subscriber, #4433) [Link] (1 response)

If you actually read the article, one of the requirements was that distros could/would build the linux_x86 version.

Management and paycheck don't work there ...

Cheers,
Wol

A fun article from Microsoft

Posted Dec 1, 2025 14:36 UTC (Mon) by nim-nim (subscriber, #34454) [Link]

Distributions never had any problem rebuilding clean upstreams, just the ones that were stuck on odd component versions.

A fun article from Microsoft

Posted Dec 1, 2025 18:23 UTC (Mon) by Cyberax (✭ supporter ✭, #52523) [Link]

If you read the article, you'll see that they actually had exactly the same problem as something like Debian. And it took them years to finish this project.

So, what's going to happen?

Posted Nov 26, 2025 10:18 UTC (Wed) by gioele (subscriber, #61675) [Link]

The discussion dwindled without any clear path forward, it seems.

IMO the most reasonable thing to do at the moment would be:

1. start working on rustifying `apt-ftparchive` and `apt-extracttemplates`;

2. split `apt-utils` away from APT's source package (or introduce a <norust> build profile), so that APT's main binary can still be compiled using only C/C++;

3. do not rustify APT itself until some sort of consensus is reached.
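For step 2, a build profile of the kind suggested might look roughly like the following in debian/control. This is an illustrative sketch using Debian's build-profile restriction syntax; the dependency list is invented, not APT's actual build metadata:

```
Source: apt
Build-Depends: cmake, libgnutls28-dev,
               rustc <!norust>, cargo <!norust>

Package: apt-utils
Build-Profiles: <!norust>
```

Building with DEB_BUILD_PROFILES=norust (or dpkg-buildpackage -P norust) would then drop the Rust toolchain from the build dependencies and skip the Rust-dependent binary package, leaving the main APT binaries buildable with only C/C++.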

What counts as "'serious usage' of apt-ftparchive"?

Posted Nov 26, 2025 13:51 UTC (Wed) by ATLief (subscriber, #166135) [Link] (2 responses)

I use apt-ftparchive to distribute custom builds to all of my personal devices and to share them with friends. I don't make any money with it, though.

Would that count as "serious usage"?

What counts as "'serious usage' of apt-ftparchive"?

Posted Nov 26, 2025 16:50 UTC (Wed) by smcv (subscriber, #53363) [Link]

If you trust (or are responsible for) those custom builds, it doesn't really matter whether this is serious usage or not, because presumably you aren't expertly crafting them to exploit bugs in apt-ftparchive (if you did, you'd only be hurting yourself and your users, which you can do more easily by putting something malicious in the packages). The point at which you start using it to parse a .deb that might have been maliciously crafted by an attacker (like Launchpad PPAs) is the point at which bugs become security vulnerabilities.

(Probably Launchpad should be mitigating this by running apt-ftparchive in a sandbox that has no read access to anything non-public except for the target PPA's per-PPA signing key, and no write access to anything except the target PPA's metadata; and for all I know, maybe they already do.)

What counts as "'serious usage' of apt-ftparchive"?

Posted Nov 26, 2025 18:31 UTC (Wed) by edgewood (subscriber, #1123) [Link]

In addition to what smcv wrote, adding Rust to apt-ftparchive would only make a (negative) difference to you if you want to develop apt-ftparchive and don't want to learn Rust, or if you use it on one of the four legacy platforms that the article discusses.


Copyright © 2025, Eklektix, Inc.
This article may be redistributed under the terms of the Creative Commons CC BY-SA 4.0 license
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds