APT Rust requirement raises questions
It is rarely newsworthy when a project or package picks up a new dependency. However, changes in a core tool like Debian's Advanced Package Tool (APT) can have far-reaching effects. For example, Julian Andres Klode's declaration that APT would require Rust in May 2026 means that a few of Debian's unofficial ports must either acquire a working Rust toolchain or depend on an old version of APT. This has raised several questions within the project, particularly about the ability of a single maintainer to make changes that have widespread impact.
On October 31, Klode sent an announcement to the debian-devel mailing list that he intended to introduce Rust dependencies and code into APT as soon as May 2026:
This extends at first to the Rust compiler and standard library, and the Sequoia ecosystem.
In particular, our code to parse .deb, .ar, .tar, and the HTTP signature verification code would strongly benefit from memory safe languages and a stronger approach to unit testing.
If you maintain a port without a working Rust toolchain, please ensure it has one within the next 6 months, or sunset the port.
Klode added this was necessary so that the project as a whole could move forward, rely on modern technologies, "and not be held back by trying to shoehorn modern software on retro computing devices". Some Debian developers have welcomed the news. Paul Tagliamonte acknowledged that it would impact unofficial Debian ports but called the push toward Rust "welcome news".
However, John Paul Adrian Glaubitz complained that Klode's wording was unpleasant and that the approach was confrontational. In another message, he explained that he was not against adoption of Rust; he had worked on enabling Rust on many of the Debian architectures and helped to fix architecture-specific bugs in the Rust toolchain as well as LLVM upstream. However, the message strongly suggested there was no room for a change in plan: Klode had ended his message with "thank you for understanding", which invited no further discussion. Glaubitz was one of a few Debian developers who expressed discomfort with Klode's communication style in the message.
Klode noted, briefly, that Rust was already a hard requirement for all Debian release architectures and ports, except for Alpha (alpha), Motorola 680x0 (m68k), PA-RISC (hppa), and SuperH (sh4), because of APT's use of the Sequoia-PGP project's sqv tool to verify OpenPGP signatures. APT falls back to using the GNU Privacy Guard signature-verification tool, gpgv, on ports that do not have a Rust compiler. By depending directly on Rust, though, APT itself would not be available on ports without a Rust compiler. LWN recently covered the state of Linux architecture support, and the status of Rust support for each one.
None of the ports listed by Klode are among those officially supported by Debian today, or targeted for support in Debian 14 ("forky"). The sh4 port has never been officially supported, and none of the other ports have been supported since Debian 6.0. The actual impact on the ports lacking Rust is also less dramatic than it sounded at first. Glaubitz assured Antoni Boucher that "the ultimatum that Julian set doesn't really exist", but phrasing it that way "gets more attention in the news". Boucher is the maintainer of rustc_codegen_gcc, a GCC ahead-of-time code generator for Rust. Nothing, Glaubitz said, stops ports from using a non-Rust version of APT until Boucher and others manage to bootstrap Rust for those ports.
Security theater?
David Kalnischkies, who is also a major contributor to APT, suggested that if the goal is to reduce bugs, it would be better to remove the code that parses the .deb, .ar, and .tar formats that Klode mentioned from APT entirely. It is only needed for two tools, apt-ftparchive and apt-extracttemplates, he said, and the only "serious usage" of apt-ftparchive was by Klode's employer, Canonical, for its Launchpad software-collaboration platform. If those were taken out of the main APT code base, then it would not matter whether they were written in Rust, Python, or another language, since the tools are not directly necessary for any given port.
Kalnischkies also questioned the claim that Rust was necessary to achieve the stronger approach to unit testing that Klode mentioned:
You can certainly do unit tests in C++, we do. The main problem is that someone has to write those tests. Like docs.
Your new solver e.g. has none (apart from our preexisting integration tests). You don't seriously claim that is because of C++ ? If you don't like GoogleTest, which is what we currently have, I could suggest doctest (as I did in previous installments). Plenty other frameworks exist with similar or different styles.
Klode has not yet responded to those comments, which is unfortunate given that introducing hard dependencies on Rust has an impact beyond his own work on APT. He may well have good answers to the questions, but the silence can also give the impression that he is simply embracing a trend toward Rust. He is involved in the Ubuntu work to migrate from GNU Coreutils to the Rust-based uutils. The reasons given for that work, again, are around modernization and better security, but security is not automatically guaranteed simply by switching to Rust, and there are a number of other considerations.
For example, Adrian Bunk pointed out that there are a number of Debian teams, as well as tooling, that will be impacted by writing some of APT in Rust. The release notes for Debian 13 ("trixie") mention that Debian's infrastructure "currently has problems with rebuilding packages of types that systematically use static linking", such as those with code written in Go and Rust. Thus, "these packages will be covered by limited security support until the infrastructure is improved to deal with them maintainably". Limited security support means that updates to Rust libraries are likely to be released only when Debian publishes a point release, which happens about every two months. The security team has specifically stated that sqv is fully supported, but there are still outstanding problems.
Due to the static-linking issue, any time one of sqv's dependencies, currently more than 40 Rust crates, has to be rebuilt due to a security issue, sqv (at least potentially) also needs to be rebuilt. There are also difficulties in tracking CVEs for all of those dependencies, and in understanding when a security vulnerability in a Rust crate requires updating a Rust program that depends on it.
Fabian Grünbichler, a maintainer of Debian's Rust toolchain, listed several outstanding problems Debian has in dealing with Rust packages. One of the largest is the need for a consistent Debian policy on declaring statically linked libraries. In 2022, Guillem Jover added a control field for Debian packages called Static-Built-Using (SBU), which lists the source packages used to build a binary package; the field indicates when a binary package needs to be rebuilt because another source package has been updated. For example, sqv depends on more than 40 Rust crates that are packaged for Debian; without declared SBUs, it may not be clear whether sqv needs to be updated when one of its dependencies is. Debian has been working on a policy requirement for SBU since April 2024, but it is not yet finished or adopted.
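As an illustration, the field's value names source packages with exact versions, in the same style as dpkg's existing Built-Using field; a hypothetical control stanza (the crate names and versions here are invented for the example) might look like:

```
Package: sqv
Architecture: amd64
Static-Built-Using: rust-sequoia-openpgp (= 2.0.0-1),
 rust-nettle (= 7.3.0-2)
```

With that information recorded, archive tooling can mechanically answer "which binary packages embed this source package?" when a security update lands.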
The discussion sparked by Grünbichler makes clear that most of Debian's Rust-related problems are in the process of being solved. However, there's no evidence that Klode explored the problems before declaring that APT would depend on Rust, or even asked "is this a reasonable time frame to introduce this dependency?"
Where tradition meets tomorrow
Debian's tagline, or at least one of its taglines, is "the universal operating system", meaning that the project aims to run on a wide variety of hardware (old and new) and be usable on the desktop, server, IoT devices, and more. The "Why Debian" page lists a number of reasons users and developers should choose the distribution: multiple hardware architectures, long-term support, and its democratic governance structure are just a few of the arguments it puts forward in favor of Debian. It also notes that "Debian cannot be controlled by a single company". When a single developer, employed by a company to work on Debian tools, pushes a change that seems beneficial to that company without discussion or debate, a change that impacts multiple hardware architectures and requires other volunteers to do unplanned work or meet an artificial deadline, that seems to go against many of the project's stated values.
Debian, of course, does have checks and balances that could be employed if other Debian developers feel it necessary. Someone could, for example, appeal to Debian's Technical Committee, or sponsor a general resolution to override a developer if they cannot be persuaded by discussion alone. That happened recently when the committee required systemd maintainers to provide the /var/lock directory "until a satisfactory migration of impacted software has occurred and Policy updated accordingly".
However, it also seems fair to point out that Debian can move slowly, even glacially, at times. APT added support for the DEB822 format for its source-information lists in 2015. Despite APT supporting that format for years, Klode faced resistance in 2021 when he pushed for Debian to move to the new format ahead of the Debian 12 ("bookworm") release, and was unsuccessful. It is now the default for trixie with the move to APT 3.0, though APT will continue to support the old format for years to come.
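For readers unfamiliar with the change, the shift is from the old one-line sources.list entries to multi-line DEB822 stanzas; a minimal example (the mirror URL, suite, and keyring path are illustrative):

```
# Old one-line format (/etc/apt/sources.list):
deb http://deb.debian.org/debian trixie main

# DEB822 format (/etc/apt/sources.list.d/debian.sources):
Types: deb
URIs: http://deb.debian.org/debian
Suites: trixie
Components: main
Signed-By: /usr/share/keyrings/debian-archive-keyring.gpg
```

The stanza form is more verbose, but each field is explicit, and options such as the signing key no longer have to be packed into bracketed option lists on a single line.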
The fact is, regardless of what Klode does with APT, more and more free software is being written (or rewritten) in Rust. Making it easier to support that software when it is packaged for Debian is to everyone's benefit. Perhaps the project needs some developers who will be aggressive about pushing the project to move more quickly in improving its support for Rust. However, what is really needed is more developers lending a hand to do the work that is needed to support Rust in Debian and elsewhere, such as gccrs. It does not seem in keeping with Debian's community focus for a single developer to simply declare dependencies that other volunteers will have to scramble to support.
Posted Nov 24, 2025 16:42 UTC (Mon)
by atai (subscriber, #10977)
[Link] (12 responses)
Posted Nov 24, 2025 16:53 UTC (Mon)
by epa (subscriber, #39769)
[Link] (5 responses)
Posted Nov 24, 2025 17:14 UTC (Mon)
by ojeda (subscriber, #143370)
[Link]
`rustc_codegen_clr` has such a mode, and there was also another start on a new C backend for `rustc`. Neither is "production ready", but it is a nice approach, and in fact it is not uncommon for languages to design their compilers that way.
Posted Nov 25, 2025 20:16 UTC (Tue)
by hsivonen (subscriber, #91034)
[Link] (1 responses)
I don’t know about the level of effort required to get the pile of C to expose the same interface as the Rust input is exposing as FFI.
Posted Nov 26, 2025 10:00 UTC (Wed)
by moltonel (subscriber, #45207)
[Link]
Posted Nov 27, 2025 23:52 UTC (Thu)
by jmalcolm (subscriber, #8876)
[Link]
If you can compile Rust to any architecture that supports GCC, you certainly have enough portability for APT.
Rust is already portable to the Debian architectures that people actually use. The ones that it does not support are not up-to-date anyway so shipping with an older version of APT would not be the end of the world.
As above though, there is no reason you cannot compile APT, even with Rust in it, for any of these platforms.
Posted Nov 24, 2025 16:53 UTC (Mon)
by jmm (subscriber, #34596)
[Link] (4 responses)
Posted Nov 26, 2025 14:01 UTC (Wed)
by ATLief (subscriber, #166135)
[Link] (3 responses)
Posted Nov 26, 2025 16:35 UTC (Wed)
by smcv (subscriber, #53363)
[Link] (2 responses)
Priority has very little practical effect, and Priority: required is mostly about ensuring that debootstrap and similar minimal-system bootstrap tools will install it, so that you have a route to install additional packages.
Posted Nov 26, 2025 17:01 UTC (Wed)
by Wol (subscriber, #4433)
[Link] (1 responses)
So you can't uninstall the only one available from the list - if two are installed there's no problem uninstalling either.
Cheers,
Posted Nov 27, 2025 16:59 UTC (Thu)
by timon (subscriber, #152974)
[Link]
[1]: https://packages.debian.org/stable/virtual/
Posted Nov 24, 2025 16:56 UTC (Mon)
by farnz (subscriber, #17727)
[Link]
The other route is to contribute to things like gccrs or rustc_codegen_gcc, so that Rust is available on these ports, too. This has the slight advantage that, once you have Rust support, any other packages in Debian that need Rust become buildable for that port.
Posted Nov 24, 2025 17:02 UTC (Mon)
by ballombe (subscriber, #9523)
[Link] (316 responses)
I am very reticent to lose that by moving to rust, especially since there is no strictly technical reason.
Rebuilding packages to update their dependencies is not sustainable for Debian.
Posted Nov 24, 2025 17:26 UTC (Mon)
by DemiMarie (subscriber, #164188)
[Link] (293 responses)
Even in C++, you already need to do this to pull in fixes made to a template, because templates are located in the header file. Most other natively-compiled languages also require such rebuilding. When it comes to new languages, Swift and maybe Hare are the only exceptions I know of.
None of these languages are being developed or funded by distros. They are all developed and funded by companies that can and do rebuild their programs from source and link statically without any issues. Distros are complaining that there is a problem without doing a substantial fraction of upstream maintenance on Cargo, rustc, GHC, Go, or any of the other toolchains.
If distros want ecosystems to be more friendly to them, they need to put in the (large) amount of work to make that happen. It’s not impossible, but it is very difficult, and it has ecosystem-wide implications. Until they do, they get to use whatever the people who do do this work choose to make.
Posted Nov 24, 2025 17:31 UTC (Mon)
by fishface60 (subscriber, #88700)
[Link]
Hopefully the likes of Canonical, Red Hat or possibly Valve will step up to fund this, since it doesn't seem realistic to expect volunteer distributions like Debian to do the work.
Posted Nov 24, 2025 17:49 UTC (Mon)
by bluca (subscriber, #118303)
[Link] (264 responses)
Then the future is shite
Posted Nov 24, 2025 18:07 UTC (Mon)
by Wol (subscriber, #4433)
[Link] (106 responses)
> Then the future is shite
Or you go back to what I was doing over 40 years ago, when a library was just that ...
Yes you'll need some thought about how to update it into the modern world, but you static link and your library is a bunch of .o's that get copied in.
Yes you need to rebuild your applications, but the compile load is so much lower.
And if you really want to sort-of-merge your compiler and linker, okay you won't be able to mix-n-match compilers in all likelihood, but instead of .o's you compile the library to intermediate compiler representation, optimise whatever hell you can out of it, and then dump that into a .lib file that the compiler can pull into the application.
Okay, you lose the ability to just drop in a new fixed library, that fixes all your apps in one hit, but how well does that really work in practice?
Cheers,
Posted Nov 24, 2025 20:12 UTC (Mon)
by ballombe (subscriber, #9523)
[Link] (74 responses)
It works pretty well. For example, each time a new CVE is fixed in libtiff, the libtiff library is upgraded and there is no need to rebuild every piece of software that directly or indirectly processes TIFF files.
Making it very costly to apply a security fix does not increase security.
Posted Nov 25, 2025 8:54 UTC (Tue)
by taladar (subscriber, #68407)
[Link] (18 responses)
Posted Nov 25, 2025 9:45 UTC (Tue)
by leromarinvit (subscriber, #56850)
[Link] (14 responses)
If they don't, just setting APT to auto-install security updates, without somehow restarting individual services or the whole system afterwards, is clearly not enough to at least keep a system free of known (and fixed) vulnerabilities.
Posted Nov 25, 2025 10:03 UTC (Tue)
by taladar (subscriber, #68407)
[Link]
Posted Nov 25, 2025 10:05 UTC (Tue)
by epa (subscriber, #39769)
[Link] (12 responses)
In principle a program could be re-linked against the new shared library code while it stays running, but that requires an even stronger ABI stability guarantee than most libraries provide.
Posted Nov 25, 2025 11:26 UTC (Tue)
by SLi (subscriber, #53131)
[Link] (10 responses)
Posted Nov 25, 2025 14:02 UTC (Tue)
by farnz (subscriber, #17727)
[Link] (9 responses)
The point is that it's not that big a reduction in effort - and it's a reduction in effort in the automated part, to boot.
Posted Nov 25, 2025 14:59 UTC (Tue)
by intelfx (subscriber, #130118)
[Link] (8 responses)
Nobody is making the claim for shared libraries to somehow obviate the need to *restart the applications*. You invented this claim out of thin air.
Shared libraries obviate the need to *update the binaries*, no more, no less.
Posted Nov 25, 2025 15:12 UTC (Tue)
by farnz (subscriber, #17727)
[Link] (7 responses)
The only thing they do is mean that you don't have to replace the executables - but replacing binaries (libraries or executables) is the bit of the update process that's simple to automate.
Posted Nov 25, 2025 18:23 UTC (Tue)
by intelfx (subscriber, #130118)
[Link] (6 responses)
> but replacing binaries (libraries or executables) is the bit of the update process that's simple to automate.
It's not so simple (or cheap) to automate _creating_ those binaries in the first place. This is the part shared libraries help with: you do not need to rebuild and redistribute O(n) dependents when all you need is to make a compatible change in O(1) libraries.
Posted Nov 25, 2025 19:14 UTC (Tue)
by NYKevin (subscriber, #129325)
[Link] (5 responses)
The argument made by the newer languages is that it should be. And they have largely succeeded. When is the last time you saw a Rust or Go project that *couldn't* be built with cargo build, go build, etc.?
Yes, yes, the subsequent packaging step is frequently more complicated. But that step is entirely the distro's invention. They don't get to make it everyone else's problem.
Posted Nov 28, 2025 3:16 UTC (Fri)
by wtarreau (subscriber, #51152)
[Link] (4 responses)
Posted Nov 28, 2025 5:35 UTC (Fri)
by Cyberax (✭ supporter ✭, #52523)
[Link]
Posted Nov 28, 2025 8:33 UTC (Fri)
by taladar (subscriber, #68407)
[Link] (2 responses)
Besides, updating all the dependencies to the same combination of dependency versions used by everyone else using the application actually reduces the number of truly different combinations in use, making it more likely that someone else is running the same version and has run into a bug before you do.
Posted Jan 14, 2026 4:48 UTC (Wed)
by wtarreau (subscriber, #51152)
[Link] (1 responses)
No, I mean they don't have the choice of updating only the library without updating the application. The lifecycle of applications is not always the same as that of libraries. Take Firefox as an example: you might experience issues caused by one of the thousand libs it depends on, but it otherwise matches your needs. By updating only that broken lib you get a perfectly functional browser. With static builds, it's less likely that your version will be rebuilt against an updated dependency, so you're more likely to be forced to upgrade to the next version, which probably doesn't please you at all anymore, or introduces new issues. The alternative is to not update the static application, and that's an issue with security stuff (e.g. crypto libraries), which will overall degrade the security of what's effectively deployed in the field.
Posted Jan 14, 2026 5:20 UTC (Wed)
by Cyberax (✭ supporter ✭, #52523)
[Link]
This absolutely applies to Rust and Go. I can make my own mini-fork within minutes for most of the projects, except for really hairy ones like Docker or K8s.
Posted Dec 6, 2025 10:38 UTC (Sat)
by jengelh (subscriber, #33263)
[Link]
The more evolved and hostile attached networks became between 1990 and now, the less Windows needed to reboot. So it does not seem like the mocking was ever done because of (in)security concerns.
Posted Nov 25, 2025 16:02 UTC (Tue)
by draco (subscriber, #1792)
[Link] (2 responses)
You keep saying distros don't handle this case, but they do.
Posted Nov 27, 2025 18:13 UTC (Thu)
by DemiMarie (subscriber, #164188)
[Link] (1 responses)
Posted Nov 28, 2025 12:10 UTC (Fri)
by neverpanic (subscriber, #99747)
[Link]
So, yes, you can run extra services by making sure your systemd service file is configured to start in that target.
Posted Nov 25, 2025 12:17 UTC (Tue)
by NAR (subscriber, #1313)
[Link] (54 responses)
Posted Nov 25, 2025 14:06 UTC (Tue)
by farnz (subscriber, #17727)
[Link] (53 responses)
Imagine a new version of libtiff which introduces a security-relevant bug into the decompressor for TIFF compression scheme 32809 (ThunderScan 4-bit RLE). Upstream's statically linked builds of the program are not vulnerable, because they don't enable the bits of libtiff needed to handle files from ancient Macs, but because your distro includes a utility that's supposed to analyse an ancient Mac disk image and convert all the data to modern formats that you can work with, your distro build of libtiff has this support enabled.
Hey presto, an application that was not vulnerable in the upstream configuration (and may not be vulnerable on other distros that don't support reading TIFF files from ancient Macs) is now vulnerable, because you're running a configuration of the code that's necessary for a different application.
Worst case, you've opened up a network-accessible vulnerability in an application that was unaware that you could build libtiff this way, in order to give more functionality to an application that's carefully sandboxed in case the files are corrupt and trigger a bug.
Posted Nov 25, 2025 15:10 UTC (Tue)
by paulj (subscriber, #341)
[Link] (35 responses)
Which scenario is the more common? Which has the better track record at quickly updating to fix bugs? The random statically linked upstream-packaged apps or the Linux distros? I'd say the distros.
But let's say Linux distros are just average. Say we have 100 upstream-packaged statically-linked apps, and 100 apps using the distro shared library... ~50 of the upstream apps will update before the distro, and ~50 after - with a long tail. So - even if distros are not very good at shipping security updates, the statically linked approach will still leave you with a number of vulnerable apps for a long time to come.
Posted Nov 25, 2025 15:18 UTC (Tue)
by farnz (subscriber, #17727)
[Link] (14 responses)
Note that the distro is quite capable of using the dependency information it already has (BuildRequires and the like) to rebuild statically linked binaries - dynamically linked versus statically linked is more about how much automated work has to be done to get you a fixed version in place, rather than about which is "more secure".
And I don't believe anyone has done the study to determine which is actually more secure in practice - static linked executables, with unused parts of libraries turned off, or dynamically linked executables sharing a library with more used components. Once you allow for things like time to determine that an update is needed, it's quite a complex space to think about, and (like so much in computing), we're more going on "what feels right" than on hard data.
Posted Nov 25, 2025 17:42 UTC (Tue)
by paulj (subscriber, #341)
[Link] (1 responses)
I don't see any reason why app would or would not choose more locked-down build options than the distro. Assume that varies randomly across apps, and assume it varies randomly with libraries. The point still stands: The 'average' Linux distro gives you a bounded, shorter window of time where you have vulnerable apps from a bad library.
Posted Nov 26, 2025 8:51 UTC (Wed)
by taladar (subscriber, #68407)
[Link]
If, on the other hand, the binary needs to be modified to support the fixed library version there is a strong chance there is no ABI-compatibility for the shared library to take advantage of anyway.
Posted Nov 25, 2025 17:56 UTC (Tue)
by nim-nim (subscriber, #34454)
[Link] (11 responses)
That’s a workaround, not a feature: it transforms updates into mass rebuilds, which require a lot more build power and add an economic barrier to entry to the distribution game (not that the big distributors will complain much about that part).
Nevertheless the real major drawback of static building is that it removes any developer incentive to converge on specific component versions. With dynamic building you have to choose one of the handful of versions packaged by your distributors. With static building there is no reason to make the effort (which is why developers are in deep love with static building).
The consequence of this lack of version convergence, is that static building is not only massively more wasteful on building power, it is massively more demanding in maintainer power. You need to tailor security patches (and security impact analysis) to each and every component version individual developers chose to vendor in their static build. In effect you promote technical debt (increase short term benefits at the expense of long term maintenance).
The problem does not exist FAANG side, because FAANGs force their dev teams to use the same golden versions. Guess how promoters of static building would welcome a distribution, that told them “you can use static building, but only with the following vendored versions, because we do not have the resources to patch others”.
Posted Nov 25, 2025 18:31 UTC (Tue)
by Cyberax (✭ supporter ✭, #52523)
[Link]
Eh, no. Unless you're targeting something like m68k, rebuilding is cheap. We can use Gentoo as an example, rebuilding the world takes about 12 hours on a reasonably fast computer. And this is for a catastrophic bug, where everything needs to be updated.
In most cases, you're looking at maybe a handful of packages.
Posted Nov 26, 2025 8:54 UTC (Wed)
by taladar (subscriber, #68407)
[Link] (4 responses)
This gets rid of all the effort required for distro specific bugs when developers don't want bug reports for ancient versions, all the backporting,...
Posted Nov 28, 2025 9:37 UTC (Fri)
by nim-nim (subscriber, #34454)
[Link] (1 responses)
Static building is the opposite of using the latest version: people use the latest version *at the time they bother to check*, then lock it down and accumulate technical debt. People do not like making the effort to update, period. They can't complain that using dynamic libraries forces them to update on a regular cadence (that would expose that they do not want to make this effort), so they complain that the update forced on them by distributions is not new and shiny enough.
Posted Nov 29, 2025 16:20 UTC (Sat)
by khim (subscriber, #9252)
[Link]
That's only true for forges like PyPI or crates.io, where people are free to pick any version they like. With distros it is the exact opposite: few packagers, often just one single person, decide what version of a library to package. That's the opposite of the process that may lead to the outcome you describe. Still better than some random version that was picked by god-knows-who for god-knows-what reason and which wasn't even tested in conjunction with the app. People complain when something doesn't work, period. It may be too new or too old or anything in between. But with distros they are often in the position of having no one to even complain to, because the crazy combination of libraries that breaks things is not the result of a conscious decision, but more the result of random dice-throwing. According to the distro makers, every package should be ready to work with every version of a library… but that's rarely a thing any sane developer would accept: to know whether something works or not, you need to test… and distro makers make that exceedingly difficult.
Posted Nov 28, 2025 11:25 UTC (Fri)
by intelfx (subscriber, #130118)
[Link] (1 responses)
Yeah, that's just the exact opposite of how it happens in reality.
In reality, applications in ecosystems with vendor-controlled static linking/bundling/vendoring get locked to the first version of each dependency that happens to work and never upgraded again (until a vulnerability is exploited in the wild, or until maintenance of a suitably obsolete build environment becomes untenable through actions of others).
Maintainers of distributions, on the other hand, care about the health of the overall "ecosystem". Of course, the quality of said care may differ, but the overall concept is always there. You, as a user, may be briefly stuck on non-latest versions (or even on legacy branches) of some dependencies, but the entire distribution moves forward as fast as manpower allows.
Posted Dec 1, 2025 9:59 UTC (Mon)
by taladar (subscriber, #68407)
[Link]
Posted Nov 26, 2025 9:22 UTC (Wed)
by taladar (subscriber, #68407)
[Link]
Posted Nov 27, 2025 18:14 UTC (Thu)
by DemiMarie (subscriber, #164188)
[Link] (3 responses)
Posted Dec 1, 2025 9:17 UTC (Mon)
by nim-nim (subscriber, #34454)
[Link] (2 responses)
What matters for security is that *every* deployed software bit uses fully patched components, even when those components are slightly old because no security event required their update and a full OS update is expensive effort-wise.
Static linking as you wrote promotes fixing highly visible main doors while keeping service doors wide open.
Posted Dec 1, 2025 10:50 UTC (Mon)
by taladar (subscriber, #68407)
[Link]
Posted Dec 1, 2025 10:59 UTC (Mon)
by muase (subscriber, #178466)
[Link]
In the first case, that’s a real-world RCE, in the second case you probably cannot even exploit the bug because you can’t control the input in the first place (messages are generated by the coordinator, not you).
Different use cases have different attack scenarios and different attack surfaces. There are tons of applications where a gRPC vulnerability will be a serious incident; but there are also a lot of cases where – realistically speaking – it's simply not exploitable.
Ideally, you should fix all bugs asap – but given the limited resources IRL, it sometimes is simply necessary to triage. That includes that if you cannot fix all packages at once, you can at least try to fix those where the bug has the most impact; i.e. users*impact (“reduce the bleeding”). It simply does not make sense to hold a fix back for the majority of users because you’re blocked by some obscure package nobody is using, or to wait for a package where the bug has no relevant real-world impact.
Posted Nov 25, 2025 17:58 UTC (Tue)
by JanC_ (guest, #34940)
[Link] (19 responses)
Posted Nov 25, 2025 18:33 UTC (Tue)
by Cyberax (✭ supporter ✭, #52523)
[Link] (18 responses)
There are no technical problems in storing updates as binary deltas. It's just that nobody cared to spin up infrastructure for this.
Posted Nov 25, 2025 19:21 UTC (Tue)
by bluca (subscriber, #118303)
[Link] (17 responses)
Posted Nov 26, 2025 2:34 UTC (Wed)
by Cyberax (✭ supporter ✭, #52523)
[Link] (16 responses)
Format-aware binary diffs can be incredibly compact. RedBend Software used to provide target-aware OTA diffs that could compress a simple binary patch to roughly the size of the source-code difference in bytes. Its differ used additional instructions to represent address shifts, and it could extract locations from the ELF relocation section.
And these days, not many people are going to care about update sizes anyway.
Posted Nov 26, 2025 11:18 UTC (Wed)
by bluca (subscriber, #118303)
[Link] (15 responses)
Posted Nov 26, 2025 18:37 UTC (Wed)
by Cyberax (✭ supporter ✭, #52523)
[Link] (14 responses)
Can you explain exactly why even simple brain-dead binary diffs are crap? I did an experiment, changed an "if" condition in a large binary, and did a diff. This is probably what most security patches are.
I used simple `rsync --only-write-batch=diff file1 file2`, and for a 50Mb binary the diff was 1Mb. I have not tried bsdiff or xdelta.
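As a rough illustration of why a one-instruction change yields a small delta even with naive tooling, here is a minimal sketch of a fixed-block binary diff in Python. This is illustrative only: it is not what rsync, bsdiff, or xdelta actually do, and it only matches blocks at identical offsets.

```python
# Illustrative sketch: a naive fixed-block binary diff. Blocks identical to
# the old file at the same offset are sent as short references; only changed
# blocks are transmitted in full.
BLOCK = 4096

def block_diff(old: bytes, new: bytes):
    """Return a list of ('copy', block_index) or ('data', bytes) instructions."""
    ops = []
    for i in range(0, len(new), BLOCK):
        chunk = new[i:i + BLOCK]
        if old[i:i + BLOCK] == chunk:
            ops.append(('copy', i // BLOCK))  # reference into the old file
        else:
            ops.append(('data', chunk))       # literal replacement block
    return ops

def apply_diff(old: bytes, ops) -> bytes:
    """Reconstruct the new file from the old file plus the instruction list."""
    out = bytearray()
    for kind, arg in ops:
        if kind == 'copy':
            out += old[arg * BLOCK:(arg + 1) * BLOCK]
        else:
            out += arg
    return bytes(out)
```

A single flipped byte dirties exactly one block, so the delta is roughly BLOCK bytes plus bookkeeping. Real tools also match blocks at shifted offsets (insertions and deletions), which this sketch deliberately omits.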
Posted Nov 26, 2025 18:55 UTC (Wed)
by bluca (subscriber, #118303)
[Link] (13 responses)
It's orders of magnitude worse for deltarpm and similar because instead of one kernel to build and manage patches for you have N packages, and every node will have a different and unique combination. Combinatorial explosion. Complex, costly to manage, and the benefits on a well-designed system that ships critical components such as libc or libssl as shared libraries that can easily be updated are too small to notice.
Posted Nov 26, 2025 19:20 UTC (Wed)
by Cyberax (✭ supporter ✭, #52523)
[Link] (11 responses)
It works just fine, somehow. All you need is protocol-level diffing support that deltarpms just half-assed. Even with the most naïve implementation, the end result should be bit-for-bit identical to just transferring files.
You want more examples? OSM (OpenStreetMap) distributes up-to-the-minute incremental changes as diff files, and their full database is around 100GB. Yet they manage.
You want even more examples? Dockerhub and Docker images are represented as a set of deltas on top of a base image.
The real problem is, in fact, the RPMs and DEBs themselves. It routinely takes _longer_ to unpack and install Fedora or Debian updates than it takes to download them. Heck, the `apt-get update` step in my Dockerfiles is usually one of the slowest!
Posted Nov 26, 2025 20:49 UTC (Wed)
by bluca (subscriber, #118303)
[Link] (9 responses)
Here's git in the real world: https://xkcd.com/1597/
> You want more examples? OSM
I'm not familiar with it, I suspect the bits are not user-facing pick-and-choose, but entirely behind the scenes, with no user control of installations and updates. And it's pure data, not running code. That makes it way, way easier and more manageable.
> You want even more examples? Dockerhub and Docker images are represented as a set of deltas on top of a base image.
Docker is a dumpster fire
Posted Nov 27, 2025 0:15 UTC (Thu)
by Cyberax (✭ supporter ✭, #52523)
[Link] (8 responses)
Yeah. Because it highlights just how terrible APT or RPM are. I can update 20 of my self-hosted apps _faster_ than it takes to update the host OS (Fedora). And to remind you, Docker is very much a user-facing, "pick-and-choose" system that is based on delta updates. A total impossibility, in other words.
And it works because it was architected correctly. Not _perfectly_, but correctly.
Why have the RPM diffs and APT deltas failed? It's because each package is installed separately. Apt has to pretend that each package is stand-alone, so it can't install them in parallel. And when you have thousands of packages, adding just a tenth of a second to the sequential critical path becomes painful, for not much gain for most people.
A more modern update system would dispense with all the individual package nonsense. It would fetch the updates (in parallel if possible) and in parallel pre-stage the changed files, resolving the deltas as soon as it gets the data. Then it would do atomic replacement of changed files (or as close to atomic as filesystems make it possible) and run whatever scripts needed to complete the update.
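The "pre-stage, then atomically swap" flow described above can be sketched in a few lines. This is an illustrative sketch, not how any real package manager works; it assumes the staged file and the target live on the same filesystem, so that the final rename is atomic.

```python
# Sketch of pre-staging an updated file and swapping it in atomically.
# os.replace() is an atomic rename on POSIX (within one filesystem):
# readers see either the old file or the new one, never a mix.
import os
import tempfile

def stage_and_replace(target_path: str, new_contents: bytes):
    dirname = os.path.dirname(target_path) or '.'
    # Pre-stage into a temp file next to the target (same filesystem).
    fd, staged = tempfile.mkstemp(dir=dirname, prefix='.staged-')
    try:
        with os.fdopen(fd, 'wb') as f:
            f.write(new_contents)
            f.flush()
            os.fsync(f.fileno())  # make sure the data is durable before the swap
        os.replace(staged, target_path)
    except BaseException:
        os.unlink(staged)  # clean up the staged copy on any failure
        raise
```

An updater could run many such stagings in parallel as deltas resolve, then perform the renames (and any completion scripts) at the end.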
Posted Nov 27, 2025 0:37 UTC (Thu)
by bluca (subscriber, #118303)
[Link] (5 responses)
Nah, it's shite with pacman and everything else too
Posted Nov 27, 2025 0:47 UTC (Thu)
by Cyberax (✭ supporter ✭, #52523)
[Link] (4 responses)
BTW, even the _actual_ user-facing versions of Fedora are switching away from RPMs. See: Bazzite.
Posted Nov 27, 2025 1:06 UTC (Thu)
by pizza (subscriber, #46)
[Link] (1 responses)
Ladders are really, really useful for climbing to the top of a one or two-floor building.
But for 3-4 floors, portable ladders are no longer an option. Beyond 5.. ladders of any sort are worthless and a completely different solution must be employed.
In case you can't follow this analogy, something that is optimal for one application, or even a couple of applications, rapidly runs into fundamental scaling problems when you try to scale it to dozens or hundreds of applications.
But application writers don't care about bigger picture problems; they only care about *their* application.
Posted Nov 27, 2025 6:58 UTC (Thu)
by Cyberax (✭ supporter ✭, #52523)
[Link]
I agree with your analogy. Except that classical distros are these rickety wooden home-made ladders. And modern Docker or immutable distros are tower cranes.
Posted Dec 2, 2025 21:06 UTC (Tue)
by raven667 (subscriber, #5198)
[Link] (1 responses)
I've been working with package-based OSes since Red Hat 4.2 (Colgate, IIRC) and can see the limitations in scaling up a full desktop workstation using rpm/dpkg and yum/apt: update performance is slow (the Fedora MacPro I used to use would take *hours* to update between releases on spinning rust), and the cost of de-duplicating all application libraries is that the local admin has to use the packaging tool to manage long dependency chains; things get complicated if you want to install an app which the OS distributor hasn't packaged themselves.
For someone who just wants to _use_ the computer to do computer things, rather than making OS design and development their hobby, the image-based systems with container-based apps work a lot better and more reliably than the package-based model, at least in my experience. Heck, having a good way to rollback because you aren't trying to mutate the one-and-only live system comes in really handy and makes me sleep better at night that the computer isn't going to immolate itself if something goes wrong.
Posted Dec 3, 2025 2:50 UTC (Wed)
by Cyberax (✭ supporter ✭, #52523)
[Link]
Although classic distros might also eventually change a bit. I think something like nix or guix might end up a better solution.
Posted Nov 27, 2025 23:08 UTC (Thu)
by intgr (subscriber, #39733)
[Link] (1 responses)
I agree that compared to classical package managers it's impressively fast.
But no, Docker's deltas aren't even optimal for updating your application from one version to the next. They are a delta between one layer and the next one, so only efficient when the last few layers have changed.
Fast updates are often a symptom of applications not updating their base image with security patches as often as they should.
Docker could be even faster if it computed deltas between versions, or had a protocol like rsync or casync that allows figuring out the delta on the fly.
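For reference, the rsync-style weak rolling checksum that makes such on-the-fly delta discovery cheap can be sketched as follows. This is a simplified Adler-32-flavoured sum; the exact constants and details differ from real rsync and casync.

```python
# Sketch of a weak rolling checksum: it can be slid forward one byte in O(1),
# which lets a receiver scan its local data for blocks the sender already has.
MOD = 1 << 16

def weak_sum(block: bytes):
    """Compute (a, b) over a whole block from scratch."""
    a = sum(block) % MOD
    b = sum((len(block) - i) * x for i, x in enumerate(block)) % MOD
    return a, b

def roll(a, b, out_byte, in_byte, block_len):
    """Slide the window one byte: drop out_byte from the front, append in_byte."""
    a = (a - out_byte + in_byte) % MOD
    b = (b - block_len * out_byte + a) % MOD
    return a, b
```

Matching weak sums are then confirmed with a strong hash; only unmatched ranges need to be transferred.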
Posted Dec 1, 2025 17:21 UTC (Mon)
by mathstuf (subscriber, #69389)
[Link]
Posted Dec 5, 2025 11:02 UTC (Fri)
by gmatht (subscriber, #58961)
[Link]
In principle this is really simple. Grab the .gz files out of the .deb using ar, zsync [1] them against the .gz files stored on some server (using either a cached .deb file or just $(cat `dpkg -L myoldpackage`)). Then use ar to pack the .gz files back into a .deb. Optionally also store some binary diffs from particular versions the user is likely to have lying around, as this is more efficient than zsyncing between arbitrary versions.
However, there are some issues with the .deb format. The compressed files are signed, so once you have updated your files you have to recompress them before you can feed them back into dpkg. Apart from slowing things down a bit, this also means that you have to use tricks to get the resulting .gz to be bit-for-bit identical to the official .gz files. It is possible to create reproducible gzip files, but Debian seems to be moving away from .gz compression.
If Debian decided to switch to an hsynz/zsync-friendly format this should be easy. For example, if Debian switched to using uncompressed tars in its .deb files and instead distributed .deb.zstd files, it should be far easier to support incremental updates than, say, rewriting APT in Rust.
[1] These days we would probably instead use https://github.com/sisong/hsynz
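For context on the unpack/repack step: a .deb is a standard Unix ar archive whose members are a debian-binary file plus the control and data tarballs. A minimal sketch of walking that format (illustrative only; real archives use GNU-style member names and dpkg does far more validation):

```python
# Sketch of parsing the ar(1) format used by .deb files: a global "!<arch>\n"
# magic, then for each member a 60-byte header (16-byte name, 12-byte mtime,
# 6-byte uid, 6-byte gid, 8-byte mode, 10-byte size, 2-byte terminator)
# followed by the payload, padded to an even offset.
AR_MAGIC = b'!<arch>\n'

def ar_members(data: bytes):
    """Yield (name, payload) for each member of an ar archive."""
    if not data.startswith(AR_MAGIC):
        raise ValueError('not an ar archive')
    pos = len(AR_MAGIC)
    while pos + 60 <= len(data):
        hdr = data[pos:pos + 60]
        name = hdr[0:16].decode('ascii').rstrip(' /')
        size = int(hdr[48:58].decode('ascii').strip())
        payload = data[pos + 60:pos + 60 + size]
        yield name, payload
        pos += 60 + size
        pos += pos % 2  # members start at even offsets
```

The repacking direction just writes the same headers back out, which is why the bit-for-bit reproducibility of the recompressed members is the hard part, not the container format.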
Posted Nov 27, 2025 18:17 UTC (Thu)
by DemiMarie (subscriber, #164188)
[Link]
Posted Dec 5, 2025 2:20 UTC (Fri)
by dvdeug (subscriber, #10998)
[Link] (16 responses)
Imagine using a bunch of old TIFF files and finding that one program doesn't read them, despite the fact they all use libtiff. You're the type of person who reads LWN, so after an hour of work, you might figure out that the program uses some brain-damaged version of libtiff. It would be a lot more painful for less technically skilled users.
libtiff is not the most secure library in the world, but randomly turning off features is painful to users. TIFF is annoying as a complex standard that no one completely supports; breaking support for random subformats (which are not visible to the average user, and even if they were, that this program doesn't support ThunderScan 4-bit RLE is probably documented nowhere) just makes things more miserable.
To boot, nobody is messing with the code for that compression scheme. There's a good chance that the bugs that affect it have always affected it. Changes to libtiff are more likely to break compression schemes that are worth optimizing, or some combination of important features that the programmer wasn't thinking about in combination and nobody wants to disable.
Posted Dec 5, 2025 2:38 UTC (Fri)
by Cyberax (✭ supporter ✭, #52523)
[Link] (1 responses)
Posted Dec 5, 2025 9:33 UTC (Fri)
by dvdeug (subscriber, #10998)
[Link]
And what's stopping someone from making a local copy of libpng and removing, say, Adam7 interlacing from it? "Nobody uses it", and it's more code surface. TIFF is a little weird, but if you support a file format, you should support it as far as reasonably possible, not randomly turning off features you're not interested in.
Posted Dec 5, 2025 11:11 UTC (Fri)
by farnz (subscriber, #17727)
[Link] (13 responses)
And I chose that format very, very deliberately; it was never commonly used with TIFFs, and the only time you're going to encounter it is if you're extracting files from a legacy Mac where you'd bought the ThunderScan attachment for your ImageWriter. Most people won't ever encounter a file with ThunderScan RLE encoding - very few programs outside Macs with the ThunderScan software installed could even read them, and if you opened it in (say) Photoshop, then saved as a TIFF, it'd compress it differently anyway.
While nobody may be messing with the code for it, there's also a good chance that until the compiler changes, the bug was latent - if it contains UB, then the compiler is perfectly capable of compiling it as the programmer intended, up until it changes and starts compiling it in a way that's exploitable (but that correctly decodes all valid ThunderScan compressed TIFFs). At this point, you've created a vulnerability in many programs, to support the one or two that still need to decode ThunderScan RLE compression.
Posted Dec 5, 2025 21:49 UTC (Fri)
by dvdeug (subscriber, #10998)
[Link] (12 responses)
There are a lot of times the solution is to set VXTMLR=1 and it just works. A fact you learn six weeks after you finish redesigning the whole system with new software and custom code, or replacing a piece of hardware. You have a TIFF file that is read by TIFF-reading programs; the fact that it's not being read here will frustrate all non-technical users and many technical ones. Then you get the tech in, and they realize it's running libtiff like the rest of the system, so why isn't this working?
> And I chose that format very, very deliberately
And I stand by my case; not supporting certain files in a format because you don't think they're being used is a recipe for pain and annoyance. I'll go further and point out that the gain from disabling this is tiny; if you don't trust libtiff, then you should protect yourself in some way, not just disable one or two compression schemes that you don't feel are being used and that have a tiny chance of being the angle someone uses to attack libtiff. TIFF, like various media formats, is a format where you'll never support all the files in the wild, but you should either support some narrowly and clearly defined subset, or support everything that e.g. libtiff/ffmpeg does. Or don't support it at all, which means people know to run a different tool or convert the file to a format you support.
I'm also not convinced that everyone will be so careful as you. It's easy to get carried away; most attempts at making an X clone with only the features that people really use fail because while people only use 20% of the functionality, they all use a different 20%.
Posted Dec 6, 2025 21:24 UTC (Sat)
by farnz (subscriber, #17727)
[Link] (11 responses)
I could, of course, have chosen Internet-facing streaming media software that uses its own build of ffmpeg in the upstream configuration, where only the formats that are expected to work are enabled, but where the distro has turned on support for all the weird and wonderful formats that ffmpeg supports.
In both cases, though, your argument is that it's important to introduce security issues into software whose upstreams don't have that issue because they run with cut-down dependencies, because there might be a rare user who actually wants to deal with ThunderScan RLE TIFF files, or upload LucasArts SANM videos for transcoding to AV1 for streaming, or otherwise do something deeply niche and weird. I would suggest that actually, what you want is more than one build of libtiff, or ffmpeg, or other functionality, where the local-only media player can decode LucasArts SANM, or the legacy 68k Mac (ThunderScan was not supported on PPC Macs) file reader can convert your scanned documents to a modern format, but the Internet-exposed one doesn't have the extra codec support by default, precisely because of the security risk.
Posted Dec 6, 2025 21:48 UTC (Sat)
by johill (subscriber, #25196)
[Link] (6 responses)
Which is basically the "multiple builds", but as long as you don't need to do "internet facing server" and "legacy media" on the same machine, it'd be much simpler.
(You could argue that maybe a web browser should have a restricted version and the local viewer not, but realistically you might even need the browser to have the full version if you're working with such files since you might have a web-based organisation tool etc., so I don't think that really works)
Posted Dec 8, 2025 9:28 UTC (Mon)
by farnz (subscriber, #17727)
[Link] (5 responses)
In the extreme, you end up with a different libtiff for each use case, and you want them to be parallel installed and each only used by one program on a system - e.g. the fax handler has the fax subset (and doesn't have any others), while the paper document archive program supports the subsets used by faxes and scanned paperwork, but not medical imaging or prepress artwork.
And once you get there, where's the advantage of dynamic linking?
Posted Dec 24, 2025 14:49 UTC (Wed)
by sammythesnake (guest, #17693)
[Link] (4 responses)
Posted Dec 24, 2025 17:31 UTC (Wed)
by farnz (subscriber, #17727)
[Link] (3 responses)
Let's switch to video formats, and use GStreamer, which has a plugin mechanism. How does my "upload a video from your phone via the Internet" program know that a given plugin is not for it to use? In the upstream version, it's fine - they just don't install vulnerable plugins, so by definition, all plugins are fine for it to use.
But the distribution has a problem: a phone will never record in (say) LucasArts SANM format, but other software installed by the user might well have a legitimate reason to want the GStreamer plugin for LucasArts SANM to be available and working. Equally, though, the program might well benefit from you adding an AV2 plugin (so that phones that record AV2 video instead of HEVC are supported), even though it doesn't know about AV2.
This, however, is the same problem as before, just rearranged - you need parallel installable "profiles" of the library (in this case GStreamer), such that each program gets the profile it wants. And in the extreme, each program has its own profile.
Posted Dec 24, 2025 18:22 UTC (Wed)
by mathstuf (subscriber, #69389)
[Link]
So `/etc/pam.d`-like files but for plugins then? Sounds like fun ;) .
Posted Dec 24, 2025 18:56 UTC (Wed)
by excors (subscriber, #95769)
[Link] (1 responses)
That sounds like a pretty easy problem to solve: the application can have an allowlist of plugins. When a significant new codec like AV2 is released, maybe a couple of times per decade, the developers can add it to the list. Maybe have a config file so users of old releases can add it without upgrading. This should only be needed for applications that are processing untrusted input on systems where they can't trust the administrator not to install insecure plugins, so it's going to be end-user things like web browsers that have to be constantly upgraded for security reasons anyway and it's minimal extra effort to maintain the plugin list.
(GStreamer already splits plugins into "good" (safe to use), "ugly" (good quality code but copyright/patent licensing issues), and "bad" (unsupported; maybe low quality, unmaintained, rarely used, etc), with separate Linux packages for each set, plus a number of single-plugin packages (for licensing or dependency reasons). On systems with a responsible administrator, that means you can already choose what profile of plugins are available, and there's no technical reason it couldn't be more fine-grained.)
Posted Dec 25, 2025 19:14 UTC (Thu)
by farnz (subscriber, #17727)
[Link]
And that's the root of the problem I'm raising. There exist cases where unifying features introduces a vulnerability - indeed, the allowlist mechanism you've described also introduces one if an insecure plugin is added that upstream explicitly doesn't support because they know it opens a security hole in their software. As a consequence, you turn a simple "statically link ffmpeg, configured just for the plugins upstream has reviewed and knows are good" into a "port to a plugin architecture, maintain allowlists, somehow ensure that downstream never adds something insecure to the allowlist" problem, which is a lot of extra complexity, and all so that downstream can "unify" features for things that don't benefit from the unification (since the allowlist promptly bans use of all the extra features so enabled).
This is the flip-side of distros spotting problems as they package - because they change things (they're effectively a friendly fork of upstream), they run the risk of introducing new bugs. It is not nearly as simple as "static linking bad, dynamic linking good"; it can also be "by unifying features, I've introduced a bug that does not exist upstream, because I de-vendored a library, linked to the shared version, and now we have problems".
Posted Dec 7, 2025 8:56 UTC (Sun)
by dvdeug (subscriber, #10998)
[Link] (3 responses)
My argument is that first, you aren't gaining much by stripping features because they're rare. Either you trust libtiff, or you don't. FFMPEG has more dependencies and sometimes more sketchy dependencies, so that's slightly different. But in either of those cases, you're taking a small chunk of code in a much, much bigger library, and treating it as a security win to disable it, not because it's known to have anything wrong with it, but just because you can. Don't; if you don't trust libtiff, sandbox it. If you find libtiff trustworthy enough, you're not gaining anything measurable by disabling random tiny parts of it.
Two, to you, a ThunderScan RLE TIFF file is an obscure Macintosh format. To a user, it's a TIFF file, a relatively common format. If I try and use an .snm file in a tool that plays movies, and it doesn't work, I look for a tool that supports .snm files. If I try and use a valid .png file in a tool that supports PNG files, and it doesn't work, I'm going to treat that tool as broken. I've read the TIFF standard, and understand how complete support is impossible, but the user is going to treat that as the PNG case, not the SNM case.
Third, I think this is a slippery slope that would make software less pleasant to use. ThunderScan RLE is a rare format, yes. But as I said, it's a small chunk of code, so what else are you going to take out? I have 16000 TIFF files on my hard drive, and most of them are in CCITT Group 4 fax encoding, which is not in the TIFF baseline. They may be few, but I have 2, 3 and 4 respectively in PackBits (baseline, but considered rare on Wikipedia), Deflate (PKZIP) (obsolete, uncommon), and LZW (not in baseline, and who uses LZW any more?). Having to spend any time to figure out why the program that views TIFF files doesn't view these is a waste of my time, and for many users, that could be a waste of hours and possibly expensive bills to call in a technician.
> I would suggest that actually, what you want is more than one build of libtiff, or ffmpeg, or other functionality,
No; I want something that processes TIFF files to actually process all TIFF files. That's not realistic, but approaching it is still important for the quality of user experience. If you don't trust libtiff, I expect you to not just disable a couple of minor compression schemes and call it done; you should protect the program from libtiff bugs. ffmpeg is a gnarlier mess, but I suspect there's even less real data about what video codecs people are using, and probably even more reason to sandbox it instead of trying to figure out which codecs are unused and blindly trusting the rest.
Posted Dec 8, 2025 9:24 UTC (Mon)
by farnz (subscriber, #17727)
[Link] (2 responses)
There is no program out there that supports all TIFF files - libtiff certainly doesn't.
And coming back round to my initial point: by adding in the extra file format support to someone's fax handler, the distro has introduced a vulnerability that is not present upstream, and where upstream is quite likely to say "well, why did you add ThunderScan RLE decoding to my fax program? That makes no sense at all, since faxes are 1 bit (by definition), and ThunderScan RLE is 4 bit (by definition)". This is not a win for users, or for the upstream, and it's a security hole opened by the distro insisting that there is a shared libtiff.
Posted Dec 8, 2025 11:16 UTC (Mon)
by dvdeug (subscriber, #10998)
[Link] (1 responses)
Just to be clear, the idea that this was a fax handling program is present nowhere else in this discussion. At no point did you offer a reason ThunderScan RLE wouldn't be supported besides "it's rare".
> There is no program out there that supports all TIFF files - libtiff certainly doesn't.
There's no program out there that supports all C++ files; g++ certainly doesn't. Is that argument going to pacify you when it fails on your C++ file? I'm asking programs that claim to support TIFF files to try their best at supporting TIFF files, and not put extra work into disabling support for certain TIFF files.
>And coming back round to my initial point: by adding in the extra file format support to someone's fax handler, the distro has introduced a vulnerability that is not present upstream, and where upstream is quite likely to say "well, why did you add ThunderScan RLE decoding to my fax program? That makes no sense at all, since faxes are 1 bit (by definition), and ThunderScan RLE is 4 bit (by definition)". This is not a win for users, or for the upstream, and it's a security hole opened by the distro insisting that there is a shared libtiff.
No, the distro has not introduced a vulnerability that is not present upstream. It may have introduced a vulnerability, and the probability that it did so is key to judging whether or not the action should be taken. Again, it is a win for users to have as much support for their files as possible, and the idea that the program wouldn't accept ThunderScan RLE anyway is something new. The very fact that it only supports 1-bit files means that it should bail out of reading the file long before any ThunderScan-RLE-specific code is run.
It's possible that just adding the support code will close a vulnerability; mishandling of an error state was what destroyed the first Ariane 5, one of the most expensive software errors ever, after all. The log4j problems, similarly, came from issuing errors to a log file and could have been blocked in some cases by just changing which error message was logged.
More importantly, by linking the program to the shared libtiff, it avoids any vulnerabilities that could come from the window between the shared libtiff being fixed and the program being fixed. That's why distros demand that programs be linked to shared libraries instead of vendored ones, so security holes in the library get quickly patched on being fixed, instead of fixed in every copy on the system individually.
Even in the fax case, it's not as simple as the distro introducing a vulnerability; the distro could be patching a vulnerability just by making the change, and they could be minimizing the time that an openly known vulnerability exists in the program. Even if the user support is irrelevant, how likely a vulnerability is to be created by a change and how likely it is to be fixed by a change is important.
Posted Dec 8, 2025 11:25 UTC (Mon)
by farnz (subscriber, #17727)
[Link]
And just to be completely and utterly clear: you are saying that a vulnerability that is not present in the upstream version of the project, or in their binary builds, but is only present in the distro build (where they've unbundled libtiff and linked a version that has a security issue that can be triggered just by loading a TIFF file) is not introduced by the distro?
And my argument is that the distro is not always fixing things by unbundling dependencies and unifying on a single version - it can be both fixing some problems and introducing new ones. Which tradeoff is better is not backed (in either direction) by data.
Posted Nov 24, 2025 20:25 UTC (Mon)
by ebee_matteo (subscriber, #165284)
[Link] (30 responses)
> > Then the future is shite
> Or you go back to what I was doing over 40 years ago, when a library was just that ...
You can also go back to the beginnings of UNIX and use IPC across small binaries to perform tasks. Many people here still like their pipes on the shell.
I see it as a good pattern: keep programs small and then use IPC to make them communicate, via pipes / sockets and gRPC / varlink / DBus / anything.
That for me would be a better future...
Posted Nov 24, 2025 20:37 UTC (Mon)
by willy (subscriber, #9762)
[Link] (29 responses)
At this point I hope you realize you've merely restated the problem, not solved it.
Posted Nov 24, 2025 21:29 UTC (Mon)
by DemiMarie (subscriber, #164188)
[Link] (24 responses)
Server software is often shipped as containers nowadays, and containers don’t benefit much from dynamic linking. In fact, static linking is often considered a benefit in the server world due to ease of deployment.
Embedded systems do benefit from dynamic linking, and Android uses dynamic linking for its Rust crates. However, updates for embedded devices are usually complete images, so ABI stability is of very little value. The only advantage would be allowing binary dependencies to use Rust APIs.
The systems that benefit greatly from ABI stability are “traditional” distros with mutable root filesystems. However, none of them have been willing to fund the needed improvements. Furthermore, many of these distros are run by volunteers.
Like fishface60, I hope that Canonical, SUSE, Red Hat, or Valve steps up and funds a solution.
Posted Nov 24, 2025 23:17 UTC (Mon)
by bluca (subscriber, #118303)
[Link] (12 responses)
Except of course that's not really true, as proven by companies like Red Hat spending tons of dev time to implement very, very complex solutions to deduplicate said containers after the fact, because that whole Docker mess doesn't really scale beyond a handful of instances. Storage, memory, and loading-time costs are through the roof because of the intense duplication.
Posted Nov 25, 2025 20:37 UTC (Tue)
by jhoblitt (subscriber, #77733)
[Link] (11 responses)
At my $day_job, I have increased the k8s per node pod limit on most clusters up to 250 from the default of 110 as nodes were routinely hitting the pod limit but could easily handle more load.
Posted Nov 25, 2025 20:49 UTC (Tue)
by bluca (subscriber, #118303)
[Link] (10 responses)
Posted Nov 25, 2025 21:22 UTC (Tue)
by khim (subscriber, #9252)
[Link] (9 responses)
If you have a million servers then you would spend way more than $100 million on stuff not even remotely related to what you pay for the servers themselves. I wouldn't be surprised to find out that just the building permits cost more. Heck, with a million servers one, single, outage caused by problems with shared-library compatibility may cost you more than $100 million! An attempt to inflate the price of something by shouting “but what if there are a thousand, ten thousand, a million servers” will never work, because not only do the expenses grow linearly, the cost of potential problems also grows linearly. The time when dynamic linking was feasible is long in the past for this very reason: in a world where human labor is cheap and hardware is expensive, saving one byte made sense. In today's world… not so much. You may win some, in rare cases, if you have some component that's shared between hundreds or thousands of different programs (maybe some core OS library), but everything above that is cheaper not to share.
Posted Nov 25, 2025 21:48 UTC (Tue)
by bluca (subscriber, #118303)
[Link] (8 responses)
Posted Nov 25, 2025 21:52 UTC (Tue)
by khim (subscriber, #9252)
[Link] (7 responses)
Nothing curious, really. It's just a matter of priorities: an upgrade of one shared library on a server farm with a million servers that brings down your whole datacenter may cost you a lot more than $100 million, thus you install Kubernetes and don't do that; but, of course, $100 million is still $100 million — if you can, somehow, save it without exposing yourself to the instability caused by distro quicksand then you will do that. It's a matter of priorities.
Posted Nov 26, 2025 1:06 UTC (Wed)
by bluca (subscriber, #118303)
[Link] (6 responses)
Posted Nov 26, 2025 2:53 UTC (Wed)
by Cyberax (✭ supporter ✭, #52523)
[Link] (5 responses)
That's actually close to the truth. There was even a project to run K8s as PID 1. Most installations don't go _that_ far, and just limit themselves to something minimalistic like Alpine.
Posted Nov 26, 2025 3:18 UTC (Wed)
by jhoblitt (subscriber, #77733)
[Link] (1 responses)
Posted Nov 26, 2025 5:29 UTC (Wed)
by Cyberax (✭ supporter ✭, #52523)
[Link]
My desktop Docker host uses a cut-down Debian version without systemd running (there are just 4 processes: /initd services, /usr/bin/containerd-shim-runc-v2, /usr/bin/containerd, /usr/bin/rosetta-mount).
Posted Nov 26, 2025 11:15 UTC (Wed)
by bluca (subscriber, #118303)
[Link] (2 responses)
In your fantasy land perhaps - back down here in the real world, everyone and their dog deploys on Ubuntu as the host, with RHEL and its derivatives distant contenders
> something minimalistic like Alpine
...which is also famously not a "distribution" but consists entirely of aether, right
Posted Nov 27, 2025 17:12 UTC (Thu)
by ssmith32 (subscriber, #72404)
[Link] (1 responses)
1) In the real world, most folks are going to use whatever OS their cloud provider uses for their k8s solution. So for EKS, Amazon Linux or Bottlerocket or something. And if it's GCP, they're probably rebuilding the whole world for funsies even if they use Ubuntu, because Monorepos Are (not) Awesome.
2) In the real world, people don't choose one or the other. Both have trade-offs, and most end up using both, depending on the situation. A base image of Ubuntu, with the occasional application installed via flatpak, etc.. is often the best solution for home use. k8s when deploying a large number of services at scale for commercial use.
Posted Nov 27, 2025 18:50 UTC (Thu)
by bluca (subscriber, #118303)
[Link]
Posted Nov 25, 2025 8:58 UTC (Tue)
by taladar (subscriber, #68407)
[Link] (10 responses)
Posted Nov 25, 2025 13:35 UTC (Tue)
by khim (subscriber, #9252)
[Link] (8 responses)
The funding is not there because there is no actor who would benefit from that work and has money to spare. Google and Microsoft have no incentive to fund anything like that because they are not providing Rust ABIs (at least not yet), and distros are not in a position to develop anything, and don't even feel it's their responsibility to develop anything. The story about “awful inlining” is an entirely moot point: you have the same thing with
Posted Nov 30, 2025 17:11 UTC (Sun)
by NYKevin (subscriber, #129325)
[Link] (7 responses)
Alternative data models, and why they do not solve this problem:
* dyn Trait in a Sized context becomes syntactic sugar for Box<dyn Trait> (autoboxing) - There is no guarantee that we even have a heap in the first place. It would create a bizarre inconsistency where T is not the pointee of &T. And there are several problems (see below) that it doesn't actually solve.
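A minimal Rust sketch of the constraint behind this bullet, with invented helper names: `dyn Trait` is unsized, so it can only live behind a pointer, and the only way to own a trait object by value today is a `Box`, which presumes a heap. A `&dyn Trait` is also a fat pointer (data plus vtable), so `T` would not be the pointee of `&T` under autoboxing.

```rust
use std::fmt::Display;

// `dyn Display` is unsized: it can only be handled through a pointer.
// `&dyn Display` is a fat pointer (data pointer + vtable pointer),
// so it is twice the size of a thin `&u32`.
fn show(x: &dyn Display) -> String {
    format!("{x}")
}

// Boxing is the only way to own a trait object by value today,
// and it requires a heap allocation.
fn boxed(x: u32) -> Box<dyn Display> {
    Box::new(x)
}

fn main() {
    assert_eq!(show(&42u32), "42");
    assert_eq!(
        std::mem::size_of::<&dyn Display>(),
        2 * std::mem::size_of::<&u32>()
    );
    assert_eq!(format!("{}", boxed(7)), "7");
    println!("ok");
}
```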
Posted Nov 30, 2025 17:32 UTC (Sun)
by khim (subscriber, #9252)
[Link] (6 responses)
Where have you read the idea that it wouldn't change the “current data model”? It would, of course, the same way Swift did. Have you excluded the not just obvious, but already implemented (in Swift) solution on purpose? Introduce dynamic, parametrised types, and that's it. Yes, it would change the language in subtle ways. The big question is not whether it can be done, but how hard it would be. Probably lots of simple, tedious work… but nothing really crazy.

Backward compatibility could be problematic, though. The worst thing that would happen: some errors that are currently caught at compile time would become panics at runtime… consider a function that removes one element from a slice. Currently all that zoo is kept out of stable, but there are lots of struggles in attempting to make it all work… struggles that, ironically enough, become trivial if you introduce runtime-parametrised types. It's more of a political decision than a technical one: no one would fund such work without a promise of it being included in the compiler, and no one would give such a promise when only ideas on paper exist and there is no working code. But maybe someone could do that as a Rust fork?

The biggest question is with functions that may return an unknown type (known type, unknown parameters: think of a function that removes duplicates from a slice). As a first step these could be forbidden outright: every type that leaves a function should be describable in terms of its inputs (but then the aforementioned function that removes one element from a slice is also impossible), a bit like lifetimes are handled today.
Posted Dec 1, 2025 23:12 UTC (Mon)
by NYKevin (subscriber, #129325)
[Link] (5 responses)
I refer the honorable gentleperson to the answer I gave some moments ago:
> Similar to the previous bullet, this fixes the self receiver and does not help with [associated types, associated functions, and various other dyn-incompatible things besides the self receiver].
Posted Dec 1, 2025 23:14 UTC (Mon)
by NYKevin (subscriber, #129325)
[Link] (4 responses)
Posted Dec 1, 2025 23:41 UTC (Mon)
by khim (subscriber, #9252)
[Link] (3 responses)
One may face a random compilation error today with… As I have said: that's not a big deal; in practice, the majority of existing software is written in languages that have that “defect” and it's rarely a problem. Whether the Rust team wants to deliver something usable in a realistic timeframe, or would prefer to punish users with what they have now, is ultimately up to them: using
Posted Dec 2, 2025 8:44 UTC (Tue)
by taladar (subscriber, #68407)
[Link] (2 responses)
Posted Dec 2, 2025 10:21 UTC (Tue)
by khim (subscriber, #9252)
[Link] (1 responses)
We are talking about Rust, not Haskell, here. Uses of… In the past, for things like integer overflow or array access, Rust usually picked panics in place of UB, but with a stable ABI the approach is the opposite. That looks a bit illogical to me, but then, people are not always rational.
Posted Dec 9, 2025 0:37 UTC (Tue)
by gmatht (subscriber, #58961)
[Link]
Posted Nov 25, 2025 16:52 UTC (Tue)
by Wol (subscriber, #4433)
[Link]
And is there no way to declare an interface as "extern", which would mean that anything crossing that interface cannot be optimised in a way that would break an external app that doesn't know about the changes?
Of course, that then means a strict separation of declarations, inline definitions, and generics, but might that not be a good thing?
I can see that trying to turn generics into concretes might be a little tricky, but a dummy call for every generic you want to make concrete, over an extern definition, would do it?
And just like with "unsafe", you could offload the responsibility to the programmer to make sure the use of the definition files is consistent. With automated traps as far as possible.
Cheers,
Posted Nov 25, 2025 22:40 UTC (Tue)
by jhoblitt (subscriber, #77733)
[Link] (3 responses)
Posted Nov 25, 2025 23:20 UTC (Tue)
by intelfx (subscriber, #130118)
[Link] (2 responses)
> For example, most Go programs use the core flags package and don't require a runtime dependency for parsing arguments.
And this is a very "I-don't-see-further-than-my-lang's-shiny-abstractions" point of view.
What is the "core flags package", exactly? It is a library. So if it's a library, where does it exist? Do we dynamically or statically link it? The GP's question stands unanswered.
Posted Nov 26, 2025 3:24 UTC (Wed)
by jhoblitt (subscriber, #77733)
[Link] (1 responses)
Posted Nov 26, 2025 9:28 UTC (Wed)
by taladar (subscriber, #68407)
[Link]
Posted Nov 24, 2025 18:42 UTC (Mon)
by keithp (subscriber, #5140)
[Link] (70 responses)
So, you either get responsible language design with actual type checking across interfaces, or you get shared libraries. I haven't seen any plan for getting both. It kinda sucks, but given that I have to make a choice, I know which I'm willing to accept.
At this point, I'd assume that any package using Rust anywhere should trigger a rebuild of its reverse dependencies, at least until policy tells us how to avoid that.
Posted Nov 24, 2025 18:57 UTC (Mon)
by ballombe (subscriber, #9523)
[Link] (37 responses)
Posted Nov 24, 2025 21:05 UTC (Mon)
by DemiMarie (subscriber, #164188)
[Link]
Posted Nov 24, 2025 21:27 UTC (Mon)
by mb (subscriber, #50428)
[Link]
Posted Nov 25, 2025 12:02 UTC (Tue)
by farnz (subscriber, #17727)
[Link] (33 responses)
Polymorphism is absolutely fine as long as you are aware that this means that the polymorphic parts of your library live in the caller's binary, not in your binary. Same with defined constants in a header, struct layout etc.
The thing that you need is something that tells you when you've modified something that will be in the caller's binary, not your binary, so that you can undo that breakage. Ideally, you'd also have a way to "shim" your new library, so that old binaries can still link against the new library, and go via the shim that fixes things up so that they continue to work without a rebuild.
But this is a really hard tool to develop; there's a lot hiding in those two sentences. Even just doing the "modified something that will cause breakage" for static linking is hard; and dynamic linking ups the difficulty a notch.
Posted Nov 25, 2025 13:38 UTC (Tue)
by khim (subscriber, #9252)
[Link] (32 responses)
This would only work if your library provides ABI without things like
Posted Nov 25, 2025 13:56 UTC (Tue)
by farnz (subscriber, #17727)
[Link] (31 responses)
Second, I didn't say that you can't have polymorphism; I said that you have to be aware that your polymorphic components live outside your binary. You can have, for example, pub fn foo<P: AsRef<Path>>(path: P) -> u32 { foo_impl(path.as_ref()) }, as long as you are happy that foo is inlined into the caller's binary, while fn foo_impl(path: &Path) -> u32 is in your binary.
The important part is that you're aware of what's in your dynamic library, and what's outside it, and that you have a way to cope with the subset of your code that's in the caller not changing when your dynamic library changes. That might be shims and symbol versions like glibc, or not changing things once they've been exposed in a way that breaks the ABI.
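A compilable sketch of the split described above; the body of foo_impl is an invented placeholder, since the comment only gives the signatures. The generic shim is monomorphized (and typically inlined) into each caller's binary, while the non-generic worker is the single symbol that can live in the library's shared object.

```rust
use std::path::Path;

// The generic shim: one copy per concrete `P` lands in each
// caller's binary, so its body is effectively frozen ABI.
pub fn foo<P: AsRef<Path>>(path: P) -> u32 {
    foo_impl(path.as_ref())
}

// The non-generic worker: a single symbol that can live in the
// library's shared object. Its body can change between library
// versions without rebuilding callers, as long as its signature
// (and the shim's behaviour) stay fixed.
// The body here is an invented placeholder.
fn foo_impl(path: &Path) -> u32 {
    path.as_os_str().len() as u32
}

fn main() {
    // Both `&str` and `String` arguments funnel into the same worker.
    assert_eq!(foo("/tmp"), 4);
    assert_eq!(foo(String::from("/tmp")), 4);
    println!("ok");
}
```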
Posted Nov 25, 2025 14:32 UTC (Tue)
by khim (subscriber, #9252)
[Link] (30 responses)
Yes. But not with Rust as it exists today. Even… Well… a compiler upgrade [potentially] breaks the ABI, which means you would have to specify precisely which version of the compiler defines it… and never upgrade. RenderScript tried that and died as a result, Apple ended up in the exact same position, etc. You can't build a stable platform on quicksand.
Posted Nov 25, 2025 15:09 UTC (Tue)
by farnz (subscriber, #17727)
[Link] (29 responses)
Indeed, you might well end up with a v1, v2, v3 etc stable ABI, where v1 is what we think is good enough next year, and v2 is a decade later with all the small improvements we've accumulated since v1 was marked stable, with downstream users deciding when it's worth moving to a new version of the ABI and breaking older binaries. You could even provide a stable ABI v1 shim that uses the stable ABI v5 code to implement things, and does whatever is needed to get compatibility (copies of data structures etc).
But that's something the compiler team has to commit to. None of this works if the compiler team won't stabilize the ABI (replacing the compiler version dependency with a stable ABI version dependency).
Posted Nov 25, 2025 15:18 UTC (Tue)
by khim (subscriber, #9252)
[Link] (28 responses)
Then what's the point of limiting the whole thing to statically known types? Polymorphic ABIs work just fine with compiler buy-in: there are Swift, C#, Java, Ada… it's not rocket science, it's well-tested tech. Known for decades, not years.
Posted Nov 25, 2025 15:33 UTC (Tue)
by farnz (subscriber, #17727)
[Link] (20 responses)
Mine is that the provider of a library with a stable ABI needs to be aware of what they're actually offering - what parts have to be kept stable (including because they're embedded in user code - e.g. user code knows this is a thin pointer, ergo you can't change it to a fat pointer, or user code knows this data structure is 108 bytes in size, so that can't change), and what parts are safe to change (e.g. user code calls into your library at this point, so you can change the implementation).
Note that you might want to allow some of your library code to be inlined for performance - e.g. Vec::len is something you'd want inlined, you might want to inline most of Vec::push, only calling out-of-line code if the Vec needs to grow, for two examples. And when you do that, you need to know that part of your stable ABI is ensuring that the inlined parts still work as designed, even if the parts that you've kept in your shared binary are changing.
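A hedged sketch of what inlining an accessor commits you to, using an invented StableVec rather than the real Vec: once callers inline len(), the field's offset and the struct's overall layout become part of the ABI and can never change without rebuilding those callers.

```rust
// A sketch (not the real `Vec`): #[repr(C)] freezes the field order
// and layout, which is what makes inlining the accessor safe.
#[allow(dead_code)]
#[repr(C)]
pub struct StableVec {
    ptr: *mut u8,
    len: usize,
    cap: usize,
}

impl StableVec {
    // Cheap accessor: callers inline a single field load, so the
    // offset of `len` within the struct is now exposed ABI.
    #[inline]
    pub fn len(&self) -> usize {
        self.len
    }
}

fn main() {
    let v = StableVec { ptr: std::ptr::null_mut(), len: 3, cap: 8 };
    assert_eq!(v.len(), 3);
    // The frozen layout: three word-sized fields.
    assert_eq!(
        std::mem::size_of::<StableVec>(),
        3 * std::mem::size_of::<usize>()
    );
    println!("ok");
}
```

The slow path of something like push (reallocation) would stay out of line in the shared object, which is exactly the hot/cold split the comment above describes.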
Posted Nov 25, 2025 17:06 UTC (Tue)
by ssokolow (guest, #94568)
[Link] (4 responses)
An opt-in stable ABI for Rust that serves as a higher-level alternative to what you currently get from
Posted Nov 25, 2025 17:18 UTC (Tue)
by farnz (subscriber, #17727)
[Link] (3 responses)
I want both CrABI, so that I have a compiler that can give me a stable ABI, and a way to cleanly identify which things are ABI, so that a future version of cargo-semver-checks can tell me not only that I've broken my API stability promises (leaving me to decide if I bump the major version, or if I fix my API), but also that I've broken my ABI stability promises.
I also want to be able to say that my stable ABI depends on you using a compatible stable API crate as part of the build process - just as I can't use a random header file version in C, and rely on it working with a random shared library version - that allows for things like carefully chosen inlining of part of my code into the caller, while still having chunks of my internals in my shared library (e.g. so that fast path code can be inlined, calling a slow path in my shared library).
On top of that, while I'm demanding perfect tooling, moon-onna-stick, and free unicorns for everyone, I want tooling that allows me to knowingly break ABI as long as I provide the necessary shims to let people who linked against the older ABI to continue to work.
Posted Nov 25, 2025 19:16 UTC (Tue)
by ssokolow (guest, #94568)
[Link] (2 responses)
For varying definitions of "rely on"
https://abi-laboratory.pro/index.php?view=tracker
Posted Nov 26, 2025 10:00 UTC (Wed)
by farnz (subscriber, #17727)
[Link] (1 responses)
Posted Nov 26, 2025 10:15 UTC (Wed)
by ssokolow (guest, #94568)
[Link]
I suspect anything which satisfies both your goals and the Rust devs' goals will be paired with some kind of link-time hackery to ensure that what would otherwise have been statically linked will refuse to start if the subset of the provided library that's being used is not 100% ABI compatible.
"'Just don't make mistakes' doesn't scale" is, after all, a core element of Rust's design philosophy.
Posted Nov 25, 2025 17:17 UTC (Tue)
by khim (subscriber, #9252)
[Link] (14 responses)
I understand what you are offering, but I don't understand why you would offer it. C++ went this way because it could without asking compiler developers to cooperate: people just started providing binary libraries without asking anyone for permission, and that worked because C++ hadn't been breaking its ABI quickly enough. Rust couldn't do that without asking compiler developers to cooperate… the breakage possibility is real enough… and if compiler developers have to be involved anyway, why not give developers the ability to make nice, easy-to-use, generic ABIs rather than restrict them to this strange subset? Note that it only works in C++ because of the incredible efforts that compiler developers now have to provide… no one would have accepted such a burden had they not been put in the bind of “provide backward compatibility or forget about users adopting new versions of your compiler”. Rust developers may simply refuse to participate in that craziness (because, as you note, without them nothing would work) or, if they agree, they can design something more useful and sane.
Posted Nov 26, 2025 10:49 UTC (Wed)
by farnz (subscriber, #17727)
[Link] (13 responses)
I'm saying that we need the tools to allow developers to decide what falls on the "shared binary" side of the library, and what falls on the "inlined into the user" side of the library; in C++, this got forced on compilers by the header file/object file split, where things in the header files are "inlined into the user", and things in the object files are not.
But there's no particular reason that this couldn't be well-designed, such that everything (even constants and data layouts) is fixed up by the dynamic linker at run time by default - but there are performance wins to be had from inlining other things. For example, doing a full-blown function call for Vec::len is an expensive way to read a single usize from a known location in the structure; if you export the layout of Vec and the definition of Vec::len such that they can both be inlined into the caller, then you need to ensure that the layout of Vec and the definition of Vec::len remain compatible into the future. Or you might decide that optimizations enabled by knowing that Vec is always going to be 24 bytes, 8 byte aligned, on this platform, are worth enabling, and thus export that layout.
And that's why I want to offer that - I want library authors (who presumably understand the library code) to be able to decide what is inlined into the calling binary, and what is put in the library's binary, with tooling support for stopping them from accidentally changing something from inlined into the caller's binary to part of the library's binary.
Posted Nov 26, 2025 13:42 UTC (Wed)
by khim (subscriber, #9252)
[Link] (12 responses)
Nope. We need tools that simply work. A developer shouldn't consult some large document to find out what would be embedded where, except in rare cases marked… Developers have some ideas about what it takes to have a stable API (and there are checkers for that); that should be enough. How exactly the compiler ensures that a stable API becomes a stable ABI is, likewise, on the shoulders of the compiler. Yes, in C++ they had this split from day one; that's why it worked. Rust had no such split, thus it wouldn't work. This could be an external addon, sure. But by default things should just work; otherwise no one would adopt the solution. Developers are not malicious, but lazy. Take that into account, and that would be enough to develop a sane approach. Provide tools that allow some kind of perfection after you provide something that “just works”.
Posted Nov 26, 2025 14:01 UTC (Wed)
by farnz (subscriber, #17727)
[Link] (11 responses)
The compiler cannot do this unaided - how does the compiler know whether a given small function is "hot path" in your users (and therefore needs to be inlined, with tooling assistance to prevent you from breaking the inlined paths), or "cold path" (and therefore can be out-of-line as a symbol in the shared binary)?
If you don't offer this as part of shared library support, then developers are going to hack around it in ways that tooling cannot check, because they're lazy - fixing the compiler to support this is a lot harder than hacking around it in interesting and unsupportable ways.
And the bit you keep ignoring is that I'm saying that we need tooling that's aware of this, and that can error out if you accidentally break your users. That is table stakes for this - otherwise you end up in the C++ situation.
Posted Nov 26, 2025 14:15 UTC (Wed)
by khim (subscriber, #9252)
[Link] (10 responses)
Why? Why doesn't that happen with the bazillion other languages that offer similar capabilities? Not exposing “lightweight” objects in a way that requires such hacks is even easier. We are talking about shared libraries that are used to offer some kind of stable ABI here. Supporting a stable ABI is hard. People wouldn't add crazy hacks where they couldn't guarantee whether they would work or not. Look at Vulkan as an example of a stable ABI that has to be performant, too: no crazy hacks, no inline functions, just a bunch of non-opaque structures, with accessors to these structures in static libraries (that you may or may not want to use). Zero support from the compiler, zero magic, guaranteed to work, and easily replicated in Rust, if needed. Nope, you wouldn't. The C++ situation arose because C++ already had that difference between… Don't give developers that capability, and they won't use it. They will work with dynamic dispatch and the limitations of dynamic dispatch. It's as simple as that. Sure, optional availability of such a capability wouldn't hurt… but the lack of it is not a show-stopper at all. Developers can tolerate slow; they won't tolerate “you need an entirely different API for shared libraries”: they would just link everything statically.
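A small sketch of that Vulkan-style approach in Rust, with invented names throughout: a non-opaque #[repr(C)] structure whose layout is frozen, plus a plain extern "C" entry point whose body can change freely between library versions.

```rust
// Vulkan-style stable ABI sketch. All names here are invented for
// illustration; this is not a real library's interface.
// The struct is non-opaque: its #[repr(C)] layout is the contract.
#[repr(C)]
pub struct CreateInfo {
    pub version: u32,     // lets the library detect older callers
    pub flags: u32,
    pub queue_count: u32,
}

// Out-of-line worker exported with a fixed symbol name: only the
// signature and the struct layout are frozen, the body is not.
#[no_mangle]
pub extern "C" fn mylib_total_queues(info: &CreateInfo) -> u32 {
    info.queue_count
}

fn main() {
    let info = CreateInfo { version: 1, flags: 0, queue_count: 4 };
    assert_eq!(mylib_total_queues(&info), 4);
    println!("ok");
}
```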
Posted Nov 26, 2025 15:28 UTC (Wed)
by farnz (subscriber, #17727)
[Link] (8 responses)
And if you don't allow exposure of lightweight objects from a library, then you're ruling out such things as Option<&str> in the library's ABI surface. That's a lightweight object being exposed - and thus needs a stable ABI.
Vulkan's a really good example, because it does expose all sorts of things that get inlined into the caller - there are inline functions, for example, and those struct definitions are inlined into the caller, such that Vulkan cannot redefine the structures to a different size and still work with older libraries.
In an ideal world, there would be tooling that told people working on Vulkan "hey, you've changed this structure in such a way that it now has different size/alignment. You can't do that!", so that they couldn't make that mistake - but as it is, the normal C rules apply of "you're holding it wrong".
Posted Nov 26, 2025 15:40 UTC (Wed)
by khim (subscriber, #9252)
[Link] (7 responses)
That tool is called Libabigail, and yes, it's used when Vulkan is upgraded. It just needs a stable layout. But yes, there's an interesting dilemma there: if you just freeze the layout, someone still needs to ensure that said… Whether this needs to be exposed to developers as some kind of universally accessible tooling, or kept to the standard library, is an interesting question. Today there are a lot of things the standard library does that regular programs cannot do. Providing inlining for methods of… After all, it's not possible to take
Posted Nov 26, 2025 16:01 UTC (Wed)
by Wol (subscriber, #4433)
[Link] (2 responses)
So you allow Rust to use the "extern" keyword. Which freezes the layout according to certain rules.
One likely mechanism is to require (a) that extern tells the compiler to use a simplistic layout as declared (no optimisation), so it's guaranteed to be consistent across programs. And then (b) (copying Rust "editions") you allow extern "version", so you CAN update the layout and, more importantly, tell the Rust compiler about both layouts so it can auto-generate a shim; or you can choose not to, so it generates a linking error.
But either way, you do have to say "extern" is similar to "unsafe", in that you are relying on the programmer to enforce invariants, but that the compiler will do its job properly if the programmer does theirs (primarily ensures version is updated correctly).
Cheers,
Posted Nov 26, 2025 16:11 UTC (Wed)
by khim (subscriber, #9252)
[Link] (1 responses)
It's not enough to just freeze a layout; this would never work. The big advantage of Rust is that the compiler (and, most notably, not the developer) upholds many such invariants. We need this guarantee kept across the shared-library boundary or it'll never work.
Posted Nov 26, 2025 16:55 UTC (Wed)
by Wol (subscriber, #4433)
[Link]
Which is farnz' point. You need tooling.
Which is the point of extern. It says this is where it's likely to go wrong. It tells the compiler "here is a boundary where abstractions mustn't leak". It tells the programmer "here is a boundary where you need to be careful what goes across".
It's a major improvement on C / C++ where the header file is part of the library, but contains loads of stuff that ends up in the application binary - a major abstraction leak that C / C++ sprinkles heavily with "here be dragons" pixie dust. A Rust compiler could cleanly enforce most of that, for example by ensuring structs passed through an extern have to be declared as extern, and have the same version as the function they are passing through.
Yes it's not perfect. But it's a damn sight better than what we have in the C ecosystem. Given Rust's concern about pointer provenance and stuff, you could use the same approach to argument provenance across an extern. Doesn't stop a programmer cheating and feeding two different versions of the same header file into the two different halves - application and library - in order to fool the compiler - but that's a clear breach of his obligation to enforce integrity across that boundary.
You might even be able to get the compiler (when compiling a library) to output an Intermediate Language Description of the call interface, which the compiler (when compiling an application) imports to guarantee compatibility. Again, you're then heavily dependent on versioning, and the programmer doing it properly, but it'll only take a few crashes on linking as attempts to do so cause problems, and the programmers will learn to "get it right".
Cheers,
Posted Nov 26, 2025 16:50 UTC (Wed)
by farnz (subscriber, #17727)
[Link] (3 responses)
Basically, what I want is tooling that tells me if I've made any change that breaks either my API (which currently exists) or my ABI, and that I can have in my build process to tell me when I've made a mistake. And, as Vulkan shows, this is not just about having a way to express "this item has a fixed layout regardless of compiler", but also "you said this was supposed to be unchanging, but you've changed it in a way that's significant. Did you mean to do that?", complete with the ability to reserve space for future expansion (and not in the C-like hacky way of unused fields, but a very deliberate and well-thought through way to reserve sufficient space with good enough alignment for what you might want to fill in that space with, allowing for some fields being fixed in location because they're exposed ABI, and others being mobile because they're opaque to callers).
And yes, this tooling needs co-operation with the compiler - that's table stakes for doing a stable ABI properly. But it also needs a bunch of other things, so that I have to tell my tools that yes, I mean to break the stable ABI here, just as I have to tell my tools (via SemVer changes) that yes, I mean to break the stable API.
Posted Nov 26, 2025 17:04 UTC (Wed)
by khim (subscriber, #9252)
[Link] (2 responses)
It's not “of course”, but “do we really need it?”. If a structure has the potential of changing size, then it's highly unlikely that it's a “trivial” structure that would be used somewhere in the critical path. More likely it's some descriptor that needs non-trivial work to handle. Swift handles such structures with runtime descriptors, and it may be enough to do that. I would say that if you tried to do that you'd end up with the “success” of the OSI protocols: lots of hype, lots of talks, zero working implementations. I would rather see that marked as “maybe in 10 years, if we have the resources, we might attempt it” rather than “that's something we would use as justification not to do anything”. It needs some things, yes. You are trying to stuff it full of things that are not, strictly speaking, needed, but only desired. That's willful ignorance of RFC 1925: "It is always possible to agglutinate multiple separate problems into a single complex interdependent solution. In most cases this is a bad idea." I would say that starting with dynamic dispatch everywhere except some “blessed” types from the standard library (which is essentially what Swift did) would get us 90% there (if not 99%), and the remaining 1% may be discussed later (or handled with hacky solutions, if needed). But yes, basic library types like
Posted Nov 26, 2025 17:14 UTC (Wed)
by farnz (subscriber, #17727)
[Link] (1 responses)
I'm not arguing that these are required for a minimal solution - but rather that all of these problems have to be solved if you're going to have a stable ABI, and given that Rust needs tooling updates anyway to have a stable ABI, it would be better to have tooling that prevents accidental ABI breakage from day 1 of the Rust stable ABI for dynamic linking, rather than (as you seem to be suggesting) relying on "programmers don't make that mistake".
After all, the lesson of C++ UB is that "programmers don't make that mistake" is virtually always false.
Posted Nov 26, 2025 17:23 UTC (Wed)
by khim (subscriber, #9252)
[Link]
Nope. Not even remotely close. Rather, it's an extension of the current… A very natural extension of the Rust language, not special code for dynamic libraries. It would be well-received even without dynamic libraries; the ability to have a true polymorphic callback is something people often desire. Yes, but that doesn't mean we have to invent something grandiose that would require the next 10 or 20 years to develop. A simple solution may be better, because it's actually implementable… and we know, from the Swift example, what can be implemented in limited time. Hint: anything that requires changes to the dynamic linker automatically makes the whole thing DOA, simply because on Windows the dynamic linker is part of the OS, and if the scheme doesn't support Windows then nobody serious would use it.
Posted Nov 26, 2025 16:09 UTC (Wed)
by excors (subscriber, #95769)
[Link]
Vulkan's ABI is stable and extensible and C-based (allowing FFI from many languages), with the tradeoff that it's painful and bug-prone to use directly.
That's only acceptable because most application developers won't use it directly. They'll use something like https://github.com/KhronosGroup/Vulkan-Hpp which provides an easier-to-use and safer C++ API, but gives no promises about API stability. That's a header-only library, so effectively it's statically linked into every application. (Header-only libraries are popular because C++'s ABI support is so poor that even static linking of separately-compiled libraries is unreliable). Or they'll use some other wrapper or engine, or write their own, with the same issues.
So now the problem is: how could you implement something like Vulkan-Hpp as a shared library with a stable ABI? And the current answer is, you can't. Vulkan isn't solving the problem of safe ABIs; it's just shifting it to another layer which doesn't solve it either.
If you want shared libraries and safety (which I think was the premise of this thread), it seems worth trying to come up with a better solution than that.
(Vulkan is also a peculiar example because it has a complex architecture where applications link (dynamically or sometimes statically) to a Vulkan Loader which dynamically links to multiple independent driver implementations, and drivers export the whole Vulkan ABI through a GetProcAddr interface, and applications call a trampoline function in the Loader that dispatches to the appropriate driver. (If you have multiple GPUs, the application might use multiple drivers at once). Applications can avoid the trampoline cost by using a GetProcAddr to get a function pointer directly into the driver, though it may actually be a function pointer into a separate Layer library that can intercept and modify each call before the driver sees it (for debugging etc). I wouldn't call that "no crazy hacks". (https://github.com/KhronosGroup/Vulkan-Loader/blob/main/d...))
Posted Nov 25, 2025 17:59 UTC (Tue)
by paulj (subscriber, #341)
[Link] (6 responses)
Posted Nov 25, 2025 18:55 UTC (Tue)
by Cyberax (✭ supporter ✭, #52523)
[Link] (5 responses)
It doesn't. And the type information is available through reflection. It is simply ignored during bytecode interpretation (or JIT compilation).
Posted Nov 26, 2025 10:44 UTC (Wed)
by paulj (subscriber, #341)
[Link] (4 responses)
Posted Nov 26, 2025 13:33 UTC (Wed)
by khim (subscriber, #9252)
[Link]
It's an implementation detail. Developers don't care whether something is a “proper” part of the ABI or not. They can and do use foreign types with generics, and it just works. Rust could do the same if it imposed that foreign types can only be used with dynamic dispatch.
Posted Nov 26, 2025 18:19 UTC (Wed)
by Cyberax (✭ supporter ✭, #52523)
[Link] (2 responses)
Technically, there is no "linking" in Java. All the code is dynamically loaded at runtime (through ClassLoaders). You can do additional validation during loading and/or rewrite ("instrument") the bytecode entirely.
Type information available through reflection was used to implement reified generics by some languages targeting the JVM. Kotlin is one of them, for example. So in practice, the type erasure on the bytecode level is mostly an implementation detail.
Posted Nov 27, 2025 16:54 UTC (Thu)
by ssmith32 (subscriber, #72404)
[Link] (1 responses)
You can definitely have code that does:
for( var a: someCollection<SomeTypeA> )
And it ends up with someCollection<SomeTypeB> at runtime, and you're stuck debugging the damnedest runtime error, once you do something with it in the for loop..
I had this happen on a nested, templated collection.
Granted, the thing that usually triggers this is the odd corner case of deserializing data back into Java.
Unfortunately, this odd corner is something that happens all over the place when you're working with a system that contains HTTP services, which, if you work with Java, is what you're usually paid to do, nowadays.
And, since you always need some sort of deserializer for JSON (or, if you're particularly unlucky, XML) in your HTTP service, oh, what the heck, we'll use it for config files as well, for any data we read in, for..
And Boom.
When working with Java on a daily basis, type erasure is anything but an implementation detail.
It is a very real pain point (granted, it was done by smart people with sound reasoning and good intentions [1], but you know what they say about good intentions...)
There are other cases I've run into at odd times, but I have trouble recalling the specifics, because the serde issue happens often enough that it looms large in the memory.
[1] https://openjdk.org/projects/valhalla/design-notes/in-def...
Posted Nov 28, 2025 5:31 UTC (Fri)
by Cyberax (✭ supporter ✭, #52523)
[Link]
I also distinctly remember collection wrappers that enforced the correct types using reflection. Maybe in apache-commons?
Posted Dec 1, 2025 23:30 UTC (Mon)
by NYKevin (subscriber, #129325)
[Link]
* References (lifetimes are considered type parameters), but maybe we can make an exception since lifetime parameters do not result in true monomorphization.
And probably a dozen other things I didn't think of. Needless to say, the resulting language would be so constrained as to make WUFFS look like a "normal" language in comparison.
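One of the restrictions in play can be seen in a small sketch (trait and type names are invented): a trait with a generic method is fine for static dispatch, but cannot be used behind `dyn`, because a vtable cannot hold an entry for every possible instantiation of the method.

```rust
// A trait with a generic method: usable with static dispatch only.
trait Generic {
    fn pick<T>(&self, a: T, b: T) -> T;
}

struct First;

impl Generic for First {
    // Always returns the first argument, for any T.
    fn pick<T>(&self, a: T, _b: T) -> T {
        a
    }
}

// The compiler rejects making this a trait object:
// let obj: &dyn Generic = &First;  // error: `Generic` cannot be made into an object
```

A dyn-only dialect of Rust would have to forbid traits like this outright, which is part of why the constrained language would be so unusual.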
Posted Nov 24, 2025 21:16 UTC (Mon)
by zyga (subscriber, #81533)
[Link] (15 responses)
Apple paid for that support in Swift so that apps for their platforms can benefit from base OS library updates without having to be rebuilt.
Rust and Go didn't have the money or desire to implement that, respectively.
I recommend reading about what Swift can do today on Linux. You can load a library with a type, load another with a container, and efficiently instantiate the container specialized with that type, all with dynamic libraries and stable ABIs. It is compiler voodoo, but it is not impossible.
I kind of think we are all doomed in the long run (e.g. imagine all of GTK and Qt are written in Rust and require a complete world rebuild for every tiny update). IMO that is not scalable, and the trend of moving to Rust or another language like that will bounce at some point.
Either someone steps in and does the heavy lifting to solve this problem, or distributions will just grind down to a halt.
Posted Nov 24, 2025 21:50 UTC (Mon)
by zyga (subscriber, #81533)
[Link] (3 responses)
Posted Nov 25, 2025 12:17 UTC (Tue)
by paulj (subscriber, #341)
[Link] (1 responses)
Posted Nov 25, 2025 13:40 UTC (Tue)
by khim (subscriber, #9252)
[Link]
Posted Nov 27, 2025 10:15 UTC (Thu)
by Karellen (subscriber, #67644)
[Link]
Posted Nov 24, 2025 23:15 UTC (Mon)
by DemiMarie (subscriber, #164188)
[Link] (5 responses)
Posted Nov 25, 2025 2:11 UTC (Tue)
by khim (subscriber, #9252)
[Link] (4 responses)
That can be solved by declaring that thing an “std-only” feature. There's nothing impossible there, but it's a lot of work, which means it's unlikely to happen without serious funding… who can provide it?
Posted Nov 25, 2025 6:52 UTC (Tue)
by josh (subscriber, #17465)
[Link] (3 responses)
We're working on it, though.
Posted Nov 27, 2025 18:23 UTC (Thu)
by DemiMarie (subscriber, #164188)
[Link]
Posted Nov 28, 2025 9:51 UTC (Fri)
by farnz (subscriber, #17727)
[Link]
Posted Nov 28, 2025 10:26 UTC (Fri)
by bluca (subscriber, #118303)
[Link]
Posted Nov 25, 2025 9:01 UTC (Tue)
by taladar (subscriber, #68407)
[Link] (3 responses)
Posted Nov 25, 2025 10:25 UTC (Tue)
by intelfx (subscriber, #130118)
[Link] (2 responses)
It would have been smaller in source code, but not in binary, for obvious reasons: it might not need to reimplement an ecosystem of dependencies, but the object code generated from those dependencies would still have to exist somewhere.
Unless, of course, it was a hypothetical *shared* Rust library, linking to *shared* Rust libraries of those dependencies. Right.
Posted Nov 26, 2025 16:25 UTC (Wed)
by tialaramex (subscriber, #21167)
[Link] (1 responses)
C++ doesn't provide very good container types, but it does provide a reasonably good growable array type, with the expected amortized constant time growth, the basic reservation feature - and QList isn't actually better. So the fact QList was provided anyway suggests the same mindset would apply in Rust for a library in that mould.
But on the other hand, a library which didn't insist on offering its own version of things you already have might actually reduce the redundancy and the extra marshalling overhead from the duplication, so that might actually be a net win in size terms.
Posted Nov 26, 2025 17:19 UTC (Wed)
by excors (subscriber, #95769)
[Link]
Back in 1998 they explained: (https://github.com/KDE/qt1/blob/25d30943816da9c28cded9ac7...)
> Qt's containers are much more portable than the STL is. When we started writing Qt, STL was far away in the future, and when we released Qt 1.0, no widely-used compilers could compile the STL. For a library such as Qt, it is of course very important to compile on the widest possible variety of compilers.
They also justified it by having different space/time tradeoffs to STL containers (and still make that argument in today's documentation); but if the STL had been widely available with decent-quality implementations when they started, I imagine they would have simply used that and put up with its other limitations. And anyone starting today in Rust would simply use Rust's std containers.
Posted Nov 26, 2025 1:58 UTC (Wed)
by mathstuf (subscriber, #69389)
[Link]
GTK has language-independent object descriptions (GObject). I don't see why that couldn't be provided from a Rust core as well. Qt relies more on C++ magic and would be a harder translation, but a QML interface could be provided from a Rust core.
Also, I feel that Rust implementations would have more, smaller crates to provide "all" of these UI libraries/frameworks than we have today, so you might actually "win" if something can depend on a smaller footprint in the first place.
Posted Nov 25, 2025 14:23 UTC (Tue)
by gspr (subscriber, #91542)
[Link] (10 responses)
For example, take the directed graph of dependencies between Rust packages in Debian. Pick any package that is not a library (i.e. not a librust-foo-dev package). This package surely uses, in its dependencies, either monomorphized versions of functions and types, or dynamic dispatch. Note down all the monomorphized versions, and add them to a list for each dependency. Traverse the graph in topological order, and build these monomorphization lists for all dependencies. Then build all library packages as shared objects with all of those monomorphic instances explicitly stamped out (I understand there's no compiler support for this at the moment, but it shouldn't be too hard to fake it by generating stubs?). Will this not allow dynamic linking and bug-fixing in shared objects *within* Debian at least? For a given compiler version, of course. Non-Debian software that uses the libraries is no better off than before (unless it happens to need the same monomorphizations), but it's also no worse off.
I'm sure I'm overlooking something here, but I'd love to learn :)
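A toy sketch of the stub-generation idea, with all names invented for illustration: a generic function plus hand-written stubs that force the concrete instances a downstream package was observed to need into the compiled library.

```rust
// A generic function the library exposes.
pub fn largest<T: PartialOrd + Copy>(xs: &[T]) -> Option<T> {
    xs.iter().copied().fold(None, |best, x| match best {
        Some(b) if b >= x => Some(b),
        _ => Some(x),
    })
}

// Hand-written "pre-stamped" monomorphizations. In a real shared
// object these stubs would also be marked for export (e.g. with
// #[no_mangle]) so they get stable symbol names that downstream
// binaries can link against.
pub fn largest_i64(xs: &[i64]) -> Option<i64> {
    largest(xs)
}

pub fn largest_f64(xs: &[f64]) -> Option<f64> {
    largest(xs)
}
```

The hard part, as the rest of the thread discusses, is knowing ahead of time which instantiations downstream packages will need.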
Posted Nov 25, 2025 14:39 UTC (Tue)
by farnz (subscriber, #17727)
[Link] (8 responses)
You end up with the same problem as the rebuild problem, since you cannot determine ahead of time that no bug fixes will involve a new monomorphization. You will probably reduce the number of total rebuilds you need, but if you're unlucky, you won't.
Posted Nov 25, 2025 14:43 UTC (Tue)
by gspr (subscriber, #91542)
[Link] (6 responses)
Is that likely? Or is it any more likely than, say, a bugfix in a classical C library needing to break the ABI?
Posted Nov 25, 2025 14:46 UTC (Tue)
by farnz (subscriber, #17727)
[Link] (4 responses)
Posted Nov 25, 2025 14:51 UTC (Tue)
by gspr (subscriber, #91542)
[Link] (3 responses)
Definitely. But a similar change in a classical C library would be to return a new error value. That wouldn't technically break the ABI, but it would sure require depending packages to acquire knowledge of the new error value. That would take *more* than just recompiling.
I guess what I'm saying is that this approach doesn't always work, but it's not much worse than the situation for classical C libraries.
Posted Nov 25, 2025 14:59 UTC (Tue)
by farnz (subscriber, #17727)
[Link] (2 responses)
For example, if I truncate the error value to 8 bits to make it fit an existing struct, because all known error values are under 255, and you introduce error value 256, I've got a problem in C. This gets worse in Rust, because enums aren't just a value, they can carry data, too, so the enum may get larger as a result of the change, and upstream won't care that the old enum compiled by Debian was 72 bytes, and the new one is 80 bytes - especially if compiled with a newer compiler, they're both 64 bytes.
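The enum-growth hazard can be checked directly with std::mem::size_of. This is a sketch with a hypothetical library error type before and after such a "compatible" fix (type and variant names are invented; exact sizes are compiler-dependent, since repr(Rust) layout is unspecified).

```rust
use std::mem::size_of;

// Hypothetical error type, version 1: the largest payload is an i32.
#[allow(dead_code)]
enum ErrorV1 {
    NotFound,
    Io(i32),
}

// Version 2 of the "same" type: an upstream bug fix adds a variant
// carrying more data. Source-compatible for callers that recompile,
// but the type's size, and thus the ABI of anything passing it by
// value, has silently changed.
#[allow(dead_code)]
enum ErrorV2 {
    NotFound,
    Io(i32),
    Parse { line: u64, column: u64 },
}

// V2's largest payload is 16 bytes, so it is necessarily bigger
// than V1 regardless of the exact (unspecified) layout chosen.
fn grew() -> bool {
    size_of::<ErrorV2>() > size_of::<ErrorV1>()
}
```

Any pre-built binary passing this type by value across a shared-library boundary would need a rebuild, even though no function signature changed.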
Posted Nov 25, 2025 15:34 UTC (Tue)
by gspr (subscriber, #91542)
[Link] (1 responses)
Posted Nov 25, 2025 15:42 UTC (Tue)
by farnz (subscriber, #17727)
[Link]
Remember that the state we're in with C is in part because the language requires programmers to get it right, or risk UB, and as a result, C programmers doing security fixes tend to be thinking about all the ways they can accidentally break someone; Rust programmers tend not to be doing that, because the result of breaking someone is a compiler error, not UB.
That cultural difference matters, and is part of why the aim on the Rust side is to have a state where swapping in an incompatible dynamic library is a dynamic linker failure, not UB as it is in C.
Posted Nov 25, 2025 17:16 UTC (Tue)
by ssokolow (guest, #94568)
[Link]
Posted Nov 25, 2025 17:05 UTC (Tue)
by Wol (subscriber, #4433)
[Link]
Or you go back to the old FORTRAN libraries that I worked with. The linker pulled in all the modules it knew it needed (and if it had recursive dependencies you had to link the same library several times to get them all). That also had the side effect that all the functions that your program didn't need didn't end up in the executable.
So we then have some fancy update tool (sorry) that takes two allegedly identical libraries, and goes through merging all the different modules into a new updated library. If it finds the same module in both precursor libraries, it would need to check and enforce the rule that the extern definition was identical, before choosing the version with the newest reference number (maybe defined as the most recent modified date - I think that might be a tricky problem?). Or maybe just updates the original library with new modules that didn't originally exist.
Cheers,
Posted Nov 27, 2025 22:48 UTC (Thu)
by DemiMarie (subscriber, #164188)
[Link]
Posted Nov 25, 2025 20:46 UTC (Tue)
by jhoblitt (subscriber, #77733)
[Link] (4 responses)
What is probably needed to make this new model palatable for distros is standardized changelog data that the distros can poll looking for security updates.
Posted Nov 25, 2025 20:51 UTC (Tue)
by mb (subscriber, #50428)
[Link] (3 responses)
Distros don't care about locked upstream dependencies.
But distros often try to use older dependencies than locked upstream. Which is not Ok, IMO.
Posted Nov 25, 2025 22:35 UTC (Tue)
by jhoblitt (subscriber, #77733)
[Link] (2 responses)
Posted Nov 26, 2025 8:22 UTC (Wed)
by joib (subscriber, #8541)
[Link] (1 responses)
Maybe there's a position in between "only one version of each library" and "whatever is currently in Cargo.lock across a hundred different projects"? E.g. some algorithm that could calculate the minimum set of versions to satisfy all the version requirements in all the Cargo.toml files that are used in the distro?
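As a toy model of such an algorithm (all names invented, and assuming plain semver caret requirements): one chosen version per major release line suffices, since "^major.minor" is satisfied by any version with the same major and an equal-or-higher minor, so the minimum set is just the highest minor requested within each major.

```rust
use std::collections::BTreeMap;

// Toy resolver over (major, minor) caret requirements: returns the
// smallest set of versions, keyed by major, that satisfies them all.
fn minimal_versions(reqs: &[(u32, u32)]) -> BTreeMap<u32, u32> {
    let mut chosen: BTreeMap<u32, u32> = BTreeMap::new();
    for &(major, minor) in reqs {
        // Keep the highest minor requested for this major line.
        let entry = chosen.entry(major).or_insert(minor);
        if *entry < minor {
            *entry = minor;
        }
    }
    chosen
}
```

Real Cargo.toml requirements are richer than this (exact pins, ranges, pre-releases), which is where a distro-wide solver would get complicated.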
Posted Nov 26, 2025 8:59 UTC (Wed)
by zdzichu (subscriber, #17118)
[Link]
Posted Nov 24, 2025 19:03 UTC (Mon)
by carlosrodfern (subscriber, #166486)
[Link] (70 responses)
Posted Nov 24, 2025 22:25 UTC (Mon)
by Cyberax (✭ supporter ✭, #52523)
[Link] (17 responses)
Android uses this for the OTA system updates.
Posted Nov 24, 2025 23:39 UTC (Mon)
by carlosrodfern (subscriber, #166486)
[Link] (3 responses)
The fact that statically linked programs are a good solution in containers doesn't mean that it can be extrapolated to a Linux distro. A slight change in the nature of a problem, or in its size, can justify a very different solution. It is a typical mistake people make as they get excited about one technology or approach and want to apply it to all the things that look like a nail. Statically linked programs written in golang or Rust for containers make a lot of sense, since the pros are weighty and the cons are not that significant in that use case, but it is not a good approach for all the programs in a Linux distro.
Posted Nov 25, 2025 0:49 UTC (Tue)
by Cyberax (✭ supporter ✭, #52523)
[Link] (2 responses)
But it's not really a problem, is it? Binary diffs for patch updates can negate the advantages of shared libraries.
> The fact that statically linked programs are a good solution in containers doesn't mean that it can be extrapolated to an Linux distro.
But maybe it can? I actually tried a fully static distro a while ago ( https://github.com/oasislinux/oasis ), and it objectively felt _better_ than regular Debian.
I'm not at all convinced that shared libraries are worth all the hassle.
Posted Nov 25, 2025 20:27 UTC (Tue)
by Whyte (subscriber, #161914)
[Link] (1 responses)
That's the part I don't ever see happening, since the libraries supporting those are huge.
Posted Nov 26, 2025 2:38 UTC (Wed)
by Cyberax (✭ supporter ✭, #52523)
[Link]
I think that these days the graphics-library problem can be solved by moving the UI into Electron-like shells with a shared browser layer, with custom native code called via IPC. In theory this can provide even more optimizations, like a pre-forked, initialized shell that can be forked without waiting for it to load (like the 'zygote' process in Android).
BTW, the browser doesn't have to be Chromium. Servo would be great for this purpose.
Posted Nov 25, 2025 7:54 UTC (Tue)
by joib (subscriber, #8541)
[Link]
So the tech to do this efficiently already exists in open source, it just needs to be integrated more deeply into distro package distribution tooling.
Posted Nov 25, 2025 21:02 UTC (Tue)
by jhoblitt (subscriber, #77733)
[Link] (4 responses)
Posted Nov 26, 2025 5:46 UTC (Wed)
by Heretic_Blacksheep (subscriber, #169992)
[Link] (3 responses)
Also, people forget that the flash drives the vast majority of people use these days for storage have a finite lifetime, and that lifetime is steadily getting lower as the number of bits per cell inflates. A typical drive rated for 300 TB of total writes (the Samsung 970 EVO's rated lifetime) won't last nearly as long if everything were statically built as it would with a dynamically linked distro that only had to write a couple of gigabytes of updates a month. I've seen plenty of drives with even lower lifetime ratings on the consumer/enthusiast market, and there's no telling the rating on rebranded OEM units (Lenovo, Dell, etc.). Dynamic linking leaves a lot more write cycles for user data and for browser-like programs' disk cache, system logging, and file indexing (bigger contributors to SSD cell wear than rarely changed media files and such).
Posted Nov 26, 2025 7:23 UTC (Wed)
by Cyberax (✭ supporter ✭, #52523)
[Link]
This is a simply ridiculously overblown comparison. The total size of non-kernel-related binaries on my desktop Linux computer is 1 GB. And they don't need to _all_ change every time.
Assuming that ALL of them need to be replaced, that's already an order of magnitude less. But let's roll with it. An unmetered 10 Gb/s port is around $200 a month, and an M.2 SSD can easily saturate it. If you want to serve static files, that's probably around a $300-a-month expense.
A 10 Gb/s port can serve the 1 GB in about 1 second. So 86,400 people daily, and over a month that's 2.5 million people. This is more than the number of users of most small distros.
Posted Nov 26, 2025 7:33 UTC (Wed)
by mb (subscriber, #50428)
[Link] (1 responses)
That's over 400 years of steady updates.
I've yet to see my first SSD failure due to the "small" lifetimes that one can read about everywhere all the time.
I have used them as system disks for about 15 years, and as disks that are continuously written to for the last 8 years. Not a single failure so far, except for one that was bad out of the box.
I don't say that failures can't happen, but so far SSDs are much *more* reliable than spinning rust for me.
Posted Dec 4, 2025 17:00 UTC (Thu)
by nye (subscriber, #51576)
[Link]
Posted Dec 5, 2025 12:06 UTC (Fri)
by ras (subscriber, #33059)
[Link] (6 responses)
It's complicated because the upstream stuff isn't the same.
Debian stable has one feature sysadmins of smaller places love: it's stable. If there is some security issue, Debian doesn't ship the new upstream with its new features and deprecations; they ship the same version with a security fix backported. That effectively means you can let unattended upgrades do its thing in the background for the four-year lifetime of a typical Debian release without too many concerns. (You will get the occasional email about something that might require manual intervention, like a reboot for a kernel upgrade, but it's rare.)
Big shops probably don't see much value in this, as they have teams of programmers releasing new static binaries or Docker images every few weeks. But for small shops it's a lifesaver. For example, it makes running a home mail/web/ssh/router server as a hobby feasible, because I only have to touch it once every four years. If you work in a place that has fewer than 10 people handling IT functions, that way of operating is the norm. I don't know much about LWN, for instance, but I'd bet that's how they manage their infrastructure. It's either that or, god help you, you use microservices and outsource it all to a very well-paid provider.
To pull that off, Debian uses a small army of security volunteers watching incoming CVEs and doing the backporting work. I'm kind of amazed it works at all, but it does better than that: in my experience, Microsoft's security patches break far more servers than Debian's do.
One of the tricks Debian uses to make this work is insisting there is only one version of every library in use. Imagine what it would be like if you had 20 versions of glibc, gtk, libm, and every other library; the workload of the backporters would explode. Yet when I look in the Cargo.lock of your typical largish Rust program, that is exactly what I see. Worse, it's not just a case of different programs using different versions of the same library; it's often different versions of the same library used by one program! Kudos to Rust for making that work, I guess, but that wonderful experience of having "cargo build" just work most of the time looks to me to have laid the foundations for a security nightmare.
All that said, I'm not a huge fan of the current package model myself. I'd much prefer an immutable core, rather like macOS does (but much smaller), and default to complete isolation of the apps running on top of it, like Android does. It would be far more secure by default. But it's decades of work away, and in the meantime Rust's monomorphization model doesn't seem to be a good fit for how distros work today.
Posted Dec 5, 2025 12:48 UTC (Fri)
by Wol (subscriber, #4433)
[Link] (1 responses)
Can/does Cargo complain when told to use several different versions of the same library?
Given the general "if it compiles it will work" nature of Rust, and a warning/error trace telling you "this library is pulled in using several different versions in all these places", surely it would just take a bit of discipline to delete all the older references and a quick QA, and your "typical largish Rust program" would suddenly be rather smaller?
Cheers,
Posted Dec 5, 2025 14:04 UTC (Fri)
by farnz (subscriber, #17727)
[Link]
You can, however, use cargo-deny, which will tell you about multiple versions in use.
The issue, however, remains developer time. We all agree that using undermaintained versions of code is bad, but nobody is putting in the time to make sure that we're all on maintained versions. And the only difference in that regard between the C ecosystem and the Rust ecosystem is that in the Rust ecosystem, the use of large numbers of small dependencies means that it's easy to spot, where in the C ecosystem, spotting which parts of a big library (like glib) are undermaintained, and which ones are cared for is hidden behind the fact that glib overall is cared for.
Posted Dec 5, 2025 13:08 UTC (Fri)
by Wol (subscriber, #4433)
[Link] (1 responses)
Unfortunately, I don't think that's a good fit with how linux development actually happens. Because everything happens independently, it's hard to do big releases. But yes, I agree with you.
The base OS - linux the kernel, systemd, basic utilities (similar in concept to MS-Dos).
And if each part were as self-contained as possible it would make life a whole lot easier. But it would require a "distro BDFL", and I don't think that's going to happen! It would probably take someone like Dell, or Red Hat, and the network effects are against it.
Cheers,
Posted Dec 5, 2025 13:42 UTC (Fri)
by pizza (subscriber, #46)
[Link]
This point cannot be emphasized enough.
Posted Dec 5, 2025 14:11 UTC (Fri)
by daroc (editor, #160859)
[Link]
Posted Dec 6, 2025 17:14 UTC (Sat)
by ggreen199 (subscriber, #53396)
[Link]
There are the new immutable distributions that work like this. For example, Kinoite, in Fedora, which is what I use now. It has an immutable core, and you use flatpaks or containers for everything else. Still early days, but I've been using it all year with no problems.
Posted Nov 25, 2025 6:39 UTC (Tue)
by mb (subscriber, #50428)
[Link] (50 responses)
negligible
>program load time
Probably faster with statically linked binaries.
>configurability
What?
Posted Nov 25, 2025 11:11 UTC (Tue)
by euclidian (subscriber, #145308)
[Link]
Theoretically, for basic cases when the binary gets recompiled with the same static library, you get the de-duplication from dynamic libraries plus inlining and versioning working (just losing the de-duplication).
I doubt it would ever work well enough for production use (first load of a program), and I got sidetracked dealing with edge cases, but it might be something I should poke at again.
Posted Nov 25, 2025 11:22 UTC (Tue)
by LtWorf (subscriber, #124958)
[Link] (47 responses)
Posted Nov 25, 2025 17:33 UTC (Tue)
by ssokolow (guest, #94568)
[Link]
Posted Nov 25, 2025 18:01 UTC (Tue)
by mb (subscriber, #50428)
[Link] (39 responses)
Why would that be the case?
I remember the days twenty years ago when we had slow CPUs and we did prelink workarounds to reduce the dynamic-linking overhead at runtime, which noticeably improved startup time.
Posted Nov 25, 2025 18:37 UTC (Tue)
by joib (subscriber, #8541)
[Link] (38 responses)
Posted Nov 25, 2025 19:16 UTC (Tue)
by mb (subscriber, #50428)
[Link] (36 responses)
Are there actual numbers from real life examples that show that Rust startup times are slower due to static linking?
And libc is dynamically linked to Rust programs.
These days, in practice, neither dynamic nor static linking is slow, given extremely fast CPUs and SSDs.
Posted Nov 26, 2025 4:35 UTC (Wed)
by collinfunk (subscriber, #169873)
[Link] (35 responses)
Here is an example:
$ podman run --rm -it ubuntu:25.10
This is quite important since these commands are executed very frequently. I believe they are looking into improving it.
Posted Nov 26, 2025 5:42 UTC (Wed)
by mathstuf (subscriber, #69389)
[Link]
Posted Nov 26, 2025 7:02 UTC (Wed)
by mb (subscriber, #50428)
[Link] (30 responses)
Posted Nov 26, 2025 11:21 UTC (Wed)
by bluca (subscriber, #118303)
[Link] (29 responses)
Posted Nov 26, 2025 17:08 UTC (Wed)
by mb (subscriber, #50428)
[Link] (27 responses)
>$ time for i in $(seq 10000); do /lib/cargo/bin/coreutils/true; done
Does this have a loading into memory cost of 1 or 10000?
Also please note that gnu-true doesn't use any shared library except for libc. Which is exactly the same in Rust.
If Rust-true is slower than gnu-true due to its bigger size, then this has nothing to do with dynamic or static linking.
Such small binaries are typically larger than their C counterparts, because the Rust std library is typically not rebuilt together with the program and therefore contains lots of unused code.
Posted Nov 26, 2025 18:46 UTC (Wed)
by bluca (subscriber, #118303)
[Link] (26 responses)
Depends. Do you have enough memory available and is the system otherwise idle? Or is it near capacity with no room to spare and higher priority processes running and saturating whatever cache is available?
> Also please note that gnu-true doesn't use any shared library except for libc. Which is exactly the same in Rust.
It is not, because in the Rust case you have the many-tentacled monster that is the Rust stdlib and whatever the cat, er, cargo dragged in that morning. For coreutils, most of the stuff is in glibc, so it's all already mapped; in the uutils case you load most of it from scratch every time.
Posted Nov 26, 2025 19:41 UTC (Wed)
by farnz (subscriber, #17727)
[Link] (25 responses)
In both cases, all the binaries involved are simply mmaped into place, and normal demand paging takes care of reading them as needed.
Posted Nov 26, 2025 20:46 UTC (Wed)
by bluca (subscriber, #118303)
[Link] (24 responses)
The point is, once again, that glibc and libssl and other core system libraries will always already be loaded on any given system. Your 100MB chunky rust binary won't.
Posted Nov 26, 2025 20:51 UTC (Wed)
by farnz (subscriber, #17727)
[Link] (23 responses)
Posted Nov 26, 2025 21:15 UTC (Wed)
by bluca (subscriber, #118303)
[Link] (22 responses)
Posted Nov 26, 2025 21:17 UTC (Wed)
by farnz (subscriber, #17727)
[Link] (21 responses)
Posted Nov 26, 2025 21:54 UTC (Wed)
by bluca (subscriber, #118303)
[Link] (20 responses)
Posted Nov 27, 2025 0:44 UTC (Thu)
by Cyberax (✭ supporter ✭, #52523)
[Link]
Unless it needs an nss module. Or it needs to be relocated.
Posted Nov 27, 2025 9:06 UTC (Thu)
by farnz (subscriber, #17727)
[Link] (18 responses)
Secondly, and confirmed experimentally, significant chunks of glibc contain relocations, at least under Debian/aarch64. I find that with an eMMC, demand paging a 100 MB statically linked binary is faster than handling the relocations needed to dynamically link a library.
And note that this tradeoff would have been very different in the days when a 5400 RPM HDD was "fast". But technology has changed - and the speed with which I can demand page a binary is much higher than the speed with which my embedded CPU can process the relocations and associated page faults, along with also having to demand page in the unshared pages that it needs to do that.
Posted Nov 27, 2025 10:21 UTC (Thu)
by LtWorf (subscriber, #124958)
[Link] (2 responses)
Posted Nov 27, 2025 10:28 UTC (Thu)
by farnz (subscriber, #17727)
[Link]
But, FWIW, this also applied to big servers at a hyperscaler - the cost of processing relocations outweighed the cost of paging in a bit more from the application binary from an SSD. Systems with HDDs only benefited speed-wise from dynamic linking, systems with SSDs benefited from static linking.
Posted Nov 27, 2025 12:14 UTC (Thu)
by malmedal (subscriber, #56172)
[Link]
Don't know which device farnz is talking about, but the description matches things like NanoKVM.
Posted Nov 27, 2025 12:56 UTC (Thu)
by bluca (subscriber, #118303)
[Link] (14 responses)
Posted Nov 27, 2025 13:04 UTC (Thu)
by farnz (subscriber, #17727)
[Link] (13 responses)
I have no idea why you think that loading from eMMC (which is needed in the dynamically linked case, too, because the relocations have already been unshared, and the original data has to be reloaded), then doing the relocations, then doing more loading, is faster than just loading.
Posted Nov 27, 2025 17:11 UTC (Thu)
by bluca (subscriber, #118303)
[Link] (12 responses)
Posted Nov 28, 2025 5:28 UTC (Fri)
by Cyberax (✭ supporter ✭, #52523)
[Link] (11 responses)
Posted Nov 28, 2025 10:25 UTC (Fri)
by bluca (subscriber, #118303)
[Link] (10 responses)
Posted Nov 28, 2025 10:43 UTC (Fri)
by farnz (subscriber, #17727)
[Link] (5 responses)
The kernel is not operating at the level of entire files; it's operating at the page level. Pages that contain relocations are only shared up until the relocation is overwritten by the dynamic linker; pages where the only data used at runtime is relocation metadata are only used up until all the relocations have been handled, at which point they become unused pages.
You can thus have 90% of glibc in RAM, but the critical parts for process startup are not, and you have to do small I/Os to get the missing pages.
Posted Nov 28, 2025 11:01 UTC (Fri)
by bluca (subscriber, #118303)
[Link] (4 responses)
I'm not really sure why you are trying to conjure up such a contrived and unlikely example just to prove a point? It's not really working
Posted Nov 28, 2025 11:08 UTC (Fri)
by farnz (subscriber, #17727)
[Link] (3 responses)
And I do have glibc in the page cache - I just don't have the stuff that's only loaded on process starting in cache.
But I get it, my experience and yours don't match, so you're going to tell me I'm wrong, rather than address a real situation.
Posted Nov 28, 2025 13:40 UTC (Fri)
by LtWorf (subscriber, #124958)
[Link] (2 responses)
Posted Nov 29, 2025 20:53 UTC (Sat)
by Cyberax (✭ supporter ✭, #52523)
[Link] (1 responses)
FWIW, static binaries help with responsiveness for most users. I encourage everyone here to try the fully static distro that I mentioned. It really feels more snappy.
Posted Nov 29, 2025 22:08 UTC (Sat)
by LtWorf (subscriber, #124958)
[Link]
Feeling more snappy at 2 minutes after boot doesn't necessarily mean it's more snappy 5 days later.
Posted Nov 30, 2025 1:28 UTC (Sun)
by Cyberax (✭ supporter ✭, #52523)
[Link] (3 responses)
Nope. glibc is loaded at a random location, and when it needs to be linked into an executable, the OS needs to do relocations to resolve the addresses. This information can be easily paged out, especially for rarely used binaries.
glibc by itself is not too large, but when you add other libraries like libstdc++, libz, libsystemd, and others it starts adding up. This is compounded by libraries that use NSS plugins, because dlopen() requires new relocations each time (AFAIR?).
Posted Dec 1, 2025 11:10 UTC (Mon)
by paulj (subscriber, #341)
[Link] (1 responses)
Posted Dec 1, 2025 18:22 UTC (Mon)
by Cyberax (✭ supporter ✭, #52523)
[Link]
Posted Dec 1, 2025 13:16 UTC (Mon)
by malmedal (subscriber, #56172)
[Link]
It would be good to properly measure how big this effect is. I did a very quick test running chrome under perf and immediately killing it when it finished starting up. On my machine with a hot cache ld-linux used 7% of cpu-time. This does not prove there is a problem, but it is indicative that it is worth investigating properly.
Posted Nov 26, 2025 18:25 UTC (Wed)
by Wol (subscriber, #4433)
[Link]
Get the compiler/linker to segment object code into 4K blocks (or whatever the page size is), map the file into RAM, and only load those bits that are used.
Cheers,
Posted Nov 26, 2025 8:34 UTC (Wed)
by joib (subscriber, #8541)
[Link]
For more complex binaries I suppose it comes down to more open()'s, and more but smaller memory mappings for dynamic linking vs fewer bigger memory mappings for the static linking case.
Posted Nov 26, 2025 8:37 UTC (Wed)
by taladar (subscriber, #68407)
[Link] (1 responses)
Posted Nov 26, 2025 13:57 UTC (Wed)
by mathstuf (subscriber, #69389)
[Link]
Posted Nov 25, 2025 19:32 UTC (Tue)
by bluca (subscriber, #118303)
[Link]
Posted Nov 25, 2025 18:10 UTC (Tue)
by Cyberax (✭ supporter ✭, #52523)
[Link] (4 responses)
I would argue that "few processes per machine" is one of the most common use-cases, because BusyBox systems are like that. And they probably outnumber all other Linuxes except Android.
Posted Nov 25, 2025 21:52 UTC (Tue)
by Wol (subscriber, #4433)
[Link] (3 responses)
Cheers,
Posted Nov 26, 2025 23:16 UTC (Wed)
by LtWorf (subscriber, #124958)
[Link] (2 responses)
Processes on Linux keep running even if they don't own the window that has focus.
I don't understand what you're trying to say here.
Posted Nov 27, 2025 8:03 UTC (Thu)
by Wol (subscriber, #4433)
[Link] (1 responses)
All the *user* cares about is the *single* application which is currently running.
I've lost the context, but iirc it was something about how computers that only run one program at a time are very much a minority. From our PoV as computer guys looking at all the stuff going on in the background that may be true, but from the user's PoV pretty much *every* computer is *single use*.
It's glaringly obvious as soon as you realise, when you're talking about processes, that your typical computer user won't have a clue what you're talking about.
Cheers,
Posted Nov 27, 2025 8:59 UTC (Thu)
by LtWorf (subscriber, #124958)
[Link]
Anyway, we all know here what a process is. Why do we care if some people who aren't participating in the discussion aren't aware of that? How is that relevant?
Posted Nov 28, 2025 10:01 UTC (Fri)
by geert (subscriber, #98403)
[Link]
Posted Nov 26, 2025 0:43 UTC (Wed)
by carlosrodfern (subscriber, #166486)
[Link]
Wouldn't that depend on the use case? On servers, probably not a big deal; on the desktop, I'm not sure.
>program load time
Overall larger amount of major page faults.
> What?
Being able to replace implementations dynamically, with optimizations targeted to the use case.
Those reasons and the other ones I mentioned were observations of many people when the majority of systems were built statically linked that eventually motivated a switch in the OS field to favor DSOs.
Posted Nov 26, 2025 1:59 UTC (Wed)
by mathstuf (subscriber, #69389)
[Link]
Posted Nov 24, 2025 19:27 UTC (Mon)
by Cyberax (✭ supporter ✭, #52523)
[Link] (14 responses)
Posted Nov 24, 2025 23:14 UTC (Mon)
by bluca (subscriber, #118303)
[Link] (13 responses)
Posted Nov 25, 2025 0:51 UTC (Tue)
by Cyberax (✭ supporter ✭, #52523)
[Link] (12 responses)
It's so much better to precompile everything into "Component A", so that it need not care if anything on disk changes.
Posted Nov 25, 2025 6:48 UTC (Tue)
by koflerdavid (subscriber, #176408)
[Link] (4 responses)
Atomic distributions handle this by creating a new file system image in the background, and the user boots into the updated system.
Posted Nov 25, 2025 9:06 UTC (Tue)
by taladar (subscriber, #68407)
[Link] (3 responses)
Posted Nov 25, 2025 14:37 UTC (Tue)
by NightMonkey (subscriber, #23051)
[Link] (1 responses)
For example, I use this to upgrade religiously:
emerge -uDNv --with-bdeps y system world --keep-going --jobs --load-average 8
Posted Nov 26, 2025 9:08 UTC (Wed)
by taladar (subscriber, #68407)
[Link]
Posted Nov 25, 2025 15:26 UTC (Tue)
by ballombe (subscriber, #9523)
[Link]
Posted Nov 25, 2025 6:53 UTC (Tue)
by josh (subscriber, #17465)
[Link] (6 responses)
Whether you're dealing with a replacement of component A or a replacement of library B, you *always* write to a temporary file and rename it over the original, so that the old inode still exists as the source of the mmap'd code, and then restart A. Overwriting the original in place will cause segfaults.
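The rename-over pattern can be sketched in Rust. This is a minimal illustration, not APT code; `atomic_replace` and the `/tmp` path are names I made up for the example:

```rust
use std::fs;
use std::io::Write;

// Replace `target` by writing a temp file in the same directory and
// rename()-ing it over the original. rename() is atomic on POSIX
// filesystems, and the old inode stays alive for any process that still
// has it mmap'd, so running programs keep executing the old code
// instead of faulting on pages that were overwritten underneath them.
fn atomic_replace(target: &str, new_contents: &[u8]) -> std::io::Result<()> {
    let tmp = format!("{target}.tmp");
    let mut f = fs::File::create(&tmp)?;
    f.write_all(new_contents)?;
    f.sync_all()?; // make sure the data hits disk before the rename
    fs::rename(&tmp, target)?;
    Ok(())
}

fn main() -> std::io::Result<()> {
    atomic_replace("/tmp/demo_lib.so", b"new build")?;
    assert_eq!(fs::read("/tmp/demo_lib.so")?, b"new build");
    Ok(())
}
```

The same reasoning is why `dpkg` unpacks to temporary names and renames, rather than truncating files in place.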
Posted Nov 25, 2025 9:08 UTC (Tue)
by taladar (subscriber, #68407)
[Link] (2 responses)
Posted Nov 25, 2025 12:03 UTC (Tue)
by draco (subscriber, #1792)
[Link]
Posted Nov 25, 2025 19:18 UTC (Tue)
by bluca (subscriber, #118303)
[Link]
Posted Nov 25, 2025 18:12 UTC (Tue)
by Cyberax (✭ supporter ✭, #52523)
[Link] (2 responses)
This is not theoretical, that's how new systemd behaves. It loads libraries dynamically using dlopen().
Posted Nov 26, 2025 9:14 UTC (Wed)
by taladar (subscriber, #68407)
[Link] (1 responses)
Depending on the design of the program it might even happen if you make a config change and some more recently loaded part of the whole package (library, binary,...) reads the new configuration and the longer running parts use the old.
Sometimes that is even deliberate as part of some reload mechanism where a new binary loads a new config, then gets file handles for sockets and similar resources from the old binary so even using something like a snapshot of the filesystem from the time the first part of the package is started wouldn't be a universal solution.
Posted Nov 26, 2025 18:43 UTC (Wed)
by Cyberax (✭ supporter ✭, #52523)
[Link]
Posted Nov 24, 2025 19:22 UTC (Mon)
by ibukanov (subscriber, #3942)
[Link] (8 responses)
Posted Nov 24, 2025 20:20 UTC (Mon)
by ojeda (subscriber, #143370)
[Link]
There is no standard C++ ABI, though vendors try to help to some degree. As for unsafe calls, that is the same as in C++, i.e. every call is unsafe. By the way, in Rust you can easily specify nowadays that an external function is safe, e.g.
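A short sketch of the syntax being referred to, assuming Rust 1.82 or later, where items in an `unsafe extern` block can be marked `safe` so callers no longer need an `unsafe` block; the soundness obligation moves to the declaration:

```rust
// The declarer vouches that this signature matches libc's abs() and
// that calling it is sound; callers then need no `unsafe` block.
unsafe extern "C" {
    safe fn abs(input: i32) -> i32;
}

fn main() {
    // No unsafe block needed at the call site.
    assert_eq!(abs(-5), 5);
}
```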
Posted Nov 24, 2025 20:20 UTC (Mon)
by ebee_matteo (subscriber, #165284)
[Link] (5 responses)
Except when it hasn't.
ARMv5 ABI changed after GCC 7 (we all love our -Wno-psabi).
C++11 also broke ABI in several ways. See GCC 5 and the libstdc++ versioning fiasco. `_GLIBCXX_USE_CXX11_ABI` for the win.
GCC 11 broke ABI with GCC 10 due to std::span.
jmp_buf has different ABI for s390 after glibc 2.19.
I can cite more.
Yes, C++ has slightly better ABI guarantees than Rust, but mostly just because its usage is widespread enough, across so many decades, that it came to be that way /de facto/ after people spent years fighting with ABI problems.
And as other people have pointed out, you still have the issue of macros and templates to solve when you use the C++ headers.
C is the closest we have to a stable ABI, assuming the same macros are defined at the time of inclusion.
And you can write Rust programs exporting C mangled functions, and that works just fine also to produce shared libs. But that's the best you can do as of today.
I guess at some point the pressure will be enough for Rust to standardize something resembling an ABI, but the widespread use of monomorphization makes it extremely tricky to do. C++ already had enough problems with the infamous "extern template" feature, and now with C++ modules, which, years after standardization, mostly still do not work.
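The "C mangling" escape hatch mentioned above can be sketched as follows; `add` is a made-up function, but the attributes are the standard way a Rust cdylib exposes a plain C ABI:

```rust
// Exported with an unmangled symbol name and the C calling convention,
// so other languages (or another Rust compiler version) can link to it
// through the stable C ABI, bypassing Rust's unstable native ABI.
#[no_mangle]
pub extern "C" fn add(a: i32, b: i32) -> i32 {
    a + b
}

fn main() {
    assert_eq!(add(2, 3), 5);
}
```

In a real shared library this would be built with `crate-type = ["cdylib"]`; note that under the 2024 edition the attribute is written `#[unsafe(no_mangle)]`.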
Posted Nov 24, 2025 22:48 UTC (Mon)
by randomguy3 (subscriber, #71063)
[Link] (1 responses)
Posted Nov 24, 2025 23:06 UTC (Mon)
by ballombe (subscriber, #9523)
[Link]
Posted Nov 25, 2025 17:48 UTC (Tue)
by ssokolow (guest, #94568)
[Link] (2 responses)
Posted Nov 25, 2025 20:04 UTC (Tue)
by khim (subscriber, #9252)
[Link] (1 responses)
Is it really “in progress” if the last message is more than a year old? I would say “there's a dream” at this point, not “there's progress”.
Posted Nov 25, 2025 17:55 UTC (Tue)
by ssokolow (guest, #94568)
[Link]
It's a Rust-to-C-to-Rust binding generator. There are a lot of binding helpers of that sort for Rust:
Posted Nov 25, 2025 11:20 UTC (Tue)
by nim-nim (subscriber, #34454)
[Link] (15 responses)
Why should they? The same developer-friendly argument was made for Java software, the same refusal to invest in a mechanism to share components and stabilize ABIs was advanced by Java developers, the same hostility to distribution best practices was trumpeted right and left.
Fast-forward twenty years and the technical debt has come due; no one can leave the Java boat fast enough. Turns out, refactoring vast piles of vendored, forked, and obsolescent code, with no clear lines of demarcation because no one enforced ABI separation for a long time, is completely unappealing. You can ignore problems for a long time; they come back with a vengeance.
Posted Nov 25, 2025 13:46 UTC (Tue)
by khim (subscriber, #9252)
[Link] (12 responses)
You live in some imaginary universe. In our universe Java is the number-three language, behind JavaScript and Python but ahead of PHP; it's used by the most popular OS and no one thinks about abandoning it… sure, people like to grumble about Java problems… but they use Java nonetheless. Isn't that what you are doing here?
Posted Nov 25, 2025 17:45 UTC (Tue)
by ssokolow (guest, #94568)
[Link] (11 responses)
...in the same way that C# is the language people are most likely to reach for on Windows because it's very well-suited to interfacing with Windows-specific APIs, people are likely to reach for C, Perl, Python, etc. on Linux because it's well suited for interfacing with POSIX APIs and Linux-specific APIs.
Web servers, specialty applications, Android apps... none of these grab your attention on a Linux desktop. The only Java apps I can think of having encountered on Linux are Minecraft, JDownloader, Azureus, Trang, FreeMind, and the control software for my KryoFlux.
Posted Nov 25, 2025 18:22 UTC (Tue)
by nim-nim (subscriber, #34454)
[Link] (10 responses)
It’s not “just that”. The restrictive licensing has been lifted a long time ago. There are loads of available FOSS Java components. There are lots of people who know how to code in Java.
What kills Java as a general purpose language that could be used to build generic components that could end up in generic software deployed in generic distributions is lack of ABI enforcement resulting in lack of version convergence. Life is just too short to delve in the pile of specific component versions each and every Java project uses. Each project is effectively a walled garden, its own component universe, hostile to other component universes.
Posted Nov 25, 2025 18:29 UTC (Tue)
by khim (subscriber, #9252)
[Link] (9 responses)
Except it doesn't “kill Java”. It kills “generic distributions”. I'm not really even sure how they are relevant, in the grand scheme of things: the only reason they survived till now was that people needed them to develop server software. But with Docker on macOS and WSL they are no longer needed even for that. You mean: exactly and precisely like a Linux distro? Pot, meet kettle, I would say.
Posted Nov 28, 2025 11:24 UTC (Fri)
by nim-nim (subscriber, #34454)
[Link] (8 responses)
And you can say “well, I don't need your stinking distros”, but then you have to show how you can assemble a full system. Distros proved they could do it, from the bare metal to the apps above. Distro opponents have not managed to do it. (And when they attempt it, it looks like a distro reinvention.)
Posted Nov 28, 2025 11:38 UTC (Fri)
by khim (subscriber, #9252)
[Link] (7 responses)
Seriously? Can you find another, more believable, delusion? The most popular OS on the planet shows us how that can be done. You may claim that there are a bazillion Linux distros and only one successful alternative, but that's not entirely true: there were others (Tizen, Maemo), they just proved not to be very interesting… ironically, one of the worst problems they had was precisely that they broke compatibility too much. Not really. It's like saying that a car and a plane are the same thing if they both use Rolls-Royce engines. The critical difference between a distro's “steaming pile of packages” and a proper OS is an SDK: something that you can use to build your binary and, most importantly, target different versions of the OS simultaneously. Here are downloads for Android. These are for Windows. MacOS's are the shittiest ones, but yes, they exist. Where can I find something like that for any popular distro? If the answer is “you should just install a bazillion versions on CI in Docker”… then here's your answer for why things work as badly as they do with Linux on the desktop.
Posted Nov 29, 2025 16:32 UTC (Sat)
by anselm (subscriber, #2796)
[Link] (3 responses)
Chances are, if you're developing software for Linux you probably want to target more than one Linux distribution, anyway. At that point, relying on an “SDK” provided by any one distribution likely wouldn't be enough – it would be weird to expect the “Debian SDK” to spit out RPM packages for RHEL or openSUSE, after all, and vice-versa –, and using CI to build more than one flavour of your code doesn't look all that unreasonable anymore. Been there, done that; it's not exactly rocket science.
Posted Nov 29, 2025 16:48 UTC (Sat)
by khim (subscriber, #9252)
[Link] (1 responses)
Why is that a problem? Go back a quarter century: Windows 9x and Windows NT were significantly more different than different Linux distros, yet creating one binary that worked on both was the norm. Supporting Windows 3.x (with Win32s) was harder, but still possible. These days that support is often done with the help of Docker and, now, Flatpaks. Not sure how well it works with Flatpaks, but Docker for server packages works reasonably well. Yes, that's why people invented something that works for both. Many things, in fact. If GNU/Linux couldn't give them a usable SDK then they would pick someone who would… Valve, e.g. But then distros spent decades punishing them for that. The only thing that achieved is the development of open-source ecosystems that consider distros their enemy or, at best, “a necessary evil”. Rust just turned that into a reveal show, because distros came to the Rust developers with demands they needed fulfilled… and got back a question they hadn't expected at all: “why should we care?”. They have gotten certain answers, ultimately, via the Rust-in-Linux route: certain developers want to use the distro-provided version of Rust, and that means certain features need to be included in Rust at certain times… that got acknowledgement, even if not full acceptance. Everything else? Distros have created the problem for themselves by introducing some strange arbitrary rules, and distros may decide how to resolve them… developers of most Rust packages are just too busy doing useful work with people who don't invent random crazy hoops; they don't have time for the distro-invented circus.
Posted Nov 29, 2025 17:39 UTC (Sat)
by pizza (subscriber, #46)
[Link]
Those ecosystems decided that support for non-Linux environments was required, so they had to take a different approach.
If you have to reinvent the wheel completely anyway, why not just use that new wheel for Linux systems?
The smarter ones incorporated the lessons learned from Distros, but the not so smart ones (Python comes to mind) are still paying the price.
But as the saying goes, with sufficient thrust even a pig will fly. And thanks to the AI craze Python has a *lot* of well-funded thrust.
Posted Dec 1, 2025 18:54 UTC (Mon)
by mathstuf (subscriber, #69389)
[Link]
Posted Dec 1, 2025 8:58 UTC (Mon)
by nim-nim (subscriber, #34454)
[Link]
Posted Dec 5, 2025 3:42 UTC (Fri)
by dvdeug (subscriber, #10998)
[Link] (1 responses)
It's relatively easy to do that on Windows, as Microsoft controls all the systems. It's not so easy to do that with Linux or *BSD, as there are multiple different forks you have to deal with. Flatpaks ultimately do what Windows does: ship all dynamic libraries with the program. I don't see any reason not to accept that you can do that with a Flatpak.
(Note that most people don't distribute native binaries for Android; the system compiles the APK into a native format. There are a bunch of "binary" formats that may or may not take an installation step, that you can run on different versions of Linux, ranging from well-designed compiled code to any number of interpreters to APKs and JARs. RenPY programmers distribute one binary package for Windows and Linux all the time.)
None of those are proper OSes; a proper OS has no users, so there are no problems. Real systems have problems and make compromises. Distributing the same binary to versions of Linux from many eras and many vendors was not one of the driving goals behind Linux, so the compromises it demands weren't made. Note that Windows binaries, including ones built from post-2010 .NET, don't generally run across architectures; ARM Windows will emulate x86 binaries, but the x86-64 emulator is a lot more fragile and experimental, and x86-64 Windows won't run ARM Windows binaries.
So: yes, Linux can do it, using Flatpaks or other options; no, Windows can't do it if you want to use a modern Windows SDK and target both ARM Windows and x86 Windows at the same time.
Posted Dec 5, 2025 9:28 UTC (Fri)
by taladar (subscriber, #68407)
[Link]
Posted Nov 25, 2025 18:47 UTC (Tue)
by Cyberax (✭ supporter ✭, #52523)
[Link] (1 responses)
Whut?!? Java is one of the _best_ examples of a stable API/ABI. I can launch software written for JVM 1.3 in 2001 on the most modern JVM, as long as it doesn't use stuff like "Unsafe". Even Swing and AWT are still available!
This absolutely helped with Java's popularity. Banks and enterprises _love_ not having to update the code every year.
Posted Dec 1, 2025 9:08 UTC (Mon)
by nim-nim (subscriber, #34454)
[Link]
Of course, the same lack of ABI convergence that makes those languages perfect for reselling a mess at inflated prices to banks makes them totally inadequate for cooperative OS building, where inefficiencies are not an occasion to invoice more but a direct cost in volunteer effort.
Posted Dec 5, 2025 1:46 UTC (Fri)
by dvdeug (subscriber, #10998)
[Link] (1 responses)
And vice versa, none of these distros are being developed or funded by languages. In many cases, there are people working on both a distribution and a programming language. Debian is an all-volunteer distribution that doesn't have money to really throw at other projects, and if they did, they'd probably encourage you to donate to those other projects directly.
> If distros want ecosystems to be more friendly to them, they need to put in the (large) amount of work to make that happen.
If Debian picked up its toys and went home, a vast number of people and companies who never did work for or donated money to Debian would be left high and dry. If you want to make it antagonistic, ecosystems who want to continue using Debian for their servers should start paying for an OS license.
Posted Dec 5, 2025 2:16 UTC (Fri)
by Cyberax (✭ supporter ✭, #52523)
[Link]
There are several options to choose from.
Posted Nov 24, 2025 17:50 UTC (Mon)
by farnz (subscriber, #17727)
[Link]
There's also work coming from the other direction, of providing a way to deliberately indicate that you intend something to be ABI, and widening the number of things that have a stable ABI, which will hopefully meet the efforts to determine what a stable ABI definition "should" look like in the middle.
Unfortunately, all this takes time, motivation, and a lot of work; without more people helping, I could see it taking some time to get there.
Posted Nov 24, 2025 18:48 UTC (Mon)
by hunger (subscriber, #36242)
[Link] (7 responses)
Does it? Yes, it works most of the time, but that is by luck and not by design.
The headers used to build some binary contain lots of code that gets baked into the binary (e.g. all templates). If any of those get changed by the next version of the library, then you can spend fun times debugging crashes as the code baked into the binary from the old version suddenly fails to use some symbol baked into the new library.
There is a reason why most distros rebuild binaries when the dependencies change.
Yes, Rust could do the same. Rust has a different culture, so it won't.
Posted Nov 25, 2025 2:22 UTC (Tue)
by Elv13 (subscriber, #106198)
[Link] (3 responses)
I am not familiar with the tooling Rust has to track ABI breakages, but I assume it could be handled using tooling rather than trying to maintain a stable shared-library ABI across versions.
Posted Nov 25, 2025 2:46 UTC (Tue)
by khim (subscriber, #9252)
[Link] (2 responses)
Not really. Easy: it doesn't exist. cargo-semver-checks is very thorough, but it only tracks source compatibility, never binary. A stable ABI doesn't exist, period. There was some interest in developing such an ABI, but the effort has stalled.
Posted Nov 25, 2025 9:12 UTC (Tue)
by taladar (subscriber, #68407)
[Link] (1 responses)
It mostly works in C and C++ since those seem to have much lower standards for what they consider 'working'.
Posted Nov 25, 2025 13:49 UTC (Tue)
by khim (subscriber, #9252)
[Link]
With the Swift approach… sure, it would be a bit of work to provide a stable ABI, and most crates wouldn't bother, but if someone wants to create a “Rust platform” (similar to how iOS and macOS are “Swift platforms”) then it's perfectly doable, if costly.
Posted Nov 25, 2025 11:37 UTC (Tue)
by SLi (subscriber, #53131)
[Link] (2 responses)
The claim that this is not sustainable for Debian also seems strange, given that a lot of distros do manage to do it (including non-commercial ones like NixOS).
Posted Nov 25, 2025 15:02 UTC (Tue)
by intelfx (subscriber, #130118)
[Link] (1 responses)
> given that a lot of distros do manage to do it (including non-commercial ones like NixOS).
NixOS is only managing to do it because commercial sponsors dump relatively huge money into operation of their CI and binary cache.
Same also goes for other "non-commercial" distros — if you look closer, you'll find they all have commercial sponsors subsidizing the infrastructure.
Posted Nov 25, 2025 18:49 UTC (Tue)
by Cyberax (✭ supporter ✭, #52523)
[Link]
Posted Nov 25, 2025 0:34 UTC (Tue)
by pabs (subscriber, #43278)
[Link] (10 responses)
https://doc.rust-lang.org/reference/linkage.html
The problem though is the culture of the Rust ecosystem; much of it prefers static linking, dislikes distros and probably would reject patches to introduce dylibs for each package.
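For reference, the linkage described in that page is selected per crate. A minimal (hypothetical) Cargo manifest stanza asking for a Rust dylib might look like this; note that such a dylib only shares code among binaries built with the same rustc, it does not provide a stable ABI:

```toml
# Hypothetical library crate: build the default rlib plus a Rust dylib.
# The dylib's ABI is tied to the exact compiler version that built it.
[lib]
crate-type = ["rlib", "dylib"]
```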
Posted Nov 25, 2025 4:14 UTC (Tue)
by xnox (subscriber, #63320)
[Link] (9 responses)
It doesn't provide a stable ABI; one can use them to share code across multiple related binaries, think of a private .so.
It is also unsafe and removes type checking, which defeats the point of Rust to begin with.
Posted Nov 25, 2025 10:55 UTC (Tue)
by joib (subscriber, #8541)
[Link] (8 responses)
Posted Nov 25, 2025 18:00 UTC (Tue)
by ssokolow (guest, #94568)
[Link]
(eg. I could see that being used for something similar to how, as explained by the me_cleaner docs, Intel's UEFI is made of modular blocks and the firmware is updated as one blob, but each motherboard model will have different blocks present and absent... before Intel revised their signing to disallow it, me_cleaner would delete some of the modules.)
Posted Nov 25, 2025 19:34 UTC (Tue)
by bluca (subscriber, #118303)
[Link] (3 responses)
Posted Nov 26, 2025 2:34 UTC (Wed)
by Cyberax (✭ supporter ✭, #52523)
[Link] (2 responses)
Posted Nov 26, 2025 11:21 UTC (Wed)
by bluca (subscriber, #118303)
[Link] (1 responses)
Posted Nov 26, 2025 18:38 UTC (Wed)
by Cyberax (✭ supporter ✭, #52523)
[Link]
Posted Nov 25, 2025 19:36 UTC (Tue)
by willy (subscriber, #9762)
[Link] (2 responses)
I really wish something like Microsoft's OLE were more acceptable, where we'd essentially have a bunch of methods we could invoke on a TIFF object handled by a different process.
Posted Nov 26, 2025 1:57 UTC (Wed)
by pabs (subscriber, #43278)
[Link] (1 responses)
https://millcomputing.com/topic/inter-process-communication/
Posted Nov 26, 2025 4:06 UTC (Wed)
by willy (subscriber, #9762)
[Link]
Posted Nov 25, 2025 16:53 UTC (Tue)
by ssokolow (guest, #94568)
[Link]
See The impact of C++ templates on library ABI for details but the gist is that it needs to customize and inline the code for every type it's applied to.
Dynamically linking that sort of dispatch without the compromises Swift makes is an unsolved problem and Rust threads its equivalent of templates throughout the entire system. Most visibly, in the form of the generic type parameters on its alternatives to NULL and exception-throwing. (Option<T> and Result<T, E>)
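Monomorphization, the mechanism being described, can be shown in a few lines; `first_or` is a made-up function, but the behavior is standard Rust:

```rust
// A generic function is compiled once per concrete type it is used with
// (monomorphization): below, once for i32 and once for &str. Each copy
// is specialized and inlinable, which is exactly what makes shipping a
// stable binary interface for generic APIs so hard.
fn first_or<T: Copy>(items: &[T], fallback: T) -> T {
    // Option<&T> from first(), also specialized per T at compile time.
    items.first().copied().unwrap_or(fallback)
}

fn main() {
    assert_eq!(first_or(&[1, 2, 3], 0), 1);
    assert_eq!(first_or::<&str>(&[], "none"), "none");
}
```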
Posted Nov 26, 2025 22:20 UTC (Wed)
by ralfj (subscriber, #172874)
[Link]
Not sure if you are trolling or just ignorant, but the reasons are entirely technical. Supporting shared libraries properly is really, really hard. C++ barely supports them, but only if you avoid templates and are extremely careful -- doing the same in Rust (in particular, foregoing the use of generics in library API surfaces) would give up almost all the benefits Rust brings, so it's pointless. Swift supports them through a metric ton of engineering work: https://faultlore.com/blah/swift-abi/ .
I'd love to see a stable ABI that actually supports Rust-style APIs. I am certain it's the same for many of my colleagues on the Rust compiler team. But the engineering required for that far exceeds any single new feature that Rust has gained since version 1.0. So please don't go around claiming that Rust does not support shared libraries for non-technical reasons, implying that we somehow have a political agenda to oppose shared libraries or something like that.
Posted Nov 25, 2025 19:52 UTC (Tue)
by hsivonen (subscriber, #91034)
[Link] (1 responses)
Is there a stated policy that explains the implications of Debian dropping an architecture as an official release architecture?
It appears that dropping an architecture as a release architecture does not stop these architectures from factoring into debates about what’s OK to do on official release architectures.
What does dropping an architecture from the set of official release architectures mean policy-wise?
Posted Nov 26, 2025 12:36 UTC (Wed)
by smcv (subscriber, #53363)
[Link]
It means that the dropped architecture no longer prevents packages in unstable from migrating to testing (and therefore being included in the next stable release) without manual intervention from the release team, even if that results in previously-working packages becoming uninstallable or otherwise broken.
Posted Nov 26, 2025 5:22 UTC (Wed)
by Cyberax (✭ supporter ✭, #52523)
[Link] (6 responses)
They had issues similar to Linux distributions with managing tons of fine-grained dependencies and making sure the set of versions stays "coherent". And they switched to doing essentially "static builds".
Posted Nov 26, 2025 9:27 UTC (Wed)
by cyperpunks (subscriber, #39406)
[Link] (1 responses)
The best side effect of the shared-libs mantra is that responsibility is clear: it's up to the owner/maintainer of the problematic package to get the fix available in the repos.
In a static approach a completely new dependency graph and process would be needed; the Rust packaging in Fedora seems
For core services like sshd and httpd the dependency graph is not huge, but extending to popular stacks like PHP and Python it gets very complex very fast, due to shared libs but also due to the use of "modules" by the framework in question.
(Which was the reason for the Xubuntu breach LWN wrote about recently.)
Another related problem is maintainership upstream: if upstream doesn't create proper releases and follow a reasonable versioning scheme (like semver), how are downstream consumers supposed to be able to use the software in a safe manner?
There are lots of hard problems to solve here; it's far more complex than a simple fight between shared and static linking.
Posted Nov 26, 2025 11:59 UTC (Wed)
by Wol (subscriber, #4433)
[Link]
And if the repo is an unresponsive third party? The trouble is there are too many third parties (upstream AND downstream) in the Linux ecosystem, and if you're unlucky enough to be dealing with one that doesn't have the bandwidth to deal with your particular problem (isn't that the norm?) ...
Cheers,
Posted Dec 1, 2025 9:28 UTC (Mon)
by nim-nim (subscriber, #34454)
[Link] (3 responses)
It remains to be seen if the other mechanisms manage to herd the cats effectively long term, past experience is not conclusive.
Posted Dec 1, 2025 14:16 UTC (Mon)
by Wol (subscriber, #4433)
[Link] (1 responses)
Management and paycheck don't work there ...
Cheers,
Posted Dec 1, 2025 14:36 UTC (Mon)
by nim-nim (subscriber, #34454)
[Link]
Posted Dec 1, 2025 18:23 UTC (Mon)
by Cyberax (✭ supporter ✭, #52523)
[Link]
Posted Nov 26, 2025 10:18 UTC (Wed)
by gioele (subscriber, #61675)
[Link]
IMO the most reasonable thing to do at the moment would be:
1. start working on rustifying `apt-ftparchive` and `apt-extracttemplates`;
2. split `apt-utils` away from APT's source package (or introduce a <norust> build profile), so that APT's main binary can still be compiled using only C/C++;
3. do not rustify APT itself until some sort of consensus is reached.
Posted Nov 26, 2025 13:51 UTC (Wed)
by ATLief (subscriber, #166135)
[Link] (2 responses)
Would that count as "serious usage"?
Posted Nov 26, 2025 16:50 UTC (Wed)
by smcv (subscriber, #53363)
[Link]
(Probably Launchpad should be mitigating this by running apt-ftparchive in a sandbox that has no read access to anything non-public except for the target PPA's per-PPA signing key, and no write access to anything except the target PPA's metadata; and for all I know, maybe they already do.)
Posted Nov 26, 2025 18:31 UTC (Wed)
by edgewood (subscriber, #1123)
[Link]
portable APT?
At that stage, you might as well compile to WASM and run that binary using a WASM runtime on the platforms that lack Rust?
The ports w/o a Rust toolchain could still use cupt, which is written in C++.
Wol
Debian virtual packages
[2]: https://packages.debian.org/stable/xserver
[3]: https://packages.debian.org/stable/metapackages/
[4]: https://packages.debian.org/stable/init
[5]: https://packages.debian.org/stable/apt
The question, as always, would be who's going to do the forking and keep up with upstream?
Shared libraries
C++ supports shared libraries, and Rust could in principle support them too. In fact, Rust shared libraries could fix most of the problems with C shared libraries by having well-defined ABI and API definitions in the library itself.
Rebuilding packages when their dependencies change is the future.
Wol
The claim being made for shared libraries is that I can just update the library, and all the applications are immediately patched, which reduces admin effort as compared to static linking, where I have to update the binaries and then restart the applications.
They don't even do that - you have to update the binaries that are supplied by the shared library, and the in-memory copies of the binaries, too.
Extra services during offline upgrade
And there's a particularly nasty subset of that, induced by the increased scope of feature unification.
Oh yes - both ways round are possible.
> Converging on a common version means a lot of people found this particular version reliable;
ABI stability can block security updates
> it was architected correctly.
Simple in theory, but the .deb format makes it a little complicated.
Delta updates work great with image-based OSs
In the scenario I outlined, you'd read the TIFF with the software that knows about legacy Mac formats, and convert it a modern TIFF in the process - you're already having to use that software to get the TIFF format out, and it can decompress ThunderScan 4-bit RLE and either leave the data uncompressed, or recompress with a more modern lossless compression scheme.
Shared libraries
Most of the time, TIFF reading programs don't support ThunderScan RLE encoding. That's a big part of why I chose it - it's rarely supported unless you've deliberately enabled it, and you only need it if you're dealing with deeply legacy media.
Shared libraries
The trouble with that is that what you want is not "simple" and "everything", but a variety of different choices: there's faxes, medical imaging, scanned paperwork, prepress artwork and more, to name some common use cases for TIFFs.
Shared libraries
A plugin mechanism doesn't help - it just rearranges the deckchairs.
Shared libraries
But this is the same mess as multiple shared libraries, just moved around - it's not actually a solution if you want to have all programs use a unified feature set, which is an explicit goal of how the distro mechanism works (see the earlier comment from dvdeug).
Shared libraries
Just to be clear, then: you want a fax handling program to have extra vulnerabilities over and above upstream, because it's possible that someone might want to reuse a fax handling program for legacy Macintosh scans.
Shared libraries
You made assumptions about what the program might be, and asserted that a bug introduced by ThunderScan RLE support would be worth having for the extra supported formats; I've introduced a bit more information, explaining why ThunderScan RLE support is not worth having.
Shared libraries
The problem is real. The funding to solve it is missing.
ABI stability funding
dyn Trait already; what this would do, in terms of the language, is bring dyn Trait to parity with impl Trait. If you want inlining, then simply don't use dyn Trait and you are done.
* Methods with a self receiver implicitly specialize to accept Box<dyn Trait> as the receiver - Does nothing for non-receiver Self arguments, since we cannot prove that they have the same concrete type as the receiver. In the general case, it is possible to impl Trait for Box<dyn Trait> (or write an inherent impl for dyn Trait), and then this is ambiguous. And it does nothing for associated types and non-method associated functions, which would remain dyn-incompatible anyway.
* Introduce &owned references that "confer ownership" and can be moved out of (unlike &mut, which can be swapped but not moved-from) - Similar to the previous bullet, this fixes the self receiver and does not help with much else. On top of that, it is yet another type of reference we would need to think about.
* Steal dynamic_cast from C++ - This has already been done, see std::any::Any (or you could reinvent your own with unsafe code, if for some reason Any is too inflexible for you). But that forces these APIs to become either fallible or unsafe when dispatched from dyn Trait, and that's much less ergonomic than impl Trait.
> impl Trait is Sized, and that implies a lot of API flexibility that dyn Trait is physically incapable of expressing under Rust's current data model
ABI stability funding
!Sized types on stack and so on. Large work, sure, but nothing impossible. str would no longer be one, single, type, but would become a parametrised type with length fixed at runtime… so a pointer or reference would need to include the reference and also a length… oh, right, that's how things already work, isn't it? That would actually simplify the language instead of making it more complex. It would remove lots of corner cases related to !Sized types. [u8] (slice, not array!) and returns the result… what should it do if the slice is already empty? dyn Trait identical to impl Trait.
ABI stability funding
impl even if all formal trait-level checks succeed. Naturally, if we want parity between impl and dyn, we couldn't skip that quirk either. And because with dyn they couldn't be done at compile time, they have to be done at runtime. extern "C" doesn't solve that problem; in fact it makes it worse.
ABI stability funding
extern "C" definitely generates more problems at runtime than a higher-level interface ever could — only they generate random crashes, not predictable panics.
ABI stability funding
Wol
Shared libraries
This is not required to replace C code.
Shared libraries
You can basically do almost all the things you can do in C. Including dynamic linking.
The problem is more than just parametric polymorphism; it's things like defined constants, semantic meaning of functions and more.
Shared libraries
> Polymorphism is absolutely fine as long as you are aware that this means that the polymorphic parts of your library live in the caller's binary, not in your binary.
Shared libraries
Option or Result… and ABI that doesn't use these is as almost far from idiomatic Rust as "C"
Why? Option and Result can be fully monomorphized in your API, in which case there's no polymorphic parts (even though pub struct Foo<T>(Option<T>) is polymorphic, pub struct Foo(Result<u32, MyError>) is not).
Shared libraries
> Option and Result can be fully monomorphized in your API
Shared libraries
struct Foo<T>(Option<T>) is polymorphic, pub struct Foo(Result<u32, MyError>) is not).
pub struct Foo(Result<u32, MyError>) is polymorphic because it depends on the compiler version. The compiler is free to change the representation of pub struct Foo(Result<u32, MyError>) at any time; in fact nightly has a flag to do that, and stable does it from time to time, too.
Sure, you'd need the compiler to not break things that are marked as ABI - and you'd have to accept that the stable ABI is not necessarily as efficient as the unstable ABI.
Shared libraries
> But that's something the compiler team has to commit to. None of this works if the compiler team won't stabilize the ABI (replacing the compiler version dependency with a stable ABI version dependency).
Shared libraries
I don't know why you'd limit it to statically known types - that's not my proposal at all.
Shared libraries
What you're describing sounds like what they aim to produce with CrABI.
extern "C" without having to use the abi_stable crate to marshal your types through the C ABI.
It's more than CrABI, but CrABI is a necessary component of it.
Shared libraries
just as I can't use a random header file version in C, and rely on it working with a random shared library version
On top of that, while I'm demanding perfect tooling, moon-onna-stick, and free unicorns for everyone, I want tooling that allows me to knowingly break ABI as long as I provide the necessary shims to let people who linked against the older ABI to continue to work.
Knowing the Rust community, give it a decade and you'll probably have all that.
My definition of "rely on" is "the tooling gives me an error if I mismatch the shared library version and the header file". Very few C and C++ libraries make that guarantee - the only library I'm aware of that does make that guarantee is glibc, and they do a lot of work with symbol versioning to make that happen.
Shared libraries
Because it's not just about generic ABIs, and I'm not talking about restricting developers to a "strange subset". You're projecting your own prejudices on top of what I'm saying, and letting it blind you to what I'm talking about.
Shared libraries
> I'm saying that we need the tools to allow developers to decide what falls on the "shared binary" side of the library
Shared libraries
unsafe. C/C++ tried that “you are holding it wrong” approach — and it doesn't scale.
You continue to misrepresent my position; I'm saying that we need the tooling to enable library developers to safely inline parts of their library into the caller, otherwise we're going to see C/C++ "you're holding it wrong" hacks because they're needed for performance.
Shared libraries
> You continue to misrepresent my position; I'm saying that we need the tooling to enable library developers to safely inline parts of their library into the caller,
Shared libraries
.h and .c from the day one, because it already offered a way to have functions partially inlined in .so and partially in .h.
Because these hacks do happen with other languages - the exceptions are those with a JIT, where it doesn't matter. And compiler developers just have to get used to this - like they've been forced to with C and C++. Either you provide the tooling to do a good job, or people come up with hacks.
Shared libraries
> In an ideal world, there would be tooling that told people working on Vulkan "hey, you've changed this structure in such a way that it now has different size/alignment. You can't do that!"
Shared libraries
Option<&str> would be valid when it crosses a library boundary. Option<str> without giving such ability to a third-party library could be a good first step. self: Foo<Bar<Self>> today as target for the method, but only self: Rc<Self> or self: Arc<Self>… and the sky hasn't fallen.
Wol
> So you allow Rust to use the "extern" keyword. Which freezes the layout according to certain rules.
Shared libraries
Option<&str> contains either None or a reference to a valid UTF-8 string. Nothing else is allowed. If handling of it is inlined into both binary and library, then they should agree about upholding such invariants.
Wol
Libabigail is part of the tooling I'd like to see in proper shared library tooling - it covers size and alignment, but not field ordering within the structure (which also ought to be checked by tools). And, of course, if you want something that "just works", you need the size and alignment of structures to not be hard-coded in the user, but rather filled in by the dynamic linker via a COPY or SIZE relocation (causing you to have to do arithmetic all over the place when a structure from the dynamic library is embedded in a user-supplied structure).
Shared libraries
> And, of course, if you want something that "just works", you need the size and alignment of structures to not be hard-coded in the user, but rather filled in by the dynamic linker via a COPY or SIZE relocation (causing you to have to do arithmetic all over the place when a structure from the dynamic library is embedded in a user-supplied structure).
Shared libraries
Option<&str> needs to be covered, somehow.
Runtime descriptors get filled in by the dynamic linker - and all of the things I'm describing are things that are done semi-manually to maintain a stable ABI by Vulkan and glibc developers today.
Shared libraries
> Runtime descriptors get filled in by the dynamic linker
Shared libraries
dyn Trait scheme: in the vtable there are the size of the object (so it can be allocated on the stack, alloca-style), a reference to the constructor (so you can create a vector) and so on. Then a generic may have one, common code path for that thing and work — even without any dynamic linkage involved.
Shared libraries
* Slices and arrays (the element is considered a type parameter, and gives rise to monomorphization in the usual fashion).
* Async (Future<T>)
* for loops and iteration (Iterator<T>)
* Smart pointers (Deref<T>)
* Result<V, E> and Option<T>, and all APIs that return things in one of those two forms (i.e. every fallible function in std and core). That includes basically all I/O.
Shared libraries
You can have both (just not at the same time). That's what Swift does.
Shared libraries
Swift vs Rust ABI
Good to know that Rust is working on ABI!
It's a really hard problem to get right, and I am extremely grateful that the Rust team is thinking it through, and not getting stuck with the first design that solves 99% of the problem space.
Swift vs Rust ABI
Shared libraries
The problem comes with updates. If you update (say) ripgrep to fix a bug, and it uses a new monomorphization, that new monomorphization can rely on a new monomorphization inside a library package, and so on.
Shared libraries
If you're doing the change downstream, then yes it is quite likely - something as "trivial" to upstream as "add a new variant to an error enum" is a new monomorphization, with the resulting need to recompile everything that knows the layout of that enum.
Shared libraries
Returning a new error value that was previously impossible is an ABI break, in both C and Rust, unless it's clearly documented beforehand that other errors are possible.
Shared libraries
It's slightly worse, because years of habit mean that C programmers are used to thinking about whether a change will break distro ABIs; Rust programmers aren't, and because Rust makes it easier to express things that are hard to express in C (sum types, for example), they're more likely to make changes to fix a bug that make the situation worse.
Shared libraries
Rust has automatic struct packing and niche optimization and, were it not for the need for reproducible builds, the support for randomizing struct layouts to help people catch Hyrum's law mistakes might be on by default.
Shared libraries
Wol
Whole-image optimization
Shared libraries
And I think that's fine, if they always pick the latest compatible semver.
That tends to work very well in the crates.io ecosystem with only extremely few exceptions.
Shared libraries
>rated for 300 TB write/erase cycles
Only actually ~500 full drive writes, but I've been impressed by Sandforce. Shame they went out of business. I've never seen an HDD last nearly this long.
Shared libraries
ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
5 Retired_Block_Count 0x0033 093 093 003 Pre-fail Always - 608
9 Power_On_Hours_and_Msec 0x0032 099 099 000 Old_age Always - 132110h+35m+45.600s
12 Power_Cycle_Count 0x0032 100 100 000 Old_age Always - 175
174 Unexpect_Power_Loss_Ct 0x0030 000 000 000 Old_age Offline - 81
231 SSD_Life_Left 0x0013 089 089 010 Pre-fail Always - 1
241 Lifetime_Writes_GiB 0x0032 000 000 000 Old_age Always - 32128
242 Lifetime_Reads_GiB 0x0032 000 000 000 Old_age Always - 79296
Shared libraries
Wol
Cargo itself will let you use multiple different versions of the same library in one program, without errors.
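Cargo's dependency renaming makes this concrete: two semver-incompatible versions of one crate can coexist in a single build, each under its own import name. A hedged sketch of what that looks like in a Cargo.toml (the `rand` versions here are just an example):

```toml
[dependencies]
# Cargo keys crates by (name, version), so incompatible major
# versions can coexist; `package` gives the second one a distinct
# name to `use` in source code.
rand = "0.8"
rand_old = { package = "rand", version = "0.7" }
```

Each copy is compiled and monomorphized separately, which is precisely the duplication that distro packaging policies try to avoid.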
Shared libraries
The daemon set - sendmail and friends, MySQL and friends, etc etc
The windowing system - KDE/Plasma, Gnome, whatever
User applications.
Wol
Shared libraries
> monomorphization model doesn't seem to be a good fit for how distro's work today.
Shared libraries
You'll probably want to give "Do your installed programs share dynamic libraries?" a read if you haven't already.
Shared libraries
Has the dynamic linking runtime overhead been reduced to almost zero since then? Yes, I know it has been reduced by some degree, so that it's not really noticeable anymore. But is it faster than static linking now? How can that be?
Shared libraries
And they are scattered around in the file system which causes lookups and seeks.
I doubt that this can be faster on average.
Maybe it's faster for small applications where everything but the main binary is already in the page cache.
But I doubt it's true in a general sense.
Are uutils slower than gnu coreutils, just because they are statically linked?
Shared libraries
$ time for i in $(seq 10000); do /lib/cargo/bin/coreutils/true; done
real 0m21.966s
$ time for i in $(seq 10000); do gnutrue; done
real 0m7.110s
Shared libraries
Shared libraries
But the question was: Is this due to static linking?
Shared libraries
I would be *very* surprised if this would read the binary from disk 10000 times.
Saying that gnu-true is faster than Rust-true due to dynamic linking is clear nonsense, because there is no difference w.r.t. dynamic linking between them.
Shared libraries
It's the difference between real production systems and synthetic benchmarks.
Linux loads all ELF objects (binaries and libraries alike) on demand. If you run it in a loop, and it doesn't have enough memory to load the parts used for true and keep it in page cache, then that's the same whether it's dynamically linked or statically linked.
Shared libraries
Not necessarily - any page that includes a relocation is not shared. Add in that any shared page of any core system library that's not actively in use can get dropped from the page cache (since the kernel knows it can reload it on demand), and you could well find (and indeed, I see on my laptop) that glibc pages get paged in when you start a dynamically linked binary because they're not in cache already.
Shared libraries
Well, you'd have to ask this commenter why they thought the Rust binary would be read in 10000 times, but the C binary would not.
Shared libraries
Firstly, as has already been said, I don't load the whole 100 MB at a time - the kernel demand pages the bits I need.
Shared libraries
It's an embedded Linux system - cheap parts throughout.
Shared libraries
This is measured on an Alliance eMMC device. Taking the same C program, and statically linking it, makes it faster to load than dynamically linking it, on the same processor.
Shared libraries
You really, really don't if you're RAM constrained (welcome to embedded!).
Shared libraries
No - it happens when the system is continuously busy, but new process starting is on the "once every 15 minutes" scale, not "every few seconds".
Shared libraries
Wol
Shared libraries
Wol
Shared libraries
Wol
Shared libraries
Multi-user vs. single-user
E.g. DEC Ultrix did not have shared libraries, but it did have demand paging and sharing of binaries. On a typical multi-user system at that time, all users ran sh, vi, matlab, and mosaic, which thus could be shared.
On a modern single-user system, the user runs a few large applications, which cannot share much, unless they use the same shared libraries.
Shared libraries
unsafe extern "C" {
safe fn f();
}
In fact, I am not aware of a standardised ABI for C++ at all.
Shared libraries
Shared libraries
I guess at some point the pressure will be enough for Rust to standardize something resembling an ABI
Work already in progress: Tracking Issue for the experimental crabi ABI
extern "crabi" is intended to be a higher-level alternative to extern "C".
Shared libraries
Of course, even with Rust one can expose things across shared libraries using the C ABI, but then Rust code calling such a C-based API will have to use unsafe even when the implementation is fully safe.
I don't remember seeing unsafe use in the examples for the abi_stable crate.
.so-based systems, similar to how PyO3 will let you write code to interface Rust and Python and it'll take care of generating the unsafe bits and wrapping them up in invariant-enforcing abstractions.
Shared libraries
> Fast forward twenty years the technical debt come due and no one can leave the Java boat fast enough.
Shared libraries
> What kills Java as a general purpose language that could be used to build generic components that could end up in generic software deployed in generic distributions is lack of ABI enforcement resulting in lack of version convergence.
Shared libraries
> Distro opponents have not managed to do it.
Shared libraries
The critical difference between distro's “steaming pile of packages” and proper OS is an SDK: something that you can use to build your binary and, most importantly, target different versions of OS simultaneously.
> Chances are, if you're developing software for Linux you probably want to target more than one Linux distribution, anyway.
Shared libraries
A lot of us are not interested in hanging FLOSS decorations on a monopolistic closed system.
Shared libraries
In the short term, there's experiments like stabby and abi_stable looking at what it means to provide a well-defined ABI for a shared library written in Rust and intended to be consumed by other Rust programs.
Shared libraries
> I assume it could be handled using tooling rather than try to maintain a stable shared library ABI across versions.
Shared libraries
enum SecurityMode {LEGACY, SECURE, DISABLED}; to Rust and add Option<…> wrapping. And now look at how different versions of Rust treat that. Nice, isn't it? The same effect that you just described, but without any source changes, just with a different compiler. And no, release notes wouldn't save you either; there is nothing in them about this change.
Shared libraries
dyn Trait as capable as impl Trait at the cost of implementation speed) there would be no material difference between ABI stability checks and API stability checks.
Shared libraries
C++ supports shared libraries for the parts which don't use templates.
Shared libraries
> C++ support shared libraries and rust could in principle support them too.
Policy about the meaning of dropping a release architecture
A fun article from Microsoft
to be a step in this direction, where a more holistic approach is used.
A fun article from Microsoft
Wol
A fun article from Microsoft
Wol
A fun article from Microsoft
So, what's going to happen?
What counts as "'serious usage' of apt-ftparchive"?
In addition to what smcv wrote, adding Rust to apt-ftparchive would only make a (negative) difference to you if you want to develop apt-ftparchive and don't want to learn Rust, or if you use it on one of the four legacy platforms that the article discusses.
What counts as "'serious usage' of apt-ftparchive"?
