
Predictions for the new year

By Jonathan Corbet
January 5, 2026
The calendar has flipped over to 2026; a new year has begun. That means the moment we all dread has arrived: it is time for LWN to put out a set of lame predictions for what may happen in the coming year. Needless to say, we do not know any more than anybody else, but that doesn't stop us from making authoritative-sounding pronouncements anyway.

Some of the things that might just happen in 2026 are:

This will be a make-or-break year for the Firefox browser. For years, longtime Firefox users have hung on while Mozilla has pursued random directions, added advertiser-friendly features, and tried repeatedly to jump onto the AI bandwagon. But all those users want is a reliable, standards-compliant browser that gives them control over their web experience and protects their privacy. The result is that Firefox's usage has been declining for years.

Mozilla now has a new CEO who has one last chance to turn things around. A refocused Firefox determined to protect its users from an increasingly hostile web environment could regain users. A Firefox that continues to chase buzzwords or appease advertisers will sink without a trace. The world desperately needs an independent browser project that serves its users; one can only hope that Mozilla rediscovers its old mission to fill that need.

Meanwhile, an overall increase in interest in Linux and free software will be driven by a combination of surveillance concerns, forced upgrades, unwanted AI features, and increasing hardware prices. A lot of forces are coming together to show the virtues of a system that is free, user-centered, and resource-efficient.

The gccrs project will deliver a working Rust compiler this year; it will be usable to build the kernel's Rust code. This ambitious project to bring Rust support to the GCC compiler seemed to languish for years, but has recently picked up its pace. In the coming year, it will begin to be useful, even though the to-do list is likely to remain long.

The availability of gccrs is important because, among other reasons, an increasing number of important projects will require Rust to build over the course of the year. The availability of a GCC-based compiler will make that transition easier for many people, especially those working with architectures that the LLVM-based rustc compiler does not support. It is possible, though far from certain, that building the 2026 long-term-support kernel (likely 7.4, to be released on December 20) will require Rust for some use cases.

In 2025, many projects, including distributions, grappled with the question of whether to accept code generated by large language models (LLMs). In 2026, though, distributors will increasingly face the question of whether to ship LLM-based tools and other machine-learning software itself. Many of these tools claim to be open-source software, but the true status of code built around large binary blobs of model data is far from clear.

The Git project's transition away from SHA-1 will make big steps in 2026 as the efforts to switch to SHA-256 by default reach fruition. As the final interoperability pieces fall into place, there will be little reason to use the older hash algorithm for new repositories, and some established ones may make the switch as well.

Whether LLMs will prove useful for large-scale code generation remains unclear, but the use of LLMs for code review will increase sharply in 2026. The creation of new code has not been a limiting factor for much of the free-software community for a long time, but code review is in chronically short supply. It seems that LLMs can be good at finding the kinds of problems that humans often miss, and they do not (yet) get upset about being asked to review version 17 of a patch rather than implementing cool new features. The benefits of the use of these tools seem clear. The risk is that projects will quickly become dependent on proprietary systems that are unsustainable (in both climate and economic senses). If that dependence comes about, the possibility of disruptive rug-pulls will increase as well.

As a related point, it will become clear that attackers are using these systems to find vulnerabilities as they are added to free-software projects. It is hard to imagine a world where people who seek to break into systems would not use a tool like this. Security has always been (among other things) an arms race, and the attackers have a new weapon.

Restrictions on Android app installation will make the non-free nature of Android systems clear. Specifically, the planned restrictions on "sideloading" will create pain for many users. As Eugen Rochko commented: "'Sideloading' is the rentseeker word for 'being able to run software of your choosing on a computing device you purchased'". A platform that denies this ability is not free. This restriction may increase interest in alternatives based on the Android Open Source Project. But the world needs a truly free operating system for mobile devices. Perhaps the LibrePhone project will help us to get there, but do not expect this problem to be solved in 2026.

Distributors will have to rethink what they are creating. In the early days of Linux, a distribution was the source for most interesting software on a system; the rest was typically built from source. The growth in the amount of available software and the emergence of alternative software sources (language-specific repositories and sites like Flathub, for example) has challenged that model. Often, it seems like distributors offer nowhere near enough software, and they are too slow to ship it.

That leads to a situation where distributors consider falling back to a base (perhaps immutable) platform and leaving the task of shipping application software to others. That, however, takes away much of the value that distributions offer for their users. A collection of software that has been selected and built according to a set of consistent policies, with perhaps some user-unfriendly features removed, is not to be let go of lightly. Distributors that leave much of their work to third-party repositories may end up losing users to those that hold to their original mission.

The European Cyber Resilience Act (CRA) will start to make itself felt. While the main requirements of the CRA do not apply until the end of 2027, the requirement to report exploited vulnerabilities and serious incidents takes effect in September 2026. By that time, vendors shipping open-source software in their products will need to have mechanisms in place to meet those requirements when vulnerabilities turn up in that software.

Digital sovereignty efforts, in Europe and beyond, will pick up over the year. Even if the tensions that have caused many to question their reliance on US-based companies miraculously disappear, the point that this reliance can be a security risk has been made clear, and it will take years to rebuild trust, if that can be done at all. Free software is the obvious platform on which an independent digital infrastructure can be built, so expect an increase in interest in projects that can help countries increase their independence and control. We can only hope that the rest of the world opts to build platforms that support freedom and privacy, rather than creating its own native versions of the surveillance and control economy.

The desire to give a new shape to the Internet is felt far beyond Europe, of course. The net (and the World Wide Web specifically), as originally conceived, was a widely distributed collection of independent sites. Now it is dominated by a handful of massive companies that appear to be strip-mining what remains for content while steadily increasing their revenue-extraction efforts. Perhaps 2026 will be the year when the early Internet, updated for current times, begins to make a recovery. Efforts like the Resonant Computing Manifesto show the kind of thought that is starting to be heard more widely.

This year may be a turbulent one in which a lot of change becomes possible. If the AI bubble collapses, it may free the industry from its current large-model obsession and allow us to find ways to make some of that technology useful on a smaller, more distributed scale. There are faint signs of political change in the US that, just maybe, will eventually slow the efforts to tear apart the mechanisms of global cooperation. Since global cooperation is what our community depends on, there is reason to hope. We have shown how we can work together across the planet to build something of benefit to all. Maybe, if we are truly lucky, more of that work can truly be to everybody's benefit, and less for a few huge corporations.

Finally, LWN will complete its 28th year at the end of January; we would have never guessed that we would be doing this for so long. Some people are just slow learners. It has been a great ride, and we are far from done. We're looking forward to discovering 2026 with you, and thank you for your support. We will, as always, review these predictions at the end of the year to see just how wrong we were.



Limited by gccrs

Posted Jan 5, 2026 20:40 UTC (Mon) by rgmoore (✭ supporter ✭, #75) [Link] (36 responses)

I think the combination of gccrs providing more compatibility and more projects wanting to make Rust a hard dependency will split projects according to the platforms they care about. Projects that only care about the big, popular platforms will track rustc and include the latest features. Projects that care about the broadest compatibility with niche platforms will restrict themselves to the level supported by gccrs. Hopefully that difference won't be too big, but as long as rustc is the officially supported compiler it's likely to continue to have some features that haven't landed in gccrs yet.

Limited by gccrs

Posted Jan 6, 2026 6:05 UTC (Tue) by azumanga (subscriber, #90158) [Link] (19 responses)

My bigger worry is the long-tail of bugs.

I understand in theory it's better to have multiple independent compilers, but I also hate the pain in long-running C++ applications of having to #ifdef __clang__ and __gcc__ and _MSC_VER , and then sometimes particular versions of particular compilers.

Limited by gccrs

Posted Jan 6, 2026 7:30 UTC (Tue) by mb (subscriber, #50428) [Link] (2 responses)

>applications of having to #ifdef __clang__ and __gcc__ and _MSC_VER ,

It's rarely the case that these are needed due to compiler bugs.
99% of the time these are done because nonstandard features are used that all compilers implement differently.
I don't see this happening in the Rust world, because the idea is to closely track some version of rustc.

Limited by gccrs

Posted Jan 7, 2026 14:36 UTC (Wed) by chris_se (subscriber, #99706) [Link] (1 responses)

> >applications of having to #ifdef __clang__ and __gcc__ and _MSC_VER ,
> It's rarely the case that these are needed due to compiler bugs.
> 99% of the time these are done because nonstandard features are used that all compilers implement differently.

I work in C++ for my day job and the _only_ place where we have compiler-specific defines is when we want to silence false positive compiler warnings in certain code regions, because there is no standard way of doing so (without generating others in other compilers). I don't remember having to work around a compiler bug in this manner. (And I have recently encountered a compiler bug in Apple's clang, which I was able to work around by rewriting a single line of code slightly differently - but no conditionals.)

What the C++ code I'm working on does contain here and there are OS/arch-dependent conditionals, such as #ifdef _WIN32 or so. But that's not really that different from Rust's #[cfg(target_os = "windows")].

Granted, in very generic library code that also wants to support very old compiler versions these kinds of things might be necessary in some rare cases, but in my experience compiler-specific conditionals are more often than not used incorrectly, where a platform-specific conditional was more appropriate. (There's plenty of code I've seen that assumes MSVC = Windows and GCC = Linux, even though there's e.g. MinGW.)
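
For readers less familiar with the Rust side of this comparison, a minimal sketch of such a platform conditional might look like the following (the function names and fallback paths are purely illustrative, not taken from any real project):

    // Platform-specific conditionals in Rust, roughly analogous to
    // #ifdef _WIN32 in C/C++. The names and paths below are illustrative only.
    use std::path::PathBuf;

    #[cfg(target_os = "windows")]
    fn config_dir() -> PathBuf {
        // On Windows, use %APPDATA% if it is set, with a fixed fallback.
        std::env::var_os("APPDATA")
            .map(PathBuf::from)
            .unwrap_or_else(|| PathBuf::from("C:\\ProgramData"))
    }

    #[cfg(not(target_os = "windows"))]
    fn config_dir() -> PathBuf {
        // On Unix-like systems, fall back to $HOME/.config.
        let home = std::env::var_os("HOME").unwrap_or_default();
        PathBuf::from(home).join(".config")
    }

    fn main() {
        println!("config dir: {}", config_dir().display());
    }

As in the C/C++ case, the conditional keys on the target platform, not on which compiler happens to be doing the build.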

Limited by gccrs

Posted Jan 8, 2026 9:00 UTC (Thu) by taladar (subscriber, #68407) [Link]

That might be true for C++, a language whose development is basically crawling along at barely perceptible speed because of the many implementations and the design-by-committee approach, but I don't see that kind of language development speed as a goal for languages that want to thrive long-term. In fact C++ is basically the poster child for what I fear Rust will become if multiple implementations become established enough that every change to the language has to consider all of them.

Limited by gccrs

Posted Jan 6, 2026 7:30 UTC (Tue) by josh (subscriber, #17465) [Link] (15 responses)

gccrs has gone out of its way to say that projects should *not* do this. I hope that people listen to that message from gccrs.

Limited by gccrs

Posted Jan 6, 2026 9:33 UTC (Tue) by cpitrat (subscriber, #116459) [Link] (14 responses)

The simplest way to prevent it is simply to not allow it! It is in the hands of the gccrs devs after all (though I understand there's certainly some pressure and people coming with good reasons)

Limited by gccrs

Posted Jan 6, 2026 9:41 UTC (Tue) by mb (subscriber, #50428) [Link]

That's not possible.
If the compiler doesn't support it on its own, then people will come up with external solutions such as build.rs scripts or whatever.
The build system always knows what compiler it is using. It called it.
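
As a rough sketch of what such an external solution could look like, a build.rs script can ask the compiler that Cargo resolved for its version string and expose the result as a custom cfg flag (the "gccrs_toolchain" flag name is invented here, and whether gccrs would identify itself this way in its version output is an assumption):

    // build.rs sketch: compiler detection outside the language proper.
    // Cargo sets the RUSTC environment variable for build scripts to the
    // compiler it is going to invoke.
    use std::process::Command;

    fn main() {
        let rustc = std::env::var("RUSTC").unwrap_or_else(|_| "rustc".to_string());
        let output = Command::new(&rustc)
            .arg("--version")
            .output()
            .expect("failed to run the compiler");
        let version = String::from_utf8_lossy(&output.stdout);
        // Assumption: a gccrs build would report "gccrs" somewhere in --version.
        if version.contains("gccrs") {
            println!("cargo:rustc-cfg=gccrs_toolchain");
        }
    }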

I would rather like to have a "standardized" interface to check for compiler type/version rather than 9000 re-implementations of the same thing. Experience from C/C++ shows that this is a nightmare in cross compilation environments. "Not allowing" such an interface means to enforce this bad situation.

Support for that should be added to the already established interfaces.
https://crates.io/crates/rustversion
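
For reference, the interface that crate provides today looks roughly like this (a sketch; the 1.85 cutoff and the function name are just examples):

    // Version-conditional items using the rustversion crate; it needs to be
    // declared as a dependency in Cargo.toml for this to build.

    // This implementation is compiled only with a 1.85-or-newer toolchain...
    #[rustversion::since(1.85)]
    fn describe_toolchain() -> &'static str {
        "1.85 or newer"
    }

    // ...and this one with anything older.
    #[rustversion::before(1.85)]
    fn describe_toolchain() -> &'static str {
        "older than 1.85"
    }

    fn main() {
        println!("toolchain: {}", describe_toolchain());
    }

Extending an interface like this with a compiler-type check, rather than leaving every project to sniff the compiler on its own, is the kind of standardized support suggested above.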

Limited by gccrs

Posted Jan 6, 2026 10:10 UTC (Tue) by farnz (subscriber, #17727) [Link] (12 responses)

Note that Rust-the-language makes this fairly easy, at that. Inside the language, there's no way to determine what compiler or compiler version is in use - you have feature macros for unstable features, and that's it. Your only control lever is the build system, and right now, that's all limited to "which version of Rust is supported".

When gccrs is stable enough to claim support for a complete Rust version, build systems might grow the ability to behave differently with different compilers; but it is currently gccrs policy that differences in compiled program behaviour between rustc and gccrs are always gccrs bugs. As a result, it seems plausible that the tool offered to distinguish "rustc" and "gccrs" will just be the MSRV - if you claim to need Rust 1.85, and gccrs only implements Rust 1.49, then you'll definitely get rustc up until gccrs catches up.

Assuming both rustc and gccrs keep on their current paths, I think it far more likely that any divergence will be covered by feature flags at the language level, and if you genuinely need to identify which compiler it is (e.g. because you depend on the assembly output of rustc), you'll have to do that at build system level.
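
To make the current state concrete, here is a small sketch of what source code can and cannot ask today (the printed strings are only illustrative):

    fn main() {
        // cfg! can query the target platform, Cargo features, the build
        // profile, and similar properties...
        if cfg!(target_os = "linux") {
            println!("compiling for Linux");
        }
        if cfg!(debug_assertions) {
            println!("debug assertions are enabled");
        }
        // ...but there is no cfg!(compiler = "rustc") or cfg!(compiler = "gccrs"):
        // the language offers no way to ask which compiler is doing the build.
    }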

Limited by gccrs

Posted Jan 7, 2026 1:17 UTC (Wed) by josh (subscriber, #17465) [Link] (4 responses)

That is my sincere hope as well. We may add an in-language way to write conditionals on the Rust version, as well, but that's not the same as asking which compiler you're on, and ideally projects should never be distinguishing between gccrs and whatever older rustc version it is compatible with.

Limited by gccrs

Posted Jan 7, 2026 2:58 UTC (Wed) by quotemstr (subscriber, #45331) [Link] (3 responses)

Compiler-vendor obliviousness is an unattainable ideal that's never been achieved in practice in a multiple-implementor language environment. Even if you declare divergent behavior to be a bug, you can't fix bugs in the past, and people stay on older versions longer than you'd like.

Failure to add a language-level conditional to help programmers account for vendor divergence that shouldn't happen but actually does will just lead people to do gross things in the build system, e.g. preprocessing source code before handing it to the compiler.

Limited by gccrs

Posted Jan 8, 2026 2:30 UTC (Thu) by mathstuf (subscriber, #69389) [Link] (2 responses)

D is probably the closest to that (dmd (Digital Mars), ldc (LLVM), and gdc (GCC)). They seem to all work together well enough that I, at least, have not seen much in the way of compiler-conditional code. Python is another one: how often do you see Python code checking whether it is CPython, PyPy, IronPython, Jython, MicroPython, or any of the other implementations out there?

Generating code at least seems more honest than stuffing it into the language (as anything more special than a `#[cfg]` already is at least, but that has far more useful things to do beyond it). It's not too different than the polyfill sources for platform differences conditionally added into the build where things are otherwise missing (cf. tmux's `compat/` directory).

Also note that usually there is some sacrificial lamb that ends up not being able to distinguish itself in a meaningful way. For example, C compilers end up emulating GCC to bootstrap themselves so that projects that sniff compilers and dump unknowns into an `#else #error` block don't need to be manually patched all over the place. So instead you sniff for all the various vendor-specific ones and then, if none of them are defined, you can assume you're GCC (until some new implementation comes along and needs to be differentiated).

In any case, I expect gccrs and rustc to collaborate to not need such things in "everyday Rust" code.

Limited by gccrs

Posted Jan 8, 2026 9:55 UTC (Thu) by jwilk (subscriber, #63328) [Link] (1 responses)

> how often do you see Python code checking whether it is CPython, PyPy, IronPython, Jython, MicroPython, or any of the other implementations out there?

There are literally hundreds of packages in Debian doing that: https://codesearch.debian.net/search?q=%5Cbplatform%5B.%5...

Limited by gccrs

Posted Jan 9, 2026 21:24 UTC (Fri) by mathstuf (subscriber, #69389) [Link]

A perusal shows that most of this is low-level code (e.g., `setuptools`, `websockets`), vendored code in Firefox (which shouldn't count here), or is in what looks like generated code (e.g., I jumped to a random page and the `pypy` variable set is not actually used anywhere).

I also see a sizeable portion in test suites. So I think I'm going to stick by my "most Python code doesn't care" statement.

Limited by gccrs

Posted Jan 12, 2026 11:02 UTC (Mon) by moltonel (subscriber, #45207) [Link] (6 responses)

One nuance is that the features supported by gccrs do not cleanly match a particular rustc release. For example, the initial focus was on rustc 1.49 but the devs quickly found the need to implement some `const` features, and they are now focused on supporting "whatever the linux kernel needs". By the time gccrs can advertise "1.49 feature-completeness", it'll have lots of post-1.49 features.

The question is then "what should my MSRV be if gccrs implements the features I need but doesn't yet support all the features of the corresponding rustc version ?" The answer is likely to be messy.

Limited by gccrs

Posted Jan 13, 2026 1:04 UTC (Tue) by josh (subscriber, #17465) [Link]

> The question is then "what should my MSRV be if gccrs implements the features I need but doesn't yet support all the features of the corresponding rustc version ?"

Your MSRV should be whatever version of Rust includes all the features you need.

Limited by gccrs

Posted Jan 13, 2026 10:31 UTC (Tue) by farnz (subscriber, #17727) [Link] (4 responses)

Your MSRV should be the rustc version that you need, and if your project is important enough, gccrs will catch up with you. If it's not, then it's on people who desperately want to build it with gccrs anyway to patch the MSRV to one that gccrs supports - and they get to keep all the pieces if they do that and it goes wrong.

Limited by gccrs

Posted Jan 13, 2026 11:35 UTC (Tue) by moltonel (subscriber, #45207) [Link] (3 responses)

That's a comfortable, principled answer (and similar to Josh's, though I appreciate that you didn't conflate rustc version with Rust version). But making it someone else's problem doesn't answer the question. What if I want (or am paid) to support building my crate with gccrs ? How would you do it for your own project ?

If the LWN prediction is right, the Linux kernel will have to answer this question by the end of the year. Linux devs are not going to wait for gccrs to catch up to rustc-1.85 (or whatever the MSRV is at that stage). My guess is that they'll somehow split "MSRV" into "rustc-version || gccrs-version" in the build system. Eventually, some people will want something similar in cargo.

Limited by gccrs

Posted Jan 13, 2026 13:13 UTC (Tue) by ojeda (subscriber, #143370) [Link] (1 responses)

The minimum version we support is still 1.78. When I bump it to 1.85, which will happen soon, it will (should) remain there for about two years, since we plan to follow Debian Stable's version (assuming Debian Stable keeps following a similar pattern).

So if `gccrs` decided to target that version, then they would have two years, not less than one (if that is what you mean by "Linux devs are not going to wait").

But, of course, that does not really matter for building the kernel -- it is neither sufficient nor necessary.

And, yeah, the minimum versions the kernel declares, just like for C compilers and other tools, are about the tool versions, i.e. `rustc`. A separate `gccrs` minimum wouldn't be a surprise eventually.

Limited by gccrs

Posted Jan 13, 2026 13:55 UTC (Tue) by moltonel (subscriber, #45207) [Link]

Thanks, that matches what I was expecting.

To be clear, what I meant by "not waiting" is that I expect the kernel will support building with gccrs very soon after gccrs is "ready for the kernel", which may be a long time (probably years) before gccrs has "caught up with the rustc MSRV of the kernel".

Limited by gccrs

Posted Jan 14, 2026 12:18 UTC (Wed) by hunger (subscriber, #36242) [Link]

The gccrs newsletter from December says the plan is to *miscompile* the kernel in 2026: the compilation is expected to succeed, but the result will not run :-)

So if you are paid to work with the kernel using gccrs: You will end up doing fun compiler hacking :-)

Limited by gccrs

Posted Jan 6, 2026 7:41 UTC (Tue) by hunger (subscriber, #36242) [Link] (14 responses)

Rust requires every suggested feature to provide an implementation in rustc nightly (behind a feature flag). As long as that is the case, gccrs will *always* be far behind rustc in the features it supports. There is just no way for them to ever catch up.

I do see some value in having an independent implementation of Rust. That is severely limited by the decision to reuse crates from rustc: it is just a partially independent reimplementation with some of the tricky bits copied over.

I have much higher hope for the gcc backend for rustc... but if there is interest in a platform, then it will get support from llvm eventually. There is some interesting stuff happening with llvm-based tools -- more than "just" rust.

Limited by gccrs

Posted Jan 6, 2026 8:00 UTC (Tue) by mb (subscriber, #50428) [Link] (9 responses)

gccrs has to implement many of these unstable features, because they are often used to implement features in the std library of stable rustc compilers. So in the bootstrap process these features are already used and tested.
And even for features not used in the std library, nothing prevents gccrs from implementing unstable features.

Limited by gccrs

Posted Jan 6, 2026 8:51 UTC (Tue) by hunger (subscriber, #36242) [Link] (8 responses)

Sure, nothing prevents anyone from doing anything. But there is a huge imbalance here between rustc and gccrs.

To suggest a feature you have to implement it in rustc. So rustc will always have all features. Gccrs, on the other hand, has no such requirement. Gccrs devs will need to make time to add the features they want themselves -- provided they do not get them via the crates they plan to reuse from rustc.

Limited by gccrs

Posted Jan 6, 2026 9:14 UTC (Tue) by mb (subscriber, #50428) [Link] (2 responses)

New features are typically kept unstable (a.k.a. unusable by stable users) for many months or years before they are stabilized.
Plenty of time for alternative implementations to catch up before the stable release.

The problem is not that rustc progresses too fast.
The problem is that alternative implementations do not have the resources to catch up.
This is *not* a fundamental imbalance between the reference rustc and everybody else.

Limited by gccrs

Posted Jan 7, 2026 14:54 UTC (Wed) by chris_se (subscriber, #99706) [Link] (1 responses)

> The problem is that alternative implementation do not have the resources to catch up.

The gccrs project appears to be making a lot of progress, as the LWN article discussed. I personally think the biggest remaining hurdles (outside of the Linux kernel use case) are:

- Get to full compliance with the currently selected Rust edition
- Implement enough "magic" features that the standard library is fully there

If that point is reached, gccrs will be an alternative compiler that supports that specific Rust edition, and will be able to compile a lot of code that is out there already. Implementing the additional features introduced by newer Rust editions will be much more straightforward than fully reaching this initial stage.

Limited by gccrs

Posted Jan 7, 2026 16:12 UTC (Wed) by mb (subscriber, #50428) [Link]

There is no such thing as "supporting a specific Rust edition" independent from the rustc compiler version.
A compiler can support a Rust edition as implemented on a certain rustc version.

Rust edition x on compiler y.z is not the same feature set as Rust edition x on compiler y.z+1

That's why crates can (should) specify both the edition and the MSRV.
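
For reference, the two declarations look something like this in a crate's Cargo.toml (a sketch; the crate name and version numbers are only examples):

    [package]
    name = "example"        # illustrative crate name
    version = "0.1.0"
    edition = "2021"        # which edition the source code is written against
    rust-version = "1.85"   # MSRV: the oldest toolchain the crate claims to support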

Limited by gccrs

Posted Jan 7, 2026 8:49 UTC (Wed) by smurf (subscriber, #17840) [Link] (4 responses)

You can fix that the way some other standards do: require two independent implementations of $SHINY_NEW_FEATURE before it gets out of unstable.

Whether a significant-enough-to-make-a-difference portion of rust developers/users/what-have-you can be convinced to do that is a different question. Chicken/egg.

Limited by gccrs

Posted Jan 7, 2026 10:01 UTC (Wed) by Wol (subscriber, #4433) [Link] (1 responses)

Hmmm...

Create an "accelerated release" path that requires the feature to be in both rustc, and gccrs? Okay, if a new feature depends on other new features that aren't available in both, that's a bummer; but I guess implementing in both compilers will also help ironing out bugs, so the accelerated release will be earned.

Cheers,
Wol

Limited by gccrs

Posted Jan 14, 2026 12:02 UTC (Wed) by hunger (subscriber, #36242) [Link]

This might be somewhat mitigated by gccrs planning to use code straight from the rust compiler for several of the critical tasks (e.g. the borrow checker, a.k.a. Polonius)...

At this point "two independent implementations" turns into "two independent implementations that are sharing all the juicy bits that would need validation by independent implementations the most".

Limited by gccrs

Posted Jan 8, 2026 8:33 UTC (Thu) by taladar (subscriber, #68407) [Link]

This sort of thing is exactly why so many of us are opposed to the very idea of having a second implementation. It will lead to the exact same issues we see in other languages that have multiple implementations in terms of making the further development of the language itself so much harder.

Limited by gccrs

Posted Jan 14, 2026 11:55 UTC (Wed) by hunger (subscriber, #36242) [Link]

That's what I said: To make gccrs viable, the requirements on how features land in Rust need to be changed. Gccrs will need to offer something pretty compelling to warrant adding extra work to people already overworked with just one compiler in the mix.

Which standard still requires "two independent implementations" nowadays? And since when is rust a standard? :-)

Limited by gccrs

Posted Jan 7, 2026 17:46 UTC (Wed) by rgmoore (✭ supporter ✭, #75) [Link] (3 responses)

I see the main benefit of the gcc-based platform being getting Rust to architectures LLVM hasn't shown any interest in supporting. We seem to be getting more and more articles here on LWN that can be summarized as "package foo wants to start using Rust, but users of niche architectures are upset because this will cut them out". In terms of supporting these cases, some kind of gcc-based solution seems far more likely than LLVM starting to support niche platforms that are only losing popularity.

The key factor is obviously going to be the Linux kernel. It seems like kernel developers would really like to start using Rust features in the core kernel rather than just drivers, but that is extremely unlikely if it means abandoning a bunch of currently supported architectures. Once there's good enough support for Rust that the kernel can start using it for all its supported architectures, I expect a lot of other projects to start using it.

Limited by gccrs

Posted Jan 8, 2026 19:11 UTC (Thu) by jmalcolm (subscriber, #8876) [Link] (1 responses)

> the main benefit of the gcc-based platform being getting Rust to architectures LLVM hasn't shown any interest

I agree. This is the main benefit. And the most important "package" that wants to use Rust that needs to support these non-LLVM platforms is the Linux kernel. Making Rust mandatory for Linux is going to require the ability to compile Rust on platforms that LLVM does not target but GCC does.

That does not mean that it has to be the absolute latest and greatest version of Rust though. I quite like the idea that the Linux kernel will build with the Rust compiler included in Debian Stable. As reported here on LWN, that seems to be the plan for the kernel. This seems like a totally sensible cross-platform target and something that gccrs should be able to easily stay current with (after it ships at all, of course).

One of the highest profile projects that I have seen wanting to add Rust was APT. Obviously, it must be possible to build APT with the version of Rust in Debian Stable. So APT is well served by this convention.

I can see other projects being less well behaved. Git wants to make Rust mandatory, for example. Are they willing to submit to the same constraint? Or are they going to want to chase newer features? It would be nice if they adopted the same standard as the kernel. UUtils requires Rust 1.85.0, which is the version in Debian 13 (stable). So it seems that they may be following this standard already. But what happens if Kent Overstreet adds Rust to bcachefs? From what I have seen, he is unlikely to accept any constraints on his "engineering". I say this as a user and fan of bcachefs, by the way. One can hope though that we could all learn to collaborate on this.

The case for including Rust code in a project is compelling. The case for needing a compiler newer than what ships in Debian is more marginal.

We could have a really nice system if we can agree that anything that wants to be cross-platform should stick to the "builds in Debian Stable" rule and if gccrs can build that code too.

Projects that themselves specifically target a narrower set of architectures will still be free to use newer features in Rust of course. It just means they will be less portable--or at least that porting them will require gccrs to catch up. Nothing wrong with that.

Limited by gccrs

Posted Jan 8, 2026 19:48 UTC (Thu) by jmalcolm (subscriber, #8876) [Link]

Sorry, it should read as follows:

stick to the "builds in Debian Stable" rule and, if it does, gccrs can build that code too

Limited by gccrs

Posted Jan 14, 2026 12:34 UTC (Wed) by hunger (subscriber, #36242) [Link]

You can get that benefit from the rust compiler backend using gcc as well.

github: https://github.com/rust-lang/rustc_codegen_gcc
latest status report: https://blog.antoyo.xyz/rustc_codegen_gcc-progress-report-39

Limited by gccrs

Posted Jan 8, 2026 19:42 UTC (Thu) by jmalcolm (subscriber, #8876) [Link]

> to make Rust a hard dependency will split projects according to the platforms they care about

This is true of course. But it is true before we even consider something like gccrs.

Will it be possible to build your code on Debian Stable? If so, you will need to limit yourself to the version of Rust that ships with Debian Stable. This is true even if you are targeting X86-64, ARM64, or RV64. If "the platforms you care about" includes Debian or any of its many derivatives, you are not going to be relying on features shipped in rustc last month.

The Linux kernel seems to have chosen to make its required version of Rust the one that currently ships in Debian. This makes sense to me.

That would make the currently required version 1.85.0 which was released about one year ago. This version will be about 3 years old when Debian 14 ships.

It should not be that difficult for gccrs to keep up with the pace of development that will be required to build the Linux kernel.

If we can stick to this metric, then if your code builds on Debian Stable, gccrs should be able to compile it as well. That makes it pretty easy to be fully cross-platform in Rust.

But Debian 12 ships with Rust 1.63.0. Do you care about Debian 12? What versions of Ubuntu or Linux Mint do you want to support?

Some projects may value newer Rust functionality more than portability and they will require the latest versions of rustc. Yes, that means that gccrs may not be able to build these projects (yet). So your code will not build on PA-RISC. But it will not build on a fairly large percentage of much more mainstream Linux systems either. Which may be fine of course.

You are absolutely correct that projects will have to choose between bleeding edge and portability. But the portability decision is not about targeting ancient or obscure architectures only supported by gccrs. Rejecting portability will be rejecting many, many more systems than that.

What is it that distros are creating?

Posted Jan 5, 2026 23:13 UTC (Mon) by jengelh (subscriber, #33263) [Link] (2 responses)

>Distributors will have to rethink what they are creating.

It's all about Added Value.

* a _curated_ set of packages that (one or more) humans take (hopefully good) care of
* lots of desirable software from _one hand_ (one distributor)
* base operating system getting that enterprisey security review/updates

In other words, if your distro

* provides scripts (e.g. in the sense of ebuilds or the like) that always download refs/heads/master and thus can randomly fail to build, that's a turnoff
* if a lot of software people take for granted is not available through the distro's package mechanism, that's a turnoff (you'd be no better than Windows)
* if packages just get bumped even in the -stable branch because no one can - or wants to - do backports any longer due to terrible separation (in essence, the software prevents others from adding value), that's a turnoff for everyone. I'm looking at you, Chromium, Firefox, and everything that's over 40k source files...

What is it that distros are creating?

Posted Jan 6, 2026 8:51 UTC (Tue) by epa (subscriber, #39769) [Link] (1 responses)

Perhaps you are being a little unfair to Firefox, which does maintain a long-term-support branch. A given long-term version might not be maintained for the lifetime of a Debian stable release, so you might still be forced to make a version jump to the next long-term release. But you could say the same of the Linux kernel.

What is it that distros are creating?

Posted Jan 8, 2026 19:53 UTC (Thu) by jmalcolm (subscriber, #8876) [Link]

> you could say the same of the Linux kernel

Except that would be wrong. In addition to Debian's work to extend kernel support, Debian Stable kernels are maintained for 10 years as part of the Civil Infrastructure Project.

https://cip-project.org/about/linux-kernel-core-packages

GrapheneOS works well

Posted Jan 6, 2026 9:56 UTC (Tue) by tekNico (subscriber, #22) [Link] (1 responses)

> This restriction may increase interest in alternatives based on the Android Open Source Project. But the world needs a truly free operating system for mobile devices.

The best one, it goes WITH saying, currently being GrapheneOS.

https://grapheneos.org/

GrapheneOS works well

Posted Jan 6, 2026 10:54 UTC (Tue) by paulj (subscriber, #341) [Link]

Here's hoping we'll see GrapheneOS support other phones besides Pixels.

LineageOS (preferably a MicroG build) or e/OS (a tweaked LineageOS) supports a lot more phones (though neither supports relocking the bootloader - if the phone ever leaves your hands for any significant amount of time, you can no longer trust the software).

the use of LLMs for code review will increase sharply

Posted Jan 6, 2026 9:58 UTC (Tue) by mb (subscriber, #50428) [Link] (11 responses)

>the use of LLMs for code review will increase sharply

I have now read this many times (especially here on LWN), but I really don't get it.
I have tried the commonly known LLMs for code review and it doesn't work at all for me.
All the suggestions from the LLMs are either completely useless trivial things that I had actively decided against already or are outright incorrect change suggestions.

What do I have to do to make LLMs produce useful review results?
What am I missing? Are there special LLMs that I can train to my code base before letting them review changes to this code base?

the use of LLMs for code review will increase sharply

Posted Jan 6, 2026 11:12 UTC (Tue) by jlayton (subscriber, #31672) [Link] (5 responses)

Chris Mason's review prompts add a lot more context that LLMs can use, and that makes it a lot more useful for kernel patch review. It's also a good idea to install semcode which cuts down on the tokens needed:

https://github.com/masoncl/review-prompts

the use of LLMs for code review will increase sharply

Posted Jan 6, 2026 12:48 UTC (Tue) by mb (subscriber, #50428) [Link] (4 responses)

Thanks. Got it.
So this is only true in the kernel context, where there is an extremely fine-grained and extremely specific hand-crafted set of rules exactly for this purpose.

the use of LLMs for code review will increase sharply

Posted Jan 6, 2026 14:05 UTC (Tue) by willy (subscriber, #9762) [Link] (3 responses)

I suspect a sufficiently motivated expert could do this for any code base.

the use of LLMs for code review will increase sharply

Posted Jan 6, 2026 14:31 UTC (Tue) by bronson (guest, #4806) [Link] (2 responses)

Maybe, but it depends on the codebase. If you have a React-based website with a Python backend, something the LLMs have seen zillions of? Probably pretty easy. But highly optimized C++ or low-level C code? It'll be brutal.

the use of LLMs for code review will increase sharply

Posted Jan 6, 2026 14:35 UTC (Tue) by willy (subscriber, #9762) [Link]

I suggest you actually look at what Chris has done before being so dismissive. The reviews being generated in the BPF subsystem are extremely impressive. Look at a representative sample from:

https://lore.kernel.org/bpf/?q=AI+reviewed+your+patch

the use of LLMs for code review will increase sharply

Posted Jan 6, 2026 15:28 UTC (Tue) by mb (subscriber, #50428) [Link]

If you look into the kernel review prompts you will notice that a very large part of them are actually there to try to suppress certain trivial/useless/false answers from the LLM.
And that is also my experience. The main work is getting rid of the nonsense and then it might sometimes find something real. But for small projects that is much more work than doing the review manually.

So, no.
I don't think it's enough that the Internet has a lot of similar code for LLM review to be useful. It needs major project specific manual (one time) preparation work.

the use of LLMs for code review will increase sharply

Posted Jan 6, 2026 16:32 UTC (Tue) by hobbitalastair (subscriber, #145290) [Link] (4 responses)

I have had success using LLMs to check large refactorings. It's definitely not perfect but will pick up copy-paste mistakes or typos that are hard for a human to spot, and is perfectly willing to read through hundreds of lines whereas a reviewer might lose focus. That's asking it to look for bugs, which is maybe different from what you were trying?

It is a (nearly) free second pair of eyes; even if it's mostly false positives, it still seems worthwhile if it spots something occasionally.

the use of LLMs for code review will increase sharply

Posted Jan 6, 2026 16:55 UTC (Tue) by mb (subscriber, #50428) [Link] (2 responses)

Yes, I have also successfully found bugs, problems and typos in the past with LLMs. However, it always was a major manual effort to feed the correct context information into the LLM. Something like "Here is the code base with all of its documentation. Please review it" has never resulted in anything useful for me.
But if I describe an already known misbehavior and the expected behavior in detail, it's sometimes able to spot the problem.

That's why I don't see it as a nearly free second pair of eyeballs.
For a big project it certainly may be beneficial, though.

For small projects it will only become interesting, if the LLM will be able to pull this information from the normal documentation, code comments and the overall picture of the code base.
But we're not there, yet.

the use of LLMs for code review will increase sharply

Posted Jan 6, 2026 17:09 UTC (Tue) by daroc (editor, #160859) [Link]

It would be interesting if an LLM were capable of writing the kind of documentation that helps (making using one for this a two step process), but I don't think that's very likely. I agree that this will mostly affect big projects like the kernel where an expert can write and maintain a big detailed rules document. I have been thinking of this use of LLMs as something like a configurable linter that can (unreliably, but still at useful rates) recognize more abstract patterns than a traditional linter.

the use of LLMs for code review will increase sharply

Posted Jan 8, 2026 18:08 UTC (Thu) by kpfleming (subscriber, #23250) [Link]

> That's why I don't see it as a nearly free second pair of eyeballs.

'free' is carrying a lot of weight in that statement.

the use of LLMs for code review will increase sharply

Posted Jan 8, 2026 8:39 UTC (Thu) by taladar (subscriber, #68407) [Link]

In my experience you can't even get the LLMs to avoid changing code when it is just supposed to move it verbatim to e.g. another module. I have had major issues with it constantly downgrading code to the way it worked in the older dependency version that was current when the model was trained.

Don't make assumptions about what Firefox users want

Posted Jan 6, 2026 17:51 UTC (Tue) by roc (subscriber, #30627) [Link] (12 responses)

> But all those [Firefox] users want is a reliable, standards-compliant browser that gives them control over their web experience and protects their privacy.

It would be a lot easier for Mozilla if this was true, but it isn't. A lot of Firefox users, including me, expect a browser that is broadly feature-competitive with Chrome.

Mozilla critics invariably opine that "Mozilla just needs to make a browser that does X" where X is exactly their needs and no others. Unfortunately, when you dig into it, there are a lot of different values of X.

Better not make assumptions about what Firefox users want without actual data. Hint: a lot of Firefox users don't leave comments in LWN or whatever other forums you read.

Don't make assumptions about what Firefox users want

Posted Jan 6, 2026 18:14 UTC (Tue) by pizza (subscriber, #46) [Link]

> Hint: a lot of Firefox users don't leave comments in LWN or whatever other forums you read.

Case in point: The overwhelming majority of Firefox users run it under Windows. [1]

> A lot of Firefox users, including me, expect a browser that is broadly feature-competitive with Chrome.

The competition not being "feature competitive" is the _only_ reason Firefox ever reached its former prominence, and Google continues to pour obscene amounts of money into ensuring that Chrome has more features than ever and web sites that take advantage of said features. After all, what good is a web browser that doesn't "work" for typical web sites?

And that's before intentional vendor lock-in crap (e.g. $dayjob requires Edge or Chrome [2]).

[1] According to Mozilla, ~83% of all Firefox users do so under Windows, with the rest split about evenly between MacOS and Linux.
[2] Not Chromium or some other rebuild; actual DRM+phones-home Google Chrome.

Don't make assumptions about what Firefox users want

Posted Jan 6, 2026 22:13 UTC (Tue) by notriddle (subscriber, #130608) [Link]

I don’t know if you’re right (speculating about the average user is fraught with peril), but I do know what the market trends were, and anybody else can easily find them.

The fact is, Firefox market share was already in freefall in 2011, before the deprecation of XUL extensions (2017), replacement of Brendan Eich (2014), rollout of Australis UI (2013), or integration of Pocket (2015).

Causes precede effects. Therefore, none of these changes caused Firefox’s market share to begin dropping. And the chart I’m looking at [1] doesn’t make it look like any of these events significantly sped it up, either.

[1] https://commons.wikimedia.org/wiki/File:BrowserUsageShare...

Don't make assumptions about what Firefox users want

Posted Jan 7, 2026 4:25 UTC (Wed) by interalia (subscriber, #26615) [Link] (6 responses)

I agree with this 100%. I want a Firefox that is feature competitive with Chrome, because anything otherwise is even more certain death for Firefox.

I also have serious doubts there is anything Mozilla could have done differently that would have significantly altered Firefox's current course. Yes, the many ways Mozilla has tried to get alternate revenue sources and wean itself off Google funding were kind of lame, but I don't blame them for trying because that is a reasonable goal that's in my interests. When I see people here or on other forums say "all they have to do is X" I think they seriously underestimate the difficulty of chasing Chrome when Chrome has the benefit of Google's black-hole gravity pulling people into it.

The only way to make it easier would be for a significant number of people to de-Google themselves and Mozilla has no traction to do that regardless of what features they add to Firefox. If Mozilla added a super great feature to Firefox and users were drawn to it, Google would add it ASAP and within a few months at best that advantage would be neutralised and Firefox would be back to square one. I think people should be more realistic about Firefox's predicament and that it's not incompetence. Sometimes a war is not winnable.

Don't make assumptions about what Firefox users want

Posted Jan 7, 2026 17:11 UTC (Wed) by rgmoore (✭ supporter ✭, #75) [Link] (4 responses)

This is spot on. Firefox has spent a huge fraction of its existence competing against monopolists, first Microsoft and then Google. Both companies were willing to throw money at browser development to stay ahead. They also bundled their browser as the default on their platform. It's incredibly hard to compete with those inherent monopoly advantages.

Firefox's only period of success was after the US government sued Microsoft and before Google took over with Android. It's sad, but I don't see Firefox being competitive again until/unless another government goes after Google. I don't see the Trump Administration doing that, so the EU is Firefox's best chance. Speaking cynically, they might improve their chances by moving to Europe so they look like a European company fighting the good fight against untrustworthy Americans.

Don't make assumptions about what Firefox users want

Posted Jan 7, 2026 17:21 UTC (Wed) by pizza (subscriber, #46) [Link] (3 responses)

> Firefox's only period of success

Success is relative -- Even at Firefox's peak, it still held less than 32% of the overall market.

> They also bundled their browser as the default on their platform.

Mozilla also never controlled a substantial portion of important web destinations that _still_ routinely proclaim "this site works best with <something other than Firefox>, click here to install".

> It's sad, but I don't see Firefox being competitive again until/unless another government goes after Google

Keep in mind that unlike IE, Chrome is nearly entirely F/OSS and that it is generally quite viable to use a de-googled Chromium build/variant. But it turns out that most folks _want_ that tighter platform integration...

Don't make assumptions about what Firefox users want

Posted Jan 7, 2026 17:29 UTC (Wed) by zdzichu (subscriber, #17118) [Link] (1 responses)

If you mean peak 32% in 2009, the same data shows IE at 56% at the same time. This is clearly irrelevant for Linux. When you subtract the IE number, this would give ~80% market share for Firefox.

Don't make assumptions about what Firefox users want

Posted Jan 7, 2026 17:43 UTC (Wed) by pizza (subscriber, #46) [Link]

> If you mean peak 32% in 2009, the same data shows IE at 56% at the same time. This is clearly irrelevant for Linux. When you subtract the IE number, this would give ~80% market share for Firefox.

So... monopolies are good when it's Firefox on top?

Don't make assumptions about what Firefox users want

Posted Jan 7, 2026 17:35 UTC (Wed) by dskoll (subscriber, #1630) [Link]

Success is relative -- Even at Firefox's peak, it still held less than 32% of the overall market.

In any normal (non-monopolistic, non-duopolistic) market, 32% market share would be considered pretty good and possibly even the #1 share.

The software market, unfortunately, is completely distorted and tends to devolve to a monopoly or a duopoly. And indeed, some companies don't consider themselves a success until they've completely destroyed all competition, which is obviously terrible for consumers.

This is why we need strict regulation to put the brakes on these rapacious companies that want complete domination. IMO, Google and Microsoft should both be forced to divest themselves of their browsers. But the odds of that happening are between epsilon << 1 and zero.

Don't make assumptions about what Firefox users want

Posted Jan 8, 2026 8:43 UTC (Thu) by taladar (subscriber, #68407) [Link]

One thing they constantly refuse to do but definitely could have done differently would have been to allow people to donate specifically to Firefox and not any of the unrelated Mozilla projects.

Don't make assumptions about what Firefox users want

Posted Jan 7, 2026 18:05 UTC (Wed) by rgmoore (✭ supporter ✭, #75) [Link] (1 responses)

A lot of Firefox users, including me, expect a browser that is broadly feature-competitive with Chrome.

In other words, we've recreated the situation from the heyday of Internet Explorer, where the behavior that matters is what the leading browser does, not what the standards body says. Of course the leading browser will constantly add new features in order to maintain its lead over the competition, and other browsers will be stuck chasing tail lights. Mozilla is going to have a hard time competing in that environment.

What I would say is that being feature-competitive is the absolute minimum requirement. To win users, Firefox needs to offer some compelling justification for why it's a better choice than Chrome. That could be control over one's web experience and protecting user privacy as the original article said. It could be a richer set of plugins that aren't constrained by Google's commercial interests, which obviously overlaps with protecting privacy. It could be better performance, smaller memory footprint, or whatever. But Mozilla needs to offer something more than "just as good as the other guy".

Don't make assumptions about what Firefox users want

Posted Jan 8, 2026 20:36 UTC (Thu) by jmalcolm (subscriber, #8876) [Link]

> What I would say is that being feature-competitive is the absolute minimum requirement

Obviously a browser, as a tool, needs to work on the real web. If features are popular, they must be supported. But not everybody cares if every feature box is ticked.

I want to take a step back and say that there is a difference between what Mozilla needs to do to succeed as a company and what their web browser has to be. Or let's take a further step back and say that a browser that provides a "reliable, standards-compliant browser that gives them control over their web experience and protects their privacy" does not even have to come from Mozilla.

It can be difficult to support a code base that is not part of a successful commercial product. But this is not the only way. And the "success" we are talking about here is enough to support the code base. It has nothing to do with market share. Firefox has a tiny market share. But it also has about 300 million users worldwide.

Let that sink in. Firefox has maybe 10 times the user-base that desktop Linux has. Yet we say this is the year of the Linux desktop and Firefox is so irrelevant that we should all just move on.

Ladybird is supposed to be ready to use this year. It is making great progress with a tiny budget. Will the current dev team be able to make something good enough to provide a "reliable, standards-compliant browser that gives them control over their web experience and protects their privacy"? They seem to think so.

In my view, if we have a browser like Ladybird, we should be able to maintain a browser that provides a viable alternative to the "big 3" (Chrome, Safari, Firefox). Those of us that do want a reliable, standards-compliant, privacy-respecting browser can have one. And anybody that finds it lacking in features can choose one of the others (along with the inherent downsides).

Our big mistake, I think, is thinking of Firefox as our only hope against a fully proprietary and abusive future for the web, which then becomes inevitably tied to the commercial survival of Mozilla the corporation.

We need at least one truly independent Open Source browser that is not tied directly to a commercial entity. Then we can let competition do what competition does.

Firefox the browser would survive even if Mozilla fails. It is after all the Phoenix that rose from the ashes of Netscape. And Servo, that Mozilla completely abandoned, is still coming along as well.

But we may not like where Firefox goes in the meantime. And it may not be the best platform from which to become "broadly feature-competitive with Chrome". Firefox at this point is a project that requires hundreds of engineers to move forward. It requires a massive amount of money behind it.

The benefit of a new code base that rises out of a much leaner dev group is that it requires many fewer developers to feed it. That does not mean that it cannot become "broadly feature-competitive with Chrome". At least not in my view. And I think that Ladybird is proving that already. It is impressive how much of the "native web" it can already handle. Performance may be the biggest hurdle. But all that requires is steady, relentless chipping away. A smaller team can still do it. It all depends just how "feature-competitive" we need to be.

Don't make assumptions about what Firefox users want

Posted Jan 14, 2026 19:34 UTC (Wed) by obspsr (subscriber, #56917) [Link]

Sure... we don't because Firefox is so good that we use it as a browser.
A lot of serious users don't care about AI, and all the shit that will happen to us...
Nothing to add.

Role of the distro

Posted Jan 7, 2026 14:13 UTC (Wed) by Richard_J_Neill (subscriber, #23093) [Link] (14 responses)

It should be the case that (almost) everything is available within the standard apt (or yum) repository, and we do ask quite a lot of the maintainers - notably to keep applications current and security-audited. If so, this would save a massive amount of work that is currently duplicated by every single end user. Otherwise, we are heading back to the Windows world, where everything has to be manually installed, and you can't trust the source.

For example, "apt install mediawiki" should install a current version, and leave it in a working state (and where updates can also be applied by apt - I know this isn't easy, partly because it needs to bypass mediawiki's own installer).

I'd also like to see the distros metapackage everything. For example, why do we have separate package managers for node, for python, etc. These need to come within the scope of apt.

In my view, snap is a total nightmare (it's incredibly wasteful of space - yes, this still matters, both for smaller SSDs and for the environment) and it doesn't solve any actual problem - we have known how to deal with shared libraries in different versions for years. Flatpak is better (it's not proprietary), but really, the answer is still to get the packages built for a main distribution, and then offer a cross-platform tarball if needed.

Similarly, if you are developing for the web, "apt install jquery" (or whatever other JS package out of thousands) should bring the current version onto the system, so that your application can just serve up a symlink to /usr/local/js/jquery-current.min (or whatever), rather than having to manually keep a copy of jquery in your own source tree or, worse, use a CDN (which is a security leak).

Finally, even in the era of fast networks, why are updated packages not shipped as diffs? If I "apt update", then the package manager should be smart enough to figure out that I'm going from 1.7.32 to 1.7.33, and first see if there is a delta-deb that just patches the files.

In other words, the original Debian vision is still right, and more relevant than ever before. And yes, I'd gladly pay for some of these features.

Role of the distro

Posted Jan 7, 2026 21:35 UTC (Wed) by raven667 (subscriber, #5198) [Link] (3 responses)

The one point I do agree with is that for applications/libraries in the global /usr namespace there should be _one_ tool that has full visibility and uncontested management of all the files, and language-specific tools like pip/uv, cpan, etc. should integrate with rpm/dpkg/etc. Ideally distro tool makers would work with the language-specific repos to build that bridge of support and share experience. I've had good luck with cpanspec building quality rpms and it'd be nice if other languages worked as well as that.
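For anyone who hasn't seen that bridge in action, this is roughly what the cpanspec workflow looks like (from memory, so the exact flags and the generated file name may differ):

    cpanspec DBI                  # fetch the CPAN dist and generate perl-DBI.spec from its metadata
    rpmbuild -ba perl-DBI.spec    # then build it like any other package (the tarball may need to be in SOURCES)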

I disagree with almost everything else. I think packaging all possible services/applications is a fundamentally unachievable goal in a world of finite resources, even if there were just one well-funded distro, let alone several. We've been using rpm/dpkg for 30 years now and have not managed to package everything capable of running on the Linux/Unix platform, and I don't see this approach bearing more fruit. I love packages, and they solve a lot of OS problems, but I don't use them for GUI software or major server applications when I need to track upstream bugfixes and features more closely than any distro provides. The distro model of applications being updated every six months or three years makes more sense in a world where software is shipped on tape or CD-ROM and lifecycles match the underlying hardware; it provides less value in a world of constant CI/CD where apps can be updated every few weeks. The gatekeeping and review that distros provide can be helpful, and the packagers who are diligent about it add a lot of value, but that is not most packages or packagers: most don't have the time to perform a full audit of every piece of software they package, so they get it to a point where it seems to work and move on with their lives. We wouldn't be having this discussion if the majority of packagers were tracking upstream with distro CI/CD resources and working closely with developers to build artifacts suitable for wide deployment.

For most applications, containers (or statically compiled binaries) are the most reliable way to ship working code between the developer and the end-user, even if that makes a trade-off with security by making it more difficult for a local admin to update patched shared libraries. Containers such as Flatpak and Docker solve very real compatibility problems: you can reliably run the same app on the distro of your choice and expect it to work, even when apps use mutually incompatible shared libraries or data files. You could say that incompatible libraries shouldn't exist because of sonames and symbol versioning, but that is only part of the problem: only a few library makers, such as glibc, have the discipline and funding to maintain strict ABI compatibility over a long time; most projects, even high-profile ones used by millions or billions, are run by volunteers who don't have the time and aren't being paid for that level of software engineering. And the C ABI is only one part of the problem: often an application depends on shared data files or config tied to the library version, with no support for side-by-side installs of different versions. It doesn't matter whether those problems are solvable or widespread; it matters that they exist, and any system has to work in an imperfect world; we don't get spherical cows on frictionless surfaces. Most distros don't package multiple versions of any given library anyway (only in a few high-profile cases do they do so); instead they spend a lot of time trying to de-vendor apps and force everything to run with one global version, and in doing so they also introduce bugs when that isn't the version the code works with or that the developer tested.

As for why package updates aren't shipped as diffs: this was tried, at least in the RPM world, and it just didn't work. It didn't make anything simpler or easier; it made the process slower and more complex, required more from the CDN/mirrors and more from the client devices, and had more ways to fail, often leading to the need to download the full-size package update anyway. I don't think that was some specific bug in the implementation; it's a failure of the approach. Where this kind of approach _has_ worked is with container images and tools like rpm-ostree: on Silverblue and Bazzite, updates to the base OS image are shipped as diffs of the filesystem, not of each individual package, and that is efficient and reliable.

The last point I do agree with is that the distros have an important role to play, now and in the future, in making the base OS that runs applications and manages the hardware, or in building the runtime for containers; there are many valid ways to build and manage a base OS.

I don't know if I've made a good argument. Both the traditional distro approach and containers have their place: distros for slow, long-term-supported base runtimes and containers for faster update cadence and portability. Containers do solve real problems, and distros aren't great for a lot of types of software.

Role of the distro

Posted Jan 7, 2026 22:38 UTC (Wed) by Richard_J_Neill (subscriber, #23093) [Link] (2 responses)

Thanks for your thoughtful and interesting reply.

I think I should explain better with an example. The problem with packaging (something the Linux world used to do uniquely well compared to the Windows world, and something we are now getting worse at) is that it is very demanding on the maintainers (who do a great job, often without sufficient thanks). But, rather than doing the work once at distro-level, we're doing it 100,000 times at the sysadmin level.

Let me give an example. I have a web-app which uses jQuery. I serve it over https and, so as not to leak data, I insist on serving all resources locally (CDNs defeat the purpose of https). So I have a /var/www/html/myapp/js/lib/jquery-min.js file which I have to keep up to date by manually checking the jquery website from time to time to see if there's an update, and which I then maintain in my own source tree. This means that security updates are inevitably late, my own git repo contains a bunch of jQuery code (meaning bloat), and I have extra work to do. This multiplies by every single other thing, whether it's bits of python, or bootstrap, or ...

What I should be able to do is say:

1. My app has a jquery dependency (and other libraries etc)
2. apt install jquery-3 #i.e. the latest 3.x version
3. symlink /var/www/html/myapp/js/lib/jquery-min.js -> /usr/lib/jsdist/jquery/jquery-3.min
4. apt upgrade is responsible for ensuring that /usr/lib/jsdist/jquery/jquery-3.min goes from 3.6.1 to 3.7.1

This makes my own app much easier to package, and much better behaved (plus smaller, which has a global environmental consequence when this happens a million times). The key point is that, to install "myapp" on a new system, it should be possible to express the requirements as (see the sketch after the list):
* one application package (or directory, or tarball, or git-checkout).
* one standard distribution (e.g. current Debian or Ubuntu LTS)
* a set of apt dependencies (all met from the distro repo, including python and JS libraries)
* a setup script (e.g. to create a postgres database)
* a very small number of changes to the config file requirements in /etc/ (ideally none)
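For what it's worth, Debian already does a version of this for a handful of JS libraries; here is a rough sketch using the real libjs-jquery package (the installed path is from memory, so check it before relying on it):

    apt install libjs-jquery          # packaged jQuery, updated through normal apt upgrades
    ln -s /usr/share/javascript/jquery/jquery.min.js \
          /var/www/html/myapp/js/lib/jquery-min.js
    # from here on, "apt upgrade" keeps the served copy current; only the symlink lives in my tree

What I am asking for is this pattern, generalised to everything.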

I also believe that, while there is sometimes a good security argument for "one app, one VM", we should still engineer our Linux applications as we used to when one server might have multiple developers, multiple apps, and multiple instances of each app, and we trusted the kernel and Unix permissions to isolate them appropriately. The general rule was: you build the app against the distribution, you might (very rarely) manually install something in /usr/local/, and you have to be a good citizen of a shared service, because the system is not exclusively yours.

Of course we can't package the entire world. But we aren't even trying in most of the widespread cases - and as a result, every sysadmin has got to do far more work separately than they would together. The waste - in time, energy/CO2, hardware, disk/RAM - is insane, and the core infrastructure is increasingly getting neglected.

Finally, when developers ship containers, it makes everything really brittle. Are you shipping some-lib-2.7.42.35 because you specifically need exactly that one, or are you shipping it because that's what you had at the time? 3 years later, how is anyone meant to apply security fixes?

Role of the distro

Posted Jan 8, 2026 8:17 UTC (Thu) by Wol (subscriber, #4433) [Link]

> I think I should explain better with an example. The problem with packaging (something the Linux world used to do uniquely well compared to the Windows world, and something we are now getting worse at) is that it is very demanding on the maintainers (who do a great job, often without sufficient thanks). But, rather than doing the work once at distro-level, we're doing it 100,000 times at the sysadmin level.

And another major problem we have (thank you systemd for really highlighting this - along with a fix) is that all too often config files are humungous blobs! If that config file gets into the distributed package, that's a right PITA! But it has to be in the distributed package, otherwise nothing works out of the box ... running gentoo, that's rather obvious ...

Compare and contrast two packages, postfix and dovecot.

Dovecot is well-behaved. It has a directive "#include localconfig" or something similar, and everything in the local config file overrides the master file. The master file is part of the package; the local config never is (and is silently ignored if it doesn't exist).

Postfix (afaik) is very ill-behaved. It has one massive config file where the packaged version changes every now and then (often seemingly as a result of being built on a different machine, where it's slightly and meaninglessly different from the previous version). This then triggers a "you need to update your config files" on my machine, throwing me into "diff hell". Diffing isn't that hard, but when you almost never do it ... it's a massive learning experience every time! :-(

That's relying on upstream to be well behaved. It's not something maintainer/packagers can fix ...
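For the record, the dovecot pattern looks roughly like this (assuming the stock dovecot.conf still ends with an "!include_try local.conf" directive, which it did last time I looked; the setting below is just an example from the 2.3-era config):

    # all site-specific settings live in a file the package never touches
    cat >> /etc/dovecot/local.conf <<'EOF'
    mail_location = maildir:~/Maildir
    EOF
    doveadm reload    # pick up the change without touching the packaged files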

Cheers,
Wol

Role of the distro

Posted Jan 8, 2026 8:51 UTC (Thu) by taladar (subscriber, #68407) [Link]

Where your idea of packaging application dependencies completely fails is anywhere you want to run, e.g., a dev, staging and/or production version of the same application, or even multiple tenant versions of the same application, on a single system. You essentially prevent upgrading them separately that way.

Role of the distro

Posted Jan 8, 2026 2:13 UTC (Thu) by mathstuf (subscriber, #69389) [Link] (4 responses)

> Finally, even in the era of fast networks, why are updated packages not shipped as diffs? If I "apt update", then the package manager should be smart enough to figure out that I'm going from 1.7.32 to 1.7.33, and first see if there is a delta-deb that just patches the files.

There are practical issues with this, at least on the RPM side; I imagine .deb has analogous issues.

- Diffs are generally only provided from N to N+1 builds. If you miss an update (say, holiday break?), you need two diff files, not just one.

- CPU time to rebuild the package from the local state to verify hashes ended up being slower than network speed.

- Diff metadata (especially if more than one-at-a-time steps are offered) ends up being larger than the full packages in the first place (deltarpm now does a HEAD query to see if the metadata is larger than the raw package payload and skips it if so).

- Repository generation is greatly slowed down by the creation of the diff packages.

I used deltarpm for a long time, but eventually network speeds beat the local rebuild time and it wasn't useful anymore.
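For anyone who never used it, the deltarpm workflow looked roughly like this (package names are made up, and the exact invocations may have drifted since I last touched it):

    # on the repository side: build a delta between two releases of the same package
    makedeltarpm foo-1.7.32-1.x86_64.rpm foo-1.7.33-1.x86_64.rpm foo-1.7.32_1.7.33.drpm

    # on the client side: reconstruct the full new rpm from the delta plus the installed files,
    # then verify it before installing, because the signature covers the whole package
    applydeltarpm foo-1.7.32_1.7.33.drpm foo-1.7.33-1.x86_64.rpm
    rpm -K foo-1.7.33-1.x86_64.rpm

All of that reconstruction and verification is where the CPU time went.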

Role of the distro

Posted Jan 8, 2026 3:00 UTC (Thu) by Richard_J_Neill (subscriber, #23093) [Link] (3 responses)

That's very helpful, and I can certainly see why we'd only want to handle the common case of the direct N to N+1 update.

But rsync is really quick. Imagine that a security vuln is patched inside a big package (e.g. firefox, kernel, libreoffice, ...). It might be a 100-byte change in source code which, when recompiled, results in perhaps a 10-1000 byte change in the binary, vs a package size of 100MB. Why copy the entire package, rather than rsync the binary diff, and then verify the hash of the files at the end just to check?

Network speed does vary, but even those of us blessed with gigabit fibre and 5G mobile don't have it all the time... and there are plenty of people still stuck on 8Mbit ADSL.
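To be concrete about what I mean (the mirror name and filenames are made up, and this only helps if the compressed payload doesn't scramble completely between versions, e.g. with gzip --rsyncable):

    cp /var/cache/apt/archives/foo_1.7.32_amd64.deb foo.deb     # seed with the version I already have
    rsync --inplace -t \
        rsync://mirror.example.org/pool/main/f/foo/foo_1.7.33_amd64.deb foo.deb
    sha256sum foo.deb       # then check the result against the signed repository metadata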

Role of the distro

Posted Jan 8, 2026 4:02 UTC (Thu) by mathstuf (subscriber, #69389) [Link]

RPM signs the package itself, so you need to take the local state and the diff, rebuild the package you would have downloaded, and then verify it. Then you can install the rebuilt and verified package.

I know .deb files are signed at the manifest level, so you know you got the diff correctly, but you still need to verify that the local state is what is expected. I don't think you can just trust whatever hash database is used because that can be tampered with as well. You need to reconstitute something that you have a trusted signature for which means rebuilding a `.deb` locally (in a reproducible way) and verifying that it is OK then upgrading from that.

Also, packages can have installation scripts and hooks that need to be triggered if things change (e.g., say the patch affects a systemd unit file: applying the patch isn't enough, because you really want to `systemctl daemon-reload` to make systemd aware of the change).

So yes, the common case is "easy, just rsync and apply a binary diff", the corner cases are everywhere.
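As a tiny illustration of that last point (the service name is made up): even if the binary patch lands perfectly, something still has to run the follow-up steps the maintainer scripts would normally run:

    systemctl daemon-reload              # make systemd re-read the changed unit file
    systemctl try-restart foo.service    # and restart the service only if it was already running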

Role of the distro

Posted Jan 8, 2026 8:57 UTC (Thu) by taladar (subscriber, #68407) [Link] (1 responses)

But that isn't how it works: you don't get a binary that is identical everywhere except for a small section just because you only changed a small part of the source code. At the very least you will likely have changes to offsets in all kinds of sections of the binary, leading to changes everywhere else that section is referenced, ...

Role of the distro

Posted Jan 9, 2026 0:59 UTC (Fri) by Cyberax (✭ supporter ✭, #52523) [Link]

There are format-aware diffs for that. Google used them early for Android updates, as far as I remember. And even got sued for violating the patent on it.

Role of the distro

Posted Jan 8, 2026 2:28 UTC (Thu) by pabs (subscriber, #43278) [Link]

Debian has pdiffs for metadata diffs and debdelta for package diffs and both work quite well in recent years.

Of course any compression mechanism (like delta diffs) is a tradeoff between network and CPU resources. These days, with fast network speeds, delta diffs might not speed updates up very much (and may slow things down in some cases), but they will result in more CPU/energy use.

OTOH it does reduce per-update costs for distributors, which is why software with a large, constantly-updating userbase usually uses delta updates; Chrome and Firefox, for example.
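A rough sketch of trying both on a Debian system, in case anyone is curious (defaults and paths may have shifted since I last looked):

    # metadata diffs: apt fetches pdiffs for the Packages files when this is enabled (usually the default)
    echo 'Acquire::PDiffs "true";' > /etc/apt/apt.conf.d/90pdiffs

    # package diffs: debdelta downloads deltas and rebuilds full .debs locally for apt to install
    apt install debdelta
    debdelta-upgrade
    apt upgrade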

Role of the distro

Posted Jan 8, 2026 21:14 UTC (Thu) by jmalcolm (subscriber, #8876) [Link] (3 responses)

I agree that it is best if everything is available in the repository. It drives my choice of distro.

However, I also sympathize with the challenge of every distro doing this. And I totally get that application providers do not want to have to package their software for 1000 different operating systems just to ship on Linux. While I use almost exclusively Open Source software, I think our inability to provide a decent target for commercial software is one of the biggest problems with Linux.

The one place we are being successful with commercial software on Linux is gaming and that is because Valve treats the Win32 API as the target for Linux.

For that reason, I see Flatpak as massively important. Software makers should be able to see Flatpak as "Linux". If I can ship as a Flatpak, "Linux" users can use my software. Honestly, if we could add a Flatpak store that could take payment, Linux would be off to the races.

Of course, the value of Flatpak is that it is the "one" place to distribute apps without having to target a specific distro. So, I hesitate to criticize it. However....

In my view, we should think of Flatpak as a Linux distribution. It is a Linux distribution whose job it is to be the universal platform for Linux applications. It is really just a distribution. What does it mean to build an application to ship on Flatpak? It means building against a target platform that includes certain versions of a whole bunch of support libraries, including a C library version. It is exactly like building for a specific distribution. Flatpak is like a Linux distro without a kernel.

My biggest problem with Flatpak as a distro is that the package manager sucks. If I had my way, I would start from the ground up with a great package manager pretty much exactly like real Linux distros do.

But Flatpak is also like multiple Linux releases at once. I can have one application that uses newer libraries and another that uses older ones. I can have multiple freedesktop.org platforms installed.
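That side-by-side property is easy to see from the command line (assuming a flathub remote is configured; the branch numbers below are just whatever was current when I looked, so substitute as needed):

    flatpak install flathub org.freedesktop.Platform//23.08
    flatpak install flathub org.freedesktop.Platform//24.08
    flatpak list --runtime    # both platform versions coexist, each used by whichever apps want it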

To me, Flatpak is exactly the same thing as Distrobox. Except where Distrobox allows you to install any Linux distro in a container, Flatpak only lets you select the distros it provides. And Flatpak platforms can only be installed in Flatpak.

I wish I could install Flatpak platforms as a Distrobox, frankly. Just like I can have a Distrobox of Ubuntu 22.04 or 24.04, I could install Flatpak 24.08 or 25.08. And that would just be a bunch of packages. And I could add more or remove some.

But if you think of it that way, why did we even need Flatpak? If we could have just agreed on what distro to tell third-parties to target to provide universal Linux support, we would not need Flatpak at all. But we will never agree on that. I do not imagine we can even agree on what package manager to use in Flatpak.

Flatpak has some features above and beyond Distrobox of course. But Distrobox could use those. In a perfect world, they would combine efforts.

The best we seem to be able to hope for is to have two sets of repos. The first is native to our distro of choice. The second is the "universal" one where we find packages our distro lacks. This second one is Flatpak.

Well, except I do usually just install an Arch Linux distrobox and that gives me full access to that massive repo and the AUR. No Flatpak required. But that cannot be the answer for third-party app makers. Arch is not going to be the "universal" platform we all agree on. So, I have to say that the "second repo" needs to be Flatpak even if there is lots I do not like about it.

If we are going to use Flatpak, I would love to see it get ported to Windows as a graphical app to offer a kind of "store" for Windows users. There is no reason that Flatpak would not work on top of WSL. So you could install the exact same Flatpaks that get installed in our Linux distros. This would massively increase the reach of a Flatpak app and attract far more people to publish their applications as Flatpaks. Imagine a kind of "reverse Steam" where a popular way to get apps on your Windows boxes was to install the Linux versions via Flatpak.

Role of the distro

Posted Jan 8, 2026 22:47 UTC (Thu) by Richard_J_Neill (subscriber, #23093) [Link] (2 responses)

There's certainly value in Flatpak (especially compared to snap). It would be nice if it didn't bypass the package manager and maintainer though - I'd still like to be able to do e.g. "apt install flatpak-firefox", and to know that everything in the flatpak repo has a maintainer with some degree of trust and auditing, and that security updates and dist-upgrade would also work.

The other annoyance is the isolation. Snap is particularly frustrating here because it's un-overridable. I wasted a day on each of the below, and would gladly pay someone to maintain distro packages: [this is another problem: there is no central bug-bounty system]

For example, I cannot write a script where apache triggers firefox (use case: convert URLs to PDF, where we need to run a modern browser with modern JS). [wkhtmltopdf now chokes on ECMA6, and slimerjs, casperjs, phantomjs are all defunct. Solution: /usr/local/bin/palemoon.]

Even more frustratingly, when webmail includes .ics attachments [Aside: RoundCube, please display these things inline], I can tell Firefox that the file-handler should be $HOME/bin/vcal.sh which is allowed to run perl (to operate https://waynemorrison.com/software/vcal ), because perl IS within the snap root. But it isn't allowed to then run zenity, because zenity is not within the snap root.
So, every time I get a calendar attachment, I have to click it in Firefox, get it decoded, and then manually "cat ~/vcal.out", because Firefox will only launch app handlers if they are either in the relevant /var/snap or part of the Menu system.

Also, the waste. I have tried hard to minimise the number of snap packages on my system, and I still have, for instance, 15 copies of python! I've got /bin/mv installed 11 times! This is not free: the waste consumes disk (which is moderately cheap, but older systems still might only have 128GB of SSD) and RAM (I'm using 64GB on my main desktop and lucky to have it, but I really feel it when I have to use an 8GB laptop, and my 2nd desktop, with "only 16GB, not upgradeable", which I use for web development, routinely gets crashed by the oomkiller). To see this for yourself: updatedb; locate bin/mv | grep 'mv$'
If snap starts getting used for most applications, then we'll start needing 500GB for /, when it used to fit in 10GB.
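If you want to put your own numbers on this, a couple of quick checks (paths assume a stock snapd install):

    du -sh /var/lib/snapd/snaps     # every retained .snap image on disk
    snap list --all | wc -l         # installed snap revisions, including the old ones snapd keeps around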

We used to be able to tell people that "it's OK to reuse your old Win10 laptop on Linux - it will be brilliant" - but this is no longer true: modern Linux desktops + apps are resource hogs. Just as the Pi5 brings good performance to the low-cost low-end, inefficient packaging takes it away again.

Role of the distro

Posted Jan 9, 2026 20:13 UTC (Fri) by jmalcolm (subscriber, #8876) [Link] (1 responses)

> It would be nice if it didn't bypass the package manager

It sure would. But again, Flatpak is just a reflection on the fact that we, "as a community", cannot agree on what package manager that would be. I would choose APK version 3. It is statistically unlikely that this is your pick (though I would gladly use any of the mainstream ones in Flatpak if people could agree on one).

> and maintainer

Again, the whole philosophical point of Flatpak is that there is only one "maintainer" for that app for all of Linux. So, the Flatpak maintainer is the maintainer.

> I cannot write a script where apache triggers firefox

Perhaps I am misunderstanding, but you can `flatpak run` from a script.
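A minimal sketch of what I mean, assuming the Flathub Firefox is what's installed (the app id is the only thing the script needs to know):

    #!/bin/sh
    # hand a URL to the sandboxed Firefox from any script
    flatpak run org.mozilla.firefox "$1"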

> Even more frustratingly

It sounds like you are talking about snap, which I hate with a passion (though at least snaps are not just for GUI apps, as Flatpaks are). But I do not like Flatpak much more, really. What I like about Flatpak is the idea of devs having a single target to make their apps work on Linux. As far as adding apps to my distro that may be missing, I prefer Distrobox. As Distrobox sees your normal /home, I think some of the pain you are describing goes away. Give it a try.

> Also, the waste

This bothers me too. It is one of the reasons I wish Flatpak used packages instead of "platforms". You have to install an entire freedesktop.sdk to run even a single Flatpak app, even if it does not need half of it. And you may need more than one sdk version. Again, this is a reason I prefer Distrobox. I only have to install the packages I need. And it is relatively easy to use tooling in and out of the Distrobox at the same time. If I am doing dev, I may have a Distrobox to get access to a certain version of glibc and gcc, for example, but I could call python on the same source tree directly from outside of the Distrobox at the same time. I may be using a text editor, with all its plugins, from outside the Distrobox while a Distrobox is open to perform the build. And a Distrobox is just an OCI container (Docker, though I use Podman). So you can use "exec" to script stuff inside a running Distrobox from outside.
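For example, something like this works from the host side (the container name "devbox" is made up):

    distrobox enter devbox -- make -j"$(nproc)"    # run the build inside the box from a host shell or script
    podman exec devbox gcc --version               # or poke the underlying OCI container directly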

> We used to be able to tell people...reuse your old Win10 laptop...but this is no longer true

I feel you.

That said, I am typing this now on a 2012 Macbook that I am using totally unironically as I read your sentence. Kernel 6.18.2 with Clang 21.1.4, KDE Plasma 6.5.3, and Firefox 145.0.1 (so pretty up-to-date). Running well enough that I did not even think about it until you mentioned old hardware. And I am doing a build of the Ladybird browser in the background (in Distrobox) so all my CPUs are busy but everything is smooth and responsive. Close to 8 GB of RAM in use (out of 16 GB) with a lot of that being Firefox. I am running two different Distroboxes right now (Arch Distrobox on Chimera Linux) but no snaps. The only Flatpak I run on this machine is pgadmin4 for PostgreSQL.

> gladly pay someone to maintain distro packages

I am not sure what distro you are using. Ubuntu?

One of the things that attracted me to Chimera Linux is the cports system and how easy it is to write and maintain packages. If I have to build something from source, I make a package for it, because I prefer the package manager to maintain everything. I have about a dozen packages that I maintain for my distro.

> Just as the Pi5 brings good performance

I am a RISC-V guy. So I need all the performance I can get. Thankfully, I think this is my year (RISC-V performance wise).

Role of the distro

Posted Jan 9, 2026 21:47 UTC (Fri) by mathstuf (subscriber, #69389) [Link]

> We used to be able to tell people that "it's OK to reuse your old Win10 laptop on Linux - it will be brilliant" - but this is no longer true: modern Linux desktops + apps are resource hogs. Just as the Pi5 brings good performance to the low-cost low-end, inefficient packaging takes it away again.

I believe GNOME is, at least. Not sure about KDE. I recently moved from GNOME to River and, woo, does it feel better. Not just the tiling (it is very i3-like, whereas I prefer XMonad-like), but my battery performs about 50% better. I'm currently sitting here with 45% left and a 7-hour estimate.

Firefox and controversial advertising features

Posted Jan 7, 2026 17:59 UTC (Wed) by dmarti (subscriber, #11625) [Link]

Right now there's a lot of attention on the "AI" slop features in Firefox. As Asif Youssuff points out, the AI "Link Previews" are an obstruction for users who want to do something else with a link, like drag it to another application.

But there's going to be another set of decisions to be made by Linux distributions. Firefox is putting a lot of effort into deploying and promoting in-browser ad features (just doing Advertising Week in New York City isn't cheap) and distributions are going to have to figure out which, if any, of the ad features they want to pass along to users of their Firefox packages.

It doesn't look like Firefox is approaching the ad stuff in a safe or well thought-out way, so I'm using enterprise management features to turn them off. But distributions are going to have to decide on sensible default settings, firewall rules, and whatever else is needed to keep the Firefox package compliant with their policies and still usable.

Alan Cox filed a Fedora bug over one of the Firefox ad features, but I predict that in 2026 we're going to see more like it.

Digital sovereignty?

Posted Jan 8, 2026 2:14 UTC (Thu) by pabs (subscriber, #43278) [Link] (4 responses)

I'm not convinced Europe understands digital sovereignty. It doesn't sound like they are looking at (EU-manufactured) purchased hardware, located on premises, running community-based FOSS projects, set up by in-house sysadmins, contributed to by in-house developers, and funded from org/govt budgets.

Digital sovereignty?

Posted Jan 8, 2026 8:06 UTC (Thu) by Wol (subscriber, #4433) [Link] (3 responses)

If you don't understand, isn't that why you ask for comments/input? One of my skills (which seems to be very rare) is that I come into a situation and start saying "this is stupid, why are you doing that, what's wrong with the other? If you did this, wouldn't it be better?"

At a previous employer, they wanted a new telephone line in the basement of our building. We paid the management agency by cable(s) pulled, so rather than pulling a twin core for one line, I got them to pull a 50-core so BT could put a junction box in the basement. Six months later they wanted four more lines ...

Or they wanted a network connection down there. Rather than pull a single cat-5, I had them pull two 8-cable looms, with a 16-point patch panel in our comms room and the basement ceiling void. A couple of months later they wanted an RS-232 link ... two £5 adaptors later and there it was ...

"Future Proofing" just doesn't cross most peoples' minds. I'm currently dealing with the aftermath of an upgrade project that failed at my current employer, and I'm involved in two new projects. The *first* thing I think of is "what went wrong elsewhere, what can go wrong here, how can we get a better deal at minimal cost?" I'm making waves in the latest project, trying to double our capacity for maybe a 20% cost increase. Because I can see it happening - we'll rapidly fill the new capacity and want more, and if we don't plan it *now*, retrofitting it won't be an option!

Cheers,
Wol

Digital sovereignty?

Posted Jan 8, 2026 9:08 UTC (Thu) by taladar (subscriber, #68407) [Link] (1 responses)

Looking at some of the current ongoing political disasters, contingency planning and adversarial mindsets seem to be very far from the way politicians think in general. Maybe part of that is that they tend to be, almost by definition, social people, so very likely they think in the softer rules of human interaction and consensus building rather than the harder rules imposed by physics, technology, security, and similar factors that don't adhere to the social rules of at least pretending to be friends/allies/business partners/...

Digital sovereignty?

Posted Jan 8, 2026 10:56 UTC (Thu) by Wol (subscriber, #4433) [Link]

It's not even an adversarial mindset. As a borderline autistic I value comfort and win/win very highly. I'm just very exposed (comp.risks et al) to all these stories of things going wrong because people didn't ask "what can go wrong", or even "what's likely to happen"! All these wonderful plans for our workforce, with scarcely a thought given to getting their buy-in. It doesn't matter WHAT you plan, if you don't aim for workforce buy-in, you're knee-capping any proposed benefits. You just need to answer the question "What is the likely consequence if ...".

The latest project I referred to - 100% extra capacity for 20% up front cost? (Yes, we're going to be paying that next 30% cost when we want that capacity, but paying it won't even be an option if we don't address it now.) WE HAVE CUSTOMERS. CUSTOMERS WANT CAPACITY. And they are very poor at listening to reason. If we can add the option - for peanuts now - it's there when our customers inevitably change their minds and increase their demands.

Cheers,
Wol

Digital sovereignty?

Posted Jan 8, 2026 21:30 UTC (Thu) by jmalcolm (subscriber, #8876) [Link]

In many of the other projects we manage in our lives, scale comes from adding people. If that is the case, risk is minimized by not getting ahead of yourself. And you can pivot quickly by adding people (and their skills) when needed.

The problem is that software is infrastructure. Politicians may understand the need to build roads before houses and even to predict population growth when deciding how wide those roads should be. But people do not understand software in the same way.

And people do not understand architecture generally or its implications.

The problem is we also train managers to distrust us by trying to architect Google-scale projects for our 10 customers, or for a new product launch where we do not even know yet if we have product-market fit (i.e., whether we will attract more than 10 customers). Most people approving technology have been burned green-lighting expensive projects that turned into white elephants. They do not want to be fooled again. It is easier to ask for money when the need is present than before.

If we want people to approve our 100% foreseeable capacity for just 20% more, we need to stop saying we need Kubernetes in the cloud to support our simple ecommerce app.

Git using SHA-256 and 'enterprise Linux'

Posted Jan 8, 2026 18:13 UTC (Thu) by kpfleming (subscriber, #23250) [Link]

> The Git project's transition away from SHA-1 will make big steps in 2026 as the efforts to switch to SHA-256 by default reach fruition. As the final interoperability pieces fall into place, there will be little reason to use the older hash algorithm for new repositories, and some established ones may make the switch as well.

And this will trigger a flurry of flames from all the people using 'Enterprise Linux' versions which don't have SHA-256 support in their Git packages (which is likely to be all the ones still sticking to EL7, plus everyone on EL8 and possibly even EL9).
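A quick way to see which side of that line a given machine is on (this just probes for the feature; on a git without SHA-256 support the first command fails outright):

    git init --object-format=sha256 /tmp/sha256-test
    git -C /tmp/sha256-test rev-parse --show-object-format    # prints "sha256" when supported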

What about Ladybird?

Posted Jan 8, 2026 18:22 UTC (Thu) by jmalcolm (subscriber, #8876) [Link] (13 responses)

> The world desperately needs an independent browser project that serves its users

I was surprised to read this comment without a mention of Ladybird: https://ladybird.org/

Not only does Ladybird exactly match this description, but 2026 is the year it is supposed to arrive. Perhaps it is not much of a "prediction" to say that a project will release when it says it will, but I would still have expected to see it mentioned while talking about Firefox.

From the Ladybird FAQ:

When is it coming?

We are targeting Summer 2026 for a first Alpha version on Linux and macOS. This will be aimed at developers and early adopters.

What about Ladybird?

Posted Jan 8, 2026 18:26 UTC (Thu) by corbet (editor, #1) [Link] (1 responses)

I'll admit that Ladybird simply slipped my mind. That said, the odds against a from-scratch browser project seem somewhat steep, but one can hope. I will make a point of keeping an eye on it, thanks.

What about Ladybird?

Posted Jan 10, 2026 18:57 UTC (Sat) by jmalcolm (subscriber, #8876) [Link]

> the odds against a from-scratch browser project seem somewhat steep

That said, I am leaving this comment from Ladybird. It is not only possible but pleasant to read LWN on Ladybird, and also to surf to most of the sites you link to. I am just returning from a link you made to the Fedora website.

I can also read my gmail. Or use Github. Or buy something in Shopify. And everything looks good too.

Don't get me wrong. It is not there yet. In particular, it is VERY slow. But it may be much further along than you think, which means it may have better odds of becoming a real alternative as well.

What about Ladybird?

Posted Jan 9, 2026 0:54 UTC (Fri) by Cyberax (✭ supporter ✭, #52523) [Link] (10 responses)

Yeah, no. With code like this: https://github.com/LadybirdBrowser/ladybird/blob/a203ed88... it's stillborn.

Servo ( https://servo.org/ ) is a more realistic contender.

What about Ladybird?

Posted Jan 9, 2026 1:49 UTC (Fri) by dskoll (subscriber, #1630) [Link] (1 responses)

Who calls resource_type_from_string? If it's not on the fast path, then I don't particularly see what's wrong with the code. About the only other way is to pre-populate an array mapping the strings to the types or a hash that does the mapping... but if it's not called in performance-critical code, then who cares?

What about Ladybird?

Posted Jan 12, 2026 3:08 UTC (Mon) by Cyberax (✭ supporter ✭, #52523) [Link]

They are using raw C++ memory access in security-critical code with YOLO pointer math. It does check for out-of-bounds access, but will crash if the bound is violated.

What about Ladybird?

Posted Jan 9, 2026 16:04 UTC (Fri) by raven667 (subscriber, #5198) [Link] (5 responses)

I'm not going to comment on the code quality because I don't know C++. However, I had forgotten about Servo, which is further along than LadyBird and already has a governance structure through Linux Foundation Europe and funding from NLnet and the Sovereign Tech Fund, so someone is already on top of making an independent browser engine outside of California. Looking at the GitHub commits for the last year, they both have about the same number of people working on them, but Servo has been around longer and seems further along, although it's nowhere close to being complete and would probably take a long, long time with the 10-20 contributors it seems to have. If we needed it to be competitive with Chromium/Firefox within a year, it would need a lot more people, and it's basically impossible to spin up that many people quickly; it would be a multi-year project to hire, train, and divide the work among ~100 developers to make the software competitive. The framework is there, though.

What about Ladybird?

Posted Jan 10, 2026 18:42 UTC (Sat) by jmalcolm (subscriber, #8876) [Link] (3 responses)

In what ways do you see Servo as further along than Ladybird?

What about Ladybird?

Posted Jan 10, 2026 21:48 UTC (Sat) by Cyberax (✭ supporter ✭, #52523) [Link] (2 responses)

Ladybird is trying to re-implement _everything_. They had a policy of not accepting any existing code (a bit relaxed now), but they still have their own implementations of WASM, JavaScript, CSS, etc.

In fairly low-level C++ with plenty of unchecked access and questionable code style (mutating arguments that are passed by reference). They even have their own standard library, with all the usual problems. E.g. the hash map is susceptible to hash collision DDOS attacks: https://github.com/LadybirdBrowser/ladybird/blob/5652975f...

What about Ladybird?

Posted Jan 12, 2026 1:11 UTC (Mon) by jmalcolm (subscriber, #8876) [Link] (1 responses)

The Ladybird policy on using third-party libraries has been more than just a little relaxed. In fact, I am not sure they can say they are totally independent of Google, or that they do not use "any" existing browser code, as they now use the Google Skia graphics library.

But I am not sure how explaining their policies answers my question. What if I told you that Servo has to be further behind because they had to invent their own programming language to write the browser in? (Rust)

I gave some numbers and experiences above. By any objective metric other than speed, Ladybird is the more capable and standards compliant browser today.

Combine this with the fact that Ladybird is writing more stuff themselves and started much later than Servo, and it is clear that Ladybird is moving much faster. It seems extremely likely that Ladybird will be the first to correctly render a large enough percentage of the real web to be a daily-use browser.

That said, one of the components that Ladybird is writing themselves is the JavaScript engine. It is a marvel of compliance, but it is not fast. Servo is using SpiderMonkey from Mozilla, which is of course a completely amazing and totally production-ready JavaScript engine. This is a massive advantage for Servo.

I guess it remains to be seen if Servo can add features faster than Ladybird can get fast or vice versa.

What about Ladybird?

Posted Jan 12, 2026 3:04 UTC (Mon) by Cyberax (✭ supporter ✭, #52523) [Link]

> What if I told you that Servo has to be further behind because they had to invent their own programming language to write the browser in? (Rust)

This is absolutely true. It took more than 10 years for that development to happen, and it required a rather unique confluence of events: an existing browser vendor, awash with cash during the zero-interest-rate period.

If their goal was to implement a browser, then launching a new language for it would have been a dead end.

> Combine this with the fact that Ladybird is writing more stuff themselves and started much later than Servo and it is clear that Ladybird is moving much faster.

No, it's really not.

What about Ladybird?

Posted Jan 12, 2026 0:52 UTC (Mon) by jmalcolm (subscriber, #8876) [Link]

> Servo which is further along than LadyBird

Not to my eye it is not.

- Using both, Ladybird can render many more websites properly--at least the ones I visit

- Ladybird passes quite a lot more of the Web Platform Tests than Servo does. Ladybird is not that far behind Safari.

- Less reliable but interesting: Ladybird scores 487 at https://html5test.co, which is about the same as Firefox 58 or Chrome 50

Servo scores 410 at https://html5test.co

The area where Servo certainly leads is performance. On Ladybird, I get a frame rate of 20 at the WebGL Aquarium for 500 fish. This drops to 15 at 1000 fish and all the way to 2 fps by 10000 fish. On Servo, I get 50 fps at 500 fish and it is still 8 fps at 10000.

https://webglsamples.org/aquarium/aquarium.html

If I run the Dogemania demo from the Servo website in Ladybird, the frame rate drops to 20 by 300 Doge. On Servo, it drops to 27. Who knows what that means.

But in my view, the better metric is not speed but correctness at this stage. And by that metric, Ladybird is far ahead. As a simple example, I can use github.com in Ladybird. It is slow but I can fully use the site. Servo will not even render github.com at all as far as I can tell.

What about Ladybird?

Posted Jan 12, 2026 21:06 UTC (Mon) by jmalcolm (subscriber, #8876) [Link] (1 responses)

I do not understand the objection.

They are parsing with pointers. That said:

- these are not user-populated
- they guard it with both a boundary condition and a TRY() macro

TRY() is a SerenityOS/Ladybird browser feature that allows calling into a fallible function. It will either succeed with return values or fail with a propagated error. It will not crash. It is an alternative to C++ exceptions.

I am not really a C++ dev but this code looks fine to me. What do you not like about it?

What about Ladybird?

Posted Jan 12, 2026 22:04 UTC (Mon) by Cyberax (✭ supporter ✭, #52523) [Link]

TRY is fine. The coding convention with mutable references given as arguments is questionable, but OK.

The problem is that it's doing dangerous pointer arithmetic, with all the exciting integer expansions. So it _will_ contain security issues.

Linux and Resource Efficiency

Posted Jan 12, 2026 10:37 UTC (Mon) by fratti-co (subscriber, #175548) [Link] (21 responses)

> A lot of forces are coming together to show the virtues of a system that is free, user-centered, and resource-efficient.

I feel like we need to stop ourselves from assuming the resource-efficiency fairy-tale by default. Sure, Microsoft has done its best to make it truer than it ever was by fully embracing the abuse of Web rendering engines as toolkits. But unfortunately, many Linux GUI apps are starting to do the same.

I recently wondered why the Pomodoro timer I was using, Pomodorolm, ate up a whopping 337MiB of RAM. For a stopwatch. My question was quickly answered when right-clicking its window brought up a poorly-integrated context menu with the option "Element inspector". I then went on a hunt through flathub for a pomodoro timer that uses less than 100MiB of RAM. Most of them didn't even manage to get below 200MiB. The only one that slipped under the 100MiB mark was KDE's "Francis", but sadly, it's woefully unfinished.
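(For the numbers above I was just eyeballing resident set sizes, roughly like the line below; the process name is a guess, and shared pages mean any single figure is fuzzy anyway.)

    ps -o rss=,comm= -C pomodorolm    # RSS in kB for anything with that command name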

Let's be frank here: nobody who works on software will step through the pearly gates of heaven. Any differences in resource usage in a desktop system are purely coincidence. Nobody profiles their stuff, because we've been telling people for years that optimizing software is evil, actually. Everything is just a bundled browser showing a built-in webpage; it doesn't matter what kernel or basic userspace runs below it.

Linux and Resource Efficiency

Posted Jan 12, 2026 15:54 UTC (Mon) by dskoll (subscriber, #1630) [Link] (19 responses)

Yes, I agree that a lot of modern Linux GUI apps tend to be resource hogs. I wonder if the extreme cost of RAM will finally make software developers reconsider the tradeoffs? When laptops start coming with 8GB standard instead of 16GB because of cost, the old way of doing things just won't work.

Linux and Resource Efficiency

Posted Jan 12, 2026 17:02 UTC (Mon) by Wol (subscriber, #4433) [Link] (17 responses)

Whoa! I just looked and yes it has rocketed!

But the cheapest 32GB I saw today is still only twice what I paid for 32MB when I built my first computer (£50 then, £100 now)

Cheers,
Wol

Linux and Resource Efficiency

Posted Jan 12, 2026 17:26 UTC (Mon) by fratti-co (subscriber, #175548) [Link]

GBP 100 is a really good price for 32GB right now. As an example: on 12th of October, a 2x16GB kit of Corsair Vengeance 6000MT/s DDR5 RAM was CHF 103 here for me. It is now CHF 503, or GBP 468.60 at current ECB exchange rates. That's almost a 5x increase in the span of 3 months.

Linux and Resource Efficiency

Posted Jan 12, 2026 17:29 UTC (Mon) by zdzichu (subscriber, #17118) [Link] (3 responses)

Not sure where you checked, but I see DDR5 32GB kits start at ~350 GBP. Given that all production planned for 2026 was already purchased, the prices will only go up.
We can thank AI bros.

Linux and Resource Efficiency

Posted Jan 12, 2026 21:49 UTC (Mon) by Wol (subscriber, #4433) [Link] (2 responses)

> Not sure where you checked,

Amazon? Most of the RAM was about £200, but there was at least one sub-£100 DDR4 DIMM (a single DIMM, not a kit). I curse Amazon when I ask for 32 and it swamps the results with 2x16 :-( although that's probably how most memory is sold.

Cheers,
Wol

Linux and Resource Efficiency

Posted Jan 13, 2026 10:36 UTC (Tue) by zdzichu (subscriber, #17118) [Link] (1 responses)

Ah, DDR4? So many years with DDR5 that I forgot older modules are still being sold.

Linux and Resource Efficiency

Posted Jan 13, 2026 11:01 UTC (Tue) by Wol (subscriber, #4433) [Link]

I think my current desktop has 2 x 32GB (with space for two more). But that £100 for one is probably about the same as what I paid for both my current ones together!

Cheers,
Wol

RAM prices (was Linux and Resource Efficiency)

Posted Jan 12, 2026 18:05 UTC (Mon) by dskoll (subscriber, #1630) [Link] (11 responses)

Hah! I'm dating myself, but I paid around CAD $80 for 64kB of RAM back around 1983 or so, when I upgraded my very first computer (a Radio Shack Color Computer 1) from 16kB to 64kB. That would be CAD $42 million or so for 32GB.

RAM prices (was Linux and Resource Efficiency)

Posted Jan 13, 2026 10:29 UTC (Tue) by paulj (subscriber, #341) [Link] (6 responses)

64 kiB of RAM? Luxury! When ah wur a lad, we 'ad 16 kiB RAM packs to stick on 't back of our 1 kiB ZX81s and we wur grateful for it! Both ways!

RAM prices (was Linux and Resource Efficiency)

Posted Jan 13, 2026 14:12 UTC (Tue) by Klaasjan (subscriber, #4951) [Link]

+1 same here. A belated thank you to mum and dad for the financial support in acquiring this.

RAM prices (was Linux and Resource Efficiency)

Posted Jan 13, 2026 16:23 UTC (Tue) by dskoll (subscriber, #1630) [Link] (4 responses)

We used ta have ta write ones and zeros on scraps o' paper and type 'em in when the cpu wanted to fetch the data...

But more seriously, in 1988 for a university project, a lab partner and I designed and built a single-board 8088 computer. We wire-wrapped the whole thing. It had 8K EEPROM and 2K static RAM. Later on, the university designed a PCB for it and built 12 of them to use in the lab for a microprocessor course.

We wrote a command-processor for it that ran over the serial port. Even have the source code still. 🙂

RAM prices (was Linux and Resource Efficiency)

Posted Jan 13, 2026 20:41 UTC (Tue) by jmalcolm (subscriber, #8876) [Link]

I use my kids for binary digits. Standing up is one. Sitting down is zero. We can emulate an Intel 4004.

RAM prices (was Linux and Resource Efficiency)

Posted Jan 13, 2026 21:10 UTC (Tue) by jmalcolm (subscriber, #8876) [Link] (1 responses)

In all seriousness though, I am still staggered by the capabilities at our fingertips these days and not just in terms of fast chips.

My son and I designed a simple mechanical keyboard last year. He designed the PCB layout in KiCad on my 2012 Macbook running Linux. $20 and a couple weeks later we had five custom PCBs. Mind blowing.

We popped on some diodes and a $10 microcontroller (much more capable than your 8088), loaded up some open-source firmware, and it worked! Combined with Internet ordered switches, keycaps and a 3D printed shell, we had a fully working mechanical keyboard totally customized for my son's finger length and preferred typing stance.

A week after we got it all up-and-running, he wrote a service so that he can program what the keys do from a webpage. He is still in high-school.

And the point of this comment is not that what we did is at all impressive. My point is almost that this is not even remotely impressive anymore. A more capable person could be writing that they created their own custom 64-bit RISC-V CPU that they are running on an FPGA with a custom PCB to drop into a Framework laptop to run Linux. They could have still done it all on a Linux computer that they got for less than $100 on eBay.

It absolutely blows my mind what is possible today with very limited resources.

RAM prices (was Linux and Resource Efficiency)

Posted Jan 13, 2026 21:33 UTC (Tue) by dskoll (subscriber, #1630) [Link]

Yes, absolutely. I feel like computing bifurcated around the 2010s. On the one branch was Windows, "the cloud", SaaS, rent-seeking business plans and other advanced enshittification. On the other branch were cheap microcontrollers, Arduinos, Raspberry Pis and similar SBCs that are tons of fun, very capable, and give you lots of control. And Linux is a bridge between the two worlds... runs on the heavy PC hardware but still leaves you in control.

RAM prices (was Linux and Resource Efficiency)

Posted Jan 14, 2026 10:48 UTC (Wed) by paulj (subscriber, #341) [Link]

In lieu of a +1 or somesuch, this reply - a hand-made, wire-wrapped 8088. Very cool. You win today's internet prize. ;)

RAM prices (was Linux and Resource Efficiency)

Posted Jan 13, 2026 14:52 UTC (Tue) by geert (subscriber, #98403) [Link] (3 responses)

Impressive, as that is still cheaper per "kB" (I assume you mean KiB?) than what Sun charged for 256 MiB for the SparcServer 1000 in the nineties ;-)

RAM prices (was Linux and Resource Efficiency)

Posted Jan 13, 2026 15:18 UTC (Tue) by dskoll (subscriber, #1630) [Link] (2 responses)

Really?? So 256 MiB would have been more than CAD $325K?? Wow.

RAM prices (was Linux and Resource Efficiency)

Posted Jan 13, 2026 15:25 UTC (Tue) by geert (subscriber, #98403) [Link] (1 responses)

Hmm, looks like I was a factor of 10 off, either due to miscounting the zeroes, or due to dividing in the wrong direction.

RAM prices (was Linux and Resource Efficiency)

Posted Jan 13, 2026 16:14 UTC (Tue) by Wol (subscriber, #4433) [Link]

Our Pr1me EXL7330s (Mips R3000, 16MB ram, 1GB disk) cost about £50K in the early 90s. That's 16 x 1MB SIMMs (I think it had 32 slots).

One of the SIMMs failed after a few years, and I negotiated that if we bought 8 x 4MB SIMMs to upgrade one of the computers, Pr1me would replace the broken SIMM for free and re-use the SIMMs from the upgraded machine to upgrade the other two machines to 24MB. I don't remember how much that cost, but the price for just replacing the one 1MB SIMM felt eye-wateringly expensive ... buying the 32MB upgrade was pretty cheap in comparison on a £/MB basis.

(I still have that 32MB machine in my garage, to try and get running again if I can remember how to boot it ...)

Cheers,
Wol

Linux and Resource Efficiency

Posted Jan 14, 2026 20:09 UTC (Wed) by halla (subscriber, #14185) [Link]

None of the modern Linux apps I use -- kate, konsole, krita, firefox (even) -- are real resource hogs. Some are buggy -- haruna uses 100% of one cpu even if the video is paused, which is clearly not intentional.

I admit QtCreator has gone a bit critical, but it's still fine on a low-end laptop.

Maybe those apps aren't considered modern? But which apps are modern resource hogs then?

Linux and Resource Efficiency

Posted Jan 12, 2026 20:48 UTC (Mon) by jmalcolm (subscriber, #8876) [Link]

No desktop can defend against devs making Electron apps or equivalent.

That said, Linux remains fairly resource-efficient as an ecosystem. Even most full-featured, batteries-included Linux distributions run just fine on very old hardware. I use x86-64 hardware as old as 2009 daily. Not only is it fast, but memory usage is still realistic. Even very full-featured desktop environments come in at around a gigabyte of RAM usage. On 64-bit systems, it is still feasible to have a desktop that uses less than 500 MB of RAM.

And the system GUI toolkits like Qt and GTK are still reasonably lightweight. Certainly they are not going to lead to apps like you describe. Even the new ones like Iced (which COSMIC is written in) are not too bad. And if you want something lighter, toolkits like FLTK work with the latest technologies, including Wayland.

I use 2013-era laptops every day for totally normal work. Granted, a lot of it is on the web or in the cloud, but I am running a lot of tech locally and it is all fast and responsive (in my view anyway), and the memory use is at least good enough that it is not a problem. Web browsers are the biggest offenders memory-wise. But I am a tab abuser and still mostly get by without having to manage it too closely.

Again, that is not to say that you cannot fire up a Java-based IDE or Electron-based collaboration app that is so bloated that it brings your system to a crawl. That certainly happens to me. But, when compared to Windows certainly, Linux is still doing pretty well on the resource front.


Copyright © 2026, Eklektix, Inc.
This article may be redistributed under the terms of the Creative Commons CC BY-SA 4.0 license
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds