
LWN.net Weekly Edition for March 5, 2026

Welcome to the LWN.net Weekly Edition for March 5, 2026

This edition contains the following feature content:

This week's edition also includes these inner pages:

  • Brief items: Brief news items from throughout the community.
  • Announcements: Newsletters, conferences, security updates, patches, and more.

Please enjoy this week's edition, and, as always, thank you for supporting LWN.net.

Comments (none posted)

The troubles with Boolean inversion in Python

By Jake Edge
February 27, 2026

The Python bitwise-inversion (or complement) operator, "~", behaves pretty much as expected when it is applied to integers—it toggles every bit, from one to zero and vice versa. It might be expected that applying the operator to a non-integer, a bool for example, would raise a TypeError, but, because the bool type is really an int in disguise, the complement operator is allowed, at least for now. For nearly 15 years (and perhaps longer), there have been discussions about the oddity of that behavior and whether it should be changed. Eventually, that resulted in the "feature" being deprecated, producing a warning, with removal slated for Python 3.16 (due October 2027). That has led to some reconsideration and the deprecation may itself be deprecated.

The problem was reported in 2011 by Matt Joiner who was surprised by the outcome of some tests that he ran:

    >>> bool(~True)
    True
    >>> bool(~False)
    True
    >>> bool(~~False)
    False
    >>> ~True, ~~True, ~False, ~~False
    (-2, 1, -1, 0)

That last example demonstrates how those unexpected results come about: True is effectively just an alias for one, and False for zero. When those values are inverted, they do not act in a Boolean kind of way at all. In Python, any non-zero value is treated as true in a Boolean sense, and integers behave as if they were stored in two's complement, so ~1 is -2 and ~0 is -1; both of those values evaluate to true.
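The arithmetic behind those results is easy to verify in any CPython interpreter; a minimal sketch, using nothing beyond the behavior described above:

```python
# For any Python int x, ~x == -x - 1; since bool is a subclass of int,
# True and False participate in that identity as 1 and 0.
for x in (True, False, 0, 1, 42):
    assert ~x == -x - 1

# 'not' is the operator that actually performs Boolean negation:
print(not True)    # False
print(not False)   # True
print(~True)       # -2, which is truthy, so bool(~True) is True
```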

History

The bool type, True, and False were not added to the language until Python 2.3 in 2003, though the feature was infamously backported to the 2.2.1 bug-fix release prior to 2.3. PEP 285 ("Adding a bool type") described the feature in some detail; it is clear that using an integer value was done purposefully, at least in part for backward compatibility. The PEP abstract explains:

The bool type would be a straightforward subtype (in C) of the int type, and the values False and True would behave like 0 and 1 in most respects (for example, False==0 and True==1 would be true) [...]

The author of the PEP, Guido van Rossum, was the Python benevolent dictator for life (BDFL) at the time; the Review section of the PEP kind of foreshadows the problems that led him to step down from that role 16 years later:

I've collected enough feedback to last me a lifetime, so I declare the review period officially OVER. I had Chinese food today; my fortune cookie said "Strong and bitter words indicate a weak cause." It reminded me of some of the posts against this PEP... :-)

The PEP was silent about applying the complement operator to bool values, but the implementation allowed it. Joiner filed the bug in 2011 because he went looking for a C-like unary not operator ("!"), which is not present in the language, and ran into "~" instead. As Amaury Forgeot d'Arc pointed out, the logical not operator is what Joiner was seeking. The bug was closed the day after it was opened, because the behavior was deliberate.

But the problematic behavior popped up again in a 2019 bug report from Tomer Vromen, who noted that the bitwise and ("&") and or ("|") operators act as expected (i.e. like their logical equivalents), while complement does not. The bitwise versions of and/or return a bool result, while "~True" returns the int -2, which is truthy rather than the False a caller might expect. The bug report linked to a fairly lengthy python-ideas thread from 2016 that also discussed the problem. Both the bug and the thread noted that NumPy has a Boolean type that behaves as expected (at least by some) and returns False for "~numpy.bool_(True)".
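The asymmetry Vromen described can be observed directly; this sketch sticks to the standard library (NumPy's numpy.bool_ behavior, mentioned above, is not reproduced here):

```python
# &, |, and ^ are overridden on bool and return bool...
print(type(True & False))   # <class 'bool'>
print(type(True | False))   # <class 'bool'>
print(type(True ^ False))   # <class 'bool'>

# ...but ~ falls through to the int implementation:
print(type(~True))          # <class 'int'>
print(~True)                # -2
```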

In the thread, Van Rossum seemed to lean toward changing the behavior, but wanted to do it with a quick change for Python 3.6, skipping a deprecation cycle, or not at all. Python behavior seems fairly inconsistent, as he described:

To be more precise, there are some "arithmetic" operations (+, -, *, /, **) and they all treat bools as ints and always return ints; there are also some "bitwise" operations (&, |, ^, ~) and they should all treat bools as bools and return a bool. Currently the only exception to this idea is that ~ returns an int, so the proposal is to fix that.

More recently

The idea seems to have just died out in 2016, and again in 2019, but was resurrected by Tim Hoffmann in a 2022 comment on the 2019 bug report. He proposed that ~ be deprecated for the bool type, which Van Rossum endorsed, suggesting that the deprecation be added for the then-upcoming 3.12 release. Earlier, Van Rossum clearly did not want to change the type of the result of ~bool to be a bool:

Because bool is embedded in int, it's okay to return a bool value that compares equal to the int from the corresponding int operation. Code that accepts ints and is passed bools will continue to work. But if we were to make ~b return not b, that makes bool not embedded in int (for the sake of numeric operations).

Take for example

    def f(a: int) -> int:
        return ~a

I don't think it's a good idea to make f(0) != f(False).
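Van Rossum's concern can be made concrete; under the current behavior the paired calls below agree, while a bool-returning ~ would split them:

```python
def f(a: int) -> int:
    return ~a

# Today, bool is fully embedded in int, so these agree:
assert f(0) == f(False) == -1
assert f(1) == f(True) == -2

# If ~b were redefined as `not b`, f(False) would become True (== 1)
# while f(0) stayed -1, and code passing bools where ints are expected
# would silently change behavior.
```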

In 2022, though, he was in favor of deprecating the use of the complement operator on bool values, rather than switching to a bool return type for complement. In the discussions about the behavior over the years, the main downside to it is that it can be confusing to users and that there is seemingly no real use case for it. For those who do end up getting confused, it is clearly not the right tool for the job, but the fact that NumPy and other libraries have normalized using bitwise complement to mean not muddies the waters.

The deprecation warning was duly added to Python 3.12 in 2023 via a pull request from Hoffmann. It gives a lengthy explanation when the warning is issued:

DeprecationWarning: Bitwise inversion '~' on bool is deprecated and will be removed in Python 3.16. This returns the bitwise inversion of the underlying int object and is usually not what you expect from negating a bool. Use the 'not' operator for boolean negation or ~int(x) if you really want the bitwise inversion of the underlying int.
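The warning can be captured with the warnings module; this sketch assumes an interpreter in the 3.12-3.15 range, where ~bool still works but warns (on older versions the list of caught warnings is simply empty):

```python
import warnings

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    result = ~True        # still returns the int inversion

print(result)             # -2 on any version where ~bool is allowed
deprecations = [w for w in caught
                if issubclass(w.category, DeprecationWarning)]
print(len(deprecations))  # 1 on 3.12 through 3.15, 0 earlier
```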

One of the problems with deprecations is the visibility of the warnings; at various points, DeprecationWarning messages were hidden by default because they were too often seen only by end users who were unable to fix the underlying problem. That changed back in 2017 to increase the visibility of the warnings, in part so that users could request fixes from library developers—deprecation in Python pops up fairly frequently in discussions about development of the language.

In August 2024, though, Barry Warsaw saw a GitHub email notification about the deprecation, which surprised him because he could not remember a wider discussion about it. He posted to the core development category to have that discussion, but he also wanted to talk about changes like this that can sometimes fly under the radar, so he started a parallel discussion as well. The question of "change visibility" seemed to reach a consensus that there was a problem in need of addressing, but there was less clarity on what might be done. Too much bureaucracy, in the form of PEPs or a more formalized change-management process, may negatively impact contributions, which largely come from volunteers; too little can lead to surprises like the deprecation of ~bool.

On the question of whether it should be deprecated at all, no real consensus was found, which has been the case throughout its history; some were strongly pro-deprecation because it is confusing and generally a footgun, while others lamented the inconsistency of only disallowing bitwise complement for the bool type and allowing all of the other arithmetic and bitwise operators.

Oscar Benjamin noted that "use of ~ for logical negation is widespread" in NumPy and SymPy. Antoine Pitrou pointed out that this is because ~ can be overridden, unlike the logical not. Benjamin agreed, saying that PEP 335 ("Overloadable Boolean Operators") would have allowed NumPy and SymPy to take a different path, but it was rejected in 2012. Neither Benjamin nor Pitrou thought ~bool was particularly useful, and both were in favor of deprecation.

On the flipside, Bjorn Martinsson provided some examples of how he uses ~ on Boolean values. They are probably kind of obscure, but he has even publicized a use of the technique. A few others popped up in the thread with use cases as well.

Hoffmann summarized the arguments that led him to propose the deprecation and author the code change to effect it. Since he believed it made sense to rid the language of this footgun, only two paths presented themselves: changing the behavior to a logical negation or deprecating and eventually removing ~bool. He saw no good migration path for switching to negation, though, so he opted for deprecation. The discussion continued on for another month or so before winding down without any firm conclusion. There was talk of a PEP, but that did not come about either.

The thread sparked up again in October 2025 and Hoffmann responded to a query about the PEP, pointing to the bug discussion and his summary earlier in the thread. At that time, Tim Peters also posted about a change that he had to make to his code because of the deprecation; he thought it was far too late in the history of the language to be making breaking changes of that sort:

All computer languages have quirks. Python is, IMO, too mature and widely used now to risk changing much of any visible behaviors, short of screaming bugs, or (but less compellingly so) accidents of implementation that were never documented as "advertised" behavior.

There's nothing surprising about ~bool to people who learn the language. bool is a subclass of int in Python, period. I don't give a hoot how it works in other languages. The time for that kind of argument was when Python's semantics were first crafted. It's too late for that now.

The present

Things went quiet again until mid-February 2026, when Hayden Welch posted a concern, but had misinterpreted what was being deprecated. It led to more discussion, naturally, much of it between Hoffmann and Peters, along with a reminder from Stefan Pochmann about his use case. That caused Hoffmann to start a parallel thread to gather real-world impacts of the deprecation, which currently just has a link to Pochmann's use case and a brief mention of the deprecation (or, really, someday elimination) of ~bool being a violation of the Liskov substitution principle (which had also come up elsewhere in the discussions). Essentially, if bool is to be a subtype of int, it has to be able to be used wherever an int can be and ~ surely qualifies.

In the main thread, though, Van Rossum said that the discussion made him cry. "The inconsistency of disallowing ~x when x is a bool while allowing it when x is an int trumps the lack of a use case here." That was, of course, a complete reversal of his position back in 2022, and also different from his 2016 advocacy of a quick switch to a Boolean result for ~bool. In another message, he confirmed the reversal:

Right, I've changed my mind. Or maybe I wasn't thinking far enough ahead at the time.

I would be okay if ~b where b is statically typed as bool might trigger a warning in linters or static type checkers.

Around the time of Van Rossum's change of heart, the thread seems to have picked back up, at least for a bit. In response to Peters's argument that people mistakenly using ~ for logical not are terribly confused, "H. Vetinari" claimed that they were not, since NumPy and the like have popularized the idea, but "that it only works for arrays". Peters was strongly convinced that the NumPy model would not be good for Python as a whole to follow, however. For one thing, it works on more than just arrays, "but the conceptual model is baffling". He provided a number of examples showing how NumPy is internally inconsistent in its handling of its bool type.

Everything Python does follows from that bool is a subclass of int. That's all you have to remember. numpy's bool stands as unique in its type system, and is not even "a numeric type" there - although various operations' special cases make it act like one in various ad hoc ways.

It's simply incoherent, a grab-bag of special cases. The core language shouldn't budge the width of an electron to try to cater to any such stuff.

Matthew Barnett raised the seeming oddity of bitwise & and | returning a bool result, while ~ does not; that was inconsistent in his eyes, as it was in plenty of others' along the way. James Dow largely or completely demolished that argument with extensive references to the language documentation. The language reference pretty clearly shows that the existing behavior is required; an implementation is not actually Python without allowing ~bool. Since bool is an int, the bitwise and/or operators are consistent as well: "True | False must return an integer with a value of 1 (which True is) and True & False must return an integer with a value of 0 (which False is)." Tom Fryers also had a lengthy explanation that showed why the Liskov substitution principle matters, and that real breakage results from deprecating the ~bool operation, even though that operation is perhaps weird and unlikely.

Hoffmann seems amenable to reversing course on the deprecation. In the abstract, that should be easy enough to do; code that changed due to the warning will continue to function just fine if the warning goes away. It is not entirely clear how a decision like that would be made, but one guesses the steering council will be brought in at some point to make a pronouncement. There is no huge rush, at least until the time comes to turn the warning into an exception, which is a year or more off at this point.

Overall, the mood seems to be shifting away from deprecation. Using inversion on a bool is a bit of a dark corner of the language, for sure, and it may have been a mistake not to create a separate Boolean type; certainly some in the discussions believe so. The confusion comes to those who think the language does have a separate Boolean type, and it would be nice to find a way to warn them, but removing the feature altogether seems like a step too far.

The long journey for ~bool is probably not over, but perhaps some kind of ending will come before long. This episode demonstrates a number of aspects of the Python development process over the years, from its more freewheeling days 20 or more years ago through its more stodgy aspect these days. Throughout, we see the general cordiality and collegial nature of its discussions; one suspects we have not seen the last of this odd corner of the language, but that further discussion or development will proceed along the same genial lines. Both the language and the community are rather mature at this point—and it shows.

Comments (52 posted)

The ongoing quest for atomic buffered writes

By Jonathan Corbet
March 2, 2026
There are many applications that need to be able to write multi-block chunks of data to disk with the assurance that the operation will either complete successfully or fail altogether — that the write will not be partially completed (or "torn"), in other words. For years, kernel developers have worked on providing atomic writes as a way of satisfying that need; see, for example, sessions from the Linux Storage, Filesystem, Memory Management, and BPF (LSFMM+BPF) Summit from 2023, 2024, and 2025 (twice). While atomic direct I/O is now supported by some filesystems, atomic buffered I/O still is not. Filling that gap seems certain to be a 2026 LSFMM+BPF topic but, thanks to an early discussion, the shape of a solution might already be coming into focus.

Pankaj Raghav started that discussion on February 13, noting that both ext4 and XFS now have support for atomic writes when direct I/O is in use, but that supporting atomic buffered I/O "remains a contentious topic". There are a couple of outstanding proposals to add this feature: this 2024 series from John Garry and a more recent patch set from Ojaswin Mujoo. These proposals have stalled, partly out of concern about the amount of complexity added to the I/O paths and questions about whether there is really a need for atomic buffered writes.

A frequently mentioned potential user for this feature is the PostgreSQL database which, unlike many other database managers, uses buffered I/O. The PostgreSQL code often has to go out of its way to ensure that partial I/O operations do not corrupt the database, sometimes at a cost to performance. PostgreSQL is an important user, but not all developers are convinced that atomic buffered writes are the solution to its problems; Christoph Hellwig, for example, commented: "I think a better session would be how we can help postgres to move off buffered I/O instead of adding more special cases for them."

PostgreSQL developer Andres Freund responded that the project is indeed working on adding direct-I/O support, but its performance has not yet reached the level of the buffered-I/O method. But, he said, direct I/O will only ever be useful for some larger installations. Smaller systems, or those where the database is running as part of a larger application with its own memory needs, will still do better in a buffered-I/O setup where the kernel can manage the allocation of memory. Even when direct I/O becomes competitive as an option for PostgreSQL, he said, "well over 50% of users" will not be able to benefit from it. Most of the developers in the conversation seem to accept that there is a legitimate use case for atomic buffered I/O, though Hellwig remains a holdout.

An agreement that a solution would be nice to have does not, itself, create a solution, though. Atomic direct I/O was a complex problem to solve, requiring the kernel to keep I/O requests together all the way through to the eventual storage device. Buffered I/O adds complexity, since those operations have to go through the page cache, and the actual write operation is normally carried out at a different time, when the kernel gets around to it. Tracking atomicity requirements through the kernel in this way and preventing multiple operations from interfering with each other are not simple tasks.

Early in the discussion Mujoo suggested that one possible solution might be to use writethrough semantics for atomic buffered writes. In other words, when user space initiates a buffered write requesting atomic behavior (which would be done using pwritev2() with the RWF_ATOMIC flag), the kernel would immediately initiate the process of writing that data to disk. That would allow creating a short-term pin to keep the pages in memory (it is hard to do an atomic write if one of the pages full of data is pushed out to swap in the middle of the operation) and would let the kernel prevent any other changes to those pages while the operation is underway. There would be no need to find a way to track atomic writes for dirty data that is sitting in the page cache.
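In user-space terms, the interface being discussed looks roughly like the following sketch. Python's os.pwritev() wraps pwritev2(), but RWF_ATOMIC is not exposed by the os module; the constant below is taken from recent kernel uapi headers and should be treated as an assumption, as should kernel and filesystem support for honoring it:

```python
import os
import tempfile

RWF_ATOMIC = 0x40   # assumed value from recent linux uapi fs.h headers

fd, path = tempfile.mkstemp()
try:
    data = [b"all-or-nothing block\n"]
    try:
        # Ask for an untorn write; the kernel refuses this for buffered
        # file descriptions today (and for filesystems/devices that
        # cannot honor it), typically with EOPNOTSUPP or EINVAL.
        written = os.pwritev(fd, data, 0, RWF_ATOMIC)
    except (OSError, NotImplementedError):
        # Fall back to an ordinary, possibly-torn buffered write.
        written = os.pwritev(fd, data, 0)
    print(written)   # number of bytes written
finally:
    os.close(fd)
    os.unlink(path)
```

If the writethrough proposal is adopted, callers would presumably pass RWF_ATOMIC together with the suggested RWF_WRITETHROUGH flag, which does not exist yet.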

Jan Kara agreed that writethrough behavior could be interesting. It would allow much of the existing direct-I/O infrastructure to be reused, he said, making the solution much simpler. The real question, he said, was whether writethrough behavior would be useful for PostgreSQL. Freund answered that writethrough would indeed be useful, even in the absence of atomic behavior. He suggested implementing it by requiring that atomic buffered writes include a new RWF_WRITETHROUGH flag along with RWF_ATOMIC; that way, if the kernel ever implemented atomic buffered writes without writethrough, there would not be a behavior change seen by user space.

Raghav asked about the difference between the proposed RWF_WRITETHROUGH flag and the existing RWF_DSYNC, saying that the former might (like most buffered writes) be asynchronous, while the latter is synchronous. Dave Chinner disagreed with that interpretation, though, saying that writethrough behavior is inherently synchronous so that errors can be immediately reported. The way to get asynchronous behavior, he said, is to use the asynchronous-I/O interface or io_uring. But RWF_WRITETHROUGH itself, he said, should behave identically to direct-I/O writes, allowing the existing I/O paths to be used to implement it. RWF_DSYNC, he said, would still be different in that it forces the storage device to commit the data to persistent media, while RWF_WRITETHROUGH would not take that extra step (meaning that data could remain in the device's write cache).

In an attempt to summarize the discussion, Raghav posted this set of proposed conclusions; the first step would be to implement the proposed writethrough behavior with immediate initiation of the requested operation. Writethrough alone, though, does not guarantee atomic behavior, so there will be more to be done. The next step will be to ensure that the data being written is not modified while the operation is underway. Fortunately, the kernel has long had a mechanism, stable pages, that can be brought into play here. By preventing modifications to a buffer that is being written, the kernel can prevent the data from being corrupted.

Later steps will include taking care to copy the full data range into the page cache before beginning the operation, and to make sure that the buffer is written in a single, atomic operation. There will inevitably be other details to deal with, such as specifying and enforcing alignment requirements for buffers used with atomic writes. But it would appear that the path toward atomic buffered writes is starting to become more clear. It shouldn't take more than another half-dozen or so LSFMM+BPF sessions before the problem is fully solved.

Comments (16 posted)

The exploitation paradox in open source

By Joe Brockmeier
March 2, 2026

CfgMgmtCamp

The free and open-source software (FOSS) movements have always been about giving freedom and power to individuals and organizations; throughout that history, though, there have also been actors trying to exploit FOSS to their own advantage. At Configuration Management Camp (CfgMgmtCamp) 2026 in Ghent, Belgium, Richard Fontana described the "exploitation paradox" of open source: the recurring pattern of crises when actors exploit loopholes to restrict freedoms or gain the upper hand over others in the community. He also talked about the attempts to close those loopholes as well as the need to look beyond licenses as a means of keeping freedom alive.

Fontana is a lawyer who is well-known as an expert on FOSS licenses. He has worked for Red Hat for much of his career, and now works directly for IBM since it absorbed Red Hat's legal department in early 2026. He said that this would be an unusual talk for CfgMgmtCamp, as it was not about configuration management—though he had provided legal support to people working on related projects such as Ansible and Foreman. He would not be speaking for Red Hat or IBM in his talk, however, though he said it did draw on his work experiences over the years. "I'm on vacation, seriously. I wanted to go to Ghent".

Infrastructure and freedoms

He said that he might look at open source differently than many in the audience, and that he had been struck by how there were periodic crises and disagreements related to "legal stuff going wrong". These periodic flashpoints are not totally random, he said, they have underlying features in common; the thing that varies over time is what he called the infrastructure. "I don't mean like 'servers', I mean the current state of play that software is situated in", from a technical, cultural, and social perspective. Basically, everything that shapes where power concentrates and how freedom can be exercised.

[Richard Fontana]

Our definitions of freedom are anchored to an earlier technological world, he said. For example, the Free Software Foundation's four essential freedoms: the ability to run, study, modify, and share software all relate to the early days of software development. There is also "the other normative definition that doesn't use the word freedom", the Open Source Definition (OSD) by the Open Source Initiative (OSI). Those definitions can be thought of as sort of a constitutional foundation for open source.

Fontana observed that the "state of play that software is situated in", everything that is relevant from a technical, social, economic, and business perspective, keeps evolving. Each time that it does, there are new tensions and power dynamics that pop up; but the definitions that underlie our understanding of free software and open source stay the same. They have not been revised to change with the times. This is in part because the gatekeepers for those licenses ("and I've been one of these gatekeepers in the past") do not want to revise the definitions. In a sense, he said, open source is a conservative domain because it is tied to unchanging definitions even while other conditions do change.

When infrastructure changes, there are new opportunities to exploit open source—to exercise power, to create new business models, to make a profit—that did not exist previously. When that happens, people tend to reach for legal fixes to address the exploit, which in turn can create new control points. To illustrate, Fontana said he would walk through some of the history of open source to give examples, beginning with the first flashpoint: the invention of copyleft.

Copyright and copyleft

Originally, developers were able to share code because it was not obvious that copyright even applied to software. "All software was inherently free. It was a commons." And then it became clear in the late 1970s that copyright did apply to software after all. That was an infrastructure shift that made it possible to exert control over software by stopping people from making and distributing modifications to software.

Copyleft, in the form of the GPL, was a response to that new control point. "It, famously, uses copyright law to create a different type of license that tries to keep software free." It was a well-intentioned attempt to use a legal tool to improve conditions brought about by legal changes. But despite it being well-intentioned, it was controversial in software-developer communities, Fontana said. Even today there is still a schism between copyleft proponents and those who prefer permissive licenses, such as the BSD, MIT, and Apache licenses.

The GPL also opened up a new, unintended, control point in the form of the dual-licensing model. "And this is really interesting, because the GPL is designed to prevent software from being exploited through copyright." Dual licensing was used to make proprietary licensing effective by giving one party control over copyright, but not others. "You're the one copyright owner of a GPL-licensed code base and you provide a proprietary version for a fee." That, too, was controversial, but it took time for people to develop the vocabulary to explain why they were concerned about it, he said.

Instead of the motivations being to perpetuate the free software commons, you have people using the machinery of copyleft licensing in a certain sense to move code out of the commons. Even though, in a formal sense, it's still there, and there's nothing in the GPL that says this is wrong.

Dual-licensing is the first example of "a phenomenon that repeats itself throughout the history of open source. This feature is asymmetry." Anyone can exercise the freedoms under the GPL, but only one actor has the freedom to use proprietary licensing. To implement this asymmetry, the copyright holder needs to implement a copyright-assignment system or contributor-license agreements (CLAs) that give more power to the maintainer of the project.

SaaS loophole

The first attempt to use asymmetrical power in open source to make money "in a way that is somehow divorced from the ideals open source is founded on" was dual-licensing, but it was not the last. Businesses continue to use the freedoms granted by open-source licenses to "introduce new forms of scarcity in some way or another".

Fontana said that the audience had probably heard of what he called the Software-as-a-Service (SaaS) loophole, which "kind of breaks open-source licensing". In particular, it breaks the GPL and copyleft licensing, because the legal foundations of those licenses rest on distribution, which does not happen when the code is used in a SaaS context. "You sort of escape the intended obligation under the GPL even though you're doing things that are sort of similar to what distributors do". Since there is no binary distributed, the requirements in the GPL are not triggered. In a SaaS context, "the copyleft GPL software becomes equivalent to permissive-license software".

Once again, some people responded to this change with concern about the integrity of open source and an attempt to fix the problem. In particular, it led to the creation of the Affero GPL (AGPL), "sort of an attempt to patch the GPL", so that deployment of a service becomes a trigger for releasing source code. "I would argue that the AGPL was well-intended, but I don't know if I would say that it was well-designed to combat the problem it was created to deal with."

The AGPL is another example of trying to make a fix to a license when a problem emerges, but licensing does not solve the problem very well. In fact, Fontana said, the AGPL is often used by businesses in a dual-licensing context.

Brand identity

The value of open source as a brand identity is another sort of infrastructure shift; there is value in labeling something "open source", but it is problematic for the community because there is no way to protect that brand. The Open Source Initiative tried to trademark the term "open source" but failed to do so. That has led to various parties stretching the definition of open source, often toward more restrictions, "really stretching the normative foundations [of open source] or kind of entering into public conflict with them". Those parties have taken advantage of the ambiguity around what open source is, and turned it into an asset that can be monetized.

Open source has become a misused term, without any clear way to combat its misuse. "Open source became this valuable brand, and in some ways it became more valuable than the substance it was supposed to represent." One form of this that Fontana described is the creation of source-available licenses "mostly used by startups that got built up around a popular open-source project". The familiar narrative, after a few years, is that the startup does not like the way that people are using the freedoms they were given through the open-source licenses. For example, cloud providers can often operate services based on open-source projects better than the startups can, which leads companies to decide to use licensing against their competitors.

The source-available licenses are designed to look like open-source licenses, and the projects are often hosted publicly and allow some of the freedoms that users expect. Those licenses do not comply with the OSD, though, because they discriminate against at least one class of users. "They're ultimately sort of aimed at competitors, without saying, 'if you compete with us, you can't use this software.' They're not honest, in that sense."

Fontana used the example of HashiCorp switching its license from the weak-copyleft Mozilla Public License (MPL) to the Business Source License (BUSL). That license "basically says 'you can use this, but not in production'", and then converts to an open-source license after several years.

The BUSL is not the worst kind of source-available license, he said, and admitted he does not like source-available licenses, in part because they exploit confusion about what "open" means. If a person is not "really clued into this stuff", then they might be confused and misled into thinking it was open source. Sometimes companies will even continue referring to the project as open source, even while using a restrictive license:

There's no question that part of what gives power to these licenses, and the business models enabled by these licenses, is the existing confusion it is exploiting around what 'open' means and what 'open source' means. So source-available licenses just exacerbate some of these problems we've seen historically around asymmetry and so forth.

Around the same time source-available licenses became a problem, he said, a "splinter movement in open source" started up as well: the ethical-source movement. He described that movement as believing that normative definitions of open source are flawed because "open source allows you to do all sorts of bad things". Fontana noted that the ethical-source movement did not fit exactly with the model of exploiting open source for profit, but it "sort of should, in a sense".

The concern that open-source software could be used for "nefarious purposes" has been around for a long time, of course. And it is true, he said, that it is morally neutral because the freedoms are available to everyone. "You can't discriminate against users, or you can't say the GPL is only available as long as you're a good person." The JSON license from 2002, which is basically the MIT license with a provision added that the software "shall be used for Good, not Evil", was a forerunner to the ethical-source licenses.

There are problems with the ethical-source licenses, too. They do not fit with the accepted definitions of open source, because they discriminate against specific use cases such as "you can't use the software for any use case that violates human-rights law", or similar. Though Fontana did not say this explicitly, enforcing such licenses would also be difficult, if not impossible. His slide described those licenses as "principled, but misdirected". (The full set of slides is available on the CfgMgmtCamp site.)

Open-source developers realized that bad things are happening with their software and feel they have to do something to stop it. But, how? "You're not empowered to write new laws. You're just a software developer [...] so the only tools you know how to use are licenses" because those are the foundational tools of the whole system. Ethical licenses, he said, are their own infrastructure shift; they are designed to allocate power to certain people and deny it to other people. This time the attempt to create an asymmetry of power is not for profit, but to try to do good.

AI

The most recent infrastructure shift is AI. Fontana said that there are "all sorts of asymmetries around what we're calling AI now, and they're more extreme than anything we've seen before". He said he was tempted to say that AI has nothing to do with open source, but that isn't quite accurate. "AI in the modern sense is built on a foundation of lots of important open-source projects", which includes authentic open-source projects built up around the use of AI models.

But within the world of people creating AI models themselves, "the term 'open' is used extensively, but it's used meaninglessly. And then people using the technology repeat this problem". The ambiguity around open source just gets worse in the AI era; "open source" in the AI context basically means that the model is public. "It is actually worse than what we have with source available, it's just a signal with no substance".

Misuse of "open" in this context, he said, was openwashing. The models, if thought of as software, do not meet the normative definition of open source. There is no source code, in this case training data, published, and often even information about the training data is not disclosed. "So there's this kind of extreme non-transparency in a context where the term 'open source' is being widely used", which is unfortunate.

So you might say, "why can't we solve all this by creating a new license?" And you know by now my answer is that licenses are not good at solving these problems.

Some people are angry about AI and have proposed creating licenses that basically forbid using software to create a new model. Those licenses, Fontana said, would violate the OSD pretty clearly, and it's not even clear that those licenses could solve the problems. Licenses are "very brittle tools" that can't do much. They were effective for the limited purpose they had in the 1980s and 1990s, but the problems of today are too complex for a single type of tool to solve.

Licenses aren't the solution

Fontana said that when he was discussing the talk with one of the organizers, he was asked to be inspirational: "I'm not used to doing that, I mostly just like to complain about stuff," he deadpanned. He was, however, willing to try.

The problem that he identified was that the way open source is conceptualized is rooted in the past, and it does not get updated for new problems. His suggestion is that we should try to reframe open-source freedoms "in a way that is more dynamic or adaptive or mobile". He displayed a slide (reproduced below) first with the classical freedoms and then with his concepts for new freedoms: reproduce, verify, participate, exit, and stewardship.

[Slide: Classical freedoms must remain mobile]

He ran through the new freedoms quickly. The right to reproduce "is not an original idea in any sense, kind of a generalization of the work done on reproducible builds". The GPL is designed to allow users to rebuild software from source, but systems are more complex now and "being able to rebuild source code is not enough". There is a need for a more robust ability to rebuild and verify software. As an example, he said, suppose someone claims to be running a service based on open-source software, but has perhaps modified it in a substantial way without publishing the modifications. "How can you verify the claims they make about those things?"

He mapped the right to modify software to a new concept of a right to participate in development of software. "If you are dependent on a project, there's a sense in which you should have some way of ideally participating in its governance." Modification is a local freedom, whereas participation is more of a collective freedom. He said it was not a radical proposal for open-source development to become a free-for-all with no standards for contribution, "but it's sort of elevating participation to the level of the original freedoms."

Everybody talks about how the right to fork is a fundamental aspect of open source, but "it turns out in practice, and this has become increasingly true over time, you can't easily fork projects in most cases". It is actually too costly to practically exercise, so he felt that open source should explicitly state that it is built on "the right to compete" which could make it more practical for participants to exit a community that no longer serves their needs. That, of course, is directly in conflict with the source-available licenses.

Finally, stewardship "corresponds to the work you need to do to sustain projects and the community" and should be "elevated to the foundational level for what open source means". Open source is a human endeavor, Fontana said. The freedoms that he was articulating correspond to real human activities that are important to consider when thinking about the ideals that open source ought to meet.

So, the right to reproduce is based on curiosity. The right to verify is based on integrity. The right to participate is related to the notion of solidarity. The right to exit corresponds to the concept of courage. And stewardship, of course, corresponds to care. So these are all human forms of these kinds of reframed definitional freedoms.

He was not proposing, he said, to replace the existing freedoms or the notion of what an open-source license is. Those are still a foundational part of open source. But he felt that we need to have a bigger and more expansive sense of what open source means that is not simply rooted in a "static checklist of permissions of 1980s and 1990s kinds of concepts."

Asymmetry is inevitable in open source. It is a feature of infrastructure shifts; there will always be changes in the field of play that create new power relationships and leverage points. What we can do, Fontana said, is make sure that power does not become ossified, "and that's what this notion of mobile freedoms is sort of aimed at". We cannot eliminate asymmetry, he said, but we can continue to work around it.

There was time for one question. An audience member wanted to know if he was referring to the Open Source AI Definition (OSAID) in his talk. Fontana said that he had not mentioned the OSAID in the talk, but had been a critic of the definition. The OSI came up with something that was too complicated and impractical "and also didn't make anyone happy because it has this big compromise built into it". It tried to address the problem of undisclosed training data, but it does so in a way that has "kind of a hole in it". It was, "sort of pointless, frankly" and maybe shows that trying to come up with a definition similar to the open-source definition is not the right approach to address the problem. "But I'd have to think about that more."

With that, time elapsed. The new freedoms proposed by Fontana seem interesting, and could do with more detail on how to implement them, but his point that licensing alone is insufficient is certainly valid. It would be useful for people and projects to be thinking beyond licensing to new ways to retain the ideals of open source as the world keeps changing.

[Thanks to the Linux Foundation, LWN's travel sponsor, for funding my travel to Ghent to attend CfgMgmtCamp.]

Comments (36 posted)

Magit and Majutsu: discoverable version-control

By Daroc Alden
March 4, 2026

Jujutsu is an increasingly popular Git-compatible version-control system. It has a focus on simplifying Git's conceptual model to produce a smoother, clearer command-line experience. Some people already have a preferred replacement for Git's usual command-line interface, though: Magit, an Emacs package for working with Git repositories that also tries to make the interface more discoverable. Now, a handful of people are working to implement a Magit-style interface for Jujutsu: Majutsu.

Magit was started by Marius Vollmer in 2008; over time, the project grew organically to cover the users' needs for an intuitive Git interface. The current version is v4.5.0, and new releases come every few months. The project's statistics page shows that a majority of the code at this point has been written by Jonas Bernoulli, but many authors have contributed improvements for their specific workflows and use cases. The result is a startlingly comprehensive feature set, which Bernoulli calls "essentially complete", covering "about 90% of what can be done using git".

Majutsu is much younger: it was started in November 2025 by Brandon Olivier and has had six contributors so far, reaching version 0.6.0 on February 12. Its interface is already fairly comprehensive, however, owing both to Jujutsu's fewer corner cases and to the libraries written for Magit. Both projects are licensed under version 3 of the GPL, and Majutsu reuses Magit's interface design and libraries for handling transient windows. (Emacs predates most graphical interfaces, and calls the things everyone else calls windows "frames". It calls panels that subdivide a frame "windows".)

Discoverable design

Magit's transient windows are the core of its semi-graphical interface, allowing the package to combine keyboard-driven actions with text-based status display. When Magit is started for the first time (by typing "C-x g" or "M-x magit", depending on how good one is at remembering arcane Emacs incantations), it shows a status summary screen:

[A screenshot showing the Magit status screen]

From that status screen, there are a number of keyboard shortcuts that can be used to perform Git operations. Hitting "d", for example, brings up the transient window for diffs, which lists all of the various things that can be done with diffs in Magit:

[A screenshot showing Magit's transient diff command window]

Continuing to type the characters shown in green (on my theme, at least) applies possible command-line flags — which are saved until they are reset. For example, my --stat and --no-ext-diff flags (which generate a diffstat and turn off external diffing programs, respectively) are already turned on, and will be applied to all diff-related commands I use until I turn them off again. Actually choosing an operation to perform requires typing one of the un-prefixed letters shown at the bottom of the transient window (not shown here because they've scrolled off the screen). Typing the same letter again, however, is always bound to "do what I mean", and does something reasonable based on the current cursor position in the status buffer. So, hitting "d" again (with my cursor in the status buffer on one of the listed recent commits) opens the diff associated with the chosen commit in a new window:

[A diff showing a trivial change]

Typing "q" will close any of Magit's temporary windows and return to the status buffer. Typing "?" lists all of the keys that pop up a transient window to begin with, although they're mostly mnemonic, such as "l" for logs or "c" to commit.

Magit commands like this are context-sensitive. If the cursor in the status buffer is on a commit identifier (branch, tag, or hash), hitting "d d" shows the diff associated with that commit. If the cursor is on a file with unstaged changes, hitting "d d" shows the diff of that file against the staging area. Within that diff, placing a cursor on a particular hunk and typing "s" stages it. Normal Emacs navigation, such as clicking or arrow keys, suffices to navigate to any Git object, such as a file, commit, hunk, or tree. Once the cursor is on it, the default contextual commands will do something useful.

When Git's state is updated from within Emacs, such as by staging or unstaging a hunk, all of Magit's buffers are automatically kept in sync. This includes editing a file in the repository with Emacs — saving the edited file will make Magit update the diffs if they are open in another window. If the repository is updated outside of Magit, typing "g" forces a manual refresh.

This design makes the use of Magit pleasingly discoverable — performing simple operations is intuitive, and all of the text on the status screen can be interacted with. Performing a more complex operation involves opening the appropriate transient window and then turning on and off options and selecting the appropriate operation. It doesn't require going to the Magit manual or the Git manual, because there is a handy short reference guide right there in Emacs. By the same token, for all that Magit calls itself an alternate Git porcelain, one's existing knowledge of the Git command line is not obviated: Magit commands can use almost all of the same flags and Git subcommands as the normal command-line interface.

Majutsu

Despite operating on top of a different version-control system, Majutsu looks fairly similar at first glance:

[A screenshot showing Majutsu's status window]

The main difference is that Jujutsu does away with the concept of a staging area: there is always a particular working commit (given the short name "@"), and one just edits that commit in-place, rather than staging changes and committing them only once they're finished. Consequently, Majutsu puts the graph of recent commits at the top, with expanded details about the state of the current working commit down below.

The interface works exactly the same way as Magit, which is unsurprising since it reuses Magit's libraries: start typing, and a transient window will pop up to show possible completions of the command. Majutsu does have fewer transient windows (25 vs. Magit's 37), but that is partially a result of Jujutsu having fewer commands than Git. The Majutsu manual goes into more detail about the available transient windows. It does have an overall more bare-bones feel than Magit, which is somewhat to be expected with many fewer years of contributions.

The graph in the main status window shows some differences from Git's model — for example, it shows commit identifiers (on the left side, starting with a pink or purple letter) instead of commit hashes. Jujutsu commit identifiers are stable through rebases and other history-modification operations, so they can be used to refer unambiguously to a commit even in the middle of a rebase. The colored letters at the beginning highlight the minimum prefix needed to refer to them unambiguously: Jujutsu will understand "r" to refer to commit "rkrmpkzv", at least until the repository gets another commit ID starting with "r". Jujutsu commits do still have cryptographic hashes — for signing, and for interoperability with Git — which can be seen at the bottom of the status window, starting with blue letters. These names, while normally quite helpful at the Jujutsu command line, are less helpful in Majutsu, because commits are typically referred to by placing the cursor on them instead of referring to them by name.

An example of that is Majutsu's rebase interface, which is simplified compared to Magit's interface. In Magit, a rebase is started using "r r", whereupon one will have to select a starting revision, and go through Git's normal interactive-rebase workflow. In Majutsu, the experience is more visual. A rebase starts with "r", but then one can select which revisions should be picked, squashed, or rebased onto directly in the main status window or in the detailed log window. Once the correct commits have been selected, pressing return actually performs the rebase. The procedure is greatly streamlined compared to Magit, which makes sense given that Jujutsu's design encourages rebasing more frequently than Git does.

There are some rough edges with Majutsu. Using it to clone a new Git repository (with "M-x majutsu-git-clone") was a bit confusing — it warned me about being used outside a Jujutsu repository, and asked if I wanted to create one. When I did so and then cloned my target repository, it checked it out into a subdirectory, leaving me with two nested repositories. That's a fairly minor detail, however.

More annoying is the fact that Jujutsu's log command (and therefore Majutsu's status buffer) doesn't show commits from before importing a Git repository. This is despite the fact that Jujutsu supports operating colocated with Git, using both in the same repository. Subsequent commits made with Git in a colocated repository are shown, but it makes for an awkward transition. Other operations, such as committing and moving bookmarks (the equivalent of Git's branches) around, went smoothly.

Jujutsu is an interesting experiment in building a version-control system with a simplified design. After years of using Git, it can feel uncomfortable — but Majutsu makes it easy to explore. For a version-control system that has to wrestle with Git's dominance, having a discoverable interface feels like an important step toward making it easier for inveterate Git users to migrate. Majutsu has a ways to go before it reaches Magit's level of polish, but it's more than ready to help people curious about Jujutsu experiment without leaving the comfortable embrace of Emacs.

Comments (11 posted)

IIIF: images and visual presentations for the web

February 26, 2026

This article was contributed by Ronja Koistinen

The International Image Interoperability Framework, or IIIF ("triple-eye eff"), is a small set of standards that form a basis for serving, displaying, and reusing image data on the web. It consists of a number of API definitions that compose with each other to achieve a standard for providing, for example, presentations of high-resolution images at multiple zoom levels, as well as bundling multiple images together. Presentations may include metadata about details like authorship, dates, references to other representations of the same work, copyright information, bibliographic identifiers, etc. Presentations can be further grouped into collections, and metadata can be added in the form of transcriptions, annotations, or captions. IIIF is most popular with cultural-heritage organizations, such as libraries, universities, and archives.

Collections and presentations can—and often do—link to images hosted at many different web sites. A key strength of the framework is standardizing complex, feature-rich image hosting with the explicit goal of interoperable referencing and grouping into combined presentations.

Audience and implementers

IIIF is mostly used by public-sector organizations that deal with heritage science and digital humanities, the core audience being the galleries, libraries, archives, and museums (GLAM) field. The greatest benefits of IIIF are gained when there are few (or no) legal or technical restrictions placed on the content being served, which in practice means works that are born in the public domain or whose copyright has expired.

Among IIIF users likely to be of interest to LWN readers are Wikimedia Commons and the Internet Archive. Wikimedia started tentative integration work in 2018 in the form of IIIF manifests in Wikidata properties, but it seems that further deployment is on hold. The Internet Archive started integrating IIIF in 2015 and officially adopted it in 2023.

The image server at the heart of it

The "How It Works" page on the IIIF website does a good job of explaining the basics of the framework's technical principles, but I will provide an overview of my own here. At the core of IIIF is the Image API, simply defined as a URL with this format:

    https://example.org/{id}/{region}/{size}/{rotation}/{quality}.{format}

Here, id is a string that identifies an image, and region is an expression that crops a portion, for example "full" for the entire image or "50,100,200,300" for "50 pixels right, 100 down, 200 wide, 300 high". size specifies how much to downscale the image after cropping, if at all. rotation requests a rotation in degrees, quality can be "default", "gray", or one of a few other predefined keywords, and, finally, format is the file extension of the desired image format, usually jpg.
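Given those pieces, such URLs are easy to assemble programmatically. Here is a minimal sketch in Python; the helper name and its defaults are my own, and the "max" size keyword is from version 3 of the Image API (version 2 uses "full" there as well):

```python
# Sketch: assemble a IIIF Image API URL from its components.
# Function name and defaults are illustrative, not part of the spec.
def iiif_image_url(base, image_id, region="full", size="max",
                   rotation=0, quality="default", fmt="jpg"):
    return (f"{base}/{image_id}/{region}/{size}"
            f"/{rotation}/{quality}.{fmt}")

# Request a 660x509-pixel crop, scaled to 250 pixels wide:
url = iiif_image_url("https://example.org", "some-image",
                     region="2910,1028,660,509", size="250,")
print(url)
# https://example.org/some-image/2910,1028,660,509/250,/0/default.jpg
```

Every parameter is part of the URL path, which is what makes the API cache-friendly: a given crop at a given size is always the same URL.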

For example, let's look at a Map of Colorado from 1882 hosted by the Library of Congress. A direct URL to the full map, downscaled to 800 pixels wide, looks like this (split for readability):

    https://tile.loc.gov/image-services/iiif
        /service:gmd:gmd370m:g3700m:g3700m:gla00130:ca000120   (identifier)
        /full/800,/0/default.jpg                               (region etc.)

And if we want to focus on Boulder County by cropping out the rest, the URL looks like:

    https://tile.loc.gov/image-services/iiif
        /service:gmd:gmd370m:g3700m:g3700m:gla00130:ca000120
        /2910,1028,660,509/250,/0/default.jpg

[Boulder County in an 1882 map of Colorado. Library of Congress.]

The difference is in the last part of the URL, where a restricted region and lower resolution have been requested; the result appears on the right.

The specification requires another endpoint, of the form https://example.org/{id}/info.json, where the server must provide key characteristics about the image as well as which optional features of the API the server itself implements. Here is the info.json that the Library of Congress gives us for the above image. (This site still uses version 2 of the Image API.)
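To give a feel for the format, here is a trimmed, hypothetical info.json in the style of version 2, along with a sketch of how a viewer might read the tiling information from it; real responses carry more fields (@context, profile, and so on):

```python
import json

# A trimmed, made-up info.json; real servers return more fields.
sample = json.loads("""
{
  "@id": "https://example.org/some-image",
  "width": 6000,
  "height": 4000,
  "tiles": [{"width": 512, "scaleFactors": [1, 2, 4, 8]}]
}
""")

# Work out the pre-rendered zoom levels: at scale factor s, one
# tile of tile-width pixels covers s * tile_width source pixels.
tile = sample["tiles"][0]
for s in tile["scaleFactors"]:
    print(f"scale 1/{s}: one tile covers {s * tile['width']} source pixels")
```

A viewer uses exactly this information to decide which region/size combinations to request so that the server can answer from its pre-rendered tiles rather than resampling on the fly.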

Using this API, IIIF viewer software can efficiently query the server for cropped and downscaled pieces of even extremely large image files (such as this panorama of the painting The Battle of Murten), to focus on an area of interest and to provide advanced, traffic-efficient, deep-zoom views, similar to applications like Google Maps or OpenStreetMap. (Keeping in mind that map services generally deal with rasterizing vector tiles, whereas IIIF deals with bitmaps.) Image servers are also required to advertise in the info.json which resolutions and tile sizes are available pre-rendered, so viewers may prioritize requesting those to minimize lag and the computation demands imposed on the server.

An image server with large assets can be susceptible to denial-of-service attacks. The protocol necessarily allows clients to repeatedly ask for computationally intensive image-processing operations and vary the parameters just a little bit to ensure cache misses. Servers must be deployed defensively, taking care to aggressively cache the most anticipated assets and their most requested tile sizes. Many implementors also try to fend off machine-learning bots with reverse proxies, web application firewalls, rate limits, or services like Cloudflare. It is a struggle.

Typically, when serving images over IIIF, the files are kept on disk in a format that supports storing multiple resolutions in a tiled representation, as opposed to linear rows of pixels. The most popular formats are TIFF, using a special internal layout called "Pyramid TIFF", and JPEG 2000.

There are a handful of popular IIIF-compatible image servers, the best-known probably being Cantaloupe (University of Illinois/NCSA Open Source License) and IIPImage (GPL v3.0). The latter can handle a few other, similar protocols in addition to IIIF, plus it has features like support for multispectral images, allowing sites to keep not only multiple resolutions of visible-light photography but also X-ray or ultraviolet representations of a subject in one TIFF file.

An IIIF-compatible backend can also be implemented completely statically by simply pre-computing all the tiles one wishes to serve, along with the appropriate JSON metadata, as long as one declares the appropriate flag in that JSON announcing to clients that only these tiles may be requested.
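Enumerating the tiles to pre-compute is straightforward; here is a sketch, with illustrative names, that yields the Image API region string for each tile at a given scale factor (regions are expressed in full-resolution pixel coordinates):

```python
# Sketch: enumerate the tile regions a static IIIF backend would
# pre-compute for one scale factor. Names are my own.
def tile_regions(width, height, tile_size, scale):
    step = tile_size * scale          # source pixels covered per tile
    for y in range(0, height, step):
        for x in range(0, width, step):
            w = min(step, width - x)  # edge tiles may be smaller
            h = min(step, height - y)
            yield f"{x},{y},{w},{h}"

# A 2048x1024 image tiled at 512 pixels, scale factor 2:
regions = list(tile_regions(2048, 1024, 512, 2))
print(regions)
# ['0,0,1024,1024', '1024,0,1024,1024']
```

Running this for every advertised scale factor, rendering each region at the scaled-down size, and writing the results into the matching URL paths is essentially all a static deployment needs.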

As the Image API is quite simple, some organizations do end up creating their own server software, and some complicated digital-asset-management systems (DAMS) also support the API.

Presentations and collections

The other leg that IIIF stands on is the Presentation API, which is quite a bit more complex. It is a definition for a document that points at one or more images and accompanies them with metadata. The Presentation API is what makes a digital object whole by, for example, stitching the individually digitized pages of a book into a linear viewing experience. The metadata usually contains information about the physical original's location, authorship, publication date, whether it is part of some series, its Uniform Resource Name, and so on.

A presentation can comprise images from multiple sources. For instance, if multiple museums in different countries have incomplete, digitized fragments of a manuscript, an IIIF presentation can combine them together into a virtual whole—without ever having to download, process, arrange, or in any way re-host the files. See this demo from Biblissima in France, which is a virtual reconstruction of a 15th-century manuscript that had its illuminations cut out long ago.

If a digitized manuscript is difficult to read due to outdated handwriting, faded text, or old-fashioned orthography, an IIIF presentation can be used to non-destructively add a set of annotations, which viewer software can then display alongside the image. Annotations can even make a centuries-old book searchable on a computer, once it has been thoroughly transcribed in the metadata.

The Internet Archive has IIIF manifests for its materials, but finding them isn't obvious. For a quick demo, let's look at the digitized copy of the first issue of BYTE magazine, hosted at https://archive.org/details/byte-magazine-1975-09. Here, the string byte-magazine-1975-09 in the URL is the item ID. To get the item's presentation manifest, we need to plug the ID into another URL template to get https://iiif.archive.org/iiif/byte-magazine-1975-09/manifest.json. Note that the Internet Archive hosts both "items" and "collections"; for collections, the correct suffix is collection.json.
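The URL templates above can be captured in a tiny helper; here is a sketch (the function is my own, not an Archive-provided API):

```python
# Sketch: map an Internet Archive item or collection ID to its IIIF
# manifest URL, following the URL templates described in the text.
def ia_iiif_url(identifier, collection=False):
    suffix = "collection.json" if collection else "manifest.json"
    return f"https://iiif.archive.org/iiif/{identifier}/{suffix}"

print(ia_iiif_url("byte-magazine-1975-09"))
# https://iiif.archive.org/iiif/byte-magazine-1975-09/manifest.json
```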

Next, we can grab the URL of the manifest and take it with us to any IIIF tool. For example, that magazine can be fed to the Mirador viewer, yielding a different interface to the same material.

There are a handful of other IIIF APIs that are not as widely deployed. They deal with authorization, content search, and various machine-to-machine data-ingest concerns, for example.

The IIIF metadata formats have not been designed ad hoc, but rather they build on prior art established in existing W3C recommendations and semantic foundations, most notably "Architecture of the World Wide Web" (2004) and "JSON-based serialization for Linked Data (JSON-LD)" (first draft 2012, latest revision 2020).

While this article only talks about images, and the second I in IIIF stands for "image", in actuality the framework also supports audio-visual materials. A presentation or collection can include audio and video files as well; similarly, annotations can target spatially addressed areas of interest in a static image, but they can also target temporally addressed sections in audio or video. A popular feature enabled by this is the coupling of digitized sheet music with an audio recording, so both can be studied simultaneously.

An update to the Presentation API is expected in 2026 in the form of version 4.0, which, most notably, adds better support for 3D objects. 3D is already doable in the current 3.0 spec, but the next version comes with a major rework of some core concepts that brings 3D to equal semantic footing with 2D images and audio-visual materials.

Client software

While the Image API is designed to be compatible with regular URLs displayable in any regular web context and browser, implementing a zoomable and pannable IIIF view or presentation display requires a client called an IIIF viewer; these are generally JavaScript programs embedded in web pages. Some popular ones are Mirador (Apache 2.0), The Universal Viewer (MIT license; not to be confused with another program with the same name), Clover IIIF (MIT), and TIFY (AGPL v3.0).

All of the viewers above are wrappers on top of OpenSeadragon (three-clause BSD), which actually handles fetching the tiles from the server, stitching them, and drawing them on the page. OpenSeadragon, which made its 6.0 release on February 18, is highly configurable, supporting many different rendering modes, tweaks to user controls as well as zooming and panning behaviors, and more. What the above viewers add on top is support for showing multiple images at once, overlaying annotations, displaying metadata, and implementing the behaviors specified in the presentation and collection manifests.

Not all client programs are simple viewers; some build more advanced applications on top of IIIF. One highly celebrated and quickly evolving tool is Allmaps, a platform for georeferencing IIIF-enabled digitized maps or aerial photographs.

Highlights from the 2026 Online Meeting

The 2026 IIIF Online Meeting took place January 27–29; there is a YouTube playlist of the plenary session and four rounds of lightning talks. The plenary covered IIIF Consortium and community news and provided an overview of new features in the upcoming Presentation API 4.0, followed by discussion.

To kick off the first set of lightning talks, Tom Cramer, chair of the IIIF Consortium Executive Committee, spoke about IIIF Content Commons, a project he wants to see happen that would enhance the discoverability of content. He began by outlining the technical success and broad adoption of IIIF at a wide range of institutions but also bemoaned the fact that IIIF content is still hard to find. He proposed a new initiative to develop content aggregation solutions to remedy this.

In another talk, Tristan Roddis of Cogapp showed how the company built a new system for the British Library's Endangered Archives Programme, now incorporating images as well as audio files on the same site, using IIIF. This is part of the long recovery the British Library has been undergoing since its 2023 cyber attack.

Sonia Cook-Broen, a writer at TheTechMargin, gave a talk from the more esoteric, cyberpunk end of things, providing colorful visions of coupling IIIF, plus some artificial intelligence, with the InterPlanetary File System (IPFS) and a decentralized storage platform called Storacha. She observed that data impermanence is a big problem on the Internet and that, while IIIF contributes to solving many problems, it does nothing for this one. She showed a demo of her prototype site, Codex Protocol, which integrates with Storacha to find cultural-heritage objects online and store them on IPFS.

Cook-Broen's site also contains the COLLECTION_EXPLORER, a search engine of sorts for discovering IIIF content.

Alexis Pantos from the Museum of Cultural History and the University of Oslo, Norway, demoed some impressive 3D visuals provided by the BItFROST platform, based on the 3D Heritage Online Presenter (3DHOP). He showed an in-progress use case where archaeological artifacts had been 3D-scanned in context at excavations, then later compiled into a research environment where the objects were annotated, linked to their archaeological context, and could even be viewed individually or virtually placed back in situ at the excavation site. At least to a lay person with no training in archaeology, this seemed impressive. He said that, currently, too many manual steps go into combining the landscape-scale scans of excavations with models of artifacts, but his group is hoping to build better tooling and to find suitable representations for this relationship in the IIIF metadata.

Governed by consortium

The IIIF Consortium, formed in 2015, steers the development of the framework, hosts meetings, and moderates an online community of contributors and implementors. There is an annual in-person conference; the next one is coming up in June 2026 in the Netherlands. The consortium comprises 71 members from all around the world. Most members are academic institutions or non-governmental organizations involved with digital-humanities and cultural-heritage subjects—libraries, universities, and ministries of culture—but there are a few corporations as well.

IIIF has been around for roughly a decade and has gone through quite a few revisions but, at the end of the day, it is just a framework and a toolkit. Frameworks live and die by the people and organizations applying their creativity to make the most of them. To see what's out there, the "awesome-iiif" GitHub repo is a nice place to start. Some highlights: Zooniverse is a crowdsourcing platform for annotations and transcriptions, Canopy IIIF is a static-site generator for building IIIF-based exhibitions, and IMMARKUS is an experimental annotation platform that currently only runs in Chromium due to its reliance on some cutting-edge browser features.

Comments (5 posted)

Free software needs free tools

By Joe Brockmeier
March 3, 2026

CfgMgmtCamp

One of the contradictions of the modern open-source movement is that projects which respect user freedoms often rely on proprietary tools that do not: communities often turn to non-free software for code hosting, communication, and more. At Configuration Management Camp (CfgMgmtCamp) 2026, Jan Ainali spoke about the need for open-source projects to adopt open tools; he hoped to persuade new and mature projects to switch to open alternatives, even if just one tool, to reduce their dependencies on tech giants and support community-driven infrastructure.

[Jan Ainali]

Ainali does contract work for the Swedish chapter of the Wikimedia Foundation, called Wikimedia Sverige, through his company Open By Default. Wikimedia, of course, provides the MediaWiki software, hosts Wikipedia, and much more. He said that all of the tooling, everything in production, the analytics, and so forth is open source. "There is a very strong ethos in the Wikimedia movement to do it like that."

However, that ethos weakens the farther away one gets from development. "When you step away from development to the more peripheral parts of the workflow, it gets less and less open source in the tooling." For example, Wikimedia uses the proprietary Figma software for design, and its annual conference uses Zoom to record talks and publishes them on YouTube. Even projects that have a strong drive to do something open, he said, struggle to do everything using only open-source software.

He emphasized that the presentation was not a rant against open-source projects using proprietary software. He said that he understood that it might be challenging to use more open-source tools and to move away from proprietary ones. It is particularly difficult, he said, for projects that have to work with other parties which have constraints or requirements in the tools they use. "Even though I am going to say a lot of things here, it is all coming from a place of love and a wish for change."

Tools shape culture

Proprietary tools come with many kinds of restrictions, he said. For example, perhaps users are limited in the ways they can export data, or customize tools to suit specific workflows. There are many things that would be possible with open source that are not possible with proprietary tools; a project cannot make a tool its own if it does not have the ability to modify the software.

The tools also shape a project's culture, Ainali said. First, someone suggests using a tool that is not open source. "It's never with a bad intent. It's often like, oh, it has this feature that I cannot find anywhere else." But that is a slippery slope, he said, a bad spiral. Once the decision is made to use one proprietary tool, it becomes easier to do it the next time. "If our design guide is already in proprietary software, maybe the next thing in the toolchain could also be like that. You don't have the same incentive to stay open." That, in turn, leads to exclusion.

There are also instances where geopolitics come into play, such as the incident when the Organic Maps project lost access to GitHub, presumably because it had some contributors from Russia. "And this is not because GitHub has something against Russia, it's because of where it is located and its local laws." The flip side of that is that some contributors may not want to provide data to a platform that might be required to hand over data to its government, especially when that platform is in another country.

Even in the absence of political interference, he cautioned that dependency on closed platforms posed other problems. "They try to lure you in, and then to lock you in, so that it will be difficult to leave." It is especially easy for open-source projects to be lured in, he said, because many platforms start out with free tiers or special deals for nonprofits and open projects.

And, of course, there's this lovely term from Cory Doctorow, "enshittification". He defined a couple of phases of how things get worse over time. First, you get lured in, and when they have a very large user base, they feel like they can extract more and more value out of you. It's not like they deliberately try to make it worse for you; it's just going to become worse for you as a user. Maybe it becomes more expensive. Maybe they extract more data out of you. Maybe they are trying to monetize that data by selling targeted ads at the other end. So it's sort of just working towards something getting worse.

At the same time, proprietary cloud platforms and services get value from open-source projects, he said. The company gets metrics, usage data, bug reports, and maybe advertising that "this open-source system is using our product". Projects can also be victims of a company's whims. A platform can decide at any time to end its free tiers for open-source projects, or change its terms of service: "Suddenly it says in the new [terms] update that 'oh, now we're going to use your data for training AI'."

There are also scenarios where a company is not trying to take advantage of open-source projects; it simply makes a business decision to close down a platform or service that it no longer considers profitable. That leaves the open-source projects that depend on it in a bad spot: if a project had been using something that was open source, it could spin up a local deployment and maintain the software itself. With proprietary services, of course, that is not an option; once the plug is pulled, the party is over.

Losing contributors

Ainali said that choosing open tools is not "just a purity test". Some people will be discouraged from participating because they simply do not want to use proprietary tools, but if a project requires proprietary tools, it may mean that some people literally cannot participate. For example, if a tool requires macOS, then it excludes participants who do not use that operating system. In some parts of the world, he observed, people may only have the option of running an open-source operating system because everything else is too expensive.

Accessibility is another consideration. Many proprietary tools "are very slick, but they may not have good accessibility". Open-source tools may not be beautiful, but they are functional, he said.

Even if the project does not fully lose contributions from a person, it may not get full participation. Perhaps a person continues to make code contributions, but they do not join video calls to discuss project direction, or participate in text chats because the project uses a proprietary product for those activities.

So you're losing an important voice in your community. They might stay on the trusted old mailing list. And these people are often very experienced, and know very well how their data could be used. So they're happy voting with their feet and not going in there. They care a lot about their freedom and they care a lot about their data. On the flip side, they are also often very knowledgeable in open source, because they've learned that they don't want to be locked in with proprietary tools.

He encouraged projects to invest in their own ecosystems by deploying open-source tools, adding features if necessary, improving documentation, submitting bug reports, and so on. He anticipated a counter argument: "'The proprietary tools are so much better', you scream. 'We cannot use these [open] ones.' Well, we're getting ourselves in a convenience Catch-22". Ainali acknowledged that, sometimes, proprietary tools were better from a technical perspective. "But they're bad for your resilience, for your project sustainability. And you could be helping to improve those open tools instead."

As long as open projects are using the proprietary tools, they're providing the metrics to improve them. If the projects are paying for proprietary tools, then they're funding the improvement of those tools. "So instead, you should try to help the community catch up and expand." It will be difficult to break free of proprietary tools, he said, if projects keep using them and giving them all the benefits of their use.

Ainali also predicted that people would object to leaving proprietary platforms because "everybody's already on this platform"; he did not say it, but that seemed to be primarily directed at proprietary code forges such as GitHub. The network effects of such platforms do not last, he said. "We have seen plenty of social media platforms rise and go and other tools come and go", because they need to make a profit or perish. "Whereas maybe open-source tools are more resilient because they don't need the same extraction of value from their users".

Start small

Projects should experiment, he said; perhaps try to mirror Git repositories onto a freer platform. When projects need to choose a new tool for something, they should choose an open tool. Above all, he said that a project should listen to its community and start moving to open tools where it makes the most sense for the community. "Don't wait. It really should have started already, but it's never too late to start now. It's never a bad time." It is also not all or nothing, he said. Projects do not have to become "pure" overnight, or try to switch everything all at once. There are many places a project can start.

You pick one tool. You make one change. You evaluate: "did that work?" If it doesn't work, if it turns out you really need that feature, don't be afraid to roll back. Often it will go well with your community if you radiate the intent of why you want to make this change, so that everybody can see, "oh, this is coming for our sake in the long run. It will make us more sustainable." It is possible, and when it works, it's really contagious. And don't beat yourself up if you cannot do it all at once. It is okay to not be perfect.

He closed out his talk by arguing that projects had made a choice to be open source, and that choice should be reflected in the tools used by the projects as well. Open source, he said, is more than code and licenses: "it is a culture, it's a way of working, it's the community, and it's the freedom that these licenses allow us". Projects, he said, should not try to build that freedom on a foundation that they do not control.

[Thanks to the Linux Foundation, LWN's travel sponsor, for funding my travel to Ghent to attend CfgMgmtCamp.]

Comments (10 posted)

Page editor: Joe Brockmeier

Brief items

Security

CBP Tapped Into the Online Advertising Ecosystem To Track Peoples’ Movements (404 Media)

This 404 Media article looks at how the US Customs and Border Protection agency (CBP) is using location data from phones to track the location of people of interest.

Specifically, CBP says the data was in part sourced via real-time bidding, or RTB. Whenever an advertisement is displayed inside an app, a near instantaneous bidding process happens with companies vying to have their advert served to a certain demographic. A side effect of this is that surveillance firms, or rogue advertising companies working on their behalf, can observe this process and siphon information about mobile phones, including their location. All of this is essentially invisible to an ordinary phone user, but happens constantly.
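To make the leak concrete: a real-time-bidding exchange broadcasts a bid request describing the impression, the device, and often its location to every participating bidder, whether or not they win the auction. Below is a simplified, hand-written sketch in the style of the OpenRTB protocol that underlies most RTB traffic; the field names follow the OpenRTB convention, but the app, identifiers, and coordinates are invented for illustration:

```python
import json

# An illustrative OpenRTB-style bid request. Every bidder that
# receives it -- winner or not -- sees the advertising ID, IP
# address, and location it carries, which is what makes the
# ecosystem attractive to location-data brokers.
bid_request = {
    "id": "auction-1234",
    "app": {"bundle": "com.example.weatherapp"},
    "device": {
        "ifa": "38400000-8cf0-11bd-b23e-10b96e40000d",  # advertising ID
        "ip": "198.51.100.7",
        "geo": {"lat": 38.8977, "lon": -77.0365, "type": 1},  # GPS-derived
    },
    "imp": [{"id": "1", "banner": {"w": 320, "h": 50}}],
}

print(json.dumps(bid_request, indent=2))
```

A bidder (or a surveillance firm posing as one) can simply log the device and geo fields of every request it sees, without ever placing a winning bid.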

We should note that the minimal advertising shown on LWN is not delivered via this bidding system.

Comments (38 posted)

Garrett: To update blobs or not to update blobs

Matthew Garrett examines the factors that go into the decision about whether to install a firmware update or not.

I trust my CPU vendor. I don't trust my CPU vendor because I want to, I trust my CPU vendor because I have no choice. I don't think it's likely that my CPU vendor has designed a CPU that identifies when I'm generating cryptographic keys and biases the RNG output so my keys are significantly weaker than they look, but it's not literally impossible. I generate keys on it anyway, because what choice do I have? At some point I will buy a new laptop because Electron will no longer fit in 32GB of RAM and I will have to make the same affirmation of trust, because the alternative is that I just don't have a computer.

Comments (29 posted)

Kernel development

Kernel release status

The current development kernel is 7.0-rc2, released on March 1. Linus said:

So I'm not super-happy with how big this is, but I'm hoping it's just the random timing noise we see every once in a while where I just happen to get more pull requests one week, only for the next week to then be quieter.

This release, as of -rc2, has brought in 11,960 non-merge changes from 1,957 developers, 339 of whom are first-time kernel contributors. The release history looks like:

RC        Date        Commits
v7.0-rc1  2026-02-22  12468
v7.0-rc2  2026-03-01  434

This -rc2 does indeed contain just over 100 more commits than 6.19-rc2 did. See the (subscriber-only) KSDB 7.0 page for a lot more details.

Stable updates: 6.19.4 and 6.18.14 were released on February 26, followed one day later by 6.19.5 and 6.18.15 to fix a regression. The 6.19.6, 6.18.16, 6.12.75, 6.6.128, 6.1.165, 5.15.202, and 5.10.252 updates were released on March 4.

Comments (none posted)

Høiland-Jørgensen: The inner workings of TCP zero-copy

Toke Høiland-Jørgensen has posted an overview of how zero-copy networking works in the Linux kernel.

Since the memory is being copied directly from userspace to the network device, the userspace application has to keep it around unmodified, until it has finished sending. The sendmsg() syscall itself is asynchronous, and will return without waiting for this. Instead, once the memory buffers are no longer needed by the stack, the kernel will return a notification to userspace that the buffers can be reused.
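The send-then-notify dance described in the quote can be exercised from user space via the MSG_ZEROCOPY interface. The following sketch assumes a Linux kernel with SO_ZEROCOPY support (4.14 or later); the numeric constants are copied from the kernel headers because Python's socket module does not export them. It sends over a loopback connection and then reads the completion notification from the socket's error queue (on loopback the kernel falls back to copying, but the notification machinery works the same way):

```python
import socket
import struct
import time

# Constants from the Linux kernel headers; Python's socket
# module does not define these.
SO_ZEROCOPY = 60
MSG_ZEROCOPY = 0x4000000
SO_EE_ORIGIN_ZEROCOPY = 5

# Set up a loopback TCP connection.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
snd = socket.create_connection(srv.getsockname())
rcv, _ = srv.accept()

# Opt in to zero-copy, then send with MSG_ZEROCOPY; send()
# returns as soon as the kernel has queued the buffer, without
# waiting for transmission to finish.
snd.setsockopt(socket.SOL_SOCKET, SO_ZEROCOPY, 1)
snd.send(b"x" * 65536, MSG_ZEROCOPY)
rcv.recv(65536)

# The completion notification arrives on the error queue as a
# sock_extended_err control message; reads with MSG_ERRQUEUE
# never block, so poll until it shows up.
origin = first = last = None
for _ in range(100):
    try:
        _, ancdata, _, _ = snd.recvmsg(1024, 512, socket.MSG_ERRQUEUE)
    except BlockingIOError:
        time.sleep(0.01)
        continue
    for level, ctype, data in ancdata:
        # struct sock_extended_err: ee_errno, ee_origin, ee_type,
        # ee_code, ee_pad, ee_info (first send id), ee_data (last)
        ee = struct.unpack("=IBBBBII", data[:16])
        origin, first, last = ee[1], ee[5], ee[6]
        print(f"zero-copy completion for sends {first}..{last}")
    if origin is not None:
        break
```

Each MSG_ZEROCOPY send on a socket is assigned a sequential counter starting at zero, and one notification can cover a range of completed sends, which is why the kernel reports a first/last pair rather than a single identifier.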

Comments (none posted)

Quote of the week

I will again note that LTS kernels have been created using machine learning "AI" models composed of neural networks as early as 2018 to find kernel commits containing bug fixes that should be backported to the stable branches. Given that people seem to be throwing around "AI slop" without defining precisely what they mean by "AI", if we are sloppy about banning all code that has ever been built using AI-assisted tooling, you'd have to start shipping the Linux kernel back to the version used in Debian 8 "Jessie".
Ted Ts'o

Comments (none posted)

Distributions

Motorola announces a partnership with the GrapheneOS Foundation

Motorola has announced that it will be working with the GrapheneOS Foundation, a producer of a security-enhanced Android distribution. "Together, Motorola and the GrapheneOS Foundation will work to strengthen smartphone security and collaborate on future devices engineered with GrapheneOS compatibility." LWN looked at GrapheneOS last July.

Comments (11 posted)

Distributions quote of the week

Writing meaningless slop requires no creativity; writing really bad code requires human ingenuity.

procmail is still in the archive, for heaven's sake. [1]

I too am concerned about the potential degradation in quality of free software given the *volume* of bad code that people can generate using LLM agents, but the objectively worst software in the archive is the product of human ingenuity and I am dubious that's going to change.

Making rules that require us to make all sorts of guesswork judgments and that are effectively unenforceable in practice (no one is required to inform us if they use LLMs) strikes me as a recipe for endless future arguments, which doesn't seem very likely to improve the average quality of Debian packages. Or the experience of being a Debian Developer.

If we think software is bad, we should remove the software because it's bad. I am quite dubious that investigations into the software development tools used by upstream are going to give us much additional information on top of the sorts of metrics we already have readily available (bug rates, CVEs, user complaints, unexplained behavior changes between releases, regressions, lack of necessary feature development, etc.).

[1] For those who don't know the reference, this is not intended as a slam against procmail's functionality or against the people who have worked to keep it viable all these years, but is a reference to procmail's notoriously, uh, unique coding style and carefully (?) hand-coded security-critical string manipulation in C.

Russ Allbery

Comments (19 posted)

Development

Gram 1.0 released

Version 1.0 of Gram, an "opinionated fork of the Zed code editor", has been released. Gram removes telemetry, AI features, collaboration features, and more. It adds built-in documentation, support for additional languages, and tab-completion features similar to the Supertab plugin for Vim. The mission statement for the project explains:

At first, I tried to build some other efforts I found online to make Zed work without the AI features just so I could check it out, but didn't manage to get them to work. At some point, the curiosity turned into spite. I became determined to not only get the editor to run without all of the misfeatures, but to make it a full-blown fork of the project. Independent of corporate control, in the spirit of Vim and the late Bram Moolenaar who could have added subscription fees and abusive license agreements had he so wanted, but instead gave his work as a gift to the world and asked only for donations to a good cause close to his heart in return.

This is the result. Feel free to build it and see if it works for you. There is no license agreement or subscription beyond the open source license of the code (GPLv3). It is yours now, to do with as you please.

According to a blog post on the site, the plan for the editor is to diverge from Zed and proceed slowly.

Comments (25 posted)

groff 1.24.0 released

Version 1.24.0 of the groff text-formatting system has been released. Improvements include the ability to insert hyperlinks between man pages, a new polygon command for the pic preprocessor, various PDF-output improvements, and more.

Full Story (comments: 2)

Texinfo 7.3 released

Version 7.3 of Texinfo, the GNU documentation-formatting system, has been released. It contains a number of new features, performance improvements, and enhancements.

Full Story (comments: 1)

Development quote of the week

community: In software company writing, this means either "people who will do work for my company for free" or "people who will pick up after me after I move fast and break things."
Don Marti

Comments (none posted)

Page editor: Daroc Alden

Announcements

Newsletters

Distributions and system administration

Development

Meeting minutes

Miscellaneous

Calls for Presentations

CFP Deadlines: March 5, 2026 to May 4, 2026

The following listing of CFP deadlines is taken from the LWN.net CFP Calendar.

Deadline   Event dates        Event                                  Location
March 10   May 2              22nd Linux Infotag Augsburg            Augsburg, Germany
March 13   August 6–9         FOSSY 2026                             Vancouver, Canada
March 15   May 21–22          Linux Security Summit North America    Minneapolis, Minnesota, US
March 15   May 30–31          Journées du Logiciel Libre 2026        Lyon, France
March 18   June 18–20         Linux Audio Conference                 Maynooth, Ireland
March 29   May 29             Yocto Project Developer Day            Nice, France
March 31   June 6             Hong Kong Open Source Conference       Hong Kong, Hong Kong
April 7    August 8–9         UbuCon Asia 2026 @ COSCUP              Taipei, Taiwan
April 15   May 4–11           MiniDebConf Hamburg 2026               Hamburg, Germany
April 20   July 20–25         DebConf 26                             Santa Fe, Argentina
April 20   July 13–19         DebCamp 26                             Santa Fe, Argentina
April 23   October 5–7        Linux Plumbers Conference 2026         Prague, Czechia
April 30   September 29–30    devopsdays Berlin 2026                 Berlin, Germany

If the CFP deadline for your event does not appear here, please tell us about it.

Upcoming Events

Events: March 5, 2026 to May 4, 2026

The following event listing is taken from the LWN.net Calendar.

Date(s)          Event                                         Location
March 5–8        SCALE                                         Pasadena, CA, US
March 9–10       FOSSASIA Summit                               Bangkok, Thailand
March 16–17      FOSS Backstage                                Berlin, Germany
March 19         Open Tech Day 26: OpenTofu Edition            Nuremberg, Germany
March 23–26      KubeCon + CloudNativeCon Europe               Amsterdam, Netherlands
March 28         Central Pennsylvania Open Source Conference   Lancaster, Pennsylvania, US
March 28–29      Chemnitz Linux Days                           Chemnitz, Germany
March 28–29      InstallFest 2026                              Prague, Czechia
April 10–11      Grazer Linuxtage                              Graz, Austria
April 20–21      SambaXP                                       Göttingen, Germany
April 23         OpenSUSE Open Developers Summit               Prague, Czech Republic
April 25–26      Sesja Linuksowa (Linux Session)               Wrocław, Poland
April 27–28      foss-north                                    Gothenburg, Sweden
April 28–29      stackconf 2026                                Munich, Germany
April 29–May 1   Linaro Connect Madrid 2026                    Madrid, Spain
May 2            22nd Linux Infotag Augsburg                   Augsburg, Germany

If your event does not appear here, please tell us about it.

Security updates

Alert summary February 26, 2026 to March 4, 2026

Dist. ID Release Package Date
AlmaLinux ALSA-2026:3208 10 389-ds-base 2026-02-26
AlmaLinux ALSA-2026:3189 9 389-ds-base 2026-02-26
AlmaLinux ALSA-2026:3297 10 buildah 2026-02-26
AlmaLinux ALSA-2026:3298 9 buildah 2026-02-26
AlmaLinux ALSA-2026:3428 8 container-tools:rhel8 2026-03-03
AlmaLinux ALSA-2026:3341 9 containernetworking-plugins 2026-03-02
AlmaLinux ALSA-2026:3361 10 firefox 2026-02-26
AlmaLinux ALSA-2026:3338 8 firefox 2026-03-03
AlmaLinux ALSA-2026:3339 9 firefox 2026-02-26
AlmaLinux ALSA-2026:3068 10 freerdp 2026-02-26
AlmaLinux ALSA-2026:3334 8 freerdp 2026-02-26
AlmaLinux ALSA-2026:3067 9 freerdp 2026-02-26
AlmaLinux ALSA-2026:3477 10 gnutls 2026-03-02
AlmaLinux ALSA-2026:3668 9 go-rpm-macros 2026-03-04
AlmaLinux ALSA-2026:3092 10 golang-github-openprinting-ipp-usb 2026-02-26
AlmaLinux ALSA-2026:3035 10 grafana-pcp 2026-02-26
AlmaLinux ALSA-2026:2721 10 kernel 2026-02-26
AlmaLinux ALSA-2026:3275 10 kernel 2026-03-02
AlmaLinux ALSA-2026:3464 8 kernel 2026-03-03
AlmaLinux ALSA-2026:2722 9 kernel 2026-02-26
AlmaLinux ALSA-2026:3066 9 kernel 2026-02-26
AlmaLinux ALSA-2026:3488 9 kernel 2026-03-04
AlmaLinux ALSA-2026:3463 8 kernel-rt 2026-03-03
AlmaLinux ALSA-2026:3405 9 libpng 2026-03-02
AlmaLinux ALSA-2026:3031 9 libpng15 2026-02-26
AlmaLinux ALSA-2026:3407 8 mingw-fontconfig 2026-03-03
AlmaLinux ALSA-2026:3033 10 munge 2026-02-26
AlmaLinux ALSA-2026:3034 9 munge 2026-02-26
AlmaLinux ALSA-2026:3638 9 nginx:1.24 2026-03-04
AlmaLinux ALSA-2026:2783 9 nodejs:20 2026-02-26
AlmaLinux ALSA-2026:2782 9 nodejs:22 2026-02-26
AlmaLinux ALSA-2026:3336 10 podman 2026-02-26
AlmaLinux ALSA-2026:3337 9 podman 2026-02-26
AlmaLinux ALSA-2026:3094 10 protobuf 2026-02-26
AlmaLinux ALSA-2026:3095 9 protobuf 2026-02-26
AlmaLinux ALSA-2026:3354 10 python-pyasn1 2026-02-26
AlmaLinux ALSA-2026:3359 9 python-pyasn1 2026-02-26
AlmaLinux ALSA-2026:3291 9 runc 2026-02-26
AlmaLinux ALSA-2026:3343 10 skopeo 2026-02-26
AlmaLinux ALSA-2026:3340 9 skopeo 2026-03-02
AlmaLinux ALSA-2026:3516 9 thunderbird 2026-03-04
AlmaLinux ALSA-2026:3507 9 valkey 2026-03-04
Debian DSA-6151-1 stable chromium 2026-02-27
Debian DLA-4496-1 LTS firefox-esr 2026-03-02
Debian DSA-6148-1 stable firefox-esr 2026-02-25
Debian DSA-6156-1 stable gimp 2026-03-03
Debian DLA-4493-1 LTS libstb 2026-02-26
Debian DSA-6153-1 stable lxd 2026-03-01
Debian DSA-6149-1 stable nss 2026-02-26
Debian DLA-4494-1 LTS orthanc 2026-02-28
Debian DSA-6154-1 stable php8.2 2026-03-02
Debian DSA-6150-1 stable python-django 2026-02-26
Debian DSA-6155-1 stable spip 2026-03-03
Debian DLA-4495-1 LTS thunderbird 2026-02-28
Debian DSA-6152-1 stable thunderbird 2026-02-28
Fedora FEDORA-2026-27ce708600 F43 389-ds-base 2026-02-26
Fedora FEDORA-2026-e0e9d0d54a F42 apt 2026-03-04
Fedora FEDORA-2026-1c47e433df F43 apt 2026-03-04
Fedora FEDORA-2026-405dab5af2 F42 avr-binutils 2026-03-04
Fedora FEDORA-2026-10cccbf560 F43 avr-binutils 2026-03-04
Fedora FEDORA-2026-a48b5f36ec F42 cef 2026-03-02
Fedora FEDORA-2026-0bced5158d F43 cef 2026-03-02
Fedora FEDORA-2026-7ba8ba6dff F42 chromium 2026-02-26
Fedora FEDORA-2026-2e8248f158 F43 chromium 2026-03-01
Fedora FEDORA-2026-d51972eee3 F42 erlang 2026-03-03
Fedora FEDORA-2026-8a15e7a423 F43 erlang 2026-03-03
Fedora FEDORA-2026-0709b275a5 F42 firefox 2026-02-27
Fedora FEDORA-2026-766e3a6ec8 F43 firefox 2026-02-26
Fedora FEDORA-2026-be60dd75d9 F43 freerdp 2026-02-27
Fedora FEDORA-2026-21a2f3709a F43 gh 2026-02-27
Fedora FEDORA-2026-3e21dad421 F43 gimp 2026-03-01
Fedora FEDORA-2026-c2b5451b35 F42 keylime 2026-03-04
Fedora FEDORA-2026-e5027335a3 F43 keylime 2026-03-04
Fedora FEDORA-2026-c2b5451b35 F42 keylime-agent-rust 2026-03-04
Fedora FEDORA-2026-e5027335a3 F43 keylime-agent-rust 2026-03-04
Fedora FEDORA-2026-814a1deec8 F43 libmaxminddb 2026-02-27
Fedora FEDORA-2026-ebf9437c9e F42 munge 2026-02-26
Fedora FEDORA-2026-ec8baadd48 F43 munge 2026-02-26
Fedora FEDORA-2026-889607c7a0 F42 nextcloud 2026-03-02
Fedora FEDORA-2026-ae48fa379e F43 nextcloud 2026-03-02
Fedora FEDORA-2026-0709b275a5 F42 nss 2026-02-27
Fedora FEDORA-2026-49b5d5c5e6 F43 opentofu 2026-02-26
Fedora FEDORA-2026-b0bf6e9c9b F42 perl-Crypt-URandom 2026-03-04
Fedora FEDORA-2026-88f1155b8b F43 perl-Crypt-URandom 2026-03-04
Fedora FEDORA-2026-9a4d6dd8eb F42 pgadmin4 2026-03-02
Fedora FEDORA-2026-a0d40b97a8 F43 pgadmin4 2026-03-02
Fedora FEDORA-2026-e0e9d0d54a F42 python-apt 2026-03-04
Fedora FEDORA-2026-1c47e433df F43 python-apt 2026-03-04
Fedora FEDORA-2026-ca3d81129a F42 python-django4.2 2026-03-01
Fedora FEDORA-2026-00b5bf3150 F42 python-django5 2026-02-28
Fedora FEDORA-2026-3adb735295 F43 python-django5 2026-02-28
Fedora FEDORA-2026-0d673fa503 F42 python-pillow 2026-03-03
Fedora FEDORA-2026-b1b37b00ef F42 python3-docs 2026-02-28
Fedora FEDORA-2026-27ce708600 F43 python3-docs 2026-02-26
Fedora FEDORA-2026-4e99b7fe5f F43 python3.12 2026-03-02
Fedora FEDORA-2026-b1b37b00ef F42 python3.13 2026-02-28
Fedora FEDORA-2026-27ce708600 F43 python3.14 2026-02-26
Fedora FEDORA-2026-10af0bfadd F42 python3.15 2026-02-27
Fedora FEDORA-2026-cf721e4319 F43 python3.15 2026-02-27
Fedora FEDORA-2026-cad5404d98 F42 python3.9 2026-02-28
Fedora FEDORA-2026-289d6d4f69 F43 python3.9 2026-02-28
Fedora FEDORA-2026-de8c9d7b6f F42 rsync 2026-03-04
Fedora FEDORA-2026-c6d7c9de1d F43 udisks2 2026-02-27
Fedora FEDORA-2026-fea53fa4da F42 vim 2026-02-26
Fedora FEDORA-2026-120874f63c F43 vim 2026-02-26
Oracle ELSA-2026-3297 OL10 buildah 2026-02-25
Oracle ELSA-2026-3298 OL9 buildah 2026-02-26
Oracle ELSA-2026-3428 OL8 container-tools:rhel8 2026-02-27
Oracle ELSA-2026-3341 OL9 containernetworking-plugins 2026-02-25
Oracle ELSA-2026-3361 OL10 firefox 2026-02-26
Oracle ELSA-2026-3338 OL8 firefox 2026-02-26
Oracle ELSA-2026-3339 OL9 firefox 2026-02-26
Oracle ELSA-2026-3334 OL8 freerdp 2026-02-26
Oracle ELSA-2026-1590 OL7 gimp 2026-02-25
Oracle ELSA-2026-3188 OL8 grafana 2026-02-25
Oracle ELSA-2026-3187 OL8 grafana-pcp 2026-02-25
Oracle ELSA-2026-3275 OL10 kernel 2026-02-26
Oracle ELSA-2026-3083 OL8 kernel 2026-02-25
Oracle ELSA-2026-3405 OL9 libpng 2026-02-26
Oracle ELSA-2026-3407 OL8 mingw-fontconfig 2026-02-27
Oracle ELSA-2026-3336 OL10 podman 2026-02-25
Oracle ELSA-2026-3337 OL9 podman 2026-02-26
Oracle ELSA-2026-3354 OL10 python-pyasn1 2026-02-26
Oracle ELSA-2026-3359 OL9 python-pyasn1 2026-02-26
Oracle ELSA-2026-3291 OL9 runc 2026-02-25
Oracle ELSA-2026-3343 OL10 skopeo 2026-02-26
Oracle ELSA-2026-3340 OL9 skopeo 2026-02-25
Oracle ELSA-2026-3443 OL10 valkey 2026-02-26
Red Hat RHSA-2026:3428-01 EL8 container-tools:rhel8 2026-02-27
Red Hat RHSA-2026:3669-01 EL10 go-rpm-macros 2026-03-04
Red Hat RHSA-2026:3668-01 EL9 go-rpm-macros 2026-03-04
Red Hat RHSA-2026:2708-01 EL8 go-toolset:rhel8 2026-02-26
Red Hat RHSA-2026:3468-01 EL8.2 go-toolset:rhel8 2026-03-03
Red Hat RHSA-2026:3470-01 EL8.4 go-toolset:rhel8 2026-03-03
Red Hat RHSA-2026:3489-01 EL8.6 go-toolset:rhel8 2026-03-03
Red Hat RHSA-2026:3471-01 EL8.8 go-toolset:rhel8 2026-03-03
Red Hat RHSA-2026:2706-01 EL10 golang 2026-02-26
Red Hat RHSA-2026:3192-01 EL10.0 golang 2026-02-26
Red Hat RHSA-2026:2709-01 EL9 golang 2026-02-26
Red Hat RHSA-2026:3473-01 EL9.0 golang 2026-03-03
Red Hat RHSA-2026:3472-01 EL9.2 golang 2026-03-03
Red Hat RHSA-2026:3469-01 EL9.4 golang 2026-03-03
Red Hat RHSA-2026:3193-01 EL9.6 golang 2026-02-26
Red Hat RHSA-2026:3092-01 EL10 golang-github-openprinting-ipp-usb 2026-02-26
Red Hat RHSA-2026:3188-01 EL8 grafana 2026-02-26
Red Hat RHSA-2026:3187-01 EL8 grafana-pcp 2026-02-26
Red Hat RHSA-2026:0247-01 EL9 mariadb:10.11 2026-02-26
Red Hat RHSA-2026:0334-01 EL9.6 mariadb:10.11 2026-02-26
Red Hat RHSA-2026:3337-01 EL9 podman 2026-02-26
Red Hat RHSA-2026:3340-01 EL9 skopeo 2026-02-26
Red Hat RHSA-2026:3506-01 EL10.0 yggdrasil 2026-03-03
Red Hat RHSA-2026:3699-01 EL10.0 yggdrasil-worker-package-manager 2026-03-04
Slackware SSA:2026-059-01 gvfs 2026-02-28
Slackware SSA:2026-058-01 mozilla 2026-02-27
Slackware SSA:2026-062-01 python3 2026-03-03
Slackware SSA:2026-059-02 telnet 2026-02-28
SUSE openSUSE-SU-2026:10267-1 TW ImageMagick 2026-02-28
SUSE openSUSE-SU-2026:20270-1 oS16.0 autogen 2026-02-27
SUSE SUSE-SU-2026:20525-1 SLE-m6.0 avahi 2026-02-27
SUSE SUSE-SU-2026:20491-1 SLE-m6.1 avahi 2026-02-27
SUSE SUSE-SU-2026:0759-1 SLE15 busybox 2026-03-03
SUSE SUSE-SU-2026:0758-1 SLE15 oS15.5 oS15.6 busybox 2026-03-03
SUSE openSUSE-SU-2026:10241-1 TW cacti 2026-02-25
SUSE openSUSE-SU-2026:20277-1 oS16.0 chromium 2026-02-27
SUSE openSUSE-SU-2026:10268-1 TW cockpit-356 2026-02-28
SUSE SUSE-SU-2026:20454-1 SLE-m6.0 cockpit 2026-02-26
SUSE openSUSE-SU-2026:10250-1 TW cockpit-machines-348 2026-02-26
SUSE openSUSE-SU-2026:10251-1 TW cockpit-packages 2026-02-26
SUSE openSUSE-SU-2026:10269-1 TW cockpit-podman-120 2026-02-28
SUSE SUSE-SU-2026:20494-1 SLE-m6.1 cockpit-podman 2026-02-27
SUSE openSUSE-SU-2026:10252-1 TW cockpit-repos 2026-02-26
SUSE openSUSE-SU-2026:10253-1 TW cockpit-subscriptions 2026-02-26
SUSE openSUSE-SU-2026:20279-1 oS16.0 containerized-data-importer 2026-02-28
SUSE SUSE-SU-2026:0777-1 SLE15 oS15.4 oS15.6 cosign 2026-03-03
SUSE SUSE-SU-2026:20452-1 SLE-m6.0 crun 2026-02-26
SUSE SUSE-SU-2026:20528-1 SLE-m6.0 cups 2026-03-03
SUSE SUSE-SU-2026:20535-1 SLE-m6.1 cups 2026-03-04
SUSE openSUSE-SU-2026:10260-1 TW digger-cli 2026-02-27
SUSE SUSE-SU-2026:0772-1 SLE12 docker 2026-03-03
SUSE SUSE-SU-2026:0666-1 SLE15 SLE-m5.2 SLE-m5.3 SLE-m5.4 SLE-m5.5 oS15.6 docker 2026-02-26
SUSE openSUSE-SU-2026:10261-1 TW docker 2026-02-27
SUSE SUSE-SU-2026:20451-1 SLE-m6.0 docker-compose 2026-02-26
SUSE SUSE-SU-2026:0641-1 SLE12 docker-stable 2026-02-26
SUSE SUSE-SU-2026:0659-1 SLE15 oS15.6 docker-stable 2026-02-26
SUSE openSUSE-SU-2026:20262-1 oS16.0 docker-stable 2026-02-27
SUSE SUSE-SU-2026:0661-1 SLE15 oS15.3 oS15.6 erlang 2026-02-26
SUSE SUSE-SU-2026:0776-1 SLE15 oS15.4 evolution-data-server 2026-03-03
SUSE SUSE-SU-2026:0775-1 SLE15 oS15.6 evolution-data-server 2026-03-03
SUSE openSUSE-SU-2026:10262-1 TW evolution-data-server 2026-02-27
SUSE SUSE-SU-2026:20481-1 SLE-m6.1 expat 2026-02-27
SUSE SUSE-SU-2026:0647-1 SLE12 expat 2026-02-26
SUSE SUSE-SU-2026:0646-1 SLE15 expat 2026-02-26
SUSE openSUSE-SU-2026:10257-1 TW firefox 2026-02-27
SUSE openSUSE-SU-2026:10242-1 TW firefox-esr 2026-02-25
SUSE openSUSE-SU-2026:20291-1 oS16.0 fluidsynth 2026-03-02
SUSE SUSE-SU-2026:0762-1 SLE12 freerdp 2026-03-03
SUSE SUSE-SU-2026:0763-1 SLE15 freerdp 2026-03-03
SUSE SUSE-SU-2026:0656-1 SLE15 oS15.4 freerdp 2026-02-26
SUSE SUSE-SU-2026:0649-1 SLE15 oS15.6 freerdp 2026-02-26
SUSE SUSE-SU-2026:0761-1 SLE15 oS15.6 freerdp 2026-03-03
SUSE SUSE-SU-2026:0683-1 SLE15 freerdp2 2026-02-27
SUSE openSUSE-SU-2026:10243-1 TW freerdp2 2026-02-25
SUSE SUSE-SU-2026:0665-1 SLE15 frr 2026-02-26
SUSE SUSE-SU-2026:0684-1 SLE15 oS15.4 oS15.6 gimp 2026-02-27
SUSE SUSE-SU-2026:20446-1 SLE-m6.0 glib2 2026-02-26
SUSE SUSE-SU-2026:20493-1 SLE-m6.1 glib2 2026-02-27
SUSE SUSE-SU-2026:20527-1 SLE-m6.0 glibc 2026-02-27
SUSE SUSE-SU-2026:20536-1 SLE-m6.1 glibc 2026-03-04
SUSE SUSE-SU-2026:0680-1 SLE12 glibc 2026-02-27
SUSE SUSE-SU-2026:0766-1 oS15.6 gnome-remote-desktop 2026-03-03
SUSE SUSE-SU-2026:0687-1 SLE15 go1 2026-02-27
SUSE SUSE-SU-2026:0789-1 SLE15 oS15.6 go1.24-openssl 2026-03-03
SUSE SUSE-SU-2026:0790-1 SLE15 go1.25-openssl 2026-03-03
SUSE SUSE-SU-2026:0760-1 SLE15 oS15.6 go1.25-openssl 2026-03-03
SUSE SUSE-SU-2026:20483-1 SLE-m6.1 google-guest-agent 2026-02-27
SUSE SUSE-SU-2026:20486-1 SLE-m6.1 google-osconfig-agent 2026-02-27
SUSE openSUSE-SU-2026:10270-1 TW gosec 2026-02-28
SUSE SUSE-SU-2026:0757-1 oS15.6 govulncheck-vulndb 2026-03-03
SUSE SUSE-SU-2026:0694-1 SLE-m5.2 SLE-m5.3 SLE-m5.4 SLE-m5.5 oS15.3 gpg2 2026-02-27
SUSE SUSE-SU-2026:20444-1 SLE-m6.0 gpg2 2026-02-26
SUSE SUSE-SU-2026:20487-1 SLE-m6.1 gpg2 2026-02-27
SUSE openSUSE-SU-2026:10275-1 TW gvfs 2026-03-02
SUSE openSUSE-SU-2026:20290-1 oS16.0 haproxy 2026-03-02
SUSE openSUSE-SU-2026:10263-1 TW heroic-games-launcher 2026-02-27
SUSE SUSE-SU-2026:20473-1 SLE-m6.0 kernel 2026-02-26
SUSE SUSE-SU-2026:20479-1 SLE-m6.0 kernel 2026-02-26
SUSE SUSE-SU-2026:20478-1 SLE-m6.0 kernel 2026-02-26
SUSE SUSE-SU-2026:20477-1 SLE-m6.0 kernel 2026-02-26
SUSE SUSE-SU-2026:20520-1 SLE-m6.0 SLE-m6.1 kernel 2026-02-27
SUSE SUSE-SU-2026:20498-1 SLE-m6.0 SLE-m6.1 kernel 2026-02-27
SUSE SUSE-SU-2026:20519-1 SLE-m6.0 SLE-m6.1 kernel 2026-02-27
SUSE SUSE-SU-2026:20496-1 SLE-m6.0 SLE-m6.1 kernel 2026-02-27
SUSE SUSE-SU-2026:0688-1 SLE11 kernel 2026-02-27
SUSE openSUSE-SU-2026:20287-1 SLE16 oS16.0 kernel 2026-02-28
SUSE SUSE-SU-2026:20450-1 SLE-m6.0 kernel-firmware 2026-02-26
SUSE SUSE-SU-2026:20495-1 SLE-m6.1 kernel-firmware 2026-02-27
SUSE openSUSE-SU-2026:20281-1 oS16.0 kubevirt 2026-02-28
SUSE openSUSE-SU-2026:10272-1 TW libIex-3_4-33 2026-02-28
SUSE SUSE-SU-2026:0648-1 SLE15 libjxl 2026-02-26
SUSE openSUSE-SU-2026:10271-1 TW libjxl-devel 2026-02-28
SUSE SUSE-SU-2026:20523-1 SLE-m6.0 libpng16 2026-02-27
SUSE SUSE-SU-2026:20530-1 SLE-m6.1 libpng16 2026-03-04
SUSE SUSE-SU-2026:20448-1 SLE-m6.0 libsodium 2026-02-26
SUSE SUSE-SU-2026:20484-1 SLE-m6.1 libsodium 2026-02-27
SUSE openSUSE-SU-2026:10246-1 TW libsoup-2_4-1 2026-02-25
SUSE openSUSE-SU-2026:10276-1 TW libsoup-3_0-0 2026-03-02
SUSE SUSE-SU-2026:0658-1 SLE-m5.2 libsoup 2026-02-26
SUSE SUSE-SU-2026:0792-1 SLE-m5.2 libsoup 2026-03-04
SUSE SUSE-SU-2026:20445-1 SLE-m6.0 libsoup 2026-02-26
SUSE SUSE-SU-2026:20529-1 SLE-m6.0 libsoup 2026-03-03
SUSE SUSE-SU-2026:0703-1 SLE12 libsoup 2026-03-02
SUSE SUSE-SU-2026:0689-1 SLE15 oS15.4 libsoup 2026-02-27
SUSE SUSE-SU-2026:0690-1 SLE15 oS15.6 libsoup 2026-02-27
SUSE SUSE-SU-2026:0788-1 SLE15 oS15.6 libsoup 2026-03-03
SUSE SUSE-SU-2026:0657-1 SLE15 oS15.6 libsoup2 2026-02-26
SUSE openSUSE-SU-2026:20283-1 oS16.0 libsoup2 2026-02-28
SUSE SUSE-SU-2026:20524-1 SLE-m6.0 libssh 2026-02-27
SUSE SUSE-SU-2026:20531-1 SLE-m6.1 libssh 2026-03-04
SUSE SUSE-SU-2026:0778-1 SLE12 libssh 2026-03-03
SUSE SUSE-SU-2026:0779-1 SLE15 oS15.6 libssh 2026-03-03
SUSE openSUSE-SU-2026:10273-1 TW libudisks2-0 2026-02-28
SUSE openSUSE-SU-2026:10274-1 TW libwireshark19 2026-02-28
SUSE SUSE-SU-2026:0782-1 SLE12 libxml2 2026-03-03
SUSE SUSE-SU-2026:0740-1 SLE-m5.2 mozilla-nss 2026-03-02
SUSE SUSE-SU-2026:0660-1 oS15.4 openvswitch 2026-02-26
SUSE SUSE-SU-2026:0781-1 SLE15 oS15.6 patch 2026-03-03
SUSE SUSE-SU-2026:0768-1 SLE15 postgresql14 2026-03-03
SUSE SUSE-SU-2026:0786-1 SLE15 oS15.6 postgresql14 2026-03-03
SUSE SUSE-SU-2026:0770-1 SLE15 postgresql15 2026-03-03
SUSE SUSE-SU-2026:0771-1 SLE15 oS15.6 postgresql15 2026-03-03
SUSE SUSE-SU-2026:0784-1 SLE15 postgresql16 2026-03-03
SUSE SUSE-SU-2026:0787-1 SLE15 postgresql17 2026-03-03
SUSE SUSE-SU-2026:0785-1 SLE12 postgresql18 2026-03-03
SUSE SUSE-SU-2026:0769-1 SLE15 postgresql18 2026-03-03
SUSE SUSE-SU-2026:20490-1 SLE-m6.1 protobuf 2026-02-27
SUSE SUSE-SU-2026:0663-1 SLE12 python 2026-02-26
SUSE SUSE-SU-2026:0774-1 SLE15 oS15.6 python 2026-03-03
SUSE openSUSE-SU-2026:20292-1 oS16.0 python-azure-core 2026-03-02
SUSE SUSE-SU-2026:20447-1 SLE-m6.0 python-pyasn1 2026-02-26
SUSE SUSE-SU-2026:20482-1 SLE-m6.1 python-pyasn1 2026-02-27
SUSE SUSE-SU-2026:0623-1 SLE12 python-tornado 2026-02-25
SUSE SUSE-SU-2026:20443-1 SLE-m6.0 python-urllib3 2026-02-26
SUSE SUSE-SU-2026:20485-1 SLE-m6.1 python-urllib3 2026-02-27
SUSE SUSE-SU-2026:0635-1 SLE15 oS15.6 python-urllib3_1 2026-02-25
SUSE openSUSE-SU-2026:20271-1 oS16.0 python-urllib3_1 2026-02-27
SUSE SUSE-SU-2026:0645-1 SLE12 python3 2026-02-26
SUSE SUSE-SU-2026:0664-1 SLE15 SLE-m5.2 SLE-m5.3 SLE-m5.4 SLE-m5.5 oS15.3 oS15.6 python3 2026-02-26
SUSE openSUSE-SU-2026:10247-1 TW python311-Django4 2026-02-25
SUSE openSUSE-SU-2026:10264-1 TW python311-Flask 2026-02-27
SUSE SUSE-SU-2026:0693-1 MP4.3 SLE15 oS15.4 python311 2026-02-27
SUSE SUSE-SU-2026:0767-1 SLE15 oS15.6 python311 2026-03-03
SUSE SUSE-SU-2026:0644-1 SLE15 oS15.6 python312 2026-02-26
SUSE SUSE-SU-2026:0642-1 SLE15 python313 2026-02-26
SUSE SUSE-SU-2026:0643-1 SLE15 oS15.3 oS15.6 python39 2026-02-26
SUSE SUSE-SU-2026:0662-1 oS15.6 qemu 2026-02-26
SUSE SUSE-SU-2026:0650-1 oS15.6 redis 2026-02-26
SUSE SUSE-SU-2026:0667-1 oS15.6 redis7 2026-02-26
SUSE openSUSE-SU-2026:10256-1 TW regclient 2026-02-26
SUSE SUSE-SU-2026:20526-1 SLE-m6.0 rust-keylime 2026-02-27
SUSE SUSE-SU-2026:20534-1 SLE-m6.1 rust-keylime 2026-03-04
SUSE SUSE-SU-2026:0741-1 SLE15 SLE-m5.2 SLE-m5.3 SLE-m5.4 SLE-m5.5 oS15.3 oS15.6 shim 2026-03-02
SUSE SUSE-SU-2026:0765-1 SLE15 smc-tools 2026-03-03
SUSE SUSE-SU-2026:0692-1 SLE15 oS15.6 thunderbird 2026-02-27
SUSE SUSE-SU-2026:0780-1 SLE15 oS15.6 tracker-miners 2026-03-03
SUSE SUSE-SU-2026:20522-1 SLE-m6.0 ucode-intel 2026-02-27
SUSE SUSE-SU-2026:0670-1 SLE11 ucode-intel 2026-02-26
SUSE SUSE-SU-2026:0669-1 SLE12 ucode-intel 2026-02-26
SUSE SUSE-SU-2026:0668-1 SLE15 SLE-m5.2 SLE-m5.3 SLE-m5.4 SLE-m5.5 oS15.6 ucode-intel 2026-02-26
SUSE SUSE-SU-2026:0685-1 oS15.6 valkey 2026-02-27
SUSE SUSE-SU-2026:0783-1 SLE15 SLE-m5.5 oS15.5 oS15.6 zlib 2026-03-03
Ubuntu USN-8045-1 14.04 16.04 18.04 20.04 22.04 24.04 25.10 ceph 2026-02-25
Ubuntu USN-8062-2 14.04 16.04 18.04 20.04 curl 2026-03-03
Ubuntu USN-5376-6 18.04 git 2026-03-02
Ubuntu USN-5376-4 20.04 22.04 git 2026-02-27
Ubuntu USN-5376-5 20.04 22.04 git 2026-02-27
Ubuntu USN-8069-1 14.04 16.04 18.04 20.04 22.04 24.04 imagemagick 2026-03-04
Ubuntu USN-8068-1 16.04 18.04 20.04 22.04 24.04 25.10 intel-microcode 2026-03-03
Ubuntu USN-8070-1 16.04 linux, linux-aws, linux-kvm 2026-03-04
Ubuntu USN-8060-5 20.04 22.04 linux-aws, linux-aws-5.15, linux-gcp-5.15, linux-hwe-5.15, linux-ibm, linux-ibm-5.15, linux-nvidia-tegra-5.15, linux-nvidia-tegra-igx, linux-oracle-5.15 2026-03-04
Ubuntu USN-8059-6 22.04 24.04 linux-aws, linux-aws-6.8, linux-ibm, linux-ibm-6.8, linux-xilinx 2026-02-26
Ubuntu USN-8060-6 22.04 linux-aws-fips 2026-03-04
Ubuntu USN-7990-6 18.04 20.04 linux-raspi, linux-raspi-5.4 2026-03-03
Ubuntu USN-8067-1 16.04 20.04 mailman 2026-03-02
Ubuntu USN-8064-1 14.04 16.04 18.04 mongodb 2026-02-25
Ubuntu USN-8063-1 22.04 24.04 25.10 protobuf 2026-02-25
Ubuntu USN-8065-1 22.04 24.04 python-authlib 2026-02-26
Ubuntu USN-8058-1 20.04 22.04 24.04 25.10 rlottie 2026-02-26
Ubuntu USN-8066-1 20.04 22.04 24.04 25.10 ruby-rack 2026-02-26

Kernel patches of interest

Kernel releases

Linus Torvalds Linux 7.0-rc2 Mar 01
Sasha Levin Linux 6.19.6 Mar 04
Greg Kroah-Hartman Linux 6.19.5 Feb 27
Greg Kroah-Hartman Linux 6.19.4 Feb 26
Sasha Levin Linux 6.18.16 Mar 04
Greg Kroah-Hartman Linux 6.18.15 Feb 27
Greg Kroah-Hartman Linux 6.18.14 Feb 26
Sasha Levin Linux 6.12.75 Mar 04
Daniel Wagner v6.12.74-rt16 Mar 02
Sasha Levin Linux 6.6.128 Mar 04
Sasha Levin Linux 6.1.165 Mar 04
Sasha Levin Linux 5.15.202 Mar 04
Sasha Levin Linux 5.10.252 Mar 04

Architecture-specific

Build system

Alan Maguire Add BTF layout to BTF Feb 26

Core kernel

Development tools

Device drivers

Larysa Zaremba libeth and full XDP for ixgbevf Feb 25
illusion.wang nbl driver for Nebulamatrix NICs Feb 26
Ioana Ciocoi-Radulescu accel: New driver for NXP's Neutron NPU Feb 26
Maciek Machnikowski Implement PTP support in netdevsim Feb 25
Bastien Curutchet (Schneider Electric) net: dsa: microchip: Add PTP support for the KSZ8463 Feb 26
Waqar Hameed Add driver for TI BQ25630 charger Feb 27
Dave Stevenson Raspberry Pi HEVC decoder driver Feb 27
David Heidelberg via B4 Relay Input: support for STM FTS5 Mar 01
Mike Marciniszyn (Meta) eth fbnic: Add fbnic self tests Mar 01
Alexandre Courbot gpu: nova-core: add Turing support Mar 01
Bin Du Add AMD ISP4 driver Mar 02
Griffin Kroah-Hartman Add support for Awinic AW86938 haptic driver Mar 02
Sriharsha Basavapatna RDMA/bnxt_re: Support uapi extensions Mar 02
Thomas Hellström Two-pass MMU interval notifiers Mar 02
Ovidiu Panait Add versaclock3 support for RZ/V2H Mar 02
Cristian Ciocaltea Add HDMI 2.0 support to DW HDMI QP TX Mar 03
Jingyuan Liang Add spi-hid transport driver Mar 03
Rodrigo Alencar via B4 Relay ADF41513/ADF41510 PLL frequency synthesizers Mar 03
Caleb James DeLisle mips: econet: Add clk/reset and PCIe support Mar 03
Xianwei Zhao via B4 Relay Add Amlogic general DMA Mar 04
Laurentiu Palcu Add support for i.MX94 DCIF Mar 04
John Erasmus Mari Geronimo Add support for Analog Devices MAX30210 Mar 04
Krzysztof Kozlowski drm/msm: Add Qualcomm Eliza SoC support Mar 04
Russell King (Oracle) net: stmmac: improve PCS support Mar 04
Ratheesh Kannoth octeontx2-af: npc: Enhancements. Mar 04
Nicolas Frattaroli MediaTek UFS Cleanup and MT8196 Enablement Mar 04

Device-driver infrastructure

Filesystems and block layer

Memory management

Networking

Security-related

Virtualization and containers

Miscellaneous

Matthew Wood rust: module parameter extensions Feb 26
David Laight Enhance printf() Mar 02
Pablo Neira Ayuso iptables 1.8.13 release Mar 04

Page editor: Joe Brockmeier


Copyright © 2026, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds