LWN.net Weekly Edition for January 15, 2026

Welcome to the LWN.net Weekly Edition for January 15, 2026

This edition contains the following feature content:

  • SFC v. VIZIO: who can enforce the GPL?
  • GPLv2 and installation requirements
  • Debian discusses removing GTK 2 for forky
  • Format-specific compression with OpenZL
  • A high-level quality-of-service interface

This week's edition also includes these inner pages:

  • Brief items: Brief news items from throughout the community.
  • Announcements: Newsletters, conferences, security updates, patches, and more.

Please enjoy this week's edition, and, as always, thank you for supporting LWN.net.

Comments (none posted)

SFC v. VIZIO: who can enforce the GPL?

By Daroc Alden
January 8, 2026

The Software Freedom Conservancy (SFC) is suing VIZIO over smart TVs that include software licensed under the GPL and LGPL (including the Linux kernel, FFmpeg, systemd, and others). VIZIO didn't provide the source code along with the device and, on request, provided only some of it. Unlike a typical lawsuit about enforcing the GPL, the SFC isn't suing as a copyright holder; it's suing as a normal owner of the TV in question. This approach opens some important legal questions, and after years of pre-trial maneuvering (most recently resulting in a ruling related to signing keys that is the subject of a separate article), we might finally obtain some answers when the case goes to trial on January 12. As things stand, it seems likely that the judge in the case will rule that GPL-enforcement lawsuits can be a matter of contract law, not just copyright law, which would be a major change to how GPL enforcement works.

The primary question at the heart of the case is: who has the right to enforce the GPL? There are plenty of things that are illegal, but that nobody can (or cares to) enforce — for example, in my home state of New Hampshire, it is illegal for a restaurant to serve sugar in a container with holes wider than 3/8ths of an inch, but I would be incredibly surprised if the police actually charged someone under that law.

Enforcement is the mechanism that binds the words written in the law (which are, after all, just ink on paper) to the actions that people actually take in the real world. The difference between criminal law and civil law lies in who is supposed to point out violations and do something about them. In the case of criminal law, it is generally up to the government to choose which violations of law warrant taking action. In the case of civil law, it's mostly up to everyone else — although there are exceptions to this general trend in both directions.

In the US, anyone can sue anyone for anything — for a loose definition of "can". Anyone is allowed to file a lawsuit against anyone else, but in order for the lawsuit to actually be adjudicated by the court, the person suing must have "standing to sue". Exactly who does and does not have standing can be complicated, but generally a person must show that they were actually harmed by someone else's action, and that there was some law or agreement in place that was violated. In the typical GPL-enforcement case, this involves having the copyright holder show that someone broke the terms of the GPL, and argue that they were harmed thereby because now someone is distributing their copyrighted code without a license.

Whether anyone other than the copyright holder has standing to sue under the GPL has been a matter of debate for years. The problem is that if the GPL is only a copyright license and nothing more, then the only person harmed by violating the license is the copyright holder. On the other hand, if the GPL is enforceable under contract law, other people who benefit from the GPL potentially have standing to sue for breach of contract.

VIZIO makes and sells TVs that include copies of the Linux kernel (two different versions of the kernel, actually, with one running as a guest of the other) and other software which is covered under the terms of version 2 of the GPL or the LGPL. The lawsuit lists 25 different pieces of open-source software. When the SFC bought a TV from VIZIO, it asked for "the complete and corresponding source code" for the versions of all of this software on the TV, which the GPL says must be included with the product or provided upon request. VIZIO provided some code, but it was not complete and did not compile, ultimately leading to the SFC filing a lawsuit against the company.

Unlike a typical GPL-enforcement lawsuit (which the SFC has tried previously), however, this one is not a copyright-infringement case. This lawsuit is new and unique in the US because the SFC isn't suing on behalf of a copyright holder; it's suing as a recipient of the software. Its argument is that the GPL is intended to benefit users (specifically, by ensuring that they can obtain the source code for programs that they buy), so the SFC is a "third-party beneficiary" of the GPL: it isn't the original author, and it didn't adapt the kernel, but it was still intended to benefit, so it is harmed by VIZIO not complying with the GPL's requirements. (Technically, the SFC does hold some relevant copyrights, but in this case it chose not to assert them, so they don't matter to the case.)

155. Permitting Plaintiff to bring this cause of action is consistent with the objectives of the GPLv2 and LGPLv2.1 and the reasonable expectations of VIZIO and the developers of the SmartCast Works at Issue.

156. Therefore, Plaintiff is an intended third-party beneficiary of the GPLv2 and LGPLv2.1 between VIZIO and the developers of the SmartCast Works at Issue and, because of this, may seek to enforce the Source Code Provision against VIZIO.

The main benefit of the lawsuit is that if the SFC wins, VIZIO will be required to share the code for its modified kernels, systemd daemon, GNU C library, etc. But an important secondary benefit is establishing the precedent that owners of products including GPL-licensed code are third-party beneficiaries to the GPL, and have standing to sue when companies don't comply with the license. The US uses a common-law system, inherited from England. Under that system, judges are required to take prior rulings on related cases into account when making a judgment. Once the SFC v. VIZIO lawsuit concludes (and at least one appeal has been made), other GPL-enforcement lawsuits will be able to point to it as evidence one way or the other about whether people have standing to sue.

This is important because the authors of GPL software are not always available — or willing — to kick off a lawsuit to enforce the terms of the license. Any of the many contributors to the kernel could theoretically have sued VIZIO (and might have had a better chance of winning), but lawsuits are time-consuming and expensive, and nobody did sue VIZIO over this until the SFC did. If owners of products are allowed to file GPL-enforcement lawsuits, it becomes much easier to hold people who adapt GPL-licensed code accountable.

The case so far

The most visible part of a lawsuit is the trial, but there's a lot that goes on behind the scenes before a trial is possible. The whole process started with the SFC explicitly asking VIZIO for the source code in August 2018. In January 2019, VIZIO responded with what it claimed was the complete source code for the GPL-licensed software included in its smart TVs. The code included in the response, however, did not even compile.

The GPL requires anyone who modifies GPL-licensed programs to provide recipients of the software with the complete source code, which it defines like this:

For an executable work, complete source code means all the source code for all modules it contains, plus any associated interface definition files, plus the scripts used to control compilation and installation of the executable.

Since the provided code did not compile, it was definitely not the complete source code. The SFC wrote to VIZIO explaining that it was missing part of the complete source code; VIZIO responded with an amended version. This repeated six times throughout 2019, but never resulted in a set of source code that the SFC was happy with. Throughout this time period, the SFC continued buying and checking new VIZIO products for compliance, eventually finding one which contained an offer to provide the source code buried deep in a submenu. (This had previously gone undetected by the SFC because the offer only appeared when the TV was connected to the internet.)

Finally, in 2021, the SFC filed a lawsuit against VIZIO, asking the court to order VIZIO to produce the missing files. (Later in the process, the SFC amended that complaint to be more specific about what the problem with VIZIO's provided code was.) Specifically, the SFC asked for:

a. An order directing Defendants to produce to Plaintiff the complete source code corresponding to whatever versions of the SmartCast Works at Issue, and any other program subject to the GPLv2 or LGPLv2.1 that are resident on VIZIO smart TVs having model numbers V435-J01, D32h-J09, and M50Q7-J01, including the Linux kernel used with VIZIO's SmartCast operating system, in a format that may be compiled and installed without undue difficulty. For purposes of this prayer for relief, "complete source code" means all source code for all modules contained in such version or versions of the SmartCast Works at Issue, plus any associated interface definition files, plus the scripts used to control compilation and installation of the executable.

b. An order directing Defendants to produce to Plaintiff the complete source code or object code for any program resident on VIZIO smart TVs having model numbers V435-J01, D32h-J09, or M50Q7-J01 that links with any of the SmartCast Libraries at Issue, or any other library subject to the LGPLv2.1, so that the Plaintiff can modify the library and then relink to produce a modified executable;

VIZIO replied by filing a motion to move the lawsuit from California's state court (where the SFC originally filed the lawsuit) to federal court, on the grounds that this was fundamentally a copyright dispute, which is a matter of federal law. Its argument was that the SFC's "claims are completely preempted by the laws of the United States, specifically, the federal Copyright Act". The act in question (17 U.S.C. § 106) pertains specifically to the rights of copyright holders — which means that if the case had been transferred to federal court, VIZIO could have claimed that the SFC, which is not a copyright holder, did not have standing to sue. The SFC said that the case wasn't eligible to go to the federal court because the GPL introduced an extra contractual obligation to provide source code, and that this contractual obligation goes beyond what federal copyright law requires. US district court judge Josephine Staton did not agree with VIZIO, and returned the matter firmly to California's state court.

In the US legal system, trials have two purposes: finding out the facts of what happened, and determining whether what happened broke the law or breached a contract. In cases where the first part is unnecessary (either because the people involved in the lawsuit agree on what happened, or where one side's argument is so clearly invalid that the specific details don't matter), the participants can ask the court for "summary adjudication", where the judge in the case decides the outcome of some or all of the arguments made so far without a trial, based only on what the law says. In 2023, VIZIO asked the state court to perform a summary adjudication in this case, on the grounds that this should be a copyright-infringement question (which the SFC can't sue over, not being a copyright holder), and that even if it weren't a copyright matter, the SFC still couldn't sue because it isn't an intended third-party beneficiary.

The judge in the case, Sandy Leal, wasn't convinced by VIZIO's argument, and denied its motion for summary adjudication. In that ruling, she determined that the matter was not purely a copyright issue, and the SFC could in principle have standing to sue. On the other hand, she did not address whether the SFC was a third-party beneficiary of the GPL, leaving that to be argued about during the trial.

In 2024, the SFC asked for summary adjudication the other way, hoping that the judge would agree that it definitely was a third-party beneficiary. Leal wasn't convinced of that either. With the lawsuit not obviously favoring one side or the other, the rest of 2024 saw both sides gathering evidence and attempting mediation before going to trial. In 2025, the SFC asked for summary adjudication a second time for a slightly different set of reasons:

(1) The undisputed facts establish that VIZIO made an offer to provide the applicable source code subject to the GPLv2 and the LPGLv2.1 in connection with VIZIO's sale of its Smart TV Model No. D32h-J09 to SFC, which offer was accepted by SFC. Therefore, under general principles of contract law, VIZIO has a contractual duty to provide that source code to SFC; and (2) The plain language of the GPLv2 and the LGPLv2.1 compels the conclusion that purchasers of VIZIO Smart TVs such as SFC are third-party beneficiaries of the provision of the GPLv2 that entitles such purchasers to receive the source code for: (a) any software on VIZIO's Smart TVs that is licensed under the GPLv2; and (2) any library on VIZIO's SMART TVs that is licensed under the LGPLv2.l. Therefore, under the GPLv2 and the LGPLv2.1, VIZIO has a contractual duty to provide purchasers of VIZIO Smart TVs with that source code.

So, the SFC was arguing two separate things: that VIZIO had promised them the source code and had to follow through; and that even if VIZIO had not promised the source code, people who buy the TVs in question are still third-party beneficiaries, and therefore have standing to sue anyway. This attempt at summary adjudication has not yet resulted in a ruling, with VIZIO and the SFC going back and forth without garnering a response from the judge.

VIZIO then filed its own second attempt at summary adjudication, arguing that the SFC was asking for VIZIO to enable installing modified programs on the TVs without breaking anything, which the GPL doesn't require:

VIZIO moves on the grounds that the plain language of GPLv2 and LGPLv2.1 or, in the alternative, the undisputed extrinsic evidence, compels the conclusion that neither license imposes a duty on licensees to provide all information necessary to permit reinstallation of modified software back on the same device such that the device continues to function properly [Emphasis added].

In its first attempt at summary adjudication, VIZIO argued that this was a copyright problem, not a contract problem. In this second attempt, VIZIO argues that even if there is a contract, the contract doesn't actually require them to produce "working" source code — that the source code they provide can be considered complete even if it cannot actually be successfully installed onto the same device.

The SFC denied asking for that, saying that it only wanted access to the source code of the scripts to control installation, not a guarantee that those scripts would actually result in a working TV. That wasn't obvious to everyone, though, with many people feeling that the SFC had overreached; a companion LWN article explores this topic in more depth. The judge agreed with VIZIO that the GPL definitely doesn't require the company to ensure that recompiling and reinstalling the GPL-licensed software must result in a working TV — but, since the SFC had other demands that were covered by the GPL, the matter would have to proceed to trial.

On December 4, Judge Leal did issue a set of tentative rulings on her upcoming cases, including SFC v. VIZIO. (That link goes to an excerpted PDF containing only the pages relevant to SFC v. VIZIO.) Tentative rulings are not binding — the judge is free to change her mind later — but they represent her current understanding and disposition going into the trial. The tentative ruling considered several points, and deferred most of them to the trial, but the judge did come down on one explicit facet of the case: VIZIO does have a contractual duty to provide SFC with the complete source code, as the GPL defines it, for the GPL-licensed programs on one particular kind of smart TV. The smart TV includes an offer to "provide applicable source code upon request for a processing fee covering the cost of fulfilling the distribution" buried in one of its menus. VIZIO made this offer, and the SFC accepted it, so the SFC has shown that there really is a contract to have a lawsuit over.

Judges often rule narrowly on individual parts of a case, to avoid being caught in unfortunate corner cases. So, her tentative ruling doesn't cover what would happen if a company didn't provide a source offer; such a company might be able to argue that, if there is no offer to share the source code (which the GPL requires), then it may have broken the license, but not created a contract that can be violated. Since VIZIO isn't making that argument, however, whether it would stand up in court will be a matter for a different lawsuit. Judge Leal's tentative rulings didn't address whether recipients of modified GPL-licensed programs are third-party beneficiaries of the GPL in general, even before accepting a source offer.

Where things stand

The trial is currently scheduled for January 12. After years of legal back-and-forth, this is when VIZIO and the SFC will finally present their actual arguments to the judge. There are a few questions that will need to be resolved during the trial. The judge has already ruled that this is not purely a copyright matter, but is the SFC a third-party beneficiary to the GPL itself? If so, did VIZIO break the contract by not providing the complete source code in the first place? What about after the SFC accepted an offer for the source code?

Judge Leal could rule either way on any of those questions. If she does rule that the SFC has standing to sue (as her judgment of VIZIO's motion for summary adjudication said could be the case in principle), and that VIZIO is in breach of contract (as her tentative ruling indicates), she could order VIZIO to provide the complete source code. On the other hand, she could also rule that the SFC has standing to sue, but that VIZIO's behavior did not violate the GPL. That would be a loss for the SFC, but it would still establish the precedent that owners of GPL-licensed software can have standing to sue (if the company makes a source offer, and possibly at all times, depending on how the trial goes).

Even once Leal renders her decision, the whole process is not necessarily over: no matter who wins, the case could be appealed. In fact, VIZIO already filed an appeal, but the court of appeals chose not to consider it. Even if there are no appeals, we could still see more cases like this coming up in different states. If the case isn't appealed, it doesn't become binding precedent (in this case, only the court of appeals can make binding precedents, and only in published opinions), but it can still be "persuasive precedent", which judges often look at when considering a case, but are not required to take into account. Even if the case were appealed and became binding precedent, that would still only apply in the state of California. As a practical matter, having Californian precedent that owners of products incorporating GPL-licensed code are third-party beneficiaries of the GPL does make it easier to argue to companies that they should avoid an expensive losing legal battle by complying up front — but companies in other jurisdictions could still try to fight such attempts.

Summary

Overall, the question to be decided in the upcoming trial is: in California (and weakly throughout the rest of the US), can people sue companies for not providing the complete source code of GPL-licensed software used in products they buy? If the company makes a source offer, the answer, based on Judge Leal's tentative ruling, is almost certainly yes. If that is confirmed during the trial, it will herald many new opportunities to attempt GPL enforcement against companies that are currently flouting the license.

Trials in complex cases such as this one typically take several days or weeks. Keep an eye on LWN's news feed for the outcome when the trial concludes.

[ I would like to thank SFC executive director Karen Sandler and other non-SFC advisors for reviewing this article for factual mistakes. ]

Comments (65 posted)

GPLv2 and installation requirements

By Jonathan Corbet
January 8, 2026

On December 24, 2025, Linus Torvalds posted a strongly worded message celebrating a ruling in the ongoing GPL-compliance lawsuit filed against VIZIO by the Software Freedom Conservancy (SFC). This case and Torvalds's response have put a spotlight on an old debate over the extent to which the source-code requirements of the GNU General Public License (version 2) extend to keys and other data needed to successfully install modified software on a device. It is worth looking at whether this requirement exists, the subtleties in interpretation that cloud the issue, and the extent to which, if any, the SFC is demanding that information.

Tivoization

It is increasingly common for computing systems to refuse to boot and run software that is lacking an authorized signature. There are many legitimate reasons to want this feature; a company may want to ensure that its internal systems have not been tampered with, or a laptop owner may want defense against evil maid attacks. There are also numerous cases of companies using this feature to protect their own business models. The practice of locking down devices running otherwise free software so that their owners cannot change that software has become known as "tivoization", after TiVo used it to control access to its digital video recorders.

Tivoization sits poorly with many free-software developers, who see it as a way of using copyleft software without providing all of the freedoms that should come with it. The practice enables devices built around antifeatures that, in a more free world, users would simply remove. But, while there is no universal agreement on whether this practice violates version 2 of the GPL, the larger contingent would appear to be on the side of "no". It is noteworthy that the GNU project, while criticizing tivoization, does not claim that it is a GPLv2 violation. When version 3 of the GPL was drafted, a special effort was made to add language disallowing tivoization, but many projects, including the kernel, have never adopted that version of the license.

Keys in the VIZIO case

At the end of 2025, the judge in the VIZIO case ruled that provision of signing keys is not required by the GPL. Specifically, the ruling said that the GPL's source-code requirements do not extend to materials required to install a modified version of the software on the original device and have it "continue to function properly". The SFC was quick to put out a press release saying that the ruling addressed an issue that was not actually before the court — that the SFC had not been asking for that ability. The truth of the matter would appear to be a bit more nuanced, though.

The SFC's complaint driving this lawsuit goes into a fair amount of detail about why access to the source code is important. Examples include claim 111:

With the source code for the SmartCast Works at Issue as used on the Subject TVs, developers could continue to develop and improve an operating system for smart televisions, which would benefit the public and further the goals of software freedom.

And claim 118:

Access to the Source Code of the Linux kernel, the other SmartCast Works at Issue, and for the Library Linking Programs, as used on VIZIO smart TVs, would enable software developers to preserve useful but obsolete features. It would also allow software developers to maintain and update the operating system should VIZIO or its successor ever decide to abandon it or go out of business. It would also allow for the maintenance of older models that are no longer supported by VIZIO. In these ways, purchasers of VIZIO smart TVs can be confident that their devices would not suffer from software-induced obsolescence, planned or otherwise.

The complaint also goes into some detail about how VIZIO makes far more money from sales of advertising and data about its customers than it does from selling televisions. Supporting the company's real business model requires monitoring what owners of the devices are doing and selling that information to others. With access to the source, developers could remove the surveillance features built into VIZIO devices, improving their privacy and security. The complaint notes that VIZIO seems unlikely to make it possible to disable those features on its own.

All of these assertions are fairly obviously true, and there is no doubt that the owners of VIZIO devices would benefit from the ability to make the described changes. That is what the freedom of free software is all about. But there is a catch: access to the source will provide none of those freedoms without the ability to install modified software on the device — and have the device actually run that software. The SFC's complaint does not mention signing keys specifically, but it describes freedoms that cannot be exercised without those keys.

Reinstallation requirements

So why does the SFC claim that the court's ruling is irrelevant? There are, as it turns out, multiple ways to look at what it means to install software on a device. Specifically, should a modified software load still be able to provide all of the functionality that the original did? VIZIO's software, for example, surely contains proprietary components that allow the device to play DRM-protected content. Allowing a user-modified kernel to access those components would enable the creation of a channel to export unprotected content from the device, thus accelerating the escape of that content onto the Internet by several milliseconds. For some reason, VIZIO feels that this outcome should be avoided.

The SFC states that it never asked for this capability, that, specifically, nobody is claiming that the device must continue to provide all of its functionality if modified software is installed. At the other extreme, though, while a device that refuses to boot because the kernel lacks the requisite signature surely fits within the description of "not continuing to function properly", the SFC appears to find that result insufficient. The level of functionality that must be available to user-modified software has not been clearly defined, but it seems to involve functionality beyond that needed for a doorstop.

What the SFC may be asking for is something akin to the Chromebook developer mode. A Chromebook can either be locked down, in which case all supported functionality is available, or it can be in developer mode, which allows user-installed software but loses access to some features. Similarly, some Android devices allow their software to be replaced, but they lose the ability to attest to the purity of their software and some apps may refuse to run. If VIZIO televisions had a developer mode that allowed users to install software with reduced functionality, that would seemingly satisfy the SFC's requests. VIZIO, though, did not see fit to design that feature into its products.

That said, the SFC did argue that the GPL at least might have a "reinstallation requirement" that preserves the functionality of the device; it is worth understanding why. As noted above, the SFC's complaint assumes that it would be possible to install modified software on the system. As a way of heading off the threat to its business model, VIZIO asked for a summary judgment that no such requirement exists in the GPL:

VIZIO moves on the grounds that the plain language of GPLv2 and LGPLv2.1 or, in the alternative, the undisputed extrinsic evidence, compels the conclusion that neither license imposes a duty on licensees to provide all information necessary to permit reinstallation of modified software back on the same device such that the device continues to function properly.

Note that this motion was brought forward as a way of requesting that a significant portion of the SFC's case be thrown out of court. A failure to respond to the motion in the strongest possible way would have been an act of malpractice on the part of the SFC's lawyers; they had to challenge it. So, even though the SFC claims never to have introduced the "continues to function properly" language, it found itself having to defend that language anyway. That defense came in this filing, which included quotes from some important developers saying that the installation requirement does indeed exist:

As one member of the FOSS community [Bdale Garbee] explains, under the GPLv2, the distributor of a device that includes a computer program licensed under the GPLv2 "must provide scripts that allow a recompiled binary of the source code to be installed back onto the device in question."

The purpose here was not to establish the existence of a reinstallation requirement; it was to show the existence of doubt on the question so that a summary judgment would be inappropriate. The judge, however, sided with VIZIO and granted the motion. While the judge did not address the issue of whether installation without the need for proper functioning might be required by the license, it seems unlikely that the outcome would be different.

The approach taken by Torvalds and other (but certainly not all) kernel developers toward tivoization reflects an important tradeoff. Had the kernel moved to GPLv3 and required the ability to install modified versions, much of the industry built around Linux might well have moved on to something else. That, in turn, would have led to a significant reduction in corporate support for Linux development.

There are certainly developers who would happily make that trade in exchange for a license that guarantees more freedom. But a kernel that is not used provides no freedom at all, and Linux without companies behind it would not be in anything close to the condition it is now. Torvalds chose, many years ago, a policy that freedom extends to the software, but not beyond; the result is a kernel that is now nearly ubiquitous. So we find ourselves in a situation where the software is capable and free, but that is not always the case for the hardware it runs on. That problem is well worth solving, but GPLv2 does not appear to be the right tool for the job.

Comments (13 posted)

Debian discusses removing GTK 2 for forky

By Joe Brockmeier
January 14, 2026

The Debian GNOME team would like to remove the GTK 2 graphics toolkit, which has been unmaintained upstream for more than five years, and ship Debian 14 ("forky") without it. As one might expect, however, there are those who would like to find a way to keep it. Despite its age and declared obsolescence, quite a few Debian packages still depend on GTK 2. Many of those applications are unlikely to be updated, and users are not eager to give them up. Discussion about how to handle this is ongoing; it seems likely that Debian developers will find some way to continue supporting applications that require GTK 2, but users may have to look outside official Debian repositories.

GTK 2 was released in 2002 and was declared end of life with the release of GTK 4 on December 16, 2020; the final release, 2.24.33, was published a few days later. The GTK project currently maintains two stable branches—GTK 3.x ("oldstable") and GTK 4.x ("stable"). The GTK 3.x branch will be maintained until the project releases GTK 5, and the project has not yet announced any firm plans for such a release.

On January 7, Matthias Geiger announced that Debian's GNOME team has a goal of removing GTK 2 from forky before it is released in 2027; in addition to being unmaintained, he said, it lacks native Wayland support and features needed for HiDPI displays.

Geiger pointed out that Debian would not be alone in dropping GTK 2, as Arch Linux and Red Hat Enterprise Linux (RHEL) have already done so. Arch moved the gtk2 package and those that depend on it out of the official repositories and into the Arch User Repository (AUR) in October 2025, and RHEL dropped support for GTK 2 with the release of RHEL 10 in May 2025. It might be worth noting, however, that Red Hat will still be on the hook to support GTK 2 in RHEL 9 through 2032, and it is still packaged for current Fedora releases as well as EPEL 10.

Developers maintaining packages with GTK 2 dependencies have had ample time to consider options and alternatives; Simon McVittie reported bugs against packages that still depended on GTK 2 in April 2020, about eight months before it went end of life. At that time more than 640 packages still relied on it, so he also started a discussion about "minimizing the amount of GTK 2 in the archive", though he acknowledged the difficulty in getting rid of it entirely:

GTK 2 is used by some important productivity applications like GIMP, and has also historically been a popular UI toolkit for proprietary software that we can't change, so perhaps removing GTK 2 from Debian will never be feasible. However, it has definitely reached the point where a dependency on it is a bug - not a release-critical bug, and not a bug that can necessarily be fixed quickly, but a piece of technical debt that maintainers should be aware of.

Now, almost six years later, there are just slightly more than 150 packages that carry a dependency on GTK 2; far fewer than in 2020, but still a significant number. GIMP, for example, updated to GTK 3 with its 3.0 release in March 2025. And, as Geiger noted, one of the blockers to ridding Debian of GTK 2 entirely is the fact that Debian's graphical installer still depends on it.

Jonathan Dowland said that the Debian GNOME team should not have to maintain GTK 2 if it does not want to. But, he argued, the correct thing for the team to do is to orphan the package to see if others are willing to maintain it.

I respect your opinion that Debian would be better off without GTK2 in the archive. However, I don't agree with it. The two pillars of my position are: removing this forces the removal of useful dependent programs in the archive which have active users; it also makes it more difficult for users to run dependent programs *outside* the archive, including software of historical significance. IMHO this falls foul of [Debian Social Contract section 4].

The section in Debian's Social Contract that Dowland refers to is "our priorities are our users and free software". It states that Debian will place the needs of users first, and not object to non-free works intended to be used on Debian systems. Since there are non-free works that depend on GTK 2, and are unlikely to be ported to later versions, one could argue maintaining the toolkit is in users' best interest.

Some would disagree with that, though. Emilio Pozuelo Monfort said "with my Release Team hat on" that it would be a disservice to keep shipping GTK 2 in new releases, since it has been dead upstream so long:

We don't ship every old library just because someone could make use of it. There is a maintenance cost to that. See e.g. QT4, libsdl1.2 and so many others that could have been kept for similar reasons. Perhaps those old packages also need us to ship GCC 5 or an old cmake. That's a slippery slope.

He suggested that there was still time to port useful packages to GTK 3 or GTK 4. Dowland shot that idea down, though; some of the applications, such as Hexchat, were not likely to be ported to a new version, ever. He also noted that later versions of GTK were "simply not equivalent" and would require fundamental design changes. Some applications are still using GTK 2, he suspected, because the developers would rather not use later versions.

Dowland also said that it was important to recognize that Debian does not "have a cast-iron rule that we apply even-handedly to every library". There is no Debian policy that developers can simply point to as guidance for GTK 2 or other libraries that have reached the end of life. Gioele Barabucci put out some ideas about conditions for keeping legacy libraries, which sparked a brief discussion about the security concerns related to forks of unmaintained software.

Outside Debian

One possibility that Geiger raised would be to move GTK 2 packages to Debian's Debusine instance, which is open to all Debian developers and maintainers. Debusine is a project developed by Freexian; it provides tools for developing Debian derivatives, including package building and hosting APT repositories. In December, Colin Watson announced that Debian's instance would allow Debian developers and maintainers to create "APT-compatible add-on package repositories" similar to Ubuntu's personal package archives (PPAs). Geiger suggested that GTK 2 and any dependent packages could be moved to Debusine rather than keeping them in Debian's official archives.

Another option would be to create an upstream fork of GTK 2 and package it in Debian. Adam Sampson observed that the Ardour digital-audio-workstation project has created its own fork of the toolkit. However, it is unclear that the project has any interest in maintaining a generic fork of GTK 2 suitable for use beyond its needs for Ardour's user interface.

McVittie discouraged the idea of maintaining a fork of the toolkit. He argued that software that no longer had an active upstream "seems to have a tendency to soak up a disproportionate amount of time outside the immediate package". He also raised the option of Debusine as a way out of keeping GTK 2 in Debian and noted that would be similar to Arch moving it to the AUR.

Time marches on (and on)

The discussion is still ongoing. Dowland expressed optimism that the issue would be resolved in due course. What the solution looks like is still to be determined, but it seems likely forky users will have some way to obtain GTK 2 if necessary.

A broader solution that applies beyond GTK 2 might be in order, though. The scenario of "such-and-such is obsolete, unmaintained, and at the end of its life" has been popping up often over the past few years, and it will continue to do so with increasing frequency. As time goes on, the pile of "old" software (and hardware) that is still in use will continue to grow—and it will grow faster than the number of people interested in doing the work of porting to new libraries just because older libraries have been abandoned upstream.

Comments (114 posted)

Format-specific compression with OpenZL

By Jake Edge
January 14, 2026

OSS Japan

Lossless data compression is an important tool for reducing the storage requirements of the world's ever-growing data sets. Yann Collet developed the LZ4 algorithm and designed the Zstandard (or Zstd) algorithm; he came to the 2025 Open Source Summit Japan in Tokyo to talk about where data compression goes from here. It turns out that we have reached a point where general-purpose algorithms are only going to provide limited improvement; significant increases in compression, while keeping computation costs within reason for data-center use, will require turning to format-specific techniques.

[Yann Collet]

Zstandard was introduced ten years ago and "it offered really much better performance tradeoffs than what existed before", Collet began. The alternatives were zlib, which was a "very good middle ground for decent speed and decent compression ratio", but was not fast enough, and LZ4, which provided much better compression speed but did not compress the data enough. Zstandard quickly supplanted the others because it was fundamentally better for size and speed. In the years since, Zstandard has improved, especially in its decompression speed, but those advances are still fairly modest. "We are reaching the limits of that technology."

In looking at what can be done to improve things, there are other problems beyond just the diminishing returns. The Zstandard format is limiting; with a new format, gains of 2-3% for compression ratio and 10-20% for speed are possible. "Is it worth it?", he asked. It is not really about the time needed to develop the new format, but that there is a huge ecosystem of Zstandard users that would need to change, which is extremely costly. He does not think there would be a serious shift to a new format unless it offered overwhelming advantages. "If we introduce a new compressor, it has to be vastly better."

There are other options, such as copy-based algorithms (e.g. LZ78), which copy repeated data from the compression dictionary to reconstruct the original; they can meet the needs for data-center compression, but they converge toward the same limits as Zstandard. That convergence was surprising, Collet said, because the techniques are quite different, but it stems from the fact that all of them make no assumptions about the data and simply treat it as a stream of undifferentiated bytes. There are high-compression algorithms that can achieve better results (e.g. PPM) but they run too slowly for data-center applications.

Format specific

Compressors that are only concerned with a specific format can do much better. For a trivial example, a simple array of consecutive integer values cannot be compressed by algorithms like LZ because there are no repetitions. A simple delta transformation turns that into something that can be heavily compressed, however. "If we know what we are compressing, it's not just a bunch of bytes, [...] it opens more options and, because we have more options, we should be able to compress better."
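
To make that concrete, here is a minimal sketch in C of the delta transformation; it illustrates the general technique Collet described, not any actual OpenZL code:

    #include <stddef.h>
    #include <stdint.h>

    /* Replace each value with its difference from its predecessor.
     * An array of consecutive integers (10, 11, 12, ...) has no
     * repetitions, but after this pass it becomes a long run of 1s,
     * which a generic compressor squeezes down easily. */
    static void delta_encode(uint32_t *v, size_t n)
    {
        if (n < 2)
            return;
        for (size_t i = n - 1; i > 0; i--)
            v[i] -= v[i - 1];
    }

    /* The inverse transform, applied after decompression. */
    static void delta_decode(uint32_t *v, size_t n)
    {
        for (size_t i = 1; i < n; i++)
            v[i] += v[i - 1];
    }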

A more realistic example is a compressor for the Smithsonian Astrophysical Observatory star catalog format, known as "SAO". It is part of the Silesia compression corpus, which consists of data sets that are used to compare compression algorithms. "It's very well defined", with a header followed by an array of 28-byte structures with fixed fields and types.

Turning the array of structures into a structure of arrays is a "trivial transformation"; each array is homogeneous and it can be analyzed separately. For example, the first two fields in the structure are 64-bit X and Y positions. The X values are mostly sorted, so a delta compression gives good results; the Y values are bounded and have a limited number of values compared to the range, so a transpose transformation can focus on compressing the high (largely unchanging) bytes, while other techniques can be applied to the subset of all the possible values for the low bytes. Other fields have properties that can be exploited as well.
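
A sketch of those two transformations in C, using a simplified stand-in for the real SAO record layout (only the two 64-bit position fields are shown):

    #include <stddef.h>
    #include <stdint.h>

    /* Simplified stand-in for one SAO record; the real structure
     * is 28 bytes with more fields. */
    struct record {
        uint64_t x;    /* mostly sorted: delta-encode it */
        uint64_t y;    /* bounded range: byte-transpose it */
    };

    /* Array-of-structures to structure-of-arrays: each output array
     * is homogeneous and can be analyzed and coded separately. */
    static void split_fields(const struct record *in, size_t n,
                             uint64_t *xs, uint64_t *ys)
    {
        for (size_t i = 0; i < n; i++) {
            xs[i] = in[i].x;
            ys[i] = in[i].y;
        }
    }

    /* Byte transpose: gather byte k of every value together, so the
     * high, largely unchanging bytes form long compressible runs,
     * while the varying low bytes can be coded on their own. */
    static void transpose_bytes(const uint64_t *v, size_t n,
                                uint8_t *out /* 8 * n bytes */)
    {
        for (size_t k = 0; k < 8; k++)
            for (size_t i = 0; i < n; i++)
                out[k * n + i] = (uint8_t)(v[i] >> (8 * k));
    }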

He compared the results of a few different compressors on the SAO file. Zstandard using its default (i.e. zstd -3) reduced the 7.2MB file to 5.5MB for a 1.31 compression factor, which is not great. The data is numeric, which Zstandard is not particularly good at compressing, and the SAO file is "packed with information", lacking zeroes and repeating sequences, "it is difficult to compress". But the speed of Zstandard is good, Collet said, compressing at 100MB per second and decompressing at 750MB/s; "if you want to deploy something in a data center, you want this kind of speed".

He compared the "best of the best" widely available compression (lzma -9), which got much better compression (4.4MB or 1.64 compression factor), but the speed was not adequate for deployment (2.9MB/s compression, 45MB/s decompression). For another data point, he used cmix, which is an experimental compressor by Byron Knoll; you would not deploy it, he said, but "it's recognized as the best compressor out there". It reduced the SAO file to 3.7MB, which is almost a factor of two, but compressing and decompressing can only be done at 0.001MB/s.

Those results set the goals for the SAO-specific compressor: a factor of around two and speed like that of Zstandard. It achieves those goals easily, with a compressed size of 3.5MB (2.06 compression factor) and speeds faster than those of Zstandard (215MB/s compression, 800MB/s decompression). "Here we have enough gains to justify deploying something new in our data centers; this is the next step we were looking for." It turns out that knowing anything about the data gives a major advantage in compression; it is "an insane advantage, a way too large advantage to ignore".

Drawbacks

There are some problems in switching to format-specific compression, starting with the need to design algorithms for the formats. It will require engineers, hopefully with data-compression experience, some time to understand the format and devise an algorithm for it. That typically takes around 18 months, he said, "and you don't know in advance what you will get"—it is not just time and money, but there is uncertainty as well.

Once a good algorithm has been found, there will be a need to optimize it and to safeguard it against attacks. "Every codec [compressor/decompressor] is an injection point." Since there are lots of formats, and there is a need to be cost-effective in developing these compressors, developers may rely on only handling "safe" data instead of spending the effort on fuzzing and other hardening techniques. After a while, however, the codec may slowly start being used on less-safe data, resulting in vulnerabilities and attacks.

Once a codec is ready for deployment, there are still hurdles to overcome. Decompressors must be deployed everywhere the data may need to be accessed, which is not necessarily as easy as it sounds. That may include thousands (or hundreds of thousands) of servers all over the world, clients of various sorts, and so on; it is not uncommon that it takes longer to deploy a new compression algorithm than it did to develop it, Collet said.

There is also a large maintenance cost associated with format-specific compression. In addition, if the format needs to change, the compressor will need to change as well, and all of the deployment woes arise again. The original developers may well have moved on to other things, so finding people to work on it may be hard and take time. This becomes a "silent velocity obstacle": no one wants to consider changing the format, even when there would be large benefits to doing so, because the change is so daunting.

Enter OpenZL

So there is a tension between the promise of format-specific compression and the problems that can come from using it. But the truth is that those problems already exist, Collet said, because in every large organization there are already groups using these compression techniques; "the gains are so huge" that they get adopted piecemeal. "OpenZL is our answer to this tension; we believe that this solution solves all the problems that were just mentioned."

OpenZL has a core library and tools that allow creating specialized compressors. He likened it to the OpenGL graphics API, which "is not a 3D app but is a set of primitives to do a 3D app"; similarly, the OpenZL library gives users primitives to build their own compressors. The idea is to define compressors as graphs of pre-validated codecs, so that these different pieces can be combined in myriad ways to produce compressors—"pretty much like Lego".

Using those codecs will allow creating new compressors in a matter of days, instead of months. The graphs provide an enormous search space, by human standards, but that space is not particularly large for computers, so it can be systematically searched. "We can provide tools that will do this work of finding the best arrangement of codecs and will give you an answer in minutes." That is a game-changer, he said; users can know quickly whether it even makes sense to pursue a format-specific compressor.
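
The "Lego" idea can be pictured as something like the following conceptual C sketch. OpenZL's real graphs are richer (they branch, route individual fields to different codecs, and end in entropy stages), and this is not its actual API:

    #include <stddef.h>
    #include <stdint.h>

    /* One pre-validated, in-place transform stage. */
    typedef void (*transform_fn)(uint8_t *buf, size_t len);

    /* A compressor described as data: a chain of stages (a real
     * graph can branch per field) feeding a final entropy coder. */
    struct compressor_graph {
        transform_fn stages[4];   /* e.g. split, delta, transpose */
        size_t       nstages;
        /* ...final entropy stage, e.g. Zstandard... */
    };

    static void apply_graph(const struct compressor_graph *g,
                            uint8_t *buf, size_t len)
    {
        for (size_t i = 0; i < g->nstages; i++)
            g->stages[i](buf, len);
    }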

Assuming that it does make sense, the "deployment bottleneck" will soon rear its head. OpenZL avoids that by having a unified decompression engine that can handle any graph, so there is only one program that needs to be deployed. Updates and changes to the compressor are simply new configurations; transitions can be handled by supporting multiple graphs for a format. In addition, graphs can even be changed dynamically during compression if desired. The maintenance headaches are reduced, as well, since there is only a single code base that needs attention for bug fixes, performance improvements, and security upgrades.

It is natural to think of these graphs as being static, but that is not the reality. These compressors have a selector that chooses a graph by analyzing the data, so the graph for a format can change based on the input. The intent is to maintain performance, he said, but, more importantly, to handle exceptions. If an integer array is expected, but text is found, using a numeric compressor "is going to end badly"; that should be detected and a switch made to Zstandard, which is the fallback codec.
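
A conceptual sketch of such a selector (again, not OpenZL's actual interface): sample the input and, if it looks like text where an integer array was expected, route it to the generic Zstandard graph instead of the numeric one. The sampling threshold here is an arbitrary illustration:

    #include <ctype.h>
    #include <stddef.h>

    enum graph { GRAPH_NUMERIC, GRAPH_ZSTD_FALLBACK };

    static enum graph select_graph(const unsigned char *data, size_t n)
    {
        size_t sample = n < 4096 ? n : 4096;
        size_t printable = 0;

        for (size_t i = 0; i < sample; i++)
            if (isprint(data[i]) || isspace(data[i]))
                printable++;

        /* Mostly text where integers were expected? Feeding it to
         * the numeric pipeline "is going to end badly", so fall
         * back to the generic Zstandard graph instead. */
        if (sample > 0 && printable * 10 >= sample * 9)
            return GRAPH_ZSTD_FALLBACK;
        return GRAPH_NUMERIC;
    }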

The first step to generating an OpenZL compressor is describing the data format. There are already around a dozen formats supported by OpenZL and dozens more will be added over the next few months, he said. Those will only cover common formats, however, so others will need to be described, either by providing a parser function or by using the Simple Data Description Language (SDDL) compiler.

SDDL can describe straightforward formats easily; it can also handle more complex formats, "but at some point, it is no longer the right tool". If creating the SDDL becomes too difficult, the work can be outsourced to an LLM "and it actually works", he said. There is one prompt that teaches the LLM about the SDDL syntax and then it can be asked to generate the SDDL. "If it's a good LLM, it should work well; like every LLM, you should read it." It is approaching the point where no programming at all will be needed to do this, Collet said.

OpenZL has tools that will use the description of the data and some sample files to create multiple compressors in a few minutes. Those different compressors allow users to choose the tradeoffs that matter to them: faster speed or more compression. In order to compress a file using one of them, the description of it, called a serialized compressor, is specified along with the file to compress. Decompression does not need to specify the compressor because the description is stored in the compressed data.

Any of the steps can be done manually, which might be somewhat painful, but means that everything about the compressor can be examined. "We can observe it, we can change it, we can see if we can find something better". That is important for debugging and research into compression techniques.

He showed some graphs comparing OpenZL compression to existing tools, but noted that "it's not a fair fight". The graphs show OpenZL doing much better than the competition. That's the whole point of OpenZL, he said, "if you know something about your data, why not use it to get better performance?"

OpenZL is already deployed widely at his employer, Meta. One of the main workloads at Meta is LLMs, so there is a lot of data to handle. The Meta system is set up to constantly monitor the data being generated, periodically retrain the compressors based on that, and then deploy the resulting compressed files immediately—the decompressor can always handle the result. He noted that compression is not only about saving storage, it is also about transmission time savings for moving data around—to and from GPUs, for example. That directly translates to higher compute utilization.

OpenZL is open source and available on GitHub (under the three-clause BSD license). The quick start instructions are straightforward, Collet said; following those steps will introduce all of the new concepts and tools. "It's not Zstandard++, this thing is different", so there are more steps and users need to invest some time to come up to speed. If they do, they will get better compression and more speed, however; "the difference is stark".

It has not yet reached a 1.0 release, because the OpenZL developers believe the final wire protocol needs to be built with the community. Over the next few years, the idea is to engage with the community to ensure that all of the different use cases are covered. In addition, there is work on getting OpenZL acceleration working directly in various types of hardware: CPUs, GPUs, and ASICs. That will take some time, "but we expect to see the result of that before the end of the decade", he concluded.

Interested readers may wish to view the YouTube video of the talk or look at Collet's slides.

[ I would like to thank the Linux Foundation, LWN's travel sponsor, for assistance with traveling to Tokyo for Open Source Summit Japan. ]

Comments (20 posted)

A high-level quality-of-service interface

By Daroc Alden
January 13, 2026

LPC

Quality-of-service (QoS) mechanisms attempt to prioritize some processes (or network traffic, disk I/O, etc.) over others in order to meet a system's performance goals. This is a difficult topic to handle in the world of Linux, where workloads, hardware, and user expectations vary wildly. Qais Yousef spoke at the 2025 Linux Plumbers Conference, alongside his collaborators John Stultz, Steven Rostedt, and Vincent Guittot, about their plans for introducing a high-level QoS API for Linux in a way that leaves end users in control of its configuration. The talk focused specifically on a QoS mechanism for the scheduler, to prioritize access to CPU resources differently for different kinds of processes. (slides; video)

Historically, the server market has been a big factor in optimizing the performance of the Linux scheduler, Yousef said. That has changed somewhat over time, but at least initially the scheduler was extremely throughput-focused. In recent years, there has been more concern given to interactivity, but there are still a lot of stale assumptions about how to wring the best performance out of a Linux system. POSIX hasn't evolved to cover new developments, applications still often spawn one thread per CPU core as though the system has no other workloads running, and so on.

The current default scheduler in the kernel, EEVDF, has a number of configurable parameters that can be adjusted for the system as a whole or per-process; Yousef thought that the best way to implement a QoS API for Linux was to give the scheduler enough information about processes to set reasonable default values for the existing configuration options. In the past, people have focused on the kernel interface used to communicate with the scheduler, he said, but that isn't the problem that matters. What matters is providing a high-level API for applications that doesn't require detailed knowledge of the scheduler's configuration to use.

[Qais Yousef]

iOS (and related platforms such as macOS and watchOS) already has an interface like that. It provides four QoS levels for a program to choose between: "user interactive" (for tasks needed to update a program's user interface), "user initiated" (for things that the user is actively doing), "utility" (for tasks that should happen promptly but that don't directly impact the user), and "background" (for tasks that have no particular latency requirements). The QoS level can be set independently per thread in a program.
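
The interface in question is Apple's pthread QoS extension; on those platforms, a thread assigns itself a class with a single call. A minimal sketch:

    #include <pthread.h>
    #include <pthread/qos.h>   /* Apple platforms only */

    static void *ui_thread(void *arg)
    {
        /* This thread updates the user interface; the second
         * argument is a relative priority within the class. The
         * other classes are QOS_CLASS_USER_INITIATED,
         * QOS_CLASS_UTILITY, and QOS_CLASS_BACKGROUND. */
        pthread_set_qos_class_self_np(QOS_CLASS_USER_INTERACTIVE, 0);

        /* ... render ... */
        return NULL;
    }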

Yousef proposed stealing that design for use on Linux, and mapping each of those classes to the time slice, policy, ramp-up multiplier, and uclamp settings of the scheduler. Threads would default to the utility class, which would match the scheduler's current default values. Threads in the user interactive or user initiated classes would be given shorter time slices, which tell the scheduler to prioritize latency over throughput. Threads in the background class would be given longer time slices, so that they can run for longer periods without interruption when the system is idle, but would be interrupted if a higher-priority thread became runnable.
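
Some of this plumbing already exists: since the EEVDF work in kernel 6.12, the sched_runtime field passed to sched_setattr() is taken as a slice suggestion for normal (SCHED_OTHER) tasks. A sketch, assuming such a kernel; there is no glibc wrapper, so the raw syscall is used:

    #define _GNU_SOURCE
    #include <stdint.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/syscall.h>

    struct sched_attr {          /* layout from linux/sched/types.h */
        uint32_t size;
        uint32_t sched_policy;
        uint64_t sched_flags;
        int32_t  sched_nice;
        uint32_t sched_priority;
        uint64_t sched_runtime;  /* under EEVDF: slice suggestion, ns */
        uint64_t sched_deadline;
        uint64_t sched_period;
    };

    /* Ask for a short slice for the current thread, nudging EEVDF
     * to favor latency over throughput (6.12 and later kernels). */
    static int request_short_slice(void)
    {
        struct sched_attr attr;

        memset(&attr, 0, sizeof(attr));
        attr.size = sizeof(attr);
        attr.sched_policy = 0;           /* SCHED_OTHER */
        attr.sched_runtime = 100000;     /* 0.1ms */

        return syscall(SYS_sched_setattr, 0, &attr, 0);
    }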

One audience member objected that Linux does already have a way to control how much time is given to different threads: the deadline scheduler. Rostedt piped up to ask whether they "really want to run Chrome under the deadline scheduler?" He clarified that using the deadline scheduler required root privileges, but Yousef's proposal was all about allowing normal, unprivileged applications to provide performance hints. This led to an extended argument in the audience about deadline scheduling versus performance hinting until Yousef pulled things back on topic.

Adopting QoS classes in the scheduler would require some changes to the code that handles placing tasks on different CPUs, as well, he said. Right now, the kernel decides which CPU should run a task based on one of four criteria: CPU load, CPU load and energy usage, CPU load and NUMA domain, or core affinity. That should really be extended to consider aspects of the task being placed, Guittot explained. When the load balancer is placing a task with a short time slice, it should first consider idle CPUs (where the task could run right away), but if there are none, it should prefer putting the task on a CPU working on something with a long time slice (so that it can be preempted).
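
In pseudo-C, the placement preference Guittot described looks something like the following; this is a conceptual sketch with invented names, not actual kernel code:

    struct cpu_state {
        int                idle;
        unsigned long long current_slice_ns; /* running task's slice */
    };

    /* Hypothetical threshold separating "long slice" tasks. */
    #define LONG_SLICE_NS 3000000ULL

    /* Place a short-slice (latency-sensitive) task: an idle CPU is
     * best; failing that, a CPU whose current task has a long slice
     * and can therefore be preempted without much harm. */
    static int place_short_slice_task(const struct cpu_state *cpus,
                                      int ncpus)
    {
        int fallback = -1;

        for (int i = 0; i < ncpus; i++) {
            if (cpus[i].idle)
                return i;
            if (fallback < 0 &&
                cpus[i].current_slice_ns >= LONG_SLICE_NS)
                fallback = i;
        }
        return fallback >= 0 ? fallback : 0;
    }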

The required code changes in the kernel aren't the main problem, though, Yousef claimed. The problem is how to encourage application developers to adopt a new API; it can take years for people to actually use new kernel interfaces. "I think we can do better", he said. Rather than an interface based on function calls, which requires application developers to update their programs, why not use a configuration-file-based approach? Applications could ship default configuration files if they care to, but users and distribution maintainers could add configuration files for any applications they want to see supported. It also leaves the user of the system in ultimate control: if an application ships an obnoxious default configuration, the user can override it.
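
No concrete file format was presented in the talk; purely to illustrate the idea, such a configuration might look something like this (the syntax, path, and thread-matching scheme here are all invented for this example):

    # /etc/qos.d/mygame.conf -- hypothetical; no such format exists today
    [thread-match:RenderThread]
    class = user-interactive

    [thread-match:AssetLoader]
    class = background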

One person asked why an application developer wouldn't just ship a configuration setting all of their threads to the highest priority. Another audience member pointed out that the proposed QoS API was really most useful for prioritizing threads within an individual application; a video game would want to make the user interface part of the user interactive class, but giving a background cleanup task the same class would just cause jitter or lag, while slowing down the throughput of background processing.

Yousef also said that the kernel should not be bound to blindly follow an application's configuration no matter what. There should be appropriate access control. If an application requests the user interactive QoS class, but the system knows that the application isn't currently in the foreground (possibly due to hints from the desktop environment, through the use of control groups, or from some other process-management mechanism), it could restrict the application to the utility class instead. On the other hand, any application should probably be allowed to mark a thread as part of the background class in order to get the throughput benefits, since those threads won't impact latency.

The important thing is to give the scheduler more information with which to make reasonable decisions for different workloads, Yousef summarized. For servers, which mostly care about throughput, it makes sense to leave everything at the default QoS level and let the existing code (which has largely been optimized for servers already) handle it. For laptops, phones, and other systems where latency matters more than raw throughput, flagging the few threads that matter most lets the scheduler make better decisions.

There is other ongoing work in the kernel that will also help with this, he added. There was a discussion at the conference in the toolchains track (video) about how to enable high-priority tasks to "donate" CPU time (or potentially other resources, such as QoS class or other scheduler settings) to lower-priority tasks that are currently holding a lock that the high-priority task needs. That way, a high-priority task isn't stuck waiting for a low-priority task to be scheduled on the CPU so it can make progress (a situation known as priority inversion). Such work has historically been called priority inheritance, but in the toolchains track talk it was called performance inheritance, to indicate that it can involve more than just priority. Regardless of what it is called, Yousef said that work would also contribute to improving user-visible latency.

Whether Linux will end up adopting a QoS API, and whether it will mimic Apple's API so closely, remains to be seen. It seems clear that there has been a shift in recent years, putting increased focus on concerns other than throughput in the kernel's scheduling subsystem. If that push continues, which seems likely, users may look forward to more responsive user interfaces in the future.

[ Thanks to the Linux Foundation for sponsoring my travel to the Linux Plumbers Conference. ]

Comments (36 posted)

READ_ONCE(), WRITE_ONCE(), but not for Rust

By Jonathan Corbet
January 9, 2026
The READ_ONCE() and WRITE_ONCE() macros are heavily used within the kernel; there are nearly 8,000 call sites for READ_ONCE(). They are key to the implementation of many lockless algorithms and can be necessary for some types of device-memory access. So one might think that, as the amount of Rust code in the kernel increases, there would be a place for Rust versions of these macros as well. The truth of the matter, though, is that the Rust community seems to want to take a different approach to concurrent data access.

An understanding of READ_ONCE() and WRITE_ONCE() is important for kernel developers who will be dealing with any sort of concurrent access to data. So, naturally, they are almost entirely absent from the kernel's documentation. A description of sorts can be found at the top of include/asm-generic/rwonce.h:

Prevent the compiler from merging or refetching reads or writes. The compiler is also forbidden from reordering successive instances of READ_ONCE and WRITE_ONCE, but only when the compiler is aware of some particular ordering. One way to make the compiler aware of ordering is to put the two invocations of READ_ONCE or WRITE_ONCE in different C statements.

In other words, a READ_ONCE() call will force the compiler to read from the indicated location exactly one time, with no optimization tricks that would cause the read to be either elided or repeated; WRITE_ONCE() will force a write under the same terms. They will also ensure that the access is atomic; if one task reads a location with READ_ONCE() while another is writing that location, the read will return the value as it existed either before or after the write, but not some random combination of the two. Beyond that, these macros impose no ordering constraints on the compiler or the CPU, which distinguishes them from primitives like smp_load_acquire() that provide stronger ordering guarantees.
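
As a minimal, kernel-style illustration of the pattern these macros exist for, consider a completion flag shared between a writer and a spinning reader:

    static int done;    /* shared between the two sides */

    /* Writer: publish the fact that the work is finished. */
    WRITE_ONCE(done, 1);

    /* Reader: poll for completion.  With a plain "while (!done)", the
     * compiler could legally load the flag once and spin forever. */
    while (!READ_ONCE(done))
        cpu_relax();

Note that neither side gets any ordering guarantees with respect to other memory accesses; if the flag were meant to publish other data, smp_store_release() and smp_load_acquire() would be the tools to reach for.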

The READ_ONCE() and WRITE_ONCE() macros were added for the 3.18 release in 2014. WRITE_ONCE() was initially called ASSIGN_ONCE(), but that name was changed during the 3.19 development cycle.

On the last day of 2025, Alice Ryhl posted a patch series adding implementations of READ_ONCE() and WRITE_ONCE() for Rust. There are places in the code, she said, where volatile reads could be replaced with these calls, once they were available; among other changes, the series changed access to the struct file f_flags field to use READ_ONCE(). The implementation of these macros involves a bunch of Rust macro magic, but in the end they come down to calls to the Rust read_volatile() and write_volatile() functions.

Some of the other kernel Rust developers objected to this change, though. Gary Guo said that he would rather not expose READ_ONCE() and WRITE_ONCE(), suggesting relaxed operations on the kernel's Atomic type instead. Boqun Feng expanded on the objection:

The problem of READ_ONCE() and WRITE_ONCE() is that the semantics is complicated. Sometimes they are used for atomicity, sometimes they are used for preventing data race. So yes, we are using LKMM [the Linux kernel memory model] in Rust as well, but whenever possible, we need to clarify the intention of the API, using Atomic::from_ptr().load(Relaxed) helps on that front.

IMO, READ_ONCE()/WRITE_ONCE() is like a "band aid" solution to a few problems, having it would prevent us from developing a more clear view for concurrent programming.

In other words, using the Atomic type allows developers to specify more precisely which guarantees an operation needs, making the expectations (and requirements) of the code clearer. This point of view would appear to have won out, and Ryhl has stopped pushing for this addition to the kernel's Rust code — for now, at least.

There are a couple of interesting implications from this outcome, should it hold. The first of those is that, as Rust code reaches more deeply into the core kernel, its code for concurrent access to shared data will look significantly different from the equivalent C code, even though the code on both sides may be working with the same data. Understanding lockless data access is challenging enough when dealing with one API; developers may now have to understand two APIs, which will not make the task easier.

Meanwhile, this discussion is drawing some attention to code on the C side as well. As Feng pointed out, there is still C code in the kernel that assumes a plain write will be atomic in many situations, even though the C standard explicitly says otherwise. Peter Zijlstra answered that all such code should be updated to use WRITE_ONCE() properly. Simply finding that code may be a challenge (though KCSAN can help); updating it all may take a while. The conversation also identified a place in the (C) high-resolution-timer code that is missing a needed READ_ONCE() call. This is another example of the Rust work leading to improvements in the C code.

In past discussions on the design of Rust abstractions, there has been resistance to the creation of Rust interfaces that look substantially different from their C counterparts; see this 2024 article, for example. If the Rust developers come up with a better design for an interface, the thinking went, the C side should be improved to match this new design. If one accepts the idea that the Rust approach to READ_ONCE() and WRITE_ONCE() is better than the original, then one might conclude that a similar process should be followed here. Changing thousands of low-level concurrency primitives to specify more precise semantics would not be a task for the faint of heart, though. This may end up being a case where code in the two languages just does things differently.

Comments (22 posted)

Asciinema: making movies at the command-line

By Joe Brockmeier
January 12, 2026

In open-source circles there are many situations, such as bug reports, demos, and tutorials, when one might want to provide a play-by-play of a session in one's terminal. The asciinema project provides a set of tools to do just that. Its tools let users record, edit, and share terminal sessions in a text-based format that has quite a few advantages compared to making and sharing videos of terminal sessions. For example, it is easy to use, offers the ability to search text from recorded sessions, and allows users to copy and paste directly from the recording.

History

Marcin Kulik started the project in 2011; it was originally called "ascii.io" and then renamed to asciinema and published to the Python Package Index (PyPI) in 2013. The asciinema project now consists of several parts: a command-line interface (CLI) recorder and player, a web player, the asciicast file format for recordings, the agg utility to create animated GIFs, and the asciinema virtual terminal (avt) used by the project's other components.

Each component has its own license; the asciinema CLI and agg are available under the GPLv3, while the other components are Apache 2.0-licensed. The project's components have undergone a number of changes and rewrites in the past 13 years. Version 3.0 of the CLI, released in September 2025, featured a complete rewrite in Rust, as well as an updated (v3) file format.

The project provides hosting for recordings on asciinema.org for users who don't mind storing them with a third party. The terms of service seem fairly standard for an open-source project that provides content hosting. LWN readers can explore some of the published recordings to get a sense of how others are using the tools. I should note that the project seems to have done a good job keeping recordings online: I have recordings from 2017 that are still available.

The server software is open source too, of course, for those who prefer to self-host. It is written in Elixir, uses the Phoenix Framework, and is available under the Apache 2.0 license. The most recent server release, v20251114 from November 2025, included new search features and stricter validation of uploaded asciicast files.

Setting the stage

The asciinema CLI is packaged for most popular Linux distributions, but in some cases the available version is outdated. Debian 13, for example, has the 2.4.0 release from October 2023. The project provides Linux binaries for x86-64 with each release, or users can choose to use Rust's cargo to build from source.

After installing the CLI, the asciinema command can be used to record terminal sessions, play them back, or upload the recording to asciinema.org (or another server). For example, this command will record a terminal session to the file lwn.cast:

    $ asciinema rec -i 1 --window-size 80x24 lwn.cast

The rec subcommand tells asciinema to record; -i 1 caps any idle period in the recording at one second, which is handy for demos because it trims long pauses spent composing commands. The --window-size option does what it says on the tin: it sets the terminal window size for the session to the specified number of columns and rows.

As the "how it works" page explains, asciinema captures all terminal output, including window resize events as well as the escape and control sequences in their unfiltered form. By default, asciinema does not record keyboard input (lest it record passwords, for example) but that can also be enabled with the -I option.

The lwn.cast file is plain text; by default, asciinema rec saves a session as an asciicast v3-formatted file, which is newline-delimited JSON. Opening a v3 file in a text editor shows that the first line holds metadata: the version of asciinema used, the size of the terminal, environment variables, and so forth. Each subsequent line records one event in the stream; roughly speaking, each chunk of output written to the terminal during the session. Each event line is a JSON array with the following fields:

    [interval, code, data]

The interval is the time since the previous event, in seconds. The code gives the type of event: "o" for output, "i" for input, "m" for a marker used in playback and navigation, and so forth. The data field holds the event payload as a UTF-8 string.
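
As an illustration, a (trimmed) recording of a single command might look like this; a real header carries more metadata than is shown here:

    {"version": 3, "term": {"cols": 80, "rows": 24}}
    [0.2, "o", "$ echo hello\r\n"]
    [1.1, "o", "hello\r\n"]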

The v2 format, used by the 2.x versions of asciinema, is still available as an output option in asciinema 3.x for sharing recordings with users who have not upgraded. Alternatively, the "asciinema convert" command can turn a v3 recording into a v2 file. The CLI also offers a raw output format, which contains the terminal output without timing information or metadata, and a plain-text format that strips control sequences and colors.
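
The invocation is along these lines (the -f option name reflects my reading of the documentation; "asciinema convert --help" is authoritative):

    $ asciinema convert -f asciicast-v2 lwn.cast lwn-v2.cast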

Performance

I've found asciinema particularly useful in the past for recording demos and sharing them as embeds for blog posts or to play back during talks. Fumbling around on the keyboard during a talk is not my favorite thing to do, and if something can go wrong during a live demo, it probably will. Prerecording a session's demos not only makes for a smoother presentation; it allows a speaker to share the demos for the audience to review later. An asciinema recording can be a helpful supplement to provide along with slides and video of a talk.

The "asciinema play filename.cast" command plays a session recording back in the terminal. The terminal-based player is a bit limited in terms of controls; it only supports pausing with the "Space" key, stepping through the recording one frame at a time with the "." key, or advancing to a marker by pressing "]". A marker is similar to a chapter on a DVD or a track on a CD.

The web-based player is written in JavaScript and Rust (compiled to WebAssembly). The project has a quick start that demonstrates how to embed the player on a web page without needing to host a separate asciinema server. If a recording is hosted on asciinema.org or another server, it can be embedded in a web page with a "<script>" tag. The web version is more full-featured than the CLI player; it allows rewinding and fast-forwarding playback, stepping backward and forward through a recording, and more. It is also possible to link directly to a specific time in a video by appending something like "?t=M:SS", where M would be the minutes and SS would be the seconds.
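
With a placeholder recording ID, the embed markup looks something like the following; the share dialog on the server provides the exact snippet for a given recording:

    <script src="https://asciinema.org/a/RECORDING_ID.js"
            id="asciicast-RECORDING_ID" async></script>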

Users can copy and paste from the CLI or web-based players, which can make asciinema recordings quite useful for tutorials or otherwise sharing with coworkers or collaborators. Showing a coworker how to do something at the terminal is easy as pie when they can watch a real-time recording of it being done and pause to copy the commands into their own terminal. Likewise, it may be a useful format to include with bug reports and the like; why settle for a log file when it's possible to include a detailed session that shows exactly what happens when a bug is encountered?

The 3.0 release added two new commands for live streaming terminal sessions, "asciinema stream" and "asciinema session". The "stream" command publishes a live stream either through a built-in HTTP server, or by feeding the stream through an asciinema server as a relay. The "session" command allows a user to live stream a session and record it to a file at the same time. That can be ideal if one has to do a live demo for coworkers and save it for posterity as well.
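
Concretely, that looks something like the following; the flag names here reflect my reading of the 3.0 documentation, and "asciinema stream --help" has the authoritative list:

    $ asciinema stream --local      # serve from a built-in HTTP server
    $ asciinema stream --remote     # relay through an asciinema server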

Prior versions of asciinema made it too easy to accidentally upload recordings, but that is no longer the case. Users must specify a filename when recording a session, and uploading is a manual step that requires the user to pick a server to upload to. To upload a recording, use "asciinema upload recording.cast". Note that the project allows users to upload recordings that are not linked to an account; those sessions are deleted after seven days. Recordings associated with an account are preserved indefinitely. Here is a basic demo created with asciinema that shows its ability to capture rich terminal output:

Assuming one has JavaScript enabled, the embedded player should show up above this sentence. Note that this demonstrates another feature of asciinema; it allows setting a terminal theme for playback that is different from the terminal settings used to record a session. I've chosen the "Monokai" theme here so that the player will stand out more for readers who have the site set to dark mode.

The project provides Linux container images for those who would like to self-host the asciinema server using Docker, Podman, or Kubernetes. The quick start instructions include a configuration to run all of its services with Docker Compose. There is also a high-level explanation of the configuration for those who want to understand what's going on under the hood; it may be helpful for anyone who wishes to run the services with a different container manager, or without one entirely.

The project is largely a one-person show; there are a number of contributors to the various components, but Kulik has thousands of commits to each repository, while those of other contributors tend to number in the single digits. Kulik has kept the project running pretty well for nearly 15 years; it is mature, updates come out regularly, and he seems to care about maintaining backward compatibility for asciinema's users. There is a Discourse forum for users to discuss the project, and the project is on the fediverse for those who would like to keep up with updates.

Comments (14 posted)

Page editor: Joe Brockmeier

Inside this week's LWN.net Weekly Edition

  • Briefs: OpenSSL and Python; LSFMM+BPF 2026; Fedora elections; Gentoo retrospective; EU lawmaking; Git data model; Firefox 147; Radicle 1.6.0; Quotes; ...
  • Announcements: Newsletters, conferences, security updates, patches, and more.
Next page: Brief items>>

Copyright © 2026, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds