
LWN.net Weekly Edition for October 30, 2025

Welcome to the LWN.net Weekly Edition for October 30, 2025

This edition contains the following feature content:

  • Retrieving pixels from Android phones with Pixnapping: a GPU side-channel attack that can steal secrets from other apps.
  • Fil-C: A memory-safe C implementation: a "fanatically compatible" compiler for memory-safe C and C++.
  • Debian splits ftpmaster team: the longstanding team's duties are divided between two new teams.
  • GoFundMe to delete unwanted open-source foundation pages: the platform backs off from auto-generated nonprofit pages.

This week's edition also includes these inner pages:

  • Brief items: Brief news items from throughout the community.
  • Announcements: Newsletters, conferences, security updates, patches, and more.

Please enjoy this week's edition, and, as always, thank you for supporting LWN.net.


Retrieving pixels from Android phones with Pixnapping

By Jake Edge
October 29, 2025

A new class of attacks on Android phones, called "Pixnapping", was announced on October 13. It allows a malicious app to gather output rendered in a victim app, pixel-by-pixel, by exploiting a GPU side-channel. Depending on what the victim app displays, anything from sensitive email and chats to two-factor authentication (2FA) codes could be captured—and shipped off to an attacker's site.

As noted in the Pixnapping paper (with seven authors from various universities), pixel-stealing attacks are not new. They were described in 2013 in the context of web browsers, using iframes and SVG filters; since then, browsers have largely mitigated those kinds of attacks with various restrictions. Pixnapping applies the ideas behind pixel stealing to Android apps, completely outside of the browser context—though, since the browser is itself just another app, it too can be targeted. The demo video on the Pixnapping site shows a malicious app relaying 2FA credentials from the Google Authenticator app in less than the 30-second timeout, which allowed the "attacker" to log into a Reddit account.

The vulnerability uses Android intents to cause a specific app to start and thus send its output into the rendering pipeline, but it does so with flags that prevent the victim app's output from appearing on the phone's screen. The malicious app then sends more intents to layer a stack of semi-transparent Android activities on top of that output. All of this can be done without the user realizing anything has happened, even if the intended victim app is not installed; normally, sending an intent for a nonexistent app produces a warning to the user, but the malicious app can suppress that by catching an exception.

In order to determine whether a pixel is white or non-white (though there are variants to distinguish more colors), a mask is used. One of the activities in the stack consists of all white pixels except for a transparent pixel at the location of interest; if that pixel is non-white, the victim app is displaying something there. The malicious app then uses the blurring operation, which is the only GPU operation that can be applied to another app's output in Android, to enlarge the pixel and make it easier to detect its value via the side-channel.

The Pixnapping attack measures the timing of frame rendering by registering a callback for the Android VSync signal, which fires when the SurfaceFlinger compositor has completed rendering for the screen. The malicious app invalidates the transparent window of the activity at the bottom of the stack it created (i.e. the one directly above the victim app). That means SurfaceFlinger needs to re-render and compose all of the windows above it in the stack, consulting the victim app's output to do so. By measuring the time that takes, the malicious app can derive the status of the pixel in a hardware-specific manner, though it requires multiple measurements (34 or 64) for each pixel to get an accurate result. In addition, for many of the attacks described in the paper, a 1.5-second delay between pixels was inserted to avoid Android throttling the CPU, which might affect the measurements.
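
The decision at the end of that measurement loop amounts to simple thresholding. Here is a minimal C sketch of how repeated samples for one pixel might be reduced to a single bit; the averaging and the threshold parameter are illustrative only, and a real attack would calibrate both per device and GPU:

```c
#include <assert.h>
#include <stddef.h>

/* Reduce repeated frame-render timings for one probed pixel to a
 * single bit. threshold_ns is hardware-specific (hypothetical here);
 * returns 1 for "slow" rendering, meaning the masked pixel compressed
 * poorly, i.e. it was non-white. */
int classify_pixel(const long *samples_ns, size_t n, long threshold_ns)
{
    long long sum = 0;

    for (size_t i = 0; i < n; i++)
        sum += samples_ns[i];
    return (sum / (long long)n) > threshold_ns;
}
```

The paper's 34 or 64 samples per pixel exist precisely because individual timings are noisy; averaging (or a more robust statistic) is what makes the signal usable.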

The actual side-channel is called GPU.zip, which "exploits an optimization that is data dependent, software transparent, and present in nearly all modern GPUs: graphical data compression". In order to save memory bandwidth, GPUs compress the data they operate on in a way that causes measurable differences in the rendering speed; the lossless compression algorithm used is dependent on the type of GPU installed. So a non-white pixel will result in less compression for the mask than a white pixel will—measurably so. The Pixnapping paper reports on attacks against Google Pixel phones (versions 6-9) and the Samsung Galaxy S25, but the underlying ideas are more widely applicable. The researchers empirically determined the right number of transparent activities to add into the stack to be able to derive a signal from the noisy render-time measurements.

The paper describes attacks against a number of different web sites and apps. For example, an intent can be used to open a specific web site, such as myaccounts.google.com, where the attack was able to retrieve specific information, such as the user's full name or home address. The attack can also sift through email in the Gmail web site, recover timeline information from the Google Maps app, access financial information in the Venmo app, retrieve messages from the Signal app, and more.

None of those attacks is particularly speedy, however. Depending on the amount of screen real estate the desired information covers, these attacks can take hours to carry out. For example, an unoptimized attack on a Google Maps timeline entry, which covers around 60,000 pixels, takes 20-27 hours to complete depending on the phone model. One suspects that a user just might notice a non-sleeping, likely fairly warm, phone in their pocket long before the battery ran out, but the attack against the Google Authenticator app shows an optimization path that targeted attacks could use.

The key to the optimization is recognizing that there is no need to recover all of the pixels of the six-digit codes displayed by the app. In fact, depending on the font used, single pixels can be enough to eliminate multiple digits. In the Google Sans font used by Authenticator, a single pixel can determine whether the digit is in one set of digits (2, 3, 4, 7, 8) or not; it only takes four pixels to uniquely determine the actual value. Beyond that, this attack reduced the number of samples used for each pixel to 16 (instead of 34 or 64) and the delay between pixels from 1.5 seconds to 70ms. The attack also waited for the start of a fresh 30-second interval to give itself the maximum amount of time to leak the code while it was still valid.
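
The arithmetic works out because four probed pixels give 16 possible on/off patterns, enough to assign each of the ten digits a unique signature. A C sketch of the idea follows; the signature values are invented placeholders, not measurements of the Google Sans glyphs:

```c
#include <assert.h>

/* Hypothetical 4-bit signatures, one bit per probed pixel position.
 * Bit 0 stands in for the single pixel that separates {2,3,4,7,8}
 * from the other digits; the remaining bits are arbitrary, chosen
 * only to make all ten signatures distinct. */
static const unsigned char digit_sig[10] = {
    0x0, 0x2, 0x1, 0x3, 0x5, 0x4, 0x6, 0x7, 0x9, 0x8
};

/* Map a four-pixel measurement back to a digit; -1 if no digit
 * matches (i.e. the measurement was too noisy). */
int identify_digit(unsigned int sig)
{
    for (int d = 0; d < 10; d++)
        if (digit_sig[d] == sig)
            return d;
    return -1;
}
```

Recovering four pixels per digit instead of the hundreds each glyph occupies is what brings the attack under the 30-second code lifetime.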

The results were generally good (from an attacker's perspective), though far from perfect. The accuracy ranged from 28% on a Pixel 8 to 73% on a Pixel 6; the Pixel 7 and 9 both came in at 53%. The average time to recover a code ranged from 14.3 seconds on the Pixel 6 to 25.8 seconds on the Pixel 7; the other two were around 25 seconds. The Samsung device thwarted the researchers' efforts to leak the codes within 30 seconds "due to significant noise"; they left tuning the attack on that device for future work.

According to the Pixnapping web site, Google has already attempted a fix for the problem by limiting the number of activities that an app can apply the blur operation to, but the researchers found a way around that, which is still under embargo. There are three conditions required for the attack, but two of them may not lend themselves to mitigations. Activities and intents are a major part of the Android architecture, so making changes to those mechanisms may not be popular with users. The paper notes that attackers generally find another side channel after one is closed down, so focusing on the side channel is not the researchers' recommended fix either. Instead, much as Google has already tried, they suggest restricting the ability of apps to arbitrarily add layers on top of other apps' output:

We therefore believe that our attack would be best mitigated by targeting condition two, i.e., by preventing attacker computations on victim pixels. One way to achieve this would be to allow developers to restrict transparent layering over their activities to an explicit allowlist.

Pixnapping is a clever attack, and the difficulty in mitigating it, at least so far, means that it will be around for a while longer. It does seem somewhat impractical for various reasons but, as with most attacks, it will likely improve over time. It may be much more useful for targeted attacks against specific phones (people) or apps, however. As the web site notes, Android users should ensure that security patches get installed as soon as they are available. That's good advice, of course, though the fix for this hole may still be a ways off.


Fil-C: A memory-safe C implementation

By Daroc Alden
October 28, 2025

Fil-C is a memory-safe implementation of C and C++ that aims to let C code — complete with pointer arithmetic, unions, and other features that are often cited as a problem for memory-safe languages — run safely, unmodified. Its dedication to being "fanatically compatible" makes it an attractive choice for retrofitting memory-safety into existing applications. Despite the project's relative youth and single active contributor, Fil-C is capable of compiling an entire memory-safe Linux user space (based on Linux From Scratch), albeit with some modifications to the more complex programs. It also features memory-safe signal handling and a concurrent garbage collector.

Fil-C is a fork of Clang; it's available under an Apache v2.0 license with LLVM exceptions for the runtime. Changes from the upstream compiler are occasionally merged in, with Fil-C currently being based on version 20.1.8 from July 2025. The project is a personal passion of Filip Pizlo, who has previously worked on the runtimes of a number of managed languages, including Java and JavaScript. When he first began the project, he was not sure that it was even possible. The initial implementation was prohibitively slow to run, since it needed to insert a lot of different safety checks. This has given Fil-C a reputation for slowness. Since the initial implementation proved viable, however, Pizlo has managed to optimize a number of common cases, making Fil-C-generated code only a few times slower than Clang-generated code, although the exact slowdown depends heavily on the structure of the benchmarked program.

Reliable benchmarking is notoriously finicky, but in order to get some rough feel for whether that level of performance impact would be problematic, I compiled Bash version 5.2.32 with Fil-C and tried using it as my shell. Bash is nearly a best case for Fil-C, because it spends more time running external programs than running its own code, but I still expected the performance difference to be noticeable. It wasn't. So, at least for some programs, the performance overhead of Fil-C does not seem to be a problem in practice.

In order to support its various run-time safety checks, Fil-C does use a different internal ABI than Clang does. As a result, objects compiled with Fil-C won't link correctly against objects generated by other compilers. Since Fil-C is a full implementation of C and C++ at the source-code level, however, in practice this just requires everything to be recompiled with Fil-C. Inter-language linking, such as with Rust, is not currently supported by the project.

Capabilities

The major challenge of rendering C memory-safe is, of course, pointer handling. This is especially complicated by the fact that, as the long road to CHERI-compatibility has shown, many programs expect a pointer to be 32 or 64 bits, depending on the architecture. Fil-C has tried several different ways to represent pointers since the project's beginning in 2023. Fil-C's first pointers were 256 bits, not thread-safe, and didn't protect against use-after-free bugs. The current implementation, called "InvisiCaps", allows for pointers that appear to match the natural pointer size of the architecture (although this requires storing some auxiliary information elsewhere), with full support for concurrency and catching use-after-free bugs, at the expense of some run-time overhead.

Fil-C's documentation compares InvisiCaps to a software implementation of CHERI: pointers are separated into a trusted "capability" piece and an untrusted "address" piece. Since Fil-C controls how the program is compiled, it can ensure that the program doesn't have direct access to the capabilities of any pointers, and therefore the runtime can rely on them being uncorrupted. The tricky part of the implementation comes from how these two pieces of information are stored in what looks to the program like 64 bits.

When Fil-C allocates an object on the heap, it adds two metadata words before the start of the allocated object: an upper bound, used to check accesses to the object based on its size, and an "aux word" that is used to store additional pointer metadata. When the program first writes a pointer value into an object, the runtime allocates a new auxiliary allocation of the same size as the object being written into, and puts an actual hardware-level pointer (i.e., one without an attached capability) to the new allocation into the aux word of the object. This auxiliary allocation, which is invisible to the program being compiled, is used to store the associated capability information for the pointer being stored (and is also reused for any additional pointers stored into the object later). The address value is stored into the object as normal, so any C bit-twiddling techniques that require looking at the stored value of the pointer work as expected.
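
As a rough sketch of that layout, with invented names and none of the real runtime's details, the per-object header and the bounds check it enables might look like this:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Conceptual per-object header, as described above: two metadata
 * words placed before the user-visible allocation. Fil-C's actual
 * layout and field names differ. */
struct filc_header {
    void     *upper_bound;  /* one past the end of the user object */
    uintptr_t aux_word;     /* raw pointer to the invisible auxiliary
                               allocation (or 0 until the first
                               pointer is stored into the object) */
};

/* Check that an n-byte access through p stays inside the object
 * that starts at base and is bounded by h->upper_bound. */
int in_bounds(const struct filc_header *h, const void *base,
              const void *p, size_t n)
{
    const char *b = base;
    const char *q = p;

    return q >= b && q + n <= (const char *)h->upper_bound;
}

static char example_obj[16];  /* stand-in for a 16-byte heap object */

/* An in-range 8-byte access passes; one that runs past the bound
 * fails, which is where Fil-C would trap. */
int bounds_demo(void)
{
    struct filc_header h = { example_obj + sizeof(example_obj), 0 };

    return in_bounds(&h, example_obj, example_obj + 8, 8)
        && !in_bounds(&h, example_obj, example_obj + 12, 8);
}
```

The aux allocation mirrors the object byte-for-byte in size, so the capability for a pointer slot can always be found at the same offset in the aux allocation as the slot itself.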

This approach does mean that structures containing pointers end up using twice as much memory, and that every load of a pointer involves an indirection through the aux word. The documentation claims that, in practice, this approach makes most programs run about four times more slowly, although that number depends on how heavily the program makes use of pointers. Still, Pizlo has ideas for several optimizations that he hopes can bring the performance overhead down over time.

One wrinkle with this approach is atomic access to pointers — i.e. using _Atomic or volatile. Luckily, there is no problem that cannot be solved with more pointer indirection: when the program loads or stores a pointer value atomically, instead of having the auxiliary allocation contain the capability information directly, it points to a third 128-bit allocation that stores the capability and pointer value together. That allocation can be updated with 128-bit atomic instructions, if the platform supports them, or by creating new allocations and atomically swapping the pointers to them.
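
Where 128-bit atomics are unavailable, the swap-in-a-fresh-allocation fallback can be sketched as follows; all names are invented, and a real runtime would hand the displaced box to the garbage collector rather than leak it:

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdint.h>
#include <stdlib.h>

/* A "box" holding the capability and address halves of one atomic
 * pointer slot together, so both can be swapped in a single atomic
 * pointer exchange. */
struct ptr_box {
    void     *capability;  /* trusted piece; opaque in this sketch */
    uintptr_t address;     /* the value the program sees */
};

static _Atomic(struct ptr_box *) slot;

void atomic_ptr_store(void *cap, uintptr_t addr)
{
    struct ptr_box *box = malloc(sizeof(*box));

    box->capability = cap;
    box->address = addr;
    /* The old box is leaked here; Fil-C's GC would reclaim it once
     * no reader can still hold a pointer to it. */
    atomic_exchange(&slot, box);
}

uintptr_t atomic_ptr_load_addr(void)
{
    struct ptr_box *b = atomic_load(&slot);

    return b ? b->address : 0;
}
```

Readers always see a consistent {capability, address} pair because they dereference whichever box was installed last, never a half-updated one.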

Since the aux word is used to store a pointer value, Fil-C can use pointer tagging to store some additional information there as well; that is used to indicate special types of objects that need to be handled differently, such as functions, threads, and mmap()-backed allocations. It's also used to mark freed objects, so that any access results in an error message and a crash.
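
Pointer tagging of this sort relies on allocation alignment leaving the low bits of the address free. A sketch with invented tag values (Fil-C's actual encoding differs):

```c
#include <assert.h>
#include <stdint.h>

/* With allocations aligned to at least eight bytes, the low three
 * bits of the aux word are free to describe the kind of object.
 * These tag values are illustrative, not Fil-C's. */
enum aux_tag {
    TAG_NORMAL   = 0,
    TAG_FUNCTION = 1,
    TAG_THREAD   = 2,
    TAG_MMAP     = 3,
    TAG_FREED    = 4,  /* any access via a freed object must trap */
};

static uintptr_t aux_pack(void *aux, enum aux_tag tag)
{
    return (uintptr_t)aux | (uintptr_t)tag;
}

static enum aux_tag aux_tag_of(uintptr_t w)
{
    return (enum aux_tag)(w & 7);
}

static void *aux_ptr_of(uintptr_t w)
{
    return (void *)(w & ~(uintptr_t)7);
}

static _Alignas(8) char aux_storage[8];  /* stand-in aux allocation */
```

Because every pointer load already goes through the aux word, checking the tag on that path costs little extra; a TAG_FREED hit is how a use-after-free turns into an error message and a crash rather than silent corruption.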

Memory management

When an object is freed, its aux word marks it as a free object, which lets the auxiliary allocation be reclaimed immediately. The original object can't be freed immediately, however. Otherwise, a program could free an object, allocate a new object in the same location, and thereby cover up use-after-free bugs. Instead, Fil-C uses a garbage collector to free an object's backing memory only once all of the pointers to it go away. Unlike other garbage collectors for C — such as the Boehm-Demers-Weiser garbage collector — Fil-C can use the auxiliary capability information to track live objects precisely.

Fil-C's garbage collector is both parallel (collection happens faster the more cores are available) and concurrent (collection happens without pausing the program). Technically, the garbage collector does require threads to occasionally pause just long enough to tell it where pointers are located on the stack, but that only occurs at special "safe points" — otherwise, the program can load and manipulate pointers without notifying the garbage collector. Safe points are used as a synchronization barrier: the collector can't know that an object is really garbage until every thread has passed at least one safe point since it finished marking. This synchronization is done with atomic instructions, however, so in practice threads never need to pause for longer than a few instructions.

The exception is the implementation of fork(), which uses the safe points needed by the garbage collector to temporarily pause all of the threads in the program in order to prevent race conditions while forking. Fil-C inserts a safe point at every backward control-flow edge, i.e., whenever code could execute in a loop. In the common case, the inserted code just needs to load a flag register and confirm that the garbage collector has not requested anything be done. If the garbage collector does have a request for the thread, the thread runs a callback to perform the needed synchronization.
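
A minimal sketch of that poll, with invented names: one relaxed flag load per loop back-edge, and a slow path only when the collector has raised the flag:

```c
#include <assert.h>
#include <stdatomic.h>

static _Atomic int gc_request;  /* set by the collector */
static int callbacks_run;       /* counts slow-path entries */

/* Slow path: in a real runtime this would report the thread's stack
 * roots (or pause for fork()) before letting the thread resume. */
static void safepoint_slow_path(void)
{
    callbacks_run++;
    atomic_store(&gc_request, 0);
}

/* Fast path: a single flag load, as described above. */
static inline void safepoint_poll(void)
{
    if (atomic_load_explicit(&gc_request, memory_order_relaxed))
        safepoint_slow_path();
}

long sum_with_safepoints(const int *a, int n)
{
    long s = 0;

    for (int i = 0; i < n; i++) {
        safepoint_poll();  /* poll on the backward control-flow edge */
        s += a[i];
    }
    return s;
}
```

The common case is a load and a not-taken branch, which is why threads in Fil-C rarely pause for more than a few instructions.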

Fil-C uses the same safe-point mechanism to implement signal handling. Signal handlers are only run when the interrupted thread reaches a safe point. That, in turn, allows signal handlers to allocate and free memory without interfering with the garbage collector's operation; Fil-C's malloc() is signal-safe.

Memory-safe Linux

Linux From Scratch (LFS) is a tutorial on compiling one's own complete Linux user space. It walks through the steps of compiling and installing all of the core software needed for a typical Linux user space in a chroot() environment. Pizlo has successfully run through LFS with Fil-C to produce a memory-safe version, although a non-Fil-C compiler is still needed to build some fundamental components, such as Fil-C's own runtime, the GNU C library, and the kernel. (While Fil-C's runtime relies on a normal copy of the GNU C library to make system calls, the programs that Fil-C compiles use a Fil-C-compiled version of the library.)

The process is mostly identical to LFS up through the end of chapter 7, because everything prior to that point consists of using cross-build tools to obtain a working compiler in the chroot() environment. The one difference is that the cross-build tools are built with a different configured prefix, so that they won't conflict with Fil-C. At that point, one can build a copy of Fil-C and use it to mostly replace the existing compiler. The remaining steps of LFS are unchanged.

Scripts to automate the process are included in the Fil-C Git repository, including some steps from Beyond Linux From Scratch that result in a working graphical user interface and a handful of more complicated applications such as Emacs.

Overall, Fil-C offers a remarkably complete solution for making existing C programs memory-safe. While it does nothing for undefined behavior that is not related to memory safety, the most pernicious and difficult-to-prevent security vulnerabilities in C programs tend to rely on exploiting memory-unsafe behavior. Readers who have already considered and rejected Fil-C for their use case due to its early performance problems may wish to take a second look — although anyone hoping for stability might want to wait for others to take the plunge, given the project's relative immaturity. That said, for existing applications where a sizeable performance hit is preferable to an exploitable vulnerability, Fil-C is an excellent choice.


Debian splits ftpmaster team

By Joe Brockmeier
October 29, 2025

Debian's ftpmaster team has been responsible for allowing new packages to enter Debian, removing old packages, and otherwise maintaining Debian's package archive for more than two decades. As of October 26, the team is no more and its duties are being split between two new teams: the Archive Operations Team, which will focus on the infrastructure required to support the Debian archives, and the DFSG, Licensing & New Packages Team, which is responsible for reviewing packages entering the new queue. In time, this move could speed up processing of new packages, as well as making the teams more sustainable, but only after new members are recruited and trained. For now, the same folks are doing the work, just spread across two teams.

Ftpmaster frustrations

The ftpmaster team has been in place at least since 2000, according to a snapshot of the Debian Organizational Structure page on the Internet Archive. It held a great deal of control over what did, or did not, enter Debian's archive. And with great power, of course, came a lot of responsibility as well. The team's duties included maintaining the Debian archive infrastructure, developing the Debian Archive Kit (dak) software, and reviewing new packages. When a package is uploaded to Debian for the first time, it is placed in the new queue; before a package is allowed to enter the archive, it must be checked to ensure that it complies with Debian policy, has an appropriate license, does not have a name that conflicts with another package, and so on. The Reject FAQ provides a non-exhaustive list of reasons that packages might be rejected.

That gatekeeping role also made the team something of a bottleneck; packages submitted to the new queue could languish for months before being approved or rejected. There is a summary page for the new queue, as well as a statistics page with graphs that track the number of packages in new over time. According to the summary page, there are many packages that have been in the queue for several months.

In one of the recent discussions about the ftpmaster team, Otto Kekäläinen cited an example of an aspiring Debian developer waiting months to see their work reviewed by someone from the team. The contributor, he said, "has been mostly idle with his Debian work just waiting for the package to pass in order to proceed". It is fair to note, though, that the delayed packages are outliers: the median time for packages in the new queue is less than two days, according to an email from Matthias Urlichs in March. Even so, developers who have had to wait on reviews likely find little consolation in knowing that other packages are moving through more quickly. It also does not help Debian retain new developers if their early encounters with packaging involve months of waiting.

One of the reasons the ftpmaster team has given for packages waiting a long time for review is that there are too few hands to do the work. That has been a problem for quite some time; when LWN covered the ftpmaster team in 2010, Joerg Jaspert had hoped to add at least one more person to the team, citing too few people to review all of the new packages. Finding qualified volunteers is harder than it may sound, though; the scope of the duties meant that it was a rare individual indeed who could fill the shoes of an ftpmaster.

Jaspert said then, in a call for new volunteers, that becoming an ftpmaster required a candidate to possess a basic understanding of "just about every programming language you can imagine", have a good understanding of how packaging works, and have a love of reading and dealing with legal texts. A volunteer should also, he observed, be able to deal with doling out unpopular decisions. "If you can't stand a bit of flames / don't like to take hard decisions, this is no job for you."

Finding people with the full set of skills to do the job was already difficult; adding to that was the fact that the existing members of the team did not have the bandwidth to mentor newcomers. Debian Project Leader (DPL) Andreas Tille said, in March 2025, that the ftpmaster team was looking for new members, but Sean Whitton quickly replied: "No, we are not." Whitton said that it was not a good time for the ftpmaster team to train new people, because the existing team was too busy doing other things.

Time to split

The split has been in the works since DebConf24 in Busan, South Korea. It was brought up in a "meet the ftpteam" BoF (notes); it was also a topic of discussion during the DPL election campaign period this year. Whitton complained that nothing had been done to address perceived problems with the ftpmaster team. Part of Tille's lengthy reply was that he wanted to "gather advice from all sides and work toward solutions with consensus".

It has taken a while, but Tille announced, in his "Bits from the DPL" email on October 3, that he was planning to split the ftpmaster team into two teams: one to review packages for compliance with the Debian Free Software Guidelines (DFSG), and one to manage archive operations. This would, he reasoned, make it easier for each team to concentrate on its tasks and for new contributors to get involved, while helping Debian developers to understand the process.

Overall the reaction to the idea was positive, though there was some pushback on his ideas about package removals. Tille had noted that, previously, package removals were not officially part of the ftpmaster set of duties, "though they remain an important responsibility". He proposed that Debian should be able to withdraw a package within 48 hours in the case of copyright claims or major security vulnerabilities.

What "withdraw" meant was not entirely clear, and the timeline given raised a few objections. Adrian Bunk said that it would be challenging to do everything required to remove a package within 48 hours. Even if a code fix was available immediately, he said, updating the source package, rebuilding installers (if required), and creating a new point release would be challenging to do within 48 hours.

Holger Levsen said that he was "perplexed and shocked" that Tille would propose such an aggressive timeline. Tille replied that his choice of wording "shifted the focus into an unfortunate direction". He was really interested in discussing whether the package removals "should explicitly fall under the responsibility of the Archive Operations Team". He also said that he was looking for "a formalized process that shows we take such reports seriously and that helps protect our developers from potential legal exposure".

Delegations

Aside from concerns about removal timelines, there was not much discussion on the list about the proposed split. Tille announced the delegations for the teams on October 26. Both teams are starting with the same four members, carried over from the last ftpmaster delegation that was issued by Tille on August 18: Thorsten Alteholz, Ansgar Burchardt, Luke Faraone, and Jaspert. One email details the DFSG, Licensing & New Packages Team ("DFSG team"), and another describes the Debian Archive Operations Team ("Archive team"). Ultimately, he placed responsibility for package removals with the Archive team. The announcements also revoked the former ftpmaster team delegation, so it officially no longer exists.

One thing that is unclear is the status of those who were involved with the ftpmaster team as assistants or trainees; whether the new teams will create assistant or trainee roles with lesser privileges remains to be seen.

The list of responsibilities will look familiar; aside from package removals, all of the tasks are the same, just split down the middle (or thereabouts). The Archive team is tasked with operating the Debian archive, maintaining its infrastructure (such as tools for processing uploads) and the dak software, as well as documenting its processes "especially those related to releases". The DFSG team is responsible for handling packages in the new queue, communicating about the status of those packages, ensuring that the packages "respect the DFSG and applicable licensing and legal requirements", and documenting its policies.

Even with the divvying up of duties, volunteers are going to need a rare set of skills to meet the requirements of either team. The problem of finding new volunteers, and mentoring them, still falls to the same people. But, one hopes, this will eventually help Debian expand the number of people doing crucial work and reduce the load on each volunteer. It will be interesting to see how it works out over the coming months.


GoFundMe to delete unwanted open-source foundation pages

By Joe Brockmeier
October 24, 2025

Open-source foundations and projects that have charity status in the US may want to see if GoFundMe has created a profile for them without permission. The company has operated since 2010 as a self-service fundraising platform; individuals or groups could create pages to raise money for all manner of causes. In June, the company announced that it would expand its offerings to "manage all aspects of charitable giving" for users through its platform. That seems to include creating profiles for nonprofit organizations without their involvement. After pushback, the company said on October 23 that it would be removing the pages. It has not answered more fundamental questions about how it planned to disburse funds to nonprofits that had no awareness of the GoFundMe pages in the first place.

There are 29 types of nonprofit organization in the US. The 501(c)(3) type of organization is exempt from federal income tax, and donors may deduct donations to those charities from their income for federal taxes. Many open-source foundations based in the US, such as the Apache Software Foundation (ASF), Free Software Foundation (FSF), Python Software Foundation (PSF), and Software Freedom Conservancy (SFC), have 501(c)(3) status. Other foundations, such as the Linux Foundation and Ruby Central, have 501(c)(6) status; that designation is meant for trade organizations, and donations to them are not tax-deductible.

GoFundMe pages

Historically, the GoFundMe platform has been used to allow individuals or groups to crowdfund for specific events, rather than as a tool for nonprofits' official fundraising efforts. For example, a couple might put up a page to raise funds for a honeymoon, or a family might use it to pool money to celebrate grandma's 80th birthday. The site is, sad to say, now commonly used to attempt to raise funds for medical bills or groceries rather than for happy occasions. No doubt many LWN readers have been invited to contribute to something on GoFundMe since the platform was launched.

Apparently, though, the company decided to branch out beyond medical debt and try to persuade its users to funnel all of their charitable donations through the site. For that scheme to work, however, users have to be able to donate to charities through GoFundMe. Rather than waiting for nonprofits to join the platform voluntarily, the company seems to have decided to simply start taking donations on behalf of those organizations without consulting them first.

A recent story from ABC 7 News in California reported that GoFundMe had created pages for 1.4 million 501(c)(3) organizations using US Internal Revenue Service (IRS) public data. That includes many open-source nonprofits, such as the PSF, which currently has a page on the platform that sports a "verified" badge. It does not say what has been verified, but it is definitely not the PSF's approval of the page. Deb Nicholson, executive director of the PSF, said in an email that the foundation was aware of the page.

No money has been raised on our page, and we don't have any information about what they were going to do with any funds they raised or what kind of terms GFM [GoFundMe] is offering. If we aren't successful in gaining control of the PSF-branded page and figuring out what the intent is, then we'll have to look into contacting GFM more officially.

The PSF is not alone. There are pages for the FSF, SFC, the Open Source Initiative, and others that all seem to have been automatically generated. A search for "software foundation" on the site turns up 90 matches, many open-source and free-software organizations among them.

One might think that GoFundMe would not accept donations through the placeholder pages until they are claimed by nonprofits. That isn't the case, though. I was able to make a $5 donation through the FSF's page; it seemed unlikely that the FSF would choose to use a proprietary platform to raise money, so I emailed the FSF to see if the organization was aware of the page. Greg Farough, campaigns manager for the FSF, said:

We've been in contact with them for a while now. We filed a complaint with them when we first noticed the page. They told us they would investigate the issue. Several weeks later, we followed up again, but have received no other response from their team yet.

[...] We're obviously not comfortable with being listed on the site.

We also see that the site states that the "Free Software Foundation has not provided consent or permission for this page." We can 100% verify that part.

He did note that he could see my $5 donation on the page, and would let me know if it went through.

GoFundMe takes a cut of donations for processing fees, and often tries to solicit a "tip" from users when they make donations through the platform. It's unclear how the platform would justify imposing those fees when nonprofits have not agreed to them ahead of time, though. The ABC 7 News story reports that donors were asked for a 14.5% or 16.5% tip when using the site. However, that was not my experience—there was no tip suggested, and no field to enter one when I made the FSF donation. It has suggested one when I have donated to other pages in the past, however.

It is surprising to see a company using the tactic of adding nonprofits without their consent; last year, Grubhub paid $25 million to settle charges from the Federal Trade Commission that stemmed, in part, from "unfairly and deceptively listing restaurants on its platform without their permission". One would have thought that would discourage other companies from doing the same.

Reversal

I emailed GoFundMe with questions on October 23; the company responded a few hours later with its statement that had just been published on LinkedIn. According to the post, the company will be making the pages opt-in and deleting the pages that have not been "claimed and verified":

Unclaimed Nonprofit Pages will be de-indexed: We will remove and de-index the Nonprofit Pages that are not claimed so they no longer appear in search engine results. Once a nonprofit opts in, they can choose to index their Nonprofit Page, turn SEO [search-engine optimization] on, and edit their Nonprofit Page.

However, the company did not address my questions directly, and has not responded to my follow-up email. I had asked whether donors would be able to get refunds for donations to "unclaimed" pages, and what the plans were to disburse funds to organizations that had not agreed to GoFundMe's fees or terms. Finally, I asked what would happen if funds were not claimed: would they be refunded to donors at some point, or would GoFundMe keep them? All of those questions go unaddressed by the statement.

For now, anyone who is responsible for fundraising or managing a budget for an open-source nonprofit in the US may wish to check to see if there is a page for the organization. It may be desirable to claim the page—certainly many people use GoFundMe, and it could bring in additional funds—or to request its deletion. It would be unfortunate if money meant to go toward open-source development was languishing unclaimed in GoFundMe's coffers.

Comments (6 posted)

Safer speculation-free user-space access

By Jonathan Corbet
October 23, 2025
The Spectre class of hardware vulnerabilities truly is a gift that keeps on giving. New variants are still being discovered in current CPUs nearly eight years after the disclosure of this problem, and developers are still working to minimize the performance costs that come from defending against it. The masked user-space access mechanism is a case in point: it reduces the cost of defending against some speculative attacks, but it brought some challenges of its own that are only now being addressed.

The Spectre vulnerabilities can be used to exfiltrate data from the kernel in a number of ways, but the attacks usually come down to exercising a kernel path that will speculatively execute with an attacker-provided address, leaving traces of the target data that can then be recovered via a side channel. One of the most common ways to defeat such attacks is to simply prevent speculative execution of some code; it is effective, but also expensive.

Defending user-space access

One common target for speculative attacks is accesses to user space by the kernel, since the address in question is often controlled by user space. Since the tests for the validity of an address nearly always succeed, speculative execution tends to take the "address is valid" path, even when the address is anything but. The functions used by most of the kernel for user-space access (such as copy_from_user()) are well defended, but the kernel has a number of places where faster access is required for acceptable performance. This can especially be a concern when multiple accesses to user space are required. Code in such situations tends to use a pattern like this one from the 6.10 implementation of the select() system call, which only incurs the cost for the speculation defense once but performs two reads:

        if (from) {
            if (!user_read_access_begin(from, sizeof(*from)))
                return -EFAULT;
            unsafe_get_user(to->p, &from->p, Efault);
            unsafe_get_user(to->size, &from->size, Efault);
            user_read_access_end();
        }
        return 0;
    Efault:
        user_access_end();
        return -EFAULT;

The user_read_access_begin() call is implemented as a chain of macros that ultimately does two things: enabling user-space access with a STAC instruction, and blocking speculation with an LFENCE instruction. The unsafe_get_user() macros, which include a jump to Efault on error, can then be used to access the relevant data. Finally, user_read_access_end() and user_access_end() both boil down to a CLAC instruction to re-enable supervisor-mode access prevention (SMAP), an important step that, if forgotten, can leave the kernel open to other attacks. The STAC/CLAC pair is unavoidable, but it would be nice to do away with the costly LFENCE if possible.

Defense without fences

The first commit in the 6.11 merge window was this change from Linus Torvalds adding a new mechanism that he called "user address masking". It uses a relatively simple trick to avoid the LFENCE instruction, ensuring that any attempt at kernel-space access with a supposedly user-space address will fail. There were two new macros:

    #define mask_user_address(x) ((typeof(x))((long)(x)|((long)(x)>>63)))
    #define masked_user_access_begin(x) ({ __uaccess_begin(); mask_user_address(x); })

Passing a pointer to mask_user_address() performs a bitwise OR of the address with a version of itself arithmetically right-shifted by 63 bits. The sign extension performed by the x86 CPU means that, if the address is in kernel space (the topmost bit is one), the resulting address will be all ones, which is not valid. Any speculation involving a kernel-space address will, as a result, fail on the invalid access. Since exploitable speculation can no longer happen, there is no longer any need for the LFENCE instruction.

(For the curious, the implementation of these macros was changed in 6.14, making them quite different from the original in current kernels; amusingly, they no longer involve masking. The end result is the same, though, and the "masked access" term is still used.)

Masked access can accelerate performance-sensitive operations, but it has a small disadvantage: it is not supported by all architectures, so code that uses this feature must be prepared to fall back to the previous method where masked access is unavailable. As a result, in 6.17, the select() code shown above is written as:

        if (from) {
            if (can_do_masked_user_access())
                from = masked_user_access_begin(from);
            else if (!user_read_access_begin(from, sizeof(*from)))
                return -EFAULT;
            unsafe_get_user(to->p, &from->p, Efault);
            unsafe_get_user(to->size, &from->size, Efault);
            user_read_access_end();
        }
        return 0;
    Efault:
        user_access_end();
        return -EFAULT;

The code is faster, but has also become more complex.

Using scopes

As Thomas Gleixner pointed out in this patch series, all that code to read two user-space values is just the sort of "tedious" boilerplate that offers numerous opportunities for security-critical mistakes. As the use of the masked-access primitives grows over time, the chances of introducing new bugs will grow as well. He set out to improve this pattern using the kernel's scoped primitives to ensure that the proper cleanup is done once the access is complete. The result in the current version of the series is three new macros:

    scoped_user_read_access(address, label)
    scoped_user_write_access(address, label)
    scoped_user_rw_access(address, label)

Each of these starts a new block and speculation-proofs the given address, inserting a jump to the specified label in the case of an access violation. Using these macros, the select() code can now look like:

        if (from) {
            scoped_user_read_access(from, Efault) {
                unsafe_get_user(to->p, &from->p, Efault);
                unsafe_get_user(to->size, &from->size, Efault);
            }
        }
        return 0;
    Efault:
        return -EFAULT;

The end result is clearly simpler and less prone to the sorts of mistakes that developers are likely to make. The need for explicit cleanup code, in particular, has been completely removed.

This work is in its third revision; aside from some relatively minor comments, it appears to have reached general approval, and it seems a likely candidate for the 6.19 merge window. This work may affect a relatively obscure corner of the kernel that few developers will see directly, but it is a good example of the ongoing effort to make kernel development a bit less error-prone. Moving away from C is not in the cards for a long time, so the next best thing is to make working in C safer.

Comments (7 posted)

BPF signing LSM hook change rejected

By Daroc Alden
October 27, 2025

BPF lets users load programs into a running kernel. Even though BPF programs are checked by the verifier to ensure that they stay inside certain limits, some users would still like to ensure that only approved BPF programs are loaded. KP Singh's patches adding that capability to the kernel were accepted in version 6.18, but not everyone is satisfied with his implementation. Blaise Boscaccy, who has been working to get a version of BPF code signing with better auditability into the kernel for some time, posted a patch set on top of Singh's changes that alters the loading process to not invoke security module hooks until the entire loading process is complete. The discussion on the patch set is the continuation of a long-running disagreement over the interface for signed BPF programs.

One might hope that signing BPF programs would just be a matter of attaching a signature to the program, and then checking that signature. Alas, things are a bit more complicated. BPF uses "compile once — run everywhere" (CO-RE) relocations to let compiled programs run on multiple different kernel versions. Thus, the version of the BPF program on disk is not exactly the same as the version presented to the kernel for loading, which invalidates any signatures on the BPF binary.

Singh's patch set solves this problem by using a two-step process: first, user space loads a specialized BPF program, called a loader, that does not require relocations (and so can have its signature checked directly by the kernel). Then, the loader program verifies that the real program matches a hash stored in the loader. That hash covers the code of the real program (as well as some BPF maps containing configuration), so a correctly implemented loader won't load a program that has been tampered with. This design has the benefit of presenting a relatively minimal user-space interface, but moving part of the program-verification process out of the kernel proper and into BPF code is a potential downside.

Boscaccy's patch set adds support for verifying a BPF program's initial maps (including the instructions of the real program) alongside the loader program. That would simplify the loader program, because it no longer needs to check a hash of the maps, but it also gives Linux security modules (LSMs) more information about the loaded program. Specifically, it lets LSMs see whether the loader program actually completes successfully. In his cover letter, Boscaccy says: "This approach addresses concerns from users who require strict audit trails and verification guarantees, especially in security-sensitive environments."

This is not a new proposal. Boscaccy spoke at this year's Linux, Filesystem, Memory-Management, and BPF Summit about his attempts to put together a solution for BPF program signing that works for that use case. Since then, he has posted a number of patch sets implementing the same basic idea in different ways, none of which have been merged. BPF signing in general has been a topic of active discussion and development since Alexei Starovoitov (a maintainer of the BPF subsystem) introduced the concept of "light skeletons" in 2021 as an initial step toward this kind of program loader.

Paul Moore, the LSM maintainer, wrote in support of Boscaccy's most recent patch set, noting that it does not prevent users from using Singh's signature support if they want to. Instead, it adds a separate, compatible signature scheme for users who want the additional guarantees. In his view, Boscaccy's patch sets have not been given the due attention and review that they need, in favor of finalizing a solution that does not meet everyone's needs. He specifically called for Linus Torvalds to comment on the situation around Boscaccy's patch set, which Torvalds has not done.

Singh replied that the lack of maintainer engagement with Boscaccy's patch sets has been because Boscaccy has repeatedly ignored feedback from maintainers. Singh called Boscaccy's approach "broken", and opined that having multiple signature schemes would not provide a good user experience. He also linked to part of the discussion of his patch set where he had attempted to refute Boscaccy's concerns.

You keep mentioning having visibility in the LSM code and I again ask, to implement what specific security policy and there is no clear answer? On a system where you would like to only allow signed BPF programs, you can purely deny any programs where the signature is not provided and this can be implemented today.

Moore disagreed with that assessment, saying that Boscaccy's several patch sets have all taken different approaches because of feedback he received from reviewers. He agreed that, in a perfect world, there would be only a single BPF signature scheme, but if there are users who need the capabilities offered by Boscaccy's patch set, then their use cases should be enabled. Singh asked again for information on the specific use case that requires Boscaccy's patch set. Moore replied that there was not one specific use case that he had in mind. Rather, it is important to let users customize their policies with LSMs, which is only possible if the signature verification takes place before the LSM hook is called; otherwise, it is possible for the process to fail after the LSM has already recorded a "success".

Singh reiterated his position, which has remained unchanged since he began work on BPF signing: since the loader is trusted, and it verifies the hash of the full program before loading it, verifying the signature on the loader is fully equivalent to verifying a signature made across every component. In fact, the cover letter for Singh's patch set explains that the loader programs are generated by libbpf, which is maintained as part of the kernel source, by the BPF maintainers — so if anyone felt unable to trust the verification code generated by libbpf, they would have bigger problems.

James Bottomley said that the LSM issue isn't about signing, per se. It's "full determination that all the integrity conditions [the LSM] is imposing are satisfied by the time the hook is called." Singh did not reply, however; he and Moore agreed to disagree, and the latter asked Starovoitov whether he intended to take Boscaccy's patch set or not.

Starovoitov won't. Neither Bottomley nor Moore understands what Boscaccy's patch set actually does, he said. Starovoitov does understand the worries about asking LSMs to make decisions before the program-loading process finishes, but said that the same is true of the kernel-module loading process. Both signed kernel modules and signed BPF programs need to have trusted build systems, or the signing is pointless.

Bottomley disagreed with that comparison; the ELF loader for kernel modules is built into the kernel. The loader programs for signed BPF programs are generated by libbpf, which is maintained in the kernel source tree, but they're not part of the kernel itself.

Integrity checking is not complete until the integrity of both has been verified. If you sign only the loader and embed the hash of the program into the loader that is a different way of doing things, but the integrity check is not complete until the loader does the hash verification which, as has been stated many times before, is after the load LSM hook has run.

Starovoitov did not believe that was true. In his opinion, the kernel's role in integrity checking is done as soon as the loader has been verified. He likened Singh's approach to a self-extracting zip archive: the actual cryptographic signature covers the whole archive, and so it doesn't really matter that the archive contains code that will be executed to create new, unzipped files that aren't covered by the signature. In the same way, once the kernel has verified the signature on the loader, there is no more signature verification from the kernel needed, since the trusted (and now verified) loader will handle the last steps.

Bottomley and Starovoitov failed to reach a conclusion; neither was convinced by the other's position. Moore continued to push Boscaccy's patch, but Starovoitov asked him to stop. With a working solution merged in the kernel, and Starovoitov strongly opposed to Boscaccy's approach, it seems unlikely that any future attempts with this approach from Boscaccy will be considered, although Boscaccy may keep trying. Whether the debate will end here, or will ultimately require a pronouncement from Torvalds to resolve, remains to be seen.

Comments (11 posted)

Page editor: Joe Brockmeier

Inside this week's LWN.net Weekly Edition

  • Briefs: Man pages 6.16; Btrfs on AlmaLinux; Fedora Linux 43; ICANN report; PSF grants; Rust Coreutils 0.3.0; Tor Browser 15.0; Quotes; ...
  • Announcements: Newsletters, conferences, security updates, patches, and more.
Next page: Brief items>>

Copyright © 2025, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds