LWN.net Weekly Edition for March 6, 2025
Welcome to the LWN.net Weekly Edition for March 6, 2025
This edition contains the following feature content:
- A look at Firefox forks: there are a number of projects trying to create more friendly versions of the Firefox web browser.
- Two new graph-based functional programming languages: Bend and Vine try to bring new life to a once-promising language design.
- A hole in FineIBT protection: researchers find a way around Linux's use of Intel's control-flow-integrity mechanisms; kernel developers close the hole with a clever workaround.
- Guard pages for file-backed memory: a new hardening feature threatens to break some applications.
- Fedora discusses Flatpak priorities: a disagreement over application packaging boils over.
- A look at the Zotero reference management tool: keep your bibliographical database under control.
This week's edition also includes these inner pages:
- Brief items: Brief news items from throughout the community.
- Announcements: Newsletters, conferences, security updates, patches, and more.
Please enjoy this week's edition, and, as always, thank you for supporting LWN.net.
A look at Firefox forks
Mozilla's actions have been rubbing many Firefox fans the wrong way as of late, and inspiring them to look for alternatives. There are many choices for users who are looking for a browser that isn't part of the Chrome monoculture but is full-featured and suitable for day-to-day use. For those who are willing to stay in the Firefox "family" there are a number of good options that have taken vastly different approaches. Those options include GNU IceCat, Floorp, LibreWolf, and Zen.
How we got here
Mozilla has been disappointing a lot of Firefox users for years, but it seems the pace is accelerating. Its announcement on February 19 that it needs to "diversify" beyond Firefox did not inspire confidence, and it annoyed many who would like to see Mozilla go all-in on its flagship browser (and increase its market share) rather than chasing AI or dabbling in advertising. But a recent and more alarming example is its introduction of terms of use for the browser and the removal of its pledge not to sell users' personal data. Though it has backpedaled somewhat since, and rewritten its terms of use, the damage has been done.
Firefox forking is hardly a new phenomenon. Debian began maintaining forks of Mozilla applications with minimal changes but different names due to conflicts between the Debian Free Software Guidelines and Mozilla's trademark-usage policy. (LWN covered this in 2005.) The era of Iceweasel, Debian's brand name for Firefox, came to an end in 2016. Note that the name Iceweasel is not merely a play on the name "Firefox"; its origin is one of Matt Groening's Life in Hell comic strips (here), which contained a fictional quote attributed to Friedrich Nietzsche.
Love is a snowmobile racing across the tundra and then suddenly it flips over, pinning you underneath. At night, the ice weasels come.
The GNU project also adopted the name IceWeasel for the GNUzilla project—basically Mozilla source code with any non-free code, such as the Adobe Flash Player, stripped out. In 2007, Karl Berry announced that GNUzilla would be adopting the name IceCat for its version "not because we have anything against weasels" but to avoid confusion with Debian's version.
GNU IceCat
IceCat has the distinction of being the oldest Firefox fork still in development. Ray Dryden applied for GNUzilla to become part of the GNU Project in August 2005, and test releases based on Firefox 1.5.0 were available later that year. IceCat, as with all of the forks covered in this article, is available under the Mozilla Public License (MPL) 2.0. However, the scripts and other tools used to create an IceCat release from Firefox are licensed under the GPLv3.
GNUzilla does not distribute binaries of IceCat. The project recommends using GNU Guix to install IceCat on x86_64 Linux systems, and also makes its scripts available in its Git repository to compile IceCat from Firefox's extended-support releases (ESRs). It may, however, also be packaged for a user's favorite Linux distribution. Fedora 41, for example, currently has IceCat 115.20.0esr—which is based on Firefox 115.20.0; both were released on February 4.
Current-day IceCat has several changes that distinguish it from Firefox. The most immediately obvious is its use of the LibreJS add-on to block "nonfree nontrivial JavaScript while allowing JavaScript that is free and/or trivial". In practice, this means that a significant number of sites will not work unless the user adds exceptions for the JavaScript used by the site. Users can choose to add exceptions for individual scripts blocked by LibreJS or to add an exception for the entire site. Even LWN, which uses a minimal amount of JavaScript, has scripts that are blocked by LibreJS.
IceCat includes the JShelter extension, which attempts to block not just malware, but browser fingerprinting and user tracking as well. It modifies the JavaScript environment that is available to web pages to try to confuse fingerprinters and make it more difficult to carry out attacks using JavaScript. It may block APIs or return fake values to thwart these attempts. Like LibreJS, it can be modified or turned off entirely for specific sites. There is a paper from 2022 that explains the extension's approach in great detail, and an extensive FAQ that may be of use in troubleshooting interactions between JShelter and web sites.
In a similar vein, IceCat includes a fork of the Third-party Request Blocker extension that (as the name implies) blocks connections to third-party resources without user consent. It is a little concerning that the page describing the extension describes it as "seemingly maintained by 'sw'", and its last update was in March 2020. The home page listed for the extension is no longer available. Despite the lag in development, it still seems to be working and blocking plenty of third-party requests. A visit to a site like The Guardian, for instance, shows seven sites blocked. As the screenshot shows, site layout and images are often affected by IceCat's default settings. Usually the sites are still usable, but far less aesthetically pleasing.
One thing that worked well for me was to enable just enough to see the page text and then use the reader view to read a site's articles or other content. (Sadly, none of the forks offer a "browse everything in reader view by default" option.)
In all, IceCat ships with eight extensions that attempt to enhance user privacy, block non-free software, or unbreak sites that are affected by its other extensions. It includes a "LibreJS/USPS compatibility" plugin to offer an alternative shipping calculator for the US Postal Service site as well as an extension to replace JavaScript blocked by LibreJS on the Library Genesis sites.
The project has an extension-finder service called Mozzarella, which (of course) only lists extensions that are free software. However, the extensions may be outdated compared to their counterparts listed in Firefox's add-on catalog. For example, the Privacy Badger extension in the Mozzarella catalog was last updated in June 2023. The Firefox catalog version was last updated on January 29, 2025.
Right now, three people are listed as maintainers for GNUzilla: Ruben Rodriguez, Amin Bandali, and Mark H. Weaver. The development mailing lists are a bit on the quiet side. The most recent archived conversation on the gnuzilla-dev list is from August 2024. The bug-gnuzilla list is a little more lively—its last activity was in December 2024.
IceCat is probably a good choice for folks who are more concerned with the free software ethos and privacy than with functionality.
Floorp
The Floorp project is a much newer entrant. It is developed by Ablaze, a community of Japanese students. Development is hosted on GitHub, and the project solicits donations through GitHub. According to its donations page, donors who contribute at the $100 level may submit ads to be featured on the new tab page—but the ads, which are displayed as shortcuts with a "sponsored" label, can be turned off in the settings. I've been unable to find any information about the governance or legal structure of Ablaze.
Its contributors page lists seven primary maintainers and 39 code contributors, as well as many people who have contributed to its language packs and translations, or who maintain packages. Floorp does not offer native packages for Linux distributions, but it does provide a Flatpak via Flathub and precompiled releases for x86_64 and ARM64.
Originally Floorp was based on Chromium but switched to Firefox in early 2022. The first Firefox-derived version was Floorp v7 (announcement in Japanese), and it was based on the Firefox rapid releases, but the project switched to the ESR releases as its base with v8. The most recent release, version 11.23.1, was announced on February 15, and is based on (according to about:config) the Firefox ESR 128.8.0 release, which came out on March 4. It would be nice if the project were more explicit in its release notes about which version of Firefox a release was based on. This is not merely for curiosity's sake—it would help users track whether Floorp was receiving the most recent security updates. The project has said that it plans to move back to the rapid release versions of Firefox with v12, which is currently in beta.
The project promises "strong tracking protection" and that it does not track users or have any affiliation with advertising companies. However, the project does not give details on how its tracking protection differs from Firefox's. It still uses Google as its default search engine and includes the Firefox browser sync feature. It also uses Mozilla's add-ons repository, and should work with most Firefox add-ons that are compatible with the corresponding Firefox version.
Floorp does have a number of interesting features and enhancements that may tempt users. It has a dual-sidebar layout that allows users to access bookmarks, history, and other tools on the left-hand side, while the right-hand side hosts the Web Apps panel. Users can add web sites to open in the Web Apps panel, which can be useful when (for example) doing research for an article while keeping a draft of the article open in the panel.
In addition to the Apps panel, Floorp has a split-view feature that lets users open two pages side-by-side by selecting a tab and clicking "Split this Tab". Each split has its own history and URL bar. Floorp's layout is great for wide-screen monitors, and I like the ability to open sites in split view rather than juggling multiple browser windows.
Another interesting inclusion in Floorp is its Workspace feature. This allows users to group tabs by categories like "work", "comics", "shopping", or whatever makes sense for the users' browsing habits. I've found this useful for working on projects and stories for LWN—I might have a dozen tabs open for a specific story, which I can group into a single workspace. Workspaces can also be assigned to Firefox's multi-account containers. For example, a user might want to log in to the same site using different accounts—without having to sign in and out repeatedly. Combining the workspace and multi-account containers can be useful in a number of scenarios.
Firefox's tabs have seen little feature advancement in the past few years. Floorp adds a few much-needed enhancements here: users can move the tab bar to the bottom of the window, use a multi-row tab bar, or even switch to a vertical tab bar. However, Floorp's implementation of the vertical tab bar will go away in v12, now that Mozilla has finally added vertical tabs in Firefox 136.0.
Overall, Floorp is an interesting project with some nice enhancements to the Firefox UI. However, the development roadmap seems a bit more haphazard than I would like—switching back and forth between Firefox rapid release and ESRs, for example. That may not dissuade other folks, though.
LibreWolf
The LibreWolf project got its start in 2020. Its focus is primarily on privacy, security, and the removal of "anti-freedom" features, such as telemetry and DRM, from Firefox. It lists seven core contributors on its home page and points to its Matrix room for development discussions. Its development is hosted on Codeberg.
LibreWolf is available in the Arch User Repository (AUR) for Arch Linux users; and the project offers its own package repositories for Debian-based distributions and for Fedora. It recommends its Flatpak packages for most other distributions. The most recent version of LibreWolf is 135.0.1, which was a minor update based on Firefox 135. The first LibreWolf 135.0 release came out on February 9, about five days after the upstream Firefox version.
LibreWolf has the normal configuration options one would expect for a Firefox fork, but it also has the option of using a special configuration file called librewolf.overrides.cfg to set preferences that can take effect across multiple profiles rather than having to tweak the configuration for each profile. It also makes preferences easy to back up and move to a new machine. The documentation explains where to find this file, depending on the installation method, and offers several suggestions for possible preference changes.
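A sketch of what such a file might contain follows. The file uses Firefox's autoconfig syntax; the pref names here are standard Firefox preferences, but the specific choices are illustrative rather than recommendations from the LibreWolf project:

```js
// librewolf.overrides.cfg — autoconfig syntax; the file's location
// depends on the installation method (see the LibreWolf documentation).

// Re-enable Firefox Sync, which LibreWolf disables by default:
defaultPref("identity.fxaccounts.enabled", true);

// Keep browsing history across sessions:
defaultPref("privacy.clearOnShutdown.history", false);
```

Because the overrides file is applied on startup, these settings take effect across all profiles without per-profile tweaking.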
LibreWolf is mostly notable for what it doesn't have rather than what it does. That is, it removes other features from Firefox that have not been well-received by many users such as Pocket integration, telemetry, and more. Firefox Sync is disabled by default but it can be enabled in settings.
LibreWolf does include the uBlock Origin ad-blocker add-on as part of its standard installation. It should be noted that uBlock Origin is being disabled for Chrome users as Google phases out support for Manifest V2 in favor of Manifest V3, which curtails features that uBlock Origin and other add-ons require to function. To its credit, Mozilla has committed to continuing its support for Manifest V2 alongside V3. LWN covered Manifest V3 and its impact on content blockers in 2021.
For the most part, users would be hard-pressed to spot many differences between LibreWolf and Firefox at first (or second) glance, so a screen shot of LibreWolf seemed a bit unnecessary. That approach is likely to appeal to many users who are uneasy with things like telemetry and Pocket, but don't want an entirely new browsing experience.
Zen
The Zen browser project is the most recent entrant. Its development began last year with an announcement on Reddit. It is currently in beta, with its most recent version, 1.8.2b based on Firefox 135.0.1, released on February 25. Kudos to the Zen project, by the way, for proudly including the Firefox version alongside the project version in its "About" dialog—information that literally every other Firefox fork seems intent on hiding. Zen lists 12 people in the main project team, and about 90 contributors to the browser. Development for Zen is hosted on GitHub, and discussion takes place on Discord (link goes to a Discord invitation).
Like Floorp, the project solicits donations to assist with development, but little information seems to be available about its governance or structure to provide transparency about how the money is spent.
Unlike the other forks, it is not immediately obvious that Zen is an offshoot of Firefox. It does not look at all like the standard Firefox interface, even before users start customizing it. Even Floorp, which allows significant customization, still bears some resemblance to Firefox on first use. Zen sports a tab sidebar on the left that blends the Workspace concept from Floorp and vertical tabs, with a set of default bookmarks ("Essentials") as icons at the top. The browser menu is located in the top-left corner, indicated by a button with three dots. The window title bar is hidden and only appears if a user hovers the mouse at the top of the window for a few seconds.
While Zen looks modern and interesting, its sleek user interface and configurability come at the cost of intuitive usability in some cases. For example, one might expect that setting Zen to light mode in the "Language and Appearance" settings would also change the browser's interface to light mode. It does not, as shown in the screenshot. Instead, a user has to go to the "Add-ons and Themes" settings to select a light theme. It would help a great deal if Zen's user guide were more complete, but it has only a little documentation to offer at the moment. To be fair, Zen is still in beta, so the situation may be much improved by the time the browser has its first stable release. For now, users will need to be ready to dig through Reddit and other forums for tips.
Features like glance, which pre-fetches a link and shows a preview of it before opening it in a new tab or window, are useful but not at all obvious to use, even if one is aware that the feature exists. (On Linux, activate glance with Alt+click.) Likewise, Zen's split-screen mode requires the user to select multiple tabs and then right-click to choose "Split Tab". Rearranging the splits is also not intuitive. That said, the additional features are compelling if one is willing to do some searching to figure things out.
The Zen interface can be customized extensively to suit individual tastes via the settings. If those options aren't enough, Zen has its own set of add-ons and extensions called Mods to modify the interface or add features. These range from a green-hued theme called Matcha to tweaks that further minimize the sidebar. Most Firefox add-ons should work with Zen as well, though some may clash with its user-interface changes.
Currently, Zen isn't fully baked enough for me to consider switching to it. Others may be more adventurous in their browsing habits than I am, though. I can say that it has stabilized significantly since I first tried it shortly after its first public release. The project does bear keeping an eye on, and the Mozilla folks could do worse than to copy some of the ideas (and code) that the project is experimenting with.
Others
The Firefox fork rabbit hole is surprisingly deep. There are a few alternatives I chose not to try—but mention here for completeness—and probably a few that I've missed. The Basilisk project is a kind of retro-Firefox project that aims to retain technologies that Firefox has removed. This includes the deprecated Netscape Plugin Application Programming Interface (NPAPI) plugin support, ALSA support on Linux, XUL extensions, and more.
Waterfox is a browser that began in 2011 as an independent project by Alex Kontos while he was a student. It was acquired and then un-acquired by Internet-advertising company System1. Its site does not, at least at the moment, have enough specifics about the browser's differences and features to compel me to take it for a test drive.
The Pale Moon project is another browser that has forked off of Mozilla Firefox code and no longer tracks it directly. It uses the Goanna fork of the Gecko rendering engine and still supports NPAPI plugins and XUL extensions. The project promises no telemetry or data gathering. It offers a somewhat nostalgic look and feel that is similar to Firefox in the mid-2000s.
For those who pine for the days of the Netscape suite that included the browser, mail client, HTML editor, IRC chat, and more, there is SeaMonkey. The project uses code from Firefox and Thunderbird, though it is not directly based on recent versions. According to its site, it backports security fixes from Firefox and Thunderbird ESRs that apply to SeaMonkey. The project also maintains the Composer HTML editor and ChatZilla IRC client that are no longer maintained by Mozilla. SeaMonkey is still packaged for a number of Linux distributions, and binaries are available for Linux on x86_64 and x86 as a tarball. It might be a good option for users who are still using 32-bit x86 Linux systems.
Still dependent on Mozilla
Regardless of which Firefox fork one chooses, it is important to remember the downsides. First and foremost, all of the forks are dependent on Mozilla to do the heavy lifting. The bulk of development is carried by Mozilla, the direction of Firefox is set by Mozilla, and choosing to run a fork puts the user one step removed from security and bug fixes. This does not mean users shouldn't consider one of these forks, but they should be aware of the potential downsides.
There is some precedent for soft forks displacing the original upstream. For example, the Go-oo fork of OpenOffice.org became LibreOffice after Oracle consumed Sun. That fork has clearly overtaken OpenOffice.org in the Linux community as the go-to desktop office suite and its development has eclipsed that of its counterpart Apache OpenOffice. Go-oo, of course, had corporate support as well as community support. For a Firefox fork to be truly independent and sustainable, it would need a similar effort behind it. Thus far, no such movement has materialized.
A recent question on the LibreWolf issue tracker drives that point home nicely. User "kallisti5" asked if LibreWolf was prepared to fork Firefox "if Mozilla continues farther down this path?" One of LibreWolf's contributors, "ohfp", replied that the project was "absolutely not prepared to do that" due to limited time and energy to work on the project as it is. "We would not even remotely be able to fork and maintain a browser fully, let alone to continually develop and improve it."
Another downside to the forks is that there are far fewer eyes on their code and communities. When Mozilla makes an important move, whether it's positive or negative, users are likely to hear about it quickly. As of now, the forks get relatively little attention.
Folks who want to jump ship from Mozilla's ecosystem entirely, while still sticking to open source, have some options. Ladybird, which LWN covered in June last year, is an attempt to create a new browser from whole cloth. It is an interesting effort, but not ready for day-to-day use for most folks. Qutebrowser, Nyxt, and NetSurf are also worth a look—though they may have some drawbacks for day-to-day use in terms of site compatibility and features. We will take a look at some of those options soon.
Two new graph-based functional programming languages
Functional programming languages have a long association with graphs. In the 1990s, it was even thought that parallel graph-reduction architectures could make functional programming languages much faster than their imperative counterparts. Alas, that prediction mostly failed to materialize. Even though graphs are still used as a theoretical formalism to define and optimize functional languages (such as Haskell's spineless tagless G-machine), they are still mostly compiled down to the same old non-parallel assembly code that every other language uses. Now, two projects — Bend and Vine — have sprung up attempting to change that, and prove that parallel graph reduction can be a useful technique for real programs.
Background
A graph is a fairly general data structure consisting of nodes (which, for example, might represent commits, filesystem objects, or network routers), connected by a set of edges (which might represent relationships between commits, directory entries, or network cables). Graphs are frequently used to represent interconnected data of all kinds.
In the case of graph-reduction-based functional programming, nodes represent the data and functions of a program, and the edges represent the dataflow between those parts. For example, a particular function application (function call) might be represented by a node that is connected to the function's arguments, its definition (represented by a function abstraction node), and where the return value is used. The program is executed by incrementally rewriting the graph into a simpler and simpler form, until all that's left is the result. Andrea Asperti and Stefano Guerrini's book, The Optimal Implementation of Functional Programming Languages, gives this example of what part of the rewrite rules for a version of the lambda calculus might look like:
That diagram is a graphical representation of a rule that rewrites function application nodes (written with an @) and function abstraction nodes (written with a λ) when they come in contact. The graph on the left represents the configuration that the rule matches, and the graph on the right is what the rule rewrites it into. The connections are labeled with letters to make the rewrite easier to see, but the actual rule itself just needs to act on these two nodes whenever they become connected. The idea behind the rule is to attach the argument being given to the function (d) to the place in the function that references its argument (b). Then the result of the application (a) is connected to the final value of the body of the function (c).
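The rule can also be sketched in code. The following Python model is my own illustration, not taken from any real implementation: a port is a (node, name) pair, wires pair ports symmetrically, and the rewrite reconnects the four free edges exactly as described, touching nothing outside the two nodes.

```python
# A port is a (node, port-name) pair; `wires` maps each port to its
# peer, so the dictionary is a symmetric representation of the graph.
def connect(wires, p, q):
    wires[p] = q
    wires[q] = p

def beta_rewrite(wires, app, lam):
    """Apply the @/λ rule: when an application node's function port
    meets an abstraction node's principal port, wire the application's
    result (a) to the function body (c) and the argument (d) to the
    binder (b), then discard both nodes."""
    a = wires[(app, "result")]
    d = wires[(app, "arg")]
    b = wires[(lam, "binder")]
    c = wires[(lam, "body")]
    for port in [(app, "result"), (app, "arg"), (app, "func"),
                 (lam, "principal"), (lam, "binder"), (lam, "body")]:
        del wires[port]
    connect(wires, a, c)   # result of the application = value of the body
    connect(wires, d, b)   # argument flows to where the binder is used

# Build the configuration on the left of the rule: application node "A"
# whose function port meets abstraction "L"; the four free edges
# a, b, c, d are stand-in external ports.
wires = {}
connect(wires, ("A", "result"), ("a", "ext"))
connect(wires, ("A", "arg"), ("d", "ext"))
connect(wires, ("A", "func"), ("L", "principal"))
connect(wires, ("L", "binder"), ("b", "ext"))
connect(wires, ("L", "body"), ("c", "ext"))

beta_rewrite(wires, "A", "L")
print(wires[("a", "ext")])  # ('c', 'ext'): result now wired to the body
print(wires[("d", "ext")])  # ('b', 'ext'): argument now wired to the binder
```

Note that the rewrite reads and writes only the two nodes' own wires, which is what makes it safe to apply many such rules concurrently.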
Through a series of small rewrite rules like this, a graph machine can implement every operation in a program. Since the nodes are all directly connected to the data that they need to reference during a rewrite, there is no global synchronization needed at run time. Because these rules are local, they can be applied in parallel with no locking, automatically scaling the evaluation of the program across as many cores as are available. In some respects, the approach is similar to cellular automata — a method of computation based on a dynamic, evolving structure rather than on a static sequence of instructions.
The major advantages of this approach are parallelism and simplicity; the evaluation of the program depends on a handful of simple rewrite rules that can be simultaneously applied across the entire program. As processors get faster, the execution time of programs is increasingly defined by the latency of retrieving information from memory. In theory, structuring a computation so that it can apply one rule at a time to every node in the program could help with that, by ensuring that the CPU needs to fetch less context for each operation, and that operations can be located near the data they require.
The major disadvantage is that computers don't work like that. In practice, CPUs excel at applying complex operations to small amounts of data that are locally held in the cache, and struggle with operations that require operating over large blocks of memory. Evaluating a graph-reduction-based language is a pathological case for modern CPUs: simple rules, the evaluation of which requires fetching lots of data from memory and chasing chains of pointers. So the graph-reduction-based future of the 1990s didn't come to pass, and graphs are mostly only used in modern programming language implementations as an internal compiler representation. At the end of the compilation process, programs end up generating code in pretty much the same way regardless of language, although the details can vary.
In recent years, however, hardware trends have been shifting. GPUs are designed to apply many simple operations across a large piece of memory, although there are still problems with pointer-chasing that make implementing graph reduction difficult. Modern computers also sport many CPU cores, which makes the idea of being able to automatically parallelize execution tempting. So some projects, at least, believe that the graph-reduction-based approach warrants another look.
Bend and the HVM
Bend is a new, Apache-2.0-licensed, programming language designed to scale automatically to thousands of threads. It was started by Victor Taelin, who has been experimenting with prototypes of graph-reduction-based languages for many years, and has now started the Higher Order Company in order to attempt to develop those ideas further. In 2024, he published a summary of his rationale for working with graph-based functional programming.
Programs written in Bend do not yet have competitive performance compared to existing functional programming languages like Haskell or OCaml, a circumstance that the language's documentation attributes to the fact that "our code gen is still embarrassingly bad, and there is a lot to improve". However, Bend already allows programs written in it to be run on a GPU and automatically parallelized, with measurable speedups from doing so.
The language does need to make a few concessions in order to make that possible. For example, a traditional garbage collector would be a single, centralized piece of code that would touch every value in the program — which would immediately destroy the value of the graph representation, by turning local rewrites into something with global consequences. So Bend's garbage collection is a bit unorthodox: every use of a variable points to an independent copy of its data. If a variable is used more than once, the data is copied so that each use gets its own copy. If part of a data structure is used in one place, and a different part is used in another, that still only counts as one use of the whole data structure, so it isn't copied. This design, with every use of a variable pointing to a separate copy of the data, allows for the memory to be garbage-collected as soon as the variable goes out of scope, with no other coordination.
Copying everything that needs to be used more than once is not good for performance in a traditional system, however. Bend partially circumvents that cost by performing copy operations lazily. So copying a data structure and passing it to a function that only uses part of it will only copy the parts that are actually used. Copying some data and then immediately freeing one of the copies by having its variable go out of scope is essentially free. In the future, better code generation should help with eliminating unnecessary copies.
Bend's syntax is inspired by Python, although it does have required type annotations and some features taken from Haskell. An implementation of merge sort looks like this:
    def merge_sort(l: List(f24)) -> List(f24):
      match l:
        # Here List is the type; Nil and Cons are possible values of the type.
        # Nil represents the empty list.
        case List/Nil:
          return List/Nil
        case List/Cons:
          match l.tail:
            case List/Nil:
              # Returning a new list here instead of just doing "return l"
              # avoids copying. Since l.head and l.tail are each used once, l
              # is only used once.
              return List/Cons(l.head, List/Nil)
            case List/Cons:
              (as, bs) = split(l)
              return merge_lists(merge_sort(as), merge_sort(bs))
That code demonstrates a few features of Bend; notably, that accessing data inside a structure like a list is done by pattern matching. If a list is zero elements long (that is, just a Nil value) or one element long (a Cons holding a single value and ending with a Nil), it's already sorted and we can just put the list back together and return it. The interesting case is when a list is longer than that: we split the list into two parts, sort each part independently, and then merge them back together. How the list is split doesn't matter, as long as the parts are roughly equal in size, but interested readers can see one possibility here. This is a naive translation of the definition of merge sort, but the fact that the two recursive merge_sort() calls are independent of each other is enough to let Bend evaluate them in parallel.
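The Bend example calls split() and merge_lists() helpers that are not shown. The merge step is the standard one for merge sort; as a rough illustration (in Python rather than Bend, and not the repository's actual code):

```python
def merge_lists(xs, ys):
    """Merge two already-sorted lists into one sorted list."""
    out = []
    i = j = 0
    # Repeatedly take the smaller head element of the two lists.
    while i < len(xs) and j < len(ys):
        if xs[i] <= ys[j]:
            out.append(xs[i])
            i += 1
        else:
            out.append(ys[j])
            j += 1
    # One list is exhausted; append whatever remains of the other.
    out.extend(xs[i:])
    out.extend(ys[j:])
    return out

print(merge_lists([1, 3, 5], [2, 4]))  # [1, 2, 3, 4, 5]
```

In Bend, the same logic would be written with pattern matching and recursion, and it is the two independent merge_sort() calls — not the merge itself — that the runtime evaluates in parallel.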
Bend compiles programs into a graph that can be executed by the Higher-order Virtual Machine (HVM). From there, the graph can either be evaluated on HVM's slow-but-definitely-correct interpreter, translated to C and compiled for execution on the CPU, or compiled to NVIDIA's CUDA language for execution on the GPU. Both of the compiled versions execute in multiple threads, although the GPU version typically has significantly more hardware threads available.
Bend is not magic, of course. Some algorithms are inherently serial, and so there's no way for Bend to effectively split them over multiple threads. Generally, in order to take advantage of Bend's capabilities, the documentation recommends writing things using a divide-and-conquer approach. For example, insertion sort is not any faster in Bend than in any other language (because each version of the array is needed to calculate the next array), but bitonic sort, which breaks the array into independent chunks during sorting, can be much faster.
I ran a series of performance experiments with Bend, to evaluate the potential speedup from running on the GPU. I used the examples from the Bend repository, which are mostly microbenchmarks of various kinds. Also, obviously, the CPU and GPU of a computer are not necessarily comparable in terms of straight-line performance. So these numbers should be taken with a large grain of salt as approximate indications of what is possible for some algorithms.
With that said, running a test that generated a large array and then sorted it with bitonic sort took 68 seconds on a single thread, 25 seconds on four threads (although this may have been impacted by hyperthreading on the virtual machine, which had access to four dedicated vCPUs), and 0.78 seconds on the GPU. This is not exactly a linear speedup — the GPU I tested on has 6,144 cores — but that is probably down to the overhead of loading the data into GPU memory and decoding the result once the program finishes. The language definitely shows its potential for inherently parallelizable algorithms.
Vine
Vine is a programming language started by "T6" in June 2024. When I emailed them to ask about Vine, they said they started the project because "there's a lot of potential in interaction nets, and I wanted a language that could realize that." Since then, the project has attracted a handful of other contributors. Unlike Bend, Vine does not currently have code generation for GPUs, but it is still designed around a graph-based representation, known as an interaction net, that can be executed on Vine's own virtual machine. Vine is dual-licensed under the Apache 2.0 and MIT licenses.
Vine focuses less on having a simple syntax, and more on exposing the power of its underlying model to the programmer. For example, it uses borrowed references to let the programmer avoid copies in many cases.
Its most unusual feature is probably a concept called an "inverse type", which can loosely be taken as representing an edge in the graph that "goes the other way". This is a concept more or less unique to graph-based languages, although it's somewhat similar to a promise in JavaScript or a channel in Go. A variable of a normal type represents a value that some other part of the program is responsible for generating; a variable of an inverse type represents a value that this part of the program is responsible for generating and sending out.
For example, a variable of type N32 (a 32-bit unsigned integer) corresponds to an edge that points to some computation that eventually returns an N32. The other end of that edge is represented by the inverse type, ~N32, and must eventually be given an N32. This mechanism lets the programmer write code like this function, which finds the minimum value in a list and then subtracts that value from each element:
fn sub_min(&list: &List[N32]) {
  // A normal variable, to store the smallest item of the list that we've
  // seen so far. We start with the maximum value.
  let min_acc = 0xFFFF_FFFF;
  // An inverted variable, which represents an obligation to eventually
  // provide the actual minimum value in the list.
  let ~min: N32;
  let it = list.iter();
  while it.next() is Some(&val) {
    // As we iterate through the list, we keep track of the smallest value
    // so far in the normal way.
    if val < min_acc {
      min_acc = val;
    }
    // But we simultaneously get the eventual minimum value and subtract
    // it from each element.
    val -= ~min;
  }
  // After going through the whole list, we know what the real minimum was,
  // so we fulfil our obligation to provide it by putting the value into ~min.
  ~min = min_acc;
}
Unlike the same code in a more typical language, this code only needs to traverse the list once. As it does, it fills the list up with partially-evaluated nodes corresponding to the subtraction. Once it reaches the last line (where the obligation to provide a value for ~min is discharged), those nodes have access to both of their arguments, and get evaluated in place, putting the final values into the list.
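A loose emulation of the inverse variable is possible in a conventional language by modeling ~min as a slot that is filled exactly once, with each element's subtraction deferred until the slot is filled. This Python sketch (my own construction, not Vine semantics) captures the idea, though strict evaluation forces a second pass where Vine needs only one traversal:

```python
# Emulating Vine's ~min "inverse" variable: each element becomes a deferred
# subtraction node that reads a shared slot; the slot is filled once the
# real minimum is known. Purely illustrative -- strict Python cannot
# evaluate the deferred nodes in place the way Vine's interaction nets do.
def sub_min(lst):
    min_acc = 0xFFFF_FFFF          # largest N32 value
    slot = []                      # the "obligation": filled exactly once
    deferred = []
    for i, val in enumerate(lst):
        if val < min_acc:
            min_acc = val
        # record a node that will subtract the eventual minimum
        deferred.append((i, val))
    slot.append(min_acc)           # discharge the obligation (~min = min_acc)
    for i, val in deferred:        # the nodes now have both arguments
        lst[i] = val - slot[0]
    return lst
```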
While interesting, and more than a little mind-bending, Vine hasn't yet shown whether this feature is really useful in real-world programs. I asked T6 whether they thought that Vine was practical for real programs, to which they said:
Not quite yet. There are a few additional features that I think are useful for writing more complex programs; they've recently been unblocked by my implementation of traits. Some more [language server] features would improve [quality of life] a bit as well.
[...]
Now, I think for it to be used in the real world, there will need to be applications of interaction nets in the real world. I think there will be, but it remains to be seen.
They also thought that eventually implementing graph-based functional programming languages on the GPU (or even on custom hardware designed for the purpose) could be "very fast", but that it would take some additional work to catch up to the decades of work that have gone into optimizing programming language runtimes on existing CPUs. "I'm not trying to compete with that."
Their future plans for Vine at this point involve rewriting Vine's compiler in the language itself, which "will be a good test of the practicality of the language". They also want to investigate ways to statically prevent deadlocks, and to develop a debugger. In general, T6 seems optimistic about the future of graph-based functional programming, and encouraged anyone who was interested in the topic to look into it, since relatively few people are focused on it at the moment.
Conclusions
While neither Vine nor Bend seems likely to become a competitor to major programming languages any time soon, they do indicate an exciting area of potential research for future languages. The hardware may have finally caught up with the lofty predictions of how to implement efficient functional languages. If computers continue adding more parallelism, there could be a lot of utility in programming language designs that support automatically scaling to match the hardware. Unlike some other advances in programming language design, however, it seems difficult for existing languages to adopt these features, since it would involve completely changing their evaluation model.
A hole in FineIBT protection
Intel's indirect branch tracking (IBT) is a hardware-implemented control-flow-integrity mechanism that makes it harder for an attacker to gain control of the system by way of a corrupted indirect branch. FineIBT is a software extension to IBT that is meant to improve its protection. Recently, though, Jennifer Miller reported a novel way to bypass FineIBT by taking advantage of how the kernel's system-call entry point is constructed. In response, Peter Zijlstra is working on some FineIBT enhancements to close that hole and make IBT more secure in general.
The kernel (like many other programs) makes extensive use of indirect branches, typically in the form of a function call using a pointer value that is determined at run time. These indirect calls have always been an attractive target for attackers; if that pointer can be set to an attacker-controlled value, the call can be sent to an arbitrary location. Usually, that is all that is needed to gain control over the system. In a body of code as large as the kernel, there will surely be an instruction sequence that performs some useful operation for an attacker when it is invoked in an unexpected way.
IBT implements forward-edge control-flow integrity by blocking indirect calls to arbitrary locations. It works by requiring that the target of an indirect call be a special instruction, either endbr32 or endbr64 (collectively referred to as endbr), that serves as a marker indicating a legitimate call target. IBT greatly reduces the number of places an indirect call can go; instead of anywhere in the program (including in the middle of a multi-byte instruction), calls can only go to actual function entry points.
This restriction improves security, but only by so much. There are still a lot of functions in a program like the kernel, and much mayhem can be created by redirecting the control flow to an unexpected function. The protection would be much tighter if IBT could ensure that an indirect call lands on one of the intended targets, rather than on any function. FineIBT, which was described in this paper in 2023, is a step in that direction. In the kernel's implementation, every indirect call is modified to first load a special hash value into a specific register. The called function, immediately after the endbr instruction, will compare that hash against the expected value; if the two do not match, execution is aborted. The hash is generated from the prototype of the called function, but is then perturbed at boot time so that the hashes used in any given running system are different (and hopefully unknown to attackers).
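The relationship between the two checks can be sketched abstractly. The toy Python model below is purely illustrative (the kernel does this with patched machine code, not data structures): each function carries a hash derived from its prototype and a per-boot random value, and an indirect call succeeds only when the caller's expected hash matches the target's.

```python
# A toy model of the FineIBT idea: plain IBT only checks that the target is
# a legitimate entry point at all, while FineIBT also compares a hash of
# the function prototype, perturbed by a boot-time random value. This is a
# conceptual sketch, not how the kernel implements it.
import hashlib
import secrets

BOOT_SEED = secrets.token_bytes(8)   # stands in for boot-time perturbation

def prototype_hash(prototype: str) -> int:
    digest = hashlib.sha256(BOOT_SEED + prototype.encode()).digest()
    return int.from_bytes(digest[:4], "little")

class Function:
    """A callable with an entry-point hash (stored after endbr in reality)."""
    def __init__(self, prototype, body):
        self.hash = prototype_hash(prototype)
        self.body = body

def indirect_call(target, expected_hash, *args):
    # Plain IBT would only require that "target" is a Function at all;
    # FineIBT additionally verifies the hash before execution proceeds.
    if target.hash != expected_hash:
        raise RuntimeError("control-flow integrity violation")
    return target.body(*args)
```

Because the seed changes at every boot, an attacker cannot precompute the hash for a desired target even if the prototype is known.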
FineIBT was merged for the 6.2 kernel and has, hopefully, made life a bit harder for attackers ever since. As Miller has demonstrated, though, that protection is not absolute. In this case, the way around FineIBT takes advantage of one special assembly-language function within the kernel, entry_SYSCALL_64(), which is called by the CPU (on x86_64 systems) when user space makes a system call. Looking at the code, one can see that it begins, as expected, with an endbr instruction; IBT requires that, even in response to a system-call trap.
The following instruction, though, is not the usual FineIBT hash validation, since this function will not be called from within the kernel. Instead, it is swapgs, which exchanges the contents of the processor's GS-segment base register with the contents of a special model-specific register (MSR). This instruction is needed because, on entry into the kernel, the kernel's execution environment has not been set up, so it is not possible to access memory (or do much of anything). Executing swapgs is the first step toward establishing that environment, allowing access to kernel data and the kernel stack. Immediately prior to the return to user space, the kernel will execute another swapgs to restore the GS base register to its user-space value.
If an attacker is able to redirect an indirect branch to land on entry_SYSCALL_64(), the hardware IBT check will pass, since the expected endbr instruction is present. The FineIBT hash check, though, will not happen, since that code is missing from the function preamble. As a result, hostile indirect calls to that function will be allowed to proceed. That is bad enough, but the swapgs instruction makes it far worse. It will restore the user-space GS-base value (the one that was replaced when the kernel was first entered) while the CPU is still running in kernel mode; user space is allowed to change that register, so the kernel's GS base will be set to a value that is entirely under the control of the attacker. Among other things, that puts the kernel stack under the attacker's control; the result is a quick takeover of the kernel.
When spelled out in this way, the problem is reasonably obvious; checking a hash within the called function can only work if every function includes that checking — and there are functions, including entry_SYSCALL_64(), that cannot do the checking. Moving the checking to the caller avoids this problem, at the cost of making the entire sequence a bit more expensive. That is the approach that Zijlstra has taken; the code sequence that is used for this checking, found in this patch, merits a look:
/*
 * Notably LEA does not modify flags and can be reordered with the CMP,
 * avoiding a dependency. Again, using a non-taken (backwards) branch
 * for the failure case, abusing LEA's immediate 0xf0 as LOCK prefix for the
 * Jcc.d8, causing #UD.
 */
asm( ".pushsection .rodata                    \n"
     "fineibt_paranoid_start:                 \n"
     "  movl  $0x12345678, %r10d              \n"
     "  cmpl  -9(%r11), %r10d                 \n"
     "  lea   -0x10(%r11), %r11               \n"
     "  jne   fineibt_paranoid_start+0xd      \n"
     "fineibt_paranoid_ind:                   \n"
     "  call  *%r11                           \n"
     "  nop                                   \n"
     "fineibt_paranoid_end:                   \n"
     ".popsection                             \n"
);
The 0x12345678 is patched at run time with the expected hash value. When the time comes to perform the indirect call, the cmpl instruction compares the patched-in value against the hash that is expected to be stored just prior to the entry point of the indirectly called function. The lea instruction adjusts the call target (subtracting 0x10 from %r11), but it is also there for the following bit of cleverness. The jne instruction looks at the result of the cmpl two instructions before; in the not-equal case (the hash did not match), it jumps backward into the just-executed code; otherwise the call is executed as usual.
Why the backward jump? Since branches that are not taken are faster than those that are, it is better to jump in the uncommon case. This particular jump is noteworthy, though, in that it will land in the middle of the lea instruction, which will cause the CPU to see an invalid instruction sequence and generate a #UD trap. This trick, it seems, is the fastest and most space-efficient way that could be found to perform the test and generate the trap without slowing down legitimate indirect function calls (which should be all of them) any more than necessary. This special sequence is evidently the brainchild of Scott Constable at Intel; in a previous version of the patch set, Zijlstra admonished: "be warned, Scott loves overlapping instructions".
At the completion of this patch series, there are a couple of new options to the painstakingly undocumented cfi= command-line parameter. Setting cfi=warn causes control-flow-integrity errors to generate a warning rather than generating an oops, while cfi=paranoid enables the new verify-before-calling mode. Toward the end of the series, there is also a patch adding another option, cfi=bhi, that improves the Spectre mitigations that are supposed to be built into IBT, but which have been found to be lacking at the hardware level in some processors.
Zijlstra, in the cover letter, expressed the hope that the current version of the patch set would be the last before it is merged. Such hopes are often dashed in the kernel world, but this series would appear to be getting close to completion. It is not clear whether attackers have ever exploited the bypass reported by Miller but, once this code goes in, the authors of any such exploits will have to look for a new way to get around the kernel's control-flow-integrity protections.
Guard pages for file-backed memory
One of the many new features packed into the 6.13 kernel release was guard pages, a hardening mechanism that makes it possible to inject zero-access pages into a process's address space in an efficient way. That feature only supports anonymous (user-space data) pages, though. To make guard pages more widely useful, Lorenzo Stoakes has put together a patch set enabling the feature for file-backed pages as well; in the process, he examined and resolved a long list of potential problems that extending the feature could encounter. One potential problem was not on his list, though.
The purpose of a guard page is to prevent buggy (or malicious) code from overrunning a memory region. An inaccessible page placed at the end of a region will cause a segmentation fault should the running process try to read or write to it; well-placed guard pages can trap a number of common buffer overruns and similar problems. Prior to 6.13, though, the only way to put a guard page into a process's address space was to set the protections on one or more pages with mprotect(); that works, but at the cost of creating a new virtual memory area (VMA) to contain the affected page(s). Placing a lot of guard pages will create a lot of VMAs, which can slow down many memory-management functions.
The new guard-page feature addresses this problem by working at the page-table level rather than creating a new VMA. A process can create guard pages with a call to madvise(), requesting the MADV_GUARD_INSTALL operation. The indicated range of memory will be rendered inaccessible; any data that might have been stored there prior to the operation will be deleted. There is an operation (MADV_GUARD_REMOVE) to remove guard pages as well.
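The operation can be exercised from user space with a short script. This sketch uses Python's mmap module; the MADV_GUARD_INSTALL and MADV_GUARD_REMOVE values (102 and 103) are taken from the 6.13 uapi headers, since they are not yet exposed as mmap-module constants. On kernels without the feature, the madvise() call fails with EINVAL, which the sketch reports rather than treating as an error:

```python
# A sketch of installing a guard page over the middle page of a
# three-page anonymous mapping. MADV_GUARD_INSTALL/REMOVE (102/103) are
# assumed from the 6.13 kernel headers; pre-6.13 kernels reject them.
import mmap

MADV_GUARD_INSTALL = 102
MADV_GUARD_REMOVE = 103

def guard_middle_page():
    """Return "guarded" if a guard page was installed and removed,
    or "unsupported" on kernels without the feature."""
    page = mmap.PAGESIZE
    m = mmap.mmap(-1, 3 * page)        # anonymous mapping, three pages
    try:
        m.madvise(MADV_GUARD_INSTALL, page, page)
    except OSError:
        m.close()
        return "unsupported"
    # Touching m[page] from here would deliver SIGSEGV to this process,
    # while the pages on either side remain readable and writable.
    m.madvise(MADV_GUARD_REMOVE, page, page)
    m.close()
    return "guarded"
```

Because the guard lives in the page tables rather than in a new VMA, a program can scatter many such pages without the VMA-proliferation cost of the mprotect() approach.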
Placing guard pages in VMAs containing anonymous pages is the simplest case, which is why anonymous pages were supported first. These pages have no connection to any file on disk, so there are relatively few hazards involved with changing their behavior. File-backed pages bring more complexity, though, and a number of places where guard pages could cause problems. Stoakes goes through the list in detail in the patch posting.
For example, readahead is an important part of maintaining performance when a process is working sequentially through a file. As that process reads some data from a file, the kernel can guess that the process will go on to request the following data in the file in the near future. By initiating a read operation before user space gets around to asking for the data, the kernel can ensure that this data is present (or at least on its way) when the request arrives. The presence of a guard page will stop readahead cold at that point, since the page has been marked inaccessible. As Stoakes notes, this should not be a problem, since it would be unusual for a process to map a file, place a guard page, then try to read through that page.
Similar complications arise in other situations. The kernel will often try to "fault around" a page that has been faulted in, under the assumption that nearby data will be of interest; guard pages will prevent that as well. If a file is truncated, the removed portion may include guard pages, but the guard pages themselves will remain in place. And so on; in each case, Stoakes has ensured that the kernel's operation will be correct and make sense.
There are still a couple of exceptions, though, one of which was known about before the patches were posted, while the other was a surprise. The known issue is that guard pages cannot be placed in memory areas that have been locked into RAM with mlock(). The problem, as Vlastimil Babka pointed out, is that mlock() guarantees that the affected pages will not be kicked out of RAM. Installing a guard page, though, frees any data stored there, which runs counter to the mlock() promise. Stoakes is considering a new operation that would make this data destruction explicit in that case but, as David Hildenbrand said, "mlock is weird" and there are a number of other details that would have to be managed there.
The unexpected issue was raised by Kalesh Singh, who wondered how the presence of guard pages would be represented in /proc/PID/maps and /proc/PID/smaps. These files, which are documented in Documentation/filesystems/proc.rst, describe a process's VMAs in detail. Singh said:
In the field, I've found that many applications read the ranges from /proc/self/[s]maps to determine what they can access (usually related to obfuscation techniques). If they don't know of the guard regions it would cause them to crash; I think that we'll need similar entries to PROT_NONE (---p) for these, and generally to maintain consistency between the behavior and what is being said from /proc/*/[s]maps.
It seems that banking apps running on Android are known for this sort of behavior and could run into trouble if guard pages are installed — which is something that the Android runtime might well want to do as a general hardening measure. Since those apps already read the indicated /proc files, Singh thought that would be a logical place to indicate the presence of the guard pages.
This request took Stoakes by surprise, since he thought the topic had been discussed previously and the situation understood. That situation is that, since those files describe VMAs, they are not a suitable place to put information about guard pages which, by design, do not have their own VMAs. Hildenbrand quickly suggested that a bit in /proc/PID/pagemap, which provides page-level data now, would be the best way to export that information to user space. The conversation nonetheless became a little tense, seemingly mostly as a result of misunderstandings rather than true disagreement.
In the end, though, it was agreed that pagemap was the right place for this information. Suren Baghdasaryan eventually joined the conversation, saying that some work would be needed to make this information available to apps in the Android system, but that he would start on that project. Apologies and thanks were shared around, and Stoakes said that he would go ahead and implement the kernel side of the pagemap solution.
With that issue seemingly resolved, there do not appear to be any serious obstacles to this feature heading toward the mainline in the near future. The patch series (minus the pagemap changes) is sitting in linux-next now and could conceivably go upstream as soon as the 6.15 merge window. That should result in easier and cheaper user-space hardening, which seems worth the trouble.
Fedora discusses Flatpak priorities
Differences of opinion, as well as outright disputes, between upstream open-source projects and Linux distribution packagers over packaging practices are nothing new. It is rarer, though, for those disputes to boil over to threats of legal action—but a disagreement between the Open Broadcaster Software (OBS) Studio project and Fedora packagers reached that point in mid-February. After escalation to a higher authority, things have been worked out to the satisfaction of the OBS project, but some lingering questions remain. How Fedora should prioritize Flatpak repositories, how to handle conflicts between upstreams and Fedora packagers, and the mechanics of removing or retiring Flatpaks all remain open questions.
Flatpak
Flatpak has been in development for more than a decade, and its earliest origins date back to 2007. Originally called xdg-app, it is designed to bundle individual applications and their dependencies into an easy-to-install format that works on any Linux distribution.
It is more than just a packaging format, though. Flatpaks are sandboxed, which means that an application installed as a Flatpak can be limited or restricted entirely from accessing user files, graphics sockets, networking, devices, and so forth. They are also self-contained, so installing an application using a Flatpak won't disrupt other applications and libraries on a user's system. Flatpaks can be distributed as OStree repositories or Open Container Initiative (OCI) images.
Flatpak is meant to provide a number of benefits for projects, Linux distributions, and users over Debian packages, RPMs, and other distribution-specific formats. A project can target a single packaging format and distribute the software with the project's preferred defaults. Users don't have to depend on their distribution to package the application, or for distribution packagers to stay up to date with the most recent releases. The upstream can release major updates at whatever cadence it pleases, for example, and users don't have to wait for the next distribution major release for the upgrade.
Projects can also make use of runtimes that provide libraries and toolchains, such as the GNOME and KDE runtimes, which free developers from having to package all those dependencies or worry about things like "which version of GTK or Qt is available on the user's system?" Linux distributions, and their users, benefit from a wider selection of software without the need for the distribution to package everything under the sun.
Adoption of Flatpak has been on the upswing in the past few years. Some are even pushing for it to be the de facto method for installing software on the Linux desktop. Work on Flathub as a central repository for Flatpaks started in 2017, and it passed the one million active user mark last January. Flatpaks are being prioritized by GNOME as a way for distributing GNOME software, and it is the preferred and most expedient way to install applications on image-based distributions like Fedora's Atomic desktops, Bluefin, Aeon Desktop (based on openSUSE), and others because Flatpaks can be installed in the user's home directory and do not need to be installed system-wide. Fedora has its own Flatpak repository, which contains Flatpaks built from Fedora RPMs and distributed as OCI images. Fedora serves its Flatpak images and Linux container images from the same registry, and provides its own runtimes for applications rather than using the Freedesktop, KDE, or GNOME runtimes.
The complaint
OBS Studio is a popular open-source, multi-platform project for live streaming and screen recording. The project provides its own Flatpak package via Flathub and prominently recommends installing that package on its download page. Fedora enables the Flathub repository in the GNOME Software package-management application, but also provides OBS Studio as an RPM and as a Fedora-packaged Flatpak in the Fedora Flatpak repository. Users searching for OBS Studio using GNOME Software will be offered the Fedora Flatpak as the default, then the Fedora RPM, and finally the OBS Studio official Flatpak from Flathub.
OBS contributor Joel Bethke created a ticket with the Fedora Flatpak special interest group (SIG) on January 21. He complained that the OBS Studio Flatpak from Fedora was "poorly packaged and broken" which, in turn, led users to complain to the upstream project because they thought they were getting the official OBS Flatpak. The initial report did not specify what those issues might be, but later in the thread Angaddeep Singh pointed to a bug filed with OBS Studio that complained about problems using the H.264 encoder, and a complaint in the OBS forum that the Twitch chat and streaming interface were missing. In both cases, users were using Fedora-packaged versions of OBS and complaining upstream about problems that were not present in the OBS builds. Bethke later said that the project had received complaints specifically about the Fedora Flatpak that included regressions due to the Qt version bump, hardware acceleration not working, OpenH264 encoder failures, and even failure to launch the application at all. He said that the OBS Project's Flatpaks, and "in some cases, even the RPM", did not have the same issues. The reason Bethke requested that the Flatpak either be removed, or that the Fedora Flatpak be clearly identified as a third-party package, is that users were not even aware they were using Fedora's instead of the OBS project's. He also wanted an explanation of why Fedora would publish its package at a higher priority than OBS's official builds.
In response, Michael Catanzaro proposed in a ticket for the Fedora Workstation's working group that Fedora deprioritize or remove Fedora Flatpaks from GNOME Software. He made the case that Fedora Flatpaks have been "a significant source of quality problems and have frankly been generally unsuccessful". Users want Flatpaks created by upstreams, but are confused by the way they are displayed in GNOME Software. He recommended that the priority be changed to display software from Flathub first, then Fedora Flatpaks, and finally Fedora RPMs.
On February 4, Catanzaro posted an update to his ticket with notes from the Workstation working group's meeting (minutes). He said that the working group had not reached any final conclusions, but had consensus on a few points. One was that Fedora Flatpaks "are generally the worst software source for users to install", so they should not have the highest priority in GNOME Software. He also acknowledged that there are problems with the user interface for GNOME Software, because it was easy for users to be confused about the source for software. However, he had some critical words for the OBS Studio project's packaging: he preferred the RPM version to the OBS Flatpak because the OBS version "notably depends on an EOL runtime that no longer receives security updates".
Currently, OBS Studio's Flatpak depends on version 6.6 of the KDE/Qt runtime, which reached end-of-life when the 6.8 runtime was made available in October 2024. The KDE project only maintains branches with the two most recent versions of Qt (currently 6.7 and 6.8), as well as long-term-support branches for Qt 5.15. This means that a 6.x version of the runtime has a shelf life of about a year. According to the list of vulnerabilities for Qt, there seem to be a few vulnerabilities that would impact the 6.6 runtime, but it's unclear if any would present a real security problem for users of OBS Studio. Bethke replied that OBS Studio had well-documented reasons for not updating the KDE runtime due to regressions in Qt. "We made the choice to have a functional application, report the bugs upstream, and update the dependency once they have been fixed." He also said that the project had the impression that the concerns of OBS Studio as an upstream were being dismissed.
I have said this before, but I will say again it should not be upstream's responsibility to track and report on downstream packaging issues, and the fact that we have and it's been ignored has been frustrating. I still don't have a good understanding on why Fedora is so adamant about repackaging and redistributing Flatpaks wholesale without proper testing or validation.
He also pointed out that Yaakov Selkowitz, who owned the obs-studio Flatpak package for Fedora, was maintaining hundreds of packages. "In what world is a single person able to maintain, test, and support, that many packages and applications?" The Fedora Flatpak initiative, he said, seemed poorly planned, badly implemented, and largely unsustainable.
Catanzaro agreed with several points that Bethke made, but ultimately said, on February 12, that allowing a runtime to reach end of life "is unacceptable and indicates terrible maintainership". That seemed to be the proverbial straw that broke the camel's back; it prompted Bethke to issue a formal request that Fedora remove any OBS Studio branding from the distribution, and to threaten legal action if Fedora failed to comply.
Resolution
On February 14 Fedora Project Leader (FPL) Matthew Miller stepped in to deescalate the situation, which brought Bethke back to the table to talk. Bethke said that, as an outsider, it was difficult to navigate Fedora's processes and groups when he didn't even know what some of the acronyms stood for or which groups to engage with.
I was asked to report things to several different locations by, supposedly, representatives of Fedora.
It was a frustrating process to even know who I should be talking to about these issues, and when the people who were talking to us decided to dismiss the issue and insult us instead, we were left with what we felt was no choice but to take action. We are always open to discussion as long as it is in good faith, and still are.
Fedora contributor Adam Williamson summarized the situation and sympathized with Bethke. He also noted that the request to assert OBS Studio's rights and remove the OBS Flatpak was being handled as a high-priority request. Unfortunately, he said, there was no current process for removing a Flatpak from Fedora. Indeed, based on the discussion in the ticket for Fedora release engineering there was confusion about whether it was actually possible to remove a Flatpak image from the index, and how to notify users of such an event. Following a meeting with the Flatpak SIG and Miller, Bethke withdrew the request to remove or rebrand the OBS Studio Flatpak on February 18, which reduced the urgency to find a solution. Bethke suggested that the ticket be used to track the technical issues with the Fedora Flatpak, and asked that Fedora make it clearer for upstream projects to report bugs.
Even though the OBS situation seems to be resolved, Fedora is left with a number of things to work through with regard to Flatpak. The first and most obvious, of course, is creating a policy and method for removing Flatpaks from its repository. It is somewhat surprising that the project rolled out a method of distributing packages without a plan to address the inevitable need to quickly remove a package from the repository.
Priority
The next open question to tackle is the priority order for RPMs and various flavors of Flatpak in GNOME Software. When Fedora initially accepted the "unfiltered Flathub" proposal for Fedora 38, the Fedora Engineering Steering Council (FESCo) rejected the idea of prioritizing Flathub software over Fedora sources. (LWN covered this in 2022.) It's possible that attitudes have changed in the past couple of years, particularly given Flathub's increasing popularity and some of its newer practices for allowing projects to verify Flatpaks as being provided directly by the project. As Williamson suggested in the OBS Studio ticket, it may be time to rethink the role of distribution-specific Flatpaks:
The story we should be reading from all the comments on this from folks who find it inherently surprising that Fedora would offer its own flatpaks over ones from flathub is that flathub is successful: for a lot of people who buy into the flatpak/snap/appimage-style approach, flathub is their primary trusted source for flatpaks and it's where they assume their flatpaks will come from 'by default', especially if they've clicked through a "turn on third-party repos" dialog. And there are genuine reasons for this - in general the flatpaks from flathub work, people like that they can recommend them to others regardless of distribution and expect a consistent experience, and people like the involvement of upstream developers in many flathub flatpaks.
However, Catanzaro's effort to give Flathub packages top priority does not seem to be finding much success. On February 18, he said that the Workstation working group had discussed the topic for a third time, and it had still not reached consensus. "I'm no longer confident that the Working Group will make any changes here." He also said that he planned to continue arguing to remove the Fedora Flatpak repository, which would make Fedora RPMs the top priority, followed by Flathub if it is enabled. But, he was beginning to fear that the working group would not accept his proposal. On February 25, he wrote that the group had discussed it yet again, and he thought that the group was evenly split on what to do. A possible option would be a proposal that Fedora Flatpaks would need to be owned by the package's RPM maintainer, which would result in "a drastically reduced set of Flatpaks".
Finally, it seems clear that there is work to be done to make it easier for upstream projects to engage with Fedora when they have questions or concerns about how their work is packaged. Justin Wheeler said that he had been working with the Fedora Council on an "explicit statement on our 'upstream first' policy" and had added some language to specifically address collaboration and clear lines of communication. However, that does not provide any concrete changes that would help upstreams navigate and resolve their problems with Fedora.
There is also the question of whether Fedora should honor requests from an upstream not to package software at all. Jordan Petridis wrote about an interview that Miller did recently where he discussed the OBS issue and an older kerfuffle about packaging the Bottles project. Petridis took issue with several of Miller's comments about Flathub's review processes, and complained that Fedora does not live up to its own standards around naming and branding that Miller discusses.
In the interview Miller talks about Fedora's branding rules and asking that projects do not call their derivatives "Fedora". However, the Bottles project asked Fedora not to package its software, or to call it something else if it was going to package the software. After initially considering the request and planning to drop it in Fedora 38, the packager decided to go ahead and provide Bottles RPMs anyway, since the upstream only provides Flatpak packages. And it is still called Bottles in Fedora, several years later, albeit with a disclaimer that it is not an official package and a link to Fedora's bug tracker rather than upstream's. Petridis said that it was wrong that it had to come to legal threats to get Fedora to "treat application developers like human beings and get the Fedora packagers and community members to comply".
Fedora, and other distributions, might want to consider creating a simple and straightforward process for upstreams to report problems and discuss concerns, one that does not require navigating a twisty maze of SIGs, working groups, and multiple bug or issue trackers. Even then, however, some upstreams may not wish to participate in any process at all, preferring to remain the sole distributor of their software, at least under their own brand.
The changes and compromises that go along with distribution packaging are not always compatible with the vision that upstreams have for their software. This is especially true with applications like OBS Studio that make use of specialized hardware and have dependencies on multimedia codecs or other libraries that distributions like Fedora do not ship, or ship in modified form for legal reasons. Distributions should consider when it is appropriate to leave software distribution to an upstream project that is uninterested in—or even hostile to—having distributions provide native packages.
The landscape has changed dramatically since Linux distributions were at the center of the universe for free-software distribution. There is still immense value in the Linux distribution packaging model, but it is not a good fit for all cases. Recognizing that, and accepting it, might improve things (and reduce drama) for everyone involved.
A look at the Zotero reference management tool
Zotero is an open-source reference management tool designed for collecting, organizing, and citing research materials. It is particularly useful for those writing research papers, theses, or books that require a bibliography in standard formats like APA Style, Chicago Style, or MLA Format. Zotero stores bibliographic metadata, annotations, and user data and integrates with word processors like LibreOffice, Microsoft Word, and Google Docs to produce in-text citations and bibliographies. The core features of Zotero include metadata extraction, tagging, full-text indexing, and cloud synchronization for multi-device access, and Zotero has a plugin system to allow anyone to expand its capabilities. The most recent major release, Zotero 7, added support for reading EPUBs, brought user-interface improvements including a dark mode, improved performance, and more.
History
Zotero was originally developed by the Center for History and New Media at George Mason University. The name "Zotero" is derived from an Albanian verb meaning "to master", reflecting the project's aim to empower users to manage their research data. The project was launched in October 2006 and was first released to the public as a Firefox browser extension. It is now maintained by the nonprofit Corporation for Digital Scholarship.
In 2011, the development team addressed the limitations of the browser-extension model by introducing a standalone version of Zotero, which allowed integration with multiple web browsers, including Google Chrome, Mozilla Firefox, and Safari, via browser extensions.
The Zotero client application, and its plugins, are primarily written in JavaScript; Zotero uses SQLite to store data locally. Zotero is available under the Affero General Public License (AGPL). The project encourages users to contribute by developing code, providing support to other users on the forum, or writing documentation. The community provides and maintains numerous plugins that extend the application's functionality and customization options, as well as its integration with other software. For example, Better BibTeX adds features for managing data with text-based toolchains like LaTeX and Markdown, while Better Notes extends Zotero's note-taking capabilities. There are dozens of community-supplied plugins to choose from, and users can also write their own, of course.
According to the project's GitHub page, 76 people have contributed to its development. The pull requests and issues sections are fairly active, as is the discussion forum, which is the recommended venue for support. A complete version history of the project is available all the way back to the 1.0 release.
Getting started
To begin using Zotero, users will need to install the client application and will probably want a browser extension as well. The application and browser extensions are available via the download page. The project only supplies a tarball with the application binaries, rather than providing distribution packages. Browser-specific extensions must be installed to enable metadata extraction directly from online sources. Zotero parses structured data embedded in web pages, such as bibliographic metadata from journal databases or library catalogs, using translators written in JavaScript.
After completing the installation, users can set up a Zotero account with synchronization features, though this is optional. There is no cost for an account, but the free tier comes with a 300MB limit on data storage. If users need more they will need to pay for storage. Synchronization mirrors the user's library, including metadata, collections, and notes, to Zotero's servers and to all signed-in devices. Zotero can also store metadata and attachments separately, and users may wish to save synchronization space by storing attachments locally.
Interface
The Zotero interface is designed for users who require detailed control over their bibliographic data. It consists of a three-pane layout: a navigation pane, a content pane, and a metadata editor, each serving distinct functions.
The navigation pane provides hierarchical navigation of the library. Collections and subcollections act as organizational nodes, allowing users to categorize references. The "Tags" and "Saved Searches" sections facilitate dynamic queries and labeling.
Zotero supports creating saved searches using criteria such as item type, tags, creators, or custom fields. I find it convenient to create a bibliographic folder in this panel for each article or book I plan to work on. Each of these folders can contain references related to a specific project, helping to provide a well-organized library that remains easily accessible in the future. In the case of a book, I also create subfolders corresponding to its various chapters. This makes it easier to locate and update references as needed.
The content pane shows the contents of the selected collection or search query. It lists the citations associated with each of the folders in the left pane and functions as the main data grid, with columns for title, creator, year, and other metadata fields. Users can choose which columns are displayed via the column selector, enabling a tailored view of the dataset.
Items can be sorted by clicking column headers, and multi-field sorting can be performed using modifier keys. Drag-and-drop functionality provides quick reorganization of items within collections. Items can be added manually with the "New Item" button, but I have never needed to enter items manually, thanks to the browser extensions.
The metadata editor pane has three tabs: Info, Notes, and Tags. The Info tab allows detailed bibliographic entry editing, including custom fields. To edit an item, simply click on it and make the desired changes. The Notes tab supports rich-text annotations, while the Tags tab enables manual and automated tagging. Tags are searchable and can be color-coded for visual prioritization. Full-text indexing of attachments, such as PDFs, is accessible here, allowing granular search capabilities.
Zotero can be extended with scripts if it does not support something out of the box. For example, Zotero does not support batch processing natively, but that feature is available through a third-party script. It's possible to select multiple items to perform bulk edits, such as updating metadata fields, tagging, or attaching files. Batch operations are particularly useful for cleaning imported datasets or reformatting collections.
The search bar, located at the top right, supports realtime filtering with the ability to toggle between "All Fields and Tags", "Title, Creator, Year", and "Full-Text" search modes. For more precise queries, the advanced search interface provides multi-condition filtering with Boolean operators, nested conditions, and specific field targeting.
Workflow
Users can organize their library hierarchically through collections and subcollections, effectively creating a nested structure for research projects. Adding items to the library can be automated via browser extensions or manually using the "New Item" functionality. Thanks to this integration, a researcher could search PubMed (which has citations for biomedical literature) and add an article to their library simply by right-clicking on it and selecting "Save to Zotero".
Alternatively, on any web page, users can save the current page's URL by using the Zotero Connector extension for their browser. The documentation walks through using the connector and customizing it. For manual additions, it's possible to input standard bibliographic fields such as title, authors, publication date, and identifiers like DOI or ISBN. Zotero supports metadata standards like Dublin Core and MARC, providing compatibility with library systems.
To integrate Zotero into the research and writing process, the application offers plugins for word processors such as LibreOffice, Microsoft Word, and Google Docs. Citations can be added or edited quickly by using the plugin to search the library and choosing a citation style. Zotero uses the Citation Style Language (CSL) for formatting, supporting a vast library of styles that can be customized via XML editing for discipline-specific requirements. Bibliographies can be generated dynamically, which are updated automatically as references are added or modified.
Data management in Zotero extends to file attachments, too. Users may associate references with PDFs, images, or datasets. These files are indexed locally, allowing full-text searching, which is enabled through a built-in reader based on Mozilla's PDF.js engine. Users can configure external tools, such as PDF annotators, to integrate with Zotero for a seamless workflow. However, using this option increases storage-space requirements, which can push a user past the 300MB free tier. I've never felt the need to use this feature, as I prefer to save PDFs and other supplementary files on my local hard drive.
For collaboration, Zotero provides group libraries hosted on its servers. Group libraries allow multiple users to share references, notes, and files in a centralized repository. Zotero supports role-based access, so a group can control access to sensitive research data (for example) by limiting it to certain roles. It is free to create groups and share libraries, but storage counts toward the group owner's quota. For those who don't see the need to upload document files to the cloud, the storage limit is a negligible restriction.
Developers can use the Zotero API to interact programmatically with their libraries, enabling integrations with institutional repositories, content-management systems, or other research platforms. For local usage, one can even manipulate the SQLite database directly, though this requires a solid understanding of Zotero's schema to avoid data corruption.
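As a sketch of what that programmatic access looks like, here is a minimal example against the Zotero web API (version 3). The user ID and API key shown are placeholders; real values come from a user's zotero.org account settings, and the title-extraction helper is an invented convenience, not part of any official client library.

```python
# Minimal sketch of reading items from the Zotero web API (v3).
# The user ID and API key are placeholders; real values come from
# the account settings page on zotero.org.
import json
import urllib.request

API_BASE = "https://api.zotero.org"

def build_items_request(user_id, api_key, limit=5):
    """Build a request for the most recent top-level items in a library."""
    url = f"{API_BASE}/users/{user_id}/items/top?format=json&limit={limit}"
    return urllib.request.Request(url, headers={
        "Zotero-API-Version": "3",
        "Zotero-API-Key": api_key,
    })

def item_titles(response_body):
    """Extract titles from the JSON array of items the API returns."""
    return [item["data"].get("title", "(untitled)")
            for item in json.loads(response_body)]

# To actually fetch (requires a valid key):
#   with urllib.request.urlopen(build_items_request(uid, key)) as resp:
#       print(item_titles(resp.read()))
```

The same endpoint structure (`/users/<id>/...` or `/groups/<id>/...`) covers collections, tags, and searches, which is what makes integrations with external research platforms feasible.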
Finally, proper backup practices are essential for long-term reliability. While Zotero's synchronization feature offers redundancy, users should periodically export their library in JSON or RDF format for external archiving. The data directory, which contains the SQLite database and attached files, can also be manually copied for offline storage. These measures can ensure that research materials remain intact even in the event of system failures.
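Copying the data directory can be as simple as a short script. This is a sketch only: the default `~/Zotero` location and the backup destination are assumptions to adjust for your setup, and Zotero should be closed before copying so the SQLite database is not mid-write.

```python
# Sketch of an offline backup of the Zotero data directory.
# Paths are assumptions; ~/Zotero is the usual default on Linux.
import shutil
import time
from pathlib import Path

def backup_zotero(data_dir, backup_root):
    """Copy the data directory (zotero.sqlite plus the 'storage'
    attachment tree) into a timestamped folder under backup_root."""
    stamp = time.strftime("%Y%m%d-%H%M%S")
    dest = Path(backup_root) / f"zotero-backup-{stamp}"
    # Close Zotero first so zotero.sqlite is in a consistent state.
    shutil.copytree(data_dir, dest)
    return dest

# Example: backup_zotero(Path.home() / "Zotero", "/mnt/backups")
```

Keeping several timestamped copies, rather than overwriting one backup, guards against discovering corruption only after it has been propagated.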
Drawbacks
Zotero has a lot going for it, but a stroll through its forum shows that users have identified some notable drawbacks, particularly for those with specific requirements. Some report that the interface may become sluggish when managing large libraries containing tens of thousands of references. However, this seems to be an extreme scenario that does not affect the common user experience. I have been using Zotero to manage bibliographies for scientific papers for several years and have found it reliable. In common usage scenarios, such as writing an article or a book, the bibliography consists of at most a few dozen references, so the application has always proved to be responsive.
Customization is another area where Zotero falls short. Although the application supports plugins and offers basic interface adjustments, users seeking deeper modifications—such as creating custom layouts or advanced automation workflows—often encounter limitations due to the lack of customization options for the core elements of the interface. Zotero's API is not designed for making major changes to the user interface. The JavaScript API is primarily focused on working with items in the user's library.
The tagging and organizational system does not natively support hierarchical or nested tags, which can be a limitation for users managing complex, multi-dimensional taxonomies. Similarly, batch-editing functionality is constrained by the interface design; certain operations, such as merging duplicates across collections or applying advanced metadata transformations, require external scripts or manual intervention.
The cloud-storage options are convenient, but come with a cost for users handling large datasets, particularly those with a lot of PDF or other attachments. Users have to choose between Zotero's proprietary service, a third-party WebDAV service, or running their own WebDAV server.
Zotero's reliance on JavaScript for its core operations introduces limitations in computationally intensive tasks. For instance, full-text indexing of large document libraries or bulk metadata updates can be slower compared to tools written in compiled languages.
Finally, while Zotero supports citation styles through CSL, advanced customization of styles requires manual editing of XML files, which may not be user-friendly. This can be a significant hurdle for researchers with specific formatting needs not covered by existing styles. Integration with non-standard word processors or LaTeX workflows, while possible, lacks the native polish offered by its main competitors, requiring users to rely on third-party tools or scripts.
Development
Zotero releases are based on feature development rather than a fixed schedule. Major versions are introduced when substantial changes, such as new features or architectural updates, are completed. Minor updates and patches are issued as necessary to address bugs, maintain compatibility, or refine existing functions. Unfortunately, there is no official development roadmap available. However, those interested in contributing or just curious about Zotero's development can join the Google Group and follow the discussions on GitHub. The development process is guided by technical requirements and user feedback.
Since the release of version 7.0, the community is no longer providing updates for the 6.x series. Zotero offers beta builds for users interested in testing upcoming features and improvements before they are released in the stable version. These beta versions are currently built from the development line for Zotero 7.1. Additionally, Zotero provides beta versions of its connectors for browsers like Firefox and Safari, which allow users to test connector-specific features.
Conclusion
Zotero is a useful tool for managing bibliographic data and integrating research workflows. A few simple configuration steps are enough to meet the needs of many professionals, including academic researchers working on a scientific paper, journalists writing articles for magazines, and anyone looking to draft a technical book. Its open-source foundation, extensive functionality, and cross-platform capabilities make it a suitable choice for advanced users. However, limitations in performance with large datasets, and restricted customization options, are areas for improvement.
Brief items
Security
Zen and the Art of Microcode Hacking (Google Bug Hunters)
The Google Bug Hunters blog has a detailed description of how a vulnerability in AMD's microcode-patching functionality was discovered and exploited; the authors have also released a set of tools to assist with this kind of research in the future.
Secure hash functions are designed in such a way that there is no secret key, and there is no way to use knowledge of the intermediate state in order to generate a collision. However, CMAC was not designed as a hash function, and therefore it is a weak hash function against an adversary who has the key. Remember that every AMD Zen CPU has to have the same AES-CMAC key in order to successfully calculate the hash of the AMD public key and the microcode patch contents. Therefore, the key only needs to be revealed from a single CPU in order to compromise all other CPUs using the same key. This opens up the potential for hardware attacks (e.g., reading the key from ROM with a scanning electron microscope), side-channel attacks (e.g., using Correlation Power Analysis to leak the key during validation), or other software or hardware attacks that can somehow reveal the key. In summary, it is a safe assumption that such a key will not remain secret forever.
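The core weakness described above, that a keyed MAC is only as good a "hash" as the secrecy of the shared key, can be illustrated in a few lines. This is not AMD's actual scheme: HMAC-SHA256 stands in for AES-CMAC (Python's standard library has no CMAC), and the key, messages, and verifier are all invented for illustration.

```python
# Why a keyed MAC makes a weak "hash" once the key leaks.
# HMAC-SHA256 stands in for AES-CMAC; everything here is illustrative.
import hashlib
import hmac

SHARED_KEY = b"same-key-burned-into-every-cpu"  # hypothetical

def patch_tag(patch_bytes):
    """Compute the MAC a verifier would accept for a microcode patch."""
    return hmac.new(SHARED_KEY, patch_bytes, hashlib.sha256).digest()

def verifier_accepts(patch_bytes, tag):
    """Model of the CPU's check: recompute the MAC and compare."""
    return hmac.compare_digest(patch_tag(patch_bytes), tag)

genuine = b"vendor-signed microcode update"
assert verifier_accepts(genuine, patch_tag(genuine))

# An attacker who extracts SHARED_KEY from any one CPU can "hash"
# arbitrary contents that every other CPU will then accept:
malicious = b"attacker-controlled microcode"
assert verifier_accepts(malicious, patch_tag(malicious))
```

With a true hash function there is no key to steal; here, the entire verification collapses once a single key is extracted, which is exactly the researchers' point.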
Kernel development
Kernel release status
The current development kernel is 6.14-rc5, released on March 2. Linus commented: "Nothing looks particularly big or worrisome".
Stable updates: 6.13.5, 6.12.17, and 6.6.80 were released on February 27.
The 6.13.6, 6.12.18, 6.6.81, and 6.1.130 updates are in the review process; they are due on March 7.
McKenney: Speaking at Kernel Recipes
Paul McKenney has put together a series of articles on how to improve one's ability to give a good talk at a technical conference.
On the other hand, (1) presentation skills stay with you through life, and (2) small improvements in presentation skills over months or years can provide you with great advantages longer term. An old saying credited to Thomas Edison claims a breakdown of 1% inspiration and 99% perspiration. However, my own experience with RCU has instead been 0.1% inspiration, 9.9% perspiration, and 90% communication. Had I been unable to communicate effectively, others would have extreme difficulty using RCU, as in even more difficulty than they do now.
There is a lot of speaking experience distilled into this set of posts.
Quotes of the week — squirrel security
Squirrels are funny rodents. If you model their behavior you will declare that they are herbivores. In California (where many strange and wonderful things happen) squirrels have begun to eat voles, a very carnivorous behavior. If you believe in modeling as a way to identify correct behavior, you have to say that these furry creatures that eat voles are not squirrels.— Casey Schaufler
I have a pet squirrel named Rocky that I have owned since it was a pup and its mother was killed crossing Douglas County Road 7 in front of our house. He lives in a small kennel in our house. Every day I take Rocky outside and open the door to his kennel. Rocky runs around the yard twice and then up an oak tree and loads his cheeks with acorns. He comes back to his kennel, eats his acorns and falls asleep until the next day.
One night Jay Evil sneaks into the house, abducts Rocky and replaces him with his evil squirrel Rabid, who looks exactly like Rocky but fell out of a tree on his head when he missed a jump from one branch to another and hasn't been right since.
As usual, the next day I take what I think is Rocky out into the front yard and open the kennel door. The faux Rocky runs out into the yard, chases down, attacks, kills and begins to eat our German Shepherd Max.
Conclusion, this squirrel's behavior is suspicious and should be remediated.
TSEM, as a high granularity modeling architecture, would interrupt the process when the squirrel began to chase Max.— Greg Wettstein
Distributions
Linux from Scratch version 12.3 released
Version 12.3 of Linux From Scratch (LFS) has been released, along with Beyond Linux From Scratch (BLFS) 12.3. LFS provides step-by-step instructions on building a customized Linux system entirely from source, and BLFS helps to extend an LFS installation into a more usable system. Notable changes in this release include toolchain updates to GNU Binutils 2.44, GNU C Library (glibc) 2.41, and Linux 6.13.2. The Changelog has a full list of changes since the previous stable release.
Distributions quote of the week
First, I know that pretty much everyone is (understandably) freaking out about stuff that is getting worse, but I just wanted to share some good news in the form of an old-fashioned open-source success story. I'm a fairly boring person and developed most of my software habits in the late 1990s and early 2000s, so it's pretty rare that I actually hit a bug.
But so far this blog has hit two: one browser compatibility issue and this one. The script for rebuilding when a file changes depends on the inotifywait utility, and it turned out that until recently it breaks when you ask it to watch more than 1024 files.
- I filed a bug
- A helpful developer, Jan Kratochvil, wrote a fix and put in a pull request.
- A bot made test packages and commented with instructions for me on how to test the fix.
- I commented that the new version works for me
- The fix just went into Fedora. Pretty damn slick.
This is a great improvement over how this kind of thing used to work. I hardly had to do anything. These kids today don't know how good they have it.
This thread about line wrapping also shows that there are many with two or more decades of experience in Debian, who have over the years formed their own highly optimized workflows and email client and text editor settings which diverge from what "mainstream" today considers easy or optimal. I am hugely grateful for people who have contributed to Debian for decades and I hope to see them continue to contribute for decades to come. At the same time I wonder how we can narrow the evident cultural gap between the Mutt user generation and newer web email generation users, which also manifests in other areas of workflow preferences as we have seen in discussions about email vs web interface for bug reports.
Development
FerretDB 2.0 released
Version 2.0.0 of FerretDB has been released. FerretDB, which is built on top of PostgreSQL, is an open-source alternative to MongoDB (which switched to a non-open license in 2018). This release utilizes the DocumentDB PostgreSQL extension for better performance, and adds vector search and replication.
Terms of use and privacy changes for Firefox
There is a fair amount of unhappiness on the Internet about the announcement from Mozilla about a new "terms of use" agreement and an updated privacy notice for the Firefox browser.
Firefox will always continue to add new features, improve existing ones, and test new ideas. We remain dedicated to making Firefox open source, but we believe that doing so along with an official Terms of Use will give you more transparency over your rights and permissions as you use Firefox. And actually asking you to acknowledge it is an important step, so we're making it a part of the standard product experience starting in early March for new users and later this year for existing ones.
Specifically, the apparent removal of a promise to not sell users' personal data has drawn attention.
(See also: this analysis by Michael Taggart. "So, is this Mozilla 'going evil?' Nah, prolly not. But it is at best clumsy, and a poor showing if they want me to believe they care about Firefox, rather than the data it can provide".)
Mozilla reverses course on its terms of use
Mozilla has issued an update to its terms of use (TOU) that were announced on February 26. It has removed a reference in the TOU to Mozilla's Acceptable Use Policy "because it seems to be causing more confusion than clarity", and has revised the TOU "to more clearly reflect the limited scope of how Mozilla interacts with user data". The new language says:
You give Mozilla the rights necessary to operate Firefox. This includes processing your data as we describe in the Firefox Privacy Notice. It also includes a nonexclusive, royalty-free, worldwide license for the purpose of doing as you request with the content you input in Firefox. This does not give Mozilla any ownership in that content.
Mozilla has also updated its Privacy FAQ to provide more detail about its reasons for the changes.
Firefox 136.0 released
Version 136.0 of the Firefox browser has been released. Changes include a new vertical tab layout, an automatic attempt to upgrade HTTP connections to HTTPS, support for AMD GPUs on Linux, an Arm64 port for Linux, and more.
Fish shell 4.0 released
Version 4.0 of the Fish shell has been released. Improvements include a better key-binding mechanism, the ability to tie abbreviations to a specific command, selective ignoring of commands in the history, some scripting improvements, and more. See the release notes for details.
Incus 6.10 released
Version 6.10 of the Incus container-management system has been released. New features include better Let's Encrypt support, API-wide filtering, IOMMU support in virtual machines, and more. See this announcement for details.
Thunderbird Desktop 136.0 released
Version 136.0 of the Thunderbird Desktop mail client has been released. The release includes a quick toggle for adapting messages to dark mode, and a new "Appearance" setting to control message threading and sorting order globally, as well as a number of bug fixes. See the security advisory for a full list of security vulnerabilities addressed in Thunderbird 136.0.
Xen 4.20 released
The Xen Project has announced the release of Xen 4.20. This release adds support for AMD Zen 5 CPUs, improved compliance with the MISRA C standard, work on PCI-passthrough on Arm, and more. Xen 4.20 also removes support for Xeon Phi CPUs, which were discontinued in 2018. See the feature list and release notes for more information.
Development quote of the week
SunOS 4.0 or 4.1 was when the Sun geniuses unbundled the C compiler and made it a $$$ add on. That move single-handedly made GCC the reference compiler moving forward.
Page editor: Daroc Alden
Announcements
Newsletters
Distributions and system administration
Development
Meeting minutes
Calls for Presentations
CFP Deadlines: March 6, 2025 to May 5, 2025
The following listing of CFP deadlines is taken from the LWN.net CFP Calendar.
Deadline | Event Dates | Event | Location |
---|---|---|---|
March 10 | June 26-27 | Linux Security Summit North America | Denver, CO, US |
March 16 | July 24-29 | GUADEC 2025 | Brescia, Italy |
March 16 | May 24-25 | Journées du Logiciel Libre | Lyon, France |
March 28 | October 12-14 | All Things Open | Raleigh, NC, US |
March 30 | July 1-3 | Pass the SALT Conference | Lille, France |
March 31 | June 26-28 | Linux Audio Conference | Lyon, France |
March 31 | August 9-10 | COSCUP 2025 | Taipei City, Taiwan |
April 13 | April 30-May 5 | MiniDebConf Hamburg | Hamburg, Germany |
April 14 | August 25-27 | Open Source Summit Europe | Amsterdam, Netherlands |
April 27 | June 13-15 | SouthEast LinuxFest | Charlotte, NC, US |
April 28 | July 31-August 3 | FOSSY | Portland, OR, US |
April 30 | June 26-28 | openSUSE Conference | Nuremberg, Germany |
April 30 | September 23 | Open Tech Day 25: Grafana Edition | Nuremberg, Germany |
If the CFP deadline for your event does not appear here, please tell us about it.
Upcoming Events
Events: March 6, 2025 to May 5, 2025
The following event listing is taken from the LWN.net Calendar.
Date(s) | Event | Location |
---|---|---|
March 6-March 9 | SCALE 22x | Pasadena, CA, US |
March 10-March 11 | FOSS Backstage | Berlin, Germany |
March 10-March 14 | Netdev 0x19 | Zagreb, Croatia |
March 13-March 15 | FOSSASIA Summit | Bangkok, Thailand |
March 18 | Nordic PGDay 2025 | Copenhagen, Denmark |
March 18-March 20 | Linux Foundation Member Summit | Napa, CA, US |
March 20 | pgDay Paris | Paris, France |
March 22-March 23 | Chemnitz Linux Days 2025 | Chemnitz, Germany |
March 24-March 28 | TechWeekStorage 25 | Geneva, Switzerland |
March 24-March 26 | Linux Storage, Filesystem, Memory-Management and BPF Summit | Montreal, Canada |
March 29 | Open Source Operating System Annual Technical Conference | Beijing, China |
April 1-April 2 | FediForum Unconference | online |
April 7-April 8 | sambaXP 2025 | Göttingen, Germany |
April 8-April 10 | SNIA SMB3 Interoperability Lab EMEA | Göttingen, Germany |
April 14-April 15 | foss-north 2025 | Gothenburg, Sweden |
April 26 | Central Pennsylvania Open Source Conference | Lancaster, PA, US |
April 26 | 21st Linux Infotag Augsburg | Augsburg, Germany |
April 29-April 30 | stackconf 2025 | Munich, Germany |
April 30-May 5 | MiniDebConf Hamburg | Hamburg, Germany |
If your event does not appear here, please tell us about it.
Security updates
Alert summary February 27, 2025 to March 5, 2025
Dist. | ID | Release | Package | Date |
---|---|---|---|---|
AlmaLinux | ALSA-2025:1659 | 9 | kernel | 2025-03-04 |
Debian | DLA-4069-1 | LTS | emacs | 2025-02-27 |
Debian | DSA-5871-1 | stable | emacs | 2025-02-27 |
Debian | DLA-4073-1 | LTS | ffmpeg | 2025-02-28 |
Debian | DLA-4070-1 | LTS | freerdp2 | 2025-02-27 |
Debian | DLA-4071-1 | LTS | gst-plugins-good1.0 | 2025-02-27 |
Debian | DLA-4075-1 | LTS | kernel | 2025-03-01 |
Debian | DSA-5873-1 | stable | libreoffice | 2025-03-04 |
Debian | DLA-4076-1 | LTS | linux-6.1 | 2025-03-01 |
Debian | DLA-4074-1 | LTS | mariadb-10.5 | 2025-03-01 |
Debian | DSA-5870-1 | stable | openh264 | 2025-02-26 |
Debian | DLA-4077-1 | LTS | proftpd-dfsg | 2025-03-02 |
Debian | DLA-4072-1 | LTS | xorg-server | 2025-02-28 |
Debian | DSA-5872-1 | stable | xorg-server | 2025-02-28 |
Fedora | FEDORA-2025-eeba8bf9d8 | F40 | chromium | 2025-03-01 |
Fedora | FEDORA-2025-25ab311510 | F41 | chromium | 2025-03-01 |
Fedora | FEDORA-2025-6f77f6c77a | F40 | cutter-re | 2025-03-01 |
Fedora | FEDORA-2025-1290a47fff | F41 | cutter-re | 2025-03-01 |
Fedora | FEDORA-2025-e694138ac5 | F40 | exim | 2025-03-05 |
Fedora | FEDORA-2025-4dfc188932 | F41 | exim | 2025-03-05 |
Fedora | FEDORA-2025-fc7c0ca5c5 | F41 | fscrypt | 2025-03-05 |
Fedora | FEDORA-2025-a1d884e467 | F41 | iniparser | 2025-03-01 |
Fedora | FEDORA-2025-15a818859e | F40 | java-17-openjdk | 2025-02-28 |
Fedora | FEDORA-2025-e97e5c6ce3 | F41 | nodejs22 | 2025-03-01 |
Fedora | FEDORA-2025-e60e30944c | F40 | python3.6 | 2025-02-28 |
Fedora | FEDORA-2025-59cbb4663d | F41 | python3.6 | 2025-02-28 |
Fedora | FEDORA-2025-6f77f6c77a | F40 | rizin | 2025-03-01 |
Fedora | FEDORA-2025-1290a47fff | F41 | rizin | 2025-03-01 |
Fedora | FEDORA-2025-3dfc505946 | F41 | rpm-ostree | 2025-02-27 |
Fedora | FEDORA-2025-57805565ad | F40 | webkitgtk | 2025-03-01 |
Fedora | FEDORA-2025-04475838f9 | F40 | wireshark | 2025-03-01 |
Fedora | FEDORA-2025-08e73d463e | F41 | wireshark | 2025-03-01 |
Fedora | FEDORA-2025-20f63c4273 | F41 | xen | 2025-03-01 |
Fedora | FEDORA-2025-b40b12a89e | F41 | xorg-x11-server | 2025-03-01 |
Fedora | FEDORA-2025-2210d27149 | F41 | xorg-x11-server-Xwayland | 2025-02-28 |
Mageia | MGASA-2025-0084 | 9 | binutils | 2025-03-02 |
Mageia | MGASA-2025-0076 | 9 | dcmtk | 2025-02-25 |
Mageia | MGASA-2025-0085 | 9 | ffmpeg | 2025-03-02 |
Mageia | MGASA-2025-0082 | 9 | libcap | 2025-02-26 |
Mageia | MGASA-2025-0080 | 9 | openssh | 2025-02-26 |
Mageia | MGASA-2025-0081 | 9 | proftpd | 2025-02-26 |
Mageia | MGASA-2025-0083 | 9 | radare2 | 2025-02-28 |
Mageia | MGASA-2025-0086 | 9 | x11-server | 2025-03-03 |
Oracle | ELSA-2025-1917 | OL8 | emacs | 2025-03-03 |
Oracle | ELSA-2025-1915 | OL9 | emacs | 2025-03-03 |
Oracle | ELSA-2025-1659 | OL9 | kernel | 2025-03-03 |
Red Hat | RHSA-2025:2130-01 | EL7 | emacs | 2025-03-04 |
Red Hat | RHSA-2025:1917-01 | EL8 | emacs | 2025-02-27 |
Red Hat | RHSA-2025:2157-01 | EL8.2 | emacs | 2025-03-04 |
Red Hat | RHSA-2025:1963-01 | EL8.4 | emacs | 2025-03-03 |
Red Hat | RHSA-2025:1961-01 | EL8.6 | emacs | 2025-03-03 |
Red Hat | RHSA-2025:1962-01 | EL8.8 | emacs | 2025-03-03 |
Red Hat | RHSA-2025:1915-01 | EL9 | emacs | 2025-02-27 |
Red Hat | RHSA-2025:2022-01 | EL9.0 | emacs | 2025-03-03 |
Red Hat | RHSA-2025:1964-01 | EL9.2 | emacs | 2025-03-03 |
Red Hat | RHSA-2025:2195-01 | EL9.4 | emacs | 2025-03-04 |
Red Hat | RHSA-2025:1658-01 | EL9.4 | kernel | 2025-02-27 |
Red Hat | RHSA-2025:2270-01 | EL9.4 | kernel | 2025-03-05 |
Red Hat | RHSA-2025:1920-01 | EL9.2 | pki-servlet-engine | 2025-02-27 |
Red Hat | RHSA-2025:2034-01 | EL8 | webkit2gtk3 | 2025-03-03 |
Red Hat | RHSA-2025:1960-01 | EL8.2 | webkit2gtk3 | 2025-03-03 |
Red Hat | RHSA-2025:1959-01 | EL8.4 | webkit2gtk3 | 2025-03-03 |
Red Hat | RHSA-2025:2121-01 | EL8.6 | webkit2gtk3 | 2025-03-03 |
Red Hat | RHSA-2025:1958-01 | EL8.8 | webkit2gtk3 | 2025-03-03 |
Red Hat | RHSA-2025:2035-01 | EL9 | webkit2gtk3 | 2025-03-03 |
Red Hat | RHSA-2025:1957-01 | EL9.0 | webkit2gtk3 | 2025-03-03 |
Red Hat | RHSA-2025:2126-01 | EL9.2 | webkit2gtk3 | 2025-03-03 |
Red Hat | RHSA-2025:2125-01 | EL9.4 | webkit2gtk3 | 2025-03-03 |
Slackware | SSA:2025-057-01 | | emacs | 2025-02-26 |
Slackware | SSA:2025-063-01 | | mozilla | 2025-03-04 |
SUSE | SUSE-SU-2025:0751-1 | MP4.3 SLE15 oS15.4 oS15.6 | azure-cli | 2025-02-28 |
SUSE | openSUSE-SU-2025:14844-1 | TW | bsdtar | 2025-02-28 |
SUSE | openSUSE-SU-2025:0077-1 | osB15 | chromium | 2025-02-27 |
SUSE | SUSE-SU-2025:0776-1 | SLE15 SLE-m5.1 SLE-m5.2 SLE-m5.3 SLE-m5.4 SLE-m5.5 oS15.6 | docker | 2025-03-04 |
SUSE | openSUSE-SU-2025:14833-1 | TW | ffmpeg-4 | 2025-02-26 |
SUSE | openSUSE-SU-2025:14834-1 | TW | ffmpeg-7 | 2025-02-26 |
SUSE | openSUSE-SU-2025:14850-1 | TW | ffmpeg-7 | 2025-03-03 |
SUSE | SUSE-SU-2025:0783-1 | SLE12 | firefox | 2025-03-05 |
SUSE | SUSE-SU-2025:0727-1 | SLE-m5.1 SLE-m5.2 | gnutls | 2025-02-26 |
SUSE | SUSE-SU-2025:0728-1 | SLE-m5.3 | gnutls | 2025-02-26 |
SUSE | SUSE-SU-2025:0765-1 | SLE-m5.4 SLE-m5.5 oS15.4 | gnutls | 2025-03-03 |
SUSE | SUSE-SU-2025:0766-1 | SLE12 | gnutls | 2025-03-03 |
SUSE | SUSE-SU-2025:0767-1 | SLE12 | gnutls | 2025-03-03 |
SUSE | SUSE-SU-2025:0764-1 | SLE15 oS15.6 | gnutls | 2025-03-03 |
SUSE | openSUSE-SU-2025:14835-1 | TW | gnutls | 2025-02-26 |
SUSE | SUSE-SU-2025:0770-1 | SLE15 oS15.6 | govulncheck-vulndb | 2025-03-03 |
SUSE | openSUSE-SU-2025:14843-1 | TW | govulncheck-vulndb | 2025-02-28 |
SUSE | SUSE-SU-2025:0771-1 | MP4.2 SLE15 SLE-m5.1 SLE-m5.2 SES7.1 oS15.3 | kernel | 2025-03-03 |
SUSE | SUSE-SU-2025:0757-1 | MP4.3 SLE15 SLE-m5.2 SLE-m5.3 SLE-m5.4 SLE-m5.5 SES7.1 | libX11 | 2025-02-28 |
SUSE | SUSE-SU-2025:0740-1 | SLE12 | libX11 | 2025-02-28 |
SUSE | SUSE-SU-2025:0739-1 | SLE15 oS15.6 | libX11 | 2025-02-28 |
SUSE | openSUSE-SU-2025:14836-1 | TW | libiniparser-devel | 2025-02-26 |
SUSE | SUSE-SU-2025:0758-1 | MP4.3 SLE15 SES7.1 oS15.6 | libxkbfile | 2025-02-28 |
SUSE | SUSE-SU-2025:0748-1 | MP4.3 SLE15 SLE-m5.3 SLE-m5.4 oS15.4 | libxml2 | 2025-02-28 |
SUSE | SUSE-SU-2025:0747-1 | SLE12 | libxml2 | 2025-02-28 |
SUSE | SUSE-SU-2025:0746-1 | SLE15 SLE-m5.5 oS15.5 oS15.6 | libxml2 | 2025-02-28 |
SUSE | openSUSE-SU-2025:14848-1 | TW | nodejs-electron | 2025-03-02 |
SUSE | SUSE-SU-2025:0744-1 | SLE12 | openssh8.4 | 2025-02-28 |
SUSE | SUSE-SU-2025:0742-1 | SLE15 SLE-m5.5 oS15.5 oS15.6 | openvswitch3 | 2025-02-28 |
SUSE | SUSE-SU-2025:0752-1 | SLE15 SLE-m5.1 SLE-m5.2 SES7.1 oS15.3 | ovmf | 2025-02-28 |
SUSE | openSUSE-SU-2025:0081-1 | osB15 | phpMyAdmin | 2025-03-03 |
SUSE | SUSE-SU-2025:0775-1 | SLE15 SLE-m5.1 SLE-m5.2 SES7.1 oS15.3 | podman | 2025-03-04 |
SUSE | SUSE-SU-2025:0737-1 | oS15.6 | postgresql13 | 2025-02-28 |
SUSE | SUSE-SU-2025:0741-1 | MP4.3 SLE15 SLE-m5.1 SLE-m5.2 SLE-m5.3 SLE-m5.4 SLE-m5.5 SES7.1 oS15.6 | procps | 2025-02-28 |
SUSE | SUSE-SU-2025:0725-1 | SLE12 | procps | 2025-02-26 |
SUSE | SUSE-SU-2025:0756-1 | SLE15 oS15.6 | python | 2025-02-28 |
SUSE | SUSE-SU-2025:0750-1 | MP4.2 MP4.3 SLE15 oS15.6 | python-azure-identity | 2025-02-28 |
SUSE | openSUSE-SU-2025:14845-1 | TW | python311-jupyter-server | 2025-02-28 |
SUSE | SUSE-SU-2025:0736-1 | MP4.3 SLE15 SES7.1 oS15.6 | ruby2.5 | 2025-02-27 |
SUSE | SUSE-SU-2025:0772-1 | MP4.3 SLE15 SLE-m5.5 SES7.1 oS15.3 oS15.6 | skopeo | 2025-03-03 |
SUSE | SUSE-SU-2025:0726-1 | SLE12 | socat | 2025-02-26 |
SUSE | SUSE-SU-2025:0753-1 | SLE15 oS15.6 | tiff | 2025-02-28 |
SUSE | openSUSE-SU-2025:0080-1 | osB15 | trivy | 2025-03-03 |
SUSE | SUSE-SU-2025:0763-1 | SLE-m5.3 SLE-m5.4 SLE-m5.5 oS15.4 | u-boot | 2025-03-03 |
SUSE | SUSE-SU-2025:0755-1 | oS15.6 | u-boot | 2025-02-28 |
SUSE | SUSE-SU-2025:0724-1 | SLE-m5.1 SLE-m5.2 SLE-m5.3 SLE-m5.4 | vim | 2025-02-26 |
SUSE | SUSE-SU-2025:0722-1 | SLE12 | vim | 2025-02-26 |
SUSE | SUSE-SU-2025:0723-1 | SLE15 SLE-m5.5 oS15.5 oS15.6 | vim | 2025-02-26 |
SUSE | SUSE-SU-2025:0735-1 | SLE15 SES7.1 | webkit2gtk3 | 2025-02-27 |
SUSE | SUSE-SU-2025:0754-1 | SLE15 oS15.6 | wireshark | 2025-02-28 |
SUSE | SUSE-SU-2025:0732-1 | MP4.3 SLE15 oS15.4 | xorg-x11-server | 2025-02-26 |
SUSE | SUSE-SU-2025:0734-1 | SLE12 | xorg-x11-server | 2025-02-26 |
SUSE | SUSE-SU-2025:0733-1 | SLE15 SES7.1 | xorg-x11-server | 2025-02-26 |
SUSE | SUSE-SU-2025:0731-1 | SLE15 oS15.5 | xorg-x11-server | 2025-02-26 |
SUSE | SUSE-SU-2025:0730-1 | SLE15 oS15.6 | xorg-x11-server | 2025-02-26 |
SUSE | openSUSE-SU-2025:14841-1 | TW | xorg-x11-server | 2025-02-27 |
SUSE | SUSE-SU-2025:0729-1 | SLE15 oS15.6 | xwayland | 2025-02-26 |
Ubuntu | USN-7309-1 | 16.04 18.04 20.04 22.04 24.04 24.10 | Ruby SAML | 2025-02-28 |
Ubuntu | USN-7306-1 | 20.04 22.04 24.04 24.10 | binutils | 2025-02-26 |
Ubuntu | USN-7319-1 | 20.04 22.04 24.04 24.10 | cmark-gfm | 2025-03-04 |
Ubuntu | USN-7313-1 | 20.04 22.04 24.04 24.10 | erlang | 2025-03-03 |
Ubuntu | USN-7207-2 | 20.04 | git | 2025-02-27 |
Ubuntu | USN-7314-1 | 20.04 22.04 24.04 24.10 | krb5 | 2025-03-03 |
Ubuntu | USN-7267-2 | 24.04 24.10 | libsndfile | 2025-02-25 |
Ubuntu | USN-7307-1 | 18.04 20.04 22.04 24.04 24.10 | libxmltok | 2025-02-27 |
Ubuntu | USN-7327-1 | 20.04 22.04 | linux, linux-lowlatency, linux-lowlatency-hwe-5.15 | 2025-03-05 |
Ubuntu | USN-7324-1 | 22.04 24.04 | linux, linux-lowlatency, linux-lowlatency-hwe-6.8 | 2025-03-05 |
Ubuntu | USN-7322-1 | 24.04 24.10 | linux, linux-oem-6.11 | 2025-03-05 |
Ubuntu | USN-7325-1 | 22.04 24.04 | linux-aws, linux-aws-6.8, linux-oracle, linux-oracle-6.8, linux-raspi | 2025-03-05 |
Ubuntu | USN-7311-1 | 22.04 24.04 | linux-aws, linux-aws-6.8 | 2025-02-28 |
Ubuntu | USN-7323-1 | 24.04 24.10 | linux-aws, linux-gcp, linux-hwe-6.11, linux-oracle, linux-raspi, linux-realtime | 2025-03-05 |
Ubuntu | USN-7328-1 | 20.04 22.04 | linux-aws, linux-gkeop, linux-ibm, linux-intel-iotg, linux-intel-iotg-5.15, linux-oracle, linux-oracle-5.15, linux-raspi | 2025-03-05 |
Ubuntu | USN-7294-2 | 18.04 20.04 | linux-aws, linux-oracle, linux-oracle-5.4 | 2025-02-27 |
Ubuntu | USN-7308-1 | 22.04 | linux-aws | 2025-02-27 |
Ubuntu | USN-7326-1 | 22.04 24.04 | linux-gcp, linux-gcp-6.8, linux-gke, linux-gkeop | 2025-03-05 |
Ubuntu | USN-7303-3 | 22.04 24.04 | linux-gcp-6.8, linux-raspi | 2025-03-03 |
Ubuntu | USN-7294-3 | 20.04 | linux-ibm | 2025-02-28 |
Ubuntu | USN-7289-4 | 20.04 22.04 | linux-intel-iotg, linux-intel-iotg-5.15 | 2025-02-27 |
Ubuntu | USN-7294-4 | 20.04 | linux-kvm | 2025-03-03 |
Ubuntu | USN-7310-1 | 24.04 | linux-oem-6.11 | 2025-02-28 |
Ubuntu | USN-7283-1 | 14.04 16.04 18.04 | lucene-solr | 2025-02-20 |
Ubuntu | USN-7312-1 | 24.04 24.10 | opennds | 2025-03-03 |
Ubuntu | USN-7049-3 | 14.04 | php5 | 2025-02-27 |
Ubuntu | USN-7315-1 | 20.04 22.04 24.04 24.10 | postgresql-12, postgresql-14, postgresql-16 | 2025-03-03 |
Ubuntu | USN-7316-1 | 20.04 22.04 24.04 24.10 | raptor2 | 2025-03-03 |
Ubuntu | USN-7318-1 | 18.04 20.04 24.10 | spip | 2025-03-04 |
Ubuntu | USN-7282-1 | 16.04 | tomcat7 | 2025-02-25 |
Ubuntu | USN-7317-1 | 20.04 22.04 24.04 24.10 | wpa | 2025-03-03 |
Kernel patches of interest
Kernel releases
Architecture-specific
Build system
Core kernel
Development tools
Device drivers
Device-driver infrastructure
Filesystems and block layer
Memory management
Networking
Security-related
Virtualization and containers
Miscellaneous
Page editor: Joe Brockmeier