
Leading items

Welcome to the LWN.net Weekly Edition for January 16, 2025

This edition contains the following feature content:

  • Chimera Linux works toward a simplified desktop: a look at the first beta release of a from-scratch distribution built on BSD tools.
  • The state of Vim: Christian Brabandt's VimConf 2024 keynote on the project after Bram Moolenaar.
  • Page-table hardening with memory protection keys: a patch set using protection keys to guard the kernel's page tables.
  • Modifying another process's system calls: a proposed new ptrace() operation from Dmitry Levin.

This week's edition also includes these inner pages:

  • Brief items: Brief news items from throughout the community.
  • Announcements: Newsletters, conferences, security updates, patches, and more.

Please enjoy this week's edition, and, as always, thank you for supporting LWN.net.

Comments (none posted)

Chimera Linux works toward a simplified desktop

By Daroc Alden
January 13, 2025

Chimera Linux is a new distribution designed to be "simple, transparent, and easy to pick up". The distribution is built from scratch, and recently announced its first beta release. While the documentation and installation process are both a bit rough, the project already provides a usable desktop with plenty of useful software — one built primarily on tools adopted from BSD.

Chimera Linux was started by "q66" (who previously worked on Void Linux) in 2021 with the goal of creating a modern distribution that could "eliminate legacy cruft where possible" to provide a simple, practical desktop. In service of that goal, the project is based on BSD tools. Chimera's frequently asked questions page explains that, unlike other projects that use those tools for licensing reasons, the project picked BSD tools for their smaller code size and reduced complexity. Bootstrapping a modern Linux distribution is quite complex, with many packages that depend on other packages; using BSD tools allowed the project to avoid a lot of that complexity. For example, Chimera uses musl as its C library, which avoids many of the dependencies that the GNU C library would pull in.

Some people may also say that the BSD licensing is its own benefit. We do not say that, because as far as core userland goes, the licensing is more or less meaningless for us and we could easily live with the GPL. Therefore, this is largely a technical decision for us. While the benefits may seem small to some, they are there, and they matter to the project.

That same drive for simplicity also motivated the project to eschew systemd. The project's documentation calls systemd's implementation "rather messy", but also acknowledges that "it has been a net functional improvement for Linux". Chimera recognizes that systemd provides a lot of features and tools, but decided that the cost of maintaining compatibility with how systemd expects a Linux system to behave is just too high.

At the same time, the project specifically disavows any association with "the so-called 'systemd-free community', which tends to spread a lot of misconceptions and frankly deranged opinions that [end] up hurting any sort of positive effort." Chimera Linux is focused on building a simplified, usable Linux system — and not on getting into fights about what software the contributors do or do not use to get there. This pragmatic approach has been attractive to contributors, and now q66 has been joined by Isaac Freund (a contributor to Zig and the author of waylock) as a co-maintainer, and more than a hundred other contributors.

But the choice to eschew systemd itself causes a number of problems for the distribution. For example, systemd's logind daemon handles tracking user sessions; without systemd, Chimera needs another solution for that. The project's current approach is a standalone fork of logind called elogind, but the eventual plan is to develop an API that works with both logind and any other session-tracking software. The project's documentation cites this as another benefit of choosing less common software: it presents an opportunity to try to improve the portability of existing programs.

For service management, Chimera uses the Dinit init system, which also has a focus on portability. Dinit offers some of the same core features as systemd — a daemon that supervises system services based on a configuration, including user services — but delegates everything outside that core scope to other programs.

Chimera supports several architectures, including x86_64, ppc64, ppc64le, aarch64, and riscv64.

Installation

Chimera Linux does not yet have a graphical installer. The installation process is done mostly by hand, from a live image. People who have previously installed Arch or Gentoo will find the process familiar. The project publishes several live images to use, including a minimal, console-only version, a GNOME version, and a KDE Plasma version. Users who haven't previously had a reason to install a Linux system without the aid of an installer will probably find using the GNOME or KDE version easiest, so that they can refer to the online documentation as they go.

Regardless of which live image is used, the process is the same: configure disk partitions (with cfdisk), set up file systems, mount them in the desired configuration, and then set up the system inside a chroot. The installation documentation contains a section on partitioning, including suggestions for different architectures. Chimera supports many different configurations, leaving the user free to carve up their disk as they please — but does not support having /usr on a separate partition. Chimera is "fully /usr merged", and stores some programs that are needed to mount disks and fully boot the system there.

Once disks and filesystems are set up, the chimera-bootstrap tool sets up the minimum required tooling for the new installation; by default, this consists of packages fetched from Chimera's package repository, but it can also be instructed to copy the packages from the live image for offline installations. Chimera uses Alpine Package Keeper (APK), Alpine Linux's package manager, but its packages are not derived from Alpine's. APK can be used to install additional software — including a kernel and bootloader, at a minimum, since chimera-bootstrap doesn't include either of those, perhaps so that users can have a choice of which kernel package to install. The sections in the installation documentation make a de facto checklist for what needs to be set up before rebooting into the new installation, but the documentation is really more a list of ways that a user could choose to configure things than a prescriptive set of steps to follow.

[A screenshot of the Chimera Linux desktop, open to a disk partitioning tool.]

Software

Most users will want to install a desktop environment, which also doesn't come by default (although the package for the corresponding desktop environment is available on the live image, for offline installations). Chimera's recommended desktop is GNOME (using Wayland, but X11 is also supported), but several others are packaged for the distribution. When installing GNOME, there isn't much software installed by default. It comes with the GNOME Web browser, along with the other basic tools from Apps for GNOME.

More common software, such as Firefox and LibreOffice, is available from the package repository, however, which has nearly 10,000 packages available. Since Chimera doesn't ship the GNU C library, software that relies on it will not work. Most software is able to use musl, but pre-compiled binaries, such as proprietary software, tend to break. If the user needs applications that require glibc anyway, the documentation recommends installing Flatpak and using that to run such software in a more typical container.

The choice of BSD tools and lack of systemd don't really impact the day-to-day use of the system; other than substituting doas for sudo, the other command-line software that I use in a normal day worked just fine. For development, however, some software is conspicuously missing; Chimera doesn't package GCC for all its architectures, for example, although it does package GCC as a cross-compiler for aarch64, arm, and riscv64. The default C compiler is Clang, and tools such as make and tar use the BSD versions by default [Edit: GNU make is now the default], even when the GNU versions are packaged.

Updates are fairly simple; APK is not a BSD-style system that builds installed software from scratch, but rather a normal Linux package manager. The distribution hosts pre-compiled binary packages, but the packages can also be compiled by hand from the definitions. There are a handful of contributors making sure important software stays up to date, but in my research I wasn't able to find a documented process for security updates — something the distribution will almost certainly need as it grows.

Overall, Chimera Linux seems to have made a good step toward its goal of creating a simplified Linux desktop. The distribution is definitely usable, and offers a good amount of flexibility for experienced users, while still being relatively simple. Still, there are some rough spots. The manual and somewhat idiosyncratic installation process will put some people off — and the people who wouldn't be put off are probably expert users who have their own existing setups.

The future

In 2025, the project plans to focus on smoothing out a handful of sharp edges, mostly related to doing service management without systemd. In particular, it hopes to make progress toward the goal of removing elogind and replacing it with a custom solution. Planning, discussion about development, and user support all happen on IRC in OFTC's #chimera-linux channel, or in the bridged Matrix channel. There is also a somewhat active Reddit community and an official Mastodon account for the project, which shares progress updates.

Chimera's alpha phase took a year and a half, from June 2023 to December 2024; if the beta takes the same amount of time, it could see a 1.0 release in 2026. On the other hand, the project has grown quickly, so it may reach a stable release sooner rather than later. Where exactly Chimera will be in another 18 months, and whether it will prove useful to more than its current small yet dedicated community, remains to be seen.

Comments (20 posted)

The state of Vim

January 10, 2025

This article was contributed by Murukesh Mohanan

The death of Bram Moolenaar, Vim founder and benevolent dictator for life (BDFL), in 2023 sent a shock through the community, and raised concern about the future of the project. At VimConf 2024 in November, current Vim maintainer Christian Brabandt delivered a keynote on "the new Vim project" that detailed how the community has reorganized itself to continue maintaining Vim and what the future looks like.

Vim after Bram

Brabandt began with his history with Vim: he has been involved in Vim since 2006, and said his first commit to the project was made in the 7.0/7.1 days (sometime around 2006). He started by contributing small patches and fixes, then moved on to larger features, such as the gn and gN commands (which combine searching and visual-mode selection) and improved cryptographic support using libsodium; he also maintained the Vim AppImage, among other things. He said he became less active in the project around 2022 due to personal and work-related reasons.

That changed in August 2023, when Moolenaar passed away. Moolenaar had been the maintainer of Vim for more than 30 years; while he had added Brabandt and Ken Takata as co-maintainers of Vim in the years before, most development still flowed through him. With his death, a considerable amount of knowledge was lost—but Brabandt and others stepped up to keep the project alive.

Moolenaar was the only owner of the Vim GitHub organization at the time, so only his account could change certain settings. Initially, contributors tried to use the GitHub deceased user policy to add owners to the organization. That was quite an involved process, and it soon became apparent that the end result would be the deactivation of Moolenaar's account. Having Moolenaar's account be accessible by his family was important, so they abandoned that approach, and instead the family granted access to it as needed for organizational changes.

Charles Campbell (known as "Dr Chip"), a Vim contributor for more than 25 years, also decided to retire soon after Moolenaar's death. His departure was followed by an expansion of the team of maintainers, as Yegappan Lakshmanan joined it, with Dominique Pellé, Doug Kearns, and GitHub users "glepnir", "mattn", and "zeertzjq" joining soon after.

More than just the source code

He stressed that maintaining Vim is not just about the source code. There are quite a few other things to be managed, such as the Vim web site, FTP server, security disclosures, Vim communities on other sites such as Reddit and Stack Exchange, and more.

Vim's site needed work. The design, and most of the code, had been unchanged for quite a while—until 2023, it was based on PHP 5. In recent times, there had been a few occasions where the web site was unstable, and so he started looking for a new host in 2024. The move involved an upgrade to PHP 8, for which some of the code had to be rewritten. Brabandt thanked Mark Schöchlin, who stepped up to take care of all this.

He acknowledged that the design has been pretty much unchanged since 2001, doesn't look modern, and can be scary to new users. There has been some work on redesigning it, but the first attempt hasn't been that successful. He prioritizes consistency and does not wish to scare away longtime users.

DNS was also troublesome—the vim.org domain was managed by Stefan Zehl, but Moolenaar also owned a number of other domains such as vim8.org, vim9.org, etc. Thankfully, SSL certificates were already managed using Let's Encrypt, so Brabandt had no problems there. Several email addresses, such as bram@vim.org, bugs@vim.org, etc., were forwarded to Moolenaar's personal email; those have since been updated to point to Brabandt's address instead. The FTP server was hosted by NLUUG, but he decided to retire it and says that he hasn't received any complaints so far.

ICCF Holland

As readers might know, Vim is charityware, and the charity of choice is ICCF Holland, founded by Moolenaar. Brabandt said that the ICCF is very much alive, and plans to reorganize and restructure itself. Quite a few users started donating after Moolenaar's passing, and in 2023, it raised about €90,000. The project plans to continue to work with ICCF and doesn't want to change ICCF's association with Vim. He noted that there is no sponsorship for the maintainers, all of whom are working for free. Traditionally, all money raised has been given to the ICCF and he has no plans to change that. Brabandt said he earns enough from his job that he doesn't need assistance to work on Vim, so he's happy to let all donations go to ICCF.

As an incentive to donate, Moolenaar had allowed people who donated to ICCF to vote on Vim feature requests. Donors to the ICCF could link to their Vim.org account when donating, and then vote on features. This is one aspect that he no longer sees a need for, now that issues and enhancements are discussed on GitHub, and so has decided to shut this down. Linking the accounts and donations was also not easy for Brabandt—he was not sure how Moolenaar did this in the past.

Communication channels

He also talked about the community centered around the Vim mailing lists, which are hosted on Google Groups. In May 2024, he received an automated message from Google informing him that all content from the vim-dev list had been blocked due to spam or malware. This caused a fair bit of trouble, and while access was restored within a day or so, he still does not know what the exact problem was. There has been some consideration of self-hosting the list, but one drawback is that everyone would have to sign up again. The mailing list is no longer that active now, with more of the community conversations happening on Reddit or Stack Exchange.

Security reporting had to be addressed as well. A couple of years ago, people were reporting issues on the Huntr platform. There were quite a few open issues which have since been taken care of. Huntr was acquired by another company in 2023, which refocused it entirely on AI and shut down general open-source vulnerability reporting.

Now, Vim is accepting security reports via email or GitHub, and publishing vulnerabilities via GitHub security advisories. There is a private mailing list for as-yet unpublished security issues, and emails are forwarded to all maintainers. Brabandt has started adding a [security] tag to commit messages for marking security fixes, and such commits are announced on the oss-security list (the most recent being from October) and to maintainers of distribution packages.

Maintenance mode

Brabandt then showed the contribution graph, to demonstrate that development did not stop after Moolenaar passed away. There was a slowdown as Moolenaar's health deteriorated, and then a spike as Brabandt cleaned up the open pull requests (PRs). Version 9.1, dedicated to Moolenaar, was released on January 2, 2024—about four months after his passing.

The 9.1 release included improvements to virtual text (which enables completion suggestions and such to appear in the editing area, while not being part of the actual text), smooth scrolling, and OpenVMS support. After 9.1, he started adding more potentially controversial changes, such as support for the XDG base directory specification. Now Vim does not need to litter your top-level home directory: ~/.vimrc or ~/.vim/vimrc still work, but $XDG_CONFIG_HOME/vim/vimrc will now be used if neither of the above is present. Another such change is Wayland support. It is not complete yet, and he says he is not sure whether remaining problems with clipboard support are Vim bugs or Wayland ones.

As he went through the backlog of PRs, he started developing a policy for merging PRs, prioritizing the need to test things well. Tests are now running with continuous integration (CI). He said that it's also important to have good documentation.

Vim has interfaces to quite a few languages, including Python 2 and 3, Ruby, Lua, Tcl, and MzScheme. But Brabandt isn't sure which of these are really needed these days. For example, Python 2, Tcl, and MzScheme (which does not build with the latest version of the language) might need to be retired to reduce the maintenance burden. Other areas to improve include the GUI (GTK 4 has been around for a while, but Vim does not use it yet), support for advanced terminal features, and better spell checking (which has largely remained unchanged since Vim 7). Support for the tree-sitter parser generator is wished-for, but it is controversial, and he does not see it coming to Vim soon.

He knows there have been some significant changes in Neovim, but he's not sure how many of those can come to Vim. There have been small changes in Vim, but for major changes, you need community support. He does not want to make backward-incompatible changes and is quite hesitant to merge changes that might break things. He said he has to keep in mind the whole picture, especially the expectations of users, when dealing with PRs. Currently, he said that Vim is more-or-less in maintenance mode.

He said he has created an internal repository to keep track of stakeholders and to ensure that if something were to happen to him, other maintainers could pick up where he left off.

Brabandt recommended that those new to the project start by making small contributions and becoming familiar with the codebase. He had some pointers for developers. He said it is important to use a defensive style with C to ensure that new bugs aren't being introduced. One should use Coverity, a static-analysis tool, to scan for defects. Some parts of the Vim codebase are complex, he said, and need to be refactored into more manageable units if possible.

Maintaining Vim is a full-time job, he said, and it is not only about maintaining the code, but also the community—managing expectations and listening to users' needs. He has to understand the community: what does it want Vim to be? An IDE? Bug-for-bug compatibility with old Vim? How can we make Vim9 script, the new Vim scripting language, more widely used? How can we ensure that the Vim community remains healthy? He ended his talk by thanking all the Vim contributors and then took a few questions.

Questions

One audience member asked about the difference between Vim and Neovim's maintenance model. Since most PRs are still merged by Brabandt, would that make him the new BDFL for Vim?

Brabandt emphatically denied being a BDFL. Currently, he merges most changes because the version number has to be incremented with each change, so multiple people merging can introduce conflicts. However, when he was on vacation, he handed over the main maintainership to Lakshmanan. He emphasized that it's a community project, and he listens to the community before making decisions. It just happens that at this time the other maintainers don't want to merge changes themselves and instead defer to Brabandt, which is fine with him.

Another member of the audience wondered about language barriers, since there are many Japanese members of the Vim community as well as many languages in Europe, etc. Brabandt answered that, as an international project, the primary language for working on Vim is English. He also noted that it is easier these days to collaborate across languages thanks to ChatGPT and translation tools, but it still happens that some users do not communicate in English well, and that makes it harder to understand their needs.

The rest of VimConf 2024

VimConf was first held in 2013 by the Japanese Vim user group vim-jp. The group organized it every year until 2020, when VimConf was canceled due to COVID. After a hiatus, it resumed in 2023 with a scaled-down version. The full-fledged edition returned to Akihabara, Tokyo on November 23, 2024.

Even though most of the organizers and attendees are Japanese, VimConf strives to be welcoming to all. Presentation materials are expected to be in English, and live translation is provided in both Japanese and English for keynotes and regular presentations, except for lightning talks. PDFs for the talks are available on VimConf's website, and all of the talks are now on YouTube.

Comments (7 posted)

Page-table hardening with memory protection keys

By Jonathan Corbet
January 9, 2025
Attacks on the kernel can take many forms; one popular exploitation path is to find a way to overwrite some memory with attacker-supplied data. If the right memory can be targeted, a single stray write is all that is needed to take control of the system. Since the system's page tables regulate access to memory, they are an attractive target for this type of attack. This patch set from Kevin Brodsky is an attempt to protect page tables (and, eventually, other data structures) using the "memory protection keys" feature provided by a number of CPU architectures.

Memory protection keys are an additional access-permission mechanism that is layered on top of the permissions implemented in the page tables. Memory can be partitioned into a relatively small number (eight or 16, typically) of domains (or "keys"). A key, in the sense used here, is simply a small integer value that has a set of memory-access permissions associated with it. Each page has an assigned key that can be used to impose additional access restrictions. Memory that is nominally writable cannot be written if its key denies that access. The permissions associated with a key can be changed quickly and affect all pages marked with that key; as a result, large swaths of memory can be quickly made accessible or inaccessible at any time.

Changing the permissions associated with a key is an unprivileged operation. Memory protection keys, thus, cannot protect against attackers who are able to execute arbitrary code. They can, though, be useful to protect against unintended access. Critical data can be write-protected using a key, with that key's permissions being briefly changed only when that data must be written. An attacker attempting to overwrite the same data, perhaps through exploitation of a use-after-free vulnerability, will be blocked, making the system that much harder to compromise. Similarly, memory containing sensitive data (cryptographic keys, for example) can be assigned a key that, most of the time, allows no access at all, reducing the likelihood that this data will be leaked to an attacker.

Linux first gained support for memory protection keys with the 4.6 kernel release in 2016. That support is available for 64-bit Arm and x86 systems, but only for user space. Some attempts over the years notwithstanding, memory protection keys have never been used to protect memory in kernel space, despite the fact that the CPUs support that functionality.

Brodsky's patch set is an attempt to change that situation by using memory protection keys to regulate access to page tables on 64-bit Arm systems. Page tables were chosen for protection because of their value as a target, but also because access to them is already well confined to a set of helper functions, making it relatively easy to add the necessary hooks to change the key protections for a brief period when page tables need to be modified.

A recurring concern with memory protection keys is their relatively small number; it is generally expected that there will be demand for more keys than the hardware can provide, though there has been little evidence of that happening so far. The user-space interface added a set of system calls, including pkey_alloc(), which is used to allocate a new key. On the kernel side, though, there may be no need for a general allocation mechanism; the kernel's code is all present in the repository, so keys can be assigned statically, at least for now.

The patch set does add a bit of structure, though, in the form of a concept called "kpkeys levels". Each level allows access to specific regions of memory. The intent would appear to be that access grows monotonically as the level increases, but there is nothing in the code that implements a hierarchy of levels; each level can be independent of the others. Since this is the first use of this mechanism, there are only two levels implemented: KPKEYS_LVL_DEFAULT, which provides access to kernel-space memory that is not further protected, and KPKEYS_LVL_PGTABLES, which enables write access to page tables.

This abstraction might seem like more than is really needed in this case, where one could simply assign a key for page-table pages and be done with it. Brodsky appears to be looking forward to future applications where more complex combinations of permissions are needed. Separating levels from specific keys also makes it possible for multiple levels to use the same key, which could be useful if the available keys are oversubscribed someday.

The interface to kpkeys levels, from the point of view of most kernel code, is fairly simple; there are two new functions:

    u64 kpkeys_set_level(int level);
    void kpkeys_restore_pkey_reg(u64 pkey_reg);

A call to kpkeys_set_level() will set the current kpkeys level, enabling whatever access that level provides. The return value is an architecture-specific representation of the state of key permissions prior to the change (not the previous level, since other code may be using some of the keys outside of the kpkeys levels mechanism). The previous protections can be restored by passing that returned value to kpkeys_restore_pkey_reg().
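Put together, the intended open-coded pattern looks something like the following kernel-context sketch (not standalone code: the helper name is illustrative, not an actual kernel function):

```c
/* Illustrative kernel-context sketch, not an actual kernel helper. */
static void set_pte_entry(pte_t *ptep, pte_t pte)
{
	/* Unlock page-table pages; save the previous key-register state. */
	u64 pkey_reg = kpkeys_set_level(KPKEYS_LVL_PGTABLES);

	WRITE_ONCE(*ptep, pte);		/* the protected write */

	/* Restore whatever key permissions were in effect before. */
	kpkeys_restore_pkey_reg(pkey_reg);
}
```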

The page-table protection API is layered on top of the kpkeys levels machinery. It causes page-table pages to be assigned to the memory protection key set aside for page tables; by default, the associated protections do not allow writing to those pages. Any code that must modify a page-table page should first enable access with a call to kpkeys_set_level() setting the level to KPKEYS_LVL_PGTABLES, then use kpkeys_restore_pkey_reg() to remove that access afterward. The easier and safer way, though, is to use a scope-based guard:

    guard(kpkeys_hardened_pgtables)();

This bit of magic will make page tables writable and ensure that the change is undone as soon as the current function returns, making it impossible to forget to restore the page-table protections regardless of which code path is taken.

The current implementation, Brodsky said, "should be considered a proof of concept only". It includes just enough support for Arm's kernel-space memory protection keys feature ("Permission Overlay Extension" or POE) to make the rest work; it is not intended to be a complete kernel-space POE implementation at this point. There are also no benchmark results showing what the impact of this mechanism is on performance; developers will want to see those measurements eventually.

As it happens, this is not the first attempt to protect page-table pages using memory protection keys; Rick Edgecombe posted an x86 patch set back in 2021. There was also an attempt to use memory protection keys to prevent stray writes to persistent memory by Ira Weiny in 2022. Neither series progressed to the point of being merged into the mainline, and Edgecombe eventually set the page-table work aside in favor of other projects.

It seems clear, though, that there is interest in providing this sort of protection for page-table pages. To be successful, a patch set will almost certainly need to incorporate elements from both the Arm and x86 work to show that it is, indeed, applicable to more than one architecture. If that barrier can be overcome, the kernel might eventually have hardening of page-table access. Thereafter, it may make sense to extend this protection to other critical data structures within the kernel (Brodsky suggests task credentials and SELinux state, among other things). First, though, there needs to be agreement on the core infrastructure, and that discussion has barely begun.

Comments (2 posted)

Modifying another process's system calls

By Jonathan Corbet
January 14, 2025
The ptrace() system call allows a suitably privileged process to modify another in a large number of ways. Among other things, ptrace() can intercept system calls and make changes to them, but such operations can be fiddly and architecture-dependent. This patch series from Dmitry Levin seeks to improve that situation by adding a new ptrace() operation to make changes to another process's system calls in an architecture-independent manner.

ptrace() has, since the 5.3 release in 2019, supported an operation, PTRACE_GET_SYSCALL_INFO, that can be used when the traced process has been stopped at a system call. It is used by, for example, the strace utility to obtain information about the system calls made by a process of interest. The addition of this operation made life easier for programs like strace which, previously, had needed special code to handle the unique way in which each architecture manages system-call arguments and return values. Now, the same ptrace() call works on all architectures supported by Linux.

System calls can be intercepted — and information gathered — at three different points: on system-call entry, on the return to user space after the system call completes, or when a seccomp() trace rule is executed. The information available at each point varies; on entry, for example, the system-call number and arguments are available. On exit, instead, PTRACE_GET_SYSCALL_INFO will provide the return value from the executed system call. This information comes back in the ptrace_syscall_info structure; the ptrace() manual page describes each of the returned fields.

While ptrace() can be used to obtain system-call information, there is no equivalent way to change that information in an architecture-independent way. Any process that wants to mess with another process's interactions with the kernel must, thus, resort to lower-level means. Levin suggests that things should have been done differently: "Ideally, PTRACE_SET_SYSCALL_INFO should have been introduced along with PTRACE_GET_SYSCALL_INFO, but it didn't happen". Even in our less-than-ideal reality, though, that capability can be added now.

Within the kernel, there exists a function to change a process's system-call arguments in an architecture-independent way. Or, at least, there once was. Roland McGrath added syscall_set_arguments() to the 2.6.27 kernel in 2008, but that function never acquired any users, so Peter Collingbourne duly removed it during the 5.16 development cycle in 2021. Levin starts by reverting that patch — partially, at least. The implementation of syscall_set_arguments() on some architectures was evidently buggy enough that it was better to just provide new versions outright.

Levin also adds an internal syscall_set_nr() function to set the requested system-call number in an architecture-independent way; as can be seen from the patch adding this function, that operation must be done differently for each architecture. Of course, the level of architecture-independence achieved here is relative, since the system-call numbers themselves can vary from one architecture to the next.

With that infrastructure in place, adding PTRACE_SET_SYSCALL_INFO to ptrace() is a relatively straightforward task. At system-call entry, this call can change both the system-call number and the arguments provided, possibly yielding a result that is rather different from what the calling process intended. The system-call number can also be set to -1, which will result in the call being skipped altogether and the errno value being set to ENOSYS. The same changes can be made for system calls intercepted by seccomp(). At system-call exit, instead, only the system call's return value can be changed.

The other values found in the ptrace_syscall_info structure, including the instruction and stack pointers, cannot be modified with the new operation. That could possibly change in the future, Levin said in the cover letter, should there be a need to modify those values. There is a set of three padding bytes in the structure that must be set to zero in the current version of the patch; future versions could look there for flags indicating other changes to be made. The size of this structure is passed in from user space, meaning that it could be expanded in a compatible manner if the desire to change even more system-call-related parameters were to arise.

For now, though, the patch set is limited to the basic operations described above. One thing that is missing from the submission is a description of how this new feature might be used. One can imagine various types of sandboxing solutions that, among other things, limit the system calls a process can make, with the ability to make changes (or even emulate system calls) as needed; enhancements to seccomp() have been targeted at this sort of use case in the past. The development community may want to see more information about the intended uses this time around, but any sort of concerted opposition to this functionality would be surprising. In the end, it does not allow anything that ptrace() cannot already do.

Comments (7 posted)

Ghostty 1.0 has been summoned

By Joe Brockmeier
January 15, 2025

The Ghostty terminal emulator project has generated a surprising amount of interest, even before code was released to the public. This is in part due to the high profile of its creator, HashiCorp founder Mitchell Hashimoto. Its development was conducted behind closed doors for beta testing, until version 1.0 was released on December 26 under the MIT license. While far from finished, Ghostty is ready for day-to-day use and might be of interest to those who spend significant amounts of time at the command line.

Why?

The obvious question is "why yet another terminal emulator?" when there are plenty of open-source terminal emulators already. A quick search of the Fedora 41 package repository turns up at least a dozen options, ranging from the venerable xterm to more modern options such as GPU-accelerated terminal emulators Kitty and Alacritty, not to mention GNOME's Terminal, KDE's Konsole, and so forth.

The answer (aside from "because every person gets to decide how to spend their time") is that Hashimoto started the project in 2022 to play with the Zig programming language, do some graphics programming, and deepen his understanding of terminals. Initially, he had no plan to release a new terminal emulator. But his work on the hobby project led him to find that existing terminal emulators forced users to make tradeoffs that he didn't like. In his "Ghostty is coming" post, he says that users are forced to choose between speed, features, and platform-native GUIs. So he decided that Ghostty would be built so that users were not forced to choose:

With Ghostty, I set out to build a terminal emulator that was fast, feature-rich, and had a platform-native GUI while still being cross-platform. I believe Ghostty 1.0 achieves all of these goals.

This goal focuses on being the best existing terminal emulator. Within this goal, Ghostty 1.0 isn't trying to innovate on what a terminal can do but instead provide the best single terminal emulator experience available today on macOS and Linux.

"Boring software for a niche audience"

Hashimoto chose to work on Ghostty in public, sort of. He let on that he was working on Ghostty in 2023, and that it would eventually be released under a FOSS license. Those who wanted access prior to a public release had to join a Discord server and hope to be chosen for access. Ultimately 28,000 people joined the server, and about 5,000 were chosen to participate in the beta. The participants weren't limited to using Ghostty; they were also given access to contribute—GitHub lists more than 260 users who have made a contribution to the repository as of this writing.

In his blog post following the 1.0 release, he said that he had chosen to work on the project in a closed beta in part to manage his own time, and in part because a terminal emulator "either works or it doesn't":

You can't ship a terminal emulator that emulates only half the features Vim needs. You can't incrementally improve a terminal emulator below a certain threshold of functionality. I wanted to ensure that the first public release met that threshold. I wanted people to be able to use it productively and professionally right away.

In those regards, he said, the beta was a success. He was able to manage his bandwidth and enjoy time with his family. It also had its negative effects, such as building up unreasonable expectations about the project and causing frustration for users who wanted to try Ghostty but were unable to get into the closed beta. He did not anticipate the hype or interest, and said that he is sorry about that. "I thought I was building boring software for a niche audience [...] The good news is that Ghostty is now public and everyone can use it".

Summoning Ghostty

"Everyone" is a slight overstatement: Ghostty is currently available only for Linux and macOS. If I understand correctly, there are still a few people who use Windows and other niche operating systems. (For those curious, this issue on GitHub tracks progress for Windows and the problems that need to be solved.)

The Linux binaries and packages page explains how to install Ghostty for most of the popular Linux distributions, including Arch, Fedora, Gentoo, NixOS, openSUSE, and Ubuntu. There are no Debian-specific packages right now, but I was able to install the Ubuntu 22.04 package on Debian 12 and have not run into any problems with it so far.

One of the project goals is a "native" look and behavior for each operating system. Linux, of course, does not have a single native GUI toolkit as such. Ghostty uses GTK4, which the documentation calls "the closest thing to a standard GUI toolkit that exists" for Linux. Whether that will feel like a native application to users of desktops other than GNOME is debatable, but it's hard to fault application developers for targeting a single toolkit. As a result, Ghostty looks almost identical to GNOME's Terminal at first glance.

Zero-configuration philosophy

One immediate and obvious difference is that Ghostty's menu has no option to open a preferences window. The project's philosophy is that the program should have "sensible defaults" so that it does not require any configuration out of the box. The project is not dead set against allowing users to make changes; it just has a philosophy of trying to pick the "right" ones from the start. There is a long-running discussion on GitHub where users can suggest settings they believe would be acceptable defaults out of the box.

However, if users wish to make changes, there are many configuration options available and they are well-documented. Ghostty does have an "Open Configuration" menu option, but users may be disappointed on first use. The menu item simply opens the default text editor with a completely empty configuration file (~/.config/ghostty/config). Users can make configuration changes, save the file, and then use the "Reload Configuration" menu item or shortcut (Ctrl-<) for the changes to take effect. Running "ghostty +show-config" will output Ghostty's current configuration. Naturally, users can change keybindings if the defaults are not suitable.
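To illustrate the format, here is a hypothetical ~/.config/ghostty/config. The option names are drawn from Ghostty's documented settings, but the values are arbitrary examples, not recommendations:

```ini
# Pick a theme from "ghostty +list-themes"
theme = GruvboxDark

# Font settings
font-family = JetBrains Mono
font-size = 12

# Hide the window title bar
window-decoration = false

# Rebind an action; keybinds use trigger=action syntax
keybind = ctrl+shift+r=reload_config
```

Settings take effect after saving the file and reloading the configuration as described above.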

Users can list Ghostty's keybindings by running "ghostty +list-keybinds", which also serves as a way to discover user-facing features such as resizing window splits or how to quickly focus a specific tab.

Ghostty comes with more than 100 themes, which can be previewed by running "ghostty +list-themes". This brings up a preview screen that shows the color palette for the theme with sample code and text to get a sense of what syntax highlighting might look like under that theme. If none of the existing themes are quite satisfying, users can create their own. Themes are stored under /usr/share/ghostty/themes, and each theme is simply a text file with each element's color specified by its hex value (e.g. black is #000000, aqua is #00ffff, and so forth).
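A theme file uses the same key = value syntax as the main configuration, holding only color settings. The following is a hypothetical, abbreviated theme; a real theme would set all 16 palette entries, and the color values here are arbitrary:

```ini
# Base colors
background = #1d2021
foreground = #ebdbb2
cursor-color = #ebdbb2

# Palette entries, one per line: palette = <index>=<hex>
palette = 0=#000000
palette = 1=#cc241d
palette = 7=#a89984
palette = 15=#ffffff
```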

[Ghostty theme selector.]

Shell integration is one of Ghostty's prominent features. This allows Ghostty to implement a number of convenience features. For example, users can select a command's output (and only its output) by holding Ctrl and triple-clicking the left mouse button. Ghostty tracks the shell history, so users can easily create a scrollback file (all output to the terminal) using "C-S-j". That saves the scrollback to a temporary file (e.g. "/tmp/D4dr32Y2URuZGo35Fz5MdQ/history.txt") for future reference. Ghostty also does a better job of reflowing text when the terminal is resized.

In addition to features and platform integration, Ghostty uses GPU acceleration with a goal of being a fast terminal emulator. Subjectively, I'd report that Ghostty seems reasonably snappy. Scrolling back through terminal history was smooth, and it handled throwing a lot of text output at the terminal just fine. It could be that my usual workflow is not very demanding for a virtual terminal—and my typing speed is adequate, but hardly fast enough to pose a challenge to Ghostty or any other terminal.

In order to provide some sort of benchmark, I used cat to display a large (259MB) text file two times and took the second result:

    $ time cat words_alpha.txt

Using the fish shell under Ghostty, that took 7.56 seconds. Konsole took 12.031 seconds, whereas Kitty took 7.75 seconds, and Alacritty did it in just 6.93 seconds. GNOME Terminal took 12.13 seconds. That is not a particularly scientific or rigorous benchmark, but it did demonstrate that Ghostty seems to be faster than its counterparts that do not use GPU acceleration, and in the same neighborhood as those that do.

It's also worth noting that Ghostty uses the Kitty terminal graphics protocol and is capable of displaying images to the terminal as well as text.

For those who want to dive deep into the nitty gritty of its virtual terminal (VT) sequence support (also known as "escape sequences"), the documentation has a reference of all supported sequences. There is also a tracking issue that compares Ghostty to xterm's behavior, as well as a list of sequences that it does not support.

Living with the ghost

For the most part, the "zero configuration" philosophy has worked out. I configured Ghostty's theme and font, then hid the window title bar, but left everything else alone. For those who enjoy tinkering with all of the settings, there is an unofficial site called "Ghostty Config" that lets users create configurations using a web-based configuration dialog.

I've used Ghostty for all of my terminal needs since its release, which includes writing and editing articles in Emacs's terminal interface, using aerc in Ghostty, and quite a bit of work at the shell. It is not a dramatic improvement over other terminal emulators, at least for me, but its split-window features and shell integration make it a strong candidate to replace GNOME Terminal, since the latter does not support split windows. Ghostty has some nice touches, too, such as dimming the pane in a split window that does not have focus. That makes it immediately obvious which pane has focus and (in my experience) helps with concentration as well.

As one might expect, the 1.0 release is only the beginning: there are plenty of features left to implement. The roadmap and status table in the project's README lists seven high-level steps in the project's plan. The first four—standards compliance, performance, basic customization, and windowing features—are checked off as done. The next two are "native platform experiences", which includes features like settings windows for Linux and macOS, and completing the libghostty library for embeddable terminals. Last on the list are Windows support and "fancy features (to be expanded upon later)" as ambitions for future development.

Ghostty is also not quite ready for all languages, which may rule it out for some users. The features page notes that Ghostty should render the characters in Arabic, Hebrew, and other right-to-left scripts correctly, but only left-to-right text rendering is supported.

Information on contributing to the project is available in its contributing file on GitHub. The project appears to have no contributor license agreement (CLA). Hashimoto has said that he is interested in ensuring that Ghostty development is sustainable, but he is not seeking to make it a business. In the discussion about project sustainability, Hashimoto writes that non-OSS licenses and open-core business strategies "are not and will never be on the table".

In his "Ghostty is coming" blog post, he noted that he is even exploring non-profit structures for the project that might allow compensating major contributors for their work. That is a long-term goal, however, and in the sustainability discussion he writes that he fully intends to keep Ghostty his personal project with a benevolent dictator for life (BDFL) structure "for the indefinite future".

For a first release, Ghostty is a nice entrant in a crowded field of terminal emulators. It may be a bit more exciting for macOS users, who have fewer alternatives in that department, but it is well worth looking at for Linux users too. It has the makings of a strong community project already, so it will be a project to keep an eye on for those of us who live at—or frequently visit—the command line.

Comments (14 posted)

The slow death of TuxFamily

By Joe Brockmeier
January 14, 2025

TuxFamily is a French free-software-hosting service that has been in operation since 1999. It is a non-profit that accepts "any project released under a free license", whether that is a software license or a free-content license, such as CC-BY-SA. It is also, unfortunately, slowly dying due to hardware failures and lack of interest. For example, the site's download servers are currently offline with no plan to restore them.

A short history

According to a page on the site's wiki, TuxFamily was created by Julien Ducros to host projects for friends. It grew into a more general-purpose platform for free-software projects, and became a non-profit organization ("association loi de 1901") in 2001 to be able to accept donations for hardware. According to that history, by 2004 it provided services to more than 1,000 open-source projects and was "an important actor in the free software development in France".

In January 2004 the site was taken down by attackers, and a number of its administrators decided to quit TuxFamily. The remaining administrators and some new recruits set about rebuilding the service. It returned a year later in January 2005. During that time, the team created Virtual Hosting For Free Software (VHFFS), the software that now runs the site. It manages the services used to provide hosting, including DNS, downloads, mailing lists, version-control hosting, and more. The most recent release listed on its download page is VHFFS 4.6.0, which was published in October 2016. The project has had commits since the 4.6 release but it has not published anything—even security releases—since.

Fading away

The service has had its share of outages over the years, in addition to the lost year from 2004-2005. All of the announcements on its news page going back to July 2022 are about outages and power "hiccups" that have caused service interruptions for days or—in some cases—more than a week.

On December 29, 2024, TuxFamily forum member Thomas Huth observed that repository services had been down for more than a week several times in 2024—including some outages that did not make the news page. He apologized for the "provocative question" but asked "is Tuxfamily slowly fading away?" If so, what could users do to help?

Administrator Xavier Guerrin responded that the observation was correct:

It is fair to say most of the motivation is gone. Absolutely everything got old: people, machines, datacentres, architecture, services. Everything. Well, people are not that old. But they moved to other projects.

And even if there were 10 brilliant engineers with too much time on their hands to take care of TuxFamily, the cold harsh truth is that the relevance of TuxFamily (an old-school mutualised hosting platform with no way to "scale" beyond a few dozen physical machines) in the era of cloud-based computing is negligible.

The trend is clear, Guerrin said, "TuxFamily is currently walking down and into its tomb." The best way to help would be for users to move their projects, and to write about the situation "so other remaining hostees get a list of suggestions". He said that there should have been an email to hostees that it was time to move off the platform, including the alternative platforms they could consider, and what to expect from the platform in terms of services in the future. That communication was discussed "but, for various reasons, not implemented".

Yon Fernández de Retana said that it was sad, but understandable, that the end of the service might be coming soon. They had always had a good experience and "liked what tuxfamily provided more than what other places offer".

Nicolas Pomarède said that all services, except download.tuxfamily.org, appeared to be up at the moment. They asked if it would be possible to restore the download service easily.

Guerrin said that it was not possible. The service had been spread across four machines at one time, and those servers had died "here and there" until only one remained. That server, located more than 600 kilometers from any TuxFamily staff, had also died. "Plus, we do not know for sure which part (power supply or motherboard) needs to be changed."

To the lifeboats

TuxFamily may not be the first, second, or third free-software-hosting service that most users think of when considering where to host a project, but it claims to host more than 2,800 projects. If those projects are still relevant, the people in charge should be actively considering new hosting.

There are, of course, many services that provide hosting for open-source and free-software projects—but few that are non-profit in nature and provide privacy-respecting services to any and all free-software projects. It is a shame to lose one. The folks who have run the site for more than 25 years deserve thanks for keeping it alive for this long.

[ Thanks to Paul Wise for the story tip. ]

Comments (9 posted)

Page editor: Jonathan Corbet
Next page: Brief items>>


Copyright © 2025, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds