
Leading items

Welcome to the LWN.net Weekly Edition for September 18, 2025

This edition contains the following feature content:

  • Fighting human trafficking with self-contained applications: a RustConf talk on building forensic tools that under-resourced police departments can actually deploy.
  • Providing support for Windows 10 refugees: an Akademy 2025 update on the End of 10 campaign.
  • A policy for Link tags: Linus Torvalds finally spells out when kernel commits should carry Link tags.
  • Creating a healthy kernel subsystem community: Hans de Goede on friendly mailing lists, growing a community, and avoiding maintainer burnout.
  • New kernel tools: wprobes, KStackWatch, and KFuzzTest, an overview of three debugging and testing tools recently posted to the kernel lists.
  • Comparing Rust to Carbon: Chandler Carruth on two different roads to memory safety for existing C++ code.

This week's edition also includes these inner pages:

  • Brief items: Brief news items from throughout the community.
  • Announcements: Newsletters, conferences, security updates, patches, and more.

Please enjoy this week's edition, and, as always, thank you for supporting LWN.net.

Comments (none posted)

Fighting human trafficking with self-contained applications

By Daroc Alden
September 15, 2025

RustConf

Brooke Deuson is the developer behind Trafficking Free Tomorrow, a nonprofit organization that produces free software to help law enforcement combat human trafficking. She is a survivor of human trafficking herself. She spoke at RustConf 2025 about her mission, and why she chose to write her anti-trafficking software in Rust. Interestingly, it has nothing to do with Rust's lifetime-analysis-based memory safety — instead, her choice was motivated by the difficulty she faces getting police departments to actually use her software. The fact that Rust binaries are statically linked and that the toolchain supports cross-compilation by default makes deploying Rust software in those environments easier.

She started by pointing out that no software is going to be able to single-handedly put an end to human trafficking. Her goal for the programs Trafficking Free Tomorrow makes is to "raise the cost of selling people" to make human trafficking not economically viable. She does this by building tools for law enforcement, who are often already trying to stop human trafficking.

The problem is that trafficking is profitable, which means that the criminals who engage in it often have well-funded defenses and expensive lawyers. If there is any way for the defense to get evidence thrown out, they'll find it and use it. Before something becomes evidence in a court of law, it starts out as "stuff from a crime scene". In order to be usable as evidence, it needs to be tracked and signed off on at every step along the way, in order to prove that it couldn't have been tampered with.

[Brooke Deuson]

Deuson described FolSum (the web site for which is offline at the time of writing, although Deuson is working on it), which is an MIT-licensed application that helps maintain this chain of custody for digital evidence. It records hashes of folders of digital evidence, and produces a report about what has and hasn't changed since the last run of the tool. This can be used to help prove the chain of custody in court.
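
FolSum is a fairly small program, and its site (and source) is not reachable at the moment, so the following is only a minimal sketch of the general idea, not FolSum's actual code. It walks a directory tree with the standard library and hashes each file with the sha2 crate (an assumed dependency); the resulting path-to-digest listing is the kind of report that a later run could be compared against:

    // Cargo.toml (assumed): sha2 = "0.10"
    use sha2::{Digest, Sha256};
    use std::collections::BTreeMap;
    use std::fs;
    use std::io;
    use std::path::Path;

    /// Recursively hash every regular file under `dir`, building a
    /// path -> hex-digest map that can be diffed against an earlier report.
    fn hash_folder(dir: &Path, report: &mut BTreeMap<String, String>) -> io::Result<()> {
        for entry in fs::read_dir(dir)? {
            let path = entry?.path();
            if path.is_dir() {
                hash_folder(&path, report)?;
            } else if path.is_file() {
                let digest = Sha256::digest(fs::read(&path)?);
                let hex: String = digest.as_slice().iter().map(|b| format!("{b:02x}")).collect();
                report.insert(path.display().to_string(), hex);
            }
        }
        Ok(())
    }

    fn main() -> io::Result<()> {
        let mut report = BTreeMap::new();
        hash_folder(Path::new("."), &mut report)?;
        for (path, hex) in &report {
            println!("{hex}  {path}");
        }
        Ok(())
    }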

This idea isn't recent; Deuson has been working on it for several years, ever since she had "a bad experience in college". Surviving that experience left her "really angry" and motivated her to start working with law enforcement to try to ensure it couldn't happen again. She initially wrote simple Python scripts to help with chain-of-custody problems. Those scripts worked on her machine, but she had trouble delivering the software to the people who actually need it.

The users Deuson targets are largely underfunded police departments that can't afford expensive commercial forensic solutions. The people there are usually non-technical and more used to working with paper forms for evidence tracking. They need software that is simple, self-explanatory, and capable of running in a highly locked-down enterprise environment. Deuson's first attempt at distributing her software was to bundle it using Kubernetes. That sort of worked, but it turned out to be hard to get it installed in police departments. Opening ports in the firewall is also often prohibitively hard. "Getting software into these environments is really difficult."

Eventually, she decided that the only way to make this work would be to write a single, standalone executable that does everything locally. It would need to be able to run on ancient desktop computers, in a variety of environments, without external dependencies. That's why she ultimately chose Rust to write FolSum.

Rust is probably most famous for its approach to memory safety, but she said that this wasn't actually too relevant to her choice. It is still important that Rust is a memory-safe language, though: not because of the reliability of the software, but because it lets her point at things like the Biden administration's report on modern computer security or CISA's recommendations for secure software in order to justify her choice to non-technical lawyers. Being able to point at an official report that says a certain language is an approved way of producing secure software is actually quite helpful for getting FolSum adopted.

The main reason she chose Rust, though, was developer ergonomics. "I'm just one person", she explained. Nobody else is currently working at Trafficking Free Tomorrow. So if she wants to produce this software, it needs to be in a language that makes it easy to meet her requirements for producing self-contained applications.

Ultimately, she's happy that she chose to experiment with Rust. Writing a local application instead of a server-based one let her keep things simple. One thing that users really liked about the Rust version of the application was that it starts quickly, she said. Lots of commercial software is big and bulky, and takes a while to start up, leaving users staring at splash screens. FolSum starts up almost as soon as the user releases the mouse button. That's important, because it builds user trust in the reliability of the application from the beginning, she said.

One of Rust's features is "fearless concurrency": language and standard-library guarantees that make it impossible to construct data races in safe Rust code. When Deuson started writing FolSum, she didn't know anything about that. "Starting off, I didn't really know anything about concurrency. I didn't have formal training." So the first version of the program appeased Rust's concurrency rules by wrapping a shared hash map in a single big mutex.

That did work, but it led to a lot of difficult-to-debug deadlocks, "which sucks". Ultimately, she ended up rewriting the implementation to use channels, which results in fewer deadlocks. Notably, FolSum doesn't use any asynchronous code yet — it's all done with synchronous I/O, and the GUI actually runs in the same thread as the checksumming work.
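
Switching from a shared, locked map to message passing is a common pattern in Rust; the sketch below (not FolSum's code, just an illustration using only the standard library) shows the shape of it. Worker threads do the hashing and send their results over an mpsc channel, so only one thread ever touches the map and there is no lock to deadlock on:

    use std::collections::HashMap;
    use std::sync::mpsc;
    use std::thread;

    // A trivial placeholder "hash" so the example stays self-contained.
    fn fake_hash(path: &str) -> u64 {
        path.bytes().map(u64::from).sum()
    }

    fn main() {
        let files = vec!["a.bin".to_string(), "b.bin".to_string(), "c.bin".to_string()];
        let (tx, rx) = mpsc::channel();

        // Each worker owns its input path and a clone of the sender; no shared mutex.
        for path in files {
            let tx = tx.clone();
            thread::spawn(move || {
                let digest = fake_hash(&path);
                tx.send((path, digest)).expect("receiver still alive");
            });
        }
        drop(tx); // close the channel so the receive loop below can finish

        // The single receiver collects results; nothing else ever touches the map.
        let mut report: HashMap<String, u64> = HashMap::new();
        for (path, digest) in rx {
            report.insert(path, digest);
        }
        println!("{report:?}");
    }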

The GUI is written using egui, which is an immediate-mode GUI framework, meaning that the interface is completely redrawn on every frame. Deuson called the approach "slightly cursed, but easy to reason about". The interface is basic, with no frills or animations — it's just a single window with some text and four buttons.

Deuson wrote it that way as a simple prototype, just to get something working. "I didn't think that UI would be nice, but the users actually really liked it." Now, she doesn't plan to change the interface. It turns out that non-technical users like the approach that she has called "GUI as docs", where the application puts the explanation of what it does right next to the individual buttons that do those things. Several users have told her that they wished other software was written like this, to her bafflement. For-profit software is often a forest of features, which makes it hard to find the specific thing one needs, especially if the tool is only rarely used, she said.

Some Rust features that she did really appreciate were the integrated unit tests and benchmarking libraries. They let her focus on what she felt was important, rather than spending time on boilerplate. On the other hand, she felt that people should probably avoid advanced language features and extra dependencies. She's written FolSum with basic for loops and plain imperative code, and it works well for her.
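
For readers who have not used them, Rust's test support is built into the language and Cargo: tests sit next to the code they exercise and run with "cargo test", with no external framework needed. A small, hypothetical example (not taken from FolSum; the built-in benchmark harness, which still requires a nightly toolchain, is omitted):

    /// The kind of small helper a tool like FolSum might have: is a
    /// folder report of (path, digest) pairs completely empty?
    fn report_is_empty(entries: &[(String, String)]) -> bool {
        entries.is_empty()
    }

    fn main() {
        // Nothing interesting at runtime; run `cargo test` to exercise the code.
        println!("empty report? {}", report_is_empty(&[]));
    }

    #[cfg(test)]
    mod tests {
        use super::*;

        #[test]
        fn empty_report_is_detected() {
            assert!(report_is_empty(&[]));
        }

        #[test]
        fn non_empty_report_is_not() {
            let entries = vec![("a.txt".to_string(), "deadbeef".to_string())];
            assert!(!report_is_empty(&entries));
        }
    }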

In the future, she would like to add a few carefully chosen new features to FolSum, including a progress bar and code to avoid overwhelming cheap network-attached storage. She also wants to add a crash reporter that gives users a report that they can send to her when something goes wrong. Ultimately, FolSum is a pretty small piece of software. Building it helped her iron out the web-site, continuous-integration, software-packaging, and distribution problems; now that she knows what works, future software from Trafficking Free Tomorrow is on a much firmer foundation.

There was only time for a few questions at the end of the session; one person asked how she had dealt with the social problems of getting police departments to adopt her software. Deuson explained that when talking to stakeholders, she mostly didn't try to convince them of anything technical — instead, she tries to think about who their bosses are, and who assumes the risk from choosing to use FolSum. That's where resources like the White House recommendations are really useful to convince users that it is actually a reasonable way to do things.

I asked what other anti-human-trafficking software she wanted to write in the future. Deuson responded that she had planned on "tons of stuff" including dedicated perceptual hashes for images, tools for working with recursively zipped files, a way to organize timelines of conversations, and open-source intelligence tools. The goal Deuson has set for herself, of making human trafficking economically unfeasible, is important but daunting; hopefully, her strategy of producing small, dependable tools for the most under-resourced law-enforcement agencies will help achieve it.

Comments (14 posted)

Providing support for Windows 10 refugees

By Joe Brockmeier
September 17, 2025

In October, consumer versions of Windows 10 will stop receiving security updates. Many users who would ordinarily move to the next version are blocked by Windows 11's hardware requirements unless they are willing to buy a newer PC. The "End of 10" campaign is an effort to convince those users to switch to Linux rather than sticking with an end-of-life operating system or buying a new Windows system. At Akademy 2025, Dr. Joseph De Veaugh-Geiss, Bettina Louis, Carolina Silva Rodé, and Nicole Teale discussed their work on the campaign, its progress so far, and what's next.

End of 10

The End of 10 project was dreamed up at the South Tyrol Free Software Conference (SFSCON) and launched on May 28. According to the announcement, the end of Windows 10 security updates will "turn an estimated 200 to 400 million laptops and computers worldwide into security risks and heavily polluting e-waste" simply because those systems don't meet Microsoft's requirement for a Trusted Platform Module (TPM) 2.0. Linux, of course, will run quite happily on hardware that Microsoft has deemed obsolete.

I did not have the opportunity to attend Akademy this year, but its talks were live-streamed to YouTube, and the unedited streams for all talks are available now. The End of 10 session begins at 4:13 in the room-one video. Slides are also available.

Teale introduced herself as a member of the KDE Eco team working on its Opt Green campaign, which seeks to reduce energy demands of software and extend hardware life. Louis said that she is a retired engineer and teacher based in Berlin; before joining the End of 10 project in January 2025, she "hardly knew anything about Linux and had never heard of KDE before". Her activities with the campaign are focused on bringing Linux to "open-minded elderly people in Berlin", beginning with her neighbors. Silva Rodé said that she was also "pretty new at everything"; she met De Veaugh-Geiss at a conference last year and talked to him about the End of 10 campaign, leading to her involvement.

De Veaugh-Geiss is a project and community manager for KDE e.V., working on KDE Eco. He said that the group wanted to do the update jointly because the campaign is conceptualized as a collaborative effort—thus it made sense to have the people on stage who are doing the work. There are, he noted, many others who were not on stage, and the group could only provide a general overview of the campaign. "It's really just the tip of the iceberg".

End of 10 was born out of the Opt Green campaign, he said, which is funded by Germany's Federal Environment Agency and the Federal Ministry for the Environment, Nature Conservation, Nuclear Safety and Consumer Protection.

Support for Windows 10 ends on October 14, which coincides with International E-Waste Day (there really is a "day" for everything...) and KDE's 29th birthday. "So it seemed things are coming together, and we should do something with this". He held a BoF at SFSCON in November of last year that brought together a handful of people with ideas; that led to another BoF at the Chaos Communication Congress (CCC) in December, where people started volunteering to do work on End of 10.

Silva Rodé said that she was at CCC as part of a school event and wound up volunteering to help with translating materials into Spanish, which led to doing outreach in Latin America. "This was very exciting to me to see Latin America heavily represented".

What worked

The main goal, De Veaugh-Geiss said, was to connect new users with in-person support. He addressed the audience, "who here has helped someone install Linux that has never used it before, and then who has had to help them again afterward?" The idea was to recreate the support network at scale so that people know that they can go somewhere and get someone to help them out.

The first step was to promote existing local networks, such as Linux user groups. The second step was to do outreach to FOSS users and the media to "give the campaign some weight" so that people begin to pay attention to it. The third step was to build up new support networks. All of that, he said, had been a great success. There are more than 300 places that offer recurring support for new users, and more than 225 events have been listed on the End of 10 web site, in 40-plus countries. The web site has a current map of all of the repair cafes, independent shops, organizations, groups, and collectives that are providing support, as well as a list of upcoming events.

Some of the areas, he said, have much better representation than others. He invited people in underrepresented areas to consider getting involved to establish support networks in those areas as well.

One of the things that worked well, according to Silva Rodé, was to have concrete tasks for volunteers to take on. There were many people who really wanted to help but did not know where to begin or what to do.

De Veaugh-Geiss said that one of the ideas behind the campaign was to make it distribution-neutral and to "try to speak as a big FOSS family". The project was "at least getting there", with a number of different organizations, distributions, and companies supporting the campaign that are listed on the End of 10 web site. This includes not just the usual suspects from the free-software world but also repair collectives that have been focused on more general hardware repair. He said that the campaign was illustrating the importance of software for repairing hardware:

This is something that I think is quite abstract for many people. But when you start to realize, oh, my hardware won't work because the software has stopped being supported. I can go to a repair cafe. That becomes not abstract and very concrete.

Louis said that she lives in a "quite big cooperative apartment complex" that is home to about 1,000 people. She decided to start there, and her first presentation drew about 30 people. After that, interest in installing and using Linux took off; she began having regular office hours and organized an installation event with one-to-one support for users. "And we helped with doing backups first!"

The effort also included having demo computers on hand so that users could try Linux beforehand. "If they only use the browser, afterward they say that it looks just like Windows". Overall, she said it was successful because people kept coming back for continuous support. "As we have all heard, the experience is not done just with going to an install party"; people need assurance that there is someone who can help them afterward too.

Opt Green

Teale said that part of the Opt Green campaign is to make people aware that they have options other than Windows or Apple. Part of that campaign has been to put ads into magazines, such as Öko-Test, a German consumer-protection publication, and Linux Magazine.

The idea was to take a distributed approach so that supporting organizations would do their own campaigns within the context of End of 10, De Veaugh-Geiss said. He noted that Zorin OS and KDE were running their own campaigns, as well as the repair initiative Runder Tisch Reparatur. "This gave us a much wider reach than we would have been able to do if we had done it on our own."

Normal people, Louis said, don't read Linux Magazine. She then backtracked a bit about suggesting that Linux users were not normal people, with a laugh, "you know what I mean". It is important, she continued, to reach out to more mainstream publications like Tom's Hardware, PC World, and non-technical publications like The Irish Times. She said that she learned about organic coverage of the campaign in broadcast media when a friend contacted her to ask if there would be more installation events. "I changed my mind after listening to the radio broadcast", her friend said.

Challenges

There was some skepticism when reaching out to existing repair cafes, De Veaugh-Geiss said. One organization in Germany said it did not see what software had to do with repair cafes and did not think that its communities would be interested in installing Linux. To persuade the organization, he proposed doing a presentation to the community, and about 180 people from different repair cafes showed up. After a presentation about e-waste and the energy consumption involved in producing new devices, many of those who attended were interested in follow-ups.

Louis said that the people she spoke to who run repair cafes had many reasons not to offer Linux support. "When somebody comes with a broken toaster to the repair cafe, afterward it's fixed and they can just use it". Linux, of course, is a totally different story. The repair cafes did not want to deal with "all these people coming to them and having to explain how to use this and how to do that". The cafes just want to repair things, and that's quite understandable.

The FOSS way of doing things is "new and foreign", De Veaugh-Geiss said, to many of the people who are excited about the environmental aspects of installing Linux. There are also challenges with building new teams and communities. There are many places that do not have the maker spaces and Linux user groups one might find in a large city like Berlin; there are a lot of gaps in the map, he said.

Teale added that keeping people hooked was also a big challenge. It's a lot of work to keep people updated and interested after the excitement of a first event. Silva Rodé added that the same is true for volunteers: "it's not a very instant gratification kind of thing. Some people join the Matrix channel, and they expect revolution the next day". When that doesn't happen, she said, they are disappointed, and then they leave. Everything is slow, and people's expectations have to be managed.

De Veaugh-Geiss said that now that the campaign had "hit the ground running", the second phase could be to try to take the enthusiasm and get people involved with the developer community. One way to do that might be to write up guides to help people learn to write bug reports. Another idea, he said, would be to do a follow-up "i-Waste" campaign for Intel-based Macs that are losing macOS support but are still good hardware.

Q&A

The first question was about the barriers people faced when moving from Windows to Linux; specifically, what were people complaining about or missing?

Louis said that she had received a lot of positive feedback on how fast Linux is, and how easy it was to use. De Veaugh-Geiss answered that there is a real vendor lock-in issue, where people need—or think they need—specific software on Linux. "If it doesn't run on Linux, it just creates a barrier that feels insurmountable".

Another attendee wanted to know if the group was keeping a database of feedback from repair cafes and others who were getting input from people. De Veaugh-Geiss said that the campaign had done a survey of about 50 repair cafes recently, asking about general obstacles and if they had seen an increase in visitors. But, at the moment, the campaign is not keeping a database of feedback or anything of that sort.

One person said that they were quite amazed by the number of events and such, but "at the same time, it still seems small compared to the vast internet". They wanted to know what the organizers thought about the scale, or if they were happy with what they had achieved.

Louis referred back to slide six in the presentation, which showed an iceberg with several penguins on top: "this is only what we see and what people have told us". She suggested that there was much more activity than what the campaign knew about directly and noted that there are people using the End of 10 hashtag ("#Endof10") on the Fediverse that are not involved with the campaign at all. "It is pretty cool to see that it's becoming its own thing and has legs beyond ours".

As always, there are some questions during the Q&A that are not actually questions; one audience member interjected that it is possible to install Windows 11 on unsupported hardware, but "there's no guarantee you'll get updates".

Someone else wondered if the campaign is reaching highly technical people or "sort of more normal people". Louis said that tech-savvy people tend to just watch some videos and do it by themselves; they are not the people that the campaign is reaching out to. Instead, the campaign is trying to reach the people who want personal support for the installation and afterward as well.

Readers interested in assisting with the campaign can find ways to get involved on the contribution page.

Comments (12 posted)

A policy for Link tags

By Jonathan Corbet
September 11, 2025
The Git source-code management system stores a lot of information about changes to code — but it does not hold everything that might be of interest to a developer who needs to investigate a specific change in the future. Commits in a repository are the end result of a (sometimes extended) discussion; often, that discussion will result in changes to the code that are not explained in the changelog. For some years now, many maintainers have followed the convention of applying a Link tag to commits that points back to the mailing-list posting of the change. Linus Torvalds has been expressing his dislike for this convention for a while, though, and its time appears to be coming to an end.

Certain source-code management systems are able to track a change through multiple versions by assigning a "change ID" to the work. Git does not do that, though, so the kernel community does not have an easy way to look at the history of a patch. In a discussion prior to the 2019 Kernel Summit, Shuah Khan asked whether the community should adopt some sort of change-ID convention to track work heading into the kernel. Doug Anderson proposed something similar a month later. The extended discussions that followed did not lead to the adoption of a change ID, but they did bring about a related change.

Specifically, Thomas Gleixner suggested the use of a Link tag that would contain an archive URL for the posting of each commit on the mailing lists:

What's really useful is when the commit has a Link tag:
   https://lore.kernel.org/lkml/$MESSAGE-ID

and if the submitters provide the same kind of link in their V(N) submission pointing to the V(N-1) in the cover letter:

    https://lore.kernel.org/lkml/$MESSAGE-ID-V(N-1)

If it's a single patch the link can be in the patch itself after the --- separator. That allows a quick lookup of the history.

The Link tag was not new; its first appearance in the kernel repository is in this 2011 commit. That commit, along with 56 others containing Link tags, went into the 2.6.39 release. This tag was mostly used for commits to the x86 and core-kernel subsystems initially, showing up in 300-500 commits per release. As this plot shows, the use of Link tags grew slowly over time, until something happened:

[Plot of link tags per release]

The "something" that happened partway through the 5.2 development cycle was, of course, the above-mentioned discussion. There seemed to be widespread agreement that the addition of Link tags to commits would be helpful. Kees Cook posted a Git hook configuration that would add the tag automatically whenever a patch was applied with git am. Linus Walleij updated the kernel documentation to suggest using this hook; the b4 tool later gained a flag to add Link tags as well. The addition of Link tags grew accordingly; in the 6.16 release, 11,030 commits (75% of the total) included Link tags.

(It is worth noting that the other part of Gleixner's suggestion — including a link to the previous posting whenever a patch series is updated — has not been adopted as widely, though many developers do indeed include those links. These backward links are important, as any LWN kernel writer will attest, if one wishes to look at the history of a change through multiple iterations.)

During the 2019 discussion, Torvalds was lukewarm to the idea of including Link tags; he certainly did not oppose it, and said it was better than trying to create some sort of change ID. By 2022, though, he was beginning to complain about them:

I _really_ wish the -tip tree had more links to the actual problem reports and explanations, rather than links to the patch submission.

One has the actual issue. The other just has the information that is already in the commit, and the "Link:" adds almost no actual value.

This refrain would become more common over the years, culminating in some strongly worded complaints in the middle of discussions on virtual filesystem layer and io_uring patch sets. The core problem remained the same: he does not like it if he follows the URL in a Link tag, hoping to learn more about the change in question, and only finds the same information that is already contained in the changelog itself. That slows his workflow down and increases his grumpiness level.

Various developers have sought to defend the use of these tags in these discussions. Christian Brauner said: "I care that I can git log at mainline and figure out where that patch was discussed, pull down the discussion via b4 or other tooling, without having to search lore". Konstantin Ryabitsev pointed out that it is not always easy to find patch submissions by searching the archives. Jens Axboe said that the tags can help to find the cover letter for a patch series, and that they can also help to turn up discussion that happens after a patch is applied. Greg Kroah-Hartman argued for keeping the tags, saying "they work well for those that have to spelunk into our git branches all the time".

Torvalds, though, has been unmoved by these arguments and steadfastly opposed to the use of Link tags except in cases where there is something "interesting" behind the link. In the recent discussions, Axboe asked repeatedly for a proclamation from Torvalds on what the rules for Link tags should actually be (and suggested summoning LWN for a summary). Torvalds appears to have answered that request in the notes for the 6.17-rc5 release:

So if a link doesn't have any extra relevant information in it, just don't add it at all in some misguided hope that tomorrow it will be useful.

Make "Link:" tags be something to celebrate, not something to curse because they are worthless and waste peoples time.

Please?

Many maintainers are unlikely to celebrate the fact that they have to end the automatic addition of Link tags and think, for each commit, whether such a tag is "interesting" or not. But they can celebrate the fact that the time spent on that exercise will save some time responding to grumpy emails from the Chief Penguin. While the new policy may not be entirely popular among maintainers, there is at least now something approaching an actual policy around the use of these tags.

Comments (29 posted)

Creating a healthy kernel subsystem community

By Jake Edge
September 12, 2025

OSS EU

Creating welcoming communities within open-source projects is a recurring topic at conferences; those projects rely on contributions from others, so making them welcome is important. The kernel has, rather infamously over the years, been an oft-cited example of an unwelcoming project, though there have been (and are) multiple efforts to change that with varying degrees of success. Hans de Goede talked about such efforts within his corner of the kernel project in a talk (YouTube video) at Open Source Summit Europe.

De Goede introduced himself as a Red Hat employee working mostly on Linux hardware enablement, "so mostly driver-related stuff"; since the talk, he has announced that he is leaving the company after 17 years. Recently, he has been focused on laptops and MIPI cameras. He has been a maintainer for the platform-drivers-x86 (pdx86) kernel subsystem since 2020.

Friendly mailing list

He started with some suggestions on making the mailing list a "friendly and welcoming place". Kernel mailing lists have a bad reputation, where "sometimes mails can be rude or unfriendly; that has been changing lately, changing for the better". But the reputation remains and it is often seen as scary to post something to the list.

As a kernel maintainer, he wants to lead by example, so he always tries to stay professional and friendly with his responses to mailing-list posts. A question may seem stupid, but the person likely just does not have the background and knowledge that De Goede does; he tries to put himself in others' shoes and suggested that responders remember back to when they were posting their first patch. Posting a grumbling response because you are busy and stressed does not help anything; patience is a virtue, "take a deep breath, count to ten, be patient". Both the contributor and the maintainer are trying to improve the kernel, so a maintainer should assume good intentions—even after having to explain something multiple times.

[Hans de Goede]

The language barrier can be a major source of problems. English is not difficult for him, as the Netherlands does not dub TV and movies so he has been exposed to it all his life; "English is pretty close to Dutch, actually, in a way". He has, however, noticed many other participants struggling with English, so he has come up with some rules that he uses to help overcome that barrier.

First off, De Goede tries to write short, clear sentences; he has noticed that engineers often write long sentences, with multiple sub-sentences for clarification, which should be avoided in his experience. Another mistake is to have a "wall of text" rather than lots of clear, concise paragraphs; for further clarification, often a footnote can be used. Maintainers will sometimes need to ask for multiple changes to a patch, or for several different kinds of debugging information, so making that really clear, using a numbered list that has white space between each item, can help. "These are just really simple tricks, but they help to overcome the language barrier", he said.

When a maintainer is having a bad day for some reason, it is especially important that they try to follow his suggestions. Sometimes, though, miscommunication will happen even when maintainers are patient, friendly, and trying to communicate clearly. Language barriers can play a role, but email is also a less-than-perfect communication mechanism; being unable to see the other person in order to judge their attitude and emotion makes it prone to miscommunication. Because of the inherent delays in responses, it can take time to even recognize that a miscommunication has happened; it is often the case that a message initially perceived as disagreeing actually turns out to be agreeing—or at least not disagreeing as strongly as it first appeared.

He has come up with a four-step process for dealing with miscommunication, starting with identifying that one has occurred in a response. Clearly laying out the original interpretation of the other person's message, followed by a description of the new understanding of the person's meaning, are the next two steps. Finally, suggest how to move forward with the discussion and/or patches. It is important not to get caught up in questions of blame—was the original message unclear or was it just read poorly—because it doesn't matter. "Miscommunication is just something which happens, it happens all the time"; it is easy to diagnose and fix when talking face-to-face, but email makes miscommunication take longer to unwind.

An audience member asked about repetitive replies: when does it make sense to turn those replies into documentation and is sending a link to the documentation considered a friendly reply? De Goede said that he was bad at turning replies into documentation but that he did have some canned answers that he can copy-and-paste into a reply along with a note that it is from a template. There is already so much documentation on submitting kernel patches that it is daunting; he thought that maybe it should be split into "Submitting patches 101" and a "201 with all the details".

Another question concerned finding the time to be able to be friendly and help newcomers. De Goede said that he was lucky to be paid for being a subsystem maintainer, "so I could use some of my boss's time for it". In addition, the subsystem he maintains is not that large; the number of postings to the mailing list, especially in the early going, was small.

Unfortunately, helping newcomers may not produce a payback, so it is "a bit of a scattershot approach". He had the good fortune to have two community members who he was able to turn into reviewers for parts of the subsystem. He noted that they were working in specific areas and asked them directly to help out; "it was a combination of being open and friendly" coupled with directly seeking out their aid when he was too busy.

The question played into several other topics that he planned to talk about, De Goede said: community growth and maintainer burnout. Obviously, spending time to grow the community can help relieve some of the burden on the existing maintainers, but it is not guaranteed, so it is a tricky balancing act. It would be nice if all maintainers could be paid for their work, but that is not the case; he was able to work on his subsystem around one day a week, which was generally enough time to deal with the most pressing things.

Unpaid maintainers may not feel like they have sufficient time to bring up newcomers, which is understandable, but may well result in burnout down the line. He suggested managing expectations with newcomers to give them a realistic idea of what level of assistance the maintainer can provide—and when. If the maintainer is simply too swamped, it may make sense to encourage a newcomer to seek out another mentor to help with their patch.

Growth

A welcoming mailing list is only one of the things needed to grow a community, he said. Newcomers' patches should probably not be overly nitpicked about minor stylistic problems (e.g. a forgotten comma at the end of the last structure initializer); the maintainer should just do some small cleanups if needed and merge the result. For seasoned developers, nitpicking may make sense, but for newcomers, the goal should be to give them "that little boost of 'yes I got my first patch upstream'; you want to hook them". Pointing out the changes needed in the merged patch may help the newcomer learn without having to go through multiple revisions of the patch for stylistic issues.

As noted, the time spent hand-holding newcomers may not pay itself back, but the investment needs to be made to add people to the project. There are not enough maintainers—who are paid for that work—to keep up with all of the newcomer mentoring that is needed. That message needs to be spread more widely, he said. He referred to the curl keynote from earlier in the day by noting that none of the car companies using the tool are contributing back to it. One way for big companies, some of which are making billions relying on open source, to give back to the community would be to hire maintainers, he said.

Appreciating contributions and contributors is an important part of being a maintainer, both for existing community members and, especially, for newcomers. He tries to always start his patch review emails with "thank you for your patch"; it is a simple thing to do and is "so underrated" in making a difference. He also thanks people who review his patches and, importantly, thanks those he sees reviewing other people's patches.

Another way to help grow the community is to delegate tasks and to ask for help. Noting community members who are reviewing patches in a particular area, then asking them directly for help on a review or a bug, can work well to get more assistance. It does not always work, of course, since other people can also be too busy, but it is worth trying.

Two questions were asked about the level of nitpicking De Goede was talking about. He clarified that if a patch has actual problems in the logic or implementation, he will ask for changes, but that simple typo kinds of fixes are best just dealt with directly. He knows that other maintainers differ, but in his opinion it saves the maintainer time to not have to context switch and go through the review again (and, perhaps, again).

Maintainer burnout

The root cause of burnout is that kernel maintainers are "often being overasked"; they are also generally the type of people who will push themselves further than they should in pursuit of perfection. He sees that in himself, but perfect is the enemy of good; "you want to aim for 'good enough', perfect does not exist". A healthy community with assistance from trusted members helps, but is not the ultimate solution; his is a small subsystem, but can be overwhelming, and there are much larger subsystems out there.

For subsystems where there is a "firehose of patches", there is a need for a second developer who is being paid for doing review and other tasks to help reduce that burden. As an example, he pointed to the media subsystem, which he thinks is lacking in reviewer capacity for its enormous patch load. There is an effort to move to a multi-committer model, which may help some, but what is really needed, he thinks, is for some seasoned kernel developer to be paid by some interested company to do patch review. They do not even need to be familiar with the media subsystem; if it is known they are being paid to work on it for two or three days a week, it is worth the time needed to bring them up to speed.

He had a "mini-burnout" himself a while back and noticed that it was not directly caused by the kernel-maintainer role, or by his technical work at Red Hat on top, but by other factors. It came from the feeling that he was not in control of how he spent his time. "Do you feel productive at the end of your work day? Or do you have the feeling you were only spinning your wheels on useless administrative tasks?" For him, the answers to those questions were major factors.

De Goede said that he "had the unlucky and lucky experience" that someone close to him "had a pretty hefty burnout and they are only now fully recovering". So he knew the symptoms, what to look for, and what not to do: "go on go on go on until you crash; don't do that". It is important to recognize the symptoms "and pull that emergency brake early".

In March 2023, he recognized some of those symptoms in himself; armed with the knowledge of what happened with his friend, he knew he did not want to suffer the same fate. He is not a psychologist, but the symptoms he saw were things like "having a very short fuse, being tired all the time, not having energy", and getting annoyed quickly by small things. "Causes for me were feeling loss of control, not feeling appreciated, not feeling very productive, [and] not feeling heard." Those were all work-related, but there were also personal factors, such as aging parents needing care. It is always a combination of things, he said; not being able to spend the time that he felt he needed to on the kernel maintainership just added onto the pile.

Once he realized what was happening, he immediately reported the problem to his employer; "my work is making me sick, I'm calling in sick". He is lucky, he said, to be "from a European country where we have proper worker rights and labor protection". He asked for some changes at work and his management chain agreed, "which was really great". He only ended up being on full sick-leave for three weeks, and then slowly built back up to his regular work hours over a seven-week period.

He recommended that any attendees who recognize burnout symptoms in themselves intervene quickly to ask for help from professionals, someone at work, a friend, or someone else. "Do something, don't keep on the treadmill." The belief that mental illness is somehow not the same as physical illness is problematic and untrue; for one thing, eventually mental illness will lead to physical symptoms. It can be tricky to do, depending on the laws of the country you live in, but he suggested getting your employer involved if the problem is work related; "try to work with them to fix things, to change things, to make your work fun again".

His love of Linux and open-source software makes him "give it my all" every day. That makes it important that he get some positive energy back out of that effort every day, so he can do it again the next day. His manager suggested that he should not try to give 100% every day, and to keep some buffers in reserve. "I'm still working on that."

Handing off

After his mini-burnout, he was working with his manager to make some changes on the maintainer side of his job. He also would like to reduce the number of subsystems with a "bus factor" of one, including pdx86. Even though the subsystem is fairly small, it made sense to him to share the load. He noticed that many of the patches were coming from Intel, so he asked the company if it could provide a seasoned developer to co-maintain the subsystem.

The company gave Ilpo Järvinen time to help out and he has been co-maintaining pdx86 with De Goede since September 2023. In the beginning, they would swap roles for every kernel-development cycle; one would do the bug fixing for the cycle, while the other did the review and patch merging for the subsystem. Since then, De Goede has been getting more involved in the media subsystem and the MIPI camera work, in part due to the lack of reviewers for that part of the kernel, so he asked Järvinen to be the primary pdx86 maintainer in May 2025. De Goede is still part of the team, and can fill in when needed, which also helps the bus factor for pdx86.

The slides for the talk are also available.

[I would like to thank the Linux Foundation, LWN's travel sponsor, for supporting my trip to Amsterdam for Open Source Summit Europe.]

Comments (15 posted)

New kernel tools: wprobes, KStackWatch, and KFuzzTest

By Jonathan Corbet
September 15, 2025
The kernel runs in a special environment that makes it difficult to use many of the development tools that are available to user-space developers. Kernel developers often respond by simply doing without, but the truth is that they need good tools as much as anybody else. Three new tools for tracking down bugs have recently landed on the linux-kernel mailing list; here is an overview.

wprobes

One nice feature long found in user-space debuggers is watchpoints — the ability to request a trap whenever a particular spot in memory is accessed. Watchpoints can be useful for finding out which code is guilty of corrupting a given variable, among other things. This feature could be especially useful in the kernel context, where many things are happening at once and the source of a problem could be anywhere in a large and complex system. But kernel developers have had to do without watchpoints, especially when working outside of virtualized environments.

What the kernel does have is kprobes, which enable the placement of debugging code (almost) anywhere within a running kernel. They are somewhat similar to a dynamic breakpoint inserted by a debugger, but they do not actually stop the execution of the kernel; instead, they print some (hopefully useful) information and continue on.

This patch series from Masami Hiramatsu adds a new feature to the kprobes subsystem called "wprobes". A wprobe is similar to a user-space watchpoint, in that it traps accesses to a given range of memory; like kprobes, though, wprobes do not actually stop execution. But, with luck, they can be made to print enough information to help pinpoint the source of a problem.

A watchpoint is set up by writing a cryptic string to /sys/kernel/tracing/dynamic_events, following this format:

    w:[GRP/][EVENT] [r|w|rw]@<ADDRESS|SYMBOL[+OFFS]> [FETCHARGS]

As an example (provided in the patch cover letter), this set of commands will create a watchpoint that will trigger whenever the kernel's jiffies variable is modified:

    # cd /sys/kernel/tracing
    # echo 'w:my_jiffies w@jiffies:8 value=+0($addr)' >> dynamic_events
    # echo 1 > events/wprobes/my_jiffies/enable

In the middle line, "w:my_jiffies" sets up a watchpoint named my_jiffies; "w@jiffies:8" targets writes to the eight-byte jiffies variable, and "value=+0($addr)" prints out the value written to that address. The final line then activates the watchpoint.

There is also a mechanism that can create watchpoints dynamically when certain things happen; it enables watching dynamically created memory objects (slab allocations, for example) during their lifecycle. See the above-linked patch cover letter for an example, this patch for a documentation update describing wprobes, and this patch for documentation on dynamic wprobe creation.

KStackWatch

Corruption of the call stack can be an especially pernicious problem for developers in any context. The kernel can be built with stack canaries and other tools that will detect that a stack overflow has happened, but they cannot pinpoint exactly when the problem occurred. The KStackWatch patch series from Jinchao Wang aims to fill that gap by providing a lightweight tool that can trap stack-corruption events.

In essence, KStackWatch will keep an eye on a specific part of a given function's stack frame while that function is active. Like wprobes, it sets up a trap on access to the memory of interest, and prints out useful information when that memory is altered. The kernel's tracing infrastructure is used to enable this watchpoint on entry to the function of interest, and to disable it on return.

The tool adds a new control file, /proc/kstackwatch, running counter to the usual practice of putting such files in debugfs or tracefs. The administrator can enable the tool by writing a string of this form to that file:

    function+ip_offset[+depth] [local_var_offset:local_var_len]

Here, function is the name of the function of interest, and ip_offset indicates where, in the function, the entry probe should be placed. It's not entirely clear why that offset is needed, or how a user would determine what its value should be. The depth value can be used with recursive functions; it specifies which level of recursion should be watched. The local_var_offset and local_var_len parameters indicate the location and size of a specific variable on the stack to watch, expressed as an offset from the stack pointer at entry. If these values are not provided, KStackWatch will watch the stack canary instead.

The current stack watch is global across the system; there can be only one active at any given time. Writing a new configuration to /proc/kstackwatch will remove the previous watch; writing an empty string will disable the tool entirely.

When a write to the watched variable is detected, KStackWatch will output some diagnostic information to the system log. There is a module parameter, panic_on_catch, that will cause an immediate system panic when a write is detected. There does not appear to be a way to change that parameter without unloading and reloading the module.

This series is in its fourth revision as of this writing; this revision, happily, includes a basic documentation file describing KStackWatch. There has been some interesting interplay with the wprobes patch set, as each has adapted useful code from the other; both appear to be approaching a state of readiness.

KFuzzTest

Fuzz testing of the kernel is not particularly new; tools like syzkaller have been exposing kernel bugs for the better part of a decade. This testing, though, is all run from user space, so the code it can exercise is limited by the kernel's user-space API. The KFuzzTest subsystem, proposed by Ethan Graham, is an attempt to give fuzz testers a way to reach inside the kernel and abuse the interfaces of low-level functions directly.

To arrange for a kernel function to be stressed by KFuzzTest, a developer must set up some scaffolding within the kernel, most likely in the same source file as the target function. The first step is to define a structure that encapsulates the arguments to that function. If we wanted to be able to test this internal kernel function:

    int mangle_data(const u8 *data, size_t len) { /* do something interesting */ }

We would start with this definition:

    struct mangle_data_input {
        const u8 *data;
        size_t len;
    };

Once that is in hand, it is time to define the test itself. That is done with the FUZZ_TEST() macro:

    FUZZ_TEST(test_mangle_data, mangle_data_input)
    {
        /* Constraints and annotations here, then ... */
        mangle_data(arg->data, arg->len);
    }

This incantation defines a test named test_mangle_data. Within the body of the declaration, the variable arg will point to a mangle_data_input structure containing arguments to pass to the function; the call to the function to test is made at the end of this declaration. User space will be able to invoke this test as described below, providing whatever evil data it thinks might expose bugs. It is worth noting that KFuzzTest will not, by itself, detect those bugs; it just runs the function with the provided data. So KFuzzTest will normally be run alongside tools like KASAN to detect when things go off the rails.

The test can include constraints on the data that is passed in. For example, if the definition of test_mangle_data includes a line like:

    KFUZZTEST_EXPECT_NOT_NULL(mangle_data_input, data);

then any input value will be checked to ensure that the data pointer is non-NULL. Should that condition not be met, the test itself will not be run. There is a whole set of KFUZZTEST_EXPECT_ macros that can be used to constrain the input data to the function; they can be found in this patch.

The test definition can also contain annotations with further information about the input data for the function. For example:

    KFUZZTEST_ANNOTATE_LEN(mangle_data_input, len, data);

documents that len is meant to be the length of the data array. Other possible annotations indicate that a given argument is an array, or that it is expected to contain a C string. Annotations do not affect the running of the test itself. The constraints and annotations are compiled into a special section of the kernel executable, where user-space tools can find them.

On the user-space side, KFuzzTest sets up a debugfs directory (called fuzztest) with a subdirectory for each defined test. A user-space tool can use this directory to discover the available tests, but it must still read the kernel image file to obtain constraint and annotation information. It is also necessary to build the kernel with DWARF debugging information to provide information about the layout of structures; it is not clear why the kernel's BTF information, which is usually present and rather more compact, is not used for this purpose.

Running a test is a matter of writing some data to the input file in the test's directory; for our example above, that file would be .../kfuzztest/test_mangle_data/input. The format of that data is not simple, though. The input to a function may include pointers to a set of complex, pointer-connected data structures; KFuzzTest allows the testing tool to provide that whole set as input. To do so, user space must serialize the data into the format expected by KFuzzTest, then write the result to the input file.

The patch series includes a tool (kfuzztest-bridge) that can be used to run a test with random data; its input, too, is on the complex side. See this documentation patch for details on how all of this stuff works, and this patch for a couple of example tests. This work is still in the RFC stage, but there does appear to be a certain amount of interest in it, so it is likely to pass out of that stage at some point.

Comments (none posted)

Comparing Rust to Carbon

By Daroc Alden
September 16, 2025

RustConf

Safe, ergonomic interoperability between Rust and C/C++ was a popular topic at RustConf 2025 in Seattle, Washington. Chandler Carruth gave a presentation about the different approaches to interoperability in Rust and Carbon, the experimental "(C++)++" language. His ultimate conclusion was that while Rust's ability to interface with other languages is expanding over time, it wouldn't offer a complete solution to C++ interoperability anytime soon — and so there is room for Carbon to take a different approach to incrementally upgrading existing C++ projects. His slides are available for readers wishing to study his example code in more detail.

Many of the audience members seemed aware of Carbon, and so Carruth spent relatively little time explaining the motivation for the language. In short, Carbon is a project to create an alternative front-end for C++ that cuts out some of the language's more obscure syntax and enables better annotations for compiler-checked memory safety. Carbon is intended to be completely compatible with C++, so that existing C++ projects can be rewritten into Carbon on a file-by-file basis, ideally without changing the compiler or build system at all. Carbon is not yet usable — the contributors to the project are working on fleshing out some of the more complex details of the language, for reasons that Carruth's talk made clear.

[Chandler Carruth]

"It's always a little exciting to talk about a non-Rust programming language at RustConf," Carruth began, to general laughter. He has worked in C++ for many years, and has been working on Carbon since the project started in 2020. Currently, he is paid for his work on Carbon as part of Google's languages and compilers team. He briefly showed some research from Google indicating that the majority of the security vulnerabilities it deals with could have been prevented by memory-safe languages, but he didn't spend too long on it because he expected the audience of RustConf to be well-aware of the benefits of memory safety.

The thing is, there is a lot of existing software in the world written in C and C++. There is no magic wand to make that software go away. Migrating any of it to memory-safe languages will require those languages to integrate with the rest of the existing software ecosystem, he said. Interoperability is not just nice to have — it's a key part of what makes adopting memory-safe languages work.

Rust already has several tools to make interoperating with C/C++ code feasible. Carruth listed Rust's native foreign-function interface, bindgen and cbindgen, the cxx crate, and Google's own Crubit. But he claimed that none of these are really good solutions for existing C++ software. He defined software as existing on a spectrum between "greenfield" (new code, not tightly coupled to C++, with strong abstraction boundaries) and "brownfield" (tightly coupled to existing C++, with a large API surface). Greenfield software is relatively easy to port to Rust — it can be moved one module at a time, using existing binding tools. Brownfield software is a lot harder, because it can't be easily decomposed, and so the interface between code written in C++ and code written in Rust has to be a lot more complex and bidirectional.
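
As a point of comparison for the scale of the problem, here is a minimal example of Rust's native foreign-function interface, calling a plain C function from the C library (this is illustrative only; it is the kind of declaration that bindgen would normally generate from a header):

    use std::ffi::CString;
    use std::os::raw::c_char;

    // A hand-written declaration of a C function; bindgen automates this.
    extern "C" {
        fn strlen(s: *const c_char) -> usize;
    }

    fn main() {
        let s = CString::new("hello from Rust").expect("no interior NUL byte");
        // Every call across the boundary is unsafe: the compiler cannot check
        // the foreign signature, ownership rules, or aliasing expectations.
        let len = unsafe { strlen(s.as_ptr()) };
        println!("C strlen() sees {len} bytes");
    }

Plain C functions like this are the easy case; the brownfield code Carruth was describing exposes large, tightly coupled C++ API surfaces that per-function bindings of this kind struggle to cover.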

The question, Carruth said, is whether Rust can ever close the gap. He doesn't think so — or, at least, not soon and not without a monumental effort. But Rust is not the only approach to memory safety. Ideally, existing C++ code could be made memory-safe in place. Lots of people have tried that, but "the C++ committee is probably not going to do it". There's no way to successfully add memory safety to C++ as it is, he said.

There are several languages that have managed a transition away from a base language into a more capable, flexible successor language, though: TypeScript is an evolution of JavaScript, Swift is an evolution of Objective-C, and C++ itself is an evolution of C. Carruth thinks that Carbon could be a similar evolution of C++ — a path to incremental migration toward a memory-safe language, prioritizing the most entrenched brownfield software. Rust is coming at the problem of memory safety from the greenfield direction, he said, and Carbon is coming at it from the other side. That makes Rust and Carbon quite different languages.

A closer look

The real focus of his talk was on showing where those differences are, and where he thinks each language can learn from the other. The syntaxes of Rust and Carbon are "not wildly different"; the differences he wanted to focus on were more abstract. For example, in Rust, a compilation unit is an entire crate, potentially composed of several modules. Therefore, it's allowed for modules to reference each other in a cyclic way, and that just works. That isn't something Carbon can support because "existing C++ code is often oddly dependent" on the ability to compile individual files separately. So, Carbon inherits C++'s model, complete with forward declarations, (optional) separate header files, and more complexity in the linker. This makes Carbon's model more complex, but that complexity doesn't come from nowhere — "it comes from C++".
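
As a minimal sketch (not from the talk) of what that means in practice, two modules in the same crate can refer to each other freely, with no forward declarations or headers, because the whole crate is compiled as one unit:

    // Mutually recursive modules within a single crate.
    mod even {
        pub fn is_even(n: u32) -> bool {
            if n == 0 { true } else { crate::odd::is_odd(n - 1) }
        }
    }

    mod odd {
        pub fn is_odd(n: u32) -> bool {
            if n == 0 { false } else { crate::even::is_even(n - 1) }
        }
    }

    fn main() {
        assert!(even::is_even(10));
        assert!(odd::is_odd(7));
    }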

Another example is the difference between traits and classes. Rust traits and Carbon classes are not that different, syntactically — Carbon just writes methods inside the struct definition, while Rust writes them separately — but they have major conceptual differences. Carbon has to handle inheritance, virtual functions, protected fields, and so on. "This stuff is complexity that Rust just doesn't have and doesn't have to deal with." Carbon wants to meet C++ APIs where they are, he said. One can even inherit across the C++/Carbon boundary.
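
For readers less familiar with the Rust side of that comparison, this minimal sketch (not from the talk) shows the separation described above: data is declared in a struct, methods live in a separate impl block, and shared interfaces are expressed as traits rather than through inheritance:

    // A trait plays the role of an interface; there are no base classes,
    // virtual functions, or protected fields to model.
    trait Shape {
        fn area(&self) -> f64;
    }

    struct Circle {
        radius: f64,
    }

    // Behavior is attached outside the type definition itself.
    impl Shape for Circle {
        fn area(&self) -> f64 {
            std::f64::consts::PI * self.radius * self.radius
        }
    }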

This sort of difference is pervasive, he said, and comes up in all parts of the language. Operator overloading, generics, and type conversions are all more complex in Carbon. Why do it this way? Why is all of this additional complexity worth it? To explain, he showed an example of a hypothetical but not unusual C++ API:

    int EVP_AEAD_CTX_seal_scatter(
        const EVP_AEAD_CTX *ctx,
        std::span<uint8_t> out,
        std::span<uint8_t> out_tag,
        size_t *out_tag_len,
        std::span<const uint8_t> nonce,
        std::span<const uint8_t> in,
        std::span<const uint8_t> extra_in,
        std::span<const uint8_t> ad);

The example was adapted from a real function in the BoringSSL cryptography library. Each std::span is a combination of a pointer and a length. The main problem with faithfully representing it in Rust is actually not visible in the code itself; the documentation for this function explains that out must either be the same pointer as in, or a completely non-overlapping piece of memory. When the pointers are the same, the function encrypts a given plaintext input buffer in place. Otherwise, the encrypted output is written to the output buffer without disturbing the input buffer. None of the other pointers are supposed to alias.

Carbon is still a work in progress, but the current plan for expressing APIs like this in a machine-checkable way is to use "alias sets". These would be annotations showing which pointers are permitted to alias each other, and which ones aren't. The resulting Carbon code might look like this:

    fn EVP_AEAD_CTX_seal_scatter[^inout](
        ctx: const EVP_AEAD_CTX ^*,
        out: slice(u8 ^inout),
        out_tag: slice(u8 ^),
        out_tag_len: u64 ^*,
        nonce: slice(const u8 ^),
        input: slice(const u8 ^inout),
        extra_input: slice(const u8 ^),
        ad: slice(const u8 ^)) -> i32;

Here, inout is a name given to a particular alias set, and is used to annotate out and input. The other pointers in the function signature have no alias set specified, so the compiler would ensure that they cannot alias.

Trying to represent this API faithfully in Rust just doesn't work. The language never lets mutable references alias each other, so a binding ends up needing two separate wrapper functions, with different signatures for the in-place case and the copying case. Rewriting the module that contains this function in Rust would become a complex process, intermingling the simple translation of the actual code with the refactoring of the interface.
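
A hypothetical Rust binding makes the split concrete. The names and placeholder types below are invented for illustration; they are not from the talk or from any real BoringSSL bindings:

    // Placeholder types standing in for the real context and error types.
    pub struct AeadCtx;
    pub struct AeadError;

    // In-place variant: plaintext and ciphertext share one mutable buffer.
    pub fn seal_in_place(
        ctx: &AeadCtx,
        in_out: &mut [u8],
        out_tag: &mut [u8],
        nonce: &[u8],
        extra_in: &[u8],
        ad: &[u8],
    ) -> Result<usize, AeadError> {
        // ... pass the same pointer as both `in` and `out` to the C function ...
        unimplemented!()
    }

    // Scatter variant: the output buffer must not overlap the input.
    pub fn seal_scatter(
        ctx: &AeadCtx,
        out: &mut [u8],
        out_tag: &mut [u8],
        nonce: &[u8],
        input: &[u8],
        extra_in: &[u8],
        ad: &[u8],
    ) -> Result<usize, AeadError> {
        // ... pass distinct, non-overlapping pointers to the C function ...
        unimplemented!()
    }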

The power of Carbon for interoperability, Carruth said, is that it lets you decouple these things and do them as small, separate steps. He showed another example of a C++ program that was actually memory-safe, but that wasn't compatible with Rust's lifetime analysis. No computerized analysis of memory safety can ever be perfect, so Carbon presumably won't be able to do much better here — but in Carbon, patterns that the compiler cannot prove to be memory safe can be turned into a warning instead of an error.
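
Carruth's example was C++, but the same limitation is easy to demonstrate in Rust. As a rough analogy (not his example), the following function is memory-safe, yet the current borrow checker rejects it, because the shared borrow returned from the first get() is treated as live across the insert(); ongoing borrow-checker work such as Polonius is expected to accept patterns like this eventually:

    use std::collections::HashMap;

    // Memory-safe, but rejected today: rustc reports that `*map` cannot be
    // borrowed as mutable because it is also borrowed as immutable.
    fn get_or_insert<'a>(map: &'a mut HashMap<String, i32>, key: &str) -> &'a i32 {
        if let Some(value) = map.get(key) {
            return value;
        }
        map.insert(key.to_string(), 0);
        map.get(key).unwrap()
    }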

This focus on meeting C++ where it is makes Carbon a different language. It ends up being specially tailored to interoperability and gradual migration, which isn't free. This makes the language more complex than it could be otherwise, and Carruth doesn't think that's the right tradeoff for every language. But if the goal is to have memory-safe software throughout the software ecosystem, he thinks that there needs to be room for Rust and Carbon both. This isn't a competition between languages; it's two different languages working together to cover the widely divergent needs of different projects.

Comments (51 posted)

Typst: a possible LaTeX replacement

September 17, 2025

This article was contributed by Lee Phillips

Typst is a program for document typesetting. It is especially well-suited to technical material incorporating elements such as mathematics, tables, and floating figures. It produces high-quality results, comparable to the gold standard, LaTeX, with a simpler markup system and easier customization, all while compiling documents more quickly. Typst is free software, Apache-2.0 licensed, and is written in Rust.

Desire for a LaTeX replacement

LaTeX is a document typesetting system built on the foundation of Donald Knuth's TeX. LaTeX has become the standard tool for the preparation of scholarly papers and books in several fields, such as mathematics and computer science, and is widely adopted in others, such as physics. TeX and LaTeX, which predate Linux, are early free software success stories. The quality of TeX's (and therefore LaTeX's) output rivals the work of skilled hand typesetters for both text and mathematics.

Despite the acclaim earned by LaTeX, its community of users has been griping about it for years, and wondering aloud whether one day a replacement might arrive. There are several reasons for this dissatisfaction: the LaTeX installation is huge, compilation of large documents is not fast, and its error messages are riddles delivered by an infuriating oracle. In addition, any nontrivial customization or alteration to the program's behavior requires expertise in an arcane macro-expansion language.

Along with the griping came resignation: after decades of talk about a LaTeX replacement with nothing plausible on the horizon, and with the recognition that LaTeX's collection of specialized packages would take years to replace, it seemed impossible to dislodge the behemoth from its exalted position.

Introducing Typst

In 2019, two German developers, Laurenz Mädje and Martin Haug, decided to try to write a LaTeX replacement "just for fun". In 2022, Mädje wrote his computer science master's thesis about Typst. In March 2023, its first pre-release beta version was announced; a month later, semantic versioning was adopted with the release of v0.1.0. Typst is now at v0.13.1 and shows 365 contributors on its GitHub repository.

I had been aware of this project for over a year but had not paid much attention, assuming it to be yet another attempt to supplant LaTeX that was doomed to fail. A rising chorus of enthusiasm among early adopters, and the beginnings of acceptance of Typst manuscripts by scholarly journals, made me curious enough to take the young project for a spin.

Typst is available as Rust source and as a compiled binary. To install, visit the releases page and download the appropriate archive. There are options for Linux, macOS, and Windows; I used the precompiled Linux version for my testing.

The "typst" command accepts several subcommands. Entering "typst fonts" lists all of the usable fonts to be found in standard locations on the machine; nonstandard font directories can be added manually. In my case, Typst found all of my 476 fonts instantly; the only ones omitted were some ancient PostScript Type 1 fonts used by LaTeX. Users who have LaTeX installed will have a large collection of OpenType and TrueType math and text fonts on their machines; Typst can use all of these. But Typst will work fine without them, as the program has a small collection of fonts built in (try "typst fonts --ignore-system-fonts" to see them).

Two other subcommands to explore are "compile", which generates the output (PDF by default, with PDF/A, SVG, and PNG available, along with HTML under development) from a source file, and "watch" for interactive editing. The watch subcommand keeps a Typst process running that incrementally and automatically compiles the document to PDF in response to changes in the source. To use "typst watch" effectively, the screen should be divided into three windows: a small terminal window to monitor the typst output for error (or success) messages, the editing window, and an area for any PDF reader that automatically reloads the displayed document when it changes (many, such as my favorite, Sioyek, do this). The result is a responsive live preview, even of large documents, due to Typst's speed and incremental compilation. For example, Frans Skarman described his experience writing his doctoral thesis in Typst, and noted that he was able to enjoy nearly instant previews of content changes to the book-length document.

How Typst improves on LaTeX

Typst output is quite close to that of LaTeX. It uses the same line-breaking algorithm developed by Donald Knuth and Michael Plass for TeX, so it creates nicely balanced paragraphs of regular text. Its mathematical typesetting algorithms are based closely on the TeX algorithms, and indeed mathematical rendering is nearly indistinguishable between the two systems.

Getting started with LaTeX can be confusing for newcomers, because it comes with several alternative "engines" reflecting the long and complex history of the project. These are the various binaries such as "pdflatex", "tex", "xelatex", "luatex", "lualatex", and more, each with somewhat different capabilities. For Typst there is only "typst".

Markup in Typst is less verbose and easier to read than in LaTeX. It dispenses with the plethora of curly brackets and backslashes littering LaTeX documents by adopting, for prose, syntax in the style of Markdown, and, for equations, a set of conventions designed for easy input. The fact that curly brackets and backslashes are awkward to type on German keyboards may have provided a little extra impetus for the developers to create an alternative markup system that doesn't require a forest of these symbols.

When users make syntax errors in markup or programming, inevitable even in Typst, the system presents them with another dramatic improvement over LaTeX (and TeX): error messages using colored glyphs that clearly point out exactly where the problem is. I've even discovered that Typst will save me from trying to run a syntactically correct infinite loop.

Here is a bit of Typst markup for a shopping list, with the resulting rendering to the right:

[Rendered list]
   = Shopping List
   
   == Vegetables
   
   + Broccoli
   + Asparagus (*fresh only*)
   + Plantains (_ripe and green_)
   
   == Booze
   
   + Rum
     - White
     - Dark
   + #underline[Good] gin  

The example gives a flavor of Typst's terse markup syntax. Headings are indicated with leading = signs. Automatically numbered lists are created by prepending + signs to items, and bulleted lists with - signs; lists can be nested. The example also shows the delimiters for bold text (*) and italics (_). These shortcuts are syntactic sugar for Typst functions that transform text. Not every function has a corresponding shortcut; in those cases one needs to call the function explicitly, as in the final item.

Typst input is always within one of three modes. Markup (text) mode is the default. The # sign preceding the function call in the last line of the example places Typst in "code mode". The "underline()" function accepts a number of keyword arguments that affect its behavior, and one trailing argument, in square brackets, containing the text that it modifies. In the example, we've stuck with the default behavior, but if we wanted, for example, a red underline, we could use "#underline(stroke: red)[Good] gin". Following the square-bracketed text argument, Typst returns to interpreting input in text mode.

Other functions produce output directly, rather than modifying a text argument. This bit of Typst input:

    #let word = "Manhattan"
    There are #str.len(word) letters in #word.

produces the output (in typeset form) "There are 9 letters in Manhattan." The "len()" function is part of the "str" module, so it must be called with the namespace prefix.

Let's take a look at the LaTeX equivalent for the first half of the shopping list for comparison:

   \documentclass[12pt]{article}
   \begin{document}
   \section*{Shopping List}
   
   \subsection*{Vegetables}
   
   \begin{enumerate}
   \item Broccoli
   \item Asparagus ({\bfseries fresh only})
   \item Plantains (\emph{ripe and green})
   \end{enumerate}
   \end{document}   

The first two lines and the last one are boilerplate that is not required in Typst. The difference in verbosity and ease of reading the source is clear.

The third Typst mode, in addition to markup and code, is math mode, delimited by dollar signs. This is also best illustrated by an example:

   $ integral_0^1 (arcsin x)^2 (dif x)/(x^2 sqrt(1-x^2)) = π ln 2 $  

When this is compiled by Typst, it produces the result shown below:

[Rendered integral]

Those who've used LaTeX will begin to see from this example how math in Typst source is less verbose and easier to read than in LaTeX. Greek letters and other Unicode symbols can be used directly, as in modern LaTeX engines such as lualatex, which we looked at back in 2017, but with no imports required.
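
For comparison, a straightforward LaTeX transcription of the same formula (not taken from the journal-ready source of any particular document) needs noticeably more backslashes and braces:

    \[
      \int_0^1 \frac{(\arcsin x)^2}{x^2 \sqrt{1-x^2}} \, dx = \pi \ln 2
    \]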

The advent of the LuaTeX and LuaLaTeX projects gave users who wanted to incorporate programming into their documents a more pleasant alternative to the TeX macro language. As powerful as the embedded Lua system is, however, it betrays its bolted-on status, requiring users to negotiate the interface between Lua data structures and LaTeX or TeX internals. In Typst, programming is thoroughly integrated into the system, with no seams between the language used for calculation and the constructs that place characters in the final PDF. Typst programs are invariably simpler than their LuaLaTeX equivalents. All authors using Typst will make at least some simple use of its programming language, since basic necessities, such as changing fonts, and customizations, such as changing the style of section headings, are accomplished by calling Typst functions.

The Typst language is somewhat similar to Rust, perhaps unsurprisingly. Most Typst functions are pure: they have no side effects and always produce the same result given the same arguments (aside from certain functions that mutate their arguments, such as array.push()). This aspect reduces the probability of difficult-to-debug conflicts among packages that plague LaTeX, and makes it easier to debug Typst documents.

Although Typst uses the same line-breaking algorithm as LaTeX, its internal approach to overall page layout is distinct. Some consequences are that Typst does a better job at handling movable elements such as floating figures, and can, for example, easily split large tables across page breaks, something that LaTeX struggles with even with specialized packages.

Typst drawbacks

Typst's page layout algorithm doesn't always permit the refinements that LaTeX is capable of. For example, Typst is not as good as LaTeX at avoiding widows and orphans. Another salient deficiency is Typst's relative lack of specialized packages, compared with the vast ecosystem produced by LaTeX's decades of community involvement. However, the relative ease of programming in Typst (and the well-organized and extensively commented underlying Rust code) suggests that this drawback may be remedied before a comparable number of decades have elapsed. Indeed, there are already over 800 packages available. Typst still cannot do everything that LaTeX can, but the breadth of its package collection is encouraging.

Almost no journals that provide LaTeX templates for submissions offer a Typst option, so physicists and mathematicians adopting Typst will need to find a way to convert their manuscripts. This is made easier for those who use Pandoc, as that conversion program handles Typst.

Another drawback is the difficulty of learning Typst. The official documentation is confusingly organized, with information scattered unpredictably among "Tutorial", "Reference", and "Guides" sections. Concepts are not always clearly explained, and sometimes not presented in a logical order. The manual is not keeping up with the rapid development of the program, and contains some out-of-date information and errors. None of this is surprising considering how quickly the project is moving, its early stage, and its small core team. A work-in-progress called the Typst Examples Book has appeared, which may be a better starting point than the official documentation.

There are other minor deficiencies compared with LaTeX, such as the inability to include PDF documents. Typst provides no analogue to LaTeX's parshape command, which lets authors mold paragraphs to, for example, wrap around complex illustrations. The situation is likely to change, however, as something like parshape is being considered for the future.

More serious is the possibility of breaking changes as the system evolves, always a risk of early adoption. I suspect, however, that these will require only minor edits to documents in most cases. Progress seems to be steady, rational, and responsive to user requests.

Conclusion

I'm using Typst in real work right now to write a physics paper. I will need to submit my manuscript using the journal's LaTeX template, but I'm taking advantage of Typst to make the entry of the paper's many equations simpler, and I'll transform the result to LaTeX with Pandoc without needing any manual adjustment. The tooling is excellent, as my preferred editor, Neovim, has support for the Tree-sitter incremental parser for Markdown and Typst, which provides syntax-aware highlighting and navigation of the source files. I use Typst's fast incremental compilation to get live feedback as I fiddle with my math markup.

I was skeptical when I downloaded Typst to try it out, but became enthusiastic within minutes, as I saw the first (of many) of its lovely error messages, and remained sanguine as I saw the quality of its output. I predict that Typst will eventually take the place of LaTeX. But even if that never comes to pass, it is a useful tool right now.

Comments (46 posted)

Page editor: Jonathan Corbet


Copyright © 2025, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds