LWN.net Weekly Edition for April 20, 2023
Welcome to the LWN.net Weekly Edition for April 20, 2023
This edition contains the following feature content:
- TOTP authentication with free software: the use of time-based, two-factor authentication doesn't require proprietary and centralized systems.
- Process-level kernel samepage merging control: an attempt to make an old kernel feature more useful.
- Avoiding the merge trap: some hints for Git repository management to help kernel subsystem managers avoid merge-window snags.
- Textual: a framework for terminal user interfaces: an extensive Python module for developers of terminal-based applications.
- Vanilla OS shifting from Ubuntu to Debian: why and how this distribution is rebasing itself.
This week's edition also includes these inner pages:
- Brief items: Brief news items from throughout the community.
- Announcements: Newsletters, conferences, security updates, patches, and more.
Please enjoy this week's edition, and, as always, thank you for supporting LWN.net.
TOTP authentication with free software
One-time passwords (OTPs) are increasingly used as a defense against phishing and other password-stealing attacks, usually as a part of a two-factor authentication process. Perhaps the most commonly used technique is sending a numeric code to a phone via SMS, but SMS OTPs have security problems of their own. An alternative is to use time-based one-time passwords (TOTPs). The normal TOTP situation is to have all of the data locked into a proprietary phone app, but it need not be that way.

The TOTP approach is simple enough; it starts with a secret shared between the client and server sides. The algorithm used to generate an OTP starts by looking at the current time, usually quantized to a 30-second interval. That time is combined with the secret, hashed, and used to generate a six-digit code that is used as the password. Both the client and server sides will generate a code at authentication time; if the client can provide the same code that the server calculates, then authentication succeeds. The code can only be used once and, in any case, is only valid for a short period.
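The algorithm described above is standardized as RFC 6238 (with the truncation step coming from RFC 4226) and fits in a few lines of Python using only the standard library; this sketch assumes the common defaults of SHA-1, 30-second steps, and six digits:

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, t=None, interval=30, digits=6):
    """Generate a TOTP code from a base32-encoded shared secret."""
    key = base64.b32decode(secret_b32.upper())
    # Quantize the time to 30-second steps to get the HOTP counter
    counter = int(time.time() if t is None else t) // interval
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    # Dynamic truncation (RFC 4226): the last nibble selects four bytes
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Both sides of the authentication run exactly this computation; as long as their clocks agree to within a time step, the codes will match.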
TOTP can thus be used to prove possession of the shared secret at a specific point in time. It is convenient because it requires no special hardware; anything with a CPU and an accurate clock can generate a TOTP. On the client side, one program can be used to manage TOTPs for any number of sites. Users tend to default to proprietary phone apps like Google Authenticator, but there are some clear downsides to doing so. Among those are the unwise nature of trusting proprietary code with identity information and the pain that comes with losing the device running the app. In the free-software world, there should be a better way.
TOTP apps
A quick look on F-Droid turns up a number of free TOTP apps. Your editor gave two of them a try.
TOTP secrets are arbitrary base32 strings and, thus, not much fun to type on a handset keyboard. Happily, most sites implementing TOTP are able to generate a QR code with the secret, and Aegis, the first of the apps your editor tried, can use the camera to read them. As a result, adding new sites is easily done.
By default, Aegis will show a screen with all known sites, displaying the current OTP for each. Tapping on a given site will copy the code for pasting into a form somewhere else. It is possible to assign sites to groups, providing a single level of organization that can be useful when the number of sites gets large. There are also facilities for searching for sites, but if that is required just to obtain an access code the usability battle has already been lost.
Aegis has various features for importing and exporting its data. The import screen is a wonder to behold, with support for a large number of other apps. There are a few formats available for export, including an Aegis-specific JSON format and plain text. The export file will be encrypted unless the user taps past a couple of warnings about how dangerous an unencrypted export can be — and another warning that an unencrypted export has been made endures on the main screen.
Another popular TOTP app is FreeOTP+, which is a fork of the FreeOTP app originally released (under the Apache2 license) by Red Hat. Superficially, FreeOTP+ is similar to Aegis, in that it presents a screen full of known accounts. It does not actually display the code for any given account until it has been tapped on, though. This app seemingly does not encrypt its secrets data; it can be configured to require authentication at startup before providing any codes, but does not do so by default.
Like Aegis, FreeOTP+ can read TOTP secrets from a QR code, easing the process of setting up new sites. The import and export options for FreeOTP+ are more limited than those supported by Aegis, but they will suffice to get data into or out of the app. There is no support for organizing accounts into groups. In the end, FreeOTP+ comes across as being less well developed than Aegis but, in truth, it is more than good enough to get this simple job done.
TOTP on the desktop
Authenticator apps are convenient, but some of us still use real computers and often want to access sites that way. Your editor, unlike his offspring, does not have a phone surgically implanted, so logging into a site can lead to a scramble to figure out where the damn phone is so that the code can be produced. It sure would be nice to be able to generate the code directly on the system that is used to access a site.
The pass password manager has a number of nice features, including its command-line orientation, use of GnuPG, and use of Git to store password information. It turns out that there is also an extension called pass-otp that can be used to generate TOTP codes for a site. Once the extension is installed, using it is just a matter of adding an otpauth://totp/ line to the file for the site in question; this line is most easily obtained from a plain-text export from one of the above-mentioned apps.
The new line can be anywhere in the file, so it can coexist with the existing (reusable) password that must be the first line. The pass otp command will generate the code at any given time, likely requiring the entry of the user's GnuPG key passphrase to do it; there is an option to copy it to the clipboard for easy pasting into a web form. One thing pass otp lacks is an indication of how long the generated code will be valid.
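A pass entry file with TOTP enabled might therefore look like the following (the site, password, and secret here are hypothetical example values, not taken from a real export):

```
correct-horse-battery-staple
otpauth://totp/example.com:user?secret=JBSWY3DPEHPK3PXP&issuer=example.com
```

The first line remains the site's reusable password, so existing pass workflows keep working; pass otp simply looks for the otpauth:// line elsewhere in the file.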
Pass provides everything that many of us need, but for people who are more graphically inclined, KeePassXC can also manage TOTPs. Enabling TOTP for a site is a matter of going into the edit screen, hitting "Advanced", then entering the otpauth://totp/ line in the provided place. After that, the application will show a little clock face that, when clicked on, will calculate and show the code. The application's documentation recommends storing TOTP data in a separate database from the one containing passwords, "possibly even on a different computer". Your editor would guess that this advice is not often followed.
Summary
Given the number of options available, there is almost no reason to use a proprietary TOTP app if one does not want to. Using free software for this purpose makes TOTP authentication available on more systems and allows the user to keep the sensitive identity information under their own control. The ease of backing up data from these applications and importing it into others means that the loss of a phone need not cause the loss of access to important accounts on the net. This is one area where free-software users are well provided for.
Process-level kernel samepage merging control
The kernel samepage merging (KSM) feature can save significant amounts of memory with some types of workloads, but security concerns have greatly limited its use. Even when KSM can be safely enabled, though, the control interface provided by the kernel makes it unlikely that KSM actually will be used. A small patch series from Stefan Roesch aims to change this situation by improving and simplifying how KSM is managed.
As its name would suggest, KSM works by finding pages of memory with identical contents and merging them into a single copy that is shared by all users. An early use case, as described by Avi Kivity in 2008 when the feature was first proposed, was "the typical multiuser gnome minicomputer with all 150 users reading lwn.net at the same time instead of working"; this workload would generate a lot of identical cache pages that could be shared rather than duplicated across the system. There are other use cases, such as virtual machines or containers running the same software, that could also be optimized once the important workloads have been addressed.
There can be value in performing this kind of deduplication. Some workloads, it turns out, produce a lot of identical pages; merging those pages cuts the memory use of those workloads considerably, allowing more work to be crammed into the system. KSM thus looked appealing. The merging of this feature was delayed for some time as the result of security and patent concerns, but it finally made it into the 2.6.32 release at the end of 2009. It turns out, though, that there were more security problems inherent in this mechanism than had been originally thought.
KSM works by scanning pages in memory to locate pages with the same contents. When such pages are found, all users are made to share a single copy of that page, while the duplicate copies are returned to the system's free list. Some care must be taken, though: KSM works with anonymous pages (file-backed pages are already shared via the page cache), and the owners of those pages can change their contents at any time. One process's change should, clearly, not affect any other processes that might be sharing that page. To ensure correct behavior, the kernel marks shared pages as read-only; should a process write to such a page, a copy-on-write (COW) operation will be performed to give the writing process its own copy again. With this mechanism in place, pages can be safely shared between processes that do not trust each other, with those processes being entirely unaware of the tricks going on behind their back.
At least, that was the intent. A write to an exclusively owned page is a fast operation, while a write to a read-only page that forces a COW operation takes quite a bit longer. A hostile process can use this timing difference to determine whether a page had been shared by the kernel or not; that, in turn, allows the attacker to determine whether a page with specific contents exists in the system. Given enough time, these timing attacks can be used to find out whether a specific program is running on the system, determine another process's address-space layout randomization offsets, or even exfiltrate cryptographic keys.
The security concerns associated with KSM are sufficiently worrying that the feature is not generally enabled. The only way to turn it on is for a process that wishes to participate in sharing to make an madvise() call (MADV_MERGEABLE) to enable KSM for a specific memory range. Few programs do this, so KSM is not widely used. As Roesch notes in the cover letter, even developers who are aware of KSM and wish to enable it may be unable to if they are working in a garbage-collected language that does not provide access to madvise().
The proposed solution to this problem is a new prctl() operation (PR_SET_MEMORY_MERGE) that sets the KSM status for a process as a whole. If this operation is used to turn on KSM, every virtual memory area (VMA) within the process that could be merged will have KSM enabled, as will any eligible VMAs created after the call. This setting is inherited by any child processes so, for example, if the first process for a control group or virtual machine enables KSM, all descendant processes will have it enabled as well.
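An orchestration system could use the new operation along these lines; this ctypes sketch is illustrative only, and the PR_SET_MEMORY_MERGE value of 67 is an assumption taken from the proposed uapi headers (the call fails with EINVAL on kernels that lack the feature):

```python
import ctypes
import ctypes.util

# Not yet in most installed headers; value from the proposed
# uapi definition (assumption -- check your kernel's prctl.h)
PR_SET_MEMORY_MERGE = 67

def enable_process_ksm():
    """Ask the kernel to mark every eligible VMA in this process mergeable.

    Returns True on success; False if the kernel lacks the feature or
    otherwise refuses the request.
    """
    libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)
    return libc.prctl(PR_SET_MEMORY_MERGE, 1, 0, 0, 0) == 0
```

Because the setting is inherited by children, a container manager would make this call before forking the workload's first process, and every descendant would then participate in KSM without any code changes of its own.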
This feature allows an orchestration system for virtual machines or containers to enable KSM for the workloads it launches; the use of KSM will no longer be something that individual processes must opt into. The orchestration system may know enough about the workloads it runs to be able to determine whether KSM can be enabled safely; individual processes generally lack that view. Systems where KSM can be enabled in this way, Roesch said, have "shown a capacity increase of around 20%". In other words, the system described by Kivity 15 years ago would now be able to support 180 unproductive LWN readers, which seems like an improvement for everybody involved.
The patch set makes a few other changes as well, mostly aimed at improving the effectiveness metrics produced by KSM, so administrators can determine whether the run-time overhead of scanning pages is justified by the resulting memory savings.
This patch set is currently in its fifth revision. The comments on previous postings have mostly been concerned with where the memory savings are coming from. In the early days of KSM, the biggest win came from workloads that kept a lot of zero-filled pages around, but that does not appear to be the case anymore; KSM is deduplicating a lot of non-zero pages. There does not appear to be much that would block this series from landing upstream in the near future.
Avoiding the merge trap
The kernel subsystem maintainers out there probably have a deep understanding of the sinking feeling that results from opening one's inbox and seeing a response from Linus Torvalds to a pull request. When all goes well, pull requests are acted upon silently; a response usually means that all has not gone well. Several maintainers got to experience that feeling during the 6.3 merge window, which seemed to generate more than the usual number of grumpy responses related to merge commits. Avoiding that situation is not hard, though, with a bit of attention paid to how merges are done.

When using a distributed system like Git, development is done in numerous parallel tracks, each of which has its own starting point. Even if a particular project starts at the tip of the mainline tree, the mainline itself is almost certain to have moved on by the time that work is ready to land there. Bringing independent lines of development back together is called "merging"; depending on what has changed, any given merge can be simple or a nasty mess of conflicting changes.
There are many projects out there that disallow merges entirely, insisting that their development repository consist of a single sequence of commits. In such projects, developers must rebase their work before proposing it for upstream. The kernel project, though, has no problem with merges; indeed, the development process would not work without them. Consider that Torvalds did 208 pulls during the 6.3 merge window, each of which added commits to the mainline. If each pull request had to be rebased each time Torvalds did a pull to avoid a merge, little actual work would get done. Instead, almost every pull into the mainline results in another merge.
Subsystem maintainers often do merges of their own, creating merge commits that end up being pushed into the mainline with the rest of their work. That is all normal, but it is those merge commits that tend to land maintainers in trouble. Why is it that Torvalds can create hundreds of merges during a typical development cycle, but subsystem maintainers get grumbled at?
There are two things to watch out for when creating a merge in a subsystem tree. The first is that Torvalds insists that each merge be properly explained. Merges are commits too, and their changelog should say why the merge is being done. It is easy to just accept the default message that Git creates when adding a merge commit, but the resulting commit will have no useful explanation, which is the equivalent of waving a large red flag for Torvalds to see. Unexplained merges thus have a high likelihood of generating one of those unwanted replies.
The other hazard is related, but arguably more subtle. Torvalds insists on an explanation of each merge because, it seems, he feels that many merges done by subsystem maintainers are unnecessary. "Back merges", where the current state of the mainline is merged back into a subsystem tree, come under extra scrutiny. Such merges clutter the development history and should be avoided if they are not needed. One way to determine whether a merge is needed is to explain the need for it; if the maintainer cannot write a changelog making the need for the merge clear, then perhaps the merge should not be done at all.
Merges into subsystem trees are done for a number of reasons. These can include merging a topic branch that is ready to head upstream or bringing in work from another developer or maintainer. The need for the merge is clear in such cases, and the form that the changelog describing it should take is also clear. Most of the merges done by Torvalds himself are of this type, and each one carries a changelog describing the new functionality that the merge brings. Your editor, who has made a habit of following traffic into the mainline for rather too many years at this point, can attest that the quality of those merge messages has increased considerably over the years.
Almost any other merge is meant to bring in changes that take a different path into the mainline; back-merging from the mainline itself is the clearest example of that. Maintainers often manage two branches for work heading upstream, one for urgent fixes and one for larger work waiting for the next merge window. It is quite common to see, at some point, the fixes branch merged into the development branch, even though the fixes have likely already been sent to Torvalds separately. This is the kind of merge that Torvalds tends to question, though.
This email describes some of his thinking with regard to this sort of internal merge. One should not, he said, merge another branch just because it seems that some other work is going to need the changes found there. Instead, that other work should just be based on the fixes in the first place:
Because the "nice git way" to do that kind of thing is to actually realize "oh, I'm starting new work that depends on the fixes I already sent upstream, so I should just make a new topic branch and start at that point that I needed".

And then - once you've done all the "new work" that depended on that state, only at *THAT* point do you merge the topic branch.
And look - you have exactly the same commits: you have one (or more) normal commits that implement the new feature, and you have one merge commit, but notice how much easier it is to write the explanation for the merge when you do it *after* the work.
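The workflow Torvalds describes can be sketched with a few git commands; the repository, branch, and commit names here are entirely hypothetical, and "git init -b" assumes git 2.28 or later:

```shell
# Build a throwaway repo to illustrate the topic-branch workflow
dir=$(mktemp -d) && cd "$dir"
git init -q -b main .
git config user.email maint@example.com
git config user.name "Subsystem Maintainer"
echo base > driver.c; git add driver.c; git commit -qm "driver: initial code"
# An urgent fix, sent upstream separately via the fixes branch
echo fix >> driver.c; git commit -qam "driver: fix urgent bug"
# New work that depends on the fix starts a topic branch at that point
git checkout -qb feature-topic
echo feature > feature.c; git add feature.c
git commit -qm "driver: add feature that needs the fix"
# The merge commit is written after the work, when its purpose is
# easy to explain in the changelog
git checkout -q main
git merge -q --no-ff \
    -m "Merge 'feature-topic': new feature built on the urgent fix" \
    feature-topic
git log --oneline --merges
```

The resulting history contains the same commits as a premature back merge would, but the single merge commit now has an obvious reason for existing and a changelog that states it.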
As is so often the case, there is no hard rule here. If nothing else, it is not uncommon for new work to depend on both the fixes and supporting work that is in the development branch; that makes it hard to create a base for that work without merging first. And Torvalds acknowledged that some "superfluous merges" are not really a problem — as long as they are adequately explained.
So, for subsystem maintainers who would prefer not to get email from Torvalds in response to a pull request, there are a couple of simple rules to avoid the merge trap. Take the time to write an actual commit message explaining why the merge was done. But, before that, take a moment to think about whether there actually is a reason to do the merge. The result should be a welcome reduction of merge-window stress and a cleaner commit history for the kernel as a whole.
Textual: a framework for terminal user interfaces
For developers seeking to create applications with terminal user interfaces (TUIs), options have been relatively limited compared to the vast number of graphical user interface (GUI) frameworks available. As a result, many command-line applications reinvent the same user interface elements. Textual aims to remedy this: it's a rapid-application-development framework for Python TUI applications. Offering cross-platform support, Textual incorporates layouts, CSS-like styles, and an expanding collection of widgets.
While colors, interactivity, and all sorts of widgets are standard features in graphical applications, terminal applications are often a bit more basic. And although many terminal applications support the mouse, it's an exception to see scroll bars, clickable links, and buttons. Python developer Will McGugan saw an opportunity there, and started working on Textual in 2021.
Textual supports Linux, macOS, and Windows, and is MIT-licensed. After installing it using Python's package manager pip, running "python -m textual" displays a nice demo of its capabilities, as seen in the image below. A typical Textual application has a header with the application's title or other information, a footer with some shortcut keys for commands, a sidebar that can be toggled, and a main area with various widgets.
Those widgets are components of the user interface responsible for generating output for a specific region of the screen. They may respond to events, such as clicking them or typing text in them. Textual comes with two dozen basic widgets, including buttons, checkboxes, text inputs, labels, radio buttons, and even tabs. There are also some widgets for complex data structures, such as a data table, (directory) tree, list view, and Markdown viewer. Developers can also create custom widgets by extending a built-in widget class.
Textual builds upon Rich, a project that McGugan started in 2020. It started out as a simple tool to colorize text in the terminal, but has since grown into a library for rich text and nice formatting in command-line Python applications. Rich is targeted at enhancing the appearance of text output in existing console applications that aren't necessarily interactive. It doesn't offer interactive input handling or a framework for building complete applications. However, Rich renderables, which are components that implement the Rich Console protocol, can be used in Textual applications. For example, this is useful to add complex content such as a tree view in cells of Textual's DataTable, or a table as an element of Textual's OptionList.
Inspired by the web
Before building Rich and Textual, McGugan was a web developer. This shows in Textual's architecture, which heavily borrows from web-development techniques. The design of a Textual application can be done entirely outside of the Python code, by including a file with Cascading Style Sheets (CSS) directives. That way, a Textual application's code purely describes its behavior, while the CSS file defines the layout, colors, size, and borders of various widgets.
For example, here's the Python code for a simple Textual app, based on one of the examples in the Textual Guide:
    from textual.app import App, ComposeResult
    from textual.containers import Container, Horizontal
    from textual.widgets import Header, Footer, Static, Button

    QUESTION = "Do you want to learn about Textual CSS?"

    class ExampleApp(App):
        BINDINGS = [
            ("d", "toggle_dark", "Toggle dark mode"),
            ("q", "quit", "Quit"),
        ]
        CSS_PATH = "question.css"

        def compose(self) -> ComposeResult:
            """Place all widgets."""
            yield Header()
            yield Footer()
            yield Container(
                Static(QUESTION, classes="question"),
                Horizontal(
                    Button("Yes", variant="success"),
                    Button("No", variant="error"),
                    classes="buttons",
                ),
                id="dialog",
            )

        def action_toggle_dark(self) -> None:
            """Toggle dark mode."""
            self.dark = not self.dark

        def on_button_pressed(self, event: Button.Pressed) -> None:
            """Exit app with id of the button pressed."""
            self.exit(event.button.label)

    if __name__ == "__main__":
        app = ExampleApp()
        print(app.run())
[Screenshot: the example Textual application]
This defines an app with bindings for shortcut keys. The compose() method adds widgets to the application screen: a header, a footer, and a container with some other widgets. When a "d" is pressed on the keyboard, its corresponding action method action_toggle_dark() is called. And when one of the buttons in the dialog is pressed with the mouse, the on_button_pressed() method is called. The action_quit() method to exit the application is already defined in the base class.
The corresponding CSS file looks like this:
    /* The top level dialog */
    #dialog {
        height: 100%;
        margin: 4 8;
        background: $panel;
        color: $text;
        border: tall $background;
        padding: 1 2;
    }

    /* The button class */
    Button {
        width: 1fr;
    }

    /* Matches the question text */
    .question {
        text-style: bold;
        height: 100%;
        content-align: center middle;
    }

    /* Matches the button container */
    .buttons {
        width: 100%;
        height: auto;
        dock: bottom;
    }
The Textual CSS dialect is simpler than the full CSS specification for the web, because it reflects the leaner capabilities of the terminal. Each Textual widget comes with a default CSS style, which can be changed by adding a .css file in the same directory as the application's Python files and assigning its name to the class variable CSS_PATH. In its default light and dark themes, Textual defines a number of colors as CSS variables, such as $panel, $text, and $background that are seen in the example.
Just like its web counterpart, Textual CSS knows how to use CSS selectors to define a style for a specific type of widget or a widget with a specific ID or class. In the CSS file above, #dialog refers to the Textual widget with ID dialog, Button styles all of the Button objects, and .question and .buttons define the styles for all objects with CSS classes question and buttons, respectively. There are also pseudo-classes to match widgets in a specific state, such as having the mouse cursor hover over it, being enabled or disabled, or having input focus.
If a widget needs to be rendered differently based on its state, this can be done by defining CSS classes for the different states in the .css file. Each CSS class has a different style, and the application's Python code can change the CSS class of a widget in response to an event such as a button press. And for developers who are not that comfortable with defining their own CSS classes, Vincent Warmerdam has created tuilwindcss, which is a set of CSS classes for Textual widgets. It's inspired by the Tailwind CSS framework for web sites.
Textual also has a Document Object Model (DOM), inspired by its namesake in a web browser, although Textual doesn't use documents but widgets. In Textual CSS, the DOM is a tree-like structure of the application's widgets. For example, a dialog widget may contain button widgets. CSS selectors can also be used in the application's Python code to get a list of widgets matching a selector.
Another essential concept in Textual's architecture, borrowed from web frameworks such as Vue.js and React, is reactive attributes. These are special attributes of a widget. Every time the code writes to these attributes, the widget will automatically update its output in the terminal. Developers can also implement a watch method, which is a method with a name beginning with watch_ followed by the name of the reactive attribute. This watch method will then be called whenever the attribute is modified. Changes to a reactive attribute can also be validated, for example to restrict numbers to a given range.
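The pattern can be illustrated with a toy Python descriptor; this is emphatically not Textual's implementation, just a minimal sketch of the idea of storing a value and invoking a matching watch_ method on every write:

```python
class Reactive:
    """Toy descriptor mimicking reactive attributes (not Textual code)."""

    def __init__(self, default):
        self.default = default

    def __set_name__(self, owner, name):
        self.name = name

    def __get__(self, obj, objtype=None):
        if obj is None:
            return self
        return obj.__dict__.get(self.name, self.default)

    def __set__(self, obj, value):
        obj.__dict__[self.name] = value
        # Call watch_<name>() if the owning class defines one
        watcher = getattr(obj, f"watch_{self.name}", None)
        if watcher is not None:
            watcher(value)

class Counter:
    count = Reactive(0)

    def __init__(self):
        self.updates = []

    def watch_count(self, value):
        # A real Textual widget would refresh its screen region here
        self.updates.append(value)
```

In Textual itself, the watcher side effect is a redraw of the widget's region of the terminal, which is what makes assignment to a reactive attribute enough to update the display.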
Async-agnostic
It's important to note that Textual is an asynchronous framework. It has an event system for key presses, mouse actions, and internal state changes. Event handlers are methods of the application or widget class prefixed with on_ followed by the name of the event. For example, when a user types in an Input widget, Textual creates a key event for each key press and sends it to the widget's message queue. Each widget runs an asyncio task, picks the message from the queue, and calls the on_key() method with the event as its first argument. The Textual documentation describes its input handling in terms of mouse actions and key presses.
Initially, Textual required the application developer to use the async and await keywords, but currently they are optional. McGugan explained in a blog article how Textual accomplishes this async-independence. His rationale for making this optional is:
This is not because I dislike async. I'm a fan! But it does place a small burden on the developer (more to type and think about). With the current API you generally don't need to write coroutines, or remember to await things. But async is there if you need it.
In the recent Textual 0.18.0 release, the developers added a Worker API to make it even easier to manage async tasks and threads. The new @work decorator turns a coroutine or a regular function into a Textual Worker object by scheduling it as either an asyncio task or a thread. This should make concurrency in Textual applications, for example handling data from the network, less error-prone.
Developer-friendly
The Textual repository on GitHub has a number of example applications, ranging from a calculator and a code browser to a puzzle game. Learning Textual development is best done by reading through the project's extensive tutorial, which builds a stopwatch application from scratch, explaining Textual's concepts along the way.
Textual also offers a couple of useful tools during development. A command like "textual run --dev my_app.py" runs an application in development mode. This allows live editing of CSS files: any changes in that file will immediately appear in the terminal without having to restart the application.
Textual also has a way to debug applications using the framework. Because TUI applications are generally unable to use print(), since it would overwrite the other application content, Textual has a debug console that shows the output of print() commands in a separate console. This can be started with a simple "textual console" command. This console also shows log messages about all events happening in the Textual app.
On the roadmap
Textual's widgets already cover many common use cases, but there are still a lot of other widgets on the roadmap. The list includes a color picker, date picker, drop-down menu, progress bar, form, and multi-line input. The developers also plan some eye candy like sparklines, plots such as bar, line, and candlestick charts, as well as images using sixels.
Some existing widgets will also be extended with extra functionality. For example, the DataTable class will gain an API to update specific rows, as well as a lazy loading API to improve the performance of large tables. Similarly, the Input widget will be extended with validation, error and warning states, and templates for specific input types such as IP addresses, currency, or credit-card numbers.
Among the other features on the roadmap, accessibility appears to be an important one. Textual currently has a monochrome mode, but there will also be a high-contrast theme and color-blind themes. Integration with screen readers for blind people is also planned.
Textual is still quite a new project, with breaking changes occasionally appearing in new releases. Developers who use Textual in their own applications should probably follow its development closely. Distribution packages of the library are likely to be outdated, so installing Textual directly from PyPI is recommended.
Fortunately, the development team is quite approachable, with a Discord server to talk to its members, as well as a blog where the developers regularly share news. It's also interesting to note that McGugan founded Textualize at the end of 2021 to develop Rich and Textual. The company's four-person developer team is planning a cloud service to run Textual apps in the web browser as easily as in the terminal.
Conclusion
In its short life, Textual has already made great strides in demonstrating the capabilities of terminal user interfaces. Various Textual-based applications, many of which can be found on Textualize employee Dave Pearson's list, showcase its potential. They include an ebook reader, task manager, Bluetooth Low Energy scanner (one of my own projects, HumBLE Explorer, see the image below), file browser, and sticky-notes application.
Textual's inspiration from web-development techniques, including its CSS dialect for styling and reactive attributes, makes it a TUI framework with an innovative approach. In addition, it will be interesting to see how Textualize's plans for a cloud service turn out.
Vanilla OS shifting from Ubuntu to Debian
Vanilla OS, a lightweight, immutable operating system designed for developers and advanced users, has been using Ubuntu as its base. However, a recent announcement has revealed that in the upcoming Vanilla OS 2.0 Orchid release the project will be shifting to Debian unstable (Sid) as its new base operating system. Vanilla OS is making the switch due to Ubuntu's changes to its version of the GNOME desktop environment along with the distribution's reliance on the Snap packaging format. The decision has generated a fair amount of interest and discussion within the open-source community.
Other distributions have explored making a similar switch; for example, Linux Mint, as Hacker News user "pyrophane" pointed out in a comment on the Vanilla OS announcement. The Linux Mint Debian Edition (LMDE) was created "to ensure Linux Mint can continue to deliver the same user experience if Ubuntu was ever to disappear".
GNOME's customization by Ubuntu
The Vanilla OS announcement indicated that the decision to shift from Ubuntu to Debian Sid was driven in part by the desire to provide an unmodified experience for users. "Ubuntu provides a modified version of the GNOME desktop, that does not match how GNOME envisions its desktop." Debian is much closer to a vanilla GNOME experience because it provides the software without any major customization.
GNOME is designed to provide a consistent user experience across different Linux distributions. However, Ubuntu's modifications to the GNOME desktop often diverge from the upstream GNOME project's vision, leading to inconsistencies and compatibility issues with GNOME applications. This issue, which still persists today, was present as far back as 2020, when GNOME designer Tobias Bernard noted the difficulties in dealing with Ubuntu:
This category also includes distributions overriding upstream decisions around system UX, as well as theming/branding issues, due to problematic downstream incentives. This means there is no clear platform visual identity developers can target.
For example, Ubuntu 18.04 (the current LTS) ships with GNOME 3.28 (from March 2018), includes significant changes to system UX and APIs (e.g. Unity-style dock, desktop icons, systray extension), and ships a branded stylesheet that breaks even in core applications.
Ubuntu's focus on Snap
One of the primary concerns cited in the announcement was the problems associated with Snap. Back in 2020, Linux Mint dropped Snaps, citing a number of problems with the format and its required connection to the Ubuntu Store. When Ubuntu decided to stop shipping Flatpak by default earlier this year, user "rtklutts" on Slashdot listed numerous problems they see with Snaps:
There are so many well documented cases where Snaps suck. Let me cover some of them here. Slow start up. No ability to control when updates happen.. i.e. forced updates that you can only delay. No ability to control what gets pulled in with the application. Many apps with same dependencies bringing in multiple copies of the same dependencies. Maker of an app has too much control over the environment on your PC. Theming doesn't apply correctly to Snap applications.
A few of these issues have also caught the eye of the developers at Vanilla OS. "Based on our testing and many sources online, there are a lot of issues that Snap hasn't addressed currently, like slow startups, centralization, etc." Canonical controls the official Snap store, and all Snaps must be approved by Canonical to be distributed through it. That centralized control may be a concern for Vanilla OS, since it could lead to an abuse of power.
Security and stability
As the unstable version of Debian, Sid serves as the distribution's testing ground for new packages. That role, coupled with its continuously updated model, raises questions about its stability and security for users. The Debian project warns about using the distribution:
Please note that security updates for unstable distribution are not managed by the security team. Hence, unstable does not get security updates in a timely manner. [...]
"sid" is subject to massive changes and in-place library updates. This can result in a very "unstable" system that contains packages that cannot be installed due to missing libraries, dependencies that cannot be fulfilled etc. Use it at your own risk!
Even though Debian warns against it, some users think it's worth the risk and that the potential drawbacks aren't as bad as they are made out to be. In a Hacker News discussion about Vanilla's switch, user "tlamponi" compared the risks to those of Arch Linux:
I know the Debian project recommends against promoting the unstable Sid release for general (non-dev/maintainer) users, but IMO about as risky as running Arch Linux, i.e., quite safe. Debian Sid is the main initial entry point for new packages and so a rolling release which only pauses for a bit once every two years during the freeze for the stable releases.
Compared to Arch Linux it has a few advantages, like e.g., it actually cares about recording exact and sane versioned break/depends/conflicts so upgrading a system after year(s) will work out just fine without getting ones hand dirty. Further they track and install the kernel under a correctly ABI versioned path, so you can pull an upgrade with a new kernel and then still load a module from the currently booted kernel just fine, no reboot required, same for libraries. I mean I like Arch Linux, don't get me wrong it's my second favorite distro for sure, but having to immediately reboot after most updates as otherwise half the programs or kernel functionality is unavailable is a bit of a nuisance.
To address the concerns, however, the Vanilla OS developers have said that the distribution will limit the number of packages that it ships directly to the user to decrease the overall footprint of the system. Vanilla OS developers will keep up with Debian's security advisories to ensure that the base system remains secure. But they will only be testing the base image that is officially supported, where there are fewer potential sources of instability.
However, it is important to note that this limitation may be problematic for users who wish to install software outside of the core packages that Vanilla OS provides. Users will be left to their own devices for ensuring both security and system stability with regard to those packages.
While we don't know exactly which core packages will come with Vanilla's 2.0 Orchid release, we do know that they will be kept to the bare minimum, as noted in the announcement. This is not too dissimilar to its current version, which David Delony from MakeUseOf reported on earlier this year:
The Vanilla OS desktop uses the regular GNOME 3 desktop environment. It comes with the default set of GNOME apps and not much else. This means you'll have to rely on the package manager, but Vanilla OS is hardly unusual among Linux distros for that.
Vanilla OS doesn't even come with an office suite. If you need to do word processing or spreadsheets, you'll have to install something like LibreOffice. Fortunately, it's easy to add new packages despite Vanilla OS's unorthodox architecture.
Before installing software like LibreOffice from the Sid repositories, users will now have to consider whether it will introduce instabilities or security vulnerabilities to their system, since it is not being tested (or updated) by Vanilla OS. While users are still free to install whatever they want, the knowledge of the risk it could bring may impose limitations on what users will want to install on top of the base image.
Conclusion
While Debian offers a more "vanilla" experience and gives users the freedom of choice that the Vanilla OS project values, it also comes with potential instability and compatibility issues. Even though those concerns may be alarming, the project will be keeping an eye on them and has expressed a willingness to change distributions in the future. "If we run into stability and security issues down the line, then we will reconsider our decision." We will have to wait and see how this transition impacts the Vanilla OS community and the future growth of the distribution.
Brief items
Security
Garrett: PSA: upgrade your LUKS key derivation function
Matthew Garrett points out that many Linux systems using encrypted disks were installed with a relatively weak key derivation function that could make it relatively easy for a well-resourced attacker to break the encryption:
So, in these days of attackers with access to a pile of GPUs, a purely computationally expensive KDF is just not a good choice. And, unfortunately, the subject of this story was almost certainly using one of those. Ubuntu 18.04 used the LUKS1 header format, and the only KDF supported in this format is PBKDF2. This is not a memory expensive KDF, and so is vulnerable to GPU-based attacks. But even so, systems using the LUKS2 header format used to default to argon2i, again not a memory expensive KDF. New versions default to argon2id, which is. You want to be using argon2id.
The article includes instructions on how to (carefully) switch an installed system to a more secure setup.
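In outline, the fix involves converting a LUKS1 header to the LUKS2 format and then re-deriving the key slot with the memory-hard argon2id KDF. The following is a rough sketch of those steps, not Garrett's exact instructions; the device path /dev/sda3 is a placeholder, the encrypted volume must be closed first, and a header backup beforehand is strongly advised:

```shell
# Inspect the current header version and the KDF used by each key slot
cryptsetup luksDump /dev/sda3

# Back up the LUKS header before making any changes
cryptsetup luksHeaderBackup /dev/sda3 --header-backup-file luks-header.img

# If the device still uses a LUKS1 header, convert it to LUKS2
# (only PBKDF2 is available in the LUKS1 format)
cryptsetup convert /dev/sda3 --type luks2

# Re-derive the key slot using argon2id, which resists GPU-based attacks
cryptsetup luksConvertKey /dev/sda3 --pbkdf argon2id
```

Since the header conversion rewrites on-disk metadata, it is best done from a rescue or live environment rather than on the running system.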
Security quotes of the week
For modern UEFI systems, the firmware that's launched from the reset vector then reprograms the CPU into a sensible mode (ie, one without all this segmentation bullshit), does things like configure the memory controller so you can actually access RAM (a process which involves using CPU cache as RAM, because programming a memory controller is sufficiently hard that you need to store more state than you can fit in registers alone, which means you need RAM, but you don't have RAM until the memory controller is working, but thankfully the CPU comes with several megabytes of RAM on its own in the form of cache, so phew). It's kind of ugly, but that's a consequence of a bunch of well-understood legacy decisions.
Except. This is not how modern Intel x86 boots. It's far stranger than that. Oh, yes, this is what it looks like is happening, but there's a bunch of stuff going on behind the scenes. Let's talk about boot security. The idea of any form of verified boot (such as UEFI Secure Boot) is that a signature on the next component of the boot chain is validated before that component is executed. But what verifies the first component in the boot chain? You can't simply ask the BIOS to verify itself - if an attacker can replace the BIOS, they can replace it with one that simply lies about having done so. Intel's solution to this is called Boot Guard.
But before we get to Boot Guard, we need to ensure the CPU is running in as bug-free a state as possible. So, when the CPU starts up, it examines the system flash and looks for a header that points at CPU microcode updates. Intel CPUs ship with built-in microcode, but it's frequently old and buggy and it's up to the system firmware to include a copy that's new enough that it's actually expected to work reliably. The microcode image is pulled out of flash, a signature is verified, and the new microcode starts running. This is true in both the Boot Guard and the non-Boot Guard scenarios. But for Boot Guard, before jumping to the reset vector, the microcode on the CPU reads an Authenticated Code Module (ACM) out of flash and verifies its signature against a hardcoded Intel key. If that checks out, it starts executing the ACM. Now, bear in mind that the CPU can't just verify the ACM and then execute it directly from flash - if it did, the flash could detect this, hand over a legitimate ACM for the verification, and then feed the CPU different instructions when it reads them again to execute them (a Time of Check vs Time of Use, or TOCTOU, vulnerability). So the ACM has to be copied onto the CPU before it's verified and executed, which means we need RAM, which means the CPU already needs to know how to configure its cache to be used as RAM.
— Matthew Garrett
It appears that a major problem here is that collectively we are unwilling to make any substantial investment in effective defence or deterrence. The systems that we use on the Internet are overly trusting to the point of irrational credulity. For example, the public key certification system used to secure web-based transactions is repeatedly demonstrated to be entirely untrustworthy, yet that's all we trust. Personal data is continually breached and leaked, yet all we seem to want to do is increase the number and complexity of regulations rather than actually use better tools that would effectively protect users.— Geoff Huston in a lengthy reflection on internet history and its future
It's not just Congressional dunderheads and Tiktok CEOs who treat "don't spy on under-13s" as a synonym for "don't let under-13s use this service." Every tech product designer and every general counsel at every tech company treats these two propositions as equivalent, because they are literally incapable of imagining a surveillance-free online service.— Cory Doctorow
Kernel development
Kernel release status
The current development kernel is 6.3-rc7, released on April 16. Linus said: "Let's hope we have just one more calm week, and we'll have had a nice uneventful release cycle. Knock wood".
Stable updates: 6.2.11, 6.1.24, and 5.15.107 were released on April 13.
The 6.2.12, 6.1.25, 5.15.108, 5.10.178, 5.4.241, 4.19.281, and 4.14.313 stable updates are in the review process. They are due on April 20, but some of them have been through enough release candidates that it would not be surprising to see them come out a bit later.
Quotes of the week
random rant, or what is my job as maintainer: it's not ensuring perfect code, it's building communities to keep the code alive and let it evolve
and very often the best option for that is to merge the "kinda shitty, but exists" code right now
— Daniel Vetter
Linus fixes a dishwasher and posts about it. As a result, my PostgreSQL database falls over.— Konstantin Ryabitsev
The concept of proper hardware/software co-design, which was postulated at least 40 years ago, is still either unknown or in its infancy at the vast majority of silicon vendors including my own employer. The main concept is still to throw hardware/firmware over the fence and let software folks deal with it. That's a complete disaster and paves the way to death by complexity and unmaintainability.
As a consequence the only way for a responsible kernel maintainer is to question the design at the point where patches are posted. Therefore it's not unreasonable to ask for a rationale and concise technical arguments at that point.
— Thomas Gleixner
Distributions
Fedora 38 released
The Fedora 38 release is available. Fedora has mostly moved past its old pattern of late releases, but it's still a bit surprising that this release came out one week ahead of the scheduled date. Some of the changes in this release, including reduced shutdown timeouts and frame pointers, have been covered here in the past; see the announcement and the Workstation-edition "what's new" post for details on the rest.
If you want to use Fedora Linux on your mobile device, F38 introduces a Phosh image. Phosh is a Wayland shell for mobile devices based on Gnome. This is an early effort from our Mobility SIG. If your device isn’t supported yet, we welcome your contributions!
An openSUSE ALP status update
Richard Brown has posted an update on the status of the SUSE Adaptable Linux Platform (ALP) project and what it means for the openSUSE distribution.
The ALP concept should be flexible enough that these openSUSE Products will be able to leverage all the stuff SUSE is doing for SUSE's ALP Products, but then we (community) can add anything we want. If we find it is not flexible enough, then we (SUSE) will work to adapt it to make it possible for the community to build what it wants.
So, if we the community want to build something like old Leap, that should be totally technically feasible.
The rebooting of Solus Linux
The desktop-oriented Solus distribution has been through a difficult period; this post describes the extensive changes that have been made in response.
Notably, innovation in the Linux ecosystem is presently centered around the use of application sandboxing, containers and the development of immutable operating systems with a well understood Software Bill of Materials. Each of these concepts allow for a degree of separation and stability when developing, testing and certifying software and products.
The current Solus tooling, as well as the resulting packaging and development experience, is somewhat ill-suited to this objective and would most likely need a wholesale re-engineering of the tools before this becomes feasible.
However, there is a more straightforward path for Solus: Rebasing onto Serpent OS.
Development
New release: digiKam 8.0.0
The digiKam photo-management tool has announced its 8.0.0 release, after two years of development, bug fixing, and testing. Major new features include a documentation overhaul (with a new web site), support for more file formats, a new optical character recognition (OCR) tool, improved metadata handling, a neural-net-based image quality classifier, better integration with G'MIC-Qt, a Qt6-compatible code base, and lots more. See the announcement for all the details.
LXD 5.13 released
Version 5.13 of the LXD virtual-machine manager has been released. New features include fast live migration, support for AMD's secure enclaves, and more. See this announcement for details.
Miscellaneous
Duffy: Run an open source-powered virtual conference!
On her blog, Máirín Duffy writes about using open-source software to run a virtual conference. The Fedora design team recently ran the first Creative Freedom Summit as a virtual conference for FOSS creative tools. The team could have used the same non-open-source platform that is used by the Flock Fedora conference, but took a different path:
Using Matrix's Element client, we embedded the live stream video and an Etherpad into a public Matrix room for the conference. We used attendance in the channel to monitor overall conference attendance. We had live chat going throughout the conference and took questions from audience members both from the chat and the embedded Q&A Etherpad.
Back in 2020, the Linux Plumbers Conference also put together a virtual conference using free software, as did LibrePlanet and likely others.
Page editor: Jake Edge
Announcements
Newsletters
Distributions and system administration
Development
Meeting minutes
Miscellaneous
Calls for Presentations
Linux Plumbers Conference CFP announcements
The 2023 Linux Plumbers Conference (November 13-15, Richmond VA, USA) has put out its calls for proposals for the refereed track (due August 6) and the microconference track (June 1). Proposals are also being accepted for the kernel-summit track.
CFP Deadlines: April 20, 2023 to June 19, 2023
The following listing of CFP deadlines is taken from the LWN.net CFP Calendar.
Deadline | Event Dates | Event | Location |
---|---|---|---|
April 21 | September 7 - September 8 | PyCon Estonia | Tallinn, Estonia |
April 28 | June 14 | Ceph Days Korea 2023 | Seoul, South Korea |
May 3 | June 15 | Ceph Days Vancouver | Vancouver, Canada |
May 28 | July 13 - July 16 | Free and Open Source Yearly | Portland OR, US |
June 4 | October 20 - October 22 | Linux Fest Northwest 2023 | Bellingham, WA, US |
June 5 | September 20 - September 21 | Linux Security Summit Europe | Bilbao, Spain |
June 12 | October 3 - October 5 | PGConf NYC | New York, US |
June 16 | September 17 - September 18 | Tracing Summit | Bilbao, Spain |
If the CFP deadline for your event does not appear here, please tell us about it.
Upcoming Events
Events: April 20, 2023 to June 19, 2023
The following event listing is taken from the LWN.net Calendar.
Date(s) | Event | Location |
---|---|---|
April 19 - April 27 | PyCon US -- 20th Anniversary Special | Salt Lake City, US |
April 21 - April 23 | Linux Application Summit 2023 | Brno, Czechia |
April 23 - April 26 | foss-north 2023 | Göteborg, Sweden |
April 26 - April 28 | Linaro Connect | London, UK |
April 29 | 19. Augsburger Linux-Infotag | Augsburg, Germany |
May 5 | Ceph Days India 2023 | Bengaluru, India |
May 8 - May 10 | Storage, Filesystem, Memory-Management and BPF Summit | Vancouver, Canada |
May 9 - May 11 | sambaXP | Goettingen, Germany |
May 10 - May 12 | Linux Security Summit | Vancouver, Canada |
May 10 - May 12 | Open Source Summit North America | Vancouver, Canada |
May 11 | NLUUG Spring Conference | Utrecht, The Netherlands |
May 12 | PGConf.BE | Haasrode, Belgium |
May 17 | Icinga Camp Berlin | Berlin, Germany |
May 17 - May 20 | BSDCan 2023 | Ottawa, Canada |
May 23 - May 30 | Debian Reunion Hamburg 2023 | Hamburg, Germany |
May 23 - May 25 | Red Hat Summit 2023 | Boston, US |
May 26 - May 28 | openSUSE Conference 2023 | Nürnberg, Germany |
May 30 - June 2 | PGCon 2023 | Ottawa, Canada |
June 13 - June 15 | Beam Summit 2023 | New York City, US |
June 13 - June 15 | Open Infrastructure Summit | Vancouver, Canada |
June 14 - June 15 | KVM Forum 2023 | Brno, Czech Republic |
June 14 | Ceph Days Korea 2023 | Seoul, South Korea |
June 15 | Ceph Days Vancouver | Vancouver, Canada |
June 15 - June 17 | Open Source Festival Africa | Lagos, Nigeria |
June 16 - June 18 | Devconf.CZ 2023 | Brno, Czech Republic |
If your event does not appear here, please tell us about it.
Security updates
Alert summary April 13, 2023 to April 19, 2023
Dist. | ID | Release | Package | Date |
---|---|---|---|---|
Debian | DLA-3394-1 | LTS | asterisk | 2023-04-19 |
Debian | DSA-5390-1 | stable | chromium | 2023-04-16 |
Debian | DSA-5386-1 | stable | chromium | 2023-04-12 |
Debian | DLA-3391-1 | LTS | firefox-esr | 2023-04-12 |
Debian | DSA-5385-1 | stable | firefox-esr | 2023-04-12 |
Debian | DSA-5388-1 | stable | haproxy | 2023-04-13 |
Debian | DLA-3389-1 | LTS | lldpd | 2023-04-12 |
Debian | DSA-5387-1 | stable | openvswitch | 2023-04-13 |
Debian | DLA-3393-1 | LTS | protobuf | 2023-04-18 |
Debian | DSA-5389-1 | stable | rails | 2023-04-14 |
Debian | DLA-3392-1 | LTS | ruby-rack | 2023-04-17 |
Debian | DLA-3390-1 | LTS | zabbix | 2023-04-12 |
Fedora | FEDORA-2023-3a821e6e73 | F36 | bzip3 | 2023-04-14 |
Fedora | FEDORA-2023-c08f9dfc16 | F37 | bzip3 | 2023-04-14 |
Fedora | FEDORA-2023-32c3bbbbc9 | F37 | ffmpeg | 2023-04-13 |
Fedora | FEDORA-2023-50f9eb7aca | F36 | firefox | 2023-04-15 |
Fedora | FEDORA-2023-1749adc275 | F37 | firefox | 2023-04-13 |
Fedora | FEDORA-2023-366850fc87 | F36 | ghostscript | 2023-04-15 |
Fedora | FEDORA-2023-1c172e3264 | F36 | libldb | 2023-04-16 |
Fedora | FEDORA-2023-a66bd67e34 | F37 | libpcap | 2023-04-18 |
Fedora | FEDORA-2023-dae7cc20ac | F37 | libxml2 | 2023-04-18 |
Fedora | FEDORA-2023-a521b917c8 | F38 | libxml2 | 2023-04-18 |
Fedora | FEDORA-2023-17aaa2187f | F36 | libyang | 2023-04-14 |
Fedora | FEDORA-2023-9887f01975 | F37 | libyang | 2023-04-14 |
Fedora | FEDORA-2023-88991d2713 | F38 | lldpd | 2023-04-19 |
Fedora | FEDORA-2023-f519fe8cda | F36 | mingw-glib2 | 2023-04-14 |
Fedora | FEDORA-2023-cfe20dbcab | F37 | mingw-glib2 | 2023-04-14 |
Fedora | FEDORA-2023-1176c8b10c | F37 | openssh | 2023-04-18 |
Fedora | FEDORA-2023-123647648e | F38 | openssh | 2023-04-19 |
Fedora | FEDORA-2023-0c1aaa76b6 | F37 | pdns-recursor | 2023-04-13 |
Fedora | FEDORA-2023-4936e4e7f1 | F37 | polkit | 2023-04-13 |
Fedora | FEDORA-2023-1c172e3264 | F36 | samba | 2023-04-16 |
Fedora | FEDORA-2023-a66bd67e34 | F37 | tcpdump | 2023-04-18 |
Fedora | FEDORA-2023-0e1ae0d5f6 | F36 | thunderbird | 2023-04-14 |
Fedora | FEDORA-2023-d365f19e05 | F37 | thunderbird | 2023-04-13 |
Fedora | FEDORA-2023-6f3f9ee721 | F36 | tigervnc | 2023-04-15 |
Fedora | FEDORA-2023-fe18ae3e85 | F36 | xorg-x11-server | 2023-04-14 |
Fedora | FEDORA-2023-239bae4b57 | F36 | xorg-x11-server-Xwayland | 2023-04-14 |
Mageia | MGASA-2023-0139 | 8 | ceph | 2023-04-15 |
Mageia | MGASA-2023-0141 | 8 | davmail | 2023-04-15 |
Mageia | MGASA-2023-0146 | 8 | firefox | 2023-04-15 |
Mageia | MGASA-2023-0145 | 8 | golang | 2023-04-15 |
Mageia | MGASA-2023-0143 | 8 | jpegoptim | 2023-04-15 |
Mageia | MGASA-2023-0148 | 8 | kernel | 2023-04-17 |
Mageia | MGASA-2023-0149 | 8 | kernel-linus | 2023-04-17 |
Mageia | MGASA-2023-0144 | 8 | libheif | 2023-04-15 |
Mageia | MGASA-2023-0140 | 8 | python-certifi | 2023-04-15 |
Mageia | MGASA-2023-0142 | 8 | python-flask-restx | 2023-04-15 |
Mageia | MGASA-2023-0147 | 8 | thunderbird | 2023-04-15 |
Mageia | MGASA-2023-0138 | 8 | tomcat | 2023-04-15 |
Oracle | ELSA-2023-1791 | OL7 | firefox | 2023-04-18 |
Oracle | ELSA-2023-1787 | OL8 | firefox | 2023-04-14 |
Oracle | ELSA-2023-1786 | OL9 | firefox | 2023-04-14 |
Oracle | ELSA-2023-12255 | OL7 | kernel | 2023-04-18 |
Oracle | ELSA-2023-12255 | OL8 | kernel | 2023-04-18 |
Oracle | ELSA-2023-1703 | OL9 | kernel | 2023-04-13 |
Oracle | ELSA-2023-12256 | OL7 | kernel-container | 2023-04-18 |
Oracle | ELSA-2023-12256 | OL8 | kernel-container | 2023-04-18 |
Oracle | ELSA-2023-1743 | OL8 | nodejs:14 | 2023-04-13 |
Oracle | ELSA-2023-1806 | OL7 | thunderbird | 2023-04-18 |
Oracle | ELSA-2023-1802 | OL8 | thunderbird | 2023-04-18 |
Oracle | ELSA-2023-1809 | OL9 | thunderbird | 2023-04-18 |
Red Hat | RHSA-2023:1842-01 | EL8.6 | curl | 2023-04-18 |
Red Hat | RHSA-2023:1791-01 | EL7 | firefox | 2023-04-14 |
Red Hat | RHSA-2023:1787-01 | EL8 | firefox | 2023-04-14 |
Red Hat | RHSA-2023:1792-01 | EL8.1 | firefox | 2023-04-14 |
Red Hat | RHSA-2023:1789-01 | EL8.2 | firefox | 2023-04-14 |
Red Hat | RHSA-2023:1790-01 | EL8.4 | firefox | 2023-04-14 |
Red Hat | RHSA-2023:1788-01 | EL8.6 | firefox | 2023-04-14 |
Red Hat | RHSA-2023:1786-01 | EL9 | firefox | 2023-04-14 |
Red Hat | RHSA-2023:1785-01 | EL9.0 | firefox | 2023-04-14 |
Red Hat | RHSA-2023:1841-01 | EL8.6 | kernel | 2023-04-18 |
Red Hat | RHSA-2023:1743-01 | EL8 | nodejs:14 | 2023-04-12 |
Red Hat | RHSA-2023:1742-01 | EL8.6 | nodejs:14 | 2023-04-12 |
Red Hat | RHSA-2023:1823-01 | EL8 | openvswitch2.13 | 2023-04-18 |
Red Hat | RHSA-2023:1765-01 | EL8 | openvswitch2.17 | 2023-04-13 |
Red Hat | RHSA-2023:1769-01 | EL9 | openvswitch2.17 | 2023-04-13 |
Red Hat | RHSA-2023:1766-01 | EL8 | openvswitch3.1 | 2023-04-13 |
Red Hat | RHSA-2023:1770-01 | EL9 | openvswitch3.1 | 2023-04-13 |
Red Hat | RHSA-2023:1747-01 | EL8.2 | pki-core:10.6 | 2023-04-12 |
Red Hat | RHSA-2023:1806-01 | EL7 | thunderbird | 2023-04-17 |
Red Hat | RHSA-2023:1802-01 | EL8 | thunderbird | 2023-04-17 |
Red Hat | RHSA-2023:1803-01 | EL8.1 | thunderbird | 2023-04-17 |
Red Hat | RHSA-2023:1805-01 | EL8.2 | thunderbird | 2023-04-17 |
Red Hat | RHSA-2023:1804-01 | EL8.4 | thunderbird | 2023-04-17 |
Red Hat | RHSA-2023:1811-01 | EL8.6 | thunderbird | 2023-04-17 |
Red Hat | RHSA-2023:1809-01 | EL9 | thunderbird | 2023-04-17 |
Red Hat | RHSA-2023:1810-01 | EL9.0 | thunderbird | 2023-04-17 |
Scientific Linux | SLSA-2023:1791-1 | SL7 | firefox | 2023-04-14 |
Scientific Linux | SLSA-2023:1806-1 | SL7 | thunderbird | 2023-04-17 |
Slackware | SSA:2023-102-01 | | mozilla | 2023-04-12 |
SUSE | SUSE-SU-2023:1849-1 | MP4.2 MP4.3 SLE15 SES7 SES7.1 oS15.4 | apache2-mod_auth_openidc | 2023-04-14 |
SUSE | SUSE-SU-2023:1844-1 | MP4.3 SLE15 oS15.4 | aws-nitro-enclaves-cli | 2023-04-14 |
SUSE | SUSE-SU-2023:1912-1 | SLE12 | compat-openssl098 | 2023-04-19 |
SUSE | SUSE-SU-2023:1851-1 | MP4.3 SLE15 SES7 SES7.1 | container-suseconnect | 2023-04-14 |
SUSE | SUSE-SU-2023:1855-1 | MP4.3 SLE15 SES7 SES7.1 oS15.4 | firefox | 2023-04-14 |
SUSE | SUSE-SU-2023:1910-1 | SLE12 | glib2 | 2023-04-19 |
SUSE | SUSE-SU-2023:1859-1 | MP4.2 MP4.3 oS15.4 | golang-github-prometheus-prometheus | 2023-04-14 |
SUSE | SUSE-SU-2023:1858-1 | SLE12 | golang-github-prometheus-prometheus | 2023-04-14 |
SUSE | SUSE-SU-2023:1857-1 | SLE15 oS15.4 oS15.5 | golang-github-prometheus-prometheus | 2023-04-14 |
SUSE | SUSE-SU-2023:1867-1 | MP4.3 SLE15 SES7 SES7.1 oS15.4 | gradle | 2023-04-17 |
SUSE | SUSE-SU-2023:1904-1 | MP4.3 SLE15 SLE-m5.3 SLE-m5.4 oS15.4 | grafana | 2023-04-19 |
SUSE | SUSE-SU-2023:1902-1 | SLE12 | grafana | 2023-04-19 |
SUSE | SUSE-SU-2023:1903-1 | SLE15 oS15.4 oS15.5 | grafana | 2023-04-19 |
SUSE | SUSE-SU-2023:1852-1 | MP4.3 SLE15 SLE-m5.3 SLE-m5.4 oS15.4 osM5.3 | harfbuzz | 2023-04-14 |
SUSE | SUSE-SU-2023:1901-1 | MP4.3 SLE15 SLE-m5.3 SLE-m5.4 oS15.4 | helm | 2023-04-18 |
SUSE | SUSE-SU-2023:1850-1 | MP4.3 SLE15 SES7 SES7.1 oS15.4 | java-1_8_0-ibm | 2023-04-14 |
SUSE | SUSE-SU-2023:1848-1 | MP4.0 SLE15 oS15.4 | kernel | 2023-04-14 |
SUSE | SUSE-SU-2023:1897-1 | MP4.3 SLE15 SLE-m5.3 SLE-m5.4 oS15.4 osM5.3 | kernel | 2023-04-18 |
SUSE | SUSE-SU-2023:1895-1 | MP4.3 SLE15 oS15.4 | kernel | 2023-04-18 |
SUSE | SUSE-SU-2023:1894-1 | SLE12 | kernel | 2023-04-18 |
SUSE | SUSE-SU-2023:1892-1 | SLE15 SLE-m5.1 SLE-m5.2 | kernel | 2023-04-18 |
SUSE | SUSE-SU-2023:1909-1 | SLE15 oS15.4 | libgit2 | 2023-04-19 |
SUSE | SUSE-SU-2023:1854-1 | MP4.3 SLE15 oS15.4 | liblouis | 2023-04-14 |
SUSE | openSUSE-SU-2023:0090-1 | osB15 | nextcloud-desktop | 2023-04-12 |
SUSE | SUSE-SU-2023:1871-1 | SLE15 SES7 oS15.4 | nodejs10 | 2023-04-17 |
SUSE | SUSE-SU-2023:1876-1 | SLE15 SES7 SES7.1 oS15.4 | nodejs12 | 2023-04-18 |
SUSE | SUSE-SU-2023:1872-1 | SLE12 | nodejs14 | 2023-04-17 |
SUSE | SUSE-SU-2023:1875-1 | SLE15 SES7 SES7.1 oS15.4 | nodejs14 | 2023-04-18 |
SUSE | SUSE-SU-2023:1911-1 | MP4.3 SLE15 SLE-m5.3 SLE-m5.4 oS15.4 osM5.3 | openssl-1_1 | 2023-04-19 |
SUSE | SUSE-SU-2023:1908-1 | SLE15 | openssl-1_1 | 2023-04-19 |
SUSE | SUSE-SU-2023:1898-1 | MP4.3 SLE15 oS15.4 | openssl-3 | 2023-04-18 |
SUSE | SUSE-SU-2023:1907-1 | SLE12 | openssl | 2023-04-19 |
SUSE | SUSE-SU-2023:1877-1 | MP4.2 MP4.3 SLE15 SES7.1 oS15.4 | pgadmin4 | 2023-04-18 |
SUSE | SUSE-SU-2023:1846-1 | MP4.3 SLE15 SLE-m5.3 SLE-m5.4 oS15.4 | php7 | 2023-04-14 |
SUSE | SUSE-SU-2023:1847-1 | SLE12 | php7 | 2023-04-14 |
SUSE | SUSE-SU-2023:1869-1 | OS8 OS9 SLE12 | rubygem-rack | 2023-04-17 |
SUSE | SUSE-SU-2023:1856-1 | SLE12 | tftpboot-installation images | 2023-04-14 |
SUSE | SUSE-SU-2023:1853-1 | MP4.3 SLE15 SES7 SES7.1 oS15.4 | tomcat | 2023-04-14 |
SUSE | SUSE-SU-2023:1873-1 | MP4.2 SLE15 SLE-m5.2 SES7 SES7.1 | wayland | 2023-04-18 |
SUSE | SUSE-SU-2023:1860-1 | MP4.3 SLE15 SLE-m5.3 SLE-m5.4 oS15.4 osM5.3 | wayland | 2023-04-14 |
SUSE | SUSE-SU-2023:1874-1 | SLE15 | wayland | 2023-04-18 |
Ubuntu | USN-6018-1 | 18.04 20.04 22.04 22.10 | apport | 2023-04-13 |
Ubuntu | USN-6021-1 | 18.04 | chromium-browser | 2023-04-14 |
Ubuntu | USN-6008-1 | 14.04 16.04 18.04 20.04 22.04 | exo | 2023-04-12 |
Ubuntu | USN-6010-2 | 18.04 20.04 | firefox | 2023-04-18 |
Ubuntu | USN-6017-1 | 16.04 18.04 20.04 22.04 22.10 | ghostscript | 2023-04-13 |
Ubuntu | USN-5855-4 | 14.04 16.04 | imagemagick | 2023-04-17 |
Ubuntu | USN-6022-1 | 16.04 18.04 20.04 | kamailio | 2023-04-14 |
Ubuntu | USN-6023-1 | 18.04 20.04 | libreoffice | 2023-04-17 |
Ubuntu | USN-6025-1 | 20.04 22.04 | linux, linux-aws, linux-aws-5.15, linux-azure, linux-azure-5.15, linux-azure-fde, linux-gcp, linux-gcp-5.15, linux-gke, linux-gke-5.15, linux-gkeop, linux-ibm, linux-kvm, linux-lowlatency, linux-lowlatency-hwe-5.15, linux-oracle, linux-oracle-5.15, linux-raspi | 2023-04-18 |
Ubuntu | USN-6024-1 | 22.04 22.10 | linux, linux-aws, linux-azure, linux-gcp, linux-hwe-5.19, linux-kvm, linux-lowlatency, linux-oracle, linux-raspi | 2023-04-18 |
Ubuntu | USN-6014-1 | 14.04 16.04 | linux, linux-kvm, linux-lts-xenial | 2023-04-12 |
Ubuntu | USN-6013-1 | 14.04 | linux-aws | 2023-04-12 |
Ubuntu | USN-6020-1 | 20.04 | linux-bluefield | 2023-04-14 |
Ubuntu | USN-6016-1 | 18.04 20.04 | node-thenify | 2023-04-13 |
Ubuntu | USN-6019-1 | 20.04 | python-flask-cors | 2023-04-13 |
Ubuntu | USN-6012-1 | 22.04 22.10 | smarty3 | 2023-04-13 |
Ubuntu | USN-6015-1 | 18.04 20.04 22.04 22.10 | thunderbird | 2023-04-13 |
Ubuntu | USN-6026-1 | 14.04 18.04 20.04 22.04 | vim | 2023-04-19 |
Kernel patches of interest
Kernel releases
Architecture-specific
Build system
Core kernel
Device drivers
Device-driver infrastructure
Filesystems and block layer
Memory management
Networking
Security-related
Virtualization and containers
Miscellaneous
Page editor: Jonathan Corbet