LWN.net Weekly Edition for November 16, 2023
Welcome to the LWN.net Weekly Edition for November 16, 2023
This edition contains the following feature content:
- Faster kernel testing with virtme-ng: an LPC talk on tooling to improve testing times for kernel developers.
- The push to save Itanium: Intel's Itanium architecture is no longer supported by the kernel, but some die-hard users still hope to bring it back.
- listmount() and statmount(): a pair of proposed new system calls to ask the kernel about mounted filesystems.
- The rest of the 6.7 merge window: more changes merged for the next major kernel release.
- Using Common Lisp in Emacs: an ongoing debate over the flavor of Lisp used for development within Emacs.
This week's edition also includes these inner pages:
- Brief items: Brief news items from throughout the community.
- Announcements: Newsletters, conferences, security updates, patches, and more.
Please enjoy this week's edition, and, as always, thank you for supporting LWN.net.
Faster kernel testing with virtme-ng
Building new kernels and booting into them is an unavoidable—and time-consuming—part of kernel development. Andrea Righi works for Canonical on the Ubuntu kernel team, so he does a lot of that and wanted to find a way to speed up the task. To that end, he has been working on virtme-ng, which is a way to boot a new kernel in a virtual machine, and it does so quickly. He came to the 2023 Linux Plumbers Conference (LPC) in Richmond, Virginia to introduce the project to a wider audience.
His team builds lots of kernels for multiple architectures, often applying patches and fixes that need to be tested to ensure that they work and do not introduce regressions. There is a large testing infrastructure for that, but sometimes developers want to get more "hands on" with these kernels, for example to do a git bisect. There are often lots of build-reboot cycles for the work that he does, so his goal was to reduce the time that they take.
![Andrea Righi](https://static.lwn.net/images/2023/lpc-righi-sm.png)
When he tests a kernel, he generally does so from a clean environment, so he will redeploy the system being used from a fresh image. That ensures the previous run has not left some kind of corruption behind that will affect the current run, but that takes time too. In general, kernel development is lacking the fast edit-compile-test loop that developers are used to from user-space development.
His goal was to figure out how to, essentially, clone his development machine into another machine running the new kernel. He wanted the test system to have access to the root filesystem of the parent system; he also needed to be able to make changes in the test system without affecting the parent. That way there would be no redeployment needed and he would be able to, more or less instantly, access a system to run his tests.
Once he came up with this idea, he started looking around and encountered virtme, written by Andy Lutomirski, which did much of what he was looking for. It virtualizes the running system by creating a live snapshot of it, then starting a virtual machine with a new kernel. It exports the root filesystem to the guest using the 9p filesystem in read-only mode and allows writes to a tmpfs home directory.
He was happy to find virtme and started using it, but it had some limitations and only really covered about 80% of the features that he needed. The read-only root filesystem meant that additional packages could not be installed on the test system, which limited the testing he could do. In addition, 9p had rather poor performance; a git diff in the kernel tree on the guest, for example, would take five minutes to run. That poor performance also affected the boot time, which was longer than he wanted: it took around 15 seconds to get to a console prompt, which he thought could be improved.
Beyond that, virtme is not being maintained; Lutomirski no longer has the time to do so. Pull requests (PRs) from Righi and others were not being acted on, so he contacted Lutomirski, who encouraged him to create a fork. Righi created virtme-ng, went through the PRs on virtme and merged those that he liked.
Since then, he has added a number of different features, starting with using virtiofs and overlayfs to provide a copy-on-write export of the entire host filesystem, instead of using 9p. Virtiofs requires more infrastructure, such as a FUSE daemon running on the host, but it performs much better than 9p. Overlayfs then allows the guest to write to the filesystem, so that it can install packages, say, without actually affecting the host.
Another fairly major change was the adoption of the QEMU microvm virtual-machine type. That was suggested to him by someone on Twitter when he had tweeted about his boot-time-reduction progress. Righi also switched virtme-ng to use a custom init script written in Rust to replace the Bash init script that virtme uses. The Bash script was not much of a maintenance problem, but he had an interest in learning Rust, so he rewrote it in that language, which turned out to have some large benefits in terms of boot time.
Simply switching from 9p to virtiofs made a huge difference in terms of I/O speed; the git diff dropped from 284s to just 1.7s. That operation generates an enormous amount of I/O, which can be done efficiently in the FUSE-based virtiofs, because it puts the results into memory shared with the guest; with 9p, each I/O operation and result was a separate message using the 9p protocol. In addition, the boot time went from 6.2s to 5.2s, which was less dramatic because there is a lot less I/O for booting the system.
Adding overlayfs on top of virtiofs allowed the guest to access and write to the host filesystem—without making any permanent changes. It uses a tmpfs as the upper directory, so when the VM exits, any changes made are gone. He did encounter a problem with overlayfs using the O_NOATIME flag, which caused permission errors for the virtiofs FUSE daemon, but that has now been fixed in the virtiofs upstream.
The microvm machine type is just an option to QEMU that provides a machine optimized for boot time and memory size. It does not have PCI or ACPI, which saves time for probing and the like. Adding that into the mix dropped the boot time from 5.2s to 3.8s. That reduction is not huge in absolute terms, but it makes a large difference if you are using virtme-ng to boot lots of different kernels, he said.
Using the new Rust virtme-ng-init further reduced the boot time to 1.2s. Those boot times are measured from the time he types vng to start the VM with a new kernel until he gets to a prompt where he can start typing commands into the guest. "That is quite amazing." For a bisect run, it can make a huge difference, for example.
Righi considered doing a demo in the talk, but was concerned about it not going well. He has made a YouTube video of a live virtme-ng demo and he described some of the things that he showed in it. You can do more than just type commands at the guest's shell once the VM boots; you can run scripts in the guest from the vng command line, for example. Virtme-ng has standard input and output set up so that you can run a command on the host, piped to a VM with a certain kernel, piped to a VM with a different kernel, and so on, which allows automating testing of a set of kernels. It can also be used to run graphical applications; his final demo in the video is running the Steam client on the guest and showing that the game is perfectly playable on the host display.
He hopes that virtme-ng can bring the fast edit-compile-test cycle to kernel development. Even just over a second is nice, but that is measured on his laptop; on his more powerful home server, he can break the one-second barrier, with boot times of 0.8-0.9s. The tool is meant to be backward compatible with virtme, so he considers any problems in compatibility to be a bug that needs to be fixed; he encouraged anyone that encountered a problem of that sort to report it to the GitHub repository.
Righi has the opportunity to work with students, who are generally quite excited to be able to change and build their own kernels, but have trouble with deploying those kernels onto VMs. When he shows them virtme-ng, it really helps to get them up and running quickly; he thinks the tool can be a way to smooth the path for any new kernel developers. He also gave a talk at the OSPM power management and scheduling conference in April about being "more carbon efficient" by using virtme-ng to reduce the amount of time spent doing redeployment and the like for continuous-integration (CI) systems.
The goal of his talk, Righi said, was to raise awareness of the tool in order to increase the user base. There are some people at SUSE using the tool, including for testing live patches; some Google developers use it as well. A Debian developer has started packaging virtme-ng, so it is available for Ubuntu and other Debian-based distributions. There is work going on to provide an RPM package for virtme-ng; with that and the Debian work, most distributions should be covered soon. The tool basically does what he needs now, but there are probably other use cases that could be handled, so he would like to collect more feedback from users.
It is not a priority, but he would like to add full systemd support to the tool; systemd does not work correctly currently because it has its own state in the host system that confuses a systemd that gets started for the guest. A systemd-based VM will not start as quickly as the custom init script, but it would bring capabilities that the current guests do not have.
For example, he wanted to run the snap server in the guest, but it is a systemd-based daemon. He has a hack to trick the snap daemon into thinking that there is a systemd running, but he would like a better solution. The snap mode (which is enabled with a command-line option) works and is generally stable, but with better systemd support, both it and Flatpak would be cleanly supported by virtme-ng.
An audience member asked about supporting alternate user IDs, so that a root filesystem from a tar file could be used instead of the host filesystem. Righi said that there is some support for that use case, but that the user-ID problem is not solved cleanly; ID-mapped mounts were suggested as a potential path for handling that better. Another question was about adoption by other open-source projects; Righi said that Mutter is using virtme-ng as part of its testing. There is also an unnamed company using the tool to test webcams, which surprised him; it turns out the company is using QEMU options to pass a USB device from the host to the guest in order to test multiple kernels with the webcam.
The VMs can be used with GDB, he said responding to another query; you can set breakpoints and such, as well as create a crash dump if desired. Another attendee asked about support for signed modules; Righi said that looking into that is on his "secret to-do list", so he was happy to get the question to spur him to look at it sooner.
[I would like to thank LWN's travel sponsor, the Linux Foundation, for assistance with my travel costs to Richmond for LPC.]
The push to save Itanium
It is (relatively) easy to add code to the kernel; it tends to be much harder to remove that code later. The most recent example of this dynamic can be seen in the story of the ia64 ("Itanium") architecture, support for which was removed during the 6.7 merge window. That removal has left a small group of dedicated ia64 users unhappy and clinging to a faint hope that this support could return in a year's time.

At the end of the 1990s, it had become clear that 32-bit processors were approaching the end of their useful life for many applications; in particular, 32 bits is not enough to address the memory sizes that were beginning to show up on higher-end systems. In response, Intel launched into a program that it called "Merced" to create the successor to the x86. It was a new VLIW-style ("EPIC") architecture, wholly incompatible with anything that Intel had sold before. But it was going to be the Next Big Thing because that was what Intel was doing.
At the time, information about this new architecture was being held under nondisclosure agreements, and it was far from clear when Linux developers would be able to port the kernel to Merced, if ever. This was before Intel's investment in Red Hat that signaled the beginning of the arrival of big money into Linux. It seemed entirely possible that Linux would be cut out of the processor that, we were all reliably informed, would be the future of computing; choices would be limited to Windows and proprietary Unix.
That, of course, is not how things worked out. Intel became one of the earliest corporate supporters of Linux and ensured, under a project known as Trillian, that Linux ran well on this new architecture, which was eventually named ia64 (or "Itanium" in the sales literature). Initial Itanium support found its way into the 2.3.43 development kernel release in early 2000. The way was clear for our bright Linux-on-Itanium future.
The only problem, of course, is that things didn't work out that way either. Early Itanium systems failed to perform at anything close to the speeds that the hype had promised. Meanwhile, AMD created the x86-64 architecture, adding 64-bit operation while maintaining as much compatibility with the vast world of deployed 32-bit software as possible. This new architecture quickly won over the market, forcing Intel to follow in AMD's footsteps; Itanium ended up as a nearly forgotten footnote. Some systems were sold, and Intel continued manufacturing the CPUs for many years, but their market was limited. Red Hat dropped ia64 support in 2010.
Through all of this, the ia64 architecture code was maintained in the kernel, but interest dropped rapidly. In recent years, the ia64 code has often been seen as a drag on kernel development in general. After a bug originating in the ia64 code was tracked down in January, kernel developers started talking more seriously about just removing support for that architecture entirely. There was some discussion at the time, with a few hobbyist users complaining about the idea, but no changes were made then.
The topic came back in May, though, when Ard Biesheuvel pushed for the removal of ia64 support from the kernel, saying that the architecture was impeding his work in the EFI subsystem:
As a maintainer, I feel uncomfortable asking contributors to build test their changes for Itanium, and boot testing is infeasible for most, even if some people are volunteering access to infrastructure for this purpose. In general, hacking on kernels or bootloaders (which is where the EFI pieces live) is tricky using remote access.

The bottom line is that, while I know of at least 2 people (on cc) that test stuff on itanium, and package software for it, I don't think there are any actual users remaining, and so it is doubtful whether it is justified to ask people to spend time and effort on this.
In that discussion, John Paul Adrian Glaubitz (who maintains the Debian ia64 port) suggested that ia64 support should be kept until after the next long-term-support kernel release, after which it could be dropped. That would, he said, maximize the amount of time in which ia64 would be supported for any remaining users out there. That is how it appears to have played out: during the 6.7 merge window, ia64 support was removed. The ia64 story is now done, as far as Linux is concerned.
Except that, seemingly, it is not. Shortly after ia64 support disappeared from the kernel, Frank Scheiner complained to the mailing list, saying that he and others had been working to resolve the problems with this architecture and had been rewarded by seeing it removed anyway. Linus Torvalds responded that he might be willing to see it come back — eventually:
So I'd be willing to come back to the "can we resurrect it" discussion, but not immediately - more along the lines of a "look, we've been maintaining it out of tree for a year, the other infrastructure is still alive, there is no impact on the rest of the kernel, can we please try again"?
Scheiner was not entirely pleased with the removal of ia64 support, but Glaubitz described the one-year plan as "very reasonable".
So the hobbyists who want to keep Linux support for this architecture alive, and who faced a difficult task before, have now seen the challenge become more severe. Maintaining support for an architecture out of tree is not a task for the faint of heart, especially as the mainline kernel goes forward with changes that had been held back by the need to keep ia64 working until now. To complicate the picture, as Tony Luck pointed out in May, it is entirely possible that future kernel changes may, when backported to the stable kernel updates, break ia64 in those kernels. Since nobody working on the stable updates is able to test ia64 systems (even if they wanted to), such problems could go unnoticed for some time.
One should also not miss the other condition that Torvalds placed on a return of ia64: that "the other infrastructure is still alive". The ia64 enthusiasts did not miss that, so it is unsurprising that they were concerned when Adhemerval Zanella proposed removing ia64 support from the GNU C Library (glibc), one of the most important pieces of other infrastructure. Zanella pointed out that the ia64 port is in poor shape, with a number of outstanding problems that seem unlikely to be solved. Scheiner answered that it might be possible to provide (limited) access to ia64 machines for library testing, and asked for more time to address some of the problems.

Zanella, though, followed up with a patch to remove ia64 support. Scheiner responded: "The speed this happens really surprises me and I hope there is no need to rush with this removal". Other developers, though, including Joseph Myers, Florian Weimer, and glibc maintainer Carlos O'Donell, are all in favor of dropping ia64 support. It would, thus, not be surprising to see the removal happen as soon as the 2.39 release, due in February, or at the latest in the release after that.
That, needless to say, raises the bar for ia64 supporters even further. While one should never discount what a group of determined developers can accomplish, it is probably safe to conclude that ia64 support is gone from the kernel for good. Some may see this as a disappointment, but it is also a testament to how hard the community will work to keep an architecture alive even though it never had a lot of users and has almost none now. This support, arguably, could have been removed years ago without causing any great discomfort, but taking code out of the kernel is hard. As has been seen here, though, it is occasionally possible.
listmount() and statmount()
Years ago, the list of mounted filesystems on a Unix or Linux machine was relatively short and static. Adding a filesystem, which typically involved buying a new drive, happened rarely. In contrast, contemporary systems with a large number of containers can have a long and dynamic list of mounted filesystems. As was discussed at the 2023 LSFMM+BPF Summit, the Linux kernel's mechanism for providing information about mounted filesystems has not kept up with this change, leading to system-management headaches. Now, two new system calls proposed by Miklos Szeredi look set to provide some much-needed pain relief.

Even in the absence of containers, the list of mounted filesystems on a typical Linux system has grown, partly as a result of an increase in the number of virtual filesystems provided by the kernel. For example, on your editor's basic desktop system, /proc/self/mountinfo lists 34 mounts, few of which correspond to partitions on actual storage devices. As this virtual file gets longer, it becomes harder for system-management tools (and humans too) to work with. The new system calls, called listmount() and statmount(), provide an alternative to digging through the mountinfo file.
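For reference, extracting even basic facts from mountinfo today means hand-parsing its positional format. A minimal sketch of that chore (the sample line and field layout come from the proc(5) manual page; the helper name is mine):

```python
# Parse one /proc/self/mountinfo line by hand, per the proc(5)
# field layout: optional fields run until a "-" separator, after
# which come the filesystem type and mount source.
def parse_mountinfo_line(line):
    fields = line.split()
    sep = fields.index("-")           # end of the optional fields
    return {
        "mount_id": int(fields[0]),   # classic, reusable 32-bit mount ID
        "parent_id": int(fields[1]),
        "major_minor": fields[2],
        "root": fields[3],
        "mount_point": fields[4],
        "options": fields[5].split(","),
        "optional": fields[6:sep],    # e.g. shared:N, master:N
        "fstype": fields[sep + 1],
        "source": fields[sep + 2],
    }

sample = "36 35 98:0 /mnt1 /mnt2 rw,noatime master:1 - ext3 /dev/root rw,errors=continue"
info = parse_mountinfo_line(sample)
print(info["mount_id"], info["mount_point"], info["fstype"])  # → 36 /mnt2 ext3
```

A tool that only needs one mount's details must still read and split every line this way, which is exactly the overhead the new system calls avoid.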
Before implementing those system calls, though, Szeredi had to address a related problem. Every mount in the system is assigned a mount ID to identify it; that ID, which is available from statx(), is the obvious way to talk about mounts in a new system call. If, however, a filesystem is unmounted, its ID will be reused by the kernel to identify a new mount in the future, making it into an ambiguous identifier. The obvious solution is to stop reusing mount IDs, since nothing really requires that behavior.
Unfortunately, there are user-space programs that assume that the mount ID is a 32-bit quantity, despite the fact that it is defined as __u64 in the statx() system call. Systemd was identified as one of those programs during the LSFMM+BPF discussions. Making a 32-bit mount ID unique over the life of the system only allows for 4 billion mounts, which is apparently constraining in some settings; it also could possibly be deliberately overflowed by an attacker. So a new, even more explicitly 64-bit, mount ID is needed. The patch series adds it, along with a new statx() flag (STATX_MNT_ID_UNIQUE) that causes the unique ID to be returned rather than the 32-bit ID (which, of course, cannot go away). As a way of avoiding confusion between the two IDs, the lowest unique mount ID is set to 2^32.
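That 2^32 offset means the two ID spaces never overlap, so a consumer can tell at a glance which flavor of ID it holds. A sketch of that check (the helper name is mine; only the constant comes from the patch series):

```python
UNIQUE_ID_BASE = 2 ** 32  # lowest unique mount ID, per the patch series

def is_unique_mount_id(mnt_id):
    # IDs below 2^32 are the old, reusable kind; anything at or
    # above the base must be a unique, never-reused ID.
    return mnt_id >= UNIQUE_ID_BASE

print(is_unique_mount_id(27))                  # → False (classic ID)
print(is_unique_mount_id(UNIQUE_ID_BASE + 5))  # → True (new-style ID)
```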
With that in place, the two system calls can be added; the first is:
    struct __mount_arg {
        __u64 mnt_id;
        __u64 request_mask;
    };

    int listmount(const struct __mount_arg *req, u64 *buf,
                  size_t bufsize, unsigned int flags);
A call to listmount() will return a list of filesystems mounted below the mount point identified by req->mnt_id, where that ID must be of the unique variety. The results are returned as an array of (unique) mount IDs in buf, which is bufsize in length. Normally, unreachable mounts (that may, for example, be mounted in a different mount namespace) are omitted; adding LISTMOUNT_UNREACHABLE to flags will cause those to be listed as well; this option requires the CAP_SYS_ADMIN capability. The LISTMOUNT_RECURSIVE flag will cause listmount() to do a depth-first traversal of the hierarchy below the starting mount point and list all mounts found there; otherwise, only direct child mounts are returned. The return value is the number of mount IDs returned (or an error code).
The request_mask field of the req structure is not used by listmount() and must be zero.
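The difference between the default (direct children only) and LISTMOUNT_RECURSIVE behavior can be modeled in user space; the mount tree and IDs below are invented for illustration:

```python
# Toy model of a mount tree keyed by unique mount ID; each entry
# lists the direct child mounts, mimicking what the kernel walks.
children = {
    100: [101, 104],   # starting mount point
    101: [102, 103],   # mounts nested below 101
    102: [],
    103: [],
    104: [],
}

def list_mounts(mnt_id, recursive=False):
    # Default: direct children only. Recursive: depth-first walk of
    # the whole hierarchy, matching the described LISTMOUNT_RECURSIVE
    # semantics.
    result = []
    for child in children[mnt_id]:
        result.append(child)
        if recursive:
            result.extend(list_mounts(child, recursive=True))
    return result

print(list_mounts(100))                  # → [101, 104]
print(list_mounts(100, recursive=True))  # → [101, 102, 103, 104]
```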
The other call, statmount(), returns the details of a given mount:
    int statmount(const struct __mount_arg *req, struct statmnt *buf,
                  size_t bufsize, unsigned int flags);
For this call, req->mnt_id identifies the mount of interest as before, while req->request_mask tells the kernel which information is requested. The flags value must be zero, and buf points to a buffer (of bufsize bytes) that begins with this structure:
    struct statmnt {
        __u32 size;              /* Total size, including strings */
        __u32 __spare1;
        __u64 mask;              /* What results were written */
        __u32 sb_dev_major;      /* Device ID */
        __u32 sb_dev_minor;
        __u64 sb_magic;          /* ..._SUPER_MAGIC */
        __u32 sb_flags;          /* MS_{RDONLY,SYNCHRONOUS,DIRSYNC,LAZYTIME} */
        __u32 fs_type;           /* [str] Filesystem type */
        __u64 mnt_id;            /* Unique ID of mount */
        __u64 mnt_parent_id;     /* Unique ID of parent (for root == mnt_id) */
        __u32 mnt_id_old;        /* Reused IDs used in proc/.../mountinfo */
        __u32 mnt_parent_id_old;
        __u64 mnt_attr;          /* MOUNT_ATTR_... */
        __u64 mnt_propagation;   /* MS_{SHARED,SLAVE,PRIVATE,UNBINDABLE} */
        __u64 mnt_peer_group;    /* ID of shared peer group */
        __u64 mnt_master;        /* Mount receives propagation from this ID */
        __u64 propagate_from;    /* Propagation from in current namespace */
        __u32 mnt_root;          /* [str] Root of mount relative to root of fs */
        __u32 mnt_point;         /* [str] Mountpoint relative to current root */
        __u64 __spare2[50];
        char str[];              /* Variable size part containing strings */
    };
The kernel will not necessarily fill in all of the fields of this structure; instead, it provides the information indicated in the req->request_mask field. The available requests are:
- STMT_SB_BASIC: "basic" superblock data from the mount, specifically the sb_dev_major, sb_dev_minor, sb_magic, and sb_flags fields.
- STMT_MNT_BASIC: more basic data: mnt_id, mnt_parent_id, mnt_id_old, mnt_parent_id_old, mnt_attr, mnt_propagation, mnt_peer_group, and mnt_master.
- STMT_PROPAGATE_FROM: fills in the propagate_from field. (See the shared subtrees documentation for details on mount propagation).
Requests that yield strings are handled a bit differently. The actual string data will be written in the memory after the structure (buf must be big enough to hold that data), and the offset of the beginning of the string will be stored in the relevant structure field. The string-returning requests are:
- STMT_FS_TYPE: stores the string representation of the filesystem type after the structure, placing the offset of the string in fs_type.
- STMT_MNT_ROOT: stores the path of the mount's root, relative to the root of the filesystem, with the offset in mnt_root.
- STMT_MNT_POINT: stores the path to the mount point, with the offset in mnt_point.
On a successful return, the mask field will be set to indicate which of the other fields in the structure were written by the kernel.
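The offset-into-string-table convention (a fixed struct whose string fields hold offsets into the variable-size data that follows it) is easy to mimic. The three-field layout below is a deliberately simplified illustration, not the real struct statmnt:

```python
import struct

# Simplified reply layout: a 12-byte header (<III> = total size,
# fs_type offset, mnt_root offset) followed by NUL-terminated strings.
HDR = struct.Struct("<III")

def pack_reply(fs_type, mnt_root):
    # Append each string to the table and record its offset, the way
    # the kernel fills in the area after the fixed struct.
    table = b""
    offsets = []
    for s in (fs_type, mnt_root):
        offsets.append(len(table))
        table += s.encode() + b"\0"
    return HDR.pack(HDR.size + len(table), *offsets) + table

def read_str(buf, offset, table_start):
    # Resolve an offset field back into the string it points at.
    start = table_start + offset
    return buf[start:buf.index(b"\0", start)].decode()

buf = pack_reply("ext4", "/")
size, fs_off, root_off = HDR.unpack_from(buf)
print(read_str(buf, fs_off, HDR.size),
      read_str(buf, root_off, HDR.size))  # → ext4 /
```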
VFS maintainer Christian Brauner has accepted this series, with the probable objective of merging it for 6.8. He made a few changes in the process, though: struct statmnt was renamed to struct statmount and struct __mount_arg became struct mnt_id_req. "Libraries can expose this in whatever form they want but we'll also have direct consumers. I'd rather have this struct be underscore free and officially sanctioned." The result has not yet shown up in linux-next, but seems likely to do so once the 6.7 merge window has closed.
It would not be surprising if the interface provided by C libraries differed from that shown here. The mnt_id_req structure, for example, is used to simplify compatibility across multiple architectures, but user-space libraries do not have the same concerns and may not wish to expose that structure. Details like that are unlikely to be worked out before these system calls show up in a released kernel. Eventually, though, there will be a better and easier way to obtain information about which filesystems are mounted.
The rest of the 6.7 merge window
By the time that the 6.7 merge window closed on November 12, 15,418 non-merge changesets had been pulled into the mainline kernel. That makes this one of the busiest merge windows ever; if one discounts the lengthy bcachefs development history (some 2,800 commits), though, then the patch volume is roughly in line with other recent kernels. Over 5,000 of those commits were merged after our first-half merge-window summary was written.

Interesting changes pulled into the mainline during the second half of the 6.7 merge window include:
Architecture-specific
- The LoongArch architecture has gained support for virtualization with KVM. There is a bit of information in this documentation commit.
- KVM on RISC-V now supports the Smstateen extension, which is intended to prevent virtual machines from using registers unknown to the hypervisor (as a result of being added by an extension the hypervisor does not support) as covert channels. KVM on RISC-V also now allows the use of the Zicond extension (adding some conditional integer operations) in guests.
- It is now possible for a KVM guest on x86 systems to have up to 4,096 virtual CPUs; the maximum is now set with a kernel configuration option.
- RISC-V has gained support for LLVM-based shadow call stack protection. Clang 17 or later is needed to enable this option.
Core kernel
- Kernel samepage merging (KSM) tries to merge anonymous pages that hold the same contents to improve memory utilization. By its nature, it ends up repeatedly scanning the pages that it has failed to merge, wasting CPU time. A new "smart scan" mode tracks the pages that were unsuccessfully scanned and causes future scans to happen at a reduced frequency. Smart scanning is enabled by default; there is a new sysctl knob (/sys/kernel/mm/ksm/smart_scan) that can be used to query or change its state. Some documentation can be found in this commit.
- The new PAGEMAP_SCAN ioctl() command, available with userfaultfd() file descriptors, can be used to detect writes to a range of memory. It is useful for certain gaming anti-cheat techniques and for CRIU. See this article and this documentation commit for more information.
Filesystems and block I/O
- The Ceph filesystem has gained support for ID-mapped mounts.
Hardware support
- GPIO and pin control: Nuvoton NPCM8XX pin controllers, Realtek DHC 1315E, DHC 1319D, and DHC 1619B pin controllers, and Amlogic T7 SoC pin controllers.
- Industrial I/O: Microchip MCP39xx, MCP246x, and MCP356x analog-to-digital converters, Linear Technology LTC2309 analog-to-digital converters, Kionix KX132-1211 accelerometers, and ROHM BM1390 pressure sensors.
- Media: Nuvoton NPCM video capture/encode engines, Digiteq Automotive MGB4 grabber cards, and onsemi MT9M114 sensors.
- Miscellaneous: Renesas R-Car Gen4 PCIe controllers, Xilinx DMA PL PCIe host bridges, Kinetic KTD2026/7 RGB/white LED controllers, Qualcomm SDX75 interconnect providers, Espressif ESP32 UARTs, Espressif ESP32 USB ACM gadget controllers, and SigmaStar SSD202D realtime clocks.
- Sound: Starfive JH7110 PWM-DAC digital-to-analog converters, Richtek RTQ9128 45W digital input amplifiers, Awinic aw87390 and aw88399 amplifiers, and AMD ACP6.3 audio platforms.
- USB: Realtek DWC3 controllers, Intel La Jolla Cove USB adapters, and NXP PTN36502 Type-C redrivers.
Miscellaneous
- See this merge message for the long list of new features and enhancements to the perf tool in 6.7.
Security-related
- The Landlock security module has gained the ability to control network connections. See this commit message and this documentation patch for a bit more information.
- The AppArmor security module can now control access to io_uring and the creation of user namespaces; this feature was merged with no documentation.
- The kernel has gained a new API for virtual-machine attestation. See this changelog for some more information.
Internal kernel changes
- The memory-management's shrinker mechanism has been reworked to eliminate some locking overhead and reduce contention. Shrinkers are now all allocated dynamically, and some common operations have been made lockless.
- After some setbacks, the work to improve the reliability and performance of printk() is back. The biggest change merged for 6.7 is a new per-console locking scheme that allows for high-priority messages (a panic, for example) to take over console output from a lower-priority message.
- The build system will now build perf BPF programs by default if a suitable version of Clang is available.
- The old videobuf layer, long used to manage frame buffers in the media subsystem, has been removed. Drivers should have moved to videobuf2 years ago.
- The MAINTAINERS file has been updated to reflect the fact that a number of kernel mailing lists have moved off lists.linux-foundation.org and onto lists.linux.dev. The old list addresses work for now but may eventually stop working, so it would be wise to update address books and such to match the new reality.
If this were a normal nine-week development cycle, the final 6.7 release would happen on December 31. Past history suggests, though, that it is unlikely that Linus Torvalds will choose to release the kernel then, when so many developers are away from their keyboards. So a more realistic release date is January 7, 2024. Before then, though, there will surely be a lot of bugs to find and fix, as usual.
Using Common Lisp in Emacs
Lisp is one of the oldest programming languages still in use today, but it has evolved in multiple directions over its more than 60-year history. Two of the more prominent descendants, Common Lisp and Emacs Lisp (or Elisp), are fairly closely related at some level, but there is still something of a divide between them. Some recent discussions on the emacs-devel mailing list have shown that some elements from Common Lisp are not completely welcome in Elisp—at least in the code that is maintained by the Emacs project itself.
The discussion goes back to at least mid-September when the subject of keyword arguments, which are used extensively in Common Lisp, came up in another context; at that time, Richard Stallman pointed out that, while there is some amount of Common Lisp compatibility available for Elisp, it is "not supposed to be used a lot". Alfred M. Szmidt concurred, noting that useful pieces of Common Lisp can be adopted, but: "There is no need to make Emacs Lisp complicated for the sake of compatibility with Common Lisp."
Meanwhile, Emacs maintainer Eli Zaretskii quantified the situation: "466 out of 1637 Lisp files in Emacs require cl-lib (some of them only during compilation, i.e. they use only the macros)." The Elisp "cl-lib" library (formerly "cl", which is deprecated, as Emacs regularly tells me) provides various compatibility macros and functions for those who need or want to use them in Emacs.
The conversation mostly dropped for a month, but Stallman picked it up again in mid-October; he was trying to clarify how many uses of cl-lib in Emacs were problematic. Using cl-lib for its macros, which only happens at compile time, is not really much of a problem in his view, but run-time uses of the library may be more so. As he said in 2003, Stallman dislikes Common Lisp; he does "not have a very high opinion of many of the decisions that were made in Common Lisp", keyword arguments in particular.
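For readers unfamiliar with the feature at the center of the debate: keyword arguments let callers name parameters, supply them in any order, and rely on defaults, instead of passing them positionally. In Elisp that style comes from cl-lib's cl-defun macro rather than plain defun. A minimal sketch of the difference (the greet functions here are invented for illustration):

```elisp
;; Plain Elisp: defun supports only positional (and &optional) arguments.
(defun greet-positional (name greeting)
  (format "%s, %s!" greeting name))

;; With cl-lib, cl-defun accepts Common Lisp-style &key parameters,
;; including default values.
(require 'cl-lib)
(cl-defun greet (name &key (greeting "Hello"))
  (format "%s, %s!" greeting name))

(greet "world")                  ; => "Hello, world!"
(greet "world" :greeting "Hi")   ; => "Hi, world!"
```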
There are, Zaretskii reported, 226 run-time uses, which Emanuel Berg converted to percentages: 28% of Elisp files use cl-lib, 15% at compile time, and 14% at run time. Stallman said that he is most concerned with anything that might cause cl-lib to be loaded when Emacs is started without any customization (as when using the -Q command-line option).
Alan Mackenzie tried to track down how many uses of cl- symbols there were in each of the Elisp files, suggesting that it would be a huge chore to remove all of them "because there are enough contributors who think that cl-lib is just an ordinary part of Emacs to be used freely without restraint", though he does not count himself among them.
Zaretskii questioned the numbers, but said that some uses of cl-lib are likely unavoidable:
We cannot possibly expect people to contribute code if we force them not to use the macros they are used to. If cl-lib is not loaded as result, that is good enough for us, I think.
Berg wondered why it mattered whether cl-lib was not being loaded under some circumstances; "I think it is pretty clear that cl-lib is used quite broadly, probably because people find it useful." Zaretskii replied that it matters because the project decided not to load cl-lib in "vanilla Emacs" because of "bloat, unnecessary namespace pollution, etc." While cl-lib is not loaded for "emacs -Q", Berg noted that it is likely loaded soon after Emacs starts, since the debugger, native compilation, and many other Emacs packages rely on cl-lib. Zaretskii was unfazed by that:
What matters to us is that this unnecessary stuff is not loaded in "emacs -Q", i.e. is not dumped and preloaded into the bare uncustomized Emacs session. This lets users start from the leanest possible Emacs, then load what they need, without being "punished" by loads of stuff they never need or want. For Emacs to preload unnecessary stuff is unclean, and we try to avoid that uncleanliness.
There were arguments made that, these days, cl-lib was just an expected part of the Emacs environment, so perhaps it should be considered for preloading as well. As might be guessed, that did not really go far. In fact, the current long thread is an offshoot of one where switching Emacs to use Common Lisp was suggested as a way to reduce the C footprint of the editor. That went over even worse, unsurprisingly.
The main objection is that developers have to learn the Common Lisp functions and such in order to understand code that uses them. While Zaretskii sees no problem with developers using cl-lib in their own code or in Emacs extensions, he does not want to see it used in the vanilla Elisp code that gets preloaded when Emacs is started. Some see that as an ideological move, but Zaretskii said that it is simply a maintainability choice that the Emacs maintainers have made.
Stallman would like to go further, however; he sees the use of cl-lib by the debugger and native compilation extensions as problems to be solved:
Let's fix the debugger, and native compilation, not to load cl-lib. It's not so bad if you load a specialized package that serves your code and it uses cl-lib. But the debugger can be loaded to debug anything, even programs you have nothing to do with. It's rude for the debugger to load cl-lib.
Likewise for native compilation. You might recompile anything.
Zaretskii, however, does not see those uses as problems at all (for example: "The debugger uses cl-lib for good reasons."). He said that he does not see a reason to "waste our resources" on an effort of that sort. Stefan Kangas concurred: "let's focus our efforts on fixing real problems".
Stallman did a bit of digging into some of the entries in Mackenzie's list, finding a false positive for byte-run.el, but also that the abbrev minor mode uses the cl-pushnew macro defined in cl-lib. "A generally useful minor mode as abbrev.el should not load cl-lib." He suggested that the definition of cl-pushnew should move elsewhere so that it does not cause cl-lib to get loaded. Then the other entries on the list should be investigated and fixed as well. After that:
Once we fix them all, how about if we add a regression test to verify that various packages anyone might load at any time do not use cl-lib. With that, bugs like this would get caught right away.
Zaretskii pointed out that cl-pushnew is simply a thin wrapper around another function that is defined in cl-lib, so moving it would not stop the loading of the library. Meanwhile, a regression test would be tricky to write "since our test suite infrastructure, ert.el, itself uses cl-lib extensively". Stallman came up with a different way to write the code that uses cl-pushnew, but it was more verbose (and repetitive); opinions varied on whether it was truly an improvement.
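To illustrate the trade-off being debated: cl-pushnew adds an element to a list-valued place only if it is not already present, while a plain-Elisp rewrite has to repeat the place in both the membership test and the push, which is the verbosity in question. A rough sketch of the two styles (the my-modes variable is invented for illustration; this is not the actual abbrev.el code):

```elisp
(require 'cl-lib)

(defvar my-modes nil "Example list used only for illustration.")

;; Using cl-lib: a single form, with an optional :test keyword argument
;; selecting the equality predicate.
(cl-pushnew 'abbrev-mode my-modes :test #'eq)

;; Plain Elisp: more verbose and repetitive, since the place (my-modes)
;; appears in both the membership test and the push.
(unless (memq 'abbrev-mode my-modes)
  (push 'abbrev-mode my-modes))
```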
But these days, Zaretskii said, it is pretty much a requirement for Emacs maintainers to "be familiar with the cl-lib and cl-macs functionalities" and that it is not rocket science to do so. Mackenzie strongly disagreed that it was straightforward to pick up that knowledge:
I simply don't have the brain power to memorize such a morass of arbitrary, poorly named, poorly documented stuff. [...] I frequently spend, perhaps, half of my debugging time trying to understand what some cl-* does rather than concentrating on the problem to be debugged.
It is this added complexity that Stallman was trying to avoid when he designed Elisp, Bob Rogers said; "he sees cl-lib.el as a trojan horse that is changing Emacs Lisp". Stallman agreed with that, noting that he was under the impression that cl-lib was simply meant as an optional extension, which is fine, but that he objects to it becoming essential, which is where things are heading:
There is no sharp line between the one and the other, and no simple test to determine which of those two our current situation actually is. Rather, there is a spectrum that runs from "CL [Common Lisp] support is available but you can ignore it" to "the CL constructs are an essential and unavoidable part of Emacs Lisp." The specific practical questions I've asked are efforts to evaluate where we are now along that spectrum. Of course, the answer to that isn't precise either. But I was very surprised to learn how far Emacs has gone towards the latter end.
There are other libraries that add complexity to Elisp, however, so João Távora wondered what was special about cl-lib in this regard:
Then why this laser-focus on cl-lib.el? Why not criticize the use of seq.el and map.el and pcase.el all of which "add many functions, which have many details"? Why are these libraries seemingly exempt from this discussion?
He followed that up with some further analysis of the difference in the way cl-lib is treated; there are other additions to Elisp that have been adopted along the way, some of them, such as the seq.el library for sequence functions, to the point of being preloaded these days.
Part of the difference is that Stallman truly dislikes the prevalence of keyword arguments in Common Lisp, which is something that he does not want to propagate into Elisp. Some discussion of adding an Elisp pushnew to replace cl-pushnew made it clear that he did not want an Elisp version to support the latter's keyword arguments, thus it would not be a drop-in replacement. There are, Stallman said, some places where keyword arguments might make sense in Elisp, such as for sort, but that simpler functions and macros should not add them.
The proposed change to the abbrev minor mode, which would avoid cl-pushnew, is not something that Zaretskii sees as needed, though he would not object if "the result is clean, tested, and is not horribly complicated". There is "nothing wrong with loading cl-lib when its facilities are put to a good use". Stallman disagreed and said that he has "decided to start fixing some of those files not to use the run-time CL features".
To an outsider, the push to avoid cl-lib (and Common Lisp, in general) in Emacs seems a bit strange and the "rules" governing its use seem fairly arbitrary—not even the creator and the current maintainers can agree. There is obviously a balance to be struck, for maintainability and for reducing the number of Lisp constructs that Emacs hackers need to know, but one has to wonder if that balance has tipped too far in either direction. Stallman (and some others) clearly seem to think it has, but Common Lisp has a fair share of Lisp hackers as well—many using Emacs as their development environment—so the situation is a little puzzling, at least to this neophyte (Common) Lisp hacker.
Brief items
Security
Intel's "redundant prefix issue"
Tavis Ormandy has described a bug in some Intel CPUs that can lead to a crash (or worse):
We believe this bug causes the frontend to miscalculate the size of the movsb instruction, causing subsequent entries in the ROB [reorder buffer] to be associated with incorrect addresses. When this happens, the CPU enters a confused state that causes the instruction pointer to be miscalculated. The machine can eventually recover from this state, perhaps with incorrect intermediate results, but becoming internally consistent again. However, if we cause multiple SMT or SMP cores to enter the state simultaneously, we can cause enough microarchitectural state corruption to force a machine check.
Intel has released a microcode update to address the issue.
Kernel development
Kernel release status
The current development kernel is 6.7-rc1, released on November 12. This is the largest merge window ever, but some of that was due to the bcachefs history that came with the merge of that filesystem. But 6.7 is pretty big in other ways too, with: 12678 files changed, 838819 insertions(+), 280754 deletions(-)
which is also bigger than those historically big releases [4.9, 5.8 and 5.13]. And that's not due to bcachefs, that's actually mainly due to ia64 removal and a lot of GPU support (notably lots of AMD GPU header files again - lots and lots of lines, but there's support for new nvidia cards too).
Stable updates: none have been released in the last week.
A documentary on the development of eBPF
For folks with an interest in how extended BPF came to be and a half-hour to spare, the announcement has gone out of a new film called "eBPF: Unlocking the kernel", released at the KubeCon+CloudNativeCon event. The documentary is available on YouTube.
Development
A GNU COBOL status update
For the COBOL users out there, James K. Lowden has posted an update on the current status of the GNU COBOL compiler.
When in November we turn back our clocks, then naturally do programmers' thoughts turn to Cobol, its promise, and future. At last post, nine months ago, we were working our way through the NIST CCVS/85 test suite. I am pleased to report that process is complete. As far as NIST is concerned, gcobol is a Cobol compiler.
GNOME supported by the Sovereign Tech Fund
The GNOME Foundation has announced the receipt of a €1 million award from the German Sovereign Tech Fund. The funding will support work on accessibility, privacy, hardware support, and more.
Page editor: Jake Edge
Announcements
Newsletters
Distributions and system administration
Development
Meeting minutes
Calls for Presentations
CFP Deadlines: November 16, 2023 to January 15, 2024
The following listing of CFP deadlines is taken from the LWN.net CFP Calendar.
Deadline | Event Dates | Event | Location |
---|---|---|---|
November 30 | February 27 | Open Source Camp on Kubernetes | Nuremberg, Germany |
December 16 | April 12 - April 13 | Texas Linux Fest | Austin, Texas, US |
January 2 | April 26 - April 28 | Linux Fest Northwest | Bellingham, WA, US |
January 14 | April 16 - April 18 | Open Source Summit North America | Seattle, US |
If the CFP deadline for your event does not appear here, please tell us about it.
Upcoming Events
Events: November 16, 2023 to January 15, 2024
The following event listing is taken from the LWN.net Calendar.
Date(s) | Event | Location |
---|---|---|
November 25 - November 26 | MiniDebConf Cambridge | Cambridge, UK |
November 27 - November 29 | Deutsche Open Stack Tage | Berlin, Germany |
November 28 | NLUUG Fall Conference | Utrecht, The Netherlands |
November 28 - November 30 | Yocto Project Summit 2023.11 | Online |
December 1 - December 3 | GNOME.Asia 2023 | Kathmandu, Nepal |
December 2 - December 3 | EmacsConf 2023 | Online |
December 5 - December 6 | Open Source Summit Japan | Tokyo, Japan |
December 12 - December 15 | PostgreSQL Conference Europe | Prague, Czechia |
If your event does not appear here, please tell us about it.
Security updates
Alert summary November 9, 2023 to November 15, 2023
Dist. | ID | Release | Package | Date |
---|---|---|---|---|
Debian | DLA-3650-1 | LTS | audiofile | 2023-11-12 |
Debian | DSA-5550-1 | stable | cacti | 2023-11-08 |
Debian | DSA-5551-1 | stable | chromium | 2023-11-09 |
Debian | DSA-5552-1 | stable | ffmpeg | 2023-11-12 |
Debian | DLA-3653-1 | LTS | libclamunrar | 2023-11-15 |
Debian | DLA-3651-1 | LTS | postgresql-11 | 2023-11-14 |
Debian | DSA-5554-1 | stable | postgresql-13 | 2023-11-13 |
Debian | DSA-5553-1 | stable | postgresql-15 | 2023-11-13 |
Debian | DLA-3652-1 | LTS | ruby-sanitize | 2023-11-14 |
Fedora | FEDORA-2023-464caf9bb6 | F37 | CuraEngine | 2023-11-09 |
Fedora | FEDORA-2023-f3c4404efd | F38 | CuraEngine | 2023-11-09 |
Fedora | FEDORA-2023-1d57a86dfa | F39 | CuraEngine | 2023-11-09 |
Fedora | FEDORA-2023-f29e9560a1 | F38 | chromium | 2023-11-14 |
Fedora | FEDORA-2023-f83b5e84d3 | F39 | chromium | 2023-11-14 |
Fedora | FEDORA-2023-6efef709eb | F37 | community-mysql | 2023-11-10 |
Fedora | FEDORA-2023-9ff7fd16a0 | F38 | community-mysql | 2023-11-10 |
Fedora | FEDORA-2023-e7aa13efc5 | F39 | community-mysql | 2023-11-10 |
Fedora | FEDORA-2023-ce436d56f8 | F37 | frr | 2023-11-15 |
Fedora | FEDORA-2023-61abba57d8 | F38 | frr | 2023-11-15 |
Fedora | FEDORA-2023-514db5339e | F39 | frr | 2023-11-15 |
Fedora | FEDORA-2023-ed9922536e | F38 | keylime | 2023-11-12 |
Fedora | FEDORA-2023-f8d216faed | F38 | matrix-synapse | 2023-11-10 |
Fedora | FEDORA-2023-957972e77c | F39 | matrix-synapse | 2023-11-10 |
Fedora | FEDORA-2023-f3389245ce | F37 | optipng | 2023-11-14 |
Fedora | FEDORA-2023-ae05c3bca8 | F38 | optipng | 2023-11-14 |
Fedora | FEDORA-2023-125037736c | F39 | optipng | 2023-11-14 |
Fedora | FEDORA-2023-00c78aad58 | F39 | podman | 2023-11-09 |
Fedora | FEDORA-2023-1a120657f9 | F38 | python-pillow | 2023-11-12 |
Fedora | FEDORA-2023-f2a6d27239 | F37 | radare2 | 2023-11-14 |
Fedora | FEDORA-2023-ffaebb1e10 | F38 | radare2 | 2023-11-14 |
Fedora | FEDORA-2023-70578c5599 | F37 | roundcubemail | 2023-11-15 |
Fedora | FEDORA-2023-0fd9865145 | F38 | roundcubemail | 2023-11-15 |
Fedora | FEDORA-2023-cf584ed77a | F39 | roundcubemail | 2023-11-15 |
Fedora | FEDORA-2023-8dd1a1a2e6 | F37 | rubygem-rmagick | 2023-11-09 |
Fedora | FEDORA-2023-dbacf5d9f6 | F38 | tigervnc | 2023-11-13 |
Fedora | FEDORA-2023-cb3cacfef8 | F37 | webkitgtk | 2023-11-15 |
Fedora | FEDORA-2023-18cb340b28 | F37 | xorg-x11-server-Xwayland | 2023-11-10 |
Mageia | MGASA-2023-0318 | 8, 9 | freerdp | 2023-11-15 |
Mageia | MGASA-2023-0311 | 9 | gnome-shell | 2023-11-09 |
Mageia | MGASA-2023-0313 | 9 | openssl | 2023-11-09 |
Mageia | MGASA-2023-0317 | 9 | quictls | 2023-11-12 |
Mageia | MGASA-2023-0315 | 9 | squid | 2023-11-10 |
Mageia | MGASA-2023-0319 | 8, 9 | tomcat | 2023-11-15 |
Mageia | MGASA-2023-0314 | 8, 9 | vim | 2023-11-10 |
Mageia | MGASA-2023-0316 | 8, 9 | vorbis-tools | 2023-11-12 |
Mageia | MGASA-2023-0312 | 8, 9 | zlib | 2023-11-09 |
Oracle | ELSA-2023-12971 | OL7 | dnsmasq | 2023-11-09 |
Oracle | ELSA-2023-12972 | OL7 | dnsmasq | 2023-11-09 |
Oracle | ELSA-2023-12952 | OL7 | grub2 | 2023-11-13 |
Oracle | ELSA-2023-6823 | OL7 | python3 | 2023-11-10 |
Oracle | ELSA-2023-6805 | OL7 | squid | 2023-11-09 |
Oracle | ELSA-2023-6267 | OL8 | squid:4 | 2023-11-09 |
Oracle | ELSA-2023-6802 | OL7 | xorg-x11-server | 2023-11-09 |
Red Hat | RHSA-2023:7190-01 | EL8 | avahi | 2023-11-14 |
Red Hat | RHSA-2023:7177-01 | EL8 | bind | 2023-11-14 |
Red Hat | RHSA-2023:7207-01 | EL8 | c-ares | 2023-11-14 |
Red Hat | RHSA-2023:7116-01 | EL8 | c-ares | 2023-11-14 |
Red Hat | RHSA-2023:6943-01 | EL8 | cloud-init | 2023-11-14 |
Red Hat | RHSA-2023:7202-01 | EL8 | container-tools:4.0 | 2023-11-14 |
Red Hat | RHSA-2023:6938-01 | EL8 | container-tools:4.0 | 2023-11-14 |
Red Hat | RHSA-2023:6939-01 | EL8 | container-tools:rhel8 | 2023-11-14 |
Red Hat | RHSA-2023:7165-01 | EL8 | cups | 2023-11-14 |
Red Hat | RHSA-2023:7046-01 | EL8 | dnsmasq | 2023-11-14 |
Red Hat | RHSA-2023:6919-01 | EL8 | edk2 | 2023-11-14 |
Red Hat | RHSA-2023:7083-01 | EL8 | emacs | 2023-11-14 |
Red Hat | RHSA-2023:6812-01 | EL8 | fence-agents | 2023-11-08 |
Red Hat | RHSA-2023:7038-01 | EL8 | flatpak | 2023-11-14 |
Red Hat | RHSA-2023:7189-01 | EL8 | fwupd | 2023-11-14 |
Red Hat | RHSA-2023:6883-01 | EL9.0 | galera, mariadb | 2023-11-13 |
Red Hat | RHSA-2023:7053-01 | EL8 | ghostscript | 2023-11-14 |
Red Hat | RHSA-2023:6972-01 | EL8 | grafana | 2023-11-14 |
Red Hat | RHSA-2023:6795-01 | EL7 | insights-client | 2023-11-08 |
Red Hat | RHSA-2023:6811-01 | EL8.1 | insights-client | 2023-11-08 |
Red Hat | RHSA-2023:6798-01 | EL8.4 | insights-client | 2023-11-08 |
Red Hat | RHSA-2023:6796-01 | EL9.0 | insights-client | 2023-11-08 |
Red Hat | RHSA-2023:6887-01 | EL8 | java-21-openjdk | 2023-11-14 |
Red Hat | RHSA-2023:7077-01 | EL8 | kernel | 2023-11-14 |
Red Hat | RHSA-2023:6813-01 | EL8.1 | kernel | 2023-11-08 |
Red Hat | RHSA-2023:6901-01 | EL8 | kernel-rt | 2023-11-14 |
Red Hat | RHSA-2023:6799-01 | EL8.1 | kpatch-patch | 2023-11-08 |
Red Hat | RHSA-2023:7029-01 | EL8 | libX11 | 2023-11-14 |
Red Hat | RHSA-2023:6976-01 | EL8 | libfastjson | 2023-11-14 |
Red Hat | RHSA-2023:7090-01 | EL8 | libmicrohttpd | 2023-11-14 |
Red Hat | RHSA-2023:7016-01 | EL8 | libpq | 2023-11-14 |
Red Hat | RHSA-2023:7150-01 | EL8 | librabbitmq | 2023-11-14 |
Red Hat | RHSA-2023:6933-01 | EL8 | libreoffice | 2023-11-14 |
Red Hat | RHSA-2023:7052-01 | EL8 | libreswan | 2023-11-14 |
Red Hat | RHSA-2023:7109-01 | EL8 | linux-firmware | 2023-11-14 |
Red Hat | RHSA-2023:6821-01 | EL8.4 | mariadb:10.5 | 2023-11-08 |
Red Hat | RHSA-2023:6822-01 | EL8.6 | mariadb:10.5 | 2023-11-08 |
Red Hat | RHSA-2023:6940-01 | EL8 | mod_auth_openidc:2.3 | 2023-11-14 |
Red Hat | RHSA-2023:7205-01 | EL8 | nodejs:20 | 2023-11-14 |
Red Hat | RHSA-2023:7160-01 | EL8 | opensc | 2023-11-14 |
Red Hat | RHSA-2023:7174-01 | EL8 | perl-HTTP-Tiny | 2023-11-14 |
Red Hat | RHSA-2023:6886-01 | EL7 | plexus-archiver | 2023-11-13 |
Red Hat | RHSA-2023:7187-01 | EL8 | procps-ng | 2023-11-14 |
Red Hat | RHSA-2023:6944-01 | EL8 | protobuf-c | 2023-11-14 |
Red Hat | RHSA-2023:6885-01 | EL7 | python | 2023-11-13 |
Red Hat | RHSA-2023:7096-01 | EL8 | python-cryptography | 2023-11-14 |
Red Hat | RHSA-2023:7176-01 | EL8 | python-pip | 2023-11-14 |
Red Hat | RHSA-2023:7042-01 | EL8 | python27:2.7 | 2023-11-14 |
Red Hat | RHSA-2023:6823-01 | EL7 | python3 | 2023-11-08 |
Red Hat | RHSA-2023:7151-01 | EL8 | python3 | 2023-11-14 |
Red Hat | RHSA-2023:7024-01 | EL8 | python3.11 | 2023-11-14 |
Red Hat | RHSA-2023:6914-01 | EL8 | python3.11-pip | 2023-11-14 |
Red Hat | RHSA-2023:7050-01 | EL8 | python38:3.8, python38-devel:3.8 | 2023-11-14 |
Red Hat | RHSA-2023:7034-01 | EL8 | python39:3.9, python39-devel:3.9 | 2023-11-14 |
Red Hat | RHSA-2023:6967-01 | EL8 | qt5-qtbase | 2023-11-14 |
Red Hat | RHSA-2023:6961-01 | EL8 | qt5-qtsvg | 2023-11-14 |
Red Hat | RHSA-2023:7058-01 | EL8 | rhc | 2023-11-14 |
Red Hat | RHSA-2023:7025-01 | EL8 | ruby:2.5 | 2023-11-14 |
Red Hat | RHSA-2023:7112-01 | EL8 | shadow-utils | 2023-11-14 |
Red Hat | RHSA-2023:6884-01 | EL6 | squid | 2023-11-13 |
Red Hat | RHSA-2023:6805-01 | EL7 | squid | 2023-11-08 |
Red Hat | RHSA-2023:6882-01 | EL6 | squid34 | 2023-11-13 |
Red Hat | RHSA-2023:7213-01 | EL8 | squid:4 | 2023-11-14 |
Red Hat | RHSA-2023:6810-01 | EL8.1 | squid:4 | 2023-11-08 |
Red Hat | RHSA-2023:6803-01 | EL8.2 | squid:4 | 2023-11-08 |
Red Hat | RHSA-2023:6804-01 | EL8.4 | squid:4 | 2023-11-08 |
Red Hat | RHSA-2023:6801-01 | EL8.6 | squid:4 | 2023-11-08 |
Red Hat | RHSA-2023:7010-01 | EL8 | sysstat | 2023-11-14 |
Red Hat | RHSA-2023:7022-01 | EL8 | tang | 2023-11-14 |
Red Hat | RHSA-2023:6808-01 | EL8.1 | tigervnc | 2023-11-08 |
Red Hat | RHSA-2023:7065-01 | EL8 | tomcat | 2023-11-14 |
Red Hat | RHSA-2023:7166-01 | EL8 | tpm2-tss | 2023-11-14 |
Red Hat | RHSA-2023:6980-01 | EL8 | virt:rhel, virt-devel:rhel | 2023-11-14 |
Red Hat | RHSA-2023:7055-01 | EL8 | webkit2gtk3 | 2023-11-14 |
Red Hat | RHSA-2023:7015-01 | EL8 | wireshark | 2023-11-14 |
Red Hat | RHSA-2023:6802-01 | EL7 | xorg-x11-server | 2023-11-08 |
Red Hat | RHSA-2023:6916-01 | EL8 | xorg-x11-server | 2023-11-14 |
Red Hat | RHSA-2023:6917-01 | EL8 | xorg-x11-server-Xwayland | 2023-11-14 |
Red Hat | RHSA-2023:7057-01 | EL8 | yajl | 2023-11-14 |
Scientific Linux | SLSA-2023:5691 | SL7 | bind | 2023-11-09 |
Scientific Linux | SLSA-2023:5477 | SL7 | firefox | 2023-11-09 |
Scientific Linux | SLSA-2023:6162 | SL7 | firefox | 2023-11-09 |
Scientific Linux | SLSA-2023:5761 | SL7 | java-1.8.0-openjdk | 2023-11-09 |
Scientific Linux | SLSA-2023:5736 | SL7 | java-11-openjdk | 2023-11-09 |
Scientific Linux | SLSA-2023:5622 | SL7 | kernel | 2023-11-09 |
Scientific Linux | SLSA-2023:5615 | SL7 | libssh2 | 2023-11-09 |
Scientific Linux | SLSA-2023:6886 | SL7 | plexus-archiver | 2023-11-13 |
Scientific Linux | SLSA-2023:6885 | SL7 | python | 2023-11-13 |
Scientific Linux | SLSA-2023:5616 | SL7 | python-reportlab | 2023-11-09 |
Scientific Linux | SLSA-2023:6823 | SL7 | python3 | 2023-11-09 |
Scientific Linux | SLSA-2023:6805 | SL7 | squid | 2023-11-09 |
Scientific Linux | SLSA-2023:6193 | SL7 | thunderbird | 2023-11-09 |
Scientific Linux | SLSA-2023:6802 | SL7 | xorg-x11-server | 2023-11-09 |
Slackware | SSA:2023-318-01 | | mariadb | 2023-11-14 |
Slackware | SSA:2023-317-01 | | tigervnc | 2023-11-13 |
SUSE | SUSE-SU-2023:4431-1 | MP4.2 SLE15 SES7.1 | apache2 | 2023-11-13 |
SUSE | SUSE-SU-2023:4430-1 | MP4.3 SLE15 SLE-m5.3 SLE-m5.4 SLE-m5.5 oS15.4 oS15.5 | apache2 | 2023-11-13 |
SUSE | SUSE-SU-2023:4432-1 | SLE15 | apache2 | 2023-11-13 |
SUSE | openSUSE-SU-2023:0368-1 | osB15 | chromium | 2023-11-14 |
SUSE | SUSE-SU-2023:4415-1 | MP4.2 MP4.3 SLE15 SES7.1 oS15.4 oS15.5 | clamav | 2023-11-10 |
SUSE | openSUSE-SU-2023:0370-1 | osB15 | connman | 2023-11-14 |
SUSE | openSUSE-SU-2023:0369-1 | osB15 | connman | 2023-11-14 |
SUSE | SUSE-SU-2023:4416-1 | MP4.3 SLE15 SLE-m5.3 SLE-m5.4 oS15.4 osM5.3 osM5.4 | containerized-data-importer | 2023-11-13 |
SUSE | SUSE-SU-2023:4449-1 | MP4.3 SLE15 oS15.3 oS15.4 oS15.5 | exfatprogs | 2023-11-15 |
SUSE | openSUSE-SU-2023:0360-1 | SLE12 | go1.21 | 2023-11-09 |
SUSE | SUSE-SU-2023:4414-1 | SLE15 oS15.5 | kernel | 2023-11-10 |
SUSE | SUSE-SU-2023:4429-1 | MP4.3 SLE15 SLE-m5.3 SLE-m5.4 oS15.4 | kernel-firmware-nvidia-gspx-G06, nvidia-open-driver-G06-signed | 2023-11-13 |
SUSE | SUSE-SU-2023:4427-1 | SLE15 SLE-m5.5 oS15.5 | kernel-firmware-nvidia-gspx-G06, nvidia-open-driver-G06-signed | 2023-11-13 |
SUSE | openSUSE-SU-2023:0363-1 | osB15 | mupdf | 2023-11-11 |
SUSE | SUSE-SU-2023:4425-1 | SLE12 | postgresql, postgresql15, postgresql16 | 2023-11-13 |
SUSE | SUSE-SU-2023:4433-1 | SLE12 | postgresql12 | 2023-11-14 |
SUSE | SUSE-SU-2023:4434-1 | SLE12 | postgresql13 | 2023-11-14 |
SUSE | SUSE-SU-2023:4418-1 | SLE12 | postgresql14 | 2023-11-13 |
SUSE | SUSE-SU-2023:4426-1 | OS9 SLE12 | python-Django1 | 2023-11-13 |
SUSE | SUSE-SU-2023:4388-1 | MP4.3 SLE15 SLE-m5.1 SLE-m5.2 SLE-m5.3 SLE-m5.4 SLE-m5.5 SES7.1 oS15.3 oS15.4 oS15.5 osM5.3 osM5.4 | salt | 2023-11-09 |
SUSE | SUSE-SU-2023:4387-1 | MP4.3 SLE15 SLE-m5.3 SLE-m5.4 oS15.4 osM5.3 osM5.4 | salt | 2023-11-09 |
SUSE | SUSE-SU-2023:4389-1 | SLE15 | salt | 2023-11-09 |
SUSE | SUSE-SU-2023:4390-1 | SLE15 | salt | 2023-11-09 |
SUSE | SUSE-SU-2023:4386-1 | SLE15 SLE-m5.5 oS15.5 | salt | 2023-11-09 |
SUSE | SUSE-SU-2023:4424-1 | SLE12 | squashfs | 2023-11-13 |
SUSE | SUSE-SU-2023:4423-1 | SLE15 | tomcat | 2023-11-13 |
SUSE | openSUSE-SU-2023:0361-1 | osB15 | tor | 2023-11-10 |
SUSE | SUSE-SU-2023:4440-1 | MP4.2 MP4.3 SLE15 SLE-m5.1 SLE-m5.2 SLE-m5.3 SLE-m5.4 SLE-m5.5 SES7.1 oS15.4 oS15.5 osM5.3 osM5.4 | ucode-intel | 2023-11-14 |
SUSE | SUSE-SU-2023:4442-1 | SLE12 | ucode-intel | 2023-11-14 |
SUSE | SUSE-SU-2023:4441-1 | SLE15 | ucode-intel | 2023-11-14 |
SUSE | openSUSE-SU-2023:0365-1 | osB15 | vlc | 2023-11-12 |
SUSE | openSUSE-SU-2023:0366-1 | osB15 | vlc | 2023-11-12 |
SUSE | SUSE-SU-2023:4439-1 | MP4.3 SLE15 oS15.4 oS15.5 | w3m | 2023-11-14 |
SUSE | SUSE-SU-2023:4438-1 | MP4.2 MP4.3 SLE15 oS15.4 oS15.5 | xterm | 2023-11-14 |
Ubuntu | USN-6475-1 | 16.04 | cobbler | 2023-11-15 |
Ubuntu | USN-6449-2 | 18.04 20.04 22.04 | ffmpeg | 2023-11-15 |
Ubuntu | USN-6456-2 | 20.04 | firefox | 2023-11-14 |
Ubuntu | USN-6465-3 | 22.04 | linux-gke | 2023-11-10 |
Ubuntu | USN-6462-2 | 20.04 | linux-iot | 2023-11-10 |
Ubuntu | USN-6479-1 | 22.04 | linux-oem-6.5 | 2023-11-14 |
Ubuntu | USN-6476-1 | 22.04 23.04 23.10 | memcached | 2023-11-13 |
Ubuntu | USN-6477-1 | 16.04 18.04 20.04 22.04 23.04 23.10 | procps | 2023-11-14 |
Ubuntu | USN-6478-1 | 14.04 16.04 18.04 20.04 22.04 | traceroute | 2023-11-14 |
Ubuntu | USN-6474-1 | 14.04 16.04 18.04 20.04 22.04 | xrdp | 2023-11-09 |
Kernel patches of interest
Kernel releases
Architecture-specific
Build system
Core kernel
Device drivers
Device-driver infrastructure
Documentation
Filesystems and block layer
Memory management
Networking
Security-related
Virtualization and containers
Miscellaneous
Page editor: Jonathan Corbet