Leading items
Welcome to the LWN.net Weekly Edition for August 12, 2021
This edition contains the following feature content:
- Scanning "private" content: another blow against privacy in the name of protecting children.
- The edge-triggered misunderstanding: a change in polling behavior creates user-space trouble — over a year later.
- memfd_secret() in 5.14: what this new system call looks like and its path into the kernel.
- Hardening virtio: protecting virtualized guests from a malicious host.
- Incremental improvements in Linux Mint 20.2: an overview of changes in this release of the Linux Mint distribution.
This week's edition also includes these inner pages:
- Brief items: Brief news items from throughout the community.
- Announcements: Newsletters, conferences, security updates, patches, and more.
Please enjoy this week's edition, and, as always, thank you for supporting LWN.net.
Scanning "private" content
Child pornography and other types of sexual abuse of children are unquestionably heinous crimes; those who participate in them should be caught and severely punished. But some recent efforts to combat these scourges have gone a good ways down the path toward a kind of AI-driven digital panopticon that will invade the privacy of everyone in order to try to catch people who are violating laws prohibiting those activities. It is thus no surprise that privacy advocates are up in arms about an Apple plan to scan iPhone messages and an EU measure to allow companies to scan private messages, both looking for "child sexual abuse material" (CSAM). As with many things of this nature, there are concerns about the collateral damage that these efforts will cause—not to mention the slippery slope that is being created.
iPhone scanning
Apple's move to scan iPhone data has received more press. It would check for image hashes that match known CSAM material; the database of hashes will be provided by the National Center for Missing and Exploited Children (NCMEC). It will also scan photos that are sent or received in its messaging app to try to detect sexually explicit photos to or from children's phones. Both of those scans will be done on the user's phone, which will effectively break the end-to-end encryption that Apple has touted for its messaging app over the years.
Intercepted messages that seem to be of a sexual nature, or photos that include nudity, will result in a variety of interventions, such as blurring the photo or warning about the content of the message. Those warnings will also indicate that the user's parents will be informed; the feature is only targeted at phones that are designated as being for a child—at least for now. The general photo scanning using the NCMEC hashes has a number of safeguards to try to prevent false positives; according to Apple, it "ensures less than a one in one trillion chance per year of incorrectly flagging a given account". Hash matches are reported to Apple, but encrypted as "safety vouchers" that can only be opened after some number of matching images are found:
Only when the threshold is exceeded does the cryptographic technology allow Apple to interpret the contents of the safety vouchers associated with the matching CSAM images. Apple then manually reviews each report to confirm there is a match, disables the user’s account, and sends a report to NCMEC. If a user feels their account has been mistakenly flagged they can file an appeal to have their account reinstated.
The Siri voice-assistant and the iPhone Search feature are also being updated to check for CSAM-related queries, routing requests for help reporting abuse to the appropriate resources, while blocking CSAM-oriented searches:
Siri and Search are also being updated to intervene when users perform searches for queries related to CSAM. These interventions will explain to users that interest in this topic is harmful and problematic, and provide resources from partners to get help with this issue.
The Electronic Frontier Foundation (EFF) is, unsurprisingly, disappointed with Apple's plan:
We’ve said it before, and we’ll say it again now: it’s impossible to build a client-side scanning system that can only be used for sexually explicit images sent or received by children. As a consequence, even a well-intentioned effort to build such a system will break key promises of the messenger’s encryption itself and open the door to broader abuses.

All it would take to widen the narrow backdoor that Apple is building is an expansion of the machine learning parameters to look for additional types of content, or a tweak of the configuration flags to scan, not just children’s, but anyone’s accounts. That’s not a slippery slope; that’s a fully built system just waiting for external pressure to make the slightest change.
The EFF post goes on to point to recent laws passed in some countries that could use the Apple backdoor to screen for other types of content (e.g. homosexual, satirical, or protest content). Apple could be coerced or forced into extending the CSAM scanning well beyond that fairly limited scope. In fact, this kind of expansion has already happened to a certain extent:
We’ve already seen this mission creep in action. One of the technologies originally built to scan and hash child sexual abuse imagery has been repurposed to create a database of “terrorist” content that companies can contribute to and access for the purpose of banning such content. The database, managed by the Global Internet Forum to Counter Terrorism (GIFCT), is troublingly without external oversight, despite calls from civil society. While it’s therefore impossible to know whether the database has overreached, we do know that platforms regularly flag critical content as “terrorism,” including documentation of violence and repression, counterspeech, art, and satire.
For its part, Apple has released a FAQ that says it will refuse any demands by governments to expand the photo scanning beyond CSAM material. There is, of course, no technical way to ensure that does not happen. Apple has bowed to government pressure in the past, making some leery of the company's assurances. As Nadim Kobeissi put it:
Reminder: Apple sells iPhones without FaceTime in Saudi Arabia, because local regulation prohibits encrypted phone calls. That's just one example of many where Apple's bent to local pressure.

What happens when local regulation mandates that messages be scanned for homosexuality?
It is interesting to note that only a few years ago, Apple itself was making arguments against backdoors with many of the same points that the EFF and many other organizations and individuals have made. As Jonathan Mayer pointed out: "Just 5 years ago, Apple swore in court filings that if it built a capability to access encrypted data, that capability would be used far beyond its original context."
EU scanning
Meanwhile in the EU, where data privacy is supposed to reign supreme, the "ePrivacy derogation" is potentially even more problematic. It allows communication-service providers to "monitor interpersonal communications on a voluntary basis with the purpose of detecting and reporting material depicting sexual abuse of minors or attempts to groom children". It is, of course, not a huge leap from "voluntary" to "mandatory". As might be guessed, the scanning will not be done directly by humans—problematic in its own right—but by computers:
The scanning of private conversations will be conducted through automated content recognition tools, powered by artificial intelligence, but under human oversight. Service providers will also be able to use anti-grooming technologies, following consultation with data protection authorities.
The EU General Data Protection Regulation (GDPR) is a sweeping framework for protecting personal data, but since the start of 2021 it no longer covers messaging services. That kind of communication falls under the ePrivacy directive instead, thus the change allowing scanning is a derogation from that directive. Patrick Breyer, member of the EU Parliament, has criticized the derogation on a number of grounds. He lists several problems with it, including:
- All of your chat conversations and emails will be automatically searched for suspicious content. Nothing remains confidential or secret. There is no requirement of a court order or an initial suspicion for searching your messages. It occurs always and automatically.
- If an algorithm classifies the content of a message as suspicious, your private or intimate photos may be viewed by staff and contractors of international corporations and police authorities. Also your private nude photos may be looked at by people not known to you, in whose hands your photos are not safe.
[...]
- You can falsely be reported and investigated for allegedly disseminating child sexual exploitation material. Messaging and chat control algorithms are known to flag completely legal vacation photos of children on a beach, for example. According to Swiss federal police authorities, 86% of all machine-generated reports turn out to be without merit. 40% of all criminal investigation procedures initiated in Germany for “child pornography” target minors.
As Breyer pointed out, there is already proposed legislation to make the scanning mandatory, which would break end-to-end encryption: "Previously secure end-to-end encrypted messenger services such as Whatsapp or Signal would be forced to install a backdoor."
"Safety" vs. privacy
Both of these plans seem well-intentioned, but they are also incredibly dangerous to privacy. The cry of "protect the children" is a potent one—rightly so—but there also need to be checks and balances or the risks to both children and adults are far too high. Various opponents (who were derided as "the screeching voices of the minority" by the NCMEC in a memo to Apple employees) have noted that these kinds of measures can actually harm the victims of these crimes. In addition, they presuppose that everyone is guilty, without the need for warrants or the like, and turn over personal data to companies and other organizations before law enforcement is even in the picture.
As with many problems in the world today, the sexual abuse of children seems an insurmountable one, which makes almost any measure that looks likely to help quite attractive. But throwing out the privacy of our communications is not a sensible—or even particularly effective—approach. These systems are likely to be swamped with reports of completely unrelated activity or of behavior (e.g. minors "sexting" with each other) that is better handled in other ways. In particular, Breyer has suggestions for ways to protect children more effectively:
The right way to address the problem is police work and strengthening law enforcement capacities, including (online) undercover investigations, with enough resources and expertise to infiltrate the networks where child sexual abuse material is distributed. We need better staffed and more specialized police and judicial authorities, strengthening of resources for investigation and prosecution, prevention, support, training and funding support services for victims.
There have long been attacks against encryption in various forms, going back to (at least) the crypto wars in the 1990s. To those of us who lived through those times, all of this looks an awful lot like a step back toward the days of the Clipper chip, with its legally mandated crypto backdoor, and other efforts of that sort. Legislators and well-meaning organizations are seemingly unable to recognize that a backdoor is always an avenue for privacy abuse of various kinds. If it requires screeching to try to make that point—again—so be it.
The edge-triggered misunderstanding
The Android 12 beta release first appeared in May of this year. As is almost obligatory, this release features "the biggest design change in Android's history"; what's an Android release without requiring users to relearn everything? That historical event was not meant to include one change that many beta testers are noticing, though: a kernel regression that breaks a significant number of apps. This problem has just been fixed, but it makes a good example of why preventing regressions can be so hard and how the kernel project responds to them when they do happen.
Back in late 2019, David Howells made some changes to the pipe code to address a couple of problems. Unfortunately, that work caused the sort of regression that the kernel community finds most unacceptable: it slowed down (or even hung) kernel builds. After an extensive discussion, an unfortunate interaction with the GNU make job server was identified and a fix by Linus Torvalds was applied that made the problem go away. The 5.5 kernel was released shortly afterward, kernel builds sped back up, and the problem was deemed to have been solved.
Not done yet
At the end of July, Sandeep Patil informed the community that, while the GNU make problem may have been fixed, the fix created a new problem of its own. He included a patch to revert Torvalds's fix. That revert clearly was never going to be applied on its own — kernel developers still lack an appetite for slower builds — but it did spark an investigation of the real problem.
The 2019 pipe rework and subsequent fix had been much more painful than Torvalds thought they should be, so he made a number of changes to both the organization and the behavior of the code. Specifically, an important change was made to how pipes work with system calls like epoll_wait(), poll(), and select(). If the desired type of I/O is not possible without blocking, these calls will put the calling process onto a wait queue. When the situation changes (data can now be read or written), the processes on the appropriate wait queue are woken so that they can request their I/O.
The 2019 fix changed the way that wakeup was performed. Before, a write to a pipe would unconditionally wake any waiting readers; indeed, it could wake them multiple times in a single system call. The fix changed this behavior to only perform the wakeup if the pipe buffer was empty at the start of the operation; a write to a pipe that already contained data waiting to be read would simply add the new data without causing the wakeup. The reasoning behind this change was straightforward enough: the polling system calls will return immediately if data is already available for reading, so there should not be any waiters if the pipe has available data.
On the edge
There is, however, a mode to epoll_wait() called "edge triggered" (or EPOLLET) that behaves a little differently. A process requesting an edge-triggered wait will not see an immediate return from epoll_wait() if there is available data; instead, it will wait until the situation changes. At least, that is how it worked before the 2019 patches. Once the pipe driver stopped performing wakeups any time that data arrived, processes initiating edge-triggered waits when there was already data available would not see the "edge" and would not wake.
There was, evidently, reason to wonder if this kind of problem would ever arise. The previous version of pipe_write() included this helpful comment:
/* Always wake up, even if the copy fails. Otherwise
 * we lock up (O_NONBLOCK-)readers that sleep due to
 * syscall merging.
 * FIXME! Is this really true?
 */
It turns out that it was really true. There are a number of Android libraries, such as Realm, that depend on getting edge-triggered wakeups even if there had been data waiting in the pipe before the epoll_wait() call was made. The purpose, evidently, is to wait until the pipe buffer is full, then read all of the data in a single operation. Those libraries broke when the 5.10 kernel went into the Android 12 beta, bringing down a set of apps with them. Realm has since worked around the problem but, as Patil pointed out, the joy of bundling means that "it will be a while before all applications incorporate the updated library". Fixing the kernel would repair the problem for all apps.
There seems to be widespread agreement that these libraries manifest a misunderstanding of what "edge-triggered" means and are using the edge-triggered mode incorrectly. As Torvalds explained:
This is literally an epoll() confusion about what an "edge" is. An edge is not "somebody wrote more data". An edge is "there was no data, now there is data".
And a level triggered event is *also* not "somebody wrote more data". A level-triggered signal is simply "there is data".
Notice how neither edge nor level are about "more data". One is about the edge of "no data" -> "some data", and the other is just a "data is available".
That, however, is not how edge-triggered operation was implemented for pipes. Unsurprisingly, in a demonstration of Hyrum's law in action, applications began to rely on the actual behavior of the system rather than the intended semantics. The epoll() man page agrees with Torvalds's description, describing just the sort of blocking behavior experienced by the broken apps. In the distant past, kernel developers might have just answered that these libraries are doing it wrong. But that's not how the kernel works now; thus, Torvalds continued:
But we have the policy that regressions aren't about documentation or even sane behavior. Regressions are about whether a user application broke in a noticeable way.
That interpretation of "regression" requires that the problem be fixed. And indeed that was done with a new fix that was merged for 5.14-rc4 at the end of July and was included in the 5.10.56 and 5.13.8 stable updates. This patch does not quite restore the old behavior; specifically, it will only perform one wakeup per write operation. It does appear to have fixed the problem, though.
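The semantics at issue can be seen from user space. The following minimal sketch (Linux-only; names are illustrative) registers a pipe's read end for an edge-triggered wait and shows that, on a current kernel, the "no data, now there is data" transition is reported once, while a level-triggered wait keeps reporting readiness as long as data remains:

```python
# Minimal sketch of edge- vs. level-triggered epoll on a pipe (Linux only).
import os
import select

r, w = os.pipe()
os.set_blocking(r, False)

ep = select.epoll()
ep.register(r, select.EPOLLIN | select.EPOLLET)  # edge-triggered

os.write(w, b"first")              # "no data" -> "data": this is an edge
print(bool(ep.poll(timeout=0.1)))  # True: the edge is reported once

# Without draining the pipe there is no new edge, so an edge-triggered
# wait now times out even though data is still sitting in the buffer.
print(bool(ep.poll(timeout=0.1)))  # False

# A level-triggered wait, by contrast, simply reports "there is data"
# for as long as the pipe is non-empty.
ep.unregister(r)
ep.register(r, select.EPOLLIN)     # level-triggered
print(bool(ep.poll(timeout=0.1)))  # True
ep.close()
```

The broken libraries, in effect, expected the second wait above to be woken by any subsequent write; the 5.14-rc4 fix restores one wakeup per write to keep them working.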
Problem solved?
The 5.5 kernel was released in January 2020 — a time when few of us understood what was about to descend upon us, and a significant kernel regression is just another addition to the list. This regression thus endured for a year and a half, and found its way into the 5.10 long-term-stable release last December. The fact that it only surfaced now suggests a testing gap among certain users; happily, it was caught before the next Android release was finalized.
The ominous question that remains, though, is whether, in that year and a half, any applications became dependent on the newer semantics. And, indeed, there is already a report (from Intel's automated test system) that the hackbench benchmark regressed by nearly 13% after the latest fix was applied. Torvalds responded that he is "not sure how much hackbench really matters" and that the regression "probably doesn't matter one whit". Even so, he posted a new patch that provides something closer to the older behavior — but only if the pipe is being used with one of the polling functions. Should it turn out that the hackbench regression does matter, there will be a fix in hand.
If the latest fix breaks something else, though, kernel developers may have a difficult choice to make. It is possible that there is no way forward without leaving a broken application somewhere; this is why it is important to catch regressions early. With luck, that sort of breakage won't happen and this particular episode will now be finally done.
memfd_secret() in 5.14
The memfd_secret() system call has, in one form or another, been covered here since February 2020. In the beginning, it was a flag to memfd_create(), but its functionality was later moved to a separate system call. There have been many changes during this feature's development, but its core purpose remains the same: allow a user-space process to create a range of memory that is inaccessible to anybody else — kernel included. That memory can be used to store cryptographic keys or any other data that must not be exposed to others. This new system call was finally merged for the upcoming 5.14 release; what follows is a look at the form this call will take in the mainline kernel.

The prototype for memfd_secret() is:
int memfd_secret(unsigned int flags);
The only allowed flag is FD_CLOEXEC, which causes the file descriptor to be closed when the calling process calls execve(). The secret area will be available to children after a fork, though. The return value from memfd_secret() will be a file descriptor attached to the newly created secret memory area.
At this point, the process can't actually access that memory, which doesn't even have a size yet. A call must be made to ftruncate() to set the size of the area, then mmap() is used to map that "file" into the owning process's address space. The pages allocated to populate that mapping will be removed from the kernel's direct map, and specially marked to prevent them from being mapped back in by mistake. Thereafter, the memory is accessible to that process, but to nobody else, not even the kernel. The memory is thus about as well protected as it can get, which is good, but there are some consequences as well. Pointers to the secret-memory region cannot be used in system calls, for example; this memory is also inaccessible to DMA operations.
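The call sequence described above can be sketched from a language without a wrapper by invoking the raw system call. This is a hedged illustration, not production code: the syscall number 447 is the x86-64 value (other architectures differ), and on most systems the call will fail with ENOSYS because the feature is off unless the kernel was booted with the secretmem_enable= option:

```python
# Sketch of the memfd_secret() -> ftruncate() -> mmap() sequence.
# Assumptions: Linux 5.14+ on x86-64; syscall number 447; the feature
# is disabled by default, so the "unavailable" path is the common one.
import ctypes
import mmap
import os

SYS_memfd_secret = 447  # x86-64 syscall number; architecture-specific

def secret_demo():
    libc = ctypes.CDLL(None, use_errno=True)
    fd = libc.syscall(SYS_memfd_secret, 0)
    if fd < 0:
        # Typically ENOSYS: kernel too old, wrong arch number, or
        # booted without secretmem_enable=
        return "unavailable"
    os.ftruncate(fd, 4096)              # the area has no size until now
    with mmap.mmap(fd, 4096) as m:      # map the "file" into our space;
        m[0:6] = b"secret"              # pages leave the direct map here
        data = bytes(m[0:6])
    os.close(fd)
    return "ok" if data == b"secret" else "mismatch"

print(secret_demo())
```

Note that a pointer into such a mapping cannot be handed to system calls or DMA, so real users copy data in and out through normal memory only when unavoidable.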
The first public posting of this work happened in October 2019. A number of changes have been made over the nearly two years that this system call has been under development — beyond the shift to a separate system call (in July 2020), which was done because this functionality was deemed to have little in common with ordinary memfds. For example, the ability to reserve a dedicated range of memory for memfd_secret() was added in the version-2 posting later in July 2020, only to be removed in version 5 in September.
Early versions of the patch set included a flag to require the removal of the pages in the secret area from the kernel's direct map. Doing so has the advantage of making the memory inaccessible to the kernel (and to anybody who might be able to compromise the kernel), but there were fears that breaking up the 1GB pages used for the direct mapping would degrade performance. Those fears have subsided over time, though; performance with 2MB pages is not much different than with 1GB pages. So that option disappeared in version 4 in August 2020, and direct-map removal became the rule.
Version 8, posted in November, added a couple of changes. Rather than allocating arbitrary kernel memory, CMA was used as the memory pool for memfd_secret(). Another change, which created a bit of controversy over the life of the patch, disables hibernation when a secret memory area is active. The purpose is to prevent the secret data from being written to persistent storage, but some users may become disgruntled if they find that they can no longer hibernate their systems. That notwithstanding, this behavior was part of the version that went into 5.14.
Since the beginning, memfd_secret() has supported a flag requesting uncached mappings — memory that bypasses the memory caches — for the secret area. This feature makes the area even more secure by eliminating copies in the caches (which could be exfiltrated via Spectre vulnerabilities), but setting memory to be uncached will drastically reduce performance. The caches exist for a reason, after all. Andy Lutomirski repeatedly opposed making uncached mappings available, objecting to the performance cost and more; Mike Rapoport, the author of the patch set, finally agreed to remove the feature. Version 11 did so, leading to the current state where there are no flags that are specific to this system call.
In version 17 (February 2021), memfd_secret() was disabled by default and a command-line option (secretmem_enable=) was added to enable it at boot time. This decision was made out of fears that system performance could be degraded by breaking up the direct map and locking secret-area memory in RAM, so the feature is unavailable unless the system administrator turns it on. This version also ended the use of CMA for memory allocations.
And that leads to what is essentially the current state of memfd_secret(). It took 23 versions to get there, which seems like a lot, but that is often the nature of memory-management changes. Once 5.14 comes out, we'll see how big the user community is for this feature, and what changes will be deemed necessary. For now, though, this work would appear to have finally reached a successful conclusion. For the curious, this draft man page has a few more details.
Hardening virtio
Traditionally, in virtualized environments, the host is trusted by its guests, and must protect itself from potentially malicious guests. With initiatives like confidential computing, this rule is extended in the other direction: the guest no longer trusts the host. This change of paradigm requires adding boundary defenses in places where there have been none before. Recently, Andi Kleen submitted a patch set attempting to add the needed protections in virtio. The discussion that resulted from this patch set highlighted the need to secure virtio for a wider range of use cases.
Virtio offers a standardized interface for a number of device types (such as network or block devices). With virtio, the guest runs a simplified, common driver, and the host handles the connection to the real underlying device. The communication between the virtio device (host side) and the driver (guest side) happens using data structures called virtqueues, which are typically memory buffers, though the actual implementation depends on the bus used.
The scope of the hardening
In the confidential-computing world, the host is not allowed to access guest memory that was not explicitly shared with it. In addition, the guest's memory can be encrypted by the processor with a key unknown to the host. Kleen's work is designed to build on Intel's upcoming hardware feature, called Trust Domain Extensions (TDX), which is designed to protect guests in cloud environments. It is built using a number of architecture extensions, including memory encryption with Multi-Key Total Memory Encryption (MKTME) (covered here when a different memory-encryption API was proposed in 2019), and a new CPU mode called Secure-Arbitration Mode (SEAM). In the protected mode, code running under SEAM can only use a specified (encrypted) memory range, while all other processes (and DMA operations) cannot access that zone. Virtio, as a commonly used interface between the guest and the host, must take extra care to avoid compromising the security that TDX provides.
Until recently, virtio drivers assumed that the other side could be trusted. As a consequence, they have sometimes lacked necessary checks when working with the various metadata (operation descriptors, ring positions, result codes, etc.) shared with the device (i.e. the host); thus they could fail to catch bad pointers, out-of-range buffer indices, and similar errors. A malicious host could thus exploit buffer overruns and gain access to guest memory. Checking metadata from devices is also necessary in other cases, as virtio is no longer only used between a guest and a host — some physical devices are now implementing the virtio interface.
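The class of check being added can be illustrated with a small sketch. This is not the kernel's virtio-ring code, just an analogy: before following a descriptor chain whose "next" indices live in memory the device (that is, a possibly hostile host) can write, a hardened driver must bound every index by the ring size and limit the walk so a corrupted chain cannot loop forever:

```python
# Illustrative sketch (not actual virtio code): validate a descriptor
# chain whose indices come from device-writable shared memory.
def walk_chain(next_idx, head, ring_size):
    """Return the descriptor chain starting at head, or raise ValueError.

    next_idx[i] is the device-visible "next descriptor" field for
    descriptor i, with None marking the end of a chain.
    """
    chain = []
    idx = head
    while idx is not None:
        if not 0 <= idx < ring_size:
            raise ValueError(f"descriptor index {idx} out of range")
        if len(chain) >= ring_size:
            raise ValueError("descriptor chain loops")
        chain.append(idx)
        idx = next_idx[idx]
    return chain

# A well-formed chain walks to completion:
print(walk_chain([1, 2, None, None], 0, 4))  # [0, 1, 2]
# Malicious metadata is rejected instead of being followed:
#   walk_chain([9, None], 0, 2)  -> index out of range
#   walk_chain([1, 0], 0, 2)     -> loop detected
```

A pre-hardening driver that trusted these fields would index past the ring or spin forever; the point of Kleen's checks is to turn such corruption into a contained error.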
The patches can be grouped into three parts. The first one is the hardening in virtio itself, placed in virtio-ring. It also includes the disabling of some virtio modes. The second part enables the mode restrictions for x86 systems with TDX enabled. Finally, the last part includes changes in swiotlb, which enables DMA operations in situations where they are not otherwise possible by copying data through an intermediate ("bounce") buffer. The hardening included in the patch set adds additional checks for malicious pointers.
Virtio modes
Virtio defines many modes with different memory organizations, depending on the needs of the device and the driver. This creates multiple code paths to harden, some of which are apparently easier to secure than others. Kleen decided to protect only the so-called split mode, where each virtqueue consists of different parts, each writable by either the driver or the device, but not by both at the same time.
In the proposed patch set, the other modes are disallowed when the guest runs in the TDX protected mode. This choice disallows indirect descriptors, a split-mode extension that allows the allocation of a number of descriptors in a separate memory area, improving performance by increasing the capacity of the ring. Also disabled is the packed mode, a more compact, in-memory layout. This restriction caused a number of comments. Jason Wang observed that disabling indirect descriptors causes a significant performance loss. Kleen had problems securing this mode and thinks it is too difficult to protect. Wang thinks the problem can be solved and promised to post a patch set.
Andy Lutomirski also disagreed with the approach of disabling all modes except one. He highlighted, later in the thread, that devices must not be allowed to corrupt the driver in any setting, so the hardening should be more generic:
For most Linux drivers, a report that a misbehaving device can corrupt host memory is a bug, not a feature. If a USB device can corrupt kernel memory, that's a serious bug. If a USB-C device can corrupt kernel memory, that's also a serious bug, although, sadly, we probably have lots of these bugs. If a Firewire device can corrupt kernel memory, news at 11. If a Bluetooth or WiFi peer can corrupt kernel memory, people write sonnets about it and give it clever names. Why is virtio special?
According to Lutomirski, the driver should be made secure for all use cases, not just the ones using TDX. Disabling other modes only when running TDX does not solve the problem, as bugs in those modes could be exploited to attack systems today. He also noted that virtio is not only implemented in software, but there are also hardware devices that expose a virtio-compatible interface. In another message he suggested splitting the driver into a modern version and a legacy one (including all modes that are not used in practice, or could not be fixed without compatibility issues) and actually harden the modern one completely.
Kleen disagreed, stating that there is no memory protection in other cases (possibly those not using a mechanism like TDX) and there is a risk of compatibility problems (that he did not identify). The boundary checks are enabled unconditionally, but the other virtio modes are only disabled when TDX is active. The discussion ended this way, without clear conclusions.
Similar work
In the discussion, Wang noted that there are similar hardening needs, including support for AMD Secure Encrypted Virtualization (SEV). Another need for virtio hardening comes from SmartNICs and devices implementing virtio, notably including vDPA — a device type that implements virtio for the data path, but has a vendor driver for the control path — and VDUSE, a vDPA device implemented in user space. They have similar problems and should not trust the metadata provided by the device. According to Kleen, those other cases should work with his changes with a few additions.
Conclusions and next steps
Hardening device drivers against malicious devices is an objective welcomed by kernel developers. The discussion shows that there is a need, with multiple use cases, and that different pieces have fixes in the works. Kleen's patch set received mixed reviews in its current form. The main issue seems to be the fact that it is too closely linked to the TDX work and the kernel developers would prefer a more generic solution. We are likely going to see more iterations of this work, and other hardening fixes in virtio, in the future.
Incremental improvements in Linux Mint 20.2
Linux Mint 20.2 "Uma" was released in Cinnamon, MATE, and Xfce editions on July 8. This new version of the popular desktop-oriented distribution has several improvements, including changes to the Update Manager, a new "Sticky Notes" app, a bulk file-renaming tool, improved file search, and better memory management in Cinnamon. Mint 20.2 is a long-term support (LTS) release that will receive security updates until 2025.
Mint releases come in multiple editions, each of which uses a different desktop environment: Cinnamon, which is based on GNOME 3 and developed by the Linux Mint project; MATE, which is a continuation of GNOME 2; and the lightweight Xfce desktop. This has been the case since the release of Mint 19 "Tara" in June 2018, which dropped the KDE edition. The Cinnamon edition is the most popular. Cinnamon is best for mid- to high-end computers; netbooks and other devices with limited resources may do better with MATE or Xfce.
This release has several changes to Mint's update handling. One significant addition is a notification system, which shows desktop notifications when available updates have not been installed. Several precautions were taken to avoid annoying users, though there does not appear to be an easy way to completely disable the notifications. Users can set the Update Manager to only give notifications for updates older than three months, or simply disable the Update Manager entirely (which is not recommended unless updates are installed by some other method).
For those who prefer to install updates as soon as possible, there unfortunately does not seem to be a way to be notified of new updates immediately. The delay between the release of an update and the notification that it has not been installed can be configured, but the minimum value is two days. As with earlier versions of Linux Mint, however, it is still possible to automatically install updates daily. This will no longer cause excessive battery drain, because the Update Manager now checks to ensure that it does not automatically update when running on battery power.
In addition to update notifications, the Update Manager can now also handle updates to Cinnamon "spices", which are panel applets, "desklets" (small applications that show on the desktop), themes, and extensions. Cinnamon 5.0, which was released with Mint 20.2, also provides a Python module for updating spices. This module was created to allow other distributions to incorporate updates for Cinnamon spices into their update managers. The new Update Manager also handles Flatpak updates, which were previously handled by a separate program.
![Sticky Notes](https://static.lwn.net/images/2021/mint-notes-sm.png)
This release replaces Gnote, which had been the default note-taking program since the distribution switched away from Tomboy in Mint 19.3 (released in December 2019). The replacement is Sticky Notes, which was developed by Linux Mint. The main advantage of Sticky Notes over Gnote is that it supports displaying notes on the desktop. The new application supports backups and can import notes from Gnote.
Mint 20.2 adds a file-management feature with a new tool for bulk file renaming, called Bulky. This new capability is available in the Cinnamon and MATE editions, but not in Xfce, which has its own renaming tool. The release also has improvements to the search feature in Nemo, the Cinnamon file manager. Unlike the old search feature, which could search only by file name, the new one can search by name and/or contents.
Warpinator, a LAN-based file-transfer program introduced in June 2020 with all three editions of Mint 20 "Ulyana", now supports selecting a network interface (e.g. for computers with both Ethernet and WiFi connections). It also received a new compression feature, which can triple transfer speeds according to the New Features page for Cinnamon. That page also mentions that a Warpinator-compatible app is now available for Android smartphones and tablets. I tried it with my phone and laptop using a fairly slow WiFi router; it was not super fast, but was definitely usable. The app is available on the Google Play Store as "Warpinator (unofficial)". It also appears to be available on F-Droid, but at least on my phone, the F-Droid app crashes every time I click Warpinator in the search results.
Although they may be less noticeable than some of the other changes, memory-management improvements have been made in Cinnamon. Several memory leaks were fixed. Beyond that, there is an option to automatically restart Cinnamon when its memory usage exceeds a certain threshold, which prevents any remaining leaks from using all available memory. When Cinnamon restarts, the computer will seem to freeze for around a second until Cinnamon is back up and running. This release also adds a memory-monitoring system that logs Cinnamon's memory usage to aid in troubleshooting.
There are several smaller changes in 20.2 that will improve the user experience. Print drivers were updated in this release, improving support for various printers and scanners, especially HP models. Two changes were made to the Image Viewer: it now supports .svgz (gzipped SVG) files and the space bar can be used to pause and resume the slideshow. The Web App Manager, which enables the user to launch web-based apps from the desktop, now supports private-browsing mode. The screensaver now creates a separate screen-lock process, which ensures that the computer stays locked even if the screensaver process crashes. There were also several miscellaneous improvements to power management, the sound/volume applet, window management, and GPU switching.
Like other 20.x releases, Linux Mint 20.2 is based on Ubuntu 20.04 LTS. Mint 20.2 uses Linux kernel 5.4, which was also used in the rest of the 20.x series, as well as in Ubuntu 20.04.
Since Mint 17 "Qiana", which was released in May 2014, all Mint releases have been LTS releases, so it is often not necessary to upgrade every time a new version comes out. In fact, it is often better not to upgrade, as there is always a chance of a regression. The only time an upgrade is truly necessary is when the computer is currently running an unsupported release (check the list of supported releases), or when a new feature from 20.2 is wanted or needed.
Currently, Mint generally follows a pattern of one major release and three point releases for each new Ubuntu LTS release, before switching to the new Ubuntu base and incrementing the major version number. If Mint's current version system continues, Mint 20.3 will most likely be released in late 2021 or early 2022. Assuming Mint continues its current naming system, the version name will be a female name starting with "U" and ending with "a", as with other 20.x releases. Mint 20.3 will likely be followed by Mint 21, some time after the release of Ubuntu 22.04. It will naturally pick up the new software that comes in the Ubuntu release, as well as adding its own Mint-specific twists.
Page editor: Jonathan Corbet