Leading items
Welcome to the LWN.net Weekly Edition for March 30, 2023
This edition contains the following feature content:
- Rebecca Giblin on chokepoint capitalism: an Everything Open keynote on the problem of failure of competition in the creative industries and how transparency (among other things) can help.
- OpenSUSE MicroOS Desktop: a Flatpak-based immutable distribution: a look at how this immutable distribution works for desktop use.
- User-space shadow stacks (maybe) for 6.4: after many attempts, this security feature may finally be headed for a mainline release.
- The curious case of O_DIRECTORY|O_CREAT: a combination of open() flags that has always done surprising things may get fixed.
- Ubuntu stops shipping Flatpak by default: Canonical's plan to focus on the Snap format.
- Free software during wartime: recent events show that our community is not immune to events in the wider world.
This week's edition also includes these inner pages:
- Brief items: Brief news items from throughout the community.
- Announcements: Newsletters, conferences, security updates, patches, and more.
Please enjoy this week's edition, and, as always, thank you for supporting LWN.net.
Rebecca Giblin on chokepoint capitalism
The fourth and final keynote for Everything Open 2023 was given by Professor Rebecca Giblin of the Melbourne Law School, University of Melbourne. It revolved around her recent book, Chokepoint Capitalism, which she wrote with Cory Doctorow; it is "a book about why creative labor markets are rigged — and how to unrig them". Giblin had planned to be in Melbourne to give her talk in person, but "the universe had other plans"; she got delayed in Austin, Texas by an unexpected speaking slot at the South by Southwest (SXSW) conference, so she gave her talk via videoconference from there—at nearly midnight in Austin.
She began by playing the animated teaser video for the book. It describes how the tech and content firms are choking out competition so that they can take the lion's share of any revenue generated before it ever reaches the artists and others who actually did the creative work. The book also has lots of ideas for "how we can recapture creative labor markets to make them fairer and more sustainable", Giblin said in the video.
The book had its genesis in a taxi ride across Melbourne that Doctorow and Giblin took in February 2017. They did not know each other well at that point, but they realized they had a shared vision based on their experiences in the "copyright wars". There is a position being pushed that if you want modifications to the current copyright regime it means "that you hate artists and want them to starve", but it is a false dichotomy.
Flash forward three years after the taxi ride and she was locked down in her apartment in Melbourne; while that was the opposite of freedom in lots of ways, she "decided to take it as a moment of freedom". Normally her work schedule is filled up a few years in advance, she is applying for grants for what she will be doing in four years' time, and so on; "COVID threw that all up in the air".
She quickly realized that what she wanted to do with that time was to tackle the problem of "how big business has stolen creative labor markets and all that we can do to take them back". She worked on it for a few weeks, but it was difficult because she had no one to bounce ideas off of; "I'm just arguing with myself". One day, she thought: "you know what would be better?—if Cory Doctorow wrote this book with me", she said to laughter.
So she sent Doctorow an email and asked if he wanted to write this book with her; "that's a thing you can do, you can write to Cory Doctorow and say 'write a book with me' and he'll say 'yes'", she said to even more merriment. She cautioned that Doctorow was not entirely on board with her joke; "if he were here, he would say 'please don't do that'". But she happened to hit a good time in Doctorow's schedule; working on the book with him was her "lifeline to the outside world during six long Melbourne lockdowns".
Audiblegate
They started looking at the different creative labor markets, since they were mostly only personally familiar with the book industry, and realized that those industries "are all using the same playbook". One of the more egregious examples of that playbook is in the "Audiblegate" scandal, which Giblin said should be far better known than it is. As might be guessed from the name, it involves the Audible audiobook company, which is part of Amazon and the largest vendor of audiobooks in the world.
Audible treats its customers well, with various perks for signing up for a monthly membership. One of those perks is the no-questions-asked return policy that allows readers to buy a book with their monthly credit and then return it for a new credit. Audible gives lots of opportunities to take advantage of that; it is mentioned in an email and app popup when the book is finished, is easily available on the web site, and so on.
Lots of Audible members started treating Audible as a lending library, rather than as a way to purchase audiobooks. The return offer was good for up to a year, so it was often the case that people listened to the book, enjoyed it, and returned it anyway. What they did not realize was that each of those returns would claw back the royalty paid to the author (and others) for that book.
Many authors suspected that kind of thing was happening, but the reporting from Audible did not separately account for returns—it only reported net sales. In October 2020, though, there was a reporting glitch and three weeks of returns data showed up in the authors' accounts. Suddenly authors could see that their books—many quite highly rated—were being returned in enormous numbers. Meanwhile, Audible (thus Amazon) keeps taking in the monthly fee from subscribers.
Amazon has been the master of this playbook, she said; the company even made a diagram to describe it. She showed two minutes from a longer promotional video (starting at the one-minute mark) that she and Doctorow made, which describes the Amazon "flywheel" that the company touts, while exposing what is really going on under the covers. As described by the company, the basic idea is that lower costs lead to lower prices, thus a better customer experience. That, in turn, means that there is more traffic, which attracts more sellers, leading to better selection, which improves the customer experience even more—"and the cycle continues". Amazon calls it a "virtuous cycle", but "it is not 'virtuous', it's anti-competitive", Doctorow said in the video.
What's really happening is that Amazon has always focused on locking in its customers, for example by using DRM on ebooks and audiobooks, "which cements users to Kindle and Audible". Another lock-in is the fast, free shipping it offers to Prime members; "once you pay your annual fee, Amazon becomes your default whenever you need to buy something". Locking in customers allows Amazon to lock in its suppliers as well. Publishers and small businesses cannot afford to give up access to those customers, so they keep listing their goods at Amazon even when it is bad for the business long-term. The lower cost structure that Amazon talks about is "just a euphemism for shaking down its suppliers and workers"; it uses its market power to demand discounts and high fees from those other businesses.
Amazon then uses the money it squeezed out to subsidize prices in order to eliminate competitors who actually pay fairly. As time goes on, that means Amazon's suppliers have even less choice about acceding to its demands; the low prices bring in even more customers who get locked in. "The shakedown grows more merciless and damaging as Amazon's flywheel spins faster and gains ever-more momentum."
After the video snippet, Giblin said that "all of the companies that we look at in the course of the book are doing exactly the same thing". They lock in their users and suppliers, which they can do, in part, because they got lots of "sweet, sweet VC [venture capitalist] dollars with the promise that they will chokepoint their industries" eventually, in order to eliminate competition.
Amazon once spent up to $200-million in a single month when diapers.com looked like it might be a threat in the market for baby diapers, she said. That may sound like an expensive way to corner the diaper ("nappy") market, but it is actually a "very very cheap way of sending a signal" that you are willing to use scorched-earth tactics. For that reason, VCs will not enter the "kill zone of Amazon and these other giants" because they know they will lose everything if a giant "decides that their toes are being stepped on".
Competition?
"Say what you like about capitalism, but competition is supposed to be fundamental to it", Giblin said. What we have seen over the last 40 years is a systematic effort to eliminate competition and enable these chokepoints to form. That came about because of a radical reinterpretation of antitrust law in the US, promulgated by Robert Bork, which said that those laws were not aimed at protecting competition, but were instead targeting consumer welfare. The upshot is that if a single company is dominating a market, it is not an antitrust problem unless consumers are somehow harmed, such as by price increases.
The ideal was that companies that were able to gain a market advantage of that sort would only do so temporarily until a competitor was attracted to the market by the profits being generated. Unfortunately, the 40-year project has successfully found ways to turn these "temporary" advantages into enduring ones. She put up a quote from Peter Thiel ("Competition is for losers."), noting that "they say the quiet part out loud, there's a shamelessness to it [...] 'why would you get into a market where you have to compete?'" Similarly, Warren Buffett is enamored with businesses that have "wide, sustainable moats, so in other words, barriers that stop new competitors from entering the market".
These problems are not only caused by monopolies, but also by monopsonies, which is a lesser-known and sometimes harder to grasp term. For one thing, there is no "family-destroying board game" by that name. Monopsony is the flipside to monopoly; instead of sellers that are overly powerful, a monopsony is made up of buyers that are overly powerful. In Australia, a good example is the two main grocery-store chains that have vast amounts of power compared to the farmers and food producers (other than perhaps the huge multinational producers) that want to sell to them.
So when people started looking at antitrust in this new way, they came to an obvious conclusion: monopoly pricing will hurt consumers, thus get them into trouble, but monopsony buying will not. If the only way to get in trouble is to raise prices for consumers, then the rational thing to do is to "look at the other end of the chain and find ways to squeeze your workers and your suppliers".
The tools used to create the chokepoints are different for the various content industries, but the end result of them being established is the same. Doctorow has coined a new term for this, she said, "which I really love": "enshittification".
Platforms start by directing their services to users in order to attract as many as possible. That leads suppliers to the platform so that they can reach those users. The platforms also start catering to the suppliers, at the expense of the users who are now firmly entrenched. Eventually, the platform finds ways to exploit the suppliers as well—full-on enshittification—putting all of the profits from both sides into the pockets of investors.
"The aim of enshittification is to make the service of the platform as bad as you can but just before the line where people just throw up their hands and give up on it altogether." We are seeing this play out in realtime at Twitter right now, though she thinks "Elon may be on the wrong side of the line", so we may be seeing the final stage: "enshittification death".
What to do?
Giblin said that they were determined that the book "was not going to be yet another 'chapter 11 book'", where there are ten chapters that lay out all of the problems in "excruciating detail" followed by a single chapter that handwaves at some solutions. For her, one unexpected reaction to the book has been readers who report that the first part, where the problems are described, is "utterly rage-inducing"; she has been looking at the problems for so long that she is inured to them to some extent at this point. Some readers may set the book aside (or throw it at the wall) in that first part, but she encourages people to read on, because the second half is all about solutions, which is what she wanted to focus on for the rest of the talk.
Those solutions would generally be aimed at the creative industries, but she invited attendees to look at ways to apply the same kinds of techniques to other types of industries where these schemes are also being deployed. "The 'open ethos' has a lot to teach us about how we can defeat chokepoint capitalism", Giblin said.
Transparency is one of the core values of the free and open-source software (FOSS) movement, but it is also key to other movements, such as open education, open government, open data, and more. Going back to Audiblegate, it is that mistaken revelation of the return numbers that provided the catalyst for the authors to take action, because they no longer only suspected what was going on, they now had hard evidence of it. That small glimmer of light on the underlying data, which Amazon was systematically trying to conceal from the authors, was enough to galvanize authors into action.
That led to an effective campaign to get the word out in the author community. It led Colleen Cross, a former forensic accountant turned author of financial-fraud thrillers, to look into the contracts for the Audible platform, which are deliberately opaque and confusing. She worked out that Audible could not possibly be paying out what they were supposed to be paying to independent authors; "the numbers just didn't add up in a way that that could have been plausible".
For the book, Doctorow and Giblin asked Cross how much money there is at stake; she estimates that there is "hundreds of millions of dollars on the return scam alone" plus an "enormous amount of wage theft". Based on Cross's best information, she thinks that up to 87% of the money paid in for access to Audible audiobooks is going to the platform, so that leaves just 13% to the authors and narrators. It is, of course, not clear whether that's the case because Amazon will not provide that information. "Why won't they? Because 'fuck you', that's why", Giblin said. "They don't have to and they benefit from that lack of transparency."
It is not just Amazon. The major streaming platforms, the major record labels, and others conceal their data from the artists. At a SXSW panel the previous day, a songwriter was describing her royalty statement; "it was 3000 pages long and utterly incomprehensible". This opacity only benefits the record labels, because even if there is only an "honest" mistake, there is no way for the artist to detect it—and the history of outright fraud is vast. One LA-based auditing firm that they talked to for the book had done tens of thousands of audits, mostly of record labels, which had "only once found an error that was in the artist's favor—some kind of isolated probability storm going on there".
Transparency is required for things that might affect investors; companies have all sorts of reporting obligations for things that might affect the stock price, for example. But there are no transparency requirements for artists whose paycheck, effectively, is determined by this opaque accounting. The reason is obvious, and is the same as it always is: "the purpose of the system, as Stafford Beer says, is what it does". Whenever we want to understand why something works the way that it does despite really obvious shortcomings, "the trick is to look at who's winning—and who's losing".
There are some changes that will help to introduce transparency in these markets. In 2019, the EU introduced the Directive on Copyright in the Digital Single Market, which mandates that artists and performers get new rights to find out how their works are being used, how much revenue is being generated from them, and how their share is calculated. Implementation is tricky because each member state is being aggressively lobbied to sidestep the transparency requirements, but it is a step in the right direction, Giblin said.
Interoperability, equity, and more
Another important aspect of the "open" movements is interoperability, which will be critical to defeating chokepoint capitalism. DRM is obviously a tool for locking customers in. Audible requires that all of its audiobooks have DRM, which is why Chokepoint Capitalism was not released there; it is available on all of the other audiobook platforms, however. There is a small part of it available on Audible, though; "we packaged the standalone chapter about their [Audible's] terrible, terrible abuses", she said to laughter and applause. They were surprised that they sold a few hundred copies of that, which she feels a little bad about. They did much the same with the Spotify chapter from the book on that platform.
Nobody wants to split up their libraries, just as they do not want to split up their friend group. Leaving Facebook means you risk losing access to your social and communication networks. She has not been on Facebook since around 2008; "there's a lot of barbecues that you end up just not knowing about".
We need more than just the right to bypass DRM, however, she said. We need positive rights as well; rights of access and the freedom to exit a sinking platform while continuing to stay connected to the community you are leaving behind. That means you can still enjoy the media and data you bought and have access to the content you created.
The open ethos has a strong focus on equity, Giblin said; there's little room for predatory middlemen that sit between creators and audiences—or buyers and sellers. Open access promotes equity without regard to socioeconomic standing, which is something that is lacking in the creative industries. There are only a few superstars who have the standing to negotiate equitable contracts (e.g. Taylor Swift in the music industry). The EU directive provides some hope in the form of a kind of "minimum wages for creative workers" in the form of rights to reasonable remuneration. In the new Australian cultural policy there is talk of giving creative workers employment rights in order to "treat arts workers as real workers", which is also a good step.
Many creative workers, as well as programmers, are required to sign away their copyrights to their employer, but the copyrights last for three generations or more, which is nearly always well beyond the life of the commercial interest in the work. The EU has a "use it or lose it" policy that allows creators to claw back their copyright if the work is not in use. That can help maintain access to works that have gone out of print, for example, as is being done by another project she worked on during lockdown: Untapped.
Self-determination is a critical aspect of the open ethos; people can look inside and tinker with things to make them work in ways that are better for them. The corporations that are chokepointing their industries "are deliberately depriving us of the ability to do exactly that", by locking us into their walled gardens and eliminating competition that might provide us with other choices that might work better for our needs, she said. "If we've learned anything from the open movement it's that people have a lot of very different needs."
Giblin does not think humans are good at "figuring out what are the conditions for a good life"; if we were, "we would definitely not have email", she said to laughter. There is an apocryphal idea that frogs will not jump out of a slowly warming pot of water before they get boiled to death—it's not true for frogs, but she is not so sure for humans. It seems clear that too much of our lives are not being spent on making things work well so that we can flourish as people, but is instead spent on maximizing profits for investors. The ability to change those conditions "seems critical to widening chokepoints out".
The part of the book on "collectivity" was her favorite "and certainly the most inspiring and hopeful part". There are multiple stories that they found about powerless people collectively taking on and even flipping the tables on the powerful corporations that they were locked into. For example, Uber drivers were contractually barred from bringing class-action lawsuits against the company, which meant they had to bring financially impractical individual suits or use the mandatory arbitration offered by the company. That kind of arbitration works well for companies when there are just a few cases, but thousands of drivers all brought complaints at the same time, forcing the company to eventually settle with the drivers—probably for more money than if it had been a class action.
The 2019 Writers Guild of America strike was another example of collectivity at work. Four large agencies had cornered the market on Hollywood writers, but were "feathering their own nests" at the expense of the creators they were supposed to represent. The writers realized that the agents only had the power that the writers gave them; "you can have Hollywood without agents, but you can't have Hollywood without writers". In a single week, 7000 writers fired their agents; the strike ended up lasting 22 months, but eventually "everyone rolled over" and the onerous terms were eliminated.
Movement
The open movement is broad and encompasses disparate areas like software, government, education, hardware, access to collections in museums and libraries, and more, all of which were represented at Everything Open in various ways. She believes that the open movement needs to see itself as part of an even larger movement against corporate concentration; "this too is part of the same fight".
The "strip mining of creative workers" is part of a larger project "in service of oligarchy"; it is something we are all subject to, she said. "It's not just independent creators and producers who are being screwed over here", it is nearly everyone; "if they haven't come for you yet, it's just because they haven't had time". There is a part near the end of the book where it says that the tech industry treats its workers well, because it has to; once those conditions change, the techies will find themselves in the same position as the creative workers. That feels a bit prescient, Giblin said, given the large layoffs that have rolled through the tech world since the book was written. "We're all part of the same fight."
She is often asked: "How do we start to change?" It feels like a huge project, which is "absolutely right". If you want to get to a "sustainable, fair world where there is enough for everybody" versus "violent extraction by a very few over the very many", you "wouldn't start from here, but we have to start from here, and we are starting from here". One key piece is already underway, which is to "build connection and community".
The current system is "designed to isolate us from one another", so that we have that "hollow emptiness inside of us" and we will want to fill it with ever-more production and consumption. That's not a bug of the system, it's a feature, she said. Building community and understanding how others' fights fit in with your own are "the first bites toward eating the elephant". She said that she is grateful for everyone who is working on the many different parts of the problem, including by attending the conference to see where those parts are and how they fit together. There was a lengthy round of applause for Giblin and her talk.
It was a thought-provoking keynote that will likely resonate, at least in parts, with many. While Giblin seemed optimistic about finding our way out of this hole that we have dug for ourselves, some may have to be forgiven for despairing of seeing any substantial progress in the near, or even distant, future. Beyond that, some, perhaps many, people see things rather differently than she does, so they may find her analysis and prescriptions to be off the mark. The video of her keynote is available for anyone who wants to delve into her ideas more deeply.
[Thanks to LWN subscribers for supporting my travel to Melbourne for Everything Open.]
OpenSUSE MicroOS Desktop: a Flatpak-based immutable distribution
Immutable Linux distributions are on the rise recently, with multiple popular distributions creating their own immutable versions; it could be one of the trends of 2023, as predicted. While many of these immutable distributions are focused on server use, there are also some that offer a desktop experience. OpenSUSE MicroOS Desktop is one of them, with a minimal openSUSE Tumbleweed as the base operating system and applications running as Flatpaks or in containers. In its daily use, it feels a lot like a normal openSUSE desktop. Its biggest benefit is availability of the newest software releases without sacrificing system stability.
Linux users who want to keep up with the latest software generally choose a rolling-release distribution, such as Tumbleweed, Arch Linux, or Gentoo Linux. However, this approach carries the risk of incompatibilities between software versions and can result in an unstable system. On the other hand, stable or Long-Term Support (LTS) distributions cater to the needs of users who prioritize stability over cutting-edge software.
Of course, many users want the best of both worlds: the latest software versions on a stable base operating system. There are solutions that generally bypass the distribution's native package-management system. Flatpak, Snap, and AppImage are the leading technologies for this purpose. Applications are packaged together with their dependencies, thus preventing interference with each other or the underlying distribution. With this approach, users are able to run updated software without encountering dependency woes or compromising system stability.
MicroOS on the desktop
Taking this concept to its extreme results in a small "immutable" core operating system, with as much software as possible contained in isolated packages. The operating system then has a single purpose, such as operating as a container host or providing a minimal desktop environment. All additional software is expected to be containerized or sandboxed. This approach can be implemented on both server and desktop operating systems. For desktops, there's Fedora Silverblue (with GNOME) and Fedora Kinoite (with KDE Plasma), the Ubuntu-based (soon Debian-based) Vanilla OS, Debian-based Endless OS, and openSUSE MicroOS Desktop.
Traditional desktop distributions offer a base operating system, desktop environment, and applications. In contrast, openSUSE MicroOS Desktop is a single-purpose operating system, offering the base operating system and desktop environment. The installer of the MicroOS ISO image is the same as with openSUSE's normal desktop version, but the difference lies in the system roles that the user is able to choose.
For desktop use, there are two system roles: one with GNOME, designated as a release candidate, and another with KDE Plasma, designated as alpha (see the image below). Both install MicroOS Desktop with automatic updates and rollback functionality, and they include the Podman container engine by default. The installer creates a Btrfs root filesystem for the operating system, desktop, and other tools, and this filesystem is mounted read-only after boot.
Running an immutable desktop
The first startup of MicroOS Desktop requires the typical new-installation configuration, such as selecting the language, choosing the time zone, and setting up online accounts for GNOME. What's different is that it is followed by the automatic installation of applications such as Firefox, a calculator, and a text editor. The result is a minimal, bare desktop environment.
All desktop applications in MicroOS Desktop are installed as Flatpaks in the user's directory and automatically get updated. In the GNOME version this is done using GNOME Software, which is normally used to install applications using the operating system's package manager. However, in MicroOS Desktop it's configured to only install Flatpaks from Flathub and to put them in ~/.local/share/flatpak. So installing packages using GNOME Software doesn't touch the underlying operating system. In the same way, the KDE Plasma version of MicroOS Desktop installs applications as Flatpaks using Discover.
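That separation is easy to verify from a terminal; these are standard Flatpak commands rather than anything MicroOS-specific, so this is just a quick sketch of what to look for:

    $ flatpak remotes --user       # Flathub configured for the per-user installation
    $ flatpak list --user --app    # applications installed through GNOME Software
    $ ls ~/.local/share/flatpak/app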
In its everyday use, MicroOS Desktop looks much like a normal openSUSE desktop system. The base OS and desktop are actually built on the same RPM packages as openSUSE Tumbleweed, so that shouldn't be surprising. To install and manage GNOME Shell extensions, the Extension Manager is included. However, MicroOS comes with only basic configuration tools by default, such as GNOME Settings and GNOME Tweaks.
System updates are done automatically every day. This is implemented as a systemd timer unit that runs the transactional-update command, a wrapper script around the zypper package manager. It creates a new Btrfs snapshot of the root filesystem and then applies the update within that snapshot. If the update succeeds, the script marks the new snapshot as the default for the next boot of the system; on errors, the snapshot is discarded and the previous one remains the default.
A reboot activates the new snapshot; if the system detects a problem during the reboot, it automatically rolls back to the previous default snapshot. Users can also roll back manually with the transactional-update rollback command. The whole process of transactional updates is explained in openSUSE's documentation.
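Those moving parts can also be inspected, and driven by hand, from a shell. A sketch, assuming a default MicroOS installation (the timer and command names below are the ones openSUSE documents):

    $ systemctl list-timers transactional-update.timer   # the daily update job
    $ sudo transactional-update dup                      # run an update manually
    $ sudo transactional-update rollback                 # make the previous snapshot the default again
    $ sudo reboot                                        # activate the chosen snapshot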
Escaping from Flatpakland
The number of available Flatpaks is still limited compared to what's in the traditional distribution repositories. Currently, counting the entries in a flatpak remote-ls command on MicroOS Desktop shows a bit more than 3,600 Flatpaks on Flathub. So there invariably comes a time when the user needs some software that isn't available as a Flatpak. But MicroOS Desktop has a solution for this too: it comes with Distrobox in the default installation. Distrobox uses Podman to create containers that are tightly integrated with the host, sharing the user's home directory, external storage, USB devices, and graphical applications.
So if the user can't find specific software as a Flatpak, a simple distrobox-enter command in the terminal creates (or enters, if the container is already created) a container running Tumbleweed. In this container, all RPM packages available in openSUSE's repositories can be installed using the zypper command; currently, there are more than 75,000 packages available. See the image below, which shows a Distrobox container running Tumbleweed and querying the number of available packages, alongside GNOME Software running on the host and displaying Flatpaks available from Flathub.
Distrobox is also able to export an application from the container to the host. This creates a .desktop file so that the application appears in GNOME's Activities. If the user clicks on the icon, this starts the Distrobox container in the background and opens the application's window on the host desktop; the application simply appears as a normal desktop application of the host. Command-line applications can be exported too, for example to the user's ~/bin directory; running the exported wrapper script starts the application in the container. The other way around works too: distrobox-host-exec lets the user execute a command on the host from within the container.
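As a concrete sketch of that round trip (the package name is only an example):

    $ distrobox-enter                 # create and enter the default Tumbleweed container
    $ sudo zypper install gimp        # inside the container: install from the openSUSE repositories
    $ distrobox-export --app gimp     # make the application show up on the host
    $ exit                            # back to the host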
A last resort, for software that isn't feasible to install even with Distrobox, is to install RPM packages on the host using the transactional-update command, which installs the software after making a snapshot; a reboot is then needed to make the new snapshot active so that the new software can be used. But this is only recommended for drivers, kernel modules, virtual private network (VPN) clients, and other low-level packages that have to integrate tightly with the operating system, because every extra package in the host incurs extra risk of instability. The MicroOS Desktop wiki has some tips for using transactional-update.
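When that last resort is needed, the procedure itself is short; a sketch, with the package chosen purely as an example of the low-level kind of software involved:

    $ sudo transactional-update pkg install openvpn
    $ sudo reboot    # the new snapshot, with the package, becomes active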
Newest developments
OpenSUSE MicroOS is related to the Adaptable Linux Platform (ALP), which is the minimal, immutable operating system poised to become SUSE's next-generation enterprise Linux distribution. OpenSUSE users have been encouraged to try MicroOS Desktop to see how working with an immutable desktop fits into their workflows and to provide feedback to the ALP project.
SUSE is known for its powerful configuration tool, YaST, which is able to handle all types of system administration tasks. Long-time openSUSE users will miss it for configuration in the MicroOS Desktop, although it is used to install the distribution. The YaST blog notes that some parts of YaST need to be adapted to better handle the administration of transactional systems such as MicroOS. But, then again, a minimal system probably shouldn't need too much administration.
Full-disk encryption is currently not supported in MicroOS Desktop. Users are able to customize their partitions in the installer, but this can result in a broken configuration. On Reddit, Richard Brown, MicroOS release engineer and MicroOS Desktop's main developer, said that full-disk encryption can be expected when MicroOS Desktop stops using YaST for its installation. It should be noted that Brown is moving into a different role as a Distributions Architect at SUSE in April.
MicroOS also lacks a firewall. According to Brown, this is by design, because firewalls cause problems with container runtimes. He also says that it would have no real benefit, "as you should be running your workloads in containers and port mapping/opening/redirection is a core part of configuring a container". Users can still install firewalld, but it won't be installed by default "as long as it doesn’t play well with container tools".
Brown is also working on a solution for developers who want to have a fully customizable desktop environment based on an immutable operating system. His Project Greybeard will be based on MicroOS Desktop using Sway, a tiling window manager that is also a Wayland compositor. It's not an official openSUSE project (yet), and Brown considers it to be an example project for developers who want to build custom derivatives of openSUSE MicroOS or MicroOS Desktop.
Conclusion
For users who like to tinker, openSUSE MicroOS Desktop can't completely replace the normal openSUSE desktop, since options to configure the desktop are quite limited. However, in the right circumstances, openSUSE's immutable desktop is quite usable. It might be the ideal operating system for someone who is used to the way mobile operating systems work. On a Chromebook, iOS, or Android, the operating system itself isn't customizable either. Users only upgrade their operating system with image-based system upgrades, and they install isolated apps. OpenSUSE MicroOS Desktop offers the same approach for a Linux desktop.
User-space shadow stacks (maybe) for 6.4
Support for shadow stacks on the x86 architecture has been long in coming; LWN first covered this work in 2018. After five years and numerous versions, though, it would appear that user-space shadow stacks on x86 might just be supported in the 6.4 kernel release. Getting there has required a few changes since we last caught up with this work in early 2022.

Shadow stacks are a defense against return-oriented programming (ROP) attacks, as well as others that target a process's call stack. The shadow stack itself is a hardware-maintained copy of the return addresses pushed onto the call stack with each function call. Any attack that corrupts the call stack will be unable to change the shadow stack to match; as a result, the corruption will be detected at function-return time and the process terminated before the attacker can take control. The above-linked 2022 article has more details on how x86 shadow stacks, in particular, work.
The current version of the patch set is the eighth revision posted by Rick Edgecombe (who took it over after some 30 revisions posted by Yu-cheng Yu).
API changes
The user-space API for working with shadow stacks has not changed much in the last year. Most operations are done with arch_prctl() calls, specifically:
- ARCH_SHSTK_ENABLE turns on the shadow stack for the current thread; shadow stacks are not enabled by the kernel when a process starts.
- ARCH_SHSTK_DISABLE disables the use of the shadow stack for the current thread.
- ARCH_SHSTK_LOCK prevents any further changes to a thread's shadow-stack status. Among other things, this operation can keep an attacker from somehow disabling the shadow stack before corrupting the call stack.
- ARCH_SHSTK_UNLOCK undoes the effect of ARCH_SHSTK_LOCK. This option was added to version 4 of the patch set in December; it exists to support functionality like Checkpoint/Restore in User Space that needs to be able to change the shadow-stack status after a process has launched. This option is only available when invoked via ptrace(); a process cannot use it on itself directly.
- ARCH_SHSTK_STATUS returns the current shadow-stack status.
Normally, the kernel handles the allocation and placement of shadow stacks, but there are occasions where an application will need to manage its shadow stacks directly. The map_shadow_stack() system call exists for this purpose; its prototype has changed a bit over the course of the last year:
void *map_shadow_stack(unsigned long address, unsigned long size,
unsigned int flags);
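As a rough illustration of how the pieces fit together, here is a minimal user-space sketch. It is not taken from the patch set itself; the ARCH_SHSTK_* constant values below are copied from the proposed uapi header and could still change before the work is merged, and, since glibc provides no wrapper yet, the raw system call is used:

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/syscall.h>

    /* Values from the proposed <asm/prctl.h> additions; subject to change. */
    #ifndef ARCH_SHSTK_ENABLE
    #define ARCH_SHSTK_ENABLE   0x5001
    #define ARCH_SHSTK_DISABLE  0x5002
    #define ARCH_SHSTK_STATUS   0x5005
    #define ARCH_SHSTK_SHSTK    (1ULL << 0)   /* the shadow-stack feature bit */
    #endif

    int main(void)
    {
        unsigned long features = 0;

        /* Turn on the shadow stack for this thread. */
        if (syscall(SYS_arch_prctl, ARCH_SHSTK_ENABLE, ARCH_SHSTK_SHSTK))
            perror("ARCH_SHSTK_ENABLE");

        /* Ask the kernel which shadow-stack features are now active. */
        if (syscall(SYS_arch_prctl, ARCH_SHSTK_STATUS, &features) == 0)
            printf("shadow-stack features: %#lx\n", features);

        /* Frames created before enabling are not on the shadow stack, so
           disable it again before returning through them. */
        if (syscall(SYS_arch_prctl, ARCH_SHSTK_DISABLE, ARCH_SHSTK_SHSTK))
            perror("ARCH_SHSTK_DISABLE");
        return 0;
    }

A real application would normally never do this by hand; as described below, the C library is expected to enable the feature during program startup, based on markings in the executable.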
At one point, Andrew Morton complained about the "shstk" abbreviation, saying that it "sounds like me trying to swear in Russian while drunk". As a result, that term was pulled out of much of the generic code, but remains in the x86 portion.
There is one other subtle change to map_shadow_stack() that affects how shadow stacks are handled in general. The shadow-stack feature has incompatibilities with 32‑bit code, especially when signals are involved. The kernel will refuse to enable a shadow stack for a thread that is running in the 32-bit mode and, in version 4 of the patch set, code was added to simply disable any signal handlers if a process switched to 32-bit mode after the shadow stack was enabled.
Beyond seeming like a bit of a hack, this approach did not fully solve the problem. As it turns out, a 64-bit thread can switch to the 32-bit mode without the kernel's knowledge or permission — meaning that the disabling of signal handlers can be circumvented. After some deliberation on how to avoid subtle problems when this happens, the decision was made (for version 5) to just always map the shadow stack at a virtual address above 4GB, making it inaccessible to 32-bit code. As a result, any attempt to switch to the 32-bit mode when a shadow stack is enabled will cause an immediate crash.
This change resulted in a new mmap() flag, MAP_ABOVE4G, which forces the mapping to be created above the 4GB virtual-address boundary. The address passed to map_shadow_stack() (if not zero, indicating no preference) must also be above 4GB or the call will fail. Someday, somebody with sufficient motivation could perhaps find a way to make 32-bit code work with shadow stacks, but given how little interest there is in 32-bit code in general, that seems unlikely to happen.
The glibc problem
While it might be nice to run all programs with shadow stacks enabled, there are applications that would break in that environment. Anything that manipulates its own call stack — just-in-time compilers, for example — will find itself out of sync with the shadow stack and brought to an untimely end. So the enabling of the shadow stack must be limited to code that can handle it.
The scheme that was developed, some time ago, was to place a special note in the .note.gnu.property ELF section of the program's executable image. If that note exists (as the result of compiler options provided when the program was built), that indicates that it is safe to run the program with the shadow stack enabled. That note is not sufficient for the kernel to make the decision, though, so the enabling of the shadow stack is left to user space, and to the C library's program loader in particular.
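As an illustration, GCC emits that note when control-flow protection is enabled at build time; a quick sketch (demo.c stands in for any trivial source file, and many distributions already pass the flag by default, so the note may appear even without it):

    $ gcc -fcf-protection=full -o demo demo.c
    $ readelf -n demo | grep Properties
          Properties: x86 feature: IBT, SHSTK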
Enthusiastic developers in the GNU C Library (glibc) community quickly wired up support for turning on the shadow stack when it seemed appropriate; current versions of glibc are poised to turn on the shadow stack as soon as the kernel supports the feature. There is only one little problem: the glibc support was written with an early version of the user-space API in mind. That API no longer exists; trying to use it would result in crashing programs and systems that fail to boot. That would indeed secure them against ROP attacks, but users can be picky about just how that kind of security is achieved and may complain.
That problem was resolved early on by changing the API enough that glibc simply doesn't find it anymore and thinks that the shadow-stack functionality is not present. The glibc developers have said, though, that they intend to implement the new shadow-stack API once it is merged; thereafter, when an updated glibc shows up on a system, any program that indicates a readiness for a shadow stack will get one.
That leads to a new problem, as noted in the version-3 cover letter: not all applications that are marked as being ready really are.
But many application binaries with the bit marked exist today, and critically, it was applied widely and automatically by some popular distro builds without verification that the packages actually support shadow stack. So when glibc is updated, shadow stack will suddenly turn on very widely with some missing verification.
Applications that will break in this environment evidently include node.js and PyPy, so this seems like a real problem. A quick check on a Fedora 37 system shows that PyPy is indeed built with the shadow stack enabled:
$ readelf -n /usr/bin/pypy
Displaying notes found in: .note.gnu.property
Owner Data size Description
GNU 0x00000040 NT_GNU_PROPERTY_TYPE_0
Properties: x86 feature: IBT, SHSTK
[...]
Even if the root cause lies in user space, it can be provoked by upgrading to a new kernel, and thus looks like a kernel regression. Kernel developers generally prefer to avoid breaking systems, even if that breakage can be said to be somebody else's fault.
The ideal solution, according to Edgecombe, would be to simply move to a new ELF bit to identify real shadow-stack readiness and have glibc use that. Distributors could then be encouraged to be more careful about marking applications as being shadow-stack ready. But, he said, "it doesn’t seem like the glibc developers are interested in working on a solution", so something else is needed. In version 3, that something else was a patch disabling the shadow-stack API when the ELF bit is detected. The idea was that distributors would eventually disable that check once they had confirmed that all of the packages they ship included correctly marked binaries.
The patch was described as "a bit dirty" and included for the sake of discussion — which indeed resulted. H.J. Lu suggested that the right approach was just to avoid upgrading glibc until the system was ready for it. Florian Weimer added that most of the incompatible code is to be found in libraries that are loaded after a process starts; the kernel test would not detect those, and it may be too late to disable the shadow stack in any case.
After a while, Edgecombe asked Linus Torvalds what he thought should be done about this problem. Torvalds answered that he did not want to preemptively disable shadow-stack support without a reason:
Once [shadow-stack functionality] is enabled in the kernel, and it turns out that people complain that it breaks existing binaries, at that point I guess it gets disabled again. Possibly at that point using something like your suggested patch. But I'm not doing it until actual problems appear, and until we actually have this code in the kernel.
The patch disabling the shadow-stack API was duly taken out of the series. Weimer described a couple of plans for ensuring that shadow stacks could be safely enabled in distributions, claiming that adopting a new ELF bit would delay that process considerably. Shadow-stack support, he said, is not much different from supporting a new system call; that, too, can break existing applications, mostly as the result of seccomp() filters that do not understand the new call.
On to 6.4
The result of the discussion is that the kernel will take no special steps to avoid breaking binaries that were incorrectly marked as being ready for shadow stacks — at least, not before a problem is demonstrated. Most of the other outstanding issues appear to be resolved, to the point that Edgecombe prefixed the current version with a remark that "we have a pretty good initial shadow stack implementation here". There are a number of desired enhancements, but those might be done better, he said, after there has been some real-world use of the code that exists now.
So, after all this work, the 40 shadow-stack patches have been added to the tip tree, which feeds them into linux-next. If no show-stopping problems turn up over the course of the next month or so, user-space shadow-stack support for x86 systems will, most likely, move upstream during the 6.4 merge window. Finally, after a long development period, the shadow (stack) will truly know what evil lies in the heart of ROP attackers.
The curious case of O_DIRECTORY|O_CREAT
The open() system call offers a number of flags that modify its behavior; not all combinations of those flags make sense in a single call. It turns out, though, that the kernel has responded in a surprising way to the combination of O_CREAT and O_DIRECTORY for a long time. After a 2020 change made that response even more surprising, it seems likely that this behavior will soon be fixed, resulting in a rare user-visible semantic change to a core system call.

The O_CREAT flag requests that open() create a regular file if the named path doesn't exist (adding O_EXCL will cause the call to fail if the path does exist). O_DIRECTORY, instead, indicates that the call should only succeed if the path exists and is a directory. It is not possible to create a directory with open(); that is what mkdir() is for. So the combination of O_CREAT and O_DIRECTORY requests the kernel to create a directory (which is supposed to already exist) as a regular file — which clearly does not make sense.
Since time immemorial, the kernel's response to the combination of those two flags has been to flag an error in most situations. If the path exists and is a regular file, open() fails and returns with an ENOTDIR error. If, instead, the path is an existing directory, the error is EISDIR — perhaps a bit surprising, given that O_DIRECTORY indicates that the path is expected to be a directory. If, however, the path does not exist at all, the open() call will succeed after creating a regular file with the indicated name, which is also a surprising result.
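A small test program makes this behavior easy to observe; this is only a sketch (the path name is arbitrary), and the outcome depends on the kernel version as described above and below:

    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        /* A path that does not exist yet. */
        int fd = open("./newfile", O_CREAT | O_DIRECTORY, 0644);

        if (fd < 0)
            printf("open() failed: %s\n", strerror(errno));
        else
            printf("open() succeeded (fd %d); a regular file was created\n", fd);
        return 0;
    }

On a pre-5.7 kernel the call succeeds and leaves a regular file named newfile behind; on 5.7 and later it fails with ENOTDIR but still creates the file; with the proposed fix it should fail with EINVAL without creating anything.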
Recently, though, Pedro Falcato noticed that the behavior in the final case above had changed; the kernel will now return ENOTDIR if the path does not exist — but it also still creates a regular file. It is fair to say that this behavior is even more surprising than what happened before. Christian Brauner tracked the behavioral change down to this commit from Al Viro, which was merged for the 5.7 release.
Falcato included a patch to restore the previous behavior, which arguably makes a bit more sense than what the kernel does now and is, in any case, what the kernel did for a long time. But Brauner wondered if the right thing to do was to fix the kernel to do something more rational with that combination of flags:
So before we continue down that road should we maybe treat this as a chance to fix the old bug? Because this behavior of returning -ENOTDIR has existed ever since v5.7 now. Since that time we had three LTS releases all returning ENOTDIR even if the file was created.
Since, he said, nobody seems to have noticed the change over this time, it seems likely that nobody is actually counting on the strange semantics given to that combination of flags in the past. Linus Torvalds agreed that actually fixing the kernel's behavior seemed like a sensible path: "I think we can pretty much assume that there are no actual users of it, and we might as well clean up the semantics properly".
Falcato did some research on what other systems do in response to that combination of flags. NetBSD, it seems, will simply fail an open() call in that situation, returning EINVAL. FreeBSD, instead, will allow the call to succeed if the path exists and is a directory; otherwise it will fail. He also noted that all of the behaviors seen — Linux pre- and post-5.7, NetBSD, and FreeBSD — are allowed by POSIX: "I would not call the old Linux behavior a *bug*, just really odd semantics".
Torvalds answered that either of the BSD behaviors would make sense, while the kernel's current behavior "has no excuse". The NetBSD response is "the clearest case", he said, but FreeBSD's behavior is closer to what Linux did before the 5.7 change. Brauner favored the NetBSD behavior, and put together a patch to implement it.
As part of that work, he put some effort into searching through code looking for cases that would be broken by the change in semantics; he came up nearly empty:

Time was spent finding potential users of this combination. Searching on codesearch.debian.net showed that codebases often express semantical expectations about O_DIRECTORY | O_CREAT which are completely contrary to what our code has done and currently does. The expectation often is that this particular combination would create and open a directory. This suggests users who tried to use that combination would stumble upon the counterintuitive behavior no matter if pre-v5.7 or post v5.7 and quickly realize neither semantics give them what they want.
Included in the patch are some links to places where developers had attempted this combination; see this libglnx comment for an example.
As the result of Brauner's patch, the combination of O_CREAT and O_DIRECTORY will cause an open() call to fail with EINVAL regardless of whether the given path exists or not. Chances are that nothing will break with this change, but he is asking for widespread testing to be sure of that. It would, after all, be annoying to have to revert this change if a problem report surfaces at some point in the future. The patch has not actually been applied as of this writing; given that there is a semantic change involved, it would be a bit surprising to see it land for 6.3. That said, your editor has been surprised by such things before.
This is one of those cases where the subtleties in the kernel's API policies come into play. In a real sense, this fix is an incompatible API change, and it will indeed break any program that is relying on the current behavior. But, in cases where no program does rely on a specific behavior, that behavior can indeed be changed. This fix seems unlikely to break anything, and so is permissible for the kernel developers to do. Should the assumption that nothing will break prove true, it may even be possible, someday, to make that flag combination do what developers evidently expect and create a directory. But first it is necessary to demonstrate that there are indeed no problems resulting from the removal of the current, strange semantics.
Ubuntu stops shipping Flatpak by default
Canonical recently announced that it will no longer ship Flatpak as part of its default installation for the various official Ubuntu flavors, which is in keeping with the practices of the core Ubuntu distribution. The Flatpak package format has gained popularity among Linux users for its convenience and ease of use. Canonical will focus exclusively on its own package-management system, Snap. The decision has caused disgruntlement among some community members, who felt like the distribution was making this decision without regard for its users.
The announcement was made on the Ubuntu Discourse Forum, where Philipp Kewisch, a Community Engineering Manager at Canonical, said:
As part of our combined efforts, the Ubuntu flavors have made a joint decision to adjust some of the default packages on Ubuntu: Going forward, the Flatpak package as well as the packages to integrate Flatpak into the respective software center will no longer be installed by default in the next release due in April 2023, Lunar Lobster. Users who have used Flatpak will not be affected on upgrade, as flavors are including a special migration that takes this into account. Those who haven't interacted with Flatpak will be presented with software from the Ubuntu repositories and the Snap Store.
Why?
In the announcement, Kewisch said the decision came from a desire to "improve the out-of-the-box Ubuntu experience for new users while respecting how existing users personalize their own experiences". Ubuntu is prioritizing deb and Snap, its default packaging technologies, while no longer providing a competitor by default. This is described as an effort to provide consistency and simplicity for users.
By focusing on these technologies, Ubuntu claims it can provide better community support to resolve issues in the software packages. While Canonical does not have full control over every Snap package published in the Snap Store, it does have some control over the format itself. That makes it easier for Canonical to diagnose and fix problems that arise in the packaging or distribution. Furthermore, because Canonical curates the official Snap Store, it has a degree of control over the quality of the packages that are included. It can work with developers to ensure that packages meet certain standards and do not contain obvious bugs or security vulnerabilities.
In comparison, Flatpak is developed and maintained by a community of contributors, rather than being tied to any company or organization. This can make it more difficult to coordinate bug fixes or updates, Canonical claims, since there may not be a single entity responsible for the technology. In the announcement, Kewisch mentioned fragmentation issues as a problem area:
In an ideal world, users experience a single way to install software. When they do so, they can expect that this mechanism is supported by the community and receives the majority of attention when it comes to resolving issues in software packages. When a new packaging technology is provided by default, there is an expectation that the distribution provides community support and is invested in contributing to development to resolve issues. This creates fragmentation instead of focusing on improving the technologies chosen for the distribution.
There is a key difference between having the base Flatpak package installed by default and having a Flatpak repository, such as Flathub (or something Ubuntu-specific), configured, which Ubuntu and its flavors never did. Merely removing the base Flatpak installation from the default install won't prevent users from having problems with Flatpak applications if they go ahead and install them anyway. Nor will it make those problems any easier to solve.
This adds fuel to the suspicion that Canonical is doing this largely to further its own interests. Since it controls the Snap Store, the company will be in a position to share in the revenue from any proprietary Snaps available there, for example. But even if Canonical has some self-serving reasons for making this change, it's important to remember that it hasn't removed Flatpak entirely; users will still be able to install the package-management system manually.
Impact
The move generated mixed reactions in the Linux community, with some users and developers expressing disappointment in the decision. Others argued in favor of Canonical's choice, agreeing with its reasoning about the unnecessary burden Flatpak places on support staff. Forum user Aaron Rainbolt ("arraybolt3") said that Ubuntu tries to change its package versions rarely, only updating them for important bug fixes; that is not at all the case with Flatpaks, so users may experience instability when using them, for example. In a reply, "h0lly" saw things differently:
People who opt to using flatpaks do so precisely because they do want the most recent (stable) releases, which I suspect is a lot of users evidenced by the raise of flatpak popularity.furthermore, thanks to the sandboxing flatpak apps generally work great out of the box. the picture you paint of users having a bad experience with unstable flatpaks is mostly made up. and even then, flatpak not being selected as the default source in the app store is already plenty to "guard" inexperienced users. if it was really about that, it could just display a little notice warning the user when selecting a flatpak source for the first time.
imho there is no need for Canonical to control anything here. there is absolutely nothing technical stopping its support staff from being able to say, "sorry you'll have to seek support from that flatpak's maintainer, we can't help you" and having [it] as an integrated option at the same time. although I don't think this would happen anywhere as frequently as you make it out to.
Rainbolt further defended the change, noting that while Flatpak may be more convenient, Snap packages will provide greater long-term compatibility and lessen the burden placed on technical support staff: "An app doesn't have to have anything wrong with it for it to cause problems for technical supporters. It just has to have something different from what the supporters are used to."
Another potential concern may be that Canonical could be using this decision to force package upstreams to offer a Snap version or face not being easily available in the default Ubuntu installation.
Ubuntu clearly wanted to present this decision as a united front with its flavors, but some have called that into question. As recently as December 2022, Sean Davis, Technical Lead for the Xubuntu flavor, was seen promoting Flatpak. While a lot can change in a few months, it does seem strange that Davis commented on its benefits fairly recently:
With the addition of the flatpak and gnome-software-plugin-flatpak packages, Xubuntu now supports the popular Flatpak packaging format. You can now easily install applications from Flathub with just a couple of clicks. In fact, any .flatpakref or .flatpakrepo file is natively supported thanks to GNOME Software.
The motives behind Canonical's move remain somewhat murky, but Flatpak users can be comforted by the fact that enabling the package-management system is still possible, though it's now something of a chore to do so.
Using Flatpak
For starters, this means that users will first need to manually install Flatpak and configure a repository, such as Flathub, before they can install Flatpak applications using the Ubuntu Software Center. Flatpak is part of the universe repository, the community-maintained collection of Ubuntu packages that are not officially supported by Canonical; it can therefore still be installed via the Ubuntu Software Center or the GNOME Software GUI.
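For those more comfortable on the command line, something like the following should suffice; the package names match those mentioned in the Xubuntu note quoted above, though the software-center plugin needed may differ on flavors that do not use GNOME Software:

$ sudo apt install flatpak gnome-software-plugin-flatpak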
Once Flatpak is installed, it can be connected to a Flatpak repository, such as Flathub. To configure Flathub, the following command can be used:
$ flatpak remote-add --if-not-exists flathub \
      https://flathub.org/repo/flathub.flatpakrepo
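Once a remote is configured, installing and running an application works as usual; as an illustration, using GIMP's Flathub identifier:

$ flatpak install flathub org.gimp.GIMP
$ flatpak run org.gimp.GIMP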
In the announcement, Kewisch addressed some common concerns that users may have regarding the decision. For example, users would not lose access to applications that depend on the Flatpak ecosystem:
We've added a special migration that checks if you have Flatpak packages installed or remotes configured. If so, flatpak and related software centre plugins won't be auto-removed on an upgrade to Lunar Lobster. Therefore, you don't need to be concerned about this change.
Furthermore, Flatpak users do not have to worry about the package-management system being removed on current and older versions of Ubuntu either:
No, flavors are not actively removing package managers from the current or older releases. This change is for the upgrade to Lunar Lobster and beyond, where it is available but will not be installed by default in new installations.
Conclusion
Ubuntu's decision to stop shipping Flatpak by default is significant, but it is not the end of the road for the package-management system on Ubuntu. As the Linux ecosystem continues to evolve, it's likely that we will see other new technologies and approaches emerge to meet the needs of users and developers. For now, Ubuntu users who want to use Flatpak will need to adjust to the new way of doing things, but they will still have access to the same wide range of Flatpak apps.
Free software during wartime
Just over 27 years ago, John Perry Barlow's declaration of the independence of Cyberspace claimed that governments "have no sovereignty" over the networked world. In 2023, we have ample reason to know better than that, but we still expect the free-software community to be left alone by the affairs of governments much of the time. A couple of recent episodes related to the war in Ukraine are making it clear that there are limits to our independence.
The free-software community has, indeed, proved resilient to many events in the wider world. The dotcom bust mostly brought an end to the silliness and accelerated our work toward useful goals. The September 11 attacks (and the horrors that followed) had little direct effect on the community; the same is true of the 2008 economic crisis. The pandemic closed down much of the world, but seemingly sped up free-software development. Even the war in Ukraine and the upheavals around it have, apparently, barely touched our community. All of these events had (and are still having) horrific consequences for many of the people involved, but the development community as a whole was often able to carry on as if many of the world's troubles were taking place in another universe.
Recently, though, our community has been lightly touched in a couple of ways. The ipmitool repository at GitHub was locked, and its maintainer denied access, as a result of his status as an employee of the sanctioned Russian firm Yadro. And, in the kernel community, a developer with the Russian firm Baikal Electronics was told by a networking maintainer that "We don't feel comfortable accepting patches from or relating to hardware produced by your organization". The specific reasons for this discomfort were not spelled out, and no policy for the kernel project as a whole has been expressed, but one possible motivation, as described by Konstantin Ryabitsev, is:
So, in reality, accepting code for any hardware into the Linux kernel means helping to test, maintain, and debug that code for years to come. The resources for that are pooled from many device manufacturers with the understanding that these efforts will be part of the tide that "lifts all boats," including their own. However, in the case of Baikal Elektroniks the situation becomes tricky. Yes, Linux is free software (free as in libre), but maintainers and CI infrastructure require funding. BE is placed under strict sanctions in many countries due to its direct affiliation with the Russian military, so companies funding CI and maintainer efforts have to consider if their money is directly benefiting a sanctioned company (and, indirectly, the Russian military).
It's worth noting that the developer involved is still active in other parts of the kernel community, but appears to have stopped sending from the Baikal Electronics domain. Meanwhile, there has been an ongoing low rumble across the net in response to the decision not to accept patches into one kernel subsystem from this company. The free-software community, some say, is without borders and should be above these sorts of disagreements.
It is true that our community often operates as if international borders did not exist. We cooperate across the globe and, often, have no idea of where our collaborators actually are. We exchange patches and projects with no worries of border checks or customs duties. The Internet and the free-software development model have truly opened up the globe to a type of obstacle-free cooperation that has not been seen before.
That said, it is naïve, at best, to think that the onset of a major war in Europe would be without consequences for our community. Millions of lives have been disrupted (or worse), economies have been upended, and the nature of world trade has changed. We are not so independent that we can expect to not be touched by such a thing. Indeed, it is arguably surprising that its effects have, so far, been so light.
For better or for worse, our "independent" development community is strongly tied to corporations. They employ many of us directly to work on our software commons. They own and run many of the resources, such as hosting sites and forges, that support our work. These companies often have no choice about whether to obey the mandates — such as the implementation of sanctions on some Russian companies — that are imposed by the governments of the world. If some free-software activity is seen (rightly or wrongly) by a company as putting it at risk of violating this kind of requirement, that company will almost certainly act to disassociate itself from that activity.
Individual developers, of course, have their opinions as well, and some of them will act on their opinions. That, too, may throw sand into the free-software machinery. But we should not blame developers who feel that specific acts run counter to either their conscience or the rules they are required to follow.
Things could be a lot worse. Our repositories are full of code from $COUNTRY_A, while $COUNTRY_B thinks that $COUNTRY_A is a threat to its ongoing prosperity or existence. We have already seen plenty of examples of countries making rules against the use of technological products coming from other countries (or specific companies within those countries). An expansion of such rules to apply to code contributions could put the status of much free software in jeopardy.
At this point, that type of mandate would likely be too crippling to consider. But the export of technology, including software, has often come under governmental scrutiny. Those of you who were not paying attention to the first release of PGP — just months before the first Linux kernel release — may want to read up on that history. It is not hard to imagine a world where, say, Linux is considered too powerful a tool to be allowed to be exported to $THAT_COUNTRY; the result could be a severe disruption of how our community works.
We are not at that point, and hopefully will not get there. For the most part, the free-software community is thriving despite the current global turmoil and, with luck, that will continue. But there can be no doubt that Barlow's declaration of independence was more aspiration than reality. We write software for the real world, and we are still intimately tied to it. Those ties will certainly make themselves felt at times. We have some control, sometimes, over how we respond to governmental mandates, but ignoring them is increasingly not an option.
Page editor: Jonathan Corbet