
LWN.net Weekly Edition for February 14, 2019

Welcome to the LWN.net Weekly Edition for February 14, 2019

This edition contains the following feature content:

  • France enters the Matrix: Matthew Hodgson's FOSDEM talk on Matrix's road to 1.0 and its adoption by the French government.
  • Avoiding the coming IoT dystopia: Bradley Kuhn's linux.conf.au talk on copyleft and the Internet of Things.
  • Blacklisting insecure filesystems in openSUSE: a proposal to stop automatically loading old filesystem modules sparks a wide-ranging discussion.

This week's edition also includes these inner pages:

  • Brief items: Brief news items from throughout the community.
  • Announcements: Newsletters, conferences, security updates, patches, and more.

Please enjoy this week's edition, and, as always, thank you for supporting LWN.net.


France enters the Matrix

February 11, 2019

This article was contributed by Tom Yates


FOSDEM

Matrix is an open platform for secure, decentralized, realtime communication. Matthew Hodgson, the Matrix project leader, came to FOSDEM to describe Matrix and report on its progress. Attendees learned that it was within days of having a 1.0 release and found out how it got there. He also shed some light on what happened when the French reached out to them to see if Matrix could meet the internal messaging requirements of an entire national government.

From a client's viewpoint, Matrix is a thin set of HTTP APIs for publish-subscribe (pub/sub) data synchronization; from a server's viewpoint, it's a rich set of HTTP APIs for data replication and identity services. On top of these APIs, application servers can provide any service that benefits from running on Matrix. Principally, that has meant interoperable chat, but Hodgson noted that any kind of JSON data could be passed, including voice over IP (VoIP), virtual or augmented reality communications, and IoT messaging. That said, Matrix is independent of the transport used; although current Matrix-hosted services are built around HTTP and JSON, more exotic transports and data formats can be used and, at least in the laboratory, have been.
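Concretely, the client-server API is ordinary HTTP with JSON bodies. The sketch below shows the shape of two of the documented r0 endpoints: the long-polling /sync call and the room send call. The homeserver URL, access token, room ID, and transaction ID are all placeholder values, and the code only builds the requests; it makes no network traffic.

```python
import json
from urllib.parse import urlencode, quote

HOMESERVER = "https://matrix.example.org"  # placeholder homeserver
TOKEN = "SECRET_TOKEN"                     # placeholder access token

def sync_url(since=None):
    """Build the long-polling /sync URL a client uses to receive new events."""
    params = {"access_token": TOKEN, "timeout": 30000}
    if since is not None:
        params["since"] = since  # resume from a previous sync batch
    return f"{HOMESERVER}/_matrix/client/r0/sync?{urlencode(params)}"

def message_event(room_id, text, txn_id):
    """Build the URL and JSON body of the PUT that sends a text message."""
    url = (f"{HOMESERVER}/_matrix/client/r0/rooms/{quote(room_id, safe='')}"
           f"/send/m.room.message/{txn_id}?access_token={TOKEN}")
    body = json.dumps({"msgtype": "m.text", "body": text})
    return url, body

url, body = message_event("!abc:example.org", "hello", "txn1")
print(url)
print(body)
```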

Because Matrix is inherently decentralized, no single server "owns" the conversations; all traffic is replicated across all of the involved servers. If you are using your server to talk to someone on, say, a gouv.fr server, and your server goes down, then because their server also has the whole conversation history, when your server comes back up, it will resync so that the conversation can continue. This is because the "first-class citizen" in Matrix is not the message, but the conversation history of the room. That history is stored in a big data structure that is replicated across a number of participants; in that respect, said Hodgson, Matrix is more like Git than XMPP, SIP, IRC, or many other traditional communication protocols.
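The Git analogy can be made concrete with a toy model: if each event carries a content hash and the hashes of its parent events, two replicas of a room's history can be reconciled by simple union, because every event is self-identifying. This is an illustration only; real Matrix events carry far more state and use authenticated hashing rules.

```python
import hashlib
import json

def event(sender, body, parents):
    """A toy hash-linked event; real Matrix events are much richer."""
    e = {"sender": sender, "body": body, "parents": sorted(parents)}
    e["id"] = hashlib.sha256(
        json.dumps(e, sort_keys=True).encode()).hexdigest()[:12]
    return e

def merge(history_a, history_b):
    """Union of two replicas: a server that was offline simply takes the
    events it is missing, keyed by their content hashes."""
    merged = {e["id"]: e for e in history_a}
    merged.update({e["id"]: e for e in history_b})
    return merged

root = event("@alice:a.example", "hi", [])
e1 = event("@bob:gouv.example", "bonjour", [root["id"]])
# server A goes down; server B keeps the conversation going
e2 = event("@bob:gouv.example", "still here", [e1["id"]])

server_a = [root, e1]        # stale replica
server_b = [root, e1, e2]    # full replica
resynced = merge(server_a, server_b)
print(len(resynced))  # 3
```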

[Matthew Hodgson]

Matrix is also end-to-end encrypted. Because your data is replicated across all participating servers, said Hodgson, if it's not end-to-end encrypted, it's a privacy train wreck. The attack surface grows each time a new participant enters a conversation and gets a big chunk of conversation history synchronized to their server. In particular, admitting a new participant to your conversation should not mean disclosing the conversation history to whoever administers their server; end-to-end encryption removes that possibility.

Matrix was started back in May 2014. By Hodgson's admission, the first alpha release in September 2014 was put together far too quickly, in a process where "everybody threw lots of Python at the wall, to see what would stick". In 2015, federation became usable, Postgres was added as an internal database engine alongside SQLite, and IRC bridging was added. Later that year the project released Vector as its flagship client, which meant also releasing the first version of the client-server API.

In 2016, much hard work was done on scaling, the Vector client was rebranded as Riot (which it remains today), and end-to-end encryption was added. This latter feature turned out to be difficult in a decentralized world: if you have a stable decentralized chat room, and someone's server goes offline for a few hours, during which time a couple more devices are added to the server, then when the server comes back into federation, should those new devices be in the conversation or not? To Hodgson, this is as much a philosophical question as it is a technical one, but it had to be answered before end-to-end encryption could be implemented.

In 2017, the project added a lot of shiny user-interface whizziness: widgets, stickers, and the like, then in 2018 tried to stabilize all this by feature-freezing and pushing hard towards version 1.0. This push has included the necessary step of setting up a foundation to act as a neutral guardian of the standard. That will allow others to build on it knowing that it's stable and won't be changed at a moment's notice to suit Hodgson and the Matrix developers. Creating that stable base to hand off to the foundation meant nailing down all the protocol APIs, which has not been without pain. Some of them, particularly the federation API, have needed significant changes to correct design errors in the original specifications.

Rolling these changes out has been harder than it should have been because the Matrix developers didn't include protocol versioning in everything from the outset. Hodgson pleaded with the audience, should any of us ever build a protocol, to "make damn sure you can ratchet the version of everything from the outset, otherwise you just paint yourself into a corner. One day you discover a bug in your federation API, and then, before you can fix it, you have to retrofit a whole versioning system".
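That lesson generalizes beyond Matrix. A minimal sketch of the kind of version negotiation worth ratcheting in from day one follows; this is a generic illustration, not Matrix's actual mechanism.

```python
def negotiate(client_versions, server_versions):
    """Pick the highest protocol version both peers support; baking this
    handshake in from the outset is the 'ratchet' Hodgson described."""
    common = set(client_versions) & set(server_versions)
    if not common:
        raise ValueError("no common protocol version")
    return max(common)

# An old client meets a newer server: they settle on version 3.
print(negotiate({1, 2, 3}, {2, 3, 4}))  # 3
```

Once every endpoint carries a version, a design error like the one in the federation API can be fixed by introducing version N+1, rather than by retrofitting versioning under live traffic.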

The move to 1.0 has also meant a complete rethink of certificate management. Back in 2014 the Matrix project decided to abjure traditional certificate authorities; instead, self-signed certificates would be used. To democratize decisions about who to trust, notary servers would be implemented to build a consensus about the trustability of a given TLS certificate. It was, in Hodgson's words, a disaster. While the developers were trying to fix it, Let's Encrypt came along, which made the process of getting properly-signed certificates trivial, so the self-signing experiment has been abandoned. The 0.99 version of the home server code is ACME-capable and can get a certificate directly from Let's Encrypt; the 1.0 version removes support for self-signed certificates.

At 2am on the day of Hodgson's talk, February 2, the server-server API was released at version 0.1. All five core APIs are now stable, he said, which is a necessary precondition for a 1.0 release. Two of them (the client-server and identity APIs) still need final tweaks, but those changes are expected to have landed by the time this article is published, signaling Matrix's official exit from beta.

Matrix has already been pretty successful. Hodgson presented two graphs (active users, publicly-visible servers) both with pleasing-looking exponential curves on them. The matrix.org server has about half of the seven million globally visible accounts, so it was interesting that Hodgson announced the long-term intention to turn that server off. Apparently, once it becomes common for other organizations to offer a federated Matrix server, there will be no need for a "default" server for new users to land on. It's clear that Hodgson really does regard Matrix as a fully federated system, rather than something centralized, or centralized-plus-hangers-on.

The French connection

In early 2018, DINSIC, which Hodgson described as the "French Ministry of Digital", reached out to one of the developers working on the Riot client for Android to ask if they might get a copy for their own purposes. On investigation, it turned out that what the ministry wanted was end-to-end encrypted, decentralized communication across the entire French government, running on systems that France could provision and control. The agency thought that Matrix might be an excellent platform for providing this.

One might, said Hodgson, question why a government would want a decentralized system. It turns out that governments, at least in France, only look centralized; in real life they're made up of ministries, departments, sub-departments, schools, hospitals, universities, and so forth. Each of these organizations has its own operational requirements, security model, policies, and so on; a federated solution allows server operation to be decentralized to whatever extent makes the most sense for the operators and for the user community. In France, that user community turns out to be about 5.5 million users, or about 9% of the population of the country. In addition, although this was to be a standalone deployment, DINSIC wanted the ability to federate publicly: to be able to connect to other governments, suppliers, contractors, and so on using its shiny new system, without all those external people needing accounts on it.

Because of the federated nature of what was being sought, end-to-end encryption was a requirement, which Matrix was in a position to provide. But DINSIC also wanted enterprise-grade anti-virus (AV) support, so that the ability to share documents, images, and other data through the system didn't present an exciting new infection vector. As Hodgson pointed out, AV is pretty much entirely incompatible with end-to-end encryption: if you've built a system that prevents anyone except sender and receiver from knowing what's in a file, how's a third party going to intercept it en route to scan it for known-harmful content?

In the end, this required adding the ability for files to be passed out of Matrix to an arbitrary external scanning service. This service is provided with the URL for a piece of encrypted content, plus the encryption keys for that content — not for the message in which the content appears, or for the room in which the message appears — encrypted specifically for the content-scanning service. The scanning service then retrieves the content, decrypts and scans it, and proxies the result back into Matrix. Having acquired this ability, Matrix scans content both on upload and download, for extra security; the code to do all this will be making its way back into mainstream Matrix in the near future.
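The key property of that scheme — the scanner learns one file's key and nothing about the message or room — can be sketched as follows. Everything here is a stand-in: the function names are invented, the "encryption" is a tagged tuple rather than real asymmetric cryptography, and scanning is a hash comparison against a single pretend malware signature.

```python
import hashlib

# Pretend signature database with one known-bad file hash.
BAD_HASH = hashlib.sha256(b"malicious payload").hexdigest()

def wrap_for_scanner(content_url, content_key, scanner_pubkey):
    """Hand the scanner only this one file's key, 'encrypted' for the
    scanner (stand-in: a tuple tagged with the scanner's key)."""
    return {"url": content_url, "key_for_scanner": (scanner_pubkey, content_key)}

def scan(request, fetch, scanner_privkey):
    """Scanner side: recover the per-file key, fetch the content, scan it."""
    pub, key = request["key_for_scanner"]
    assert pub == scanner_privkey  # stand-in for a real key-pair check
    plaintext = fetch(request["url"], key)
    clean = hashlib.sha256(plaintext).hexdigest() != BAD_HASH
    return {"clean": clean}

# Toy content store: files retrievable only with their per-file key.
store = {("mxc://example/1", "k1"): b"holiday photos",
         ("mxc://example/2", "k2"): b"malicious payload"}
fetch = lambda url, key: store[(url, key)]

ok = scan(wrap_for_scanner("mxc://example/1", "k1", "scan-key"), fetch, "scan-key")
bad = scan(wrap_for_scanner("mxc://example/2", "k2", "scan-key"), fetch, "scan-key")
print(ok, bad)  # {'clean': True} {'clean': False}
```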

Having committed to using Matrix, DINSIC started work in May 2018 on Tchap, its fork of the Riot client. The current state of the work can be found on GitHub; user trials started in June. The French National Cybersecurity Agency has audited the system, as has an external body. As of January 2019, it's being rolled out across all French ministries, which involves a great deal of Ansible code. Hodgson gave a quick demo of Tchap, noting that at the moment he feels that Tchap is probably more usable than the mainstream Riot client, not least because DINSIC has had a professional user-experience (UX) agency working hard on it.

It's no secret that FOSDEM suffers from having a fixed selection of rooms, of fixed sizes, which often don't correspond to the demand for the activities in those rooms. This year, the queue for entry to the JavaScript devroom, for example, was so enormous that the door itself was quite invisible for most of the weekend. On the flip side, the huge Janson auditorium that is used for the keynotes often has to host talks given to audiences that, though they'd fill most other rooms, look a little lost in that vast cavern. Hodgson gave his talk to a Janson auditorium with virtually no free seats — the largest audience I have ever seen in there for a non-keynote talk.

Clearly, there is some interest in the project. Some of this will be curiosity about the French connection, but much will be because Matrix is a fully-working professional system that can be used productively right now. If you're in one of those organizations that uses Slack for nearly everything, there is no good reason not to start looking at a migration to Matrix, because if the migration is successful, you can bring your messaging data back in-house. Free software is about the user having control, and Matrix honors that promise.

For anyone who'd like to see the whole talk, the video can be found here.

[We would like to thank LWN's travel sponsor, the Linux Foundation, for travel assistance to Brussels for FOSDEM.]


Avoiding the coming IoT dystopia

By Jake Edge
February 12, 2019

LCA

Bradley Kuhn works for the Software Freedom Conservancy (SFC) and part of what that organization does is to think about the problems that software freedom may encounter in the future. SFC worries about what will happen with the four freedoms as things change in the world. One of those changes is already upon us: the Internet of Things (IoT) has become quite popular, but it has many dangers, he said. Copyleft can help; his talk is meant to show how.

It is still an open question in his mind whether the IoT is beneficial or not. But the "deep trouble" that we are in from the IoT can be mitigated to some extent by copyleft licenses that are "regularly and fairly enforced". Copyleft is not the solution to all of the problems, all of the time—no idea, no matter how great, can be—but it can help with the dangers of the IoT. That is what he hoped to convince attendees of in his talk.

A joke that he had seen at least three times at the conference (and certainly before that as well) is that the "S" in IoT stands for security. As everyone knows by now, the IoT is not about security. He pointed to some recent incidents, including IoT baby monitors that were compromised by attackers in order to verbally threaten the parents. This is "scary stuff", he said.

[Bradley Kuhn]

The IoT web cameras that he uses at his house to monitor his dogs have a "great way" to avoid any firewalls they may be installed behind. They simply send the footage to the manufacturer's servers in China so that he can monitor his dogs from New Zealand or wherever else he might be. So there may be Chinese "hackers" watching his dogs all day; he hopes they'll call if they notice the dogs are out of water, he said to laughter.

As a community, we are quite good at identifying problems, which is why that joke has been repeated so often. And even though we know that the IoT ecosystem is a huge mess, we haven't really done anything about it. Maybe the task just seems too large and daunting for the community to tackle, but Linux has always provided the ultimate counter-example.

Many of us got started in Linux as hobbyists. There was no hardware with Linux pre-installed, so we had to get it running on our desktops and laptops ourselves. That is how he got his start in the Linux community; when Softlanding Linux System (SLS) did not work on his laptop in 1992, he was able to comment out some code in the kernel, rebuild it, and get it running. Now it is not impossible to find laptops with Linux pre-installed but, in truth, most people still install Linux on their laptops themselves, just as they did in 1992.

The hobbyist culture, where getting Linux on your laptop meant putting it there yourself, is part of what made the Linux community great, Kuhn said. The fact that nearly everyone at his talk was running Linux on their laptop (and only a tiny fraction of those had it come pre-installed) is a testament to the hard work that many have done over the years to make sure we can buy off-the-shelf hardware and run Linux on it. It was the hobbyist culture that made that happen and they did so in spite of, not with the help of, the manufacturers.

If he said that he didn't think it was important that users be able to install Linux on their laptops, no one would think that was a reasonable position—"not at LCA, anyway". But that is exactly what we are being told in IoT-land. The ironic thing is that most of those devices do come with Linux pre-installed. So you still can't buy a laptop with Linux on it, for the most part, but you can't buy a web camera (or baby monitor) without Linux on it.

If, in 1992, we had heard that 90+% of small devices would come with Linux pre-installed in 2019, we would have been ecstatic, he said. But the problem is that to a large extent, there are no re-installs. Some people do install alternative firmware on their devices, but the vast majority do not. And, in fact, most IoT device makers seek to make it impossible for users to re-install Linux.

Help from the GPL

But the GPL is "one of the most forward-looking documents ever written—at least in software", Kuhn said. While the GPL did not anticipate the advent of IoT, it already contains the words needed to help solve the problems that come with IoT: "the scripts used to control compilation and installation of the executable". The freedom to study the code is not enough; the GPL requires more than just the source code so that users can exercise their other software freedoms, which includes the freedom to modify and to install modified versions of the code.

The Linksys WRT54G series of home WiFi routers is one of the first IoT devices to his way of thinking. These devices brought a major change to people's homes. Like Rusty Russell (whose keynote earlier that day clearly resonated with Kuhn), he remembers going to conferences that had no WiFi. He was not surprised to hear that someone printed out the front page of Slashdot at the first LCA.

The WRT54G was the first device where "we did things right": a large group of people and organizations got together and enforced the GPL. That led to the OpenWrt project that is still thriving today. The first commit to the project's repository was the source code that the community had pried out of the hands of Linksys (which, by then, had been bought by Cisco).

But the worry is that OpenWrt is something of a unicorn. There are few devices that even have an alternative firmware project these days. It is, he believes, one of the biggest problems we face in free software. We need to have the ability to effectively use the source code of the firmware running in our devices.

Another example of a project that came about from GPL-enforcement efforts is SamyGO, which creates alternative firmware for Samsung televisions. That project is foundering at this point, he said, which is sad. He does not understand why the manufacturers don't get excited by these projects; at the very least, they will sell a few more devices. As an example, the WRT54G series is the longest-lived router product ever made; it may, in fact, be the longest-lived digital device of any kind.

GPL enforcement can make these kinds of projects happen. But there is a disconnect between two different parts of our community. The Linux upstream is too focused on big corporations and their needs, to the detriment of the small, hobbyist users, he said to applause. Multiple key members of the Linux leadership take the GPL "cafeteria style"; they really only want the C files that have changed and do not care if the code builds or installs.

But all of those leaders, and lots of others, got their start by hacking on Linux on devices they had handy. Kuhn pointed to a 2016 post by Matthew Garrett that described this problem well:

[...] do you want 4 more enterprise clustering filesystems, or another complete rewrite of the page allocator for a 3% performance improvement under a specific database workload, or do you want a bunch of teenagers who grow up hacking this stuff because it's what powers every device they own? Because honestly I think it's the latter that's helped get you where you are now, and they're not going to be there if the thing that matters to you most is making sure that large companies don't feel threatened rather than making sure that the next 19 year old in a dorm room can actually hack the code on their phone and build something better as a result. It's what brought me here in the first place, and I'm hardly the only one.

The next generation of developers will come from the hobbyist world, not from IBM and other corporations, Kuhn said. If all you need to hack on Linux is a laptop and an IoT device, that will allow lots of people, from various income levels, to participate.

Users and developers

In the software world, there is a separation between users and developers, but free software blurs that distinction. It says that you can start out as a user but become a developer if you ever decide you want to; the source code is always there waiting. He worries that our community will suffer if, someday, the only places you can install Linux are in big data centers and, perhaps, in laptops. If all the other devices that run Linux can only be re-installed by their manufacturers, where does the next crop of upstream developers come from? That scenario would be a dystopia, he said; we may have won the battle on laptops, but we're losing it for IoT.

Linux is the most important GPL program ever written, Kuhn said; it may be the most important piece of software ever written, in fact. It was successful because of, not in spite of, its GPL license; he worries that history is being rewritten on that point. Linux was also successful because users could install it on the devices they had access to. He believes that the leaders of upstream Linux are making a mistake by not helping to ensure that users can install their modified code on any and every device that runs Linux. "Tinkering is what makes free software great"; without being able to tinker, we don't have free software.

Upstream does matter, but it does not matter as much as downstream. He said he was sorry to any upstream developers of projects in the audience, but that their users are more important than they are. "It's not about you", he said with a grin.

It is amazing to him, and was unimaginable to him in 1992, that there are many thousands of upstream Linux developers today. But he really did not anticipate that two billion people would have Linux on their devices; that is even more astounding than the number of developers. But most of those two billion potential Linux developers aren't actually able to put code on their devices because those devices are locked down, the process of re-installing is too difficult, or the alternative firmware projects haven't found the time to do the reverse engineering needed to be able to do so.

The upstream Linux developers are important; they are our friends and colleagues. But upstream and downstream can and do disagree on things. Kuhn strongly believes that there is a silent plurality and a loud minority that really want to see IoT devices be hackable, re-installable, changeable, and "study-able". That last is particularly important so that we can figure out what the security and privacy implications of these devices are. Once again, that was met with much applause.

Being able to see just the kernel source is not going to magically solve all of these problems, but it is a "necessary, even if not sufficient condition". We have to start with the kernel; if some device maker wants to put spyware in, the kernel is an obvious place to do so. The kernel's license is fixed and, as we frequently hear, extremely hard to change; but that license is the GPLv2, which gives downstream users the rights they need. Even if upstream does not prioritize those rights the way its downstream users do, it has been quite nice and licensed its code in a way that assures software freedom.

There is no need for a revolution to get what we need for these IoT devices. The GPL already has the words that we need to ensure our ability to re-install Linux on these devices. We simply need to enforce those words, he said.

Call to action

Kuhn put out a call to action for attendees and others in our community. There are some active things that people can do to help this process along. The source code offer needs to be tested for every device that has one. Early on, companies figured out that shipping free software without an offer for the source code was an obvious red flag indicating a GPL violation. So they started putting in these offers with no intention of actually fulfilling them. They believe that almost no one will actually make the request.

That needs to change. He would like everyone to request the source code using the offer made in the manual of any device they buy. It is important that these companies start to learn that people do care about the source code, so even if you do not have the time or inclination to do anything with it, it should still be requested. Every time you buy a Linux-based device, you should have the source code "or something that looks like it" or you should request it.

If you have the time and skills, try to build and install the code on the device. If you don't, see if a friend can help or ask for help on the internet. Re-installing can sometimes brick your device, but if the sources and scripts provided cannot produce a working installation, that probably means the company is in violation of the GPL. The idea is to create a culture within our community that publicizes people getting source releases and trying to use them; if they do not work, which is common, that will raise the visibility of these incomplete source releases.

"If it doesn't work, they violated the GPL, it's that simple", Kuhn said. You were promised a set of rights in the GPL and you did not get them. You can report the violation but, unfortunately, SFC is the only organization that is doing enforcement for the community at this point. It has a huge catalog of reports, so it may not be able to do much with the report, but the catalog itself is useful. It shows the extent of the problem and it helps the community recognize that there is a watchdog for the GPL out there; for better or worse, he said, that watchdog is SFC.

But he had an even bigger ask, he said. He is hoping that at least one person in the audience would step up to be the leader of a project to create an alternative firmware for some IoT device. It will have to be done as a hobbyist, because no company will want to fund this work, but it is important that every class of device has an alternative firmware project. Few people are working on this now, but if you are interested in the capabilities of some device, you could become the leader of a project to make it even better. "Revolutions are run by people who show up."

It feels like an insurmountable problem, even to him most days, but it does matter. It only requires that we exercise the rights that we already have, rights that were given to us by the upstream developers by way of the GPL.

Being able to rebuild and re-install Linux on these devices won't magically fix the numerous privacy and security flaws that they have, but it is a start. The OpenWrt project started by getting the source code it was provided up and running; then it started adding features. Things like VPN support and bufferbloat fixes have improved those devices immeasurably. We can restore the balance of power by putting users back in charge of the operating systems on their devices; we did it once with laptops and servers, and we can do it again for IoT devices.

A WebM format video of the talk is available, as is a YouTube version.

[I would like to thank LWN's travel sponsor, the Linux Foundation, for travel assistance to Christchurch for linux.conf.au.]


Blacklisting insecure filesystems in openSUSE

By Jonathan Corbet
February 8, 2019
The Linux kernel supports a wide variety of filesystem types, many of which have not seen significant use — or maintenance — in many years. Developers in the openSUSE project have concluded that many of these filesystem types are, at this point, more useful to attackers than to openSUSE users and are proposing to blacklist many of them by default. Such changes can be controversial, but it's probably still fair to say that few people expected the massive discussion that resulted, covering everything from the number of OS/2 users to how openSUSE fits into the distribution marketplace.

On January 30, Martin Wilck started the discussion with a proposal to add a blacklist preventing the automatic loading of a set of kernel modules implementing (mostly) old filesystems. These include filesystems like JFS, Minix, cramfs, AFFS, and F2FS. For most of these, the logic is that the filesystems are essentially unused and the modules implementing them have seen little maintenance in recent decades. But those modules can still be automatically loaded if a user inserts a removable drive containing one of those filesystem types. There are a number of fuzz-testing efforts underway in the kernel community, but it seems relatively unlikely that any of them are targeting, say, FreeVxFS filesystem images. So it is not unreasonable to suspect that there just might be exploitable bugs in those modules. Preventing modules for ancient, unmaintained filesystems from automatically loading may thus protect some users against flash-drive attacks.
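The loading-prevention mechanism itself is modprobe's ordinary blacklist facility. A file along these lines (the path and module list here are illustrative, not openSUSE's actual shipped configuration) stops alias-based automatic loading at mount time while still permitting a deliberate `modprobe jfs` by the administrator:

```conf
# /etc/modprobe.d/60-blacklist_fs.conf (illustrative example)
# "blacklist" stops automatic, alias-based loading — e.g. when the kernel
# requests "fs-jfs" while mounting a drive — but an explicit
# "modprobe jfs" by the administrator still works.
blacklist jfs
blacklist minix
blacklist cramfs
blacklist affs
blacklist f2fs
```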

If there were to be a fight over a proposal like this, one would ordinarily expect it to be concerned with the specific list of unwelcome modules. But there was relatively little of that. One possible exception is F2FS, the presence of which raised some eyebrows since it is under active development, having received 44 changes in the 5.0 development cycle, for example. Interestingly, it turns out that openSUSE stopped shipping F2FS in September. While the filesystem is being actively developed, it seems that, with rare exceptions, nobody is actively backporting fixes, and the filesystem also lacks a mechanism to prevent an old F2FS implementation from being confused by a filesystem created by a newer version. Rather than deal with these issues, openSUSE decided to just drop the filesystem altogether. As it happens, the blacklist proposal looks likely to allow F2FS to return to the distribution since it can be blacklisted by default.

The core of the debate, though, was over whether openSUSE should be blacklisting filesystem modules at all. Various participants made the claim that broad filesystem support was part of what attracted them to Linux in the first place, and that they would be upset if things stopped working. The plan is to continue to ship the blacklisted kernel modules, so getting support back would be a simple matter of (manually) deleting an entry from the list, but many users, it was argued, are not capable of making such a change. Blacklisting filesystem types, thus, makes the distribution less friendly for a certain class of users.

As others noted, though, that class of users is likely to be quite small. One would not expect to encounter many users who are running a system that dual-boots between Linux and, say, OS/2, but who also lack the skills to edit a configuration file. That is likely to be the case regardless of which Linux distribution is installed, but openSUSE in particular is not generally viewed as a beginner's Linux. On the other hand, blacklisting risky filesystems could potentially provide a layer of protection for millions of users — a rather larger group.

One commonly heard theme was that disabling these filesystems by default would put openSUSE at a competitive disadvantage with regard to other distributions that have not made this change. Users would note that some other distribution is able to mount their obscure filesystem while openSUSE is not (by default), so they would gravitate to competing distributions. This, Liam Proven said, would be bad:

To thrive, Linux distros have to attract users from other Linux distros. If you only get income from your existing customers, then you are going to die, because sometimes your customers will die. They will fail, or go bankrupt, or get bought out, or switch providers, or something.

This is a law of the market.

It is fair to say that many members of the openSUSE community do not see their situation that way and do not see maximizing the number of openSUSE users as being an important goal in its own right. Richard Brown asserted that "openSUSE is a community project, therefore one of our first concerns should always remain ensuring our Project is self-sustaining"; trying to support users whose interests diverge significantly from those of the community runs counter to that goal. Michal Kubecek agreed, saying:

We have limited resources and we have to weigh carefully what we use them for. Focusing on the kind of users you want to attract (unwilling to think about problems, unwilling to learn things, unwilling to invest their time and energy) means getting a lot of users who will need a lot of help even with the basic tasks. That means that skilled users will either spend a lot of time helping them or (more likely) will simply stop helping.

In the end, the argument that making this change would cause openSUSE to lose users did not carry the day. Another "helpful" participant repeatedly told the community to implement a microkernel model for filesystems so that security problems could be contained. As of this writing, that suggestion, too, has failed to find a critical mass of support.

The part of the discussion that did go somewhere had to do with what happens when a filesystem fails to mount because the requisite module has been blacklisted. In response, there has been some work done to improve the documentation and the messages emitted by the system when that happens so that users know what is going on and what to do about it. There were also concerns about existing systems that would fail to boot after an update — an outcome that all were able to agree is not entirely optimal. The solution to this problem appears to be a scheme whereby filesystem modules would be removed from the blacklist if they are loaded in the kernel that is running when the update is installed. So a system that is actively using one of the blacklisted filesystem types would continue to have access to it by default.

As of this writing, a pair of proposals for the actual changes are under consideration. One of them adds the blacklist and the mechanism described above; the other improves the output from a failed mount attempt, noting that the blacklist may be involved. The long discussion has mostly wound down, with the plan looking much like it did at the outset. In the end, it seems, the openSUSE community feels that protecting users against attacks on old and unmaintained code is more important than having ancient filesystems work out of the box.

Comments (41 posted)

Concurrency management in BPF

By Jonathan Corbet
February 7, 2019
In the beginning, programs run on the in-kernel BPF virtual machine had no persistent internal state and no data that was shared with any other part of the system. The arrival of eBPF and, in particular, its maps functionality, has changed that situation, though, since a map can be shared between two or more BPF programs as well as with processes running in user space. That sharing naturally leads to concurrency problems, so the BPF developers have found themselves needing to add primitives to manage concurrency (the "exchange and add" or XADD instruction, for example). The next step is the addition of a spinlock mechanism to protect data structures, which has also led to some wider discussions on what the BPF memory model should look like.

A BPF map can be thought of as a sort of array or hash-table data structure. The actual data stored in a map can be of an arbitrary type, including structures. If a complex structure is read from a map while it is being modified, the result may be internally inconsistent, with surprising (and probably unwelcome) results. In an attempt to prevent such problems, Alexei Starovoitov introduced BPF spinlocks in mid-January; after a number of quick review cycles, version 7 of the patch set was applied on February 1. If all goes well, this feature will be included in the 5.1 kernel.

BPF spinlocks

BPF spinlocks can only be placed inside structures that, in turn, are stored in BPF maps. Such structures should contain a field like:

    struct bpf_spin_lock lock;

A BPF spinlock inside a given structure is meant to protect that structure in particular from concurrent access. There is no way to protect other data, such as an entire BPF map, with a single lock.

From the point of view of a BPF program, this lock will behave much like an ordinary kernel spinlock. An example provided with the patch-set cover letter starts by defining a structure containing a counter:

    struct hash_elem {
    	int cnt;
    	struct bpf_spin_lock lock;
    };

Code that is compiled to BPF could then increment the counter atomically with something like the following:

    struct hash_elem *val = bpf_map_lookup_elem(&hash_map, &key);
    if (val) {
    	bpf_spin_lock(&val->lock);
    	val->cnt++;
    	bpf_spin_unlock(&val->lock);
    }

BPF programs run in a restricted environment, so there is naturally a long list of rules that regulate the use of spinlocks. Only certain types of maps (hash, array, and control-group local storage) support spinlocks at all, and only one spinlock is allowed within any given map element. BPF programs can only acquire one lock at a time (to head off deadlock worries), cannot call external functions while holding a lock, and must release a lock prior to returning. Direct access to the struct bpf_spin_lock field by the BPF program is disallowed. A number of other rules apply as well; see this patch for a more complete list.

Access to BPF spinlocks from user space is naturally different; a user-space process cannot be allowed to hold BPF spinlocks for unbounded periods of time since that would be an easy way to lock up a kernel thread. The complex bpf() system call thus does not get the ability to manipulate BPF spinlocks directly. Instead, it gains a new flag (BPF_F_LOCK) that can be added to the BPF_MAP_LOOKUP_ELEM and BPF_MAP_UPDATE_ELEM operations to cause the spinlock contained within the indicated element to be acquired for the duration of the operation. Reading an element does not reveal the contents of the spinlock field, and updating an element will not change that field.

One implication of this design is that user space cannot use BPF spinlocks to protect complex changes to structures stored in BPF maps; even the simple counter-incrementing example shown above would not be possible, since the lock cannot be held over the full operation (reading the counter, incrementing it, and storing the result). The implicit assumption seems to be that such manipulations will be done on the BPF side, so the locking functionality serves mostly to keep user space from accessing a structure that a BPF program has partially modified. For example, a test program included with the patch set includes a BPF portion that repeatedly picks a random value, then sets every element of an array to that value while holding the lock. The user-space side reads that array under lock and verifies that all elements are the same, thus showing that the element was not read in the middle of an update operation.

The patch set has seen a number of changes as the result of review comments. One significant added restriction is that BPF spinlocks cannot be used in tracing or socket-filter programs due to preemption-related issues. That restriction seems likely to be lifted in the future; meanwhile, other types of BPF programs (including most networking-related programs) should be able to use BPF spinlocks once the feature goes upstream.

The BPF memory model

In the conversation around version 4 of the patch set, Peter Zijlstra asked about the overall memory model for BPF. In contemporary systems, there is a lot more to concurrency control than spinlocks, especially when the desire is to minimize the cost of that control. Access to shared data can be complicated by the tendency of modern hardware to cache and reorder memory accesses, with the result that changes made on one CPU can appear in a different order elsewhere. Concurrency-aware code may have to make careful use of memory barriers to ensure that changes are globally visible in the right order.

Such code tends to be tricky when written for a single architecture, but it is further complicated by the fact that, naturally, every CPU type handles these concurrency issues differently. Kernel developers have done a lot of work to hide those differences to the greatest extent possible; details of that work can be found in Documentation/memory-barriers.txt and the formal kernel memory-model specification. All of that work refers to kernel code running natively on the host processor, though, not code running under the BPF virtual machine. As BPF programs run in increasingly concurrent environments, the need to specify the memory model under which they run will grow.

Starovoitov, who remains the leader of the kernel's BPF efforts, has proved resistant to defining the memory model under which BPF programs run:

What I want to avoid is to define the whole execution ordering model upfront. We cannot say that BPF ISA is weakly ordered like alpha. Most of the bpf progs are written and running on x86. We shouldn't twist bpf developer's arm by artificially relaxing memory model. BPF memory model is equal to memory model of underlying architecture. What we can do is to make it bpf progs a bit more portable with smp_rmb instructions, but we must not force weak execution on the developer.

This approach concerns the developers who have gone to a lot of effort to specify what the kernel's memory model should be in general; Will Deacon said outright that "I don't think this is a good approach to take for the future of eBPF". Paul McKenney has suggested that BPF should simply follow the memory model used by the rest of the kernel. Starovoitov doesn't want to do that, though, saying "tldr: not going to sacrifice performance".

That part of the conversation ended without any conclusions beyond a suggestion to talk further about the issue, either in a phone call or at an upcoming conference. It's not clear whether this off-list discussion has happened as of this writing. What seems clear, though, is that these issues are better worked out soon rather than having to be managed in an after-the-fact manner later on. Concurrency issues are hard enough when the underlying rules are well understood; they become nearly impossible when different developers are assuming different rules and code accordingly.

Comments (14 posted)

io_uring, SCM_RIGHTS, and reference-count cycles

By Jonathan Corbet
February 13, 2019
The io_uring mechanism that was described here in January has been through a number of revisions since then; those changes have generally been fixing implementation issues rather than changing the user-space API. In particular, this patch set seems to have received more than the usual amount of security-related review, which can only be a good thing. Security concerns became a bit of an obstacle for io_uring, though, when virtual filesystem (VFS) maintainer Al Viro threatened to veto the merging of the whole thing. It turns out that there were some reference-counting issues that required his unique experience to straighten out.

The VFS layer is a complicated beast; it must manage the complexities of the filesystem namespace in a way that provides the highest possible performance while maintaining security and correctness. Achieving that requires making use of almost all of the locking and concurrency-management mechanisms that the kernel offers, plus a couple more implemented internally. It is fair to say that the number of kernel developers who thoroughly understand how it works is extremely small; indeed, sometimes it seems like Viro is the only one with the full picture.

In keeping with time-honored kernel tradition, little of this complexity is documented, so when Viro gets a moment to write down how some of it works, it's worth paying attention. In a long "brain dump", Viro described how file reference counts are managed, how reference-count cycles can come about, and what the kernel does to break them. For those with the time to beat their brains against it for a while, Viro's explanation (along with a few corrections) is well worth reading. For the rest of us, a lighter version follows.

Reference counts for file structures

The Linux kernel uses the file structure to represent an open file. Every open file descriptor in user space is represented by a file structure in the kernel; in essence, a file descriptor is an index into a table in struct files_struct, where a pointer to the file structure can be found. There is a fair amount of information kept in the file structure, including the current position within the file, the access mode, the file_operations structure, a private_data pointer for use by lower-level code, and more.

Like many kernel data structures, file structures can have multiple references to them outstanding at any given time. As a simple example, passing a file descriptor to dup() will allocate a second file descriptor referring to the same file structure; many other examples exist. The kernel must keep track of these references to be able to know when any given file structure is no longer used and can be freed; that is done using the f_count field. Whenever a reference is created, by calling dup(), forking the process, starting an I/O operation, or any of a number of other ways, f_count must be increased. When a reference is removed, via a call to close() or exit(), for example, f_count is decreased; when it reaches zero, the structure can be freed.

Various operations within the kernel can create references to file structures; for example, a read() call will hold a reference for the duration of the operation to keep the file structure in existence. Mounting a filesystem contained within a file via the loopback device will create a reference that persists until the filesystem is unmounted again. One important point, though, is that references to file structures are not, directly or indirectly, contained within file structures themselves. That means that any given chain of references cannot be cyclical, which is a good thing. Cycles are the bane of reference-counting schemes; once one is created, none of the objects contained within the cycle will ever see their reference count return to zero without some sort of external intervention. That will prevent those objects from ever being freed.

Enter SCM_RIGHTS

Unfortunately for those of us living in the real world, the situation is not actually as simple as portrayed above. There are indeed cases where cycles of references to file structures can be created, preventing those structures from being freed. This is highly unlikely to happen in the normal operation of the system, but it is something that could be done by a hostile application, so the kernel must be prepared for it.

Unix-domain sockets are used for communication between processes running on the same system; they behave much like pipes, but with some significant differences. One of those is that they support the SCM_RIGHTS control message, which can be used to transmit an open file descriptor from one process to another. This feature is often used to implement request-dispatching systems or security boundaries; one process has the ability to open a given file (or network socket) and make decisions on whether another process should get access to the result. If so, SCM_RIGHTS can be used to create a copy of the file descriptor and pass it to the other end of the Unix-domain connection.

SCM_RIGHTS will obviously create a new reference to the file structure behind the descriptor being passed. This is done when the sendmsg() call is made, and a structure containing pointers to the file structure being passed is attached to the receiving end of the socket. This allows the passing side to immediately close its file descriptor after passing it with SCM_RIGHTS; the reference taken when the operation is queued will keep the file open for as long as it takes the receiving end to accept the new file and take ownership of the reference. Indeed, the receiving side need not have even accepted the connection on the socket yet; the kernel will stash the file structure in a queue and wait until the receiver gets around to asking for it.

Queuing SCM_RIGHTS messages in this way makes things work the way application developers would expect, but it has an interesting side effect: it creates an indirect reference from one file structure to another. The file structure representing the receiving end of an SCM_RIGHTS message, in essence, owns a reference to the file structure transferred in that message until the application accepts it. That has some important implications.

Suppose some process connects to itself via a Unix-domain socket, so it has two file descriptors, call them FD1 and FD2, one corresponding to each end of the connection. It then proceeds to use SCM_RIGHTS to send FD1 to FD2 and the reverse; each file descriptor is sent to the opposite end. We now have a situation where the file structure at each end of the socket indirectly holds a reference to the other — a cycle, in other words. This can work just fine; if the process then accepts the file descriptor sent to either end (or both), the cycle will be broken and all will be well.

If, however, the process closes FD1 and FD2 without accepting the transferred file descriptors, it will remove the only two references to the underlying file structures — except for those that make up the cycle itself. Those file structures will have a permanently elevated reference count and can never be freed. If this happens once as the result of an application bug, there is no great harm done; a small amount of kernel memory will be leaked. If a hostile process does it repeatedly, though, those cycles could eventually consume a great deal of memory.

There are other ways of using SCM_RIGHTS to create this kind of cycle as well. The problem always involves descriptor-passing datagrams that have never been received, though; this fact is used by the kernel to detect and break cycles. When a file structure corresponding to a Unix-domain socket gains a reference from an SCM_RIGHTS datagram, the inflight field of the corresponding unix_sock structure is incremented. If the reference count on the file structure is higher than the inflight count (which is the normal state of affairs), that file has external references and is thus not part of an unreachable cycle.

If, instead, the two counts are equal, that file structure might be part of an unreachable cycle. To determine whether that is the case, the kernel finds the set of all in-flight Unix-domain sockets for which all references are contained in SCM_RIGHTS datagrams (for which f_count and inflight are equal, in other words). It then counts how many references to each of those sockets come from SCM_RIGHTS datagrams attached to sockets in this set. Any socket that has references coming from outside the set is reachable and can be removed from the set; if that reachable socket has unconsumed SCM_RIGHTS datagrams attached to it, the files contained within those datagrams are also reachable and can be removed from the set.

At the end of an iterative process, the kernel may find itself with a set of in-flight Unix-domain sockets that are only referenced by unconsumed (and unconsumable) SCM_RIGHTS datagrams; at this point, it has a cycle of file structures holding the only references to each other. Removing those datagrams from the queue, releasing the references they hold, and discarding them will break the cycle.

As one might imagine, given that the VFS is involved, there is more complexity than has been described above and some gnarly locking issues involved in carrying out these operations. See Viro's message for the gory details.

Fixing io_uring

Among the features provided by io_uring is the ability to "register" one or more files with an open ring; that speeds I/O operations by eliminating the need to acquire and release references to the registered files every time. When a file is registered with an io_uring, the kernel will create and hold a reference for the duration of that registration. This is a useful feature but it contained a problem that, seemingly, only somebody with a Viro-level understanding of the VFS could spot, describe, and fix; it is a new variant on the cycle problem described above. In short: a process could create a Unix-domain socket and register both ends with an io_uring. If it were then to pass the file descriptor corresponding to the io_uring itself over that socket, then close all of the file descriptors, a cycle would be created. The io_uring code was unprepared for that eventuality.

Viro proposed a solution that involves making the file registration mechanism set up the SCM_RIGHTS data structures as if the registered file descriptor were being passed over a Unix-domain socket. There is a useful analogy here; registering a file can be thought of as passing it to the kernel to be operated on directly. Once the setup has been done, the same cycle-breaking logic will find (and fix) cycles created using io_uring structures.

Jens Axboe, the author of io_uring, implemented the solution and verified that it works. With that issue resolved, it appears that the path to merging io_uring in the 5.1 development cycle may be clear. In the process, a bit of light has been shed on a corner of the VFS that few people understand. The problem of a lack of people with a wide understanding of the VFS layer as a whole, though, is likely to come up again; it rather looks like a cycle that we have not yet gotten out of.

Comments (14 posted)

Page editor: Jonathan Corbet

Brief items

Security

CVE-2019-5736: runc container breakout

Anybody running containerized workloads with runc (used by Docker, cri-o, containerd, and Kubernetes, among others) will want to make note of a newly disclosed vulnerability known as CVE-2019-5736. "The vulnerability allows a malicious container to (with minimal user interaction) overwrite the host runc binary and thus gain root-level code execution on the host." LXC is also evidently vulnerable to a variant of the exploit.

Full Story (comments: 20)

Google releases ClusterFuzz

Google has announced the release of its ClusterFuzz fuzz-testing system as free software. "ClusterFuzz has found more than 16,000 bugs in Chrome and more than 11,000 bugs in over 160 open source projects integrated with OSS-Fuzz. It is an integral part of the development process of Chrome and many other open source projects. ClusterFuzz is often able to detect bugs hours after they are introduced and verify the fix within a day."

Comments (none posted)

Security quotes of the week

HP doesn’t spell out any consequences in their terms of service for failure to send the ink back, so we checked with a support agent. They helpfully explained that nothing happens if you fail to send them back, but the cartridges would stop working. You’ll have to buy more ink on your own if you want to keep printing. HP ships specially marked ink as part of this process, and your printer recognizes that it is intended for Instant Ink subscribers only. It’s essentially DRM, but instead of locking down a digital movie or book, this locks down a physical product: the ink in your printer.

Instant Ink requires an internet connection for your printer. HP explains that they monitor your ink levels, so they know when to send you more, but as described in their Terms of Service the other reason for this is to remotely disable your ink cartridges if you cancel, or if there are any issues with your payment.

Josh Hendrickson at How-To Geek

Ultimately, I suspect the best solution is to move most of our discussions to dedicated fora like GitLab issues, or something like Discourse. Fundamentally, the thing we're trying to do (send email to thousands of people at a time using a fake From address) is ... kind of the opposite of what the 2019 Internet wants us to do. Every few months the major providers drop more of our mail as they become more aggressive with spam, and every few months their userbase increases by a non-trivial amount.

We've done a lot of work on our email infrastructure, and are doing our best to be a responsible citizen within the constraint of having to launder mail and forge identity on an industrial scale, but it's coming to the point where it just may not be possible to run such a service at such a scale anymore. [...]

Of course we do not have any plans to stop providing email any time soon, but it might be worth thinking about what you can do to reduce your dependency on email lists. At the current rate of degradation, it might be non-viable quicker than you'd think. Maybe this is unduly gloomy, but the entire internet's direction of travel has been away from services like Mailman, and its velocity is only increasing.

Daniel Stone (Thanks to Jani Nikula)

Comments (4 posted)

Kernel development

Kernel release status

The current development kernel is 5.0-rc6, released on February 10. Linus said: "So while I would have wished for less at this point, nothing in there looks all that odd or scary. I think we're still solidly on track for a normal release."

Stable updates: 4.4.174 was released on February 8. The patches went out for review on February 7; this release contains a backport of a fix for the FragmentSmack denial-of-service vulnerability. "Many thanks to Ben Hutchings for this release, it's pretty much just his work here in doing the backporting of networking fixes to help resolve "FragmentSmack" (i.e. CVE-2018-5391)."

The (huge) 4.20.8, 4.19.21, 4.14.99, and 4.9.156 updates were released on February 12.

Comments (none posted)

Distributions

Distribution quotes of the week

OpenSUSE is certainly not one of the largest distributions, measured by number of users. But I dare to say that we might have achieved an interesting ratio of skilled users and users willing to contribute, whether it's bugfixing, packaging or testing and meaningful reporting. There is a long running joke that the reason why openSUSE has no community is that whenever someone becomes really active in openSUSE community, he sooner or later ends up as a SUSE employee. Sure, it's just a joke but as with many others, there is some deep truth hidden in it. Do we want to risk these users just to attract a lot of passive ones or even pure consuments of resources? I don't.
Michal Kubecek

For me, the email that I received last month is an indication that the openSUSE community can do more to retain its users. And that the openSUSE community should have a better ‘Marketing strategy’ (for the lack of a better term) to make the Contributor Journey a smoother experience. To try to get the roadblocks out of the way for the people that want to be informed or be involved. It is an area where I could see myself contributing to in the future.
Martin de Boer

Comments (2 posted)

Development

GTK+ renamed to GTK

The GTK+ toolkit project has, after extensive deliberation, decided to remove the "+" from its name. "Over the years, we had discussions about removing the '+' from the project name. The 'plus' was added to 'GTK' once it was moved out of the GIMP sources tree and the project gained utilities like GLib and the GTK type system, in order to distinguish it from the previous, in-tree version. Very few people are aware of this history, and it's kind of confusing from the perspective of both newcomers and even expert users; people join the wrong IRC channel, the URLs on wikis are fairly ugly, etc."

Full Story (comments: 16)

LibreOffice 6.2 released

The LibreOffice 6.2 release is out. The headline feature this time around appears to be "NotebookBar": "a radical new approach to the user interface - based on the MUFFIN concept". Other changes include a reworking of the context menus, better change-tracking performance, better interoperability with proprietary file formats, and more.

Full Story (comments: 2)

Plasma 5.15 released

KDE has announced the release of Plasma 5.15. "Plasma 5.15 brings a number of changes to the configuration interfaces, including more options for complex network configurations. Many icons have been added or redesigned to make them clearer. Integration with third-party technologies like GTK and Firefox has been improved substantially." This release also features improvements to the Discover software manager. Many other tweaks and improvements are covered in the changelog.

Comments (none posted)

PyPy 7.0.0 released

Version 7.0.0 of the PyPy Python interpreter is out. This release supports no less than three upstream Python versions: 2.7, 3.5, and 3.6 (as an alpha release). "All the interpreters are based on much the same codebase, thus the triple release".

Full Story (comments: none)

Development quotes of the week

However, little about Wayland is inherently network opaque. Things like sending pixel buffers to the compositor are already abstracted on Wayland and a network-backed implementation could be easily made. The problem is that no one seems to really care: all of the people who want network transparency drank the anti-Wayland kool-aid instead of showing up to put the work in. If you want to implement this, though, we’re here and ready to support you! Drop by the wlroots IRC channel and we’re prepared to help you implement this.
Drew DeVault

I want to add one comment: someone on the thread said: "we are a small niche market". No.. we're a growing niche market. I can assure you of that. This market is supporting several companies who market pre-installed machines with Linux based desktop and are thriving. It might be slow, but conversions are happening.
Sriram Ramkrishna

Comments (7 posted)

Miscellaneous

The CNCF 2018 annual report

For those wondering what the Cloud Native Computing Foundation is up to, its 2018 annual report [PDF] is now out. "KubeCon + CloudNativeCon has expanded from its start with 500 attendees in 2015 to become one of the largest and most successful open source conferences ever. The KubeCon + CloudNativeCon North America event in Seattle, held December 10-13, 2018, was our biggest yet and was sold out several weeks ahead of time with 8,000 attendees."

Comments (none posted)

FSF Annual Report now available

The Free Software Foundation has announced that its annual report for fiscal year 2017 is available. "The Annual Report reviews the FSF's activities, accomplishments, and financial picture from October 1, 2016 to September 30, 2017. It is the result of a full external financial audit, along with a focused study of program results. It examines the impact of the FSF's events, programs, and activities, including the annual LibrePlanet conference, the Respects Your Freedom (RYF) hardware certification program, and the fight against Digital Restrictions Management (DRM)."

Comments (2 posted)

The OpenStack Foundation's 2018 annual report

The OpenStack Foundation has issued its 2018 annual report. "2018 was a productive year for the OpenStack community. A total of 1,972 contributors approved more than 65,000 changes and published two major releases of all components, code named Queens and Rocky. The component project teams completed work on themes related to integrating with other OpenStack components, other OpenStack Foundation Open Infrastructure Projects, and projects from adjacent communities. They also worked on stability, performance, and usability improvements. In addition to that component-specific work, the community continued to expand our OpenStack-wide goals process, using a few smaller topics to refine the goal selection process and understand how best to complete initiatives on such a large scale."

Comments (none posted)

Page editor: Jake Edge

Announcements

Newsletters

Distributions and system administration

Development

Meeting minutes

Miscellaneous

Calls for Presentations

CFP Deadlines: February 14, 2019 to April 15, 2019

The following listing of CFP deadlines is taken from the LWN.net CFP Calendar.

Deadline     Event dates            Event                                                 Location
February 14  April 13               OpenCamp Bratislava                                   Bratislava, Slovakia
February 15  April 30-May 2         Linux Storage, Filesystem & Memory Management Summit  San Juan, Puerto Rico
February 16  May 29-May 30          DevOps Days Toronto 2019                              Toronto, Canada
February 22  June 24-June 26        KubeCon + CloudNativeCon + Open Source Summit         Shanghai, China
February 28  May 16                 Open Source Camp | #3 Ansible                         Berlin, Germany
February 28  June 4-June 6          sambaXP 2019                                          Goettingen, Germany
March 4      June 14-June 15        Hong Kong Open Source Conference 2019                 Hong Kong, Hong Kong
March 10     April 6                Pi and More 11½                                       Krefeld, Germany
March 17     June 4-June 5          UK OpenMP Users' Conference                           Edinburgh, UK
March 19     October 13-October 15  All Things Open                                       Raleigh, NC, USA
March 24     July 17-July 19        Automotive Linux Summit                               Tokyo, Japan
March 24     July 17-July 19        Open Source Summit                                    Tokyo, Japan
March 25     May 3-May 4            PyDays Vienna 2019                                    Vienna, Austria
April 1      June 3-June 4          PyCon Israel 2019                                     Ramat Gan, Israel
April 2      August 21-August 23    Open Source Summit North America                      San Diego, CA, USA
April 2      August 21-August 23    Embedded Linux Conference NA                          San Diego, CA, USA
April 12     July 9-July 11         Xen Project Developer and Design Summit               Chicago, IL, USA

If the CFP deadline for your event does not appear here, please tell us about it.

Upcoming Events

Netdev 0x13 Conference update

Early-bird registration for Netdev 0x13 ends February 20. The conference will take place March 20-22 at the Grandium Prague in Prague, Czech Republic. Twenty-two new talks have been accepted; click below for more details. The schedule should be available soon.

Full Story (comments: none)

Debian Bug Squashing Parties

Debian Bug Squashing Parties are open to all, from experienced Debian developers to newcomers. If you are interested in helping to improve Debian, and thereby all of its many derivatives, attend a BSP near you; registration is required. See the BSP wiki page for the list of upcoming parties and more information.

Comments (none posted)

Events: February 14, 2019 to April 15, 2019

The following event listing is taken from the LWN.net Calendar.

Date(s) | Event | Location
February 25-February 26 | Vault Linux Storage and Filesystems Conference | Boston, MA, USA
February 26-February 28 | 16th USENIX Symposium on Networked Systems Design and Implementation | Boston, MA, USA
March 5-March 6 | Automotive Grade Linux Member Meeting | Tokyo, Japan
March 7-March 10 | SCALE 17x | Pasadena, CA, USA
March 7-March 8 | Hyperledger Bootcamp | Hong Kong
March 9 | DPDK Summit Bangalore 2019 | Bangalore, India
March 10 | OpenEmbedded Summit | Pasadena, CA, USA
March 12-March 14 | Open Source Leadership Summit | Half Moon Bay, CA, USA
March 14 | Icinga Camp Berlin | Berlin, Germany
March 14 | pgDay Israel 2019 | Tel Aviv, Israel
March 14-March 17 | FOSSASIA | Singapore, Singapore
March 19-March 21 | PGConf APAC | Singapore, Singapore
March 20 | Open Source Roundtable at Game Developers Conference | San Francisco, CA, USA
March 20-March 22 | Netdev 0x13 | Prague, Czech Republic
March 21 | gRPC Conf | Sunnyvale, CA, USA
March 23 | Kubernetes Day | Bengaluru, India
March 23-March 24 | LibrePlanet | Cambridge, MA, USA
March 23-March 26 | Linux Audio Conference | San Francisco, CA, USA
March 29-March 31 | curl up 2019 | Prague, Czech Republic
April 1-April 4 | ‹Programming› 2019 | Genova, Italy
April 1-April 5 | SUSECON 2019 | Nashville, TN, USA
April 2-April 4 | Cloud Foundry Summit | Philadelphia, PA, USA
April 3-April 5 | Open Networking Summit | San Jose, CA, USA
April 5-April 7 | Devuan Conference | Amsterdam, The Netherlands
April 5-April 6 | openSUSE Summit | Nashville, TN, USA
April 6 | Pi and More 11½ | Krefeld, Germany
April 7-April 10 | FOSS North | Gothenburg, Sweden
April 10-April 12 | DjangoCon Europe | Copenhagen, Denmark
April 13 | OpenCamp Bratislava | Bratislava, Slovakia
April 13-April 17 | ACM SIGPLAN/SIGOPS Conference on Virtual Execution Environments | Providence, RI, USA

If your event does not appear here, please tell us about it.

Security updates

Alert summary February 7, 2019 to February 13, 2019

Dist. ID Release Package Date
Arch Linux ASA-201902-8 aubio 2019-02-12
Arch Linux ASA-201902-3 chromium 2019-02-11
Arch Linux ASA-201902-9 curl 2019-02-12
Arch Linux ASA-201902-1 dovecot 2019-02-11
Arch Linux ASA-201902-2 firefox 2019-02-11
Arch Linux ASA-201902-13 lib32-curl 2019-02-12
Arch Linux ASA-201902-12 lib32-libcurl-compat 2019-02-12
Arch Linux ASA-201902-11 lib32-libcurl-gnutls 2019-02-12
Arch Linux ASA-201902-10 libcurl-gnutls 2019-02-12
Arch Linux ASA-201902-7 libu2f-host 2019-02-12
Arch Linux ASA-201902-14 python-django 2019-02-13
Arch Linux ASA-201902-15 python2-django 2019-02-13
Arch Linux ASA-201902-5 rdesktop 2019-02-12
Arch Linux ASA-201902-6 runc 2019-02-12
Arch Linux ASA-201902-4 spice 2019-02-11
CentOS CESA-2019:0229 C7 ghostscript 2019-02-09
CentOS CESA-2019:0231 C7 spice 2019-02-09
CentOS CESA-2019:0232 C6 spice-server 2019-02-08
CentOS CESA-2019:0269 C6 thunderbird 2019-02-08
CentOS CESA-2019:0270 C7 thunderbird 2019-02-09
Debian DLA-1671-1 LTS coturn 2019-02-11
Debian DLA-1672-1 LTS curl 2019-02-11
Debian DSA-4386-1 stable curl 2019-02-06
Debian DLA-1667-1 LTS dovecot 2019-02-07
Debian DSA-4390-1 stable flatpak 2019-02-12
Debian DLA-1666-1 LTS freerdp 2019-02-09
Debian DLA-1670-1 LTS ghostscript 2019-02-11
Debian DLA-1664-1 LTS golang 2019-02-06
Debian DLA-1668-1 LTS libarchive 2019-02-07
Debian DLA-1669-1 LTS libreoffice 2019-02-08
Debian DLA-1662-1 LTS libthrift-java 2019-02-06
Debian DSA-4389-1 stable libu2f-host 2019-02-11
Debian DSA-4388-1 stable mosquitto 2019-02-10
Debian DLA-1661-1 LTS mumble 2019-02-06
Debian DLA-1665-1 LTS netmask 2019-02-06
Debian DSA-4387-1 stable openssh 2019-02-09
Debian DLA-1674-1 LTS php5 2019-02-12
Debian DLA-1663-1 LTS python3.4 2019-02-07
Debian DLA-1660-1 LTS rssh 2019-02-05
Debian DSA-4377-2 stable rssh 2019-02-11
Debian DLA-1673-1 LTS wordpress 2019-02-12
Fedora FEDORA-2019-7eb8c71fe8 F28 buildbot 2019-02-11
Fedora FEDORA-2019-7e722314f3 F29 buildbot 2019-02-11
Fedora FEDORA-2019-43489941ff F29 curl 2019-02-12
Fedora FEDORA-2019-fd9345f44a F29 flatpak 2019-02-13
Fedora FEDORA-2019-077a3f23c0 F29 ghostscript 2019-02-12
Fedora FEDORA-2019-0c1be924df F28 gvfs 2019-02-08
Fedora FEDORA-2019-8f2b27efce F29 java-1.8.0-openjdk 2019-02-11
Fedora FEDORA-2019-96ac060af3 F28 java-11-openjdk 2019-02-09
Fedora FEDORA-2019-a8f918005c F28 mingw-libconfuse 2019-02-12
Fedora FEDORA-2019-9ccbbfeae1 F29 mingw-libconfuse 2019-02-12
Fedora FEDORA-2019-7696bb57ca F28 pdns-recursor 2019-02-13
Fedora FEDORA-2019-f44f095639 F29 pdns-recursor 2019-02-13
Fedora FEDORA-2019-6cfd17b03d F28 phpMyAdmin 2019-02-09
Fedora FEDORA-2019-09ae31d880 F29 phpMyAdmin 2019-02-09
Fedora FEDORA-2019-40f4af0687 F28 poppler 2019-02-08
Fedora FEDORA-2019-333a7aa511 F28 radvd 2019-02-12
Fedora FEDORA-2019-5146cd34e2 F28 rdesktop 2019-02-13
Fedora FEDORA-2019-ac70292cfc F29 rdesktop 2019-02-13
Fedora FEDORA-2019-f1626b52e9 F28 slurm 2019-02-09
Fedora FEDORA-2019-e66b1889ec F29 slurm 2019-02-09
Fedora FEDORA-2019-a095a16c47 F29 spice 2019-02-09
Fedora FEDORA-2018-b89746cb9b F29 tomcat 2019-02-13
Fedora FEDORA-2018-51ce232320 F29 xerces-c27 2019-02-13
Mageia MGASA-2019-0063 6 cinnamon 2019-02-13
Mageia MGASA-2019-0076 6 docker 2019-02-13
Mageia MGASA-2019-0072 6 dovecot 2019-02-13
Mageia MGASA-2019-0066 6 golang 2019-02-13
Mageia MGASA-2019-0071 6 java-1.8.0-openjdk 2019-02-13
Mageia MGASA-2019-0062 6 jruby 2019-02-13
Mageia MGASA-2019-0074 6 libarchive 2019-02-13
Mageia MGASA-2019-0073 6 libgd 2019-02-13
Mageia MGASA-2019-0075 6 libtiff 2019-02-13
Mageia MGASA-2019-0070 6 libvncserver 2019-02-13
Mageia MGASA-2019-0068 6 opencontainers-runc 2019-02-13
Mageia MGASA-2019-0067 6 openssh 2019-02-13
Mageia MGASA-2019-0065 6 python-marshmallow 2019-02-13
Mageia MGASA-2019-0069 6 thunderbird 2019-02-13
Mageia MGASA-2019-0064 6 transfig 2019-02-13
openSUSE openSUSE-SU-2019:0161-1 15.0 java-11-openjdk 2019-02-12
openSUSE openSUSE-SU-2019:0152-1 15.0 openssl-1_1 2019-02-08
openSUSE openSUSE-SU-2019:0143-1 15.0 python-python-gnupg 2019-02-07
openSUSE openSUSE-SU-2019:0163-1 15.0 python-slixmpp 2019-02-13
openSUSE openSUSE-SU-2019:0159-1 42.3 python-urllib3 2019-02-12
openSUSE openSUSE-SU-2019:0155-1 15.0 python3 2019-02-09
openSUSE openSUSE-SU-2019:0154-1 42.3 rsyslog 2019-02-08
openSUSE openSUSE-SU-2019:0153-1 15.0 subversion 2019-02-08
Oracle ELSA-2019-4533 OL5 kernel 2019-02-07
Oracle ELSA-2019-4532 OL6 kernel 2019-02-07
Oracle ELSA-2019-4533 OL6 kernel 2019-02-07
Oracle ELSA-2019-4531 OL6 kernel 2019-02-06
Oracle ELSA-2019-4531 OL7 kernel 2019-02-06
Oracle ELSA-2019-4532 OL7 kernel 2019-02-07
Oracle ELSA-2019-4541 OL7 kernel 2019-02-12
Oracle ELSA-2019-4541 OL7 kernel 2019-02-12
Red Hat RHSA-2019:0309-01 EL6 chromium-browser 2019-02-11
Red Hat RHSA-2019:0304-01 EL7 docker 2019-02-11
Red Hat RHSA-2019:0324-01 EL7.4 kernel 2019-02-12
Red Hat RHSA-2019:0342-01 EL7 redhat-virtualization-host 2019-02-13
Red Hat RHSA-2019:0303-01 EL7 runc 2019-02-11
Scientific Linux SLSA-2019:0270-1 SL7 thunderbird 2019-02-06
Slackware SSA:2019-037-01 curl 2019-02-06
Slackware SSA:2019-043-01 lxc 2019-02-12
Slackware SSA:2019-038-01 php 2019-02-07
SUSE SUSE-SU-2019:0313-1 OS7 SLE12 LibVNCServer 2019-02-09
SUSE SUSE-SU-2019:13952-1 SLE11 LibVNCServer 2019-02-12
SUSE SUSE-SU-2019:0283-1 SLE15 LibVNCServer 2019-02-07
SUSE SUSE-SU-2019:0341-1 MP3.2 SMS3.2 2019-02-13
SUSE SUSE-SU-2019:13947-1 SLE11 avahi 2019-02-08
SUSE SUSE-SU-2019:0285-1 SLE15 avahi 2019-02-07
SUSE SUSE-SU-2019:0339-1 SLE12 curl 2019-02-13
SUSE SUSE-SU-2019:0286-1 SLE15 docker 2019-02-07
SUSE SUSE-SU-2019:0330-1 etcd 2019-02-12
SUSE SUSE-SU-2019:0336-1 OS7 SLE12 firefox 2019-02-12
SUSE SUSE-SU-2019:0273-1 SLE15 firefox 2019-02-06
SUSE SUSE-SU-2019:13948-1 SLE11 fuse 2019-02-08
SUSE SUSE-SU-2019:0320-1 SLE12 kernel 2019-02-11
SUSE SUSE-SU-2019:0284-1 SLE12 libunwind 2019-02-07
SUSE SUSE-SU-2019:0334-1 SLE15 nginx 2019-02-12
SUSE SUSE-SU-2019:0333-1 SLE12 php7 2019-02-12
SUSE SUSE-SU-2019:0271-1 SLE15 python 2019-02-06
SUSE SUSE-SU-2019:13951-1 SLE11 python-numpy 2019-02-12
SUSE SUSE-SU-2019:0272-1 SLE15 rmt-server 2019-02-06
SUSE SUSE-SU-2019:0337-1 runc 2019-02-12
SUSE SUSE-SU-2019:13943-1 SLE11 spice 2019-02-07
SUSE SUSE-SU-2019:0338-1 SLE15 thunderbird 2019-02-12
Ubuntu USN-3882-1 14.04 16.04 18.04 18.10 curl 2019-02-06
Ubuntu USN-3888-1 18.04 18.10 gvfs 2019-02-12
Ubuntu USN-3884-1 14.04 16.04 18.04 18.10 libarchive 2019-02-07
Ubuntu USN-3883-1 14.04 16.04 libreoffice 2019-02-06
Ubuntu USN-3871-5 14.04 16.04 18.04 linux-azure 2019-02-07
Ubuntu USN-3878-2 18.10 linux-azure 2019-02-07
Ubuntu USN-3885-1 14.04 16.04 18.04 18.10 openssh 2019-02-07
Ubuntu USN-3886-1 14.04 16.04 18.04 18.10 poppler 2019-02-11
Ubuntu USN-3890-1 16.04 18.04 18.10 python-django 2019-02-13
Ubuntu USN-3887-1 14.04 16.04 18.04 18.10 snapd 2019-02-12
Ubuntu USN-3889-1 18.04 18.10 webkit2gtk 2019-02-13
Full Story (comments: none)

Kernel patches of interest

Kernel releases

Linus Torvalds Linux 5.0-rc6 Feb 10
Greg KH Linux 4.20.8 Feb 12
Greg KH Linux 4.19.21 Feb 12
Greg KH Linux 4.14.99 Feb 12
Greg KH Linux 4.9.156 Feb 12
Greg KH Linux 4.4.174 Feb 08
Ben Hutchings Linux 3.16.63 Feb 11

Architecture-specific

Christophe Leroy powerpc/32: Implement fast syscall entry Feb 08

Build system

Enrico Weigelt, metux IT consult RFC: refactoring the deb build Feb 07

Core kernel

Patrick Bellasi Add utilization clamping support Feb 08
Jens Axboe io_uring IO interface Feb 07
John Ogness printk: new implementation Feb 12
Frederic Weisbecker softirq: Per vector masking v2 Feb 12
Roman Gushchin freezer for cgroup v2 Feb 12

Device drivers

Matti Vaittinen support ROHM BD70528 PMIC Feb 07
Claudiu.Beznea@microchip.com add support for SAM9X60 pin controller Feb 07
Jorge Ramirez-Ortiz USB SS PHY for Qualcomm's QCS404 Feb 07
Martin Blumenstingl Amlogic Meson6/8/8b/8m2 SoC RTC driver Feb 09
Tomasz Duszynski add support for PMS7003 PM sensor Feb 09
Linus Walleij DRM driver for ST-Ericsson MCDE Feb 07
Oded Gabbay Habana Labs kernel driver Feb 11
Yangbo Lu Add ENETC PTP clock driver Feb 12
Bartosz Golaszewski mfd: add support for max77650 PMIC Feb 13

Device-driver infrastructure

Documentation

Filesystems and block layer

Memory management

Networking

Security-related

Virtualization and containers

Miscellaneous

Lucas De Marchi kmod 26 Feb 07

Page editor: Rebecca Sobol


Copyright © 2019, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds