LWN.net Weekly Edition for August 29, 2019
Welcome to the LWN.net Weekly Edition for August 29, 2019
This edition contains the following feature content:
- Open-source voting for San Francisco: creating an open-source voting system for a major city is not as easy as one might think.
- Ask the TAB: a question-and-answer session with members of the Linux Foundation's Technical Advisory Board.
- Inline encryption for filesystems: kernel support for storage devices with encryption offload features.
- Restricting path name lookup with openat2(): a new system call to make file operations more secure.
- Linker limitations on 32-bit architectures: the challenges of building huge programs on small machines.
- Debating the Cryptographic Autonomy License: the Open Source Initiative ponders a license designed to prevent the locking-up of user data.
This week's edition also includes these inner pages:
- Brief items: Brief news items from throughout the community.
- Announcements: Newsletters, conferences, security updates, patches, and more.
Please enjoy this week's edition, and, as always, thank you for supporting LWN.net.
Open-source voting for San Francisco
To open-source fans, the lure of open-source voting systems is surely strong. So a talk at the 2019 Open Source Summit North America on a project for open-source voting in San Francisco sounded promising; it is a city with lots of technical know-how among its inhabitants. While progress has definitely been made—though at an almost glacial pace—there is little likelihood that the city will be voting using open-source software in the near future. The talk by Tony Wasserman was certainly interesting, however, and provided a look at the intricacies of elections and voting that make it clear the problem is not as easy as it might at first appear.
Wasserman is a professor of software management practice at Carnegie Mellon Silicon Valley and a San Francisco resident; he was asked to serve on an advisory committee on open-source voting for the city. San Francisco is about 11×11 km, with around 800,000 people; roughly 500,000 of those are registered voters and nearly 350,000 turned out for the November 2018 election. He said that 70% participation by registered voters is a pretty good turnout for the US.
There are two different organizations within the city government that handle elections: the elections commission and the elections department. The commission is tasked with making the policies and plans for elections, while the department actually implements them, runs the elections, and reports the results. The elections department also handles "problem" ballots and registrations; as part of that, it stores 20 years of paper ballots underneath city hall, which he found astonishing.
The goal of the project is to develop the country's first open-source voting system for political elections, which could potentially have a broad impact if it is successful, both locally and nationally. There are other justifications for it as well, including providing transparency for voters and the expectation of saving money. There are only three (down from four due to a merger) companies that sell election systems in the US; they are not cheap and moving to open-source would provide freedom from being locked into those vendors.
History
The history of the idea goes back a long time, so far, in fact, that a San Francisco civil grand jury investigated the whole process. It produced a report [PDF] that described everything that had been done up through 2018. The idea was first raised in 2005, but it was not until 2008 that the board of supervisors (similar to a city council) established a voting system task force. The board of supervisors is under the mayor and all of the governing bodies are for both the city and county of San Francisco, which are governed together.
[Tony Wasserman]
The voting system task force eventually found that open source is a good idea for election software—in 2011. The board of supervisors evidently mulled that for a while and in 2014 passed a unanimous resolution to move forward. The elections commission also voted unanimously, in 2015, to proceed. That made the papers and other media, Wasserman said, including on the front page of the San Francisco Examiner.
In 2016, the mayor budgeted $300,000 for an initial study and in 2017, the elections commission formed the open source voting technical advisory committee (OSVTAC), of which Wasserman is one of five members. Meanwhile, the city had awarded a contract to a consulting company (Slalom) to produce a plan for moving forward.
In 2018, the report from Slalom was delivered; "let's just say that it wasn't very strong", he said. OSVTAC recommended against following the suggestions from Slalom and the commission agreed. So the commission passed a new resolution [PDF] to continue to proceed on the open-source voting path. "You get the idea; it is taking a while". Around this time, the civil grand jury released its report, which was "pretty comprehensive" and "actually a good report", he said; it laid out all of the reasons why things were not moving faster.
In 2018, $1.7 million was allocated for ongoing project work over the next two years; there is hope that either the mayor's office or the state of California will add to that pile. Instead of putting the work out to bid, the Department of Technology (DT) took the work on and hired a project manager to oversee it. The first task was to do an "extensive study" of all the other open-source voting efforts it could find.
At the end of July, the first community meeting was held to start to include the public in the process. Various advocacy groups attended, along with the governmental representatives and interested citizens; it was facilitated by an outside mediator. The discussion was about the goals of the effort and is the start of building a community around it, he said. "And that's where we are."
Why so long?
He said that attendees may have noticed that he did not say anything about elections being held using open-source software. That has not happened and is not going to happen anytime soon. "Why is that?"
There are a number of reasons, but the first is that elections are complex. He gave a rundown of the different ways that San Francisco is partitioned for electoral reasons. Those regions are for electing officials at different levels of government: 11 supervisor districts, two US congressional districts, two state assembly districts (that are, naturally, different from the congressional districts), as well as various other districts (rapid transit, air quality, school, ...), each with its own unique boundaries. His relatively close neighbor could have a very different ballot from his.
Beyond that, candidate names often have to be randomized to combat the "first in list" effect where those without strong opinions will simply pick the first candidate. San Francisco uses ranked-choice voting, which needs to be accommodated. There are lots of different ways to vote, as well, including early voting, vote-by-mail, absentee voting, and voting on election day in the local precinct. Provisions for disabled voters need to be made. Ballots need to be available in four languages city-wide (English, Chinese, Spanish, and Tagalog) and in two more (Korean and Vietnamese) in certain precincts. And on and on.
Once the voting has been done, the votes need to be counted. There are provisional, challenged, and improperly marked ballots to deal with. There needs to be support for poll-watchers to ensure the fairness of the election. In order to verify the results, 1% of the paper ballots are randomly sampled a few days after the election; so the reported results from some precincts will be compared to the hand-tallied ballots. All of that needs to be managed and administered.
Voting systems are under attack as well, as we have seen in some recent elections. Sadly, all of the voting equipment he knows of runs Windows 7. Perhaps those machines could be upgraded to Windows 10 and run the programs under some kind of emulation, but that is not happening. That is particularly worrisome when our elections are a global target, he said. The current voting systems are also not transparent; they are expensive and lead to vendor lock-in. There are also few or no provisions for updating the software that runs on these systems. So it is not just Windows 7 running on them, but unpatched Windows 7 in many cases.
There are also government procurement issues; proposals have to be put out for competitive bidding among interested vendors. This is part of why he believes that the DT took on the work to develop the open-source voting system rather than have it go out to bid; it saved a round of proposals and bids. There are also various government transparency laws ("sunshine laws") that can complicate things at times. For example, OSVTAC meetings must be announced 72 hours in advance and OSVTAC members cannot privately email each other about anything related to the OSVTAC. If he wants to send something out to the rest of the committee, he has to send it to the chair, who will distribute it to the members.
More than just technical
So there are some complex technical hurdles, but those are not the main problem areas. He referred to PESTLE, which relates all of the factors that make up the "context of software development": Political, Economic, Social, Technical, Legal, and Environmental. Dealing with the technical side is less difficult than some of the other factors; one of the committee members fairly easily put together a program that could scan ballots, associate the marks with names, and tabulate the results, for example.
There is a lot of citizen concern about voting and elections in the US. Some of it is about voter fraud (votes from those who are ineligible), which "barely exists as far as anyone else can tell", Wasserman said. Other concerns are about the accuracy of the vote tabulation and vendor fraud.
The current voting systems are generally internet-enabled, not for full internet access but for the machines to be able to talk to the vendor's servers. One of the main vendors is Diebold, which has been identified in the past as having tampered with machines and results. Paper ballots are an important component to address the concerns. He noted that the US state of Georgia has been ordered by a judge to run its next election using paper ballots because its voting machines were deemed unsafe.
While open-source advocates see open-source voting as a way to provide transparency and to build trust in the system, others do not necessarily see it that way. There are some who are concerned that open source would allow attackers to change the code in the systems mid-stream. Voting systems are frequently challenged; any open-source effort will similarly be challenged. Part of the problem is that many people simply do not understand what open source is and means.
San Francisco is not the first to try to come up with an open-source voting system. He listed half a dozen different efforts to create those kinds of systems; some of those systems have been used, but not for political elections. The Los Angeles County Voting System for All People (VSAP) has been approved by the California secretary of state (something that is required for any voting system used in the state), but has not been used for a real election. It is claimed to be open source, he said, but is not. It is built on open-source components and will run on Linux, but the code belongs to Los Angeles; "you can't get it".
The Open Source Election Technology Institute (OSET) is creating ElectOS, which is meant to be an open election system, but it hasn't built anything yet. It has a "wonderful board of advisors", but there's not much happening, at least yet, he said. Helios is used by the Open Source Initiative (OSI) for its elections, so it works, but it hasn't been used for anything on a large scale. The same is true for the others, such as Cornell's Condorcet Internet Voting Service (CIVS), which is the oldest effort he has found.
Wasserman had just returned from Amsterdam where he has a friend who is on an open-source voting committee as well. He was introduced to a system used in the elections in the Netherlands, which are also complicated in part because there are more than two dozen parties participating. The detailed specifications are available (in Dutch) as is the source code, but it is not open source. You can look at the code, but as far as he can tell, it is owned by the German firm that wrote it.
Meanwhile, the OSVTAC has made a set of recommendations on what the system should look like. It includes hardware and software requirements for voting, tabulating, securing, and overseeing San Francisco elections. There are a number of other organizations that are supportive of the effort (EFF, OSI, GitHub, Common Cause, ...). There is an effort to raise more money to continue the project past the end of the year.
He is hopeful that something comes of the project. Its funding is not assured, though there are several potential sources. It is all being done in the open and those interested can follow on the GitHub pages and on the city's project page. He finds it to be an interesting problem and has learned a great deal about elections in the process. Given the other high-powered organizations that are also interested, hopefully more will happen with open-source voting even if the effort in San Francisco does not result in a working system.
Open source and voting would seem, at first blush at least, to be obvious partners. It is more than a little concerning that figuring out the will of the people in a democracy is subject to the whims (and/or political leanings) of a small number of proprietary vendors—in the US at least. As Wasserman described, however, there are a lot of moving parts that have to be handled. Like tax reporting, accounting, and other areas where open-source software tends to fall short, election management may just not be an itch that enough of us are truly interested in scratching. Hopefully that situation is improving.
[I would like to thank LWN's travel sponsor, the Linux Foundation, for travel assistance to attend Open Source Summit in San Diego.]
Ask the TAB
The Linux Foundation (LF) Technical Advisory Board (TAB) is meant to give the kernel community some representation within the foundation. In a "birds of a feather" (BoF) session at the 2019 Open Source Summit North America, four TAB members participated in an "Ask the TAB" session. Laura Abbott organized the BoF and Tim Bird, Greg Kroah-Hartman, and Steven Rostedt joined in as well. In the session, the history behind the TAB, its role, and some of its activities over the years were described.
Abbott started things off by noting that she is one of the newest members of the TAB, so she asked Kroah-Hartman, who is the longest-serving member, to give some of the history. At the time the Open Source Development Labs (OSDL) merged with the Free Standards Group in 2007 (which he characterized as "when we overthrew OSDL") to form the LF, the kernel community was quite unhappy with how OSDL had been run. The kernel developers made a list of six or eight demands and the LF met five of them. One of those was to form an advisory board to help the organization with various technical problems it might encounter.
A perfect example of how the TAB works is in the handling of UEFI secure boot for Linux, he said. It was identified by various Linux vendors as a looming problem and the TAB did a lot of work to help resolve it so that Linux would run on those systems. Now Linux has a representative on the UEFI committee to help ensure those kinds of problems do not arise again. That also is an example of the "two-way street" between the LF and the TAB; either side can bring problems to the other in order to resolve them.
Bird is also a fairly new member of the TAB; he has been on it for two years and is up for election at the Kernel Summit, which will be held with the Linux Plumbers Conference (LPC) in September. The TAB has ten members, each of whom serves a two-year term; each year five seats are up for election. The election is done as an in-person vote at the summit—something that is seen as suboptimal at this point. He said that Abbott and the TAB are working on changing that, something that Abbott would return to later in the BoF.
TAB issues
Some of the things that have come up during Bird's tenure are providing an advisory role to LPC, which is something the TAB has always done, and the adoption of the kernel's code of conduct (CoC), which was the biggest issue raised while he has been on the TAB. Before the committee to handle CoC reports was formed, he worked with the other TAB members to field the few reports that came in. Kroah-Hartman said there have been just four CoC reports over the last year.
From the audience, Frank Rowand asked about the two-way street; are there examples of how that works over the last year? Bird said that an interesting one surrounded Huawei and its placement on a US government list regarding export restrictions. People in the community were concerned about how that might impact open-source projects, especially in terms of contributions from Huawei. The LF put out a "nice, clarifying statement" on that based on input from its legal team. That is the kind of resource that the LF can provide to the community, he said.
Rostedt continued in that vein, noting that legal advice is one of the areas where the LF is quite helpful to the community. It has lawyers that the TAB can speak with; those lawyers are not the TAB's legal representation, of course, but they do provide some background and advice on various issues. That is another two-way street as both Rostedt and Kroah-Hartman said that they had spoken at various LF legal events to help fill in the lawyers on how kernel development works.
Another thing that the TAB has been involved in is providing guidance on how to deal with the GPL-violation accusations by Patrick McHardy, Kroah-Hartman said. McHardy is a former kernel developer who is using GPL violations to get in the door at companies in Germany, but the real dispute turns into a contract issue when companies sign his cease-and-desist letter. The TAB advised the LF on the issue; the advice was turned into a "nice document" that any company that needs it can get. McHardy can be successfully fought off, Kroah-Hartman said, so it is important for companies to understand how to do so. "It is not a GPL violation that is being addressed" by McHardy's efforts, he said.
TAB value
Abbott asked Rostedt, who has been elected to the TAB twice, what value he sees in it. His idea is to work on getting more interaction between certain subsystems in Linux, Rostedt said. In particular, he would like to see better cooperation between systemd and other parts of the community. He wants to help mend the relationships that have broken down between the kernel and systemd developers, for example. He apologized to the systemd developers for his role in that and they are receptive to a reconciliation. His role on the TAB and as part of the LPC committee provides resources that he can use to help mend that rift.
He also noted that there has been an effort of late to make the TAB more transparent. He and Bird have been working on publishing minutes from the TAB meetings, for example. The TAB is meant to be reactive to the problems that are brought to it, Kroah-Hartman said. For example, the kernel community's position statement on kernel modules was borne out of community concerns about closed-source modules. The TAB led the effort in putting that together.
Bird said that the LF has a lot of resources and that it is his impression that the community doesn't use those resources as efficiently as it could. The TAB would like to hear more suggestions of things that the LF could do to help the ecosystem and the community. There was a need to mentor companies about how to use and contribute to Linux at one time, Kroah-Hartman said; that was something he did a lot of using LF resources, but there isn't so much of a need for that anymore. He also pointed out that the chair of the TAB is a member of the LF board of directors, so they can directly influence LF policy that way. Bird said that he thinks there is more the LF could potentially do to help attract more members to the community, so he would like to hear suggestions on things that could be done.
Elections
Abbott then moved into the planned changes for TAB elections. At one time, it made sense to do them in-person at the Kernel Summit, but those days are behind us. Getting everyone together for a vote is difficult, so the plan is to eventually move to electronic voting so that more people can vote. The TAB is taking it a step at a time; this year, there will be an experiment with electronic voting using the same pool of people (those attending LPC). She joked that "naturally, the first thing you do when you get elected to a board is to try to change the voting".
The voting procedure was a known problem within the TAB, Bird said, but the issue came to a head as part of the work on the CoC. Multiple people mentioned the in-person voting problem as part of their suggestions when commenting on the CoC situation; that spurred more action, he said. Rostedt added that this year voting is still only open to attendees, but that the TAB will be trying electronic voting; there is a backup plan for using the regular balloting if anything goes awry.
An audience member was concerned that his employer (and likely others) didn't want its employees to contribute to open source. In fact, even doing so in his off time might be grounds for firing. Beyond that, he is unsure how to even get started with the kernel or open source. Bird suggested that anyone with Linux experience has no trouble finding another job if they wish to. Those who need a place to start will find the kernel to be a "target-rich environment", he said with a chuckle. There are lots of areas that look great on the surface but once you dig into the details, you will find lots of places to help—and not just with code.
You need to figure out "what itch you are trying to scratch", Abbott said. There are lots of things that being a "kernel developer" might mean: do you want to learn more about hardware, about operating systems in general, or perhaps about some part of your day job? She always suggests that people try to find a way to tie kernel development to their job, by learning about kernel tracing, for example. That knowledge might come in handy when they encounter a problem that might be a kernel bug.
Matthew Wilcox wondered about the makeup of the TAB; it consists only of kernel developers, but there are other open-source technical communities (e.g. GNOME or Dronecode) that might need their own representation. He asked: are there any plans to open up the TAB to other communities? Kroah-Hartman said that the makeup of the TAB is part of the LF charter; there was no foundation for the kernel community until the LF was formed. "We love those other communities", he said, but they have their own foundations and organizations.
Succession
Another audience member noted the resignation of Guido van Rossum from his role in the Python community and the process of finding a new governance model. He wondered if there were any thoughts of doing something like that for Linux and what role the TAB might play in that process. There was some, perhaps nervous, laughter in the room—succession planning is a perennial topic, but the TAB is not normally thought of in those terms.
Abbott reiterated that the TAB is simply a technical advisory committee for the LF. It has no sway over the governance of the kernel itself and, in particular, it is not a steering committee like the one that now governs Python. Rostedt pointed out that the TAB has no actual power to do anything and, in reality, its influence derives from the influence of its members, not the body itself.
There has already been something of a trial run in terms of the succession question, Bird said. When Linus Torvalds stepped back during the CoC incident, Kroah-Hartman ably filled in. The kernel community's processes are "really good", Bird said, so he is not worried about what will happen when Torvalds retires or otherwise steps away. The question of what happens if Torvalds "gets hit by a bus" used to come up frequently, but doesn't really get raised much anymore.
Kroah-Hartman said that there were some infrastructure glitches that needed to be worked out when he stepped in. He thought that he had the permissions needed to update the mainline tree, but that turned out not to be the case; now there are three separate people who have those permissions. But succession and backup maintainers are not only an issue for the mainline; to that end, that morning Sasha Levin had put out a 5.2 stable kernel release candidate (and eventually put out the 5.2.10 release). But, if all of those who have the credentials to update Torvalds's tree are on the same bus, "you guys are on your own and I won't care", Kroah-Hartman said to laughter.
Over the last few years, people have been thinking more about maintainers, succession, and topics of that nature, Abbott said. One part of that is the work that TAB member Dan Williams is doing on a kernel maintainer's guide in consultation with all of the maintainers, Kroah-Hartman said. Abbott added that the kernel community is looking at efforts like KernelCI to automate its processes so that it can scale without necessarily adding more maintainers. KernelCI is an example of the TAB pointing the LF at a project that needed funding, Kroah-Hartman said. The LF agreed and found companies to fund what is now an LF project for building and testing many different kernel versions on lots of different hardware.
With that, time ran out on the BoF. It provided a nice look at some of what goes on with a fairly high-profile organization in our community that often seems somewhat opaque. It is clear that the TAB is trying to change that appearance and, to the extent it can, get the word out into the community about what it is doing—and what it would like to do in the future.
[I would like to thank LWN's travel sponsor, the Linux Foundation, for travel assistance to attend Open Source Summit in San Diego.]
Inline encryption for filesystems
The encryption of data at rest is increasingly mandatory in a wide range of settings from mobile devices to data centers. Linux has supported encryption at both the filesystem and block-storage layers for some time, but that support comes with a cost: either the CPU must encrypt and decrypt vast amounts of data moving to and from persistent storage or it must orchestrate offloading that work to a separate device. It was thus only a matter of time before ways were found to offload that overhead to the storage hardware itself. Satya Tangirala's inline encryption patch set is intended to enable the kernel to take advantage of this hardware in a general manner.
The Linux storage stack consists of numerous layers, so it is unsurprising that an inline encryption implementation will require changes at a number of those layers. Hardware-offloaded encryption will clearly require support from the device driver to work, but the knowledge of which encryption keys to use typically comes from the filesystem running at the top of the stack. Communicating that information from the top to the bottom requires a certain amount of plumbing.
Low-level support
At the lowest level, device drivers that support inline encryption will have to provide an operations structure like this:
    struct keyslot_mgmt_ll_ops {
        int (*keyslot_program)(void *ll_priv_data, const u8 *key,
                               enum blk_crypto_mode_num crypto_mode,
                               unsigned int data_unit_size,
                               unsigned int slot);
        int (*keyslot_evict)(void *ll_priv_data, const u8 *key,
                             enum blk_crypto_mode_num crypto_mode,
                             unsigned int data_unit_size,
                             unsigned int slot);
        bool (*crypto_mode_supported)(void *ll_priv_data,
                                      enum blk_crypto_mode_num crypto_mode,
                                      unsigned int data_unit_size);
        int (*keyslot_find)(void *ll_priv_data, const u8 *key,
                            enum blk_crypto_mode_num crypto_mode,
                            unsigned int data_unit_size);
    };
The interface is designed around hardware that provides a fixed number of "key slots", each of which can hold a cryptographic context — the algorithm to be used, associated parameters (the block size, for example), and the key. These functions exist to program a crypto context into the hardware, remove a crypto context from the hardware, determine whether a specific context is supported, and to determine which slot, if any, is already programmed for a given context. Drivers will register this structure and provide the total number of slots available.
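To make that concrete, here is a minimal sketch of how a driver might fill in this structure. Only the callback signatures come from the patch set as quoted above; everything prefixed with mydev_, the slot count, and the AES-256-XTS mode constant are assumptions made for illustration.

    /*
     * Hypothetical driver code: the mydev_hw_*() helpers, the slot count,
     * and the crypto-mode constant are stand-ins, not code from the
     * patch set; only the callback signatures are taken from it.
     */
    #define MYDEV_NUM_KEYSLOTS 32

    static int mydev_keyslot_program(void *ll_priv_data, const u8 *key,
                                     enum blk_crypto_mode_num crypto_mode,
                                     unsigned int data_unit_size,
                                     unsigned int slot)
    {
        /* Write the key and its parameters into hardware slot "slot". */
        return mydev_hw_write_slot(ll_priv_data, slot, key, crypto_mode,
                                   data_unit_size);
    }

    static int mydev_keyslot_evict(void *ll_priv_data, const u8 *key,
                                   enum blk_crypto_mode_num crypto_mode,
                                   unsigned int data_unit_size,
                                   unsigned int slot)
    {
        /* Clear the slot so the key is no longer present in hardware. */
        return mydev_hw_clear_slot(ll_priv_data, slot);
    }

    static bool mydev_crypto_mode_supported(void *ll_priv_data,
                                            enum blk_crypto_mode_num crypto_mode,
                                            unsigned int data_unit_size)
    {
        /* This imaginary device handles only AES-256-XTS on 4KB units. */
        return crypto_mode == BLK_ENCRYPTION_MODE_AES_256_XTS &&
               data_unit_size == 4096;
    }

    static int mydev_keyslot_find(void *ll_priv_data, const u8 *key,
                                  enum blk_crypto_mode_num crypto_mode,
                                  unsigned int data_unit_size)
    {
        /* Return the slot already programmed with this context, if any. */
        return mydev_hw_find_slot(ll_priv_data, key, crypto_mode,
                                  data_unit_size);
    }

    static const struct keyslot_mgmt_ll_ops mydev_ksm_ops = {
        .keyslot_program       = mydev_keyslot_program,
        .keyslot_evict         = mydev_keyslot_evict,
        .crypto_mode_supported = mydev_crypto_mode_supported,
        .keyslot_find          = mydev_keyslot_find,
    };

The driver would then register mydev_ksm_ops, along with the number of slots the hardware provides, with the key-slot manager described next.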
Since the number of key slots provided by the hardware is fixed, it's entirely possible that there will not be enough to handle all of the I/O requests to a given device over a short period of time. That may not be a problem for a device that is occupied by a single, encrypted filesystem, but the situation could be different if there are a lot of filesystems present, or if per-directory encryption (as supported by the ext4 filesystem) is in use. So the kernel needs a way to arbitrate access to key slots, preferably one that limits the amount of (possibly expensive) slot reprogramming required.
That arbitration begins with a "key-slot manager" abstraction. It keeps track of which slots are available at any given time and, for those that are busy, how many references (held by in-flight I/O operations) exist to each. The key-slot manager can be used to allocate slots and program encryption contexts. In normal usage, many I/O requests will use the same context, so the key-slot manager tries to keep the most frequently used keys available to the hardware and avoids programming the same key into multiple slots.
Moving up a layer, the patch set adds a new bio_crypt_ctx structure to the BIO structure (which represents an I/O request). When filesystem code originates a request, it can add the relevant context information, and the BIO structure will carry that information through to the block device executing the request. Adding this information requires changes in other parts of the block layer; for example, two adjacent requests cannot be merged if they are using different encryption contexts.
blk-crypto
Adding key information to the BIO structure isn't quite enough, though. There is still the issue of slot management and actually programming crypto contexts into the hardware; while filesystems could arguably handle this work, it almost certainly makes sense to handle this common task within the block layer itself. But there is a further complication: the device to which a filesystem submits an I/O request may not be the device that ultimately handles that request. For example, a filesystem may be based on a RAID "device" created by the device-mapper layer; code at the filesystem level will be entirely unaware of the real physical devices that have been assembled into the virtual device it sees. So filesystems cannot directly handle details like key-slot management.
The solution to this problem is the blk-crypto subsystem, which handles the details of managing key slots and getting the key information through to the right device drivers. Whenever a BIO is submitted for execution, the blk-crypto code reacts to the presence of a crypto context by allocating a slot from the key-slot manager associated with the (immediate) target device. That implies, for reasons that we'll return to shortly, that subsystems like the device mapper must implement a simple key-slot manager, even though they perform no encryption themselves.
Layered devices like the device mapper will make any necessary modifications to BIOs they receive (including possibly splitting them into multiple BIOs), then turn around and resubmit the resulting BIOs to the lower-level devices. When this happens, the blk-crypto layer will release the key slot allocated at the intermediate level and allocate a new slot for the lower-level device, propagating the key material downward. This procedure will happen as many times as necessary until the BIO reaches a device that actually performs I/O.
The blk-crypto code has one other useful feature: if the target device does not actually support the type of encryption requested by the filesystem (or any encryption at all), blk-crypto will fall back to using the kernel's crypto layer instead. So filesystems can request encrypted data storage without any knowledge of whether the underlying hardware supports inline encryption or not. This functionality may eventually replace the fscrypt code currently used by ext4 and F2FS to implement encryption.
The crypto-layer fallback explains why intermediate block layers must provide their own key-slot managers. The block layer never knows whether a given BIO will be resubmitted to a lower-level device later on, so it must assume that every submission is the final one. As a result, if encryption is being performed by the kernel's crypto layer, that must happen before submitting the BIO to the device; there will be no opportunity to do so afterward. The lack of a key-slot manager for any given device is a signal to the block layer that inline encryption is not supported, so the crypto-layer fallback will be performed in that case; thereafter there is no point in using inline encryption for that request even if it turns out to be available. Adding a key-slot manager to layers like the device mapper is, among other things, a way of preventing the block layer from falling back too soon.
The patch set includes a low-level implementation for Universal Flash Storage devices and upper-level support for the F2FS filesystem. As one might expect, this work is being driven by Android use cases; encrypted filesystems are important for Android devices, and offloading the actual encryption to the hardware should save both CPU time and power.
Given the potential value of this feature, it is not surprising that there have been a few attempts to add support to the kernel. The patch set mentions three of them: a hardware-specific solution that lacks generality, one that is implemented within the crypto layer (seen as the wrong place, since it's not a general cryptographic primitive), and one that requires the device mapper to function. This implementation is an attempt to avoid those problems and provide a more general solution. It is in its fourth revision and appears to be getting close to being ready to head upstream.
Restricting path name lookup with openat2()
Looking up a file given a path name seems like a straightforward task, but it turns out to be one of the more complex things the kernel does. Things get more complicated if one is trying to write robust (user-space) code that can do the right thing with paths that are controlled by a potentially hostile user. Attempts to make the open() and openat() system calls safer date back at least to an attempt to add O_BENEATH in 2014, but numerous problems remain. Aleksa Sarai, who has been working in this area for a while, has now concluded that a new version of openat(), naturally called openat2(), is required to truly solve this problem.
The immediate purpose behind openat2() is to allow a program to safely open a path that is possibly under the control of an attacker; in practice, that means placing restrictions on how the lookup process will be carried out. Past attempts have centered around adding new flags to openat(), but there are a couple of problems with that approach: openat() doesn't check for unknown flags, and the number of available bits for new flags is not large. The failure to check for unknown flags is a well-known antipattern. A program using a path-restricting flag needs to know whether the requested behavior is understood by the kernel or not; the alternative is to accept security vulnerabilities on kernels that do not implement those flags.
Changing openat() to return errors when passed unknown flags is not an option; that would almost certainly break existing code. So the only alternative is to create a new system call that does check its flags; thus openat2():
    struct open_how {
        __u32 flags;
        union {
            __u16 mode;
            __u16 upgrade_mask;
        };
        __u16 resolve;
        __u64 reserved[7]; /* must be zeroed */
    };

    int openat2(int dfd, const char *filename, const struct open_how *how);
(Note that, as always, this is how the system call is presented at the user-space boundary; C libraries could expose something different.)
Rather than add another argument for the new flags, Sarai opted to move all of the behavior-affecting information into a separate structure. Among other things, this means that, unlike open() and openat(), the new system call always has the same number of arguments. The flags field holds the same O_ flags understood by open() and openat(), but unknown flags in this field will result in an error. Similarly, the mode field contains the permission bits used when a new file is created.
The resolve field, instead, contains a new set of flags that control how pathname lookup will be performed. The flags implemented in the patch set are described below (a brief usage sketch follows the list):
- RESOLVE_NO_XDEV
- The lookup process will not be allowed to cross any mount points (including bind mounts). In other words, the file to be opened must reside on the same mount as the dfd descriptor (or the current working directory if dfd is passed as AT_FDCWD).
- RESOLVE_NO_MAGICLINKS
- There are relatively few types of objects that can be found in a filesystem directory; they include regular files, directories, devices, FIFOs, and symbolic links. With this option, Sarai is in essence acknowledging that there is another type that has been lurking in plain sight for years: the "magic link". Examples include the links found in /proc/PID/fd directories; they are implemented by the kernel and have some possibly surprising properties.
The presence of this flag will prevent a path lookup operation from traversing through one of these magic links, thus blocking (for example) attempts to escape from a container via a /proc entry for an open file descriptor.
- RESOLVE_NO_SYMLINKS
- Blocks any traversal through symbolic links, including magic links. This option differs from the O_NOFOLLOW flag in that it prevents following a link at any point in the lookup process, while O_NOFOLLOW only applies to the last component in the path.
- RESOLVE_BENEATH
- The lookup process is contained within the directory tree below the starting point; attempts to use components like "../" to escape that tree will generate an error.
- RESOLVE_IN_ROOT
- This flag causes the lookup process to behave as if a chroot() to the starting point had been performed. Absolute paths will begin relative to the starting directory, and "../" will not proceed above that directory. Some work has been done to make RESOLVE_IN_ROOT free of some of the race conditions that plague chroot(); see this changelog for some details.
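As a usage sketch, the program below calls the proposed system call directly via syscall(). openat2() had no C-library wrapper and no assigned system-call number at the time, so the __NR_openat2 and RESOLVE_BENEATH values here are placeholders, not a stable ABI; the open_how layout is copied from the patch as shown above.

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <linux/types.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    /* Placeholder values; neither was part of a stable ABI at the time. */
    #ifndef __NR_openat2
    #define __NR_openat2    437
    #endif
    #ifndef RESOLVE_BENEATH
    #define RESOLVE_BENEATH 0x08
    #endif

    struct open_how {           /* layout from the patch set */
        __u32 flags;
        union {
            __u16 mode;
            __u16 upgrade_mask;
        };
        __u16 resolve;
        __u64 reserved[7];      /* must be zeroed */
    };

    int main(void)
    {
        struct open_how how;
        int dirfd, fd;

        memset(&how, 0, sizeof(how));
        how.flags = O_RDONLY;
        how.resolve = RESOLVE_BENEATH;  /* no "../" escapes from dirfd */

        dirfd = open("/srv/untrusted", O_PATH | O_DIRECTORY);
        if (dirfd < 0) {
            perror("open");
            return 1;
        }
        /* A path like "../../etc/passwd" would fail rather than escape. */
        fd = syscall(__NR_openat2, dirfd, "user/config.txt", &how);
        if (fd < 0)
            perror("openat2");
        close(dirfd);
        return fd < 0;
    }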
The issue of magic links appears a few times in this patch set. The RESOLVE_NO_MAGICLINKS option prevents them from being traversed (or opened), but it turns out that there are numerous cases where it is indeed useful to open such links. The problem is that allowing that to happen can be dangerous; the runc container breakout vulnerability reported in February was the result of hostile code using the /proc/PID/exe link to open the runc binary for write access. That can make for leaky containers, to put it mildly.
The first patch in the series changes the semantics of open() (in all variants) when magic links are involved: while previously the permission bits on the magic link itself were ignored, now they are taken into account. Then, for example, the permissions for /proc/PID/exe are changed in the kernel to disallow opening for write access, blocking the runc breakout attack.
This change enables one other feature provided by openat2() (and, indeed, openat() as well). There is a new flag (O_EMPTYPATH) that causes the path argument to be ignored; instead, the call will simply reopen the file descriptor passed as the dfd argument using the new mode provided. A common use case is to reopen a file descriptor initially opened with O_PATH to gain access to file contents or metadata — access which is otherwise not possible with an O_PATH descriptor (see the man page for details on O_PATH). Programs have typically done this sort of reopening using a path into /proc/PID/fd, but O_EMPTYPATH will work even if /proc is not available.
Finally, the new API also allows the placement of limits on how a file descriptor created with O_PATH can be "upgraded" as described above. When openat2() is used to open an O_PATH file descriptor, the upgrade_mask field in the how structure can be used to limit the access that can be obtained by reopening in the future. Specifying UPGRADE_NOREAD will prevent reopening with read access, and UPGRADE_NOWRITE will prevent the acquisition of write access. This restriction can limit the damage should a hostile program obtain access to an O_PATH file descriptor.
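Continuing with the definitions from the previous sketch, the O_PATH "upgrade" restriction might be used as follows; the O_EMPTYPATH and UPGRADE_NOWRITE values, like the function name, are hypothetical placeholders for flags that existed only in the patch set.

    /* Assumes the open_how definition and placeholder macros shown earlier. */
    #define O_EMPTYPATH     040000000   /* placeholder value */
    #define UPGRADE_NOWRITE 0x02        /* placeholder value */

    int reopen_readonly_example(void)
    {
        struct open_how how;
        int pfd, fd;

        /* Get an O_PATH descriptor that may never be upgraded for writing. */
        memset(&how, 0, sizeof(how));
        how.flags = O_PATH;
        how.upgrade_mask = UPGRADE_NOWRITE;
        pfd = syscall(__NR_openat2, AT_FDCWD, "data/log.txt", &how);
        if (pfd < 0)
            return -1;

        /* Reopen the descriptor itself for reading; the path is ignored. */
        memset(&how, 0, sizeof(how));
        how.flags = O_EMPTYPATH | O_RDONLY;
        fd = syscall(__NR_openat2, pfd, "", &how);

        /* Requesting write access here should fail, thanks to the mask. */
        close(pfd);
        return fd;
    }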
Previous versions of this patch set have generated a fair amount of discussion. The relative quiet after this posting may reflect the fact that most of the concerns raised have been addressed over time — or possibly just that the people who would comment on it are attending the Linux Security Summit and falling behind on email. Either way, there is a clear demand for the ability to restrict how file path names are traversed so, sooner or later, some version of this patch set seems likely to find its way into the mainline.
Linker limitations on 32-bit architectures
Before a program can be run, it needs to be built. It's a well-known fact that modern software, in general, consumes more runtime resources than before, sometimes to the point of forcing users to upgrade their computers. But it also consumes more resources at build time, forcing operators of the distributions' build farms to invest in new hardware, with faster CPUs and more memory. For 32-bit architectures, however, there exists a fundamental limit on the amount of virtual memory, which is never going to disappear. That is leading to some problems for distributions trying to build packages for those architectures.
Indeed, with only 32-bit addresses, there is no way for a process to refer to more than 4GB of memory. For some architectures, the limit is even less — for example, MIPS hardware without the Enhanced Virtual Addressing (EVA) extensions is hardwired to make the upper 2GB of the virtual address space accessible from the kernel or supervisor mode only. When linking large programs or libraries from object files, ld sometimes needs more than 2GB and therefore fails.
Building on 32-bit machines
This class of problems was recently brought up by Aurelien Jarno in a cross-post to multiple Debian mailing lists. Of course, he is not the first person to hit this; a good example would be in a Gentoo forum post about webkit-gtk from 2012. Let's follow through this example, even though there are other packages (Jarno mentions Firefox, Glasgow Haskell Compiler, Rust, and scientific software) where a 32-bit build has been reported to run out of virtual address space.
If one attempts to build webkit-gtk now, first the C++ source files are grouped and concatenated, producing so-called "unified sources". This is done in order to reduce compilation time. See this blog post by WebKit developer Michael Catanzaro for more details. Interestingly, a similar approach had been suggested for WebKit earlier, but it targeted reducing memory requirements at the linking stage.
Unified sources are compiled into the object code files by g++, which knows how to invoke the actual compiler (cc1plus) and assembler (as). At this stage, the most memory-hungry program is cc1plus, which usually consumes ~500-800MB of RAM, but there are some especially complex files that cause it to take more than 1.5GB. Therefore, on a system with 32-bit physical addresses, building webkit-gtk with parallelism greater than -j2 or maybe -j3 might well fail. In other words, due to insufficient memory, on "real" 32-bit systems one may not be able to take full advantage of parallel builds. Thankfully, on both 32-bit x86 and MIPS, CPU architecture extensions exist that allow the system as a whole (but not an individual process) to use more than 4GB of physical RAM.
It is a policy of Debian, Fedora, and lots of other distributions (but, notably, not Gentoo) to generate debugging information while compiling C/C++ sources. This debugging information is useful when interpreting crash dumps — it allows seeing which line of the source code a failing instruction corresponds to, observing how exactly local and global variables are located in memory, and determining how to interpret various data structures that the program creates. If the -g compiler option is used to produce debugging information, the resulting object files have sizes between several KB and 10MB. For example, WebDriverService.cpp.o takes 2.1MB of disk space. Of that, only 208KB are left if debugging symbols are discarded. That's right — 90% of the size of this particular file is taken by debugging information.
When compilation has created all of the object files that go into an executable or library, ld is invoked to link them together. It combines code, data, and debug information from object files into a single ELF file and resolves cross-references. During the final link of libwebkit2gtk-4.0.so.37, 3GB of object files are passed to ld at once. This doesn't fail on x86 with Physical Address Extension (PAE), but comes quite close. According to the report produced with -Wl,--stats in LDFLAGS:
    ld.gold:
      total space allocated by malloc: 626630656 bytes
      total bytes mapped for read: 3050048838
      output file size: 1727812432 bytes
Adding the first two figures gives about 3.6GB of address space used by ld.gold. There are some legitimate questions here regarding resource usage. First, what is all this memory (or address space) used for? Second, can ld use less memory?
Well, there are two implementations of ld in the binutils package: ld.bfd (old, but still the default) and ld.gold (newer). By default, both implementations use mmap() to read object files as inputs. That is, ld associates a region of its virtual address space with every object file passed to it so that memory operations on these regions are redirected by the kernel, through page faults, to the files. This programming model is convenient and, in theory, reduces the physical memory requirements. That is because the kernel can, at its discretion, repurpose physical memory that keeps the cached content of object files for something else and then transparently reread bytes from the file again when ld needs them. Also, ld.gold, by default, uses mmap() to write to the output file. The downside is that there must be sufficient address space to assign to the input and output files or the mmap() operation will fail.
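The address-space cost is easy to see in miniature. The sketch below is ordinary user-space C, not actual binutils code; it reserves a read-only mapping for one input file the way ld does for each of its inputs, and each such mapping occupies a contiguous region of virtual address space for the duration of the link whether or not the pages are ever resident in RAM.

    #include <fcntl.h>
    #include <stddef.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    /* Map one input file read-only, as a linker would; NULL on error. */
    static const void *map_input(const char *path, size_t *len)
    {
        struct stat st;
        const void *p;
        int fd;

        fd = open(path, O_RDONLY);
        if (fd < 0)
            return NULL;
        if (fstat(fd, &st) < 0) {
            close(fd);
            return NULL;
        }
        *len = st.st_size;

        /*
         * This reserves st_size bytes of virtual address space. On a
         * 32-bit system it starts failing with ENOMEM once the 2-4GB
         * budget is exhausted, even with plenty of free physical RAM.
         */
        p = mmap(NULL, *len, PROT_READ, MAP_PRIVATE, fd, 0);
        close(fd);      /* the mapping outlives the descriptor */
        return p == MAP_FAILED ? NULL : p;
    }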
Both ld.bfd and ld.gold offer options that are intended to conserve memory, but actually help only sometimes:
    $ ld.bfd --help
      --no-keep-memory            Use less memory and more disk I/O
      --reduce-memory-overheads   Reduce memory overheads, possibly taking much longer
      --hash-size=<NUMBER>        Set default hash table size close to <NUMBER>

    $ ld.gold --help
      --no-keep-files-mapped      Release mapped files after each pass
      --no-map-whole-files        Map only relevant file parts to memory (default)
      --no-mmap-output-file       Do not map the output file for writing
When a distribution maintainer sees a build failure on a 32-bit architecture, and one of the options listed above helps, a patch gets quickly submitted upstream so that the option is used by default in the affected application or library. For webkit-gtk, that is indeed the case. In other words, all the low-hanging fruit is already collected.
A package without full debugging information is still better than no package at all. So, reluctantly, maintainers are forced to compress debugging information, reduce its level of detail, or, in extreme cases, completely remove it. Still, in Debian, some packages are excluded from some architectures because they cannot be built due to running out of virtual memory. "We are at a point were we should probably look for a real solution instead of relying on tricks", Jarno concluded.
Cross-compiling
A reader with some background in embedded systems might ask: why does Debian attempt to build packages on weak and limited 32-bit systems at all? Wouldn't it be better to cross-compile them on a much larger amd64 machine? Indeed, that's what the Yocto Project and Buildroot do, and they have no problem producing a webkit-gtk package with full debugging information for any architecture. However, cross-compilation (in the sense used by these projects) invariably means the inability to run compiled code. So packages that test properties of the target system by building and running small test programs are no longer able to do so; they are forced to rely on external hints, hard-coded information, or (often pessimistic) assumptions. Moreover, the maintainer is no longer able to run the test suite as a part of the package build. Such limitations are unacceptable for the Debian release team, and therefore native builds are required. And running tests under QEMU-based emulation is out of the question, too, because of possible flaws in the emulator itself.
Jarno proposed to use Debian's existing support for "multiarch", which is the ability to co-install packages for different architectures. For example, on a system with an amd64 kernel, both i386 and amd64 packages can be installed and both types of programs will run. The same applies to 32-bit and 64-bit MIPS variants. It would be, therefore, enough to produce cross-compilers that are 64-bit executables but target the corresponding 32-bit architecture. For gcc it is as easy as creating a wrapper that adds the -m32 switch. However, gcc is not the only compiler — there are also compilers for Haskell, Rust, OCaml, and other languages not supported by gcc; they may need to be provided as well.
If one were to implement this plan, a question arises: how should those compilers be made available? That is, should a package that demands extraordinary amounts of memory at build time explicitly depend on the cross-compiler or should it be provided implicitly? Also, there is an obstacle on RISC-V: a 64-bit CPU does not implement the 32-bit instructions, so one cannot assume that just-compiled 32-bit programs will run. The same is sometimes true for Arm: there is arm64 hardware that does not implement 32-bit instructions. However, this obstacle is not really relevant, because Debian does not support 32-bit RISC-V systems anyway, and arm64 hardware that supports 32-bit instructions is readily available, is not going away, and thus can be used on the build systems. So, for the cases where this approach can work, Ivo De Decker (a member of the Debian release team) expressed readiness to reevaluate the policy.
On the other hand, Sam Hartman, the current Debian Project Leader, admitted that he is not convinced that it is a good idea to run the build tools (including compilers and linkers) on 32-bit systems natively. Of course, this viewpoint is at odds with the release team position, and the problem of getting accurate build-time test results still needs to be solved. One solution mentioned was a distcc setup, but this turned out to be a false lead, because with distcc, linking is done locally.
There was also a subthread started by Luke Kenneth Casson Leighton, who brought up ecological concerns — namely, that perfectly good hardware is forced to go to landfills because of the resource-wasteful algorithms now employed in the toolchain. In Leighton's view, the toolchain must be fixed so that it can deal with files bigger than the available virtual memory, especially because it was able to do so decades ago.
Right now, 32-bit systems are still useful and can run general-purpose operating systems. Only a few big packages are directly affected by the limitation of available address space at build time. But ultimately, we have to agree with Hartman's conclusion: "if no one does the work, then we will lose the 32-bit architectures" — at least for Debian.
Debating the Cryptographic Autonomy License
If one were to ask a group of free-software developers whether the community needs more software licenses, the majority of the group would almost certainly answer "no". We have the licenses we need to express a range of views of software freedom, and adding to the list just tends to create confusion and compatibility issues. That does not stop people from writing new licenses, though. While much of the "innovation" in software licenses in recent times is focused on giving copyright holders more control over how others use their code (while still being able to brand it "open source"), there are exceptions. The proposed "Cryptographic Autonomy License" (CAL) is one of those; its purpose is to give users of CAL-licensed code control over the data that is processed with that code.
Van Lindberg first went to the Open Source Initiative's license-review list with the CAL in April. At that point, it ran into a number of objections and was rejected by the OSI's license review committee. The license has since been revised to address (most of) the objections that were raised; a third version with minor tweaks was posted on August 22.
At its core, the CAL is a copyleft license; distribution of derived products is only allowed if the corresponding source is made available under the same (or a compatible) license. The third version added one exception: release of source can be delayed for up to 90 days if required to comply with a security embargo. There is, however, another set of conditions that sit at the core of the CAL and are its reason for existence; they are worth quoting in full:
In addition to providing each Recipient the opportunity to have Access to the Source Code, You cannot use the permissions given under this License to interfere with a Recipient’s ability to fully use an independent copy of the Work generated from the Source Code You provide with the Recipient’s own User Data.
"User Data" means any data that is either an input to or an output from the Work, or is necessary for the functioning of the system, in which the Recipient has an existing ownership interest, an existing right to possess, or has been generated for or uniquely assigned to the Recipient.
4.2.1. No Withholding User Data
Throughout any period in which You exercise any of the permissions granted to You under this License, You must also provide to any Recipient to whom you provide services via the Work, a no-charge copy, provided in a commonly used electronic form, of the Recipient’s User Data in your possession, to the extent that such User Data is available to You for use in conjunction with the Work.
4.2.2. No Technical Measures that Limit Access
You may not, by means of cryptographic controls, control of encryption keys, seeds, hashes, or any other technological protection measures, or any other method, limit a Recipient’s ability to access any functionality present in Recipient’s independent copy of the Work, or to deny a Recipient full control of the Recipient’s User Data.
The license also includes a prohibition against the use of contractual measures to prevent users from exercising the rights described above. The intent of all this language is relatively straightforward: if you are a user of an application (perhaps hosted on the net somewhere), you have the right to extract your data from that application to use with your own modified version of the code. Control of data is not to be used to lock users into a de-facto proprietary system.
This language turned out to be quite controversial on the license-review list. The most vocal opponent is undoubtedly Bruce Perens, who expressed concerns about what others may try to put into future licenses based on the example of the CAL:
So, I am really most concerned with the precedent that this license would establish, which would lead to more severe terms being applied to the data processed. And thus I suggest that OSI not accept any such terms at all.
He went on to argue that the prohibition against locking up users' data should be seen as a field-of-use restriction, which would make the CAL non-compliant with section 6 of the Open Source Definition. Perens compared the CAL to the "badgeware" licenses that the OSI accepted back in 2007. The OSI took a lot of criticism for that decision; Perens suggested that the same thing could happen this time around.
Josh Berkus, instead, characterized Perens's reasoning as "a 'slippery slope' argument" and argued that the CAL should be accepted. He also rejected the field-of-use argument as being "a huge stretch" and concluded by saying: "While it is not the OSI's job to challenge Amazon's business model, it is not our job to protect it either".
That was not the end of the discussion; indeed, it went on at such length that list moderator Simon Phipps eventually asked the participants to reduce the rate at which they were posting. Little new ground was covered in all that volume, though.
In the end, the OSI will have to decide whether mandating access to the data managed by a given program is an appropriate feature of an open-source license. The badgeware decision was not seen at the time (or later) as being one of the OSI's finest moments; the decision on the CAL carries similar risks. Badgeware licenses, though, sought to increase the rights of copyright holders, while the CAL gives rights to software users. Whether that difference will tip the balance in this decision remains to be seen. The actual decision, it seems, is likely to be made at a face-to-face meeting in mid-November.
Brief items
Security
Backdoor code found in 11 Ruby libraries (ZDNet)
ZDNet reports on the discovery of a set of malicious libraries in the RubyGems repository. "The individual behind this scheme was active for more than a month, and their actions were not detected. Things changed when the hacker managed to gain access to the RubyGems account of one of the rest-client developers, which he used to push four malicious versions of rest-client on RubyGems. However, by targeting such a high-profile project that has over 113 million total downloads on RubyGems, the hacker also brought a lot of light to their operation, which was taken down within a few hours after users first spotted the malicious code in the rest-client library."
Backdoors in Webmin
Anybody using Webmin, a web-based system-administration tool, will want to update now, as it turns out that the system has been backdoored for over a year. "At some time in April 2018, the Webmin development build server was exploited and a vulnerability added to the password_change.cgi script. Because the timestamp on the file was set back, it did not show up in any Git diffs. This was included in the Webmin 1.890 release."
Security quote of the week
The thing is, that distinction between military and consumer products largely doesn't exist. All of those "consumer products" Barr wants access to are used by government officials -- heads of state, legislators, judges, military commanders and everyone else -- worldwide. They're used by election officials, police at all levels, nuclear power plant operators, CEOs and human rights activists. They're critical to national security as well as personal security.
Kernel development
Kernel release status
The current development kernel is 5.3-rc6, released on August 25, the 28th anniversary of the initial Linux announcement. "I’m doing a (free) operating system (more than just a hobby) for 486 AT clones and a lot of other hardware. This has been brewing for the last 28 years, and is still not done. I’d like any feedback on any bugs introduced this release (or older bugs too, for that matter)."
Stable updates: 5.2.10, 4.19.68, 4.14.140, 4.9.190, and 4.4.190 were also released on August 25. The 5.2.11, 4.19.69, and 4.14.141 updates are in the review process; they are due on August 29.
Microsoft to put exFAT support into the kernel
Linux support for the exFAT filesystem has had a long and troubled history; Microsoft has long asserted patents in this area that have prevented that code from being merged into the kernel. Microsoft has just changed its tune, announcing that upstreaming exFAT is now OK: "It’s important to us that the Linux community can make use of exFAT included in the Linux kernel with confidence. To this end, we will be making Microsoft’s technical specification for exFAT publicly available to facilitate development of conformant, interoperable implementations. We also support the eventual inclusion of a Linux kernel with exFAT support in a future revision of the Open Invention Network’s Linux System Definition, where, once accepted, the code will benefit from the defensive patent commitments of OIN’s 3040+ members and licensees."
Quote of the week
It should be that reviewers get credit for finding bugs in patches (no credit for complaining about checkpatch issues, that is its own reward).
Distributions
Distribution quote of the week
If we are going to mandate something - or even, if we are going to change our current stance (which seems to be that this is a "nice to have"), then a discussion of the upsides and downsides - particularly, with a practical focus - is necessary.
Development
GNOME Foundation launches Coding Education Challenge
The GNOME Foundation, with support from Endless, has announced the Coding Education Challenge, a competition intended to attract projects that offer educators and students new and innovative ideas for teaching coding with free and open-source software. "Anyone is encouraged to submit a proposal. Individuals and teams will be judged through three tiers of competition. Twenty winners will be selected from an open call for ideas and will each receive $6,500 in prize money. Those winners will progress to a proof of concept round and build a working prototype. Five winners from that round will be awarded $25,000 and progress to the final round where they will turn the prototype into an end product. The final winner will receive a prize of $100,000 and the second placed product a prize of $25,000."
Rust is the future of systems programming, C is the new Assembly (Packt)
Packt has published a lengthy writeup of a talk by Josh Triplett on work being done to advance the Rust language for system-level programming. "Systems programming often involves low-level manipulations and requires low-level details of the processors such as privileged instructions. For this, Rust supports using inline Assembly via the 'asm!' macro. However, it is only present in the nightly compiler and not yet stabilized. Triplett in a collaboration with other Rust developers is writing a proposal to introduce more robust syntax for inline Assembly."
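As a minimal sketch of the unstable feature the talk refers to: on a 2019-era nightly toolchain targeting x86-64, the asm! macro used LLVM-style constraint strings, as in this hypothetical time-stamp-counter reader. The syntax is nightly-only and subject to change, so treat this as illustrative rather than definitive:

```rust
// Nightly Rust only: asm! is behind the unstable "asm" feature gate.
#![feature(asm)]

// Read the x86 time-stamp counter with RDTSC, which places the result
// in EDX:EAX; LLVM-style output constraints bind those registers to
// Rust variables. This is x86-specific and will not build elsewhere.
fn rdtsc() -> u64 {
    let lo: u32;
    let hi: u32;
    unsafe {
        asm!("rdtsc" : "={eax}"(lo), "={edx}"(hi) : : : "volatile");
    }
    ((hi as u64) << 32) | (lo as u64)
}

fn main() {
    println!("time-stamp counter: {}", rdtsc());
}
```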
Page editor: Jake Edge
Announcements
Newsletters
Distributions and system administration
- DistroWatch Weekly (August 26)
- Lunar Linux Weekly News (August 23)
- Ubuntu Weekly Newsletter (August 24)
Development
- Emacs News (August 26)
- LLVM Weekly (August 26)
- LXC/LXD/LXCFS Weekly Status (August 27)
- OCaml Weekly News (August 27)
- Perl Weekly (August 26)
- Weekly changes in and around Perl 6 (August 26)
- PostgreSQL Weekly News (August 25)
- Python Weekly Newsletter (August 22)
- Ruby Weekly News (August 22)
- This Week in Rust (August 27)
Meeting minutes
- Fedora FESCO meeting minutes (August 26)
Calls for Presentations
CFP Deadlines: August 29, 2019 to October 28, 2019
The following listing of CFP deadlines is taken from the LWN.net CFP Calendar.
Deadline | Event Dates | Event | Location |
---|---|---|---|
August 30 | October 10 to October 11 | PyCon ZA 2019 | Johannesburg, South Africa |
August 31 | November 12 to November 15 | Linux Application Summit | Barcelona, Spain |
August 31 | November 7 | Open Source Camp #4 Foreman | Nuremberg, Germany |
September 2 | October 3 | PostgresqlConf South Africa | Cape Town, South Africa |
September 9 | October 26 | FOSS XR Conference | Amsterdam, The Netherlands |
September 13 | October 26 | The Future is Open | Rochester, NY, USA |
September 14 | October 31 | Real-time Linux Summit (at ELCE 2019) | Lyon, France |
September 16 | November 4 to November 8 | Tcl 2019 - 26th Annual Tcl/Tk Conference | Houston, TX, USA |
September 22 | December 9 | Open FinTech Forum | New York, NY, USA |
October 2 | November 2 to November 3 | OpenFest 2019 | Sofia, Bulgaria |
October 4 | November 2 to November 3 | Konference OpenAlt | Brno, Czechia |
October 6 | November 16 to November 17 | Capitole du Libre 2019 | Toulouse, France |
October 25 | March 23 to March 27 | [CANCELED] SUSECON 2020 | was Dublin; online only |
If the CFP deadline for your event does not appear here, please tell us about it.
Upcoming Events
Events: August 29, 2019 to October 28, 2019
The following event listing is taken from the LWN.net Calendar.
Date(s) | Event | Location |
---|---|---|
August 26 to August 30 | FOSS4G 2019 | Bucharest, Romania |
September 2 to September 6 | EuroSciPy 2019 | Bilbao, Spain |
September 3 to September 6 | Open Source Firmware Conference 2019 | Sunnyvale & Menlo Park, CA, USA |
September 4 to September 6 | GNU Hackers Meet | Madrid, Spain |
September 6 to September 9 | PyColorado | Denver, CO, USA |
September 7 to September 13 | Akademy 2019 | Milan, Italy |
September 9 to September 11 | Linux Plumbers Conference | Lisbon, Portugal |
September 9 to September 11 | HashiConf 2019 | Seattle, WA, USA |
September 11 to September 13 | PostgresOpen | Orlando, FL, USA |
September 11 to September 12 | Cloud Foundry Summit Europe | The Hague, The Netherlands |
September 11 to September 13 | LibreOffice Conference | Almeria, Spain |
September 12 | Kernel Maintainer Summit | Lisbon, Portugal |
September 12 to September 15 | GNU Tools Cauldron | Montreal, Canada |
September 16 to September 20 | GNU Radio Conference 2019 | Huntsville, AL, USA |
September 19 to September 20 | DPDK Userspace 2019 | Bordeaux, France |
September 20 to September 22 | All Systems Go! 2019 | Berlin, Germany |
September 21 to September 23 | State of the Map | Heidelberg, Germany |
September 21 | Central Pennsylvania Open Source Conference | Lancaster, PA, USA |
September 23 to September 24 | Lustre Administrator and Developer Workshop 2019 | Paris, France |
September 23 to September 25 | Open Networking Summit | Antwerp, Belgium |
September 25 to September 27 | Kernel Recipes | Paris, France |
October 1 to October 4 | Alpine Linux Persistence and Storage Summit | Lizumerhütte, Tyrol, Austria |
October 2 | OpenInfra Days Italy 2019 | Milan, Italy |
October 2 to October 4 | X.Org Developer's Conference | Montreal, Canada |
October 3 | OpenInfra Days Italy 2019 | Rome, Italy |
October 4 to October 5 | Linux Piter 2019 | Saint Petersburg, Russia |
October 5 to October 6 | openSUSE.Asia Summit | Bali, Indonesia |
October 7 to October 9 | LinuxConf [ZA] | Johannesburg, South Africa |
October 8 to October 9 | PostgresConf South Africa 2019 | Johannesburg, South Africa |
October 8 | Introduction to Coccinelle | Munich, Germany |
October 8 to October 11 | Zeek Week (formerly BroCon) | Seattle, WA, USA |
October 10 to October 11 | PyCon ZA 2019 | Johannesburg, South Africa |
October 10 to October 13 | Ubucon Europe 2019 | Sintra, Portugal |
October 12 to October 13 | WineConf 2019 | Toronto, Canada |
October 13 to October 15 | All Things Open | Raleigh, NC, USA |
October 15 to October 18 | PostgreSQL Conference Europe | Milan, Italy |
October 19 | GNU/LinuxDay in Vorarlberg | Dornbirn, Austria |
October 19 to October 20 | OGGCAMP 19 | Manchester, UK |
October 22 to October 23 | LLVM Dev Mtg | San Jose, CA, USA |
October 25 to October 27 | NixCon | Brno, Czech Republic |
October 26 | FOSS XR Conference | Amsterdam, The Netherlands |
October 26 | The Future is Open | Rochester, NY, USA |
October 26 to October 27 | Sonoj Convention | Cologne, Germany |
October 27 to October 30 | 27th ACM Symposium on Operating Systems Principles | Huntsville, Ontario, Canada |
If your event does not appear here, please tell us about it.
Security updates
Alert summary August 22, 2019 to August 28, 2019
Dist. | ID | Release | Package | Date |
---|---|---|---|---|
Arch Linux | ASA-201908-11 | | firefox | 2019-08-24 |
Arch Linux | ASA-201908-9 | | libreoffice-still | 2019-08-24 |
Arch Linux | ASA-201908-13 | | nginx | 2019-08-24 |
Arch Linux | ASA-201908-12 | | nginx-mainline | 2019-08-24 |
Arch Linux | ASA-201908-10 | | subversion | 2019-08-24 |
Debian | DSA-4509-1 | stable | apache2 | 2019-08-26 |
Debian | DLA-1896-1 | LTS | commons-beanutils | 2019-08-24 |
Debian | DLA-1893-1 | LTS | cups | 2019-08-22 |
Debian | DSA-4510-1 | stable | dovecot | 2019-08-28 |
Debian | DSA-4508-1 | stable | h2o | 2019-08-24 |
Debian | DLA-1894-1 | LTS | libapache2-mod-auth-openidc | 2019-08-23 |
Debian | DLA-1895-1 | LTS | libmspack | 2019-08-23 |
Debian | DSA-4505-1 | stable | nginx | 2019-08-22 |
Debian | DLA-1886-2 | LTS | openjdk-7 | 2019-08-23 |
Debian | DSA-4506-1 | stable | qemu | 2019-08-24 |
Debian | DSA-4507-1 | stable | squid | 2019-08-24 |
Debian | DLA-1897-1 | LTS | tiff | 2019-08-25 |
Debian | DLA-1898-1 | LTS | xymon | 2019-08-26 |
Fedora | FEDORA-2019-4bed83e978 | F29 | docker | 2019-08-27 |
Fedora | FEDORA-2019-5b54793a4a | F30 | docker | 2019-08-27 |
Fedora | FEDORA-2019-099575a123 | F30 | httpd | 2019-08-23 |
Fedora | FEDORA-2019-2b8ef08c95 | F30 | kubernetes | 2019-08-26 |
Fedora | FEDORA-2019-355f6e10c1 | F29 | libmodbus | 2019-08-25 |
Fedora | FEDORA-2019-4942e01cdc | F30 | libmodbus | 2019-08-25 |
Fedora | FEDORA-2019-099575a123 | F30 | mod_md | 2019-08-23 |
Fedora | FEDORA-2019-9013b5e75d | F29 | nfdump | 2019-08-24 |
Fedora | FEDORA-2019-0fbfb00cbb | F30 | nfdump | 2019-08-24 |
Fedora | FEDORA-2019-8a437d5c2f | F29 | nghttp2 | 2019-08-27 |
Fedora | FEDORA-2019-81985a8858 | F30 | nghttp2 | 2019-08-23 |
Fedora | FEDORA-2019-befd924cfe | F30 | nginx | 2019-08-22 |
Fedora | FEDORA-2019-5a6a7bc12c | F30 | nodejs | 2019-08-25 |
Fedora | FEDORA-2019-ac709da87f | F30 | patch | 2019-08-23 |
openSUSE | openSUSE-SU-2019:1983-1 | 15.0 15.1 | ImageMagick | 2019-08-21 |
openSUSE | openSUSE-SU-2019:2007-1 | | dkgpg, libTMCG | 2019-08-25 |
openSUSE | openSUSE-SU-2019:2000-1 | 15.1 | go1.12 | 2019-08-24 |
openSUSE | openSUSE-SU-2019:1997-1 | | neovim | 2019-08-24 |
openSUSE | openSUSE-SU-2019:2017-1 | | putty | 2019-08-27 |
openSUSE | openSUSE-SU-2019:1985-1 | 15.0 15.1 | putty | 2019-08-21 |
openSUSE | openSUSE-SU-2019:1988-1 | 15.0 | python | 2019-08-23 |
openSUSE | openSUSE-SU-2019:1989-1 | 15.1 | python | 2019-08-23 |
openSUSE | openSUSE-SU-2019:2005-1 | 15.1 | qbittorrent | 2019-08-25 |
openSUSE | openSUSE-SU-2019:1994-1 | 15.0 15.1 | schismtracker | 2019-08-23 |
openSUSE | openSUSE-SU-2019:1999-1 | | teeworlds | 2019-08-24 |
openSUSE | openSUSE-SU-2019:1990-1 | | thunderbird | 2019-08-23 |
openSUSE | openSUSE-SU-2019:2015-1 | | vlc | 2019-08-26 |
openSUSE | openSUSE-SU-2019:2008-1 | | zstd | 2019-08-25 |
Oracle | ELSA-2019-2571 | OL7 | pango | 2019-08-27 |
Red Hat | RHSA-2019:2545-01 | RHAE2.6 | Ansible | 2019-08-21 |
Red Hat | RHSA-2019:2544-01 | RHAE2.7 | Ansible | 2019-08-21 |
Red Hat | RHSA-2019:2543-01 | RHAE2.8 | Ansible | 2019-08-21 |
Red Hat | RHSA-2019:2542-01 | RHAE2.8 | Ansible | 2019-08-21 |
Red Hat | RHSA-2019:2552-01 | RHOE | atomic-openshift-web-console | 2019-08-21 |
Red Hat | RHSA-2019:2538-01 | EL7 | ceph | 2019-08-21 |
Red Hat | RHSA-2019:2566-01 | EL7.5 | kernel | 2019-08-27 |
Red Hat | RHSA-2019:2553-01 | EL7 | qemu-kvm-rhev | 2019-08-22 |
Red Hat | RHSA-2019:2565-01 | EL7.5 | ruby | 2019-08-27 |
Scientific Linux | SLSA-2019:2079-1 | SL7 | Xorg | 2019-08-26 |
Scientific Linux | SLSA-2019:2332-1 | SL7 | advancecomp | 2019-08-26 |
Scientific Linux | SLSA-2019:2057-1 | SL7 | bind | 2019-08-26 |
Scientific Linux | SLSA-2019:2075-1 | SL7 | binutils | 2019-08-26 |
Scientific Linux | SLSA-2019:2162-1 | SL7 | blktrace | 2019-08-26 |
Scientific Linux | SLSA-2019:2051-1 | SL7 | compat-libtiff3 | 2019-08-26 |
Scientific Linux | SLSA-2019:2181-1 | SL7 | curl | 2019-08-26 |
Scientific Linux | SLSA-2019:2060-1 | SL7 | dhcp | 2019-08-26 |
Scientific Linux | SLSA-2019:2197-1 | SL7 | elfutils | 2019-08-26 |
Scientific Linux | SLSA-2019:2048-1 | SL7 | exempi | 2019-08-26 |
Scientific Linux | SLSA-2019:2101-1 | SL7 | exiv2 | 2019-08-26 |
Scientific Linux | SLSA-2019:2037-1 | SL7 | fence-agents | 2019-08-26 |
Scientific Linux | SLSA-2019:2157-1 | SL7 | freerdp and vinagre | 2019-08-26 |
Scientific Linux | SLSA-2019:2462-1 | SL7 | ghostscript | 2019-08-26 |
Scientific Linux | SLSA-2019:2281-1 | SL7 | ghostscript | 2019-08-26 |
Scientific Linux | SLSA-2019:2118-1 | SL7 | glibc | 2019-08-26 |
Scientific Linux | SLSA-2019:2145-1 | SL7 | gvfs | 2019-08-26 |
Scientific Linux | SLSA-2019:2258-1 | SL7 | http-parser | 2019-08-26 |
Scientific Linux | SLSA-2019:2343-1 | SL7 | httpd | 2019-08-26 |
Scientific Linux | SLSA-2019:2141-1 | SL7 | kde-workspace | 2019-08-26 |
Scientific Linux | SLSA-2019:2285-1 | SL7 | keepalived | 2019-08-26 |
Scientific Linux | SLSA-2019:2029-1 | SL7 | kernel | 2019-08-26 |
Scientific Linux | SLSA-2019:2137-1 | SL7 | keycloak-httpd-client-install | 2019-08-26 |
Scientific Linux | SLSA-2019:2298-1 | SL7 | libarchive | 2019-08-26 |
Scientific Linux | SLSA-2019:2047-1 | SL7 | libcgroup | 2019-08-26 |
Scientific Linux | SLSA-2019:2308-1 | SL7 | libguestfs-winsupport | 2019-08-26 |
Scientific Linux | SLSA-2019:2052-1 | SL7 | libjpeg-turbo | 2019-08-26 |
Scientific Linux | SLSA-2019:2049-1 | SL7 | libmspack | 2019-08-26 |
Scientific Linux | SLSA-2019:2130-1 | SL7 | libreoffice | 2019-08-26 |
Scientific Linux | SLSA-2019:2290-1 | SL7 | libsolv | 2019-08-26 |
Scientific Linux | SLSA-2019:2136-1 | SL7 | libssh2 | 2019-08-26 |
Scientific Linux | SLSA-2019:2053-1 | SL7 | libtiff | 2019-08-26 |
Scientific Linux | SLSA-2019:2294-1 | SL7 | libvirt | 2019-08-26 |
Scientific Linux | SLSA-2019:2126-1 | SL7 | libwpd | 2019-08-26 |
Scientific Linux | SLSA-2019:2169-1 | SL7 | linux-firmware | 2019-08-26 |
Scientific Linux | SLSA-2019:2327-1 | SL7 | mariadb | 2019-08-26 |
Scientific Linux | SLSA-2019:2276-1 | SL7 | mercurial | 2019-08-26 |
Scientific Linux | SLSA-2019:2112-1 | SL7 | mod_auth_openidc | 2019-08-26 |
Scientific Linux | SLSA-2019:2237-1 | SL7 | nss, nss-softokn, nss-util, and nspr | 2019-08-26 |
Scientific Linux | SLSA-2019:2077-1 | SL7 | ntp | 2019-08-26 |
Scientific Linux | SLSA-2019:2154-1 | SL7 | opensc | 2019-08-26 |
Scientific Linux | SLSA-2019:2143-1 | SL7 | openssh | 2019-08-26 |
Scientific Linux | SLSA-2019:2304-1 | SL7 | openssl | 2019-08-26 |
Scientific Linux | SLSA-2019:2125-1 | SL7 | ovmf | 2019-08-26 |
Scientific Linux | SLSA-2019:2033-1 | SL7 | patch | 2019-08-26 |
Scientific Linux | SLSA-2019:2097-1 | SL7 | perl-Archive-Tar | 2019-08-26 |
Scientific Linux | SLSA-2019:2046-1 | SL7 | polkit | 2019-08-26 |
Scientific Linux | SLSA-2019:2022-1 | SL7 | poppler | 2019-08-26 |
Scientific Linux | SLSA-2019:2189-1 | SL7 | procps-ng | 2019-08-26 |
Scientific Linux | SLSA-2019:2030-1 | SL7 | python | 2019-08-26 |
Scientific Linux | SLSA-2019:2035-1 | SL7 | python-requests | 2019-08-26 |
Scientific Linux | SLSA-2019:2272-1 | SL7 | python-urllib3 | 2019-08-26 |
Scientific Linux | SLSA-2019:2078-1 | SL7 | qemu-kvm | 2019-08-26 |
Scientific Linux | SLSA-2019:2135-1 | SL7 | qt5 | 2019-08-26 |
Scientific Linux | SLSA-2019:2110-1 | SL7 | rsyslog | 2019-08-26 |
Scientific Linux | SLSA-2019:2028-1 | SL7 | ruby | 2019-08-26 |
Scientific Linux | SLSA-2019:2099-1 | SL7 | samba | 2019-08-26 |
Scientific Linux | SLSA-2019:2283-1 | SL7 | sox | 2019-08-26 |
Scientific Linux | SLSA-2019:2229-1 | SL7 | spice-gtk | 2019-08-26 |
Scientific Linux | SLSA-2019:2177-1 | SL7 | sssd | 2019-08-26 |
Scientific Linux | SLSA-2019:2091-1 | SL7 | systemd | 2019-08-26 |
Scientific Linux | SLSA-2019:2205-1 | SL7 | tomcat | 2019-08-26 |
Scientific Linux | SLSA-2019:2178-1 | SL7 | udisks2 | 2019-08-26 |
Scientific Linux | SLSA-2019:2336-1 | SL7 | unixODBC | 2019-08-26 |
Scientific Linux | SLSA-2019:2159-1 | SL7 | unzip | 2019-08-26 |
Scientific Linux | SLSA-2019:2280-1 | SL7 | uriparser | 2019-08-26 |
Scientific Linux | SLSA-2019:2017-1 | SL7 | zsh | 2019-08-26 |
Scientific Linux | SLSA-2019:2196-1 | SL7 | zziplib | 2019-08-26 |
Slackware | SSA:2019-238-01 | | kernel | 2019-08-26 |
SUSE | SUSE-SU-2019:2237-1 | SLE15 | apache2 | 2019-08-28 |
SUSE | SUSE-SU-2019:2219-1 | OS8 | ardana-ansible, ardana-db, ardana-freezer, ardana-glance, ardana-input-model, ardana-nova, ardana-osconfig, ardana-tempest, caasp-openstack-heat-templates, crowbar-core, crowbar-ha, crowbar-openstack, crowbar-ui, documentation-suse-openstack-cloud, galera-python-clustercheck, openstack-cinder, openstack-glance, openstack-heat, openstack-horizon-plugin-monasca-ui, openstack-horizon-plugin-neutron-fwaas-ui, openstack-ironic, openstack-keystone, openstack-manila, openstack-monasca-agent, openstack-monasca-api, openstack-monasca-persister, openstack-monasca-persister-java, openstack-murano, openstack-neutron, openstack-neutron-gbp, openstack-neutron-lbaas, openstack-nova, openstack-octavia, python-Beaver, python-oslo.db, python-osprofiler, python-swiftlm, venv-openstack-magnum, venv-openstack-monasca, venv-openstack-monasca-ceilometer, venv-openstack-murano, venv-openstack-neutron | 2019-08-26 |
SUSE | SUSE-SU-2019:2236-1 | SLE12 | fontforge | 2019-08-28 |
SUSE | SUSE-SU-2019:14155-1 | SLE11 | ghostscript-library | 2019-08-28 |
SUSE | SUSE-SU-2019:2213-1 | SLE15 | go1.11 | 2019-08-23 |
SUSE | SUSE-SU-2019:2214-1 | SLE15 | go1.12 | 2019-08-23 |
SUSE | SUSE-SU-2019:14151-1 | SLE11 | kvm | 2019-08-21 |
SUSE | SUSE-SU-2019:2231-1 | SLE15 | libreoffice | 2019-08-28 |
SUSE | SUSE-SU-2019:1606-2 | OS8 SLE12 SES5 | libssh2_org | 2019-08-21 |
SUSE | SUSE-SU-2019:2227-1 | OS8 SLE12 SES5 | libvirt | 2019-08-28 |
SUSE | SUSE-SU-2019:2223-1 | SLE15 | podman, slirp4netns and libcontainers-common | 2019-08-27 |
SUSE | SUSE-SU-2019:2228-1 | SLE15 | postgresql10 | 2019-08-28 |
SUSE | SUSE-SU-2019:2159-1 | OS7 OS8 SLE12 SES4 SES5 | postgresql96 | 2019-08-21 |
SUSE | SUSE-SU-2019:2211-1 | SLE15 | python-SQLAlchemy | 2019-08-23 |
SUSE | SUSE-SU-2019:2212-1 | SLE15 | python-Twisted | 2019-08-23 |
SUSE | SUSE-SU-2019:2221-1 | SLE12 | qemu | 2019-08-27 |
SUSE | SUSE-SU-2019:2192-1 | SLE15 | qemu | 2019-08-21 |
SUSE | SUSE-SU-2019:2209-1 | OS7 SES4 | rubygem-loofah | 2019-08-23 |
SUSE | SUSE-SU-2019:2229-1 | SLE15 | slurm | 2019-08-28 |
SUSE | SUSE-SU-2019:2191-1 | SLE15 | wavpack | 2019-08-21 |
Ubuntu | USN-4110-1 | 16.04 18.04 19.04 | dovecot | 2019-08-28 |
Ubuntu | USN-4108-1 | 18.04 | libzstd | 2019-08-21 |
Ubuntu | USN-4109-1 | 18.04 | openjpeg2 | 2019-08-21 |
Page editor: Rebecca Sobol