
LWN.net Weekly Edition for April 13, 2017

Defending copyleft

By Jake Edge
April 12, 2017

LibrePlanet

For some years now, Bradley Kuhn has been the face of GPL enforcement. At LibrePlanet 2017, he gave a talk about that enforcement and, more generally, whether copyleft is succeeding. Enforcing the GPL is somewhat fraught with perils of various sorts, and there are those who are trying to thwart that work, he said. His talk was partly to clear the air and to alert the free-software community to some backroom politics he sees happening in the enforcement realm.

Most of the work that Kuhn's employer, the Software Freedom Conservancy (SFC), does is not dealing with licensing issues. But people love hearing about copyleft, he said. In addition, free-software developers like those at LibrePlanet have a right to know what's going on politically. There is a lot of politics going on behind the scenes.

Kuhn works for a charity, not a traditional company or a trade association. That means he has the freedom and, in some sense, the obligation to give attendees the whole story from his point of view, he said. He is lucky to be able to work in that fashion. Kuhn then took a bit of a spin through his history with copyleft and why he decided to step up for it.

He studied computer science, but at a liberal arts college, which has given him something of an interdisciplinary approach. He was the first to bring a laptop to class there (a Sager from 1992); it had a battery that lasted all of 25 minutes, so he had two to last for a whole class. He noted that in those days, people read Computer Shopper magazine and would order systems out of it by dialing companies on rotary phones.

He read Linus Torvalds's famous email, more or less in realtime (which, for Usenet, meant "within a few days"). The "just a hobby" phrase really struck him, because he was a part of that culture. In those days, that's who read Computer Shopper and bought computers. He needed an operating system, so he installed GNU/Linux on his computer. That meant he had the source and could make patches to fix problems that he had. He did not send his patches upstream, though, which is unfortunate because he could perhaps have become a Linux developer and could enforce his own copyrights, rather than others', if he had.

A strategy

Copyleft is simply a strategy, he said; it is a tool to try to ensure software freedom for everyone. It is not a moral principle, so if it doesn't work, we should switch to another strategy that does. There is no harm in asking whether copyleft is working, but we should avoid the "truthiness and revisionist history" about GPL enforcement that has become common.

For example, he pointed to a response from Greg Kroah-Hartman in the contentious discussion about GPL enforcement prior to the 2016 Kernel Summit. In it, Kroah-Hartman said that he did want companies to comply with the license, but didn't think "that suing them is the right way to do it, given that we have been _very_ successful so far without having to do that". Kuhn said he did not know what "quantum reality" Kroah-Hartman lives in, but that the GPL has been enforced with lawsuits a number of times along the way.

The enforcement of the GPL directly resulted in the OpenWrt distribution for wireless routers. The first commit to OpenWrt was the source release that Linksys and Cisco made due to the GPL compliance efforts. That has resulted in one of the most widely deployed Linux systems in the world. In an aside to Richard Stallman, Kuhn noted that he said "Linux", rather than "GNU/Linux", because these systems mostly consist of the Linux kernel plus BusyBox and not the GNU system.

Kuhn admitted that most of the code released did not go into the upstream projects, which has led to some criticism of those efforts. One persistent critic has been former BusyBox developer Rob Landley who said that the BusyBox compliance efforts had never resulted in any code that was useful to the upstream project. According to Kuhn, that idea is based on a misunderstanding, which he and Landley resolved at linux.conf.au 2017.

Getting code upstream is only a secondary effect of copyleft's primary goal, Kuhn said. Whether the projects get the code is not the fundamental reason that copyleft exists; instead, it is meant to ensure that downstream users of the software can get access to the code so they can become upstream developers. They may not do so, but they will have the opportunity to; that will also provide plenty of learning opportunities for those users.

Stallman said that getting contributions for upstream is a focus of the open-source movement, while getting the code into users' hands is what free software is all about. There is another concern, though, even when the source code is available, Kuhn said. Devices like those for the Internet of Things (IoT) and other embedded use cases often fail to release the "scripts used to control compilation" (from the GPLv2), which means that users cannot build and install the code onto their devices. That is an important part of the software freedom that copyleft tries to ensure.

He pointed to another response in the Kernel Summit discussion thread: Matthew Garrett pointed out that getting companies to participate has some positive effects, but that there are other considerations:

And do you want 4 more enterprise clustering filesystems, or another complete rewrite of the page allocator for a 3% performance improvement under a specific database workload, or do you want a bunch of teenagers who grow up hacking this stuff because it's what powers every device they own? Because honestly I think it's the latter that's helped get you where you are now, and they're not going to be there if the thing that matters to you most is making sure that large companies don't feel threatened rather than making sure that the next 19 year old in a dorm room can actually hack the code on their phone and build something better as a result.

Today's teenager does not have Kuhn's luggable laptop, but does have access to routers, tablets, refrigerators, televisions, and so on. He noted that the SamyGO project got its start from a lawsuit filed to "liberate TV firmware hacking". The base for the project came from the source code released by Samsung due to a BusyBox GPL enforcement suit.

In addition, Harald Welte filed at least fifteen GPL compliance suits in Germany over ten years starting in 2004. So, to say that Linux has been successful without lawsuits, as Kroah-Hartman did, is not an accurate summary of the history. Kuhn would argue that Linux and other software released under the GPL have been successful because enforcement actions and lawsuits have happened regularly over the years.

We can't go to a parallel universe and replay the experiment without lawsuits to see what the outcome would be, but it is political FUD to say that GPL enforcement and lawsuits are some newfangled idea that endangers Linux. If that is true, it has been that way since the first suits were filed back in 2002.

Less compliance

In the meantime, though, compliance has become less common as more and more devices with GPL-covered code are released. If you go to Best Buy and buy an IoT gadget or other device, it almost certainly will contain GPL-covered code and is highly likely to be in violation of the license. It is not just BusyBox and Linux, either; Samba, FFmpeg, and other projects are also having their code built into these devices.

Welte discovered that GPL enforcement is not particularly enjoyable work; he is no longer enforcing the GPL and has moved on to other interesting projects. That leaves SFC as the sole organization doing community-oriented enforcement. The principles that have been used for years to govern what it means to do community-oriented enforcement were refined and published in 2015. The principles embody the idea of using the tool of copyleft to maximize software freedom for users, he said.

The principles also clearly state that legal action is a last resort. It is the last tool to be used, not the first. He and Kroah-Hartman agree on 99.99% of GPL enforcement strategy, Kuhn said, but differ on one minor point. He talked with Kroah-Hartman about the VMware lawsuit at a conference recently and it became apparent what the difference between their positions is. SFC had talked to VMware for years and the company said that it did not agree that it was violating the license, so it would never comply. According to Kuhn, Kroah-Hartman believes that Christoph Hellwig (and SFC) should have just walked away and let VMware violate the license forever. Kuhn said, at best, that would turn the GPL into the LGPL; "at the worst we would have completely eviscerated copyleft".

Kroah-Hartman's employer, the Linux Foundation (LF), has a more aggressive position, Kuhn said. He thinks that is because it does not have software freedom as a priority. As an example of that, he mentioned a conversation he had with LF legal counsel Karen Copenhaver at LinuxCon 2013; she told him that the LF prioritizes Linux jobs. If allowing proprietary kernel modules creates more Linux jobs, that, to her, is an acceptable tradeoff. But Kuhn believes there are some jobs that shouldn't be done, especially if they are harmful to the community.

Criticism

There is a strange element to the criticisms about lawsuits, he said. Companies in the tech industry sue each other all the time—hundreds of times each year. But even if he uses "aggressive overcounting", he can only come up with about 50 lawsuits against GPL violators in the last twenty years. The SFC has remained under continuous attack though it has only funded one lawsuit in its entire history, he said.

The politics surrounding our community are nasty and not transparent; much of what happens goes on in backchannels that our community does not have access to. The policies that various organizations are pushing are not in the open. For example, the LF has declared war on copyleft, he said. It is a trade association that is run by big companies that would prefer that copyleft would go away. They would also like enforcement to cease because it scares their customers.

Others, like VMware, thumb their noses at the GPL. That is because companies are not really interested in software freedom, which is logical from their point of view, Kuhn said, even though he disagrees. This bias against copyleft licenses has been going on for a long time; copyleft projects get replaced with non-copyleft alternatives so that companies can make proprietary versions when they wish to.

But a talk by Eben Moglen the previous day (which hearkened back to his talk at Columbia Law School in November 2016) suggested that lawsuits are driving companies away from copylefted code and away from copylefted kernels in particular. Kuhn does not see how that can be true since there have been non-copylefted kernels available for decades. It is also another example of a lack of transparency in the politics, Kuhn said, because Moglen and the Software Freedom Law Center (SFLC) are working with the LF on its anti-copyleft work.

Kuhn said he doesn't think of the LF, SFLC, or Moglen as enemies, though he does think they are misguided and a bit hypocritical. He said he would like to end the rumor mill and the backchannel politics to give all of the free-software community the ability to weigh in on these issues.

For another example of these politics, Kuhn pointed to a particular section of a video [WebM] of a talk by former Debian project leader Neil McGovern. Some ways into the talk, McGovern noted that Debian had asked SFLC for advice about distributing ZFS; even though SFLC opined that Oracle would not sue Debian for doing so, the project decided not to distribute ZFS in binary form for reasons of morality. When McGovern was asked about releasing the advice publicly, he declined since it was not something Debian wanted to advocate, but shortly thereafter something eerily similar was published—with "Debian's name filed off".

If there is a non-copyleft kernel coming for Android, as Moglen had predicted the previous day, it is not all that different from where things are today, Kuhn said. He has a hard time finding a Linux kernel in an embedded device that is complying with the GPL. In effect, Linux is a non-copyleft kernel in most cases because of that.

Not magic pixie dust

Copyleft is not "magic pixie dust" that you sprinkle on your code and magically get software freedom. It only works if someone does the work to ensure that the copyleft license is complied with. By some historical accident, he and Karen Sandler are the ones doing that for the Linux project. Kuhn is not in love with GPL enforcement. It is truly boring work and is politically dangerous. It has had an effect on his career, since there are probably only two organizations that he can ever work for: the SFC or the FSF. He would much rather be a free-software developer, but that doesn't seem to be in the cards.

He would not be doing this enforcement work without a mandate, however. Linux copyright holders asked the SFC to do this work. The future of copyleft is in the hands of the copyright holders, which are increasingly for-profit companies. The interests of those companies may align with the free-software community at times, but may not at other times. He advocated that free-software developers demand that they hold onto their copyrights in the code they create, rather than allowing the companies that employ them to hold them. It is clear that many companies are willing to leave copyleft undefended, he said. In order to defend it, we will need to have our own copyrights in the code.

Overall, he is rather baffled by how things have worked out. The SFC has spent years trying to work with the LF and others on GPL enforcement issues, but new heights of criticism are regularly reached, he said. His best guess is that powerful entities are concerned that developers will be the ones to determine the future of copyleft rather than the entities. Historically, free-software developers have been good at defending software freedom, he said, so we should hold our own copyrights, license code under the GPL, and defend the GPL when it is violated.

In "half a minute" of his reactions, Stallman noted that it would be really nice if the GPL enforced itself. If companies could be brought into compliance without harsh words and actions, that would be great, but it might not be enough. There is a need for visible action so that would-be violators recognize that there are effective actions that can be taken. There is tremendous hostility to the GPL and copyleft, but it is counterproductive to not do enforcement as a way to fend off that hostility. It may well be that Google's non-copyleft kernel becomes successful and replaces Linux, but if we try to avoid that by not enforcing the GPL, the outcome is the same.

For those interested, Kuhn's slides and a WebM video of the talk are available.

[I would like to thank the Linux Foundation for travel assistance to Cambridge, MA for LibrePlanet.]

Comments (23 posted)

Overlayfs snapshots

By Jake Edge
April 12, 2017

Vault

At the 2017 Vault storage conference, Amir Goldstein gave a talk about using overlayfs in a novel way to create snapshots for the underlying filesystem. His company, CTERA Networks, has used the NEXT3 ext3-based filesystem with snapshots, but customers want to be able to use larger filesystems than those supported by ext3. Thus he turned to overlayfs as a way to add snapshots for XFS and other local filesystems.

NEXT3 has a number of shortcomings that he wanted to address with overlayfs snapshots. Though it only had a few requirements, which were reasonably well supported, NEXT3 never got upstream. It was ported to ext4, but his employer stuck with the original ext3-based system, so the ext4 version was never really pushed for upstream inclusion.

[Amir Goldstein]

One of the goals of the overlayfs snapshots (ovfs-snap) project is for it to be included upstream for better maintainability. It will also allow snapshots at a directory subtree level; the alternative mechanisms for snapshots, Btrfs or LVM thin provisioning (thinp), are volume-level snapshots. Those two also allow writable and nested snapshots, while ovfs-snap does not. The "champion feature" for ovfs-snap is that the inode and page cache are shared, which is not true of the others. For a large number of containers, it becomes inefficient to have multiple copies of the same data in these caches, he said.

Goldstein then moved into a demonstration of the feature. In previous versions of the talk, he did the demo at the end but, based on some feedback, has moved it near the beginning. As with many demos, though, it is a bit hard to describe in prose. The basic idea is that snapshot mounts turn overlayfs on its head: the lower layer, which doesn't change in a normal overlayfs mount, is allowed to change, while the upper layer is modified to cover up the changes made in the lower so that the upper has the same contents as the lower had at the time of the snapshot.

This is done using a special "snapshot mount" that is a shim over the lower layer to intercept filesystem operations to it. Before those operations are performed, the current state of the lower layer is copied up to the upper layer. The upper layer is a snapshot overlay (which is different from a snapshot mount) that effectively records the state of the lower layer before any changes are made to it.

So the lower layer must be accessed through the snapshot mount, but the upper layer is simply a regular overlayfs that can be accessed as usual to get a view of the filesystem at the time of the snapshot. Multiple snapshots can be supported by having a lower layer shared between multiple upper layers, each of which hides any changes made to the lower after they were mounted (which is when the snapshot is taken).
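
To make the inversion more concrete, the arrangement can be sketched with mount commands roughly like the following. The overlay mount options are the standard ones; the final snapshot-mount step is purely illustrative, since ovfs-snap is not upstream and its eventual mount interface may well look different:

    # The snapshot view: a normal overlay whose upper layer covers up any
    # later changes to the (still writable) lower layer at /data.
    mount -t overlay overlay \
        -o lowerdir=/data,upperdir=/snap/upper,workdir=/snap/work /snapshot

    # Hypothetical ovfs-snap step: writes to /data must go through a snapshot
    # mount so that the old contents are copied up to /snap/upper first.
    mount -t snapshot -o snapshot=/snapshot /data /data    # illustrative only

Reading /snapshot then shows the tree as it looked when the snapshot was taken, while /data remains writable at close to native speed.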

These snapshots can work for a directory tree of any size, and they work on top of Btrfs, XFS, or another local filesystem. The upper layer records what has changed, but at the file level, not at the block level. One consequence of this design is that changing one byte of a large file results in a copy-up operation for the whole file. In addition, only one copy-up at a time is currently supported, so a large copy-up blocks any others.

Some new features are coming that will address some of these problems. For the container use case, Goldstein said, the copy-up performance issue is not usually a real problem. But for his use case, with large XFS files, copy-up performance is important. So, for 4.10, a "clone up" operation was added when the underlying filesystem supports the clone operation (as XFS and others do). The clone will do a copy-on-write "copy" of the file before it is modified so that only changed blocks actually get copied. Support for concurrent copy-up operations is also coming in 4.11.
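
The effect of the clone operation is easy to see from user space on a reflink-capable XFS filesystem; the following is only an analogy for what the kernel's clone-up does internally (device and file names are placeholders, and XFS reflink support was still marked experimental at the time):

    mkfs.xfs -m reflink=1 /dev/sdb1               # enable the (then experimental) reflink feature
    mount /dev/sdb1 /mnt
    dd if=/dev/zero of=/mnt/big bs=1M count=1024
    cp --reflink=always /mnt/big /mnt/big.clone   # shares blocks, copies no data
    echo changed >> /mnt/big                      # only the modified blocks diverge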

Goldstein presented a couple of different use cases for ovfs-snap. For a short-lived snapshot for backup purposes, an ovfs-snap provides a file-level copy-on-write filesystem. Changes to the lower layer trigger the copy-up so the snapshot is consistent with the state at the time of the backup. The lower layer can be accessed at near-native performance, while accessing the snapshot can tolerate some lesser performance, he said.

One could also use ovfs-snap to allow access to multiple previous versions of the filesystem. Multiple upper layers can be composed to create a view of the filesystem at any of the snapshot times, while the lower layer remains mounted and accessible. Those snapshots are read-only, however, unlike Btrfs or LVM thinp snapshots.

The rules for maintaining an overlay that represents a snapshot are fairly straightforward. Files must be copied (or cloned) up before they are modified or deleted in the lower layer. A whiteout marking a deletion must be added before a file gets created in the lower layer. A directory in the snapshot overlay must be redirected when a directory in the lower layer gets renamed. Finally, when a lower layer directory gets deleted, an opaque directory must be created in the snapshot.
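
Overlayfs already has on-disk conventions for recording those states; a snapshot overlay's upper layer would contain markers along these lines (created by the overlay driver itself, shown here only for illustration, with made-up names and values):

    ls -l upper/created-in-lower
    #   c--------- 1 root root 0, 0 ...  created-in-lower   (whiteout: a character device with device number 0:0)
    getfattr -d -m 'trusted.overlay.' upper/renamed-dir
    #   trusted.overlay.redirect="old-name"                 (records that the lower directory was renamed)
    getfattr -d -m 'trusted.overlay.' upper/replaced-dir
    #   trusted.overlay.opaque="y"                          (directory contents no longer merged with the lower)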

Taking a snapshot is a somewhat complicated process (see slide 15 in Goldstein's slides [PDF] for more information). Simplifying that process is on the to-do list for the project. There are also plans to support merging snapshots as well as working on getting the code upstream. He finished the talk with the inevitable invitation to help work on the project; he pointed those interested at the project wiki.

[I would like to thank the Linux Foundation for travel assistance to Cambridge, MA for Vault.]

Comments (none posted)

Page editor: Jonathan Corbet

Security

Network security in the microservice environment

April 12, 2017

This article was contributed by Tom Yates


CloudNativeCon+KubeCon
We have seen that a microservice architecture is intimately tied to the use of a TCP/IP network as the interconnecting fabric, so when Bernard Van De Walle from Aporeto gave a talk at CloudNativeCon and KubeCon Europe 2017 on why we shouldn't bother securing that network, it seemed a pretty provocative idea.

Back in the old days, said Van De Walle, the enterprise had its computing infrastructure in data centers — big racks of servers, all with workloads running on them. The interconnecting network was divided into zones according to functional and security requirements. The zones were separated by big firewalls which filtered traffic based on source and destination IP address (or subnet) and port range. Modulo a few disasters, this worked OK.

But then came microservices, which took away workload predictability. No longer could one point at a box and say "this is a Java box, it runs the JVM", or "this is an Apache box, it runs the web server". Now any given pod may end up running on any node, at least by default, and this is a big challenge to the traditional model of firewalls. One could run one firewall per node, but people have started to deploy VPNs and software-defined networks, and once you do that, each new pod deployment requires updating the node firewalls across the entire deployment. The traditional firewall-as-gatekeeper model has real problems in this world.

So we should step back and think about this again; perhaps the network is not the best place to provide our interconnection security. Indeed, Van De Walle went on, we could embrace the contrary and assume that the network provides no security at all. This changes our threat model; the internal network is now just as untrusted as the external one. We cannot now rely on IP addresses and port numbers as authenticators; each flow must be authorized some other way. Kubernetes, he said, defaults to zero-trust networking; that is, by default the network is flat from the ground up. The IP address is dynamically assigned and carries no real information. Instead, Kubernetes has objects with associated identifiers — name, namespace, and labels — and these are the foundation for your identity.

Kubernetes has made attempts to address the flatness of the network before. In v1.4, noted Van De Walle, network policies were introduced. The policy engine was based on a whitelist model; that is, everything was forbidden unless you explicitly authorized it. Policies are ingress-only, so you get to police traffic coming into your pod, but not that going out of it. Network policies are not active by default; they are activated on a per-namespace basis by the use of an annotation in the relevant YAML file. (For those who are not familiar with Kubernetes, I should point out that nearly everything involves YAML. The creation, modification, or destruction of just about any kind of resource involves the changes being specified in a YAML file, which is then referenced on the command line. If you're going to start using Kubernetes, prepare to start dreaming in YAML.)
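
As a concrete taste of that YAML, switching a namespace over to the whitelist model looked roughly like this under the beta annotation of the time (the namespace name is a placeholder):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Namespace
    metadata:
      name: production
      annotations:
        net.beta.kubernetes.io/network-policy: |
          {"ingress": {"isolation": "DefaultDeny"}}
    EOF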

In Van De Walle's opinion, Kubernetes network policies are fairly good. Rules are applied to specific pods, which are selected by a role label. Although the filtering is ingress filtering, you specify which traffic is allowed to enter a pod based on the role of the sender. For example, a policy might apply to all pods whose role was backend; that policy would permit any traffic originating from a pod whose role was frontend. Rules are additive; each rule allows a new class of traffic, and traffic need only match any one rule in order to be permitted ingress.
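
A policy of the kind Van De Walle described might look roughly like this in the beta API of that era (the names and namespace are placeholders, not taken from the talk):

    kubectl apply -f - <<'EOF'
    apiVersion: extensions/v1beta1
    kind: NetworkPolicy
    metadata:
      name: frontend-to-backend
      namespace: production
    spec:
      podSelector:
        matchLabels:
          role: backend        # the pods this policy protects
      ingress:
      - from:
        - podSelector:
            matchLabels:
              role: frontend   # only traffic from frontend pods is allowed in
    EOF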

There are already a number of projects on the market working with (and extending) the network-policies mechanism; Van De Walle named Project Calico as an example, and there are others. But these implementations are tied to the networking backend because, at the end of the day, policing is still based on IP addresses. So Aporeto has developed Trireme, a way of securing data flows that is independent of IP address information or the network fabric, and is based entirely around the pod label.

Trireme adds a signed-identity-exchange phase on top of TCP's three-way handshake. The signed identity is then used to implement Kubernetes's native network policies with an enhanced degree of reliability. The iptables "mangle" table is used to send packets involved in the handshake via Trireme's user-space daemon, which must be installed on each authenticating node and which adds the identity exchange on top of the TCP handshake. Signatures are authenticated via a pre-shared key, or full PKI. If the pre-shared key option is used — only recommended for small deployments — Aporeto suggests distributing the key as a Kubernetes secret; if PKI is used, each node must be supplied with its private key and the relevant certificates.
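
Trireme's actual rule set is more involved than this, but the general mechanism of steering handshake packets to a user-space daemon can be sketched with something like the following (the queue number is arbitrary; this is not what the project installs verbatim):

    # Divert incoming SYNs and outgoing SYN/ACKs (the server's half of the
    # handshake) to a user-space daemon via NFQUEUE, where the signed identity
    # can be added and verified; established connections stay on the normal
    # kernel fast path.
    iptables -t mangle -A PREROUTING -p tcp --tcp-flags SYN,ACK SYN \
        -j NFQUEUE --queue-num 10
    iptables -t mangle -A OUTPUT -p tcp --tcp-flags SYN,ACK SYN,ACK \
        -j NFQUEUE --queue-num 10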

Because this authentication is implemented on the node level, the pod is completely unaware that it is happening; it just never hears incoming traffic that fails the Trireme test. At this point, Van De Walle did a pleasingly quirky demo where a server pod was deployed that took requests over the network for the "Beer of the Day", which it returned from a random list of German beers. Two client pods were also deployed, both running a tiny application that continuously retrieved Beers of the Day from the server; one client possessed the tokens to assert an identity via Trireme, and one did not. When no network policy was in force, both clients were able to retrieve the beer of the day; when a network policy allowing only the approved client to connect to the server was applied, the non-approved client could no longer retrieve the daily beer.

Trireme is particularly helpful when not everything is happening in a single Kubernetes cluster. The ability to federate clusters across multiple data centers is coming; because this will almost inherently involve network address translation (NAT), authentication via source IP becomes extremely difficult. But as long as a TCP connection can be made, Trireme can layer its identity exchange on top. Future plans for Trireme include the ability to require encryption on demand, on a connection-by-connection basis, though this will slow data flows, since every packet in such a flow will then have to pass through the user-space daemon to have encryption applied or removed.

There are problems, or at least corner cases. Because TCP is stateful, and the netfilter state engine is used to recognize the packets involved in a new connection in order to send those (and only those) via the Trireme user-space daemon, every connection set up before any policies are applied remains valid after policy application, even if the policy should have forbidden it. Aporeto is experimenting with flushing the TCP connection table in order to address this problem.

The slides from Van De Walle's talk are available for those who are interested. Trireme is an elegant implementation of a clever idea, but for me its greatest value may be in encouraging me to recognize that zero-trust networking is a good way to think in a containerized microservice environment; that the old days, when access to the private network bolstered or indeed established your right to access the information stored thereon, might just be passing away.

[Thanks to the Linux Foundation, LWN's travel sponsor, for assistance in getting to Berlin for CNC and KubeCon.]

Comments (16 posted)

Brief items

Pandavirtualization: Exploiting the Xen hypervisor (Project Zero)

The latest installment from Google's Project Zero covers the development of an exploit for this unpleasant Xen vulnerability. "To demonstrate the impact of the issue, I created an exploit that, when executed in one 64-bit PV guest with root privileges, will execute a shell command as root in all other 64-bit PV guests (including dom0) on the same physical machine."

Comments (8 posted)

Over The Air: Exploiting Broadcom’s Wi-Fi Stack (Part 2) (Project Zero)

Here's the second part in the detailed Google Project Zero series on using the Broadcom WiFi stack to compromise the host system. "In this post, we’ll explore two distinct avenues for attacking the host operating system. In the first part, we’ll discover and exploit vulnerabilities in the communication protocols between the Wi-Fi firmware and the host, resulting in code execution within the kernel. Along the way, we’ll also observe a curious vulnerability which persisted until quite recently, using which attackers were able to directly attack the internal communication protocols without having to exploit the Wi-Fi SoC in the first place! In the second part, we’ll explore hardware design choices allowing the Wi-Fi SoC in its current configuration to fully control the host without requiring a vulnerability in the first place."

Comments (24 posted)

Security updates

Alert summary April 6, 2017 to April 12, 2017

Dist. ID Release Package Date
Arch Linux ASA-201704-3 mediawiki 2017-04-10
Arch Linux ASA-201704-2 python-django 2017-04-09
Arch Linux ASA-201704-1 python2-django 2017-04-09
Debian DLA-893-1 LTS bouncycastle 2017-04-10
Debian DSA-3829-1 stable bouncycastle 2017-04-11
Debian DSA-3828-1 stable dovecot 2017-04-10
Debian DSA-3828-2 stable dovecot 2017-04-11
Debian DSA-3827-1 stable jasper 2017-04-07
Debian DLA-887-1 LTS libdatetime-timezone-perl 2017-04-07
Debian DLA-891-1 LTS libnl 2017-04-10
Debian DLA-892-1 LTS libnl3 2017-04-10
Debian DLA-888-1 LTS logback 2017-04-08
Debian DLA-890-1 LTS ming 2017-04-10
Debian DLA-889-1 LTS potrace 2017-04-09
Debian DLA-894-1 LTS samba 2017-04-11
Debian DLA-886-1 LTS tzdata 2017-04-07
Fedora FEDORA-2017-b38b98727e F25 curl 2017-04-09
Fedora FEDORA-2017-174cb400d7 F24 flatpak 2017-04-11
Fedora FEDORA-2017-047cffb598 F25 ghostscript 2017-04-09
Fedora FEDORA-2017-712a186f5f F24 icecat 2017-04-07
Fedora FEDORA-2017-674d306f51 F25 icecat 2017-04-07
Fedora FEDORA-2017-ab3acddd21 F25 libtiff 2017-04-10
Fedora FEDORA-2017-51979161f4 F25 tigervnc 2017-04-07
Fedora FEDORA-2017-7e5b5201e7 F25 xen 2017-04-05
Fedora FEDORA-2017-054729ab08 F25 xen 2017-04-09
Gentoo 201704-02 chromium 2017-04-10
Gentoo 201704-01 qemu 2017-04-10
Gentoo 201704-03 xorg-server 2017-04-10
openSUSE openSUSE-SU-2017:0969-1 42.1 42.2 apparmor 2017-04-10
openSUSE openSUSE-SU-2017:0955-1 42.1 42.2 clamav-database 2017-04-06
openSUSE openSUSE-SU-2017:0961-1 ffmpeg 2017-04-07
openSUSE openSUSE-SU-2017:0958-1 42.2 ffmpeg 2017-04-07
openSUSE openSUSE-SU-2017:0942-1 42.1 42.2 libpng12 2017-04-05
openSUSE openSUSE-SU-2017:0937-1 42.1 42.2 libpng16 2017-04-05
openSUSE openSUSE-SU-2017:0941-1 42.2 nodejs4 2017-04-05
openSUSE openSUSE-SU-2017:0982-1 42.2 php7 2017-04-11
openSUSE openSUSE-SU-2017:0973-1 42.2 pidgin 2017-04-11
openSUSE openSUSE-SU-2017:0935-1 42.1 samba 2017-04-05
openSUSE openSUSE-SU-2017:0944-1 42.2 samba 2017-04-05
openSUSE openSUSE-SU-2017:0980-1 42.1 42.2 slrn 2017-04-11
Oracle ELSA-2017-0893 OL6 389-ds-base 2017-04-11
Oracle ELSA-2017-0892 OL6 kernel 2017-04-11
Red Hat RHSA-2017:0893-01 EL6 389-ds-base 2017-04-11
Red Hat RHSA-2017:0892-01 EL6 kernel 2017-04-11
Red Hat RHSA-2017:0933-01 EL7 kernel 2017-04-12
Red Hat RHSA-2017:0931-01 EL7 kernel-rt 2017-04-12
Red Hat RHSA-2017:0932-01 MRG/EL6 kernel-rt 2017-04-12
Scientific Linux SLSA-2017:0893-1 SL6 389-ds-base 2017-04-11
Scientific Linux SLSA-2017:0892-1 SL6 kernel 2017-04-11
Scientific Linux SLSA-2017:0630-1 SL6 tigervnc 2017-04-05
Slackware SSA:2017-098-01 libtiff 2017-04-08
Slackware SSA:2017-100-01 vim 2017-04-10
SUSE SUSE-SU-2017:0946-1 SLE11 jasper 2017-04-05
SUSE SUSE-SU-2017:0983-1 SLE12 xen 2017-04-11
Ubuntu USN-3258-2 16.04 16.10 dovecot 2017-04-11
Ubuntu USN-3258-1 16.04 16.10 dovecot 2017-04-10
Ubuntu USN-3257-1 16.04 16.10 webkit2gtk 2017-04-10
Full Story (comments: none)

Page editor: Jake Edge

Kernel development

Brief items

Kernel release status

The current development kernel is 4.11-rc6, released on April 9. Linus said: "Things are looking fairly normal, so here's the regular weekly rc. It's a bit bigger than rc5, but not alarmingly so, and nothing looks particularly worrisome."

The 4.11 regression report for April 9 lists 15 known problems.

Stable updates: 4.10.9, 4.9.21, and 4.4.60 were released on April 8, followed by 4.10.10, 4.9.22, and 4.4.61 on April 12.

Comments (none posted)

Quotes of the week

I don't think we should ever enable full address space for all applications. There's no point. /bin/true doesn't need more than 64TB of virtual memory. And I hope never will.
Kirill Shutemov (thanks to Jon Masters)

You realize that people have said that about just about every memory threshold from 64K onward?
H. Peter Anvin

Comments (19 posted)

Vetter: Review, not Rocket Science

Daniel Vetter discusses how to get people to review code. "The take away from these two articles seems to be that review is hard, there’s a constant lack of capable and willing reviewers, and this has been the state of review since forever. I’d like to counter pose this with our experiences in the graphics subsystem, where we’ve rolled out a well-working review process for the Intel driver, core subsystem and now the co-maintained small driver efforts with success, and not all that much pain."

Comments (8 posted)

Kernel development news

A report from Netconf: Day 1

April 11, 2017

This article was contributed by Antoine Beaupré


Netconf/Netdev

As is becoming traditional, two times a year the kernel networking community meets in a two-stage conference: an invite-only, informal, two-day plenary session called Netconf, held in Toronto this year, and a more conventional one-track conference open to the public called Netdev. I was invited to cover both conferences this year, given that Netdev was in Montreal (my hometown), and was happy to meet the crew of developers that maintain the network stack of the Linux kernel.

This article covers the first day of the conference which consisted of around 25 Linux developers meeting under the direction of David Miller, the kernel's networking subsystem maintainer. Netconf has no formal sessions; although some people presented slides, interruptions are frequent (indeed, encouraged) and the focus is on hashing out issues that are blocked on the mailing list and getting suggestions, ideas, solutions, and feedback from their peers.

Removing ndo_select_queue()

One of the first discussions that elicited a significant debate was the ndo_select_queue() function, a key component of the Linux polling system that determines when and how to send packets on a network interface (see netdev_pick_tx and friends). The general question was whether the use of ndo_select_queue() in drivers is a good idea. Alexander Duyck explained that Intel people were considering using ndo_select_queue() for receive/transmit queue matching. Intel drivers do not currently use the hook provided by the Linux kernel and it turns out no one is happy with ndo_select_queue(): the heuristics it uses don't really please anyone. The consensus (including from Duyck himself) seemed to be that it should just not be used anymore, or at least not used for that specific purpose.

The discussion turned toward the wireless network stack, which uses it extensively, but for other purposes. Johannes Berg explained that the wireless stack uses ndo_select_queue() for traffic classification, for example to get voice traffic through even if the best-effort queue is backed up. The wireless stack could stop using it by doing flow control completely inside the wireless stack, which already uses the fq_codel flow-control mechanism for other purposes, so porting away from ndo_select_queue() seems possible there.

The problem then becomes how to update all the drivers to change that behavior, which would be a lot of work. Still, it seems people are moving away from a generic ndo_select_queue() interface to stack-specific or even driver-specific (in the case of Intel) queue management interfaces.

refcount_t followup

There was a followup discussion on the integration of the refcount_t type into the network stack, which we covered recently. This type is meant to be an in-kernel defense against exploits based on overflowing or underflowing an object's reference count.

The consensus seems to be that having refcount_t used for debugging is acceptable, but it cannot be enabled by default. An issue that was identified is that the networking developers are fairly sure that introducing refcount_t would have a severe impact on performance, but they do not have benchmarks to prove it, something Miller identified as a problem that needs to be worked on. Miller then expressed some openness to the idea of having it as a kernel configuration option.

A similar discussion happened, on the second day, regarding the KASan memory error detector which was covered when it was introduced in 2014. Eric Dumazet warned that there could be a lot of issues that cannot be detected by KASan because of the way the network stack often bypasses regular memory-allocation routines for performance reasons. He also noted that this can sometimes mean the stack may go over the regular 10% memory limit (the tcp_mem parameter, described in the tcp(7) man page) for certain operations, especially when rebuilding out of order packets with lots of parallel TCP connections.

Therefore it was proposed that these special memory-recycling tricks could be optionally disabled, at run time or compile time, to instrument proper memory tracking. Dumazet argued this was a situation similar to refcount_t, in that a way is needed to disable the high-performance tricks to make the network stack easier to debug with KASan.

The problem with optional parameters is that they are often disabled in production or even by default, which, in turn, means that critical bugs cannot actually be found because the code paths are not tested. When I asked Dumazet about this, he explained that Google performs integration testing of new kernels before putting them in production, and those toggles could be enabled there to find and fix those bugs. But he agreed that certain code paths are then not tested until the code gets deployed in production.

So it seems the status quo remains: security folks want to improve the reliability of the kernel, but the network folks can't afford the performance cost. Yet it was clear in the discussions that the team cares about security issues and wants those issues to be fixed; the impact of some of the solutions is just too big.

Lightweight wireless management packet access

Berg explained that some users need to have high-performance access to certain management frames in the wireless stack and wondered how best to expose those to user space. The wireless stack already allows users to clone a network interface in "monitor" mode, but this has a big performance cost, as the radiotap header needs to be constructed from scratch and the packet header needs to be copied. As wireless improves and the bandwidth rises to gigabit levels, this can become a significant bottleneck for packet sniffers or reporting software that need to know precisely what's going on over the air outside of the regular access point client operation.
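
The costly path described above looks, today, something like the following (interface names are placeholders):

    # Clone a monitor-mode interface and sniff it: every frame gets a radiotap
    # header built from scratch and a copy made on its way to user space.
    iw dev wlan0 interface add mon0 type monitor
    ip link set mon0 up
    tcpdump -i mon0 -e type mgt     # capture only 802.11 management frames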

It seems the proper way to do this is with an eBPF program. As Miller summarized, just add another API call that allows loading a BPF program into the kernel and then those users can use a BPF filtering point to get the statistics they need. This will require an extra hook in the wireless stack, but it seems like this is the way that will be taken to implement this feature.

VLAN 0 inconsistencies

Hannes Frederic Sowa brought up the seemingly innocuous question of "how do we handle VLAN 0?" In theory, VLAN 0 means "no VLAN". But the Linux kernel currently handles this differently depending on whether the VLAN module is loaded and whether a VLAN 0 interface was created. Sometimes the VLAN tag is stripped, sometimes not.
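
For reference, the configurations whose behavior currently diverges can be set up with standard commands like these (eth0 is a placeholder):

    modprobe 8021q                                    # load the VLAN module
    ip link add link eth0 name eth0.0 type vlan id 0  # explicit VLAN 0 device
    ip link set eth0.0 up
    # Whether priority-tagged (VLAN 0) frames arriving on eth0 keep or lose
    # their tag currently depends on which of the steps above were taken.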

It turns out that the semantics were accidentally changed the last time this code was touched; the behavior was originally consistent but is now broken. Sowa therefore got the go-ahead to fix it and make the behavior consistent again.

Loopy fun

Then came the turn of Jamal Hadi Salim, the maintainer of the kernel's traffic-control (tc) subsystem. The first issue he brought up is a problem in the tc REDIRECT action that can create infinite loops within the kernel. The problem can be easily alleviated when loops are created on the same interface: checks can be added that just drop packets coming from the same device and rate-limit logging to avoid a denial-of-service (DoS) condition.

The more serious problem occurs when a packet is forwarded from (say) interface eth0 to eth1, which then promptly redirects it from eth1 back to eth0. Obviously, this kind of problem can only be created by a user with root access so, at first glance, those issues don't seem that serious: admins can shoot themselves in the foot, so what?
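
Such a two-interface loop is easy to construct with the mirred redirect action; a minimal sketch (interface names are placeholders, and this is obviously not something to try on a machine you care about) looks like:

    # Every packet transmitted on eth0 is redirected to eth1's egress path and
    # vice versa, so a single packet can bounce between the two indefinitely.
    tc qdisc add dev eth0 clsact
    tc qdisc add dev eth1 clsact
    tc filter add dev eth0 egress protocol all matchall \
        action mirred egress redirect dev eth1
    tc filter add dev eth1 egress protocol all matchall \
        action mirred egress redirect dev eth0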

But things become a little more serious when you consider the container case, where an untrusted user has root access inside a container and should have constrained resource limitations. Such a loop could allow this user to deploy an effective DoS attack against a whole group of containers running on the same machine. Even worse, this endless loop could possibly turn into a deadlock in certain scenarios, as the kernel could try to transmit the packet on the same device it originated from and block, progressively filling the queues and eventually completely breaking network access. Florian Westphal argued that a container can already create DoS conditions, for example by doing a ping flood.

According to Salim, this whole problem was created when two bits used for tracking such packets were reclaimed from the skb structure used to represent packets in the kernel. Those bits were a simple TTL (time to live) field that was incremented on each loop and dropped after a pre-determined limit was reached, breaking infinite loops. Salim asked everyone if this should be fixed or if we should just forget about this issue and move on.

Miller proposed to keep a one-behind state for the packet, fixing the simplest case (two interfaces). The general case, however, would require a bitmap of all the interfaces to be scanned, which would impose a large overhead. Miller said an attempt to fix this should somehow be made. The root of the problem is that the network maintainers are trying to reduce the size of the skb structure, because it's used in many critical paths of the network stack. Salim's position is that, without the TTL field, there is no way to fix the general case here, and this constitutes a security issue. So either the bits need to be brought back, or we need to live with the inherent DoS threat.

Dumping large statistics sets

Another issue Salim brought up was the question of how to export large statistics sets from the kernel. It turns out that some use cases may end up dumping a lot of data. Salim mentioned a real-world tc use case that calls for reading six million entries. The current netlink-based API provides a way to get only 20 entries at a time, which means it takes forever to dump the state of all those policy actions. Salim has a patch that changes the dump size to be eight times the NLMSG_GOOD_SIZE, which improves performance by an order of magnitude already, although there are issues with checking the user-space buffer size there.

But a more complete solution is needed. What Salim proposed was a way to ask only for the states that changed since the last dump was requested. He has a patch to add a last_access field to the netlink_callback structure used by netlink_dump() to output data; that raised the question of how to actually use that field. Since Salim fetches that data every five seconds, he figured he could just tell the kernel to return all the nodes that changed in that period. But then if the dump takes more than five seconds to complete, the next dump may be missing states that changed during the extra delay. An alternative mechanism would be for the user-space utility to keep the time stamp it requested and use that as a delta for the next dump.

It turns out this is a larger problem than just tc. Dumazet mentioned this was an issue with fq_codel classes: he would even like to be able to dump those statistics faster than every five seconds. Roopa Prabhu mentioned that Cumulus also has similar problems dumping stats from bridges, so clearly a more generic solution is needed here. There is, however, a fundamental problem with dumping large statistics sets from the kernel: those statistics are constantly changing while the dump is created and unless versioning or locking mechanisms are used — which would slow things down — the data returned is bound to be only an approximation of reality. Salim promised to send a set of RFC patches to further the discussion of this issue, but during the following Netdev conference, Berg published a patch to fix this ten-year-old issue, which brought cheers from the audience.

[The author would like to thank the Netconf and Netdev organizers for travel to, and hosting assistance in, Toronto. Many thanks to Berg, Dumazet, Salim, and Sowa for their time taken for a technical review of this article.]

Comments (2 posted)

A report from Netconf: Day 2

April 12, 2017

This article was contributed by Antoine Beaupré


Netconf/Netdev

This article covers the second day of the informal Netconf discussions, held on April 4, 2017. Topics discussed this day included the binding of sockets in VRF, identification of eBPF programs, inconsistencies between IPv4 and IPv6, changes to data-center hardware, and more. (See this article for coverage from the first day of discussions).

How to bind to specific sockets in VRF

One of the first presentations was from David Ahern of Cumulus, who presented a few interesting questions for the audience. His first was the problem of binding sockets to a given interface. Right now, there are four different ways this can be done:

  • the old SO_BINDTODEVICE generic socket option (see socket(7))
  • the IP_PKTINFO, IP-specific socket option (see ip(7)), introduced in Linux 2.2
  • the IP_UNICAST_IF flag, introduced in Linux 3.3 for WINE
  • the IPv6 scope ID suffix, part of the IPv6 addressing standard

So there's a problem of having too many ways of doing the same thing, something that cannot really be fixed without breaking ABI compatibility. But even worse, conflicts between those options are not reported by the kernel so it's possible for a user to set up socket flags in a way that certain flags override others and there are no checks made or errors reported. It was agreed that the user should get some notification of conflicting changes here, at least.

Furthermore, binding sockets to a specific VRF (Virtual Routing and Forwarding) device is not currently possible, so Ahern asked what the best way to do this would be, considering the many options available. A use case example is a UDP multicast socket that could be bound to a specific interface within a VRF.

This is an old problem: Tom Herbert explained that there were previous discussions about making the bind() system call more programmable so that, for example, you could bind() a UDP socket to a discrete list of IP addresses or a subnet. So he identified this issue as a broader problem that should be addressed by making the interfaces more generic.

Ahern explained that it is currently possible to bind sockets to the slave device of a VRF even though that should not be allowed. He also raised the question of how the kernel should tell which socket should be selected for incoming packets. Right now, there is a scoring mechanism for UDP sockets, but that cannot be used directly in this more general case.

David Miller said that there are already different ways of specifying scope: there is the VRF layer and the namespace ("netns") layer. A long time ago, Miller reluctantly accepted the addition of netns keys everywhere, swallowing the performance cost to gain flexibility. He argued that a new key should not be added and instead existing infrastructure should be reused. Herbert argued this was exactly the reason why this should be simplified: "if we don't answer the question, people will keep on trying this". For example, one can use a VRF to limit listening addresses, but it gets complicated if we need a device for every address. It seems the consensus evolved towards using IP_UNICAST_IF, added back in 2012, which is accessible to non-root users. It is currently limited to UDP and RAW sockets, but it could be extended to TCP.

XDP and eBPF program identification

Ahern then turned to the problem of extracting BPF programs from the kernel. He gave the example of a simple cBPF (classic BPF) filter that checks for ARP packets. If the filter is read back from the kernel, the user gets a blob of binary data, which is hard to interpret. There is a kernel verifier that can show C-like output, but that is also difficult to interpret. Ahern then added annotations to his slide that showed what the original program actually does, which was a good demonstration of why such a feature is needed.
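
The readability gap can be seen with an ordinary tcpdump, which will print the same ARP filter either as cBPF instructions or as the raw numbers that a dump from the kernel resembles (output abbreviated and indicative only):

    tcpdump -d arp
    #   (000) ldh      [12]
    #   (001) jeq      #0x806           jt 2    jf 3
    #   (002) ret      #262144
    #   (003) ret      #0
    tcpdump -ddd arp    # the same program as a bare list of decimal numbers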

Ahern explained that, at least for cBPF, it should be possible to recover the original plaintext, or at least something close to the original program. A first step would be to replace known constants (like 0x806 for ARP). Even with eBPF, it should be possible to improve the output. Alexei Starovoitov, the BPF maintainer, explained that it might make sense to start by returning information about the maps used by an eBPF program. Then more complex data structures could be inspected once we know their type.

The first priority is to get simple debugging tools working but, in the long term, the goal is a full decompiler that can reconstruct instructions into a human-readable program. The question that remains is how to return this data. Ahern explained that right now the bpf() system call copies the data to a different file descriptor, but it could just fill in a buffer. Starovoitov argued for a file descriptor; that would allow the kernel to stream everything through the same descriptor instead of having many attach points. Netlink cannot be used for this because of its asynchronous nature.

A similar issue regarding the way we identify express data path (XDP) programs (which are also written in BPF) was raised by Daniel Borkmann from Covalent. Miller explained that users will want ways to figure out which XDP program was installed, so XDP needs an introspection mechanism. We currently have SHA-1 identifiers that can be internally used to tell which binary is currently loaded but those are not exposed to user space. Starovoitov mentioned it is now just a boolean that shows if a program is loaded or not.

A use case for this, on top of just trying to figure out which BPF program is loaded, is to actually fetch the source code of a BPF program that was deployed in the field for which the source was lost. It is still uncertain that it will be possible to extract an exact copy that could then be recompiled into the same program. Starovoitov added that he needed this in production to do proper reporting.

IPv4/IPv6 equivalency

The last issue — or set of issues — that Ahern brought up was the question of inconsistencies between IPv4 and IPv6. It turns out that, because both protocols were (naturally) implemented separately, there are inconsistencies in how they are handled in the Linux kernel, which affect, among other things, the VRF framework. The first example he gave was the fact that IPv6 addresses added on the loopback interface generate unreachable routes in the main routing table, yet this doesn't happen with IPv4 addresses. Hannes Frederic Sowa explained this was part of the IPv6 specification: there are stronger restrictions on loopback interfaces in IPv6 than IPv4. Ahern explained that VRF loopback interfaces do not implement these restrictions and wanted to know if this was a problem.

Another issue is that anycast routes are added to the wrong interface. This is apparently not specific to VRF: this was done "just because Java", and has been there from day one. It seems that the Java Virtual Machine builds its own routing table and assumes this behavior, so changing this would break every JVM out there, which is obviously not acceptable.

Finally, Martin Kafai Lau asked if work should be done to merge the IPv4 and IPv6 FIB (forwarding information base) trees. The FIB tree is the data structure that represents routing tables in the Linux kernel. Miller explained that the two trees are not semantically equivalent: while IPv6 does source-address lookup and routing, IPv4 does not. We can't remove the source lookups from IPv6, because "people probably use that". According to Alexander Duyck, adding source tables to IPv4 would degrade performance to the level of IPv6 performance, which was jokingly referred to as an incentive to switch to IPv6.

More seriously, Sowa argued that using the same compressed tree that IPv4 uses for IPv6 as well could make sense. People may want to have source routing in IPv4 too. Miller argued that the kernel is optimized for 32-bit addresses in IPv4, and conceded that it could be scaled to 64-bit subnets, but 128-bit addresses would be much harder. Sowa suggested that they could be limited to 64 bits, as global routes announced over BGP usually have such a limit, and more-specific routes are usually at discrete prefixes like /65, /127 (for interconnect links), or /128 (for point-to-point links). He expressed concerns over the reliability of such an implementation, so, at this point, it is unlikely that the data structures could be merged. What is more likely is that the code paths could be merged and simplified, while keeping the data structures separate.

Module option substitutions

The next issue that was raised was from Jiří Pírko, who asked how to pass configuration options to a driver before the driver is initialized. Some chips require that some settings be sent before the firmware is loaded, which leads to a weird situation where there is a need to address a device before it's actually recognized by the kernel. The question, then, is how to pass information to a device that doesn't exist yet.

The answer seems to be that devlink could do this, as it has access to the full device tree and, therefore, to devices that can be addressed by (say) PCI identifiers. Then a possible devlink command could look something like:

    devlink dev pci/0000:03:00.0 option set foo bar

This idea raised a bunch of extra questions: some devices don't have a one-to-one mapping with the PCI bridge identifiers, for example, meaning that those identifiers cannot be used to access such devices. Another issue is that you may want to send multiple settings in a single transaction, which doesn't fit well in the devlink model. Miller then proposed to let the driver initialize itself to some state and wait for configuration to be sent when necessary. Another way would be to unregister the driver and re-register with the given configuration. Shrijeet Mukherjee explained that right now, Cumulus is doing this using horrible startup script magic by retrying and re-registering, but it would be nice to have a more standard way to do this.

Control over UAPI patches

Another issue that came up was the problem of changes in the user-space API (UAPI) which break backward compatibility. Pírko said that "we have to be more careful about those changes". The problem is that reviewers are not always available to make detailed reviews of such changes and may not notice API-breaking changes. Pírko proposed creating a bot to check if a given patch introduces UAPI changes, changes in structs, or in netlink enums. Miller said he could block merges until discussions happen and that patchwork, which Miller uses to process patches from the mailing list, does some of this. He also pointed out there aren't enough test cases in the first place.

Starovoitov argued that UAPI isn't special; there are other ways of breaking backward compatibility. He expressed concern that such a bot could create a false sense that everything is fine even when a patch breaks compatibility in a way it doesn't detect. Miller countered that UAPI is special in that "we're stuck with it forever". He then went on to propose that, since each module has one or more maintainers, he could make sure that each maintainer explicitly approves changes to those modules.
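Such a bot would not need to be sophisticated to catch the easy cases. As a rough illustration (not something presented at Netconf), a first pass could simply flag any patch that touches files under the kernel's UAPI header directories and demand an explicit maintainer acknowledgment:

    #!/usr/bin/env python3
    # Illustrative sketch only: flag patches that touch UAPI header paths.
    # It looks solely at the file names in a unified diff, so it will miss
    # UAPI-visible changes made elsewhere (netlink attribute semantics, for
    # example) and is no substitute for maintainer review.

    import re
    import sys

    # Paths whose modification is likely to affect the user-space API.
    SUSPECT = (re.compile(r'^include/uapi/'),
               re.compile(r'^tools/include/uapi/'))

    def touched_files(diff_text):
        """Yield the b/ side of every file header in a unified diff."""
        for line in diff_text.splitlines():
            m = re.match(r'^\+\+\+ b/(.+)$', line)
            if m:
                yield m.group(1)

    def main():
        flagged = [f for f in touched_files(sys.stdin.read())
                   if any(p.match(f) for p in SUSPECT)]
        if flagged:
            print("Possible UAPI change; needs an explicit maintainer ack:")
            for name in flagged:
                print("  " + name)
            return 1
        return 0

    if __name__ == '__main__':
        sys.exit(main())

Catching changes to structs or netlink enums, as Pírko suggested, would require actually parsing the headers rather than just matching paths, which is presumably where the real work in such a bot would lie.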

Data-center hardware changes

Starovoitov brought up the issue of a new type of hardware that is currently being deployed in data centers: the "multi-host NIC" (network interface card), a single NIC that is connected to multiple servers. Facebook, for example, uses these in its Yosemite platform, which packs twelve servers into a 2U rack mount in three modules; each module consists of four servers connected to the traditional switch fabric through a single NIC over PCI-Express. Mellanox and Broadcom also have similar devices.

One question is how to manage those devices. Since they are connected through a PCI-Express bus, Linux sees them as NICs, yet they are also a little like switches, in that they interconnect multiple servers. Furthermore, the kernel security model assumes that a NIC is trusted and gladly opens its own memory to NICs through DMA; that can become a huge security issue when the NIC is under the control of another server. It could become especially problematic if TLS hardware offloading arrives in the future along with the in-kernel TLS stacks that are being introduced.

The other problem is reliability: since those devices are currently "dumb", they need to be managed just like a regular NIC. If the host managing the card crashes, it could disable a whole set of servers that rely on the same NIC. There could be an election process among the servers, but that significantly complicates what used to be a simple PCI connection.

Mukherjee pointed out that the model Cisco uses for this is that the "smart NIC" is a "slave" of the main switch fabric. It's a daughter card, which makes it easier to manage from a network perspective. It is clear that Linux will need a way to represent those devices, probably through the newly introduced switchdev or DSA (distributed switch architecture), but it will be something to keep an eye on as density increases in the data center.

There were many more discussions during Netconf, too many to cover here, but in the end, Miller thanked everyone for all the interesting topics as the participants dispersed for a day off to travel to Montreal to attend the following Netdev conference.

[The author would like to thank the Netconf and Netdev organizers for travel to, and hosting assistance in, Toronto. Many thanks to Alexei Starovoitov for his time taken for a technical review of this article.]

Comments (none posted)

Patches and updates

Kernel trees

Linus Torvalds Linux 4.11-rc6 Apr 09
Greg KH Linux 4.10.10 Apr 12
Greg KH Linux 4.10.9 Apr 08
Greg KH Linux 4.9.22 Apr 12
Greg KH Linux 4.9.21 Apr 08
Greg KH Linux 4.4.61 Apr 12
Greg KH Linux 4.4.60 Apr 08
Steven Rostedt 4.4.60-rt73 Apr 11
Steven Rostedt 3.12.72-rt97 Apr 11
Steven Rostedt 3.2.88-rt126 Apr 11

Architecture-specific

Build system

Core kernel code

Al Viro uaccess unification Apr 07
Goldwyn Rodrigues No wait AIO Apr 11
Dave Martin Signal frame expansion support Apr 12
Paul E. McKenney SRCU callback parallelization for 4.12 Apr 12

Device drivers

Smitha T Murthy Add MFC v10.10 support Apr 06
Oleksij Rempel nvmem: add snvs_lpgpr driver Apr 06
Yannick Fertre STM32 Independent watchdog Apr 06
Fabrice Gasnier Add STM32H7 DAC driver Apr 06
olivier moysan Add STM32 SAI support Apr 10
olivier moysan ASoC: stm32: Add I2S driver Apr 06
sean.wang@mediatek.com net-next: dsa: add Mediatek MT7530 support Apr 07
Eugeniy Paltsev dmaengine: Add DW AXI DMAC driver Apr 07
Icenowy Zheng AXP803 PMIC support for Pine64 Apr 08
Christopher Bostic FSI device driver implementation Apr 10
Thierry Escande Google VPD sysfs driver Apr 11
thor.thayer@linux.intel.com Add Altera I2C Controller Driver Apr 11
Jacopo Mondi iio: adc: Maxim max9611 driver Apr 06
Shawn Guo Add ZTE VGA driver support Apr 06
Raviteja Garimella Support for USB DRD Phy driver for NS2 Apr 12
David Lechner LEGO MINDSTORMS EV3 Battery Apr 11
Vishwanathapura, Niranjana Omni-Path Virtual Network Interface Controller (VNIC) Apr 11

Device driver infrastructure

Documentation

Filesystems and block I/O

Memory management

js1304@gmail.com Introduce ZONE_CMA Apr 11

Networking

Security-related

Virtualization and containers

Page editor: Jonathan Corbet

Distributions

Connecting Kubernetes services with linkerd

April 10, 2017

This article was contributed by Tom Yates


CloudNativeCon+KubeCon
When a monolithic application is divided up into microservices, one new problem that must be solved is how to connect all those microservices to provide the old application's functionality. Kubernetes provides service discovery, but the results are presented to the pods via DNS, which can be a bit of a blunt instrument; DNS also doesn't provide much beyond round-robin access to the discovered services. Linkerd, now officially a Cloud Native Computing Foundation project, is a transparent proxy that solves this problem by sitting between those microservices and routing their requests. Two separate CNC/KubeCon events, a talk by Oliver Gould (briefly joined by Oliver Beattie) and a salon hosted by Gould, provided a view of linkerd and what it can offer.

Gould, one of the original authors of linkerd, used to work for Twitter in production operations during its crazy growth phase, when the site was down a lot. During the 2010 World Cup, every time a goal was scored, Twitter went down. He was a Twitter user, and after finding himself rooting for 0-0 draws because they would keep the site up, realized that Twitter had operations problems, and he could probably help. So he went to work for them.

In those days, Twitter's main application was a single, monolithic program, written in Ruby on Rails, known internally as the monorail. This architecture was already known to be undesirable; attempts were being made to split the application up, but to keep stability everything had a slow release cycle — new code often taking weeks to get into production — except the monorail, which was released daily. [Oliver Gould] So anything that anyone wanted to see in production in any reasonable timescale got shoehorned into the monorail, which didn't help the move to microservices. It also didn't help that the people who were trying to deploy microservices had to reinvent their own infrastructure — load-balancing, handling retries and timeouts, and the like — and these are not easy problems, so some of them were not doing it very well.

So Gould wrote a tool called Finagle, which is a fault-tolerant, protocol-agnostic remote procedure call system that provides all these services. It helped, so Twitter ended up fixing a lot of extant problems inside Finagle, and finally everything at Twitter ended up running on top of it. There are a number of consequent benefits to this; Finagle sees nearly everything, so you have a natural instrumentation point for metrics and tracing. However, Finagle is written in Scala, which Gould concedes is "not for everyone".

He left Twitter convinced that well-instrumented glue that is built to be easily usable can be helpful; turning his attention to the growing use of Docker and Kubernetes, he wrote linkerd to provide Finagle-like functionality for HTTP requests by acting as an intelligent web proxy. The fundamental idea is that applications shouldn't have to know who they need to talk to; they should ask linkerd for a service, and linkerd should take care of tracking who is currently offering that service, selecting the best provider, transporting the request to that provider, and returning the answer.

Facilities that linkerd provides to assist with this include service discovery, load balancing, encryption, tracing and logging, handling retries, expiration and timeouts, back-offs, dynamic routing, and metrics. One of the more elegant wrinkles Gould mentioned was that it can do per-request routing; for example, an application can send an HTTP header informing linkerd that this particular request should go via some alternative path, possibly a staging or testing path. Many statistics are exported; projects like linkerd-viz give a dashboard-style view of request volumes, latencies, and success rates.
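As far as the linkerd 1.x documentation goes, that per-request override is carried in an l5d-dtab header containing a delegation-table ("dtab") rule. A hypothetical client might look roughly like the sketch below; the service names, URL, and staging destination are invented for illustration, and linkerd is assumed to be listening as a local HTTP proxy on its conventional port 4140:

    # Hypothetical sketch: steer this one request's "users" traffic to a
    # staging copy by attaching a per-request dtab override header.
    import urllib.request

    req = urllib.request.Request(
        "http://users/profile/42",          # logical service name, resolved by linkerd
        headers={
            # Per-request routing override for this call only.
            "l5d-dtab": "/svc/users => /svc/users-staging",
        },
    )

    # Send the request through the local linkerd instance acting as an HTTP proxy.
    opener = urllib.request.build_opener(
        urllib.request.ProxyHandler({"http": "http://localhost:4140"}))
    with opener.open(req, timeout=5) as resp:
        print(resp.status, resp.read(80))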

Deadlines are something a microservice connector needs to care about. The simplistic approach of having each individual service have its own timeouts and retry budgets doesn't really work when multiple services contribute to the provision of a business feature. If the top service's timeout triggers, the fact that a subordinate service is merrily retrying the database for the third time according to its own timeout and retry rules is completely lost; the top service times out and the end-user is disappointed, while the subordinate transactions may still be needlessly trying to complete. Linkerd, because it is mediating all these transactions, allows the setting of per-feature timeouts, so that each service contributing toward that feature has its execution time deducted from the feature timeout, and the whole chain can be timed out when this expires. Services that are used in providing more than one feature can take advantage of more generous timeouts when they are invoked to provide important features, without having to permit such a long wait when they're doing something quick and dirty.
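The bookkeeping behind that is simple to illustrate. The toy sketch below (not linkerd code) just deducts each hop's elapsed time from a single per-feature budget, so a slow hop early in the chain leaves less time for everything beneath it:

    import time

    class Deadline:
        """A single time budget shared by every hop serving one feature request."""
        def __init__(self, budget_seconds):
            self.expires_at = time.monotonic() + budget_seconds

        def remaining(self):
            return self.expires_at - time.monotonic()

    def call_service(name, deadline, work_seconds):
        # Each hop gets only whatever budget is left, instead of its own
        # independent timeout; a hop with no budget left fails fast.
        if deadline.remaining() <= 0:
            raise TimeoutError(f"{name}: feature deadline already exceeded")
        time.sleep(min(work_seconds, max(deadline.remaining(), 0)))
        if deadline.remaining() <= 0:
            raise TimeoutError(f"{name}: ran out of feature budget")

    deadline = Deadline(budget_seconds=0.5)     # one budget for the whole feature
    call_service("frontend", deadline, 0.1)
    call_service("profile",  deadline, 0.2)
    call_service("database", deadline, 0.3)     # raises: the shared budget is gone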

Retries are also of concern. The simplistic approach of telling a service to retry after failure a finite number of times (say three) fails when things go bad, because each retry decision is taken in isolation. Just as the system is being stressed, the under-responsive service will be hit with four times the quantity of requests it normally gets, as everyone retries it. Linkerd, seeing all these requests as it does, can set a retry budget, allowing up to (say) 20% of requests to retry, thus capping the load on that service at 1.2 times normal. It makes no sense to set a traditional retry limit at a non-integer value like 1.2; this can only meaningfully be done by an overlord which sees and mediates everything.
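The arithmetic behind that 1.2 figure is easy to see in a sketch of such a budget; again, this is only an illustration of the idea, not linkerd's implementation:

    class RetryBudget:
        """Allow retries only while they stay under a fraction of real traffic."""
        def __init__(self, ratio=0.2):
            self.ratio = ratio      # 0.2 caps total load at 1.2x normal
            self.requests = 0
            self.retries = 0

        def record_request(self):
            self.requests += 1

        def can_retry(self):
            # A retry is allowed only if, counting it, retries remain within
            # the budgeted fraction of ordinary requests.
            if (self.retries + 1) <= self.ratio * self.requests:
                self.retries += 1
                return True
            return False

    budget = RetryBudget(ratio=0.2)
    for _ in range(100):
        budget.record_request()
    print(budget.can_retry())   # True: 1 retry against 100 requests is within 20%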

This high-level view also allows linkerd to propagate backpressure. Consider a feature provided by several stacked microservices, each of which invokes the next one down the stack. When a service somewhere down in the stack has reached capacity, applying backpressure allows that service to propagate the problem as far up the stack as possible. This allows users whose requests will exceed system capacity to quickly see a response informing them that their request will not be serviced, and thus add no further (pointless) load to the feature stack, instead of sitting there waiting for a positive response that will never come, and overloading the feature while they do so. At this point in the talk, an incredulous question from the audience prompted Gould to confirm that all this functionality is in the shipping linkerd; it's not vaporware intended for some putative future version.

Gould's personal pick for most important feature in linkerd is request-aware load balancing. Because linkerd mediates each request, it knows how long each takes to complete, and it uses this information to load-balance services on an exponentially-weighted moving average (EWMA) basis, developed at Twitter. New nodes are drip-fed an increasing amount of traffic until responsiveness suffers, at which point traffic is backed off sharply. He presented data from a test evaluating latencies for three different load-balancing algorithms: round-robin, queue depth, and EWMA, in an application where large numbers of requests were distributed between many nodes, one of which was forced to deliver slow responses. Each algorithm failed to deliver prompt responses for a certain percentage of requests, but the percentage in question varied notably between algorithms.
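A stripped-down sketch of the idea follows: each endpoint keeps an exponentially-weighted moving average of its observed latency, and new requests go to whichever endpoint currently looks cheapest. A production balancer would also account for in-flight requests and decay its estimates over time; this is only meant to show the core mechanism.

    import random

    class Endpoint:
        def __init__(self, name, alpha=0.3):
            self.name = name
            self.alpha = alpha      # weight given to the newest observation
            self.ewma = 0.0         # smoothed latency estimate, in ms

        def observe(self, latency_ms):
            # Standard EWMA update: the estimate leans toward recent latencies.
            self.ewma = self.alpha * latency_ms + (1 - self.alpha) * self.ewma

    def pick(endpoints):
        # Send the next request to whichever endpoint currently looks fastest.
        return min(endpoints, key=lambda e: e.ewma)

    endpoints = [Endpoint("a"), Endpoint("b"), Endpoint("c")]
    for _ in range(1000):
        chosen = pick(endpoints)
        # Endpoint "b" is artificially slow; it quickly stops being chosen.
        latency = random.gauss(100, 10) if chosen.name == "b" else random.gauss(20, 5)
        chosen.observe(latency)

    print({e.name: round(e.ewma, 1) for e in endpoints})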

The round-robin approach delivered prompt responses for only 95% of requests; Gould noted: "Everywhere I've been on-call, 95% is a wake-me-up success rate, and I really, really don't like being woken up." Queue-depth balancing, where new requests are sent to the node that is currently servicing the fewest requests, improved things, with 99% of clients getting a typically fast response; EWMA did better still, with more than 99.9% of clients seeing no sharp increase in latency.

Linkerd is relatively lightweight, using about 100MB of memory in normal use. It can be deployed in a number of ways, including either a centralized resilient cluster of linkerds, or one linkerd per node. Gould noted that the best deployment depends on what you're trying to do with linkerd, but that many people prefer one linkerd per node because TLS is one of the many infrastructural services that linkerd provides, so one-per-node lets you encrypt all traffic between nodes without applications having to worry about it.

One limitation of linkerd is that it only supports HTTP (and HTTPS) requests; it functions as a web proxy, and not every service is provided that way. Gould was very happy to announce the availability of linkerd-tcp, a more-generic proxy which tries to extend much of linkerd's functionality into general TCP-based services. It's still in beta, but attendees were encouraged to play with it.

Gould was open about the costs of a distributed architecture: "Once you're in a microservice environment, you have applications talking to each other over the network. Once you have a network, you have many, many, many, many more failures than you did when you just linked to a library. So if you don't have to do it, you really shouldn't... Microservices are something you have to do to keep your organization fast when managing builds gets too hard."

He was equally open about linkerd having costs of its own, not least in complexity. In response to being asked at what scale point the pain of not having linkerd is likely to outweigh the pain of having it, he replied that it was when your application is complex enough that it can't all fit in one person's head. At that point, incident responses become blame games, and you need something that does the job of intermediating between different bits of the application in a well-instrumented way, or you won't be able to find out what's wrong. While it was nice to hear another speaker being open about containerization not being some panacea, if I had a large, complex ecosystem of microservices to keep an eye on, I'd be very interested in linkerd.

[Thanks to the Linux Foundation, LWN's travel sponsor, for assistance in getting to Berlin for CNC and KubeCon.]

Comments (23 posted)

Brief items

Distribution quotes of the week

All this new hardware has meant I have had to run Debian Testing. Combine shiny new hardware with the shiny new software needed to drive it, and random little surprises become part of ones life. Coming close to dropping your new laptop because of a burning sensation as you retrieve it from it's bag wasn't surprising or even unexpected - not to me anyway.

Anyway, this discussion prompted me to get off my bum and look at why unattended-upgrades wasn't working. Turns out the default install has "label=Debian-Security", and all these laptops are running testing. I guess the assumption that people running testing have the wherewithal to configure their machines properly isn't unreasonable.

Russell Stuart

Bad news for Unity, good news for unity.
Epistaxis

If you never got to experience the Enlightenment desktop, back in the day, I highly recommend you give Bodhi Linux a try. It has just the right combination of “Those were the days” and “Hey, this works really well.” This modern take on the old classic will have your hardware screaming and you configuring the desktop like it was 1999!
Jack Wallen (Linux.com review)

Comments (none posted)

Anbox - Android in a Box

Simon Fels introduces his Anbox (Android in a Box) project, which uses LXC containers to bring Android applications to your desktop. "Anbox uses Linux namespaces (user, network, cgroup, pid, ..) to isolate the Android operating system from the host. For Open GL ES support Anbox takes code parts from the Android emulator implementation to serialize the command stream and send it over to the host where it is mapped on existing Open GL or Open GL ES implementations." Anbox is still pre-alpha so expect crashes and instability.

Comments (17 posted)

OpenBSD 6.1 released

OpenBSD 6.1 has been released. This version adds the arm64 platform, using clang as the base system compiler. The loongson platform supports systems with Loongson 3A CPU and RS780E chipset. The armish, sparc, and zaurus platforms have been retired.

Comments (none posted)

Open Build Service 2.8 Released

Open Build Service 2.8 has been released. "We’ve been hard at work to bring you many new features to the UI, the API and the backend. The UI has undergone several handy improvements including the filtering of the projects list based on a configurable regular expression and the ability to download a project’s gpg key and ssl certificate (also available via the API). The API has been fine-tuned to allow more control over users including locking or deleting them from projects as well as declaring users to be sub-accounts of other users. The backend now includes new features such as mulibuild - the ability to build multiple jobs from a single source package without needing to create local links. Worker tracking and management has also been enhanced along with the new obsservicedispatch service which handles sources in an asynchronous queue. Published packages can now be removed using the osc unpublish command." The reference server http://build.opensuse.org is available for all developers to build packages for the most popular distributions.

Comments (2 posted)

Page editor: Rebecca Sobol

Development

Reproducible builds

By Jake Edge
April 12, 2017

LibrePlanet

At his LibrePlanet 2017 talk, Vagrant Cascadian gave an overview of the reproducible builds project, which seeks to make it possible to build software in such a way that users can verify that the binaries they install were actually produced from the source code provided. His talk was partly aimed at getting attendees ready for a two-slot hands-on workshop on how to actually turn a software project into one that can be reproducibly built. LibrePlanet was held March 25-26 in Cambridge, Massachusetts at the Stata Center on the campus of MIT.

[Vagrant Cascadian]

Cascadian has been involved in free software for a long time. He remembers getting a whole bunch of Linux distribution CDs in the mail and finding one in particular, Debian, that stood out, in part because of its social contract. But he soon realized that even though the source code is available, there is no way to be sure that the binaries that get installed actually come from that source. Obviously, if there was no connection between the two, it would be noticeable, so the kinds of changes that could slip through are the "small, insidious changes".

In addition, reproducibility is a key component of the scientific method. If you are building software and it is not reproducible, "how is that science?" There are some simple checks that could be done using checksums or hashes of the output of a test suite, for example, but that only tests areas that we already know are problematic. The project wants to find things that we don't know about, so it is focused on creating binaries that are bit-for-bit identical.

Software is built from more than just the source code; the resulting binary is affected by various other things: the build instructions, the toolchain (compiler, linker, libraries, and so on), and the environment (time of build, running kernel version string, and others). The environment is what generally makes reproducible builds difficult, and by and large that information isn't really needed. If the environmental details are removed, and the same versions of the toolchain pieces are used, the result should be identical binaries that can then be verified by anyone.

Cascadian pointed to the famous "Reflections on Trusting Trust [PDF]" lecture given by Ken Thompson in 1984 and noted that little has been done to fix the problem in the intervening years. David A. Wheeler's Diverse Double-Compiling technique could be used to combat attacks of the kind that Thompson described but, in order to use that technique, reproducible builds are needed.

[Stata Center]

Reproducibility is important for other reasons too, Cascadian said. He pointed to an off-by-one error in OpenSSH (CVE-2002-0083) that led to privilege escalation. It could be fixed using a hex editor; it could also be reintroduced that way. In addition, a "trusting trust" attack had never been seen in the wild until 2015, when the XcodeGhost malware used a compiler backdoor to add malicious code to some 4000 apps in Apple's App Store.

Furthermore, if you are not running the software you think you are, it undermines all of the promises that free software brings. You can still run the code, "I guess", but studying the code is severely hampered if other code is included behind the scenes. You can try to fix the code, but it is moot if other code can be injected. And you certainly don't want to share the code if you don't know what's actually in it. So it undermines the four freedoms.

Reproducible builds have been discussed on the Debian mailing lists since 2007. In late 2014, Debian started automatically rebuilding the 25,000 source packages in its archive; currently, it is building 1600-2200 packages per day for each of four architectures (amd64, i386, arm64, and armhf). The project has gotten to the point where all but 5% of the software in Debian testing (roughly 1300 packages that still cannot be reproduced) can be built reproducibly.

The biggest problem area for making a package that can build reproducibly is timestamps embedded in the binary. That is how he got involved in the project. He is a maintainer of the U-Boot boot loader project and noticed that it was listed as a reproducible build, but knew that was impossible due to the inclusion of build timestamps in the binary. The best way forward is for projects to remove the timestamps entirely and use a commit ID or commit timestamp. But for those projects that really need the build timestamp, adding support for the SOURCE_DATE_EPOCH environment variable will allow building reproducibly.
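SOURCE_DATE_EPOCH is just a Unix timestamp exported into the build environment; a build script that insists on embedding a date can honor it with a couple of lines along these lines (a generic sketch, not U-Boot's actual code):

    import datetime
    import os
    import time

    # Prefer the externally supplied timestamp, so that rebuilding the same
    # source always embeds the same date; fall back to "now" otherwise.
    build_time = int(os.environ.get("SOURCE_DATE_EPOCH", time.time()))
    stamp = datetime.datetime.fromtimestamp(build_time, tz=datetime.timezone.utc)
    print("Built on", stamp.strftime("%Y-%m-%d %H:%M:%S UTC"))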

There are other common problems that make bit-for-bit identical binaries difficult. That includes things like time zones, file sort order, build paths, and locales. At this point, the project is working on the "last mile" problems; work is progressing on handling build path differences, for example.
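Many of those fixes are tiny once the problem is spotted. Anything that embeds a directory listing, for instance, needs to impose an explicit order, since filesystem ordering differs from machine to machine; a minimal illustration:

    import os

    # Non-reproducible: os.listdir() returns entries in whatever order the
    # filesystem provides, which can differ between machines and runs.
    unsorted_entries = os.listdir(".")

    # Reproducible: impose an explicit ordering before embedding the list
    # in any build output (archives, manifests, generated source, ...).
    manifest = "\n".join(sorted(unsorted_entries))
    print(manifest)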

He noted that he had mostly talked about Debian, but there are a "huge number of other projects" that are also working on the problem. Several Linux distributions (Fedora, openSUSE, Tails, Arch) are part of the effort, as are applications such as Bitcoin and Tor Browser. NixOS and GNU Guix are particularly interesting because they already incorporate the idea of reproducibility to some extent.

Moving forward, he said, there is of course more work to do. Since Debian can reproducibly build 95% of its 25,000 packages, though, it is clearly edging out of the proof-of-concept stage. He would like to see a way for users to be able to only install reproducible packages and to be able to specify a threshold of other users who have built the code identically before a package will be installed. Eventually distributions with support for that will come out; Debian will be one of them, but not in the next release that is due soon. He would also like to see reproducible builds as a standard development practice in the free-software world.

He concluded by thanking several organizations that have supported the developers working on the project: the Core Infrastructure Initiative, ProfitBricks, and Codethink. He also thanked the developers and others who are working hard on reproducible builds. He reminded attendees of the upcoming workshop and suggested that they bring their favorite project along to work on making it reproducibly buildable.

[I would like to thank the Linux Foundation for travel assistance to Cambridge, MA for LibrePlanet.]

Comments (4 posted)

Brief items

Development quotes of the week

...if you are just somebody that would like to start contributing with anything:
  • Choose a project that you like
  • Download the code
  • Compile the application
  • Choose anything easy to be your first fix
  • Create a Fix for that
  • Let it sink for a few hours and feel the inner peace
  • Go outside, see a movie, go on a date.
  • When you are back, take a *deep breath*
  • Send the patch
  • it’s ok if you faint later
Tomaz Canabrava

Let me disabuse you of any myths. I have worked in software for 20 years. I have worked in large enterprises, and scrappy startups. This software is by FAR the largest, most complex codebase I have ever interacted with. Submission of any new code was seriously considered and reviewed before it entered production (sometimes to a pedantic degree), after which JD put all new code through 10s of thousands of hours of testing on production equipment. Production and release cycles take on the order of months to ensure that we don't kill people. These are not riding lawnmowers. They are 30-ton combines, and 20 ton tractors tilling fields, with massive horsepower behind them. They have a real potential to end peoples lives in the event of failure, and these tractors do (in testing) fail in spectacular ways. If a team of hundred of engineers struggle with their codebase internally, Joe Farmer isn't going to have a fucking clue how to repair their software correctly.

Now should you, in theory, have the right to modify equipment you own? Sure. Absolutely. Hell, John Deere tractors run on open source software. But trust me on this, locking this down is a very good idea.

If you have the drive to make open source tractor software AND can make absolutely certain no-one ever dies from code you write, then go do it. Just keep in mind that the engineers that work on this shit really care about keeping people safe.

throwaway_jddev (Thanks to Paul Wise)

The DRM community really has come a long, long, way. Great to see it so thriving and healthy that people are actively dusting off ancient drivers which never got merged, deleting most of them in the process, and getting them in just because the process works so well.
Daniel Stone

Cap'n Proto's capability system does not allow one to send a promise to a third party. It's possible in theory but in practice it'll lead to pain, suffering and CORBA.
Cyberax (Thanks to Jeroen Nijhof)

Comments (3 posted)

The new contribution workflow for GNOME

The GNOME Project has announced a streamlined contribution system built around a Flatpak-based build system. "No specific distribution required. No specific version required. No dependencies hell. Reproducible, if it builds for me it will build for you. All with an UI and integrated, no terminal required. Less than five minutes of downloading plus building and you are contributing."

Comments (11 posted)

Haas: New Features Coming in PostgreSQL 10

Here's an extensive summary of new features in the upcoming PostgreSQL 10 release from Robert Haas. "PostgreSQL has had physical replication -- often called streaming replication -- since version 9.0, but this requires replicating the entire database, cannot tolerate writes in any form on the standby server, and is useless for replicating across versions or database systems. PostgreSQL has had logical decoding -- basically change capture -- since version 9.4, which has been embraced with enthusiasm, but it could not be used for replication without an add-on of some sort. PostgreSQL 10 adds logical replication which is very easy to configure and which works at table granularity, clearly a huge step forward. It will copy the initial data for you and then keep it up to date after that."

Comments (4 posted)

Nginx 1.12 Released

The Nginx web server version 1.12 has been released, "incorporating new features and bug fixes from the 1.11.x mainline branch - including variables support and other improvements in the stream module, HTTP/2 fixes, support for multiple SSL certificates of different types, improved dynamic modules support, and more." The changelog has more details.

Comments (10 posted)

Portable Computing Language (pocl) v0.14 released

Pocl aims to become a performance portable open source (MIT-licensed) implementation of the OpenCL standard. Version 0.14 adds support for LLVM/Clang 4.0 and 3.9 and a new binary format that enables running OpenCL programs on hosts without online compiler support. There is also initial support for out-of-order command queue task scheduling and plenty of bug fixes.

Comments (none posted)

Stone: Ubuntu rejoins the GNOME fold

Daniel Stone considers the future of the Linux desktop in the light of Ubuntu's return to GNOME. "The world in 2017, however, is a very different place. KMS provides us truly device-independent display control, Vulkan and EGL provide us GPU acceleration independent of window system, xkbcommon provides shared keyboard mechanics, and logind lets us do all these things without ever being root. GBM allocates our buffers, and the universal allocator, borne out of discussions with the whole community including NVIDIA, will soon join the family. Mir leans heavily on all these technologies, so the change is a bit less seismic than you might think."

Comments (11 posted)

Page editor: Rebecca Sobol

Announcements

Brief items

freedesktop.org CoC

Freedesktop.org has adopted a code of conduct. "The culture of our member projects reflect on us as a wider organisation, and the problems of abusive and bullying behaviour weren’t solving themselves. In some specific cases we looked at, we were told directly by senior figures in the project that the lack of a defined fd.o-wide CoC made it harder for them to enforce it themselves. In the end, the only course of action was completely clear: that we take the same approach to unacceptable behaviour as we do to legally-unacceptable content. Enforcing it across the platform gives everyone complete clarity of what’s required (i.e. behaving like reasonable human beings)."

Comments (2 posted)

Mozilla Awards $365,000 to Open Source Projects as part of MOSS

The Mozilla Open Source Support (MOSS) program awards grants to projects "that contribute to our work and to the health of the Internet." Recent recipients include SecureDrop, libjpeg-turbo, LLVM, LEAP Encryption Access Project, and Tokio. There have also been MOSS supported audits of ntp, ntpsec, curl, and more. "We ran a major joint audit on two codebases, one of which is a fork of the other – ntp and ntpsec. ntp is a server implementation of the Network Time Protocol, whose codebase has been under development for 35 years. The ntpsec team forked ntp to pursue a different development methodology, and both versions are widely used. As the name implies, the ntpsec team suggest that their version is or will be more secure. Our auditors did find fewer security flaws in ntpsec than in ntp, but the results were not totally clear-cut."

Comments (6 posted)

Newsletters

Kernel development

Distributions and system administration

Development

Meeting minutes

Articles of interest

Silber: A new vantage point

Jane Silber announces the end of her tenure as CEO of Canonical. "Over the next three months I will remain CEO but begin to formally transfer knowledge and responsibility to others in the executive team. In July, Mark [Shuttleworth] will retake the CEO role and I will move to the Canonical Board of Directors. In terms of a full-time role, I will take some time to recharge and then seek new challenges."

Comments (5 posted)

Calls for Presentations

Call For Participation for the 2017 Python Language Summit

The Python Language Summit is an invitation-only event for the developers of Python implementations. It will be held May 17 in Portland, Oregon, co-located with PyCon. The call for participation has been extended until April 20.

Full Story (comments: none)

PgDay Argentina 2017 in Santa Fe - Call For Papers is Open

PgDay Argentina will take place June 9 in Santa Fe, Argentina. "If you work with PostgreSQL, we would like to hear about your experience. Presentations can be on any topic related to PostgreSQL. Talks can be about how you deal with tools, migration, existing features and new ones to appear in PG10, performance tuning." The call for papers is open and no deadline is specified.

Full Story (comments: none)

1st Call For Papers - 24th Annual Tcl/Tk Conference (Tcl'2017)

The Tcl/Tk Conference will take place October 16-20 in Houston, Texas. "The program committee is asking for papers and presentation proposals from anyone using or developing with Tcl/Tk (and extensions)." The submission deadline is August 21.

Full Story (comments: none)

CFP Deadlines: April 13, 2017 to June 12, 2017

The following listing of CFP deadlines is taken from the LWN.net CFP Calendar.

Deadline | Event Dates | Event | Location
April 14 | June 30 | Swiss PGDay | Rapperswil, Switzerland
April 16 | July 9 - July 16 | EuroPython 2017 | Rimini, Italy
April 18 | October 2 - October 4 | O'Reilly Velocity Conference | New York, NY, USA
April 20 | April 28 - April 29 | Grazer Linuxtage 2017 | Graz, Austria
April 20 | May 17 | Python Language Summit | Portland, OR, USA
April 23 | July 28 - August 2 | GNOME Users And Developers European Conference 2017 | Manchester, UK
April 28 | September 21 - September 22 | International Workshop on OpenMP | Stony Brook, NY, USA
April 30 | September 21 - September 24 | EuroBSDcon 2017 | Paris, France
May 1 | May 13 - May 14 | Linuxwochen Linz | Linz, Austria
May 1 | October 5 | Open Hardware Summit 2017 | Denver, CO, USA
May 2 | October 18 - October 20 | O'Reilly Velocity Conference | London, UK
May 5 | June 5 - June 7 | coreboot Denver2017 | Denver, CO, USA
May 6 | September 13 - September 15 | Linux Plumbers Conference 2017 | Los Angeles, CA, USA
May 6 | September 11 - September 14 | Open Source Summit NA 2017 | Los Angeles, CA, USA
May 7 | August 3 - August 8 | PyCon Australia 2017 | Melbourne, Australia
May 15 | June 3 | Madrid Perl Workshop | Madrid, Spain
May 21 | June 24 | Tuebix: Linux Conference | Tuebingen, Germany
May 29 | September 6 - September 8 | PostgresOpen | Silicon Valley, CA, USA
May 29 | June 24 - June 25 | Enlightenment Developer Days 2017 | Valletta, Malta
May 30 | July 3 - July 7 | 13th Netfilter Workshop | Faro, Portugal
May 30 | October 10 - October 12 | Qt World Summit | Berlin, Germany
May 31 | September 7 | ML Family Workshop | Oxford, UK
May 31 | September 8 | OCaml Users and Developers Workshop | Oxford, UK
May 31 | October 23 - October 29 | Privacyweek | Vienna (Wien), Austria
June 1 | September 18 - September 19 | OpenMP Conference | Stony Brook, NY, USA
June 5 | September 25 - September 26 | Open Source Backup Conference 2017 | Köln, Germany
June 5 | September 14 - September 15 | Linux Security Summit | Los Angeles, CA, USA
June 5 | September 8 - September 9 | PyCon Japan | Tokyo, Japan
June 8 | August 25 - August 27 | GNU Hackers' Meeting 2017 | Kassel, Germany
June 11 | August 6 - August 12 | DebConf 2017 | Montreal, Quebec, Canada

If the CFP deadline for your event does not appear here, please tell us about it.

Upcoming Events

Contribute your skills to Debian in Montreal

There will be a bug squashing party in Montreal, Canada on April 14. "Whether you're a computer user, a graphics designer, or a bug triager, there are many ways you can contribute to this effort. We also welcome experience in consensus decision-making, anti-harassment teams, and package maintenance. No effort is too small and whatever you bring to this community will be appreciated."

Full Story (comments: none)

EuroPython updates

EuroPython will take place July 9-16 in Rimini, Italy. Tickets are available and applications for financial aid must be submitted by April 24.

Comments (none posted)

Events: April 13, 2017 to June 12, 2017

The following event listing is taken from the LWN.net Calendar.

Date(s) | Event | Location
April 10 - April 13 | IXPUG Annual Spring Conference 2017 | Cambridge, UK
April 17 - April 20 | Dockercon | Austin, TX, USA
April 21 | Osmocom Conference 2017 | Berlin, Germany
April 22 | 16. Augsburger Linux-Infotag 2017 | Augsburg, Germany
April 26 | foss-north | Gothenburg, Sweden
April 28 - April 29 | Grazer Linuxtage 2017 | Graz, Austria
April 28 - April 30 | Penguicon | Southfield, MI, USA
May 2 - May 4 | 3rd Check_MK Conference | Munich, Germany
May 2 - May 4 | samba eXPerience 2017 | Goettingen, Germany
May 2 - May 4 | Red Hat Summit 2017 | Boston, MA, USA
May 4 - May 5 | Lund LinuxCon | Lund, Sweden
May 4 - May 6 | Linuxwochen Wien 2017 | Wien, Austria
May 6 - May 7 | LinuxFest Northwest | Bellingham, WA, USA
May 6 - May 7 | Community Leadership Summit 2017 | Austin, TX, USA
May 6 - May 7 | Debian/Ubuntu Community Conference - Italy | Vicenza, Italy
May 8 - May 11 | O'Reilly Open Source Convention | Austin, TX, USA
May 8 - May 11 | OpenStack Summit | Boston, MA, USA
May 8 - May 11 | 6th RISC-V Workshop | Shanghai, China
May 13 - May 14 | Open Source Conference Albania 2017 | Tirana, Albania
May 13 - May 14 | Linuxwochen Linz | Linz, Austria
May 16 - May 18 | Open Source Data Center Conference 2017 | Berlin, Germany
May 17 | Python Language Summit | Portland, OR, USA
May 17 - May 21 | PyCon US | Portland, OR, USA
May 18 - May 20 | Linux Audio Conference | Saint-Etienne, France
May 22 - May 25 | OpenPOWER Developer Congress | San Francisco, CA, USA
May 22 - May 24 | Container Camp AU | Sydney, Australia
May 22 - May 25 | PyCon US - Sprints | Portland, OR, USA
May 23 | Maintainerati | London, UK
May 24 - May 26 | PGCon 2017 | Ottawa, Canada
May 26 - May 28 | openSUSE Conference 2017 | Nürnberg, Germany
May 31 - June 2 | Open Source Summit Japan | Tokyo, Japan
June 1 - June 2 | Automotive Linux Summit | Tokyo, Japan
June 3 | Madrid Perl Workshop | Madrid, Spain
June 5 - June 7 | coreboot Denver2017 | Denver, CO, USA
June 9 | PgDay Argentina 2017 | Santa Fe, Argentina
June 9 - June 10 | Hong Kong Open Source Conference 2017 | Hong Kong, Hong Kong
June 9 - June 11 | SouthEast LinuxFest | Charlotte, NC, USA

If your event does not appear here, please tell us about it.

Page editor: Rebecca Sobol


Copyright © 2017, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds