
LWN.net Weekly Edition for July 21, 2016

On the boundaries of GPL enforcement

By Jake Edge
July 20, 2016

Last October, the Software Freedom Conservancy (SFC) and Free Software Foundation (FSF) jointly published "The Principles of Community-Oriented GPL Enforcement". That document described what those organizations believe the goal of enforcement efforts should be and how those efforts should be carried out. Several other organizations endorsed the principles, including the netfilter project earlier this month. It was, perhaps, a bit puzzling that the project would make that endorsement at that time, but a July 19 SFC blog post sheds some light on the matter.

There have been rumblings for some time about a kernel developer doing enforcement in Germany that might not be particularly "community-oriented", but public information was scarce. Based on the blog post by Bradley Kuhn and Karen Sandler, though, it would seem that Patrick McHardy, who worked on netfilter, is the kernel developer in question. McHardy has also recently been suspended from the netfilter core team pending his reply to "severe allegations" with regard to "the style of his license enforcement activities".

The SFC post is a bit more specific about what McHardy has been accused of:

There are few public facts on Patrick's enforcement actions, though there are many rumors. That his enforcement work exists is indisputable, but its true nature, intent, and practice remains somewhat veiled. The most common criticism that we hear from those who have been approached by Patrick is an accusation that he violates one specific Principle: prioritizing financial gain over compliance.

There is, it seems, a subculture of GPL enforcement out there that is effectively doing enforcement for profit:

Specifically, we remain aware of multiple non-community-oriented GPL enforcement efforts, where none of those engaged in these efforts have endorsed our principles nor pledged to abide by them. These "GPL monetizers", who trace their roots to nefarious business models that seek to catch users in minor violations in order to sell an alternative proprietary license, stand in stark contrast to the work that Conservancy, FSF and gpl-violations.org have done for years.

It is not clear whether McHardy is among the GPL monetizers or not, though there is seemingly no evidence that his efforts have led to any code being released. In addition, repeated attempts by both SFC and the netfilter team to discuss his enforcement efforts have not been answered—or even acknowledged. According to the blog post, the SFC invited McHardy to join in the drafting of the principles. That invitation went unanswered, as did another to endorse the principles after they were published. Amid the accusations from companies about his actions, some kind of response to SFC or the netfilter team would seem to make sense. The absence of that response speaks volumes, at least to some.

In fact, if McHardy disagrees with some parts of the principles, the SFC has invited him (or others who are enforcing the GPL) to "publicly propose updates or modifications to the Principles". There is a new mailing list available to host those kinds of discussions.

But if, in fact, the enforcement actions taken by McHardy are being done as a for-profit exercise, it is hard to see what response he could make. He is under no real obligation to work with others who are also enforcing the license. If he disagrees with the principles, engaging with the community about his objections would certainly be welcome, but it is apparently not a priority for him.

It is a topic that should be discussed in our communities, however, and the release of the principles was partly meant to foster that discussion. What should the primary goal of enforcement be? Should companies be "punished" for violating the GPL and, if so, how? If compliance is the goal, how should enforcement activities be funded? And so on.

When the SFC began a fundraising campaign to support its GPL enforcement efforts late last year, some were incredulous that enforcement was not self-sustaining. But prioritizing sustainability has dangers of its own, from taking money to overlook compliance problems to holding up settlements over monetary issues even after the company has come into compliance. If the goal is to get the software released, as the philosophical underpinnings of the GPL imply, then compliance should clearly be the overarching consideration.

There are elements that would like to see the GPL not be enforced at all, of course. In effect, that turns the GPL into the BSD license, which has plenty of implications of its own. There is no real way to know how much the GPL has helped in the rise of Linux versus its non-copylefted alternatives, but it would be hard to argue that the license played no role whatsoever.

If enforcement were to stop, at least in the community-oriented sense, what would be the effect on the companies that work hard (and spend lots of money) to ensure their compliance with the license? They would largely be safe from any for-profit shakedown enforcement efforts, but those with deep pockets are generally already safe from those tactics.

As with many things in open-source communities, there are lots of different—sometimes conflicting—opinions about license enforcement, its benefits and drawbacks, and so forth. There is room for trying multiple approaches, but enforcement, at least under the principles that have been defined so far, is not an inexpensive proposition. That suggests either that some of the deep-pocketed organizations in our communities will step up or that we will continue muddling along on the current path.

The lack of any real consensus on license enforcement, especially within the commercial side of the community, does leave room for some to abuse the process. Some of that is already happening, but success in GPL shakedowns could lead to more participants. There is a risk of a huge wave of copyright trolls using the GPL to extract money from companies, which would not be a pleasant outcome.

In the end, license enforcement is up to the collective copyright holders; if most of those are happy with the current state of affairs, it is hard to see how things will change. The GPL is meant to level the playing field, so that all participants have the same rights—and responsibilities—to the code. But if that playing field is seen as "level enough", even while GPL violations abound, enforcement may well be seen by major players as more trouble than it is worth.

Comments (13 posted)

Snap interfaces for sandboxed applications

By Nathan Willis
July 20, 2016

Last week, we took a look at the initial release of the "portal" framework developed for Flatpak, the application-packaging format currently being developed in GNOME. For comparison, we will also explore the corresponding resource-control framework available in the Snap format developed in Ubuntu. The two packaging projects have broadly similar end goals, as many have observed, but they tend to vary quite a bit in the implementation details. Naturally, those differences are of particular importance to the intended audience: application developers.

There is some common ground between the projects. Both use some combination of techniques (namespaces, control groups, seccomp filters, etc.) to restrict what a packaged application can do. Moreover, both implement a "deny by default" sandbox, then provide a supplemental means for applications to access certain useful system resources on a restricted or mediated basis. As we will see, there is also some overlap in what interfaces are offered, although the implementations differ.

Snap has been available since 2014, so its sandboxing and resource-control implementations have already seen real-world usage. That said, the design of Snap originated in the Ubuntu Touch project aimed at smartphones, so some of its assumptions are undergoing revision as Snap comes to desktop systems.

In the Snap framework, the interfaces that are defined to provide access to system resources are called, simply, "interfaces." As we will see, they cover similar territory to the recently unveiled "portals" for Flatpak, but there are some key distinctions.

Two classes of Snap interfaces are defined: one for the standard resources expected to be of use to end-user applications, and one designed for use by system utilities. Snap packages using the standard interfaces can be installed with the snap command-line tool (which is the equivalent of apt for .deb packages). Packages using the advanced interfaces require a separate management tool.

The standard interfaces defined as of the Ubuntu 16.04 release are:

  • network: which provides access to the network
  • network-bind: which allows the package to run a server bound to a network port
  • unity7: which allows the package to access the Unity 7 desktop environment's services (such as notifications, the application menu, and the input-method switcher)
  • x11: which allows the package to access the X server
  • pulseaudio: which allows playback-only access to the PulseAudio sound server
  • opengl: which allows access to the machine's OpenGL hardware
  • home: which allows access to non-hidden files in the user's $HOME directory
  • gsettings: which allows the package to access the user's GSettings data
  • optical-drive: which provides read-only access to the first optical drive in the system
  • mpris: which mediates access to Media Player Remote Interfacing Specification (MPRIS) programs
  • camera: which provides access to the first video camera on the system

From that list, the unity7, x11, opengl, home, and gsettings interfaces are "reserved," meaning that Snap packages using these features will be reviewed when they are submitted for inclusion in Ubuntu's Snap repository. No such review takes place for individual .snap files that a user acquires and installs on their own, of course. Perhaps a more relevant safeguard is that the home, mpris, and camera interfaces do not allow the application to auto-connect: manual approval by the user is required.
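
The two safeguards are independent: a reserved interface triggers review when a package is submitted to the repository, while a non-auto-connecting interface requires the user's manual approval regardless. A minimal Python sketch (not snapd code; the sets are copied from the lists above) of how the rules combine:

    RESERVED = {"unity7", "x11", "opengl", "home", "gsettings"}
    NO_AUTO_CONNECT = {"home", "mpris", "camera"}

    def review_plugs(requested):
        """Classify a package's requested interfaces under the rules above."""
        requested = set(requested)
        return {
            "store_review": sorted(requested & RESERVED),
            "user_approval": sorted(requested & NO_AUTO_CONNECT),
            "auto_connected": sorted(requested - NO_AUTO_CONNECT),
        }

    print(review_plugs(["network", "camera", "x11"]))
    # {'store_review': ['x11'], 'user_approval': ['camera'],
    #  'auto_connected': ['network', 'x11']}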

The mpris interface is special for another reason, in that Snap packages can register themselves as either a "slot" (that is, a service provider) or a "plug" (that is, an MPRIS controller). So a remote-control application would register as a plug, while a music server would register as a slot.

In all other cases, the slot end of the interface is an existing system facility, and permission to access it is governed by some aspect of the sandbox. For example, device control groups are used to provide access to hardware. A device control group is created for each application; an application using the camera interface will have /dev/video0 included in its control group, while applications without that interface will not.
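
Underneath, the devices control group amounts to whitelisting device nodes. As a rough illustration only (a hand-rolled sketch that drives the cgroup-v1 devices controller directly; snapd's actual plumbing differs, and the cgroup path here is hypothetical), granting a sandbox access to /dev/video0 looks something like:

    import os

    CGROUP = "/sys/fs/cgroup/devices/snap.example.app"  # hypothetical path

    st = os.stat("/dev/video0")
    # Whitelist entry: character device, major:minor, read/write/mknod
    entry = "c %d:%d rwm" % (os.major(st.st_rdev), os.minor(st.st_rdev))

    with open(os.path.join(CGROUP, "devices.allow"), "w") as f:
        f.write(entry)  # typically "c 81:0 rwm" for the first video device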

By default, every Snap application is confined to a sandbox defined with a generic AppArmor policy. Each interface that is enabled layers on an additional AppArmor policy that enables the capabilities required for the chosen service. The interfaces requested by the application are listed in its package manifest, which is read by the package installer.

Every executable in a Snap package receives its own security label, of the form snap.packagename.command, which can be used to audit application behavior at a per-executable level, in addition to implementing the relevant access controls. A command-line tool is available that will show all installed Snap packages and which interfaces each has subscribed to; it can also be used to disconnect a package from an interface.

It is also worth pointing out, however, that AppArmor is not explicitly required in order to run the Snap system. The default sandbox policy and the per-interface policies could be defined for SELinux or some other Linux Security Module (LSM). Doing so in a consistent manner across several LSMs would require careful policy writing, of course. Zygmunt Krynicki has recently been working on an SELinux implementation intended to enable Snap packages to run on Fedora and Arch Linux.

The advanced interfaces include:

  • cups-control: which provides access to the Common UNIX Printing System (CUPS) control socket
  • firewall-control: which allows the application to configure the firewall
  • locale-control: which allows the application to alter the system locale
  • log-observe: which allows the application to read system logs and to change the rate-limiting of kernel logs
  • mount-observe: which enables the application to read filesystem-mount information
  • network-control: which allows the application to configure networking
  • network-observe: which lets the application query network status
  • serial-port: which provides access to serial ports
  • snapd-control: which allows the application to manage Snap packages
  • system-observe: which allows the application to query system information
  • timeserver-control: which allows the application to manage timeservers

The advanced interfaces seem to be derived, in large part, from Snap's origin as the package system for Ubuntu Touch. As a result, there is not yet a desktop tool provided to enable installation or management of Snap packages that request access to these interfaces.

Comparisons

An interesting distinction between Snap and Flatpak is that Flatpak has, so far, placed greater emphasis on user intervention as a means of mediating access to system resources. Six of Flatpak's initial ten portals require the user to approve resource-access requests, while only three of Snap's interfaces do the same. The portal model mimics the "Intents" system used by Android, in which a system process mediates each sensitive request from the sandboxed app—asking the user to verify the request interactively, then relaying the response back to the app.

So far, Snap's model has worked more like "app permissions" in Android; the installation tool asks the user to approve usage of the requested interface. When reached for comment, though, Ubuntu's Ted Gould said he thought it likely that Snap on the desktop would eventually move to a more interactive process.

There are a number of other differences in the security models employed by Flatpak and Snap, so a direct comparison is of limited value. For instance, all Snap packages get access to a per-application private directory in which they can store files. Consequently, some of the need for Flatpak's Documents portal is alleviated.

As a separate example, Snap's unity7 interface includes all of the desktop environment's services, while Flatpak's Screenshot, Inhibit, and Notification portals provide granular access to separate desktop features. But, on the flip side, Snap packages can include multiple executables, each of which has its own list of specified interfaces. Thus, a GUI application might be configured for a limited set of interfaces, while a related management tool might request additional access to the system.

In other words, while Flatpak provides per-feature granularity of certain services, Snap offers finer-than-package-level granularity for interfaces. Which is more useful to developers may depend on whom one asks.

Looking forward

Snap, like Flatpak, is undergoing heavy development, and the development version of the system already supports several additional interfaces that were not released with Ubuntu 16.04. The new additions include interfaces for Bluetooth access, geolocation, and accessing ModemManager. In addition, quite a few other interface proposals are being tracked in the project's issue tracker, including interfaces for working with power-management information, GPIOs, and Industrial I/O (IIO).

It remains to be seen whether or not Flatpak's portals and Snap's interfaces will evolve toward a set of consistent (if not identical) resource-control definitions. Certainly some of the existing portals and interfaces implement similar ideas. And several of the portals proposed by developers working on Flatpak implement features already covered in Snap interfaces.

Developers would certainly benefit from having similar sets of permissions to consider when creating their packages—as, no doubt, would end users. Anyone familiar with the recent history of desktop Linux distributions may find it hard to muster any hope that the two projects will attempt to arrive at compatible solutions, of course. But it is still early in the process for both projects, and the importance of sandboxed desktop application packages seems to be well-understood by all. So perhaps the projects can at least drift toward a common goal.

Comments (1 posted)

Anonymous publishing with Riffle

By Nathan Willis
July 20, 2016

Preserving anonymity online is an understandably hot topic these days. But it can be confused with related concepts like privacy and secure communication. A new protocol called Riffle was recently published [PDF] by researchers at MIT; it offers a different take on anonymity than that implemented by other projects. A Riffle network could be used to implement an anonymous but verifiable blogging or publishing platform: one in which the messages are visible to everyone, but the identity of all users remains hidden.

For comparison, the most well-known anonymity project is, no doubt, Tor, which enables users to access Internet services without revealing their physical location on the network. It is possible to use Tor to access publishing services like Twitter and, thus, to broadcast content to the Internet at large without revealing one's identity. But Tor is just as useful at solving other problems, such as accessing remote servers that are blocked by a firewall. While important, that usage of Tor does not necessarily involve anonymity; one could, for instance, use it to log in to Facebook, and Tor alone does not prevent the use of web trackers by sites.

Furthermore, Tor is the focus of near-constant attacks (against the network itself and against the algorithms that keep it working), and it may be vulnerable to large-scale traffic analysis—such as a national ISP could perform. One of the stated goals of Riffle is to prevent such traffic analysis, which has led to popular reports and online discussions referring to Riffle as a Tor competitor.

But Riffle, in fact, tackles a narrower problem set. In a Riffle network, every message sent or file uploaded is eventually published in plaintext form where everyone can see it. The Riffle protocol offers strong guarantees that the identity of the message's uploader cannot be discovered—even in cases where multiple servers in the network have been compromised.

Background

The system builds on two existing ideas: verifiable shuffles [PDF] and dining cryptographer networks (DC-nets). Verifiable shuffles enable a server to reorder a set of incoming messages before sending them back out in a seemingly random sequence, while allowing the participants to verify that all of the output messages correspond to the inputs. In particular, participants can verify that all of their messages were delivered and that no phony messages were inserted.

Such shuffle algorithms generate a reordered sequence of messages as well as a proof that can be used to verify the validity of the shuffling step, but that cannot be used to reverse-engineer the permutation used. They can also be employed by pools of servers in so-called "mixnets," which make it difficult for an observer to analyze network traffic in hopes of discovering who sent any particular message (thus guarding against Tor's weakness), but at an inconveniently high computational cost.

DC-nets provide strong anonymity by having each node in the network transmit a message that is cryptographically mixed with the message of a neighbor node, which is then mixed with the message of that node's neighbor, and so forth. The pairwise mixing of messages also makes it impossible for cheaters to unmask a sender unless they control every other node. But DC-nets require every node to send traffic in every round, and they do not scale well because the message channel is broadcast-only: all messages are passed around to all participants, making poor use of the available bandwidth. With more than a few dozen participants, throughput slows to a crawl.
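
The XOR trick at the heart of a DC-net is simple enough to show in a few lines. Here is a toy, single-round DC-net in Python (an illustration of the idea only, with none of the scheduling or collision handling a real protocol needs): every pair of nodes shares a random pad, each node broadcasts the XOR of its pads, and the sender additionally XORs in its message. XORing all broadcasts cancels every pad, revealing the message without identifying the sender.

    import secrets

    N = 4              # number of nodes
    MSG_LEN = 16       # fixed message size, in bytes

    # Every pair of nodes shares a random pad: pads[i][j] == pads[j][i]
    pads = [[None] * N for _ in range(N)]
    for i in range(N):
        for j in range(i + 1, N):
            pads[i][j] = pads[j][i] = secrets.token_bytes(MSG_LEN)

    def xor(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    def broadcast(i, message=None):
        out = bytes(MSG_LEN)
        for j in range(N):
            if j != i:
                out = xor(out, pads[i][j])
        if message is not None:    # only the sender mixes in real data
            out = xor(out, message)
        return out

    # Node 2 sends anonymously; everyone else transmits only cover pads
    outputs = [broadcast(i, b"hello, dc-net!\0\0" if i == 2 else None)
               for i in range(N)]

    result = bytes(MSG_LEN)
    for o in outputs:
        result = xor(result, o)
    print(result)    # the message emerges, with no trace of node 2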

The Dissent system was able to overcome some of traditional DC-nets' limitations by splitting the nodes into server and client classes. The servers (presumably higher-end hardware) can take care of the verifiable shuffle without imposing that computational burden on the clients. Dissent also altered the trust model versus traditional DC-nets (in which every node participated in the message-passing step), letting the servers shoulder much of that burden as well. The authors demonstrated that, as long as at least one server remains uncompromised, clients could trust the entire Dissent server pool. Nevertheless, the authors of the Riffle paper said, Dissent still slows down proportionally as more users join the network.

Riffle

Riffle is, in many respects, an iteration on Dissent designed to overcome that system's bandwidth-sharing problem. Like Dissent, Riffle has servers and clients, but each client consumes only bandwidth proportional to its own message size. Furthermore, the computational load on the servers is reduced; the intensive verifiable-shuffle calculations are performed only periodically (such as when the set of available servers changes).

The bandwidth reduction is accomplished by using different upload and download mechanisms. When a client sends a message, it is placed into the network's verifiable-shuffle system. When a client downloads messages, it uses a separate private information retrieval (PIR) protocol.

In the setup phase, each server first generates a key pair and publishes the public key to all of the clients. The servers in the pool also generate a set of permutations they will use for the verifiable shuffle and exchange proofs with the other servers. Once setup is complete, however, the servers continue to use this fixed shuffle, alleviating the need to compute a new shuffle for every round of messages, as was done in traditional verifiable-shuffle mixnets.

In the communication phase, each client onion-encrypts its outgoing message—meaning that the message is encrypted, in turn, with the public key of each server. But each client opens a channel to just one server and sends it the onion-encrypted message, breaking it into fixed-size chunks to better obscure message size.

Next, each server decrypts the message chunks it has received using its private key, then shuffles the chunks and relays them to the next server. ElGamal keys are used for the onion-encryption stage because they are commutative; the servers in the pool can each decrypt the message they see using their own key, and the order of the original encryption does not affect the output. Once the messages reach the last server in the pool, they have been decrypted by every private key and are now plaintext.
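
The commutativity being relied on is easy to demonstrate. The sketch below stands in for ElGamal with a simpler commutative exponentiation cipher (SRA-style, with toy parameters that are nowhere near secure); the point is only that layers added by the client can be stripped by the servers in any order:

    from math import gcd
    import secrets

    P = 2**127 - 1    # a Mersenne prime; exponents live modulo P - 1

    def keypair():
        """Pick an encryption exponent e and its inverse d mod P - 1."""
        while True:
            e = secrets.randbelow(P - 3) + 2
            if gcd(e, P - 1) == 1:
                return e, pow(e, -1, P - 1)    # needs Python 3.8+

    servers = [keypair() for _ in range(3)]

    m = int.from_bytes(b"anonymous", "big")
    c = m
    for e, _ in servers:     # the client wraps one layer per server...
        c = pow(c, e, P)

    for _, d in servers:     # ...and the layers come off in any order
        c = pow(c, d, P)
    assert c == m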

Finally, the clients download the messages they want using PIR. In this scheme, the server pool publishes an index of the chunks available at each server, and each client requests a random set of indexes that includes the messages it is interested in. That way, each client uses only as much bandwidth as it needs; if some other client has uploaded a large document, other clients are not obligated to download it. The authors of the paper note, however, that the PIR step is not critical to the rest of the system; if a shared message channel is preferable, users could simply run Riffle in a broadcast manner instead.
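
The retrieval step, as described above, boils down to hiding the interesting index among decoys. A hedged sketch (the cover-traffic intuition only; the actual PIR protocol in the paper is cryptographic, and these names are illustrative):

    import secrets

    def pir_request(num_chunks, wanted, set_size=8):
        """Hide the wanted chunk index inside a random set of decoys."""
        indexes = {wanted}
        while len(indexes) < set_size:
            indexes.add(secrets.randbelow(num_chunks))
        return sorted(indexes)    # the server cannot tell which one matters

    request = pir_request(num_chunks=1000, wanted=42)
    assert 42 in request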

The computational savings occur because Riffle does the setup phase only once per "epoch," which entails the verifiable-shuffle computations by the servers, and uses less expensive TLS channels to exchange all other content. What constitutes an epoch is not strictly defined; whenever a server joins or leaves the pool, the servers will have to redo the verifiable-shuffle setup, but the network could be configured to periodically start a new epoch for added security.

The paper goes on to demonstrate that Riffle is resistant to the same attacks that verifiable-shuffle mixnets and older DC-nets protect against. The real question is whether or not the new system results in a scheme that can scale better to large networks. The authors cite tests showing that Riffle can achieve an average bandwidth of 100KB/s for a network of 200 clients when tuned for file-sharing usage. When tuned for speed instead, as one might do for a Twitter-style microblogging network, the authors claim that Riffle can support a network of 10,000 clients with a message latency of less than one second.

The Riffle paper's main author, Albert Kwon, has released a prototype implementation on GitHub, which he called "performance accurate (but probably not security accurate)". The repository includes client and server code, both written in Go, but it, regrettably, does not have any license attached.

There are clearly plenty of interesting ideas to be found in Riffle, although at this point it is hard to say whether or not the paper or its concepts will have a practical impact on projects like Tor. One subscriber did post links to the Riffle paper to the tor-talk mailing list, but it has elicited no discussion so far.

It is easy to speculate that some of Riffle's components (such as verifiable shuffles) could potentially be added to Tor, and might even be beneficial. But perhaps Riffle will prove most interesting to developers tackling different problems altogether. There might be uses for a Twitter-like microblogging service that has strong anonymity baked in from the start, or for an anonymous file-sharing network. Either of those use cases could prove to be a valuable tool for end users, without requiring use of the more general-purpose Tor network.

Comments (2 posted)

Page editor: Jonathan Corbet

Security

Typosquatting in package repositories

By Jake Edge
July 20, 2016

"Typosquatting" is normally associated with registering domain names that are variants of popular domains, such that a user trying to reach the site might mistype and land on the variant page, which might be serving up malware, ads, or some kind of phishing scam. But in early June, Nikolai Tschacher reported on some research he had done that used typos in package names for languages like Python and Ruby to show that their package repositories were vulnerable; an attacker could use that flaw to execute code on remote systems. It was a fairly eye-opening report (as is the thesis [PDF] it was based on) that bears some further scrutiny.

Package managers often require privileges, which makes packages both a security danger and a target for attackers. Distribution packages are normally signed by a distribution key, which makes it much harder—though certainly not impossible—for an attacker to subvert those packages. But language package repositories, or those for frameworks like Node.js, are not so centralized. In fact, they are meant to be places where anyone can upload their code, with little or no vetting of that code.

So it is relatively easy for an attacker to upload a package with malware to sites like the Python Package Index (PyPI), RubyGems.org, or npmjs.com, but that is only part of the puzzle. In order to get users to actually install the packages, they must be enticed to do so somehow—that's where typosquatting comes into play.

So, if there is a popular PyPI package called "requests", which, of course, there is, then a typo version of the name, "reqeusts", say, might find its way to some systems. A user who typed:

    $ sudo pip install reqeusts
would be installing a potentially malicious package—and doing so as root.

The danger of installing language packages as root is well known, but it is still regularly done. The pip command for Python runs setup.py from the package as part of the installation process. That makes it easy for a malicious actor to run their code to get malware onto the system—or for a researcher to add some non-malicious test code to the system. The JavaScript npm package manager and the Ruby gem package manager also provide ways to execute code at installation time.
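
To make the mechanism concrete, here is a minimal (and deliberately benign) sketch of a setup.py that runs code at installation time by overriding the setuptools install command; statements at the top level of setup.py execute as well. This shows the kind of hook such a package could use, not Tschacher's actual code:

    from setuptools import setup
    from setuptools.command.install import install

    class NotifyingInstall(install):
        def run(self):
            # Arbitrary code runs here, with the installing user's privileges
            print("this code executes during 'pip install'")
            install.run(self)

    setup(
        name="reqeusts",    # the typo name from the example above
        version="0.0.1",
        cmdclass={"install": NotifyingInstall},
    )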

So Tschacher created some 200 packages, each containing a notification program and bearing a name that was some form of typo of a real package's name. He uploaded them to the repositories over a few months and waited to see what notifications he would get. The notification program gathered some basic information about the system and whether the user was doing the installation with administrative rights; that information was sent back to his server. The notification program also printed a warning explaining to the user that they had probably grabbed the wrong package, with a link to a page about his research.

In two phases that totaled roughly two months, Tschacher gathered information from more than 17,000 unique IP addresses. Most of those were for PyPI packages (15,221), with far smaller numbers for RubyGems.org (1,631) and npmjs.com (525). Those differences may reflect the relative popularity of the package repositories and/or cultural differences in those language communities. In any case, a whopping 43% of the installers did so with administrative rights.

There are other statistics that he gathered and reported in the blog post and thesis. For example, the installation requests came from a broad swath of the internet, including a few from .gov and .mil domains. Interestingly, roughly 10% of the IP addresses he could resolve to a hostname were requests from Amazon's AWS cloud service.

One other piece of his research had the notification program check the .bash_history files on Linux and other Unix systems to report what other incorrect package names had been tried on the system. These might be standard library package names (e.g. urllib2) that can be registered in a repository, popular names of other tools (e.g. git, docker), or just shortened names of real packages (e.g. scikit rather than scikit-learn). He used some of the names harvested that way in the second phase of the experiment, with good success.
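
That harvesting step is easy to picture; a rough Python equivalent (the regex and path are illustrative, not taken from the thesis) might look like:

    import os, re

    HISTORY = os.path.expanduser("~/.bash_history")
    pattern = re.compile(r"pip\d?\s+install\s+(?!-)([A-Za-z0-9._-]+)")

    attempted = set()
    try:
        with open(HISTORY, errors="ignore") as f:
            for line in f:
                match = pattern.search(line)
                if match:
                    attempted.add(match.group(1))
    except FileNotFoundError:
        pass

    print(sorted(attempted))    # package names users tried to install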

His post lists several ways for package repositories to avoid these kinds of problems, starting with the obvious: "Prevent Direct Code Execution on Installations". His other suggestions in the post are to generate (and blacklist) potential typo candidate names and to analyze the repository server log files to find potential typos as well. The thesis itself goes into much more detail on ideas for reducing the vulnerability's footprint.
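
Generating such a candidate list is straightforward; a small sketch in the style of Norvig's spelling corrector enumerates every name within edit distance one of a popular package:

    ALPHABET = "abcdefghijklmnopqrstuvwxyz0123456789-_."

    def edits1(name):
        """All names one deletion, transposition, replacement, or
        insertion away from the given name."""
        splits = [(name[:i], name[i:]) for i in range(len(name) + 1)]
        deletes = [a + b[1:] for a, b in splits if b]
        transposes = [a + b[1] + b[0] + b[2:] for a, b in splits if len(b) > 1]
        replaces = [a + c + b[1:] for a, b in splits if b for c in ALPHABET]
        inserts = [a + c + b for a, b in splits for c in ALPHABET]
        return set(deletes + transposes + replaces + inserts) - {name}

    candidates = edits1("requests")
    assert "reqeusts" in candidates    # the transposition used above
    print(len(candidates), "names to consider blacklisting")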

At some level, it is not terribly surprising that installing code uploaded by random folks on the internet is dangerous. Doing so as root is even more so, but there is generally plenty an attacker can do even if their code is only granted access to an unprivileged user account. Even if the typosquatting problem were reduced (by limiting the registration of typo package names, say) and the installation of the package did not directly run code provided by the attacker, there would still be concerns. Eventually, users may get the typo into their code, and "import reqeusts" will obviously have to execute the code supplied by the reqeusts module; limiting typo registrations will reduce the problem, but can hardly eliminate it. Beyond that, users may simply be tricked into installing any package name that an attacker chooses.

Curated package repositories, like those run by Linux distributions and others, go a long way toward eliminating these problems. But they also have to put a fair amount of bureaucracy between a code purveyor and the user in order to avoid distributing malicious code—which is just what PyPI and repositories like that are trying to avoid.

Something interesting to ponder is what might have happened to Tschacher had he done that research in the US. From his thesis, it seems that he corresponded with PyPI operators and others while the research was ongoing; they asked him to make some changes (such as removing the piece that checked .bash_history for pip typos) but were fairly tolerant overall. On the other hand, there are various US computer laws that have sometimes been (ab)used by (over)zealous prosecutors to go after security and other researchers. One hopes that legitimate research such as this would not be so affected.

Comments (4 posted)

Brief items

Security quotes of the week

At a 2013 technology conference, Google CEO Eric Schmidt tried to reassure the audience by saying that he was 'pretty sure that information within Google is now safe from any government's prying eyes'.

A more accurate statement might have been: 'Your data is safe from governments, except for the ways we don’t know about and the ways we cannot tell you about.' The other thing Schmidt didn't say is: 'And of course, we still have complete access to it all, and can sell it to whomever we want… and you will have no recourse.'

Bruce Schneier

Android uses multiple layers of protection to keep users safe. One of these layers is verified boot, which improves security by using cryptographic integrity checking to detect changes to the operating system. Android has alerted about system integrity since Marshmallow, but starting with devices first shipping with Android 7.0, we require verified boot to be strictly enforcing. This means that a device with a corrupt boot image or verified partition will not boot or will boot in a limited capacity with user consent. Such strict checking, though, means that non-malicious data corruption, which previously would be less visible, could now start affecting process functionality more.
Sami Tolvanen on the Android Developers Blog

The bug resides in a code library used in a wide range of telecommunication products, including radios in cell towers, routers, and switches, as well as the baseband chips in individual phones. Although exploiting the heap overflow vulnerability would require great skill and resources, attackers who managed to succeed would have the ability to execute malicious code on virtually all of those devices. The code library was developed by Pennsylvania-based Objective Systems and is used to implement a telephony standard known as ASN.1, short for Abstract Syntax Notation One.
Dan Goodin in Ars Technica

Comments (3 posted)

Ubuntu forums compromised

Canonical has disclosed that the Ubuntu forum system has been compromised. "The attacker had the ability to inject certain formatted SQL to the Forums database on the Forums database servers. This gave them the ability to read from any table but we believe they only ever read from the ‘user’ table. They used this access to download portions of the ‘user’ table which contained usernames, email addresses and IPs for 2 million users. No active passwords were accessed."

Comments (44 posted)

Tor veteran Lucky Green exits, torpedos critical 'Tonga' node and relays (The Register)

The Register reports that longtime Tor contributor Lucky Green is quitting and closing down the node and bridge authority he operates. "Practically, it's a big deal. Bridge Authorities are part of the infrastructure that lets users get around some ISP-level blocks on the network (not, however, defeating deep packet inspection). They're also incorporated in the Tor code, meaning that to remove a Bridge Authority is going to need an update." The shutdown is scheduled for August 31. (Thanks to Nomen Nescio)

Comments (8 posted)

New vulnerabilities

atomic-openshift: information leak

Package(s):atomic-openshift CVE #(s):CVE-2016-5392
Created:July 15, 2016 Updated:July 20, 2016
Description:

From the Red Hat advisory:

The Kubernetes API server contains a watch cache that speeds up performance. Due to an input validation error OpenShift Enterprise may return data for other users and projects when queried by a user. An attacker with knowledge of other project names could use this vulnerability to view their information.

Alerts:
Red Hat RHSA-2016:1427-01 atomic-openshift 2016-07-14

Comments (none posted)

binutils: multiple vulnerabilities

Package(s):binutils CVE #(s):CVE-2016-2226 CVE-2016-4487 CVE-2016-4488 CVE-2016-4489 CVE-2016-4490 CVE-2016-4492 CVE-2016-4493 CVE-2016-6131
Created:July 18, 2016 Updated:July 20, 2016
Description: From the Debian LTS advisory:

Some minor security issues have been identified and fixed in binutils in Debian LTS. These are:

CVE-2016-2226: Exploitable buffer overflow.

CVE-2016-4487: Invalid write due to a use-after-free to array btypevec.

CVE-2016-4488: Invalid write due to a use-after-free to array ktypevec.

CVE-2016-4489: Invalid write due to integer overflow.

CVE-2016-4490: Write access violation.

CVE-2016-4492: Write access violations.

CVE-2016-4493: Read access violations.

CVE-2016-6131: Stack buffer overflow when printing bad bytes in Intel Hex objects

Alerts:
Debian-LTS DLA-552-1 binutils 2016-07-18

Comments (none posted)

ecryptfs-utils: two vulnerabilities

Package(s):ecryptfs-utils CVE #(s):CVE-2016-6224 CVE-2015-8946
Created:July 20, 2016 Updated:November 2, 2016
Description: From the Red Hat bugzilla:

CVE-2015-8946: A vulnerability was found in ecryptfs-setup-swap script that is provided by the upstream ecryptfs-utils project.

On systems using systemd 211 or newer and GPT partitioning, the unencrypted swap partition was being automatically activated during boot and the encrypted swap was not used. This was due to ecryptfs-setup-swap not marking the swap partition as "no-auto", as defined by the Discoverable Partitions Spec.

CVE-2016-6224: A vulnerability was found in ecryptfs-setup-swap script that is provided by the upstream ecryptfs-utils project.

When GPT swap partitions are located on NVMe or MMC drives, ecryptfs-setup-swap fails to mark these swap partitions as "no-auto".

As a consequence, when using encrypted home directory with an NVMe or MMC drive, the swap is left unencrypted. There's also a usability issue in that users are erroneously prompted to enter a pass-phrase to unlock their swap partition at boot.

This vulnerability exists due to an incomplete fix for CVE-2015-8946

Alerts:
Fedora FEDORA-2016-70b5173c05 ecryptfs-utils 2016-11-01
Ubuntu USN-3032-1 ecryptfs-utils 2016-07-14
Fedora FEDORA-2016-41301e2187 ecryptfs-utils 2016-07-20

Comments (none posted)

firefox: code execution

Package(s):MozillaFirefox, MozillaFirefox-branding-SLE, mozilla-nss CVE #(s):CVE-2016-2824
Created:July 14, 2016 Updated:July 20, 2016
Description: From the SUSE advisory:

CVE-2016-2824: Out-of-bounds write with WebGL shader (MFSA 2016-53) (bsc#983651).

Alerts:
SUSE SUSE-SU-2016:2061-1 firefox, nspr, nss 2016-08-12
SUSE SUSE-SU-2016:1799-1 MozillaFirefox, MozillaFirefox-branding-SLE, mozilla-nss 2016-07-14

Comments (none posted)

graphicsmagick: out-of-bounds read

Package(s):graphicsmagick CVE #(s):CVE-2016-8808
Created:July 15, 2016 Updated:July 20, 2016
Description:

From the Mageia advisory:

A read out-of-bound in the parsing of gif files using GraphicsMagick.

Alerts:
Mageia MGASA-2016-0252 graphicsmagick 2016-07-14

Comments (none posted)

httpd: HTTP redirect

Package(s):httpd apache apache2 CVE #(s):CVE-2016-5387
Created:July 19, 2016 Updated:August 22, 2016
Description: From the Red Hat advisory:

It was discovered that httpd used the value of the Proxy header from HTTP requests to initialize the HTTP_PROXY environment variable for CGI scripts, which in turn was incorrectly used by certain HTTP client implementations to configure the proxy for outgoing HTTP requests. A remote attacker could possibly use this flaw to redirect HTTP requests performed by a CGI script to an attacker-controlled proxy via a malicious HTTP request.

Alerts:
openSUSE openSUSE-SU-2016:2115-1 apache2-mod_fcgid 2016-08-19
Fedora FEDORA-2016-a29c65b00f perl-CGI-Emulate-PSGI 2016-08-09
Fedora FEDORA-2016-683d0b257b perl-CGI-Emulate-PSGI 2016-08-08
Debian-LTS DLA-568-1 wordpress 2016-07-29
Fedora FEDORA-2016-df0726ae26 httpd 2016-07-27
Mageia MGASA-2016-0262 apache 2016-07-26
Fedora FEDORA-2016-9fd9bfab9e httpd 2016-07-22
Debian-LTS DLA-553-1 apache2 2016-07-20
Debian DSA-3623-1 apache2 2016-07-20
Ubuntu USN-3038-1 apache2 2016-07-18
Scientific Linux SLSA-2016:1421-1 httpd 2016-07-18
Scientific Linux SLSA-2016:1422-1 httpd 2016-07-18
Oracle ELSA-2016-1421 httpd 2016-07-18
Oracle ELSA-2016-1421 httpd 2016-07-18
Oracle ELSA-2016-1422 httpd 2016-07-18
openSUSE openSUSE-SU-2016:1824-1 apache2 2016-07-19
CentOS CESA-2016:1421 httpd 2016-07-18
CentOS CESA-2016:1421 httpd 2016-07-18
CentOS CESA-2016:1422 httpd 2016-07-18
Red Hat RHSA-2016:1420-01 httpd24-httpd 2016-07-18
Red Hat RHSA-2016:1421-01 httpd 2016-07-18
Red Hat RHSA-2016:1422-01 httpd 2016-07-18
Gentoo 201701-36 apache 2017-01-15
Slackware SSA:2016-358-01 httpd 2016-12-23

Comments (none posted)

java-1.8.0-openjdk: multiple vulnerabilities

Package(s):java-1.8.0-openjdk CVE #(s):CVE-2016-3458 CVE-2016-3500 CVE-2016-3508 CVE-2016-3550 CVE-2016-3587 CVE-2016-3598 CVE-2016-3606 CVE-2016-3610
Created:July 20, 2016 Updated:September 13, 2016
Description: From the Red Hat advisory:

* Multiple flaws were discovered in the Hotspot and Libraries components in OpenJDK. An untrusted Java application or applet could use these flaws to completely bypass Java sandbox restrictions. (CVE-2016-3606, CVE-2016-3587, CVE-2016-3598, CVE-2016-3610)

* Multiple denial of service flaws were found in the JAXP component in OpenJDK. A specially crafted XML file could cause a Java application using JAXP to consume an excessive amount of CPU and memory when parsed. (CVE-2016-3500, CVE-2016-3508)

* Multiple flaws were found in the CORBA and Hotspot components in OpenJDK. An untrusted Java application or applet could use these flaws to bypass certain Java sandbox restrictions. (CVE-2016-3458, CVE-2016-3550)

Alerts:
SUSE SUSE-SU-2016:2726-1 java-1_8_0-ibm 2016-11-04
Gentoo 201610-08 oracle-jdk-bin 2016-10-15
openSUSE openSUSE-SU-2016:2451-1 php5 2016-10-04
SUSE SUSE-SU-2016:2408-1 php5 2016-09-28
SUSE SUSE-SU-2016:2347-1 java-1_7_1-ibm 2016-09-21
SUSE SUSE-SU-2016:2328-1 php53 2016-09-16
Ubuntu USN-3077-1 openjdk-6 2016-09-12
SUSE SUSE-SU-2016:2286-1 java-1_7_0-ibm 2016-09-10
SUSE SUSE-SU-2016:2261-1 java-1_7_1-ibm 2016-09-07
Fedora FEDORA-2016-c07d18b2a5 java-1.8.0-openjdk-aarch32 2016-08-29
Oracle ELSA-2016-1776 java-1.6.0-openjdk 2016-08-26
Oracle ELSA-2016-1776 java-1.6.0-openjdk 2016-08-26
Oracle ELSA-2016-1776 java-1.6.0-openjdk 2016-08-26
Scientific Linux SLSA-2016:1776-1 java-1.6.0-openjdk 2016-08-26
CentOS CESA-2016:1776 java-1.6.0-openjdk 2016-08-26
CentOS CESA-2016:1776 java-1.6.0-openjdk 2016-08-26
CentOS CESA-2016:1776 java-1.6.0-openjdk 2016-08-26
Red Hat RHSA-2016:1776-01 java-1.6.0-openjdk 2016-08-26
Ubuntu USN-3062-1 openjdk-7 2016-08-16
openSUSE openSUSE-SU-2016:2058-1 OpenJDK7 2016-08-12
openSUSE openSUSE-SU-2016:2051-1 java-1_8_0-openjdk 2016-08-11
openSUSE openSUSE-SU-2016:2050-1 java-1_7_0-openjdk 2016-08-11
openSUSE openSUSE-SU-2016:2052-1 java-1_7_0-openjdk 2016-08-11
Red Hat RHSA-2016:1587-01 java-1.8.0-ibm 2016-08-10
Red Hat RHSA-2016:1588-01 java-1.7.1-ibm 2016-08-10
Red Hat RHSA-2016:1589-01 java-1.7.0-ibm 2016-08-10
SUSE SUSE-SU-2016:2012-1 java-1_8_0-openjdk 2016-08-09
SUSE SUSE-SU-2016:1997-1 java-1_7_0-openjdk 2016-08-09
openSUSE openSUSE-SU-2016:1979-1 java-1_8_0-openjdk 2016-08-06
Debian-LTS DLA-579-1 openjdk-7 2016-08-05
Debian DSA-3641-1 openjdk-7 2016-08-04
Arch Linux ASA-201608-5 jre7-openjdk-headless 2016-08-05
Arch Linux ASA-201608-4 jre7-openjdk 2016-08-05
Arch Linux ASA-201608-3 jdk7-openjdk 2016-08-05
Mageia MGASA-2016-0273 java-1.8.0-openjdk 2016-08-03
Fedora FEDORA-2016-c60d35c46c java-1.8.0-openjdk 2016-07-29
Fedora FEDORA-2016-588e386aaa java-1.8.0-openjdk 2016-07-28
Scientific Linux SLSA-2016:1504-1 java-1.7.0-openjdk 2016-07-27
Oracle ELSA-2016-1504 java-1.7.0-openjdk 2016-07-27
Oracle ELSA-2016-1504 java-1.7.0-openjdk 2016-07-27
Oracle ELSA-2016-1504 java-1.7.0-openjdk 2016-07-27
Ubuntu USN-3043-1 openjdk-8 2016-07-27
CentOS CESA-2016:1504 java-1.7.0-openjdk 2016-07-27
CentOS CESA-2016:1504 java-1.7.0-openjdk 2016-07-27
CentOS CESA-2016:1504 java-1.7.0-openjdk 2016-07-27
Red Hat RHSA-2016:1504-01 java-1.7.0-openjdk 2016-07-27
Scientific Linux SLSA-2016:1458-1 java-1.8.0-openjdk 2016-07-20
Oracle ELSA-2016-1458 java-1.8.0-openjdk 2016-07-20
Oracle ELSA-2016-1458 java-1.8.0-openjdk 2016-07-20
CentOS CESA-2016:1458 java-1.8.0-openjdk 2016-07-20
CentOS CESA-2016:1458 java-1.8.0-openjdk 2016-07-20
Red Hat RHSA-2016:1475-01 java-1.8.0-oracle 2016-07-21
Red Hat RHSA-2016:1476-01 java-1.7.0-oracle 2016-07-21
Red Hat RHSA-2016:1477-01 java-1.6.0-sun 2016-07-21
Red Hat RHSA-2016:1458-01 java-1.8.0-openjdk 2016-07-20
Gentoo 201701-43 icedtea-bin 2017-01-19

Comments (none posted)

kernel: code execution

Package(s):kernel CVE #(s):CVE-2016-4794
Created:July 14, 2016 Updated:July 20, 2016
Description: From the openSUSE advisory:

CVE-2016-4794: Use-after-free vulnerability in mm/percpu.c in the Linux kernel allowed local users to cause a denial of service (BUG) or possibly have unspecified other impact via crafted use of the mmap and bpf system calls (bnc#980265).

Alerts:
Oracle ELSA-2016-2574 kernel 2016-11-10
Red Hat RHSA-2016:2584-02 kernel-rt 2016-11-03
Red Hat RHSA-2016:2574-02 kernel 2016-11-03
Mageia MGASA-2016-0283 kernel-tmb 2016-08-31
Mageia MGASA-2016-0284 kernel-linus 2016-08-31
Ubuntu USN-3057-1 linux-snapdragon 2016-08-10
Ubuntu USN-3056-1 linux-raspi2 2016-08-10
Ubuntu USN-3054-1 linux-lts-xenial 2016-08-10
Ubuntu USN-3053-1 linux-lts-vivid 2016-08-10
Ubuntu USN-3055-1 kernel 2016-08-10
Mageia MGASA-2016-0271 kernel 2016-07-31
openSUSE openSUSE-SU-2016:1798-1 kernel 2016-07-14
Scientific Linux SLSA-2016:2574-2 kernel 2016-12-14
Oracle ELSA-2016-3644 kernel 4.1.12 2016-11-21
Oracle ELSA-2016-3644 kernel 4.1.12 2016-11-21

Comments (none posted)

kernel: two vulnerabilities

Package(s):kernel CVE #(s):CVE-2016-5696 CVE-2016-6156
Created:July 20, 2016 Updated:September 28, 2016
Description: From the Red Hat bugzilla:

CVE-2016-5696: A flaw was found in the implementation of the Linux kernel's handling of networking challenge ACKs, where an attacker is able to determine the shared counter.

This may allow an attacker to inject or take over a TCP connection between a server and client without having to be a traditional Man In the Middle (MITM) style attack.

CVE-2016-6156: Double-fetch vulnerability was found in /drivers/platform/chrome/cros_ec_dev.c in the Chrome driver in the Linux kernel before 4.6.1.

In function ec_device_ioctl_xcmd(), the driver fetches user space data by pointer arg via copy_from_user(), and this happens twice at line 137 and line 145 respectively.

Alerts:
Oracle ELSA-2016-2574 kernel 2016-11-10
openSUSE openSUSE-SU-2016:2625-1 kernel 2016-10-25
Oracle ELSA-2016-2006 kernel 2016-10-04
Red Hat RHSA-2016:1939-01 kernel 2016-09-27
Oracle ELSA-2016-3617 kernel 2016-09-22
Oracle ELSA-2016-3617 kernel 2016-09-22
Ubuntu USN-3084-4 linux-snapdragon 2016-09-19
Ubuntu USN-3084-3 linux-raspi2 2016-09-19
Ubuntu USN-3084-2 linux-lts-xenial 2016-09-19
Ubuntu USN-3084-1 kernel 2016-09-19
Oracle ELSA-2016-1847 kernel 2016-09-14
openSUSE openSUSE-SU-2016:2290-1 kernel 2016-09-12
SUSE SUSE-SU-2016:2245-1 kernel 2016-09-06
SUSE SUSE-SU-2017:0471-1 kernel 2017-02-15
Debian-LTS DLA-609-1 kernel 2016-09-03
Debian DSA-3659-1 kernel 2016-09-04
Red Hat RHSA-2016:1814-01 kernel 2016-09-06
Red Hat RHSA-2016:1815-01 kernel 2016-09-06
Ubuntu USN-3070-3 linux-snapdragon 2016-08-30
Ubuntu USN-3070-2 linux-raspi2 2016-08-30
Ubuntu USN-3070-4 linux-lts-xenial 2016-08-30
Mageia MGASA-2016-0283 kernel-tmb 2016-08-31
Ubuntu USN-3072-2 linux-ti-omap4 2016-08-29
Ubuntu USN-3071-2 linux-lts-trusty 2016-08-29
Ubuntu USN-3072-1 kernel 2016-08-29
Ubuntu USN-3071-1 kernel 2016-08-29
Ubuntu USN-3070-1 kernel 2016-08-29
Slackware SSA:2016-242-01 kernel 2016-08-29
Slackware SSA:2016-236-03 kernel 2016-08-23
Scientific Linux SLSA-2016:1664-1 kernel 2016-08-23
Oracle ELSA-2016-1664 kernel 2016-08-23
Red Hat RHSA-2016:1664-01 kernel 2016-08-23
Red Hat RHSA-2016:1657-01 kernel 2016-08-23
CentOS CESA-2016:1664 kernel 2016-08-23
Scientific Linux SLSA-2016:1633-1 kernel 2016-08-19
CentOS CESA-2016:1633 kernel 2016-08-20
Arch Linux ASA-201608-17 linux-lts 2016-08-21
Oracle ELSA-2016-1633 kernel 2016-08-18
SUSE SUSE-SU-2017:0437-1 the Linux Kernel 2017-02-09
Red Hat RHSA-2016:1631-01 realtime-kernel 2016-08-18
Red Hat RHSA-2016:1632-01 kernel-rt 2016-08-18
Red Hat RHSA-2016:1633-01 kernel 2016-08-18
Arch Linux ASA-201608-15 linux-zen 2016-08-17
Oracle ELSA-2016-3595 kernel 3.8.13 2016-08-15
Oracle ELSA-2016-3595 kernel 3.8.13 2016-08-15
Oracle ELSA-2016-3594 kernel 4.1.12 2016-08-15
Oracle ELSA-2016-3594 kernel 4.1.12 2016-08-15
Arch Linux ASA-201608-13 linux-grsec 2016-08-14
Arch Linux ASA-201608-12 kernel 2016-08-14
Mageia MGASA-2016-0271 kernel 2016-07-31
Fedora FEDORA-2016-784d5526d8 kernel 2016-07-19
Fedora FEDORA-2016-9a16b2e14e kernel 2016-07-20
SUSE SUSE-SU-2016:3304-1 kernel 2016-12-30
SUSE SUSE-SU-2016:3069-1 kernel 2016-12-09
openSUSE openSUSE-SU-2016:3021-1 kernel 2016-12-06
SUSE SUSE-SU-2016:2976-1 the Linux Kernel 2016-12-02
SUSE SUSE-SU-2016:2912-1 kernel 2016-11-25

Comments (none posted)

libarchive: multiple vulnerabilities

Package(s):libarchive CVE #(s):CVE-2015-8916 CVE-2015-8917 CVE-2015-8919 CVE-2015-8920 CVE-2015-8921 CVE-2015-8922 CVE-2015-8923 CVE-2015-8924 CVE-2015-8925 CVE-2015-8926 CVE-2015-8928 CVE-2015-8930 CVE-2015-8931 CVE-2015-8932 CVE-2015-8933
Created:July 15, 2016 Updated:July 20, 2016
Description:

From the Ubuntu advisory:

Hanno Böck discovered that libarchive contained multiple security issues when processing certain malformed archive files. A remote attacker could use this issue to cause libarchive to crash, resulting in a denial of service, or possibly execute arbitrary code.

Alerts:
CentOS CESA-2016:1844 libarchive 2016-09-16
CentOS CESA-2016:1850 libarchive 2016-09-15
Scientific Linux SLSA-2016:1850-1 libarchive 2016-09-12
Scientific Linux SLSA-2016:1844-1 libarchive 2016-09-12
Red Hat RHSA-2016:1850-01 libarchive 2016-09-12
Red Hat RHSA-2016:1844-01 libarchive 2016-09-12
Debian DSA-3657-1 libarchive 2016-08-30
openSUSE openSUSE-SU-2016:2036-1 libarchive 2016-08-11
SUSE SUSE-SU-2016:1939-1 bsdtar 2016-08-02
SUSE SUSE-SU-2016:1909-1 libarchive 2016-07-29
Debian-LTS DLA-554-1 libarchive 2016-07-21
Ubuntu USN-3033-1 libarchive 2016-07-14
Gentoo 201701-03 libarchive 2017-01-01

Comments (none posted)

libgd2: two vulnerabilities

Package(s):libgd2 CVE #(s):CVE-2016-6132 CVE-2016-6214
Created:July 18, 2016 Updated:July 20, 2016
Description: From the Debian advisory:

Several vulnerabilities were discovered in libgd2, a library for programmatic graphics creation and manipulation. A remote attacker can take advantage of these flaws to cause a denial-of-service against an application using the libgd2 library (application crash), or potentially to execute arbitrary code with the privileges of the user running the application.

Alerts:
openSUSE openSUSE-SU-2016:2363-1 gd 2016-09-24
openSUSE openSUSE-SU-2016:2117-1 gd 2016-08-19
Ubuntu USN-3060-1 libgd2 2016-08-10
Mageia MGASA-2016-0258 libgd 2016-07-26
Fedora FEDORA-2016-615f3bf06e gd 2016-07-24
Debian DSA-3619-1 libgd2 2016-07-15
Gentoo 201612-09 gd 2016-12-04

Comments (none posted)

openjpeg2: multiple vulnerabilities

Package(s):openjpeg2 CVE #(s):CVE-2016-3183 CVE-2016-3181 CVE-2016-3182 CVE-2016-4796 CVE-2016-4797
Created:July 15, 2016 Updated:July 20, 2016
Description:

From the Fedora advisory:

CVE-2016-3182: Heap corruption in opj_free function.

CVE-2016-3181: Out-of-bounds read in opj_tcd_free_tile function.

CVE-2016-3183: Out-of-bounds read in sycc422_to_rgb function.

CVE-2016-4797: Division-by-zero in function opj_tcd_init_tile in tcd.c.

CVE-2016-4796: Heap buffer overflow in function color_cmyk_to_rgb in color.c.

Alerts:
Mageia MGASA-2016-0362 openjpeg2 2016-11-03
Fedora FEDORA-2016-14d8f9b4ed mingw-openjpeg2 2016-07-18
Fedora FEDORA-2016-8fa7ced365 mingw-openjpeg2 2016-07-18
Fedora FEDORA-2016-d2ab705e4a openjpeg2 2016-07-16
Fedora FEDORA-2016-abdc548f46 openjpeg2 2016-07-14
Gentoo 201612-26 openjpeg 2016-12-08

Comments (none posted)

pagure: unspecified

Package(s):pagure CVE #(s):
Created:July 19, 2016 Updated:July 20, 2016
Description:

Pagure 2.2.2 fixes undisclosed vulnerabilities.

Alerts:
Fedora FEDORA-2016-dede12f0a2 pagure 2016-07-18

Comments (none posted)

perl: code execution

Package(s):perl CVE #(s):CVE-2016-6185
Created:July 18, 2016 Updated:September 16, 2016
Description: From the Red Hat bugzilla:

Arbitrary code execution can be achieved when loading code from an untrusted current working directory, even though '.' has been removed from @INC. The vulnerability is in XSLoader, which uses caller() information to locate the .so file to load. If a malicious attacker creates a directory named `(eval 1)` with a malicious binary file in it, that file will be loaded if the package calling XSLoader is in the parent directory.

Alerts:
Mageia MGASA-2016-0299 perl-XSLoader 2016-09-16
openSUSE openSUSE-SU-2016:2313-1 perl 2016-09-15
Debian-LTS DLA-565-1 perl 2016-07-28
Debian DSA-3628-1 perl 2016-07-25
Fedora FEDORA-2016-742bde2be7 perl 2016-07-18
Fedora FEDORA-2016-485dff6060 perl 2016-07-18
Fedora FEDORA-2016-eb2592245b perl 2016-07-15
Gentoo 201701-75 perl 2017-01-30

Comments (none posted)

python-django: cross-site scripting

Package(s):python-django CVE #(s):CVE-2016-6186
Created:July 19, 2016 Updated:August 31, 2016
Description: From the Debian advisory:

It was discovered that Django, a high-level Python web development framework, is prone to a cross-site scripting vulnerability in the admin's add/change related popup.

Alerts:
Mageia MGASA-2016-0282 python-django 2016-08-31
Red Hat RHSA-2016:1594-01 python-django 2016-08-11
Red Hat RHSA-2016:1595-01 python-django 2016-08-11
Red Hat RHSA-2016:1596-01 python-django 2016-08-11
Fedora FEDORA-2016-97ca9d52a4 python-django 2016-08-02
Fedora FEDORA-2016-b7e31a0b9a python-django 2016-08-02
Arch Linux ASA-201607-11 python2-django 2016-07-22
Arch Linux ASA-201607-10 python-django 2016-07-22
Debian-LTS DLA-555-1 python-django 2016-07-21
Ubuntu USN-3039-1 python-django 2016-07-19
Debian DSA-3622-1 python-django 2016-07-18

Comments (none posted)

ruby-eventmachine: denial of service

Package(s):ruby-eventmachine CVE #(s):
Created:July 18, 2016 Updated:August 8, 2016
Description: From the Debian LTS advisory:

EventMachine, a Ruby network engine, could be crashed by opening a high number of parallel connections (>= 1024) towards a server using the EventMachine engine. The crash happens due to the file descriptors overwriting the stack.

Alerts:
Mageia MGASA-2016-0276 ruby-eventmachine 2016-08-06
Debian-LTS DLA-549-1 ruby-eventmachine 2016-07-15

Comments (none posted)

sudo: race condition

Package(s):sudo CVE #(s):CVE-2015-8239
Created:July 18, 2016 Updated:July 27, 2016
Description: From the Red Hat bugzilla:

A vulnerability was found in the functionality that adds support for SHA-2 digests along with the command. The sudoers plugin performs this digest verification while matching rules, and later independently calls execve() to execute the binary. This results in a race condition if the digest functionality is used as suggested (in fact, the rules are matched before the user is prompted for a password, so there is a non-negligible time frame in which to replace the binary from underneath sudo). Versions since 1.8.7 are affected.

Alerts:
Mageia MGASA-2016-0261 sudo 2016-07-26
Fedora FEDORA-2016-90836ca57d sudo 2016-07-15
Fedora FEDORA-2016-f1e8e27e27 sudo 2016-07-16

Comments (none posted)

util-linux: denial of service

Package(s):util-linux CVE #(s):CVE-2016-5011
Created:July 15, 2016 Updated:December 15, 2016
Description:

From the Mageia advisory:

The util-linux libblkid is vulnerable to a Denial of Service attack during MSDOS partition table parsing, in the extended partition boot record (EBR). If the next EBR starts at relative offset 0, parse_dos_extended() will loop until running out of memory. An attacker could install a specially crafted MSDOS partition table in a storage device and trick a user into using it. This library is used, among others, by systemd-udevd daemon.

Alerts:
Oracle ELSA-2016-2605 util-linux 2016-11-10
Red Hat RHSA-2016:2605-02 util-linux 2016-11-03
Mageia MGASA-2016-0256 util-linux 2016-07-14
Scientific Linux SLSA-2016:2605-2 util-linux 2016-12-14
openSUSE openSUSE-SU-2016:3102-1 util-linux 2016-12-12
openSUSE openSUSE-SU-2016:2840-1 util-linux 2016-11-17

Comments (none posted)

Page editor: Jake Edge

Kernel development

Brief items

Kernel release status

The current development kernel remains 4.7-rc7; there is no new -rc release this week due to Linus's travel plans. The final 4.7 release is still expected on July 24.

The current 4.7 regression report shows that a handful of regressions remain unfixed.

Stable updates: none have been released in the last week.

Comments (none posted)

Quotes of the week

Copy and paste is not an excuse for bad code.
Thomas Gleixner

There are some kernel developers who would not speak to me again if I told them I was playing with web technologies.
— James Bottomley (heard at LinuxCon Japan)

Comments (none posted)

An honorary degree for Alan Cox

Congratulations are due to Alan Cox, who was awarded an honorary degree by Swansea University for his work with Linux. "Alan started working on Version 0. There were bugs and problems he could correct. He put Linux on a machine in the Swansea University computer network, which revealed many problems in networking which he sorted out; later he rewrote the networking software. Alan brought to Linux software engineering discipline: Linux software releases that were tested, corrected and above all stable. On graduating, Alan worked at Swansea University, set up the UK Linux server and distributed thousands of systems."

Comments (34 posted)

Kernel development news

Controlling access to the memory cache

By Jonathan Corbet
July 20, 2016

LinuxCon Japan
Access to main memory from the processor is mediated (and accelerated) by the L2 and L3 memory caches; developers working on performance-critical code quickly learn that cache utilization can have a huge effect on how quickly an application (or a kernel) runs. But, as Fenghua Yu noted in his LinuxCon Japan 2016 talk, the caches are a shared resource, so even a cache-optimal application can be slowed by an unrelated task, possibly running on a different CPU. Intel has been working on a mechanism that allows a system administrator to set cache-sharing policies; the talk described the need for this mechanism and how access to it is implemented in the current patch set.

Control over cache usage

Yu started off by saying that a shared cache is subject to the "noisy neighbor" problem; a program that uses a lot of cache entries can cause the eviction of entries used by others, hurting their performance. The L3 cache is shared by all CPUs on the same socket, so the annoying neighbor need not be running on the same processor; a cache-noisy program can create problems for others running on any CPU in the socket. A low-priority process that causes cache churn can slow down a much higher-priority process; increased interrupt-response latency is another problem that often results.

The solution to the problem is to eliminate cache sharing between parts of the system that should be isolated from each other; this is done by partitioning the available cache. Each partition is shared between fewer processes and, thus, has fewer conflicts. There is an associated cost, clearly, in that a process running on a partitioned cache has a smaller cache. That, Yu said, can affect the overall throughput of the system, but that is a separate concern.

Intel's cache-partitioning mechanism is called "cache allocation technology," or CAT. Haswell-generation (and later) server chips have support for CAT at the L3 (socket) level. The documentation also describes L2 (core-level) support, but that feature is not available in any existing hardware.

In a CAT-enabled processor, it is possible to set up one or more cache bitmaps ("CBMs") describing the portion of the cache that may be used. If, on a particular CPU, the L3 cache is divided into 20 slices, then a CBM of 0xfffff describes the entire cache, while 0xf8000 and 0x7c00 describe two disjoint regions, each covering 25% of the cache.
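
To make the mask arithmetic concrete, here is a tiny user-space sketch (not taken from the patch set) that checks the example values above; it assumes a 20-slice cache and a GCC- or Clang-compatible compiler for __builtin_popcount():

    /* Sanity-check the example CBM values from the text. */
    #include <stdio.h>

    #define CACHE_SLICES 20
    #define FULL_CBM ((1u << CACHE_SLICES) - 1)    /* 0xfffff */

    int main(void)
    {
        unsigned int high = 0xf8000;   /* slices 15-19 */
        unsigned int low  = 0x07c00;   /* slices 10-14 */

        /* Each mask covers 5 of 20 slices: 25% of the cache. */
        printf("high covers %d slices\n", __builtin_popcount(high));
        printf("low covers %d slices\n", __builtin_popcount(low));

        /* Disjoint masks share no slices, so processes confined to
           one cannot evict cache lines belonging to the other. */
        printf("disjoint: %s\n", (high & low) == 0 ? "yes" : "no");
        printf("full cache: %#x\n", FULL_CBM);
        return 0;
    }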

The CBMs are kept in a small table, indexed by a "class of service ID" or CLOSID. The CLOSID will eventually control multiple resources (L2 cache, for example, or something entirely different) but, in current processors, it only selects the active CBM for the L3 cache. At any given time, a specific CLOSID will be active in each CPU, controlling which portion of the cache that CPU can make use of. Each CPU has its own set of CLOSIDs; they are not a system-wide resource.

Kernel support is needed to make proper use of the CAT functionality. The number of CLOSIDs available is relatively small, so the kernel must arbitrate access to them. Like any resource-allocation technology, CAT control must be limited to privileged users or it will be circumvented. Yu described how CAT policies can be controlled via the interfaces implemented in the current patch set but, before getting into that, it's worthwhile to step away from the talk for a moment and look at the history of this interface.

Unsuccessful kittens

New hardware features often present interesting problems when the time comes to add support to the kernel. It is relatively easy to add that support as a simple wrapper and access-control layer around the feature, but care must be taken to avoid tying the interface to the hardware itself. A vendor's idea of how the feature should work can change over time, and other manufacturers may have ideas of their own. Any interface that is unable to evolve with the hardware will become unsupportable over time and have to be replaced. So it is important to provide an interface that abstracts away the details of how the hardware works to the greatest extent possible. At the same time, though, the interface cannot be so abstract that it makes some important functionality unavailable.

The first public attempt at CAT support in the kernel appears to be this patch set posted by Vikas Shivappa in late 2014. The approach taken was to use the control-group mechanism to set the CBM for groups of processes; the CLOSID mechanism was hidden by the kernel and not visible to user space at all. The initial review discussion focused on some of the more glaring deficiencies in the patch set, so it took a while before developers started to point out that, perhaps, control groups were not the right solution to this problem; it seems that they abstracted things a little too much.

There were a few complaints about the control-group interface, but by far the loudest was that it failed to reflect the fact that CAT works on a per-CPU basis — each processor has its own set of CLOSIDs and its own active policy at any given time. The proposed interface was tied to processes rather than processors, and it forced the use of a single policy across the entire system. There are plenty of real-world use cases that want to have different cache-utilization policies on different CPUs in the same system, but the control-group mechanism could not express those policies. This problem was exacerbated by the fact that the number of CLOSIDs is severely limited; making it impossible for each CPU to use its own CLOSID-to-CBM mappings made that limitation much more painful.

Beyond setting up different policies on different CPUs, many users would like to use the CPU as the primary determinant for cache policy. For example, a specific CPU running an important task could be given exclusive access to a large portion of the cache. If the task in question is bound to that processor, it will automatically get access to that cache reservation; any related processes — kernel threads performing work related to that task, for example — will also be able to use that cache space. This mode, too, is not well supported by an interface based on control groups. In its absence, users would have to track down each helper process and manually add it to the correct control group, a tedious and error-prone task.

The problem was discussed repeatedly as new versions of the patch set came out during much of early 2015. At one point, Marcelo Tosatti posted an interface based on ioctl() calls that was meant to address some of the concerns, but it seems there was little interest in bringing ioctl() into the mix. In November, Thomas Gleixner posted a description of how he thought the interface should work for discussion. He said that a single, system-wide configuration was not workable and that "we need to expose this very close to the hardware implementation as there are really no abstractions which allow us to express the various bitmap combinations". His overall suggestion was to create a new virtual filesystem for the control of the CAT mechanism; that is the approach taken by Yu's current patch set.

Herding the CAT

Returning to Yu's talk: he noted that a new patch set had been posted just prior to the conference; it shows the implementation of the new control interface. It is all based on a virtual filesystem, as Gleixner had suggested. Naturally enough, the name of that filesystem (/sys/fs/rscctrl) became the first topic of debate, with Gleixner complaining that it was too cryptic. Tony Luck's suggestion that it could instead be called:

    /sys/fs/Intel(R) Resource Director Technology(TM)/

seems unlikely to be adopted; "/sys/fs/resctrl" may emerge as the most acceptable name in the end.

The top level of this filesystem contains three files: tasks, cpus, and schemas. The tasks file contains a list of all processes whose cache access is controlled by the bitmap found in the schemas file; similarly, cpus can be used to attach a bitmap to one or more CPUs. Initially the tasks file holds the IDs for all processes in the system, and cpus is all zeroes; the schemas file contains all ones. The default policy, thus, is to allow all processes in the system the full use of the cache.

Normal usage will involve the system administrator creating subdirectories to create new policies; each subdirectory will contain the same set of three files. A different CBM can be written to the schemas file in the subdirectory, changing the cache-access policy for any affected process. A process can be tied to that new policy by writing its ID to the tasks file. It is also possible to tie the policy to one or more CPUs by writing a CPU mask to the cpus file. A CPU-based policy will override a process-ID-based one — if a process is running on a CPU with a specific policy, that is the policy that will be used regardless of whether the process has been explicitly set to use a different one.
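
As a rough illustration, a policy might be configured from user space along the following lines. This is a hedged sketch: the directory layout and the tasks, cpus, and schemas files follow the description above, but the exact strings written here (the mask syntax, the PID, the CPU mask) are assumptions; the documentation file in the patch set defines the real formats.

    /* Hypothetical example of setting up a cache policy via the
       proposed virtual filesystem; file formats are assumed. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/stat.h>
    #include <sys/types.h>
    #include <unistd.h>

    static int write_file(const char *path, const char *val)
    {
        int fd = open(path, O_WRONLY);
        if (fd < 0) {
            perror(path);
            return -1;
        }
        ssize_t n = write(fd, val, strlen(val));
        close(fd);
        return n < 0 ? -1 : 0;
    }

    int main(void)
    {
        /* A new subdirectory creates a new policy with its own
           tasks, cpus, and schemas files. */
        mkdir("/sys/fs/resctrl/p1", 0755);

        /* Give the policy 25% of a 20-slice L3 (assumed syntax). */
        write_file("/sys/fs/resctrl/p1/schemas", "0xf8000\n");

        /* Tie an example process (PID 1234) to the policy... */
        write_file("/sys/fs/resctrl/p1/tasks", "1234\n");

        /* ...and tie it to CPUs 0-3 as well; remember that the
           CPU-based policy overrides the process-based one. */
        write_file("/sys/fs/resctrl/p1/cpus", "f\n");
        return 0;
    }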

Yu's talk glossed over a number of details on exactly how these control files work, as one might expect; the documentation file from the patch set contains those details and some usage examples as well. He did discuss some benchmark results (which can be seen at the end of his slides [PDF]) showing significant improvements for specific workloads that were affected by heavy cache contention. This feature may not be needed by everybody, but it seems that some users will have a lot to gain from it. Realtime workloads, in particular, would appear to stand to benefit from dedicated cache space.

As for where things stand: the current patch set is out for review, with the hope that the most significant obstacles have been overcome at this point. Assuming that the user-space interface issues have now been resolved, this code, which has been under development for well over a year, should be getting close to being ready for merging into the mainline.

[Your editor would like to thank the Linux Foundation for supporting his travel to LinuxCon Japan].

Comments (6 posted)

LTSI and Fuego

By Jonathan Corbet
July 20, 2016

LinuxCon Japan
It has now been nearly five years since Tsugikazu Shibata announced the launch of the long-term support initiative (LTSI) project. LTSI's objective is to provide extended support for specific kernel releases that can serve as a rallying point for embedded-system vendors and a means by which those vendors can get their patches upstream. At LinuxCon Japan 2016, Shibata-san provided an update on LTSI; he was followed by Tim Bird, who discussed the "Fuego" test framework that is now being used to help validate LTSI releases.

An LTSI update

The core process for LTSI kernels has not changed much since the project's inception. LTSI releases are based on the long-term support releases maintained by Greg Kroah-Hartman, and are maintained for the same time period. They do, however, include a significant set of extra patches in the form of vendor-contributed features and backports from more recent kernels; they also go through more extensive testing than ordinary stable-kernel releases.

Five years in, LTSI is seeing some significant adoption. The Yocto meta-distribution has had an option to use LTSI kernels since 2012. The Automotive Grade Linux project is using LTSI kernels (via Yocto). The relatively new Civil Infrastructure Platform (CIP) project is also using LTSI, with an interesting twist. Systems based on CIP are likely to be deployed in situations where they are expected to run for a long time, so there is a need for long-term support with a different value of "long-term": 10-15 years. CIP will itself be providing that support by taking over responsibility for LTSI kernels after LTSI itself has moved on. Shibata-san noted that supporting a kernel for that long is going to be an interesting challenge; he wished the project luck.

The LTSI release process starts with one of the regular long-term support releases. There is a four- or five-month period in which patches to this kernel are prepared; these include backports and other features that are useful to the LTSI community. That is followed by a two-month merge window in which all those patches are applied. One month of validation follows; all contributors to the LTSI kernel are expected to ensure that things work properly in the final release. This process, Shibata-san claimed, leads to the production of one of the most stable and secure kernels available.

That said, there are concerns that the current seven-month process takes too long; the latency has been felt especially acutely with the 4.4 kernel, which arrived a bit sooner than had been expected. So the project is talking about shortening the release process this time around; there would be a two-month preparation period and a one-month merge window. The final decision on that change, it seems, will come in the near future.

Fuego

One of the significant changes in the LTSI release process mentioned by Shibata-san was the adoption of a new testing framework called "Fuego." Bird used the next session to talk about Fuego and how it works. In short, Fuego is the combination of the Jenkins continuous-integration tool, a set of scripts, and a collection of tests, all packaged within a Docker container.

Jenkins is used to run tests based on various triggers and collect the results. It is widely used and features hundreds of extensions to handle things like email notifications or integration with source-code management systems. The big customization that Fuego has added is to separate host and target configuration; testing can be directed from a host, but it runs on the specific embedded target of interest.

There is a set of "abstraction scripts" designed to make Fuego work with any specific target board; these scripts are driven by variables describing how to interact with the board, functions to get or put files and run commands, etc. The end result is a generated script to run the actual tests. There are about fifty tests integrated into the system so far; most of those are existing tests from elsewhere, but the plan is to add a bunch of new tests as well.

The whole system is designed to be packaged up into a Docker container. The end result should be runnable on any Linux distribution without modification.

Fuego was designed to be easy for embedded engineers to set up and run. It comes with configurations for specific target systems, including Yocto, Buildroot, OpenWrt, and more. Various target types and transports are supported; Fuego can talk to a target using a serial port, SSH, or Android's adb tool, among others. It is designed to send test results to a centralized repository. The end goal is to enable the creation of a decentralized test network, allowing the testing of changes on a wide variety of hardware and getting past the "I don't have that particular board" problem.

Future plans include the decluttering of the Jenkins interface, which is rather busy at the moment. The project would like to add handling for USB connections, making it easier to use tools like adb to talk to handset-like devices. More documentation and more tests are on the list, as is integration with the kernelci.org project.

More users and contributors would certainly be welcome. The project is using the ltsi-dev mailing list for its communications for now; more information on the project, including pointers to the repositories, can be found on elinux.org. See this page for more information on how to install and use Fuego.

[Your editor would like to thank the Linux Foundation for supporting his travel to LinuxCon Japan].

Comments (none posted)

Coding-style exceptionalism

July 20, 2016

This article was contributed by Neil Brown

As I was analyzing the behavioral details of various drivers as part of my research for a recent article on USB battery charging in Linux, I was struck by the thought that code doesn't exist just to make certain hardware perform certain functions. Important though that is, the code in Linux, and in other open projects, also exists as a cultural artifact through which we programmers communicate and from which we learn. The disappointment I felt at the haphazard quality I found was not simply because some hardware somewhere might not perform optimally. It was because some other programmer tasked with writing a similar driver for new hardware might look to some of these drivers for inspiration, and might copy something that was barely functional and not use the best interfaces available.

With these thoughts floating around my mind I was interested to find a recent thread on the linux-kernel mailing list that was more concerned about how a block of code looked than about what it did.

The code in question handles SHA-256 hash generation on Intel x86 platforms. The thread started because Dan Carpenter's smatch tool had found some unusual code:

    if ((ctx->partial_block_buffer_length) | (len < SHA256_BLOCK_SIZE)) {

The "|" here looks like it was probably meant to be "||" — so there was a bit-wise "or" where a logical "or" is more common. Carpenter went to some pains to be clear that he knew the code would produce the same result no matter which operator was used, but observed that "it's hard to tell the intent." Intent doesn't matter to a compiler, but it does to a human reader. Even well-written code can be a challenge to read due to the enormous amount of detail embedded in it. When there is an unusual construct that you need to stop and think about, that doesn't make it any easier.

There were a couple of suggestions that this was an intentional optimization and there is some justification for this. With both GCC 4.8 and 5.3 compiling for x86_64, the "|" version produces one fewer instruction, avoiding a jump. In some cases that small performance difference might be worth the small extra burden on the reader, though as Joe Perches observed: "It's probably useful to add a comment for the specific intent here"; that would not only make it easier to read, but would ensure that nobody broke the optimization in the future. Further, the value of such optimizations can easily vary from compiler to compiler.
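
For readers who want to experiment, here is a stand-alone illustration (with hypothetical function names; BLOCK_SIZE stands in for SHA256_BLOCK_SIZE). For side-effect-free operands the two spellings agree in an if () condition, since a bitwise OR is zero only when both operands are zero; the difference is that "||" must short-circuit, which usually costs a conditional branch, while "|" can be evaluated branch-free:

    #include <assert.h>
    #include <stddef.h>

    #define BLOCK_SIZE 64

    static int bitwise(size_t pending, size_t len)
    {
        if ((pending) | (len < BLOCK_SIZE))   /* the form smatch flagged */
            return 1;
        return 0;
    }

    static int logical(size_t pending, size_t len)
    {
        if (pending || (len < BLOCK_SIZE))    /* the conventional form */
            return 1;
        return 0;
    }

    int main(void)
    {
        /* The two forms agree for every operand combination. */
        for (size_t p = 0; p < 4; p++)
            for (size_t l = 0; l < 2 * BLOCK_SIZE; l++)
                assert(bitwise(p, l) == logical(p, l));
        return 0;
    }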

Once a little attention was focused on this code, other complaints arose, with Ingo Molnar complaining about the excessive use of parentheses, the unusually long field name partial_block_buffer_length, and, responding to what is clearly a sore spot for some, requesting that the "customary" style be used for multi-line comments.

Documentation/CodingStyle explains that:

    The preferred style for long (multi-line) comments is:

        /*
         * This is the preferred style for multi-line
         * comments in the Linux kernel source code.
         * Please use it consistently.
         *
         * Description:  A column of asterisks on the left side,
         * with beginning and ending almost-blank lines.
         */

    For files in net/ and drivers/net/ the preferred style for long (multi-line)
    comments is a little different.

        /* The preferred comment style for files in net/ and drivers/net
         * looks like this.
         *
         * It is nearly the same as the generally preferred comment style,
         * but there is no initial almost-blank line.
         */

The code under the microscope is in arch/x86/crypto — not strictly part of the networking subsystem — but this code uses the style for net/ and drivers/net/ in at least one place. Herbert Xu, the crypto subsystem maintainer, asserted that the crypto API uses the same style as networking, but Molnar wasn't convinced and neither, it turned out, was Linus Torvalds. I won't try to summarize Torvalds's rant (which he promised he would not follow up on) but I will examine a concrete and testable assertion made by Molnar: "That 'standard' is not being enforced consistently at all".

Looking at the ".c" and ".h" files in linux 4.7-rc7 and using fairly simple regular expressions (which might have occasional false positives), the string "/*" appears 1,308,166 times, suggesting the presence of over 1.3 million comments. Of those, 981,168 are followed by "*/" on the same line, leaving 326,998 multi-line comments. 200,737 of these have nothing (apart from the occasional space) following the opening "/*" on the first line, and 51,366 start with "/**" which indicates a "kernel-doc" formatted comment, leaving 74,895 multi-line comments in the non-customary format with text on the first line.
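
For the curious, the bucketing described above can be expressed as a small classifier applied to the opening line of each comment; this is only a sketch of the criteria, not the regular expressions actually used for the counts:

    /* Classify a multi-line comment by its opening line. */
    #include <stdio.h>
    #include <string.h>

    enum bucket { SINGLE_LINE, CUSTOMARY, KERNEL_DOC, NET_STYLE };

    /* "line" points at the comment opener. */
    static enum bucket classify(const char *line)
    {
        if (strstr(line, "*/"))        /* closed on the same line */
            return SINGLE_LINE;
        if (!strncmp(line, "/**", 3))  /* kernel-doc: format is fixed */
            return KERNEL_DOC;
        line += 2;                     /* skip the opening delimiter */
        while (*line == ' ' || *line == '\t')
            line++;
        /* Nothing after the opener: the customary kernel style. */
        return (*line == '\0' || *line == '\n') ? CUSTOMARY : NET_STYLE;
    }

    int main(void)
    {
        printf("%d %d %d %d\n",
               classify("/* a one-liner */"),
               classify("/*\n"),
               classify("/** kernel-doc\n"),
               classify("/* net style\n"));
        return 0;
    }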

These three groups are present in a ratio of approximately 8:2:3. The kernel-doc comments have to be in the expected format to be properly functional, leaving the developer no discretion; it thus isn't reasonable to include them when looking at the choices developers have made. Of the multi-line comments where the programmer has some discretion, we find an 8:3 ratio of customary format, in the sense Molnar meant it, to others. So 27% are non-standard.

If we repeat these measurements for net/, drivers/net/, crypto/, and drivers/crypto/, the numbers of non-standard multi-line comments are:

Subsystem          Multi-line comments   Net-style   Percent
net/                            13,441       6,423       48%
drivers/net/                    36,599      19,516       54%
crypto/                            593         171       29%
drivers/crypto/                    706         178       25%

So broadly, the evidence does seem to support Molnar's claim. While "text-on-the-first-line" comments are more common in the networking code, they just barely constitute a majority of multi-line comments there and they are not significantly more common in the crypto code. This statistic doesn't tell us a lot, but it does suggest that the supposed "preferred" style for the networking code is not consistently preferred in practice, and that sticking to it for new comments wouldn't actually improve the overall consistency of that code.

Some of us may think this is all a storm in a teacup and that empty lines in comments, much like empty lines in code, are a matter of personal taste and nothing more. For many people this may be true. But open-source code will particularly benefit from being read by people who pay close attention to detail, who will notice things that look a bit out of place, and who can spot bugs that compilers or static analyzers will miss. These people are likely to notice, and so be burdened by, irrelevant details like non-standard comments.

For Molnar at least, "the networking code's 'exceptionalism' regarding the standard comment style is super distracting" and there is evidence that he is not alone in this. To get the greatest value from other people reading our code, it makes sense to keep it as easy to read as possible. The code doesn't just belong to the author, it belongs to the community which very much includes those who will read it, whether to fix bugs, to write documentation, or as a basis for writing new drivers. We serve that community, and so indirectly ourselves, best when we make our code uniform and easy for others to read.

Comments (29 posted)

Patches and updates

Kernel trees

Sebastian Andrzej Siewior 4.6.4-rt7
Steven Rostedt 4.4.15-rt23
Steven Rostedt 4.4.12-rt20
Sasha Levin Linux 4.1.28
Steven Rostedt 4.1.27-rt31
Sasha Levin Linux 3.18.37
Steven Rostedt 3.18.36-rt38
Steven Rostedt 3.14.73-rt78
Steven Rostedt 3.14.73-rt77
Steven Rostedt 3.14.72-rt76
Steven Rostedt 3.12.61-rt82
Steven Rostedt 3.10.102-rt113
Steven Rostedt 3.4.112-rt143
Steven Rostedt 3.2.81-rt117

Architecture-specific

Core kernel code

Development tools

Device drivers

Device driver infrastructure

Documentation

Michael Kerrisk (man-pages) man-pages-4.07 is released

Filesystems and block I/O

Networking

kan.liang@intel.com Kernel NET policy

Security-related

Miscellaneous

Lucas De Marchi kmod 23

Page editor: Jonathan Corbet

Distributions

A second release from Automotive Grade Linux

By Nathan Willis
July 20, 2016

On July 12, the Automotive Grade Linux (AGL) project announced version 2.0 of its Unified Code Base (UCB) platform. The AGL UCB is an embedded Linux distribution built with Yocto that combines software components developed within AGL with components developed by other automotive-Linux projects like the GENIVI Alliance and the (now defunct) Tizen IVI. The most visible changes in the new release are support for audio and video output on multiple hardware endpoints (such as rear-seat entertainment units), but there are several other updates under the hood.

The new release (codenamed Brilliant Blowfish) comes six months after the previous UCB release, Agile Albacore, which we looked at in January. That earlier release was the first attempt to merge AGL, GENIVI, and Tizen IVI components into a coherent distribution by carefully organizing them into Yocto meta-layers. Under that scheme, AGL can add GENIVI code to its UCB distribution, and other users of GENIVI can selectively choose just the GENIVI components. By all accounts, the shared-layer strategy has been working fairly well for the projects.

Audio-video

The changes found in UCB 2.0 include an initial implementation of multi-seat video display. It is possible to attach several displays to the system (such as one front-seat unit and several rear-seat units) and play a video on all screens simultaneously. Audio playback is more sophisticated, in that applications can direct their output to particular audio endpoints.

The intended use cases include having Bluetooth-connected phones play audio only through the driver's speakers and having rear-seat units play audio through headphone ports. For now, the code only supports two output "zones" (front and rear), but the AGL audio-player demo application does allow different tracks to be played in the two zones. Furthermore, the front audio zone supports overlaying audio from multiple applications, so when that Bluetooth-connected phone starts ringing, the driver will hear it.

Ultimately, the video component will have to support displaying different content on the different screens as well, of course. But that code is still in development; it is an extension to AGL's Weston-based display manager. The audio-routing code comes from GENIVI, and has been in development significantly longer.

Supporting frameworks

The 2.0 release also adds the ConnMan network-connection manager. The key benefits of this addition are that it allows the use of multiple paired Bluetooth devices and that it supports automatically switching over active data connections from mobile networks to WiFi when WiFi is available. Users might want to switch over to WiFi before updating installed apps or navigation data, for example.

Another new addition is an application framework adapted from earlier work done for Tizen IVI. It provides basic support for user installation, updating, and removal of applications, along with the mechanisms to launch and tear them down. This offers a means to manage applications not under the control of the system vendor, which is a prerequisite for allowing user-installed, aftermarket "apps" like one finds on smartphones. At this point, there are no apps available to test with; adding the framework is merely the first step. It does, notably, sandbox applications using the Smack-based access-control scheme from Tizen IVI.

A related addition is a new framework, developed primarily at GENIVI, designed to restrict access to the vehicle's bus (currently, the most common bus in use is CAN Bus, but others are supported as well). At the heart of this component is the Vehicle Signal Specification (VSS), which defines a set of messages for querying and reporting various aspects of vehicle status.

The AGL VSS code is available on GitHub and builds on top of Tizen IVI's Automotive Message Broker (AMB). That code uses JSON as the VSS message format, although the specification allows for other encoding formats as well, and there is currently a debate as to whether some other format (such as YAML) should be the default going forward.

The VSS signals themselves are designed to adhere to the World Wide Web Consortium (W3C) Vehicle Information API. Since, for the time being, vendor interest in AGL is leaning strongly toward using HTML5 for automotive applications, following the W3C's specification is the only logical choice.

Undercarriage

The AGL project has also made several helpful improvements to the build system prior to the 2.0 release, starting with migrating to Yocto 2.0. For interested developers, there are also quite a few more supported board profiles in the new release. Whereas UCB 1.0 supported only the Renesas R-Car Porter single-board computer (SBC), the 2.0 release supports the Porter, the WandBoard and Sabre Automotive (both from NXP), the Qualcomm DragonBoard, the Intel Minnowboard MAX, and even the Raspberry Pi. As before, QEMU x86_64 images are built as well.

At the moment, binary images have not yet been posted for download from the AGL site, which is likely due to the fact that many AGL and GENIVI developers were busy last week at the Automotive Linux Summit in Tokyo. Braver souls can experiment with nightly snapshot builds in the meantime, but the final releases should be ready shortly.

As was the case with UCB 1.0, this new release demonstrates AGL's progress at merging work from outside sources into a workable automotive Linux distribution. On that front, it is particularly good to see more work being done on Tizen IVI code like AMB, which has been in limbo for quite some time.

Moving forward, the project has a ways to go before there is a stable platform offering everything that third-party app developers will expect to see. The VSS and application frameworks may form the base layer on which that platform is built, but users should expect a lot of testing and alteration before the final product hits showroom floors.

Comments (none posted)

Brief items

Distribution quotes of the week

The nature of distributions is competitive. It’s inevitable, and arguably healthy for innovation. I don’t see competition as a problem within itself. But at the end of the day, we’re all doing the same thing, just different in some ways. There are plenty of opportunities to collaborate and build together on projects, tools, or resources that benefit multiple communities. I believe setting goals now to bring everyone together is premature. But I would like to encourage and remove the idea that cross-distribution collaboration is impossible. Opening our minds to the prospect of working with other communities is the first step towards making it a reality. Discouraging snide remarks or comments about work happening in other communities is one small step towards bringing us together.
-- Justin W. Flory

Warning: This blog post includes instructions for a procedure that can lead you to lock yourself out of your computer. Even if everything goes well, you'll be hunted by dragons. Keep backups, have a rescue system on a USB stick, and wear flameproof clothing. Also, have fun, and tell your loved ones you love them.
-- Lars Wirzenius

I mean, we can certainly stop developing Fedora, because of fear that fixing things might break apps we never heard of that rely on a very specific bug or misdesign in some very specific software. But I am not convinced that fear-driven development is really the best strategy to win the future...
-- Lennart Poettering

Comments (none posted)

Distribution News

Fedora

Fedora 22 End Of Life

Fedora 22 has reached its end of life for updates and support. No further updates, including security updates, will be available for Fedora 22.

Full Story (comments: none)

Newsletters and articles of interest

Distribution newsletters

Comments (none posted)

How (and why) FreeDOS keeps DOS alive (ComputerWorld)

ComputerWorld talks with Jim Hall, a contributor to FreeDOS. "FreeDOS (it was originally dubbed ‘PD-DOS’ for ‘Public Domain DOS’, but the name was changed to reflect that it’s actually released under the GNU General Public License) dates back to June 1994, meaning it is just over 22 years old — a formidable lifespan compared to many open source projects. “And if you consider the DOS platform, MS-DOS 1.0 dates back to 1981, ‘DOS’ as an operating system has been around for 35 years! That’s not too shabby,” Hall said. (Version 1.0 of MS-DOS — then marketed by IBM as PC DOS — was released in August 1981.)" (Thanks to Paul Wise)

Comments (32 posted)

Automotive Grade Linux Releases 2.0 Spec Amid Growing Support (Linux.com)

Over at Linux.com, Eric Brown writes about the release of Automotive Grade Linux (AGL) Unified Code Base (UCB) 2.0 for in-vehicle infotainment (IVI) systems. "The latest version adds features like audio routing, rear seat display support, the beginnings of an app platform, and new development boards including the DragonBoard, Wandboard, and Raspberry Pi. AGL’s Yocto Project derived UCB distro, which is also based in part on the GENIVI and Tizen automotive specs, was first released in January. UCB 1.0 followed an experimental AGL stack in 2014 and an AGL Requirements Specification in June, 2015. UCB is scheduled for a 3.0 release in early 2017, at which point some automotive manufacturers will finally use it in production cars. Most of the IVI software will be based on UCB, but carmakers can also differentiate with their own features." We looked at AGL UCB 1.0 back in January.

Comments (none posted)

Page editor: Rebecca Sobol

Development

Quality in open source: testing CRIU

July 20, 2016

This article was contributed by Sergey Bronnikov

Checkpoint/Restore In Userspace, or CRIU, is a software tool for Linux that allows freezing a running application (or part of it) and checkpointing it to disk as a collection of files. The files can then be used to restore and run the application from the point where it was frozen. The distinctive feature of the CRIU project is that it is mainly implemented in user space.

Back in 2012, when Andrew Morton accepted the first checkpoint/restore (C/R) patches to the Linux kernel, the idea of implementing saving and restoring of running processes in user space seemed kind of crazy. Yet, four years later, not only is CRIU working, it has also attracted more and more attention. Before CRIU, there had been other attempts to implement checkpoint/restore in Linux (DMTCP, BLCR, OpenVZ, CKPT, and others), but none were merged into the mainline. Meanwhile CRIU survived, which attests to its viability. Some time ago, I added support for the Test Anything Protocol (TAP) format to the CRIU test runner; creating that patch allowed me to better understand the nature of the CRIU testing process. Now I want to share this knowledge with LWN readers.

Things were simple in the beginning: three developers and a small feature set. As the project evolved, more developers joined and more features were added. The project's growth posed a number of testing problems that needed to be solved:

  • Making test runs easy enough that any developer would be able to test the changes they had made.
  • Automating the tests: the number of combinations of features and configurations grew exponentially, so running tests manually started taking too much time.
  • Covering as much CRIU functionality as possible with tests, both to save the precious time of developers and users and to avoid regressions in new versions.
  • Making testing transparent and the test results public.
  • Testing proposed changes: code review became insufficient for accepting them, and CRIU's maintainer wanted more details on patches before adding them.

The development of CRIU doesn't differ much from that of the Linux kernel. All patches are sent to the criu@openvz.org mailing list and get reviewed by CRIU developers to weed out bugs at the earliest stage. Reviewing used to be the only criterion for accepting patches, but that is not the case any more. So now, many more checks are done as well: compilation checks, automated test runs, code-coverage measurements, and static code analysis. All of that is performed with freely available tools, so the entire testing process is available to the community.

Patches are transferred from the mailing list to Patchwork, which automatically builds CRIU on all supported platforms (x86_64, ARM, AArch64, and PPC64le) to make sure the changes do not break the build. For this, CRIU uses Travis CI for x86_64 and qemu-user-static in a Docker container for the other architectures.

[Patchwork]

Good, working tests are vital for any project, no matter how complex it is. They let developers be sure their changes don't break anything, they give the maintainer a sense for how good the code is and how well it works, and they let users be sure that their use cases or configurations won't be broken in the next release. The more complex a project is, though, the higher the demand for testing is.

For functional regression tests, CRIU developers use the ZDTM (zero down-time migration) test suite that has been used to test the in-kernel implementation of C/R in OpenVZ. Each test from the suite is run separately and goes through three stages: environment preparation, daemonization and waiting for a signal to check the test's state, and a result check.
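
The following is a minimal sketch of that three-stage shape, loosely modeled on the environment-variable check performed by the env00 test mentioned below; the real ZDTM harness supplies its own daemonization and reporting helpers, so everything here is illustrative only:

    /* A toy ZDTM-style test: set up state, wait to be checkpointed
       and restored, then verify that the state survived. */
    #include <signal.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    static volatile sig_atomic_t checked;

    static void wake(int sig)
    {
        (void)sig;
        checked = 1;
    }

    int main(void)
    {
        /* Stage 1: prepare the state to be checkpointed. */
        setenv("ZDTM_TEST_VAR", "before-dump", 1);

        /* Stage 2: daemonize (keeping stderr for the verdict) and
           wait for the harness's signal; the process is dumped and
           restored while it waits here. */
        if (daemon(0, 1) < 0)
            return 1;
        signal(SIGTERM, wake);
        while (!checked)
            pause();

        /* Stage 3: check that the state is unchanged. */
        const char *v = getenv("ZDTM_TEST_VAR");
        fprintf(stderr, "%s\n",
                v && !strcmp(v, "before-dump") ? "PASS" : "FAIL");
        return 0;
    }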

The tests are conventionally divided into two groups. The first group is static tests that prepare a certain static environment or state and wait for a signal. The second group is dynamic tests that constantly change their environment and/or state (e.g., transmit data via TCP). Back in 2012, CRIU's test suite included about 70 separate tests. These days, there are about 200. Functional tests are run on a schedule on a public Jenkins CI instance for each change added to the repository. The benefit is obvious as, according to the statistics gathered by the project, 10% of the changes break something.

[Jenkins]

Running the tests is as simple as running make and then make test, so anyone can test CRIU. However, the number of combinations of features and configurations is too large to do so manually. Besides, developers can sometimes be lazy when it comes to running tests regularly and might skip them even if the tests take only a minute.

The primary testing configuration is to launch the entire test suite on the host. After launch, each test puts itself into a particular state, its process is checkpointed, restored, and then checked for any changes in the state. Another important piece is to check that the process remains in working condition after it has been checkpointed. For this, each test needs to be run with checkpointing alone and the state must, once again, remain unchanged.

To make sure that the state remains unchanged after the restore, each test has a set of checks. For example, the test env00 checks that an environment variable has not changed. Sometimes the state of the restored process appears to remain unchanged and will pass the ZDTM tests, but it is unsuitable for another C/R. This gives us another testing configuration, repeated C/R, which will detect these kinds of problems. Then additional types of tests are run:

  • C/R with snapshots: CRIU saves a series of application states (all but the first are incremental) and can later revert to them. Debugging is one example of where snapshots can be useful.
  • C/R in namespaces: C/R of applications running in namespaces (network, user, PID, IPC, mount, UTS)
  • Checkpoint with regular user privileges: Originally, CRIU required root privileges to perform a checkpoint operation; in CRIU 2.0, however, the ability to checkpoint as a regular user was added. This configuration checks for regressions in that mode.
  • C/R with backward compatibility: In this configuration, the test saves the current head, rolls back to the specified commit, compiles the CRIU binary, then executes a ZDTM test and dumps its processes. Then the test checks out the current head, compiles the CRIU binary again, restores the tested processes, and checks the result.
  • Additional configurations with restore on BTRFS and NFS were added (due to the peculiarities of these filesystems).
And these are only the single-process tests. For group C/R you can also test the checkpointing of process groups where all processes are in the same state as prepared by a ZDTM test or where each has its own state.

But wait, there's more. CRIU currently supports several hardware architectures and also needs to test several kernels: the latest vanilla kernel, RHEL7 kernel based on 3.10, and the linux-next branch. Each test takes just 5 to 300 seconds, but considering all combinations of possible scenarios and configurations, the total time is quite impressive. Let's try to calculate it (approximately):

  • Currently there are 260 tests in ZDTM; each test has at least 100 run variants, and each run takes five seconds on average.
  • The total number of configurations is thus 26,000 (260 x 100); at five seconds each, it would take about 36 hours to run all variants of all tests.
  • The additional configurations, like snapshots, backward compatibility, namespaces, and so on, add 2-3 hours.
  • There is also group C/R, when every process in the group has its own state and the test performs C/R for the entire group. That gives us about 2^200-1 combinations more.
  • Add to this different Linux kernels and hardware architectures ...
... and the total test time increases to infinity. Obviously, the project must then choose the highest priority configurations and tests to use for regular daily testing. Lower priority testing is done as time is available.

Kernels from the linux-next branch help discover and report changes that break the project before they make it into the mainline. In the course of developing CRIU, the developers have found roughly 20 bugs by testing with linux-next. Each test of linux-next must be run in a clean environment, so developers use a cloud service provider's API to create a virtual machine, install the kernel, and run tests. That ensures there won't be anything left over from previous tests.

[Test results]

Even though functional testing guarantees that features that have worked before will continue to do so, it doesn't help find new bugs. For this reason, fuzz tests were added. There are not as many fuzz tests as would be preferred, but it is a start. For example, the maps007 test creates random mappings and "touches" those memory regions. The mmap() system call uses four modes and 20 flags to create a new mapping in the virtual address space. Our test creates mappings with random parameters and makes sure that CRIU successfully performs C/R with this mapping.
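
A toy version of that idea might look like the following sketch; it is not the actual maps007 code, just an illustration of creating mappings with randomized parameters (restricted here to anonymous, private mappings) and touching the writable ones before waiting to be checkpointed:

    #include <stdlib.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        srand(1);  /* a fixed seed keeps failures reproducible */

        for (int i = 0; i < 64; i++) {
            size_t pages = 1 + rand() % 16;
            size_t len = pages * sysconf(_SC_PAGESIZE);

            /* Pick a random protection; always anonymous and
               private so no backing file is needed. */
            int prot = PROT_NONE;
            if (rand() % 2) prot |= PROT_READ;
            if (rand() % 2) prot |= PROT_WRITE;

            void *p = mmap(NULL, len, prot,
                           MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            if (p == MAP_FAILED)
                continue;

            /* "Touch" writable regions so they get real pages;
               the mappings are left in place for the dump to find. */
            if (prot & PROT_WRITE)
                memset(p, 0xa5, len);
        }

        pause();  /* wait here to be checkpointed and restored */
        return 0;
    }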

Error-handling code paths are among the least covered by tests, so developers test the most critical of these paths with fault injection. The CRIU team couldn't find a suitable solution for such tests and had to write its own in the CRIU code. A number of CRIU tests are regularly run in the fault-injection mode.

Andrew Vagin, one of the CRIU developers, decided to try static code analysis along the way. He started with clang-analyzer and then moved on to Coverity, which is proprietary, but free to use for open-source projects. He expected static code analysis reports to have lots of false positives. However, it was just the opposite: the analyzers found bugs not discovered by the tests. Now, checking project code in Coverity is a must for each release.

Code coverage is typically measured to find parts of the code that are never tested and to understand how to test them—or at least why they are never reached by tests. For CRIU, developers did stumble upon parts of code that were never covered by tests, even though there were tests meant to exercise them (those discoveries were not pleasant at all). To measure code coverage for CRIU, developers use the standard gcov and lcov tools and also upload results to Coveralls to find out exactly which lines of code are covered.

Conclusion

The CRIU tests are quite easy to use and available for everyone. Moreover, the CRIU team has a continuous-integration system that consists of Patchwork and Jenkins, which run the required test configurations per-patch and per-commit. Patchwork also allows the team to track the status of patch sets to make the maintainer's work easier. The developers from the team always keep an eye on regressions. If a commit breaks a tree, the patches in question will not be accepted.

The testing regime targets finding bugs in CRIU as early in the process as possible. That leads to happier users, developers, and maintainers—and, of course, more stable code.

Comments (none posted)

Brief items

Quote of the week

Well, alright, since you asked...

Python is a pretty okay first language, with a tendency towards style enforcement, monoculture, and group-think. Python is more interested in giving you one adequate way to do something than it is in giving you a workshop that you, the programmer, get to choose the best tool from. So it works well for certain problems that can use an existing tool, but less well for other problems that require a machine shop to make a new tool. For instance, if you only want to think of your list processings as list comprehensions, Python 3 tends to enforce that culturally. If you want several ways to map over a list depending on which order makes more sense in context, Perl 6 will be more accommodating. If you want concurrency with a global interpreter lock, Python might suit. But if you want a concurrency model designed to scale to multicore/manycore, look to Perl 6, which avoids global bottlenecks and non-composable primitives, but instead borrows composable ideas from Haskell, Go, Erlang, and C#.

Larry Wall, when asked what he thinks about Python. (Thanks to Paul Wise.)

Comments (none posted)

Qt WebBrowser 1.0

Version 1.0 of Qt WebBrowser has been released. Qt WebBrowser is a browser for embedded devices developed using the capabilities of Qt and Qt WebEngine. "The browser is optimized for embedded touch displays (running Linux), but you can play with it on the desktop platforms, too! Just make sure that you have Qt WebEngine, Qt Quick, and Qt VirtualKeyboard installed (version 5.7 or newer). For optimal performance on embedded devices you should plan for hardware-accelerated OpenGL, and around 1 GiByte of memory for the whole system. Anyhow, depending on your system configuration and the pages to be supported there is room for optimization."

Comments (none posted)

Newsletters and articles

Development newsletters from the past week

Comments (none posted)

Notes from the fourth RISC-V workshop

The lowRISC project, which is an effort to develop a fully open-source, Linux-powered system-on-chip based on the RISC-V architecture, has published notes from the fourth RISC-V workshop. Notably, the post explains, the members of the RISC-V foundation voted to keep the RISC-V instruction-set architecture (ISA) and related standards open and license-free to all parties. There are also accounts included of the work on RISC-V interrupts, heterogeneous multicore RISC-V processors, support for non-volatile memory, and Debian's RISC-V port.

Comments (7 posted)

Smedberg: Reducing Adobe Flash Usage in Firefox

Benjamin Smedberg writes that the Firefox browser will soon start taking a more active approach to the elimination of Flash content. "Starting in August, Firefox will block certain Flash content that is not essential to the user experience, while continuing to support legacy Flash content. These and future changes will bring Firefox users enhanced security, improved battery life, faster page load, and better browser responsiveness."

Comments (26 posted)

Page editor: Nathan Willis

Announcements

Articles of interest

The Importance of Following Community-Oriented Principles in GPL Enforcement Work

The Software Freedom Conservancy is one of the few organizations involved in GPL enforcement, and it has published principles regarding enforcement practices that seek compliance and not financial penalties. Bradley Kuhn and Karen Sandler urge others doing GPL enforcement to follow principles set forth by the SFC. "One impetus in drafting the Principles was our discovery of ongoing enforcement efforts that did not fit with the GPL enforcement community traditions and norms established for the last two decades. Publishing the previously unwritten guidelines has quickly separated the wheat from the chaff. Specifically, we remain aware of multiple non-community-oriented GPL enforcement efforts, where none of those engaged in these efforts have endorsed our principles nor pledged to abide by them. These “GPL monetizers”, who trace their roots to nefarious business models that seek to catch users in minor violations in order to sell an alternative proprietary license, stand in stark contrast to the work that Conservancy, FSF and gpl-violations.org have done for years." The actions of one individual prompted the netfilter project to make a statement endorsing the principles, which we covered earlier this month.

Comments (22 posted)

Calls for Presentations

LLVM Cauldron 2016

LLVM Cauldron will be held September 8 in Hebden Bridge, UK. "This will be a one-day conference with a single talks track and a space for breakout sessions, birds of a feather session, and tutorials. For those that want to give a brief description of their work, there will be lightning talks. The meeting is free to attend and open to anyone whether a hobbyist, from academia, or from industry, and regardless of previous experience with LLVM." The call for papers deadline is August 8.

Full Story (comments: none)

CHAR(16)

CHAR(16) is an international conference showcasing significant developments the leading PostgreSQL engineering teams have made in the areas of Clustering, High Availability and Replication. The conference takes place December 6 in New York, NY. The call for papers closes September 30.

Full Story (comments: none)

CFP Deadlines: July 21, 2016 to September 19, 2016

The following listing of CFP deadlines is taken from the LWN.net CFP Calendar.

Deadline       Event dates                Event (Location)
July 22        October 7-8                Ohio LinuxFest 2016 (Columbus, OH, USA)
July 24        September 20-21            Lustre Administrator and Developer Workshop (Paris, France)
July 30        August 25-28               Linux Vacation / Eastern Europe 2016 (Grodno, Belarus)
July 31        September 9-11             GNU Tools Cauldron 2016 (Hebden Bridge, UK)
July 31        October 29-30              PyCon HK 2016 (Hong Kong, Hong Kong)
August 1       October 6-7                PyConZA 2016 (Cape Town, South Africa)
August 1       September 28-October 1     systemd.conf 2016 (Berlin, Germany)
August 1       October 8-9                Gentoo Miniconf 2016 (Prague, Czech Republic)
August 1       November 11-12             Seattle GNU/Linux Conference (Seattle, WA, USA)
August 3       October 1-2                openSUSE.Asia Summit (Yogyakarta, Indonesia)
August 5       January 16-20              linux.conf.au 2017 (Hobart, Australia)
August 7       November 1-4               PostgreSQL Conference Europe 2016 (Tallin, Estonia)
August 7       October 10-11              GStreamer Conference (Berlin, Germany)
August 8       September 8                LLVM Cauldron (Hebden Bridge, UK)
August 15      October 5-7                Netdev 1.2 (Tokyo, Japan)
August 17      September 21-23            X Developers Conference (Helsinki, Finland)
August 19      October 13                 OpenWrt Summit (Berlin, Germany)
August 20      August 27-September 2      Bornhack (Aakirkeby, Denmark)
August 20      August 22-24               7th African Summit on FOSS (Kampala, Uganda)
August 21      October 22-23              Datenspuren 2016 (Dresden, Germany)
August 24      September 9-15             ownCloud Contributors Conference (Berlin, Germany)
August 31      November 12-13             PyCon Canada 2016 (Toronto, Canada)
August 31      October 31                 PyCon Finland 2016 (Helsinki, Finland)
September 1    November 1-4               Linux Plumbers Conference (Santa Fe, NM, USA)
September 1    November 14                The Third Workshop on the LLVM Compiler Infrastructure in HPC (Salt Lake City, UT, USA)
September 5    November 17                NLUUG (Fall conference) (Bunnik, The Netherlands)
September 9    November 16-18             ApacheCon Europe (Seville, Spain)
September 12   November 14-18             Tcl/Tk Conference (Houston, TX, USA)
September 12   October 29-30              PyCon.de 2016 (Munich, Germany)
September 13   December 6                 CHAR(16) (New York, NY, USA)
September 15   October 21-23              Software Freedom Kosovo 2016 (Prishtina, Kosovo)

If the CFP deadline for your event does not appear here, please tell us about it.

Upcoming Events

Events: July 21, 2016 to September 19, 2016

The following event listing is taken from the LWN.net Calendar.

Date(s)                  Event (Location)
July 17-24               EuroPython 2016 (Bilbao, Spain)
July 30-31               PyOhio (Columbus, OH, USA)
August 2-5               Flock to Fedora (Krakow, Poland)
August 10-12             MonadLibre 2016 (Havana, Cuba)
August 12-14             GNOME Users and Developers European Conference (Karlsruhe, Germany)
August 12-16             PyCon Australia 2016 (Melbourne, Australia)
August 18-20             GNU Hackers' Meeting (Rennes, France)
August 18-21             Camp++ 0x7e0 (Komárom, Hungary)
August 20-21             FrOSCon - Free and Open Source Software Conference (Sankt-Augustin, Germany)
August 20-21             Conference for Open Source Coders, Users and Promoters (Taipei, Taiwan)
August 22-24             ContainerCon (Toronto, Canada)
August 22-24             LinuxCon NA (Toronto, Canada)
August 22-24             7th African Summit on FOSS (Kampala, Uganda)
August 24-26             YAPC::Europe Cluj 2016 (Cluj-Napoca, Romania)
August 24-26             KVM Forum 2016 (Toronto, Canada)
August 25-26             Linux Security Summit 2016 (Toronto, Canada)
August 25-26             Xen Project Developer Summit (Toronto, Canada)
August 25-28             Linux Vacation / Eastern Europe 2016 (Grodno, Belarus)
August 25-26             The Prometheus conference (Berlin, Germany)
August 27-September 2    Bornhack (Aakirkeby, Denmark)
August 31-September 1    Hadoop Summit Melbourne (Melbourne, Australia)
September 1-7            Nextcloud Conference (Berlin, Germany)
September 1-8            QtCon 2016 (Berlin, Germany)
September 2-4            FSFE summit 2016 (Berlin, Germany)
September 7-9            LibreOffice Conference (Brno, Czech Republic)
September 8-9            First OpenPGP conference (Cologne, Germany)
September 8              LLVM Cauldron (Hebden Bridge, UK)
September 9-10           RustConf 2016 (Portland, OR, USA)
September 9-11           GNU Tools Cauldron 2016 (Hebden Bridge, UK)
September 9-11           Kiwi PyCon 2016 (Dunedin, New Zealand)
September 9-15           ownCloud Contributors Conference (Berlin, Germany)
September 13-16          PostgresOpen 2016 (Dallas, TX, USA)
September 15-17          REST Fest US 2016 (Greenville, SC, USA)
September 15-19          PyConUK 2016 (Cardiff, UK)
September 16-22          Nextcloud Conference (Berlin, Germany)

If your event does not appear here, please tell us about it.

Page editor: Rebecca Sobol


Copyright © 2016, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds