LWN.net Weekly Edition for November 6, 2014
The Dronecode collaborative project
The Linux Foundation (LF) announced a new project called Dronecode in mid-October, during LinuxCon Europe. The project is a collaboration of developers working on open-source unmanned aerial vehicle (UAV) software. UAVs (a.k.a. drones) may sound like a rather small niche, and one that involves a lot of per-vehicle specialization—factors that might make one ask why a collaborative "umbrella" organization is needed at all. The answer would seem to be that commercial drone makers have concerns about legal and regulatory issues—concerns that may be easier to address from within a non-profit structure. Hopefully, though, the new project will benefit everyone who works with UAVs, hobbyists included.
The Dronecode announcement was made on October 12. In the press release, LF Chief Marketing Officer Amanda McPherson explained that the idea for the project came from executives at 3D Robotics, a commercial UAV manufacturer. Joining 3D Robotics as founding members are several other drone makers, such as Yuneec, Walkera, jDrones, and Squadrone, plus two companies that sell drone-related data services. SkyWard focuses on remote flight monitoring and routing, while DroneDeploy focuses on data capture. In addition, there are several other general technology companies involved, like Intel and Qualcomm.
In the LF announcement, the initial emphasis of Dronecode seems to be on a pair of open-source projects: APM and PX4. APM is an autopilot system that is developed largely at 3D Robotics. The lead maintainer is Andrew "Tridge" Tridgell, who is well-known within the open-source community for a variety of projects (most notably the Samba file server), although he is not a 3D Robotics employee. PX4 is also an autopilot system, although it is developed by a distributed team led by researchers at several Swiss laboratories.
The difference, though, is that APM is designed for "DIY" (do-it-yourself) class devices, whether built by hobbyists or in classrooms. PX4 targets higher-end systems, and it addresses not only multi-rotor helicopter hardware (which is the most common DIY drone form factor), but fixed-wing aircraft as well.
An October 13 announcement at the Dronecode site indicated that several other open-source projects would also be developed under the Dronecode umbrella, including MissionPlanner, DroidPlanner, and MAVLink. MissionPlanner and DroidPlanner are applications for plotting flight paths (for desktop systems and Android, respectively). MAVLink is a UAV-to-ground communication protocol. Altogether, there are quite a few software components that go into building and safely operating a UAV, so there will likely be no shortage of work for Dronecode developers to devote their time to.
It is worth noting, though, that the other factor that may contribute to the decision of many of these projects to band together is that as drone technology has improved, governments are increasingly taking note and, on occasion, stepping in with regulation. Paul Fraidenburgh at Computerworld pointed out that the US Federal Aviation Administration (FAA) made several key decisions about the commercial usage of drones in September 2014.
Those decisions approved commercial drone operations that adhere to specific weight restrictions and operating conditions—such as properly monitoring and maintaining the UAV's altitude within specified limits, so that it does not interfere with manned aircraft. Maintaining altitude is a software problem—one that drone makers will have to take seriously in order to avoid conflicts with the FAA that might ground a company's UAVs. UAVs, like any aircraft, must adhere to all applicable FAA regulations, but the altitude limitation was a major sticking point; the FAA's initial rules limited UAVs to below 400 feet (122 meters).
Writing at ZDNet, Steven J. Vaughan-Nichols noted that one aerospace research firm recently predicted that total market expenditure on UAVs would hit $91 billion (US) within a decade. It is certainly nowhere near that level today, but the commercial uses of drones (such as in film production) are a major money-making opportunity. The Dronecode site points out that there are far more uses, including humanitarian relief and scientific research, but the more lucrative business opportunities are no small factor.
All of the companies and researchers interested in pushing the state-of-the-art in drone software have reasons to band together and collaborate. Establishing a common platform will result in fewer software products that need to be investigated and approved by government regulatory agencies, and collaborating in an open-source project is well-established as one of the quickest ways to accelerate a product's development cycle.
Fortunately, the founding members of Dronecode have a proven history both of working in the open and of considering the needs of DIY projects as well as commercial entities. APM and PX4, for example, run on many hobbyist systems as well as high-end vendor products. Tridgell, in addition to serving as APM maintainer, will also be taking on the role of chairing Dronecode's Technical Steering Committee. Where UAV technology will be a few years down the road is hard to predict, but broad collaboration will surely help.
A control group manager
CGManager is a year-old project to develop a daemon to manage control groups (cgroups) on a Linux system. These days, it is mostly targeted at doing that management for LXC containers, but it was originally envisioned as an alternative to systemd's cgroup management for those distributions that were not using systemd as their init. LXC maintainer Serge Hallyn gave a presentation about CGManager on October 13 at LinuxCon Europe in Düsseldorf, Germany.
![[Serge Hallyn]](https://static.lwn.net/images/2014/lce-hallyn-sm.jpg)
Hallyn began his talk by saying that he and others don't really care about CGManager per se, but they do need the features that it currently provides. If there are alternatives that can still solve the problems that LXC has, he is not tied to keeping CGManager around. He hoped that conversations during the week (at LinuxCon, the containers track at Linux Plumbers, and the systemd hackfest) would be productive in that regard.
Background
Control groups started out as "task containers" when they were first introduced in 2007, Hallyn said. The idea was to add core kernel functionality to group tasks, along with code for tracking and limiting resource usage of each group as a whole. Task containers were eventually renamed and, over the years, controllers for resources like memory, CPU, block I/O, and so on have been added. Cgroups are administered through a filesystem interface.
Containers are an operating system (OS) level virtualization mechanism that uses many different kernel features to emulate virtual machines (VMs). Containers provide a separate, clean environment for the processes they contain using just the base OS, without any hardware support (unlike full virtualization solutions such as KVM). He called containers a "user space fiction" that builds on cgroups, bind mounts, namespaces, and other features to give the illusion of isolated systems. In addition, containers can be used without requiring privilege and they can be nested, he said.
CGManager was born out of the need for safe, unprivileged, nested containers. The cgroups maintainer (Tejun Heo) discourages the delegation of portions of the cgroup filesystem (cgroupfs) to unprivileged processes, Hallyn said, which was the mechanism that LXC used to support unprivileged containers in the past. CGManager thus avoids the need to grant cgroupfs access to other processes, and it prevents processes from "escaping" into parent cgroups, even if they are root within the container.
In order to support that use case, he proposed the idea of CGManager in November 2013. Since that time, it has been developed and is in use by LXC, upstart, systemd-shim, and libvirt on Ubuntu and other systems.
Design
There is one CGManager daemon per host and requests to it are sent over D-Bus. The kinds of requests that are made are things like "create a new cgroup" or "move this process into that cgroup". D-Bus uses a Unix-domain socket, so the SCM_CREDENTIALS message can provide the UID, GID, and PID of the requesting process; the kernel will translate those to the appropriate values in the receiving namespaces.
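As a rough illustration, here is how a daemon can read those translated peer credentials from a Unix-domain socket; this is a generic sketch of the kernel interface, not CGManager's actual code, and error handling is minimal:

    #define _GNU_SOURCE
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/uio.h>

    /* Read one byte plus the SCM_CREDENTIALS ancillary data; the kernel
     * fills in the struct ucred, translating the IDs into the receiving
     * process's namespaces. */
    static int get_peer_creds(int sock, struct ucred *out)
    {
        int on = 1;
        char data;
        union {                       /* properly aligned cmsg buffer */
            char buf[CMSG_SPACE(sizeof(struct ucred))];
            struct cmsghdr align;
        } u;
        struct iovec iov = { .iov_base = &data, .iov_len = 1 };
        struct msghdr msg = {
            .msg_iov = &iov, .msg_iovlen = 1,
            .msg_control = u.buf, .msg_controllen = sizeof(u.buf),
        };

        /* Ask the kernel to attach credentials to incoming messages. */
        setsockopt(sock, SOL_SOCKET, SO_PASSCRED, &on, sizeof(on));
        if (recvmsg(sock, &msg, 0) < 0)
            return -1;
        for (struct cmsghdr *c = CMSG_FIRSTHDR(&msg); c;
             c = CMSG_NXTHDR(&msg, c))
            if (c->cmsg_level == SOL_SOCKET &&
                c->cmsg_type == SCM_CREDENTIALS) {
                memcpy(out, CMSG_DATA(c), sizeof(*out));
                return 0;     /* out->pid/uid/gid, as seen locally */
            }
        return -1;
    }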
But when a request to move a process to a new cgroup is sent to CGManager, the PID used to identify it is local to the requesting process's namespace. CGManager would somehow have to translate that to the PID in the root namespace, but that is tricky to do. One possibility would be to use the setns() system call to put CGManager into the namespace of the requester. There are two problems there, however. For one, he wanted to be able to support kernels prior to the introduction of setns() in 3.0. More importantly, though, switching to the requester's namespace could cause CGManager to lose the privileges it needs to do its job: possibly including the ability to switch back to the root namespace.
Beyond those problems, though, he wanted users of CGManager to be able to send simple D-Bus requests, without adding credentials or doing anything special. So there is a proxy that lives in each container to accept simple D-Bus requests and translate them to requests with credentials to send to CGManager. It is worth noting that the proxies cannot chain, as their input and output have different characteristics; the proxies talk directly to CGManager, no matter how deeply nested they are. Chaining them could introduce performance problems for deeply nested containers, he said.
The standard socket for CGManager is created at /sys/fs/cgroup/cgmanager/sock. LXC bind mounts the /sys/fs/cgroup/cgmanager directory into each container. The proxy moves the cgmanager directory aside and puts its own socket in its place. That way, processes inside and outside of containers do not have to be aware of the difference.
Hallyn listed the 18 or so D-Bus methods that CGManager provides. They allow requesters to create cgroups, move processes to or from them, list processes in a cgroup, and so on. All of the requests are handled as being relative to the cgroup of the requester, so there is no way to create or access cgroups further up the hierarchy.
The GetValue and SetValue requests allow processes to set cgroup subsystem resource limits using the names currently exported by cgroupfs. There was discussion early on about creating an API to separate cgroup users from the exact names that are currently exported, but that has not happened. The idea is to allow the kernel to change those parameter names and other characteristics without requiring changes in various user-space programs. It is a "worthwhile goal", Hallyn said, but LXC has been exporting those names for longer than he has been maintaining it. For now, LXC will continue using them, but the project is willing to work on a higher-level API down the road.
Future
There are several alternatives for the future of CGManager, he said. One possibility is to enhance cgroupfs to virtualize cgroups, so that processes would not be able to see cgroups above (or at the same level as) their own in the hierarchy. Currently, /proc/self/cgroup and cgroupfs leak information about other cgroups. One way to avoid that would be via the cgroup namespaces that have been proposed by Aditya Kali. That, coupled with the ability to fully delegate parts of the hierarchy to other processes, would obviate the need for CGManager. Since the cgroups maintainer does not favor that approach, though, it is unlikely to happen, Hallyn said.
Another idea might be to move the functionality provided by CGManager into systemd. That would require enhancing the functionality of systemd slices and allowing users to specify sub-slices. In order to support the use cases that LXC users require, much of what CGManager does would have to move into systemd, and it is unclear whether that would be possible.
Lastly, CGManager could continue on. There are features that need work, including the higher-level API to abstract away the cgroupfs resource file names, which should be designed in conjunction with others in the community. Support for the new "remove on empty" feature of cgroups is another. There is also work to be done on integrating with systemd, since CGManager will have to coexist with systemd on some systems. All of those topics were things he hoped to discuss with others during the week.
[ I would like to thank the Linux Foundation for travel assistance to Düsseldorf for LinuxCon Europe. ]
"Importing" data runs afoul of the ITC
Software developers who work and live outside of the US have taken comfort in being outside that legal jurisdiction, where US patent law does not apply to them. However, the International Trade Commission (ITC), an arm of the US government that prohibits certain unfair business practices involving international trade, used a recent investigation as an opportunity to prohibit certain electronic transmissions from being sent into the US from outside the country. This is troubling for open-source communities and others, but businesses are fighting back.
The ITC report on the investigation, titled "Certain Digital Models, Digital Data, and Treatment Plans for Use in Making Incremental Dental Positioning Adjustment Appliances, the Appliances Made Therefrom, and Methods of Making the Same", initially looks rather mundane. In early 2012, California-based orthodontics company Align Technology claimed that its patents on orthodontics devices, and on systems and methods relating to those devices, were being infringed by Pakistan-based competitor ClearCorrect (which also operates in Texas). In its original March 2012 notice [PDF] to the public, the ITC characterized the alleged harm as stemming from the importation and manufacture for commercial purposes of those goods.
The alleged importation occurred when digital information that would be used to help make its products was transmitted electronically over the Internet from ClearCorrect's Pakistan home to its Texas-based arm. The issue of interest here for software developers, then, is whether information transmitted electronically into the US that could be used in patent-infringing goods or services constitutes "articles" being "imported" under the ITC's rules, and is thus subject to its jurisdiction.
The ITC took a couple of years to deliberate, with an initial determination made by an ITC administrative law judge in May 2013. The Commission's investigation came to a conclusion in April 2014, when it released its review [PDF] of the initial determination. In that review, the ITC found that ClearCorrect unlawfully transmitted its digital goods into the US, and ordered the company to cease and desist from further transmitting those specific goods into the country.
ClearCorrect appealed [PDF] soon after, with only mild success. It got a favorable ruling in July from the Court of Appeals for the Federal Circuit (CAFC), which ruled that the ITC had no legal authority to review the original judge's initial determination, but that determination still held that inbound electronic signals from outside the United States are imported articles.
Interested third-parties have decided to keep the case alive, as they are concerned about the impact the ITC's decision could have on them. In a recent amicus brief [PDF] to the CAFC, the Internet Association, a trade association made up of major companies including Twitter, Google, Amazon, Facebook, and Netflix, argued that the ITC had no legal basis to support its initial ruling. The organization formally asked the CAFC to overturn its decision. The Association's argument has five prongs:
First, that patents don't apply to transmissions of electronic information at all because they aren't physical goods: "Electronic transmissions lack the tangible, physical embodiment necessary to be eligible for patent-law protection" (page 6).
Second, that the ITC can't find a transmission infringing solely because it contributes to infringement after it is imported, since the transmission alone was not infringing at the time of importation: "The holding below improperly treated the electronic transmissions as infringing articles even though the statute requires infringement at the time of importation" (page 7).
Third, that the ITC was wrong to say imported electronic goods can infringe method patents (like the patents on a method to use digital data sets to construct orthodontic devices, as in this case), because a method patent is just a recipe, and not an actual embodiment of an infringing activity: "Even if an electronic signal could be considered an infringing article, the ITC erred in holding that an article may infringe a method patent, which claims the performance of a series of steps, not a structure" (page 7).
Fourth, that the CAFC said repeatedly in past cases that the types of goods the ITC can prohibit have to be "tangible products" (page 8).
Lastly (and possibly most compellingly), that Congress didn't give the ITC any power to do what it's trying to do. The main stick that the ITC has to smack down what it sees as bad actors is the power to issue an "exclusion order". When the ITC issues an exclusion order, it's instructing US customs agents to block infringing goods from being brought into the country. Since it's totally impractical to tell border control to stop certain electronic signals from coming into the country, the ITC tried to get around the problem in this case by using a different stick: ordering ClearCorrect to cease and desist from importing its digital data sets relating to orthodontic devices via electronic transmission and from using the data sets already available to its Texas-based arm in its commercial products and services. However, written law states that the ITC can't use the latter stick if it can't use the former: "The ITC attempted to impose a cease-and-desist remedy, but the statute makes clear that the authority to issue cease-and-desist orders does not extend to cases where an exclusion order is unavailable" (page 9).
It is not hard to imagine how open-source enthusiasts would be concerned by an unsuccessful appeal. For example, Fedora does not package any software in its repositories that the project believes violate US patents. Some users (likely including at least a few US residents) get this software through third-party repositories, such as RPM Fusion. While the vast majority of RPM Fusion's public mirrors are hosted on servers outside the US, the fact that some of the contents may be transmitted to a US user who might engage in an infringing activity could make RPM Fusion subject to an ITC cease-and-desist order; an order to stop transmitting the infringing software into the US, with steep fines for non-compliance. That order could come about if some organization that holds US patents that read on some software provided by RPM Fusion made a complaint to the ITC. Enforcing the order on a non-US organization might be difficult for the ITC, but it would be somewhat painful, or at least annoying, for the organization in question.
The open-source community should be pleased by the fact that large Internet corporations are formally expressing their concern with this case. In addition, should the CAFC rule against ClearCorrect, an appeal to the Supreme Court, which has repeatedly disagreed with the CAFC's patent decisions, could result in an overruling. This case is one worth keeping an eye on.
Security
Smartcard features on the YubiKey NEO
YubiKeys are a line of small and low-cost hardware security tokens popular for their one-time password (OTP) functionality. While the basic YubiKey model is limited to generating OTPs when plugged into a USB port, the more expensive NEO model adds contactless NFC support for OTP and it can be configured as a smartcard—which opens up the possibility of several other use cases. When we first looked at the NEO in April, the smartcard functionality was in a temperamental state. Fortunately, things have matured quite a bit since then, which significantly increases the YubiKey NEO's value as a security tool.
To recap, both the regular YubiKey and the NEO include two virtual configuration "slots" that can be set up independently. Each slot can be loaded with a secret credential that the device will use to generate a security code in response to a button press (one slot is bound to a short tap, the other to a longer press-and-hold). The YubiKey presents itself as a standard USB human interface device (HID) keyboard, so there are no drivers required on any platform: one plugs it in and it works. In this basic mode, each slot can be set up to send a static password, an Open Authentication (OATH)-compatible Hash-based message authentication code (HMAC)-based One-Time Password (HOTP), a password for Yubico's own OTP service, or an HMAC-SHA1 challenge-response code.
But this set of options is a bit limiting. HOTP is not widely deployed, at least not in comparison to the other OATH standard, Time-based One-Time Passwords (TOTP). The YubiKey cannot compute TOTP passwords internally, because doing so requires a realtime clock. A YubiKey can generate a TOTP password when used in conjunction with a software program running on the computer that the YubiKey is plugged into (using the HMAC-SHA1 challenge-response mode); Yubico provides both a desktop Qt application and an Android app for this purpose.
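The division of labor is worth sketching: the host derives the RFC 6238 time counter and sends it as the challenge, the key returns an HMAC-SHA1 over it, and the host applies RFC 4226 dynamic truncation to produce the familiar six-digit code. In the sketch below, get_hmac_response() is a hypothetical stand-in for the actual USB exchange that Yubico's software performs:

    #include <stdint.h>
    #include <time.h>

    /* Hypothetical stand-in for the YubiKey challenge-response exchange:
     * sends an 8-byte challenge, receives the 20-byte HMAC-SHA1 result. */
    extern void get_hmac_response(const uint8_t challenge[8],
                                  uint8_t resp[20]);

    uint32_t totp_code(void)
    {
        uint64_t counter = (uint64_t)time(NULL) / 30; /* RFC 6238 step */
        uint8_t challenge[8], resp[20];

        for (int i = 7; i >= 0; i--) {                /* big-endian pack */
            challenge[i] = counter & 0xff;
            counter >>= 8;
        }
        get_hmac_response(challenge, resp);

        /* RFC 4226 dynamic truncation of the HMAC-SHA1 result. */
        int off = resp[19] & 0x0f;
        uint32_t bin = ((uint32_t)(resp[off] & 0x7f) << 24) |
                       ((uint32_t)resp[off + 1] << 16) |
                       ((uint32_t)resp[off + 2] << 8) |
                        (uint32_t)resp[off + 3];
        return bin % 1000000;                         /* six digits */
    }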
This is a useful feature, since so many services use TOTP, but the YubiKey is still limited to storing two credentials. Software-based competitors, like the Google Authenticator app for Android, can store any number of credentials.
But this is where the smartcard features can make up some of the difference. The NEO includes a Common Criteria–certified JavaCard secure element, which can be loaded with several JavaCard applets. One of the applets developed by Yubico is an OATH implementation that can store multiple TOTP credentials, essentially allowing the NEO to serve as a Google Authenticator substitute—at least, a substitute on any device that the NEO can be connected to (which, of course, does not include every Android device on the market).
Smarts and cards
![[YubiKey neoman]](https://static.lwn.net/images/2014/11-yubikey-neoman-sm.png)
Back in April, however, the tools required to get the JavaCard applets running on the NEO were in a bit of a rough state and the relevant information was limited to whatever one could find by scouring the forums. To be fair, of course, anyone without prior experience in the world of configuring and using JavaCard hardware will face a difficult learning curve when tackling the task, but the YubiKey software was spotty and the documentation sparse. For example, it relied on an external tool to manage and upload applets to the JavaCard element—one that suffered from incompatibilities with modern Linux systems.
Subsequently, though, Yubico wrote its own command-line program for interfacing with the NEO's JavaCard element, developed a Qt-based graphical tool for configuring the NEO's mode and applet settings, and put a considerable amount of work into developing a suite of applets. A handful of these applets come with the NEO firmware, which spares new users the pain of compiling and installing the applets altogether. But, if users so choose, they can still update the applets manually.
All of Yubico's client software is available from the Yubico site, although most of it is also now packaged by mainstream Linux distributions. Running:
ykneomgr -a
will return the applets currently installed on an attached NEO, listed by their JavaCard Application Identifiers (AID). For example:
0: a000000527200101
1: d2760000850101
2: d2760001240102000000000000010000
3: a000000308000010000100
4: a000000527210101
Determining which applets correspond to the AIDs, though, requires some searching, as there is no official list. In this instance, there is a forum thread that sheds some light. In order, these applets are the basic NEO OTP functionality, the NFC data-exchange functionality, an OpenPGP applet, a Personal Identity Verification (PIV) applet, and the HOTP/TOTP OATH applet.
A new applet can be installed with ykneomgr -i and an existing applet deleted with ykneomgr -D. The basic OTP and NFC applets should not be deleted; they implement the core functionality of the device and (at least as of today) source code is not available for them. Source is provided only for the OATH applet and OpenPGP applet (the latter of which is a slightly modified version of Joeri de Ruiter's GPLv2+ licensed JavaCard OpenPGP applet). The JavaCard element can be protected with a PIN code to prevent unauthorized users from removing or replacing applets; it is clearly a good idea to enable this protection, lest some attacker replace an applet on one's YubiKey.
Time for TOTP
![[YubiKey desktop OATH application]](https://static.lwn.net/images/2014/11-yubikey-oathdesktop-sm.png)
The OATH applet is, for many users, the key piece of JavaCard functionality, because it effectively removes the two-slot credential limitation (how many HOTP/TOTP secrets it can hold varies, depending on their size, but the number is quite large) and is compatible with the majority of two-factor authentication options in widespread usage. To use it, the NEO must first be placed into the proper mode (by default, the JavaCard functionality is switched off, for wider compatibility). The graphical neoman application sports a selector for toggling OTP mode and JavaCard mode (labeled CCID) independently.
The other half of the OATH applet is the client-side YubiOATH application. The desktop version is Python-based; instructions and dependencies are listed on the application web page, though the instructions for launching it are incorrect. With the CCID-enabled NEO plugged into a USB port, the user can launch the OATH client application with
python ./ui_systray.py &
This spawns a system-tray/taskbar application; right-clicking on its icon, one can open the main window, which shows a list of the configured OATH accounts, the current code for each account, and a timer indicating how long the TOTP codes have before they expire and new codes are generated. In the Android YubiOATH app, one can swipe the NEO past the device's NFC sensor to see the TOTP codes generated.
![[Adding a credential to the YubiKey desktop OATH application]](https://static.lwn.net/images/2014/11-yubikey-oath-add-sm.png)
The user experience is more or less identical to that of mobile apps like Google Authenticator (in fact, the Android version of Yubico's client software even mimics the Google Authenticator icon). But there are a few differences. The important distinction is that the desktop OATH application has no access to any cameras attached to the system (nor other image-input methods); it can therefore not be used to load any HOTP/TOTP secrets that are only presented as QR codes.
In my tests, only about half of the two-factor authentication services I configured displayed a text version of the HOTP/TOTP secret credential in addition to the QR code version. The good news is that, because HOTP/TOTP credentials are stored on the NEO, the NEO can be set up with the Yubico Authenticator app on an Android device, and it will subsequently work on the desktop software, too. But for those without an Android device, the Yubico desktop software will not work with a QR-code–only configuration process.
PGP and other applet options
After the OATH applet, the most popular JavaCard applet for the NEO seems to be the OpenPGP applet. Support for smartcard configuration and usage is built in to GnuPG, and the NEO's OpenPGP applet works without too much trouble. With the NEO plugged in, all configuration is done through the GnuPG command-line tools.
Typing gpg --card-edit opens the connection to the card; the admin command enters configuration mode, generate generates a key pair, and so forth. Earlier versions of the applet could only generate a new key pair on the card itself (and could not import an existing key), but this has been fixed in subsequent releases.
The one limitation to be aware of with the OpenPGP applet is that the hardware limits the size and types of keys it can store. It supports maximum key sizes of 2048 bits for RSA keys and 320 bits for Elliptic Curve Cryptography (ECC) keys (of the finite-field, ECC-over-GF(p) variety), due to the limitations of the cryptographic coprocessor on the NXP A700x security microcontroller. There are several key types allowed, but each must be explicitly supported—in fact, GnuPG's ECC support was only added in the GnuPG 2.1 development branch, and is regarded as unstable. Yubico's Klas Lindfors told forum members that the company has been experimenting with other elliptic-curve keys, although at the moment it does not feel that GnuPG 2.1 has stabilized enough to roll out support.
For those interested in exploring the OpenPGP functionality in detail, Yubico's Simon Josefsson has written a detailed account of how the NEO's OpenPGP applet can be used—including quite a few less-than-common options like embedding a JPEG photo into the key.
The PIV applet implements a US National Institute of Standards and Technology (NIST) identity standard called SP 800-73-3, part of the government's FIPS 201 Personal Identity Verification program. It stores a secret key on the device, which is then usable to encrypt or sign messages. This is likely to be of less practical value than the OpenPGP applet for those who do not have to work with US Government–mandated FIPS-compliant systems.
But other applets are certainly possible, and there appear to be users on the discussion forum who have undertaken development of their own applets—including, for example, a Bitcoin wallet applet. Yubico also seems to be working on other possibilities; it has evidently developed a yet-to-be-released Bitcoin applet of its own (which has been alluded to on the forums and is evident in the company's GitHub repository).
U2F and more
One final tidbit of trivia concerns Yubico's support for the Universal 2nd Factor (U2F) two-factor authentication standard. The standard is published by the FIDO Alliance, an industry group to which Yubico belongs. Back when the first public drafts began to appear in early 2014, Yubico announced its intention to support U2F in the YubiKey product line, although exactly how it would do so remained unclear.
The company has now released two separate U2F-capable products. One is a U2F-only token called the FIDO Security Key. The other, however, is a refresh of the NEO that adds U2F functionality alongside the OTP and smartcard functions. But only those NEOs shipped from Yubico after October 1 support U2F, since it is implemented as a firmware-level feature. Older NEOs cannot be field-upgraded to support U2F because all YubiKey models—by design—cannot be reflashed with new firmware.
Since the product name was unchanged and there is no easy way to tell one NEO from another on the outside, this refresh spawned a fair amount of confusion among YubiKey customers. Some of them, in fact, took the company to task for advertising that the NEO was certified for U2F while neither clearly stating that some NEOs would remain incompatible nor offering purchasers any upgrade path.
On the other hand, it does seem like implementing U2F support in a JavaCard applet would be possible. That idea has been floated multiple times on the forums, so far with no response either way from the company. Then again, NEO owners' frustration with the U2F feature may simply motivate some third-party developers to undertake the task on their own.
As always, it is difficult to form an objective conclusion about the value proposition that the YubiKey NEO provides. The NEO remains quite a bit more expensive than the other YubiKey models ($50 compared to $25), but with working smartcard functionality, it does quite a bit more. The OATH applet support removes the two-slot configuration limit, which is a big deal to many customers. The case for the OpenPGP and PIV applets is harder to make. There are many other smartcard options on the market, most of which are cheaper than the NEO and many of which do not come with the same key-size limitations. When it comes to getting the most value out of a piece of hardware, though, the addition of OpenPGP functionality in such a compact and portable format is appealing, indeed.
The software side of the product remains muddled in several key places: out-of-date or incorrect documentation, numerous inconsistencies (even on simple matters like program names), and very little in the way of support. The company does seem to be committed to free software, though—its releases tend to be GPLv3 unless they are derived from other works—so perhaps some additional engagement with the community is all that is needed to bring the simple user experience of the basic YubiKey to its more complex features as well.
Brief items
Security quotes of the week
You may not be watching, but the telescreen is listening.
New vulnerabilities
dokuwiki: multiple vulnerabilities
Package(s): dokuwiki
CVE #(s): CVE-2014-8761 CVE-2014-8762 CVE-2014-8763 CVE-2014-8764
Created: October 30, 2014
Updated: November 5, 2014
Description: From the Debian advisory (which lists four CVE numbers, though it only talks about two vulnerabilities):
Two vulnerabilities have been discovered in dokuwiki. Access control in the media manager was insufficiently restricted and authentication could be bypassed when using Active Directory for LDAP authentication.
fedup: temporary directory creation
Package(s): fedup
CVE #(s): CVE-2013-6494
Created: November 3, 2014
Updated: November 5, 2014
Description: From the Red Hat bugzilla:
Michael Scherer of Red Hat reports: While trying to upgrade my F19 to F20 using fedup, I noticed that it uses a directory in /var/tmp/ with a fixed, known name:
    cachedir = '/var/tmp/fedora-upgrade'
kernel: multiple vulnerabilities
Package(s): kernel
CVE #(s): CVE-2014-3647 CVE-2014-7207
Created: October 31, 2014
Updated: November 5, 2014
Description: From the Debian advisory:
CVE-2014-3647 - Nadav Amit reported that KVM mishandles noncanonical addresses when emulating instructions that change rip, potentially causing a failed VM-entry. A guest user with access to I/O or the MMIO can use this flaw to cause a denial of service (system crash) of the guest.
CVE-2014-7207 - Several Debian developers reported an issue in the IPv6 networking subsystem. A local user with access to tun or macvtap devices, or a virtual machine connected to such a device, can cause a denial of service (system crash).
kernel: denial of service
Package(s): kernel
CVE #(s): CVE-2014-7145
Created: October 31, 2014
Updated: November 5, 2014
Description: From the Ubuntu advisory:
Raphael Geissert reported a NULL pointer dereference in the Linux kernel's CIFS client. A remote CIFS server could cause a denial of service (system crash) or possibly have other unspecified impact by deleting IPC$ share during resolution of DFS referrals.
mod_auth_mellon: two vulnerabilities
Package(s): mod_auth_mellon
CVE #(s): CVE-2014-8566 CVE-2014-8567
Created: November 5, 2014
Updated: November 6, 2014
Description: From the Red Hat advisory:
An information disclosure flaw was found in mod_auth_mellon's session handling that could lead to sessions overlapping in memory. A remote attacker could potentially use this flaw to obtain data from another user's session. (CVE-2014-8566)
It was found that uninitialized data could be read when processing a user's logout request. By attempting to log out, a user could possibly cause the Apache HTTP Server to crash. (CVE-2014-8567)
openstack-cinder: information disclosure
Package(s): openstack-cinder
CVE #(s): CVE-2014-3641
Created: November 3, 2014
Updated: November 12, 2014
Description: From the CVE entry:
The (1) GlusterFS and (2) Linux Smbfs drivers in OpenStack Cinder before 2014.1.3 allows remote authenticated users to obtain file data from the Cinder-volume host by cloning and attaching a volume with a crafted qcow2 header.
openstack-nova: denial of service
Package(s): openstack-nova
CVE #(s): CVE-2014-3608
Created: November 3, 2014
Updated: November 5, 2014
Description: From the CVE entry:
The VMWare driver in OpenStack Compute (Nova) before 2014.1.3 allows remote authenticated users to bypass the quota limit and cause a denial of service (resource consumption) by putting the VM into the rescue state, suspending it, which puts it into an ERROR state, and then deleting the image. NOTE: this vulnerability exists because of an incomplete fix for CVE-2014-2573.
php-Smarty: code execution
Package(s): php-Smarty
CVE #(s): CVE-2014-8350
Created: November 5, 2014
Updated: May 3, 2016
Description: From the CVE entry:
Smarty before 3.1.21 allows remote attackers to bypass the secure mode restrictions and execute arbitrary PHP code as demonstrated by "{literal}<{/literal}script language=php>" in a template.
python-keystoneclient: man-in-the-middle attacks
Package(s): python-keystoneclient
CVE #(s): CVE-2014-7144
Created: November 3, 2014
Updated: January 9, 2015
Description: From the CVE entry:
OpenStack keystonemiddleware (formerly python-keystoneclient) 0.x before 0.11.0 and 1.x before 1.2.0 disables certification verification when the "insecure" option is set in a paste configuration (paste.ini) file regardless of the value, which allows remote attackers to conduct man-in-the-middle attacks via a crafted certificate.
RHOSE: two vulnerabilities
Package(s): RHOSE
CVE #(s): CVE-2014-3602 CVE-2014-3674
Created: November 4, 2014
Updated: November 26, 2014
Description: From the Red Hat advisory:
It was reported that OpenShift Enterprise 2.2 did not properly restrict access to services running on different gears. This could allow an attacker to access unprotected network resources running in another user's gear. OpenShift Enterprise 2.2 introduces the oo-gear-firewall command, which creates firewall rules and SELinux policy to contain services running on gears to their own internal gear IPs. The command is invoked by default during new installations of OpenShift Enterprise 2.2 to prevent this security issue. Administrators should run the following on node hosts in existing deployments after upgrading to 2.2 to address this security issue:
    # oo-gear-firewall -i enable -s enable
Please see the man page for the oo-gear-firewall command for more details. (CVE-2014-3674)
It was reported that OpenShift Enterprise did not restrict access to the /proc/net/tcp file on gears, which allowed local users to view all listening connections and connected sockets. This could result in remote systems' IP or port numbers in use being exposed, which may be useful for further targeted attacks. Note that for local listeners, OSE restricts connections to within the gear by default, so even with knowledge of the local port and IP the attacker is unable to connect. This bug fix updates the SELinux policy on node hosts to prevent this gear information from being accessed by local users. (CVE-2014-3602)
ruby: two vulnerabilities
Package(s): ruby1.8, ruby1.9.1, ruby2.0, ruby2.1
CVE #(s): CVE-2014-4975 CVE-2014-8080
Created: November 5, 2014
Updated: November 14, 2014
Description: From the Ubuntu advisory:
Will Wood discovered that Ruby incorrectly handled the encodes() function. An attacker could possibly use this issue to cause Ruby to crash, resulting in a denial of service, or possibly execute arbitrary code. The default compiler options for affected releases should reduce the vulnerability to a denial of service. (CVE-2014-4975)
Willis Vandevanter discovered that Ruby incorrectly handled XML entity expansion. An attacker could use this flaw to cause Ruby to consume large amounts of resources, resulting in a denial of service. (CVE-2014-8080)
shim: multiple vulnerabilities
Package(s): shim
CVE #(s): CVE-2014-3675 CVE-2014-3676 CVE-2014-3677
Created: November 5, 2014
Updated: February 11, 2015
Description: From the Red Hat advisory:
A heap-based buffer overflow flaw was found in the way shim parsed certain IPv6 addresses. If IPv6 network booting was enabled, a malicious server could supply a crafted IPv6 address that would cause shim to crash or, potentially, execute arbitrary code. (CVE-2014-3676)
An out-of-bounds memory write flaw was found in the way shim processed certain Machine Owner Keys (MOKs). A local attacker could potentially use this flaw to execute arbitrary code on the system. (CVE-2014-3677)
An out-of-bounds memory read flaw was found in the way shim parsed certain IPv6 packets. A specially crafted DHCPv6 packet could possibly cause shim to crash, preventing the system from booting if IPv6 booting was enabled. (CVE-2014-3675)
spacewalk-java: cross-site scripting
Package(s): spacewalk-java
CVE #(s): CVE-2014-3654
Created: November 3, 2014
Updated: November 5, 2014
Description: From the SUSE bug report:
Stored cross-site scripting on /rhn/kickstart/cobbler/CustomSnippetList.do using the name parameter of a "snippit":
- Example: setting the name to: testabc" onclick="alert(1)
- This will execute when trying to view, delete, etc. (as far as I can tell, it becomes impossible to delete)
- This is the one place in the application where something is indexed by a name, not its id, which causes all kinds of problems with viewing/deleting/etc. when an attacker slips in HTML characters
systemd-shim: denial of service
Package(s): systemd-shim
CVE #(s): CVE-2014-8399
Created: October 30, 2014
Updated: November 5, 2014
Description: From the Ubuntu advisory:
It was discovered that systemd-shim incorrectly shipped with a debugging clause enabled. A local attacker could possibly use this issue to cause a denial of service.
Page editor: Jake Edge
Kernel development
Brief items
Kernel release status
The current development kernel is 3.18-rc3, released on November 2. Linus complained that things aren't slowing down as he would like, but doesn't seem too worried: "That said, I don't think there is anything particularly horrible in here. Lots and lots of small stuff, with drivers accounting for the bulk of it (both in commits and in lines), but networking and core kernel showing up too. Nothing particularly stands out." With this prepatch, the codename for the release has changed to "Diseased Newt."
Stable updates: 3.17.2, 3.16.7, 3.14.23, and 3.10.59 were released on October 30.
Vetter: Atomic Modeset Support for KMS Drivers
For those who are interested in the grungy details of getting the new atomic modesetting operations working with existing graphics drivers, Daniel Vetter has the scoop: "So I've just reposted my atomic modeset helper series, and since the main goal of all that work was to ensure a smooth and simple transition for existing drivers to the promised atomic land it's time to elaborate a bit. The big problem is that the existing helper libraries and callbacks to driver backends don't really fit the new semantics, so some shuffling was required to avoid long-term pain. So if you are a driver writer and just interested in the details then read for what needs to be done to support atomic modeset updates using these new helper libraries."
Linux 3.16.y.z extended stable support
The Ubuntu kernel team has announced that they will be providing extended support for the 3.16 kernel series. The team will pick up where Greg Kroah-Hartman left off, with 3.16.7, and will provide support until April 2016.
Kernel development news
Supporting solid-state hybrid drives
In recent years we have seen the addition of a number of subsystems to the kernel that provide high-speed caching for data on (relatively) slow drives; examples include bcache and dm-cache. But there is nothing preventing drive manufacturers from building this kind of caching into their products directly. The result of such bundling is "solid-state hybrid drives" — rotating drives that have some flash storage built in as well. Properly used, that flash storage can speed accesses to frequently used data. But it turns out that getting to "properly used" is not quite as straightforward as one might think.

Of course, one can simply leave everything up to the drive itself. Left to its own devices (so to speak), the drive will observe which blocks are frequently accessed and work to keep those blocks in fast storage. But the operating system — or the programs running on that system — will often have a better idea of which data will be most useful in the future. If that information is communicated to the drives, the result should be better use of fast storage, and, thus, better performance.
Enabling that communication is the goal of this patch set posted by Jason Akers. The response to that patch set from the kernel community makes it clear, though, that there is still some work to be done to figure out the best way to get the best possible performance from such drives.
This patch set uses the per-process I/O priority value as a way of signaling information about cache usage. That priority can be set by way of the ionice command. Using a few bits of the priority field, the user can specify one of a number of policies (listed here in symbolic form):
- IOPRIO_ADV_EVICT says that the data involved in I/O operations should be actively removed from the cache, should it be found there. It's a way of saying that the data will, with certainty, not be used again in the near future.
- IOPRIO_ADV_DONTNEED says that the data should not be cached, but that there is no need to actively evict it from the cache if it's already there.
- IOPRIO_ADV_NORMAL leaves caching policy up to the drive, as if no advice had been provided at all.
- IOPRIO_ADV_WILLNEED indicates that the data will be needed again in the near future and, thus, should be stored in the cache.
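As a concrete (and necessarily speculative) illustration, a process might select one of these policies along the following lines. The ioprio_set() system call and the IOPRIO_WHO_PROCESS, IOPRIO_CLASS_BE, and class-shift constants are real; the IOPRIO_ADV_* values and the bit position used to carry the advice are assumptions standing in for the patch set's actual encoding:

    #define _GNU_SOURCE
    #include <sys/syscall.h>
    #include <unistd.h>

    /* Real kernel constants. */
    #define IOPRIO_WHO_PROCESS  1
    #define IOPRIO_CLASS_BE     2
    #define IOPRIO_CLASS_SHIFT  13

    /* Assumed values; the real patch set defines its own encoding. */
    #define IOPRIO_ADV_SHIFT    8
    #define IOPRIO_ADV_WILLNEED 3

    int main(void)
    {
        /* Best-effort class, priority 4, plus the cache-advice bits. */
        int ioprio = (IOPRIO_CLASS_BE << IOPRIO_CLASS_SHIFT) |
                     (IOPRIO_ADV_WILLNEED << IOPRIO_ADV_SHIFT) | 4;

        /* A pid of 0 means "the calling process"; there is no glibc
         * wrapper for ioprio_set(), so syscall() is used directly. */
        return syscall(SYS_ioprio_set, IOPRIO_WHO_PROCESS, 0, ioprio) < 0;
    }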
This patch set is unlikely to be merged in anything close to its current form for a few reasons. One of those is that, as a few developers pointed out, associating I/O caching policy with a process is a bit strange. Any given process may want different caching policies for different files it works with; indeed, it may want different policies for different parts of the same file. Creating a single, per-process policy makes this kind of use nearly impossible.
Beyond that, as Dave Chinner pointed out, the process that generates an I/O operation in user space may not be the process that submits the I/O to the block subsystem. Many filesystems use worker threads to perform actual submission; that breaks the link with the process that originally created the I/O operation. Filesystems, too, may wish to adjust caching policy; giving metadata a higher priority for the cache than data is one obvious possibility. As it happens, there is a way for filesystems to adjust the I/O priority value on individual requests, but it is not the most elegant of APIs.
For these reasons, some developers have suggested that the caching policy should be set on a per-file basis with a system call like fadvise() rather than on a per-process basis. Even better, as Jens Axboe noted, would be to add a mechanism by which processes could provide hints on a per-operation basis. The approach used in the non-blocking buffered read proposal might be applicable for that type of use.
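The existing posix_fadvise() interface shows what per-file advice looks like today; a caching-policy extension would presumably follow the same shape. The call below is real and available now, but the notion that it would influence a hybrid drive's cache is, at this point, hypothetical:

    #include <fcntl.h>

    /* Real, existing call: advise the kernel that data in this file will
     * be accessed only once. A hybrid-drive cache-policy interface could
     * plausibly take the same form, with new advice values. */
    int hint_no_reuse(int fd)
    {
        return posix_fadvise(fd, 0, 0, POSIX_FADV_NOREUSE);
    }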
There is another problem with this patch set, though: the types of "advice" that can be provided are tied tightly to the specifics of how the current generation of hybrid drives operates. It offers low-level control over a single level of cache and not much else. Future drives may operate in different ways that do not correspond well to the above-described operations. Beyond that, hybrid drives are not the only place where this kind of advice can be provided; it can also be useful over NFS 4.2, with persistent memory devices, and with the upcoming T10/T13 "logical block markup descriptors." There is a strong desire to avoid merging a solution that works with one type of current technology, but that will lack relevance with other technologies.
Martin Petersen has put some time into trying to find an optimal way to provide advice to storage devices in general. His approach is to avoid specific instructions ("evict this data from the cache") in favor of a description of why the I/O is being performed; he summarized his results in a table.

That table consists of a set of I/O classes, along with the performance implications of each class. There is a "transaction" class with stringent completion-time and latency requirements and a high likelihood that the data will be accessed again in the near future. The "streaming" class also wants fast command completion, but the chances of needing the data again soon are quite low. Other classes include "metadata" (which is like transactions but with a lower likelihood of needing the data again), "paging," "data," and "background" (which has low urgency and no need for caching).
Given an (unspecified) API that uses these I/O classes, the low-level driver code can map the class of any specific I/O operation onto the proper advice for the hardware. That mapping might be a bit trickier than one might imagine, though, as the hardware gets more complex. There is also the problem of consistency across devices; if drivers interpret the classes differently, the result could be visible performance differences that create unhappy users.
These issues will need to be worked out, though, if Linux systems are to drive hybrid devices in anything other than the default, device-managed mode. Given a suitable kernel and user-space API, the class-based approach looks like it should be flexible enough to get the most out of near-future hardware. Getting there, though, means a trip back to the drawing board for the authors of the current hybrid-drive patches.
The O_TMPFILE flag has been discussed a few times in these pages; the abrupt nature of its addition meant that it had little review and a fair number of post-merge problems. The concept behind this flag is simple enough: it requests the creation of a file with no associated directory entry. It is thus meant for temporary files that will not be opened by any other process.
Eric Rannaud recently asked a question: what should happen when a process makes a call like the following?
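    fd = open("/tmp", O_TMPFILE | O_RDWR, 0);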
The flags request the creation of a writable temporary file, but the third argument (the file mode) says that there should be no access (read or write) allowed. As it happens, POSIX is clear enough about this situation when a file is created with ordinary O_CREAT: the provided mode only applies after the creation of the file. So, while a process can create a file that it cannot itself access in general, it can still get a working file descriptor in the act of creation itself.
As it happens, though, file creation with O_TMPFILE does not work that way; the file mode is applied from the beginning, so the open() call listed above will fail. This behavior was widely recognized to be a bug, and Eric's fix was merged for the 3.18-rc3 release. But there are a couple of interesting side notes that are worth looking at.
One is that this call:
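    fd = open("/tmp", O_TMPFILE | O_RDONLY, 0666);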
will still fail. When the O_TMPFILE feature was implemented, it seemed that there was no use case for a temporary file that could not be written to, so this case (O_TMPFILE with O_RDONLY) was explicitly forbidden.
But it turns out that there is a use case for this type of file: atomically creating an empty file with a specific set of extended attributes. The open() call would be followed by one or more fsetxattr() calls; once everything is in place, linkat() can be used to make the file visible in the filesystem. Linus initially agreed that this use case should be supported, but later changed his mind. So read-only O_TMPFILE files will remain unsupported.
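For reference, the writable variant of that pattern (which remains supported) looks roughly like this; the sketch follows the approach documented in the open(2) man page, with hypothetical paths and a placeholder for the fsetxattr() calls:

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Create an invisible file, set it up, then link it into place. */
    int create_atomically(const char *dir, const char *final_path)
    {
        char proc_path[64];
        int fd = open(dir, O_TMPFILE | O_RDWR, 0644);

        if (fd < 0)
            return -1;
        /* ... write data, set attributes with fsetxattr(), etc. ... */
        snprintf(proc_path, sizeof(proc_path), "/proc/self/fd/%d", fd);
        if (linkat(AT_FDCWD, proc_path, AT_FDCWD, final_path,
                   AT_SYMLINK_FOLLOW) < 0) {
            close(fd);
            return -1;
        }
        close(fd);
        return 0;
    }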
Amusingly, the original bug was discovered while digging into a related glibc bug. It seems that, when O_TMPFILE is used, the mode argument isn't passed into the kernel at all. In the case of open() on x86-64 machines, things work out of sheer luck: the mode argument just happens to be sitting in the right register when glibc makes the call into the kernel. Things do not work as well with openat(), though, with the result that, in current glibc installations, O_TMPFILE cannot be used with openat() at all. The bug is well understood and should be fixed soon.
When a developer makes a call to openat(), they will normally expect that the file being opened or created will be located in the specified directory. As is often the case, though, surprises lurk for the unwary. Trouble can come from a surprising symbolic link or deliberately malicious input; either way, it can lead to files being created or opened where they should not be.
David Drysdale has a solution in the form of the O_BENEATH flag for openat(). If this flag is included in the call, the file being accessed must exist in or below the directory provided. The enforcement of this rule is simple enough: the provided path is constrained to not start with "/" or contain "../". Any symbolic links traversed while resolving the path must meet the same conditions.
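A sketch of how a sandbox might use the flag follows. O_BENEATH comes from the proposed patch and is not in mainline kernels, so the flag value below is a placeholder, and the directory file descriptor is assumed to have been handed to the sandboxed process by its parent:

    #define _GNU_SOURCE
    #include <fcntl.h>

    #ifndef O_BENEATH
    #define O_BENEATH 040000000   /* placeholder, not the proposed value */
    #endif

    int open_in_sandbox(int dir_fd)
    {
        /* Succeeds only if the path stays below dir_fd: no leading "/",
         * no "..", and no symlink that escapes the directory. */
        return openat(dir_fd, "logs/output.txt",
                      O_CREAT | O_WRONLY | O_BENEATH, 0600);
    }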
This feature was implemented as part of the filesystem access restrictions found in the Capsicum patch set. It turns out that there are other potential users as well, though. In particular, when combined with a secure computing ("seccomp") filter, O_BENEATH can be used to safely give a sandboxed process a directory to create files in.
The initial review concerns raised against this patch have been addressed in the current version. It is a relatively simple and non-invasive patch, so there is a reasonable chance that we'll see it enter the mainline during a near-future merge window.
In short, kdbus is a mechanism by which processes can find each other and exchange messages. It is meant to facilitate certain kinds of interprocess communications in a way that is both secure and reasonably fast. For those wanting details, this document covers kdbus functionality in a fairly thorough way.
For those not wanting to read an 1800-line file, here's a brief summary. When kdbus starts up, it creates a set of device nodes under /dev/kdbus; any actions involving kdbus require opening one or more of those nodes. A "bus" is essentially a namespace within which processes can communicate with each other. A fairly normal default configuration involves a single "system" bus for communicating with privileged services, and one "user" bus for each logged-in user. The user bus would be used, for example, to allow the processes implementing the user's desktop environment to talk to each other.
While there is a single bus namespace at boot time, things need not remain
that way. A set of buses exists within a kdbus "domain"; domains are
organized into a hierarchy. So, for example, a container-management system
would create a new domain for each container, then use a bind mount to make
the appropriate subtree of /dev/kdbus available within the
container. Thereafter, processes within the container can communicate with
each other without having any access to communications outside of the
container. There is currently no provision for using kdbus to communicate
between containers.
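In concrete terms, a container manager might do something like the following sketch; the domain and container paths are invented for illustration:

    #include <stdio.h>
    #include <sys/mount.h>

    /* Make one kdbus domain's device subtree visible in a container. */
    int expose_kdbus_domain(void)
    {
        if (mount("/dev/kdbus/container42",           /* the new domain   */
                  "/containers/42/rootfs/dev/kdbus",  /* container's view */
                  NULL, MS_BIND, NULL) < 0) {
            perror("bind-mounting kdbus domain");
            return -1;
        }
        return 0;
    }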
Messages are, at their simplest, a set of bytes with no interpretation by
the kernel at all. Messages can pass file descriptors between
processes; the passing of sealed files and
memfds is also supported. Message recipients can specify a set of
sender credentials that must be supplied with a message for policy
checking; those credentials are attached to the message by the kernel.
There is also a built-in policy mechanism describing which processes can adopt
"well-known names" and which processes can communicate with which others.
Kdbus is intended to be fast with both large and small messages. For the
largest of messages, zero-copy transfer between processes is supported.
Experience has shown, though, that a message must be about 512KB or larger
before page-mapping tricks become cheaper than just copying the data.
There is support for broadcast messages, along with a mechanism based on Bloom filters for filtering out unwanted broadcasts without waking up uninterested recipients.
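The idea behind that filtering is easy to see in miniature. This toy Bloom filter is purely illustrative and unrelated to kdbus's actual implementation: the sender sets bits derived from a message's properties, and a recipient whose registered interest bits are not all present can be skipped without a wakeup:

    #include <stdbool.h>
    #include <stdint.h>

    #define BLOOM_BITS 512

    struct bloom { uint64_t words[BLOOM_BITS / 64]; };

    /* FNV-1a, salted to give independent hash functions. */
    static uint64_t hash(const char *s, uint64_t salt)
    {
        uint64_t h = 14695981039346656037ULL ^ salt;
        while (*s) {
            h ^= (unsigned char)*s++;
            h *= 1099511628211ULL;
        }
        return h;
    }

    static void bloom_add(struct bloom *b, const char *key)
    {
        for (uint64_t salt = 0; salt < 2; salt++) {
            uint64_t bit = hash(key, salt) % BLOOM_BITS;
            b->words[bit / 64] |= 1ULL << (bit % 64);
        }
    }

    /* False means "definitely no match": the recipient need not wake. */
    static bool bloom_maybe_match(const struct bloom *msg,
                                  const struct bloom *interest)
    {
        for (unsigned int i = 0; i < BLOOM_BITS / 64; i++)
            if ((msg->words[i] & interest->words[i]) != interest->words[i])
                return false;
        return true;
    }

False positives are possible, so a recipient that is woken must still check the message properly; false negatives are not, which is what makes skipping the wakeup safe.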
In general, kdbus is meant to be a replacement for D-Bus that addresses the
various issues that have come up with the latter over time. The goal is
not to be the ultimate messaging system for all possible applications.
While the kdbus developers are open to the idea of adding more
functionality in the future, they are trying to keep a lid on the
complexity at this stage.
Reviews
Given the (systemd-ish) origins of the kdbus code, one might well have
expected the
discussion to be somewhat hostile at times. In truth, while there have
been concerns expressed, the discussion has remained mostly friendly and
entirely technical. Developers are taking a deep look at the code and
discussing how it can be improved; one cannot say that kdbus is not getting
a fair hearing.
One of the initial questions was, inevitably, why does this functionality
need to be in the kernel in the first place? The kernel already provides a
number of interprocess communication primitives, and tools like D-Bus have
successfully used them for many years. See this message from Greg for a detailed answer.
In short, it comes down to performance (fewer context switches to send a
message), security (the kernel can ensure that credentials passed with
messages are correct), race-free operation, the ability to use buses in
early boot, and more. There do seem to be legitimate reasons to want this
kind of functionality built into the kernel.
Credentials
The handling of credentials drew a couple of different criticisms; the
first was that credentials are checked when a message is sent — not when
the connection to the bus is first created. Eric Biederman raised concerns that failure to capture credentials
at open() time could lead to exploitable vulnerabilities. He did
not actually point out any such vulnerabilities, though, and, in the past,
such vulnerabilities have tended to be associated with later
read() and write() calls. Since kdbus does not support
either call on any of its file descriptors, that kind of vulnerability
should not be an issue here. Still, there is some discomfort among the
more security-oriented reviewers that the late capture of credentials is
asking for trouble.
Another problem, raised by Andy Lutomirski, is that checking credentials at message-sending time makes privilege-separation architectures impossible. In such a design, a process opens a connection to a sensitive service while it still holds the needed privilege, then drops that privilege while continuing to use the connection; if the privilege is checked every time a message is sent, the ability to
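The pattern at stake is easiest to see with an ordinary file; this sketch (with an invented log path) acquires a resource while privileged, drops privilege, and keeps using the open descriptor:

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* Opening this file requires privilege; run as root. */
        int fd = open("/var/log/private.log", O_WRONLY | O_APPEND);
        if (fd < 0) {
            perror("open");
            return 1;
        }

        /* Permanently drop to an unprivileged user and group. */
        if (setresgid(65534, 65534, 65534) != 0 ||
            setresuid(65534, 65534, 65534) != 0) {
            perror("dropping privileges");
            return 1;
        }

        /* The descriptor still works: access was checked at open() time. */
        dprintf(fd, "still logging without privilege\n");
        close(fd);
        return 0;
    }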
drop privileges in this way is lost. Kdbus developer Daniel Mack responded that, in the D-Bus world (which carries over into the kdbus design), there is no concept of "opening a connection" to a service like journald. Instead, one connects to a bus and sends messages to services; each message has to stand on its own.
This particular disagreement reflects a fundamental difference in how
developers see kdbus being used. It does not look like an easy one to
resolve without some significant design changes on the kdbus side; any such
changes would move it away from the D-Bus model and are likely to encounter
resistance from the kdbus developers.
A related issue, also raised by Andy, is
that the recipient of a message specifies which credential information
should accompany that message. This information can include user and group
IDs, process command line, control group information, capabilities,
security labels, the audit session information, and more. The sender of a
message has no control over whether this information is sent. Andy thinks
that sending this information will lead to information leaks and security
problems.
Instead, Andy said, the sending process should explicitly specify which credential information should accompany a message, and security-related requests should explicitly document what credentials are required: "Otherwise it becomes unclear what things convey privilege when, and that will lead immediately to incomprehensible security models, and that will lead to exploits." The response from kdbus developer Tom Gundersen is that "by simply connecting to the bus and sending a message to some service, you implicitly agree to passing some metadata along to the service". That design allows the recipient to be sure that the necessary information will be supplied, even if the recipient's security model changes (requiring different information) in the future. Again, Andy disagrees, insisting that the provision of credentials should be a matter of negotiation between both sides.
Namespace and device issues
Both Eric and Andy also raised an entirely different set of concerns having
to do with the way the domain namespace works. The decision to attach
globally visible names to domains leads to some unfortunate consequences in their
view. The first (and smaller) of these is that the existence of a
namespace forces kdbus domains into a hierarchical structure, even though
there is nothing that is actually hierarchical about them. Each domain is
an independent entity with no particular relation to its parent domain
outside of the naming scheme.
The real problem, though, is that a global namespace implies the need for
some sort of control to keep malicious processes from polluting that
namespace. That, in turn, means that creating a kdbus domain is a
privileged operation. Quite a bit of work has gone into allowing
unprivileged users to create containers. But if a new
container cannot be given a kdbus domain without privilege, that model
breaks down. Lennart Poettering acknowledged this concern in an apparently private email that Andy then responded to publicly; he said that allowing unprivileged domain creation should be possible, as long as the checks for namespace collisions remain in place.
Andy's reply there was that none of the other container-oriented primitives
have global names, and that there is a reason for that: avoidance of just
this type of namespace collision possibility. Kdbus domains, he asserts,
would be better off without the globally visible names.
There would appear to be a couple of reasons why these names exist. One
would be to make it easy for a privileged process to tap into any domain
and watch traffic for debugging purposes. That particular need could
probably be met by way of a domain pointer in each process's /proc
area.
The bigger problem relates to another fundamental kdbus design decision: to
base the whole thing around device nodes found in /dev. If there
are kdbus devices for multiple domains in /dev, they must be
organized into that directory's hierarchical namespace. Such a namespace
is essentially unavoidable if the device nodes are to be available to (and,
importantly, locatable by) processes in the system. For this reason, a
couple of reviewers have said that the device abstraction is a mistake.
Rather than implementing kdbus operations as a set of ioctl()
calls on a device, perhaps kdbus should have a set of dedicated system
calls that would eliminate the need for the device nodes altogether. That
would also eliminate the need for a global kdbus domain namespace.
Eric expressed a related concern: the use
of device nodes implies the existence of dynamically allocated device
numbers. That will interfere with the checkpointing and restoring of
containers, since there is no way to guarantee that the same device numbers
will be available when the container is restored. That breaks a use case
that works with D-Bus today, so Eric has described it as a regression.
Going forward
From one perspective, the response on the mailing list should be
encouraging for the kdbus developers. While the obligatory "why do this in
the kernel?" questions were asked, there does not appear to be much
fundamental opposition to putting this kind of functionality into the
kernel. That suggests that, sooner or later, the kernel will have an
answer for users who have asked for a native messaging solution.
The form of that solution remains up in the air, though. Kdbus will
clearly have to change to address the review comments that have been
posted (and those yet to come); how radical that change needs to be remains
to be seen. It could be that, as Alan Cox put it, "it would be far more constructive to treat the current kdbus as a proof of concept/prototype or even a draft requirements specification". Or perhaps the concerns that have been raised can be addressed with a simpler set of changes.
Either way, it does not look like the long-playing kdbus story will come to
a close anytime soon. That may be frustrating for those who are waiting
for this functionality to become available in a mainline kernel. But this
process can only be hurried so much if the end result is to be a solution
that will stand the test of time. Once kdbus goes into the kernel it will
become much harder to change, so it is worth taking the time to get the
interface (and its semantics) right first.
Patches and updates
Kernel trees
Architecture-specific
Build system
Core kernel code
Development tools
Device drivers
Device driver infrastructure
Documentation
Filesystems and block I/O
Memory management
Networking
Security-related
Miscellaneous
Page editor: Jonathan Corbet
Distributions
Chromium and distributions
In 2009, Tom "spot" Callaway famously listed a number of reasons why Chromium was not officially packaged for Fedora. Five years later, that post was the impetus for a talk by Google's Paweł Hajdan, Jr. at LinuxCon Europe in Düsseldorf, Germany. Things have gotten far better since Callaway's post, but there is still more that both Google and the community can do to improve the situation.
The Chrome browser (with its open-source counterpart: Chromium) was announced in 2008 and was met with "excitement in the Linux community", Hajdan said. The "main goal" of his talk "is to connect the two communities"—Chrome/Chromium and Linux. There is "some room for improvement" by both the project and the distributions. Providing the best solution for users is "going to require the cooperation of both communities".
There are roughly 150 open-source packages that go into Chromium, so the project has a "really strong foundation" of openness, he said. Unlike Android, all commits to Chromium go into an open repository. There are also project committers who are not Google employees. Chromium has been open since the first public release, and it is getting more open over time.
Complaints
Callaway's 2009 post is "the basis of my talk", Hajdan said. That
article still comes up as one of the first results when searching for
information on Chromium and Fedora. Hajdan said that he agrees with many parts
of that post, but that quite a few things have changed since 2009.
Back then, Callaway complained that Chromium was not stable. There were no releases of Chromium in those days, Hajdan said; the first was made in May 2010. Now there are regular releases. In 2009, code was pulled directly from the Subversion repository, but since 2011 there have been tarball releases—though, at roughly 200MB, they are rather large.
Beyond that, patches from packagers were not getting merged into the Chromium base, so it might have taken up to two weeks for a packager to make a new release for a distribution. These days, patches from packagers are being merged, so it should only take half an hour or so to package the browser up. Distributions are aiming for fewer patches to apply, but there are some 10 million lines of code in Chromium, so there will likely always be some patches, he said.
The other main complaint from 2009 was about "forked libraries". Chromium uses a lot of third-party libraries, which is not uncommon, but Linux programs tend to assume that packagers will handle any library problems. Chromium is a cross-platform program, though, and there is "no concept of system libraries" on Windows and Mac OS X. In the end, Linux users are a minority, and the libraries have to be in the Chromium tree anyway to support the other operating systems, so Linux was also handled that way.
But the bundled libraries are not placed haphazardly into the Chromium tree. They all live in a separate "third_party" directory where it is easy to find them. In addition, there is metadata attached to each of the libraries to make it easier to determine their origin, license, and so on. To "some degree", there is truth to the statement that Chromium has a lot of bundled libraries, Hajdan said, but partly that perception comes from the fact that the libraries are "more visible than in other projects" because they are all in one place.
According to Callaway, system libraries are not supported by the Chromium upstream, but that is something of a misreading of the situation. Chromium is big and multi-platform; it does not foist problems off on users simply because they are found in some third-party library. "If it affects users, we can and do fix it", he said.
Hajdan noted that, as with other big projects and big companies, there are a variety of opinions on any subject, including library handling for Chromium—both inside Google and the project. He is presenting his, which may well not be the official policy. The various rebuttals that Callaway had heard for his complaints and described in his post are also just opinions. But some of them do have some merit, Hajdan said.
For example, the Chromium project does move fast, which makes it difficult to work with upstreams. Some 200 patches per day land in the repository. He is on the infrastructure team that is supporting that project; "we have to work hard not to crumble under our own weight", he said. But, since the project is moving so fast, it actually tries to minimize the differences it has with upstreams. In general, working better with upstreams is a goal for the project.
Specific libraries
There is also the need to use specific versions of libraries, but that could be handled better in some cases. He gave the "reverse talk" at Google at one point, explaining to people inside the company how the open-source community works. It turned out that many inside the company did not know that a package can request that the package manager provide a specific version of a library. But there are times when "deep changes" to a library are necessary. Typically, that is done to support the Chromium sandbox that prevents the rendering processes from making system calls that they shouldn't.
He pointed to Gentoo stable as evidence that newer libraries can work just fine for Chromium. The distribution has newer versions of some important libraries (e.g. jinja, libevent, libpng, libxml, and zlib) and the browser "still works". Distributions like Gentoo are helping the Chromium team by doing a lot of testing with these newer libraries; he would like to see the project give recognition to Gentoo and others for that work.
Some notable libraries that are bundled with Chromium include FFmpeg, Hunspell, Mesa, and SQLite. FFmpeg is used to handle HTML5 audio and video for the browser. The Chromium project does work with the FFmpeg upstream and newer versions of FFmpeg "technically could work" for the browser.
On the other hand, Chromium uses an incompatible fork of Hunspell that has a different dictionary format. Some of the changes have made their way upstream, but the Hunspell project is sometimes slow to respond to patches. It would also help if Hunspell would remove some unnecessary obstacles to working with others, he said, including moving away from using CVS as its source code management tool.
The Mesa library is a fork due to some type mismatches for 32-bit systems. Upstream Mesa can be made to work for Chromium on 64-bit systems, he said. At one time, Chromium was using a patched version of the GLEW OpenGL wrapper library, but it doesn't support some features that are needed (e.g. GLES2 and the sandbox).
The Chromium version of SQLite is also an incompatible fork, he said. SQLite is unlikely to make the changes Chromium would need; likewise, Chromium is probably not going to change to use the upstream version.
There is also a handful of libraries included that are not designed to be reused. For example, JavaScript libraries like Polymer and JSZip are meant to be distributed with a project's source. There are libraries that do not provide a stable API, such as libyuv and Skia, which are also meant to be included. The blocking issue for some of these libraries is that the developers "don't have the cycles to do a stable API", he said. By not having to worry about compatibility, those projects can move faster.
There is another handful of libraries that still need more work before they can be unbundled from the Linux version of Chromium. But he listed 17 libraries that had been unbundled over the last few years, including those for bzip2, HarfBuzz, ICU, and Speex.
In fact, there is now a directory in the Chromium tree (build/linux/unbundle) that has tools and information to assist in unbundling libraries from the Chromium builds for Linux. It is currently being used by Arch Linux, Gentoo, and some of the BSDs, but he encourages other distributions to use it as well. The remove_bundled_libraries.py script takes a list of libraries from the third_party directory on the command line and removes them from the build. On Gentoo, there are more than 100 libraries listed for the standard build. In addition, there is a mechanism to generate shim header files that include either the third_party version of the header or the one from the system, which facilitates unbundling.
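The shim headers themselves are tiny; a hypothetical generated shim for zlib (not Chromium's actual output) would look something like:

    /* Choose the system zlib or the bundled copy at build time. */
    #if defined(USE_SYSTEM_ZLIB)
    #include <zlib.h>
    #else
    #include "third_party/zlib/zlib.h"
    #endif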
As usual, contributions are welcome, Hajdan said. There are many contributions from outside Google and he encouraged attendees to "jump into the code base and make it better". There is a contributor agreement, but it is not a copyright assignment; Google does not want it to be a barrier to contributions. So far, FreeBSD and OpenBSD have gotten changes for those systems upstream, as have MIPS developers for their architecture. As he said at the outset, work is needed from both sides to bring Chromium smoothly into the Linux ecosystem.
[ I would like to thank the Linux Foundation for travel assistance to Düsseldorf for LinuxCon Europe. ]
Brief items
Distribution quotes of the week
Every time this gets brought up someone says "oh, but we can't do anything, because someone might be sad about it and quit gentoo". Uhm, right. That's already happening, all the time.
Fedora 21 beta released
The Fedora 21 beta release is available for testing. "Every bug you uncover is a chance to improve the experience for millions of Fedora users worldwide. Together, we can make Fedora 21 a rock-solid distribution. We have a culture of coordinating new features and pushing fixes upstream as much as feasible and your feedback will help improve not only Fedora but Linux and free software on the whole."
OpenBSD 5.6 Released
OpenBSD 5.6 has been released. This is the first release to fully incorporate LibreSSL instead of OpenSSL. This release does not support some legacy platforms, antique compilers, FIPS-140 compliance, EBCDIC, big-endian i386 and amd64 platforms, and more. Some old drivers have been removed, and some new ones added. The announcement contains the details.
openSUSE 13.2 released
The openSUSE 13.2 release is now available. "This version presents the first step to adopt the new openSUSE design guidelines system-wide. The graphical revamp is noticeable everywhere: the installer, the bootloader, the boot sequence and all of the (seven!) supported desktops (KDE, GNOME, Xfce, LXDE, Enlightenment 19, Mate and Awesome). Even the experimental Plasma 5.1 is adapted to the overall experience." See the announcement for details on what's new in this release.
Distribution News
Debian GNU/Linux
REISSUED CfV: General Resolution: Init system coupling
The call for votes for the Debian init system GR is open until November 18. The details of the resolution can be found on its vote page.
Fedora
Fedora Council elections scheduled
Fedora Project Leader Matthew Miller has announced the election schedule meant to fill the two new "at large" slots on Fedora's upcoming Fedora Council governance body. "These positions are of strategic importance, with a full voice in the Council's consensus process. The primary function of the Council is to identify community goals and to organize and enable the project to achieve them." Nominations will be open from November 4 through 10; voting will be open from November 18 through 25. The week in between will be for campaigning. Miller also encourages potential candidates to consider the time commitment the new roles require. "We recognize that this level of commitment is difficult for many community members with full-time jobs not directly related to Fedora, and the intent is not to exclude those contributors. At the same time, these positions will require a meaningful commitment of time and responsiveness. If your other obligations make this impossible, please consider suggesting candidacy to other community members who you feel would be able to bring your voice to the table."
Fedora Council nominations for upcoming election now open
The nomination period for the first Fedora Council election is open until November 10. There are two seats open. "This election, we're encouraging nominees to run a (short, low-budget!) election campaign — have a platform, blog, tweet, etc. Hire your own PR team? Probably no need to go that far! At the end of the campaign period, we're also going to run email-based interviews with each candidate on Fedora Magazine." People are invited to add their interview questions for the candidates to the Elections/Questionnaire wiki page.
Red Hat Enterprise Linux
Red Hat Software Collections 1.2 is available
Red Hat has announced the release of Red Hat Software Collections 1.2. "The third installment of Red Hat Software Collections now includes vital open developer tools, such as GCC 4.9, Maven and Git, and, for the first time, makes the Eclipse IDE available on Red Hat Enterprise Linux 7. In addition, Red Hat is offering Dockerfiles for many of the most popular software collections, aiding in the rapid creation and deployment of container-based applications."
Other distributions
Newsletters and articles of interest
Distribution newsletters
- DistroWatch Weekly, Issue 583 (November 3)
- 5 things in Fedora this week (October 31)
- Ubuntu Weekly Newsletter, Issue 390 (November 2)
Mobile Linux Distros Keep on Morphing (Linux.com)
Linux.com looks at the distributions powering mobile devices, including Firefox OS, Tizen, Ubuntu, and WebOS. "At the Mozilla Festival held earlier this week in the U.K., Mozilla unveiled a PiFxOS version of Firefox OS for the Raspberry Pi, also dubbed Foxberry Pi, with promises to make it competitive with Raspbian Linux. It's currently a bleeding edge demoware build, but Mozilla appears to be serious about ramping it up, with an early focus on robotics hacking and media players. PiFxOS is based on a Firefox OS port to the Pi developed by Oleg Romashin and Philip Wagner, which seems to have stalled. Mozilla plans to beef it up with support for sensors, control motors, LEDs, solenoids, and other components, as well as build a modified version for drones. A longer term project is to develop a DOM/CSS platform for robots using "a declarative model of a reactive system.""
Page editor: Rebecca Sobol
Development
A look at Pitivi 0.94
The latest update to the GTK+-based free-software video-editing application Pitivi was released on November 2. Version 0.94 is the first release since the project's recent crowdfunding campaign, so many users will be interested to see what fruit the contributed funds have borne in the code. The answer, it seems, is that there are not many new user-visible features on display, but there are considerable changes under the hood—including a port to Python 3.
Pitivi developer Jeff Fortin Tam described the release in a blog post. Source code bundles are available on the downloads page, but there is an even easier option available for users interested in testing the new release. Self-contained binaries that bundle in the necessary libraries are also provided for 32-bit and 64-bit Intel systems. Such bundles may indeed be helpful for many users, since version 0.94 is built on top of GTK+ 3.10 and GStreamer 1.4, which may not be available yet in many distributions.
The shiny and new
[Pitivi main window]
The most noticeable changes in the new release are the updates to Pitivi's user interface (UI). A rather fundamental distinction from earlier versions is that the main application window now uses GTK+'s GtkHeaderBar—the new widget found in many recent GNOME applications, which merges the window titlebar with top-level toolbars and menubars, to form a single unit. The goal, of course, is for this combined widget to free up additional vertical space, and it is a welcome change for an application like Pitivi, where screen real estate is frequently a scarce commodity when working on a project.
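For those who have not seen the widget, a minimal GTK+ 3.10 example, unrelated to Pitivi's own code, shows the merging of titlebar and toolbar:

    /* Build with: gcc demo.c $(pkg-config --cflags --libs gtk+-3.0) */
    #include <gtk/gtk.h>

    int main(int argc, char **argv)
    {
        gtk_init(&argc, &argv);

        GtkWidget *window = gtk_window_new(GTK_WINDOW_TOPLEVEL);
        GtkWidget *header = gtk_header_bar_new();

        gtk_header_bar_set_title(GTK_HEADER_BAR(header), "Editor");
        gtk_header_bar_set_show_close_button(GTK_HEADER_BAR(header), TRUE);
        /* Toolbar-style controls live in the titlebar itself. */
        gtk_header_bar_pack_start(GTK_HEADER_BAR(header),
                                  gtk_button_new_with_label("Render"));

        gtk_window_set_titlebar(GTK_WINDOW(window), header);
        g_signal_connect(window, "destroy", G_CALLBACK(gtk_main_quit), NULL);
        gtk_widget_show_all(window);
        gtk_main();
        return 0;
    }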
More judicious use of screen space is a plus, but the other big UI improvement may solve a frequent annoyance. Many of Pitivi's window components (such as the timeline or clip library) can be undocked from the main window and turned into floating elements. In past releases, when starting Pitivi in this mode, the floating components would not retain their screen position between sessions, forcing the user to rearrange them each time. That has now been fixed. Fortin Tam noted, though, that a bug prevents the restoration of docked components' positions if Pitivi is maximized.
A more functional UI improvement is the reworked interface for the title editor (the tool used to add text overlays to a video). The title editor now has a pop-up font selector, easier foreground and background color choosers, and simple positioning controls to set text placement on a video. And there are several other UI updates to be found. For instance, the animations used in the Timeline component have been tweaked, the default screen positions of several components have been rearranged, and various UI elements have been redesigned to use fewer colors—in an effort to make them less distracting.
[Pitivi title editor]
As far as functional additions go, there are not many to be found in 0.94. The main one—which is important—is that the effects filters applied to a video clip can be dragged and rearranged. Since reordering the effects can affect their output, this spares the user from deleting them and starting over in order to make a change.
There are, of course, other fixes that users will appreciate. Undo and Redo were unreliable in the past, but have now been refactored and are reportedly solid and dependable. The video mixer was fixed to be thread-safe, which should mean significantly fewer lock-ups. Last—but certainly not least—all of the documentation was updated to reflect changes in 0.94.
Under the hood
Most of the significant changes to Pitivi in this release, though, are beneath the surface. Several problems with GObject Introspection in the gst-python package were found and fixed. The video viewer element had been based on Clutter in prior releases, but on desktops other than GNOME Shell, it was triggering frequent crashes, so in 0.94 the viewer was rewritten to use the new OpenGL video sink in GStreamer instead. There were also many users encountering broken packages for the external CoGL library, so the Pitivi team dropped CoGL in favor of more reliable options. Several deprecated GTK+ widgets were removed, and other elements ported to new widgets. And so forth.
Another one of the attention-grabbing changes is that Pitivi 0.94 has been ported to Python 3. Obviously Python 3 adoption remains a topic that can stir up plenty of emotion and debate. But it is worth remembering that many of the criticisms of Python 3 focus on its text handling; whether or not those concerns have been resolved (itself a frequent point of disagreement), text handling is not necessarily a major issue for Pitivi. The UI and application architecture are Python-based, of course, but the significant functionality comes from GStreamer and various GStreamer-based libraries.
It is also not clear that Python scripting really is, in practice, important to Pitivi users. Python serves as a plugin language, but there is not much in the way of a plugin-development community outside of the core development team. Presumably, the Pitivi project would like for an active Python-scripting community to develop (as most projects would), but as a practical matter, there is not much Python 2–to–Python 3 porting to worry about.
Regarding those GStreamer-based underpinnings, in early October Fortin Tam posted an update on Pitivi's development in the wake of its crowdfunding campaign. The campaign raised more than €19,000, which was below the target but is still sufficient to fund some paid time for developers Thibault Saunier and Mathieu Duponchelle.
The plan is that Saunier and Duponchelle will use the current development cycle (leading up to Pitivi 0.95) to replace the GNonLin library with a new non-linear editing layer that they call NLE. GNonLin is an abstraction layer that sits between GStreamer and the higher-level GStreamer Editing Services (GES) library on which most of Pitivi's editing functionality is based. GNonLin, Fortin Tam says, was the source of numerous deadlocks and freezes.
GNonLin has certainly had its critics in the past. The OpenShot team criticized it a few years ago before jumping ship for the MLT library. Pitivi is the last major project to stick with GNonLin, so the time to shut down the library in favor of a replacement may have finally come.
However, this also highlights one of the uncomfortable realities of open-source video editing. There are a lot of competing editor projects out there (Pitivi, Kdenlive, OpenShot, LiVES), and others seem to come and go frequently (Lombard, Kino, Jahshaka, etc.). From the outside, they all seem to reach (roughly speaking) feature parity, with basic editing, transitions, and effects. But at that point, development all too often seems to pause while the teams spend a significant amount of their time replacing and/or rewriting the underlying media-handling stack.
From the user's perspective, this can be quite frustrating. Hopefully Pitivi, with some success at funding further development, can avoid such pitfalls. In the meantime, version 0.94 may not sport a long list of new features, but an improved user interface and better stability are welcome additions.
Brief items
Quotes of the week
Apple has a big garden.
Introducing Dynomite - Making non-distributed databases, distributed
The Netflix Tech Blog has posted an introduction to Dynomite, a database distribution system. "In the age of high scalability and big data, Dynomite’s design goal is to turn those single-server datastore solutions into peer-to-peer, linearly scalable, clustered systems while still preserving the native client/server protocols of the datastores, e.g., Redis protocol." Dynomite is available under the Apache license.
adns 1.5.0 available
Version 1.5.0 of the adns DNS resolver library has been released. Notably, this update adds full IPv6 support, along with several new functions for converting between addresses and address literals.
Newsletters and articles
Development newsletters from the past week
- What's cooking in git.git (October 31)
- Haskell Weekly News (October 29)
- LLVM Weekly (November 3)
- OCaml Weekly News (November 4)
- Perl Weekly (November 3)
- PostgreSQL Weekly News (November 2)
- Python Weekly (October 30)
- Ruby Weekly (October 30)
- Tor Weekly News (November 5)
- Wikimedia Tech News (November 3)
KVM Matures, and the Use Cases Multiply (Linux.com)
Over at Linux.com, Adam Jollans has a report from the recently completed KVM Forum that was held in Düsseldorf, Germany October 14-16. He looks at a talk that he gave on KVM's relationship to OpenStack and the open cloud, a new white paper on KVM [PDF], and a panel on network function virtualization (NFV): "In the past, communications networks have been built with specific routers, switches and hubs with the configuration of all the components being manual and complex. The idea now is to take that network function, put it into software running on standard hardware. The discussion touched on the demands – in terms of latency, throughput, and packet jitter – that network function virtualization places on KVM when it is being run on general purpose hardware and used to support high data volume. There was a lively discussion about how to get fast communication between the virtual machines as well as issues such as performance and sharing memory, as attendees drilled down into how KVM could be applied in new ways."
Page editor: Nathan Willis
Announcements
Brief items
Videos from the GNU Tools Cauldron
The GNU Tools Cauldron, a conference on the low-level toolchain (GCC, glibc, GDB, etc.), was held last July. There is now a full set of videos from the event available for your viewing pleasure. Anybody with an interest in this area is advised to have a fair amount of time available before visiting that page; there are quite a few interesting topics in the list.
Articles of interest
Free Software Supporter - Issue 79, October 2014
This edition of the Free Software Foundation's monthly newsletter covers nominations for the Free Software Awards, Matthew Garrett joining the board of directors, the Licensing and Compliance Lab's interview with Jessica Tallon of PyPump, LibrePlanet 2015, Munich sticking with free software, and much more.
AdaCamp Berlin report
The Ada Initiative has released a report on the recent AdaCamp in Berlin. "AdaCampers reported learning a variety of new skills including but not limited to the usage of crypto tools, privacy, approaches to feminism, how to contribute to open source, how to better organize events, creating safer spaces, making events inclusive, fan culture, security and what one AdaCamper described as "A deeper understanding of why security is particularly important for women.""
Yocto training materials published
Free Electrons has announced that Yocto Project and OpenEmbedded training materials have been published under the Creative Commons Attribution Share-Alike license. They are available in PDF format and the LaTeX source is also available.
New Books
How Linux Works, 2nd Edition -- New from No Starch Press
No Starch Press has released "How Linux Works, 2nd Edition" by Brian Ward.
The Book of CSS3, 2nd Edition -- New from No Starch Press
No Starch Press has released "The Book of CSS3, 2nd Edition" by Peter Gasston.
Libre Calendar 2015
LILA, a French non-profit association that seeks to promote free art and free software, is putting together a print calendar with photographs, drawings and 3D renders done fully with Free Software. "If we can sell enough to get any profit (since printshop's price per calendar goes down as we print more), most of it will go to the artists, as well as donated to a selection of graphics Free Software (GIMP, Blender, etc.) used to make the calendar. The association LILA gets a eighth (that we will use for our projects of film animations under Free licenses as well)."
Education and Certification
Announcement of LibreOffice Certification
The Document Foundation has announced Certification for LibreOffice Migrations and LibreOffice Training Professionals, open to TDF Members until April 2015 and then to all free software advocates. "'LibreOffice Certification is an absolute first for a community based project, and has been developed adapting existing best practices to the different reality of the TDF ecosystem', says Italo Vignoli, Chairman of TDF Certification Committee. 'We want to recognize the skills of free software advocates who are able to provide value added services to large organizations deploying LibreOffice. Once certified, they will be recognized as LibreOffice experts and ambassadors'."
Calls for Presentations
CFP: SciPy India 2014
SciPy India will take place December 5-7 in Bombay, India. The call for presentations closes November 7.
FOSDEM 2015 Desktops DevRoom Call for Talks
FOSDEM (Free and Open Source Developers European Meeting) will take place January 31-February 1 in Brussels, Belgium. The Desktops DevRoom will take place on February 1. The call for proposals for talks about free/open-source desktops closes December 7.
CFP Deadlines: November 6, 2014 to January 5, 2015
The following listing of CFP deadlines is taken from the LWN.net CFP Calendar.
Deadline | Event Dates | Event | Location
---|---|---|---
November 7 | December 5–7 | SciPy India | Bombay, India
November 9 | March 21–22 | LibrePlanet 2015 | Cambridge, MA, USA
November 30 | January 13 | Linux.Conf.Au 2015 Systems Administration Miniconf | Auckland, New Zealand
December 1 | February 6–8 | DevConf.cz | Brno, Czech Republic
December 1 | March 11–12 | Vault Linux Storage and Filesystems Conference | Boston, MA, USA
December 7 | January 31–February 1 | FOSDEM'15 Distribution Devroom/Miniconf | Brussels, Belgium
December 8 | February 18–20 | Linux Foundation Collaboration Summit | Santa Rosa, CA, USA
December 10 | February 19–22 | Southern California Linux Expo | Los Angeles, CA, USA
December 14 | January 12 | LCA Kernel miniconf | Auckland, New Zealand
December 17 | March 25–27 | PGConf US 2015 | New York City, NY, USA
December 21 | January 10–11 | NZ2015 mini-DebConf | Auckland, New Zealand
December 21 | January 12 | LCA2015 Debian Miniconf | Auckland, New Zealand
December 23 | March 13–15 | FOSSASIA | Singapore
December 31 | March 17–19 | OpenPOWER Summit | San Jose, CA, USA
January 1 | March 21–22 | Kansas Linux Fest | Lawrence, Kansas, USA
January 2 | May 21–22 | ScilabTEC 2015 | Paris, France
If the CFP deadline for your event does not appear here, please tell us about it.
Upcoming Events
LCA2015 Keynote Speaker - Professor Eben Moglen
Professor Eben Moglen, Executive Director of the Software Freedom Law Center and professor of Law and Legal History at Columbia University Law School, will be a keynote speaker at linux.conf.au.
Events: November 6, 2014 to January 5, 2015
The following event listing is taken from the LWN.net Calendar.
Date(s) | Event | Location
---|---|---
November 3–7 | OpenStack Summit | Paris, France
November 4–7 | Open Source Developers' Conference 2014 | Gold Coast, Australia
November 6–9 | mini-DebConf | Cambridge, UK
November 7–9 | Jesień Linuksowa | Szczyrk, Poland
November 8 | Open Source Days | Copenhagen, Denmark
November 9–14 | Large Installation System Administration | Seattle, WA, USA
November 10–14 | 21st Annual Tcl/Tk Conference | Portland, Oregon, USA
November 11 | Korea Linux Forum | Seoul, South Korea
November 13 | Hackaday Munich | Munich, Germany
November 16–21 | Supercomputing 14 | New Orleans, LA, USA
November 17–21 | ApacheCon Europe | Budapest, Hungary
November 18–20 | Open Source Monitoring Conference | Nuremberg, Germany
November 19–21 | CloudStack Collaboration Conference Europe | Budapest, Hungary
November 21–23 | Debian Bug Squashing Party in Munich | Munich, Germany
November 22–23 | AdaCamp Bangalore | Bangalore, India
November 25 | New Directions in Operating Systems | London, UK
November 29–30 | OpenPhoenux Hard and Software Workshop | Munich, Germany
December 5–7 | SciPy India | Bombay, India
December 27–30 | 31st Chaos Communication Congress | Hamburg, Germany
If your event does not appear here, please tell us about it.
Page editor: Rebecca Sobol