
LWN.net Weekly Edition for June 19, 2014

Running Firefox OS apps on Android

By Nathan Willis
June 18, 2014

On June 12, the "Mozilla Hacks" blog posted a story explaining how to install and run HTML5 "web apps" on Android. Packaged, locally installed web apps are the cornerstone of Firefox OS's user experience, of course, and are an option on several other free mobile operating system platforms like Tizen and Ubuntu Touch. But they have not played a significant role in the Android story, so the opportunity to bring them to the most popular Linux-based mobile ecosystem is potentially big news for developers as well as users.

Web apps require a compatible web runtime, of course—one that implements the APIs for accessing the necessary device features. Firefox OS uses Mozilla's Gecko as its web runtime, and it reuses Android's kernel and hardware abstraction layer (HAL), so it should perhaps come as no surprise that relatively little work is required to get Firefox to function as a web runtime within Android itself. Indeed, that is what Mozilla has now done: the recently-released version 29 of Firefox for Android can serve as the web runtime for any web app packaged for Firefox OS and submitted to Mozilla's Firefox Marketplace.

Normally, web apps built for Firefox OS are packaged as .zip files. In conjunction with the new release of Firefox for Android, the project has also built a web service called APK Factory that converts the packages to Android's .apk package format. APK Factory can be installed and used locally, although the developer will need to sign the resulting .apk packages with his or her own key in order for them to be published through Google's Play Store. The web apps submitted to Firefox Marketplace have been converted to .apk form by Mozilla, and signed with Mozilla's key, so they, too, can be installed just like any native Android app.
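A packaged Firefox OS app is, in essence, a zip of HTML, CSS, and JavaScript plus a manifest describing the app. As a rough illustration, a minimal manifest.webapp might look something like this (the app name, paths, and values here are hypothetical; Mozilla's app manifest documentation has the authoritative field list):

```json
{
  "name": "Example Notes",
  "description": "A hypothetical note-taking web app",
  "launch_path": "/index.html",
  "icons": {
    "128": "/img/icon-128.png"
  },
  "developer": {
    "name": "Example Developer",
    "url": "http://example.com"
  },
  "default_locale": "en"
}
```

It is this zip-plus-manifest bundle that APK Factory wraps into an installable .apk.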

To install a Firefox OS web app on an Android phone or tablet, one needs only to visit the Firefox Marketplace site with the device and click on the desired app's "install" link. The standard Android confirmation dialog pops up, although users may need to enable app installation from third-party sources if they have not done so already.

[Installing a Firefox OS app on Android]

As an earlier Mozilla Hacks post explained it, the key benefit of this new plan for users is that "installed" web apps are fully integrated with the Android platform's services. While one could already launch a mobile-friendly web site from a browser shortcut in Android, the packaged apps appear in the recent-app list and the app drawer, and the user has access to the usual tools to monitor each app's permissions, memory, storage, and CPU usage. Installed apps can also be updated and uninstalled using the standard Android mechanisms.

In practice, the only noticeable shortcoming of this plan may be that the Firefox Marketplace offers substantially fewer choices than Google Play and the side-loaded Android app ecosystem (including individual .apk releases plus alternative app stores like F-Droid). Of course, one might reasonably argue that a high percentage of the world's available Android apps are not worth installing (and the sheer numbers can make finding a good choice more difficult), but the reality is that, at this point, Firefox Marketplace includes a fairly basic, no-frills selection of apps.

[Firefox OS calculator app on Android]

But there are some major names among the available options, including Box, SoundCloud, and Twitter, plus a commendable variety of utility and productivity apps. Out of curiosity, I installed a handful on a Nexus tablet. A few of them exhibited some strange quirks—for instance, typing the same number twice in rapid succession on the scientific calculator app triggered a "zoom" event; not the kind of behavior a non–web-app would fall victim to. But none that I tried failed to install or to run.

In fact, in some cases the web-app version even offers an arguably nicer experience than the native Android version. Take the official Twitter apps, for example. The Android Twitter app takes up 16.52MB of space on the device, and it wants an extensive set of permissions—including phone status and identity, SMS access, GPS and network-based location, access to contacts, access to read and delete USB storage, access to add and remove accounts, full network access, vibration control, the ability to prevent the device from sleeping, installing "shortcuts", and the ability to read and change sync settings. The web-app version of Twitter from the Firefox Marketplace takes up 60KB and uses only the location privilege. The user experience is more or less the same.

[Firefox OS Twitter app on Android]

For app developers who wish to explore the option, Mozilla has added a section to its developer documentation dealing with the process of building and publishing web apps for Android. There is a Node.js-based command-line interface to the APK Factory service with which developers can test their web apps on Android before submitting them to the Firefox Marketplace or self-publishing them.

The other remaining issue is API availability, which may differ between Firefox OS and Android. Mozilla maintains a list on its wiki of all Android-supported APIs, marked with their level of support—not only whether the API itself is supported, but also whether it requires the user to change any preferences on the device. At the moment, several key APIs are only partially implemented, while several others are missing but in the planning stages, including the Alarm, SimplePush, and Web Activities APIs. Several more are marked as "not currently planned," including the WebTelephony and WebBluetooth APIs.

After all that the development community has heard about HTML5's suitability as a mobile app platform, it is always nice to find an opportunity to put web apps to the test on a real device. But, while there are already options for running Tizen, Ubuntu Touch, and Firefox OS on separate devices, this new ability to install and use Firefox OS web apps side by side with their native Android competition provides a new perspective. It is usable today, which is a boon to the curious. Perhaps more interesting will be to see what impact it could have on Android app development further down the line. A large potential user base like Android could motivate a lot of developers to seriously investigate HTML5-based apps who have not done so before.


A report from the first DockerCon

June 18, 2014

This article was contributed by Josh Berkus


DockerCon 2014

Docker Inc. and the Docker community celebrated a 1.0 release at the first DockerCon, which was held in San Francisco on June 9 and 10. The conference slogan was "Containers Are the New Virtualization", which was not only a vision for Docker, but also a challenge to virtualization software projects. DockerCon was packed with new product and project announcements as well as demos by Docker Inc., Google, Amazon, IBM, Red Hat, and other Docker-adopting companies. The conference certainly demonstrated the excitement around Docker that has built over the last year. While almost every presenter and keynote speaker had something new to show off, the most interesting announcements and demos came from Docker Inc. itself, and from Google.

First, however, a recap of the Docker project and its technology is in order for those unfamiliar with it. If you already know about Docker and containers on Linux, you can skip to the next section.

Some background

According to its GitHub page: "Docker is an open source project to pack, ship and run any application as a lightweight container". More technically, Docker is a management tool that enables users to easily package and deploy single-service "containers" as an alternative to virtual machines or traditional installation scripts and packages. The project's goal is to deliver the advantages of virtual machines, including isolation, portability, and easy deployment, without the overhead.

The first thing to understand is that containers are not virtual machines. A virtual machine (VM) creates an isolated runtime environment based on "hardware virtualization": the VM emulates a complete hardware environment for a full guest operating system and kernel. Containers operate at a different layer: they provide an isolated filesystem and guest user space that run on the same kernel as the host operating system (OS).

Containers have a long history, and Docker is just the latest implementation. Within open source, FreeBSD pioneered containers with jails in version 4.0. This was followed by Solaris Zones (later marketed as Solaris Containers) in Solaris 10. Container support came to Linux in several competing implementations, starting with OpenVZ in 2005 and LXC in 2006. Docker was originally based on LXC, but switched to its own libcontainer in release 0.9.

Docker was created by the company dotCloud as part of its platform-as-a-service (PaaS) infrastructure. The company open-sourced Docker in March 2013 under the Apache License, and it quickly came to eclipse the company's cloud services in popularity. In October, the company was renamed Docker Inc. and began planning for the first DockerCon.

The advantages of containers over VMs are that they require fewer system resources, start up much faster, and are smaller and easier to deploy. For example, on my laptop a VirtualBox VM running PostgreSQL on Ubuntu uses about 2GB of disk space and takes a couple minutes to start up, whereas a Docker container uses around 200MB of space and starts up in less than five seconds. This enables the Docker approach of "one application, one container", as it is reasonable to run dozens of containers on a single commodity server.

Compared to VMs, the main limitation of containers is that they run on the same kernel as the host operating system. This means that you cannot run a completely different operating system, such as Windows, on top of Linux using a container, and even the ability to run different Linux distributions is limited by kernel compatibility. Right now, this is especially restrictive since Docker recommends using Linux kernel 3.8 and higher for technical reasons. It will seem less restrictive as Red Hat Enterprise Linux 7.0, which is based on the 3.10 kernel, becomes more widely deployed.

What Docker adds to containers is a suite of integrated management tools. First, there is the concept of "images", which are stripped-down sets of operating system files that supply the foundation of a container. There are a number of "base images" for various Linux distributions, including Ubuntu, CentOS, and Amazon Linux. Users then make their own changes to the OS environment and save new images. These sets of changes are known as "layers" and are implemented either via a union filesystem, such as aufs, or, more commonly these days, via Btrfs snapshots.

The second major feature is the "Dockerfile", a configuration file that runs commands on the container and launches the service that will be the container's "main service". For example, in a container built to run the Apache HTTPD server, HTTPD is the main service; when it shuts down, so does the container. This makes it easier to use containers as part of automated testing and deployment, and is a great deal like the virtual machine management offered by Vagrant. Docker also helps create and manage virtual networking, file sharing, and system resource allocations for the containers.
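As an illustration of that model, a minimal Dockerfile for such an HTTPD container might look roughly like this (the base image and package names are examples, not taken from the article):

```dockerfile
# Start from a stock base image
FROM ubuntu:14.04

# Each step below adds a new layer on top of the base image
RUN apt-get update && apt-get install -y apache2

EXPOSE 80

# The "main service": when this process exits, the container stops
CMD ["apachectl", "-D", "FOREGROUND"]
```

Running `docker build` against such a file produces a new image; `docker run` then starts a container whose lifetime is tied to the HTTPD process.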

Docker 1.0 and DockerHub

Docker CEO Ben Golub kicked off DockerCon with a keynote announcing two things: the release of Docker 1.0, and the launch of Docker Hub. Golub, who previously was CEO of Gluster, joined Docker in July 2013, after the first open-source release. He spent some time talking about the momentum and accomplishments of Docker Inc. and the Docker community, including the success of its open source community-building effort. Thanks to the Apache license and an open contribution policy, he said, it has received contributions from over 450 people, including nearly 400 outside contributors.

Then he unveiled Docker 1.0, which was released the day before DockerCon. For those who have been following Docker development, there were no big surprises; 1.0 was identical to the 0.12.0 release candidate. For those who haven't touched Docker in a while, though, there are a bunch of changes, some of them fairly fundamental, that have come about in the three months since Docker 0.8.

First, Docker is no longer based on LXC, and as of version 1.0 works with multiple container libraries. This means it's possible to run Docker instances using libcontainer, LXC, OpenVZ, and various virtualization tools. It is also theoretically compatible with Solaris Zones and FreeBSD jails, although there were no demonstrations of that. Not all Docker features will work with all container types, and it seems likely that only libcontainer will support everything. The developers have also made the filesystems pluggable, supporting Btrfs, aufs, and device-mapper for storage, with plans to support other filesystems, such as XFS, in the near future.

Docker 1.0 adds the ability to pause and resume containers to save CPU cycles. It has improved security and made Docker compatible with SELinux and AppArmor for high-security environments. There were also a bunch of minor improvements to Dockerfiles and Docker commands to fix longstanding issues. Boot2docker, a lightweight VM that allows Mac and Windows users to use Docker, has also been brought up to 1.0 status. Finally, libcontainer has become its own, standalone project.

More importantly, with the 1.0 release, the Docker project is declaring this a stable version of the software, and Docker Inc. will be offering long-term support for it. The project is promising a stable API with a commitment to backward compatibility for the future. Accordingly, Docker has requested and received its official port numbers from the Internet Assigned Numbers Authority (IANA) for HTTP and HTTPS API traffic: 2375 and 2376, respectively.

The other big new thing is Docker Hub, which is a centralized repository for container images. Users can upload and download images of OS and application containers from it, and downloading from Docker Hub is the default option for new container deployments in the Docker API. The images stored at Docker Hub include users' personal application images and "Official Repositories", which are vetted and curated images managed by Docker staff and trusted outsiders. These include both base-level OS images, like "Ubuntu", and application images, such as "WordPress".

Docker Hub and the namespace for images are organized like GitHub, except for the official images. For example, the official PostgreSQL image is at "postgres", and if I create and release my own version, it will be at "jberkus/postgres". Also like GitHub, public image repositories will be free, but private ones will require a paid account.

Google's Docker tools

During DockerCon, multiple companies, including Amazon, Red Hat, Rackspace, and IBM, announced and demonstrated various products and tools designed to work with Docker. To me, the most interesting of these talks was the keynote given by Google's Eric Brewer on the second morning of DockerCon. It was engaging because not only is Google using Docker, it is releasing a whole bunch of internal tools for Docker as open source.

According to Brewer, Google has been using containers for a while, managed by an internal tool set that was never open sourced. Containers are "application-centric", which is how Google does things, and are suitable for large-scale application framework load-balancing. "Google deploys over two billion containers per week," said Brewer. "We run containers inside VMs on top of containers."

So when Docker came along, Google decided to embrace it; portions of the company's infrastructure have been converted to Docker, and Google has recently begun contributing to the project.

In October 2013, Google released its own container system, originally in competition with Docker. It bears the cute name "LMCTFY", which stands for "Let Me Contain That For You", after a well-known web site. LMCTFY offers resource-managed containers, which use control groups to limit the CPU, memory, and I/O usage of each container so that more containers can share a single machine. LMCTFY also supports nested containers, which enable containers to be grouped.

Google plans to take this resource-management code and move it into Docker to give it the same capabilities. Its first effort, released for the hackathon on the day before DockerCon, is cAdvisor, a tool that reports container resource usage on the host system.

Like the Docker project, Google also endorses the mantra of "one service, one container". However, this means that you have many closely related containers that need to be deployed together, then started and stopped together. For example, you might have one container running a web application server, a second holding mapped file storage, and a third running a logger service. Google uses nested containers to group these containers into "pods", which are deployed as a unit and are intended to share a single IP address.

To support this architecture, Google has released the Kubernetes project, which is an "orchestration" system for groups of containers organized into pods. These pod configurations are controlled through a configuration file in JSON that also supports the idea of load-balanced groups of pods. Pods communicate through ports assigned to each service at declaration time. All of this is designed to enable the rapid provisioning of large groups of servers based on a declarative configuration.
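To give a flavor of the idea, an early-style Kubernetes pod declaration was a JSON file along these lines (a sketch only; the field names follow the early v1beta1 API, the image names are invented, and neither is guaranteed to match any particular release):

```json
{
  "id": "web-pod",
  "kind": "Pod",
  "apiVersion": "v1beta1",
  "desiredState": {
    "manifest": {
      "version": "v1beta1",
      "containers": [
        { "name": "app", "image": "example/webapp",
          "ports": [{ "containerPort": 8080 }] },
        { "name": "logger", "image": "example/logger" }
      ]
    }
  }
}
```

The two containers here would be scheduled together and share one IP address, matching the pod model described above.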

Brewer said that Google plans to open source more internal container-management tools in the future to make Docker the "open standard" for containerization on the web. All of these projects will be under the Google Cloud Platform group of tools.

More DockerCon

Of course, there were many other presentations and demos during the conference. Amazon announced that it was changing its application deployment service, Elastic Beanstalk, to be based on Docker in the future. Speakers from Chef, SaltStack, and Puppet each showed off using their management tools to deploy containers. Red Hat talked about its new Project Atomic, a lightweight version of RHEL designed to both run, and be run on, containers.

One of the major areas of technical competition among the various companies at DockerCon was "orchestration", which means tools to manage large numbers of containers on many physical hosts. In addition to Google's Kubernetes project, there were talks and demos of Red Hat's GearD and Apache Mesos. Docker Inc. is also working on a new orchestration tool called libswarm.

Overall, DockerCon was impressive in the amount of enthusiasm, adoption, and technology arrayed around a project which is only fifteen months old. In a little over a year, it has acquired a full ecosystem of dependent projects and competing corporate contributors of which any open source project would be proud. I could not help but come away from the conference convinced that I'll see a lot more of Docker in the future. In fact, I'm already working on improvements to the official PostgreSQL image.


Karen Sandler on what we mean by "we"

By Nathan Willis
June 18, 2014

TXLF 2014

Identity can be a nebulous issue for the free and open-source software (FOSS) community, perhaps in part because of how different FOSS is from other communities. Karen Sandler of the Software Freedom Conservancy (SFC) explored that topic from several angles during her keynote talk, "Identity crisis: are we who we say we are?" at the fifth annual Texas Linux Fest in Austin. In particular, the FOSS community often speaks of itself as a monolithic "we," but defining who "we" means is a tricky task in many FOSS contexts, she said. There are blurry boundaries, multiple roles, and overlapping objectives that permeate many FOSS projects, and the language used can exacerbate real-world problems.

Sandler started her talk by noting that she currently has many roles herself. In addition to her role as Executive Director of SFC, she was recently elected to the Board of the GNOME Foundation (where she previously served as Executive Director), is a practicing attorney in New York, and is associated with several other organizations, such as the Software Freedom Law Center (SFLC) and QuestionCopyright.Org. But in spite of her extensive experience with FOSS, she said, it is only in the past year or so that she has really felt like she has gotten a handle on the complicated issues of identity and representation.

In March, she taught a seminar on legal ethics in free software on behalf of the Free Software Foundation. The seminar was a professional "continuing education" course (practicing attorneys are required to complete a certain number of these in order to maintain their certification), and although she initially worried that preparing the material would prove boring, it turned out to be fascinating, since it highlighted just how differently FOSS does things from the rest of the world. To begin with, there are blurry lines everywhere in FOSS: between what is personal and what is professional, between volunteers and paid contributors, between non-profit organizations and for-profit companies, and even between the ideological and commercial goals that motivate the work.

Still, she said, "we say 'we' a lot," and figuring out who "we" is in any one instance can be difficult. As a lawyer, she continued, she has to think about the question in strict terms, since attorneys have definite legal obligations to their clients and rules they must abide by. For instance, she noted that strangers often come up to her at FOSS events and start to share "juicy gossip" about projects and companies. She stops them and asks whether they should be telling their story, and often hears a reply like "it's okay: you're my lawyer." In fact, Sandler said, she does not represent everyone in the FOSS world, and for those clients she does represent, she has obligations to protect their interests, which may include giving the client information that she learned from someone else.

But figuring out who you represent can be confusing in FOSS, she said. Other fields are all about keeping things secret, but FOSS wants to work in the open. Even within a project, the lines between client and outsider can be fuzzy; Sandler said she once had a conversation with a Red Hat employee about a legal question relating to the GNOME project, and had to tell the person she was required to take the question to Red Hat's legal team instead. Ethics rules dictate that an attorney speak only to another party's attorney (and not to the person involved) once they know the other party has representation.

Organizations, communities, and friends

Of course, many organizations exist in the FOSS universe, which can help to draw clearer lines about who is and is not "we" in a given context. But the various legal forms these organizations take affect matters deeply. Some organizations, for instance, are 501(c)(3) charities acting "for the public good," while others are 501(c)(6) trade associations, which act in the interests of their members to promote a business goal. Each type of organization is appropriate in some circumstances, yet in FOSS their goals can seem to align (such as promoting free software adoption in businesses) while remaining quite different from a legal standpoint.

The differences between the various types of non-profit organization are most certainly important to the US Internal Revenue Service (IRS), she explained. FOSS projects' rhetoric of "changing the world" may be genuine idealism, she said, but it sounds virtually identical to every for-profit tech company's advertising, too. A few years back, the IRS started taking a hard look at the various FOSS non-profit filings, apparently out of concern over whether they were genuinely doing their work for "the public good." Naturally, the agency found the question confusing; it tried to find clear lines—such as saying that a project using a copyleft license was a "public good" project, while one using a permissive license was interested in proprietarization and was, thus, a trade association. But such simple rules do not encompass the wide range of ideas about licensing, Sandler said; one cannot blame the IRS for being confused, and many of the FOSS non-profit applications take a long time to process as a result.

In addition to an organization's purpose, what constitutes "we" also concerns how and why individuals choose to participate in FOSS. When Canonical rolled out its Unity desktop interface in 2011, Sandler was Executive Director of the GNOME Foundation. Since Unity was an alternative to the recently-released GNOME Shell, it was already a move of considerable interest to her in her role at GNOME, but Sandler said she also found it surprising that so many in the Ubuntu community did not object to contributing their effort to a for-profit company's project. So she went to the next Ubuntu Developer Summit (UDS) hoping to get a feel for the community's stance. Over the course of UDS, she said, she had many conversations with her assigned roommate (a motivated Ubuntu volunteer), in particular asking the roommate about her motivations for contributing. Fundamentally, Sandler said, the roommate's answer was "because my friends are part of this community."

Being with one's friends is a major motivator for participating, but it is also something that makes FOSS distinct. Sandler noted the well-publicized incident at a recent PyCon where two people joking with each other were overheard by someone else, and a contentious conference-harassment incident resulted. FOSS makes the line between personal and professional blurry, Sandler observed. Many of us like to go see our friends at FOSS events, she said; we even invite FOSS friends to personal events like birthday parties, and we work from "home offices." In addition, we also "play musical chairs" a lot, moving to different employers even while continuing to work on the same project.

We are even conflicted at times about who we are as individuals. She quoted ownCloud's Frank Karlitschek, who said that in his startup he is sometimes an "evil capitalist" and sometimes an "ideological free software guy." Within the FOSS community, it can be hard to tell which role someone is in from minute to minute. There are also developers who are paid by a company to work on code by day, then work on the same code at night. The confusion this causes was pointed out in the Debian project's recent init system debate, when some participants said that it was not clear when others were expressing their personal views and when they were expressing their company's views. On the other hand, asking what "hat" another party was wearing for a particular comment was also seen as a type of attack.

Governance

Coping with these sorts of uncertainties is one of the reasons that FOSS projects have governance structures, Sandler said. Providing projects with assistance is why SFC was founded; it handles logistical duties (including fiscal oversight, conference travel, and even paid development contracts) for its 30-plus member projects, she said, but it also lets those projects make a statement about identity issues. Joining SFC allows a project to be clear that it is a charitable effort not controlled by a company, to commit its assets (financial and intellectual) to the social good, and to have a clearly defined "we" by establishing project governance and membership policies.

In addition to SFC's other operations—which Sandler described—the organization can help FOSS projects deal with some of the trickier identity questions. An example is trademarks, which she described as "a lot more important than you think they are." A common problem facing FOSS projects is that they are often started by a small, excited group of volunteers—who trust each other. Somewhere down the road as work progresses, the group decides it needs a trademark, which one person then registers as an individual. But sometimes those individuals subsequently form companies to do paid consulting work related to the project, and at that point the same individual owns the trademark and a company that conducts business related to it, which complicates matters for the community.

SFC is working to improve the services it offers, Sandler said, by publishing transparent annual reports to "show where the money goes," and by developing policies in the open, including keeping them in a public Git repository. She listed several other problems that still need to be addressed, such as policies about the default positions people speak from during public discussions (like Debian's init system debate), and better ways to handle the use of email addresses and aliases (which an audience member asked about in the Q&A period after the talk, concerned about when he could use his company email address for FOSS project work). She told attendees to "watch this space" for upcoming announcements about new efforts the SFC will be launching in the near term (while apologizing for making "one of those annoying 'pre-announcements'").

In the end, Sandler told the audience that she wants the FOSS community to "worry and not worry" about the complicated identity questions that surround it. People should worry, in the sense that they should think about the questions and try to get things right. But they should not worry, in the sense that they should know that the community is passionate about what it does and everyone else is trying to get things right, too.


Page editor: Jonathan Corbet

Security

Apt vulnerability sparks Debian security discussion

By Jake Edge
June 18, 2014

Downloading packages from a distribution's repositories is generally considered to be a safe operation—packages are (or at least should be) signed and those signatures are verified before installation. Debian's Apt package manager has used cryptographic signatures to verify the authenticity of packages for more than ten years. So it was a rather large surprise to see a late May report that Apt doesn't require valid signatures for source packages.

Jakub Wilk found the bug when testing repositories with packages that didn't have any signatures. By using a proxy that returned 404 "not found" errors for any requests targeting Release.gpg or InRelease files (which hold the signatures), he found that installing or downloading binary packages failed (as expected). But he also found that downloading or unpacking a source package worked, as did building a binary package from the downloaded source package. That is clearly a flaw that a man in the middle (MITM) could exploit to put compromised source files onto Debian systems.
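The filtering that Wilk's test proxy performed reduces to one decision: requests for repository signature files get a 404, and everything else passes through. A minimal sketch of just that decision logic (the proxy plumbing is omitted, and the function name is ours):

```python
# Sketch of the test proxy's filtering rule: pretend the repository's
# signature files do not exist, and let all other requests through.
SIGNATURE_FILES = ("Release.gpg", "InRelease")

def should_return_404(request_path: str) -> bool:
    """Return True when the request targets a repository signature file."""
    filename = request_path.rsplit("/", 1)[-1]
    return filename in SIGNATURE_FILES
```

With signatures unreachable, a correct client must refuse to trust the repository; the Apt bug was that source-package operations skipped that check.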

It is a difficult vulnerability to exploit, perhaps, and would require user assistance (i.e. building the package) to activate a malicious payload, but it certainly runs afoul of reasonable expectations. One can also imagine targeted attacks using the vulnerability that could be far more destructive. Worse yet, though, is that the normal methods for rebuilding the Debian archive (e.g. for a new architecture) would not detect this kind of tampering, as Thorsten Glaser pointed out. Those methods assume that apt-get source pkg always verifies the signature.

The problem in Apt was fixed quickly. The function that handles source packages simply needed to call the IsTrusted() method to verify the signature. In addition, a test case was added to catch this if the bug ever reappears. The bug was then closed by Michael Vogt on June 10, only to be reopened by Christoph Anton Mitterer two days later.

Although there was mention of contacting the security team in the bug, that evidently never happened. So one of the reasons that Mitterer reopened the issue was to ensure that a CVE got assigned and that a Debian Security Advisory (DSA) was issued. As he put it: "So IMHO this bug definitely deserves a CVE and a DSA,... so that people are informed that [their] systems might have been compromised (i.e. if an attacker tricked them into using forged sources)". A CVE was duly assigned (CVE-2014-0478) and DSA-2958-1 was issued.

But there are a number of larger issues here. Mitterer outlines some of them in his lengthy bug-reopening message. He is concerned that various pieces of Debian infrastructure are insufficiently secure against (mostly) MITM attacks. For example, Apt will work with unsigned repositories, which is seen as a feature by some. As David Kalnischkies said: "The 'problem' is that apt supports unsigned repositories as too many people would bitch too much if it would require a signature – it used to work before apt 0.6, it has to work forever, man – FOR EVER!" Glaser's description of the potential MITM problems with sbuild and cowbuilder also factors in. Beyond those, Mitterer wondered about the security verification in packages that download code from elsewhere (e.g. Tor browser or Flash plugin) and other Debian tools that grab code to be built or to create new systems (e.g. debootstrap).

But there is more to improving the security of Debian (or any project, for that matter) than just compiling lists of problem areas. As security team member Thijs Kinkhorst pointed out in a post to the debian-devel mailing list—where parts of the discussion moved—finding some piece of the problem to work on may be a better approach:

You raise a lot of broad concerns under the header "holes in secure apt" which I'm afraid does not [do] much to get us closer to a more secure Debian. Not many people will object that making Debian even more secure is a bad idea; it just needs concrete action, not a large list of potential areas to work on.

I suggest that you focus on one of those aspects of your email and take some concrete action to get it addressed.

Kalnischkies had a similar comment:

What is really sad is that many people keep talking about how much more secure everything should be but don't do the smallest bit of work to make it happen or even do a basic level of research themselves.

So instead of answering all your questions, I will instead leave them unanswered and say: Go on and check for yourself! You shouldn't trust a random guy like me anyway and if that leads to even one person contributing to apt (or the security team or anything else really) in this area, we have a phenomenal massive increase in manpower … (for apt in the 50% ballpark!)

But there certainly is value in collecting up problem areas and trying to figure out what the "proper" solution should be, Mitterer argued. Because many of the solutions would require fairly major changes to how things are done and what types of behavior are allowed—policy decisions, essentially—they are not things that Mitterer (or any single developer) can directly address without involving others.

It's clear that there are some holes in Debian's packaging infrastructure. Beyond the bug that Wilk just found, he also encountered a bug that was reported over a year ago regarding the hash checking done for source packages. It turns out that Apt only checks the MD5 hash, even if there are SHA1 or SHA256 hashes available for the package. That seems rather sloppy, even though it may be hard or impossible to exploit—as Kalnischkies put it: "If you happen to have a same-size preimage attack on MD5 I would be interested to hear about it."

Mitterer is trying to raise the profile of these problems—with many lengthy replies throughout the bug and mailing list threads—but there is little evidence that much progress has been made. Some of the problems may be less dangerous or harder to exploit than Mitterer makes them out to be, but they add up to something that should be a bit worrisome. The inertia of a long-running project may be working against some kind of concerted effort to address the problems, as "we've always done it that way" can sometimes be a powerful, if potentially problematic, argument. It will be interesting to see what, if any, attention these problems get over time—it may require someone to drive the process with more than just ideas and words.

Comments (9 posted)

Brief items

Security quotes of the week

The NSA, GCHQ et al actually don't have the ability to conduct the mass surveillance that we now believe they do. Edward Snowden was in fact groomed, without his knowledge, to become a whistleblower, and the leaked documents were elaborately falsified by the NSA and GCHQ.

The encryption and security systems that 'private' companies are launching in the wake of [these] 'revelations', however, are in fact being covertly funded by the NSA/GCHQ — the aim being to encourage criminals and terrorists to use these systems, which the security agencies have built massive backdoors into.

Doubleplusunlol wins Bruce Schneier's seventh movie-plot threat contest

The antidote for this ransomware was incredibly easy to create because the ransomware came with both the decryption method and the decryption password. Therefore producing an antidote was more of a copy-and-paste job than anything.

It's also worth noting that while this antidote doesn't detect the decryption password automatically, it could be possible to do so. However, future versions of the ransomware will probably not reveal the decryption password so easily and will likely receive it from the C&C [Command and Control] server.

Since the Simplelocker ransomware is a proof-of-concept, the antidote provided here is simply a solution to this proof-of-concept. Future versions of advanced smartphone ransomware will likely prove significantly harder to reverse engineer.

Simon Bell—his analysis of the ransomware is also worth reading

We need to remember that security is a transitive verb: we secure something against something or someone. As you say, DRM is "securing" a device against the user.
pjc50 on Hacker News (Thanks to Paul Wise.)

Comments (none posted)

Android Root Access Vulnerability Affecting Most Devices (Threatpost)

Threatpost reports that most Android devices are vulnerable to a privilege escalation flaw in the kernel. "Researchers at Lacoon Mobile Security are calling the bug “TowelRoot,” because it is the very same vulnerability (CVE-2014-3153) exploited in the latest Android rooting tool developed by George Hotz (Geohot). Successful exploitation of the Linux bug within the Android operating system would give the attacker administrative access to a victim’s phone. Specifically, such access could potentially allow that same attacker to run further malicious code, retrieve files and device data, bypass third-party or enterprise security applications including containers like Samsung’s secure Knox sub-operating system, and establish backdoors for future access on victim devices."

Comments (10 posted)

New vulnerabilities

apt: invalid source package authentication

Package(s):apt CVE #(s):CVE-2014-0478
Created:June 13, 2014 Updated:June 18, 2014
Description: From the Debian advisory:

Jakub Wilk discovered that APT, the high level package manager, did not properly perform authentication checks for source packages downloaded via "apt-get source". This only affects use cases where source packages are downloaded via this command; it does not affect regular Debian package installation and upgrading.

Alerts:
Ubuntu USN-2246-1 apt 2014-06-17
Debian DSA-2958-1 apt 2014-06-12

Comments (none posted)

chromium: multiple vulnerabilities

Package(s):chromium-browser CVE #(s):CVE-2014-3154 CVE-2014-3155 CVE-2014-3156 CVE-2014-3157
Created:June 16, 2014 Updated:October 10, 2014
Description: From the CVE entries:

Use-after-free vulnerability in the ChildThread::Shutdown function in content/child/child_thread.cc in the filesystem API in Google Chrome before 35.0.1916.153 allows remote attackers to cause a denial of service or possibly have unspecified other impact via vectors related to a Blink shutdown. (CVE-2014-3154)

net/spdy/spdy_write_queue.cc in the SPDY implementation in Google Chrome before 35.0.1916.153 allows remote attackers to cause a denial of service (out-of-bounds read) by leveraging incorrect queue maintenance. (CVE-2014-3155)

Buffer overflow in the clipboard implementation in Google Chrome before 35.0.1916.153 allows remote attackers to cause a denial of service or possibly have unspecified other impact via vectors that trigger unexpected bitmap data, related to content/renderer/renderer_clipboard_client.cc and content/renderer/webclipboard_impl.cc. (CVE-2014-3156)

Heap-based buffer overflow in the FFmpegVideoDecoder::GetVideoBuffer function in media/filters/ffmpeg_video_decoder.cc in Google Chrome before 35.0.1916.153 allows remote attackers to cause a denial of service or possibly have unspecified other impact by leveraging VideoFrame data structures that are too small for proper interaction with an underlying FFmpeg library. (CVE-2014-3157)

Alerts:
Mageia MGASA-2014-0413 chromium-browser-stable 2014-10-09
Gentoo 201408-16 chromium 2014-08-30
openSUSE openSUSE-SU-2014:0982-1 chromium 2014-08-11
Ubuntu USN-2298-1 oxide-qt 2014-07-23
Debian DSA-2959-1 chromium-browser 2014-06-14

Comments (none posted)

kernel: denial of service

Package(s):kernel CVE #(s):CVE-2014-3940
Created:June 12, 2014 Updated:July 30, 2015
Description: From the CVE entry:

The Linux kernel through 3.14.5 does not properly consider the presence of hugetlb entries, which allows local users to cause a denial of service (memory corruption or system crash) by accessing certain memory locations, as demonstrated by triggering a race condition via numa_maps read operations during hugepage migration, related to fs/proc/task_mmu.c and mm/mempolicy.c.

Alerts:
Scientific Linux SLSA-2015:1272-1 kernel 2015-08-03
Oracle ELSA-2015-1272 kernel 2015-07-29
Red Hat RHSA-2015:1272-01 kernel 2015-07-22
Scientific Linux SLSA-2015:0290-1 kernel 2015-03-25
Red Hat RHSA-2015:0290-01 kernel 2015-03-05
Oracle ELSA-2015-0290 kernel 2015-03-12
Red Hat RHSA-2014:0913-01 kernel-rt 2014-07-22
Ubuntu USN-2288-1 linux-lts-trusty 2014-07-16
Ubuntu USN-2290-1 kernel 2014-07-16
Fedora FEDORA-2014-7320 kernel 2014-06-16
Fedora FEDORA-2014-7128 kernel 2014-06-11

Comments (none posted)

kernel: information leak

Package(s):kernel CVE #(s):CVE-2014-1739
Created:June 17, 2014 Updated:June 18, 2014
Description: From the oss-sec mailing list:

We found an infoleak vulnerability in the ioctl media_enum_entities() that allows to disclose 200 bytes of the kernel process' stack. The vulnerability is exploitable on versions up to linux-3.15-rc3 by local users with read access to `/dev/media0`. Linux distributions ship with `chmod 600 /dev/media0` preventing unprivileged local users from exploiting the vulnerability. However, some Android devices are known to be shipped with both read and/or write permissions for all: chmod 666 /dev/media0.

Alerts:
Mageia MGASA-2015-0077 kernel-rt 2015-02-19
Oracle ELSA-2015-0290 kernel 2015-03-12
openSUSE openSUSE-SU-2014:1677-1 kernel 2014-12-21
Oracle ELSA-2014-3104 kernel 2014-12-11
Oracle ELSA-2014-3104 kernel 2014-12-11
Scientific Linux SLSA-2014:1971-1 kernel 2014-12-10
Oracle ELSA-2014-1971 kernel 2014-12-09
CentOS CESA-2014:1971 kernel 2014-12-10
Red Hat RHSA-2014:1971-01 kernel 2014-12-09
Oracle ELSA-2014-3096 kernel 2014-12-04
Oracle ELSA-2014-3096 kernel 2014-12-04
SUSE SUSE-SU-2014:1316-1 Linux kernel 2014-10-22
SUSE SUSE-SU-2014:1319-1 Linux kernel 2014-10-23
openSUSE openSUSE-SU-2014:1246-1 kernel 2014-09-28
Mageia MGASA-2014-0332 kernel-vserver 2014-08-18
Mageia MGASA-2014-0337 kernel-tmb 2014-08-18
Mageia MGASA-2014-0331 kernel-tmb 2014-08-18
Mageia MGASA-2014-0336 kernel-linus 2014-08-18
Mageia MGASA-2014-0330 kernel-linus 2014-08-18
Ubuntu USN-2288-1 linux-lts-trusty 2014-07-16
Ubuntu USN-2286-1 linux-lts-raring 2014-07-16
Ubuntu USN-2285-1 linux-lts-quantal 2014-07-16
Ubuntu USN-2290-1 kernel 2014-07-16
Ubuntu USN-2259-1 kernel 2014-06-27
Ubuntu USN-2263-1 linux-ti-omap4 2014-06-27
Ubuntu USN-2261-1 linux-lts-saucy 2014-06-27
Ubuntu USN-2264-1 kernel 2014-06-27
Mageia MGASA-2014-0273 kernel 2014-06-22
Mageia MGASA-2014-0265 kernel 2014-06-18
CentOS CESA-2014:X009 kernel 2014-06-16

Comments (none posted)

kernel: denial of service

Package(s):kernel CVE #(s):CVE-2012-6647
Created:June 18, 2014 Updated:June 18, 2014
Description: From the CVE entry:

The futex_wait_requeue_pi function in kernel/futex.c in the Linux kernel before 3.5.1 does not ensure that calls have two different futex addresses, which allows local users to cause a denial of service (NULL pointer dereference and system crash) or possibly have unspecified other impact via a crafted FUTEX_WAIT_REQUEUE_PI command.

Alerts:
Oracle ELSA-2014-1392 kernel 2014-10-21
CentOS CESA-2014:0981 kernel 2014-07-31
Scientific Linux SLSA-2014:0981-1 kernel 2014-07-29
Oracle ELSA-2014-0981 kernel 2014-07-29
Red Hat RHSA-2014:0981-01 kernel 2014-07-29
SUSE SUSE-SU-2014:0807-1 Linux Kernel 2014-06-18

Comments (none posted)

libfep: privilege escalation

Package(s):libfep CVE #(s):CVE-2014-3980
Created:June 18, 2014 Updated:June 18, 2014
Description: From the CVE entry:

libfep 0.0.5 before 0.1.0 does not properly use UNIX domain sockets in the abstract namespace, which allows local users to gain privileges via unspecified vectors.

Alerts:
Fedora FEDORA-2014-7214 libfep 2014-06-17
Fedora FEDORA-2014-7126 libfep 2014-06-17

Comments (none posted)

lucene-solr: multiple vulnerabilities

Package(s):lucene-solr CVE #(s):CVE-2013-6397 CVE-2013-6407 CVE-2013-6408
Created:June 18, 2014 Updated:June 18, 2014
Description: From the CVE entries:

Directory traversal vulnerability in SolrResourceLoader in Apache Solr before 4.6 allows remote attackers to read arbitrary files via a .. (dot dot) or full pathname in the tr parameter to solr/select/, when the response writer (wt parameter) is set to XSLT. NOTE: this can be leveraged using a separate XXE (XML eXternal Entity) vulnerability to allow access to files across restricted network boundaries. (CVE-2013-6397)

The UpdateRequestHandler for XML in Apache Solr before 4.1 allows remote attackers to have an unspecified impact via XML data containing an external entity declaration in conjunction with an entity reference, related to an XML External Entity (XXE) issue. (CVE-2013-6407)

The DocumentAnalysisRequestHandler in Apache Solr before 4.3.1 does not properly use the EmptyEntityResolver, which allows remote attackers to have an unspecified impact via XML data containing an external entity declaration in conjunction with an entity reference, related to an XML External Entity (XXE) issue. NOTE: this vulnerability exists because of an incomplete fix for CVE-2013-6407. (CVE-2013-6408)

Alerts:
Debian DSA-2963-1 lucene-solr 2014-06-17

Comments (none posted)

lynis: privilege escalation

Package(s):lynis CVE #(s):CVE-2014-3982 CVE-2014-3986
Created:June 18, 2014 Updated:June 18, 2014
Description: From the CVE entries:

include/tests_webservers in Lynis before 1.5.5 on AIX allows local users to overwrite arbitrary files via a symlink attack on a /tmp/lynis.##### file. (CVE-2014-3982)

include/tests_webservers in Lynis before 1.5.5 allows local users to overwrite arbitrary files via a symlink attack on a /tmp/lynis.*.unsorted file with an easily determined name. (CVE-2014-3986)

Alerts:
Fedora FEDORA-2014-7400 lynis 2014-06-17

Comments (none posted)

nova: privilege escalation

Package(s):nova CVE #(s):CVE-2013-1068 CVE-2014-0167
Created:June 18, 2014 Updated:July 14, 2014
Description: From the CVE entry:

The Nova EC2 API security group implementation in OpenStack Compute (Nova) 2013.1 before 2013.2.4 and icehouse before icehouse-rc2 does not enforce RBAC policies for (1) add_rules, (2) remove_rules, (3) destroy, and other unspecified methods in compute/api.py when using non-default policies, which allows remote authenticated users to gain privileges via these API requests. (CVE-2014-0167)

From the Ubuntu advisory:

Darragh O'Reilly discovered that OpenStack Nova did not properly set up its sudo configuration. If a different flaw was found in OpenStack Nova, this vulnerability could be used to escalate privileges. This issue only affected Ubuntu 13.10 and Ubuntu 14.04 LTS. (CVE-2013-1068)

Alerts:
Red Hat RHSA-2014:1084-01 openstack-nova 2014-08-21
Fedora FEDORA-2014-7954 openstack-nova 2014-07-12
Ubuntu USN-2248-1 cinder 2014-06-18
Ubuntu USN-2247-1 nova 2014-06-17

Comments (none posted)

opera: multiple vulnerabilities

Package(s):opera CVE #(s):CVE-2012-6461 CVE-2012-6462 CVE-2012-6463 CVE-2012-6464 CVE-2012-6465 CVE-2012-6466 CVE-2012-6467 CVE-2012-6468 CVE-2012-6469 CVE-2012-6470 CVE-2012-6471 CVE-2012-6472 CVE-2013-1618 CVE-2013-1637 CVE-2013-1638 CVE-2013-1639
Created:June 16, 2014 Updated:June 18, 2014
Description: From the Gentoo advisory:

A remote attacker could entice a user to open a specially crafted web page using Opera, possibly resulting in execution of arbitrary code with the privileges of the process or a Denial of Service condition. Furthermore, a remote attacker may be able to obtain sensitive information, conduct Cross-Site Scripting (XSS) attacks, or bypass security restrictions.

Alerts:
Gentoo 201406-14 opera 2014-06-14

Comments (none posted)

php5, gd: denial of service

Package(s):php5, gd CVE #(s):CVE-2014-2497
Created:June 12, 2014 Updated:March 29, 2015
Description: From the CVE entry:

The gdImageCreateFromXpm function in gdxpm.c in libgd, as used in PHP 5.4.26 and earlier, allows remote attackers to cause a denial of service (NULL pointer dereference and application crash) via a crafted color table in an XPM file.

Alerts:
Gentoo 201607-04 gd 2016-07-16
Ubuntu USN-2987-1 libgd2 2016-05-31
Oracle ELSA-2015-1135 php 2015-06-23
Debian-LTS DLA-189-1 libgd2 2015-04-08
Debian DSA-3215-1 libgd2 2015-04-06
Mandriva MDVSA-2015:153 libgd 2015-03-29
Fedora FEDORA-2015-0503 gd 2015-01-20
Fedora FEDORA-2015-0432 gd 2015-01-19
Red Hat RHSA-2014:1766-01 php55-php 2014-10-30
Red Hat RHSA-2014:1765-01 php54-php 2014-10-30
Oracle ELSA-2014-1326 php 2014-09-30
Oracle ELSA-2014-1327 php 2014-09-30
CentOS CESA-2014:1326 php 2014-09-30
CentOS CESA-2014:1326 php 2014-09-30
CentOS CESA-2014:1327 php 2014-09-30
Red Hat RHSA-2014:1326-01 php 2014-09-30
Red Hat RHSA-2014:1327-01 php 2014-09-30
Slackware SSA:2014-247-01 php 2014-09-04
Mandriva MDVSA-2014:172 php 2014-09-03
Fedora FEDORA-2014-9679 php 2014-09-02
Gentoo 201408-11 php 2014-08-29
Scientific Linux SLSA-2014:1326-1 php53 and php 2014-10-13
Fedora FEDORA-2014-8458 gd 2014-08-15
Mandriva MDVSA-2014:133 gd 2014-07-10
Mageia MGASA-2014-0283 php 2014-07-09
Mageia MGASA-2014-0288 gd 2014-07-09
SUSE SUSE-SU-2014:0873-2 PHP5 2014-07-07
SUSE SUSE-SU-2014:0873-1 PHP5 2014-07-05
SUSE SUSE-SU-2014:0869-1 php53 2014-07-04
SUSE SUSE-SU-2014:0868-1 PHP5 2014-07-04
openSUSE openSUSE-SU-2014:0786-1 php5 2014-06-12
openSUSE openSUSE-SU-2014:0784-1 php5 2014-06-12

Comments (none posted)

php5: code execution

Package(s):php5 CVE #(s):CVE-2014-4049
Created:June 17, 2014 Updated:July 31, 2014
Description: From the Debian advisory:

It was discovered that PHP, a general-purpose scripting language commonly used for web application development, is vulnerable to a heap-based buffer overflow in the DNS TXT record parsing. A malicious server or man-in-the-middle attacker could possibly use this flaw to execute arbitrary code as the PHP interpreter if a PHP application uses dns_get_record() to perform a DNS query.

Alerts:
SUSE SUSE-SU-2016:1638-1 php53 2016-06-21
Oracle ELSA-2015-1135 php 2015-06-23
Mandriva MDVSA-2015:080 php 2015-03-28
Red Hat RHSA-2014:1766-01 php55-php 2014-10-30
Red Hat RHSA-2014:1765-01 php54-php 2014-10-30
Oracle ELSA-2014-1326 php 2014-09-30
Oracle ELSA-2014-1327 php 2014-09-30
Mandriva MDVSA-2014:172 php 2014-09-03
Gentoo 201408-11 php 2014-08-29
Debian DSA-3008-2 php5 2014-08-21
Scientific Linux SLSA-2014:1012-1 php53 and php 2014-08-06
CentOS CESA-2014:1013 php 2014-08-06
openSUSE openSUSE-SU-2014:0942-1 php5 2014-07-30
CentOS CESA-2014:1012 php53 2014-08-06
Oracle ELSA-2014-1013 php 2014-08-06
Oracle ELSA-2014-1012 php53 2014-08-06
Oracle ELSA-2014-1012 php53 2014-08-06
CentOS CESA-2014:1012 php53 2014-08-06
Red Hat RHSA-2014:1012-01 php53 2014-08-06
Slackware SSA:2014-192-01 php 2014-07-11
Mandriva MDVSA-2014:130 php 2014-07-09
Mageia MGASA-2014-0284 php 2014-07-09
Mageia MGASA-2014-0283 php 2014-07-09
SUSE SUSE-SU-2014:0873-2 PHP5 2014-07-07
Fedora FEDORA-2014-7782 php 2014-07-08
SUSE SUSE-SU-2014:0873-1 PHP5 2014-07-05
SUSE SUSE-SU-2014:0869-1 php53 2014-07-04
SUSE SUSE-SU-2014:0868-1 PHP5 2014-07-04
Red Hat RHSA-2014:1013-01 php 2014-08-06
Fedora FEDORA-2014-7765 php 2014-06-30
Ubuntu USN-2254-2 php5 2014-06-25
openSUSE openSUSE-SU-2014:0841-1 php5 2014-06-25
Ubuntu USN-2254-1 php5 2014-06-23
Debian DSA-2961-1 php5 2014-06-16

Comments (none posted)

php-horde-Horde-Ldap: check for empty passwords

Package(s):php-horde-Horde-Ldap CVE #(s):
Created:June 18, 2014 Updated:June 18, 2014
Description: From the Red Hat bugzilla:

It was reported that php-horde-Horde-Ldap could be used to connect to an LDAP server with an empty password. In this case, the flaw is in the LDAP server, so this issue is just considered hardening.

Alerts:
Fedora FEDORA-2014-7228 php-horde-Horde-Ldap 2014-06-17

Comments (none posted)

python-djblets: cross-site scripting

Package(s):python-djblets CVE #(s):CVE-2014-3994
Created:June 18, 2014 Updated:June 18, 2014
Description: From the CVE entry:

Cross-site scripting (XSS) vulnerability in util/templatetags/djblets_js.py in Djblets before 0.7.30 and 0.8.x before 0.8.3 for Django, as used in Review Board, allows remote attackers to inject arbitrary web script or HTML via a JSON object, as demonstrated by the name field when changing a user name.

Alerts:
Mageia MGASA-2014-0462 python-djblets 2014-11-21
Fedora FEDORA-2014-7224 python-djblets 2014-06-17
Fedora FEDORA-2014-7223 python-djblets 2014-06-17

Comments (none posted)

typo3-cms-4_5: multiple vulnerabilities

Package(s):typo3-cms-4_5 CVE #(s):CVE-2014-3941 CVE-2014-3942 CVE-2014-3943
Created:June 18, 2014 Updated:June 18, 2014
Description: From the CVE entries:

TYPO3 4.5.0 before 4.5.34, 4.7.0 before 4.7.19, 6.0.0 before 6.0.14, 6.1.0 before 6.1.9, and 6.2.0 before 6.2.3 allows remote attackers to have unspecified impact via a crafted HTTP Host header, related to "Host Spoofing." (CVE-2014-3941)

The Color Picker Wizard component in TYPO3 4.5.0 before 4.5.34, 4.7.0 before 4.7.19, 6.0.0 before 6.0.14, and 6.1.0 before 6.1.9 allows remote authenticated editors to execute arbitrary PHP code via a serialized PHP object. (CVE-2014-3942)

Multiple cross-site scripting (XSS) vulnerabilities in unspecified backend components in TYPO3 4.5.0 before 4.5.34, 4.7.0 before 4.7.19, 6.0.0 before 6.0.14, 6.1.0 before 6.1.9, and 6.2.0 before 6.2.3 allow remote authenticated editors to inject arbitrary web script or HTML via unknown parameters. (CVE-2014-3943)

Alerts:
openSUSE openSUSE-SU-2016:2114-1 typo3-cms-4_7 2016-08-19
openSUSE openSUSE-SU-2016:2025-1 typo3 2016-08-10
openSUSE openSUSE-SU-2014:0813-1 typo3-cms-4_5 2014-06-18

Comments (none posted)

xen: denial of service

Package(s):xen CVE #(s):CVE-2014-3967 CVE-2014-3968
Created:June 17, 2014 Updated:June 26, 2014
Description: From the xen advisory:

The implementation of the HVM control operation HVMOP_inject_msi, while checking whether a particular IRQ was already set up in the necessary way, fails to properly check all respective conditions. In particular it doesn't check the returned pointer for being non-NULL before de-referencing it. (CVE-2014-3967)

Furthermore that same code also handles certain errors by logging messages, without (under default settings) at least making these messages subject to rate limiting. (CVE-2014-3968)

The NULL pointer de-reference would lead to a host crash, and hence a denial of service would result. Since host and guest page tables are fully separated for HVM guests, the guest would not be able to leverage the vulnerability for other kinds of attacks (privilege escalation or information leak).

The spamming of the hypervisor log could similarly lead to a denial of service.

In a configuration where device models run with limited privilege (for example, stubdom device models), a guest attacker who successfully finds and exploits an unfixed security flaw in qemu-dm could leverage the other flaw into a Denial of Service affecting the whole host.

In the more general case, in more abstract terms: a malicious administrator of a domain privileged with regard to an HVM guest can cause Xen to become unresponsive leading to a Denial of Service.

Alerts:
Gentoo 201504-04 xen 2015-04-11
openSUSE openSUSE-SU-2014:1281-1 xen 2014-10-09
openSUSE openSUSE-SU-2014:1279-1 xen 2014-10-09
Fedora FEDORA-2014-7408 xen 2014-06-26
Fedora FEDORA-2014-7423 xen 2014-06-26
CentOS CESA-2014:X008 xen 2014-06-16

Comments (none posted)

Page editor: Jake Edge

Kernel development

Brief items

Kernel release status

The current development kernel is 3.16-rc1, released on June 15. Linus said:

It may have been a slightly unusual two week merge window, in that it's only one week since the release of 3.15 and the first week overlapped with the last -rc for that previous release, but that doesn't seem to have affected development much. Things look normal, and if anything, this is one of the bigger release windows rather than on the smaller side. It's not quite as big as the merge window for 3.15, but it's actually not that far off.

In the end, 11,364 changesets were pulled in during the merge window (3.15-rc1 had 12,034).

Stable updates: 3.15.1, 3.14.8, 3.10.44, and 3.4.94 all came out on June 16. The list of fixes is relatively short this time, but they certainly look worth having.

Comments (none posted)

Quote of the week

So there is an impressionistic painting of RCU inside my head. And in one corner, there is something that might be a dog. If it really is a dog, or can be convinced to become one, perhaps it can herd the sheep in the middle of the painting. At least they look sort of like sheep. They might instead be rainclouds. Or powdered-wig-wearing barristers. Of course, in the latter case, introducing them to the dog might be worthwhile just for sheer entertainment value.

Anyway, I am heading out to the gym. Perhaps a few gym sessions and a couple of sleep cycles will convert the painting to useful code. This approach has worked well for me many times over the decades, so here is hoping that it does again.

— The Paul McKenney school of software design

Comments (none posted)

Kernel development news

The 3.16 merge window concludes

By Jake Edge
June 18, 2014

On June 15, Linus Torvalds put out the 3.16-rc1 prepatch and closed the merge window for this cycle. From here on in, features are unlikely to be added; fixes and stabilization patches should predominate.

At this point, Torvalds has merged 11,364 non-merge commits for 3.16. That's around 3,200 since last week's look (and a total of nearly 6,000 since part 1 of our merge window coverage). The last two merge windows have been two of the top three windows in terms of commits, with 3.16 in third place behind 3.10 (11,963) and 3.15 (12,034). We will have to see if the trend continues and we get 11,000–12,000 patches for 3.17 and beyond.

In any case, here are some of the more significant changes that users will see in 3.16:

  • Modules now have the read-only (RO) and no-execute (NX) bits set on their data sections much earlier in the loading process, before parsing any module arguments. This will further reduce the time window in which a misbehaving (or malicious) module can modify or execute its data.
  • The secure computing (seccomp) BPF filters are now just-in-time (JIT) compiled.
  • Support for TCP fast open over IPv6 has been added.
  • The Xen virtual network interfaces now have multi-queue support, which provides much better performance.
  • Support for busy polling on stream control transmission protocol (SCTP) sockets has been added. Busy polling is set on a socket using the SO_BUSY_POLL socket option; it can reduce the latency of receives on high-traffic interfaces that support the option.
  • The extended verification module (EVM) has added configuration options to support putting new extended attributes (xattrs) into the calculated HMAC value for a file. Using that facility, three Smack attributes (SMACK64EXEC, SMACK64TRANSMUTE and SMACK64MMAP) can now be added into the HMAC calculation.
  • Btrfs has added a new ioctl() called BTRFS_IOC_TREE_SEARCH_V2 to search the filesystem for keys. As its name would imply, it is a more flexible version of the existing BTRFS_IOC_TREE_SEARCH that allows for a larger buffer to be passed in to retrieve larger search results that won't fit into the 3992-byte fixed-size buffer.
  • New hardware support:
    • Graphics: The Nouveau driver now supports NVIDIA Tesla K40 GK110B devices and has initial support for NVIDIA Tegra K1 GK20A devices; support has also been added for ASPEED AST2400 devices.
    • Miscellaneous: Freescale Quad Special Peripheral Interface (SPI) controllers; LPDDR2-NVM flash chips; Broadcom Kona pulse-width modulation (PWM) blocks; Intel system-on-chip (SoC) platform digital temperature sensors (DTSs); and Sensiron SHTC1 and SHTW1 humidity and temperature sensors.
    • Network devices: Broadcom BCM7xxx set-top box SYSTEMPORT Ethernet MACs; STMicroelectronics ST21NFCA near field communication (NFC) controllers; Renesas R-Car SoC controller area network (CAN) controllers; Geschwister Schneider USB/CAN devices; Xilinx CAN devices; Hisilicon HIX5HD2 family network devices; and AMD SoC 10GbE Ethernet devices.
    • Staging graduation: Freescale i.MX5/6 v3 image processing units (IPUv3).

Changes of interest to kernel developers include:

  • A simple interval-tree interface has been added as lib/interval_tree.c. The interval tree is implemented as an augmented red-black tree.
  • Tracepoints have been added to give finer resolution of events during suspend and resume.
  • The BPF interpreter now has a self-test that covers both classic and internal BPF instructions.
  • A software TCP segmentation offload (TSO) API has been added; several drivers have used it to add software TSO support (mvneta, mv643xx_eth, fec).
  • The Documentation/mutex-design.txt file has been extensively updated to better reflect today's reality.
  • Optimistic spinning has been added to read-write semaphores (rwsems). Also, a queued variant of read-write locks (qrwlocks) has been added.
  • Two new methods, read_iter() and write_iter(), have been added to struct file_operations. They are intended to support the move toward using the iov_iter interface and are meant to eventually replace the aio_read() and aio_write() methods.

Now the stabilization phase for 3.16 begins. That means we are likely to see a final 3.16 release sometime in early August depending on how the cycle goes. Then it will be time to start the festivities all over again for 3.17.

Comments (none posted)

The volatile volatile ranges patch set

By Jonathan Corbet
June 18, 2014
"Volatile ranges" is a name given to regions of user-space memory that can be reclaimed by the kernel when memory is tight. The classic use case is for a web browser's image cache; the browser would like to keep that information in memory to speed future page loads, but it can do without that data should the memory used for the cache be needed elsewhere. Implementations of the volatile range concept have experienced more than the usual amount of change; that rate of change may well continue into the future — if a developer can be found to continue the work.

Early versions of the patch set were based on the posix_fadvise() system call. Some developers complained that it was more of an allocation-related concept, so the patch was reworked to use fallocate() instead. By 2013, the plan had shifted toward the addition of two new system calls named fvrange() and mvrange(). Version 11, released in March 2014, moved to a single system call named vrange(). During all of these iterations, there have also been concerns about user-space semantics (what happens when a process tries to access a page that has been purged, in particular) and the best way to implement volatile ranges internally. So nothing has ever been merged into the mainline kernel.

Version 14, posted by John Stultz on April 29, changes the user-space API yet again. Volatile ranges have now shifted to the madvise() system call. In particular, a call to:

    madvise(address, length, MADV_VOLATILE);

will mark the memory range of length bytes starting at address as being volatile. Once the memory range has been marked in this way, the kernel is free to reclaim the associated pages and discard their contents at any time. Should the application need access to the range in the future, it should mark it as nonvolatile with:

    madvise(address, length, MADV_NONVOLATILE);

The return value is zero for success (the range is now nonvolatile and the previous contents remain intact), a negative number if some sort of error occurred, or one if the operation was successful but at least one of the pages has been purged.

The use of madvise() had been considered in the past; it makes sense, given that the purpose is to advise the kernel about the importance of a particular range of memory. Previous volatile range implementations, though, had the property that marking a range nonvolatile could fail partway through. That meant that the interface had to be able to return two values: (1) how many pages had been successfully marked, and (2) whether any of them had been purged. This time around, John found a way to make the operation atomic, in that it either succeeds or fails as a whole. In the absence of a need for a second return value, the madvise() interface is adequate for the task.

What happens if user space attempts to access a volatile page that has been purged by the kernel? This implementation will deliver a SIGBUS signal in that situation. A properly-equipped application can catch the signal and respond by obtaining the needed data from some other source; applications that are not prepared will litter the disk with unsightly core dumps instead. That may seem like an unfriendly response, but one can argue that an application should not be trying to directly access memory that, according to instructions it gave to the kernel, does not actually need to be kept around.

Minchan Kim does not like this approach; he would prefer, instead, that the application simply receive a new, zero-filled page in this situation. He is, it turns out, thinking about a slightly different use case: code that reuses memory and wants to tell the kernel that the old contents need not be preserved. In this case, the reuse should be as low-overhead as possible; Minchan would prefer to have no need for either an MADV_NONVOLATILE call or a SIGBUS signal handler. John suggested that Minchan's own MADV_FREE patch was better suited to that use case, but Minchan disagreed, noting that MADV_FREE is a one-time operation, while MADV_VOLATILE can "stick" to a range of memory through several purge-and-reuse cycles. John, however, worries that silently substituting zero-filled pages could lead to data corruption or other unpleasant surprises.

Johannes Weiner, who joined the conversation in June, also prefers that purged pages be replaced by zero-filled pages on access. He asked if the patch set could be reworked on top of MADV_FREE (which, he thinks, has a better implementation internally) to provide a choice: applications could request either the new-zero-filled-page or the SIGBUS semantics. John responded that he might give it a try, someday:

I'll see if I can look into it if I get some time. However, I suspect its more likely I'll just have to admit defeat on this one and let someone else champion the effort. Interest and reviews have seemingly dropped again here and with other work ramping up, I'm not sure if I'll be able to justify further work on this.

John certainly cannot be faulted for a lack of effort; this patch set has been through fourteen revisions since 2011; it has also been the subject of sessions at the Kernel Summit and Linux Storage, Filesystem, and Memory Management Summit. It has seen extensive revisions in response to comments from several reviewers. But, somehow, this feature, which has real users waiting for it to show up in a mainline kernel, does not seem much closer to being merged than before.

At the same time, it is hard to fault the reviewers. The volatile ranges concept adds new user-visible memory-management behavior with some subtle aspects. If the implementation and interface are not right, the pain will be felt by developers in both kernel and user space for a long time. Memory-management changes are notoriously hard to get into the kernel for a good reason; user-visible changes are even worse. This patch set crosses two areas where, past history shows, we have a hard time getting things right, so some caution is warranted.

Still, one can't help but wonder if merging nothing at all yields the best kernel in the long run. Users will end up working with out-of-tree variants of this concept (Android's "ashmem" in particular) that the development community has even less control over. Unless somebody comes up with the time to continue trying to push this patch set forward, the mainline kernel may never acquire this feature, leaving users without a capability that they demonstrably have a need for.

Comments (16 posted)

Teaching the scheduler about power management

June 18, 2014

This article was contributed by Nicolas Pitre

Power-efficient CPU scheduling is increasingly important in the mobile world, but it has become just as important in large data-center settings, where the electricity bills can be painful indeed. Unfortunately, the kernel's infrastructure for CPU power management lacks integration with the scheduler itself, with the result that scheduling decisions are not as good as they should be. This article reviews the state of the CPU power-management mechanisms in the kernel and looks at what is being done to improve the situation.

A bit of history

A process scheduler is the core component of an operating system responsible for selecting which process to run next. The scheduler implementation in the Linux kernel has been through a couple of iterations — and even complete rewrites — over the years. The Completely Fair Scheduler (CFS), written by Ingo Molnar, was introduced in the 2.6.23 kernel. It replaced the O(1) scheduler which, in turn, was introduced in version 2.5.2 of the kernel, also by Ingo, replacing the scheduler implementation that existed before that. Despite all of the different algorithms, the general goal is always the same: to try to make the most of available CPU resources.

CPU resources have also evolved during this time. Initially, the scheduler's role was simply to properly manage processor time between all runnable processes. Increasing parallelism in hardware due to the emergence of SMP, SMT (or Hyper-threading), and NUMA added more twists to the problem. And, of course, the scheduler had to scale to an ever-increasing number of processes and processors in the same system without consuming too much CPU time on its own. These changes explain why multiple scheduler implementations have been developed over the years and are still being worked on today. In the process, the scheduler has grown in complexity, and only a few individuals have become experts in this area.

Initially, task scheduling was all about throughput with no regard for energy consumption; scheduler work was driven by the enterprise space, where everything was plugged into the wall. At the other end of the spectrum, we saw the emergence of battery-operated devices from the embedded and mobile space, where power management is a primary concern. Separate subsystems dealing with power management, such as cpuidle and cpufreq, were introduced and contributed to by a different set of developers with little scheduler experience. In due course, the power-management subsystems grew in complexity as well, acquiring their own experts.

This split arrangement worked out reasonably well… at least initially. The isolation between the subsystems allowed for easier development and maintenance. With mobile devices growing in capabilities, as well as ever-increasing data-center electric bills, everyone started caring about energy efficiency. This brought about core kernel changes such as deferrable timers, dyntick, and runtime power management. The rise of multi-core portable devices pushed the need for yet more aggressive power-management tricks such as the controversial use of CPU offlining.

There is a pattern that emerges from these changes: the more complex the scheduler and power management become, the more isolated they are from each other. And this turns out to be completely counterproductive, since, as we'll see later, one side can't predict what the other side might do in the near future. Because of (or in spite of) that, some chip manufacturers are increasingly implementing DVFS in hardware away from the operating system, which exacerbates the problem. Yet support for ARM's big.LITTLE and the increasing influence scheduler decisions have on power consumption in general have made it clear that merging power management with the scheduler is becoming unavoidable.

Scheduler: meet cpuidle

The cpuidle subsystem tries to minimize power consumption by selecting a low-power mode, or idle mode (often referred to as a C-state), when the CPU is idle. However, idling a CPU comes at a price: the more power savings such a mode provides, the longer it will take for the affected CPU to become fully operational again. A good balance between the power actually saved and the time "wasted" in entering and exiting a power-saving mode has to be reached. Furthermore, many modes consume some non-negligible amount of power for the CPU simply to transition in and out of them, meaning the CPU has to be idle for a sufficiently long period of time for those modes to be worth entering. Most CPUs have multiple idle modes, providing different trade-offs between achievable power savings and latency.

Therefore, the cpuidle code has to gather statistics on actual CPU usage to select the most appropriate idle mode depending on the observed idleness pattern of the CPU. And this statistics-gathering work duplicates what the scheduler already does, albeit through indirect and somewhat imprecise heuristics.

Idleness patterns are determined by wake-up events bringing the CPU out of idle. Those events can be classified into three categories:

  • Predictable events: This group comprises all timers from which we can obtain their next expiry time and deduce a minimum idle period.

  • Semi-predictable events: These are somewhat repetitive events, like I/O request completions, that often follow a known pattern.

  • Random events: Anything else, such as keystrokes, touchscreen events, network packets, etc.

By directly involving the scheduler in the idle-state selection process, we can do a much better job at considering the semi-predictable events. I/O patterns are mainly a function of those tasks generating them and the device they're directed to. The scheduler can therefore keep track of the average I/O latency on a per-task basis and, possibly, with some input from the I/O scheduler, provide an estimated delay for the next I/O completion to occur according to the list of waiting tasks on a given CPU. And if a task is migrated to a different CPU, its I/O latency statistics are migrated along. The scheduler is therefore in a better position to appreciate the actual idleness of a CPU.

It is therefore necessary for the scheduler and cpuidle to become better integrated, to let the scheduler manage the available idle modes and eventually bypass the current cpuidle governors entirely. Moving the main idle loop into the scheduler code will also allow for better accounting of CPU time spent in the servicing of interrupts and their occurrence rate while idle.

Furthermore, the scheduler should be aware of the current idle mode on each CPU to do a better job at load balancing. For instance, let's consider the function find_idlest_cpu() in kernel/sched/fair.c, which looks for the least-loaded CPU by comparing the weighted CPU load value for each CPU. If multiple CPUs are completely idle, their load would be zero, with no distinction for the idle mode they're in. In this case, it would be highly beneficial to choose the CPU whose current idle mode has the shortest exit latency. If idle exit latency is the same for all idle CPU candidates then the last to have entered idle mode is more likely to have a warmer cache (assuming the relevant idle mode preserves cache, of course). An initial patch series to that effect was posted by Daniel Lezcano.

This also highlighted the fact that some definitions for the same expression may differ depending on one's perspective. A function called find_idlest_cpu() in the scheduler context is simply the converse of find_busiest_cpu(), whereas in the cpuidle context this would mean looking for the CPU with the deepest idle state. The deeper an idle state is, the more costly it is to bring a CPU back to operational state — clearly not what we want here. A similar confusion may occur with the word "power". The traditional meaning in the scheduler code is "compute capacity" while it means "energy consumption rate" in a power management context. Patches to clarify this have recently been merged.

Scheduler: meet cpufreq

The scheduler keeps track of the average amount of work being done by scheduled tasks on each CPU in order to give each task fair access to CPU resources and to decide when to perform load balancing. The ondemand cpufreq governor does similar load tracking in order to dynamically set each CPU's clock frequency to optimize battery life. Since dynamic power consumption is proportional to the square of the supply voltage, it is desirable to run at the lowest clock frequency that permits a voltage reduction while still being fast enough to perform all of the scheduled work during a given period of time.

As with cpuidle, the cpufreq subsystem was developed in isolation from the CPU scheduler. Many problems result from the split between those subsystems:

  • The cpufreq code goes to great lengths trying to evaluate the actual CPU load through indirect means, including heuristics to avoid misprediction, while, once again, the scheduler has all this information available already.

  • The scheduler can determine the load contribution of individual tasks, whereas the cpufreq code has no such ability. When a task migrates or wakes up, the scheduler can determine in advance what the load on the target CPU is likely to become; the cpufreq code can only notice the resulting average load increase and react after the fact.

  • The scheduler records the execution time for each task in order to ensure fairness between all tasks. However, since the scheduler has no awareness of CPU frequency changes, tasks executing on a CPU whose clock has been slowed down will be unfairly charged more execution time than similar tasks running on another CPU with a faster clock for the same amount of work. Fairness is thus compromised.

  • As the CPU clock frequency is reduced, the resulting apparent increase in task load may trigger load balancing toward a less-loaded CPU in order to spread the load, even though that increase in apparent load was precisely what cpufreq intended in the first place.

  • And if that load balancing happens while the target CPU's clock frequency is reduced, then that CPU could end up being oversubscribed. Because there is no coordination between the scheduler and cpufreq, either (or both) of them may react by, respectively, migrating a task back or raising the CPU clock frequency. The CPU may suddenly be underutilized, and the cycle could repeat again.

To fix this, the current plan is to integrate cpufreq more tightly with the scheduler. The various platform-specific, low-level cpufreq drivers will remain unchanged and still register with the cpufreq core as usual; however, the governors — the part that decides what clock frequency to ask for and when — could be bypassed entirely. In fact, the scheduler could simply register itself as a new governor with the cpufreq core.

The advantage of a tighter integration of cpufreq with the scheduler is the ability to be proactive with clock frequency changes rather than reactive, and also to coordinate better with scheduler activities like load balancing. A CPU clock frequency change could be requested in anticipation of a load change; this could happen in response to a call to fork(), exit(), or when a task sleeps or wakes up. The frequency policy could be different depending on the particular scheduler event, task historical behavior patterns, etc.

However, to be able to perform well in the presence of varying CPU clock frequencies, the notion of scale-invariant task load tracking must be added to the scheduler. This is a correction factor to normalize load measurements from CPUs executing code at different speeds so the load contribution of a task can be predicted when the task is moved. The relative computation capacity of each CPU as seen by the scheduler also has to be adjusted according to its effective clock frequency in order to do proper load balancing. It is still unclear how accurate this correction factor can be, considering that tasks making lots of memory accesses are less affected by the CPU clock speed than tasks performing pure computation. Still, any correction is going to be better than none at all, which is what we have today.

Incidentally, the scale-invariant load tracking does apply to big.LITTLE scheduling as well. Leaving cpufreq aside for a moment, a "little" CPU is permanently slowed down, which translates into a reduced compute capacity to the scheduler, and conversely a "big" CPU has more capacity. With distinct correction factors permanently applied to "big" and "little" CPUs, the scheduler is likely to just work optimally in terms of task throughput, with no further changes to the scheduler. The cpufreq correction factor simply has to be combined with the "big" and "little" factors afterward.

Several developers, including Mike Turquette and Tuukka Tikkanen, are working on the cpufreq integration and initial patches should be posted for public review soon.

Scheduler: may the power be with you

Okay… So we might get to the point where cpuidle and cpufreq are tightly integrated with the scheduler. Are we done? Unlikely. In fact we now have more difficult decisions to make than before and they all relate to the new mechanisms at the scheduler's disposal to perform load balancing. For example:

  • When the system load goes up, should a new CPU be brought out of idle, or should the clock frequency on an already running CPU be increased instead? Or both?

  • Conversely, when the system load goes down, is it best to keep more CPUs alive with a reduced clock frequency or pack more tasks on fewer CPUs in order to send the other CPUs to sleep?

  • Is it best to consolidate loads onto fewer CPUs or to spread the load over more CPUs?

  • When is it time to perform active task packing to let a whole CPU cluster (or package) get into low-power mode?

The latest power-aware scheduler work from Morten Rasmussen provides a framework to evaluate the power cost of the available scheduling scenarios. This, in combination with Vincent Guittot's sched_domain topology and CPU capacity tracking consolidation work, should provide answers to the above questions.

What else?

We desperately need measurement tools to validate proposed solutions. Linaro is working on a tool called idlestat to validate idle-state usage and its effectiveness. Traditional benchmark tools such as sysbench may be combined with energy usage monitoring to provide a way to perform power characterization of a system. Extensions to cyclictest to create various synthetic workloads are being explored as well. This is still unwieldy, though, and more integration and automation are required.

This article hasn't covered thermal management. The Linux kernel implements a thermal-management interface that allows a user-space daemon to control thermal constraints. However, as we've seen, power-related issues are intertwined, and a thermal-control solution that lives separately from the scheduler is likely to be suboptimal. If the scheduler controls power states, it will also have to deal with platform temperature someday, providing thermal "provisioning" or the like. But we can save this for another day.

Thanks

Thanks to Amit Kucheria, Daniel Lezcano, Kevin Hilman, Mike Turquette, and Vincent Guittot for their help in reviewing this article.

Comments (9 posted)

Patches and updates

Kernel trees

Greg KH Linux 3.15.1
Greg KH Linux 3.14.8
Kamal Mostafa Linux 3.13.11.3
Greg KH Linux 3.10.44
Greg KH Linux 3.4.94

Architecture-specific

Core kernel code

Development tools

Stanislav Fomichev perf trace pagefaults

Device drivers

Documentation

Michael Kerrisk (man-pages) man-pages-3.69 is released

Filesystems and block I/O

Memory management

Virtualization and containers

Daniel Kiper xen: Add EFI support

Miscellaneous

Douglas Gilbert sg3_utils-1.39 available
Lucas De Marchi kmod 18

Page editor: Jonathan Corbet

Distributions

Android without the mothership

By Jonathan Corbet
June 18, 2014
The success of Android has brought Linux to many millions of new users and that, in turn, has increased the development community for Linux itself. But those who value free software and privacy can be forgiven for seeing Android as a step backward in some ways; Android systems include significant amounts of proprietary software, and they report vast amounts of information back to the Google mothership. But Android is, at its heart, an open-source system, meaning that it should be possible to cast it into a more freedom- and privacy-respecting form. Your editor has spent some time working on that goal; the good news is that it is indeed possible to create a (mostly) free system on the Android platform.

One might well wonder why this goal is important to some. After all, Google's services define the Android experience; disconnecting from them can only leave an Android device with fewer features and capabilities. There are a few reasons, starting with the fact that some users simply do not trust Google at all and do not wish to share with it the details of their social connections, email interactions, physical movements, and more. Others may trust today's Google while fearing what the company could become after a management change. Regardless of the level of trust, the concentration of personal information found at companies like Google cannot help but attract the attention of governments and criminal organizations. One need not reach the tin-foil-hat level of paranoia to want to opt out of that arrangement.

There is also the simple fact that Google has repeatedly shown a willingness to shut down its (free of charge) services. As this history of Android recently posted by Ars Technica shows, Google-attached Android devices have a finite lifespan; after a while, the remote support they need to function will simply not be available. Depending on free services from others carries certain risks; some of those risks can be avoided by taking a more active role in the selection of the services one depends on.

That said, there is, beyond doubt, a great deal of functionality and convenience built into an Android device. Those of us who remember losing our contacts along with a phone do not wish to go back to those days. Disconnecting from Google can only mean losing some of those features. The important questions are: what's left, and what can be replaced?

Building a Google-free device

For those who want a 100% free Android experience, the Replicant project offers images for a handful of devices. But this project does not appear to have a vast number of developers; releases tend to be slow (the last was 4.2 in January) and a fair amount of functionality is missing. For most users, Replicant is probably not the way to go at this time.

Instead, for most, the starting point for an alternative Android device will be a CyanogenMod release. CyanogenMod is not 100% free; in particular, it contains whatever non-free drivers and firmware are needed to make a specific device work. But, above that level, CyanogenMod is free; it is mostly a build of the Android Open Source Project (AOSP) release with a bunch of added goodies. Google's proprietary user-space apps and utilities are available, but they must be downloaded and installed separately. It seems certain that the vast majority of CyanogenMod users immediately do that installation, but it is not necessary to do so. Your editor started this experiment with a fresh CyanogenMod 11.0 M7 release without installing the Google apps.

Leaving out the Google apps will deprive an Android device of much of its functionality. In many cases, such as with Gmail, Google+, GDrive, etc., these apps will be something that one has already decided to live without as part of a plan to disconnect from Google. In other cases, the loss of functionality hurts more. That said, a bare CyanogenMod installation provides a highly functional device. Most of the basic functionality that one might need is there; all that is left is the task of filling in the rest.

Perhaps the first missing feature that will be noticed by many is the app for the Google Play Store — the app used to download and install everything that is not on a device by default. This is a proprietary app that will refuse to function unless the user is logged into a Google account; most of what it installs is also proprietary, of course, and, increasingly, Play Store apps depend on the proprietary Play Services layer. So the Play Store is not really an option for this use case. (Interestingly, the CyanogenMod 11.0 M7 installation included the Play Store app; from your editor's understanding, Google's licensing says that it should not be there at all, so its presence was a bit surprising.)

So one of the first post-installation steps is to get set up with another app repository; that generally means installing the F-Droid app. The F-Droid repository is limited to free software, though some of the apps may rely on non-free dependencies or services. At a little over 1,000 apps, it is rather smaller than the commercial app stores out there, but it has many of the essentials needed to bring a Google-free device up to full functionality.

Replacing Google apps and services

While it is certainly possible to keep contact and calendar information locally on an Android device, chances are that most users will want to have that information backed up to a server somewhere and synchronized across all their devices. Naturally, if Google is cut out of the picture, Google will not be providing those synchronization services. Happily, the contact-management and calendar code built into AOSP handle these tasks nicely, with no need to install any additional software — if one has a server on the net somewhere to synchronize with. There are a number of alternatives for people wanting to set up their own servers, including ownCloud and Kolab. Commercial services exist for those who do not want the trouble of maintaining a server on the net; your editor set up an account at MyKolab.com for this purpose.

The instructions provided by MyKolab for setting up synchronization were straightforward enough. For users with information in Google currently, it is easy enough to extract that information and upload it into MyKolab. The end result is contact and calendar synchronization that Just Works and which is outside of the Google sphere. This sort of arrangement might be a good option even for people who do not want to cut the connection with the mothership entirely, but who want to back up their contact and calendar information in a more private place.

The Chrome web browser is not available in CyanogenMod, but CyanogenMod does have the classic "Browser" app that was the standard Android browser not that long ago. This browser is entirely capable; for those wanting more, the Firefox browser is available through F-Droid. For email, the standard Android email client comes with CyanogenMod, but most users will likely want to install the K-9 Mail client instead. K-9 is not an ideal mail client, but it does slowly get better with time. Of course, one needs mail hosting somewhere else on the net to be able to use clients like K-9; such services are available from a wide range of providers for those looking to get away from Gmail without having to set up their own mail server.

Simple file synchronization, like that provided by GDrive or Dropbox, is an important feature for some users. It can be useful to synchronize documents across devices, say, or to automatically upload photographs to a server. The best option would appear to be the ownCloud client, though that, naturally, requires the availability of an ownCloud server on the net. There is also an app for the (discontinued) Ubuntu One service, but that is not likely to please large numbers of people.

One of the hardest applications to replace might be Maps. There are a couple of mapping and navigation tools available in F-Droid; of the two, OsmAnd appears to be the better bet. OsmAnd offers high-quality, audio turn-by-turn navigation; the route planning is fast and it reroutes quickly when the need arises. The ability to download maps and operate offline can be helpful in places where coverage is spotty or expensive. The map data itself comes from OpenStreetMap, so the quality can be variable but is, as a rule, quite high.

On the other hand, the user interface to OsmAnd is confusing and difficult to use. Finding destinations by address is a hit-or-miss affair, and the extensive search capabilities found in Google Maps are absent. The audio instructions are often strange; a sharp left turn was accompanied by an instruction to "make a U-turn" followed by a right turn. The maps themselves can be cluttered to the point of being unreadable. The core application is open source, but some features are reserved for a proprietary premium edition. Satellite imagery is not available. And so on.

For those who like the configurability and the use of OpenStreetMap data and who are willing to deal with occasional quirks, OsmAnd could well be preferable to the Google Maps app. For the (presumably larger) crowd that just wants to find a nearby sushi bar and be told how to get there, the lack of a full replacement for Maps could be the one factor that keeps them in Google's embrace.

For those who like the video-calling features found in apps like Hangouts or Skype, there are not a lot of alternatives in the Google-free world. A couple of SIP phone clients claim to be able to do video calling, but, it goes without saying, the number of people who can be called with such a client is relatively small.

Beyond that, though, almost any user will miss at least one app that would otherwise be available via the Google Play Store. Just like putting Linux onto a desktop system means giving up the world of proprietary Windows software, setting up a detached Android device means doing without the wide range of apps out there. Even apps that might run happily on a system without proprietary Google software may simply be unavailable; since most devices only allow installation from the Play Store, most app developers do not make their work available anywhere else.

Given that the number of users who go out of their way to install restricted versions of Android must be quite small, it is perhaps surprising that the Android free software community is as successful as it is. Getting the base system from AOSP is clearly a nice start, but there is a wide variety of free add-on software available; much of it is quite capable. A fully free system that is not attached to any company's data centers is attainable now. One can only imagine what might be possible with (1) a bit more attention toward making such systems easy to install and (2) more awareness of the value of such devices. There is a lot to be said about the virtues of a Google-attached mobile device, but it would be good to have well-established alternatives.

(The Free Your Android site is a useful resource for those wanting to pursue this idea further.)

Comments (104 posted)

Brief items

CentOS 7 Public QA Release

While stressing that it is a pre-release for testing (i.e. quality assurance or QA) purposes, the CentOS team has announced the availability of the CentOS 7 QA release. It can be downloaded from here. Packages are not GPG signed, are likely to be replaced "in place" as bugs are fixed, and upgrading from the QA release to the final release may not be possible (and will not be supported). But, unlike previous CentOS releases, it has been opened up to the community before the final release. "We appreciate any and all bug reports at http://bugs.centos.org (please also check upstream bugzilla.redhat.com and link to those bugs when filing a new CentOS issue), and assistance with the “Branding Hunt” (see http://lists.centos.org/pipermail/centos-devel/2014-June/010411.html)."

Comments (11 posted)

Debian 6 debuts its long term support period

The Debian project has announced that the "Long Term Support (LTS)" infrastructure to provide security updates for Debian GNU/Linux 6.0 "squeeze" is now in place. "Users of this version should follow the instructions from the LTS wiki page to ensure that they get the LTS security updates." Support will be provided until February 2016.

Full Story (comments: 2)

Newsletters and articles of interest

Distribution newsletters

Comments (none posted)

The history of Android (Ars Technica)

Ars Technica has put together a detailed history of Android so far. "Thanks to this 'cloud rot,' an Android retrospective won’t be possible in a few years. Early versions of Android will be empty, broken husks that won't function without cloud support. While it’s easy to think of this as a ways off, it's happening right now. While writing this piece, we ran into tons of apps that no longer function because the server support has been turned off. Early clients for Google Maps and the Android Market, for instance, are no longer able to communicate with Google."

Comments (6 posted)

Page editor: Rebecca Sobol

Development

Acilos, the private social networking "valet"

By Nathan Willis
June 18, 2014

TXLF 2014

There are many competing philosophies in the FOSS community when it comes to the question of how best to cope with the proliferation of proprietary social media services that vie for users' attention. On one side might be the freedom-and-privacy absolutist approach that drives developers to work on projects like FreedomBox and to steer clear of services run by entities who might not be trusted to preserve their users' rights ahead of other concerns. On the other end of the spectrum are those who happily take advantage of services offered by Google, Twitter, LinkedIn, and others, arguing that it is their choice to participate. Somewhere in between are the people who want to find a way to use the services popular with everyone else while somehow wresting more control away from the service provider and into their own hands.

Boyd Wilson of Omnibond is, evidently, in this middle category, and at Texas Linux Fest 2014 in Austin, he demonstrated his work on Acilos, an LGPL-licensed tool for manipulating social media accounts. Termed a "Private Social Valet," the tool can be connected to accounts on a variety of social networks, and pull in all of the content those accounts receive to the user's local machine. More importantly, rather than simply aggregating the feeds, it grants the user more control than those social media services generally allow. Acilos is not fundamentally linked to closed, third-party services—it can ingest and manipulate data from free software services as well (including anything that produces RSS)—but it may be most striking for its ability to connect, search, and analyze the data from proprietary social networks that tend to summarily restrict users' access.

Acilos is a web application, with a user interface written in the Dojo and d3 libraries, backed by the ElasticSearch search framework. The code is available on GitHub, and there are Amazon EC2 images created with each release. The most recent release is Beta 8, from June 4.

[Acilos main feed]

The name, Wilson explained, is an anagram of "social," sorted into alphabetical order. The credit for it belongs to Wilson's wife, who evidently possesses an uncanny knack for performing such sorts rapidly in her head. But this origin also provides a clue as to what the application does: users link their instance of it to any of a variety of social network accounts, after which they can combine, search, sort, analyze, and post to all of the supported services.

Some of those features may sound mundane to someone who already has Google Plus, Twitter, and Facebook accounts, but Wilson pointed out that these services are increasingly shutting off features and making life more difficult. "Have you ever tried to find that interesting post you saw on Facebook a few weeks ago?" he asked the audience. "You can't, not anymore." Similarly, public APIs for popular services are on the decline as providers push users toward their official apps, and it often requires jumping hurdles to connect two or more accounts on the same service—even though it is common for people to have a personal Twitter account and also be responsible for managing a group or company account.

There are, of course, other open-source applications that aggregate feeds from social networks, Wilson said, but Acilos differs in some key respects. First, many open-source tools register a single API key with each supported service for the application itself. That is convenient for users, but if the service provider revokes that key, every user of the application is cut off at once; users also often worry about the privacy implications of shared API credentials. Acilos instead requires each user to set up an individual API key for each service they use, providing better isolation and privacy. The trick has been to make getting and installing the necessary API keys as painless as possible.

The second difference between Acilos and other social-media aggregators is that it has been designed from the beginning to function as well in a desktop browser as it does as a mobile client. A lot of effort, he said, has gone into making a responsive UI that degrades smoothly to smaller screen sizes and that works as well with mouse input as with touch events.

[Acilos word cloud]

Wilson demonstrated some of what Acilos is capable of today. With a set of social media accounts configured, Acilos shows a "main" feed that includes all incoming message content, as well as a simple interface for creating custom feeds by selecting individual users and search terms. The custom feeds tool can also be used to add public RSS subscriptions (which, obviously, do not require API keys or similar credentials) that can be treated just like the incoming message feeds from social network accounts. Acilos also allows users to post a single message that is dispatched to every connected service or any user-selectable subset, and messages can be sent immediately or queued for delivery at a later time.
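The "post once, dispatch to all or a subset" behavior described above can be pictured with a short sketch. To be clear, this is an illustration, not Acilos's actual code: the dispatch() helper and the per-service poster callables are invented names for this example.

```python
# Hypothetical sketch of cross-posting fan-out: one message goes to
# every connected service, or to a user-selected subset. The posters
# here are stand-ins for real per-service API clients.
from typing import Callable, Dict, List, Optional

def dispatch(message: str,
             posters: Dict[str, Callable[[str], None]],
             subset: Optional[List[str]] = None) -> List[str]:
    """Send a message to every connected service, or a chosen subset.

    Returns the list of services the message was posted to.
    """
    targets = subset if subset is not None else list(posters)
    for name in targets:
        posters[name](message)  # each poster wraps one service's API
    return targets

# Stub posters that merely record what they were given:
log: List[str] = []
posters = {
    "twitter": lambda m: log.append("twitter:" + m),
    "facebook": lambda m: log.append("facebook:" + m),
}
sent = dispatch("hello", posters, subset=["twitter"])
```

Queued delivery, as demonstrated in the talk, would amount to deferring the dispatch() call rather than changing its shape.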

There are a variety of analytics available, such as generating charts of post frequency (by user or by service) in several formats, as well as some more artistic output options like creating "word clouds." As of right now, the search interface is effectively split into two separate parts; the "query" feature searches the public feeds provided by the various services, while the "create a local feed" feature is used to execute a search on the incoming messages from the user's subscriptions. Wilson admitted that some of this work needs further refinement, particularly in clarifying the various search options.
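Since Acilos stores incoming messages in ElasticSearch, a "create a local feed" search presumably boils down to a query against that index. The following sketch shows what such a query body might look like; the index layout and the field names ("text", "service", "posted_at") are assumptions for illustration, not taken from the Acilos code.

```python
# A minimal sketch, under assumed field names, of building an
# Elasticsearch query body for searching locally stored messages.
def local_feed_query(term, services=None):
    """Build a query body matching stored messages, newest first."""
    must = [{"match": {"text": term}}]
    if services:
        # Restrict the search to messages from selected services.
        must.append({"terms": {"service": services}})
    return {
        "query": {"bool": {"must": must}},
        "sort": [{"posted_at": {"order": "desc"}}],
    }

body = local_feed_query("sushi", services=["twitter", "facebook"])
```

The "query" feature, by contrast, would have to go back out to each service's public search API, which is why the two halves behave so differently today.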

After Texas Linux Fest was over, I tested Acilos on my own, and was generally pleased with the results, although there were some rough edges. It was particularly nice to be able to add multiple accounts for services like Twitter; this is a feature that quite a few proprietary applications support, but that has never been well addressed by free software. Similarly, the ability to cross-post between different services is a recurring need, and one that free software has been slow to support. Anyone who is responsible for a company or project account in addition to their personal identity will find a lot to like about Acilos in this respect.

The effectiveness of the search interface is less clear at the moment, partly due to the unfinished state of the code itself, but also because Acilos needs to collect messages from the connected accounts before the search feature has much to work with. That limitation is not Acilos's fault, of course—the services involved are the ones not interested in making searching easy. Last but not least, only the Twitter, LinkedIn, Facebook, Instagram, and Google+ services are supported at the moment. While this is a start, and one that encompasses many people's needs entirely, there are more services that could be added—particularly free and open-source services. The developer documentation is also sparse at the moment, which is hopefully something that will be addressed before a stable non-beta release.

Wilson closed out his overview of Acilos by asking anyone interested in getting involved to grab the code from GitHub and get in touch. It will be interesting to watch where the project goes from here; it may have started out as a personal quest on Wilson's part, but for many FOSS advocates and developers, a tool that makes up for the shortcomings in the feature sets of commercial social networking services is a sorely needed addition.

Comments (1 posted)

Brief items

Quotes of the week

Made a chatbot that passes the Turing Test. It doesn't answer you until right at the end, then says

> Sorry was afk lol

> gtg bye

Peter Silk

Usually people throw out their old IT manuals and those end up being the only sources of documentation about old file formats. Unfortunately, if you keep your old IT books instead, society tends to view you as pathological.
Wilhelmina Randtke, at Texas Linux Fest 2014.

Comments (none posted)

TeX Live 2014 available

The 2014 edition of the TeX Live software collection has been released in downloadable form. DVDs are in production and will be mailed out to TeX user groups shortly. TeX Live is designed to be a "get up and running" distribution of the TeX typesetting system; the entire TeX collection is also available.

Comments (none posted)

openHAB 1.5 available

Version 1.5 of the openHAB open-source home automation platform has been released. New features include bindings for a wide array of new hardware types and protocols, including GPIO devices, eKey fingerprint authentication hardware, IRTrans infrared devices, the xPL and HAI Omni-Link protocols, and many more. There is also a new front-end for iOS devices and full integration with XBMC.

Comments (none posted)

Newsletters and articles

Development newsletters from the past week

Comments (none posted)

Poettering: Factory Reset, Stateless Systems, Reproducible Systems & Verifiable Systems

On his blog, Lennart Poettering writes about new systemd features that will make it easier to "factory reset" systems back to their initial configuration. By handling /etc and /var differently, it will also support other use cases, such as "stateless" systems that store no persistent configuration, as well as "reproducible" and "verifiable" systems. "Booting up a system without a populated /var is relatively straight-forward. With a few lines of tmpfiles configuration it is possible to populate /var with its basic structure in a way that is sufficient to make a system boot cleanly. systemd version 214 and newer ship with support for this. Of course, support for this scheme in systemd is only a small part of the solution. While a lot of software reconstructs the directory hierarchy it needs in /var automatically, many software does not. In case like this it is necessary to ship a couple of additional tmpfiles lines that setup up at boot-time the necessary files or directories in /var to make the software operate, similar to what RPM or DEB packages would set up at installation time. Booting up a system without a populated /etc is a more difficult task. In /etc we have a lot of configuration bits that are essential for the system to operate, for example and most importantly system user and group information in /etc/passwd and /etc/group. If the system boots up without /etc there must be a way to replicate the minimal information necessary in it, so that the system manages to boot up fully."
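For readers unfamiliar with the mechanism, the "few lines of tmpfiles configuration" Poettering mentions look roughly like the following. The package name and paths are hypothetical; the column layout is the standard tmpfiles.d format (type, path, mode, user, group, age, argument), with "d" entries creating directories at boot if they are missing.

```
# /usr/lib/tmpfiles.d/myapp.conf (hypothetical example)
# Type  Path              Mode  User   Group  Age  Argument
d       /var/cache/myapp  0755  myapp  myapp  -    -
d       /var/log/myapp    0750  myapp  myapp  -    -
d       /var/lib/myapp    0755  myapp  myapp  -    -
```

Shipping such a fragment lets a package recreate its /var state on a freshly reset system, much as its RPM or DEB scripts would have done at installation time.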

Comments (43 posted)

Page editor: Nathan Willis

Announcements

Brief items

GCC wins ACM SIGPLAN Programming Languages Software Award

The GNU Compiler Collection (GCC) has received the ACM SIGPLAN Programming Languages Software Award. "GCC is the product of hundreds of person-years of work over its 27 years of existence. This award recognizes the GCC developer community for the substantial impact it has had on the programming language community and the larger software industry." (Thanks to David Edelsohn)

Comments (73 posted)

LAC'14 video archive

Videos from the Linux Audio Conference sessions are available.

Full Story (comments: none)

Articles of interest

Reset the Net and beyond

Zak Rogoff, Campaigns Manager for the Free Software Foundation, wraps up a successful Reset the Net day. "Last Thursday was a big day for defending our freedom and privacy on the Internet. The FSF and its supporters joined the ranks of thousands for Reset the Net, the biggest-ever day of action against bulk surveillance. All in all, it was a whopping success, with major Web sites commiting to improve their security and more than thirty thousand people visiting the FSF's brand-new Email Self-Defense guide."

Full Story (comments: none)

Calls for Presentations

KVM Forum 2014 Call for Participation

KVM Forum 2014 will take place October 14-16 in Düsseldorf, Germany. The call for participation closes July 27.

Full Story (comments: none)

PyCon ZA 2014 - Call for Speakers

PyCon ZA will take place October 2-3 in Johannesburg, South Africa. The call for proposals closes September 1.

Full Story (comments: none)

CFP Deadlines: June 19, 2014 to August 18, 2014

The following listing of CFP deadlines is taken from the LWN.net CFP Calendar.

Deadline    Event Dates                 Event                                            Location
June 20     August 18-August 19         Linux Security Summit 2014                       Chicago, IL, USA
June 30     November 18-November 20     Open Source Monitoring Conference                Nuremberg, Germany
July 1      September 5-September 7     BalCCon 2k14                                     Novi Sad, Serbia
July 4      October 31-November 2       Free Society Conference and Nordic Summit        Gothenburg, Sweden
July 5      November 7-November 9       Jesień Linuksowa                                 Szczyrk, Poland
July 7      August 23-August 31         Debian Conference 2014                           Portland, OR, USA
July 11     October 13-October 15       CloudOpen Europe                                 Düsseldorf, Germany
July 11     October 13-October 15       Embedded Linux Conference Europe                 Düsseldorf, Germany
July 11     October 13-October 15       LinuxCon Europe                                  Düsseldorf, Germany
July 11     October 15-October 17       Linux Plumbers Conference                        Düsseldorf, Germany
July 14     August 15-August 17         GNU Hackers' Meeting 2014                        Munich, Germany
July 15     October 24-October 25       Firebird Conference 2014                         Prague, Czech Republic
July 20     January 12-January 16       linux.conf.au 2015                               Auckland, New Zealand
July 21     October 21-October 24       PostgreSQL Conference Europe 2014                Madrid, Spain
July 24     October 6-October 8         Qt Developer Days 2014 Europe                    Berlin, Germany
July 24     October 24-October 26       Ohio LinuxFest 2014                              Columbus, Ohio, USA
July 25     September 22-September 23   Lustre Administrators and Developers workshop    Reims, France
July 27     October 14-October 16       KVM Forum 2014                                   Düsseldorf, Germany
July 27     October 24-October 25       Seattle GNU/Linux Conference                     Seattle, WA, USA
July 30     October 16-October 17       GStreamer Conference                             Düsseldorf, Germany
July 31     October 23-October 24       Free Software and Open Source Symposium          Toronto, Canada
August 1    August 4                    CentOS Dojo Cologne, Germany                     Cologne, Germany
August 15   September 25-September 26   Kernel Recipes                                   Paris, France
August 15   August 25                   CentOS Dojo Paris, France                        Paris, France
August 15   November 3-November 5       Qt Developer Days 2014 NA                        San Francisco, CA, USA
August 15   October 20-October 21       Tizen Developer Summit Shanghai                  Shanghai, China

If the CFP deadline for your event does not appear here, please tell us about it.

Upcoming Events

LibreOffice bug hunting event

The Document Foundation (TDF) has announced a LibreOffice 4.3 bug hunting session on June 20-22. "The community has already made a large collective effort to make LibreOffice 4.3 the best ever, based on automated stress tests and structured tests by Quality Assurance volunteers. Enterprise and individual LibreOffice users can now contribute to the quality of the best free office suite ever by testing the release candidate to identify issues in their preferred user scenario." See the wiki page for more information about the hunt.

Full Story (comments: none)

Events: June 19, 2014 to August 18, 2014

The following event listing is taken from the LWN.net Calendar.

Date(s)                Event                                                    Location
June 17-June 20        2014 USENIX Federated Conferences Week                   Philadelphia, PA, USA
June 19-June 20        USENIX Annual Technical Conference                       Philadelphia, PA, USA
June 20-June 22        SouthEast LinuxFest                                      Charlotte, NC, USA
June 21-June 28        YAPC North America                                       Orlando, FL, USA
June 21-June 22        AdaCamp Portland                                         Portland, OR, USA
June 23-June 24        LF Enterprise End User Summit                            New York, NY, USA
June 24-June 27        Open Source Bridge                                       Portland, OR, USA
July 1-July 2          Automotive Linux Summit                                  Tokyo, Japan
July 5-July 11         Libre Software Meeting                                   Montpellier, France
July 5-July 6          Tails HackFest 2014                                      Paris, France
July 6-July 12         SciPy 2014                                               Austin, Texas, USA
July 8                 CHAR(14)                                                 near Milton Keynes, UK
July 9                 PGDay UK                                                 near Milton Keynes, UK
July 14-July 16        2014 Ottawa Linux Symposium                              Ottawa, Canada
July 18-July 20        GNU Tools Cauldron 2014                                  Cambridge, England, UK
July 19-July 20        Conference for Open Source Coders, Users and Promoters   Taipei, Taiwan
July 20-July 24        OSCON 2014                                               Portland, OR, USA
July 21-July 27        EuroPython 2014                                          Berlin, Germany
July 26-August 1       Gnome Users and Developers Annual Conference             Strasbourg, France
August 1-August 3      PyCon Australia                                          Brisbane, Australia
August 4               CentOS Dojo Cologne, Germany                             Cologne, Germany
August 6-August 9      Flock                                                    Prague, Czech Republic
August 9               Fosscon 2014                                             Philadelphia, PA, USA
August 15-August 17    GNU Hackers' Meeting 2014                                Munich, Germany

If your event does not appear here, please tell us about it.

Page editor: Rebecca Sobol


Copyright © 2014, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds