
LWN.net Weekly Edition for May 14, 2015

The state of color

By Nathan Willis
May 13, 2015

LGM 2015

Libre Graphics Meeting (LGM) always features talks that provide status updates from application projects, as well as presentations from artists and users about their own graphics work. But the event is also a rare opportunity to hear about the state of the art in various technology areas that buttress application code itself. At LGM 2015 in Toronto, one such talk was color-management consultant Chris Murphy's status report on the state of color management in free software. Although users can already count on Krita, GIMP, Scribus, and other applications to handle the necessary color transformations, color management still presents developers with new challenges.

Color management is, historically, one of LGM's biggest success stories. The various applications that make up the core of the free-software creative toolkit are all color-managed now—users who care to configure their hardware and software can expect an image to look correct on all of their displays, regardless of which applications are used to edit it (and in which order), and to be free of surprises when printed (whether professionally or at home). That accomplishment is thanks, in large part, to collaborations that took place in and around previous LGMs.

So when Murphy stood up to begin the session, he started with a joke. "Everything works great. Next talk!"

[Chris Murphy at LGM 2015]

Though he was kidding, Murphy continued, in a sense everything is great in free software—especially where Linux is concerned. Most applications use the LittleCMS library to transform color pixels from one space to another. The ArgyllCMS project provides good tools for creating accurate color profiles. There are two actively maintained systems for managing color profiles in a desktop environment: colord and Oyranos. And there is even open hardware available for profiling displays: Richard Hughes's ColorHug (which we looked at in 2012).
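For application code, the LittleCMS workflow is typically just a handful of calls. Below is a minimal sketch in Python using Pillow's ImageCms module, which wraps LittleCMS; the display-profile path is hypothetical and would normally be supplied by colord or Oyranos.

    # Convert an image from sRGB to a display profile via LittleCMS
    # (through Pillow's ImageCms wrapper). The .icc path is invented.
    from PIL import Image, ImageCms

    im = Image.open("photo.jpg").convert("RGB")

    srgb = ImageCms.createProfile("sRGB")   # built-in source profile
    display = ImageCms.getOpenProfile("/usr/share/color/icc/my-display.icc")

    # Build an sRGB-to-display transform with the perceptual rendering
    # intent, then apply it to every pixel of the image.
    transform = ImageCms.buildTransform(
        srgb, display, "RGB", "RGB",
        renderingIntent=ImageCms.INTENT_PERCEPTUAL)
    converted = ImageCms.applyTransform(im, transform)
    converted.save("photo-display.jpg")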

This situation on Linux is in contrast to the state of affairs on Windows and on Mac OS X. Windows's color-management library is so buggy that it is disabled by default. Turning it on for professional work, he said, requires "a dance with a dog, a pig, and a pony under a full moon." But the good news, he added, is that no one ever reports bugs about it if they can't use it. On Macs, the situation is reversed: the pro-level color-management features cannot be disabled, so they generate a constant stream of bug reports.

In a lightning talk later in the day, Murphy added a few words about iOS and Android, which he said had simply slipped his mind during the main talk. iOS, he said, has a color-management API, "but I don't think it works. No one uses it." As far as he is aware, there is a single app that leverages it: a proprietary tool from X-Rite; even then, the app is largely inconsequential since it does not make any of its features accessible to other apps. Android is much better; device displays can be profiled and tested. He recommended users start with the color profile testing tools at the color.org web site.

The basic underpinnings of color management in Linux and free software are good, he said; the shortcomings at present are found primarily in the user interfaces. The interfaces for activating and tweaking color-management settings vary from application to application and, perhaps more importantly, so do the applications' default settings. Specifically, he highlighted that some parts of the color-management pipeline might be turned on for printing with Ghostscript but turned off for on-screen viewing—which can lead to differences between print output and the screen.

Whichever software a user encounters trouble with, however, Murphy urged them to report bugs. "If you experience strange problems and can't figure out what's going on, write to the OpenICC list. CC me, and I'll try to reproduce it." Many users encounter color bugs, he said, but rarely report them. "I recommend being prolific with your complaints. That's how things get fixed."

Although the overall picture is in good shape for free-software users, Murphy did point out several places where there is new work to watch, and a few areas of concern. One thing the color-science community is still working on, he said, is standardizing black-point compensation: the process of properly accounting for the difference between the darkest blacks that two different devices can produce. The darkest level producible by a digital projector in a well-lit room, for example, is still quite bright in the absolute sense.
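The usual approach is a linear scaling of XYZ values that leaves the white point fixed while mapping the source black point onto the destination black point. The Python sketch below is a simplified per-channel version of that idea, with invented numbers; real implementations work in the profile connection space and differ in detail.

    def bpc(xyz, src_black, dst_black, white):
        """Map one XYZ triple; all arguments are (X, Y, Z) tuples."""
        out = []
        for v, sb, db, w in zip(xyz, src_black, dst_black, white):
            scale = (w - db) / (w - sb)        # white maps to white...
            offset = w * (db - sb) / (w - sb)  # ...black to the new black
            out.append(scale * v + offset)
        return tuple(out)

    # A projector in a lit room cannot produce a true black, so deep
    # shadows must be lifted toward its brighter black point:
    d50_white = (0.9642, 1.0, 0.8249)
    print(bpc((0.01, 0.01, 0.01), src_black=(0.0, 0.0, 0.0),
              dst_black=(0.02, 0.02, 0.02), white=d50_white))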

There is a draft ISO standard [PDF] addressing how to compensate for black-point differences; developers will want to watch its progress. There are still open questions, such as how the ISO black-point compensation specification should be used in combination with standards from other organizations.

Another new development is the recent effort by one of those other organizations—the International Color Consortium (ICC)—to work more openly with the technology community at large. In the early 20th century, the scientists who did pioneering color work published everything in the open, Murphy said. In more recent years, though, international standards bodies and technology companies (such as those that make up the ICC) have done most of the new science and specification writing. Too many scientists get hired off to work on proprietary applications, he said, rather than creating open standards or open-source software.

But the ICC is trying to engage more with the public; its ICC Labs project has published a new (version 4) sRGB profile that should provide better color rendering when printing highly colorful images. Software support still needs to catch up, however. There is support for viewing images using the new sRGB profile in several applications (such as Firefox), but output profiles to translate images into printer color spaces have yet to appear.

There is a significant unsolved problem in color management, however, which Murphy called "the elephant in the room that's about to sneeze and cause a lot of chaos." That problem is optical brightening agents (OBAs), which he introduced as "what laundry detergent, toothpaste, and printer paper have in common." OBAs are fluorescent additives used to make objects appear whiter to the human eye; they absorb radiation in the ultraviolet spectrum and radiate it back out in the visible spectrum—usually in the blues and greens.

OBAs are a clever trick for creating whiter whites, but they wreak havoc with color specifications. They are difficult to measure (and, thus, to adjust for), their performance characteristics vary depending on the light in the viewing room, and they degrade over time. OBAs are one reason why printer paper turns yellow after two years, he said.

It is bad enough that OBAs are in new desktop-printer paper, since they make proofing difficult (for proper proofing, the desktop-printer paper should behave the same as the paper used by the commercial print shop). But as paper includes more and more recycled content, which he called an undeniably good change overall, paper stock includes more and more recycled OBAs—in unpredictable amounts and from various sources. Thus, even papers sold as OBA-free may contain some level of OBAs.

Murphy ended the session by noting that the United Nations had declared 2015 the "International Year of Light," a designation intended to promote scientific study. As a result, a number of color-science organizations were conducting programs and workshops that may interest users and developers concerned about color management. The International Commission on Illumination (CIE), for instance, is running a series of Open Lab Days around the globe.

Not to be outdone, Murphy ran his own workshops at LGM apart from his talk: one a BoF about color management, the other a hands-on session helping users configure a full color-managed workflow. For those who could not be at LGM, the good news is how many pieces of the color-management puzzle are already in the correct places. But as the new challenges Murphy outlined reveal, there are few targets in the software development field that sit still for long, color included.

[The author would like to thank Libre Graphics Meeting for assistance with travel to Toronto.]


Free software and fashion tech

By Nathan Willis
May 13, 2015

LGM 2015

At Libre Graphics Meeting 2015 in Toronto, Hong Phuc Dang presented an update of the state of various projects from the free-software and open-hardware world that deal with garment design and manufacturing, as well as textiles in general. The scope of the topic is rather large; it encompasses everything from Arduino-driven knitting machines to producing one-off garments for cosplayers to developing software for fashion designers. Thus, there are a great many small projects that are active in different areas, with the potential to grow into a full-fledged community.

Dang credited Susan Spencer's presentation at LGM 2013 with jump-starting her interest in free software for working with garments and textiles. After that session, she started researching the current state of affairs—talking to fashion designers and students around Asia and Europe, as well as to developers and people in the garment-production business.

[Hong Phuc Dang at LGM 2015]

In brief, she said, she learned that the fashion industry is and long has been slow to adopt new technology. The modern sewing machine is virtually identical in function to the earliest Singer models from the 1850s. Newer machines are faster, and some can be computer controlled, but they do not offer much else in the way of new capabilities. One of the key reasons for this is that garment manufacturing revolves around notoriously cheap labor. When labor is so inexpensive, producers have no incentive to pay more for newer equipment.

This locks fashion producers into a "race to the bottom" price war, she said, leaving little room to invest in new technology. As a result, the software used even by the largest producers is of low quality. Several designers told Dang that they used CAD drafting software to work on their designs because they cannot find anything else usable in their price range. What software is available is, naturally, proprietary and locked to closed data formats.

At the same time, she said, there are other problems plaguing the industry that also have an impact on technology. As more garment production moves to third-world countries to save costs, first-world communities begin to lose their collective traditional knowledge. Mass production also means that consumers have grown used to generic, one-size-fits-all garments as the norm, even though technology should allow for fast and easy customization—or perhaps even direct collaboration between the designer and the consumer. And mass production generates significant amounts of waste and environmental pollution.

The drawbacks to mass production of garments are reminiscent of the types of problem that the "maker" movement has already tackled for a number of engineering disciplines. Dang believes free software, open hardware, and open data formats can overcome many of these drawbacks, so she has been working to foster connections within the community. Her community-building project is called Fashiontec, and it includes a GitHub organization in addition to the main site.

There are several active free-software projects worth looking at, she said. Design and patternmaking are the purview of Tau Meta Tau Physica, Valentina, and several independent Processing-based efforts. A related project is BodyApps, which provides a 3D body-measurement system. It is developed by members of the Fashiontec community.

While the patternmaking projects focus on cutting and sewing material, there are also several knitting applications in development. Dang cited Knitic and All Yarns Are Beautiful (AYAB) as among the best; there is a more complete list available at the Fashiontec GitHub site. Related projects include Embroidermodder, an open-source application that can control several programmable embroidery machines.

Most of these knitting projects focus on supporting commercially available hardware devices. On the open-hardware side, there are several projects dedicated to building knitting machines. The most well-known of these is OpenKnit, which uses an Arduino to drive a home-built machine that includes a number of 3D-printed specialty parts. There is also an open-hardware embroidery machine built and documented by members of the OpenBuilds project. Some Fashiontec members are also working on reverse engineering a circular knitting machine.

Last but not least, the Fashiontec community has also been working to define an open file format that can facilitate data sharing between applications. Called the Human Definition Format (HDF), it is a container format modeled after The Document Foundation's Open Document Format (ODF). It contains structured XML and binary images, and can already be used with Valentina.
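Because HDF follows ODF's pattern of a zip archive holding XML documents alongside binary assets, reading one should look roughly like the sketch below. The internal file name "measurements.xml" is a guess for illustration; it is not taken from the HDF specification.

    # List the contents of an ODF-style container and parse one of its
    # XML members; the member name here is hypothetical.
    import zipfile
    import xml.etree.ElementTree as ET

    with zipfile.ZipFile("body.hdf") as container:
        print(container.namelist())         # XML documents plus images
        with container.open("measurements.xml") as f:
            root = ET.parse(f).getroot()
            for elem in root.iter():
                print(elem.tag, elem.attrib)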

Together, these projects constitute an active development scene, but Dang ended her session with a reminder that more is still needed. There are many more hardware devices that need to be "liberated" through reverse-engineering so that they can be used with free software. Individuals still face obstacles to setting up their own maker-style businesses. Some of those obstacles are quite large—such as how to compete with the global-scale distribution channels available to mass-production companies. Dang said she is still researching approaches to that problem.

Other challenges are smaller, such as the difficulty of building custom hardware (such as the open-hardware knitting machine). Here, Dang said that the Fashiontec community is trying to reach out more to the maker movement—hacker spaces in particular, which she said could all benefit from adding a sewing or knitting machine to their stable of 3D printers and laser cutters.

Over the coming year, Fashiontec will have a presence at a number of events, including MeshCon in Berlin this October, as well as FOSDEM, FOSSASIA, and several other free-software conferences. Dang closed by saying anyone with an interest in textiles, knitting, or garment production is welcome to join the community.

[The author would like to thank Libre Graphics Meeting for assistance with travel to Toronto.]


CoreOS Fest and the world of containers, part 1

May 13, 2015

This article was contributed by Josh Berkus


CoreOS Fest

It has been a Linux container bonanza in San Francisco recently, with a series of events and announcements from multiple startups and cloud hosts; it seems like everyone is fighting for a piece of what they hope will be a new multi-billion-dollar market. The events included Container Camp on April 17 and CoreOS Fest on May 5 and 6, with DockerCon to come near the end of June. While there is a lot of hype, the current container gold rush has yielded more than a few benefits for users — and spurred technological development so rapid that it is hard to keep up.

CoreOS Fest was a demonstration of how trendy containers are in the startup world right now. The event sold out at 300 attendees, despite being planned in under six months and located in an ill-suited venue called The Village in San Francisco's Tenderloin; I suspect that DockerCon will be even bigger. The audience, judging by responses to speaker questions, was almost entirely made up of system administrators and dedicated DevOps staff.

Among the latest developments in the container world are new funding, a new appc committee, the release of CoreOS, Inc.'s Tectonic platform, Kubernetes, new tools and techniques for databases on containers, systemd integration, Project Calico, Sysdig, and more. In this series of three articles, we will explore some of these developments. But first, some Silicon Valley politics.

Note to forestall confusion: For the rest of this article, "Docker" and "CoreOS" refer to the respective open-source projects and related software, and "Docker, Inc." and "CoreOS, Inc." refer to the companies.

The orchestration gold rush and CoreOS vs. Docker

CoreOS, Inc. was Docker, Inc.'s strongest partner, but split with it only six months before CoreOS Fest, when it launched the competing container platform rkt (formerly known as Rocket). The separation between the two companies seems to have become a divorce, as competition between them for users and capital has heated up. Docker, Inc. received $95 million in Series D funding on April 14. That same week, CoreOS, Inc. raised $12 million, notably including an investment from Google Ventures.

The conference made it obvious that it's a strange separation, though. Probably 80% of the people in the room at the keynote were Docker users, and most of the technologies introduced are compatible with Docker. Yet few people on stage ever said the word "Docker"; one speaker even went so far as to use the phrase "the D word" instead of saying the name.

Much of this competition centers on orchestration platforms: the suite of tools required to deploy, manage, and network the large numbers of containers that make up a container-based software infrastructure. The idea is that, while Linux containers on their own are useful as a development platform, several orchestration tools are needed before containers can serve as the basis for a whole software stack: container schedulers that deploy groups of containers to physical servers, cluster information stores for container data sharing and coordination, software-defined networking for connecting containers, and resource management and monitoring tools.

All of the companies in the container space seem to have decided that orchestration is where they can differentiate their products, and that it is therefore the primary way to exert influence and create revenue. It's not just Docker, Inc. and CoreOS, Inc. in this field: Red Hat's Project Atomic, Ubuntu's Snappy Core, Joyent's Triton, and the Apache Mesos project are all strong contenders for the future of container orchestration. Notably, Microsoft has now announced that Windows Containers, which will make container deployment available to users of Windows Server and the .NET stack, will be available in 2016.

Perhaps because of this intense competition, there was a much stronger emphasis on container security at CoreOS Fest than there had been at prior CoreOS meetups. Weak access controls and a lack of other security measures have been among CoreOS, Inc.'s main criticisms of the Docker project since before the split in December.

The new appc committee

This security focus was evident in the App Container (appc) specification panel. The specification was created and had its 0.1 release in December. It describes the required properties of an Application Container Image (ACI); rkt is CoreOS's implementation of that specification as explained in an earlier article.
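Concretely, the 0.x drafts define an ACI as a (typically gzipped) tarball holding a JSON manifest at its root next to a rootfs directory tree. Here is a minimal sketch of assembling one in Python; the manifest values are illustrative rather than normative, and the prebuilt rootfs path is hypothetical.

    # Build a minimal ACI: a tarball with a "manifest" file and a
    # "rootfs" tree, per the appc layout.
    import io, json, tarfile

    manifest = {
        "acKind": "ImageManifest",
        "acVersion": "0.5.1",
        "name": "example.com/hello",
        "app": {"exec": ["/bin/hello"], "user": "0", "group": "0"},
    }

    with tarfile.open("hello.aci", "w:gz") as aci:
        data = json.dumps(manifest).encode()
        info = tarfile.TarInfo("manifest")
        info.size = len(data)
        aci.addfile(info, io.BytesIO(data))
        aci.add("build/rootfs", arcname="rootfs")  # prebuilt filesystem tree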

[appc spec panel]

Before discussing any new features, CoreOS, Inc. CEO Alex Polvi cautioned the audience that the committee was still working on the security part of the specification; "sometimes it takes a while to get these things right", he said. He then introduced the members of the panel, who are also the committee in charge of the new "appc specification community": Vincent Batts of Red Hat, Tim Hockin of Google, Charles Aylward of Twitter, along with Brandon Philips and Jonathan Boulle of CoreOS, Inc. Ken Robertson of Apcera was also on the panel, although he is not a member of the committee.

That was one of the two big announcements of the morning: CoreOS, Inc. has created a governance document and turned over the appc specification project to a committee of "maintainers", the majority of whom do not work for CoreOS, Inc. While this is not a foundation or other incorporated body, the move seems intended to make appc a real, independent specification. It was also a demonstration of partner support for the spec. "[appc] should feel like the HTML 5 standard. Shared standards plus competition creates better product", Polvi said.

To start out, each of the panelists explained their company's interest in the appc specification.

Apcera was working on its own closed-source container technology when the appc project was announced. It quickly worked to bring its own technology in line with the draft specification. "When we saw the rkt announcements, we thought 'damn, now we have to build an abstraction'", said Robertson. He also announced the release of Kurma, Apcera's bootable container infrastructure that is compatible with appc-compliant containers.

Twitter already had a lot of existing infrastructure, and Docker didn't fit in with what it had, Aylward said. Rkt and appc allowed the company to pick and choose what it implemented. Hockin noted that Google is looking to create an open-source platform that mirrors how its large-scale, proprietary container platform works, and has formed a tight partnership with CoreOS to support it. "Coming from Google, I'm interested in building the cathedral. But before you can build the cathedral, you need to pour the foundation", he said.

Batts was more equivocal, saying that Red Hat's interest in appc is in supporting standards and user choice. Since Red Hat's Project Atomic is also closely aligned with Docker, Red Hat's fairly neutral stance makes sense. He explained it as "finding commonalities and working with them which drives everything else forward."

Once corporate politics were out of the way, the panelists discussed the state of the spec and current development. They started with some of the major challenges and feature requests, such as making encryption work with service discovery, the need for a better ACI validator, and the need to lock down more system calls inside the container for better security. The main challenge, however, is that parts of the 0.5 specification are still vaguely described, which frequently forces the rkt team to halt work while the specification is hammered out.

"You can write a spec, but without an implementation, you don't know that you can build it. So implementation and spec need to go hand-in-hand", said Aylward.

The committee agreed on the main goal of the project: for ACI to be the reference format for container images, and for developers to build ACI images first, then use them to create whatever other packages are needed. They did not agree on everything else, though. For example, while CoreOS, Inc. is devoted to systemd for container bootstrapping and initialization, Google is not using systemd. Hockin also disagreed with the other committee members on how much container isolation could be part of the spec. He believes that, eventually, by separating the general "spec" from the "os-spec", appc can encompass a full application binary interface (ABI) in order to provide full isolation for container runtimes. "It's pretty well understood that containers are not a security barrier. This is something that needs to evolve from the inside out", Hockin said.

Tectonic

The other major announcement for the conference was CoreOS, Inc.'s launch of the Tectonic platform, which is the full CoreOS, Inc. suite of tools. That includes CoreOS Linux, the container deployment tool fleet, the clustered data store etcd, the flannel virtual networking system, and the image repository Quay.io, all combined with Google's Kubernetes project (see below). The idea is to present a single, user-friendly integrated platform for large-scale container orchestration. Polvi called it "Google's infrastructure for everyone else, or GIFEE".

[Alex Polvi]

Tectonic is proprietary, commercial software that CoreOS, Inc. plans to sell to customers who want a fully integrated stack with a nice GUI and are willing to pay for it. While all the tools used are available as open source — except for the GUI — doing your own orchestration is difficult due to the newness of the tools and the complex ways in which they interact.

CoreOS's fleet and flannel may seem to have overlapping and conflicting functionality with Kubernetes, but in Tectonic they are complementary. According to Kelsey Hightower of CoreOS, Inc., fleet is used in Tectonic to bootstrap and monitor Kubernetes, which can otherwise require a lot of hand configuration. Flannel supplies an overlay networking system that supports Kubernetes' service discovery features.

To demonstrate the product, Intel, Supermicro, and data-center vendor Redapt announced a joint venture to make preconfigured Tectonic stacks available. At the conference, they showed off a quarter-rack of servers running the beta version of Tectonic as a "plug and play" container infrastructure that was ready to go. It is also possible to run the Tectonic beta on top of Amazon EC2.

Kubernetes

The only project logo as pervasive at CoreOS Fest as the CoreOS logo itself was the Kubernetes ship's wheel. Brendan Burns, head of the Kubernetes project at Google, explained what Kubernetes is, how it works, and how it relates to CoreOS and containers.

He started by separating operations into four layers: application ops, cluster ops, kernel ops, and hardware ops. Kubernetes operates at the level of cluster ops, synchronizing servers into a "unified compute substrate", in order to decouple application requirements from specific knowledge of the hardware, in the same way that a public cloud does.

Developers interact with Kubernetes through its API server, which supports both a command-line interface and a JavaScript-based web API. All of its data is stored in etcd. Like the configuration-management system Puppet, Kubernetes uses a declarative approach: users specify the desired state of the system, and Kubernetes reconciles the actual state with it. An example of this would be "exactly three Redis servers should be running", which would cause Kubernetes to either stop or start containers until that declaration was true.
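To make the declarative model concrete, the sketch below expresses that "exactly three Redis servers" example as a ReplicationController object and hands it to the API server to reconcile. It assumes a cluster listening on the insecure local API port and the v1 object schema; details vary between Kubernetes releases.

    # Declare a desired state and let Kubernetes converge on it.
    import json, urllib.request

    rc = {
        "apiVersion": "v1",
        "kind": "ReplicationController",
        "metadata": {"name": "redis"},
        "spec": {
            "replicas": 3,                   # the declared desired state
            "selector": {"app": "redis"},
            "template": {
                "metadata": {"labels": {"app": "redis"}},
                "spec": {"containers": [
                    {"name": "redis", "image": "redis:2.8"}]},
            },
        },
    }

    req = urllib.request.Request(
        "http://127.0.0.1:8080/api/v1/namespaces/default/replicationcontrollers",
        data=json.dumps(rc).encode(),
        headers={"Content-Type": "application/json"},
        method="POST")
    print(urllib.request.urlopen(req).getcode())  # 201 once created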

Deploying containers to servers in order to provide requested services is known as "scheduling". The "atomic unit of scheduling" in Kubernetes is the "pod": a group of containers, networking, and data volumes. This allows Kubernetes to schedule services made up of multiple components that must be placed on the same physical server in order to work, such as a database and its file storage.

The other big feature of Kubernetes is service discovery, which lets application developers use service proxies to talk to services without knowing where those containers are on the network. This proxy network is driven by "labels" attached to each pod and container that show the services that they provide. In this model, multiple pods supplying the same service are treated as fungible units — Kubernetes will load-balance among them.
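The discovery side can be sketched the same way: a Service object whose label selector matches the pods above, making every matching pod an interchangeable, load-balanced backend (again assuming the v1 schema).

    # A Service selects backends purely by label; Kubernetes proxies
    # connections to whichever pods currently match the selector.
    service = {
        "apiVersion": "v1",
        "kind": "Service",
        "metadata": {"name": "redis"},
        "spec": {
            "selector": {"app": "redis"},    # matches the pod labels above
            "ports": [{"port": 6379, "targetPort": 6379}],
        },
    }
    # POSTed to /api/v1/namespaces/default/services exactly like the
    # ReplicationController in the previous sketch.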

Compared with competing orchestration frameworks such as CoreOS's own fleet, Apache Mesos, or Docker, Inc.'s Swarm and Machine, Kubernetes feels more feature-complete and mature in simple trials at my company. Since it's a de facto port of Google's own, in-production orchestration software, this should not be surprising.

The only tool from the CoreOS stack that is actually required for Kubernetes is etcd, although flannel can be used to support the Kubernetes service discovery with virtual networking. Etcd can be used to share metadata for Docker and rkt containers equally well. However, given the close alliance between Google and CoreOS, Inc., further integration with CoreOS tools seems likely.
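That sharing is straightforward because etcd exposes a plain HTTP key-value API. A minimal sketch of its v2 interface follows; the key name is invented for illustration.

    import urllib.parse, urllib.request

    key = "http://127.0.0.1:2379/v2/keys/services/redis/endpoint"

    # Publish a value; etcd's v2 API takes a form-encoded "value" field.
    data = urllib.parse.urlencode({"value": "10.1.2.3:6379"}).encode()
    urllib.request.urlopen(urllib.request.Request(key, data=data, method="PUT"))

    # Any node in the cluster, Docker- or rkt-based, can read it back.
    print(urllib.request.urlopen(key).read().decode())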

Next up

The pace of new tools, companies, techniques, and practices in the Linux container world has been extremely rapid, and it is only through events like CoreOS Fest that I have been able to keep up. The alliances between companies and open-source projects are shifting constantly, in a way that we haven't seen since the early days of mobile Linux.

In the next part of this series, we'll be covering systemd and CoreOS, the rise of Go as a language for container tools, and the new projects Calico and Sysdig. We will conclude with an article about the issues and solutions for storing persistent data on container infrastructures, including PostgreSQL Governor, CockroachDB, etcd, and the Raft consensus algorithm.


A few notes from LWN

By Jonathan Corbet
May 13, 2015
It has been a while since the last update on the status of LWN. There are a few changes coming to the LWN site, so this seems like a good time for a summary of the various bits of metanews that have built up.

Perhaps the biggest upcoming change is that we are getting closer to switching over to the new responsive site design by default. Readers who have not yet done so can test out the new design by setting the appropriate preference in the account area. Those who have tried it may wish to give it another look; things have changed significantly in the last few weeks. This feature is no longer limited to subscribers; one does, though, need to be logged in to be able to change to the new design. A few small glitches remain, but most of the big problems have been ironed out — as far as we know.

Once the default changeover happens, it will still be possible to use the older design by changing the same preference value. We will keep that code around for now, but, it must be said, we are unlikely to use it ourselves or to put a lot of maintenance effort into it. Eventually the older mode is likely to fade away unless a strong reason to keep it surfaces.

One problem that came up during the work on this project was a difficulty in finding a spot for the text advertisement that traditionally runs in the left column. As it happens, nobody has bought such an ad in 2015, and only two were sold in 2014. We therefore conclude that LWN text ads are something less than a compelling offering at this point. So, support for text ads has been removed.

A feature that has been quietly added to the new design, instead, is the ability to use Google fonts to render LWN pages. This feature is currently experimental and might be removed in the future. Google fonts are disabled by default, but can be turned on in the preferences page. Note that doing so will cause the fonts to be downloaded from Google's servers if they are not already in your browser's cache. Google's privacy promises regarding fonts seem pretty solid, but we remain reluctant to turn them on by default; reader opinions on the matter would be of interest.

In general terms, LWN is currently running on a solid financial footing and won't be going away anytime soon. It is worth noting, though, that individual subscriptions have been nearly level for a few years now (group subscriptions are up a bit). We have also seen a bit of a tendency for subscribers to drop down to the lower subscription levels. A larger subscription base would enable us to hire more staff (something that your editor, currently dealing with workers' compensation insurance issues, would appreciate) and expand our coverage.

So we would like to thank all of our subscribers, and to encourage other readers to subscribe to LWN. That is, in the end, the only thing that keeps this site on the net. We have been at it for 17 years now, but, sometimes, it feels like we're just getting started. Much of interest is going to happen in the free software world in the coming years, and we'll be there to report on it.


Page editor: Jonathan Corbet

Security

Mozilla and deprecating HTTP

By Nathan Willis
May 13, 2015

On April 30, Mozilla announced its intention to try to phase out the use of HTTP in favor of HTTPS. The plan involves systematically deprecating support in Firefox for certain web features on sites served over insecure HTTP. That way, the thinking goes, site administrators will be compelled to migrate their servers over to HTTPS. Needless to say, the plan attracted its fair share of criticism—for its perceived use of strong-arm tactics, for the claims it makes about HTTPS security, and for the practical challenges it will need to overcome.

Firefox Security Lead Richard Barnes wrote the April 30 announcement. It began by pointing out that there have been numerous calls in recent months to migrate all web sites over to HTTPS (and, indeed, calls to encrypt all Internet applications). Mozilla brought the topic to its community mailing list in early April, expressing the project's interest in luring more site maintainers over to HTTPS by implementing new Firefox features only for secure connections.

What followed was a lengthy debate—sometimes over legitimate, specific concerns, but sometimes devolving into less high-minded criticism of Mozilla. Barnes and Martin Thomson drafted a more detailed plan in a publicly readable Google Docs document.

The plan

That plan proposed four phases. First, stakeholders would need to define what precisely counts as a "secure" context. This is not necessarily a simple encryption-or-no-encryption question. For instance, the W3C WebAppSec working group is currently considering how an authenticated TLS connection affects whether various types of page content—from IFrames to Web Workers—ought to be considered part of the same secure context. It is also expected to come up with a plan for determining whether local resources (such as localhost/ and file:/// URLs) should be treated as secure, and how organizations should treat internal servers.

Whatever the eventual consensus looks like, the second phase would involve publicly declaring a cutoff date, after which new browser features would be implemented only for use in secure contexts. The third phase would involve a second cutoff date, after which certain existing browser features would be disabled for insecure contexts. The fourth phase, simply put, is when "essentially all of the web is HTTPS".

The tricky part, the document goes on to explain, is deciding which features need to be cut off in phase three. The plan does not involve switching off arbitrary functionality just to make HTTP sites break; the features being considered are those with the potential to be a security risk for users: localStorage, getUserMedia, IndexedDB, cookies, and so on.

The blog post goes on to explain that, once implemented by Firefox, the plan would not require site owners to rewrite millions of http:// URLs into https:// ones. HTTP Strict Transport Security (HSTS) and Upgrade Insecure Requests allow the browser to transform URLs into HTTPS requests automatically. Firefox has had HSTS support since 2010.
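On the server side, HSTS amounts to a single response header. The sketch below shows the idea with Python's bundled wsgiref, for illustration only; browsers honor the header only when it arrives over HTTPS, so a real deployment would terminate TLS in front of this process.

    from wsgiref.simple_server import make_server

    def app(environ, start_response):
        start_response("200 OK", [
            ("Content-Type", "text/plain"),
            # Remember the HTTPS-only policy for one year, subdomains too.
            ("Strict-Transport-Security", "max-age=31536000; includeSubDomains"),
        ])
        return [b"hello over HTTPS\n"]

    make_server("127.0.0.1", 8080, app).serve_forever()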

The other major obstacle to the proposed plan is that it requires all sites to have a valid TLS certificate. While, historically, TLS certificates backed by certificate authorities (CAs) have been expensive and difficult to acquire, over the past few years more and more services have appeared with the goal of providing ubiquitous certificates. Let's Encrypt, the non-profit program backed by Mozilla (among others) is arguably the most famous (though it has yet to launch), but there are others. The FAQ document [PDF] at the end of Barnes's blog post mentions StartSSL and WoSign among the other such services.

Opposition

In the mailing list discussion in April, several objections were raised to the plan. Some took umbrage at the notion that Mozilla was setting out to force site owners to make technical changes; user "vic" called it "strong-arming," for example. Barnes and the other Mozilla employees in the discussion largely left such accusations alone, although Barnes did point out that HTTP/2 mandates TLS encryption already—and does so strictly on a public-policy basis, not on technical grounds. "So if you're OK with limiting HTTP/2 to TLS, you've sort of already bought into the strategy we're proposing here," he said.

Others raised more technical concerns. Eli Naeher (among others) noted that there are "hundreds of millions of home routers and similar devices with web UIs on private networks" that would potentially be broken if the plan is implemented. Many such devices are not designed to be user-upgradable; others are simply so old or resource-constrained that adding a new HTTPS stack might be tricky or impossible.

A commenter named "Lorenzo" pointed out that the plan could seriously impact networking infrastructure: if all web traffic is encrypted with HTTPS, it cannot be cached by caching proxies. To that point, Mozilla's Gervase Markham responded that there are some possible solutions. For instance, the proposal could allow Firefox to accept HTTP resources requested from an HTTPS page. Since only the non-user-specific elements on a page (e.g., generic site images, but not user data) should be cached in the first place, network operators could still use caching proxies to reduce their bandwidth requirements.

Elsewhere, Ben Klemens raised privacy concerns over the plan in a blog post. He pointed out that even free-TLS-certificate vendors require users to send in a sizable amount of personal information (including their real name and email address), which would hamper web usage by dissidents and others seeking anonymity. Markham replied to that by pointing out that web-hosting providers and domain-name registrars have similar policies already (as well as noting the irony that Klemens's post only allows comments to be left through Facebook and Twitter).

Barnes replied to a number of the frequently repeated concerns in a summary message. There, he acknowledged that it is still not trivial for everyone to implement HTTPS. While there are a number of initiatives working to make the process easier, he reminded everyone that the HTTPS-only future was still a long way off. Other recurring concerns will require work on the part of other players—such as the home router vendors. But, he said, "interfaces to these sorts of devices don’t typically use a lot of advanced web features, so may not be impacted by this deprecation plan for a long time (if ever)."

The more philosophical criticisms of the plan, however, do not have pat answers. Plenty of commenters on the blog post accused Mozilla of breaking the web, not caring about its users, not caring about site maintainers, and several variations on the theme of general corruption. Then again, similar criticisms seem to appear in response to many announcements that Mozilla makes, so it is easy to see why the browser maker spends little time responding to them.

The future

As it stands now, Mozilla seems intent on pursuing the plan, but it is clearly quite a few steps away from implementing the first technical changes. In addition to deciding on a time frame and determining what features ought to be marked for HTTP deprecation, there are still user-interface issues to be considered (e.g., the meaning of the lock icon in the location bar would surely need to change, since it currently reports whether HTTP or HTTPS is used), and Let's Encrypt has yet to launch.

Perhaps the biggest unanswered question, though, is how much of an impact a move like this on Mozilla's part would have in reality—and on what sort of time scale. Google's Chromium team has evidently been considering a similar approach—though with less publicity than Mozilla's has attracted. With the two largest browser vendors on board, it seems likely that most site administrators would eventually have to capitulate and migrate to HTTPS, even if such a large-scale operation seems hard to imagine at present.


Brief items

Security quotes of the week

As for crypto capabilities, a lot of stuff is decrypted automatically on ingest (e.g. using a “stolen cert”, presumably a private key obtained through hacking). Else the analyst sends the ciphertext to CES [Cryptanalysis and Exploitation Services] and they either decrypt it or say they can’t. There’s no evidence of a “wow” cryptanalysis; it was key theft, or an implant, or a predicted RNG [random number generator] or supply-chain interference. Cryptanalysis has been seen of RC4, but not of elliptic curve crypto, and there’s no sign of exploits against other commonly used algorithms. Of course, the vendors of some products have been coopted, notably skype. Homegrown crypto is routinely problematic, but properly implemented crypto keeps the agency out; gpg ciphertexts with RSA 1024 were returned as fails.
Ross Anderson summarizes a "meeting" with Edward Snowden

But the Mozilla foundation’s HTTPS requirement is, to me, the real end of the DIY [do it yourself] era. This is not a closed-source corporation, or a startup pushing its new tool, or the arrogant guy at the hackathon, but the Mozilla Foundation — “Our mission is to promote openness, innovation & opportunity on the Web” — saying that if you are building web pages using tools from your desert island, without first filling in registration forms, then you are doing it wrong. Mozilla Firefox will make increasingly active efforts to block you until you obtain the correct permissions to build modern web pages.

This statement from Mozilla describes itself as “a message to the web developer community”. The introverts on the desert island, the me of the 1990s, the kid of the present day who doesn’t like WordPress and has the energy and curiosity to try building something new, the real-world dissidents in real-world totalitarian countries, are dark matter in the background and not addressed directly in the announcement, but are affected by the announcement nonetheless.

Ben Klemens

If you have a source of free, no-information-required server hosting and free, no-information-required domain names (as Ben happens to for his Caltech Divinity School example), then it’s reasonable to say that you are a little inconvenienced if your HTTPS certificate is not also free and no-information-required. But most people doing homebrew DIY websites aren’t in that position – they have to rent such things. Once Let’s Encrypt is up and running, the situation with certificates will actually be easier and more anonymous than that with servers or domain names.
Gervase Markham responding to Klemens


New vulnerabilities

async-http-client: multiple vulnerabilities

Package(s): async-http-client  CVE #(s): CVE-2013-7398 CVE-2013-7397
Created: May 8, 2015  Updated: May 14, 2015
Description:

From the Fedora advisory:

CVE-2013-7398: missing hostname verification for SSL certificates.

CVE-2013-7397: SSL/TLS certificate verification is disabled under certain conditions.

Alerts:
Mageia MGASA-2015-0212 async-http-client 2015-05-11
Fedora FEDORA-2015-6891 async-http-client 2015-05-08


docker: multiple vulnerabilities

Package(s): docker  CVE #(s): CVE-2015-3627 CVE-2015-3629 CVE-2015-3630 CVE-2015-3631
Created: May 11, 2015  Updated: May 21, 2015
Description: From the Arch Linux advisory:

- CVE-2015-3627 (privilege escalation): The file-descriptor passed by libcontainer to the pid-1 process of a container has been found to be opened prior to performing the chroot, allowing insecure open and symlink traversal. This allows malicious container images to trigger a local privilege escalation.

- CVE-2015-3629 (privilege escalation): Symlink traversal on container respawn allows local privilege escalation. Libcontainer version 1.6.0 introduced changes which facilitated a mount namespace breakout upon respawn of a container. This allowed malicious images to write files to the host system and escape containerization.

- CVE-2015-3630 (unauthorized modification): Several paths underneath /proc were writable from containers, allowing global system manipulation and configuration. These paths included /proc/asound, /proc/timer_stats, /proc/latency_stats, and /proc/fs. By allowing writes to /proc/fs, it has been noted that CIFS volumes could be forced into a protocol downgrade attack by a root user operating inside of a container. Machines having loaded the timer_stats module were vulnerable to having this mechanism enabled and consumed by a container.

- CVE-2015-3631 (policy profile escalation): By allowing volumes to override files of /proc within a mount namespace, a user could specify arbitrary policies for Linux Security Modules, including setting an unconfined policy underneath AppArmor, or a docker_t policy for processes managed by SELinux. In all versions of Docker up until 1.6.1, it is possible for malicious images to configure volume mounts such that files of proc may be overridden.

Alerts:
openSUSE openSUSE-SU-2015:0905-1 docker 2015-05-19
Arch Linux ASA-201505-6 docker 2015-05-08
Oracle ELSA-2015-3037 docker 2015-05-21
Oracle ELSA-2015-3037 docker 2015-05-21


dpkg: format string vulnerabilities

Package(s): dpkg  CVE #(s): CVE-2014-8625
Created: May 13, 2015  Updated: May 13, 2015
Description: From the CVE entry:

Multiple format string vulnerabilities in the parse_error_msg function in parsehelp.c in dpkg before 1.17.22 allow remote attackers to cause a denial of service (crash) and possibly execute arbitrary code via format string specifiers in the (1) package or (2) architecture name.

Alerts:
Fedora FEDORA-2015-7342 dpkg 2015-05-12
Fedora FEDORA-2015-7296 dpkg 2015-05-12


gnu_parallel: file overwrite

Package(s): gnu_parallel  CVE #(s):
Created: May 12, 2015  Updated: June 1, 2015
Description: From the openSUSE advisory:

GNU parallel was updated to version 20150422 to fix one security issue, several bugs and add functionality.

The following vulnerability was fixed:

* A local attacker could make a user overwrite one of his own files with a single byte when using --compress, --tmux, --pipe, --cat or --fifo when guessing random file names within a time window of 15 ms.

Alerts:
openSUSE openSUSE-SU-2015:0856-1 gnu_parallel 2015-05-12
openSUSE openSUSE-SU-2015:0968-1 parallel 2015-05-29


hostapd: denial of service

Package(s): hostapd  CVE #(s): CVE-2015-4142
Created: May 13, 2015  Updated: November 25, 2015
Description: From the Mageia advisory:

A vulnerability was found in hostapd that can be used to perform denial of service attacks by an attacker that is within radio range of the AP that uses hostapd for MLME/SME operations.

Alerts:
openSUSE openSUSE-SU-2016:2357-1 wpa_supplicant 2016-09-23
Gentoo 201606-17 hostapd 2016-06-27
Fedora FEDORA-2015-1521e91178 wpa_supplicant 2015-11-24
Fedora FEDORA-2015-cfea96144a wpa_supplicant 2015-11-23
Fedora FEDORA-2015-6f16b5e39e wpa_supplicant 2015-11-12
Debian DSA-3397-1 wpa 2015-11-10
Arch Linux ASA-201510-2 hostapd 2015-10-05
Scientific Linux SLSA-2015:1439-1 wpa_supplicant 2015-08-03
Oracle ELSA-2015-1439 wpa_supplicant 2015-07-29
Red Hat RHSA-2015:1439-01 wpa_supplicant 2015-07-22
Debian-LTS DLA-260-1 hostapd 2015-06-30
Red Hat RHSA-2015:1090-01 wpa_supplicant 2015-06-11
Fedora FEDORA-2015-8386 hostapd 2015-05-27
Fedora FEDORA-2015-8336 hostapd 2015-05-27
Mageia MGASA-2015-0216 hostapd 2015-05-12
Ubuntu USN-2650-1 wpa, wpasupplicant 2015-06-16
CentOS CESA-2015:1090 wpa_supplicant 2015-06-15
Scientific Linux SLSA-2015:1090-1 wpa_supplicant 2015-06-11
Fedora FEDORA-2015-8303 hostapd 2015-05-26
openSUSE openSUSE-SU-2015:1030-1 wpa_supplicant 2015-06-11


icu: code execution

Package(s): icu  CVE #(s): CVE-2014-8146 CVE-2014-8147
Created: May 11, 2015  Updated: July 7, 2015
Description: From the Ubuntu advisory:

Pedro Ribeiro discovered that ICU incorrectly handled certain memory operations when processing data. If an application using ICU processed crafted data, an attacker could cause it to crash or potentially execute arbitrary code with the privileges of the user invoking the program.

Alerts:
openSUSE openSUSE-SU-2015:2368-1 Qt 2015-12-27
openSUSE openSUSE-SU-2016:0588-1 LibreOffice 2016-02-26
Debian DSA-3323-1 icu 2015-08-01
Mageia MGASA-2015-0286 icu 2015-07-27
Gentoo 201507-04 icu 2015-07-07
Ubuntu USN-2605-1 icu 2015-05-11


java: multiple vulnerabilities

Package(s): java-1.6.0-ibm  CVE #(s): CVE-2015-0138 CVE-2015-0192 CVE-2015-1914 CVE-2015-2808
Created: May 13, 2015  Updated: May 13, 2015
Description: From the Red Hat advisory:

CVE-2015-0138 IBM JDK: ephemeral RSA keys accepted for non-export SSL/TLS cipher suites (FREAK)

CVE-2015-0192 IBM JDK: unspecified Java sandbox restrictions bypass

CVE-2015-1914 IBM JDK: unspecified partial Java sandbox restrictions bypass

CVE-2015-2808 SSL/TLS: "Invariance Weakness" vulnerability in RC4 stream cipher

Alerts:
SUSE SUSE-SU-2016:0113-1 java-1_6_0-ibm 2016-01-13
Gentoo 201512-10 firefox 2015-12-30
SUSE SUSE-SU-2015:2192-1 java-1_6_0-ibm 2015-12-03
SUSE SUSE-SU-2015:2166-1 java-1_6_0-ibm 2015-12-02
Debian DSA-3339-1 openjdk-6 2015-08-19
SUSE SUSE-SU-2015:1375-1 java-1_7_0-ibm 2015-08-12
Ubuntu USN-2706-1 openjdk-6 2015-08-06
SUSE SUSE-SU-2015:1345-1 java-1_6_0-ibm 2015-08-05
Scientific Linux SLSA-2015:1526-1 java-1.6.0-openjdk 2015-08-03
Oracle ELSA-2015-1526 java-1.6.0-openjdk 2015-07-31
SUSE SUSE-SU-2015:1331-1 java-1_7_1-ibm 2015-07-31
SUSE SUSE-SU-2015:1329-1 java-1_7_1-ibm 2015-07-31
SUSE SUSE-SU-2015:1320-1 java-1_7_0-openjdk 2015-07-30
SUSE SUSE-SU-2015:1319-1 java-1_7_0-openjdk 2015-07-30
Oracle ELSA-2015-1526 java-1.6.0-openjdk 2015-07-30
Oracle ELSA-2015-1526 java-1.6.0-openjdk 2015-07-30
CentOS CESA-2015:1526 java-1.6.0-openjdk 2015-07-30
CentOS CESA-2015:1526 java-1.6.0-openjdk 2015-07-30
Red Hat RHSA-2015:1526-01 java-1.6.0-openjdk 2015-07-30
Debian-LTS DLA-303-1 openjdk-6 2015-08-28
Ubuntu USN-2696-1 openjdk-7 2015-07-30
openSUSE openSUSE-SU-2015:1289-1 java-1_8_0-openjdk 2015-07-26
openSUSE openSUSE-SU-2015:1288-1 java-1_7_0-openjdk 2015-07-26
Mageia MGASA-2015-0280 java-1.8.0-openjdk 2015-07-27
Debian DSA-3316-1 openjdk-7 2015-07-25
Mageia MGASA-2015-0277 java-1.7.0-openjdk 2015-07-23
Arch Linux ASA-201507-16 jre7-openjdk 2015-07-22
SUSE SUSE-SU-2015:1509-1 java-1_6_0-ibm 2015-09-08
Oracle ELSA-2015-1230 java-1.7.0-openjdk 2015-07-16
Red Hat RHSA-2015:1241-01 java-1.8.0-oracle 2015-07-17
Red Hat RHSA-2015:1242-01 java-1.7.0-oracle 2015-07-17
Red Hat RHSA-2015:1243-01 java-1.6.0-sun 2015-07-17
Scientific Linux SLSA-2015:1228-1 java-1.8.0-openjdk 2015-07-15
Scientific Linux SLSA-2015:1229-1 java-1.7.0-openjdk 2015-07-15
Scientific Linux SLSA-2015:1230-1 java-1.7.0-openjdk 2015-07-15
CentOS CESA-2015:1228 java-1.8.0-openjdk 2015-07-15
CentOS CESA-2015:1228 java-1.8.0-openjdk 2015-07-15
CentOS CESA-2015:1230 java-1.7.0-openjdk 2015-07-15
CentOS CESA-2015:1229 java-1.7.0-openjdk 2015-07-15
CentOS CESA-2015:1229 java-1.7.0-openjdk 2015-07-15
Red Hat RHSA-2015:1228-01 java-1.8.0-openjdk 2015-07-15
Red Hat RHSA-2015:1230-01 java-1.7.0-openjdk 2015-07-15
Red Hat RHSA-2015:1229-01 java-1.7.0-openjdk 2015-07-15
SUSE SUSE-SU-2015:1161-1 java-1_6_0-ibm 2015-06-30
SUSE SUSE-SU-2015:1086-4 java-1_7_0-ibm 2015-06-27
SUSE SUSE-SU-2015:1086-3 Java 2015-06-24
SUSE SUSE-SU-2015:1138-1 IBM Java 2015-06-24
SUSE SUSE-SU-2015:1086-2 IBM Java 2015-06-22
SUSE SUSE-SU-2015:1086-1 IBM Java 2015-06-18
SUSE SUSE-SU-2015:1085-1 IBM Java 2015-06-18
Red Hat RHSA-2015:1007-01 java-1.7.0-ibm 2015-05-13
Red Hat RHSA-2015:1006-01 java-1.6.0-ibm 2015-05-13
Red Hat RHSA-2015:1020-01 java-1.7.1-ibm 2015-05-20
Red Hat RHSA-2015:1021-01 java-1.5.0-ibm 2015-05-20
SUSE SUSE-SU-2015:1073-1 java-1_7_0-ibm 2015-06-16


kernel: use-after-free flaw

Package(s): kernel  CVE #(s): CVE-2015-3636
Created: May 12, 2015  Updated: August 19, 2015
Description: From the Mageia advisory:

It was found that the Linux kernel's ping socket implementation didn't properly handle socket unhashing during spurious disconnects which could lead to use-after-free flaw. On x86-64 architecture systems, a local user able to create ping sockets could use this flaw to crash the system. On non-x86-64 architecture systems, a local user able to create ping sockets could use this flaw to increase their privileges on the system. Note: By default ping sockets are disabled on the system (net.ipv4.ping_group_range = 1 0) and have to be explicitly enabled by the system administrator for specific user groups in order to exploit this issue.

Alerts:
openSUSE openSUSE-SU-2016:0301-1 kernel 2016-02-01
Oracle ELSA-2015-2152 kernel 2015-11-25
Red Hat RHSA-2015:1643-01 kernel 2015-08-18
openSUSE openSUSE-SU-2015:1382-1 kernel 2015-08-14
SUSE SUSE-SU-2015:1376-1 kernel-rt 2015-08-12
Red Hat RHSA-2015:1583-01 kernel 2015-08-11
Scientific Linux SLSA-2015:1534-1 kernel 2015-08-06
CentOS CESA-2015:1534 kernel 2015-08-06
Red Hat RHSA-2015:1564-01 kernel-rt 2015-08-05
Red Hat RHSA-2015:1565-01 kernel-rt 2015-08-05
Red Hat RHSA-2015:1534-01 kernel 2015-08-05
Oracle ELSA-2015-3064 kernel 3.8.13 2015-07-31
Oracle ELSA-2015-3064 kernel 3.8.13 2015-07-31
SUSE SUSE-SU-2015:1491-1 kernel 2015-09-04
SUSE SUSE-SU-2015:1488-1 kernel 2015-09-04
SUSE SUSE-SU-2015:1478-1 kernel 2015-09-02
SUSE SUSE-SU-2015:1489-1 kernel 2015-09-04
SUSE SUSE-SU-2015:1487-1 kernel 2015-09-04
Scientific Linux SLSA-2015:1221-1 kernel 2015-07-15
Oracle ELSA-2015-3049 kernel 2.6.39 2015-07-16
Oracle ELSA-2015-3049 kernel 2.6.39 2015-07-16
Oracle ELSA-2015-3048 kernel 3.8.13 2015-07-15
Oracle ELSA-2015-3048 kernel 3.8.13 2015-07-15
CentOS CESA-2015:1221 kernel 2015-07-15
Oracle ELSA-2015-1221 kernel 2015-07-14
Red Hat RHSA-2015:1221-01 kernel 2015-07-14
SUSE SUSE-SU-2015:1224-1 kernel 2015-07-10
Mageia MGASA-2015-0219 kernel-tmb 2015-05-13
Mageia MGASA-2015-0221 kernel-linus 2015-05-13
Fedora FEDORA-2015-7736 kernel 2015-05-12
Mageia MGASA-2015-0210 kernel 2015-05-11
Debian DSA-3290-1 kernel 2015-06-18
Ubuntu USN-2637-1 kernel 2015-06-10
Fedora FEDORA-2015-8518 kernel 2015-05-26
Ubuntu USN-2632-1 linux-ti-omap4 2015-06-10
Ubuntu USN-2633-1 linux-lts-trusty 2015-06-10
Ubuntu USN-2634-1 kernel 2015-06-10
SUSE SUSE-SU-2015:1071-1 kernel 2015-06-16
Ubuntu USN-2631-1 kernel 2015-06-10
Ubuntu USN-2636-1 linux-lts-vivid 2015-06-10
Ubuntu USN-2635-1 linux-lts-utopic 2015-06-10
Ubuntu USN-2638-1 kernel 2015-06-10


kexec-tools: file overwrites

Package(s): kexec-tools  CVE #(s): CVE-2015-0267
Created: May 13, 2015  Updated: May 14, 2015
Description: From the Red Hat advisory:

It was found that the module-setup.sh script provided by kexec-tools created temporary files in an insecure way. A malicious, local user could use this flaw to conduct a symbolic link attack, allowing them to overwrite the contents of arbitrary files.

Alerts:
Scientific Linux SLSA-2015:0986-1 kexec-tools 2015-05-13
Oracle ELSA-2015-0986 kexec-tools 2015-05-12
CentOS CESA-2015:0986 kexec-tools 2015-05-13
Red Hat RHSA-2015:0986-01 kexec-tools 2015-05-12


kvm: code execution

Package(s): kvm  CVE #(s): CVE-2015-3456
Created: May 13, 2015  Updated: August 18, 2015
Description: From the Red Hat advisory:

An out-of-bounds memory access flaw was found in the way QEMU's virtual Floppy Disk Controller (FDC) handled FIFO buffer access while processing certain FDC commands. A privileged guest user could use this flaw to crash the guest or, potentially, execute arbitrary code on the host with the privileges of the host's QEMU process corresponding to the guest.

Alerts:
Gentoo 201604-03 xen 2016-04-05
Gentoo 201612-27 virtualbox 2016-12-12
Mageia MGASA-2016-0098 xen 2016-03-07
Gentoo 201602-01 qemu 2016-02-04
openSUSE openSUSE-SU-2015:1400-1 virtualbox 2015-08-18
Fedora FEDORA-2015-13402 qemu 2015-08-18
openSUSE openSUSE-SU-2015:1092-1 xen 2015-06-22
Debian-LTS DLA-249-1 qemu-kvm 2015-06-19
Debian-LTS DLA-248-1 qemu 2015-06-19
Debian-LTS DLA-268-1 virtualbox-ose 2015-07-06
Fedora FEDORA-2015-8248 qemu 2015-05-22
Fedora FEDORA-2015-8220 qemu 2015-05-26
SUSE SUSE-SU-2015:0929-1 KVM 2015-05-22
Ubuntu USN-2608-1 qemu, qemu-kvm 2015-05-13
Scientific Linux SLSA-2015:0998-1 qemu-kvm 2015-05-13
Scientific Linux SLSA-2015:0999-1 qemu-kvm 2015-05-13
Oracle ELSA-2015-1002 xen 2015-05-13
Oracle ELSA-2015-0998 qemu-kvm 2015-05-13
Oracle ELSA-2015-0999 qemu-kvm 2015-05-13
Oracle ELSA-2015-1003 kvm 2015-05-13
Mageia MGASA-2015-0220 qemu 2015-05-13
Debian DSA-3259-1 qemu 2015-05-13
CentOS CESA-2015:1002 xen 2015-05-13
CentOS CESA-2015:0998 qemu-kvm 2015-05-13
CentOS CESA-2015:0999 qemu-kvm 2015-05-13
CentOS CESA-2015:1003 kvm 2015-05-13
Arch Linux ASA-201505-9 qemu 2015-05-14
Scientific Linux SLSA-2015:1002-1 xen 2015-05-13
Scientific Linux SLSA-2015:1003-1 kvm 2015-05-13
Red Hat RHSA-2015:1002-01 xen 2015-05-13
Red Hat RHSA-2015:1004-01 qemu-kvm-rhev 2015-05-13
Red Hat RHSA-2015:0998-01 qemu-kvm 2015-05-13
Red Hat RHSA-2015:0999-01 qemu-kvm 2015-05-13
Red Hat RHSA-2015:1003-01 kvm 2015-05-13
openSUSE openSUSE-SU-2015:0983-1 xen 2015-06-02
Fedora FEDORA-2015-8252 xen 2015-05-26
Fedora FEDORA-2015-8270 xen 2015-05-26
SUSE SUSE-SU-2015:0896-1 qemu 2015-05-18
SUSE SUSE-SU-2015:0889-1 KVM 2015-05-16
openSUSE openSUSE-SU-2015:0894-1 qemu 2015-05-18
openSUSE openSUSE-SU-2015:0893-1 qemu 2015-05-18
Mageia MGASA-2015-0228 virtualbox 2015-05-15
SUSE SUSE-SU-2015:0943-1 KVM 2015-05-26
Fedora FEDORA-2015-8194 xen 2015-05-26
Debian DSA-3274-1 virtualbox 2015-05-28
SUSE SUSE-SU-2015:0889-2 Xen 2015-05-26
SUSE SUSE-SU-2015:0927-1 Xen 2015-05-22
Debian DSA-3262-1 xen 2015-05-18
Red Hat RHSA-2015:1031-01 qemu-kvm 2015-05-27
SUSE SUSE-SU-2015:0940-1 Xen 2015-05-26
SUSE SUSE-SU-2015:0944-1 Xen 2015-05-26
SUSE SUSE-SU-2015:0923-1 xen 2015-05-21
Fedora FEDORA-2015-8249 qemu 2015-05-17

Comments (none posted)

libarchive: denial of service

Package(s):libarchive CVE #(s):
Created:May 12, 2015 Updated:June 21, 2016
Description: From the Mageia advisory:

An out-of-bounds read flaw was found in the way libarchive processed certain archives. An attacker could create a specially crafted archive that, when processed by an application using the libarchive library, would cause that application to crash.

Alerts:
Slackware SSA:2016-172-01 libarchive 2016-06-20
Fedora FEDORA-2015-7216 libarchive 2015-05-22
Mageia MGASA-2015-0208 libarchive 2015-05-11

Comments (none posted)

libmodule-signature-perl: multiple vulnerabilities

Package(s):libmodule-signature-perl CVE #(s):CVE-2015-3406 CVE-2015-3407 CVE-2015-3408 CVE-2015-3409
Created:May 12, 2015 Updated:January 19, 2016
Description: From the Ubuntu advisory:

John Lightsey discovered that Module::Signature incorrectly handled PGP signature boundaries. A remote attacker could use this issue to trick Module::Signature into parsing the unsigned portion of the SIGNATURE file as the signed portion. (CVE-2015-3406)

John Lightsey discovered that Module::Signature incorrectly handled files that were not listed in the SIGNATURE file. A remote attacker could use this flaw to execute arbitrary code when tests were run. (CVE-2015-3407)

John Lightsey discovered that Module::Signature incorrectly handled embedded shell commands in the SIGNATURE file. A remote attacker could use this issue to execute arbitrary code during signature verification. (CVE-2015-3408)

John Lightsey discovered that Module::Signature incorrectly handled module loading. A remote attacker could use this issue to execute arbitrary code during signature verification. (CVE-2015-3409)

Alerts:
openSUSE openSUSE-SU-2016:0163-1 perl-Module-Signature 2016-01-19
Debian-LTS DLA-264-1 libmodule-signature-perl 2015-07-01
Ubuntu USN-2607-1 libmodule-signature-perl 2015-05-12
Debian DSA-3261-1 libmodule-signature-perl 2015-05-15
Debian DSA-3261-2 libmodule-signature-perl 2015-05-20

Comments (none posted)

libssh: denial of service

Package(s):libssh CVE #(s):CVE-2015-3146
Created:May 12, 2015 Updated:July 14, 2015
Description: From the Mageia advisory:

libssh versions 0.5.1 and above, but before 0.6.5, have a logic error in the handling of SSH_MSG_NEWKEYS and SSH_MSG_KEXDH_REPLY packets. A detected error did not put the session into the error state correctly; the packet was processed further, leading to a null pointer dereference. Since these packets immediately follow the initial key exchange, no authentication is required, so the flaw could be used for a denial-of-service (DoS) attack.

Alerts:
Debian DSA-3488-1 libssh 2016-02-23
Ubuntu USN-2912-1 libssh 2016-02-23
Fedora FEDORA-2015-10962 libssh 2015-07-14
Fedora FEDORA-2015-7590 libssh 2015-05-14
openSUSE openSUSE-SU-2015:0860-1 libssh 2015-05-12
Mageia MGASA-2015-0209 libssh 2015-05-11

Comments (none posted)

libtasn1: denial of service

Package(s):libtasn1 CVE #(s):CVE-2015-3622
Created:May 7, 2015 Updated:August 12, 2015
Description: From the Mageia advisory:

A malformed certificate input could cause a heap overflow read in the DER decoding functions of Libtasn1. The heap overflow happens in the function _asn1_extract_der_octet() (CVE-2015-3622).

Alerts:
openSUSE openSUSE-SU-2016:1674-1 libtasn1 2016-06-24
openSUSE openSUSE-SU-2016:1567-1 libtasn1 2016-06-14
Gentoo 201509-04 libtasn1 2015-09-24
openSUSE openSUSE-SU-2015:1372-1 gnutls 2015-08-12
Ubuntu USN-2604-1 libtasn1-3, libtasn1-6 2015-05-11
Debian DSA-3256-1 libtasn1-6 2015-05-10
Mandriva MDVSA-2015:232 libtasn1 2015-05-08
Arch Linux ASA-201505-5 libtasn1 2015-05-08
Mageia MGASA-2015-0200 libtasn1 2015-05-06
Fedora FEDORA-2015-7288 libtasn1 2015-05-19

Comments (none posted)

mozilla: multiple vulnerabilities

Package(s):firefox thunderbird seamonkey CVE #(s):CVE-2015-2708 CVE-2015-2710 CVE-2015-2713 CVE-2015-2716
Created:May 13, 2015 Updated:September 4, 2015
Description: From the Red Hat advisory:

Several flaws were found in the processing of malformed web content. A web page containing malicious content could cause Firefox to crash or, potentially, execute arbitrary code with the privileges of the user running Firefox. (CVE-2015-2708, CVE-2015-0797, CVE-2015-2710, CVE-2015-2713)

A heap-based buffer overflow flaw was found in the way Firefox processed compressed XML data. An attacker could create specially crafted compressed XML content that, when processed by Firefox, could cause it to crash or execute arbitrary code with the privileges of the user running Firefox. (CVE-2015-2716)

Alerts:
Gentoo 201605-06 nss 2016-05-31
Slackware SSA:2016-359-01 expat 2016-12-24
openSUSE openSUSE-SU-2015:1266-1 firefox, thunderbird 2015-07-18
Mageia MGASA-2015-0342 iceape 2015-09-08
Slackware SSA:2015-246-01 seamonkey 2015-09-03
SUSE SUSE-SU-2015:0978-1 firefox 2015-06-01
SUSE SUSE-SU-2015:0960-1 firefox 2015-05-28
Debian DSA-3264-1 icedove 2015-05-19
Ubuntu USN-2602-1 firefox 2015-05-13
Scientific Linux SLSA-2015:0988-1 firefox 2015-05-13
Debian DSA-3260-1 iceweasel 2015-05-13
CentOS CESA-2015:0988 firefox 2015-05-13
Slackware SSA:2015-132-04 firefox 2015-05-12
Oracle ELSA-2015-0988 firefox 2015-05-12
Arch Linux ASA-201505-7 firefox 2015-05-13
Red Hat RHSA-2015:0988-01 firefox 2015-05-12
Slackware SSA:2015-137-01 thunderbird 2015-05-17
openSUSE openSUSE-SU-2015:0892-1 firefox 2015-05-18
CentOS CESA-2015:1012 thunderbird 2015-05-18
Arch Linux ASA-201505-13 thunderbird 2015-05-18
Red Hat RHSA-2015:1012-01 thunderbird 2015-05-18
Fedora FEDORA-2015-8138 firefox 2015-05-19
CentOS CESA-2015:1012 thunderbird 2015-05-19
openSUSE openSUSE-SU-2015:0934-1 firefox 2015-05-24
Ubuntu USN-2603-1 thunderbird 2015-05-18
Fedora FEDORA-2015-8806 thunderbird 2015-05-26
Fedora FEDORA-2015-8806 firefox 2015-05-26
Oracle ELSA-2015-1012 thunderbird 2015-05-18
openSUSE openSUSE-SU-2015:0935-1 thunderbird 2015-05-24
Fedora FEDORA-2015-8138 thunderbird 2015-05-19
Scientific Linux SLSA-2015:1012-1 thunderbird 2015-05-18
Mageia MGASA-2015-0234 firefox, thunderbird, sqlite3 2015-05-18

Comments (none posted)

mozilla: multiple vulnerabilities

Package(s):firefox thunderbird seamonkey CVE #(s):CVE-2015-2709 CVE-2015-2711 CVE-2015-2712 CVE-2015-2715 CVE-2015-2717 CVE-2015-2718
Created:May 13, 2015 Updated:September 4, 2015
Description: From the Arch Linux advisory:

- CVE-2015-2709 (Memory safety bugs fixed in Firefox 38): Gary Kwong, Andrew McCreight, Christian Holler, Jesse Ruderman, Mats Palmgren, Jon Coppeard, and Milan Sreckovic reported memory safety problems and crashes that affect Firefox 37.

- CVE-2015-2711 (Referrer policy ignored when links opened by middle-click and context menu): Security researcher Alex Verstak reported that <meta name="referrer"> is ignored when a link is opened through the context menu or a mouse middle-click. This means that, in some situations, the referrer policy is ignored when opening links in new tabs, which may cause some pages to open without an HTTP Referer header set according to the author's intended policy.

- CVE-2015-2712 (Out-of-bounds read and write in asm.js validation): Security researcher Dougall Johnson reported an out-of-bounds read and write in asm.js during JavaScript validation due to an error in how heap lengths are defined. This results in a potentially exploitable crash and could allow for the reading of random memory which may contain sensitive data.

- CVE-2015-2715 (Use-after-free due to Media Decoder Thread creation during shutdown): Security researchers Tyson Smith and Jesse Schwartzentruber reported a use-after-free during the shutdown process, caused by a race condition when media decoder threads are created during shutdown in some circumstances. This leads to a potentially exploitable crash when triggered.

- CVE-2015-2717 (Buffer overflow and out-of-bounds read while parsing MP4 video metadata): Security researcher laf.intel reported a buffer overflow and out-of-bounds read in the libstagefright library while parsing invalid metadata in MP4 video files. This can lead to a potentially exploitable crash.

- CVE-2015-2718 (Untrusted site hosting trusted page can intercept webchannel responses): Mozilla developer Mark Hammond reported a flaw in how WebChannel.jsm handles message traffic. He found that when a trusted page is hosted within an <iframe> on an untrusted third-party framing page, the untrusted page could intercept webchannel responses meant for the trusted page, bypassing origin restrictions.

Alerts:
Gentoo 201605-06 nss 2016-05-31
Slackware SSA:2015-246-01 seamonkey 2015-09-03
Mageia MGASA-2015-0342 iceape 2015-09-08
SUSE SUSE-SU-2015:0978-1 firefox 2015-06-01
Ubuntu USN-2602-1 firefox 2015-05-13
Oracle ELSA-2015-0988 firefox 2015-05-13
Fedora FEDORA-2015-8179 firefox 2015-05-14
Arch Linux ASA-201505-7 firefox 2015-05-13
SUSE SUSE-SU-2015:0960-1 firefox 2015-05-28
Arch Linux ASA-201505-13 thunderbird 2015-05-18
openSUSE openSUSE-SU-2015:0934-1 firefox 2015-05-24

Comments (none posted)

netcf: denial of service

Package(s):netcf CVE #(s):CVE-2014-8119
Created:May 11, 2015 Updated:December 22, 2015
Description: From the Red Hat bugzilla:

A flaw was found in the way netcf's find_ifcfg_path() function processed certain XPath expressions. An attacker able to supply a specially crafted XML file to an application using netcf could cause that application to crash.

Alerts:
Scientific Linux SLSA-2015:2248-3 netcf 2015-12-21
Oracle ELSA-2015-2248 netcf 2015-11-23
Red Hat RHSA-2015:2248-03 netcf 2015-11-19
Mageia MGASA-2015-0215 netcf 2015-05-12
Fedora FEDORA-2015-5910 netcf 2015-05-10
Fedora FEDORA-2015-5872 netcf 2015-05-11

Comments (none posted)

openssl: re-enable TLSv1.2 by default

Package(s):openssl CVE #(s):
Created:May 12, 2015 Updated:May 13, 2015
Description: From the Ubuntu advisory:

For compatibility reasons, Ubuntu 12.04 LTS shipped OpenSSL with TLSv1.2 disabled when being used as a client.

This update re-enables TLSv1.2 by default now that the majority of problematic sites have been updated to fix compatibility issues.

For problematic environments, TLSv1.2 can be disabled again by setting the OPENSSL_NO_CLIENT_TLS1_2 environment variable before library initialization.

Alerts:
Ubuntu USN-2606-1 openssl 2015-05-12

Comments (none posted)

pcre: code execution

Package(s):pcre CVE #(s):CVE-2015-2325 CVE-2015-2326
Created:May 12, 2015 Updated:May 13, 2015
Description: From the openSUSE advisory:

* CVE-2015-2325: Specially crafted regular expressions could have caused a heap buffer overflow in compile_branch(), potentially allowing the execution of arbitrary code. (boo#924960)

* CVE-2015-2326: Specially crafted regular expressions could have caused a heap buffer overflow in pcre_compile2(), potentially allowing the execution of arbitrary code. (boo#924961)

Alerts:
Red Hat RHSA-2016:2750-01 rh-php56 2016-11-15
openSUSE openSUSE-SU-2016:3099-1 pcre 2016-12-12
Ubuntu USN-2943-1 pcre3 2016-03-29
Ubuntu USN-2694-1 pcre3 2015-07-29
SUSE SUSE-SU-2015:1273-1 mariadb 2015-07-21
Slackware SSA:2015-198-02 php 2015-07-17
openSUSE openSUSE-SU-2015:1216-1 MariaDB 2015-07-09
openSUSE openSUSE-SU-2015:0858-1 pcre 2015-05-12
Slackware SSA:2015-162-02 php 2015-06-11

Comments (none posted)

pcs: privilege escalation

Package(s):pcs CVE #(s):CVE-2015-1848
Created:May 13, 2015 Updated:June 5, 2015
Description: From the Red Hat advisory:

It was found that the pcs daemon did not sign cookies containing session data that were sent to clients connecting via the pcsd web UI. A remote attacker could use this flaw to forge cookies and bypass authorization checks, possibly gaining elevated privileges in the pcsd web UI.

Alerts:
Scientific Linux SLSA-2015:0990-1 pcs 2015-05-13
Scientific Linux SLSA-2015:0980-1 pcs 2015-05-13
CentOS CESA-2015:0990 pcs 2015-05-12
CentOS CESA-2015:0980 pcs 2015-05-13
Red Hat RHSA-2015:0990-01 pcs 2015-05-12
Red Hat RHSA-2015:0980-01 pcs 2015-05-12
Fedora FEDORA-2015-8765 pcs 2015-06-04
Fedora FEDORA-2015-8788 pcs 2015-06-04
Fedora FEDORA-2015-8761 pcs 2015-06-04

Comments (none posted)

realmd: unsanitized input

Package(s):realmd CVE #(s):CVE-2015-2704
Created:May 8, 2015 Updated:December 22, 2015
Description:

From the bug report:

realmd configures sssd.conf and smb.conf. No data retrieved before the join (the point where mutual trust and sealing are established) should be used when configuring sssd.conf and/or smb.conf.

Alerts:
Scientific Linux SLSA-2015:2184-7 realmd 2015-12-21
Red Hat RHSA-2015:2184-07 realmd 2015-11-19
Fedora FEDORA-2015-6387 realmd 2015-05-08

Comments (none posted)

ruby-redcarpet: cross-site scripting

Package(s):ruby-redcarpet CVE #(s):
Created:May 12, 2015 Updated:May 13, 2015
Description: From the Mageia advisory:

Redcarpet allows for possible XSS of untrusted markdown if the autolink extension is enabled.

Alerts:
Mageia MGASA-2015-0206 ruby-redcarpet 2015-05-11

Comments (none posted)

springframework: information disclosure

Package(s):springframework CVE #(s):CVE-2014-0225
Created:May 8, 2015 Updated:May 13, 2015
Description:

From the bug report:

When processing user-provided XML documents, the Spring Framework did not, by default, disable the resolution of URI references in a DTD declaration. By observing differences in response times, an attacker could then identify valid IP addresses on the internal network with functioning web servers.

Alerts:
Mageia MGASA-2015-0211 springframework 2015-05-11
Fedora FEDORA-2015-6862 springframework 2015-05-08

Comments (none posted)

suricata: denial of service

Package(s):suricata CVE #(s):CVE-2015-0971
Created:May 11, 2015 Updated:June 1, 2015
Description: From the Debian advisory:

Kostya Kortchinsky of the Google Security Team discovered a flaw in the DER parser used to decode SSL/TLS certificates in suricata. A remote attacker can take advantage of this flaw to cause suricata to crash.

Alerts:
Debian DSA-3254-1 suricata 2015-05-09
Fedora FEDORA-2015-7886 suricata 2015-05-30
Fedora FEDORA-2015-7730 suricata 2015-05-26

Comments (none posted)

testdisk: multiple vulnerabilities

Package(s):testdisk CVE #(s):
Created:May 8, 2015 Updated:May 13, 2015
Description:

TestDisk 7.0 contains multiple unspecified security fixes stemming from Coverity scans and fuzzing; the release notes also link to a report of a code-execution vulnerability via a stack buffer overflow.

Alerts:
Mageia MGASA-2015-0217 testdisk 2015-05-12
Fedora FEDORA-2015-6933 testdisk 2015-05-08

Comments (none posted)

texlive: predictable filenames

Package(s):texlive CVE #(s):
Created:May 13, 2015 Updated:May 13, 2015
Description: From the Red Hat bugzilla:

It was reported that the mktexlsr script uses /tmp in an insecure way.

This is insecure because the filename is predictable and, more importantly, the program doesn't fail atomically if the file already exists.

Alerts:
Fedora FEDORA-2015-7292 texlive 2015-05-12

Comments (none posted)

tomcat6: denial of service

Package(s):tomcat6 CVE #(s):CVE-2014-0230
Created:May 13, 2015 Updated:May 13, 2015
Description: From the Arch Linux advisory:

When a response for a request with a request body is returned to the user agent before the request body is fully read, by default Tomcat swallows the remaining request body so that the next request on the connection may be processed. There was no limit to the size of request body that Tomcat would swallow. This permitted a limited Denial of Service as Tomcat would never close the connection and a processing thread would remain allocated to the connection.

Alerts:
Debian DSA-3530-1 tomcat6 2016-03-25
Ubuntu USN-2654-1 tomcat7 2015-06-25
Ubuntu USN-2655-1 tomcat6 2015-06-25
Arch Linux ASA-201505-8 tomcat6 2015-05-13
Debian-LTS DLA-232-1 tomcat6 2015-05-28

Comments (none posted)

zeromq3: security bypass

Package(s):zeromq3 CVE #(s):CVE-2014-9721
Created:May 11, 2015 Updated:June 11, 2015
Description: From the Debian advisory:

It was discovered that libzmq, a lightweight messaging kernel, is susceptible to a protocol downgrade attack on sockets using the ZMTP v3 protocol. This could allow remote attackers to bypass ZMTP v3 security mechanisms by sending ZMTP v2 or earlier headers.

Alerts:
Debian DSA-3255-1 zeromq3 2015-05-10
Fedora FEDORA-2015-8635 zeromq 2015-05-30
openSUSE openSUSE-SU-2015:1028-1 zeromq 2015-06-10

Comments (none posted)

Page editor: Jake Edge

Kernel development

Brief items

Kernel release status

The current development kernel is 4.1-rc3, released on May 10. Linus said: "Go out and test. By -rc3, things really should be pretty non-threatening and this would be a good time to just make sure everything is running smoothly if you haven't tried one of the earlier development kernels already."

Stable updates: 3.10.77, 3.14.41, 3.19.7, and 4.0.2 were released on May 7. 3.19.8 (the final 3.19 update) followed on May 11. The 4.0.3, 3.14.42, and 3.10.78 updates came out on May 13.

Canonical has announced that it will be maintaining the 3.19 kernel series through July 2016.

Comments (none posted)

Quotes of the week

I dislike "turn off safety for performance" options because Joe SpeedRacer will always select performance over safety.
Dave Chinner

Ingo, I feel like you just gave me a free puppy...
Rusty Russell

— The kernel feature-naming bikeshed committee

Comments (none posted)

Kernel development news

Memory protection keys

By Jonathan Corbet
May 13, 2015
The memory-management units built into most contemporary processors are able to control access to memory on a per-page basis. Operating systems like Linux make that control available to applications in user space; the protection bits supplied to system calls like mmap() and mprotect() allow a process to say whether any given page should be readable, writable, or executable. This level of protection has served for a long time, so one might be tempted to conclude that it provides everything that applications need. But a new hardware feature under development at Intel suggests otherwise; the first round of patches to support this feature explores how programs might gain access to it.

This feature is called "memory protection keys" (MPK); it will only be available in future 64-bit Intel processors. When this feature is enabled, four (previously unused) bits in each page-table entry can be used to assign one of sixteen "key" values to any given page. There is also a new 32-bit processor register with two bits for each key value. Setting the "write disable" bit for a given key will block all attempts to write a page with that key value; setting the "access disable" bit will block reads as well. The MPK feature thus allows a process to partition its memory into a maximum of sixteen regions and to selectively disable or enable access to any of those regions. The control register is local to each thread, so different threads can enable or disable different regions independently.
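
To make that concrete, here is a minimal sketch (not code from the patch set) of how a value for that per-thread register might be computed; the ordering of the two bits within each key's field is an assumption made purely for illustration:

    #include <stdint.h>

    /* Hypothetical disable bits; two bits per key, sixteen keys in 32 bits. */
    #define PKEY_DISABLE_ACCESS 0x1u   /* block reads and writes */
    #define PKEY_DISABLE_WRITE  0x2u   /* block writes only */

    /* Return a new register value with "rights" applied to key "key". */
    static inline uint32_t pkey_reg_set(uint32_t reg, unsigned int key,
                                        uint32_t rights)
    {
        reg &= ~(0x3u << (2 * key));   /* clear this key's two bits */
        reg |= rights << (2 * key);    /* install the new disable bits */
        return reg;
    }

A single write of such a value is all it takes to change the accessibility of every page tagged with a given key.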

A patch set enabling the MPK feature has been posted by Dave Hansen for review even though, as he noted, nobody outside of Intel will be able to actually run that code at this time. Dave is hoping to get comments on the (minimal) user-space API changes needed to support MPK once the hardware is available.

In the proposed design, applications can set the page keys using any of the system calls that set the other page protections — mprotect(), for example. There are four new flags defined (PROT_PKEY0 through PROT_PKEY3) to represent the key bits. Within the kernel, these bits are stored in the virtual memory area (VMA), and pushed into the relevant location in the hardware page tables. If a process attempts to access a page in a way that is not allowed by the protection keys, it will get the usual SIGSEGV signal. Should it catch that signal, it can look for the new SEGV_PKUERR code (in the si_code field of the siginfo_t structure passed to the handler) to detect a fault caused by a protection key. There is not currently a way to determine which key caused the fault, but adding that is on the list of things to do in the future.
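
As a rough illustration of the proposed interface, the following sketch tags a mapping with a key via mprotect() and checks for the new si_code value in a SIGSEGV handler. The numeric values used for PROT_PKEY0 and SEGV_PKUERR are placeholders invented here; nothing below exists in released kernels or C libraries yet:

    #define _GNU_SOURCE
    #include <signal.h>
    #include <sys/mman.h>
    #include <unistd.h>

    /* Proposed by the patch set; these values are placeholders. */
    #ifndef PROT_PKEY0
    #define PROT_PKEY0 0x10
    #endif
    #ifndef SEGV_PKUERR
    #define SEGV_PKUERR 4
    #endif

    static void segv_handler(int sig, siginfo_t *si, void *ctx)
    {
        /* The new si_code value identifies protection-key faults. */
        if (si->si_code == SEGV_PKUERR)
            write(STDERR_FILENO, "protection-key fault\n", 21);
        _exit(1);
    }

    int main(void)
    {
        struct sigaction sa = {
            .sa_sigaction = segv_handler,
            .sa_flags = SA_SIGINFO,
        };
        sigaction(SIGSEGV, &sa, NULL);

        void *page = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                          MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        /* Assign protection key 1 along with the usual protection bits. */
        mprotect(page, 4096, PROT_READ | PROT_WRITE | PROT_PKEY0);
        return 0;
    }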

One might well wonder why this feature is needed when everything it does can be achieved with the memory-protection bits that already exist. The problem with the current bits is that they can be expensive to manipulate. A change requires invalidating translation lookaside buffer (TLB) entries across the entire system, which is bad enough, but changing the protections on a region of memory can require individually changing the page-table entries for thousands (or more) pages. Instead, once the protection keys are set, a region of memory can be enabled or disabled with a single register write. For any application that frequently changes the protections on regions of its address space, the performance improvement will be large.

There is still the question (as asked by Ingo Molnar) of just why a process would want to make this kind of frequent memory-protection change. There would appear to be a few use cases driving this development. One is the handling of sensitive cryptographic data. A network-facing daemon could use a cryptographic key to encrypt data to be sent over the wire, then disable access to the memory holding the key (and the plain-text data) before writing the data out. At that point, there is no way that the daemon can leak the key or the plain text over the wire; protecting sensitive data in this way might also make applications a bit more resistant to attack.

Another commonly mentioned use case is to protect regions of data from being corrupted by "stray" write operations. An in-memory database could prevent writes to the actual data most of the time, enabling them only briefly when an actual change needs to be made. In this way, database corruption due to bugs could be fended off, at least some of the time. Ingo was unconvinced by this use case; he suggested that a 64-bit address space should be big enough to hide data in and protect it from corruption. He also suggested that a version of mprotect() that optionally skipped TLB invalidation could address many of the performance issues, especially if huge pages were used. Alan Cox responded, though, that there is real-world demand for the ability to change protection on gigabytes of memory at a time, and that mprotect() is simply too slow.

Being able to turn off unexpected writes could be especially useful when the underlying memory is a persistent memory device; any erroneous write there will go immediately to permanent storage. There have also been suggestions that tools like Valgrind could make good use of MPK.

Ingo's concerns notwithstanding, the MPK hardware feature is being added in response to customer interest; it would be surprising if the kernel did not end up supporting it, especially given that the required changes are not hugely invasive. So the real question is whether the proposed user-space API is correct and supportable in the long run. Hopefully, developers who think they might make use of this feature will take a look at the patches and make themselves heard if they find something they don't like.

Comments (11 posted)

Persistent memory and page structures

By Jonathan Corbet
May 13, 2015
As is suggested by its name, persistent memory (or non-volatile memory) is characterized by the persistence of the data stored in it. But that term could just as well be applied to the discussions surrounding it; persistent memory raises a number of interesting development problems that will take a while to work out yet. One of the key points of discussion at the moment is whether persistent memory should, like ordinary RAM, be represented by page structures and, if so, how those structures should be managed.

One page structure exists for each page of (non-persistent) physical memory in the system. It tracks how the page is used and, among other things, contains a reference count describing how many users the page has. A pointer to a page structure is an unambiguous way to refer to a specific physical page independent of any address space, so it is perhaps unsurprising that this structure is used with many APIs in the kernel. Should a range of memory exist that lacks corresponding page structures, that memory cannot be used with any API expecting a struct page pointer; among other things, that rules out DMA and direct I/O.

Persistent memory looks like ordinary memory to the CPU in a number of ways. In particular, it is directly addressable at the byte level. It differs, though, in its persistence, its performance characteristics (writes, in particular, can be slow), and its size — persistent memory arrays are expected to be measured in terabytes. At a 4KB page size, billions of page structures would be needed to represent this kind of memory array — too many to manage efficiently. As a result, currently, persistent memory is treated like a device, rather than like memory; among other things, that means that the kernel does not need to maintain page structures for persistent memory. Many things can be made to work without them, but this aspect of persistent memory does bring some limitations; one of those is that it is not currently possible to perform I/O directly between persistent memory and another device. That, in turn, thwarts use cases like using persistent memory as a cache between the system and a large, slow storage array.

Page-frame numbers

One approach to the problem, posted by Dan Williams, is to change the relevant APIs to do away with the need for page structures. This patch set creates a new type called __pfn_t:

    typedef struct {
        union {
            unsigned long data;
            struct page *page;
        };
    } __pfn_t;

As is suggested by the use of a union type, this structure leads a sort of double life. It can contain a page pointer as usual, but it can also be used to hold an integer page frame number (PFN). The two cases are distinguished by setting one of the low bits in the data field; the alignment requirements for page structures guarantee that those bits will be clear for an actual struct page pointer.

A small set of helper functions has been provided to obtain the information from this structure. A call to __pfn_t_to_pfn() will obtain the associated PFN (regardless of which type of data the structure holds), while __pfn_t_to_page() will return a struct page pointer, but only if a page structure exists. These helpers support the main goal for the __pfn_t type: to allow the lower levels of the I/O stack to be converted to use PFNs as the primary way to describe memory while avoiding massive changes to the upper layers where page structures are used.
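
A sketch of what those helpers might look like appears below; the PFN_T_* constants and the __pfn_t_has_page() helper are invented here for illustration and are not necessarily what the patch set actually uses:

    /* One low flag bit marks a raw PFN; struct page pointers are
     * aligned, so their low bits are guaranteed to be clear. */
    #define PFN_T_FLAG  0x1UL  /* set: "data" holds a PFN, not a pointer */
    #define PFN_T_SHIFT 2      /* the PFN is stored above the flag bits */

    static inline bool __pfn_t_has_page(__pfn_t pfn)
    {
        return (pfn.data & PFN_T_FLAG) == 0;
    }

    static inline unsigned long __pfn_t_to_pfn(__pfn_t pfn)
    {
        if (__pfn_t_has_page(pfn))
            return page_to_pfn(pfn.page);  /* ordinary, page-backed memory */
        return pfn.data >> PFN_T_SHIFT;    /* raw PFN, e.g. persistent memory */
    }

    static inline struct page *__pfn_t_to_page(__pfn_t pfn)
    {
        /* Only page-backed memory has a struct page to hand back. */
        return __pfn_t_has_page(pfn) ? pfn.page : NULL;
    }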

With that infrastructure in place, the block layer is changed to use __pfn_t instead of struct page; in particular, the bio_vec structure, which describes a segment of I/O, becomes:

    struct bio_vec {
        __pfn_t         bv_pfn;
        unsigned short  bv_len;
        unsigned short  bv_offset;
    };

The ripple effects from this change end up touching nearly 80 files in the filesystem and block subtrees. At a lower level, there are changes to the scatter/gather DMA API to allow buffers to be specified using PFNs rather than page structures; this change has architecture-specific components to enable the mapping of buffers by PFN.

Finally, there is the problem of enabling kmap_atomic() on PFN-specified pages. kmap_atomic() maps a page into the kernel's address space; it is only really needed on 32-bit systems where there is not room to map all of main memory into that space. On 64-bit systems it is essentially a no-op, turning a page structure into its associated kernel virtual address. That problem gets a little trickier when persistent memory is involved; the only code that really knows where that memory is mapped is the low-level device driver. Dan's patch set adds a function by which the driver can inform the rest of the kernel of the mapping between a range of PFNs and kernel space; kmap_atomic() is then adapted to use that information.
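
One plausible shape for that driver-supplied mapping is sketched below, assuming a single registered range; the structure and function names are invented for illustration:

    /* The pmem driver registers where a PFN range lives in kernel
     * virtual address space; mapping such a PFN is then just
     * arithmetic on 64-bit systems. */
    struct pmem_range {
        unsigned long start_pfn;
        unsigned long nr_pages;
        void *base;             /* kernel virtual address of start_pfn */
    };

    static struct pmem_range pmem;  /* one range, for simplicity */

    void register_pmem_range(unsigned long start_pfn,
                             unsigned long nr_pages, void *base)
    {
        pmem = (struct pmem_range){ start_pfn, nr_pages, base };
    }

    void *kmap_atomic_pfn_t(__pfn_t pfn)
    {
        unsigned long n = __pfn_t_to_pfn(pfn);

        if (n >= pmem.start_pfn && n - pmem.start_pfn < pmem.nr_pages)
            return pmem.base + ((n - pmem.start_pfn) << PAGE_SHIFT);
        return kmap_atomic(__pfn_t_to_page(pfn));  /* ordinary RAM path */
    }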

All together, this patch set is enough to enable direct block I/O to persistent memory. Linus's initial response was on the negative side, though; he said "I detest this approach". Instead, he argued in favor of a solution where special page structures are created for ranges of persistent memory when they are needed. As the discussion went on, though, he moderated his position, saying: "So while I (very obviously) have some doubts about this approach, it may be that the most convincing argument is just in the code." That code has since been reposted with some changes, but the discussion is not yet finished.

Back to page structures

Various alternatives have been suggested, but the most attention was probably drawn by Ingo Molnar's "Directly mapped pmem integrated into the page cache" proposal. The core of Ingo's idea is that all persistent memory would have page structures, but those structures would be stored in the persistent memory itself. The kernel would carve out a piece of each persistent memory array for these structures; that memory would be hidden from filesystem code.

Despite being stored in persistent memory, the page structures themselves would not be persistent — a point that a number of commenters seemed to miss. Instead, they would be initialized at boot time, using a lazy technique so that this work would not overly slow the boot process as a whole. All filesystem I/O would be direct I/O; in this picture, the kernel's page cache has little involvement. The potential benefits are huge: vast amounts of memory would be available for fast I/O without many of the memory-management issues that make life difficult for developers today.

It is an interesting vision, and it may yet bear fruit, but various developers were quick to point out that things are not quite as simple as Ingo would like them to be. Matthew Wilcox, who has done much of the work to make filesystems work properly with persistent memory, noted that there is an interesting disconnect between the lifecycle of a page-cache page and that of a block on disk. Filesystems have the ability to reassign blocks independently of any memory that might represent the content of those blocks at any given time. But in this directly mapped view of the world, filesystem blocks and pages of memory are the same thing; synchronizing changes to the two could be an interesting challenge.

Dave Chinner pointed out that the directly mapped approach makes any sort of data transformation by the filesystem (such as compression or encryption) impossible. In Dave's view, the filesystem needs to have a stronger role in how persistent memory is managed in general. The idea of just using existing filesystems (as Ingo had suggested) to get the best performance out of persistent memory is, in his view, not sustainable. Ingo, instead, seems to feel that management of persistent memory could be mostly hidden from filesystems, just like the management of ordinary memory is.

In any case, the proof of this idea would be in the code that implements it, and, currently, no such code exists. About the only thing that can be concluded from this discussion is that the kernel community still has not figured out the best ways of dealing with large persistent-memory arrays. Likely as not, it will take some years of experience with the actual hardware to figure that out. Approaches like Dan's might just be merged as a way to make things work for now. The best way to make use of such memory in the long term remains undetermined, though.

Comments (1 posted)

Trading off safety and performance in the kernel

By Jonathan Corbet
May 12, 2015
The kernel community ordinarily tries to avoid letting users get into a position where the integrity of their data might be compromised. There are exceptions, though; consider, for example, the ability to explicitly flush important data to disk (or more importantly, to avoid flushing at any given time). Buffering I/O in this manner can significantly improve disk write I/O throughput, but if application developers are careless, the result can be data loss should the system go down at an inopportune time. Recently there have been a couple of proposed performance-oriented changes that have tested the community's willingness to let users put themselves into danger.

O_NOMTIME

A file's "mtime" tracks the last modification time of the file's contents; it is typically updated when the file is written to. Zach Brown recently posted a patch creating a new open() flag called O_NOMTIME; if that flag is present, the filesystem will not update mtime when the file is changed. This change is wanted by the developers of the Ceph filesystem, which has no use for mtime updates:

The ceph servers don't use mtime at all. They're using the local file system as a backing store and any backups would be driven by their upper level ceph metadata. For ceph, slow IO from mtime updates in the file system is as daft as if we had block devices slowing down IO for per-block write timestamps that file systems never use.

Disabling mtime updates, Zach said, can reduce total I/O associated with a write operation by a factor of two or more.

There are, of course, a couple of problems with turning off mtime updates. Trond Myklebust noted that it would break NFS "pretty catastrophically" to not maintain that information; NFS clients would lose the ability to detect when they have stale cached data, leading to potential data corruption. The biggest concern, though, appears to be the effect on filesystem backups; if a file's mtime is not updated when the file is modified, that file will not be picked up in an incremental backup (assuming the backup scheme uses mtime, which most do). A system's administrator might decide to run that risk, but there is the possibility that users may run it for them. As Dave Chinner put it:

The last thing an admin wants when doing disaster recovery is to find out that the app started using O_NOMTIME as a result of the upgrade they did 6 months ago. Hence the last 6 months of production data isn't in the backups despite the backup procedure having been extensively tested and verified when it was first put in place.

Another way of putting it is that the mtime value is often not there for the benefit of the creator of the file; it is often used by others as part of the management of the system. Allowing the creator to disable mtime updates may have implications for those others, who would then have cause to wish that they had been part of that decision before it was made.

Despite the concerns, most developers appear to recognize that there is a real use case for being able to turn off mtime updates. So the discussion shifted quickly to how this capability could be provided without creating unpleasant surprises for system administrators. There appear to be two approaches toward achieving that goal.

The first of those is to not allow applications to disable mtime updates unless the system administrator has agreed to it. That agreement is most likely to take the form of a special mount option; unless a specific filesystem has been mounted with the "allow_nomtime" option, attempts to disable mtime updates on that filesystem will be denied. The second is to hide the option in a place where it does not look like part of the generic POSIX API. In practice, that means that, rather than being a flag for the open() system call, O_NOMTIME will probably become a mode that is enabled with an ioctl() call.
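
For illustration, here is roughly what the interface as originally posted would look like from an application; the flag's value is hypothetical (O_NOMTIME was never part of any released kernel), and the discussion above suggests the final form would be an ioctl() gated by a mount option instead:

    #include <fcntl.h>

    #ifndef O_NOMTIME
    #define O_NOMTIME 040000000  /* hypothetical value, for illustration */
    #endif

    /* Open a backing-store file whose writes should not touch mtime.
     * Under the scheme discussed above, this would only succeed on a
     * filesystem mounted with something like "allow_nomtime". */
    int open_backing_store(const char *path)
    {
        return open(path, O_RDWR | O_NOMTIME);
    }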

Syncing and suspending

Putting a system into the suspended state is a complicated task with a number of steps; in current kernels, one of those steps is to call sys_sync() to flush all dirty file pages back out to persistent storage. It might seem intuitively obvious that saving the contents of files before suspending is a good thing to do, but that has not stopped Len Brown from posting a patch to remove the sys_sync() call from the suspend path.

Len's contention is that flushing disks can be an expensive operation (it can take multiple seconds) and that this cost should not necessarily be paid every time the system is suspended. Doing the sync unconditionally in the kernel, in other words, is a policy decision that may not match what all users want. Anybody who wants file data to be flushed is free to run sync before suspending the system, so removing the call just increases the flexibility of the system.
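
That user-space alternative amounts to only a few lines; here is a minimal sketch using the standard /sys/power/state interface:

    #include <stdio.h>
    #include <unistd.h>

    /* Flush dirty file data, then suspend to RAM; the flush is what
     * the kernel would no longer do implicitly under Len Brown's patch. */
    int suspend_with_sync(void)
    {
        sync();                       /* write dirty pages out to storage */

        FILE *f = fopen("/sys/power/state", "w");
        if (f == NULL)
            return -1;
        fputs("mem", f);              /* request suspend-to-RAM */
        return fclose(f);             /* the write reaches sysfs on flush */
    }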

This change concerns some; Alan Cox was quick to point out some reasons why it makes sense to flush out file data, including the facts that resume doesn't always work and that users will sometimes disconnect drives from a suspended system. It has also been pointed out that, sometimes, a suspended system will never resume due to running out of battery or the kernel being upgraded. For cases like this, it was argued, removing the sys_sync() call is just asking for data to be lost.

Nobody, of course, is trying to make the kernel more likely to lose data. The driving force here is something different: the meaning of "suspending" a system is changing. A user who suspends a laptop by closing the lid prior to tossing it into a backpack almost certainly wants all data written to disk first. But when a system is using suspend as a power-management mechanism, the case is not quite so clear. If a system is able to suspend itself between every keystroke — as some systems are — it may not make sense to do a bunch of disk I/O every time. That may be doubly true on small mobile devices where the power requirements are strict and the I/O devices are slow. On such systems, it may well make sense to suspend the system without flushing I/O to persistent storage first.

The end result is that most (but not all) developers seem to agree that there is value in being able to suspend the system without syncing the disks first. There is rather less consensus, though, on whether that should be the kernel's default behavior. If this change goes in, it is likely to be controlled by a sysctl knob, and the default value of that knob will probably be to continue to sync files as is done in current kernels.

Comments (108 posted)

Patches and updates

Kernel trees

Linus Torvalds Linux 4.1-rc3
Greg KH Linux 4.0.3
Greg KH Linux 4.0.2
Greg KH Linux 3.19.8
Greg KH Linux 3.19.7
Luis Henriques Linux 3.16.7-ckt11
Greg KH Linux 3.14.42
Greg KH Linux 3.14.41
Kamal Mostafa Linux 3.13.11-ckt20
Greg KH Linux 3.10.78
Greg KH Linux 3.10.77
Ben Hutchings Linux 3.2.69

Architecture-specific

Build system

Core kernel code

Development tools

Device drivers

Device driver infrastructure

Documentation

Filesystems and block I/O

Memory management

Networking

Security-related

Virtualization and containers

Paolo Bonzini KVM: x86: SMM support

Miscellaneous

Page editor: Jonathan Corbet

Distributions

Why switch to Fedora?

By Jake Edge
May 13, 2015

Fedora, like other distributions, has often struggled with how to attract new users. It is not always clear what the "pain points" are for new Fedora users, nor what it might take to get users to switch. A recent discussion on the Fedora desktop mailing list highlights some of those issues.

In an April blog post, Christian Schaller noted that a review of GNOME 3.16 was, in some sense, really a review of Fedora Workstation. The review reflected a lot of work the distribution has done to integrate GNOME into the Workstation product, resulting in what Schaller called "a tightly vertically integrated and tested system from the kernel up to core desktop applications". He was disappointed that the reviewer was awaiting GNOME 3.16's appearance in Ubuntu, rather than continuing on with Fedora, so he asked for feedback from blog readers on what it might take to get people to switch to Fedora.

He then summarized those responses in a post to the desktop list on May 7. Some of the responses are something of recurring themes for complaints heard about Fedora: its release cadence, lack of third-party software (including media codecs, drivers, and applications), and pain caused by SELinux. Several of the others were graphics related, including support for NVIDIA Optimus hardware, high-DPI display problems, and multi-monitor support. The rest were a grab bag of user annoyances: lack of a UI for Fedora upgrades, the need for better Android integration, no solution for backups, and the need for a handful of packages that aren't currently available for Fedora.

The third-party software issue occupied much of the thread. The lack of royalty-encumbered codecs (e.g. MP3 for music and H.264 for video) for Fedora is a common complaint about the distribution, especially from less-technical users. Developers are the target users for Fedora Workstation, though, so they may well be able to find Fedora's list of forbidden items as well as information on third-party repositories (which do have solutions for many things that Fedora can't distribute).

Those repositories may suffer from security problems, however, as Elad Alfassa pointed out. In addition, the availability of those repositories doesn't solve all these problems by any means:

[...] drivers are even a bigger mess. How is a user supposed to download a wifi driver when their wifi is not working? Keep in mind that many newer laptops don't have an [ethernet] port at all. If you have a broadcom wireless chip and no ethernet port, you'll need a second device, or a second OS, to find out how to get the driver and how to install it. And if you have a different OS that already works, and Fedora requires you to either replace your wifi chip or figure out the magic command lines to install a driver, why would you make the switch?

Basically, the more time a person needs to spend on learning how to make your OS work the less they'd want to make the switch.

For codecs, though, it would be possible for someone (e.g. Red Hat) to license them for Fedora, as "drago01" noted. Beyond the unlikelihood of any organization actually doing that, there is another problem: the license would not necessarily apply for downstream Fedora remixes. Those distributions would either have to remove the codecs or license them too—not something the Fedora project wants its downstreams to have to deal with. But Fedora is one of the few distributions that has this particular problem, since most other distributions find some way to install the necessary codecs for users that need them.

The age-old tension between seeing users as participants in free-software development versus users that "just want to get their work done" also played out some in the thread. Because users are (mostly) unwilling to mess with their computers to get them working, they gravitate to Windows and Mac OS X, Alfassa said. And they don't switch away:

And the answer to why people are not switching away from Mac or Windows is simple: Mac OS X and Windows both have one thing in common. They work. They don't require too much fiddling (especially in the case of OS X). WiFi works, music works, video works...

But Michael Catanzaro thinks Fedora should be looking at a different question: why aren't users switching to Fedora Workstation from Ubuntu?

We can't really compete with Mac or Windows because we can't run Mac or Windows apps. We can compete just fine with Ubuntu. I don't see why we shouldn't aim to make Fedora Workstation the #1 GNU/Linux distribution.

The question of cadence and the interest in a "rolling" release also came up a bit in the thread. Edward Borasky noted that he moved to Fedora from openSUSE a few years back, partly due to the eight-month release cycle versus six months for Fedora. So he is happy now with Workstation and, even though he would like a rolling release, he wouldn't return to openSUSE for Tumbleweed. There are lots of good distributions, but he wouldn't switch because "they have no *compelling* advantage and I'd have to spend a couple of weeks getting up to speed on the way *they* do things."

He continued:

They're all fine distros, they all have wonderful communities, they all do a good job of tracking upstream, I can compile unpackaged software on them, I can remix them as long as I don't infringe on trademarks, etc. But they aren't "better" than Fedora and Fedora's not really "better" than they are.

So if you want to take users away from Ubuntu, you need a *compelling* advantage. Fedora Workstation has to make users badass at something meaningful in a way that Ubuntu doesn't.

Another pain point that came out in the thread is the current state of Office 365 support in Workstation. "Alex G.S." argued that one of the main reasons people end up on Macs is because they need access to Microsoft's collaboration tools. They work well with OS X, but have a variety of problems on Fedora. From the list he provided, there would seem to be a fairly large hurdle there.

No real conclusions were drawn in the thread. For the most part, there were not truly any surprises in the comments on Schaller's blog post. These complaints have been heard before (and likely will be again). Other distributions have similar lists, with at least some overlap with Fedora's. It will be interesting to see which of the items the Workstation project tries to tackle—and what progress it makes.

Comments (32 posted)

Brief items

Distribution quotes of the week

Linux took a principle and filled in an important technology gap that inspired the filling of a thousand other gaps too. This led to the rise of the venerable Linux distribution, as myriad as consumer-grade platforms such as Ubuntu and Fedora, to server-grade such as CentOS and Debian, and down to the downright weird such as RebeccaBlackOS.
-- Jono Bacon (by way of Opensource.com)

I feel I do not have a lot to say about LXLE because the distribution ran smoothly and offered no surprises.
-- Jesse Smith (DistroWatch review)

Comments (none posted)

The Foresight Linux Project shuts down

The development of the Foresight Linux distribution has come to an end. "The Foresight Linux Council has determined that there has been insufficient volunteer activity to sustain meaningful new development of Foresight Linux. Faced with the need either to update the project's physical infrastructure or cease operations, we find no compelling reason to update the infrastructure."

Full Story (comments: 13)

Distribution News

CentOS

CentOS-7 alpha for AArch64

The CentOS project has announced the availability of an alpha version of CentOS 7 for the AArch64 architecture. "Because this hardware is very new and support for it is still evolving, there is no expectation for kernel ABI compatibility."

Full Story (comments: none)

Newsletters and articles of interest

Distribution newsletters

Comments (none posted)

New Debian Project Leader Talks Open Source Careers, PPAs, and More (Linux.com)

Swapnil Bhartiya talks with Neil McGovern, the new Debian Project Leader.

Why did you choose to be associated with Debian and not any other free software project?

McGovern: I think Debian has a couple of unique attributes. Firstly, it's a true community distribution - we're run by thousands of volunteers. This makes it easy to get involved, and help contribute. The second is our social contract. Our five promises ensure that we will continue to remain open to our users.

Comments (none posted)

Page editor: Rebecca Sobol

Development

Python coroutines with async and await

By Jake Edge
May 13, 2015

It is already possible to create coroutines for asynchronous processing in Python. But a recent proposal would elevate coroutines to a full-fledged language construct, rather than treat them as a type of generator as they are currently. Two new keywords, async and await, would be added to the language to support coroutines as first-class Python features.

A coroutine is a kind of function that can suspend and resume its execution at various pre-defined locations in its code. Subroutines are a special case of coroutines that have just a single entry point and complete their execution by returning to their caller. Python's coroutines (both the existing generator-based and the newly proposed variety) are not fully general, either, since they can only transfer control back to their caller when suspending their execution, as opposed to switching to some other coroutine as they can in the general case. When coupled with an event loop, coroutines can be used to do asynchronous processing, I/O in particular.

Python's current coroutine support is based on the enhanced generators from PEP 342, which was adopted into Python 2.5. That PEP changed the yield statement to be an expression, added several new methods for generators (send(), throw(), and close()), and ensured that close() would be called when generators get garbage-collected. That functionality was further enhanced in Python 3.3 with PEP 380, which added the yield from expression to allow a generator to delegate some of its functionality to another generator (i.e. a sub-generator).

But all of that ties coroutines to generators, which can be confusing and also limits where in the code it is legal to make an asynchronous call. In particular, the with and for statements could conceptually use an asynchronous call to a coroutine, but cannot because the language syntax does not allow yield expressions in those locations. In addition, if a refactoring of the coroutine moves the yield or yield from out of the function (into a called function, for example), it no longer is treated as a coroutine, which can lead to non-obvious errors; the asyncio module works around this deficiency by using a @asyncio.coroutine decorator.

PEP 492 is meant to address all of those issues. The ideas behind it were first raised by Yury Selivanov on the python-ideas mailing list in mid-April; the proposal was enthusiastically embraced by many in that thread, and by May 5 it had been accepted for Python 3.5 by Guido van Rossum. Not only that, but the implementation was merged on May 12. It all moved rather quickly, though it was discussed at length in multiple threads on both python-ideas and python-dev.

The changes are fairly straightforward from a syntax point of view:

    async def read_data(db):
        data = await db.fetch('SELECT ...')
	...

That example (which comes from the PEP) would create a read_data() coroutine using the new async def construct. The await expression would suspend execution of read_data() until the db.fetch() awaitable completes and returns its result. await is similar to yield from, but it validates that its argument is an awaitable.

There are several different types of awaitable. A native coroutine object, as returned by calling a native coroutine (i.e. one defined with async def), is an awaitable, as is a generator-based coroutine that has been decorated with @types.coroutine. Future objects, which represent some processing that will complete in the future, are also awaitable. The __await__() magic method is present for objects that are awaitable.

There is a problem that occurs when adding new keywords to a language, however. Any variables that are named the same as the keyword suddenly turn into syntax errors. To avoid that problem, Python 3.5 and 3.6 will "softly deprecate" async and await as variable names, but not have them be a syntax error. The parser will keep track of async def blocks and treat the keywords differently within those blocks, which will allow existing uses to continue to function.

There are two other uses of async that will come with the new feature: asynchronous context managers (i.e. with) and iterators (i.e. for). Inside a coroutine, these two constructs can be used as shown in these examples from the PEP:

    async def commit(session, data):
        ...

        async with session.transaction():
            ...
            await session.update(data)
            ...
        ...
        async for row in Cursor():
            print(row)

Asynchronous context managers must implement two magic async methods, __aenter__() and __aexit__(), both of which return awaitables, while an asynchronous iterator would implement __aiter__() and __anext__(). Those are effectively the asynchronous versions of the magic methods used by the existing synchronous context manager and iterator.

The main question early on was whether the deferred "cofunction" feature (PEP 3152) might be a better starting point. The author of that PEP, Greg Ewing, raised the issue, but there was a lot of agreement that the syntax proposed by Selivanov was preferable to the codef, cocall, and the like from Ewing's proposal. There was a fair amount of back and forth, but the cofunction syntax for handling certain cases got rather complex and non-Pythonic in the eyes of some. Van Rossum summarized the problems with cofunctions while rejecting that approach.

There were also several suggestions of additional asynchronous features that could be added, but nothing that seemed too urgent. There was some bikeshedding on the keywords (and their order, some liked def async, for example). The precedence of await was also debated at some length, with the result being that, unlike yield and yield from that have the lowest precedence, await has a high precedence: between exponentiation and subscripting, calls, and attribute references.

Mark Shannon complained that there was no need to add new syntax to do what Selivanov was proposing. Others had made similar observations and it was not disputed by Selivanov or other proponents. The idea is to make it easier to program with coroutines. Beyond that, Van Rossum wants the places where a coroutine can be suspended to be obvious from reading the code:

But new syntax is the whole point of the PEP. I want to be able to *syntactically* tell where the suspension points are in coroutines. Currently this means looking for yield [from]; PEP 492 just adds looking for await and async [for|with]. Making await() a function defeats the purpose because now aliasing can hide its presence, and we're back in the land of gevent or stackless (where *anything* can potentially suspend the current task). I don't want to live in that land.

Over a two to three week period, multiple versions of the PEP were posted and debated, with Selivanov patiently explaining his ideas or modifying them based on the feedback. For a feature that seems likely to be quite important in Python's future, the whole process went remarkably quickly—and smoothly. It will probably take a fair amount more time for those ideas to sink in more widely with Python developers.

Comments (7 posted)

Brief items

Quotes of the week

It occurs to me that the subtitle of PEP 493 could be "All software is terrible, but it's often a system administrator's job to make it run anyway" :)
Nick Coghlan

Let’s bring the DOS prompt back! And let a thousand programs bloom!
Jason Scott, announcing the debut of the Internet Archive's in-browser DOS simulator.

Comments (1 posted)

Firefox 38.0

Mozilla has released Firefox 38.0. This version features new tab-based preferences and Ruby annotation support. Also, it will be the base for the next ESR release. The release notes contain more information.

Comments (22 posted)

Update on Digital Rights Management and Firefox

At the Mozilla Blog, Denelle Dixon-Thayer announced that the first Firefox builds to include a "digital rights management" (DRM) module have been released. The module comes from Adobe and, in acknowledgment of its controversial nature, Mozilla will still be offering DRM-free builds for download, instructions on how to remove the module, and a "teaching kit" that details the various sides of the DRM debate.

Comments (5 posted)

Choosing a license for Mailpile 1.0

The Mailpile project is soliciting input on its eventual license. The leading contenders are the Affero GPLv3 and the Apache 2.0 license. The project's post on the question outlines the usual (perceived) advantages and disadvantages of each. "For some, this is an ideological question, a matter of right or wrong. Is it morally acceptable to allow people to benefit from Mailpile's work without contributing back? Is it morally acceptable for Mailpile's authors to tie the hands of other businesses? For others, this is merely a matter of strategy and tactics." In this case, "voting" appears to mean commenting on the topic in various public discussion forums. Mailpile is a highly anticipated project; the results of these discussions will, at the very least, be interesting to watch.

Comments (1 posted)

Glyphr Studio 1.0 released

The open-source, browser-based font editor Glyphr Studio has made its 1.0 release. We noted the development of Glyphr Studio in our recent look at the FontForge development talk from Libre Graphics Meeting 2015. The 1.0 release supports import and export of both OpenType and TrueType fonts, as well as a feature initially slated for after 1.0: the ability to construct glyphs out of reusable components.

Comments (none posted)

GNU inetutils 1.9.3 released

GNU inetutils 1.9.3 is now available. New in this release are the ability to specify a pattern as the payload for ping, several updates to ftp, and a --local-time switch for syslogd that "makes the service ignore a time stamp passed on by the remote host, recording instead the local time at the moment the message was received."

Full Story (comments: none)

Newsletters and articles

Development newsletters from the past week

Comments (none posted)

How OpenStack gets translated (Opensource.com)

Over at Opensource.com, one of the translators for OpenStack, Łukasz Jernaś, is interviewed about the process of translating a large project like OpenStack. "How does OpenStack's release cycle play into the translation process? Is it manageable to get translations done on a six-month release cycle? Most of the work gets done after the string freeze period, which happens around a month before the release, with a lot of it being completed after getting the first release candidate out of the window. Documentation is translated during the entire cycle, as many parts are common between releases and can be deployed independently to the releases. So we don't have to focus that much about deadlines, as it's available online all the time and not prepackaged and pushed out to users and distributions. Of course, having a month to do the translations can be cumbersome, depending on the team doing the translation (some do that part time, some people in their spare time), and how many developers push out new strings during the string freeze."

Comments (none posted)

MathML Accessibility

At his blog, Frédéric Wang has posted a comparison between the Orca screen reader and several proprietary alternatives when they are given the task of reading MathML expressions in Firefox. The audio snippets provided indicate that Orca has some catching up to do. Fortunately, it appears that Mozilla and Orca developers are intent on closing the gap.

Comments (none posted)

Testable Examples in Go

At the Go Blog, Andrew Gerrand provides a look at the language's approach to combining example code and documentation. "Godoc examples are snippets of Go code that are displayed as package documentation and that are verified by running them as tests. They can also be run by a user visiting the godoc web page for the package and clicking the associated "Run" button. Having executable documentation for a package guarantees that the information will not go out of date as the API changes." Each package's examples are compiled as part of the package's test suite; examples that declare their expected output are also executed, and any mismatch between the declared and actual output is reported as a test failure.
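
As a sketch of the pattern the post describes, an example_test.go file like the following (assuming a stringutil package that provides a Reverse function, along the lines of the post's running example) is both rendered by godoc alongside Reverse's documentation and run by "go test":

    package stringutil_test

    import (
        "fmt"

        "github.com/golang/example/stringutil" // assumed import path
    )

    // "go test" runs ExampleReverse and compares what it prints
    // against the "Output:" comment; a mismatch fails the test.
    func ExampleReverse() {
        fmt.Println(stringutil.Reverse("hello"))
        // Output: olleh
    }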

Comments (7 posted)

Page editor: Nathan Willis

Announcements

Brief items

Community activists are the stars of International Day Against DRM

The Free Software Foundation's DefectiveByDesign campaign looks back at the ninth International Day Against DRM. "Protestors at the New York City Apple store were evicted by uncomfortable security guards. Principled cooks in Italy created painfully spicy -- but tasty-looking -- DRM-themed snacks to illustrate the bait-and-switch deception of DRM-encumbered media. And a solitary activist took on the entire University of Illinois at Chicago campus with nothing but a few hundred flyers and an unflappable attitude. As of the time of this writing, we've heard about three times as many organized events as last year, a total of fifteen. Great job, anti-DRM community!"

Full Story (comments: none)

Articles of interest

Ada Initiative newsletter

The Ada Initiative has announced a new Executive Director. Also in the news: a graphic novel about Ada Lovelace and more workshops.

Full Story (comments: none)

Calls for Presentations

Call for Papers - PostgreSQL Conference Europe 2015

PostgreSQL Conference Europe 2015 will be held October 27-30 in Vienna, Austria. Talks may be in English or German. The submission deadline is August 7.

Full Story (comments: none)

CFP Deadlines: May 14, 2015 to July 13, 2015

The following listing of CFP deadlines is taken from the LWN.net CFP Calendar.

Deadline  Event Dates            Event                                     Location
May 15    September 28-30        OpenMP Conference                         Aachen, Germany
May 17    September 16-18        PostgresOpen 2015                         Dallas, TX, USA
May 17    August 13-17           Chaos Communication Camp 2015             Mildenberg (Berlin), Germany
May 23    August 22-23           Free and Open Source Software Conference  Sankt Augustin, Germany
May 23    May 23-25              Wikimedia/MediaWiki European Hackathon    Lyon, France
May 31    October 2-4            PyCon India 2015                          Bangalore, India
June 1    November 18-22         Build Stuff 2015                          Vilnius, Lithuania
June 1    July 3-5               SteelCon                                  Sheffield, UK
June 5    August 20-21           Linux Security Summit 2015                Seattle, WA, USA
June 6    September 29-30        Open Source Backup Conference 2015        Cologne, Germany
June 11   June 25-28             Linux Vacation Eastern Europe 2015        Grodno, Belarus
June 15   August 15-22           DebConf15                                 Heidelberg, Germany
June 15   September 24           PostgreSQL Session 7                      Paris, France
June 15   November 17-18         PGConf Silicon Valley                     San Francisco, CA, USA
June 17   October 5-7            LinuxCon Europe                           Dublin, Ireland
June 17   October 5-7            Embedded Linux Conference Europe          Dublin, Ireland
June 19   August 20              Tracing Summit                            Seattle, WA, USA
June 28   August 28-September 3  ownCloud Contributor Conference           Berlin, Germany
June 28   November 11-13         LDAP Conference 2015                      Edinburgh, UK
June 30   November 16-19         Open Source Monitoring Conference 2015    Nuremberg, Germany
July 5    October 27-29          Open Source Developers' Conference        Hobart, Tasmania

If the CFP deadline for your event does not appear here, please tell us about it.

Upcoming Events

Events: May 14, 2015 to July 13, 2015

The following event listing is taken from the LWN.net Calendar.

Date(s)       Event                                             Location
May 12-14     Protocols Plugfest Europe 2015                    Zaragoza, Spain
May 13-15     GeeCON 2015                                       Cracow, Poland
May 14-15     SREcon15 Europe                                   Dublin, Ireland
May 16-17     11th Intl. Conf. on Open Source Systems           Florence, Italy
May 16-17     MiniDebConf Bucharest 2015                        Bucharest, Romania
May 18-22     OpenStack Summit                                  Vancouver, BC, Canada
May 18-20     Croatian Linux User Conference                    Zagreb, Croatia
May 19-21     SAMBA eXPerience 2015                             Goettingen, Germany
May 20-22     SciPy Latin America 2015                          Posadas, Misiones, Argentina
May 21-22     ScilabTEC 2015                                    Paris, France
May 23-24     Debian/Ubuntu Community Conference Italia - 2015  Milan, Italy
May 23-25     Wikimedia/MediaWiki European Hackathon            Lyon, France
May 30-31     Linuxwochen Linz 2015                             Linz, Austria
June 1-2      Automotive Linux Summit                           Tokyo, Japan
June 3-5      LinuxCon Japan                                    Tokyo, Japan
June 3-6      Latin American Akademy                            Salvador, Brazil
June 5-7      PyCon APAC 2015                                   Taipei, Taiwan
June 8-10     Yet Another Perl Conference 2015                  Salt Lake City, UT, USA
June 10-13    BSDCan                                            Ottawa, Canada
June 11-12    infoShare 2015                                    Gdańsk, Poland
June 12-14    Southeast Linux Fest                              Charlotte, NC, USA
June 16-20    PGCon                                             Ottawa, Canada
June 22-23    DockerCon15                                       San Francisco, CA, USA
June 23-26    Red Hat Summit                                    Boston, MA, USA
June 23-25    Solid Conference                                  San Francisco, CA, USA
June 23-26    Open Source Bridge                                Portland, Oregon, USA
June 25-26    Swiss PostgreSQL Conference                       Rapperswil, Switzerland
June 25-28    Linux Vacation Eastern Europe 2015                Grodno, Belarus
June 26-27    Hong Kong Open Source Conference 2015             Hong Kong, Hong Kong
June 26-28    FUDCon Pune 2015                                  Pune, India
July 3-5      SteelCon                                          Sheffield, UK
July 4-10     Rencontres Mondiales du Logiciel Libre            Beauvais, France
July 6-12     SciPy 2015                                        Austin, TX, USA
July 7-10     Gophercon                                         Denver, CO, USA

If your event does not appear here, please tell us about it.

Page editor: Rebecca Sobol


Copyright © 2015, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds