LWN.net Weekly Edition for July 8, 2016
Preserving the global software heritage
The Software Heritage initiative is an ambitious new effort to amass an organized, searchable index of all of the software source code available in the world (ultimately, including code released under free-software licenses as well as code that was not). Software Heritage was launched on June 30 with a team of just four employees but with the support of several corporate sponsors. So far, the Software Heritage software archive has imported 2.7 billion files from GitHub, the Debian package archive, and the GNU FTP archives, but that is only the beginning.
In addition to the information on the Software Heritage site, Nicolas Dandrimont gave a presentation about the project on July 4 at DebConf; video [WebM] is available. In the talk, Dandrimont noted that software is not merely pervasive in the modern world, but it has cultural value as well: it captures human knowledge. Consequently, it is as important to catalog and preserve as are books and other media—arguably more so, because electronic files and repositories are prone to corruption and sudden disappearance.
Thus, the goal of Software Heritage is to ingest all available software source code, index it in a meaningful way, and provide front-ends for the public to access it. At the beginning, that access will take the form of searching, but Dandrimont said the project hopes to empower research, education, and cultural analysis in the long term. There are also immediate practical uses for a global software archive: tracking security vulnerabilities, assisting in license compliance, and helping developers discover relevant prior art.
The project was initiated by Inria, the French Institute for Research in Computer Science and Automation (which has a long history of supporting free-software development) and as of launch time has picked up Microsoft and Data Archiving and Networked Services (DANS) as additional sponsors. Dandrimont said that the intent is to grow Software Heritage into a standalone non-profit organization. For now, however, there is a small team of full-time employees working on the project, with the assistance of several interns.
The project's servers are currently hosted at Inria, utilizing about a dozen virtual machines and a 300TB storage array. At the moment, there are backups at a separate facility, but there is not yet a mirror network. The archive itself is online, though it is currently accessible only in limited form. Users can search for specific files by their SHA-1 hashes, but cannot browse.
Indices
It does not take much contemplation to realize that Software Heritage's stated goal of indexing all available software is both massive in raw numbers and complicated by the vast assortment of software sources involved. Software Heritage's chief technology officer (CTO) is Stefano Zacchiroli, a former Debian Project Leader who has recently devoted his attention to Debsources, a searchable online database of every revision of every package in the Debian archive.
Software Heritage is an extension of the Debsources concept (which, no doubt, had some influence in making the Debian archive one of the initial bulk imports). In addition to the Debian archive, at launch time the Software Heritage archive also included every package available through the GNU project's FTP site and an import of all public, non-fork repositories on GitHub. Dandrimont mentioned in his talk that the Software Heritage team is currently working with Google to import the Google Code archive and with Archive Team to import its Gitorious.org archive.
Among the three existing sources, the GitHub data set is the largest, accounting for 22 million repositories and 2.6 billion files. For comparison, in 2015, Debsources was reported to include 11.7 million files in just over 40,000 packages. Google Code included around 12 million projects and Gitorious around 2 million.
But those collections account for just a handful of sites where software can be found. Moving forward, Software Heritage wants to import the archives for the other public code-hosting services (like SourceForge), every Linux distribution, language-specific sites like the Python Package Index, corporate and personal software repositories, and (ultimately) everywhere else.
Complicating the task is that this broad scope, by its very nature, will pull in a lot of software that is not open-source or free software. In fact, as Zacchiroli confirmed in an email, the licensing factor is already a hurdle, since so many repositories have no licensing information:
The way I like to think about this is: we want to protect the entire Software Commons. Free/Open Source Software is the largest and best curated part of it; so we want to protect FOSS. Given the long-term nature of Software Heritage, we simply go for all publicly available source code (which includes all of FOSS but is larger), as it will become part of the Software Commons one day too.
For now, Zacchiroli said, the Software Heritage team is focused on finalizing the database of the current software and on putting a reliable update mechanism in place. GitHub, for example, is working with the team to enable ongoing updates of the already imported repositories, as well as adding new repositories as they are created. The team is also writing import tools for ingesting files from a variety of version-control systems (old and new).
Access
Although the Software Heritage archive's full-blown web interface has yet to be launched, Dandrimont's talk provided some details on how it will work, as well as how the underlying stack is designed.
All of the imported archives are stored as flat files in a standard filesystem, including all of the revisions of each file. A PostgreSQL database tracks each file by its SHA-1 hash, with directory-level manifests of which files are in which directory. Furthermore, each release of each package is stored in the database as a directed acyclic graph of hashes, and metadata is tracked on the origin (e.g., GitHub or GNU) of each package and various other semantic properties (such as license and authorship). At present, he said, the archive consists of 2.7 billion files occupying 120TB, with the metadata database taking up another 3.1TB. "It is probably the biggest distributed version-control graph in existence," he added.
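Since every file is identified by its hash, the location of an object can be computed from the hash alone, and identical files imported from different origins need to be stored only once. The short sketch below illustrates that idea; the two-level fan-out layout and the function are invented for illustration and are not Software Heritage's actual code.

    /*
     * A minimal sketch of content-addressed storage: an object's on-disk
     * location is derived purely from its SHA-1 digest, so identical files
     * imported from different origins land in the same place.  The fan-out
     * scheme here is hypothetical, chosen only to keep directories small.
     */
    #include <stdio.h>

    static void object_path(const char *sha1_hex, char *buf, size_t len)
    {
        /* "da39a3ee..." becomes "objects/da/39/da39a3ee..." */
        snprintf(buf, len, "objects/%.2s/%.2s/%s",
                 sha1_hex, sha1_hex + 2, sha1_hex);
    }

    int main(void)
    {
        char path[128];

        object_path("da39a3ee5e6b4b0d3255bfef95601890afd80709", path, sizeof(path));
        printf("%s\n", path);
        return 0;
    }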
Browsing through the web interface and full-text searching are the next features on the roadmap. Following that, downloading comes next, including an interface to grab projects with git clone. Further out, the project's plans are less specific, in part because it hopes to attract input from researchers and users to help determine what features are of interest.
At the moment, he said, the storage layer is fairly basic in its design. He noted that the raw number of files "broke Git's storage model" and that the small file sizes (3kB on average) posed their own set of challenges. He then invited storage experts to get involved in the project, particularly as the team starts exploring database replication and mirroring. The code used by the project itself is free software, available at forge.softwareheritage.org.
Because the archive contains so many names and email addresses, Zacchiroli said that steps were being taken to make it difficult for spammers to harvest addresses in bulk, while still making it possible for well-behaved users to access files in their correct form. "There is a tension here," he explained. The web interface will likely obfuscate addresses and the archive API may rate-limit requests.
The project clearly has a long road ahead of it; in addition to the large project-hosting sites and FTP archives, collecting all of the world's publicly available software entails connecting to thousands if not millions of small sites and individual releases. But what Software Heritage is setting out to do seems to offer more value than a plain "file storage" archive like those offered by Archive Team and the Internet Archive. Providing a platform for learning, searching, and researching software has the potential to attract more investments of time and financial resources, two quantities that Software Heritage is sure to need in the years ahead.
Mozilla Servo arrives in nightly demo form
The Firefox codebase dates back to 2002, when the browser was unbundled from the Mozilla Application Suite—although much of its architecture predates even that split. Major changes have been rare over the years, but recently several long-running Mozilla efforts have started to see the light of day. The most recent of these is the Servo web-rendering engine, for which the first standalone test builds were released on June 30. Although the Servo builds are not full-blown browsers, they enable users to download and test the engine on live web sites for the first time. Servo is designed with speed and concurrency in mind, and if all goes according to plan, the code may work its way into Firefox in due course.
Servo, for those unfamiliar, is a web rendering engine—roughly analogous to Gecko in the current Firefox architecture and WebKit or Blink in other browsers. It does not execute JavaScript, but it is responsible for interpreting HTML and CSS and for performing the vast majority of page-layout operations.
The interesting facets of Servo are that it is written to be extensively parallel in execution and that it is designed to be intrinsically secure against the most common security bugs that plague browsers (and other application software). This security comes by virtue of being developed in the Rust language, which has a variety of built-in memory-safety features. Rust also offers concurrency features that Servo can leverage to do parallel page rendering. As a practical matter, this should enable faster rendering on today's multi-core hardware.
In 2015, we covered a talk at LinuxCon Japan by Lars Bergstrom and Mike Blumenkrantz that explored Servo's design. In that talk, the two speakers cautioned that Servo is a research project and that it is not scheduled to be a drop-in replacement for Gecko—at least, not on the desktop—although they did indicate that certain parts of Servo may be migrated into Gecko.
The June 30 announcement marked the release of a series of pre-built binaries that wrap the Servo engine with a minimalist browser GUI (based on Browser.html). The binaries are automatically built nightly and are initially provided only for Mac OS X and x86_64 Linux, although there are tracking bugs that users can follow to see when Windows and Android builds will arrive. It is also possible to build the nightlies from source; the Servo wiki includes a page about building for Linux on ARM.
Because the nightly builds are not full browsers, the interface leaves out most of the traditional browser chrome. Instead, the browser's start page presents a set of eight tiles linking to some well-known web sites, four tiles linking to graphics-intensive demos, a URL entry bar that doubles as a DuckDuckGo search bar (plus forward and back buttons and a "new tab" button). The same start page is accessible through other browsers. In some non-scientific testing, it is easy to see that Servo loads certain pages faster than recent Firefox releases—Wikipedia and Hacker News stand out among the eight tiles, for instance. On my four-core desktop machine, the difference was about twofold, although to provide a truly fair test, one should compare against Gecko (or another engine) with ad-blocking and tracker-blocking disabled, and with a clear cache.
Or, to be more precise, one could say that Servo begins to show the page sooner than Firefox. In many cases, Firefox takes a long pause to fully lay out the page content before anything is displayed, while Servo begins placing page elements on screen almost immediately, even if it still takes several additional seconds before the page-load progress bar at the top of the window indicates success. That is in keeping with Bergstrom and Blumenkrantz's comments about slow-to-load sites in Firefox: many pages are built with frames and <div> elements that are fetched separately, so loading them concurrently is where much of the time is saved.
The speed difference on the graphics demos was more drastic; the Firefox 47 build I used could barely even animate the Moire and Transparent Rectangle demos, while they ran smoothly on Servo.
The engine already provides good coverage of older HTML and CSS elements, with a few exceptions (frames and form controls, for example). Newer web specifications, including multimedia and web-application–driven standards like Service Workers, tend to be less fully developed. Here again, the Servo wiki provides a page to track the project's progress.
Based on these early test builds, Servo looks promising. There were several occasions where it locked up completely, which would not be too surprising on any nightly build. But it is encouraging to see that it is already faster at rendering certain content than Gecko—and well before the project turns its attention to optimization.
Lest anyone get too excited about Servo's potential to replace Gecko, for the time being there is no such plan on the books. But the plan to patch some Servo components into Gecko or other Firefox modules still appears to be on the roadmap. Tracking bugs exist for a few components, such as Servo's URL parser and CSS style handling. The roadmap also notes that Servo is being looked at as a replacement for Gecko on Android and as a reusable web-rendering engine—a use case Mozilla has not addressed for quite some time.
Although that work still appears to be many releases removed from end users, it is worth noting that Firefox has moved forward on several other long-term projects in the past few months. In June, the first Firefox builds using Electrolysis, Mozilla's project to refactor Firefox for multi-process operation, were made available in the Beta channel. Recent Firefox releases have also begun the move from the old extensions APIs to WebExtensions. Both of those changes are substantial, and both (like Servo) should provide improved security and performance.
Over the years, Mozilla has taken quite a bit of criticism for the aging architecture of Firefox—although, one must point out that Mozilla also takes quite a bit of criticism whenever it changes Firefox. If anything, the new Servo demos provide an opportunity for the public to see that some of Mozilla's research projects can have tangible benefits. One way or another, Firefox will reap benefits from Servo, as may other free-software projects looking for a modern web-rendering engine.
A leadership change for nano
The nano text editor has a long history as a part of the GNU project, but its lead developer recently decided to sever that relationship and continue the project under its own auspices. As often happens in such cases, the change raised concerns from many in the free-software community, and prompted questions about maintainership and membership in large projects.
Nano past
Nano was created in 1999 as a GPL-licensed replacement for the Pico editor, which was originally a component of the Pine email client. Pico and Pine were developed at the University of Washington and, at the time, were distributed under a unique license that was regarded as incompatible with the GPL. Nano's creator, Chris Allegretta, formally moved the project into GNU in 2001.
Like Pico, nano is a text-mode editor optimized for use in terminal emulators. As such, it has amassed a healthy following over the years, particularly as a lightweight alternative to Emacs and vi. Often when one logs into a remote machine to make a small change to a text configuration file, nano can seem to be the wise choice for editing; it loads and runs quickly, is free of extraneous features, and the only keyboard commands one needs to alter a file are helpfully displayed right at the bottom of the window.
But nano has not stayed still. Over the years, it has gained new features like color syntax highlighting, automatic indentation, toggle-able line numbering, and so on. Other programmers have led nano's development for most of its recent history, although Allegretta has served as the GNU package maintainer for the past several years (after having taken a few years off from that duty in the mid-2000s).
Nano present
Over those past few years, Benno Schulenberg has made the most code contributions (by a comfortable margin), and as Allegretta determined that he no longer had the time or inclination to act as maintainer, conversation naturally turned to a formal transition. Much of that conversation took place privately, however, which may have led to the confusion that erupted in late June when the nano site seemed to proclaim that the project was no longer part of GNU.
Specifically, the change went public on June 17, which was the date of the 2.6.0 release. The release notes on the News page ended with the words:
In addition, the ASCII-art logo on the project home page changed from reading "The GNU nano Text Editor Homepage" to "The nano Text Editor homepage" (see the Wayback Machine's archived copy for comparison). The code was also changed to remove the GNU branding, in a June 13 commit by Schulenberg.
Within a few days, the change had been noticed by people outside the project; discussion threads popped up on Hacker News (HN) and Reddit.

Those discussions took the move to be an acrimonious fork by Schulenberg, an interpretation perhaps fueled by GNU project member Mike Gerwitz's comment early on that "Nano has _not_ left the GNU project" and "Benno decided to fork the project. But he did so with hostility: he updated the official GNU Nano website, rather than creating a website for the fork." Gerwitz reported that the incident was fallout from a dispute between Allegretta and Schulenberg. Specifically, Allegretta had wanted to add Schulenberg as a co-maintainer, but Schulenberg had refused to accept the GNU project's conditions of maintainership.
As it turns out, though, the sequence of events that led up to the 2.6.0 release was more nuanced. In May, Schulenberg had asked to roll a new release incorporating several recent changes. Allegretta was slow to respond, and cited several concerns with recent development processes, starting with the fact that GNU requires outside contributors to assign copyright to the Free Software Foundation (FSF)—at least, if the copyright on the package in question is already held by the FSF, which was the case for nano.
Developers working on GNU packages are not, in general, required to assign copyright to the FSF (although the FSF encourages copyright assignment in order to better enable license-compliance enforcement efforts). Schulenberg was unwilling to do the FSF copyright assignment (or any other copyright assignment) or to take on other formal GNU maintainer duties. But the crux of the issue for Allegretta seemed to be that the project was stuck in a place of noncompliance: as a GNU project, it should adhere to the GNU project's rules, but in practice it had not done so for years.
In the email linked-to above, Allegretta proposed that, if the active developers were not interested in following the GNU guidelines, the project could move from GNU Savannah to GitHub or another hosting service and drop the GNU moniker. He then reframed the discussion by starting a new mailing-list thread titled "Should nano stay a GNU program." In that email, he said it "is fine" if Schulenberg is not interested in following the GNU guidelines, but that "we just need to figure out a solution".
In reply, nano developers Mark Majeres, Jordi Mallach, David Ramsey, and Mike Frysinger all said that whether the project continued under the GNU banner would not impact their future participation (although Mallach and Ramsey indicated that staying with GNU would be their preference). In the end, Schulenberg made the commit that removed the GNU branding, and no one objected.
Nano future
After news of the change hit HN and Reddit, Allegretta posted a clarification on his blog, describing the project as "peacefully transitioning" to a new maintainer and clarifying that he, not Schulenberg, had redirected the project's domain name to point to the new web server. Schulenberg filed a support ticket at Savannah asking to have the nano project moved from the gnu to the nongnu subdomain.
There is still the lingering question of whether or not anyone at GNU will wish to continue to develop a GNU-hosted version of nano (and, if so, how downstream users would handle the naming conflict). But it appears to be a hypothetical concern. Although he is active in several GNU projects, it is not clear that Gerwitz is involved in the nano project, and the active nano maintainers all seem to have continued participating as before.
Ultimately, the entire series of events was over long before it became news. Allegretta handed maintainership duties to Schulenberg and the project changed its affiliation. But the various discussion threads on the topic make for interesting reading nonetheless. There seems to be a fair amount of lingering confusion about the GNU project's copyright-assignment practices and what it means for a project to be affiliated with GNU, as well as disagreement over what exactly the role of project maintainer is.
For instance, as HN commenters pointed out, if GNU has been the home of a project for a decade and a half, many would say that an individual forking the project should be obligated to change its name. Conversely, user zx2c4 pointed out that Schulenberg had made considerably more commits in recent years than anyone else. To a lot of participants in the Reddit and HN threads, that fact entitled Schulenberg to make unilateral calls about the direction of the project, even though someone else was the official maintainer.
Maintaining a project means more than merely checking in commits, of course—a fact that most free-software developers readily acknowledge—and, for the record, something that Schulenberg has proven quite comfortable doing in nano's recent past. But the brief public uproar over nano's transition does reveal that, for at least some portion of the community, lines of code committed seem to count for more than formal membership in an umbrella project like GNU. Whether GNU will take any action to address that issue remains to be seen.
Security
Two approaches to reference count hardening
Reference counts are used throughout the kernel to track the lifecycles of objects; when a reference count is decremented to zero, the kernel knows that the associated object is no longer in use and can be freed. But reference counts, like almost any other mechanism, are subject to various sorts of bugs in their usage, and those bugs can lead to exploitable vulnerabilities. So it is not surprising that developers have been interested in hardening the kernel against such bugs for years.

With reference counts, the most common bugs are failure to decrement a counter and decrementing the counter when a reference is not held. Both often happen in error paths and can go undetected for a long time, since those paths are lightly tested at best and rarely executed. An error situation might lead a function to return without performing a necessary decrement, or it may decrement a count that, in fact, had not yet been incremented. But these bugs can pop up in non-error paths as well; they often go unnoticed, since they rarely result in obvious explosions.
Excessive decrements will cause an object to be freed before the last real reference has been released, leading to a classic use-after-free situation. Such errors are often exploitable; see CVE-2016-4557 (and the associated fix) for a recent example. Excessive increments, if they can be provoked by an attacker, lead to a similar scenario: first the counter is overflowed, then decremented back to zero, leading to a premature freeing of the object. CVE-2016-0728 (fixed with this commit) is an example of the trouble that can ensue. Needless to say, it would be nice to catch this type of error before it gets to the point of being exploitable by an attacker.
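To make the failure mode concrete, here is a minimal, self-contained sketch of the extra-decrement case. The object, the helpers, and the plain integer counter are invented for illustration; a real kernel object would use atomic_t or kref.

    /*
     * Illustrative only: a hypothetical refcounted object and an error path
     * that performs one decrement too many.
     */
    #include <stdio.h>
    #include <stdlib.h>

    struct obj {
        int refcount;
        int payload;
    };

    static void obj_put(struct obj *o)
    {
        if (--o->refcount == 0) {
            printf("freeing object\n");
            free(o);
        }
    }

    static int do_operation(struct obj *o, int arg)
    {
        if (arg < 0) {
            obj_put(o);      /* BUG: drops a reference this code never took */
            return -1;
        }
        /* ... normal processing ... */
        return 0;
    }

    int main(void)
    {
        struct obj *o = malloc(sizeof(*o));

        o->refcount = 1;     /* the reference held by main() */
        do_operation(o, -1); /* error path frees the object prematurely... */
        obj_put(o);          /* ...making this put a use-after-free */
        return 0;
    }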
As is so often the case, the oldest work in this area seems to have been done in the PaX project. This work starts with the kernel's atomic_t type, which is often used to implement reference counts. The kernel provides a set of helper functions for performing operations (increments and decrements, for example) on atomic_t variables, so it makes sense to add overflow checks to those functions. That must be done carefully, though, since operations on atomic_t variables are often in hot paths in the kernel; changes that increase the size of the atomic_t type are also unlikely to be accepted.
In the PaX case, the relevant operations, most of which are already implemented in assembly, are enhanced to perform overflow checks. Often that is just a matter of checking the condition-code flags set by the processor as a result of the increment or decrement operation. Should an overflow be detected, the response is architecture-dependent, but results in some sort of kernel trap. The overflow is undone, the process that overflowed the counter is killed, and a message is logged.
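As a rough, architecture-independent illustration of what such a check does (explicitly not the PaX code, which lives in assembly), a checked increment might look like the sketch below; a real implementation would also have to be atomic, for instance via a compare-and-swap loop.

    /*
     * Not the PaX implementation: a sketch of an overflow-checked increment
     * using the GCC/Clang __builtin_add_overflow() builtin in place of the
     * condition-code test that PaX performs in assembly.
     */
    #include <limits.h>
    #include <stdio.h>
    #include <stdlib.h>

    typedef struct { int counter; } counter_t;   /* stand-in, not the kernel's atomic_t */

    static void counter_inc_checked(counter_t *v)
    {
        int new;

        if (__builtin_add_overflow(v->counter, 1, &new)) {
            /* PaX undoes the operation, kills the offending process, and
             * logs a message; this sketch just complains and aborts. */
            fprintf(stderr, "reference-count overflow detected\n");
            abort();
        }
        v->counter = new;
    }

    int main(void)
    {
        counter_t refs = { INT_MAX };

        counter_inc_checked(&refs);   /* triggers the overflow check */
        return 0;
    }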
This checking catches attempts to exploit the overflow (excessive increment) bugs handily; that class of bugs is rendered unexploitable. Excessive decrements are harder to catch, since decrementing a reference count to zero is a part of normal operation. If such a bug exists, though, it will almost certainly show itself by decrementing the counter below zero occasionally, even in normal operations. With checking in place, somebody should notice the problem and it should be fixed.
There is one catch that makes this patch more invasive than one might expect, though: not all uses of atomic_t are reference counts. Other uses, which might legitimately wrap or go below zero, should not have this type of checking enabled. To get to that point, PaX adds an atomic_unchecked_t type and converts a large set of in-kernel users; that leads to a fair amount of code churn.
Back in December, David Windsor posted a version of the PaX reference-count hardening patch set for review. A certain amount of discussion followed, and some problems were pointed out, but there was little opposition to the idea in general. Unfortunately, David vanished shortly thereafter and never followed up with a new version of the patches, so they remain outside of the mainline. Nobody else has stepped up to carry this work forward.
More recently, Jann Horn has posted a different approach to the refcount problem. Rather than change the atomic_t type, this patch set changes the kref mechanism, which exists explicitly for the implementation of reference counts. This choice means that far fewer locations in the kernel will be protected, but it makes the patch set far less invasive and allows testing of the underlying ideas.
Jann's patch set eschews assembly tweaks in favor of entirely architecture-independent checking, a choice which, he later conceded, might not be the most efficient in the end. With this patch in place, special things happen once a reference count reaches a maximum value (0x70000000): after that point, increments and decrements are no longer allowed. In essence, a reference count that large is deemed to have already overflowed, so it is "pinned" at a high number to prevent premature object freeing. No warnings are emitted, and no processes are killed.
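A compressed sketch of that pinning behavior follows. It is illustrative only: the constant and helpers are made up and there is no atomicity, but it captures the rule that a counter which has reached the saturation value is never moved again, and therefore can never wrap around to zero and trigger a premature free.

    /*
     * A sketch of the "pinning" idea, not Jann Horn's actual patch.
     */
    #include <stdio.h>

    #define REFCOUNT_SATURATED 0x70000000u

    struct ref_sketch { unsigned int refcount; };

    static void ref_sketch_get(struct ref_sketch *r)
    {
        if (r->refcount >= REFCOUNT_SATURATED)
            return;                  /* pinned: treated as already overflowed */
        r->refcount++;
    }

    static int ref_sketch_put(struct ref_sketch *r)
    {
        if (r->refcount >= REFCOUNT_SATURATED)
            return 0;                /* pinned: never reaches zero, never freed */
        return --r->refcount == 0;   /* caller frees the object when this returns 1 */
    }

    int main(void)
    {
        struct ref_sketch r = { REFCOUNT_SATURATED };

        ref_sketch_get(&r);
        printf("count 0x%x, put says free=%d\n", r.refcount, ref_sketch_put(&r));
        return 0;
    }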
While he had no objection to the patch as it was, Kees Cook said that he would rather see the checking done at the atomic_t level, since so much reference counting is done that way. Greg Kroah-Hartman agreed, noting that the process of auditing atomic_t users would turn up a lot of places where kref should be used instead. Adding overflow checking to atomic_t would protect kref automatically (since krefs are implemented as a wrapper around atomic_t), so it really does seem that, despite the large number of changes required, this protection should be done at the lower level.
Of course, there is already a working patch set for the detection of atomic_t overflows: the PaX code. The work to separate it out and turn it into a standalone kernel patch has even been done. The flag-day nature of the change (all non-reference-count uses of atomic_t have to change when the semantics of atomic_t do) will make the process of upstreaming this patch a bit harder, but such changes can be made when they are justified. Closing off a class of errors that has demonstrably led to exploitable kernel vulnerabilities would seem like a reasonably strong justification.
Brief items
Security quotes of the week
The search giant today revealed that it’s been rolling out a new form of encryption in its Chrome browser that’s designed to resist not just existing crypto-cracking methods, but also attacks that might take advantage of a future quantum computer that accelerates codebreaking techniques untold gajillions of times over. For now, it’s only testing that new so-called “post-quantum” crypto in some single digit percentage of Chrome desktop installations, which will be updated so that they use the new encryption protocol when they connect to some Google services. But the experiment nonetheless represents the biggest real-world rollout ever of encryption that’s resistant to quantum attacks, and a milestone in the security world’s preparations to head off a potentially disastrous but still-distant quantum cryptopocalypse.
Extracting Qualcomm's KeyMaster Keys - Breaking Android Full Disk Encryption (Bits Please)
The "Bits Please" blog has a detailed description of how one breaks full-disk encryption on an Android phone. Included therein is a lot of information on how full-disk encryption works on Android devices and its inherent limitations. "Instead of creating a scheme which directly uses the hardware key without ever divulging it to software or firmware, the code above performs the encryption and validation of the key blobs using keys which are directly available to the TrustZone software! Note that the keys are also constant - they are directly derived from the SHK (which is fused into the hardware) and from two 'hard-coded' strings. Let's take a moment to explore some of the implications of this finding."
Linux Security Summit schedule published
On his blog, James Morris has announced that the schedule for the Linux Security Summit (LSS) is now available. "The keynote speaker for this year’s event is Julia Lawall. Julia is a research scientist at Inria, the developer of Coccinelle, and the Linux Kernel coordinator for the Outreachy project. Refereed presentations include: The State of Kernel Self Protection Project – Kees Cook, Google; Towards Measured Boot Out of the Box – Matthew Garrett, CoreOS; Securing Filesystem Images for Unprivileged Containers – James Bottomley, IBM; Opportunistic Encryption Using IPsec – Paul Wouters, Libreswan IPsec VPN Project; and Android: Protecting the Kernel – Jeffrey Vander Stoep, Google." LSS will be held August 25-26 in Toronto, co-located with LinuxCon North America.
10 million Android phones infected by all-powerful auto-rooting apps (Ars Technica)
Ars Technica reports on the "HummingBad" malware that has infected millions of Android devices: "Researchers from security firm Check Point Software said the malware installs more than 50,000 fraudulent apps each day, displays 20 million malicious advertisements, and generates more than $300,000 per month in revenue. The success is largely the result of the malware's ability to silently root a large percentage of the phones it infects by exploiting vulnerabilities that remain unfixed in older versions of Android." The article is based on a report [PDF] from Check Point, though the article notes that "researchers from mobile security company Lookout say HummingBad is in fact Shedun, a family of auto-rooting malware that came to light last November and had already infected a large number of devices".
New vulnerabilities
cronic: predictable temporary files
Package(s): cronic
CVE #(s): CVE-2016-3992
Created: July 6, 2016
Updated: July 7, 2016
Description: From the openSUSE bug report:

It looks like cronic uses very predictable temporary files (like /tmp/cronic.out.$$) that depends only on PID:

    OUT=/tmp/cronic.out.$$
    ERR=/tmp/cronic.err.$$
    TRACE=/tmp/cronic.trace.$$
    set +e
    "$@" >$OUT 2>$TRACE
    RESULT=$?
    set -e
graphicsmagick: multiple vulnerabilities
Package(s): GraphicsMagick
CVE #(s): CVE-2014-9805 CVE-2014-9807 CVE-2014-9808 CVE-2014-9809 CVE-2014-9810 CVE-2014-9811 CVE-2014-9813 CVE-2014-9814 CVE-2014-9815 CVE-2014-9816 CVE-2014-9817 CVE-2014-9818 CVE-2014-9819 CVE-2014-9820 CVE-2014-9828 CVE-2014-9829 CVE-2014-9830 CVE-2014-9831 CVE-2014-9834 CVE-2014-9835 CVE-2014-9837 CVE-2014-9839 CVE-2014-9840 CVE-2014-9844 CVE-2014-9845 CVE-2014-9846 CVE-2014-9847 CVE-2014-9853 CVE-2015-8894 CVE-2015-8901 CVE-2015-8903 CVE-2016-5688
Created: July 5, 2016
Updated: July 7, 2016
Description: From the openSUSE advisory:

CVE-2014-9805: SEGV due to a corrupted pnm file. (bsc#983752).
CVE-2014-9807: Double free in pdb coder. (bsc#983794).
CVE-2014-9808: SEGV due to corrupted dpc images. (bsc#983796).
CVE-2014-9809: SEGV due to corrupted xwd images. (bsc#983799).
CVE-2014-9810: SEGV in dpx file handler (bsc#983803).
CVE-2014-9811: Crash in xwd file handler (bsc#984032).
CVE-2014-9813: Crash on corrupted viff file (bsc#984035).
CVE-2014-9814: NULL pointer dereference in wpg file handling (bsc#984193).
CVE-2014-9815: Crash on corrupted wpg file (bsc#984372).
CVE-2014-9816: Out of bound access in viff image (bsc#984398).
CVE-2014-9817: Heap buffer overflow in pdb file handling (bsc#984400).
CVE-2014-9818: Out of bound access on malformed sun file (bsc#984181).
CVE-2014-9819: Heap overflow in palm files (bsc#984142).
CVE-2014-9820: Heap overflow in xpm files (bsc#984150).
CVE-2014-9828: corrupted (too many colors) psd file (bsc#984028).
CVE-2014-9829: Out of bound access in sun file (bsc#984409).
CVE-2014-9830: Handling of corrupted sun file (bsc#984135).
CVE-2014-9831: Handling of corrupted wpg file (bsc#984375).
CVE-2014-9834: Heap overflow in pict file (bsc#984436).
CVE-2014-9835: Heap overflow in wpf file (bsc#984145).
CVE-2014-9837: Additional PNM sanity checks (bsc#984166).
CVE-2014-9839: Theoretical out of bound access in magick/colormap-private.h (bsc#984379).
CVE-2014-9840: Out of bound access in palm file (bsc#984433).
CVE-2014-9844: Out of bound issue in rle file (bsc#984373).
CVE-2014-9845: Crash due to corrupted dib file (bsc#984394).
CVE-2014-9846: Added checks to prevent overflow in rle file (bsc#983521).
CVE-2014-9847: Incorrect handling of "previous" image in the JNG decoder (bsc#984144).
CVE-2014-9853: Memory leak in rle file handling (bsc#984408).
CVE-2015-8894: Double free in coders/tga.c:221 (bsc#983523).
CVE-2015-8901: MIFF file DoS (endless loop) (bsc#983234).
CVE-2015-8903: Denial of service (cpu) in vicar (bsc#983259).
CVE-2016-5688: Various invalid memory reads in ImageMagick WPG (bsc#985442).
imagemagick: many vulnerabilities
Package(s): ImageMagick
CVE #(s): CVE-2014-9806 CVE-2014-9812 CVE-2014-9821 CVE-2014-9822 CVE-2014-9823 CVE-2014-9824 CVE-2014-9825 CVE-2014-9826 CVE-2014-9832 CVE-2014-9833 CVE-2014-9836 CVE-2014-9838 CVE-2014-9841 CVE-2014-9842 CVE-2014-9843 CVE-2014-9848 CVE-2014-9849 CVE-2014-9850 CVE-2014-9851 CVE-2014-9852 CVE-2014-9854 CVE-2015-8900 CVE-2015-8902 CVE-2016-4562 CVE-2016-4564 CVE-2016-5687 CVE-2016-5689 CVE-2016-5690 CVE-2016-5691 CVE-2016-5841 CVE-2016-5842
Created: July 7, 2016
Updated: December 1, 2016
Description: From the openSUSE advisory:
kernel: multiple vulnerabilities
Package(s): kernel
CVE #(s): CVE-2014-9904 CVE-2016-5828 CVE-2016-5829 CVE-2016-6130
Created: July 5, 2016
Updated: July 7, 2016
Description: From the CVE entries:

The snd_compress_check_input function in sound/core/compress_offload.c in the ALSA subsystem in the Linux kernel before 3.17 does not properly check for an integer overflow, which allows local users to cause a denial of service (insufficient memory allocation) or possibly have unspecified other impact via a crafted SNDRV_COMPRESS_SET_PARAMS ioctl call. (CVE-2014-9904)

The start_thread function in arch/powerpc/kernel/process.c in the Linux kernel through 4.6.3 on powerpc platforms mishandles transactional state, which allows local users to cause a denial of service (invalid process state or TM Bad Thing exception, and system crash) or possibly have unspecified other impact by starting and suspending a transaction before an exec system call. (CVE-2016-5828)

Multiple heap-based buffer overflows in the hiddev_ioctl_usage function in drivers/hid/usbhid/hiddev.c in the Linux kernel through 4.6.3 allow local users to cause a denial of service or possibly have unspecified other impact via a crafted (1) HIDIOCGUSAGES or (2) HIDIOCSUSAGES ioctl call. (CVE-2016-5829)

Race condition in the sclp_ctl_ioctl_sccb function in drivers/s390/char/sclp_ctl.c in the Linux kernel before 4.6 allows local users to obtain sensitive information from kernel memory by changing a certain length value, aka a "double fetch" vulnerability. (CVE-2016-6130)
kernel: denial of service
Package(s): kernel
CVE #(s): CVE-2016-5728
Created: July 1, 2016
Updated: July 7, 2016
Description: From the Red Hat bug report:

Race condition vulnerability was found in drivers/misc/mic/vop/vop_vringh.c in the MIC VOP driver in the Linux kernel before 4.6.1. MIC VOP driver does two successive reads from user space to read a variable length data structure. Local user can obtain sensitive information form kernel memory or can cause DoS by corrupting kernel memory if the data structure changes between the two reads.
libarchive: multiple vulnerabilities
Package(s): libarchive
CVE #(s): CVE-2015-8934 CVE-2016-4300 CVE-2016-4301 CVE-2016-4302 CVE-2016-5844
Created: July 6, 2016
Updated: July 7, 2016
Description: From the Mageia advisory:

An out of bounds read in the rar parser: invalid read in function copy_from_lzss_window() when unpacking malformed rar (CVE-2015-8934).

An exploitable heap overflow vulnerability exists in the 7zip read_SubStreamsInfo functionality of libarchive. A specially crafted 7zip file can cause a integer overflow resulting in memory corruption that can lead to code execution. An attacker can send a malformed file to trigger this vulnerability (CVE-2016-4300).

An exploitable stack based buffer overflow vulnerability exists in the mtree parse_device functionality of libarchive. A specially crafted mtree file can cause a buffer overflow resulting in memory corruption/code execution. An attacker can send a malformed file to trigger this vulnerability (CVE-2016-4301).

An exploitable heap overflow vulnerability exists in the Rar decompression functionality of libarchive. A specially crafted Rar file can cause a heap corruption eventually leading to code execution. An attacker can send a malformed file to trigger this vulnerability (CVE-2016-4302).

A signed integer overflow in iso parser: integer overflow when computing location of volume descriptor (CVE-2016-5844).

The libarchive package has been updated to version 3.2.1, fixing those issues and other bugs.
libgd: denial of service
Package(s): libgd
CVE #(s): CVE-2016-6128
Created: July 6, 2016
Updated: July 7, 2016
Description: From the Mageia advisory:

Improperly handling invalid color index in gdImageCropThreshold() could result in denial of service.
libircclient: insecure cipher suites
Package(s): libircclient
CVE #(s): (none)
Created: July 6, 2016
Updated: July 11, 2016
Description: From the openSUSE advisory:

This update for libircclient adjusts the cipher suites from ALL:!ADH:!LOW:!EXP:!MD5:@STRENGTH to
libreoffice: code execution
Package(s): libreoffice
CVE #(s): CVE-2016-4324
Created: June 30, 2016
Updated: November 11, 2016
Description: From the Debian advisory:

Aleksandar Nikolic discovered that missing input sanitising in the RTF parser in Libreoffice may result in the execution of arbitrary code if a malformed documented is opened.
libvirt: authentication bypass
Package(s): libvirt
CVE #(s): CVE-2016-5008
Created: July 1, 2016
Updated: November 11, 2016
Description: From the Debian advisory:

Setting an empty graphics password is documented as a way to disable VNC/SPICE access, but QEMU does not always behave like that. VNC would happily accept the empty password.
mbedtls: three vulnerabilities
Package(s): mbedtls
CVE #(s): (none)
Created: July 5, 2016
Updated: July 28, 2016
Description: From the mbed TLS advisory:

(2.3, 2.1, 1.3) Fixed missing padding length check required by PKCS1 v2.2 in mbedtls_rsa_rsaes_pkcs1_v15_decrypt(). (considered low impact)

(2.3, 2.1, 1.3) Fixed potential integer overflow to buffer overflow in mbedtls_rsa_rsaes_pkcs1_v15_encrypt() and mbedtls_rsa_rsaes_oaep_encrypt(). (not triggerable remotely in (D)TLS).

(2.3, 2.1, 1.3) Fixed potential integer underflow to buffer overread in mbedtls_rsa_rsaes_oaep_decrypt(). It is not triggerable remotely in SSL/TLS.
openstack-ironic: authentication bypass
Package(s): openstack-ironic
CVE #(s): CVE-2016-4985
Created: July 5, 2016
Updated: July 7, 2016
Description: From the Red Hat advisory:

An authentication vulnerability was found in openstack-ironic. A client with network access to the ironic-api service could bypass OpenStack Identity authentication, and retrieve all information about any node registered with OpenStack Bare Metal. If an unprivileged attacker knew (or was able to guess) the MAC address of a network card belonging to a node, the flaw could be exploited by sending a crafted POST request to the node's /v1/drivers/$DRIVER_NAME/vendor_passthru resource. The response included the node's full details, including management passwords, even if the /etc/ironic/policy.json file was configured to hide passwords in API responses.
phpMyAdmin: code execution
Package(s): phpMyAdmin
CVE #(s): CVE-2016-5734
Created: July 5, 2016
Updated: July 7, 2016
Description: From the CVE entry:

phpMyAdmin 4.0.x before 4.0.10.16, 4.4.x before 4.4.15.7, and 4.6.x before 4.6.3 does not properly choose delimiters to prevent use of the preg_replace e (aka eval) modifier, which might allow remote attackers to execute arbitrary PHP code via a crafted string, as demonstrated by the table search-and-replace implementation.
sqlite3: information leak
Package(s): sqlite3
CVE #(s): CVE-2016-6153
Created: July 6, 2016
Updated: August 12, 2016
Description: From the Debian LTS advisory:

It was discovered that sqlite3, a C library that implements a SQL database engine, would reject a temporary directory (e.g., as specified by the TMPDIR environment variable) to which the executing user did not have read permissions. This could result in information leakage as less secure global temporary directories (e.g., /var/tmp or /tmp) would be used instead.
struts: multiple vulnerabilities
Package(s): struts
CVE #(s): CVE-2016-1181 CVE-2016-1182
Created: July 1, 2016
Updated: July 11, 2016
Description: From the Fedora advisory:

CVE-2016-1181 - Vulnerability in ActionForm allows unintended remote operations against components on server memory.
CVE-2016-1182 - Improper input validation in Validator.
wordpress: multiple vulnerabilities
Package(s): wordpress
CVE #(s): CVE-2016-5832 CVE-2016-5833 CVE-2016-5834 CVE-2016-5835 CVE-2016-5836 CVE-2016-5837 CVE-2016-5838 CVE-2016-5839
Created: July 1, 2016
Updated: August 4, 2016
Description: From the CVE entries:

CVE-2016-5832 - The customizer in WordPress before 4.5.3 allows remote attackers to bypass intended redirection restrictions via unspecified vectors.
CVE-2016-5833 - Cross-site scripting (XSS) vulnerability in the column_title function in wp-admin/includes/class-wp-media-list-table.php in WordPress before 4.5.3 allows remote attackers to inject arbitrary web script or HTML via a crafted attachment name, a different vulnerability than CVE-2016-5834.
CVE-2016-5834 - Cross-site scripting (XSS) vulnerability in the wp_get_attachment_link function in wp-includes/post-template.php in WordPress before 4.5.3 allows remote attackers to inject arbitrary web script or HTML via a crafted attachment name, a different vulnerability than CVE-2016-5833.
CVE-2016-5835 - WordPress before 4.5.3 allows remote attackers to obtain sensitive revision-history information by leveraging the ability to read a post, related to wp-admin/includes/ajax-actions.php and wp-admin/revision.php.
CVE-2016-5836 - The oEmbed protocol implementation in WordPress before 4.5.3 allows remote attackers to cause a denial of service via unspecified vectors.
CVE-2016-5837 - WordPress before 4.5.3 allows remote attackers to bypass intended access restrictions and remove a category attribute from a post via unspecified vectors.
CVE-2016-5838 - WordPress before 4.5.3 allows remote attackers to bypass intended password-change restrictions by leveraging knowledge of a cookie.
CVE-2016-5839 - WordPress before 4.5.3 allows remote attackers to bypass the sanitize_file_name protection mechanism via unspecified vectors.
xerces-c: denial of service
Package(s): xerces-c
CVE #(s): CVE-2016-4463
Created: June 30, 2016
Updated: July 7, 2016
Description: From the Debian advisory:

Brandon Perry discovered that xerces-c, a validating XML parser library for C++, fails to successfully parse a DTD that is deeply nested, causing a stack overflow. A remote unauthenticated attacker can take advantage of this flaw to cause a denial of service against applications using the xerces-c library.
Page editor: Jake Edge
Kernel development
Brief items
Kernel release status
The current development kernel is 4.7-rc6, which was released on July 3. "I'd love to tell you that things are calming down, and we're shrinking, but that would be a lie. It's not like this is a huge rc, but it's definitely bigger than the previous rc's were. I don't think that's necessarily a big problem, it seems to be mostly timing."
The fourth edition of Thorsten Leemhuis's 4.7 regression list was posted on July 2. It contains fourteen entries, including two new ones and four that have fixes headed toward the mainline.
Stable updates: There have been no stable releases in the last week, though the 4.6.4 stable kernel is in the review process; it can be expected around July 9.
Kernel development news
USB charging, part 1: requirements
USB, the Universal Serial Bus, was primarily designed to transfer data to and from peripherals, with a secondary function of providing power to those peripherals so that they don't need to be independently powered. This secondary function has gained importance over the years, so that today it is sometimes the primary or only function. Many smartphone owners regularly use USB to charge their device but rarely, if ever, transfer data over USB. When Linux is running on that smartphone it needs to be able to optimize the use of whatever power is available over the bus — a task that is neither straightforward nor cleanly implemented in mainline Linux. We start this two-part series by looking at how USB communicates power availability information and will conclude in the second part by looking at how Linux does, or more often doesn't, make use of this information.
To begin, it will be helpful to be clear about some terminology. USB is an asymmetric bus — the two connected peers each play different roles. One peer is the master, or initiator, and controls all the data flow; it is known as the "host" and has an A-series connector (or receptacle) or, possibly, a B-series plug. The other peer is the slave, or responder, that can only send or receive data when the host tells it to. This is known as a USB "device" though, since "device" is an overly generic term, it is sometimes referred to as a "gadget". A USB gadget has a B-series connector, of which there are a range of sizes, or an A-series plug.
A USB cable connecting a host to a gadget typically has 4 wires. Two carry power, VBUS and GND, and two carry data, DP and DM, also known as D+ and D-. Power normally flows from the host to the gadget, but this is not universal as we shall see later. USB 3.0 adds extra wires and pins to the A-series and B-series connectors to carry "SuperSpeed" data, but only changes power delivery in that power flowing in different directions can flow over different wires. The USB 3.0 C-series cable is bidirectional and does add extra power signaling options, but the details of that would not particularly help the present discussion.
Some USB peers can serve as either a host or a gadget using a specification known as USB On-The-Go — USB-OTG. These devices have a B-series connector with a fifth pin called ID. The port will act as a gadget port unless a cable is plugged in that connects the ID pin to GND, possibly through a resistor. When that happens, the device will switch the port to host mode so it can control an attached gadget.
From the perspective of a mobile battery-powered Linux device with a B-series port, the important question is: how much power can be drained from the bus and used in the device? As the voltage is fixed at 5V ±5% this is equivalent to a question of how much current can be drained, and the answer is usually given in milliamps (mA). The device will typically have power-management circuitry that can limit the current used, and other circuitry that will divert some to charging the battery when that is appropriate, but those details are not really important at this stage. For now, we only care about a number.
The USB Implementers Forum provides several multi-page specifications describing how to get that number, in particular the Battery Charging v1.2 document ("BC-1.2", which is my primary source and comes from the Class Specification page) and the newer USB Power Delivery spec. In practice, there are two classes of answers.
Current from a Standard Downstream Port
The first class of answers applies when the device is connected by a standard cable to a standard A-series host port such as on a notebook or desktop computer. A USB host provides a "Standard Downstream Port" (SDP). These ports must provide 100mA for at least the first second after attachment and can be configured to deliver more — up to 500mA in USB-2 — after enumeration has completed. If no enumeration happens, the port is expected to become suspended after 1 second at which point only 2.5mA is available.
Enumeration involves the host asking the gadget (in this case, our mobile Linux device) about its configuration options, and then requesting that some specific configuration be activated. More details on enumeration can be found in an earlier article on the USB composite framework. A configuration includes the amount of current that will be required, which may be close to zero for a separately powered device, or may be the maximum supported for something that is power hungry and fully bus-powered. The host knows what it can provide and will ignore any configuration that requires too much power.
This protocol is quite suitable for a gadget that is dependent on bus power and needs a certain amount of current or else it cannot reliably function. It is less suitable for a battery-powered gadget like a smartphone that can function with no bus-power at all, but would be happy to receive as much as is available. Such a device can present two (or more) distinct configurations to the host: one that claims to require 500mA and a second one that requires zero. A host with power to spare should activate the first one. A host that cannot provide this power should reject the first and accept the second.
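In descriptor terms, the current requested by each configuration is carried in its bMaxPower field, expressed in units of 2mA. The sketch below uses a simplified, hypothetical structure (not the Linux gadget API) to show how such a battery-backed gadget might describe a 500mA configuration alongside a near-zero fallback.

    /*
     * A simplified, hypothetical configuration table showing a full-power
     * configuration and a near-zero fallback.  As in real USB descriptors,
     * bMaxPower is in units of 2mA.
     */
    #include <stdint.h>
    #include <stdio.h>

    struct config_sketch {
        uint8_t bConfigurationValue;
        uint8_t bmAttributes;    /* bit 7 always set; bit 6 means self-powered */
        uint8_t bMaxPower;       /* units of 2mA */
        const char *label;
    };

    static const struct config_sketch configs[] = {
        { 1, 0x80, 250, "bus-powered, wants 500mA" },  /* 250 * 2mA = 500mA */
        { 2, 0xc0,   1, "self-powered, wants 2mA"  },  /* near-zero fallback */
    };

    int main(void)
    {
        for (unsigned int i = 0; i < sizeof(configs) / sizeof(configs[0]); i++)
            printf("config %d: %s (bMaxPower = %dmA)\n",
                   configs[i].bConfigurationValue, configs[i].label,
                   configs[i].bMaxPower * 2);
        return 0;
    }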
Current from other port types
There are a variety of other port types that a USB gadget can find itself connected to. A Dedicated Charging Port (DCP) provides power, at least 500mA, but no data. A Charging Downstream Port (CDP) provides data access much like an SDP, but also provides at least 1.5A (and as much as 5A), available even before bus enumeration. USB C-series connectors introduce more options with the same maximum current of 5A, though there is the possibility of increasing the voltage up to 20V, which would yield 100W of power.
For USB-OTG there is an extra port type, the Accessory Charger Adapter (ACA) as well as an extended form: the ACA-Dock. The ACA switches the OTG port into host mode, but also provides power to it, rather than requiring power from it. The ACA-Dock provides power and can switch the OTG port between host and gadget mode, presumably based on what else is plugged into the dock. An ACA-Dock will provide at least 1.5A, while a simple ACA can provide as little as 500mA.
Each of these peers can be detected directly by the USB PHY — the circuitry responsible for the physical interface with the bus. This detection happens without needing to enter enumeration negotiations, so if power is available it can be accessed quickly.
USB connection negotiations between a host and a gadget start with the host providing a 5V level on VBUS and the gadget PHY detecting this voltage. The PHY then advertises its existence to the host by pulling one of DP or DM up to the same level as VBUS, the choice of pin giving some indication of supported bus speed. At this point the host starts enumeration. Before it pulls a data pin all the way up, though, the PHY can send other signals at a lower voltage and check the response. The simplest case involves setting a low voltage (e.g. 2V) on DP and checking whether it is echoed back on DM. Seeing the same 2V on DM strongly implies that the DP and DM lines are shorted together, which is how a dedicated charger (DCP) can be detected. A similar, but more complex, signaling sequence will detect a CDP.
For a USB-OTG port, the ID pin is supplied with 5V and, when there is no cable plugged in or when a normal 4-pin B-series plug is in place, it will stay at 5V drawing no current. As mentioned, an OTG cable will pull this pin down to GND and if a resistor is used to pull it down the resistance indicates the type of device. If ID is a short-circuit to GND, or shows a resistance of at most 1kOhm, then a simple gadget that doesn't provide any power is being attached. The OTG must provide 5V to VBUS and cannot expect anything back. If a resistance is measured between 1kOhm and 200kOhm (the minimum to be considered open-circuit), then some sort of ACA is implied and the specific resistance indicates the type of ACA.
The USB PHY controller in the mobile device can usually perform these various tests in hardware, possibly with some software support, the moment that a voltage change is detected, and can make the results available to the Linux driver. The Linux driver then just needs to tell the power-management driver how much current to draw.
A range of options
When connected to an SDP, and after a configuration has been negotiated, a simple number is available to the Linux device so it knows how much current it can draw — the number that was requested in the configuration that was activated. In theory, it should be safe to draw that much current at 5V. When attached to other port types, it isn't quite so simple.
According to BC-1.2, the current provided by a dedicated charger, DCP, is IDCP, which has a minimum of 500mA and a maximum of 5A. Similarly, a CDP provides ICDP, which ranges from 1.5A to 5A. ACA configurations have ranges too, the lower ends of which were mentioned earlier. Setting a current limiter in the portable device to a single number is normally quite simple. If we only have a range of allowable values, it isn't immediately clear what number we should use.
The intended meaning of these ranges is that the power source must provide at least the minimum listed current at a voltage within 5% of the 5V target, so at least 4.75V. Demands for higher current may cause the voltage to drop and, at some point, the supply may cut out completely. The supply should not cut out while the current is below the maximum, unless the voltage has first dropped below 2V.
A graph accompanying the online version of this article shows hypothetical power curves that chargers could present while remaining within specification. As the load increases, the current provided increases, and then eventually the voltage starts to drop. The key requirements of the specification are that the trace must never enter the grey areas and that the charger must not shut down (depicted by a black disk) before the trace enters a pink area. This blog post shows some power curves presented by real devices.
Pulling all of this together, the negotiations on the bus will provide an upper and lower bound for the amount of current that can be drawn. The device can safely request the minimum and if that is less than the maximum it can then slowly increase the requested current until the maximum is reached, or until the supplied voltage drops below some device-specific threshold that must be greater than 2V and at most 4.75V. Each voltage level should be averaged over 250ms to avoid being confused by transients.
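As a rough sketch of that ramping logic (illustrative Python, not kernel code; the two hardware helpers are made-up placeholders):

import time

def set_current_limit_ma(ma):
    pass  # placeholder: would program the charger's input-current limit

def read_vbus_mv():
    return 5000  # placeholder: would read VBUS through an ADC channel

def average_vbus_mv(window_s=0.25, samples=25):
    # Average VBUS over the window so a brief transient is not mistaken
    # for a genuine voltage collapse.
    total = 0
    for _ in range(samples):
        total += read_vbus_mv()
        time.sleep(window_s / samples)
    return total / samples

def ramp_current(min_ma, max_ma, cutoff_mv=4500, step_ma=50):
    # Start at the negotiated minimum, then creep toward the maximum,
    # backing off one step if the averaged VBUS voltage sags below
    # cutoff_mv (which must lie between 2.0V and 4.75V).
    limit = min_ma
    set_current_limit_ma(limit)
    while limit < max_ma:
        trial = min(limit + step_ma, max_ma)
        set_current_limit_ma(trial)
        if average_vbus_mv() < cutoff_mv:
            set_current_limit_ma(limit)   # retreat to the last good value
            break
        limit = trial
    return limit

For a DCP, for example, the call would be ramp_current(500, 5000), reflecting the 500mA-to-5A range given above.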
While this procedure should be safe, it is not unheard of for charging hardware to be less than perfect, and some cables can introduce more resistance than one would like. So a truly cautious driver would start requesting current well below the negotiated minimum and advance slowly with large margins for error.
Dead batteries — when we don't have the luxury of time
One situation that is particularly difficult to handle with USB charging is that of a dead battery. If a device had a dedicated power supply, with known characteristics, and, in particular, if it was certain of a supply of a few hundred mA, then the hardware could be designed to boot with only external power — even if there is no battery. With USB there is no such guarantee. The only guarantee that USB provides without any negotiation is that 100mA can be used for up to 1 second; longer term, only 2.5mA is certain to be available. 100mA is barely enough power to boot Linux on current hardware; 2.5mA is certainly insufficient.
One option is to simply ignore the specification and pull 100mA anyway. Many chargers will support this and those that don't should just shut down, which is no worse than trying to survive with 2.5mA. There is, however, a better option.
BC-1.2 defines a Dead Battery Provision (DBP) which allows for an extremely simple handshake to request that 100mA be available for a longer term — up to 45 minutes. If the gadget device places a voltage of 0.5V on the DP pin, this will keep the bus from entering the suspended mode, so the drop to 2.5mA never happens. When making use of the DBP, the device is strongly encouraged to limit its activities to only those which are required to charge the battery to the "Weak Battery Threshold", and then activate just enough hardware to properly respond to USB enumeration and negotiate available power correctly.
To proceed correctly requires hardware that knows to set DP appropriately when the battery is flat, and software to avoid turning on any unnecessary components until the power state has been properly established.
The task at hand
To summarize, the tasks that Linux must support for full compliance are:
- find out from the USB PHY what type of cable is attached and report this to the battery charger
- advertise USB gadget configurations with appropriate power demands
- determine which gadget configuration was chosen and report the available power to the battery charger
- adjust current within the given range to maintain suitable voltage
- detect when the power supply is questionable during boot and limit activation of components until that is resolved
In the concluding article, we will have a look at the various device types within Linux that are responsible for these different tasks and at how they work together to produce a charging solution.
Kernel documentation with Sphinx, part 1: how we got here
The last time LWN looked at formatted kernel documentation in January, it seemed like the merging of AsciiDoc support for the kernel's structured source-code documentation ("kernel-doc") comments was imminent. As Jonathan Corbet, in his capacity as the kernel documentation maintainer, wrote: "A good-enough solution that exists now should not be held up overly long in the hopes that vague ideas for something else might turn into real, working code." Sometimes, however, the threat that something not quite perfect might be merged is enough to motivate people to turn those vague ideas into something real.
In the end, Sphinx and reStructuredText are emerging as the future of Linux kernel documentation, with far more ambitious goals than the original AsciiDoc support patches ever had. With the bulk of the infrastructure work now merged to the docs-next branch headed for v4.8, it's a good time to reflect on how this came to happen and give an overview of the promising future of kernel documentation.
Background
The patches to support lightweight markup (initially using Markdown, later AsciiDoc) in kernel-doc comments were borne out of a desire to write better documentation for the graphics subsystem. One of the goals was to enhance the in-source graphics subsystem internals documentation for two main reasons. First, if the documentation is next to the code it describes, the documentation has a better chance of being updated along with the code. Second, if the documentation can be written in plain text rather than DocBook XML, it's more likely to be written in the first place.
However, plain text proves to be just a little too plain when you venture beyond documenting functions and types, or if you want to generate pretty HTML or PDF documents out of it. Adding support for lightweight markup in the kernel-doc comments was the natural thing to do. However, bolting this to the existing DocBook toolchain turned out to be problematic.
As part of the documentation build process, the scripts/kernel-doc script extracts the structured comments and emits them in DocBook format. The kernel-doc script supports some structure but fairly little formatting. To fit into this scheme, the lightweight markup support patches caused kernel-doc to invoke an external conversion tool (initially pandoc, later asciidoc) on each documentation comment block to convert them from lightweight markup to DocBook. This was painfully slow.
Doing the conversion in kernel-doc kept the DocBook pipeline side of things mostly intact and oblivious to any markup, but it added another point of failure in the already long and fragile path from comments to HTML or PDF. Problems with markup and mismatches at each point of conversion made debugging challenging. The tools involved were not designed to work together and often disagreed about when and how markup should be applied.
It was clear that this was not the best solution, but at the time it worked and there was nothing else around.
AsciiDoc all-in, muddying the waters
Inspired by Jonathan's article and frustrated by the long documentation build times (we were testing the patches in the Intel graphics integration tree), I had the idea to make kernel-doc output AsciiDoc directly instead of DocBook. Converting the few structural features in the comments to AsciiDoc and just passing through the rest was trivial; kernel-doc already supported several output formats with reasonable abstractions. Like many ideas, this was the obvious thing to do—in retrospect. Suddenly, this opened the door to writing all of the high-level documents under Documentation/DocBook in AsciiDoc, embedding the documentation comments at that level, and getting rid of the DocBook template files altogether. This has massive benefits, and Jonathan soon followed up with a proof-of-concept that did just that.
There was a little bit of excited buzz around this, with folks exploring, experimenting, and actually trying things out with document conversion. A number of conversations between interested developers at linux.conf.au seemed to further confirm that this was the path forward. But, just when it felt like people were settling on switching to doing everything in AsciiDoc, Jonathan muddied the waters by taking a hard look at Sphinx as an alternative to AsciiDoc.
Sphinx vs. AsciiDoc
Sphinx is a documentation generator that uses reStructuredText as its markup language, extending and using Docutils for parsing. Both Sphinx and Docutils were created in Python to document Python, but documenting C and C++ is also supported. Sphinx supports several output formats directly, such as HTML, LaTeX, and ePub, and supports PDF output via either LaTeX or the external rst2pdf tool.
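For readers who have not used it, a Sphinx project needs little more than a conf.py and some .rst files; the following minimal configuration is a generic illustration only, not the kernel's setup (which part 2 will cover):

# conf.py -- minimal, generic Sphinx configuration (not the kernel's)
project = 'example-docs'
master_doc = 'index'        # top-level document, index.rst
extensions = []             # Sphinx extensions would be listed here
html_theme = 'alabaster'
# Build with, for example:  sphinx-build -b html . _build/html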
The AsciiDoc format, on the other hand, is semantically equivalent to DocBook XML, with the DocBook constructs expressed in terms of lightweight markup. AsciiDoc is easier for humans to read and write than XML, but since it is designed to translate to DocBook, it fits nicely in front of an existing DocBook toolchain. The original Python AsciiDoc tool has been around for a long time, but has been superseded by a Ruby reimplementation called Asciidoctor in recent years. As far as the AsciiDoc markup goes, Asciidoctor was designed to be a drop-in replacement, but any extensions are implementation-specific due to the change in implementation language. Both tools support HTML and DocBook output natively; other output formats are generated from DocBook.
When comparing the markup formats for the purposes of kernel documentation, only the table support, which is much needed for the media subsystem documentation in particular, was clearly identified as being superior in AsciiDoc. Otherwise, the markup comparison was rather dispassionate; it really boiled down to the tools themselves and, to some extent, which languages the tools were written in. Indeed, the markups and tools were not independent choices. All the lightweight markups have their pros and cons.
Superficially, the implementation language of the tools shouldn't play any role in the decision. But it seemed that neither tool would work as-is, or at least we wouldn't be able to get their full potential without extending the tools ourselves. In the kernel tree, there are no tools written in Ruby, but there are plenty of tools written in Python. It was fairly easy to lean towards Sphinx in this regard.
If you are looking for flexibility, one great advantage of AsciiDoc is that it's so closely tied to DocBook. By switching to AsciiDoc, the kernel documentation could reuse the existing DocBook toolchain. The downside is that AsciiDoc would add another step in front of the already fragile DocBook toolchain. Dan Allen of Asciidoctor said: "One of the key goals of the Asciidoctor project is to be able to directly produce a wide variety of outputs from the same source (without DocBook)." However, this support isn't quite there yet.
The Asciidoctor project has a promising future. But Sphinx is stable, available now, and fits the needs of the kernel. Grant Likely summed it up this way: "Honestly, in the end I think we could make either tool do what is needed of it. However, my impression after trying to do a document that needs to have nice publishable output with both tools is that Sphinx is easier to work with, simpler to extend, better supported."
In the end, Jonathan's verdict was to go with Sphinx. The patches have been merged, and the first Sphinx-based documentation will appear in the 4.8 kernel.
The second and final part of this series will look into how the kernel's new Sphinx-based toolchain works and how to write documentation using it.
Patches and updates
Kernel trees
Architecture-specific
Core kernel code
Development tools
Device drivers
Device driver infrastructure
Documentation
Filesystems and block I/O
Memory management
Networking
Security-related
Page editor: Jake Edge
Distributions
Fedora and SELinux relabeling
Relabeling filesystems for SELinux is an important, if unloved, requirement for new filesystems or when updating SELinux policy. That relabeling process normally occurs at boot time, but there needs to be "enough" of the system running to support the operation, which can be tricky to arrange. Fedora recently struggled with this problem and a reasonable solution appears to have been found.
SELinux relies on filesystem extended attributes (xattrs) to store the context (or label) for each file. That context is then used by the SELinux security module in the kernel to make access-control decisions based on the policy, which gets loaded at boot time. Thus, when the policy changes or a new filesystem is created, the file contexts will need to be set or adjusted, which is done by walking the mounted filesystems and setting the xattrs appropriately.
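Those labels are ordinary extended attributes, so they can be inspected directly; the following is a small illustrative snippet (real tooling goes through libselinux and utilities such as restorecon):

import os

# Read the SELinux context stored as an extended attribute on a file.
raw = os.getxattr("/etc/passwd", "security.selinux")
print(raw.decode().rstrip("\x00"))
# Typically prints something like: system_u:object_r:passwd_file_t:s0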
But, as Richard W. M. Jones noted in a post to the Fedora development mailing list, getting the system to the point where the fedora-autorelabel.service can run entails getting pretty far into the boot process. Since the file labels have not yet been changed, though, some of the startup services and programs can fail due to SELinux denials. So systemd may not be able to get to the point of running the relabel service and the system "can be dumped into a rescue shell, or worse still go into a boot loop".
Relabeling is typically triggered by the presence of a file called /.autorelabel at boot time or by the autorelabel boot parameter. The process requires that the local-fs.target be reached and "dozens of services need to be started successfully before we even get to local-fs.target". Several recent bugs have been filed (Jones mentioned 1351352 and 1349586), but the problem goes back further than that: to bug 1049656 filed against systemd in Fedora 20.
One obvious way around the problem would be to turn off SELinux enforcing mode before the system boots and then to enable it again once the relabel is complete (which will reboot the system). That opens a window of vulnerability, but that outcome may well be unavoidable. As Simo Sorce put it: "if the labeling is broken, starting in enforcing may mean you never get to relabel the filesystem as the relabeling tool may fail to start altogether".
Jones made some other suggestions for possible solutions in his post. For example, the process that switches SELinux to enforcing mode (based on the setting in /etc/selinux/config) could recognize the presence of the /.autorelabel file or the boot parameter and trigger the relabeling at that point. After that completes, enforcing mode could be turned on.
But the discussion fairly quickly zeroed in on creating a separate, minimal systemd target that would simply mount the local filesystems in preparation for doing the relabel. As Christian Stadelmann pointed out, the requirements are similar to those of dnf system-upgrade; Sorce and systemd developer Lennart Poettering both agreed with that approach.
A systemd generator is a binary that gets run early in the boot process, which can generate unit files and do other configuration tasks at runtime for the rest of that boot cycle. Jones created two patches based on the suggestions. The first changes libselinux to recognize the autorelabel flag (from the file or command line) and sets SELinux to permissive mode. The second is a systemd generator to divert to a different target for that boot that only relies on the minimal targets required to have the local filesystems available.
As it turned out, though, Jones could only get the second patch to work by depending on both local-fs.target and sysinit.target. Without the latter, the /boot filesystem was not mounted. Zbigniew Jędrzejewski-Szmek said that was likely a bug, so simply depending on local-fs.target would seem to be a workable solution eventually.
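To make the mechanism concrete, here is a rough sketch of such a generator (written in Python purely for illustration; the actual patches are not reproduced here, and the target name is hypothetical):

#!/usr/bin/env python3
# Sketch of a systemd generator that diverts the boot to a minimal relabel
# target.  Generators are passed three output directories (normal, early,
# late); links placed in the "early" directory override normal unit
# configuration for this boot only.
import os
import sys

early_dir = sys.argv[2]

def autorelabel_requested():
    if os.path.exists("/.autorelabel"):
        return True
    with open("/proc/cmdline") as f:
        return "autorelabel" in f.read().split()

if autorelabel_requested():
    # Make this boot's default target a minimal one that pulls in little
    # more than local-fs.target before running the relabel.
    os.symlink("/usr/lib/systemd/system/selinux-autorelabel.target",
               os.path.join(early_dir, "default.target"))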
In addition, as Adam Williamson (who filed some of the recent bugs) pointed out, the second patch is not strictly necessary. But Jones is concerned that starting other services while SELinux is in permissive mode—network services in particular—could be problematic, so he would like to push both even though the second is not absolutely necessary.
It would seem that a fairly longstanding, but infrequently occurring, bug will be closed soon. Based on the bug reports, it was largely confined to virtual machines that QA testers and others set up for their testing, so it may not have affected all that many real Fedora users. But the problem was real, with the potential to hang newly updated systems someday, which certainly would affect users.
Brief items
Distribution quotes of the week
I like to imagine that these are typically the type of mean and bitter documents that try to eat innocent office software alive.
Linux Mint 18 Cinnamon and MATE editions released
Linux Mint 18 has been released with Cinnamon and MATE editions. "Linux Mint 18 is a long term support release which will be supported until 2021. It comes with updated software and brings refinements and many new features to make your desktop even more comfortable to use." The MATE edition has MATE 1.14 along with many other updates listed on the What's New page. The Cinnamon edition has Cinnamon 3.0 (which we recently reviewed) and lots of other new packages described on its What's New page. The release notes pages (MATE, Cinnamon) also have important information on the releases.
Debian Edu / Skolelinux Jessie
The Debian Edu team has announced Debian Edu 8+edu0 "Jessie", the latest Debian Edu / Skolelinux release. Debian Edu, also known as Skolelinux, provides a complete solution for schools. Debian Edu 8 is based on Debian 8 "Jessie", update 8.5. "Do you have to administrate a computer lab or a whole school network? Would you like to install servers, workstations and laptops which will then work together? Do you want the stability of Debian with network services already preconfigured? Do you wish to have a web-based tool to manage systems and several hundred or even more user accounts? Have you asked yourself if and how older computers could be used? Then Debian Edu is for you. The teachers themselves or their technical support can roll out a complete multi-user multi-machine study environment within a few days. Debian Edu comes with hundreds of applications pre-installed, but you can always add more packages from Debian."
Slackware 14.2
The Slackware Linux Project has announced the release of Slackware version 14.2. "Slackware 14.2 brings many updates and enhancements, among which you'll find two of the most advanced desktop environments available today: Xfce 4.12.1, a fast and lightweight but visually appealing and easy to use desktop environment, and KDE 4.14.21 (KDE 4.14.3 with kdelibs-4.14.21) a stable release of the 4.14.x series of the award-winning KDE desktop environment. These desktops utilize eudev, udisks, and udisks2, and many of the specifications from freedesktop.org which allow the system administrator to grant use of various hardware devices according to users' group membership so that they will be able to use items such as USB flash sticks, USB cameras that appear like USB storage, portable hard drives, CD and DVD media, MP3 players, and more, all without requiring sudo, the mount or umount command. Just plug and play. Slackware's desktop should be suitable for any level of Linux experience." See the release notes for more details.
Distribution News
Debian GNU/Linux
Bits from the release team: Winter is Coming (but not to South Africa)
The Debian release team looks at the Stretch freeze schedule, a call for artwork, moving to a new host, and the announcement that Buster+1 (Debian 11) will be named Bullseye.
Newsletters and articles of interest
Distribution newsletters
- DistroWatch Weekly, Issue 668 (July 4)
- Lunar Linux weekly news (July 1)
- openSUSE Tumbleweed – Review of the Week (July 1)
- Ubuntu Kernel Team Weekly Newsletter (June 28)
- Ubuntu Weekly Newsletter, Issue 472 (July 3)
A simple menu system for blind Linux users (Opensource.com)
Knoppix was at the leading edge of live media distributions, spawning many spin-offs that could run from a CD. Now many distributions offer users the opportunity to use the distribution from USB or DVD and Knoppix isn't news anymore. But Knoppix also features the audio interface ADRIANE (Audio Desktop Reference Implementation and Networking Environment), targeted at blind users. Opensource.com takes a look at Knoppix with ADRIANE. "ADRIANE is a great interface with a solid plan for design and functionality. In a way, it reduces a computer down to a minimalist device tuned for the most common everyday tasks, so it might not be the ideal interface for power users (possibly an Emacspeak solution would be better for such users), but the important thing is that it makes the computer easy to use, and tends to keep the user informed every step of the way. It's easy to try, and easy to demo, and Knoppix itself is a useful disc to have around, so if you have any interest in low-vision computer usage or in Linux in general, try Knoppix and ADRIANE."
Page editor: Rebecca Sobol
Development
A more generalized switch statement for Python?
Many languages have a "switch" (or "case") statement to handle branching to different blocks based on the value of a particular expression. Python, however, does not have a construct of that sort; it relies on chains of if/elif/else to effect similar functionality. But there have been calls to add the construct over the years. A recent discussion on the python-ideas mailing list demonstrates some of the thinking about what a Python switch might look like—it also serves to give a look at the open language-design process that typifies the language.
There are two PEPs that have proposed the feature over the years: Marc-André Lemburg's PEP 275 from 2001 and Python benevolent dictator for life Guido van Rossum's PEP 3103 from 2006. The latter came about due to some differences of opinion about the behavior of the feature and how it would be implemented. Ultimately, though, both were rejected based on an informal poll: "A quick poll during my keynote presentation at PyCon 2007 shows this proposal has no popular support. I therefore reject it," Van Rossum said in his PEP.
In a discussion on type hinting for the pathlib module (and related standard library routines that can use and return either str or bytes types), the lack of a Python switch reared its head again. That initial thread centered around using the AnyStr annotation for the __fspath__() protocol, which can return either bytes or str. In a post in that thread, Van Rossum mused about adding a switch statement, though he called it "match".
That post generated a few responses in favor of looking at the feature, so Van Rossum soon started a new "match statement brainstorm" thread. He said that Python might well benefit from moving beyond the "pre-computed lookup table" approach that both of the earlier PEPs had taken and learn from what other languages have done (notably, Haskell). A Python match statement (though he used a switch keyword in his examples) could possibly do quite a bit more than had been envisioned earlier:
- match by value or set of values (like those PEPs)
- match by type (isinstance() checks)
- match on tuple structure, including nesting and * unpacking (essentially, try a series of destructuring assignments until one works)
- match on dict structure? (extension of destructuring to dicts)
- match on instance variables or attributes by name?
- match on generalized condition (predicate)?
The idea of "destructuring" is to pull out the values in a composite type (such as a tuple), either using positional operators or using attribute names for types like the collections.namedtuple type. That could potentially be extended to destructure dictionaries by key name or, perhaps, positionally.
His post had some "strawman syntax" for how tuple destructuring in a switch statement might work, along with a "demonstration" of how it would operate given different kinds of input:
def demo(arg):
switch arg:
case (x=p, y=q): print('x=', p, 'y=', q)
case (a, b, *_): print('a=', a, 'b=', b)
else: print('Too bad')
Taking his example further, Van Rossum showed how it all might work:
Point = namedtuple('Point', 'x y z')
and some variables like this:
a = Point(x=1, y=2, z=3)
b = (1, 2, 3, 4)
c = 'hola'
d = 42
then we could call demo with these variables:
>>> demo(a)
x= 1 y= 2
>>> demo(b)
a= 1 b= 2
>>> demo(c)
a= h b= o
>>> demo(d)
Too bad
He did note the "slightly unfortunate outcome" for the string (since strings are treated as sequences of one-character strings in Python).
As might be guessed, that strawman syntax led to some other suggestions. Several commented on the attribute-extraction case with its odd-looking "assignment" construct. Nick Coghlan suggested an alternate formulation:
[...]
case (.x as p, .y as q): print('x=', p, 'y=', q)
In the brainstorming post, Van Rossum had also challenged others "to fit simple value equality, set membership, isinstance, and guards into that same syntax." Coghlan had some ideas on that, but he also suggested a new operator, of sorts:
switch expr as arg:
case ?= (.x as p, .y as q): print('x=', p, 'y=', q)
case ?= (a, b, *_): print('a=', a, 'b=', b)
case arg == value: ...
case lower_bound <= arg <= upper_bound: ...
case arg in container: ...
else: print('Too bad')
Such attribute destructuring might also be available as a general assignment outside of a switch:
(.x as p, .y as q) = expr
In a similar vein, item unpacking might look like:
(["x"] as p, ["y"] as q) = expr
Franklin Lee also had an extensive set of suggestions, but several in the thread thought some of them were overkill. Paul Moore suggested allowing an arbitrary expression for the switch that would be given a name for use in the case statements (syntax that Coghlan also adopted):
switch expr as name:
But Joao S. O. Bueno had a fundamental concern about the need to add a switch at all: "I still fail to see what justifies violating The One Obvious Way to Do It which uses an if/elif sequence". Van Rossum agreed to a certain extent, but noted that there are a number of match operations that are difficult to write using if statements.
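As a rough illustration (this sketch is not from the thread itself), matching on tuple or attribute structure with plain if statements tends to mean wrapping each destructuring attempt in try/except:

def demo(arg):
    # Roughly what the proposed "case (x=p, y=q)" arm has to look like today.
    try:
        p, q = arg.x, arg.y
    except AttributeError:
        pass
    else:
        print('x=', p, 'y=', q)
        return
    # ...and the "case (a, b, *_)" arm needs its own try/except as well.
    try:
        a, b, *rest = arg
    except (TypeError, ValueError):
        pass
    else:
        print('a=', a, 'b=', b)
        return
    print('Too bad')

demo((1, 2, 3, 4))   # a= 1 b= 2
demo(42)             # Too bad

Even this simplified version is noticeably noisier than the strawman switch.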
There might be some other interesting possibilities when combining matching with type annotations, he said. Overall, though, "it's about the most speculative piece of language design I've contemplated in a long time".
Michael Selik noted that many of the matching features are already available for if statements. The missing piece is something like the ?= operator to allow trying the destructure operations without causing an exception if they fail—they would simply return false. He provided an example:
def demo(arg):
if p, q ?= arg.x, arg.y: # dict structure
elif x ?= arg.x and isinstance(x, int) # assignment + guard
elif a, b, *_ ?= arg: # tuple structure
elif isinstance(arg, Mapping): # nothing new here
The "?=" spelling, he noted, had been "rejected by this group in the past for other conditionalisms".
But Greg Ewing (and others) were not particularly pleased with the suggestion: "the above looks like an unreadable mess to me". The problem, as Moore described, is that switch is a focused operation on a single subject, while if statements are not:
With a switch statement, however, the subject is stated once, at the top of the statement. The checks are then listed one after the other, and they are all by definition checks against the subject expression.
There was more discussion of the ideas, though no real conclusions were drawn. No one reported an in-progress PEP to the list, so there may be no one who feels strongly enough about the feature to take that step. But it is an idea that has recurred in Python circles over the years, so it will not be a surprise to see it pop up again sooner or later. In the meantime, as with many discussions on python-ideas, we get a look inside the thinking of the Python core developers.
Brief items
Quotes of the week
I've had the same thing happening to me a few times with Battle for Wesnoth.
Mayyybe it is a sign that lately I've been playing it too much, but I'm quite happy with the fact that free software / culture is influencing my dreams.
Thanks to everybody who is involved into Free Culture for creating enough content so that this can happen.
etcd 3.0 released
CoreOS has announced the availability of version 3.0 of the etcd distributed key-value store. "etcd 3.0 marks the first stable release of the etcd3 API and data model. Upgrades are simple, because the same etcd2 JSON endpoints and internal cluster protocol are still provided in etcd3. Nevertheless, etcd3 is a wholesale API redesign based on feedback from etcd2 users and experience with scaling etcd2 in practice. This post highlights some notable etcd3 improvements in efficiency, reliability, and concurrency control."
Rails 5.0 is available
Rails 5.0 has been released. The announcement highlights two new features, the Action Cable framework for handling WebSockets and an "API mode" for interfacing with client-side JavaScript. Development of the latter feature is ongoing; progress can be tracked in the JSONAPI::Resources repository. There are quite a few other new features to be found in the update as well; the release announcement provides links to detailed ChangeLogs for various subprojects.
KDE Plasma 5.7 Release
KDE Plasma 5.7 has been released. This release features the return of the agenda view in the calendar, improvements to the Volume Control applet that allow volume control on a per-application basis, improved Wayland support, and more. "This release brings Plasma closer to the new windowing system Wayland. Wayland is the successor of the decades-old X11 windowing system and brings many improvements, especially when it comes to tear-free and flicker-free rendering as well as security. The development of Plasma 5.7 for Wayland focused on quality in the Wayland compositor KWin. Over 5,000 lines of auto tests were added to KWin and another 5,000 lines were added to KWayland which is now released as part of KDE Frameworks 5."
Kubernetes 1.3 is available
Version 1.3 of the Kubernetes cluster-management system is available. The new release simplifies the creation of clusters that automatically scale up and down in response to demand, adds support for the rkt, OCI, and CNI container standards, enables support for federated applications that communicate across clusters, and adds several features to support stateful applications (such as provisioning persistent disks and permanent hostname assignment).
digiKam 5.0.0 is published
The digiKam team has announced the release of digiKam Software Collection 5.0.0. "This release marks almost complete port of the application to Qt5. All Qt4/KDE4 code has been removed and many parts have been re-written, reviewed, and tested. Porting to Qt5 required a lot of work, as many important APIs had to be changed or replaced by new ones. In addition to code porting, we introduced several changes and optimizations, especially regarding dependencies on the KDE project. Although digiKam is still a KDE desktop application, it now uses many Qt dependencies instead of KDE dependencies. This simplifies the porting job on other operating systems, code maintenance, while reducing the sensitivity of API changes from KDE project."
Twisted 16.3.0 released
Version 16.3 of the Twisted framework has been released. This update is the first since the project transitioned its source code to Git. New features in the code include improved HTTP request pipelining and support for HTTP/2 in the Twisted web server.
Newsletters and articles
Development newsletters from the past week
- What's cooking in git.git (June 29)
- What's cooking in git.git (July 1)
- What's cooking in git.git (July 8)
- This Week in GTK+ (July 4)
- OCaml Weekly News (July 5)
- Perl Weekly (July 4)
- Python Weekly (June 30)
- Python Weekly (July 7)
- Ruby Weekly (July 1)
- Ruby Weekly (July 7)
- This Week in Rust (July 15)
- Wikimedia Tech News (July 4)
Bassi: GSK Demystified - A GSK primer
At his blog, GTK+ developer Emmanuele Bassi has posted an introduction to the GTK+ Scene Kit (GSK), a new GTK+ module that provides a scene-graph API in which developers can build rich, animated layouts that render smoothly. "Every time you wish to render something, you build a tree of render nodes; specify their content; set up their transformations, opacity, and blending; and, finally, you pass the tree to the renderer. After that, the renderer owns the render nodes tree, so you can safely discard it after each frame." GSK is the successor to Bassi's earlier work on the Clutter toolkit.
Page editor: Nathan Willis
Announcements
Brief items
Statement of netfilter project on GPL enforcement
The netfilter project declared its official endorsement of the Principles of Community-Oriented GPL Enforcement as published by the Software Freedom Conservancy and the Free Software Foundation. "The software of the netfilter project is primarily released under the GNU General Public License. We strongly believe that license compliance is an important factor in the Free Software model. In the absence of voluntary license compliance, license enforcement is a necessary tool to ensure all parties adhere to the same set of fair rules as set forth by the license."
FSF: Protect your privacy
The Free Software Foundation has issued a call for action to oppose amendments to Rule 41 of the US Federal Rules of Criminal Procedure that threaten internet privacy. The changes will go into effect December 1 unless a bipartisan bill called the Stopping Mass Hacking Act is approved. "The FSF opposes these changes and — in spite of its misleading use of the word "hacking" — supports the Stopping Mass Hacking Act (S. 2952, H.R. 5321), bipartisan legislation that would block the changes. The two bills are currently under review by the Judiciary Committees of the US Senate and House. **Take action: Free software activists around the world can tell the US Congress to pass the Stopping Mass Hacking Act** by using the EFF's No Global Warrants tool or by looking up your representatives if you're in the US. Not in the US? Raise your concerns with your government representative."
FSF's Defective by Design campaign continues to oppose Web DRM standard
The Free Software Foundation reports that Encrypted Media Extensions (EME), a proposed technological standard for web-based DRM, has moved to the next phase of development within the World Wide Web Consortium (W3C). "The EME standardization effort, sponsored by streaming giants like Google and Netflix, aims to take advantage of the W3C's influence over Web technology to make it cheaper and more efficient to impose DRM systems. As of yesterday, the EME proposal is now upgraded from Working Draft to Candidate Recommendation within the W3C's process. Under the W3C's rules there are at least three more chances to pull the plug on EME before it becomes a ratified standard, also known as a W3C Recommendation."
Articles of interest
Free Software Supporter Issue 99, July 2016
The Free Software Foundation's monthly newsletter covers net neutrality in Europe, LibrePlanet, the GNU Hackers' Meeting, Intel's Management Engine (ME) and why we should get rid of it, the now-FSF-certified LulzBot TAZ 6 3D printer, a Licensing and Compliance Lab interview with Brett Smith, A Quick Guide to GPLv3, GCC 5.4, and several other topics.
TDF and FSFE support Software Heritage
The Document Foundation and the Free Software Foundation Europe have both released announcements supporting the Software Heritage project.
TDF: "Software Heritage is an
essential building block for preserving, enhancing
and sharing the scientific and technical knowledge that is increasingly
embedded in software; it also contributes to our ability to access all the
information stored in digital form.
"
FSFE:
"The Heritage stores only Free Software, in other words, software that
can be used, studied, adapted and shared freely with others; and this is
because the Software Heritage initiative relies on being able to share
the software it stores. The Software Heritage website is designed to be
a useful tool for professionals, scientists, educators and end-users.
"
New Books
Arduino Project Handbook -- new from No Starch Press
No Starch Press has released "Arduino Project Handbook" by Mark Geddes.
Calls for Presentations
linux.conf.au 2017 in Hobart -- Talk, Tutorial, and Miniconf submissions
The Call for Proposals for linux.conf.au 2017 is open until August 5. LCA will take place January 16-20 in Hobart, Tasmania. "If you’re working with Free and Open Source Software, Open Hardware, if you’re exploring openness in a field outside of technology, or if you’re doing something that you think will be interesting to people interested in Open Source, we want to hear from you!"
CFP Deadlines: July 8, 2016 to September 6, 2016
The following listing of CFP deadlines is taken from the LWN.net CFP Calendar.
| Deadline | Event Dates | Event | Location |
|---|---|---|---|
| July 13 | October 25–October 28 | OpenStack Summit | Barcelona, Spain |
| July 15 | October 12 | Tracing Summit | Berlin, Germany |
| July 15 | September 7–September 9 | LibreOffice Conference | Brno, Czech Republic |
| July 15 | October 11 | Real-Time Summit 2016 | Berlin, Germany |
| July 22 | October 7–October 8 | Ohio LinuxFest 2016 | Columbus, OH, USA |
| July 24 | September 20–September 21 | Lustre Administrator and Developer Workshop | Paris, France |
| July 30 | August 25–August 28 | Linux Vacation / Eastern Europe 2016 | Grodno, Belarus |
| July 31 | September 9–September 11 | GNU Tools Cauldron 2016 | Hebden Bridge, UK |
| July 31 | October 29–October 30 | PyCon HK 2016 | Hong Kong, Hong Kong |
| August 1 | October 6–October 7 | PyConZA 2016 | Cape Town, South Africa |
| August 1 | September 28–October 1 | systemd.conf 2016 | Berlin, Germany |
| August 1 | October 8–October 9 | Gentoo Miniconf 2016 | Prague, Czech Republic |
| August 1 | November 11–November 12 | Seattle GNU/Linux Conference | Seattle, WA, USA |
| August 3 | October 1–October 2 | openSUSE.Asia Summit | Yogyakarta, Indonesia |
| August 5 | January 16–January 20 | linux.conf.au 2017 | Hobart, Australia |
| August 7 | November 1–November 4 | PostgreSQL Conference Europe 2016 | Tallin, Estonia |
| August 7 | October 10–October 11 | GStreamer Conference | Berlin, Germany |
| August 8 | September 8 | LLVM Cauldron | Hebden Bridge, UK |
| August 15 | October 5–October 7 | Netdev 1.2 | Tokyo, Japan |
| August 17 | September 21–September 23 | X Developers Conference | Helsinki, Finland |
| August 19 | October 13 | OpenWrt Summit | Berlin, Germany |
| August 20 | August 27–September 2 | Bornhack | Aakirkeby, Denmark |
| August 20 | August 22–August 24 | 7th African Summit on FOSS | Kampala, Uganda |
| August 21 | October 22–October 23 | Datenspuren 2016 | Dresden, Germany |
| August 24 | September 9–September 15 | ownCloud Contributors Conference | Berlin, Germany |
| August 31 | November 12–November 13 | PyCon Canada 2016 | Toronto, Canada |
| August 31 | October 31 | PyCon Finland 2016 | Helsinki, Finland |
| September 1 | November 1–November 4 | Linux Plumbers Conference | Santa Fe, NM, USA |
| September 1 | November 14 | The Third Workshop on the LLVM Compiler Infrastructure in HPC | Salt Lake City, UT, USA |
| September 5 | November 17 | NLUUG (Fall conference) | Bunnik, The Netherlands |
If the CFP deadline for your event does not appear here, please tell us about it.
Upcoming Events
Events: July 8, 2016 to September 6, 2016
The following event listing is taken from the LWN.net Calendar.
| Date(s) | Event | Location |
|---|---|---|
| July 2–July 9 | DebConf16 | Cape Town, South Africa |
| July 8–July 9 | Texas Linux Fest | Austin, TX, USA |
| July 11–July 17 | SciPy 2016 | Austin, TX, USA |
| July 13–July 15 | ContainerCon Japan | Tokyo, Japan |
| July 13–July 14 | Automotive Linux Summit | Tokyo, Japan |
| July 13–July 15 | LinuxCon Japan | Tokyo, Japan |
| July 14–July 16 | REST Fest UK 2016 | Edinburgh, UK |
| July 17–July 24 | EuroPython 2016 | Bilbao, Spain |
| July 30–July 31 | PyOhio | Columbus, OH, USA |
| August 2–August 5 | Flock to Fedora | Krakow, Poland |
| August 10–August 12 | MonadLibre 2016 | Havana, Cuba |
| August 12–August 14 | GNOME Users and Developers European Conference | Karlsruhe, Germany |
| August 12–August 16 | PyCon Australia 2016 | Melbourne, Australia |
| August 18–August 20 | GNU Hackers' Meeting | Rennes, France |
| August 18–August 21 | Camp++ 0x7e0 | Komárom, Hungary |
| August 20–August 21 | FrOSCon - Free and Open Source Software Conference | Sankt-Augustin, Germany |
| August 20–August 21 | Conference for Open Source Coders, Users and Promoters | Taipei, Taiwan |
| August 22–August 24 | ContainerCon | Toronto, Canada |
| August 22–August 24 | LinuxCon NA | Toronto, Canada |
| August 22–August 24 | 7th African Summit on FOSS | Kampala, Uganda |
| August 24–August 26 | YAPC::Europe Cluj 2016 | Cluj-Napoca, Romania |
| August 24–August 26 | KVM Forum 2016 | Toronto, Canada |
| August 25–August 26 | The Prometheus conference | Berlin, Germany |
| August 25–August 26 | Xen Project Developer Summit | Toronto, Canada |
| August 25–August 28 | Linux Vacation / Eastern Europe 2016 | Grodno, Belarus |
| August 25–August 26 | Linux Security Summit 2016 | Toronto, Canada |
| August 27–September 2 | Bornhack | Aakirkeby, Denmark |
| August 31–September 1 | Hadoop Summit Melbourne | Melbourne, Australia |
| September 1–September 7 | Nextcloud Conference | Berlin, Germany |
| September 1–September 8 | QtCon 2016 | Berlin, Germany |
| September 2–September 4 | FSFE summit 2016 | Berlin, Germany |
If your event does not appear here, please tell us about it.
Page editor: Rebecca Sobol
