LWN.net Weekly Edition for December 5, 2013
Another daemon for managing control groups
Control groups (cgroups) in the kernel are changing, in both their implementation and their interface. One of those changes is that systems that use the new cgroup interface will require a single management process to coordinate access. For many, that management daemon may well be the systemd init replacement, but there are distributions (notably Ubuntu) and users who will want a different choice. To that end, Serge E. Hallyn is working on an alternative cgroup management daemon that he calls "cgmanager".
Cgroups are a way to group processes and to apply various resource limits to the group. The current cgroup API in the kernel is a filesystem-based interface called cgroupfs. Groups are represented as directories and processes are placed into them by writing their process IDs to a special file. Additional special files are used to set various limits and to control other aspects of the group, depending on the specific controllers that are associated with the group (e.g. CPU, memory, block I/O).
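As a concrete illustration of that interface, here is a minimal user-space sketch. The paths are assumptions (a memory-controller hierarchy mounted at /sys/fs/cgroup/memory, a common but not universal layout), and the program needs sufficient privileges to create the group and set the limit:

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/stat.h>
#include <sys/types.h>

#define CG "/sys/fs/cgroup/memory/demo"   /* assumed mount point and group name */

static void write_file(const char *path, const char *val)
{
        FILE *f = fopen(path, "w");

        if (!f) {
                perror(path);
                exit(1);
        }
        fputs(val, f);
        fclose(f);
}

int main(void)
{
        char pid[32];

        /* a cgroup is just a directory in the cgroupfs hierarchy */
        if (mkdir(CG, 0755) != 0 && errno != EEXIST) {
                perror("mkdir");
                return 1;
        }

        /* limits are set by writing to controller-specific files */
        write_file(CG "/memory.limit_in_bytes", "268435456");   /* 256 MB */

        /* a process joins the group by having its PID written to "tasks" */
        snprintf(pid, sizeof(pid), "%d", getpid());
        write_file(CG "/tasks", pid);

        return 0;
}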
Depending on permissions, any process may be able to use that interface to set up its own hierarchy of groups, which is one of the problems with the existing implementation, according to cgroup co-maintainer Tejun Heo and others. So, in the future, a single process will be responsible for managing the single hierarchy of groups that will be allowed under the new cgroup interface. While systemd already has code to perform that job, systemd has never been a requirement for using cgroups—but if it isn't used, something has to take its place.
So Hallyn put out a preliminary design for cgmanager on the lxc-devel mailing list at the end of November. It envisions a single daemon process with a D-Bus interface that will manage the hierarchy; the daemon will mount cgroupfs inside a private namespace that is accessible only to cgmanager. Processes will make requests of the daemon to create cgroups, configure them, and move processes into them. Obviously, some of those operations are privileged, and Hallyn has worked out the set of privileges required for each. Users own the cgroups they create and can place their own processes into those cgroups. More complicated arrangements are possible, of course, but they generally require either real root privileges or at least root inside a user namespace owned by the user.
There is also the concept of handing the ownership of a cgroup (or hierarchy) to another user. Doing that using the standard filesystem permissions on the cgroupfs, as is done today, is a big security hole, which can lead to various kinds of denial of service. Mediating that access through cgmanager should alleviate that problem, however.
So far, Hallyn's design has been well-received; there are questions and suggestions, of course, but overall it would seem that Hallyn has taken several constituencies into account.
The first of those is the LXC containers project, but there are several other interested parties as well. Upstart-using distributions (mostly limited to Ubuntu and derivatives—though possibly Debian too, depending on the outcome of its init system decision) will also need some kind of cgroup manager if they intend to use cgroups. Hallyn has clearly been talking to the Upstart folks, so it would seem that cgmanager should eventually fit the bill there.
Another constituent is Google, which uses cgroups extensively in its fairly sizable datacenter. Tim Hockin from the search giant had been rather critical of the single hierarchy cgroup plan. He has since largely made his peace with the plan, but an article from July outlines Google's use of cgroups as well as some of the problems with the "new cgroups" that Hockin had foreseen. Clearly Hallyn was also paying attention to Google's needs as Hockin seems relatively pleased with the direction of cgmanager's design. In particular, Google uses the "change ownership" feature today, but it trusts all of the users on its cluster. For others, who may not be able to have that trust level, Hallyn has provided a means to get that functionality, while reducing its security impact.
Where does systemd fit into this picture? The answer seems to be: essentially not at all. Systemd has its own control group interface that it intends to proceed with. Two messages from June make it pretty clear that Lennart Poettering, at least, is not particularly interested in having systemd work with another cgroup manager, or in providing cgroup management in a library. Poettering's contention that init should not depend on some other daemon seems sensible, but forcing anyone who wants to use cgroups to use systemd clearly isn't.
It would be nice if some kind of common API could be worked out so that applications and users don't have to support two different ways of accessing cgroups. It would seem that Hockin approached the systemd folks about working together but was evidently rebuffed. That may make sense for the systemd project, but will unfortunately leave applications and users who need cgroup functionality in something of a bind. Supporting both systemd and cgmanager will likely be required.
While it will undoubtedly be annoying to support two (or more if another competitor shows up) interfaces to cgroups, having two separate implementations may actually be a good thing—at least in the short term. That wouldn't preclude a common interface, of course, but that doesn't seem to be in the cards. Cgroups have been a problematic feature in the kernel since they were merged for 2.6.24 back in 2007. One hopes we are on our way toward a better implementation and user-space interface for the feature—so the more code that exercises all of that, the better.
Searching for a pump.io client
When Identi.ca switched over from the old StatusNet software to the newly minted pump.io platform earlier this year, one ripple effect was that many of the existing desktop client applications suddenly were no longer supported. Thus, users of the service lost convenient access to the updates of their friends, and were limited to the less-convenient default web interface for posting updates of their own. In the intervening months, though, clients have started to emerge, with the potential for freeing users from the web-only interface.
The root issue is that pump.io uses its own API, based on the Activity Streams data format and Atom publishing. StatusNet made a point of copying the Twitter API, in order to make things easier for third-party developers who were already interested in supporting Twitter. Of course, the degree to which any particular client made an effort to support StatusNet's enhancements (!groups, for example: topic-based discussions that users could subscribe to) varied; no doubt many developers who were primarily interested in Twitter regarded supporting other services as an afterthought.
The pump.io wiki maintains a page of known client applications. The list includes web services (some of which are utilities, like pump2rss.com), mobile device platforms, and other applications aimed at desktop users. At the moment, there are two GUI client applications with Linux support: Dianara and Pumpa. Both are written for Qt.
Dianara
Dianara has been around longer; the first public releases were in early 2012. As such, several major Linux distributions make it available through their package managers.
The early versions of Dianara supported Diaspora, the alternative social networking platform famous for its colossal crowdfunding campaign on Kickstarter. Diaspora has not yet taken off as a microblogging or messaging platform, though, so it is perhaps unsurprising that Dianara author Jan Kusanagi eventually turned his attention elsewhere. In May, Kusanagi announced that the application would target pump.io support instead.
Version 1.0 was released on October 29, with support for a significant number of pump.io features. Users can create posts with rich text formatting, attach images, and address posts to specific individuals or lists. The interface shows separate tabs for the user's "timeline" (i.e., accounts the user follows), direct messages, the user's own account stream, favorited notes, and the list of contacts the user is following. As far as interactivity is concerned, Dianara supports replying to others' notes, re-sharing notes, and marking notes as favorites. Users can also edit their profile on the pump.io service they connect to, and configure what sorts of notes trigger a desktop notification.
Dianara is pretty basic in its feature set; it limits the user to one pump.io account, and so far there are few advanced features (there is no search function, for example). It does support some pump.io operations that are less common in Twitter-centric client applications, though, such as assigning post titles, creating lists of users, and updating old posts. Post titles are interesting, because the titles appear in the activity timeline. A titled post might appear as "Nathan Willis favorited LWN Weekly Edition for December 5," while an untitled post would be seen as the enigmatic "Nathan Willis favorited a note." A small functional difference, but one that makes the timeline far more useful.
Pumpa
Pumpa is a much newer project—at least "on paper"; it was started in June 2013. Of course, since Dianara undertook a major rewrite in May to switch over to pump.io support, the time difference is less significant in reality. But so far, no Linux distributions package Pumpa. There are third-party repositories run by volunteers for Fedora and Debian, however, which are linked to from the project site. Everyone else will need to grab the source from author Mats Sjöberg's Gitorious repository.
The latest release is version 0.8, from November 26. Pumpa includes most of the same functionality as Dianara, including writing, addressing, replying to, and favoriting notes. But the implementation is noticeably different. The biggest difference is that Dianara presents a rich-text editor for new notes; one clicks on the "F" drop-down button (which presumably stands for "Formatting") to make text bold, italic, underlined, and so on, or to add a hyperlink. Pumpa, in contrast, presents only a plain-text editor, and interprets all of the text as Markdown syntax. Whether that is an improvement or an impediment depends on whom you ask; I, for one, do not have Markdown memorized.
On the other hand, Pumpa's rendering of the various pump.io timelines (followed users, direct messages, etc.) is nicer—it looks like most of the StatusNet and Twitter clients, with user avatars and note text neatly rendered on a vertically-scrolling window. Dianara shows every note (including other people's notes) as a text frame, which most Qt themes render in a different color to indicate editability. Other people's notes are not editable, of course, so the result is a bunch of nested boxes that look confusing. Dianara's nested text frames also have minimum-width requirements, which spawn horizontal scrollbars if the window is too narrow, and there are a number of places where there is excessive white space between elements. In short, Pumpa's visual layout is compact and simple, while Dianara's gets in the way and makes message content harder to read.
Pumping it up
Nevertheless, both clients are simple to set up and use, which is a boon to longtime StatusNet users left stranded when older applications like Gwibber stopped working with the new service. There is clearly room for improvement on multiple fronts—I do not know why, for example, neither program has a "new note" button on the window itself, but anyone in a desktop environment that puts application menus at the top of the screen will soon notice its absence.
It is hard to gauge whether or not pump.io's simpler design is making it easier for application developers in comparison to StatusNet. Certainly Dianara and Pumpa have come a long way quickly. There are also multiple command-line and console clients available, as well as an Emacs mode.
The more important question will be whether or not the various clients evolve to support message types beyond simple text and image posts. Sure, these two categories capture the bulk of social networking content, but Activity Streams is open to all sorts of activity messages, including user-defined ones (for example, the ih8.it site sends "hate this" messages).
Both of these desktop applications are limited right now to microblog-style usage. There is a glimmer of hope in some of the unusual Web clients emerging from pump.io, although the initial ones (including ih8.it) were developed by pump.io creator Evan Prodromou. In the long run, clients will no doubt evolve to support the message types that users want to see; now that reliable clients are appearing, perhaps those users will start to explore the possibilities.
2013 Linux and free software timeline - Q2
Here is LWN's sixteenth annual timeline of significant events in the Linux and free software world for the year. As per tradition, we will divide the timeline up into quarters; this is our account of April–June 2013. January through March was covered last week; timelines for the remaining quarters of the year will appear in the coming weeks.
There are almost certainly some errors or omissions; if you find any, please send them to timeline@lwn.net.
LWN subscribers have paid for the development of this timeline, along with previous timelines and the weekly editions. If you like what you see here, or elsewhere on the site, please consider subscribing to LWN.
For those readers in a truly nostalgic mood, our timeline index page includes links to the previous timelines and other retrospective articles that date all the way back to 1998.
April
Version 1.6 of the MATE desktop environment is released (announcement).
PostgreSQL announces the discovery of a severe hole, which prompts the project to re-examine its security-handling process (LWN article).
Fedora throws a subtle wrench into the works when it selects "Schrödinger's Cat" as the codename for Fedora 19, featuring an accented character and a punctuation mark (LWN article).
Yorba launches a crowdfunding campaign to underwrite work on the Geary email client (LWN blurb). The campaign falls short of the goal in May, prompting analysis by Yorba director Jim Nelson (LWN blurb).
Twisted 13.0.0 is released (announcement).
Google halts its participation in the WebKit web rendering engine project to develop its own fork, Blink (LWN blurb).
Google announces its Open Patent Non-Assertion Pledge (announcement).
The Subsurface project mourns the loss of Jan Schubert (LWN blurb).
Mozilla turns 15 (LWN blurb).
The Free Software Legal and Licensing Workshop is held in Amsterdam, April 4 to 5 (LWN coverage).
The OpenDaylight project is announced, starting a community-driven approach to developing Software-Defined Networking (announcement).
Libre Graphics Meeting 2013 is held in Madrid, April 10 to 13 (LWN coverage).
Lucas Nussbaum is elected Debian Project Leader (announcement).
OpenMandriva selects a logo (announcement), but does not decide to officially adopt the name "OpenMandriva" until May (announcement).
Linux Hangman Rules
You take turns putting setuid root onto files in /usr/bin /usr/sbin/, etc. and if your opponent can use that to get root, even via a convoluted scenario, then you lose. The goal is to create a system running with MAXIMUM PRIVILEGE.
-- Dave Aitel
Wayland/Weston 1.1 is released (announcement).
Xen becomes a Linux Foundation project (announcement).
The 2013 Collaboration Summit is held in San Francisco, April 15 to 17 (LWN coverage).
The Storage, Filesystem, and Memory Management Summit is held in San Francisco, April 18 to 19 (LWN coverage).
The Google Test Automation Conference is held in New York City, April 23 and 24 (LWN article).
May
Kernel 3.9 is released (LWN announcement; development statistics; merge window summaries 1, 2, 3; KernelNewbies summary).
The Linux Foundation joins GNOME's Outreach Program for Women by supporting kernel internships (LWN blurb).
OpenBSD 5.3 is released (announcement).
Ubuntu 13.04 is released (announcement).
GDB 7.6 is released (announcement).
Open Build Service (OBS) 2.4 is released (announcement).
Debian 7.0 "Wheezy" is released (announcement).
Working with Google, Adobe donates its OpenType CFF renderer code to FreeType (announcement).
The Open Source Initiative announces a major shift in its governance process and board, part of a planned transition toward being a member-driven organization (announcement).
The World Wide Web Consortium (W3C) publishes its draft proposal for Encrypted Media Extensions, a DRM-support framework whose creation is driven by large media companies (LWN article). Among other voices, Richard Stallman criticizes the proposal as being antithetical to the stated goals of the W3C (LWN blurb).
PyPy 2.0 is released (announcement; article).
PacketFence 4.0 is released (announcement).
Version 1.1 of the Go language is released (announcement).
Google releases a draft of the patent license for VP8 that originated from the company's agreement with MPEG-LA (announcement). The Software Freedom Law Center (SFLC) says the agreement is compatible with FOSS licensing (LWN blurb).
Moodle 2.5 is released (LWN article).
The New Yorker unveils Strongbox, a secure and anonymous service for facilitating communication between whistleblowers and journalists. Strongbox and its underlying platform, DeadDrop, were the last major project developed by Aaron Swartz (LWN announcement; article).
The first version of kernel tracing tool ktap is released (announcement; LWN article).
Debian GNU/Hurd 2013 is released (LWN announcement; article).
Mageia 3 is released, dedicated to the memory of developer Eugeni Dodonov (announcement).
NetBSD 6.1 is released (announcement).
QEMU 1.5.0 is released (announcement).
Groklaw celebrates 10 years (blurb).
Google Code shuts down the ability of projects to offer file downloads (LWN blurb).
pgCon 2013 is held in Ottawa, May 21 to 24 (LWN article).
If Linux distros were Jedi, Debian would be Obi-Wan Kenobi: old & wise, with excellent foresight. He does what needs to be done.
Ubuntu is Anakin Skywalker - incredibly powerful but you never really know whether he's good or evil.
Linux Mint is Luke Skywalker - the one who will restore balance to the force.
Yoda is Slackware.
-- 4Sci
The Tizen Developer Conference is held in San Francisco, May 22 to 24 (LWN coverage).
The spring Automotive Linux Summit is held in Tokyo, May 27 to 28 (LWN coverage).
LinuxCon Japan is held in Tokyo, May 29 to 31 (LWN coverage).
A whole bunch of longstanding X Window System security vulnerabilities are revealed (LWN blurb; vulnerability database entry; article).
Ubuntu closes bug #1, with Mark Shuttleworth noting that the impact of Android has shifted the focus of the computing industry away from Windows (LWN blurb).
Debian mourns the loss of Ray Dassen (LWN blurb).
June
Texas Linux Fest 2013 is held in Austin, May 31 to June 1. It's hot (LWN coverage).
PulseAudio 4.0 is released (announcement).
Open Source leader Atul Chitnis passes away (LWN blurb).
FreeBSD 8.4 is released (LWN announcement).
Subversion 1.8.0 is released (LWN announcement).
LLVM 3.3 is released, introducing full C++11 support (LWN announcement).
Groklaw reports that the SCO v IBM case has been reopened (LWN blurb).
The Xen4CentOS project is announced, helping CentOS users run Xen after upstream distribution Red Hat drops its own Xen support (LWN announcement).
Elections are held for the Fedora Advisory Board, Engineering Steering Committee, and Ambassadors Steering Committee (LWN announcement).
Xiph.org announces that it has begun work on the Daala video codec (announcement).
PHP 5.5.0 is released (announcement).
Firefox 22 is released, enabling WebRTC support by default (announcement).
Security
Optimization-unstable code
Compilers can be tricky beasts, especially where optimizations are concerned. A recent paper [PDF] from MIT highlighted some of the problems that can be caused by perfectly legitimate—if surprising—optimizations, some of which can lead to security vulnerabilities. The problem stems from C language behavior that the standard leaves undefined, which allows compiler writers to optimize those statements away.
Andrew McGlashan raised the issue on the debian-security mailing list, expressing some surprise that the topic hadn't already come up. The paper specifically cites tests done on the Debian "Wheezy" (7.0) package repository, which found that 40% of 8500+ C/C++ packages have "optimization-unstable code" (or just "unstable code"). That does not mean that all of those are vulnerabilities, necessarily, but they are uses of undefined behavior—bugs, for the most part.
The unstable code was found using a static analysis tool called STACK that was written by the authors of the paper, Xi Wang, Nickolai Zeldovich, M. Frans Kaashoek, and Armando Solar-Lezama. It is based on the LLVM compiler framework and checks for ten separate undefined behaviors. Since C compilers can assume that undefined behavior is never invoked by a program, the compiler can optimize the undefined behavior away—which is what can lead to vulnerabilities.
So, what kind of undefined behavior are we talking about here? Two of the examples given early in the paper help to answer that. The first is that overflowing a pointer is undefined:
char *buf = ...;
unsigned int len = ...;
if (buf + len < buf) /* overflow check */
...
The compiler can (and often does, depending on the -O setting) optimize the test away. On some architectures, according to the paper, that's no great loss as the test doesn't work. But on other architectures, it does protect against a too-large value of len. Getting rid of the test could lead to a buffer overflow ... and buffer overflows can often be exploited.
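One way to avoid the problem, sketched here rather than taken from the paper, is to phrase the check in terms of the buffer's size, so that no potentially overflowing pointer arithmetic is involved:

#include <stddef.h>

/* buf holds bufsize bytes; reject a request for len bytes at offset
   without ever forming an out-of-bounds pointer */
static int request_fits(size_t bufsize, size_t offset, size_t len)
{
        return len <= bufsize && offset <= bufsize - len;
}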
The second example is a null pointer dereference in the Linux kernel:
struct tun_struct *tun = ...;
struct sock *sk = tun->sk;
if (!tun)
        return POLLERR;
/* write to address based on tun */
Normally that code would cause a kernel oops if tun is null, but if page zero is mapped for some reason, the code is basically harmless—as long as the test remains. Because the compiler sees the dereference operation, it can conclude that the pointer is always non-null and remove the test entirely, which turns a fairly innocuous bug into a potential kernel exploit.
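The obvious repair, a sketch rather than necessarily the exact patch that went into the kernel, is simply to delay the dereference until after the test, so the compiler can no longer infer that the pointer is non-null:

struct tun_struct *tun = ...;
struct sock *sk;

if (!tun)
        return POLLERR;
sk = tun->sk;    /* dereference only after the NULL check */
/* write to address based on tun */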
Other undefined behaviors are examined as well. Signed integer overflow, division by zero, and oversized shifts are flagged, for example. In addition, operations like an overlapping memcpy(), use after free()/realloc(), and exceeding array bounds are checked.
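Signed integer overflow is a particularly common case. In the sketch below (not taken from the paper), the first function's check itself invokes undefined behavior, so the compiler may conclude the condition can never be true and delete it; the second phrases the same test without any overflow:

#include <limits.h>

int unstable(int x)
{
        if (x + 100 < x)                /* signed overflow is undefined; test may be removed */
                return -1;
        return x + 100;
}

int stable(int x)
{
        if (x > INT_MAX - 100)          /* no overflow occurs in the test itself */
                return -1;
        return x + 100;
}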
The Debian discussion turned toward how to find and fix these kinds of bugs, but, of course, those bugs mostly or completely live upstream, as Mark Haase pointed out.
But Paul Wise noted that there is some ongoing work by Debian and Fedora developers to package static checkers for the distributions. STACK is on the list, he said, but relies on a version of LLVM that is not yet available for Debian. He recommended that interested folks get involved in those efforts and offered a list of links to get started.
There were some who felt the optimizations removing the unstable code were actually compiler bugs. Miles Fidelman suggested the problem needed to be fixed "WAY upstream" in GCC itself: "if gcc's optimizer is opening a class of security holes - then it's gcc that has to be fixed". But Haase was quick to throw cold water on that idea, noting a GCC bug and an LLVM blog post series that pretty clearly show that compiler writers do not see these kinds of optimizations as bugs.
The problem for programmers is a lack of warnings about these kinds of undefined constructs, Wise said. "Every use of undefined behaviour should at minimum result in a compiler warning." But even doing that is difficult (and noisy), Wade Richards said.
Joel Rees would like to see the standard rewritten "to encourage sane behavior in undefined situations". Defining "sane" might be somewhat difficult, of course.
Bernhard R. Link had a different suggestion.
Bugs in our code—many of which lead to security holes—are a never-ending problem, but over time we do at least seem to be getting some tools to assist in finding them. The fact that different compilers, optimization levels, and compiler versions will give different behavior for this particular class of bugs makes them even harder to find. STACK seems like a good solution there—thankfully it is open source, unlike some other static analysis tools.
Brief items
Security quotes of the week
Geer: Trends in cyber security
Dan Geer has posted the transcript of his "Trends in cyber security" talk presented to the US National Reconnaissance Office in early November. "The trendline in the number of critical monocultures seems to be rising and many of these are embedded systems both without a remote management interface and long lived. That combination -- long lived and not reachable -- is the trend that must be reversed. Whether to insist that embedded devices self destruct at some age or that remote management of them be a condition of deployment is the question. In either case, the Internet of Things and the appearance of microcontrollers in seemingly every computing device should raise hackles on every neck."
Linux Worm Targeting Hidden Devices
The Symantec blog has a report of a new Linux worm capable of attacking a range of small, Internet-enabled devices in addition to traditional computers. "The worm, Linux.Darlloz, exploits a PHP vulnerability to propagate itself in the wild. The worm utilizes the PHP 'php-cgi' Information Disclosure Vulnerability (CVE-2012-1823), which is an old vulnerability that was patched in May 2012. The attacker recently created the worm based on the Proof of Concept (PoC) code released in late Oct 2013."
Garrett: Subverting security with kexec
Matthew Garrett demonstrates how to use the kexec() system call to change parameters in a running kernel. "The beauty of this approach is that it doesn't rely on any kernel bugs - it's using kernel functionality that was explicitly designed to let you do this kind of thing (ie, run arbitrary code in ring 0). There's not really any way to fix it beyond adding a new system call that has rather tighter restrictions on the binaries that can be loaded. If you're using signed modules but still permit kexec, you're not really adding any additional security."
New vulnerabilities
ganglia-web: cross-site scripting
Package(s): ganglia-web
CVE #(s): CVE-2013-6395
Created: December 1, 2013    Updated: December 10, 2013
Description: From the Mageia advisory: XSS issue in ganglia-web makes it possible to execute JavaScript in victims' browser after tricking the victim into opening a specially crafted URL (CVE-2013-6395).
gimp: code execution
Package(s): gimp
CVE #(s): CVE-2013-1913 CVE-2013-1978
Created: December 4, 2013    Updated: March 7, 2016
Description: From the Red Hat advisory: A stack-based buffer overflow flaw, a heap-based buffer overflow, and an integer overflow flaw were found in the way GIMP loaded certain X Window System (XWD) image dump files. A remote attacker could provide a specially crafted XWD image file that, when processed, would cause the XWD plug-in to crash or, potentially, execute arbitrary code with the privileges of the user running the GIMP. (CVE-2013-1913, CVE-2013-1978)
kernel: privilege escalation
Package(s): EC2 kernel
CVE #(s): CVE-2013-4511
Created: December 4, 2013    Updated: December 4, 2013
Description: From the CVE entry: Multiple integer overflows in Alchemy LCD frame-buffer drivers in the Linux kernel before 3.12 allow local users to create a read-write memory mapping for the entirety of kernel memory, and consequently gain privileges, via crafted mmap operations, related to the (1) au1100fb_fb_mmap function in drivers/video/au1100fb.c and the (2) au1200fb_fb_mmap function in drivers/video/au1200fb.c.
librsvg: denial of service
Package(s): librsvg
CVE #(s): CVE-2013-1881
Created: December 1, 2013    Updated: October 20, 2015
Description: From the openSUSE advisory: librsvg was updated to fix an XML External Entity inclusion problem, where files on the system could be imported into the SVG (CVE-2013-1881).
links2: integer overflow
Package(s): links2
CVE #(s): CVE-2013-6050
Created: December 1, 2013    Updated: December 29, 2014
Description: From the Debian advisory: Mikulas Patocka discovered an integer overflow in the parsing of HTML tables in the Links web browser. This can only be exploited when running Links in graphical mode.
mediawiki: multiple vulnerabilities
Package(s): mediawiki
CVE #(s): CVE-2013-4567 CVE-2013-4568 CVE-2013-4572 CVE-2013-4569 CVE-2013-4573 CVE-2012-5394
Created: December 2, 2013    Updated: December 18, 2013
Description: From the Fedora advisory:

* Kevin Israel (Wikipedia user PleaseStand) identified and reported two vectors for injecting Javascript in CSS that bypassed MediaWiki's blacklist (CVE-2013-4567, CVE-2013-4568). https://bugzilla.wikimedia.org/show_bug.cgi?id=55332

* Internal review while debugging a site issue discovered that MediaWiki and the CentralNotice extension were incorrectly setting cache headers when a user was autocreated, causing the user's session cookies to be cached and returned to other users (CVE-2013-4572). https://bugzilla.wikimedia.org/show_bug.cgi?id=53032

Additionally, the following extensions have been updated to fix security issues:

* CleanChanges: MediaWiki steward Teles reported that revision-deleted IPs are not correctly hidden when this extension is used (CVE-2013-4569). https://bugzilla.wikimedia.org/show_bug.cgi?id=54294

* ZeroRatedMobileAccess: Tomasz Chlebowski reported an XSS vulnerability (CVE-2013-4573). https://bugzilla.wikimedia.org/show_bug.cgi?id=55991

* CentralAuth: MediaWiki developer Platonides reported a login CSRF in CentralAuth (CVE-2012-5394). https://bugzilla.wikimedia.org/show_bug.cgi?id=40747
mod_nss: access with invalid client certificate
Package(s): mod_nss
CVE #(s): CVE-2013-4566
Created: December 4, 2013    Updated: December 27, 2013
Description: From the Red Hat advisory: A flaw was found in the way mod_nss handled the NSSVerifyClient setting for the per-directory context. When configured to not require a client certificate for the initial connection and only require it for a specific directory, mod_nss failed to enforce this requirement and allowed a client to access the directory when no valid client certificate was provided.
moodle: multiple vulnerabilities
Package(s): moodle
CVE #(s): CVE-2013-4522 CVE-2013-4523 CVE-2013-4524 CVE-2013-4525
Created: December 1, 2013    Updated: December 4, 2013
Description: From the Mageia advisory: Some files were being delivered with incorrect headers in Moodle before 2.4.7, meaning they could be cached downstream (CVE-2013-4522). Cross-site scripting in Moodle before 2.4.7 due to JavaScript in messages being executed on some pages (CVE-2013-4523). The file system repository in Moodle before 2.4.7 was allowing access to files beyond the Moodle file area (CVE-2013-4524). Cross-site scripting in Moodle before 2.4.7 due to JavaScript in question answers being executed on the Quiz Results page (CVE-2013-4525).
nbd: access restriction bypass
Package(s): nbd
CVE #(s):
Created: December 1, 2013    Updated: December 4, 2013
Description: From the Debian advisory: It was discovered that nbd-server, the server for the Network Block Device protocol, did incorrect parsing of the access control lists, allowing access to any hosts with an IP address sharing a prefix with an allowed address.
openjpeg: multiple vulnerabilities
Package(s): openjpeg
CVE #(s): CVE-2013-1447 CVE-2013-6045 CVE-2013-6052 CVE-2013-6054
Created: December 3, 2013    Updated: January 5, 2015
Description: From the Debian advisory: Several vulnerabilities have been discovered in OpenJPEG, a JPEG 2000 image library, that may lead to denial of service (CVE-2013-1447) via application crash or high memory consumption, possible code execution through heap buffer overflows (CVE-2013-6045), information disclosure (CVE-2013-6052), or yet another heap buffer overflow that only appears to affect OpenJPEG 1.3 (CVE-2013-6054).
pixman: denial of service
Package(s): pixman
CVE #(s):
Created: December 4, 2013    Updated: December 10, 2013
Description: From the Ubuntu advisory: pixman could be made to crash if it opened a specially crafted file.
quassel: information leak
Package(s): quassel
CVE #(s): CVE-2013-6404
Created: December 1, 2013    Updated: January 21, 2014
Description: From the Mageia advisory: Security vulnerability in Quassel before 0.9.2 through which a manipulated, but properly authenticated, client was able to retrieve the backlog of other users on the same core in some cases (CVE-2013-6404).
subversion: two vulnerabilities
Package(s): subversion
CVE #(s): CVE-2013-4505 CVE-2013-4558
Created: December 1, 2013    Updated: January 1, 2014
Description: From the Mageia advisory: mod_dontdothat allows you to block update REPORT requests against certain paths in the repository. It expects the paths in the REPORT request to be absolute URLs. Serf-based clients send relative URLs instead of absolute URLs in many cases. As a result, these clients are not blocked as configured by mod_dontdothat (CVE-2013-4505). When SVNAutoversioning is enabled via "SVNAutoversioning on", commits can be made by single HTTP requests such as MKCOL and PUT. If Subversion is built with assertions enabled, any such requests that have non-canonical URLs, such as URLs with a trailing /, may trigger an assert. An assert will cause the Apache process to abort (CVE-2013-4558).
sup-mail: two command injection flaws
Package(s): sup-mail
CVE #(s): CVE-2013-4478 CVE-2013-4479
Created: December 1, 2013    Updated: December 4, 2013
Description: From the Debian advisory: joernchen of Phenoelit discovered two command injection flaws in Sup, a console-based email client. An attacker might execute arbitrary commands if the user opens a maliciously crafted email. CVE-2013-4478: Sup wrongly handled the filename of attachments. CVE-2013-4479: Sup did not sanitize the content-type of attachments.
Page editor: Jake Edge
Kernel development
Brief items
Kernel release status
The current development kernel is 3.13-rc2, released on November 29. Linus said: "We're back on the normal weekly schedule, although I'm planning on eventually slipping back to a Sunday release schedule. In the meantime, it's like Xmas every Friday! Or, perhaps more appropriately for the date, Hanukkah."
Stable updates: 3.12.2, 3.11.10, 3.10.21, and 3.4.71 were released on November 29; they were followed by the (large) 3.12.3, 3.10.22, and 3.4.72 releases on December 4. Note that support for 3.11.x has stopped as of 3.11.10, though Canonical will be maintaining its version of 3.11 through August 2014.
Quotes of the week
Citus Data: Linux memory manager and your big data
This Citus Data blog post describes problems encountered when a database's working set exceeds the maximum size of the kernel's active list. "It turns out LRU has two shortcomings when used as a page replacement algorithm. First, an exact LRU implementation is too costly in this context. Second, the memory manager needs to account for frequency as well, so that a large file read doesn't evict the entire cache. Therefore, Linux uses a more sophisticated algorithm than LRU; and that algorithm doesn't play along well with the workload we just described."
A fix for this problem has been in the works since 2012; activity on this patch set has recently picked up and it may yet be merged in the near future. Meanwhile, this issue has sparked a thread on the PostgreSQL list on whether the Linux kernel is a reliable platform for database applications.
Kernel development news
BPF tracing filters
The kernel's tracing functionality allows for the extraction of vast amounts of information about how the system is operating. In fact, so much information can be produced that one quickly starts to look for ways to cut down the flood. For this reason, filters can be attached to tracepoints, limiting the conditions under which the tracepoints will fire. But the current filter implementation is relatively inflexible and slow; it would be nice to be able to do better. A new patch set appears to do exactly that by introducing yet another virtual machine into the kernel.

In truth, the virtual machine added by Alexei Starovoitov's patch set is not entirely new; it is a version of the Berkeley packet filter (BPF) machine which is used in the networking stack. The secure computing (seccomp) functionality also uses BPF to regulate access to system calls. Alexei's idea is to apply BPF to the question of deciding which tracepoints should fire, but he has taken the idea rather further than his predecessors.
To begin with, the "extended BPF" implemented in his patch set rather expands the capabilities of the BPF virtual machine. That machine was designed to be unable to damage the kernel; it only allows forward jumps to guarantee that programs will not loop, has no pointer types, etc. The extended BPF machine operates rather differently. The two registers available in BPF have been expanded to ten. Backward jumps are allowed (for reasons that will be mentioned below). Extended BPF programs can manipulate pointers and call kernel functions. In other words, there is quite a bit more power available here than in previous versions of the BPF machine.
These capabilities notwithstanding, Alexei claims that extended BPF programs are entirely safe to load into the kernel; he has gone as far as to suggest that unprivileged users could eventually be allowed to insert extended BPF programs into the kernel. To ensure this safety, the kernel performs a range of checks on every program before accepting it. Every jump is mapped and, while backward jumps are allowed, jumps to previously executed parts of the program are not, so loops should not be possible. Execution of the program is simulated with an in-kernel static analysis tool that tracks the contents of every register; pointer operations are only allowed if it is known that the pointer destination is meant to be accessible. Kernel functions can be called, but only those that have been explicitly made available to BPF programs running in that particular context. The total length of the program is limited, as are various resources used or declared by the program. And so on.
The BPF machine implements a simple sort of assembly language, which, while adequate for the creation of the sort of simple program it is intended for, is not necessarily convenient for users to write in. Users will not need to worry about such problems with Alexei's mechanism, since there are backends for both GCC and LLVM that allow filter code to be written in a restricted form of C. The GCC backend is available from a GitHub repository, while the LLVM version is in the LLVM tree itself. This feature, incidentally, is why extended BPF allows backward jumps: the compilers will emit them as a result of their optimization work.
The extended BPF machinery is not specific to any particular use within the kernel. Instead, it is meant to be invoked from a specific kernel subsystem with a context describing the set of available functions and any use-specific data. So, for packet filtering, that context might include the packet under consideration. In the case of tracing, the context is a subset of the processor's register contents when the tracepoint is hit. So filters must have a knowledge of which data structures will be in which registers — information which may not be readily available, especially for users who don't want to dig through the source code. This aspect has been acknowledged as one of the weakest points of the current implementation; it will likely be improved before this functionality is considered for merging.
A simple example provided with the patch set looks like this:
/*
* tracing filter example
* if attached to /sys/kernel/debug/tracing/events/net/netif_receive_skb
 * it will print events for loopback device only
*/
#include <linux/skbuff.h>
#include <linux/netdevice.h>
#include <linux/bpf.h>
#include <trace/bpf_trace.h>
void filter(struct bpf_context *ctx)
{
        char devname[4] = "lo";
        struct net_device *dev;
        struct sk_buff *skb = 0;

        skb = (struct sk_buff *)ctx->regs.si;
        dev = bpf_load_pointer(&skb->dev);
        if (bpf_memcmp(dev->name, devname, 2) == 0) {
                char fmt[] = "skb %p dev %p \n";
                bpf_trace_printk(fmt, sizeof(fmt), (long)skb, (long)dev, 0);
        }
}
This filter code derives the address of the sk_buff from the passed-in context (it's in the "rsi" register), uses that to load the pointer to the associated device structure, then compares the device name stored therein against the loopback device name, finally outputting a trace message if the comparison succeeds.
On supported architectures, the BPF code is compiled to native machine code once it is accepted into the kernel. So one might expect it to be fast. Alexei ran a test on a networking tracepoint that would be hit one million times; the filter program was designed to reject all tracepoint hits, on the theory that, in typical use, filters will reject most events. The BPF filter was notably faster than the kernel's current filter mechanism, working through one million calls in about 2/3 of the time. Interestingly, it was also quite a bit faster than tracing with no filtering at all; the cost of running the filter was quite a bit less than the cost of generating the trace output.
Ingo Molnar looked at the patch set and came to a simple conclusion: "Seems like a massive win-win scenario to me." He did have one concern, though: he wants the ability to extract BPF programs from the kernel and turn them back into some sort of useful source form. This would, he said, make the licensing of BPF programs clear:
By up-loading BPF into a kernel the person loading it agrees to make that code available to all users of that system who can access it, under the same license as the kernel's code (or under a more permissive license).
Others expressed concerns about the security of the system; Andi Kleen pointed out that "safe" virtual-machine systems have proved to have holes in the past, and that this one probably does as well.
Beyond security, there are a number of questions to be answered before this patch set is likely to make it into the kernel. The register-oriented interface for access to relevant data seems awkward at best. It's not clear whether BPF filters should replace normal tracepoint output, or just decide whether that output should happen. There is also the question of how this functionality relates to the Ktap mechanism; the addition of two independent virtual machines for tracing seems like an unlikely prospect. But this work has clearly generated a lot of interest, so answers to these questions may well be forthcoming.
A surprise with mutexes and reference counts
A kernel crash first reported in July for the 3.10 kernel has finally been diagnosed. In the process, some suspect locking in the pipe code has been fixed, but there is a more interesting outcome as well: it appears that an "obviously correct" reference count code pattern is not so correct after all. Understanding why requires looking at the details of how mutexes are implemented in the Linux kernel.Conceptually, a kernel mutex is a simple sleeping lock. Once a lock is obtained with mutex_lock(), the owner has exclusive access to the protected resources. Any other thread attempting to acquire the same lock will block until the lock becomes free. In practice, the implementation is a little more complex than that. As was first demonstrated in the Btrfs code, performance can be improved if a thread that encounters an unavailable lock actively spins for a brief period (in the hope that the lock will be quickly released) before going to sleep. A running thread that is able to quickly grab a mutex in this manner is likely to be cache-hot, so this type of optimistic spinning tends to improve the overall throughput of the system. Needless to say, the period of spinning should be relatively short; it also will not happen at all if another thread is already spinning or if the owner of the mutex is not actually running.
With that background in place, consider a piece of code like the following which manipulates a structure called "s"; that structure is protected by a mutex embedded within it:
int free = 0;

mutex_lock(&s->lock);
if (--s->refcount == 0)
        free = 1;
mutex_unlock(&s->lock);
if (free)
        kfree(s);
This code looks like it should work fine; the structure is only freed if it is known that no more references to it exist. But, as Linus figured out while pondering the old bug report, the above code is not correct at all with the current mutex implementation.
The core of the mutex data structure includes an atomic counter (count) and a spinlock (wait_lock). When the lock is free, count has a value of one. A call to mutex_lock() on an available lock will simply decrement count to zero and continue; that is the fast path that, with luck, will be executed most of the time. Should the lock be taken, though, count will be set to a negative number (to indicate that somebody else wants the lock), and the frustrated caller will possibly spin, waiting for count to be set to one once again.
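In rough pseudo-C (a sketch of the behavior just described, not the kernel's actual source), the locking fast path looks something like this:

/* count is 1 when the mutex is free; grab it by decrementing */
if (atomic_dec_return(&lock->count) < 0)
        /* contended: perhaps spin optimistically, then sleep */
        __mutex_lock_slowpath(lock);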
When the call to mutex_unlock() comes, the fast path (executed when count is zero) is, once again, simple: count is incremented back to a value of one. Should count be negative, though, the slow-path code must be run to ensure that waiting threads know that the lock has become available. In a simplified form, this code does the following:
spin_lock(&lock->wait_lock);
atomic_set(&lock->count, 1);
wake_up_process(/* the first waiting thread */);
spin_unlock(&lock->wait_lock);
The problem can now be seen: the thread which is spinning, waiting for the lock to be free, will break out of its loop once it sees the effect of the atomic_set() call above. So, while the original lock holder is calling wake_up_process() to wake somebody who is waiting for the lock, a thread on another CPU is already proceeding in the knowledge that it owns the same lock. If that thread quickly frees the data structure containing the lock, the final spin_unlock() call will make changes to memory that has already been freed and, possibly, allocated to another user. It is an uncommon race to hit, but, as the original bug report shows, it can occasionally happen.
All this led Linus to conclude that this pattern simply cannot be used safely with the current mutex implementation.
There is an obvious question that immediately arises from this conclusion: how many other places in the kernel might be affected by this kind of bug? Mutexes and reference counts are both heavily used in the kernel; there may well be other places that use the two of them together in an unsafe manner (though Linus is doubtful that many exist). Needless to say, it would be nice to fix that code; the real question is how that might be done. One option is to audit all mutex-using code to find problematic usage, but that would be a long and unpleasant task — one that is unlikely to ever be completed in a comprehensive manner.
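Where such a call site is found, one common repair, sketched here under the assumption that refcount is converted to an atomic_t, is to take the reference count out from under the mutex entirely, so that the final decrement no longer depends on a lock embedded in the object being freed:

mutex_lock(&s->lock);
/* ... manipulate the structure; the reference count is not touched here ... */
mutex_unlock(&s->lock);

/* atomic_dec_and_test() returns true only for the thread that drops the
   last reference, so only that thread frees the object and no code
   touches s->lock afterward */
if (atomic_dec_and_test(&s->refcount))
        kfree(s);

That is the kind of per-call-site change that an audit would have to produce for every affected user.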
The alternative — fixing mutexes to eliminate the failure mode described above — seems easier and more robust in the long term. Ingo Molnar suggested a couple of ways in which that could be done, both based on the understanding that the use of both count and wait_lock to protect parts of the lock is at the root of the problem. The first suggestion was to eliminate count and turn wait_lock into a sort of special, tri-state spinlock; the other, instead, is to use count to control all access to the lock. Either approach, it seems, has the potential to reduce the size of the mutex structure and reduce architecture-specific code along with fixing the problem.
As of this writing, no patches have been posted. It would be surprising, though, if a fix for this particular problem did not surface by the time the 3.14 merge window opens. Locking problems are hard enough to deal with when the locking primitives have simple and easily understood behavior; having subtle traps built into that layer of the kernel is a recipe for a lot of long-term pain.
Deadline scheduling: coming soon?
Deadline scheduling was first covered here in 2009. Like much of the code in the realtime tree, though, deadline scheduling appears not to be subject to deadlines when it comes to being merged into the mainline. That said, it seems entirely possible that this longstanding project will land in a stable kernel release fairly soon, so a look at the status of this patch set, and the proposed ABI in particular, seems in order.

To recap briefly: deadline scheduling does away with the concept of process priorities that has been at the core of most CPU scheduler algorithms. Instead, each process provides three parameters to the scheduler: a "worst-case execution time" describing a maximum amount of CPU time needed to accomplish its task, a period describing how often the task must be performed, and a deadline specifying when the task must first be completed. The actual scheduling algorithm is then relatively simple: the task whose deadline is closest runs first. If the scheduler takes care to not allow the creation of deadline tasks when the sum of the worst-case execution times would exceed the amount of available CPU time, it can guarantee that every task will be able to finish by its deadline.
Deadline scheduling is thus useful for realtime tasks, where completion by a deadline is a key requirement. It is also applicable to periodic tasks like streaming media processing.
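The admission test implied by that guarantee can be made concrete: each deadline task consumes at most runtime/period of a CPU (a task needing 10ms of CPU every 100ms uses 10%, for example), and new deadline tasks must be refused once the sum of those fractions would exceed the capacity reserved for deadline scheduling. A minimal user-space sketch of the idea (not the kernel's implementation):

#include <stdbool.h>
#include <stddef.h>

struct dl_task {
        unsigned long long runtime_us;  /* worst-case execution time */
        unsigned long long period_us;   /* how often the task must run */
};

/* Return true if adding new_task keeps total utilization within
   capacity (e.g. 0.95 if 95% of CPU time is available). */
static bool admit(const struct dl_task *tasks, size_t n,
                  const struct dl_task *new_task, double capacity)
{
        double util = (double)new_task->runtime_us / new_task->period_us;
        size_t i;

        for (i = 0; i < n; i++)
                util += (double)tasks[i].runtime_us / tasks[i].period_us;
        return util <= capacity;
}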
In recent times, work on deadline scheduling has been done by Juri Lelli. He has posted several versions, improving things along the way. His v8 posting in October generated a fair amount of discussion, including suggestions from scheduler maintainers Peter Zijlstra and Ingo Molnar that the time had come to merge this code. That merging did not happen for 3.13, but chances are that it will for a near-future kernel release, barring some sort of unexpected roadblock. The main thing that remains to be done is to get the user-space ABI nailed down, since that aspect is hard to change after it has been released in a mainline kernel.
Controlling the scheduler
To be able to guarantee that deadlines will be met, a deadline scheduler must have an incontestable claim to the CPU, so deadline tasks will run ahead of all other tasks — even those in the realtime scheduler classes. Deadline-scheduled processes cannot take all of the available CPU time, though; the amount of time actually available is controlled by a set of sysctl knobs found under /proc/sys/kernel/. The first two already exist in current kernels: sched_rt_runtime_us and sched_rt_period_us. The first specifies the amount of CPU time (in microseconds) available to realtime tasks, while the second gives the period over which that CPU time is available. By default, 95% of the total CPU time is made available to realtime processes, leaving 5% to give a desperate system administrator a chance to recover a system from a runaway realtime process.
The new sched_dl_runtime_us knob is used to say how much of the realtime allocation is available for use by the deadline scheduler. The default setting allocates 40% for deadline scheduling, but a system administrator may well want to tweak that value. Note that, while realtime scheduling is supported by control groups, deadline scheduling has not yet been implemented at that level. How deadline scheduling should interact with group scheduling raises some interesting questions that have not yet been fully answered.
The other piece of the ABI allows processes to enter and control the deadline scheduling regime. The current system call for changing a process's scheduling class is:
int sched_setscheduler(pid_t pid, int policy, const struct sched_param *param);
The sched_param structure used in this system call is quite simple:
struct sched_param {
        int sched_priority;
};
So sched_setscheduler() works fine for the currently available scheduling classes; the desired class is specified with the policy parameter, while the associated process priority goes into param. But struct sched_param clearly does not have the space needed to hold the three parameters needed with deadline scheduling, and its definition cannot be changed without breaking the existing ABI. So a new system call will be needed. As of this writing the details are still under discussion, but the new ABI can be expected to look something like this:
struct sched_attr {
        int sched_priority;
        unsigned int sched_flags;
        u64 sched_runtime;
        u64 sched_deadline;
        u64 sched_period;
        u32 size;
};

int sched_setscheduler2(pid_t pid, int policy, const struct sched_attr *param);
int sched_setattr(pid_t pid, const struct sched_attr *param);
int sched_getattr(pid_t pid, struct sched_attr *param, unsigned int size);
Here, size (as both a parameter and a structure field) is the size of the sched_attr structure. If, in the future, the need arises to add more fields to that structure, the kernel will be able to use the size value to determine which version of the structure an application is using and respond accordingly. For the curious: size is meant to be specified within struct sched_attr when that structure is, itself, an input parameter to the kernel; otherwise size is given separately. The sched_flags field of struct sched_attr is not used in the current version of the patch.
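To give a sense of how an application might use the proposed interface, here is a heavily hedged user-space sketch. The system call number and the SCHED_DEADLINE policy value are placeholders (neither had been finalized at the time of this writing), the time units are assumed to be nanoseconds, and the ABI itself may change before merging:

#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>

/* Placeholder values; the real numbers depend on the final merged ABI */
#define NR_sched_setscheduler2_GUESS   350
#define SCHED_DEADLINE_GUESS           6

struct sched_attr {
        int sched_priority;
        unsigned int sched_flags;
        unsigned long long sched_runtime;       /* u64: worst-case execution time */
        unsigned long long sched_deadline;      /* u64: relative deadline */
        unsigned long long sched_period;        /* u64: period */
        unsigned int size;                      /* u32: size of this structure */
};

int main(void)
{
        struct sched_attr attr = {
                .sched_runtime  = 10 * 1000 * 1000,     /* 10ms of CPU time ... */
                .sched_period   = 100 * 1000 * 1000,    /* ... every 100ms ... */
                .sched_deadline = 100 * 1000 * 1000,    /* ... due by the end of the period */
                .size           = sizeof(attr),
        };

        if (syscall(NR_sched_setscheduler2_GUESS, getpid(),
                    SCHED_DEADLINE_GUESS, &attr) != 0)
                perror("sched_setscheduler2");
        return 0;
}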
One other noteworthy detail is that processes running in the new SCHED_DEADLINE class are not allowed to fork children in the same class. As with the realtime scheduling classes, this restriction can be worked around by setting the scheduling class to SCHED_DEADLINE|SCHED_RESET_ON_FORK, which causes the child to be placed back into the default scheduling class. Without that flag, a call to fork() will fail.
Time to merge?
The deadline scheduling patch set has a number of loose ends left to be dealt with, many of which are indicated in the patches themselves. But there comes a point where it is best to go ahead and get the code into the mainline so that said loose ends can be tied down more quickly; the deadline scheduling patches may well have reached that point. Since deadline scheduling can be added without much risk of regressions on systems where it is not in use, there should not be a whole lot more that needs to be dealt with before it can be merged.
...except, maybe, for one little thing. When deadline scheduling was discussed at the 2010 Kernel Summit, Linus and others were clearly worried that there might not be actual users for this functionality. There has not been a whole lot of effort put into demonstrating users for deadline scheduling since then, though it is worth noting that the Yocto project has included the patch in some of its kernels. The JUNIPER project is also planning to use deadline scheduling, and has been supporting its development. Users like these will definitely help the deadline scheduler's case; Linus has become wary of adding features that may not actually be used. If that little point can be adequately addressed, we may have deadline scheduling in the mainline in the near future.
Patches and updates
Kernel trees
Architecture-specific
Core kernel code
Development tools
Device drivers
Filesystems and block I/O
Memory management
Networking
Page editor: Jonathan Corbet
Distributions
Planning for "openSUSE 2016"
The openSUSE project has hit a patch of slack time; 13.1 was recently released and 13.2 is roughly eight months out. That would be a good time for some long-term planning, at least according to Agustin Benito Bethencourt, the lead of the openSUSE team within SUSE. He put out two separate messages on the opensuse-project mailing list, both with subjects containing the phrase "openSUSE 2016"—clearly he is looking ahead a few years; he would like to see the rest of the project do the same. As might be guessed, there was a fair amount of discussion of Benito's ideas, but consensus seems a ways off.
Benito started with an overview of the current openSUSE "picture", consisting of the distribution's status vis-à-vis other distributions and its place in the Linux world. It is a picture of stability in terms of the number of users and contributors, and any signs of either growth or decline are fairly small. He posited that overall use of Linux on the server and desktop (separate from mobile and cloud) is growing, so openSUSE's lack of growth could be seen as something of a wake-up call that something needs to change. He ended the lengthy note with three questions for participants about their "picture" as well as their perception of what others see when they look at openSUSE.
The argument Benito made was largely based on statistics gathered from various activities (downloads, updates, contributions, "social media", etc.). Some thread participants were not particularly moved by a statistical approach, with both Klaas Freitag and Togan Muftuoglu suggesting that focusing on the numbers is a waste of time. In addition, Muftuoglu suggested that the distribution should be a leader, rather than a follower, while Freitag thinks the problems stem from a lack of "do-ers".
There also is something of an "us vs. them" conflict at the heart of the discussion. Benito leads a team of SUSE employees who work on openSUSE, and some other community members are concerned about SUSE dictating the direction for openSUSE. Benito, for his part, has tried to assure everyone that he is looking for a discussion, not to dictate anything, but his tone put some off. For example, Andrew Wafaa was not pleased by the tone, particularly in a follow-up post by Benito.
But the SUSE openSUSE team takes care of many of the pieces required to get a release out the door. In fact, Robert Schweikert wondered whether the community without the SUSE team would be able to put together a release on an eight-month schedule—if so, what would that release look like? The question of how both the responsibility and the governance are split between the non-SUSE and SUSE contingents is an important one, he said.
It's not just non-SUSE folks who are not completely happy with the tone. OpenSUSE release manager Stephan "coolo" Kulow also expressed some dissatisfaction with Benito's communication style, but noted that the idea of discussing the future was a good one: "We can't just keep having release, party, release, party, ...". He referred to SUSE Engineering VP Ralf Flaxa's keynote at the openSUSE conference in July, in which Flaxa said that while "SUSE wants some things to happen within openSUSE", it also wants "openSUSE to be in openSUSE's hands" and not to dictate from on high.
Will Stephenson is concerned that creative distribution contributors have moved on to other pursuits, leaving the desktop and server distribution "scene" behind. He wondered if openSUSE should simply be content with the status quo or if it should try to "become an exciting and relevant place to be once again". In a follow-up message, he noted several areas where openSUSE could perhaps make itself more useful to its users: cloud, DevOps, and ARM. Those areas weren't chosen at random, but came from what SUSE highlighted in its press release for the 13.1 release. Focusing on those areas might make openSUSE an even more attractive target for investment from SUSE, he said. There was general agreement with that sentiment, but plenty of details to work out.
Meanwhile, in Benito's second "picture" message, he got more specific about some of his ideas. It was broken down into four areas: goals, an enhanced "Factory" release, targeting developers and everyday end users with releases, and open governance. Each had a handful of bullet points to flesh it out, followed by five questions as a starting point for replies.
The goals he listed were largely agreeable to most, though there were quibbles. The most significant new pieces in Benito's message are the rolling, always installable Factory—a development release, like Fedora's Rawhide—and the target users, which is something openSUSE has long struggled with (not unlike other distributions). Targeting users and developers (e.g. non-open-source developers) was something that Per Jessen thought would mesh well with his company, which has used openSUSE on servers and desktops for many years. Others seem less convinced of that particular focus, while still recognizing the need for one.
The tension between a rolling release and longer release cycles was one area that was brought up frequently. But offering both would neatly solve a real problem: some users need longer-term stability, while others are more interested in the latest applications and tools; often both are needed in the same companies, or even by the same users on different systems. Having both available in a single distribution would be attractive.
But, as Benito cautioned, it is important to distinguish between support and maintenance. The latter is what openSUSE provides, while support comes from an enterprise vendor (e.g. SUSE) for its distribution. Eventually, perhaps, third parties will provide support for community distributions as well, but "long-term support" is something of a misnomer for community distributions today.
The discussion continues, and has branched out to a separate development process proposal (with an accompanying interactive diagram) from Kulow on the opensuse-factory list. It is, essentially, an update to the "rings" idea from August, with newer ideas based on what has been learned in the interim.
While Benito's focus is a few years out (and what it takes to get there), Kulow's proposal is closer to hand, but the two are still clearly related in some ways. To get to an always-usable Factory—and thus a rolling release based on it—changes need to be made. Whether Kulow's specific ideas will prevail or not, most seem in favor of the goals. In fact, both the near and the far goals are gaining traction, though much is left to be fleshed out. The traditional Linux distribution model is under a lot of pressure from multiple directions; it will be interesting to see where that leads for openSUSE and others as well.
Brief items
Distribution quotes of the week
...
*but*, I'm not saying that's actually what we should do. I quite like building exciting prototypes. Building Corollas probably ain't as much fun.
CentOS-6.5
CentOS 6.5 is available. "There are many fundamental changes in this release, compared with the past CentOS-6 releases, and we highly recommend everyone study the Release Notes as well as the upstream Technical Notes about the changes and how they might impact your installation." The release notes are here.
CyanogenMod 10.2.0
CyanogenMod 10.2.0 has been released. "With the release of CM 10.2.0 we end our foray into Android 4.3-land and planned releases based on that code. We will of course continue to provide hot-fixes, security patches and similar as needed. The device roster for this initial run includes all devices that received an RC1." The focus for CM11 is Android 4.4 "KitKat".
Oracle Linux Release 6 Update 5
Oracle Linux 6.5 is available. This release ships with three sets of kernel packages: Oracle's enterprise kernel versions 2.6.39 and 3.8.13, and a Red Hat compatible kernel. Oracle's version 3 kernel has some major improvements listed in the announcement.
Distribution News
Debian GNU/Linux
Keith Packard added to the Debian Technical Committee
Debian Project Leader Lucas Nussbaum has announced the appointment of Keith Packard to the project's Technical Committee. Keith will be getting off to a running start; one of the first things the committee has to do is to resolve the issue of which init system the Debian distribution will use by default. (See this message for a description of the process that led to Keith's nomination).
Bits from ARM porters
The Debian-ARM team has a status report covering buildd and porterbox status, bootloader support, Debian installer support, arm64 Debian port support, cross toolchain in Debian, Debian-Ports integration, Emdebian.org server future, cross build-depends discussion for bootstrapping, enable multiarch in buildd software, Raspbian port support, and more.
Bits from Debian Med team
The Debian Med team has a few bits to share. Topics include Andreas Tille's talk at FOSDEM and DebConf 13, an interview with Andreas Tille, an article about Free Software in medicine, the annual sprint, and more.
Release sprint results - team changes, auto-rm and arch status
The Debian Release Team has an update covering team membership, auto-removal of non-leaf packages, release goals, architecture status, and several other topics.
Newsletters and articles of interest
Distribution newsletters
- Debian Project News (December 2)
- DistroWatch Weekly, Issue 536 (December 2)
- Gentoo Monthly Newsletter (November)
- Ubuntu Weekly Newsletter, Issue 345 (December 1)
Page editor: Rebecca Sobol
Development
A bit of visual programming with MicroFlo
One of the chief advertised benefits of the Arduino family of devices is its suitability for novice programmers—Arduino "sketches" (Arduino's term for programs) are easy to read and write, and the integrated development environment (IDE) is equally friendly to learn and use. But there is always more that can be done, as a recent "visual programming" project illustrates.
MicroFlo is an implementation of flow-based programming for microcontroller boards. Flow-based programming represents applications as graphs of modules that are joined by connecting together input and output streams on each module. It is a model that is conceptually easy for newcomers to grasp, in part because it lends itself so well to visual diagrams, where the relationships between components are clear and hard to misinterpret.
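For readers who have not encountered flow-based programming before, the following sketch may help illustrate the model. It is purely conceptual and has nothing to do with MicroFlo's actual code or wire format; the component names (timer, toggle, led) and the tiny API are invented for illustration. The point is that components only react to packets arriving on input ports, and the "program" is just a table of connections between ports.

/*
 * Conceptual sketch of flow-based programming (not MicroFlo code):
 * components expose a receive() handler, and a graph is a table of
 * connections from one component's output port to another's input port.
 * A packet sent on an output port is delivered to every connected input.
 */
#include <stdio.h>

struct component {
    const char *name;
    void (*receive)(struct component *self, int port, int packet);
};

struct connection {
    struct component *from; int out_port;
    struct component *to;   int in_port;
};

static struct connection graph[8];
static int n_connections;

static void connect(struct component *from, int out_port,
                    struct component *to, int in_port)
{
    graph[n_connections++] = (struct connection){ from, out_port, to, in_port };
}

static void send_packet(struct component *from, int out_port, int packet)
{
    for (int i = 0; i < n_connections; i++)
        if (graph[i].from == from && graph[i].out_port == out_port)
            graph[i].to->receive(graph[i].to, graph[i].in_port, packet);
}

/* A component that flips its internal state on every packet and passes it on. */
static void toggle_receive(struct component *self, int port, int packet)
{
    static int state;
    state = !state;
    send_packet(self, 0, state);
}

/* A stand-in for a component that drives an output pin. */
static void led_receive(struct component *self, int port, int packet)
{
    printf("%s: pin set to %d\n", self->name, packet);
}

static struct component timer  = { "timer",  NULL };
static struct component toggle = { "toggle", toggle_receive };
static struct component led    = { "led",    led_receive };

int main(void)
{
    /* Wire up: timer -> toggle -> led, the classic "blink" graph. */
    connect(&timer, 0, &toggle, 0);
    connect(&toggle, 0, &led, 0);

    /* Pretend the timer fires twice: the LED goes on, then off. */
    send_packet(&timer, 0, 1);
    send_packet(&timer, 0, 1);
    return 0;
}

In a visual environment like MicroFlo, the connect() calls above are what the user draws as edges between nodes; the components themselves are provided by the runtime.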
MicroFlo is the creation of Jon Nordby, best known from the GEGL and MyPaint projects. The first public release was in September, followed by the latest update (version 0.2) in late November. Although the initial target platform is Arduino-compatible hardware, the long-term goals include a variety of other devices.
As the name suggests (to those who follow flow-based programming), MicroFlo is inspired by NoFlo, a flow-based development environment for JavaScript, and MicroFlo's visual interface is built with the NoFlo UI components (although the code to which they are connected is very different). The interface runs on Node.js, and allows users to construct Arduino programs by building a graph of components in the browser window. Behind the scenes, MicroFlo currently requires the Arduino IDE to be running as a back-end connection to the hardware device.
The notion of representing an Arduino sketch as a graph of nodes would not be particularly novel on its own, but MicroFlo goes further by enabling the user to change the graph in the interface and have those changes reflected almost live on the device itself. It is actually a "MicroFlo runtime" sketch that is uploaded to the Arduino; as the user constructs the graph in the web UI, the graph is sent to the device as parameters to the runtime. This means that programs are limited to the components supported by the MicroFlo runtime sketch (of which there are currently around 20); however, it also means that changes to the application's graph are reflected near-instantaneously in the running code on the Arduino. In comparison, uploading a sketch through the Arduino IDE takes around fifteen seconds.
Blinken
To get started with MicroFlo, one needs a recent Arduino board and the Arduino IDE. Next come the prerequisites for MicroFlo itself: Node.js, the npm package manager, and the node-gyp package. The MicroFlo release packages include three pieces: the server component for Node.js (in the microflo folder), the NoFlo web UI (in the microflo-ui folder), and the library for the Arduino IDE (the microflo-arduino.zip file).
The server is installed with npm install, then started with:
node ./microflo.js runtime
The library needs to be loaded into the Arduino IDE (through the "Add library" menu item), then the "Standalone" sketch uploaded to the Arduino device. With the sketch running on the device and the server running on the PC, the user can open the index.html page from the UI folder and begin editing a graph. Pressing the "play" button relays the graph to the Arduino, where it begins executing immediately.
The modules available in the graph UI right now cover most of the basic Arduino functions: the various input/output pins (digital and analog), the built-in LED and serial port, and so on. There are also basic flow-control options, timers, and counters, and a module for reading a Dallas Semiconductor thermometer, presumably courtesy of Nordby's first release, which his blog announcement indicates he tested by controlling an old refrigerator.
The project's roadmap indicates that supporting the full set of Arduino tutorials is one of the goals, as is allowing users to build more complex programs by defining "sub-graphs"—functional units that can be combined together just like the basic modules.
That would truly be a useful addition; as it is, MicroFlo makes it quite simple to tweak a graph and see its changes on the Arduino itself. However, many other visual programming environments work well when the full graph fits neatly onto the screen, but start to break down rapidly once the graph becomes so large that horizontal and vertical scrolling are required to work on it (see the Blender node graph for one example).
Replace software here:
The other angle that Nordby highlights for MicroFlo's potential usefulness is the possibility of "IDE included" hardware devices. In a talk at the Piksel 2013 conference (of which a YouTube video is available), he noted that more and more electronic products in the home are built with software-driven microcontrollers, but that in almost every case there is no practical way to take advantage of the freedom to tinker with the software inside—even if it is free software.
By including a runtime in these devices that can be connected to a PC (and modified), it would be far simpler to reprogram household appliances. Furthermore, he said, the visual programming model makes the features of a device easier to discover and less difficult to understand. Essentially, when connecting modules is as simple as linking output pads to input pads on a graph, it becomes much more difficult to write something syntactically incorrect or dangerous.
Of course, to achieve that long-term goal, a project like MicroFlo will have to expand far beyond Arduino boards. Other device back-ends are on the project roadmap, starting with a more generic back-end for the AVR microcontrollers used by Arduino boards. Arduinos, after all, are designed for experimentation and prototyping—hence the use of pin headers rather than solder pads. Eventually, most "finished" projects (not to mention commercial products) move to a dedicated circuit, even if they continue to use a compatible microcontroller for their software.
The distant future aside, though, MicroFlo is already an interesting take on visual programming for a platform geared toward learning. Almost anyone can learn to program with an Arduino, but visual learners may find more to like about an IDE where all the connections and capabilities can be seen with the naked eye.
Brief items
Quotes of the week
Go 1.2 released
Version 1.2 of the Go language has been released. "This new release comes nearly seven months after the release of Go 1.1 in May, a much shorter period than the 14 months between 1.1 and 1.0. We anticipate a comparable interval between future major releases." See the release notes for details on the changes this time around, which include three-index slices, a preemptive goroutine scheduler, a test coverage analyzer, and more.
A Rust frontend for GCC
Philip Herron has announced the availability of a GCC frontend allowing it to compile programs in the Rust language. It is currently in an early stage but is able to compile a number of programs. "There is still a lot of work but i would really like to share it and see what people think. Personally i think rust will target GCC very well and be a good addition (if / when it works)."
Emacspeak 39.0 available
Version 39.0 of the Emacspeak audio desktop has been released. Codenamed "BigDog," the release includes several key enhancements, such as speech-enablement for Chrome, Firefox, and IPython Notebook, improved Web search wizards and URL templates, and improved support for Emacs 24.3.
Git 1.8.5 released
Git 1.8.5 is available. This release rolls out a number of feature additions, including updates to foreign interfaces, subsystems, and ports.
EFL 1.8 released
Version 1.8 of the Enlightenment Foundation Libraries (EFL) has been released. Carsten Haitzler notes that "many major things have happened to EFL in the past year since EFL 1.7 first came out. The usual has been done in fixing bugs, optimizing speed and memory footprint, and much much more." The changes merge several of EFL's libraries into a single EFL package, as well as introducing new libraries for physics and D-Bus, among many other changes.
Newsletters and articles
Development newsletters from the past week and change
- Caml Weekly News (December 3)
- OpenStack Community Weekly Newsletter (November 29)
- Perl Weekly (December 2)
- PostgreSQL Weekly News (December 2)
- Ruby Weekly (November 28)
- Tor Weekly News (December 4)
Paper: How and why has the LibreOffice project evolved?
Simon Phipps pointed to a paper by Jonas Gamalielsson and Bjorn Lundell on the evolution of the Apache OpenOffice and LibreOffice development communities. "From our results, it remains to be seen to what extent the LibreOffice and Apache OpenOffice projects may successfully evolve their projects and associated communities in a way that can be sustainable long-term. So far it seems that LibreOffice has been the more successful project in terms of growing associated communities. Our results suggest that the choice of Open Source license significantly impacts on conditions for attracting contributions to Open Source projects."
The hackable watch: a wearable MSP430 MCU
At his blog, Alessandro Pasotti has written a tutorial on developing for the Texas Instruments eZ430 Chronos programmable watch using Linux, including a look at the two active firmware development projects. The Chronos is a low-power watch with RF communications and a variety of sensors; as Pasotti shows, it is quite hackable, even if it has not attracted as large a community as other wearable computing products.
Page editor: Nathan Willis
Announcements
Brief items
Cloudius Systems, HSA Foundation and Valve Join Linux Foundation
The Linux Foundation has three new members: Cloudius Systems, HSA (Heterogeneous System Architecture) Foundation, and Valve. "The newest Linux Foundation members represent both nascent open source endeavors as well as established industry leaders. Companies from diverse markets, such as gaming, cloud computing and virtualization are seeing the value of Linux and collaborative development to put them out in front of competitors. For example, Valve recently announced its plans to expand its Steam platform, with 65 million active accounts and games from hundreds of developers, into the living room with the Steam Machines project. An innovative living room device, the Steam Machines will be powered by a Linux-based operating system dubbed the Steam OS. At the same time, emerging cloud, virtualization, big data and device-driven computing companies are taking advantage of open development and collaboration to ignite innovation." (Thanks to Ben Boeckel)
FLOSS 2013: The survey for Open Source contributors 10 years later
The first FLOSS survey was launched in 2002. Now a second survey is underway. The deadline for taking the survey is December 6. "This time not only developers, but all kind of contributors to Open Source projects are asked to participate. How has the community changed in the last 10 years? In how far are the points of view similar? How is the composition of the community nowadays?"
Free Software Foundation Giving Guide
The FSF has released a giving guide for the holiday season. "'The Giving Guide is a map through the minefield of restrictive electronics and Web services that many will be seeking as gifts this season, just as one might shop for fair labor products from worker-owned cooperatives, or environmentally friendly products from local sources. There's no need to sacrifice your freedom or the freedom of the people you care about,' said Libby Reinish, campaigns manager for the FSF."
Articles of interest
Free Software Supporter Issue 68, November 2013
The Free Software Foundation's newsletter for November covers the 2013 Giving Guide, free software at iD Programming Academy, LibrePlanet 2014, GNU/Linux game based on lost NASA archives, Rockstar vs. Google, DRM in cars, and several other topics.
New Books
Node.js the Right Way--New from Pragmatic Bookshelf
Pragmatic Bookshelf has released "Node.js the Right Way" by Jim R. Wilson.
Perl One-Liners--New from No Starch Press
No Starch Press has released "Perl One-Liners" by Peteris Krumins.
Calls for Presentations
Distributions Developer Devroom: Call for Participation
There will be a cross-distribution miniconference at FOSDEM (February 1-2, 2014 in Brussels, Belgium). The call for participation for this miniconference is open until December 22.
CFP Deadlines: December 5, 2013 to February 3, 2014
The following listing of CFP deadlines is taken from the LWN.net CFP Calendar.
| Deadline | Event Dates | Event | Location |
|---|---|---|---|
| December 15 | February 21-23 | Southern California Linux Expo | Los Angeles, CA, USA |
| December 31 | April 8-10 | Open Source Data Center Conference | Berlin, Germany |
| January 7 | March 15-16 | Chemnitz Linux Days 2014 | Chemnitz, Germany |
| January 10 | January 18-19 | Paris Mini Debconf 2014 | Paris, France |
| January 15 | February 28-March 2 | FOSSASIA 2014 | Phnom Penh, Cambodia |
| January 15 | April 2-5 | Libre Graphics Meeting 2014 | Leipzig, Germany |
| January 17 | March 26-28 | 16. Deutscher Perl-Workshop 2014 | Hannover, Germany |
| January 19 | May 20-24 | PGCon 2014 | Ottawa, Canada |
| January 19 | March 22 | Linux Info Tag | Augsburg, Germany |
| January 22 | May 2-3 | LOPSA-EAST 2014 | New Brunswick, NJ, USA |
| January 28 | June 19-20 | USENIX Annual Technical Conference | Philadelphia, PA, USA |
| January 30 | July 20-24 | OSCON 2014 | Portland, OR, USA |
| January 31 | March 29 | Hong Kong Open Source Conference 2014 | Hong Kong, Hong Kong |
| January 31 | March 24-25 | Linux Storage Filesystem & MM Summit | Napa Valley, CA, USA |
| January 31 | March 15-16 | Women MiniDebConf Barcelona 2014 | Barcelona, Spain |
| January 31 | May 15-16 | ScilabTEC 2014 | Paris, France |
| February 1 | April 29-May 1 | Android Builders Summit | San Jose, CA, USA |
| February 1 | April 7-9 | ApacheCon 2014 | Denver, CO, USA |
| February 1 | March 26-28 | Collaboration Summit | Napa Valley, CA, USA |
If the CFP deadline for your event does not appear here, please tell us about it.
Upcoming Events
Events: December 5, 2013 to February 3, 2014
The following event listing is taken from the LWN.net Calendar.
| Date(s) | Event | Location |
|---|---|---|
| December 6 | CentOS Dojo | Austin, TX, USA |
| December 10-11 | 2013 Workshop on Spacecraft Flight Software | Pasadena, USA |
| December 13-15 | SciPy India 2013 | Bombay, India |
| December 27-30 | 30th Chaos Communication Congress | Hamburg, Germany |
| January 6 | Sysadmin Miniconf at Linux.conf.au 2014 | Perth, Australia |
| January 6-10 | linux.conf.au | Perth, Australia |
| January 13-15 | Real World Cryptography Workshop | NYC, NY, USA |
| January 17-18 | QtDay Italy | Florence, Italy |
| January 18-19 | Paris Mini Debconf 2014 | Paris, France |
| January 31 | CentOS Dojo | Brussels, Belgium |
| February 1-2 | FOSDEM 2014 | Brussels, Belgium |
If your event does not appear here, please tell us about it.
Page editor: Rebecca Sobol
