Weekly Edition for July 18, 2013

A look in on Plasma 2

By Jake Edge
July 17, 2013
Akademy 2013

On the first day of Akademy 2013, Marco Martin gave a status report on the Plasma 2 project. Plasma is the umbrella term for KDE's user experience layer, which encompasses the window manager (KWin) and desktop shell. In his talk, Martin looked at where things stand today and where they are headed for the future.

[Marco Martin]

Martin began by noting that much of the recent planning for the next few years of Plasma development was done at a meeting in Nuremberg earlier this year. His talk was focused on reporting on those plans, but also explaining which parts had been implemented and what still remains to be done.

Plasma today

The existing Plasma is a library and five different shells that are targeted at specific kinds of devices (netbook, tablet, media center, desktop, and KPart—which is used by KDevelop for its dashboard, but is not exactly a "device"). Plasma is not meant to be a "one size fits all" model, but to be customized for different devices as well as for different types of users.

It is "very easy to build very different-looking desktop interfaces" with Plasma, by assembling various plugins (called "plasmoids") into the interface. He counted 71 plasmoids available in the latest KDE Software Compilation (SC) and there are many more in other places.

As far as features go, "we are pretty happy right now" with Plasma. After the 4.11 KDE SC release, feature development for Plasma 1 will cease and only bug fixes will be made for the next several years. That will be a good opportunity to improve the quality of Plasma 1, he said.

Plasma tomorrow

[Akademy group photo]

Though the team is happy with the current feature set, that doesn't mean that it is time to "go home" as there are many ways to improve Plasma for the future, Martin said. More flexibility to make it easier for third parties to create their own plasmoids and user experiences is one area for improvement. Doing more of what has been done right—while fixing things that haven't been done right—is the overall idea. But there is a "big elephant in the room"—in fact, there are four of them.

The elephants are big changes to the underlying technology that need to be addressed by Plasma 2: Qt 5, QML 2, KDE Frameworks 5, and Wayland. All of the elephants are technical, "which means fun", he said. Of the four, the switch to QML 2 will require the most work. Wayland requires quite a bit of work in KWin to adapt to the new display server, but the QML switch is the largest piece. QML is the JavaScript-based language that can be used to develop Qt-based user interface elements.

Given that everything runs in QML 1 just fine, he said, why switch to QML 2? To start with, QML 2 has support for more modern hardware. In addition, it has a better JavaScript engine and can use C++ code without requiring plugins. Beyond that, though, QML 1 is "on life support" and all of the development effort is going into QML 2. There is also a "promising ecosystem" of third-party plugins that can be imported into QML 2 code, which means a bigger toolbox is available.

Another change will be to slim down the libplasma library by moving all of the user-interface-related features to other components. That is how it should have been from the beginning, Martin said. What's left is a logical description of where the graphics are on the screen, the asynchronous data engines, runners, and services, and the logic for loading the shell. All of the QML-related code ends up in the shell. That results in a libplasma that went from roughly 3M in size to around 700K.

One shell to rule them all

Currently, there are separate executables for each shell, but that won't be the case for Plasma 2. The shell executable will instead just have code to load the user interface from QML files. So, none of the shell will be in C++; it will be purely a runtime environment loaded from two new kinds of packages: "shell" and "look and feel". The shell package will describe the activity switcher, the "chrome" in the view (backgrounds, animations, etc.), and the configuration interface for the desktop and panels.

The look and feel package defines most of the functionality the user regularly interacts with including the login manager, lock and logout screens, user switching, desktop switching, Alt+Tab, window decorations, and so on. Most of those are not managed by the shell directly, but that doesn't really matter to the user as it is all "workspace" to them. All of those user interface elements should have a consistent look and feel that can be changed through themes.

Different devices or distributions will have their own customized shell and look and feel packages to provide different user experiences. All of that will be possible without changing any of the C++ code. In addition, those packages can be changed on the fly to switch to a different user experience. For example, when a tablet is plugged into a docking station, the tablet interface could shut down and start a desktop that is geared toward mouse and keyboard use. What that means for the applications and plasmoids running at the time of the switch is up in the air, Martin said in response to a question from the audience.

Current status

So far, the team has gotten a basic shell running that uses Qt 5, QML 2, and Frameworks 5. The libplasma restructuring is nearly done, so the library is smaller and more manageable. Some QML 2 plasmoids, containments, and shell packages have been started, but the existing Plasma 1 code will need to be ported. For pieces written in QML, the port will not require much work, but those written in C++ will need some work to port them to Plasma 2. Martin summed it up by saying that the "ground work is done", but there is still plenty of work to do.

[Thanks to KDE e.V. for travel assistance to Bilbao for Akademy.]

Comments (2 posted)

Why was this package updated?

By Nathan Willis
July 17, 2013

Releasing early and often has its drawbacks, even for those who dearly love free software. One of those drawbacks is the tiresome and often thankless duty of packaging up the releases and pushing them out to users. The more frequently one does this, the greater the temptation can be to gloss over some of the tedium, such as entering detailed or informative descriptions of what has changed. Recently, Fedora discussed that very topic, looking for a way to improve the information content of RPM package updates.

Michael Catanzaro raised the subject on the fedora-devel list in late June, asking that package maintainers make an effort to write more meaningful descriptions of changes when they roll out updates. Too many updates, he said, arrive with no description beyond "update to version x.y.z" or, worse, the placeholder text "Here is where you give an explanation of your update." Since the update descriptions in RPM packages are written for the benefit of end users (as opposed to the upstream changelog, which may be read only by developers), the goal is for the description to explain the purpose of the update, if not to actually go into detail. Instances such as the ones Catanzaro cited are not the norm, of course, and presumably no packager intends to be unhelpful. The trick is figuring out how to drive the community of volunteers who publish updates in the right direction.


Not everyone perceives there to be a problem, of course. Till Maas disagreed that terse update descriptions are harmful, suggesting, for example, that updates that fix bugs are already informative enough if the bug fixed is clearly indicated in the "bugs" field. But Adam Williamson responded that even in such simple cases, the update description ought to point the end user in the right direction:

"This update simply fixes the bugs listed" is an okay description - it tells the reader what they need to know and re-assures them that the update doesn't do anything *else*. Of course, if it does, you need to explain that: "This update includes a new upstream release which fixes the bugs listed. You can find other changes in the upstream description at".

Richard Jones argued that the current tool support is inadequate, which forces people to duplicate change messages in multiple places, from the source repository to RPM package files to the update description field in Bodhi, Fedora's update-publishing tool. "In short my point is: don't moan about bad update messages when the problem is our software sucks," Jones said. When asked what the software should do, Jones proposed that RPM could be pointed toward the upstream changelog and release notes:

    %changelog -f <changelog_file>
    %changelog -g <git_repo>

    %release_notes -f <release_notes_file>

The subsequent tools in the update-release process could simply extract the information from RPM. Björn Persson challenged that proposal as unworkable, however, saying that attempting to extract changelog information automatically would require adding flags for Subversion, CVS, Monotone, Mercurial, Arch, Bazaar, and every other revision control system. Furthermore, automatically parsing the release_notes_file is hardly possible either, given that it can be in any format and any language.
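Jones's idea can be illustrated with a small sketch (hypothetical code, not part of RPM or any real tool): pull recent commit subjects out of a git repository and format them as changelog bullets. As Persson noted, a real implementation would need equivalent support for every other revision control system.

```python
# Hypothetical sketch of Jones's proposal: generate changelog entries
# straight from a git repository instead of duplicating them by hand.
# Not actual RPM functionality; names and formatting are assumptions.
import subprocess

def format_changelog(subjects):
    """Format a list of commit subjects as changelog bullet points."""
    return "\n".join(f"- {subject}" for subject in subjects)

def changelog_from_git(repo_path, max_entries=5):
    """Return the last few commit subjects from a git repo as a changelog."""
    out = subprocess.run(
        ["git", "-C", repo_path, "log", f"-{max_entries}",
         "--pretty=format:%s"],
        capture_output=True, text=True, check=True,
    ).stdout
    return format_changelog(out.splitlines())
```

Even this toy version hints at the limits of the approach: it recovers only commit subjects, in whatever style the upstream developers happened to use.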

Later, Sandro Mani proposed a somewhat more complex method for automatically filling the description field: pulling in the upstream changelog URL for updates that are derived from upstream releases, and pre-populating the description with bug numbers if the "bugs" field is non-zero. That suggestion was met with no discussion, perhaps because it would often result in a slightly longer (although hopefully more descriptive) placeholder.

Details, details, details

But Williamson and others also took issue with Jones's original premise, that changelog information makes for suitable update descriptions in the first place. After all, the argument goes, the description is in addition to the "bugs" field and other more technical metadata; its purpose is to be displayed to the user in the software update tool. Catanzaro asked for "some minimal level of quality to what we present to users." That statement might suggest a set of guidelines, but the discussion quickly turned to how Bodhi could be modified to catch unhelpful update descriptions and discourage them.

As T.C. Hollingsworth noted, Bodhi has two interfaces: web-based and command line. But while the command-line interface will complain if the update description is left blank, the web front end automatically inserts the placeholder text, so Bodhi does not see a blank field, and thus does not complain.
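A check along the lines being discussed could treat the placeholder the same as an empty field. This is an illustrative sketch only, not actual Bodhi code; the placeholder string is the one quoted in the thread:

```python
# Illustrative sketch: reject update descriptions that are blank or that
# still contain the web interface's auto-inserted placeholder text.
# Not actual Bodhi code.
PLACEHOLDER = "Here is where you give an explanation of your update."

def description_is_meaningful(description: str) -> bool:
    """Return True only for a non-empty description without the placeholder."""
    text = description.strip()
    return bool(text) and PLACEHOLDER not in text
```

With a shared check of that sort, the web front end could complain about unusable descriptions just as the command-line client already does about blank ones.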

Williamson commented that Bodhi should reject the placeholder text, too. But either way, Bodhi cannot fully make up for the human factor. Michael Schwendt pointed out that no matter what rules are in place, a packager who wants to "cheat" will cheat. He then cited a long list of (hopefully intentionally) humorous update descriptions, such as "This is one of the strong, silent updates" and "Seriously, if I tell you what this update does, where is the surprise?"

Williamson had also suggested that other Fedora project members could vote down an update with an empty or meaningless description field, using Bodhi's "karma" feature. But the tricky part of that idea is that karma is currently used as a catch-all for all problems, including far more serious issues like an update not actually fixing the bug it claims to. Simply subtracting karma points does not communicate the specific issue. On top of that, the way karma is implemented, an update can still get pushed out if it has a sufficient positive karma score—which it presumably would if enough people vote for it without considering an unhelpful update description to be problematic.

The only real solution, then, might be one that works (at least in part) by changing the community's expected behavior. That is often the nature of solutions in open source community projects, but it is usually a slow course to pursue. Catanzaro originally asked if a set of guidelines should be written, before the conversation shifted to implementing changes in the packaging software itself. On the plus side, as Panu Matilainen observed, there are other projects that have achieved an admirable measure of success. The Mageia and Mandriva distributions, for example, have guidelines in place for update descriptions, in addition to pulling in some information from changelogs.

Then again, since the ultimate goal of update descriptions is to communicate important information to the end user, it may be better to ask someone other than packagers to look at the description fields. Ryan Lerch suggested granting write access to the update descriptions to others—namely the documentation team.

In a sense, update descriptions are akin to release notes in miniature, and release notes are a perpetual challenge for many software projects. They come at the end of long periods of development, merging, and testing, so they can feel like extra work that provides minimal added value. But as Catanzaro said in his original email, poor update descriptions can blemish a project's otherwise professional-looking image. More so, perhaps, if they continue to arrive with every additional update.

Comments (14 posted)

Connecting on the QUIC

By Nathan Willis
July 17, 2013

In the never-ending drive to increase the perceived speed of the Internet, improving protocol efficiency is considerably easier than rolling out faster cabling. Google is indeed setting up fiber-optic networks in a handful of cities, but most users are likely to see gains from the company's protocol experimentation, such as the recently-announced QUIC. QUIC stands for "Quick UDP Internet Connection." Like SPDY before it, it is a Google-developed extension of an existing protocol designed to reduce latency. But while SPDY worked at the application layer (modifying HTTP by multiplexing multiple requests over one connection), QUIC works at the transport layer. As the name suggests, it implements a modification of UDP, but that does not tell the whole story. In fact, it is more accurate to think of QUIC as a replacement for TCP. It is intended to optimize connection-oriented Internet applications, such as those that currently use TCP, but in order to do so it needs to sidestep the existing TCP stack.

A June post on the Chromium development blog outlines the design goals for QUIC, starting with a reduction in the number of round trips required to establish a connection. The speed of light being constant, the blog author notes, round trip times (RTTs) are essentially fixed; the only way to decrease the impact of round trips on connection latency is to make fewer of them. However, that turns out to be difficult to do within TCP itself, and TCP implementations are generally provided by the operating system, which makes experimenting with them on real users' machines difficult anyway.

In addition to side-stepping the problems of physics, QUIC is designed to address a number of pain points uncovered in the implementation of SPDY (which ran over TCP). A detailed design document goes into the specifics. First, the delay of a single TCP packet introduces "head of line" blocking in TCP, which undercuts the benefits of SPDY's application-level multiplexing by holding up all of the multiplexed streams. Second, TCP's congestion-handling throttles back the entire TCP connection when there is a lost packet—again, punishing multiple streams in the application layer above.

There are also two issues that stem from running SSL/TLS over TCP: resuming a disconnected session introduces an extra handshake due solely to the protocol design (i.e., not for security reasons, such as issuing new credentials), and the decryption of packets historically needed to be performed in order (which can magnify the effects of a delayed packet). The design document notes that the in-order decryption problem has been largely solved in subsequent revisions, but at the cost of additional bytes per packet. QUIC is designed to implement TLS-like encryption in the same protocol as the transport, thus reducing the overhead of layering TLS over TCP.

Some of these specific issues have been addressed before—including by Google engineers. For example, TCP Fast Open (TFO) reduces round trips when re-connecting to a previously visited server, as does TLS Snap Start. In that sense, QUIC aggregates these approaches and rolls in several new ones, although one reason for doing so is the project's emphasis on a specific use case: TLS-encrypted connections carrying multiple streams to and from a single server, like one often does when using a web application service.

The QUIC team's approach has been to build connection-oriented features on top of UDP, testing the result between QUIC-enabled Chromium builds and a set of (unnamed) Google servers, plus some publicly available server test tools. The specifics of the protocol are still subject to change, but Google promises to publish its results if it finds techniques that result in clear performance improvements.

QUIC trip

Like SPDY, QUIC multiplexes several streams between the same client-server pair over a single connection—thus reducing the connection setup costs, transmission of redundant information, and overhead of maintaining separate sockets and ports. But much of the work on QUIC is focused on reducing the round trips required when establishing a new connection, including the handshake step, encryption setup, and initial requests for data.

QUIC cuts into the round-trip count in several ways. First, when a client initiates a connection, it includes session negotiation information in the initial packet. Servers can publish a static configuration file to host some of this information (such as encryption algorithms supported) for access by all clients, while individual clients provide some of it on their own (such as an initial public encryption key). Since the lifetime of the server's static configuration ought to be very long, requesting it the first time only takes one round-trip in many weeks or months of browsing. Second, when servers respond to an initial connection request, they send back a server certificate, hashes of a certificate chain for the client to verify, and a synchronization cookie. In the best-case scenario, the client can check the validity of the server certificate and start sending data immediately—with only one round-trip expended.

Where the savings really come into play, however, are on subsequent connections to the same server. For repeat connections within a reasonable time frame, the client can assume that the same server certificate will still be valid. The server, however, needs a bit more proof that the computer attempting to reconnect is indeed the same client as before, not an attacker attempting a replay. The client proves its identity by returning the synchronization cookie that the server sent during the initial setup. Again, in the best-case scenario, the client can begin sending data immediately without waiting a round trip (or three) to re-establish the connection.

As of now, the exact makeup of this cookie is not set in stone. It functions much like the cookie in TFO, which was also designed at Google. The cookie's contents are opaque to the client, but the documentation suggests that it should at least include proof that the cookie-holder came from a particular IP address and port at a given time. The server-side logic for cookie lifetimes and under what circumstances to reject or revoke a connection is not mandated. The goal is that by including the cookie in subsequent messages, the client demonstrates its identity to the server without additional authentication steps. In the event that the authentication fails, the system can always fall back to the initial-connection steps. An explicit goal of the protocol design is to better support mobile clients, whose IP addresses may change frequently; even if the zero-round-trip repeat connection does not succeed every time, it still beats initiating both a new TCP and a new TLS connection on each reconnect.
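Since the cookie's exact makeup is unspecified, any concrete example is necessarily an assumption. The sketch below illustrates the general idea with an HMAC binding a client address to an issue time; the secret, lifetime, and construction are all hypothetical:

```python
# Rough sketch of the repeat-connection idea: the server hands out an
# opaque cookie binding a client address to an issue time; on reconnect
# the client presents it and, if it validates, may send data with zero
# extra round trips. The HMAC construction here is purely illustrative.
import hashlib
import hmac

SERVER_SECRET = b"server-side secret key"
COOKIE_LIFETIME = 3600  # seconds; real lifetimes are server policy

def issue_cookie(client_addr: str, now: float) -> bytes:
    """Bind the client's address and the current time under an HMAC tag."""
    msg = client_addr.encode() + b"|" + str(int(now)).encode()
    tag = hmac.new(SERVER_SECRET, msg, hashlib.sha256).hexdigest().encode()
    return msg + b"|" + tag

def validate_cookie(cookie: bytes, client_addr: str, now: float) -> bool:
    """Accept only an unexpired cookie issued to this address."""
    try:
        addr, issued, tag = cookie.rsplit(b"|", 2)
        issued_at = int(issued.decode())
    except ValueError:
        return False
    msg = addr + b"|" + issued
    expected = hmac.new(SERVER_SECRET, msg, hashlib.sha256).hexdigest().encode()
    return (hmac.compare_digest(tag, expected)
            and addr == client_addr.encode()
            and now - issued_at < COOKIE_LIFETIME)
```

A client presenting a cookie that validates against its current address and a recent timestamp could be allowed to send data immediately; anything else falls back to the full initial-connection handshake, which matches the protocol's stated behavior for mobile clients whose addresses change.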

Packets and loss

In addition to its rapid-connection-establishment goals, QUIC implements some mechanisms to cut down on retransmissions. First, the protocol adds packet-level forward-error-correcting (FEC) codes to the unused bytes at the end of streams. Lost data retransmission is the fallback, but the redundant data in the FEC should make it possible to reconstruct lost packets at least a portion of the time. The design document discusses using the bitwise sum of a block of packets as the FEC; the assumption is that a single-packet loss is the most common, and this FEC would allow not only the detection of but the reconstruction of such a lost packet.
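A toy version of that scheme (a deliberate simplification of whatever framing QUIC actually uses) makes the mechanism concrete: the FEC packet is the XOR of every packet in the block, so a single missing packet is simply the XOR of the survivors and the FEC packet.

```python
# Toy illustration of XOR-based FEC: the parity packet is the bitwise
# sum (XOR) of a block of equal-length packets, so any single lost
# packet in the block can be rebuilt from the survivors plus parity.
def xor_parity(packets):
    """Compute the FEC (parity) packet for a block of equal-length packets."""
    parity = bytearray(len(packets[0]))
    for pkt in packets:
        for i, byte in enumerate(pkt):
            parity[i] ^= byte
    return bytes(parity)

def recover_lost(received, parity):
    """Reconstruct the one missing packet from the survivors and parity."""
    return xor_parity(list(received) + [parity])
```

If two or more packets from the same block are lost, XOR parity cannot distinguish them, which is why retransmission remains the fallback.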

Second, QUIC has a set of techniques under review to avoid congestion. By comparison, TCP employs a single technique, congestion windows, which (as mentioned previously) are unforgiving to multiplexed connections. Among the techniques being tested are packet pacing and proactive speculative retransmission.

Packet pacing, quite simply, is scheduling packets to be sent at regular intervals. Efficient pacing requires an ongoing bandwidth estimation, but when it is done right, the QUIC team believes that pacing improves resistance to packet loss caused by intermediate congestion points (such as routers). Proactive speculative retransmission amounts to sending duplicate copies of the most important packets, such as the initial encryption negotiation packets and the FEC packets. Losing either of these packet types triggers a snowball effect, so selectively duplicating them can serve as insurance.
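The scheduling side of pacing can be sketched in a few lines (the interface and units here are illustrative assumptions, not QUIC's implementation): given a bandwidth estimate, each packet gets a send time so that transmissions are spread evenly rather than bursty.

```python
# Minimal sketch of packet pacing: given an estimated bandwidth, assign
# each packet a send time so packets leave at regular intervals instead
# of in a burst. Parameters and units are illustrative assumptions.
def pacing_schedule(packet_sizes, bandwidth_bps, start=0.0):
    """Return (send_time, size) pairs spacing packets at the estimated rate."""
    schedule, t = [], start
    for size in packet_sizes:
        schedule.append((t, size))
        t += (size * 8) / bandwidth_bps  # time this packet occupies the link
    return schedule
```

The hard part, as the article notes, is not the scheduling itself but keeping the bandwidth estimate accurate as network conditions change.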

But QUIC is designed to be flexible when it comes to congestion control. In part, the team appears to be testing out several good-sounding ideas to see how well they fare in real-world conditions. It is also helpful for the protocol to be able to adapt in the future, when new techniques or combinations of techniques prove themselves.

QUIC is still very much a work in progress. Then again, it can afford to be. Unlike SPDY, which eventually evolved into HTTP 2.0, the team behind QUIC is up front about the fact that the ideas they implement, if proven successful, would ultimately be destined for inclusion in some future revision of TCP. Building the system on UDP is a purely practical compromise: it allows QUIC's connection-management concepts to be tested on a protocol that is already understood and accepted by the Internet's routing infrastructure. Building an entirely new connection-layer protocol would be almost impossible to test, but piggybacking on UDP at least provides a start.

The project addresses several salient questions in its FAQ, including the speculation that QUIC's goals might have been easily met by running SCTP (Stream Control Transmission Protocol) over DTLS (Datagram Transport Layer Security). SCTP provides the desired multiplexing, while DTLS provides the encryption and authentication. The official answer is that SCTP and DTLS both utilize the old, round-trip–heavy semantics that QUIC is interested in dispensing with. It is possible that other results from the QUIC experiment will make it into later revisions, but without this key feature, the team evidently felt it would not learn what it wanted to. However, as the design document notes: "The eventual protocol may likely strongly resemble SCTP, using encryption strongly resembling DTLS, running atop UDP."

The "experimental" nature of QUIC makes it difficult to predict what outcome will eventually result. For a core Internet protocol, it is a bit unusual for a single company to guide development in house and deploy it in the wild, but then again, Google is in a unique position to do so with real-world testing as part of the equation: the company both runs web servers and produces a web browser client. So long as the testing and the eventual result are open, that approach certainly has its advantages over years of committee-driven debate.

Comments (34 posted)

Page editor: Jonathan Corbet


NSA surveillance and "foreigners"

By Jake Edge
July 17, 2013
Akademy 2013

A keynote that is not directly related to KDE and the work that it does is a tradition at Akademy. While that tradition was upheld again this year, Eva Galperin of the Electronic Frontier Foundation gave a talk that was both timely and applicable to everyone in the room: US National Security Agency (NSA) surveillance and what it means for non-US people. There was plenty of interest in her talk for the largely European audience, but the overview of the NSA "surveillance state" was useful to those from the US as well.

[Eva Galperin]

The US government, in conjunction with the telecommunications carriers and large internet companies like Facebook, Yahoo, Google, and Microsoft, has been carrying out "illegal surveillance" on internet and other communication for quite some time, Galperin said. We started hearing about it in 2005 from news reports that AT&T had allowed the NSA access to its network. The collection of records of phone calls was being done at an AT&T facility that is, coincidentally, just blocks from her house in San Francisco.

That led the EFF to file lawsuits against AT&T and, eventually, the NSA, over this warrantless wiretapping. The AT&T lawsuit was dismissed on national security grounds, but the other case EFF filed, Jewel v. NSA, is still ongoing. In fact, in the week prior to her talk, the courts rejected the US government request that the suit be dismissed because of national security issues. The Jewel case moving forward is "great news", she said.

The "rest of us"

But, "what about the rest of us?", she asked. For people outside of the US, whose data traverses the US or is stored there, what protections exist? The surveillance is governed by the US Foreign Intelligence Surveillance Act (FISA), which created a secret court (FIS Court, or FISC) to oversee the surveillance operations. Since it targets "foreign intelligence", FISA has "zero protections" for foreigners' data in the US. It contains "slim protections" for those in the US, but those outside are "out in the cold".

The recently released PRISM information (by way of Edward Snowden) shows that these agencies talk of the US "home field advantage" in that much of the internet's information passes through US facilities. The data stored by US cloud storage facilities, as well as by internet services such as Twitter, Facebook, Skype, and those from Google, is all fair game when it belongs to "extra-territorial" people.

It is not just the US that is doing this kind of surveillance, she said; "lots of countries" are doing it. There are various malware-based attacks that we know about, which have not been proved to be state-sponsored but are strongly suspected to be. She mentioned China, Libya, and Syria as countries suspected of targeting both citizens and foreigners. The German government is known to have an email-based malware attack that targets foreigners. Increasingly, domestic laws are allowing this kind of extra-territorial surveillance and those laws are increasing their reach.

FISA is cloaked in secrecy, such that internet companies like Google and Microsoft can't even report on the kinds of information they have been required to produce. Some of the most recent Snowden leaks (as of the time of Galperin's talk) have shown a great deal of cooperation between Microsoft and the NSA.

"Just" metadata

In addition, US phone carrier Verizon has reportedly turned over seven years' worth of "metadata" on all calls it handled which started or ended in the US. Metadata is defined "quite broadly" to include routing information, phone numbers, call durations, and so on, but not the actual contents of the calls. That it is "only metadata" is the justification used by the NSA, but it is no real protection, she said, noting that US Central Intelligence Agency chief David Petraeus resigned based on evidence gathered from metadata. As an example, Galperin said: "We know you called the phone sex line, and we know you talked for 30 minutes, but we don't know what you said."

The PRISM surveillance was initially suspected of being a "back door" for the NSA into various internet services. It still is not clear if any exist, but internet services do have to respond to FISA orders and may do so via these back door portals—possibly in realtime. Even without realtime access, PRISM targets email, online chats (text, audio, and video), files downloaded, and more. It only requires 51% confidence that the target is not a US citizen, which is quite a low standard.

The NSA is building a data center "the size of a small village" to analyze and store this information. In one recent month, it collected some 97 billion intelligence data items: 3 billion on US citizens and the rest on people elsewhere in the world. This data isn't only being used by US agencies, either. The UK GCHQ signals intelligence agency made 197 requests for PRISM data (that we know of). It's not clear that GCHQ is allowed to set up its own PRISM system, but it can access US PRISM data. And, as Galperin noted, it is not at all clear that the US can legally set up a system like PRISM.

FISA basics

FISA was enacted in the late 1970s in reaction to a US Supreme Court ruling in 1972 that required a warrant to do surveillance even for national security reasons. The "Church committee" of the US Senate had found widespread abuse of surveillance within the US, which had illegally targeted journalists, activists, and others during the 1960s and 1970s. Initially, there were fairly strong provisions against domestic surveillance, but these have been weakened by amendments to FISA over the years.

There are two main powers granted to agencies under FISA: the "business records" and "general acquisition" powers. The business records power allows the government to compel production of any records held by a business as long as it is in furtherance of "foreign intelligence". That has been secretly decided to cover metadata. The general acquisition power allows the government to request (and compels anyone to produce) "any tangible thing" for foreign intelligence purposes.

One of the biggest problems is the secretive way that these laws and powers are interpreted. Because there is a non-adversarial interpretation process (i.e. no one is empowered to argue against the government's interpretation), the reading most favorable to the government is adopted. The request must be "reasonably believed" to be related to foreign intelligence, which has been interpreted to mean a 51% likelihood, for example. Beyond that, the restrictions (such as they are) only apply to US citizens. The safeguards are few and it is unlikely that a foreigner could even take advantage of any that apply.

FISC is required to minimize the gathering and retention of data on US citizens, but the government "self-certifies" that any data is foreign-intelligence-oriented. The general acquisition power allows the government to request "just about anything" with low standards for "reasonable grounds" and "relevance". To challenge any of this surveillance, one must show that they have been actively targeted. With these low standards, the requests made to FISC are rarely turned down; of the 31,000 requests over the last 30 years, eleven have been declined, Galperin said.

The "tl;dr" of her talk is that there is a broad definition of intelligence, and the laws apply to foreigners differently than to US citizens. The fourth amendment to the US Constitution (which covers searches and warrants) may not apply to foreigners, for example. The congressional oversight of FISA is weak and the executive branch (US President and agencies) handles it all secretly so the US people (and everyone else) are in the dark about what is being done. Galperin mentioned a US congresswoman who recently said that everything that has been leaked so far is only "the tip of the iceberg" in terms of these surveillance activities.

What can be done?

A group of foreign non-profits has come together to ask the US Congress to protect foreign internet users. They also expressed "grave concern" over the sharing of the gathered intelligence with other governments, including the Netherlands and the UK. Human rights include the right to privacy, Galperin said, and standing up for that right is now more important than ever. The US government was caught spying in the 1960s and 1970s, so Congress had a committee look into it and curb some of the abuses; that needs to happen again, she said.

For individuals, "use end-to-end encryption", she said. It is rare that she speaks to a group where she doesn't have to explain that term, but Akademy is one of those audiences. Encryption "does not guarantee privacy", but it makes the NSA's job much harder.

The most useful thing that people in the audience could do is to make tools that are secure—make encryption standard. The EFF is making the same pitch to Silicon Valley companies, but it is counting on free software: "Help us free software, you are our last and only hope". Please build new products, and "save us", she concluded.

[Thanks to KDE e.V. for travel assistance to Bilbao for Akademy.]

Comments (29 posted)

Brief items

Security quotes of the week

And in the meantime, my distrust of Intel's crypto has moved from "standard professional paranoia" to "actual legitimate concern".
Matt Mackall

And while you're lying awake at night worrying whether the Men in Black have backdoored the CPU in your laptop, you're missing the fact that the software that's using the random numbers has 36 different buffer overflows, of which 27 are remote-exploitable, and the crypto uses an RSA exponent of 1 and AES-CTR with a fixed IV.
Peter Gutmann

But it would be naive for anyone -- for any of us -- to assume that Russia would not attempt to leverage a situation like this for their own purposes of Internet control. Whether or not they succeed is a wholly different question, and all of us will have a say in that, one way or another.

Yes, planned or not, incidental or not, actions do have consequences, and it would be ironic indeed if Edward Snowden's stated quest to promote the cause of freedom around the world, had the unintentional effect of helping to crush Internet freedoms at the hands of his benefactors of the moment.

Lauren Weinstein

Comments (2 posted)

An overview of Linux security features

Kernel security subsystem maintainer James Morris has posted an overview of Linux security features. "A simpler approach to integrity management is the dm-verity module. This is a device mapper target which manages file integrity at the block level. It's intended to be used as part of a verified boot process, where an appropriately authorized caller brings a device online, say, a trusted partition containing kernel modules to be loaded later."

Comments (3 posted)

New vulnerabilities

ansible: man in the middle attack

Package(s):ansible CVE #(s):CVE-2013-2233
Created:July 15, 2013 Updated:July 17, 2013
Description: From the Red Hat bugzilla:

A security flaw was found in the way Ansible, an SSH-based configuration management, deployment, and task execution system, managed remote servers' SSH host keys (previously, the ability to store known SSH host keys in a local cache was not supported). A remote attacker could use this flaw to conduct man-in-the-middle (MITM) attacks against the Ansible task execution system user.
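With host-key caching now supported, deployments can require strict checking through Ansible's host_key_checking option (a real option; the file below is a minimal illustrative ansible.cfg, not taken from the advisory):

```ini
; ansible.cfg -- refuse to connect to hosts whose SSH key is not already known
[defaults]
host_key_checking = True
```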

Fedora FEDORA-2013-12394 ansible 2013-07-15
Fedora FEDORA-2013-12400 ansible 2013-07-15
Fedora FEDORA-2013-12389 ansible 2013-07-15

Comments (none posted)

apache: denial of service

Package(s):apache2 CVE #(s):CVE-2013-1896
Created:July 15, 2013 Updated:August 14, 2013
Description: From the CVE entry:

mod_dav.c in the Apache HTTP Server before 2.2.25 does not properly determine whether DAV is enabled for a URI, which allows remote attackers to cause a denial of service (segmentation fault) via a MERGE request in which the URI is configured for handling by the mod_dav_svn module, but a certain href attribute in XML data refers to a non-DAV URI.

openSUSE openSUSE-SU-2014:1647-1 apache2 2014-12-15
SUSE SUSE-SU-2014:1082-1 apache2 2014-09-02
Gentoo 201309-12 apache 2013-09-23
Fedora FEDORA-2013-13922 httpd 2013-08-16
Scientific Linux SLSA-2013:1156-1 httpd 2013-08-13
Oracle ELSA-2013-1156 httpd 2013-08-13
Oracle ELSA-2013-1156 httpd 2013-08-13
openSUSE openSUSE-SU-2013:1341-1 apache2 2013-08-14
openSUSE openSUSE-SU-2013:1340-1 apache2 2013-08-14
openSUSE openSUSE-SU-2013:1337-1 apache2 2013-08-14
CentOS CESA-2013:1156 httpd 2013-08-13
CentOS CESA-2013:1156 httpd 2013-08-13
Red Hat RHSA-2013:1156-01 httpd 2013-08-13
Fedora FEDORA-2013-13994 httpd 2013-08-09
Slackware SSA:2013-218-02 httpd 2013-08-06
Mandriva MDVSA-2013:193 apache 2013-07-11
Ubuntu USN-1903-1 apache2 2013-07-15

Comments (none posted)

file-roller: path traversal

Package(s):file-roller CVE #(s):CVE-2013-4668
Created:July 16, 2013 Updated:July 31, 2013
Description: From the Fedora advisory:

The File Roller archive manager for the GNOME desktop suffers from a path traversal vulnerability caused by insufficient path sanitization.

A specially crafted archive file can be used to trigger the creation of arbitrary files, in any location writable by the user executing the extraction, outside the current working directory. This behaviour is triggered when the 'Keep directory structure' option is selected in the application's 'Extract' dialog.
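File Roller itself is C/GTK+ code; as a hedged illustration only, the Python sketch below builds the class of crafted input described above (an archive member whose name escapes the extraction directory) and shows the kind of sanitization check an extractor could apply:

```python
import io
import os
import tarfile

# Build, in memory, an archive whose member name climbs out of the
# extraction directory -- the input class behind CVE-2013-4668.
payload = io.BytesIO()
with tarfile.open(fileobj=payload, mode="w") as tar:
    data = b"pwned"
    info = tarfile.TarInfo(name="../evil.txt")
    info.size = len(data)
    tar.addfile(info, io.BytesIO(data))
payload.seek(0)

def is_safe(member, dest="extract"):
    """Reject members that would resolve outside the destination directory."""
    target = os.path.realpath(os.path.join(dest, member.name))
    return target.startswith(os.path.realpath(dest) + os.sep)

with tarfile.open(fileobj=payload) as tar:
    unsafe = [m.name for m in tar.getmembers() if not is_safe(m)]
# 'unsafe' now lists the traversal entry, caught before any extraction
```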

openSUSE openSUSE-SU-2013:1281-1 file-roller 2013-07-31
Fedora FEDORA-2013-12653 file-roller 2013-07-24
Ubuntu USN-1906-1 file-roller 2013-07-16
Fedora FEDORA-2013-12667 file-roller 2013-07-16

Comments (none posted)

gallery3: information disclosure

Package(s):gallery3 CVE #(s):CVE-2013-2240 CVE-2013-2241
Created:July 16, 2013 Updated:July 17, 2013
Description: From the Fedora advisory:

A security flaw was found in the way the flowplayer SWF file-handling functionality of Gallery version 3, an open source project with the goal of developing and supporting leading photo-sharing web application solutions, processed certain URL fragments passed to that file (the fragments were not stripped properly when the file was called via direct URL requests). A remote attacker could use this flaw to conduct replay attacks.

Multiple information exposure flaws were found in the way the data rest core module of Gallery version 3 restricted access to certain items of a photo album. A remote attacker with a valid Gallery 3 account could use these flaws to obtain sensitive information (the file, resize, or thumbnail path of the item in question).

Fedora FEDORA-2013-12441 gallery3 2013-07-16
Fedora FEDORA-2013-12424 gallery3 2013-07-16
Fedora FEDORA-2013-12384 gallery3 2013-07-16

Comments (none posted)

libxml2: denial of service

Package(s):libxml2 CVE #(s):CVE-2013-2877
Created:July 15, 2013 Updated:October 14, 2013
Description: From the CVE entry:

parser.c in libxml2 before 2.9.0, as used in Google Chrome before 28.0.1500.71 and other products, allows remote attackers to cause a denial of service (out-of-bounds read) via a document that ends abruptly, related to the lack of certain checks for the XML_PARSER_EOF state.
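As a small illustration of the input class ("a document that ends abruptly"), the sketch below feeds a truncated document to Python's expat-based parser, which stands in for libxml2 here; a correct parser reports an error, whereas the libxml2 bug led to an out-of-bounds read:

```python
import xml.etree.ElementTree as ET

# A document that ends abruptly, mid-tag -- the input class behind
# CVE-2013-2877. The element names are arbitrary.
truncated = "<root><item>data</it"

try:
    ET.fromstring(truncated)
    parse_failed = False
except ET.ParseError:
    # The well-behaved response: a clean parse error, no memory misread.
    parse_failed = True
```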

Gentoo 201412-11 emul-linux-x86-baselibs 2014-12-11
Oracle ELSA-2014-1655 libxml2 2014-10-17
Oracle ELSA-2014-0513 libxml2 2014-05-19
CentOS CESA-2014:0513 libxml2 2014-05-19
Scientific Linux SLSA-2014:0513-1 libxml2 2014-05-19
Red Hat RHSA-2014:0513-01 libxml2 2014-05-19
Gentoo 201311-06 libxml2 2013-11-10
SUSE SUSE-SU-2013:1627-1 libxml2 2013-11-04
SUSE SUSE-SU-2013:1625-1 libxml2 2013-11-04
Debian DSA-2779-1 libxml2 2013-10-13
Gentoo 201309-16 chromium 2013-09-24
openSUSE openSUSE-SU-2013:1246-1 libxml2 2013-07-24
Mandriva MDVSA-2013:198 libxml2 2013-07-24
Mageia MGASA-2013-0218 libxml2 2013-07-21
openSUSE openSUSE-SU-2013:1221-1 libxml2 2013-07-19
Debian DSA-2724-1 chromium-browser 2013-07-18
Ubuntu USN-1904-2 libxml2 2013-07-17
Ubuntu USN-1904-1 libxml2 2013-07-15

Comments (none posted)

libzrtpcpp: multiple vulnerabilities

Package(s):libzrtpcpp CVE #(s):CVE-2013-2221 CVE-2013-2222 CVE-2013-2223
Created:July 16, 2013 Updated:October 29, 2013
Description: From the Red Hat bugzilla [1, 2, 3]:

A heap-based buffer overflow flaw was found in the way libzrtpcpp, a ZRTP support library for the GNU ccRTP stack, processed certain ZRTP packets (overly-large ZRTP packets of several types). A remote attacker could provide a specially-crafted ZRTP packet that, when processed in an application linked against libzrtpcpp, would cause that application to crash or, potentially, execute arbitrary code with the privileges of the user running the application. (CVE-2013-2221)

Multiple stack-based buffer overflows were found in the way libzrtpcpp processed certain ZRTP Hello packets (packets with an overly-large value in certain fields, including the count of public keys). A remote attacker could provide a specially-crafted ZRTP packet that, when processed in an application linked against libzrtpcpp, would cause that application to crash. (CVE-2013-2222)

Multiple information (heap memory content) exposure flaws were found in the way libzrtpcpp processed truncated ZRTP Ping packets. A remote attacker could provide a specially-crafted ZRTP Ping packet that, when processed in an application linked against libzrtpcpp, could reveal sensitive information stored on the heap. (CVE-2013-2223)

openSUSE openSUSE-SU-2013:1600-1 zrtpcpp 2013-10-29
openSUSE openSUSE-SU-2013:1599-1 libzrtpcpp 2013-10-29
Gentoo 201309-13 libzrtpcpp 2013-09-24
Fedora FEDORA-2013-13018 twinkle 2013-07-24
Fedora FEDORA-2013-13019 twinkle 2013-07-24
Fedora FEDORA-2013-13018 ortp 2013-07-24
Fedora FEDORA-2013-13019 ortp 2013-07-24
Fedora FEDORA-2013-13018 libzrtpcpp 2013-07-24
Fedora FEDORA-2013-13019 libzrtpcpp 2013-07-24
Fedora FEDORA-2013-12479 libzrtpcpp 2013-07-16

Comments (none posted)

java: information disclosure

Package(s):java-1.6.0-ibm CVE #(s):CVE-2013-3743
Created:July 16, 2013 Updated:July 26, 2013
Description: From the CVE entry:

Unspecified vulnerability in the Java Runtime Environment (JRE) component in Oracle Java SE 6 Update 45 and earlier and 5.0 Update 45 and earlier allows remote attackers to affect confidentiality, integrity, and availability via vectors related to AWT.

Red Hat RHSA-2014:0414-01 java-1.6.0-sun 2014-04-17
Gentoo 201401-30 oracle-jdk-bin 2014-01-26
SUSE SUSE-SU-2013:1305-1 IBM Java 1.6.0 2013-08-06
SUSE SUSE-SU-2013:1293-1 IBMJava5 JRE and IBMJava5 SDK 2013-08-02
SUSE SUSE-SU-2013:1255-3 IBM Java 1.6.0 2013-07-30
SUSE SUSE-SU-2013:1263-2 java-1_5_0-ibm 2013-07-30
SUSE SUSE-SU-2013:1255-2 java-1_6_0-ibm 2013-07-27
SUSE SUSE-SU-2013:1263-1 java-1_5_0-ibm 2013-07-27
SUSE SUSE-SU-2013:1257-1 java-1_7_0-ibm 2013-07-25
SUSE SUSE-SU-2013:1256-1 java-1_7_0-ibm 2013-07-25
SUSE SUSE-SU-2013:1255-1 java-1_6_0-ibm 2013-07-25
Ubuntu USN-1908-1 openjdk-6 2013-07-23
Red Hat RHSA-2013:1081-01 java-1.5.0-ibm 2013-07-16
Red Hat RHSA-2013:1059-01 java-1.6.0-ibm 2013-07-15

Comments (none posted)

kernel: denial of service

Package(s):kernel CVE #(s):CVE-2013-2128
Created:July 17, 2013 Updated:July 18, 2013
Description: From the CVE entry:

The tcp_read_sock function in net/ipv4/tcp.c in the Linux kernel before 2.6.34 does not properly manage skb consumption, which allows local users to cause a denial of service (system crash) via a crafted splice system call for a TCP socket.

Oracle ELSA-2013-1645 kernel 2013-11-26
Scientific Linux SL-kern-20130717 kernel 2013-07-17
Oracle ELSA-2013-1051 kernel 2013-07-16
CentOS CESA-2013:1051 kernel 2013-07-17
Red Hat RHSA-2013:1080-01 kernel 2013-07-16
Red Hat RHSA-2013:1051-01 kernel 2013-07-16

Comments (none posted)

nagstamon: information disclosure

Package(s):nagstamon CVE #(s):CVE-2013-4114
Created:July 16, 2013 Updated:January 7, 2014
Description: From the Red Hat bugzilla:

A user-credentials information exposure flaw was found in the way Nagstamon, a Nagios status monitor for the desktop, performed automated requests to check for available updates. A remote attacker could use this flaw to obtain a user's credentials for the server monitored by Nagstamon, because the credentials were only base64-encoded in the HTTP request when the HTTP Basic authentication scheme was used.
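The base64 point is worth spelling out: HTTP Basic authentication encodes, but does not encrypt, the credentials, so anyone observing the request recovers them trivially. A minimal sketch (the username and password here are made up):

```python
import base64

# An Authorization header as sent with HTTP Basic auth: "user:password",
# base64-encoded. Encoding is not encryption.
header_value = "Basic " + base64.b64encode(b"monitor:s3cret").decode("ascii")

# Any observer of a cleartext request reverses it in one line:
encoded = header_value.split(" ", 1)[1]
user, password = base64.b64decode(encoded).decode("ascii").split(":", 1)
```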

Gentoo 201401-03 nagstamon 2014-01-07
Mageia MGASA-2013-0262 nagstamon 2013-08-30
openSUSE openSUSE-SU-2013:1235-1 nagstamon 2013-07-23
Fedora FEDORA-2013-12541 nagstamon 2013-07-16
Fedora FEDORA-2013-12526 nagstamon 2013-07-16

Comments (none posted)

php: code execution

Package(s):php CVE #(s):CVE-2013-4113
Created:July 15, 2013 Updated:July 23, 2013
Description: From the Red Hat advisory:

A buffer overflow flaw was found in the way PHP parsed deeply nested XML documents. If a PHP application used the xml_parse_into_struct() function to parse untrusted XML content, an attacker able to supply specially-crafted XML could use this flaw to crash the application or, possibly, execute arbitrary code with the privileges of the user running the PHP interpreter.
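To illustrate the shape of the crafted input, the sketch below builds a deeply nested document and shows a naive depth check a caller could apply before parsing; this is not PHP's parser, and the depth, tag names, and helper are all invented for illustration:

```python
# The crafted-input class behind CVE-2013-4113: deeply nested XML.
# DEPTH is an arbitrary illustrative value.
DEPTH = 100000
bomb = "<a>" * DEPTH + "payload" + "</a>" * DEPTH

def max_nesting(xml_text):
    """Naive nesting-depth counter for simple, attribute-free markup."""
    depth = deepest = 0
    for token in xml_text.replace("<", " <").replace(">", "> ").split():
        if token.startswith("</"):
            depth -= 1
        elif token.startswith("<"):
            depth += 1
            deepest = max(deepest, depth)
    return deepest
```

A caller could reject input whose `max_nesting()` exceeds a sane bound before handing it to any parser with recursion or fixed-size buffers.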

Gentoo 201408-11 php 2014-08-29
Fedora FEDORA-2013-23215 php 2013-12-20
Oracle ELSA-2013-1307 php53 2013-10-02
SUSE SUSE-SU-2013:1351-1 PHP5 2013-08-16
SUSE SUSE-SU-2013:1317-1 PHP5 2013-08-09
SUSE SUSE-SU-2013:1285-2 PHP5 2013-08-09
SUSE SUSE-SU-2013:1316-1 PHP5 2013-08-09
SUSE SUSE-SU-2013:1315-1 PHP5 2013-08-09
SUSE SUSE-SU-2013:1285-1 PHP5 2013-08-01
openSUSE openSUSE-SU-2013:1249-1 php5 2013-07-24
Oracle ELSA-2013-1063 php 2013-07-22
Fedora FEDORA-2013-12354 php 2013-07-23
Fedora FEDORA-2013-12315 php 2013-07-23
Mageia MGASA-2013-0216 php 2013-07-18
Fedora FEDORA-2013-12977 php 2013-07-18
Debian DSA-2723-1 php5 2013-07-17
Slackware SSA:2013-197-01 php 2013-07-16
Ubuntu USN-1905-1 php5 2013-07-16
Red Hat RHSA-2013:1062-01 php53 2013-07-15
Red Hat RHSA-2013:1063-01 php 2013-07-15
Red Hat RHSA-2013:1061-01 php 2013-07-15
Scientific Linux SL-php5-20130712 php53 2013-07-12
Scientific Linux SL-php-20130712 php 2013-07-12
Oracle ELSA-2013-1050 php53 2013-07-13
Oracle ELSA-2013-1049 php 2013-07-13
Oracle ELSA-2013-1049 php 2013-07-12
Mandriva MDVSA-2013:195 php 2013-07-15
CentOS CESA-2013:1050 php53 2013-07-12
CentOS CESA-2013:1049 php 2013-07-12
CentOS CESA-2013:1049 php 2013-07-12
Red Hat RHSA-2013:1050-01 php53 2013-07-12
Red Hat RHSA-2013:1049-01 php 2013-07-12

Comments (none posted)

php5: denial of service

Package(s):php5 CVE #(s):CVE-2013-4635
Created:July 16, 2013 Updated:July 17, 2013
Description: From the CVE entry:

Integer overflow in the SdnToJewish function in jewish.c in the Calendar component in PHP before 5.3.26 and 5.4.x before 5.4.16 allows context-dependent attackers to cause a denial of service (application hang) via a large argument to the jdtojewish function.

Gentoo 201408-11 php 2014-08-29
SUSE SUSE-SU-2013:1351-1 PHP5 2013-08-16
SUSE SUSE-SU-2013:1317-1 PHP5 2013-08-09
SUSE SUSE-SU-2013:1285-2 PHP5 2013-08-09
SUSE SUSE-SU-2013:1316-1 PHP5 2013-08-09
SUSE SUSE-SU-2013:1315-1 PHP5 2013-08-09
SUSE SUSE-SU-2013:1285-1 PHP5 2013-08-01
openSUSE openSUSE-SU-2013:1249-1 php5 2013-07-24
Ubuntu USN-1905-1 php5 2013-07-16

Comments (none posted)

python-suds: symbolic link attack

Package(s):python-suds CVE #(s):CVE-2013-2217
Created:July 17, 2013 Updated:October 13, 2016
Description: From the bug report:

An insecure temporary directory use flaw was found in the way python-suds, a Python SOAP web services client library, initialized its internal file-based URL cache (a predictable location was used for the directory storing the cached files). A local attacker could use this flaw to conduct symbolic link attacks, possibly allowing them, for example, to modify the cached SOAP .wsdl metadata and redirect queries to a host other than the one originally intended.
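The general fix for this class of flaw is a fresh, unpredictable, user-private cache directory rather than a fixed shared path. A minimal sketch (the paths and prefix are illustrative, not python-suds's actual code):

```python
import os
import stat
import tempfile

# The flawed pattern: a predictable, shared location that any local user
# could pre-create or replace with a symlink (the CVE-2013-2217 scenario).
predictable = os.path.join(tempfile.gettempdir(), "suds")

# The safe pattern: a fresh directory with an unpredictable name,
# created mode 0700 so only the owning user can read or traverse it.
safe_cache = tempfile.mkdtemp(prefix="suds-")
mode = stat.S_IMODE(os.stat(safe_cache).st_mode)
```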

Ubuntu USN-2008-1 suds 2013-10-24
Mageia MGASA-2013-0224 python-suds 2013-07-21
openSUSE openSUSE-SU-2013:1208-1 python-suds 2013-07-17
openSUSE openSUSE-SU-2016:2516-1 python-suds-jurko 2016-10-12

Comments (none posted)

qpid: SSL certificate spoofing

Package(s):qpid CVE #(s):CVE-2013-1909
Created:July 12, 2013 Updated:July 17, 2013

Description: From the Red Hat advisory:

It was discovered that the Qpid Python client library for AMQP did not properly perform TLS/SSL certificate validation of the remote server's certificate, even when the 'ssl_trustfile' connection option was specified. A rogue server could use this flaw to conduct man-in-the-middle attacks, possibly leading to the disclosure of sensitive information.
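For contrast, this is what "actually validating the certificate" looks like with Python's standard ssl module (a stand-in here, not the Qpid client's code): chain verification and hostname checking both enabled, which the vulnerable client effectively skipped:

```python
import ssl

# ssl.create_default_context() enables the two checks the Qpid client
# was missing: CERT_REQUIRED (verify the chain against trusted CAs) and
# check_hostname (verify the certificate matches the server's name).
ctx = ssl.create_default_context()

verify_on = ctx.verify_mode == ssl.CERT_REQUIRED and ctx.check_hostname
```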

Red Hat RHSA-2013:1024-01 qpid 2013-07-11

Comments (none posted)

Page editor: Jake Edge

Kernel development

Brief items

Kernel release status

The current development kernel is 3.11-rc1, released on July 14. "Ignoring the lustre merge, I think this really was a somewhat calmer merge window. We had a few trees with problems, and we have an on-going debate about stable patches that was triggered largely thanks to this merge window, so now we'll have something to discuss for the kernel summit. But on the whole, I suspect we might be starting to see the traditional summer slump (Australia notwithstanding)." This release, alas, also has a new codename: "Linux for Workgroups."

Stable updates: 3.10.1, 3.9.10, 3.4.53, and 3.0.86 were all released on July 13. Greg warns that 3.9.10 may be the final release in the 3.9.x series.

Comments (none posted)

Quotes of the week

Not much can hurt us deep in our dark basements after all, except maybe earthquakes, gamma ray eruptions and Mom trying to clean up around the computers.
Ingo Molnar

I'm perfectly happy to run linux-scsi along reasonable standards of civility and try to keep the debates technical, but that's far easier to do on a low traffic list; obviously, I realise that style of argument doesn't suit everyone, so it's not a standard of behaviour I'd like to see universally imposed. In fact, I've got to say that I wouldn't like to see *any* behaviour standard imposed ... they're all basically cover for power plays (or soon get abused as power plays); the only real way to display leadership on behaviour standards is by example not by fiat.
James Bottomley

Comments (none posted)

Gettys: Traditional AQM is not enough

Here's a lengthy posting from Jim Gettys on the current state of the fight against bufferbloat and what needs to be done now. "Many have understood bufferbloat to be a problem that primarily occurs when a saturating 'elephant flow' is present on a link; it is easiest to test for bufferbloat this way, but this is not the only problem we face. The dominant application, the World Wide Web, is anti-social to any other application on the Internet, and its collateral damage is severe. Solving the latency problem, therefore, requires a two prong attack."

Comments (28 posted)

Kernel development news

The 3.11 merge window closes

By Jonathan Corbet
July 16, 2013
[New logo] Linus announced the release of 3.11-rc1 — and the closing of the 3.11 merge window — on July 14. While the merge window was open, 9,494 non-merge changesets were pulled into the mainline kernel repository. The last of those changes switched the kernel's codename to "Linux for Workgroups" and modified the boot-time logo; the new version appears to the right. Clearly, Linux development has moved into a new era.

Of those 9,494 changes, 1,219 were pulled since last week's summary. User-visible changes in that final batch of patches include:

  • The new O_TMPFILE ABI has changed slightly in response to concerns expressed by Linus. In short, open() ignores unknown flags, so software using O_TMPFILE on older kernels has no way of knowing that it is not, in fact, getting the expected temporary file semantics. Following a suggestion from Rasmus Villemoes, Al Viro changed the user-space view of O_TMPFILE to include the O_DIRECTORY and O_RDWR bits — a combination that always results in an error on previous kernels. So applications should always get an error if they attempt to use O_TMPFILE on a kernel that does not support that option.

  • The zswap compressed swap cache has been merged into the mainline. The changes to make the memory allocation layer modular, called for at this year's Storage, Filesystem, and Memory Management Summit, appear not to have been made, though.

  • The "blk-throttle" I/O bandwidth controller now properly supports control group hierarchies — but only if the non-default "sane_behavior" flag is set.

  • The "dm-switch" device mapper target maps I/O requests to a set of underlying devices. It is intended for situations where the mapping is more complicated than can be expressed with a simple target like "stripe"; see Documentation/device-mapper/switch.txt for more information.

  • New hardware support includes:

    • Systems and processors: ARM System I/O memory management units (hopefully pointing to an era where ARM processors ship with a standard IOMMU) and Broadcom BCM3368 Cable Modem SoCs.

    • InfiniBand: Mellanox Connect-IB PCI Express host channel adapters.

    • Miscellaneous: Intel's "Rapid Start Technology" suspend-to-disk mechanism and Intel x86 package thermal sensors (see Documentation/thermal/x86_pkg_temperature_thermal for more information).

    • Video4Linux: OKI Semiconductor ML86V7667 video decoders, Texas Instruments THS8200 video encoders, and Fushicai USBTV007-based video capture devices.

    • Watchdog: Broadcom BCM2835 hardware watchdogs and MEN A21 VME CPU carrier board watchdog timers.

    • Staging graduations: TI OMAP thermal management subsystems.
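The flag-encoding trick in the O_TMPFILE item above can be observed from user space. A minimal sketch (Linux-only; os.O_TMPFILE requires Python 3.4+ on Linux, and the check is simply skipped elsewhere):

```python
import os

# O_TMPFILE's user-space value deliberately includes bits (O_DIRECTORY
# among them) whose combination fails on kernels without O_TMPFILE
# support, rather than being silently ignored by open().
if hasattr(os, "O_TMPFILE"):
    overlaps_directory = bool(os.O_TMPFILE & os.O_DIRECTORY)
else:
    overlaps_directory = None  # non-Linux platform; nothing to check
```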

Changes visible to kernel developers include:

  • Module loading behavior has been changed slightly in that the load will no longer fail in the presence of unknown module parameters. Instead, such parameters will be ignored after a log message is issued. This change allows system configurations to continue working after a module parameter is removed or when an older kernel is booted.

  • The MIPS architecture now supports building with -fstack-protector buffer overflow detection.

Recent development cycles have lasted for about 70 days (though 3.10, at 63 days, was significantly shorter). If that pattern holds for this cycle, the 3.11 kernel can be expected around September 9.

Comments (44 posted)

Some stable tree grumbles

By Jonathan Corbet
July 17, 2013
In the dim and distant past (March 2005), the kernel developers were having a wide-ranging discussion about various perceived problems with the kernel development process, one of which was the inability to get fixes for stable kernel releases out to users. Linus suggested that a separate tree for fixes could be maintained if a suitable "sucker" could be found to manage it, but, he predicted, said sucker would "go crazy in a couple of weeks" and quit. As it turns out, Linus had not counted on just how stubborn Greg Kroah-Hartman can be; Greg (along with Chris Wright at the time) stepped forward and volunteered to maintain this tree, and he has continued to maintain the stable trees ever since. Recently, though, he has expressed some frustrations about how the process is working.

In particular, the announcement of the review stage for the 3.10.1 release included a strongly-worded complaint about how subsystem maintainers are managing patches for the stable tree. He called out two behaviors that he would like to see changed:

  • Some patches are being marked for stable releases that clearly do not belong there. Cosmetic changes to debug messages were called out as an example of this type of problem.

  • More importantly: a lot of the patches marked as being for the stable tree go into the mainline during the merge window. In many cases, that means that the subsystem maintainer held onto the patches for some time — months, perhaps — rather than pushing them to Linus for a later -rc release. If the patches are important enough to go into the stable tree, Greg asked, why are they not going to Linus immediately?
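For context, a patch is marked for the stable tree with a commit-message trailer. The subject, author, and changelog below are invented, but the trailer format — including the optional version hint bounding how far back to backport — is the documented convention:

```
From: A. Developer <dev@example.org>
Subject: [PATCH] frobnicator: fix use-after-free in teardown path

...changelog describing the bug and the fix...

Cc: stable@vger.kernel.org # 3.9+
Signed-off-by: A. Developer <dev@example.org>
```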

Starting with the second complaint above, the explanation appears to be relatively straightforward: getting Greg to accept changes for the stable tree is rather easier than getting Linus to accept them outside of the merge window. In theory, the rules for inclusion into the stable tree are the same as for getting patches into the mainline late in the cycle: the patches in question must fix some sort of "critical" problem. In practice, Linus and Greg are at least perceived to interpret the rules differently. So developers, perhaps unwilling to risk provoking an outburst from Linus, will simply hold fixes until the next merge window comes around. As James Bottomley put it:

You mean we delay fixes to the merge window (tagged for stable) because we can't get them into Linus' tree at -rc5 on? Guilty ... that's because the friction for getting stuff in rises. It's a big fight to get something marginal in after -rc5 ... it's easy to silently tag it for stable.

Greg's plan for improving things involves watching linux-next starting around the -rc4 mainline release. If patches marked for the stable series start appearing in linux-next, he'll ask the maintainers why those patches have not yet found their way to Linus. Some of those patches may well find themselves refused entry into the stable tree if they only show up in the mainline during the merge window.

The topic of fully inappropriate patches, while the lesser part of Greg's complaint, became the larger part of the discussion. There are, it seems, any number of reasons for patches to be directed at the stable tree even if they are not stable material. At one extreme, Ben Herrenschmidt's description of how the need to get code into enterprise kernels drives the development process is well worth reading. For most other cases, though, the causes are probably more straightforward.

For years, people worried that important fixes were being overlooked and not getting into the stable updates; that led to pressure on developers to mark appropriate patches for the stable tree. This campaign has been quite successful, to the point that now, often, developers add a stable tag to a patch that fixes a bug as a matter of reflex. Subsystem maintainers are supposed to review such tags as part of their review of the patch as a whole, but that review may not always happen — or those maintainers may agree that a patch should go into the stable tree, even if it doesn't adhere to the rules. And sometimes subsystem maintainers can't remove the tag even if they want to. All this led James to propose doing away with the stable tag altogether:

The real root cause of the problem is that the cc: stable tag can't be stripped once it's in the tree, so maintainers only get to police things they put in the tree. Stuff they pull from others is already tagged and that tag can't be changed. This effectively pushes the problem out to the lowest (and possibly more inexperienced) leaves of the Maintainer tree.

James (along with others) proposes that putting a patch into the stable tree should require an explicit action on the subsystem maintainer's part. But Greg dislikes this idea, noting that maintainers are already far too busy. The whole point of the stable tree process is to make things as easy for everybody else as possible; adding work for maintainers would endanger the success of the whole exercise. That is especially true, he said, because some developers might encounter resistance from their employers:

And that annoys the hell out of some Linux companies who feel that the stable kernels compete with them. So people working for those companies might not get as much help with doing any additional work for stable kernel releases (this is not just idle gossip, I've heard it directly from management's mouths.)

Another proponent of explicit maintainer involvement is Jiri Kosina, who, in his work with SUSE's kernels, has encountered a few problems with stable kernels. While the stable tree is highly valuable to him, some of the patches in it cause regressions, some are just useless, and, for some, there is no real indication of why the patches are in the stable tree in the first place. Forcing maintainers to explicitly nominate and justify patches for the stable tree would, he said, address all three types of problem.

The first type — patches that introduce bugs of their own — will probably never be eliminated entirely; that is just how software development works. Everybody in the discussion has acknowledged that, once a buggy fix is identified, Greg quickly makes a stable release with that patch removed, so regressions tend not to stay around for long. Useless patches include those that are backported to kernels that predate the original bug; this problem could be addressed by placing more information in the changelog describing when the bug was introduced. The final type of problem raised by Jiri — mysterious patches — turned out to be security fixes. Jiri (and others) would like security fixes marked as such in the changelog, but that is unlikely to happen; instead, more effort is being made to notify distributors of security fixes via private channels.

In other words, while changes are likely to be made, they will not be fundamental in nature. Greg is likely to become fussier about the patches he accepts for the stable tree. Chances are, though, that he will never be as hard to please as Linus in this regard. In the end, the consumers of the stable tree — distributors and users both — want fixes to be included there. The stable kernel series is one of the biggest successes of the kernel development process; any changes to how they are created are likely to be relatively small and subtle. For most of us, the fixes will continue to flow as usual.

Comments (5 posted)

On kernel mailing list behavior

By Jonathan Corbet
July 17, 2013
As has been widely reported, the topic of conduct on kernel-related mailing lists has, itself, been the topic of a heated discussion on the linux-kernel mailing list. While numerous development communities have established codes of conduct over the years, the kernel has never followed suit. Might that situation be about to change? Your editor will attempt a factual description of the discussion, followed by some analysis.

What was said

The setting was an extensive discussion on policies for the management of the stable kernel series and, in particular, the selection of patches for stable updates. It was an interesting discussion in its own right (which will be covered here separately), and it was generally polite. Even so, there came a point where Sarah Sharp couldn't take it anymore:

Seriously, guys? Is this what we need in order to get improve -stable? Linus Torvalds is advocating for physical intimidation and violence. Ingo Molnar and Linus are advocating for verbal abuse.

Not *fucking* cool. Violence, whether it be physical intimidation, verbal threats or verbal abuse is not acceptable. Keep it professional on the mailing lists.

For the record, she was responding to this note from Linus:

Greg, the reason you get a lot of stable patches seems to be that you make it easy to act as a door-mat. Clearly at least some people say "I know this patch isn't important enough to send to Linus, but I know Greg will silently accept it after the fact, so I'll just wait and mark it for stable".

You may need to learn to shout at people.

Ingo's contribution was:

So Greg, if you want it all to change, create some _real_ threat: be frank with contributors and sometimes swear a bit. That will cut your mailqueue in half, promise!

Whether these messages constitute "advocating for physical intimidation and violence" or even "advocating for verbal abuse" will be left for the reader to decide. But Sarah's point was clearly not that these specific messages were out of line; she is concerned with the environment on the linux-kernel mailing list in general. She has since taken the discussion to other forums (with more examples) and, in general, seems intent on changing the nature of the community's discourse.

Needless to say, responses on the list were mixed, though they were generally polite and restrained. A number of people, Linus included, pointed out that the number of personal attacks on the list is actually quite small, and that Linus tends to reserve his strongest language for high-level maintainers who (1) are able to take it, and (2) "should know better" than to do whatever it was that set Linus off. Opinions differ on whether that is a good thing. Jens Axboe said:

I've been flamed plenty in the past, and it's been deserved (most of the time). Perhaps I have a thick skull and/or skin, but it doesn't really bother me. Or perhaps I'm just too much of an old kernel fart these days, so I grew accustomed to it. As long as I don't have to see Linus in his bathrobe, then that's enough "professionalism" for me.

On the other hand, Neil Brown echoed the feelings of a number of participants who worry that the tone of the discussion tends to discourage people from joining the community: "He is scolding senior developers in front of newcomers. That is not likely to encourage people to want to become senior developers." Being flamed can be hard on the recipient, but it can also affect the community by deterring other developers from participating.

For his part, Linus has made it clear that he feels little need to change his tone on the list:

The fact is, people need to know what my position on things are. And I can't just say "please don't do that", because people won't listen. I say "On the internet, nobody can hear you being subtle", and I mean it.

And I definitely am not willing to string people along, either. I've had that happen too - not telling people clearly enough that I don't like their approach, they go on to re-architect something, and get really upset when I am then not willing to take their work.

Sarah responded that one can be clear without being abusive; she also suggested that Linus use his power directly (by threatening not to pull patches from the offending maintainer) rather than using strong words. For what it's worth, Linus did acknowledge, later in the discussion, that one of his more famous rants was "Not my proudest moment."

Unsurprisingly, there were few concrete outcomes from the discussion (which is still in progress as of this writing). Sarah has called for the creation of a document (written by "a trusted third party") describing acceptable conduct in the kernel community. There will almost certainly be a Kernel Summit discussion on this topic; as Linus pointed out, this kind of process-oriented discussion is the reason why the Kernel Summit exists in the first place.

Some analysis

There are, it seems, some simple statements that should not be overly controversial in the context of a discussion like this. Most people prefer an environment where people are pleasant to one another to an environment where people are harsh or abusive. An abusive community can certainly deter some potential contributors from joining; consider, for example, whether OpenBSD might have more developers if its communications were more congenial. Various development communities have set out to improve the quality of their communications, sometimes with clear success.

How do these thoughts apply in the kernel context?

It is worth pointing out that this is not the first time people have expressed concerns about how the kernel community works; it was, for example, a topic of discussion at the 2007 Kernel Summit. Numerous developers have pushed for improvements in how kernel people communicate; these efforts have happened both publicly and in private. Even Linus has said, at times, that he wished the discussion on linux-kernel were more constructive.

Your editor will assert that, in fact, the situation has improved considerably over the years. Much of that improvement is certainly due to the above-mentioned efforts. Abusive personalities have been confronted, managers have occasionally been contacted, trolls have been ignored, and more. The improvement is also certainly a result of changes in the kernel development community. We are as a whole older (and thus more restrained); the community is also much more widely paid to do its work, with the result that image-conscious companies have an incentive to step in when their developers go overboard. The tone is far more "professional," and true personal attacks are rare (though examples can certainly be found if one looks).

Over the years, the kernel development community has continued to grow. One might argue that it would have grown much more rapidly with a different culture in its mailing lists, but that is hard to verify. It is true, though, that much of that growth has come from parts of the world where people are said to be especially sensitive to direct criticism. For all its troubles, the kernel community is still sufficiently approachable that over 3,000 people per year are able to get their work reviewed and merged.

That said, the kernel is still viewed as one of the harshest communities in the free software world. It seems fairly clear that the tone of the discussion could bear some improvement, and that the current state of affairs repels some people who could otherwise be valuable contributors. So efforts like Sarah's to make things better should be welcomed; they deserve full consideration on the part of the community's leaders. But such efforts will be working against some constraints that make this kind of social engineering harder.

One of them is that the kernel absolutely depends on the community's unwillingness to accept substandard code. The kernel has to work in a huge variety of settings for an unbelievable number of use cases. It must integrate the work of thousands of developers and grow rapidly while staying maintainable over the long term. It is a rare software project indeed that has attained the size of the kernel and sustained its rate of change without collapsing under its own weight. If we want to still have a viable kernel a decade from now, we must pay close attention to the code that we merge now.

So it must be possible for developers to speak out against code that they see as being unsuitable for merging into the kernel. And the sad fact is that, sometimes, this message must be conveyed forcefully. Some developers are either unwilling to listen or they fail to receive the full message; as Rusty Russell put it:

You have to be harsh with code: People mistake politeness for uncertainty. Whenever I said 'I prefer if you XYZ' some proportion didn't realize I meant 'Don't argue unless you have new facts: do XYZ or go away.' This wastes my time, so I started being explicit.

The size of the community, the fact that some developers are unwilling to toss aside code they have put a lot of time into, and pressure from employers can all lead to a refusal to hear the message and, as a consequence, the need to be explicit. Any attempt to make it harder for developers to express their thoughts on the code could damage the community and, more to the point, is almost certain to fail.

That said, Rusty concluded the above message with this advice: "But be gentle with people. You've already called their baby ugly." There are certainly times when the community could be gentler with people without compromising on their code. That, of course, is exactly what people like Sarah are asking for.

Whether a documented code of conduct would push things in that direction is hard to say, though. Simply obtaining a consensus on the contents of such a document is likely to be a difficult process, though the discussion itself could be helpful in its ability to produce counterexamples. But, even if such a document were to be created, it would run a real risk of languishing under Documentation/ unheeded. Communities that have tried to establish codes of conduct have also typically included enforcement mechanisms in the mix. Groups like Fedora's "hall monitors" or Gentoo's "proctors" typically have the ability to ban users from lists and IRC channels when abuses are seen. Mozilla's community participation guidelines describe a number of escalation mechanisms. It is not at all clear that the kernel is amenable to any such enforcement mechanism, and, indeed, Sarah does not call for one; instead, she suggests:

Some people won't agree with everything in that document. The point is, they don't have to agree. They can read the document, figure out what the community expects, and figure out whether they can modify their behavior to match. If they are unwilling to change, they simply don't have to work with the developers who have signed it.

It is far from clear, though, that a document calling for any sort of substantive change would acquire signatures from a critical mass of kernel developers, or that developers who are unwilling to sign the document would be willing (or able) to avoid dealings with those who have.

So proponents of more polite discourse on linux-kernel are almost certainly left with tools like calling out undesirable behavior and leading by example — precisely the methods that have been applied thus far. Those methods have proved frustratingly slow at best, but, helped by the overall changes in the development community, they have been effective. It was probably about time for another campaign for more civility to push the community subtly in the right direction. Previous efforts have managed to make things better without wrecking the community's ability to function efficiently; indeed, we have only gotten better at kernel development over time. With luck and some support from the community, we should see similar results this time.

Comments (245 posted)

Page editor: Jonathan Corbet


Fedora wrestles with ARM as a primary architecture

By Nathan Willis
July 17, 2013

There is no denying the rise in the popularity of the ARM architecture, but how exactly the Linux community responds to its popularity is a more complicated question. Case in point: right now, the Fedora project is engaged in a lengthy debate about the recent suggestion that ARM be promoted to the status of primary architecture (PA). The key point of disagreement is not the importance of ARM, but whether Fedora's existing ARM porting team needs to produce a release equivalent to the x86 releases before being declared a PA—and, if so, precisely what constitutes equivalence.

Jaroslav Reznik proposed the promotion as a change for the Fedora 20 (F20) development cycle. In reply, Miloslav Trmač asked how many F19 packages are currently missing on the ARM platform, either because they fail to build or have been removed. He cited Fedora's guidelines for promoting a secondary architecture to PA status, which lists eleven criteria for promotion. Some of the criteria are technical requirements, such as the use of the Anaconda installer "where technically possible," while others deal more with the project infrastructure or developer-power, such as requiring all builds to occur on Fedora-maintained build servers and requiring "sufficient developer resources" to fix architecture-specific bugs.

ARM holes

In the subsequent discussion thread, a number of packages were brought up that currently do not build, most notably the GNOME Shell desktop environment and the stack protector. Specifically, GNOME Shell does not work on ARM because there are no open source video drivers for the target hardware devices, and the LLVM-based software renderer is broken. Supporters of the promotion argued both that non-GNOME desktops are supported, and that binary video drivers are available for users who want GNOME Shell specifically.

Strictly speaking, the PA promotion requirements state that requiring binary-only drivers is not allowed. But that leads to another question: whether providing support for GNOME Shell is required of all PAs. Matthew Garrett argued that the assumption has always been that all PAs "embody the same level of functionality, with the exception of fundamental differences between the architectures," down to the package level. But Peter Robinson took issue with Garrett's assumptions, noting that:

I don't necessarily agree that while the gnome desktop is the default that it's an explicit requirement. There's 4 million XOs shipping Fedora (both x86 and ARM) that don't ship with gnome3 as well as no doubt millions of instances of cloud images that don't have a requirement of a desktop yet we still call them Fedora...

But the lack of GNOME Shell support has another dimension, which is the scarcity of developer-power for Fedora's ARM team. Elsewhere in the discussion, Garrett had also contended that LLVM support on ARM has been broken for months, but that no one has fixed it. Similarly, the stack protector has been broken for some length of time, and that impacts a security feature, which if anything makes it more important than any one desktop environment. But Jonathan Masters countered that the stack-protector issue was fixed within a day after it was raised; the team simply did not know about it before it came up in the promotion discussion.

Developer time is not a simple quantity to be measured, though. As Adam Jackson weighed in on the topic of fixing LLVM, "fixing" software rendering is still a bit of a band-aid solution to the more difficult underlying issue that no one seems to be addressing:

If we really wanted to talk about graphics on arm, we'd be talking about writing drivers for GPUs. You know, fixing actual problems, instead of throwing our hands in the air and switching out the entire UX because we can't be bothered to make the core OS any good.

There were still other practical objections raised to the PA promotion change, including the speed of builds, which Aleksandar Kurtakov estimated to be about ten times slower than on the x86 architectures. The speed issue has at least two negative effects; first, as Caolán McNamara observed, it changes build workflow from a "start today, get results later" model to "start today, get results tomorrow." Second, there may need to be changes made to the Fedora build servers themselves, as they currently have a hard-coded 24-hour time limit for each build. For large ARM packages, that may be insufficient.

Speaking of Fedora hardware infrastructure, Till Maas questioned whether there will be test instances of ARM machines available for package maintainers, while others raised the same question about Fedora's QA team.

I don't think it means what you think it means

Naturally, whenever the topic of ARM hardware comes up, it quickly becomes apparent that different parties have significantly different devices in mind. Bastien Nocera asked what the focus of the ARM port is, specifically whether it is aimed just at development boards like the BeagleBone and PandaBoard. While fun to play with, those have questionable value as a "primary" system.

Adam Williamson pointed out that the vast array of ARM hardware on the market poses practical problems as well: what sort of images would Fedora actually be releasing? Potentially there would need to be separate builds for a range of different devices and System-on-Chip (SoC) boards, which would result in a very different "deliverable" than the unified x86 image. Also, he said, PA status would officially place the burden of testing all of the different ARM images on the already-busy QA team, "but we are not miracle workers, and we cannot test what we don't have: so we'd either need to buy a bunch of test devices or rely on people who already have an interest in using ARM and some ARM devices."

There was never a consensus reached on the target hardware question (although ARM-powered Chromebooks seemed to be the most-asked-for device). The discussion of "deliverables" ultimately circled back around to the earlier question of what packages the ARM port needed to provide to meet the criteria staked out for promotion to PA. On that point, there still seems to be little in the way of agreement. Josh Boyer, for instance, opined that the criterion ought to be support for one of the "release-blocking desktops" set out in the distribution release criteria; requiring all desktop environments would be overkill.

Brendan Conoboy then asked how headless ARM servers could ever be acceptable as a PA if the criteria specify that KDE and GNOME (currently the only release-blocking desktops) are required. By the same token, server and cloud images would not be acceptable either.

Garrett replied that a release which supported only headless servers would simply not be Fedora, a position that elicited strong reaction from others. Conoboy called it an "all or nothing" stance that serves to discourage further contribution and hurt Fedora's growth. "Maybe your Fedora means desktop OS, but my Fedora has more facets than that."

Several people weighed in that the common public perception that Fedora is strictly defined as a GNOME-based desktop OS is largely the result of Fedora's history of marketing that particular use case. Jiri Eischmann argued that ideally the project would present people with a range of options (e.g., desktop, server, cloud, etc.), and support whichever choice they make.

But redefining what Fedora is will clearly not be an overnight process. In the shorter term, the project does seem to be gearing up to re-evaluate the PA promotion guidelines. Despite all of the specific questions and objections raised about the ARM port's current status, the underlying issue comes down to whether PA status means recognition that the architecture has achieved parity with the existing PAs, or approval from the project to use the same resources as the other PAs—from build servers to the QA team's time.

Garrett advocates staunchly for the former interpretation of PA status, saying "You don't get to be a primary architecture until you've demonstrated that doing so won't slow down the other architectures, and that requires you to fix all of these problems yourself first." Conoboy, on the other hand, wants PA status in order to improve the ARM port: "The ARM team isn't asking for a blessing, we're asking to have builds that block ARM also block x86. At a technical level, that is a fundamental part of what being primary is."

In between lies quite a bit of middle ground. Nottingham complained that the F19 ARM release advertised a number of features (such as support for each of the major desktop environments) that were simply missing. But Toshio Kuratomi and others contended that the guidelines published were not intended to be blockers, but guides. Ultimately, as Williamson pointed out, the project can change its release criteria and its PA promotion guidelines to fit what the community wants—it just needs to decide first whether ARM is important enough to warrant that change.

Fedora has considered promoting ARM to PA status in the past. At that time, Garrett posted a draft list of requirements for a secondary architecture to qualify for promotion; the current guidelines on the wiki are an expansion of that document. The ARM port has made considerable progress in the intervening time—a fact which Conoboy called attention to on a number of specific points. Still, as of now, no architecture has ever been promoted from secondary to PA; if ARM does so it will be breaking new ground. Gaining PA status would no doubt lead to improved testing and support, among other benefits, by allowing the ARM team to offload some QA and other support work to Fedora's main teams and concentrate on architecture-specific issues.

For the time being, however, not much changes for the Fedora user interested in the ARM platform. As Daniel Berrange said, speaking as such a potential user, people who want to use Fedora on ARM are going to do so—regardless of whether it is branded a primary or a secondary architecture.

Comments (none posted)

Brief items

Distribution quote of the week

Basically, Debian is seriously behind the curve on this, but we're so used to our familiar, comfortable problems that we don't necessarily see the amount of pain that we're enduring that we don't have to endure. As Charles noted elsewhere on this thread, once you start looking at and maintaining either systemd or upstart configurations instead of init scripts, you realize what sort of rock you were beating your head against and how nice it feels for the pain to stop.
-- Russ Allbery

Comments (4 posted)

20 years of Slackware

An interesting anniversary has just quietly slipped by: Slackware 1.0 was released on July 16, 1993. Twenty years later, Slackware is quiet but far from dormant. Congratulations are due to what must certainly be the oldest still-maintained Linux distribution.

Comments (21 posted)

Fedora 19 for IBM System z 64bit official release

Fedora 19 for IBM System z has been released. For more information see the architecture specific release notes.

Full Story (comments: none)

New Wayland Live CDs (with Wayland 1.2)

A new ISO of RebeccaBlackOS, a live CD featuring Wayland and Weston, is available with Wayland and Weston 1.2.

Full Story (comments: none)

Newsletters and articles of interest


New Debian leader seeks more innovation within project (ITWire)

ITWire has an interview with Lucas Nussbaum, the new Debian Project Leader. "I see Debian as a two-sided project. On one side, there's a technical project aiming at building an Operating System, and doing that rather successfully. And on the other side, there's a political project, that puts Free Software very high on its priority list. This duality is quite unique: there are many successful technical projects that tend to not care very much about the political aspects, as well as some political projects that prefer to ignore the reality checks that we do on a regular basis."

Comments (31 posted)

Page editor: Rebecca Sobol


Unit testing with mock objects in C

July 17, 2013

This article was contributed by Andreas Schneider and Jakub Hrozek

In software development, unit testing has become a standard part of many projects. Projects often have a set of tests to check some of the functionality of the source code. However, if there are parts which are difficult to test, then most unit testing frameworks in C don't offer an adequate solution. One example might be a program that communicates over a network. The unit tests should exercise not only the network facing components, but should also be able to be executed in environments that intentionally have no networking (such as build systems like Koji or the openSUSE Build Service).

Using a unit-test library with support for mock objects helps in testing situations like the one described above. The CMocka unit-testing framework for C is one such framework. We will show how it can be used to add mock objects for testing your C programs; hopefully that will lead to more use of mock objects by various projects.


Consider a set of unit tests for the following system, which was taken from a Stack Overflow answer (with permission from the author):

You're implementing a model of a restaurant and have several functions in your restaurant representing smaller units, like chef, waiter, and customer. The customer orders a dish from the waiter, which the chef will cook and send (via the waiter) back to the customer.

It is generally easy to envision testing a low-level component like the "chef". In that case, you create a test driver that exercises the chef. One test in the test suite could order different dishes and verify that the chef behaves correctly and returns the dish ordered. The test driver would also try to order dishes which are not on the menu to check that the chef complains about the order.

Testing a component which is not a leaf but is in the middle of the hierarchy (like the waiter in our example) is much harder. The waiter is influenced by other components and to verify its correct behavior we need to test it in isolation and make sure the results are not tainted by bugs in other parts of the program.

One way might be to test the waiter the same way the chef was tested. The test driver would again order dishes and make sure the waiter returns the correct dishes. But the test of the waiter component may be dependent on the correct behavior of the chef component. This dependency can be problematic if the chef component has a lot of test-unfriendly characteristics. It is possible that the chef isn't able to cook a dish because of missing ingredients (resources), he can't cook because his tools are not working (dependencies), or he has surprise orders (unexpected behavior).

But, as this is the waiter test, we want to test the waiter and not the chef. We want to make sure that the waiter delivers an order correctly to the chef and returns the ordered dish to the customer correctly. The test might also include a negative test — that the waiter is able to handle a wrong dish handed from the kitchen. In the real world, simulating failures can often be difficult.

Unit testing provides better results when testing different components independently, so the correct approach is to isolate the component or unit you want to test (the waiter in this case). The test driver should be able to create a "test double" (like a stunt double of an actor in a movie) of the chef and control it. It tells the chef what it expects it to return to the waiter after ordering a dish. This is the functionality that is provided by "mock" objects.

A large part of unit testing focuses on behavior, such as how the waiter component interacts with the chef component. A mock-based approach focuses on fully specifying what the correct interaction is and detecting when the object stops interacting the way it should. The mock object knows in advance what is supposed to happen during the test (which functions to call) and it knows how to react (which value it should return). These can be simply described as the behavior and state.

A custom mock object could be developed for the expected behavior of each test case, but a mocking framework strives to allow such a behavior specification to be clearly and easily indicated directly in the test case. The conversation surrounding a mock-based test might look like this:

  • test driver -> mock chef: expect a hot dog order and give him this dummy hot dog in response
  • test driver (posing as customer) -> waiter: I would like a hot dog please
  • waiter -> mock chef: 1 hamburger please
  • mock chef stops the test: I was told to expect a hot dog order!
  • test driver notes the problem: TEST FAILED! — the waiter changed the order

CMocka — an overview

One of the principles of CMocka is that a test application should require only the standard C library and CMocka itself, to minimize conflicts with system headers, especially on a variety of different platforms. CMocka is the successor of cmockery, which was developed by Google but has been unmaintained for some time; cmockery was therefore forked, and CMocka will be maintained going forward.

CMocka is released under the Apache License Version 2.0. Currently, it is used by various Free Software projects such as the System Security Services Daemon (SSSD) from the FreeIPA project, csync, a user-level bidirectional file synchronizer, libssh, and elasto, a cloud storage client, which can talk to Azure and Amazon S3.

This article focuses on features that are unique to CMocka when compared to other unit testing frameworks. This includes mock objects and their usage, but it should be noted that CMocka also supports most of the features one would expect from any useful unit-testing framework, such as test fixtures or test states. Test fixtures are setup and teardown functions that can be shared across multiple test cases to provide common functions to prepare the test environment and destroy it afterward. With our kitchen example, the fixtures might make sure the kitchen is ready before orders are taken from the waiter and cleaned up after the cooking has finished. Test states are used to provide private data which is passed around as a "state" of the unit test. For instance, if the kitchen initialization function returned a pointer to a "kitchen context", the state might contain a pointer to this kitchen context.

Users may want to refer to the CMocka documentation, where the common concepts are well explained and are accompanied by code examples.

How mock objects work in CMocka

As described in the example above, there are usually two parts in testing how an interface under test behaves with respect to other objects or interfaces we are mocking. The first is checking the input to see if the interface under test communicates with the other interfaces correctly. The second is returning pre-programmed output values and return codes in order to test how the interface under test handles both success and failure cases.

Using the waiter/chef interaction described earlier, we can consider a simple waiter function that takes an order from a customer, passes the order to the kitchen, and then checks if the dish received from the kitchen matches the order:

    /* Waiter return codes:
     * 0  - success
     * -1 - preparing the dish failed in the kitchen
     * -2 - the kitchen succeeded, but cooked a different dish
     */
    int waiter_process_order(char *order, char **dish)
    {
        int rv;

        rv = chef_cook(order, dish);
        if (rv != 0) {
            fprintf(stderr, "Chef couldn't cook %s: %s\n",
                            order, chef_strerror(rv));
            return -1;
        }

        /* Check if we received the dish we wanted from the kitchen */
        if (strcmp(order, *dish) != 0) {
            /* Do not give wrong food to the customer */
            *dish = NULL;
            return -2;
        }

        return 0;
    }
Because it's the waiter interface that we are testing, we want to simulate the chef with a mock object for both positive and negative tests. In other words, we would like to keep only a single instance of a chef_cook() function, but pre-program it depending on the kind of test. This is where the mocking capability of the CMocka library comes into play. Our mock function will be named __wrap_chef_cook() and will replace the original chef_cook() function. The name __wrap_chef_cook() was not chosen arbitrarily; as seen below, a linker flag makes it easy to "wrap" calls when named that way.

In order to fake the different results, CMocka provides two macros:

  • will_return(function, value) — This macro adds (i.e. enqueues) a value to the queue of mock values. It is intended to be used by the unit test itself, while programming the behavior of the mocked object. In our example, we will use the will_return() macro to instruct the chef to succeed, fail, or even cook a different dish than he was ordered to.

  • mock() — The macro dequeues a value from the queue of test values. The user of the mock() macro is the mocked object that uses it to learn how it should behave.

Because will_return() and mock() are intended to be used in pairs, the CMocka library will consider the test to have failed if there are more values enqueued using will_return() than are consumed with mock() and vice-versa.

The following unit-test stub illustrates how a unit test would instruct the mocked object __wrap_chef_cook() to return a particular dish by adding the dish to be returned, as well as the return value, onto the queue. The function names used in the example correspond to those in the full example from the CMocka source:

    void test_order_hotdog(void)
    {
        will_return(__wrap_chef_cook, "hotdog");
        will_return(__wrap_chef_cook, 0);
    }

Now the __wrap_chef_cook() function would be able to use these values when called (instead of chef_cook()) from the waiter_process_order() interface that is under test. The mocked __wrap_chef_cook() would dequeue the values using mock() and return them to the waiter:

    int __wrap_chef_cook(const char *order, char **dish_out)
    {
        *dish_out = (char *) mock();  /* dequeue the dish enqueued by the test */
        return (int) mock();          /* dequeue the return code */
    }

The same facility is available for parameter checking. There is a set of macros to enqueue variables, such as expect_string(). This macro adds a string to the queue that will then be consumed by check_expected(), which is called in the mocked function. There are several expect_*() macros that can be used to perform different kinds of checks such as checking whether a value falls into some expected range, is part of an expected set, or matches a value directly.

The following test stub illustrates how to do this in a new test. First is the function we call in the test driver:

    void test_order_hotdog(void **state)
    {
        /* We expect the chef to receive an order for a hotdog */
        expect_string(__wrap_chef_cook, order, "hotdog");
        ...
    }

Now the mocked __wrap_chef_cook() function can check whether the parameter it received is the one expected by the test driver. This can be done in the following way:

    int __wrap_chef_cook(const char *order, char **dish_out)
    {
        check_expected(order);
        ...
    }

A CMocka example — chef returning a bad dish

This chef/waiter example is actually part of the CMocka source code. Let's illustrate CMocka's capabilities with the part of the example that tests whether the waiter can handle the chef returning a different dish than the one ordered. The test begins by enqueueing two boolean values and a string using the will_return() macro. The booleans tell the mock chef how to behave; the chef will retrieve them using the mock() call. The first tells it whether the ordered item is a valid item from the menu, while the second tells it whether it has the ingredients necessary to cook the order. These booleans allow the mock chef to be used to test the waiter's error handling. The final queued item is the dish that the chef should return.

    void test_bad_dish(void **state)
    {
        will_return(__wrap_chef_cook, true);     /* Knows how to cook the dish */
        will_return(__wrap_chef_cook, true);     /* Has the ingredients */
        will_return(__wrap_chef_cook, "burger"); /* Will cook a burger */
        ...
    }

Next, it's time to call the interface under test, the waiter, which will then call the mocked chef. In this test case, the waiter places an order for a "hotdog". As the interface specification described, the waiter must be able to detect when a bad dish was received and return an error code in that case. Also, no dish must be returned to the customer.

    void test_bad_dish(void **state)
    {
        int rv;
        char *dish = NULL;
        ...
        rv = waiter_process("hotdog", &dish);
        assert_int_equal(rv, -2);
        assert_null(dish);
    }

So the test driver programs the mock chef to "successfully" return a burger when it receives an order from the waiter, no matter what the order actually is. CMocka invokes the waiter, which calls the chef asking for a "hotdog". The chef dutifully returns a "burger", so the waiter should return -2 and no dish. If it does, the test passes; otherwise it fails.

The full example, along with other test cases that use the chef/waiter analogy, can be found in the CMocka repository.

Case study — testing the NSS responder in the SSSD

SSSD is a daemon that provides identity and authentication services for accounts stored in a remote server, using protocols like LDAP, IPA, or Active Directory. Since SSSD communicates with a server over a network, testing the complete functionality is not trivial, especially considering that the tests must run in limited environments such as build systems, which are often just minimal virtual machines or chroots. This section describes how the SSSD uses CMocka for unit tests that simulate fetching accounts from remote servers.

SSSD consists of multiple processes which can be described as "front ends" and "back ends" respectively. The front ends interface with the Linux system libraries (mostly glibc and PAM), while the back ends download the data from the remote server for the front ends to process and return back to the system.

Essentially, the SSSD front end processes requests from the system for account information. If the data is available and valid in its cache, it returns the cached data to the requester. Otherwise, it requests the information via the back end; that information is then placed in the cache and the front end is notified. If the information can neither be found in the cache nor retrieved, an empty response is returned.

With traditional unit testing libraries, it's quite easy to test the sequence where valid data is present in the cache. Using stub functions simulating communication with the back end, it's also possible to test the sequence where the back end is asked for an account that does not exist. However, some scenarios are quite difficult to test, such as when the cache contains valid-but-expired data. In that case, the back end is supposed to refresh the cache with current data and return the data that was just fetched from the remote server.

SSSD uses the CMocka library to simulate behavior such as that described above. In particular, there is a unit test that exercises the functionality of the NSS responder. It simulates updating the cache with results obtained from the network by creating a mock object in place of the back end. The mock object injects data into the cache to simulate the lookup. The test driver, which simulates the system library that wants the account information, then receives the data that was injected.

After this unit test has finished, the test driver asserts that no data was present in the cache before the test started, and that the test returned seemingly valid data as if it were retrieved from some kind of remote server. A very similar test simulates the case where the cache contains some data when the test starts, but that data is no longer valid. There, the test driver asserts that different (updated) data is returned after the test finishes.

The complete unit test can be found in the SSSD project repository.

Using CMocka with ld wrapper support

CMocka has most of the features a standard unit-testing framework offers but, in addition, supports mock objects. As CMocka is a framework for C, mock objects normally replace functions: you have the actual implementation of a function and you want to replace it with your mock function. Consider the situation where a library contains an initialization function (in our example, let's call it chef_init()) and some worker function, such as chef_cook() in the example above. You can't simply mock one and keep using the other original function, because the same symbol name can't be defined twice. There needs to be a way to trick the toolchain into using our mock worker function while keeping the original initialization function.

The GNU linker has the ability to define a wrapper function and to call this wrapper instead of the original function (the gold linker supports this feature too). This allows us to replace the actual implementation of a function with a mock object in our test code.

Keeping our chef example in mind, let's override the chef_cook() function. First, we need to define the wrapper. The wrapper is always named __wrap_symbol(), so our mock function will be named __wrap_chef_cook(). That's a simple search-and-replace in the code, but keep in mind that the will_return() macros that define what the mock() calls return must also change their first argument to name the wrapper.

The second step is actually telling the linker to call __wrap_chef_cook() whenever the program would call chef_cook(). This is done by using the --wrap linker option which takes the name of the wrapped function as an argument. If the test was compiled using gcc, the invocation might look like:

    $ gcc -g -Wl,--wrap=chef_cook waiter_test.c chef.c

Another nice feature of the wrap trick is that the original function can still be called from the wrapper: just call the symbol named __real_symbol(). In our case, the wrapper could call the original function as __real_chef_cook(). This is useful for keeping track of when a particular function was called, or for performing some kind of bookkeeping during the test.

You can refer to the GNU binutils documentation for more information on the --wrap feature. A fully working implementation of the chef example using CMocka can be found in the CMocka repository.


Using mock objects improves testing efficiency tremendously, which in turn increases code quality. The authors hope that this article encourages readers to start using mock objects in their unit tests.

[Andreas Schneider and Jakub Hrozek are both Senior Software Engineers working at Red Hat. Jakub works on FreeIPA and SSSD and Andreas on Samba.]

Comments (2 posted)

Brief items

Quote of the week

Perhaps you're under the misapprehension that --force refers to the magical energy field that permeates all living things and surrounds us and penetrates us and binds the galaxy together and makes some people good with lightsabers. This is actually uppercase --Force.

Sadly, --force refers to the much more mundane 'force' in the sense of 'it was stuck for some reason, so I tried to force it, then it broke and I cut myself in the process, and now I feel like an idiot and have no one to blame but myself.'

Matt Mackall

Comments (none posted)

A new beta release of the Opus audio codec

A beta of version 1.1 of the Opus audio codec has been released. Xiph's Monty Montgomery introduces the update on his blog, noting: "This will be the first major update to libopus since standardization as RFC 6716 in 2012, and includes improvements to performance, encoding quality, and the library APIs." New demos are also linked from the blog post to showcase the audible improvements.

Comments (4 posted)

Wayland and Weston 1.2.0 released

Version 1.2.0 of the Wayland/Weston display server and compositor implementation has been released. New features abound; they include a stable Wayland server API, integrated color management, a new subsurface protocol, improved thread safety, multi-seat support, and more.

Full Story (comments: 7)

Boehm: Qt Project and Defensive Publications

At his blog, Mirko Boehm has a report from the Qt Contributor Summit at Akademy about the Qt project's new initiative to publish "defensive publications"—public documentation of new inventions intended to serve as proof of prior art against patent claims by others. The Open Invention Network is set to provide support and mentoring.

Comments (1 posted)

RProtoBuf 0.3 available

A new release of RProtoBuf is available. The package provides R bindings for Google's Protocol Buffers data encoding library. This release adds support for extensions, among other changes.

Comments (none posted)

Newsletters and articles

Development newsletters from the past week

Comments (none posted)

Crawford: Why mobile web apps are slow

On his blog, Drew Crawford analyzes the performance of mobile web apps to determine why they are slow compared to native apps, and what the future holds for their performance as CPU and JavaScript runtime speeds increase. Short summary of a long article: he is not optimistic that performance will improve significantly any time soon for a number of reasons. "Of the people who actually do relevant work: the view that JS in particular, or dynamic languages in general, will catch up with C, is very much the minority view. There are a few stragglers here and there, and there is also no real consensus what to do about it, or if anything should be done about it at all. But as to the question of whether, from a language perspective, in general, the JITs will catch up–the answer from the people working on them is 'no, not without changing either the language or the APIs.'" (Thanks to Sebastian Kügler.)

Comments (85 posted) the decentralized social network that's really fun ( talks with developer Evan Prodromou. "I've talked to developers over and over who are looking for a scalable open source server for their mobile social networking app. Developers who are good at iOS or Android development want to concentrate on making that front-end excellent—not on building yet another Like API. Although there are other open source social network programs, like StatusNet, most of them focus on the Web interface and leave the API as an afterthought. is first and foremost an API server. It has a default Web UI if you want to turn it on, but you can turn it off and just use the API server by itself." (LWN covered last March.) (Thanks to Bryan Behrenshausen.)

Comments (1 posted)

GitHub on choosing a license (InfoWorld)

At InfoWorld, Simon Phipps has posted an examination of the new software-license-selection microsite recently unveiled by GitHub. Phipps praises the move as a step forward for GitHub, although he notes that the hosting site still sports many publicly-visible repositories with no license at all.

Comments (none posted)

Page editor: Nathan Willis


Brief items

Seventy videos from Linaro Connect Europe 2013 has posted links to videos of seventy sessions from the recently concluded Linaro Connect event in Dublin. "The sessions spanned a wide range of topics, including Android, Builds and Baselines, Enterprise, Graphics and Multimedia, Linux Kernel, Network, Project Management Tools, Training, and more."

Comments (none posted)

FSF, other groups join EFF to sue NSA over unconstitutional surveillance

The Free Software Foundation has joined the Electronic Frontier Foundation and others in challenging the US National Security Agency's (NSA) mass surveillance of telecommunications in the United States. "The suit, *First Unitarian Church of Los Angeles v. NSA*, argues that such government surveillance of political organizations discourages citizens from contacting those organizations and therefore chills the free association and speech guaranteed by the First Amendment. The EFF will represent the politically diverse group of plaintiffs, which in addition to the FSF, includes Greenpeace, the California Guns Association, the National Organization for the Normalization of Marijuana Laws, and People for the American Way."

Full Story (comments: none)

Articles of interest

FSF: Cancel Netflix if you value freedom

The Free Software Foundation warns against Encrypted Media Extensions. "For the last few months, we've been raising an outcry against Encrypted Media Extensions (EME), a plan by Netflix and a block of other media and software companies to squeeze support for Digital Restrictions Management (DRM) into the HTML standard, the core language of the Worldwide Web. The HTML standard is set by the World Wide Web Consortium (W3C), which this block of corporations has been heavily lobbying as of late."

Full Story (comments: none)

FSFE: Open Letter on transparency to President of the European Parliament

The Free Software Foundation Europe and the Open Rights Group have sent an open letter to the President of the European Parliament. "In their letter, the civil society groups are offering Mr Schulz their help in this effort. They are also suggesting a number of questions that should be considered in the report on transparency, such as: if the Parliament is held to a standard of "utmost transparency", would it be obliged to make public the source code of the software it uses?"

Full Story (comments: none)

Calls for Presentations

LPC: Call for discussion topics and BoFs

Linux Plumbers Conference will take place September 18-20, 2013 in New Orleans, Louisiana. The call for discussion topics and BoFs is open.

Comments (none posted)

HOST: Call for Applications

The Homeland Open Security Technology (HOST) project has opened a call for investment applications that support open source software to improve cybersecurity. HOST will accept applications until August 14, 2013. "HOST seeks proposals that align with its current mission to investigate open security methods, models and technologies. A case study will be conducted to collect best practices and lessons learned from each investment, with a primary goal to disseminate and share knowledge and experiences with the greater open source community and cybersecurity community."

Comments (none posted)

CFP Deadlines: July 18, 2013 to September 16, 2013

The following listing of CFP deadlines is taken from the CFP Calendar.

Deadline      Event Dates             Event                                      Location
July 19       October 23-25           Linux Kernel Summit 2013                   Edinburgh, UK
July 20       January 6-10                                                       Perth, Australia
July 21       October 21-23           KVM Forum                                  Edinburgh, UK
July 21       October 21-23           LinuxCon Europe 2013                       Edinburgh, UK
July 21       October 19              Central PA Open Source Conference          Lancaster, PA, USA
July 22       September 19-20         Open Source Software for Business          Prato, Italy
July 25       October 22-23           GStreamer Conference                       Edinburgh, UK
July 28       October 17-20           PyCon PL                                   Szczyrk, Poland
July 29       October 28-31           15th Real Time Linux Workshop              Lugano, Switzerland
July 29       October 29-November 1   PostgreSQL Conference Europe 2013          Dublin, Ireland
July 31       November 5-8            OpenStack Summit                           Hong Kong, Hong Kong
July 31       October 24-25           Automotive Linux Summit Fall 2013          Edinburgh, UK
August 7      September 12-14         SmartDevCon                                Katowice, Poland
August 15     August 22-25            GNU Hackers Meeting 2013                   Paris, France
August 18     October 19              Hong Kong Open Source Conference 2013      Hong Kong, China
August 19     September 20-22         PyCon UK 2013                              Coventry, UK
August 21     October 23              TracingSummit2013                          Edinburgh, UK
August 22     September 25-27         LibreOffice Conference 2013                Milan, Italy
August 30     October 24-25           Xen Project Developer Summit               Edinburgh, UK
August 31     October 26-27           T-DOSE Conference 2013                     Eindhoven, Netherlands
August 31     September 24-25         Kernel Recipes 2013                        Paris, France
September 1   November 18-21          2013 Linux Symposium                       Ottawa, Canada
September 6   October 4-5             Open Source Developers Conference France   Paris, France
September 15  November 8              PGConf.DE 2013                             Oberhausen, Germany
September 15  December 27-30          30th Chaos Communication Congress          Hamburg, Germany
September 15  November 15-16          Linux Informationstage Oldenburg           Oldenburg, Germany
September 15  October 3-4             PyConZA 2013                               Cape Town, South Africa
September 15  November 22-24          Python Conference Spain 2013               Madrid, Spain
September 15  February 1-2            FOSDEM 2014                                Brussels, Belgium
September 15  April 11-13             PyCon 2014                                 Montreal, Canada

If the CFP deadline for your event does not appear here, please tell us about it.

Upcoming Events

Events: July 18, 2013 to September 16, 2013

The following event listing is taken from the Calendar.

Date(s)                Event                                      Location
July 13-19             Akademy 2013                               Bilbao, Spain
July 18-22             openSUSE Conference 2013                   Thessaloniki, Greece
July 22-26             OSCON 2013                                 Portland, OR, USA
July 27-28             PyOhio 2013                                Columbus, OH, USA
July 27                OpenShift Origin Community Day             Mountain View, CA, USA
July 31-August 4       OHM2013: Observe Hack Make                 Geestmerambacht, the Netherlands
August 1-8             GUADEC 2013                                Brno, Czech Republic
August 3-4             COSCUP 2013                                Taipei, Taiwan
August 6-8             Military Open Source Summit                Charleston, SC, USA
August 7-11            Wikimania                                  Hong Kong, China
August 9-11            XDA:DevCon 2013                            Miami, FL, USA
August 9-12            Flock - Fedora Contributor Conference      Charleston, SC, USA
August 9-13            PyCon Canada                               Toronto, Canada
August 11-18           DebConf13                                  Vaumarcus, Switzerland
August 12-14           YAPC::Europe 2013 “Future Perl”            Kiev, Ukraine
August 16-18           PyTexas 2013                               College Station, TX, USA
August 22-25           GNU Hackers Meeting 2013                   Paris, France
August 23-24           Barcamp GR                                 Grand Rapids, MI, USA
August 24-25           Free and Open Source Software Conference   St. Augustin, Germany
August 30-September 1  Pycon India 2013                           Bangalore, India
September 3-5          GanetiCon                                  Athens, Greece
September 6-8          Kiwi PyCon 2013                            Auckland, New Zealand
September 6-8          State Of The Map 2013                      Birmingham, UK
September 10-11        Malaysia Open Source Conference 2013       Kuala Lumpur, Malaysia
September 12-14        SmartDevCon                                Katowice, Poland
September 13           CentOS Dojo and Community Day              London, UK

If your event does not appear here, please tell us about it.

Page editor: Rebecca Sobol

Copyright © 2013, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds