
LWN.net Weekly Edition for July 12, 2012

A casualty in the patent wars

By Jonathan Corbet
July 11, 2012
Software patents have long been a concern in the free software development community. For many years, though, that concern was of a theoretical nature; few patents had actually been used (in a public way, at least) to attack projects of interest. The mobile patent wars have changed that situation; now systems based on free software are on the front line in a highly visible legal battle. As a result, we are starting to feel the sting of software patents; the situation is likely to get worse before it gets better.

In late June, a US District Court granted a request by Apple to ban the sale of the Galaxy Nexus smartphone in the US due to the phone's alleged infringements of Apple's patents; the phone was then duly pulled from the Google store. That ban has since been lifted, but that should not be seen as a victory against software patents; indeed, the contrary is true. The only reason the Galaxy Nexus is available again is Google's short-term capitulation; the company has simply removed the offending features from the "Jelly Bean" Android release. Google's claim that the patents were no longer at issue was enough to get the handset back on the market—for now.

What are those features? The biggest fight seems to be over patent #6,847,959, the so-called "Siri patent." This patent, filed in 2000, has the following as its first independent claim:

A method for locating information in a computer system, comprising the steps of:

inputting an information identifier;

providing said information identifier to a plurality of plug-in modules each using a different heuristic to locate information which matches said identifier;

providing at least one candidate item of information from said modules; and

displaying a representation of said candidate item of information.

There are 17 dependent claims specifying that the "information identifier" may come from a dialog box or through voice input; the "heuristics" can involve searches on file names, file contents, local files, web pages, and so on. They narrow the scope of the patent, but do not change its fundamental nature.

Even thinking back to the year 2000, it is hard to find a great deal of novelty in this concept. If one wants to search for something, one likely wants to search all of the available resources. If one wants to search multiple locations or with multiple algorithms, one creates an API by which independent search modules can be invoked. The method described here is obvious; it should come to the mind of any developer skilled in the art of software development. But this is the valuable innovation that allowed Apple to block the sales of a competing product in one of the largest markets on the planet.
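
For a sense of just how little is being claimed, here is a deliberately naive sketch of the claimed method in shell form, where each "plug-in module" is simply an executable implementing one search heuristic. The plugin directory and names are invented for illustration; nothing here corresponds to any actual product:

    #!/bin/sh
    # search.sh: hand an "information identifier" to a plurality of
    # plug-in modules, each implementing a different search heuristic,
    # then display the candidate items they return.
    query="$1"
    for plugin in /usr/lib/search-plugins/*; do      # hypothetical path
        [ -x "$plugin" ] || continue
        "$plugin" "$query"   # e.g. file-name search, content grep, web query
    done | sort -u           # display each candidate item once

A developer asked to search several back ends at once could hardly avoid arriving at something of this shape.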

Google's response has been to cripple the functionality of its Android search bar, which will be restricted to searching the net only. Anybody running the Jelly Bean release will see that restricted functionality; it will also be pushed out to 4.0-based ("Ice Cream Sandwich") devices as an "update." And that is how things are likely to stand until the case runs its course, a process that could take years.

So, to put it bluntly: software patents have allowed a manufacturer of highly closed devices to hold one of the most open handsets available hostage and to block it from the market entirely. They have allowed said corporation to force the removal of obvious functionality from a device (mostly) based on free software. To think that this kind of thing won't happen again, or that it won't strike code that is more interesting to the free software community, is to be optimistic indeed. That does not seem to be the way the wind is blowing.

It would be nice to think that, somehow, the software patent problem will be solved in the near future. There are occasional encouraging signs, such as US appeals court judge Richard Posner tossing out another Apple case and speaking out against software patents. But actual attempts to reform the patent system never seem to get that far.

What seems more likely is that the major players in the mobile industry will eventually come together around some sort of patent pool that lets them get on with their businesses. Perhaps this will be a voluntary action, or perhaps there will be a certain amount of governmental pressure applied first. Either way, the end result is likely to be a regime in which the established players are free to get on with the process of making money while new companies, like the just-announced Jolla Ltd, face additional barriers to entry. Such a situation is not likely to be good for the industry or for free software.

But, then, one never knows. As bogus software patents threaten to take down products and services that people actually care about, we may yet see an increase in support for reforms. Perhaps the best strategy against software patents is the one we are already executing: make the best free system we can and ensure that it is widely diffused into systems that the world depends on. As patent litigation increasingly turns into a general denial of service attack against the economy as a whole, tolerance for the system may wane. One can always hope, anyway.

Comments (25 posted)

The future of Thunderbird

By Nathan Willis
July 11, 2012

Mozilla surprised Thunderbird fans on July 6 when it announced that it was pulling developers from the project. Mozilla says it will continue to test, patch, and maintain future releases — including stability and security fixes — while letting community members guide development of new features. But that promise did not prevent a slew of headlines reporting that the email client was being put out to pasture. A number of Mozilla developers have subsequently commented on the decision, helping to clarify the outlook for the future somewhat, if not completely.

Mozilla chief Mitchell Baker posted the announcement on her blog, starting with the question "is Thunderbird a likely source of innovation and of leadership in today’s Internet life? Or is Thunderbird already pretty much what its users want and mostly needs some on-going maintenance?" The answer from Mozilla's upper echelons, evidently, is that the desktop email client is essentially feature-complete, and not likely to experience further innovations. Consequently, Mozilla as a whole is better off directing its engineering resources to its current "priority" projects.

Baker's post was interpreted by many to mean that Mozilla was halting development on Thunderbird, perhaps offloading control of the project to the open source community or otherwise attempting to get rid of the project without saying that it was getting rid of the project. Thunderbird would hardly be the first open source project to suffer such a fate, so a pessimistic take on the announcement is understandable. But the details that have emerged since the announcement paint a different picture.

Details, details

On July 7, Jb Piacentino posted an announcement to the tb-planning mailing list which covered the same ground as Baker's post. In it, he assured readers that the move was not the cessation of Thunderbird development:

We're not "stopping" Thunderbird, but proposing we adapt the Thunderbird release and governance model in a way that allows both ongoing security and stability maintenance, as well as community-driven innovation and development for the product.

Thunderbird developer Ludovic Hirlimann said on his blog that Thunderbird 14, 15, and 16 would all be released before the new plan takes effect, and that the new model's practical effect would be that "we won’t have the time to work on specking, developing and testing new features," although the team would still participate in the development process.

Details about the plan are described on the Mozilla wiki. The plan draws a distinction between the normal Thunderbird and the extended support release (ESR) version. Mozilla will focus on the Thunderbird ESR releases and associated security updates, while allowing other contributors to work on the standard Thunderbird trunk. Mozilla will continue to provide the testing and release infrastructure, and Mozilla staffers will serve as the release team. But the Mozilla staffers will not be tasked with introducing new features. ESR releases are guaranteed to receive security updates for one year, with updates rolled out on the same six-week schedule as Firefox ESR.

Despite Piacentino's reassurances and the wiki's lengthier explanation, some on the list still interpreted the news in starkly different terms. For example, while Ben Bucksch took it to mean an end-of-life announcement, Charles Tanstaafl read the announcement to mean that Mozilla employees would "focus on stability and fixing many of the long standing bugs".

Others wanted more specifics on the new process. Kai Engert asked whether the arrangement meant that Thunderbird releases would be kept in sync with Firefox on shared components (including Gecko):

The one thing I'm worried about is regressions.

Firefox and Thunderbird share application level code that is responsible for the correct functioning of security protocols.

If a change is made because it's needed by Firefox, it's easy to forget that Thunderbird may rely on the previous behaviour, and the change might cause a regression in functionality/usability/correctness/completeness for Thunderbird.

This has happened in the past. If Thunderbird becomes even less of a priority for the Mozilla project, with even fewer people available to work on cleanup and adjustments to newer Gecko core, then there's the risk that such regressions might occur more frequently in the future.

Concerns were also raised about the fate of in-progress development work (such as the long-awaited rewrite of Thunderbird's address book) and about whether the outside community would be able to mentor Google Summer of Code (GSoC) projects, which have been a dependable source of new code in the past. The community has indeed played a major part in recent innovations, including the new "conversations" view extension, MIME handling, and the recent removal of RDF as a dependency. Mozilla's Mark Banner replied that Thunderbird's annual ESR releases would synchronize with the then-current Firefox release (including any Gecko updates), but that the intervening six-week security update releases would not roll in recent changes. The bulk of the in-progress projects are slated to be completed before the new process begins, he added. Finally, he pointed out that Thunderbird community members had mentored past GSoC projects, so the process change should not interfere.

Email versus the web

Several Mozilla staffers commented about the announcement in blog posts of their own. Thunderbird developer Joshua Cranmer observed:

Thunderbird has not been a priority for Mozilla since before I started working on it. There really isn't any coordination in mozilla-central to make sure that any planned "featurectomies" don't impact Thunderbird—we typically get the same notice that add-on authors get, despite being arguably the largest binary user of the codebase outside of mozilla-central. Given also that the Fennec and B2G codebases were subsequently merged into mozilla-central (one of the arguments I heard about the Fennec merge was that "it's too difficult to maintain the project outside of mozilla-central") and that comm-central remains separate, it should be quickly clear how much apathy for Thunderbird existed prior to this announcement.

Cranmer did not bemoan this situation, however. He saw it as natural considering the growth of mobile email, and because "Mozilla's primary goal is to promote the Open Web." The assertion that the web — but not email — is Mozilla's central mission was also touched on in official channels. The wiki page states that the priority projects getting Mozilla's attention are "important web and mobile" efforts, "while Thunderbird remains a pure desktop only email client." Baker's blog post similarly noted that the project has "seen the rising popularity of Web-based forms of communications representing email alternatives to a desktop solution."

But Bucksch took issue with that notion in considerable detail, observing that if Thunderbird is losing out to web-based email, that constitutes a loss, because "Webmail is definitely not open. You're totally dependent on the features and limitations the provider offers [...] Privacy goes out the door with webmail. Even integrity: The ISP can even alter the message contents years after the fact, and I have no way to verify or prove this."

Mozilla's stated mission is "to promote openness, innovation and opportunity on the web," but Bucksch points out that its manifesto stakes out considerably broader principles about the openness of the Internet as a whole. Side-stepping for the moment why the organization has a separate "mission" statement and "manifesto" at all (much less inconsistent ones), the point is well-taken. If Thunderbird has failed to grab a majority of the world's email client share, what users are left with are proprietary OS-vendor clients on the desktop, or proprietary software services on the web. Mozilla Labs briefly toyed with a webmail client called Raindrop, but shuttered it before it left the experimental phase.

Perhaps competition from webmail clients is a side issue, and Mozilla is primarily readying itself to make a greater play for what it sees as the new email battleground on mobile devices, with its Boot-to-Gecko effort (which was recently renamed Firefox OS). Andrew Sutherland, a developer on Mozilla's forthcoming Firefox OS email client, told the tb-planning list that he and other team members were list subscribers, and were at least open to the possibility of collaborating with the Thunderbird community on compatibility features.

Despite the doomsday predictions that leaked out following the initial announcement, Mozilla's plans indicate that it is committed to testing and releasing Thunderbird for at least the next year or so (depending on the final release date of Thunderbird ESR 17). The distant future is less clear, but that could be said of many other projects. Anyone who doubts the ability of the Mozilla volunteer community to maintain a product needs only to look at SeaMonkey, which continues to live on long after Mozilla lost interest. Still, Mozilla's second-class treatment of its email client is troubling for other reasons. Email itself may be relatively static, but IM, VoIP, and other communication methods are coming and going all the time, and Mozilla has not offered a consistent client story for them. If Firefox is Mozilla's only product, users' hope for an open web boils down to "hopefully the service providers will write open source web apps for foo" — which seems like a long shot.

Comments (10 posted)

Akademy: The Qt Project and KDE

By Jake Edge
July 11, 2012

The Qt Project was launched in October 2011 to foster the open development of the Qt toolkit. Qt is the underlying framework used by KDE, of course, so Akademy attendees are understandably interested in the status and progress of the Qt Project. Thiago Macieira provided that update in a surprisingly well-attended keynote—surprising because it was early on the day after a social event that stretched into the wee hours.

The Qt Project

[Thiago Macieira]

The Qt Project is based on four principles, Macieira said: fairness, inclusiveness, transparency, and meritocracy. Fairness means that the project is open to everyone, while inclusiveness means that people can just start participating as there are no barriers in place or fees required. Transparency covers the decision-making process, which is completely open. Discussions happen on the mailing list, whose participants ultimately make any decisions. When discussion takes place elsewhere, it needs to be posted to the list for others to review and comment on. Finally, a meritocracy means that contributors who have "shown their skills and dedication" are given commit rights, and are the ones who get to make the decisions for the project. That way, the most experienced people get the most deciding power, he said.

The project has seen quite a bit of activity in the eight months it has existed. Over that period, there have been 18,000 commits to the code base, with an average of 412 per week. There were some dips in the graph that he showed, for Christmas, Easter, Norwegian national day, and one that he called "Elop". The latter correlated with Nokia CEO Stephen Elop's statements that have caused concern about the future of Qt within the company. When an audience member suggested banning Christmas to increase productivity, Macieira chuckled and said that banning Elop would be more effective.

Commit numbers do not show the whole story, though. To get a sense for the community that is coming together around the project, Macieira also looked at how many different people are contributing. Over the eight months, 481 different email addresses were used for contributions—averaging 140 different email addresses per week. There are some people using more than one address, of course, but those numbers give an idea of the number of contributors to the project.

Macieira put up a flowchart describing how to contribute to the project. It looked relatively complex, with lots of "ladders and snakes", but contributing is actually fairly straightforward, he said. He pointed to the project wiki for information on what is needed. Code contributions are managed with the project's Gerrit code review instance. Using the dashboard in that tool, one can see the status of current code reviews, look at comments that have been made on the code, see the diffs for the changes, and so on.
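
In practice, a contribution follows the standard Gerrit workflow. The sketch below assumes the project's codereview.qt-project.org instance and uses qtbase and "user" purely as placeholders; the wiki has the authoritative steps:

    # Install Gerrit's commit-msg hook so each commit carries a Change-Id.
    scp -p -P 29418 user@codereview.qt-project.org:hooks/commit-msg .git/hooks/

    # Commit locally, then push to Gerrit's magic refs/for/<branch>,
    # which opens a review rather than committing directly.
    git push ssh://user@codereview.qt-project.org:29418/qt/qtbase \
        HEAD:refs/for/master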

Code that has passed review from both humans and a bot that checks for problems can then be staged from the Gerrit system. That moves the code into an integration phase where it is merged into the mainline, compiled, and tested. Two and a half hours later, contributors will get an email with the results of the integration. It is "very simple", he said, and all of the "eleven steps" from the flowchart "boil down" to this process.

KDE and Qt

Qt and KDE are "greater together", Macieira said, and he would like to see the two communities merge into one large community. Qt provides the libraries and framework, while KDE is building applications on top of that. If KDE needs new features, they can be put into Qt as there is now a "really nice way to make that happen". In the past, there had been obstacles to moving functionality into Qt, but those are gone.

He gave several examples of people working on moving KDE functionality into Qt. The KMimeType class has recently been added to Qt as QMimeType, which was essentially just moving and renaming the code. Other KDE classes have required adaptations prior to moving into Qt, including KStandardDirs and KTempDir. David Faure has been doing much of that work, but he is not alone: John Layt has been working on moving the KDE printing subsystem into Qt, while Richard Moore has been adding some of the KDE encryption code (e.g. SSL sockets) to Qt.

Those are just three of the KDE developers who have started working in the Qt upstream, Macieira said. There is a lot more code that could make the move, including things like KIO (KDE I/O), the KDE command-line parser, and KDebug.

Beyond just code contributions, the project can use help in lots of areas. One can be an advocate and help spread the word about the Qt Project. Reporting bugs (and helping to fix some of them) is another area. Documentation, translations, artwork, and so on also need people to work on them; there is "a lot that you can do", Macieira said.

Developers can also start reviewing the code that is being proposed. It is easy to get started with the code review system after creating an account. What is really needed is "more people", and KDE is an obvious source for some of those people. The two projects should "work more closely to solve our objectives together", he concluded.

Asked about the filtering capabilities in the code review system to find patches of interest for review, Macieira admitted that the search and filtering functionality could use some work. There are ways to watch specific projects by a regular expression match, but overall it lacks some features that would be useful.

Another audience member asked about the statistics and, in particular, whether they could be broken down by whether contributors consider themselves KDE developers. That is difficult to do, Macieira said, because people wear multiple hats, but there would definitely be value in doing so.

The last suggestion was for a joint KDE/Qt conference. Macieira agreed that there would be "a lot of value in getting the two communities together", but that it wouldn't work for this year. He would like to see that happen next year, perhaps as part of the Qt Contributors Summit. The summit is a hands-on working conference, without presentations like Akademy has, he said, so a separate event might be the right way to go.

After years of trying to turn Qt into a more open project with a community orientation, it is nice to see that effort start to come to fruition. Given the uncertainty in the future of Qt at Nokia, having an independent project, with contributions from others outside of the company, will help to ensure the future of the toolkit. Since KDE is one of the bigger users of Qt, it only makes sense for the two projects to work closely together—exactly what Macieira was advocating.

[ The author would like to thank KDE e.V. for travel assistance to Tallinn for Akademy. ]

Comments (3 posted)

Page editor: Jonathan Corbet

Security

Cyberoam deep packet inspection and certificates

By Nathan Willis
July 11, 2012

The Tor Project recently discovered a security flaw in a line of commercial deep packet inspection (DPI) products, a flaw that an attacker could use to intercept the SSL connections of third-party users. The manufacturer of the product quickly pushed out an update, but DPI products from other vendors may still be affected.

Runa Sandvik wrote about the discovery on the Tor blog in a post dated July 3. According to that account, the discovery took place the week before, when a Tor user in Jordan contacted the project to report seeing a fake certificate for torproject.org, issued by Cyberoam. The project initially thought that a certificate authority (CA) might have been compromised (as in the DigiNotar incident in 2011), but upon investigation, that was not the case.

Cyberoam is a network security vendor that sells DPI devices. The user in Jordan was not witnessing an attack, but rather the evidence that the SSL connection to Tor had been intercepted by a Cyberoam DPI device. It is not clear from Sandvik's post whether the user was behind a corporate Cyberoam barrier (in which case he or she may have explicitly or implicitly agreed to the DPI interception) or was being monitored unwillingly. Whatever the circumstances, though, it was not the DPI filtering that constituted the security vulnerability.

CA certificate woes and default settings

Cyberoam's devices monitor SSL connections without generating browser errors by having the user install a Cyberoam CA certificate into the browser's trusted certificate store. Subsequently, the device intercepts SSL connections, issuing generated certificates (signed by the now-trusted Cyberoam CA certificate) for the requested sites and establishing the server-side connection on the other side of the intercept. This had happened to the user in Jordan, which was why he or she saw a Cyberoam-issued certificate for the torproject.org domain. The problem is that all Cyberoam devices ship with identical CA certificates and identical private keys. Consequently, Sandvik wrote, anyone can use a Cyberoam device to intercept traffic on any other Cyberoam-filtered network, or even extract the key and install it on other devices and use those to intercept traffic. In either case, the users would not detect that their traffic was being monitored by someone other than the approved authority.
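
This kind of interception is visible to anyone who inspects the certificate chain, which is presumably how the user in Jordan spotted it. A quick check from the command line might look like the following (a minimal sketch, assuming a stock openssl installation):

    # Show who actually issued the certificate presented for a site.
    # Behind an intercepting device, the issuer names the device's CA
    # (Cyberoam, in this case) rather than the expected public CA.
    openssl s_client -connect www.torproject.org:443 \
        -servername www.torproject.org </dev/null 2>/dev/null |
        openssl x509 -noout -subject -issuer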

Sandvik and Ben Laurie wrote a security advisory (CVE-2012-3372) about the issue and notified Cyberoam before publishing the blog post. Cyberoam wrote a post on its own blog detailing its response to the alert. According to Cyberoam's account, each affected device is capable of generating its own CA certificate, and the certificate shared by all devices was merely a "default." After Tor's alert, the company pushed out an update to all of its devices instructing administrators to generate a unique CA certificate, and it "forcefully generated unique keys for all the remaining appliances." The wording in that section is a bit ambiguous, but it appears that device administrators were encouraged to generate a new CA certificate locally, and those that did not do so quickly were updated to a unique certificate generated at Cyberoam, with further instructions on local key generation. Cyberoam (admirably) thanked Tor for the vulnerability report, and also said that the CA certificate update makes its DPI products significantly more secure than its competitors'.

For the update to take effect, users on a Cyberoam-monitored network will need to import the newly-generated CA certificate into their browser's trusted certificate store. Whether they do so willingly depends on whether they have consented to be monitored by the device. For its part, Tor expresses little sympathy for organizations using DPI to intercept users' connections, repeatedly calling them "victims" in the security alert text, and adding the footnote: "In the corporate setting, willing victims are often known as 'employees'. Unwilling victims should not, of course, install the CA certificate, nor should they click through certificate warnings." On the other hand, the alert calls Cyberoam's approach "the only legitimate way to use these devices," in contrast with monitoring schemes that require persuading a CA to issue fake certificates.

Security impact

Without knowing the details of the user who reported the issue initially, it is impossible to say whether or not Cyberoam devices are being used to monitor Internet users without their consent. There are legitimate uses for DPI, of course, such as protecting a corporate network. But the report hints at a different scenario, because the user reported that common web sites (such as Gmail and Twitter) showed the correct CA certificates — suggesting that torproject.org was being selectively targeted for interception.

In comments on the Tor blog, some readers questioned whether the case was one of consenting monitoring done at an employer, or unwilling surveillance. One anonymous reader commented that the Cyberoam devices in question only intercept SSL connections to check for malware, and that Tor had "raised a non-existing vulnerability." Sandvik replied that there were two separate issues at play: the use of the device to intercept torproject.org traffic, and the fact that all Cyberoam devices shipped with the same CA certificate.

Cyberoam's response should settle the second issue, assuming that future devices ship only with unique keys. A master CA certificate controlled by Cyberoam could still be used to sign the individual device keys; in that case, Cyberoam would sign a separate "intermediate" CA certificate for each individual device. Users would then only need to install the original Cyberoam certificate in the browser's trust store, and the certificate trust chain would still validate. If device administrators later have to generate a new CA certificate for a device, they must have Cyberoam sign it, but they do not have to have every user install something new.

But as Sandvik asked in the comment thread, "How can you be sure that the device being used in this case is not doing DPI, but 'just' HTTPS scanning for antivirus?" Implicitly, the answer is "you can't," which is one of the fundamental justifications for Tor and similar privacy-protecting projects. How the user in Jordan came to be behind a Cyberoam scanning device is unknown, but if he or she agreed to have web traffic monitored, particularly to the point of manually installing a CA certificate in the browser, then there is little or nothing to prevent the monitoring party from engaging in all sorts of mischief. In an interesting side note, though, several anonymous commenters on the Tor blog posted what they claim to be the Cyberoam devices' default private key. So even if few people learn a lesson about the dangers of consenting to traffic monitoring, perhaps a few others will learn a lesson about leaving default security settings in place.

Comments (29 posted)

Brief items

Security quotes of the week

Popular pet names Rover, Cheryl and Kate could be a thing of the past. Banks are now advising parents to think carefully before naming their child’s first pet. For security reasons, the chosen name should have at least eight characters, a capital letter and a digit. It should not be the same as the name of any previous pet, and must never be written down, especially on a collar as that is the first place anyone would look. Ideally, children should consider changing the name of their pet every 12 weeks.

[...] We tried to call Barclays’ security expert R0b Ste!nway for a comment, but he was not available for 24 hours, having answered his phone incorrectly three times in succession.

-- NewsBiscuit

A group of researchers led by Professor Todd Humphreys from the University of Texas at Austin Radionavigation Laboratory recently succeeded in raising the eyebrows of the US government. With just around $1,000 in parts, Humphreys’ team took control of an unmanned aerial vehicle owned by the college, all in front of the US Department of Homeland Security.

After being challenged by his lab, the DHS dared Humphreys’ crew to hack into a drone and take command. Much to their chagrin, they did exactly that.

-- RT

Just like the huge black eye that _every major US telecom company_ got when they got caught colluding with the NSA to spy on Americans in obvious violation of US law? You'll recall that it was such a *huge* PR disaster... that they're all still doing it today(!), that Congress retroactively changed the law(!), and that the whistleblower was indicted for espionage(!).

I agree that Intel's hardware is very probably not backdoored, but that's simply not a standard by which threats should be measured in this field. Treating a backdoor scenario as outside the realm of possibility based on appeals to reputation given such obvious, massive, and recent precedent to the contrary is... not a typical security mindset, to put it mildly.

-- Matt Mackall

Comments (1 posted)

New vulnerabilities

backuppc: cross-site scripting

Package(s): backuppc  CVE #(s): CVE-2011-4923
Created: July 9, 2012  Updated: July 11, 2012
Description: From the Mageia advisory:

Cross-site scripting (XSS) vulnerability in View.pm in BackupPC 3.0.0, 3.1.0, 3.2.0, 3.2.1, and possibly earlier allows remote attackers to inject arbitrary web script or HTML via the num parameter in a view action to index.cgi, related to the log file viewer

Alerts:
Mandriva MDVSA-2013:062 backuppc 2013-04-08
Mageia MGASA-2012-0165 backuppc 2012-07-14
Mageia MGASA-2012-0139 backuppc 2012-07-09

Comments (none posted)

ffmpeg: insecure frame threads

Package(s): ffmpeg  CVE #(s): CVE-2011-3937
Created: July 9, 2012  Updated: July 11, 2012
Description: From the Mageia advisory:

Disallow width/height changing with frame threads

Alerts:
Gentoo 201310-12 ffmpeg 2013-10-25
Mandriva MDVSA-2013:079 ffmpeg 2013-04-09
Gentoo 201210-06 libav 2012-10-19
Mageia MGASA-2012-0143 ffmpeg 2012-07-09

Comments (none posted)

jruby: denial of service

Package(s): jruby  CVE #(s): CVE-2011-4838
Created: July 10, 2012  Updated: April 29, 2015
Description: From the CVE entry:

JRuby before 1.6.5.1 computes hash values without restricting the ability to trigger hash collisions predictably, which allows context-dependent attackers to cause a denial of service (CPU consumption) via crafted input to an application that maintains a hash table.

Alerts:
Debian-LTS DLA-209-1 jruby 2015-04-29
Gentoo 201207-06 jruby 2012-07-09

Comments (none posted)

keepalived: denial of service

Package(s): keepalived  CVE #(s): CVE-2011-1784
Created: July 10, 2012  Updated: April 10, 2013
Description: From the CVE entry:

The pidfile_write function in core/pidfile.c in keepalived 1.2.2 and earlier uses 0666 permissions for the (1) keepalived.pid, (2) checkers.pid, and (3) vrrp.pid files in /var/run/, which allows local users to kill arbitrary processes by writing a PID to one of these files.

Alerts:
Mandriva MDVSA-2013:096 keepalived 2013-04-10
Fedora FEDORA-2012-12367 keepalived 2012-09-04
Fedora FEDORA-2012-12377 keepalived 2012-09-04
Mageia MGASA-2012-0188 keepalived 2012-08-02
Gentoo 201207-07 keepalived 2012-07-09

Comments (none posted)

kernel: denial of service

Package(s): kernel  CVE #(s): CVE-2012-2744 CVE-2012-2745
Created: July 10, 2012  Updated: October 24, 2012
Description: From the Red Hat advisory:

* A NULL pointer dereference flaw was found in the nf_ct_frag6_reasm() function in the Linux kernel's netfilter IPv6 connection tracking implementation. A remote attacker could use this flaw to send specially-crafted packets to a target system that is using IPv6 and also has the nf_conntrack_ipv6 kernel module loaded, causing it to crash. (CVE-2012-2744, Important)

* A flaw was found in the way the Linux kernel's key management facility handled replacement session keyrings on process forks. A local, unprivileged user could use this flaw to cause a denial of service. (CVE-2012-2745, Moderate)

Alerts:
Oracle ELSA-2013-1645 kernel 2013-11-26
openSUSE openSUSE-SU-2013:0927-1 kernel 2013-06-10
openSUSE openSUSE-SU-2013:0396-1 kernel 2013-03-05
SUSE SUSE-SU-2012:1391-1 Linux kernel 2012-10-24
Ubuntu USN-1606-1 linux 2012-10-11
Ubuntu USN-1597-1 linux-ec2 2012-10-04
Ubuntu USN-1574-1 linux-lts-backport-natty 2012-09-19
Ubuntu USN-1567-1 linux 2012-09-14
Red Hat RHSA-2012:1129-01 kernel 2012-07-31
Scientific Linux SL-kern-20120726 kernel 2012-07-26
Red Hat RHSA-2012:1114-01 kernel 2012-07-24
Red Hat RHSA-2012:1148-01 kernel 2012-08-07
Oracle ELSA-2012-2026 kernel 2012-07-18
Oracle ELSA-2012-2025 kernel 2012-07-18
Ubuntu USN-1507-1 linux 2012-07-16
Oracle ELSA-2012-1064 kernel 2012-07-11
CentOS CESA-2012:1064 kernel 2012-07-10
Red Hat RHSA-2012:1064-01 kernel 2012-07-10

Comments (none posted)

kernel: denial of service

Package(s): kernel  CVE #(s): CVE-2012-3375
Created: July 10, 2012  Updated: July 13, 2012
Description: From the Red Hat advisory:

* The fix for CVE-2011-1083 (RHSA-2012:0150) introduced a flaw in the way the Linux kernel's Event Poll (epoll) subsystem handled resource clean up when an ELOOP error code was returned. A local, unprivileged user could use this flaw to cause a denial of service. (CVE-2012-3375, Moderate)

Alerts:
openSUSE openSUSE-SU-2013:0396-1 kernel 2013-03-05
Mageia MGASA-2012-0237 kernel 2012-08-23
Ubuntu USN-1529-1 linux 2012-08-10
Ubuntu USN-1539-1 linux-lts-backport-oneiric 2012-08-14
Ubuntu USN-1533-1 linux 2012-08-10
Ubuntu USN-1514-1 linux-ti-omap4 2012-08-10
Red Hat RHSA-2012:1150-01 kernel-rt 2012-08-08
Ubuntu USN-1532-1 linux-ti-omap4 2012-08-10
Scientific Linux SL-kern-20120712 kernel 2012-07-12
Oracle ELSA-2012-1061 kernel 2012-07-11
CentOS CESA-2012:1061 kernel 2012-07-10
Red Hat RHSA-2012:1061-01 kernel 2012-07-10

Comments (none posted)

libgdata: insufficient certificate verification

Package(s): libgdata  CVE #(s): CVE-2012-1177
Created: July 11, 2012  Updated: August 29, 2012
Description: From the Novell bugzilla:

libgdata doesn't validate ssl certificates for all connections

Alerts:
Ubuntu USN-1547-1 libgdata, evolution-data-server 2012-08-28
Mageia MGASA-2012-0190 libgdata 2012-08-02
Mandriva MDVSA-2012:111 libgdata 2012-07-25
Gentoo 201208-06 libgdata 2012-08-14
openSUSE openSUSE-SU-2012:0862-1 libgdata 2012-07-11

Comments (none posted)

mod_security: modsecurity multipart bypasses

Package(s): mod_security  CVE #(s):
Created: July 11, 2012  Updated: July 11, 2012
Description: From the Fedora advisory:

ModSecurity Multipart Bypasses fixed by this upstream release. [v2.6.6] Upgrade to the latest stable upstream release. Upgraded mod_security package.

Alerts:
Fedora FEDORA-2012-9824 mod_security 2012-07-10

Comments (none posted)

mod_security_crs: modsecurity core rule set multipart bypasses

Package(s): mod_security_crs  CVE #(s):
Created: July 11, 2012  Updated: July 11, 2012
Description: From the Fedora advisory:

ModSecurity Core Rule Set Multipart Bypasses fixed by this upstream release. [v2.2.5] Updated spec file. ModSecurity Rules

Alerts:
Fedora FEDORA-2012-9813 mod_security_crs 2012-07-10

Comments (none posted)

openjpeg: code execution

Package(s): openjpeg  CVE #(s): CVE-2012-3358
Created: July 11, 2012  Updated: June 19, 2013
Description: From the Red Hat advisory:

An input validation flaw, leading to a heap-based buffer overflow, was found in the way OpenJPEG handled the tile number and size in an image tile header. A remote attacker could provide a specially-crafted image file that, when decoded using an application linked against OpenJPEG, would cause the application to crash or, potentially, execute arbitrary code with the privileges of the user running the application.

Alerts:
Gentoo 201310-07 openjpeg 2013-10-10
Fedora FEDORA-2013-8953 openjpeg 2013-06-19
Mandriva MDVSA-2013:110 openjpeg 2013-04-10
Debian DSA-2629-1 openjpeg 2013-02-25
Mageia MGASA-2012-0165 openjpeg 2012-07-14
Scientific Linux SL-open-20120711 openjpeg 2012-07-11
Oracle ELSA-2012-1068 openjpeg 2012-07-11
Mandriva MDVSA-2012:104 openjpeg 2012-07-12
CentOS CESA-2012:1068 openjpeg 2012-07-11
Red Hat RHSA-2012:1068-01 openjpeg 2012-07-11

Comments (none posted)

pidgin: remote code execution

Package(s): pidgin  CVE #(s): CVE-2012-3374
Created: July 9, 2012  Updated: March 15, 2013
Description: From the Debian advisory:

Ulf Härnhammar found a buffer overflow in Pidgin, a multi protocol instant messaging client. The vulnerability can be exploited by an incoming message in the MXit protocol plugin. A remote attacker may cause a crash, and in some circumstances can lead to remote code execution.

Alerts:
Oracle ELSA-2013-0646 pidgin 2013-03-14
Gentoo 201209-17 pidgin 2012-09-27
Scientific Linux SL-pidg-20120719 pidgin 2012-07-19
Oracle ELSA-2012-1102 pidgin 2012-07-20
SUSE SUSE-SU-2012:0890-1 pidgin, finch and libpurple 2012-07-19
CentOS CESA-2012:1102 pidgin 2012-07-19
Red Hat RHSA-2012:1102-01 pidgin 2012-07-19
Fedora FEDORA-2012-10294 pidgin 2012-07-14
Slackware SSA:2012-195-02 pidgin 2012-07-14
Mandriva MDVSA-2012:105 pidgin 2012-07-12
Fedora FEDORA-2012-10287 pidgin 2012-07-10
Mageia MGASA-2012-0154 pidgin 2012-07-10
Ubuntu USN-1500-1 pidgin 2012-07-09
Debian DSA-2509-1 pidgin 2012-07-08

Comments (none posted)

pidgin: information disclosure

Package(s): pidgin  CVE #(s): CVE-2011-4922
Created: July 10, 2012  Updated: July 11, 2012
Description: From the Ubuntu advisory:

Julia Lawall discovered that Pidgin incorrectly cleared memory contents used in cryptographic operations. An attacker could exploit this to read the memory contents, leading to an information disclosure. This issue only affected Ubuntu 10.04 LTS.

Alerts:
Ubuntu USN-1500-1 pidgin 2012-07-09

Comments (none posted)

python3: data leaks, memory damage, and crash

Package(s): python3  CVE #(s): CVE-2012-2135
Created: July 11, 2012  Updated: August 13, 2012
Description: From the Novell bugzilla:

In the utf-16 decoder after calling unicode_decode_call_errorhandler aligned_end is not updated. This may potentially cause data leaks, memory damage, and crash. The bug introduced by implementation of the issue #4868. In a similar situation in the utf-8 decoder aligned_end is updated.

Alerts:
Ubuntu USN-1616-1 python3.1 2012-10-24
Ubuntu USN-1615-1 python3.2 2012-10-23
Mageia MGASA-2012-0208 python3 2012-08-12
openSUSE openSUSE-SU-2012:0861-1 python3 2012-07-11

Comments (none posted)

wireshark: denial of service

Package(s): wireshark  CVE #(s): CVE-2012-3825 CVE-2012-3826
Created: July 11, 2012  Updated: July 11, 2012
Description: From the CVE entries:

Multiple integer overflows in Wireshark 1.4.x before 1.4.13 and 1.6.x before 1.6.8 allow remote attackers to cause a denial of service (infinite loop) via vectors related to the (1) BACapp and (2) Bluetooth HCI dissectors, a different vulnerability than CVE-2012-2392. (CVE-2012-3825)

Multiple integer underflows in Wireshark 1.4.x before 1.4.13 and 1.6.x before 1.6.8 allow remote attackers to cause a denial of service (loop) via vectors related to the R3 dissector, a different vulnerability than CVE-2012-2392. (CVE-2012-3826)

Alerts:
Scientific Linux SLSA-2013:1569-2 wireshark 2013-12-09
Oracle ELSA-2013-1569 wireshark 2013-11-26
Red Hat RHSA-2013:1569-02 wireshark 2013-11-21
Fedora FEDORA-2012-10175 wireshark 2012-07-10

Comments (none posted)

xorg-server: code execution

Package(s): xorg-server  CVE #(s): CVE-2012-2118
Created: July 10, 2012  Updated: April 11, 2013
Description: From the CVE entry:

Format string vulnerability in the LogVHdrMessageVerb function in os/log.c in X.Org X11 1.11 allows attackers to cause a denial of service or possibly execute arbitrary code via format string specifiers in an input device name.

Alerts:
Mandriva MDVSA-2013:139 x11-server 2013-04-10
Mageia MGASA-2012-0299 x11-server 2012-10-20
Ubuntu USN-1502-1 xorg-server 2012-07-11
Gentoo 201207-04 xorg-server 2012-07-09

Comments (none posted)

Page editor: Jake Edge

Kernel development

Brief items

Kernel release status

The current development kernel is 3.5-rc6, released on July 7. "There's mainly some btrfs and md stuff in here, with the normal driver changes, arm updates and some networking changes. And a smattering of random stuff (including docs etc). None of it looks very scary, it's all pretty small, and there aren't even all that many of those small changes." Linus also notes that the 3.6 merge window is likely to hit when a lot of developers are on vacation, so the 3.6 kernel might contain a relatively small set of changes.

Stable updates: 3.2.22 was released on July 5. The 3.2.23 update is in the review process as of this writing; it can be expected almost anytime.

Comments (none posted)

Quotes of the week

One man's idiom is another man's idiocy.
-- Andrew Morton

Perhaps it's a typo, and was meant to be AArgh64
-- Jan Ceuleers

Dunno, the moment Apple comes out with their 64-bit iPhone6 (or whichever version they'll go 64-bit) *everyone* will scramble to go 64-bit, for marketing reasons. Code and SoC details will be ported to 64-bit in the usual fashion: in the crudest, fastest possible way.

There's also a technological threshold: once RAM in a typical smartphone goes above 2GB the pain of a 32-bit kernel becomes significant. We are only about a year away from that point.

So are you *really* convinced that the colorful ARM SoC world is not going to go 64-bit and will all unify behind a platform, and that we can actually force this process by not accepting non-generic patches? Is such a platform design being enforced by ARM, like Intel does it on the x86 side?

-- Ingo Molnar

A number of the developers all went to a climbing gym one evening, and I found myself climbing with another kernel developer who worked for a different company, someone whose code I had rejected in the past for various reasons, and then eventually accepted after a number of different iterations. So I've always thought after that incident, "always try to be nice in email, you never know when the person on the other side of the email might be holding onto a rope ensuring your safety."
-- Greg Kroah-Hartman

Comments (none posted)

30 Linux Kernel Developers in 30 Weeks: Greg Kroah-Hartman (Linux.com)

Jennifer Cloer interviews Greg Kroah-Hartman for the Linux.com series. "I was an embedded software developer testing the device I was working on (a barcode scanner) with all different operating systems to ensure that I had gotten the USB firmware correct. Linux had very little USB support at the time, and I realized I could help out and contribute to make it work better. One thing led to another and I soon got a job doing Linux kernel development full time over 10 years ago and never looked back."

Comments (none posted)

Kernel development news

Btrfs send/receive

By Jonathan Corbet
July 11, 2012
The btrfs snapshot capability allows a system administrator to quickly capture the state of a filesystem at any given time. Thanks to the copy-on-write mechanism used by btrfs, snapshots share data with other snapshots or the "live" system; blocks are only duplicated when they are changed. While btrfs makes the creation and management of snapshots easy, it currently lacks the ability to efficiently determine what the differences are between two snapshots and save that information for future use. Given that some other advanced filesystems (ZFS, for example) offer that capability, btrfs can arguably be seen as falling a little short in this particular area.

Happily, that situation appears to be about to change, as Alexander Block's btrfs send/receive patch set has been well received by the development community. In short, with this patch set (and the associated user space tools), btrfs can be instructed to calculate the set of changes made between two snapshots and serialize them to a file. That file can then be replayed elsewhere, possibly at some future time, to regenerate one snapshot from the other.

This functionality is implemented with the new BTRFS_IOC_SEND ioctl() command. In its simplest form, this operation accepts a file descriptor representing a mounted volume and the subvolume ID corresponding to the snapshot of interest; it will then find the changes between the snapshot and the "parent" snapshot it was generated from. There are more options, though:

  • The operation can actually take a list of snapshot/subvolume IDs and generate a combined file for all of them.

  • The parent snapshot can be specified explicitly. That may be required for older btrfs volumes that lack the needed identifying information. It may also be useful to generate differences that skip over a set of snapshots — differences from a grandparent, say, instead of the direct parent.

  • The command also accepts an optional list of "clone sources." Those are subvolumes that can be expected to exist on the receiving side; when possible, data blocks will be "cloned" from those snapshots rather than being written into the differences file. That reduces the size of the differences, and enables better data sharing on the receive side.

The generated file is essentially a set of instructions for converting the parent snapshot into the one being "sent." The list of commands is surprisingly long, including operations like create a file (or directory, device node, FIFO, symbolic link, ...), rename or link a file, unlink a file, set and remove extended attributes, write data, clone data blocks, truncate a file, change ownership and permissions, set file times, and so on. The code that generates this file is also surprisingly long, being several thousand lines of complex, nearly uncommented functions (some of the comments that do exist, saying things like "magic happens here," are not entirely helpful).

Interestingly, according to the patch introduction, the custom file format was not in the original plan. Instead, the output was meant to be in something close to the tar file format — close enough that the tar command could be used to extract data from it. Tar turned out not to have the needed capabilities, though, so a new format was created. The format should be considered to be in flux still, though, clearly, it will need to stabilize before this feature can be considered ready for production use. As it happens, the playback of this file can be done almost entirely in user space, so there is no need for a BTRFS_IOC_RECEIVE operation.

At the command level, using this feature can be as simple as:

    btrfs send snapshot

This will send the given snapshot (in its entirety) to the standard output stream. Writing the command as:

    btrfs send -i oldsnap snapshot

will cause the creation of an incremental send containing just the differences from oldsnap. The receive command can be used to apply a file created by btrfs send to an existing filesystem.
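
Combining the two commands with ssh gives the obvious replication pipeline. A sketch follows; the host and paths are invented, and, the feature being experimental, the exact syntax may well change:

    # Take a read-only snapshot of /home and mirror it in full.
    btrfs subvolume snapshot -r /home /home/snap-today
    btrfs send /home/snap-today | ssh backuphost btrfs receive /mirror

    # On later runs, send only the differences from the previous snapshot.
    btrfs send -i /home/snap-yesterday /home/snap-today |
        ssh backuphost btrfs receive /mirror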

The primary use case for this feature (which is clearly patterned after the ZFS send/receive functionality) is backups in various forms. A cron job could easily send a snapshot to a remote server on a regular basis, maintaining a mirror of a filesystem there. The send files can simply be stored as backups; an entire volume can be sent as a full backup, while snapshots are easily sent as incrementals. With some additional tooling, the send/receive feature could develop into an advanced backup capability with low-level support from the underlying filesystem.

That is for some time in the future, though; the feature is currently experimental, and Alexander warns potential users:

If you use it for backups, you're taking big risks and may end up with unusable backups. Please do not only count on btrfs send/receive backups!

That said, there seems to be a fair amount of interest in this feature (btrfs creator Chris Mason described it as "just awesome"), so chances are it will be worked into reasonable shape relatively quickly. Then btrfs will have one more useful feature and one less reason to be concerned about comparisons with that other filesystem.

Comments (8 posted)

Supporting 64-bit ARM systems

By Jonathan Corbet
July 10, 2012
ARM is one of the most successful processor architectures ever created; most of us possess several ARM cores for every x86 processor we have. ARM is very much thought of as an embedded systems processor; it is focused on minimal power use and the ability to be built into a variety of system-on-chip configurations. The "small systems" image of ARM is certainly encouraged by the fact that ARM processors are all 32-bit only. That situation is about to change, though, with the arrival of 64-bit ARM processors. Linux will be ready for these systems — the first set of 64-bit ARM support patches has just been posted — but there is still some debate around a couple of fundamental decisions.

One might well wonder whether a 64-bit ARM processor is truly needed. 64-bit computing seems a bit rich even for the fanciest handsets or tablets, much less for the kind of embedded controllers where ARM processors predominate. But mobile devices are beginning to push the memory-addressing limits of 32-bit systems; even a 1GB system requires the use of high memory in most configurations. So, even if the heavily foreshadowed ARM server systems never materialize, there will be a need for 64-bit ARM processors just to be able to efficiently use the memory that future mobile devices will have. "Mobile" and "embedded" no longer mean "tiny."

Naturally, Linux support is an important precondition to a successful 64-bit ARM processor introduction, so ARM has been supporting work in that area for some time. The initial GCC patches were posted back in May, and the first set of kernel patches was posted by Catalin Marinas on July 6. All this code exists despite the fact that no 64-bit ARM hardware is yet available; it all has been developed on simulators. Once the hardware shows up, with luck, the software will work with a minimum of tweaking required.

64-bit ARM support involves the addition of thousands of lines of new code via a 36-part patch set. There are some novel features, such as the ability to run with a 64KB native memory page size, and a lot of important technical decisions to be reviewed. So the kernel developers did what one would expect: they started complaining about the name given to the architecture. That name ("AArch64") strikes many as simultaneously redundant (of course it is an architecture) and uninformative (what does "A" stand for?). Many would prefer either ARMv8 (which is the actual hardware architecture name—"AArch64" is ARMv8's 64-bit operating mode) or arm64.

Arguments in favor of the current name include the fact that it is already used to identify the architecture in the ELF triplet used in binaries; using the same name everywhere should help to reduce confusion. But, then, as Arnd Bergmann noted: "If everything else is aarch64, we should use that in the kernel directory too, but if everyone calls it arm64 anyway, we should probably use that name for as many things as possible." Jon Masters added that, in classic contrarian style, he likes the name as it is; Fedora is planning to use "aarch64" as the name for its 64-bit ARM releases. Others, such as Ingo Molnar, argue in favor of changing the name now when it is relatively easy to do. Catalin seems inclined to keep the current name but says he will think about it before posting the next version of the patch series.

An arguably more substantive question was raised by a number of developers: wouldn't it make more sense to unify the 32-bit and 64-bit ARM implementations from the outset? A number of other architectures (x86, PowerPC, SPARC, and MIPS) all started with separate implementations, but ended up merging them later on, usually with some significant pain involved. Rather than leave that pain for future ARM developers, it has been suggested that, perhaps, it would be better to start with a unified implementation.

There are a lot of reasons given for the separate 64-bit ARM architecture implementation. Much of the relevant thinking can be found in this note from Arnd. The 64-bit ARM instruction set is completely different from the 32-bit variety, to the point that there is no possibility of writing assembly code that works on both architectures. The system call interfaces also differ significantly, with the 64-bit version taking a more standard approach and leaving a lot of legacy code behind. The 64-bit implementation hopes to leave the entire 32-bit ARM "platform" concept behind as well; indeed, as Jon put it, there are hopes that it will be possible to have a single kernel binary that runs on all 64-bit ARM systems from the outset. In general, it is said, giving AArch64 a clean start in its own top-level hierarchy will make it possible to leave a lot of ARM baggage behind and will result in a better implementation overall.

Others were quick to point out that most of these arguments have been heard in the context of other architectures. x86_64 was also meant to be a clean start that dumped a lot of old i386 code. In the end, things have turned out otherwise. It may be that things are different here; 32-bit ARM has rather more legacy baggage than other architectures did, and the processor differences seem to be larger. Some have said that the proper comparison is with x86 and ia64, though one gets the sense that the AArch64 developers don't want to be seen in the same light as ia64 in general.

This decision will come down to what the AArch64 developers want, in the end; it's up to them to produce a working implementation and to maintain it into the future. If they insist that it should be a separate top-level architecture, it is unlikely that others will block its merging for that reason alone. Of course, it will also be up to those developers to manage a merger of the two in the future, should that prove to be necessary. If nothing else, life as a separate top-level architecture will allow some experimentation without the fear of breaking older 32-bit systems; the result could be a better unified architecture some years from now, should things move in that direction.

Thus far, there has been little in the way of deeper technical criticism of the AArch64 patch set. Things may stay that way. The code has already been through a number of rounds of private review involving prominent developers, so the worst problems should already have been found and addressed. Few developers have the understanding of this new processor that would be necessary to truly understand much of the code. So it may go into the mainline kernel (perhaps as early as 3.7) without a whole lot of substantial changes. After that, all that will be needed is actual hardware; then things should get truly interesting.

Comments (39 posted)

Linux power management: The documentation I wanted to read

July 10, 2012

This article was contributed by Neil Brown

Last week we discussed three elements that might serve to guide the creation of introductory technical documentation. This week we put those elements to the test by using them to create some introductory documentation for Linux power management. For me, this exercise precisely answers the question "What were you looking for that you didn't find?", as it is the documentation I would have liked to read.

This documentation is necessarily incomplete, partly because my own experience is not yet broad enough to provide a comprehensive document, and partly because doing so might try the patience of the present readership. As such it stops short of delving into the details of hibernation and completely omits any treatment of quality-of-service and wakeup sources, all of which would have an important place in a more complete document. Fortunately there are still sufficient topics to showcase the presentation of structure, purpose, and examples.

Three perspectives on Linux power management

The power management infrastructure in Linux is quite complex, but hopefully not intractably so. To get a handle on this complexity it is helpful to view it from three different perspectives. The first perspective highlights the different holistic states of the system which roughly divide into "in use", "not in use", and "indefinitely not in use", corresponding to "run time power management", "suspend" and "hibernate". One of the distinctions between these is the size of the power switch. The first uses lots of little power switches at different times, while the last turns off everything all at once (except maybe a real-time clock or similar).

The second of these states is somewhat harder to define. It covers a range of states which are not easy to clearly differentiate. At one end of the spectrum we have the traditional "suspend" mode of a laptop, which is almost like hibernation but uses a little more power and is a little quicker to get into and out of. Once the laptop has entered suspend it really must stay there using minimal power until it is explicitly wakened, as it might have been placed in a padded case for transport and any increase in power usage could result in over-heating and damage. This state is often entered with help from BIOS firmware so, to the OS, it is a bit like a single power switch which transitions from "on" to "suspend".

At the other end of the spectrum is the way that "suspend" is used in the Android mobile platform and similar devices. These devices are expected to wake up spontaneously for various reasons, whether due to an incoming phone call, a reminder alarm, or just a periodic check for new email or software updates. Management of power and temperature is generally better than in notebooks, so the risk of over-heating is not present. There is normally little or no firmware and the entire power-management transition is handled by the OS, so it is responsible for turning off each individual device in the correct order, and then restoring them again later.

Between these extremes of a light hibernation and a heavy snooze there is room for other possibilities. A server might use a BIOS-based suspend to save power after arranging for wake-ups via wake-on-LAN or a real-time clock alarm. This can be seen as a deeper sleep than an Android phone normally enters, but not as deep as the laptop in its padded case. The "suspend" mode in Linux attempts to cater to all of these, and that flexibility leads to some of the complexity.

The second perspective highlights the broad variety of components that need to be managed. Some, like rotating disk drives, have a high cost in power and time for turning off and on again, while others, like an LED, have essentially no cost. Some, such as a UART, need to either be off or sufficiently on to be able to accept full-rate data at any moment. Others, such as USB, can enter intermediate states where they can receive external signaling, but are free to take some time to fully wake up.

Other sources of variability include the level of independence from other devices, the degree of involvement of user space in management of the device, and how power is routed - whether through the same bus that carries commands and data, or through some separate regulator or "power domain". These are just some of the ways that devices can vary and thus some of the issues that Linux power management needs to be prepared for.

The final perspective highlights the different stages on the way towards a low-power state, and on the way back to full functionality. The key elements of the low-power transition are to move all relevant components to a quiescent state, to record that state, then to stop powering some or all of the components; similar elements apply on the way back up. Managing all the aforementioned complexity through this simple transition means that we end up with quite a few stages, as we will shortly see.

Two Understandings

Part of understanding the solution to managing this complexity is understanding the balance that has been chosen between a "mid-layer" solution and a "library" solution. That is, how much responsibility for correct behavior and sequencing is taken by the core code and imposed on the drivers, and how much of the responsibility is left in the hands of the drivers. Centralizing responsibility tends to be safe but inflexible, while distributing it is risky but versatile. Linux power management takes a middle road, so it is important to understand where each responsibility lies.

The main imposition made by the PM core is the over-all sequencing of suspend and resume. Allowing individual drivers to take a more active role in this process would probably require a general dependency solver and would undoubtedly make debugging a lot harder. In contrast, choices that are local to a specific device, such as timeouts before power management activates, or the use of a separate thread for performing power management actions, are actively supported by the core without being imposed on drivers that don't want them.

One other imposition, which will be raised again later, involves interaction with interrupts. The PM core strongly encourages a specific sequencing, but does provide hooks for a driver to escape it if absolutely necessary.

Understanding Linux power management also requires knowing how devices are classified in Linux. The most obvious classification is shown by the "subsystem" link that can be found in the sysfs entry for the device. This points to either a "bus" or a "class" that the device belongs to. This subsystem roughly describes the interface that the device provides. Together with this can be a "device type" which allows further specialization. A simple example is that members of the class "block" - which are block devices such as disk drives - can be of type "disk" or type "partition", reflecting the fact that both the whole device and each individual partition are block devices, but that they do have some specific behaviors that are quite different.

Finally, each device can have a "power domain" (or pm_domain). This is an abstraction that is currently only used for ARM SoC modules; it represents the fact that different collections of devices within the SoC can be powered on or off together, so the power domain may need to know when each device changes power state in order to re-evaluate or adjust the overall state of the domain.

These classifications are used to direct all the power management calls that are described below. If a device has a power domain, it gets to handle the call. If not, but the type, class, or bus declares any PM operations, those operations get to handle the call, otherwise the call is handled by the driver for the device. The PM core doesn't attempt to call all of the possible handlers for a particular device, just the first that is found. This is an example of distribution of responsibility. The first handler has the freedom to call more specific handlers, or to do all the work itself, and equally has the responsibility to ensure all required handlers are called.
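That dispatch logic is compact enough to sketch in code. What follows is a condensed rendition, modeled on the handler-selection logic in drivers/base/power/runtime.c; the real code adds locking, reference counting, and error handling, so treat this as illustrative rather than a literal copy:

    #include <linux/device.h>
    #include <linux/pm.h>

    /* Pick the first handler found: power domain first, then the
     * subsystem (type, class, or bus), falling back to the driver. */
    static int pick_and_call_runtime_suspend(struct device *dev)
    {
            int (*callback)(struct device *) = NULL;

            if (dev->pm_domain)
                    callback = dev->pm_domain->ops.runtime_suspend;
            else if (dev->type && dev->type->pm)
                    callback = dev->type->pm->runtime_suspend;
            else if (dev->class && dev->class->pm)
                    callback = dev->class->pm->runtime_suspend;
            else if (dev->bus && dev->bus->pm)
                    callback = dev->bus->pm->runtime_suspend;

            if (!callback && dev->driver && dev->driver->pm)
                    callback = dev->driver->pm->runtime_suspend;

            return callback ? callback(dev) : 0;
    }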

For example, the power domain handler for the OMAP platform (in arch/arm/plat-omap/omap_device.c) calls the driver-specific handler (bypassing any subsystem handlers) before or after doing any OMAP-specific handling. The MMC bus handlers call into driver-specific handlers which are stored in a non-standard location - presumably for historical reasons.

With these perspectives and understandings in place, we can move on to some specifics.

Runtime PM

Runtime power management has the fewest states and so is probably the best place to start digging into details. This is the part of Linux PM that manages power for individual devices without taking the whole system into a low-power state.

In this case the most interesting stage of the transition to lower power is "move to quiescent state". Once that is done there is one method call (runtime_suspend()) which combines "record current state" and "remove power", and another (runtime_resume()) which must restore power and reload any needed device state.
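As a concrete, though hypothetical, illustration, a driver's pair of callbacks might look like the sketch below. The foo_* names are invented stand-ins for whatever register save/restore and power-switching a real device needs; only the callback signatures and dev_get_drvdata() are the real API:

    #include <linux/pm_runtime.h>

    static int foo_runtime_suspend(struct device *dev)
    {
            struct foo_chip *chip = dev_get_drvdata(dev);

            foo_save_registers(chip);       /* record current state */
            foo_power_off(chip);            /* remove power */
            return 0;
    }

    static int foo_runtime_resume(struct device *dev)
    {
            struct foo_chip *chip = dev_get_drvdata(dev);

            foo_power_on(chip);             /* restore power */
            foo_restore_registers(chip);    /* reload saved state */
            return 0;
    }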

For runtime PM, the "move to quiescent state" transition is a cause, not an effect - the new state isn't requested, it is simply noticed. The PM core keeps track of the activity of each device using two counters and an optional timer. One counter (usage_count) counts active references to the device. These may be external references such as open file handles, or other devices that are making use of this one, or they may be internal references used to hold the device active for the duration of some operation. The other counter (child_count) counts the number of children that are active. The timer can be used to add a delay between the counters reaching zero and the device being considered to be idle. This is useful for devices with a high cost for turning on or off.
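To see how usage_count is driven in practice, here is a minimal sketch of the common pattern for holding an active reference across one operation; foo_do_transfer() is hypothetical, while the pm_runtime_* calls are the real interface:

    static int foo_transfer(struct device *dev)
    {
            int ret;

            ret = pm_runtime_get_sync(dev); /* usage_count++, resume if needed */
            if (ret < 0) {
                    pm_runtime_put_noidle(dev); /* balance the count on failure */
                    return ret;
            }

            foo_do_transfer(dev);           /* hypothetical I/O operation */

            pm_runtime_put(dev);            /* usage_count--, idle check runs */
            return 0;
    }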

This "autosuspend" timer is not widely used at present, with only nine drivers calling pm_runtime_put_autosuspend() to start the timer, while 14 call pm_runtime_set_autosuspend_delay() which sets the timeout (though that can be set via sysfs). One user is the omap_hsmmc driver for the High Speed Multi-Media Card interface in OMAP processors. It sets a 100ms delay before declaring a device to be truly idle, presumably due to costs in activating and deactivating the cards.

The counter of active children can optionally be ignored when determining whether a device is idle. Normally the parent is needed to access the child - typically the parent is a bus sending commands to the child - so powering down the parent while children are active would be counterproductive. Sometimes it is useful though.

One good example is an I2C bus. I2C (inter-integrated circuit) is a very simple 2-wire bus for signaling between integrated circuits on a board. It doesn't carry power, only a clock signal and a bidirectional data signal. The bus is entirely master-driven. Slaves (which appear as children in the Linux device tree) cannot signal the master directly at all; they simply respond to commands from the master.

As an I2C controller is very cheap to turn on before a command is sent, and to turn off after the response is received, there is no need to keep it powered just because its child (which could be a sensor that is monitoring the environment and may have a higher turn-on cost) is left on. Consequently some I2C controllers, such as i2c-nomadik and i2c-sh_mobile, use pm_suspend_ignore_children() to allow them to report as idle even when they have active children.
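In code, opting out of the child count is a single call at initialization time. This sketch assumes a hypothetical platform driver's probe routine; pm_suspend_ignore_children() and pm_runtime_enable() are the real calls:

    #include <linux/platform_device.h>
    #include <linux/pm_runtime.h>

    static int foo_i2c_probe(struct platform_device *pdev)
    {
            /* The controller is cheap to power-cycle around each transfer,
             * so let it go idle even while devices below it stay active. */
            pm_suspend_ignore_children(&pdev->dev, true);
            pm_runtime_enable(&pdev->dev);
            return 0;
    }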

When a device is deemed to be idle by the above criteria its runtime_idle() method is called. This function will normally perform any further checks (as does usb_runtime_idle()) and possibly call pm_runtime_suspend() to initiate the change in power state. For a slight variation, lnw_gpio_runtime_idle() in the gpio-langwell.c driver doesn't call pm_runtime_suspend() directly but rather calls pm_schedule_suspend() with a 500ms delay. Presumably this design predates the introduction of the autosuspend feature.
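A minimal runtime_idle() implementation following that pattern might read as below; foo_still_busy() is a hypothetical device-specific check standing in for the "further checks" mentioned above:

    static int foo_runtime_idle(struct device *dev)
    {
            if (foo_still_busy(dev))        /* any further device checks */
                    return -EBUSY;

            pm_runtime_suspend(dev);        /* initiate the power-state change */
            return 0;
    }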

There is one class of devices that does not follow this structure for power management, and that is the CPU. The general pattern of entering a quiescent state, recording state information, and reducing power usage is the same; however, the particular implementation is vastly different. This is partly due to the uniquely central role that the CPU plays, and partly due to the fact that a CPU typically has many more levels and styles of power saving. Runtime PM for the CPU is implemented using the cpuidle, cpufreq, and CPU hotplug subsystems, which will not be discussed further here; see this article for an introduction to cpuidle.

System Suspend

It can be helpful to view the "suspend" process as forcing all devices into a quiescent state, and then simply allowing runtime power management to put them all to sleep. The last to go to sleep would be the CPU (or CPUs) under the guidance of "cpuidle". While this isn't the way it is actually implemented, it provides a perspective which exposes the relationship between suspend and runtime PM quite well.

There are several reasons for not implementing it this way. Possibly the most unavoidable is that PM_RUNTIME and SUSPEND are separate kernel config options and there is a desire to keep it that way, so neither can rely on the other being present. There is also the fact that a BIOS (such as ACPI) might be involved in one or the other and may impose different handling requirements. Finally, individual drivers might want to make different decisions based on what sort of power management is happening, so it is generally best to tell them what is actually happening, rather than pretending that one thing is a form of another.

Forcing devices into a quiescent state has an important difference from just allowing them to get there on their own: any interdependencies between devices need to be explicitly handled. Linux PM has chosen to manage this by having a clear sequence of steps for transitioning to low power, and an explicit ordering of devices so that they take each step in a well-defined order.

The ordering (stored in dpm_list linked through dev->power.entry) is normally the order in which devices are registered, with new devices added to the end, thus being after any devices that they depend on. However it is possible to reorder the list using device_move() which gives a device a new parent, and can place it directly after that parent, or at the end of the list. For example, when an rfcomm tty-over-bluetooth device is opened, a bluetooth connection is created and the tty device is reparented under the relevant bluetooth device and placed at the end of the device list.
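The reordering itself is a one-call affair. Here is a hedged sketch of reparenting a device in the rfcomm style; device_move() and DPM_ORDER_DEV_LAST are the real interface, while foo_reparent() is just a wrapper invented for illustration:

    #include <linux/device.h>

    static int foo_reparent(struct device *dev, struct device *new_parent)
    {
            /* DPM_ORDER_DEV_LAST places dev at the tail of dpm_list,
             * after everything it depends on; since suspend walks the
             * list in reverse, dev will be suspended before them. */
            return device_move(dev, new_parent, DPM_ORDER_DEV_LAST);
    }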

The first stage of suspend, after some preliminaries like calling "sync" to flush out dirty data and switching to a separate virtual console, is to move all processes into a quiescent state. Devices which interact closely with processes need a chance to have one last chat before their process goes to sleep and this is achieved by registering a "notifier" which gets called before processes are put to sleep, and again when they are woken up.

This is variously used to:

  • load firmware in case it will be needed during resume
  • copy device state to swappable memory, which may be needed when the device state is enormous, as with video RAM (drivers/gpu/drm/vmwgfx)
  • avoid deadlock when interacting with sysfs (drivers/acpi/battery.c)
  • preemptively remove devices that might be removed while the system is suspended, so appropriate cleanup can happen (drivers/mmc/core)

and a few other minor tasks; a sketch of how such a notifier is registered appears below. Once these notifiers run, all processes are sent a special signal which results in them being moved to the "freezer", where they are forced to wait for system resume to happen.
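The notifier API and the PM_SUSPEND_PREPARE/PM_POST_SUSPEND events in this sketch are the real interface from linux/suspend.h; foo_load_firmware() stands in for whatever work a particular driver needs to do:

    #include <linux/suspend.h>

    static int foo_pm_notify(struct notifier_block *nb,
                             unsigned long event, void *unused)
    {
            switch (event) {
            case PM_SUSPEND_PREPARE:        /* before processes are frozen */
                    foo_load_firmware();    /* fetch anything needed for resume */
                    break;
            case PM_POST_SUSPEND:           /* after processes are thawed */
                    break;
            }
            return NOTIFY_OK;
    }

    static struct notifier_block foo_pm_nb = {
            .notifier_call = foo_pm_notify,
    };

    /* In the driver's init path: register_pm_notifier(&foo_pm_nb); */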

Once all processes are quiescent, the next step is to instruct all devices to also become quiescent. To do this we need to walk the list in reverse order, putting children to sleep before their parents, as the parent may be needed to help put the child to sleep. However, as a new child could be born at any moment (e.g. due to a device being plugged in), and as children get added to the end of the list, we might miss some children on the first pass. To avoid this, the PM core makes two passes over the list. The first pass starts at the beginning and simply asks devices to stop adding children by calling their "prepare()" method. If children are born during this time they can only be added after the current pointer in the list, and so will not be missed. Once this is complete we know that no new devices will be added, so the list is walked in the reverse order calling the "suspend" method.

The "suspend" method is actually three separate methods, suspend(), suspend_late(), and suspend_noirq(), which can share among themselves the three tasks of making the device quiescent, saving any state, and reducing power usage. How much of which task is allocated to which methods is largely up to each driver providing that the division works with the calling patterns of the three methods.

Calls to these methods are made to all devices in child-before-parent order and the sets of calls are interleaved with system-wide suspend operations, made largely through the suspend_ops dispatch table. The ordering is roughly:

  • system wide begin()
  • per-device suspend()
  • system wide prepare()
  • per-device suspend_late()
  • system wide disable (almost) all interrupt handlers
  • per-device suspend_noirq()
  • system wide prepare_late()
  • disable nonboot CPUs
  • syscore_suspend()
  • system wide enter()

Note that it is possible for the sequence from system wide prepare() onwards to be repeated (after being unwound by corresponding "resume" actions) without going all the way up to fully awake and starting the sequence from the top. This happens if the suspend_again() suspend operation requests it. Currently this is only requested by the charger manager which often needs to wake up parts of the system to check battery charging state, without wanting the cost of a full wakeup.

Deducing the purpose of these method calls by looking for example usage in the code is problematic for a number of reasons.

  • For the system-wide operations (begin(), prepare(), prepare_late()), there are few users and those that exist do not make their purpose clear to an untrained observer. The most complete user is ACPI, so possibly a full understanding of that specification would help. Unfortunately that is beyond the scope of this article (and of this author).

    In general, ACPI recommends specific procedures for entering and leaving system sleep states (such as suspend) and Linux PM was modeled on that and then adjusted to meet broader needs. For example, prepare_late() was added to resolve a conflict between the needs of ACPI and the needs of the ARM platform.

  • For the per-device operations, suspend_late() was only recently added (commit cf579dfb82550e3) and there are no users in Linux-3.4. So any examples we find may be working around the absence of suspend_late() and so should not be copied.
  • The initial reason for producing this document was finding code in drivers that simply wasn't working correctly and trying to understand what "correctly" might mean. Those drivers clearly cannot be used as good examples and there is evidence that other drivers aren't always doing the right thing, so any example may be equally faulty.

Examining the documentation brings a little more useful information.

  • suspend() should leave the device in a quiescent state
  • suspend_late() can often be the same as runtime_suspend() (see also commit cf579dfb82550e3)
  • suspend_noirq() happens after interrupts are disabled and is useful when shared interrupts are used, as you can be certain that the interrupt handler will not be called after suspend_noirq() runs. Some interrupts, such as the timer interrupt, are not disabled.

One observation from the code that seems to be important before we try to paint the big picture is that, after calling the suspend() method on a device, runtime power management is disabled. The purpose of this seems to be to stop runtime PM from racing with system suspend PM - we really don't want two threads trying to power off a device at once, and this is the interlock that prevents that. It also prevents runtime PM from powering the device back on again, so any device that might be needed in the late states of power management needs to be left on when runtime PM is disabled.

Tying all these threads together we get that:

  • The suspend() method should cause the device to stop doing anything, and enter a state much like it would be just before runtime PM might decide to turn it off. So it should wait for any DMA requests to complete and ensure new ones won't start. It should stop transmitting information and ensure that incoming information is either ignored, or triggers a wake-from-suspend (possibly marking the interrupt for wakeups). It should cancel any timers and generally prepare for nothing to happen for a while.

    If the device might be needed to power down other devices, such as an I2C controller that might be needed to tell some regulator to turn off, then the device should be activated for runtime PM purposes so that it will still be active when runtime PM is disabled.

    Part of the task of ignoring incoming information is to ensure that no new children will be created, much as the prepare() method does. Having new devices appear after suspend() would be awkward.

  • The suspend_late() method should power off the device in much the same way that runtime_suspend() does, and it may be exactly the same routine as runtime_suspend(). Occasionally preparing the device to wake up may differ between the system suspend and runtime PM cases. This would be one situation where suspend_late() might need to be different from runtime_suspend().

    The only case where suspend_late() should not be used is where interrupts might still be delivered, but the interrupt handler cannot tolerate the device being off. In many cases the suspend() routine will have put the device in a state in which it will not generate interrupts. Likely exceptions to this are when the interrupt line is shared, or when the device supports wake-from-suspend and so deliberately does not disable interrupts.

    If the platform that the device runs on uses BIOS support to enter suspend, then it is possible that this support will power off the device, so suspend_late() does not need to bother. If it doesn't, it could still be that the device gets powered off by instructing the BIOS to effect the state change, and it may require different power-off procedures for runtime PM and for entering suspend. If this is the case, then suspend_late() will quite likely be very different from runtime_suspend().

  • The suspend_noirq() method is an alternative to suspend_late() but is run without interrupts enabled. It is unlikely that any driver will provide both methods.

    Having interrupts disabled means not only that an interrupt will not occur at an awkward time, but also that any functionality that requires interrupts will not work. So if the driver uses an I2C bus or similar to tell the device to turn off, and if the I2C bus uses interrupts to indicate completion (which is normal), then either the device must be powered off in suspend_late(), or the I2C interrupt must be marked IRQF_NO_SUSPEND.

Paired with each of these methods is a method that is called when returning back towards full-functionality: resume_noirq(), resume_early() and resume(). These simply do the reverse of what the corresponding "suspend" function did.
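Pulling these threads into one place, a driver's dev_pm_ops table might end up looking like the following sketch. It reuses the hypothetical foo_* callbacks from earlier (foo_suspend() and foo_resume() are likewise invented); whether suspend_late() really can reuse the runtime callback depends on the device, as discussed above:

    #include <linux/pm.h>

    static const struct dev_pm_ops foo_pm_ops = {
            /* System sleep: quiesce in suspend(), cut power late. */
            .suspend         = foo_suspend,
            .suspend_late    = foo_runtime_suspend, /* often identical */
            .resume_early    = foo_runtime_resume,
            .resume          = foo_resume,
            /* Runtime PM for the same device. */
            .runtime_suspend = foo_runtime_suspend,
            .runtime_resume  = foo_runtime_resume,
            .runtime_idle    = foo_runtime_idle,
    };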

Closing

Structure, purpose, and examples - these seem to be the elements that distinguish good documentation and enable the reader not just to collect knowledge but to gain understanding. I'll leave you, dear reader, to be the judge of whether their presence here is sufficient to bring an understanding of power management, or indeed an understanding of quality documentation.

I would like to thank Rafael Wysocki for valuable review of an early draft of this article.

Comments (5 posted)

Patches and updates

Kernel trees

Architecture-specific

Core kernel code

Development tools

Device drivers

Filesystems and block I/O

Memory management

Networking

Virtualization and containers

Page editor: Jonathan Corbet

Distributions

Searching for common ground between Debian and FSF

By Nathan Willis
July 11, 2012

On July 3, Debian Project leader Stefano Zacchiroli announced the launch of a new effort to clarify why Debian is not endorsed on the Free Software Foundation's (FSF's) free distribution list, and perhaps even make changes to Debian so that it met the FSF's requirements. That effort has spawned a mailing list where the two projects are talking about the differences in their goals and principles, but a plan of action is yet to come.

Zacchiroli cited three reasons for pursuing inclusion on the FSF distribution list. First, Debian's absence on the list has historically led to a duplication of effort, with derivative distributions created "that are essentially Debian, modulo the changes necessary to be listed." Second, many in the Debian community choose the distribution because of its rigorous stance on software freedom, and there is likely to be a large overlap between them and FSF supporters. Third, Debian's goals in software freedom are essentially self-reviewed, so measuring the distribution against an external standard could reveal valuable information about Debian's successes or failures and its general perception by outsiders.

Conflicting legalese

Although one of the possible outcomes of the effort is getting Debian included on the FSF distribution list, Zacchiroli stated at the outset that documenting Debian's position on why it does not meet the criteria listed by the FSF might also be an acceptable result. He proposes "to work with the FSF to review the issues they claim apply to Debian" in bug-triage fashion. "Some of the bugs will be valid, some of them will be not, and on some there will be disagreement between submitter and 'maintainer'." Should Debian and FSF be unable to resolve the "bug validity" of the outstanding issues that keep Debian off of the FSF distribution list, Zacchiroli said, "at that point we will have obtained a list of blockers, that could then be used as documentation for Debian users who wonder why Debian and FSF disagree on the Free-ness of Debian."

Accepting the possibility that the two projects might not reach common ground is important, because the biggest obstacle to Debian's inclusion on the list is the FSF's requirement that distributions "not steer users towards obtaining any nonfree information for practical use, or encourage them to do so," and the projects are definitely divided on how that guideline applies. To the FSF, not only must the distribution not have any repositories containing non-free software, but it must not refer to third-party repositories that are not committed exclusively to free software "even if they only have free software today," and individual applications cannot suggest installing non-free plugins or documentation. The latter requirement, for example, disqualifies Mozilla Firefox, because its official add-ons site contains proprietary extensions and plugins — and it disqualifies Iceweasel, Debian's rebranded version of Firefox.

Therein lies the tricky part. Iceweasel is a repackaged version of Firefox built by Debian to cope with incompatibilities between Mozilla's trademark guidelines and the Debian Free Software Guidelines (DFSG). Although Iceweasel complies with the DFSG, it does not meet FSF's distribution guidelines. Conversely, many FSF documents are under the GNU Free Documentation License (GFDL), which does not meet DFSG requirements. Debian's explanation of GFDL's incompatibility notes "that this does not imply any hostility towards the Free Software Foundation" or that the project dissuades others from using GFDL.

Nitty gritty

Debian intentionally separates non-free software into a separate repository (named nonfree) which it states is not part of the Debian system, but in its explanation of Debian's status, FSF argues that this is not enough — the nonfree repository is hosted on Debian project servers, and there are references to it in the online documentation. A related problem is the contrib repository, which includes some packages that FSF claims "exist to load separately distributed proprietary programs." Finally, although Debian no longer includes any binary blob kernel modules, FSF points out that the installer still recommends some of them for specific hardware.

Assessing the content of those repositories is a natural first step. Practically speaking, there is no list of exactly which packages in nonfree or contrib violate the FSF guidelines. Paul Wise pointed out an older project to document Debian's nonfree packages and said that "recent policy changes added the requirement for the debian/copyright file to document why something is non-free." The information in the non-free tracking system is quite old (early 2008); updating it could take considerable time, but Zacchiroli suggested reviving it — turning each tracking system entry into a bug report against the relevant package, tagging the reports, and linking each report to the appropriate policy that clarifies why the package is non-free.

Early on in the list discussion, Thorsten Alteholz proposed rolling a "Debian" distribution that intentionally follows the FSF guidelines, and separating it from a "Debian Extended" distribution that includes access to the non-free and contrib repositories. That idea did not gain significant traction. Bryan Quigley suggested looking for packages in nonfree that might be encouraged to relicense, and compiled a list of "low-hanging fruit" including several varieties of non-software package: firmware packages, fonts, documentation, data, and so forth. Daniel Kahn Gillmor liked the concept, but said that most projects have reasons for choosing the licenses they use, so "Convincing the upstream of every package in non-free to change their license seems implausible, so that means that some packages would likely remain."

But Henry Jensen contended that fixing up nonfree and contrib would not be enough on its own, because of the "steer users towards nonfree" requirement. "So, every explicit mentioning of non-free software could be interpreted as recommendation." He posted a list of the Debian components he believed needed fixing. In addition to Iceweasel and other programs that use plugins, he listed the Linux kernel (because it logs the names of proprietary firmware files it expects to see but finds missing), the official Debian web and wiki sites (because they mention non-free software), and the official forums and mailing lists (which lack a moderation system to discourage users from asking about or discussing non-free software). He cited references for the kernel and forum issues, including a 2010 message in which Richard Stallman said a distribution's official forums should not include advice on how to run non-free programs.

The discussion sparked (perhaps predictably) a brief flurry of debate over the merits of FSF's guidelines, and specifically whether or not they go too far when they ban discussion of non-free software. As is typical of debates over free software ideals, there was a wide spectrum of opinion. But personal opinions are not the issue. As Mason Loring Bliss put it, "we're not here to discuss my standards. :P The FSF has, effectively, drawn a line in the sand, and it's their line to draw." Ian Jackson encouraged participants to refrain from dogmatic arguments, and for everyone to treat each other as allies.

The long road ahead

But Jackson's appeal for respectful disagreement also conceded that full agreement between the projects might be unattainable. "If you can't convince your ally on some point then the right thing to do is not to browbeat them harder. The right thing to do is to agree to differ, and move onto a topic where cooperation is possible." So far, there appears to be little progress on the underlying issue of whether the nonfree and contrib repositories are suitably disconnected from the Debian distribution. That issue is the most fundamental, and it is what led to the brief philosophical debate. Documenting the contents of the repositories may be helpful, but ultimately it is their availability that FSF finds objectionable. Jason Self asked if moving the nonfree and contrib repositories to a different virtual host would satisfy the requirements, but so far there has been no reply.

Exactly where FSF decides to draw its lines ultimately involves some judgment calls by humans, of course (I am reminded of Matthew Garrett's 2008 list of things in your computer that you do not have the source code for, including a great many firmware and microcontroller examples), but it draws those lines clearly. If the presence of any information about non-free software on any Debian site or service disqualifies Debian from meeting FSF's distribution guidelines, then it is hard to see how the two projects will find middle ground. Which is not to say that there is no hope: Michael Gilbert pointed to an FSF statement in which Stallman presents a more nuanced approach to balancing the pros and cons of non-free games than he is often given credit for. But these are clearly two projects with firm beliefs about their own ideals, and well-established rationales to back them up. Compromise can hardly be simple.

Comments (36 posted)

Brief items

Distribution quotes of the week

Oh, if the kernel breaks some standard user space, that counts. Tons of people run Debian unstable (and from my limited interactions with it, for damn good reasons: -stable tends to run so old versions of everything that you have to sometimes deal with cuneiform writing when using it)
-- Linus Torvalds

Typically I find the freeze time a hard one to work on Debian. I've never been very effective at chasing and resolving RC bugs. On the other hand, long, protracted freezes are demotivational for everyone. I'm going to try and focus on goals which hasten the release of wheezy and resist the urge to work on things that would not show up until wheezy+1.
-- Jon Dowland

Comments (1 posted)

Android 4.1 in AOSP

Jean-Baptiste Queru has announced the source code release of Android 4.1 (Jelly Bean).

Comments (28 posted)

CentOS-6.3 released

CentOS 6.3 is available. "CentOS-6.3 is based on the upstream release EL 6.3 and includes packages from all variants. All upstream repositories have been combined into one, to make it easier for end users to work with." See the release notes for details.

Full Story (comments: 2)

Distribution News

Fedora

Please welcome Jaroslav Reznik as the new Fedora Program Manager

Fedora Project Leader Robyn Bergeron has announced that Jaroslav Reznik will be the new Fedora Program Manager. "Many of you know Jaroslav from his contributions to the Fedora Community both as an Ambassador in EMEA, as well as his work as a Fedora Board member; his previous role within Red Hat was as part of the Base OS Development Team in the Brno office, working on Matahari and KDE, and I'm sure that his past experiences in development will be incredibly helpful to him as he takes on this role."

Full Story (comments: 2)

Fedora 15 End of Life

Fedora 15 has reached its end-of-life. There will be no further updates, including security updates, for Fedora 15.

Full Story (comments: none)

Mandriva Linux

Mandriva community votes for new distribution name

Charles Schulz reports that the Mandriva distribution will get a new name. "Starting now, we have opened a poll that will let you pick the name of the future distribution (and its foundation). In the future, Mandriva as a brand name will remain the name of the company (Mandriva S.A.) but the community itself will have a different name and a different branding, although it is also possible that the brand and the name will keep a tight connection with Mandriva. We had to prepare the available choices; we came up with some names during the meeting in Paris, we also listened to some ideas expressed on the Foundation mailing list. Last but not least we left the possibility to send us suggestions for other names. If a suggestion appears to be really popular we will consider it provided it’s available of course."

Comments (14 posted)

New Distributions

Emmabuntüs

Emmabuntüs is an Ubuntu-based distribution that hails from France. It's designed to be easy for people new to GNU/Linux to use out of the box. There are several flavors available. The newest flavor, Emmabuntüs 2 1.00, is based on Xubuntu 12.04 and will be available July 14, 2012.

Full Story (comments: none)

Newsletters and articles of interest

Distribution newsletters

Comments (none posted)

Bergius: The Dreams of the MeeGo Diaspora

Henri Bergius reviews the history of MeeGo and looks forward to what may yet become of it. "Many of the things people associate with iPad were already common for us in the old Internet Tablet times. I was getting my morning news on the 770 with Google Reader just like I now do with Pulse on an Android tablet, and I was sharing my location with friends via Plazes like people now do with Foursquare. The only difference is that back then the tablets were for a bit more exclusive club of Linux enthusiasts."

Comments (64 posted)

Page editor: Rebecca Sobol

Development

Akademy: KWin scripting

By Jake Edge
July 11, 2012

An effort to make KWin, KDE's window manager, more flexible goes back more than two years, and it is now bearing fruit: scripting support makes it possible for less technical users to change the behavior and appearance of KDE without having to resort to C++. KWin hacker Martin Gräßlin talked about the history and status of KWin scripting on July 1 at Akademy.

Some history

[Martin Gräßlin]

The idea of KWin scripting goes back to the Tokamak IV sprint in February 2010. The intent was to make window rules more flexible. At that time, KWin rules were static; there was no way to say "I want all GIMP windows to go to a certain virtual desktop", for example. Gräßlin discussed the idea with Nuno Pinheiro, who leads the Oxygen theme project, at Akademy 2010, with Pinheiro making it clear that not requiring C++ was important.

A 2010 Google Summer of Code project added scripting support to KWin, but it suffered from a number of drawbacks. The API was hand-crafted and the documentation was hand-written. In addition, it "interweaved" the scripting and KWin core components in undesirable ways. It was, Gräßlin said, a prototype and one that should never have been merged.

One thing that was needed to support scripting was a generic animation effect. Previously, code was copied into many places to implement the same kinds of effects for window state changes (e.g. window close or minimize); that meant that bugs got copied as well. A base implementation for window effects was added to react to state changes. That implementation is now available for use in scripts.

Supporting a touch-based interface like KDE's Plasma Active was a "completely new world to us as a window manager", he said. KWin was highly mouse and keyboard-driven, which is not suitable for touch interfaces. Also, Plasma Active needs windows to always start maximized, which was a big difference. KWin started off supporting Plasma Active using ifdefs, but that became unwieldy. What was needed was a way to put a new interface on top of KWin without changing the core.

KWin changes

The window switcher is one place where scripting has come into play. Window switching is "surprisingly complicated" because there can't be one UI that fits all users, Gräßlin said. If there are just a few windows, there can be a simpler UI, but for ten windows the UI can't be the same. Since KDE 4.8, window switching can be scripted using QML, a JavaScript-based scripting language. There are methods to render thumbnails of the active windows and different use cases can use different layouts.

The same goes for desktop switching. It uses the same framework as the window-switching support and will be scriptable using QML in KDE 4.9. Right now there is only one layout available, as it is "not a focus" for the KWin project, but more could be added.

Support for QtScript, another JavaScript-based language, has been added for KWin as well. Those scripts can access all of the KWin options and client programs are exported to the scripts. Using D-Bus, scripts can be loaded and unloaded at runtime. Scripts can also call D-Bus methods and have their own configuration values.

Effects scripts can be written based on the AnimationEffect class. The scripts are written in QtScript, with an API that is similar to that of the KWin scripts. Instead of accessing clients, though, the effects scripts access windows. There is "no impact on performance", Gräßlin said, because no JavaScript is executed while rendering. In addition, window decorations in KWin have been rewritten in QML. That gives better performance, while allowing things like interactive previews of themes.

The KWin team likes to eat its own dog food, but that was difficult with the prototype scripting implementation. Most of the team did not know how to use the prototype. The newer scripting support is better understood, so the team is using it to find and fix bugs early on.

For KDE 4.9, several effects were ported from C++ to QML, including the "fade" and "fade desktop" effects. There are more effects to be done, and that is worth doing because 300 lines of C++ can be replaced with 20 lines of JavaScript, he said. He encourages other developers to work on this porting effort.

KWin is preparing to support Wayland by adding client properties to its API. These QProperties describe the client interface so that non-X clients can be handled. That means there will be one code base that can handle both window manager and compositing modes.

Multi-screen handling has also been changed substantially. There used to be lots of options governing the behavior of windows on multiple screens for historic reasons, but those have all been replaced. Now there is a single script for the only sensible choice: a video wall spanning all the screens. Other use cases could be scripted if needed, he said.

Scripting much of the functionality that used to be done in C++ has had a dramatic effect on the code size. KWin lost 5000 lines of C++, which was replaced with a few hundred lines of QML, Gräßlin said.

Third parties

Various third parties have started using KWin scripting. Get Hot New Stuff, the freedesktop.org project for sharing desktop resources, is using KWin scripts. A project to do window tiling for KWin exists as well. A tiling window manager struck some in the audience as rather amusing, but the implementation is working quite well and is not something that could have been done with C++, he said.

The "most impressive" user of KWin scripting is the Arctos Dashboard. It gives an overview of the open windows, has a KRunner interface, and an activity switcher. The developer has been collaborating with the KWin team and it has been a nice experience to have someone outside the project using and testing the scripting support, he said.

There may be other projects interested in KWin, beyond just KDE's Plasma. Razor-qt is interested, and Unity may want KWin because there are problems with its window manager (Compiz), he said. KWin has lost its static Plasma dependencies as they have been moved to runtime. There are still a few dependencies on kdelibs, but KWin could move to Qt-only in the future now that some KDE functionality is moving down into Qt.

Down the road, there are plans for integrating scripting with PlasMate, a development environment for creating Plasma widgets, themes, and more. There is a GSoC project underway to do that. Support for desktop thumbnails (e.g. for desktop switching) is already working, though there are a few "glitches" with OpenGL rendering that need to be worked out.

Adding the ability to do unit testing with scripts is also in the works. Unit testing window managers has traditionally been difficult, so being able to inject test scripts simulating user interaction into a running KWin instance will be quite useful. Gräßlin said he is "really looking forward to that" as it will lead to a better tested code base for KDE 4.10 and 4.11.

The KWin team is looking for more people to try out the scripting additions and report on problems that they find. In addition, they are interested in hearing about use cases that they haven't thought about. It is "really easy to add things" and the team is open to new ideas, he said, so "give it a try". Scripting support has simplified KWin while also providing new capabilities; more input and testing can only help to make it better.

[ The author would like to thank KDE e.V. for travel assistance to Tallinn for Akademy. ]

Comments (none posted)

Brief items

Quote of the week

"Without the ability to play around, mess about with the code without consequences and privately on your own computer, you can’t be truly creative with it; and if you’re not being creative, it isn’t fun!"
Scott James Remnant

Comments (6 posted)

GNOME 3.5.3 development release

GNOME 3.5.3 is now available. This development release marks the debut of a new API for Evolution Data Server, plus the project promises that "as developer you'll enjoy new widgets from GTK+ (GtkSearchEntry and GtkMenuButton), as user you'll delight the new Empathy interface, in a pure GNOME3-ish style." Nevertheless, this is a development snapshot release not intended for daily usage, despite the enjoyment and delight.

Full Story (comments: none)

Galaxy in-memory data grid released

Parallel Universe has announced the release of its Galaxy distributed memory system under the Lesser GPL. "What makes Galaxy different from other IMDGs is the way it assigns data items to cluster node. Instead of sharding the data on one of the keys using a consistent hashing scheme, Galaxy dynamically moves objects from one node to another as needed using a cache-coherence protocol similar to that found in CPUs. This makes Galaxy suitable for applications with predictable data access patterns, i.e. applications where the data items behave according to some proximity metric, and items that are 'closer' together are more likely to be accessed together than items that are 'far' apart."

Comments (27 posted)

Open Font Library Release 0.5

The Open Font Library project has rolled out release 0.5, tagged "Calling All Translators and Typophiles." The site hosts open fonts and serves web fonts via CSS; this version adds "an online translation tool which allows volunteers to improve language support" and the first release of a documentation push to create an online open font development guidebook.

Full Story (comments: none)

Version 5.3.0 of the Linux Libertine OS Font Family

Version 5.3.0 of the Linux Libertine font family is now available. This release includes a brand new face, Libertine Mono, in addition to the existing Libertine and Biolinium. Libertine Mono is a monospaced font suitable for terminal or development usage that matches the Libertine serif face stylistically.

Full Story (comments: none)

Newsletters and articles

Development newsletters from the last week

Comments (none posted)

Baker: Thunderbird: Stability and Community Innovation

Lizard wrangler Mitchell Baker reports on the future of Thunderbird. "Once again we’ve been asking the question: is Thunderbird a likely source of innovation and of leadership in today’s Internet life? Or is Thunderbird already pretty much what its users want and mostly needs some on-going maintenance? Much of Mozilla’s leadership — including that of the Thunderbird team — has come to the conclusion that on-going stability is the most important thing, and that continued innovation in Thunderbird is not a priority for Mozilla’s product efforts... As a result, the Thunderbird team has developed a plan that provides both stability for Thunderbird’s current state and allows the Thunderbird community to innovate if it chooses."

Comments (12 posted)

DiCarlo: Everybody hates Firefox updates

Mozilla's Jono DiCarlo writes on his personal blog about the consistent complaints he hears from users about Firefox's new rapid release process. "Of course nobody says "rapid release process" because people don't know that's what it was called. They might start out complaining about version numbers, or some plugin that doesn't work right, but when I ask enough questions to get to the root of the problem, it's always the rapid release process." The intrusiveness of the update process has driven users to Chrome, he says, and is a departure from Firefox's previous users-first mantras. "This isn't 'Firefox answers to nobody but you', it's 'Firefox answers to nothing but Mozilla's arbitrary six-week update schedule.' "

Comments (97 posted)

Total bankers: Twitter and LinkedIn's cynical API play (The Register)

Writing at The Register, Matt Asay takes a critical look at the business of open APIs, arguing that recent events with Twitter, LinkedIn, and Netflix highlight the dangers of building a product or business on someone else's API. "I don't know if there's a real shift in how the tech world treats its customers, but the API opportunism that is currently en vogue will likely lead to developers mistrusting open APIs and rolling their own services from the ground up, end to end." Is the solution, he asks, "an Open Source Definition for APIs and platforms"?

Comments (8 posted)

Yaghmour: Extending Android's HAL

Karim Yaghmour has posted a brief tutorial on how to add new device support to Android. "Contrary to standard 'vanilla Linux', Android requires more than just proper device drivers to function on hardware. It in fact defines a new Hardware Abstraction Layer (HAL) which defines an API for each type of hardware supported by Android's core. In order for an ODM's hardware to properly interface with Android, it must provide a hardware 'module' (unrelated to kernel modules) which conforms to the API specified for that type of hardware. This blog shows you how to extend Android's HAL by adding your own new type hardware to the Android stack."

Comments (43 posted)

Page editor: Nathan Willis

Announcements

Articles of interest

Baker: Contributor Imprisoned in Syria

Mitchell Baker has written a blog post expressing Mozilla's support for the Internet-based campaign on behalf of Bassel Khartabil, an open source developer who has been detained in Syria. "Mozilla supports efforts to obtain the release of Bassel Khartabil (also known as Bassel Safadi), a valuable contributor to and leader in the technology community. Bassel’s expertise and focus across all aspects of his work has been in support of the development of publicly available, free, open source computer software code and technology." In addition to Mozilla work, Khartabil has contributed to Creative Commons, Open Clip Art Library, and several other projects. The campaign centers on a petition and letter-writing drive at freebassel.org.

Comments (none posted)

GitHub finally raises funding (GigaOm)

GigaOm reports that GitHub has raised $100 million in venture funding. "The startup will use the funding to hire additional employees and expand to new platforms such as mobile. CEO Tom Preston-Werner said the company hopes to develop new features but also improve existing ones, such as web applications for different operating systems. The idea is to make GitHub useful for a broad range of clients, from individual hackers to large enterprises, and from software developers to designers or authors."

Comments (20 posted)

The next GPL: Why it's being shaped on GitHub (InfoWorld)

Simon Phipps writes on Richard Fontana's GPLv3 fork being developed on Github. "To date, Fontana's changes mostly relate to the concerns of the corporate voices in the GPLv3 process. The preamble -- 'an inspiring and important political statement,' according to Fontana -- has been removed, as has the 'How to Apply' appendix. Those simple steps alone dramatically shorten the text. The rest of the changes seem to fix apparently redundant compromises that made their way into the text as part of the negotiation process among all the corporate participants."

Comments (48 posted)

Intel Loses One Of Their Linux Driver Developers (Phoronix)

Phoronix reports the death of Eugeni Dodonov, a former Mandriva developer who was most recently working for the Intel Open-Source Technology Center. He was killed in a bicycle accident; details are still under investigation. This article (in Brazilian Portuguese) has a bit more information.

Comments (18 posted)

Judge who shelved Apple trial says patent system out of sync (Reuters)

Reuters talks with Richard Posner, a US federal appeals court judge who presided over Apple's lawsuit against Google's Motorola Mobility, and other software patent cases. "Posner said some industries, like pharmaceuticals, had a better claim to intellectual property protection because of the enormous investment it takes to create a successful drug. Advances in software and other industries cost much less, he said, and the companies benefit tremendously from being first in the market with gadgets - a benefit they would still get if there were no software patents."

Comments (33 posted)

Ex-Nokia guys start mystery company to build Linux-based phones (VentureBeat)

VentureBeat covers Jolla Mobile, a six-man startup that aims to release MeeGo phones. "Jolla will soon be making announcements about its version of MeeGo as well as a brand new smartphone it’s bringing to market. The company so far is composed of “N9 core professionals and MeeGo community alumni,” the [Twitter] page reads. While N9 folks will be at the helm, there is no intention to offer support for Nokia’s N9 smartphones — more on those later."

Comments (7 posted)

New Books

Deploying with JRuby--New from Pragmatic Bookshelf

Pragmatic Bookshelf has released "Deploying with JRuby" by Joe Kutner.

Full Story (comments: none)

Programming Computer Vision with Python--New from O'Reilly Media

O'Reilly Media has released "Programming Computer Vision with Python" by Jan Erik Solem.

Full Story (comments: none)

Team Geek--New from O'Reilly Media

O'Reilly Media has released "Team Geek" by Brian Fitzpatrick and Ben Collins-Sussman.

Full Story (comments: none)

Upcoming Events

PyCon UK 2012

PyCon UK 2012 takes place September 28 - October 1 in Coventry, UK. Early bird rates are available until August 12. "This year we're running sprints as an integral part of the conference, instead of being tacked on afterwards. Each sprint will have an Introductory talk on Friday 28th, then have sessions throughout the weekend, which you will be able to drop in to between any talks you can't miss. For the sprint diehards, sessions will continue into Monday to wrap up any loose ends."

Full Story (comments: none)

Events: July 12, 2012 to September 10, 2012

The following event listing is taken from the LWN.net Calendar.

  July 7-12: Libre Software Meeting / Rencontres Mondiales du Logiciel Libre (Geneva, Switzerland)
  July 8-14: DebConf12 (Managua, Nicaragua)
  July 10-15: Wikimania (Washington, DC, USA)
  July 11-13: Linux Symposium (Ottawa, Canada)
  July 14-15: Community Leadership Summit 2012 (Portland, OR, USA)
  July 16-20: OSCON (Portland, OR, USA)
  July 26-29: GNOME Users And Developers European Conference (A Coruña, Spain)
  August 3-4: Texas Linux Fest (San Antonio, TX, USA)
  August 8-10: 21st USENIX Security Symposium (Bellevue, WA, USA)
  August 18-19: PyCon Australia 2012 (Hobart, Tasmania)
  August 20-21: Conference for Open Source Coders, Users and Promoters (Taipei, Taiwan)
  August 20-22: YAPC::Europe 2012 in Frankfurt am Main (Frankfurt/Main, Germany)
  August 25: Debian Day 2012 Costa Rica (San José, Costa Rica)
  August 27-28: GStreamer conference (San Diego, CA, USA)
  August 27-28: XenSummit North America 2012 (San Diego, CA, USA)
  August 27-29: Kernel Summit (San Diego, CA, USA)
  August 28-30: Ubuntu Developer Week (IRC)
  August 29-31: LinuxCon North America (San Diego, CA, USA)
  August 29-31: 2012 Linux Plumbers Conference (San Diego, CA, USA)
  August 30-31: Linux Security Summit (San Diego, CA, USA)
  August 31 - September 2: Electromagnetic Field (Milton Keynes, UK)
  September 1-2: Kiwi PyCon 2012 (Dunedin, New Zealand)
  September 1: Panel Discussion Indonesia Linux Conference 2012 (Malang, Indonesia)
  September 1-2: VideoLAN Dev Days 2012 (Paris, France)
  September 3-4: Foundations of Open Media Standards and Software (Paris, France)
  September 3-8: DjangoCon US (Washington, DC, USA)
  September 4-5: Magnolia Conference 2012 (Basel, Switzerland)
  September 8-9: Hardening Server Indonesia Linux Conference 2012 (Malang, Indonesia)

If your event does not appear here, please tell us about it.

Page editor: Rebecca Sobol


Copyright © 2012, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds