In late June, a US District Court granted a request by Apple to ban the sale of the Galaxy Nexus smartphone in the US due to the phone's alleged infringements of Apple's patents; the phone was then duly pulled from the Google store. That ban has since been lifted, but that should not be seen as a victory against software patents; indeed, the contrary is true. The only reason the Galaxy Nexus is available again is Google's short-term capitulation; the company has simply removed the offending features from the "Jelly Bean" Android release. Google's claim that the patents were no longer at issue was enough to get the handset back on the market—for now.
What are those features? The biggest fight seems to be over patent #6,847,959, the so-called "Siri patent." This patent, filed in 2000, has the following as its first independent claim:
inputting an information identifier;
providing said information identifier to a plurality of plug-in modules each using a different heuristic to locate information which matches said identifier;
providing at least one candidate item of information from said modules; and
displaying a representation of said candidate item of information.
There are 17 dependent claims specifying that the "information identifier" may come from a dialog box or through voice input; the "heuristics" can involve searches on file names, file contents, local files, web pages, and so on. They narrow the scope of the patent, but do not change its fundamental nature.
Even thinking back to the year 2000, it is hard to find a great deal of novelty in this concept. If one wants to search for something, one likely wants to search all of the available resources. If one wants to search multiple locations or with multiple algorithms, one creates an API by which independent search modules can be invoked. The method described here is obvious; it should come to the mind of any developer skilled in the art of software development. But this is the valuable innovation that allowed Apple to block the sales of a competing product in one of the largest markets on the planet.
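To see just how little is being claimed, the method can be sketched in a few lines of Python. Everything below is invented for illustration (the heuristics, the file list, the URL); it simply dispatches one "information identifier" to a set of plug-in modules and merges the candidate results, which is essentially what the independent claim describes:

```python
# Illustrative sketch of the claimed method: dispatch one query to
# several pluggable search "heuristics" and collect candidate matches.

def filename_search(query):
    # Hypothetical heuristic: match against a list of known file names.
    files = ["notes.txt", "todo.txt", "report.pdf"]
    return [f for f in files if query in f]

def web_search(query):
    # Hypothetical heuristic: a real client would query the network here.
    return [f"https://example.com/search?q={query}"]

def search_all(query, plugins):
    """Feed the identifier to every plug-in module and merge candidates."""
    candidates = []
    for plugin in plugins:
        candidates.extend(plugin(query))
    return candidates

results = search_all("todo", [filename_search, web_search])
# results now holds one file-name match and one web-search candidate
```

Any developer asked to search several sources with several algorithms would arrive at something like this dispatch loop more or less immediately.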
Google's response has been to cripple the functionality of its Android search bar, which will be restricted to searching the net only. Anybody running the Jelly Bean release will see that restricted functionality; it will also be pushed out to 4.0-based ("Ice Cream Sandwich") devices as an "update." And that is how things are likely to stand until the case runs its course, a process that could take years.
So, to put it bluntly: software patents have allowed a manufacturer of highly closed devices to hold one of the most open handsets available hostage and to block it from the market entirely. They have allowed said corporation to force the removal of obvious functionality from a device (mostly) based on free software. To think that this kind of thing won't happen again, or that it won't strike code that is more interesting to the free software community, is to be optimistic indeed. That does not seem to be the way the wind is blowing.
It would be nice to think that, somehow, the software patent problem will be solved in the near future. There are occasional encouraging signs, such as US appeals court judge Richard Posner tossing out another Apple case and speaking out against software patents. But actual attempts to reform the patent system never seem to get that far.
What seems more likely is that the major players in the mobile industry will eventually come together around some sort of patent pool that lets them get on with their businesses. Perhaps this will be a voluntary action, or perhaps there will be a certain amount of governmental pressure applied first. Either way, the end result is likely to be a regime in which the established players are free to get on with the process of making money while new companies, like the just-announced Jolla Ltd, face additional barriers to entry. Such a situation is not likely to be good for the industry or for free software.
But, then, one never knows. As bogus software patents threaten to take down products and services that people actually care about, we may yet see an increase in support for reforms. Perhaps the best strategy against software patents is the one we are already executing: make the best free system we can and ensure that it is widely diffused into systems that the world depends on. As patent litigation increasingly turns into a general denial of service attack against the economy as a whole, tolerance for the system may wane. One can always hope, anyway.
Mozilla surprised Thunderbird fans on July 6 when it announced that it was pulling developers from the project. Mozilla says it will continue to test, patch, and maintain future releases — including stability and security fixes — while letting community members guide development of new features. But that promise did not prevent a slew of headlines reporting that the email client was being put out to pasture. A number of Mozilla developers have subsequently commented on the decision, helping to clarify the outlook for the future somewhat, if not completely.
Mozilla chief Mitchell Baker posted the announcement on her blog, starting with the question "is Thunderbird a likely source of innovation and of leadership in today’s Internet life? Or is Thunderbird already pretty much what its users want and mostly needs some on-going maintenance?" The answer from Mozilla's upper echelons, evidently, is that the desktop email client is essentially feature-complete, and not likely to experience further innovations. Consequently, Mozilla as a whole is better off directing its engineering resources to its current "priority" projects.
Baker's post was interpreted by many to mean that Mozilla was halting development on Thunderbird, perhaps offloading control of the project to the open source community or otherwise attempting to get rid of the project without saying that it was getting rid of the project. Thunderbird would hardly be the first open source project to suffer such a fate, so a pessimistic take on the announcement is understandable. But the details that have emerged since the announcement paint a different picture.
On July 7, Jb Piacentino posted an announcement to the tb-planning mailing list which covered the same ground as Baker's post. In it, he assured readers that the move was not the cessation of Thunderbird development.
Thunderbird developer Ludovic Hirlimann said on his blog that Thunderbird 14, 15, and 16 would all be released before the new plan takes effect, and that the new model's practical effect would be that "we won’t have the time to work on specking, developing and testing new features," although the team would still participate in the development process.
Details about the plan are described on the Mozilla wiki. The plan draws a distinction between the normal Thunderbird and the extended support release (ESR) version. Mozilla will focus on the Thunderbird ESR releases and associated security updates, while allowing other contributors to work on the standard Thunderbird trunk. Mozilla will continue to provide the testing and release infrastructure, and Mozilla staffers will serve as the release team. But the Mozilla staffers will not be tasked with introducing new features. ESR releases are guaranteed to receive security updates for one year, rolled out with Firefox ESR, on a six-week schedule.
Despite Piacentino's reassurances and the wiki's lengthier explanation, some on the list still interpreted the news in starkly different terms. For example, while Ben Bucksch took it to mean an end-of-life announcement, Charles Tanstaafl read the announcement to mean that Mozilla employees would "focus on stability and fixing many of the long standing bugs".
Others wanted more specifics on the new process. Kai Engert asked whether the arrangement meant that Thunderbird releases would be kept in sync with Firefox on shared components (including Gecko):
Firefox and Thunderbird share application level code that is responsible for the correct functioning of security protocols.
If a change is made because it's needed by Firefox, it's easy to forget that Thunderbird may rely on the previous behaviour, and the change might cause a regression in functionality/usability/correctness/completeness for Thunderbird.
This has happened in the past. If Thunderbird becomes even less of a priority for the Mozilla project, with even fewer people available to work on cleanup and adjustments to newer Gecko core, then there's the risk that such regressions might occur more frequently in the future.
Concerns raised also included the fate of in-progress development work (such as the long awaited rewrite of Thunderbird's address book) and whether or not the outside community would be able to mentor Google Summer of Code (GSoC) projects, which have been a dependable source of new code in the past. The community has indeed played a major part in recent innovations, including the new "conversations" view extension, MIME handling, and the recent removal of RDF as a dependency. Mozilla's Mark Banner replied that Thunderbird's annual ESR releases would synchronize with the then-current Firefox release (including any Gecko updates), but that the intervening six-week security update releases would not roll in recent changes. The bulk of in-progress projects are slated to be completed before the new process begins, he added. Finally, he pointed out that Thunderbird community members had mentored past GSoC projects, so the process change should not interfere.
Several Mozilla staffers commented about the announcement in blog posts of their own, among them Thunderbird developer Joshua Cranmer.
Cranmer did not bemoan this situation, however. He saw it as natural considering the growth of mobile email, and because "Mozilla's primary goal is to promote the Open Web." The assertion that the web — but not email — is Mozilla's central mission was also touched on in official channels. The wiki page states that the priority projects getting Mozilla's attention are "important web and mobile" efforts, "while Thunderbird remains a pure desktop only email client." Baker's blog post similarly noted that the project has "seen the rising popularity of Web-based forms of communications representing email alternatives to a desktop solution."
But Bucksch took issue with that notion in considerable detail, observing that if Thunderbird is losing out to web-based email, that constitutes a loss, because "Webmail is definitely not open. You're totally dependent on the features and limitations the provider offers [...] Privacy goes out the door with webmail. Even integrity: The ISP can even alter the message contents years after the fact, and I have no way to verify or prove this."
Mozilla's stated mission is "to promote openness, innovation and opportunity on the web," but Bucksch points out that its manifesto stakes out considerably broader principles about the openness of the Internet as a whole. Side-stepping for the moment why the organization has a separate "mission" statement and "manifesto" at all (much less inconsistent ones), the point is well-taken. If Thunderbird has failed to grab a majority of the world's email client share, what users are left with are proprietary OS-vendor clients on the desktop, or proprietary software services on the web. Mozilla Labs briefly toyed with a webmail client called Raindrop, but shuttered it before it left the experimental phase.
Perhaps competition from webmail clients is a side issue, and Mozilla is primarily readying itself to make a greater play for what it sees as the new email battleground on mobile devices, with its Boot-to-Gecko effort (which was recently renamed Firefox OS). Andrew Sutherland, a developer on Mozilla's forthcoming Firefox OS email client, told the tb-planning list that he and other team members were list subscribers, and were at least open to the possibility of collaborating with the Thunderbird community on compatibility features.
Despite the doomsday predictions that leaked out following the initial announcement, Mozilla's plans indicate that it is committed to testing and releasing Thunderbird for at least the next year or so (depending on the final release date of Thunderbird ESR 17). The distant future is less clear, but that could be said of many other projects. Anyone who doubts the ability of the Mozilla volunteer community to maintain a product need only look at SeaMonkey, which continues to live on long after Mozilla lost interest. Still, Mozilla's second-class treatment of its email client is troubling for other reasons. Email itself may be relatively static, but IM, VoIP, and other communication methods are coming and going all the time, and Mozilla has not offered a consistent client story for them. If Firefox is Mozilla's only product, users' hope for an open web boils down to "hopefully the service providers will write open source web apps for foo" — which seems like a long shot.
The Qt Project was launched in October 2011 to foster the open development of the Qt toolkit. Qt is the underlying framework used by KDE, of course, so Akademy attendees are understandably interested in the status and progress of the Qt Project. Thiago Macieira provided that update in a surprisingly well-attended keynote—surprising because it was early on the day after a social event that stretched into the wee hours.
The Qt Project is based on four principles, Macieira said: fairness, inclusiveness, transparency, and meritocracy. Fairness means that the project is open to everyone, while inclusiveness means that people can just start participating as there are no barriers in place or fees required. Transparency covers the decision-making process, which is completely open. Discussions happen on the mailing list, whose participants ultimately make any decisions. When discussion takes place elsewhere, it needs to be posted to the list for others to review and comment on. Finally, a meritocracy means that contributors who have "shown their skills and dedication" are given commit rights, and are the ones who get to make the decisions for the project. That way, the most experienced people get the most deciding power, he said.
The project has seen quite a bit of activity in the eight months it has existed. Over that period, there have been 18,000 commits to the code base, with an average of 412 per week. There were some dips in the graph that he showed, for Christmas, Easter, Norwegian national day, and one that he called "Elop". The latter correlated with Nokia CEO Stephen Elop's statements that have caused concern about the future of Qt within the company. When an audience member suggested banning Christmas to increase productivity, Macieira chuckled and said that banning Elop would be more effective.
Commit numbers do not show the whole story, though. To get a sense for the community that is coming together around the project, Macieira also looked at how many different people are contributing. Over the eight months, 481 different email addresses were used for contributions—averaging 140 different email addresses per week. There are some people using more than one address, of course, but those numbers give an idea of the number of contributors to the project.
Macieira put up a flowchart describing how to contribute to the project. It looked relatively complex, with lots of "ladders and snakes", but contributing is actually fairly straightforward, he said. He pointed to the project wiki for information on what is needed. Code contributions are managed with the project's Gerrit code review instance. Using the dashboard in that tool, one can see the status of current code reviews, look at comments that have been made on the code, see the diffs for the changes, and so on.
Code that has passed review from both humans and a bot that checks for problems can then be staged from the Gerrit system. That moves the code into an integration phase where it is merged into the mainline, compiled, and tested. Two and a half hours later, contributors will get an email with the results of the integration. It is "very simple", he said, and all of the "eleven steps" from the flowchart "boil down" to this process.
Qt and KDE are "greater together", Macieira said, and he would like to see the two communities merge into one large community. Qt provides the libraries and framework, while KDE is building applications on top of that. If KDE needs new features, they can be put into Qt as there is now a "really nice way to make that happen". In the past, there had been obstacles to moving functionality into Qt, but those are gone.
He gave several examples of people working on moving KDE functionality into Qt. The KMimeType class has recently been added to Qt as QMimeType, which was essentially just a matter of moving and renaming the code. Other KDE classes have required adaptations prior to moving into Qt, including KStandardDirs and KTempDir. David Faure has been doing much of that work, but he is not alone: John Layt has been working on moving the KDE printing subsystem into Qt, while Richard Moore has been adding some of the KDE encryption code (e.g. SSL sockets) to Qt.
Those are just three KDE developers who have started working in the Qt upstream, Macieira said. There is a lot more code that could make the move including things like KIO (KDE I/O), the KDE command-line parser, and KDebug.
Beyond just code contributions, the project can use help in lots of areas. One can be an advocate and help spread the word about the Qt Project. Reporting bugs (and helping to fix some of them) is another area. Documentation, translations, artwork, and so on also need people to work on them; there is "a lot that you can do", Macieira said.
Developers can also start reviewing the code that is being proposed. It is easy to get started with the code review system after creating an account. What is really needed is "more people" and KDE is an obvious source for some of those people. The two projects should "work more closely to solve our objectives together", he concluded.
Asked about the filtering capabilities in the code review system to find patches of interest for review, Macieira admitted that the search and filtering functionality could use some work. There are ways to watch specific projects by a regular expression match, but overall it lacks some features that would be useful.
Another audience member asked about the statistics and, in particular, whether they could be quantified based on whether the person considers themselves a KDE developer. That is difficult to do, Macieira said, because people wear multiple hats, but there is definitely value in doing so.
The last suggestion was for a joint KDE/Qt conference. Macieira agreed that there would be "a lot of value in getting the two communities together", but that it wouldn't work for this year. He would like to see that happen next year, perhaps as part of the Qt Contributors Summit. The summit is a hands-on working conference, without presentations like Akademy has, he said, so a separate event might be the right way to go.
After years of trying to turn Qt into a more open project with a community orientation, it is nice to see that effort start to come to fruition. Given the uncertainty in the future of Qt at Nokia, having an independent project, with contributions from others outside of the company, will help to ensure the future of the toolkit. Since KDE is one of the bigger users of Qt, it only makes sense for the two projects to work closely together—exactly what Macieira was advocating.
[ The author would like to thank KDE e.V. for travel assistance to Tallinn for Akademy. ]
The Tor Project recently discovered a security flaw in a line of commercial deep packet inspection (DPI) products, a flaw that an attacker could use to intercept the SSL connections of third-party users. The manufacturer quickly pushed out an update, but DPI products from other vendors may still be affected.
Runa Sandvik wrote about the discovery on the Tor blog in a post dated July 3. According to that account, the discovery took place the week before, when a Tor user in Jordan contacted the project to report seeing a fake certificate for torproject.org, issued by Cyberoam. The project initially thought that a certificate authority (CA) might have been compromised (as in the DigiNotar incident in 2011), but upon investigation, that was not the case.
Cyberoam is a network security vendor that sells DPI devices. The user in Jordan was not witnessing an attack, but rather the evidence that the SSL connection to Tor had been intercepted by a Cyberoam DPI device. It is not clear from Sandvik's post whether the user was behind a corporate Cyberoam barrier (in which case he or she may have explicitly or implicitly agreed to the DPI interception) or was being monitored unwillingly. Whatever the circumstances, though, it was not the DPI filtering that constituted the security vulnerability.
Cyberoam's devices monitor SSL connections without generating browser errors by having the user install a Cyberoam CA certificate into the browser's trusted certificate store. Subsequently, the device intercepts SSL connections, issuing generated certificates (signed by the now-trusted Cyberoam CA certificate) for the requested sites and establishing the server-side connection on the other side of the intercept. This had happened to the user in Jordan, which was why he or she saw a Cyberoam-issued certificate for the torproject.org domain. The problem is that all Cyberoam devices ship with identical CA certificates and identical private keys. Consequently, Sandvik wrote, anyone can use a Cyberoam device to intercept traffic on any other Cyberoam-filtered network, or even extract the key and install it on other devices and use those to intercept traffic. In either case, the users would not detect that their traffic was being monitored by someone other than the approved authority.
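One way a user can notice this kind of interception is to compare the certificate a server actually presents with an independently known fingerprint. The following Python sketch (purely illustrative; it is not a Tor tool, and the host name is just an example) fetches the leaf certificate from a TLS connection and computes its SHA-256 fingerprint; a middlebox-generated certificate would produce a different fingerprint than the genuine one:

```python
import hashlib
import socket
import ssl

def cert_fingerprint(der_bytes):
    """Return the SHA-256 fingerprint of a DER-encoded certificate."""
    return hashlib.sha256(der_bytes).hexdigest()

def fetch_peer_fingerprint(host, port=443):
    # Verification is deliberately disabled here: the point is to see
    # what certificate is actually presented, even a forged one.
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der = tls.getpeercert(binary_form=True)
    return cert_fingerprint(der)

# Example (requires network access):
#   fetch_peer_fingerprint("torproject.org")
# Compare the result against a fingerprint obtained out-of-band;
# a mismatch suggests the connection is being intercepted.
```

This is roughly what the user in Jordan did by inspecting the certificate in the browser: the issuer and fingerprint of the presented certificate gave the interception away.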
Sandvik and Ben Laurie wrote a security advisory (CVE-2012-3372) about the issue and notified Cyberoam before publishing the blog post. Cyberoam wrote a post on its own blog detailing its response to the alert. According to Cyberoam's account, each affected device is capable of generating its own CA certificate, and the certificate shared by all devices was merely a "default." After Tor's alert, the company pushed out an update to all of its devices instructing administrators to generate a unique CA certificate, and it "forcefully generated unique keys for all the remaining appliances." The wording in that section is a bit ambiguous, but it appears that device administrators were encouraged to generate a new CA certificate locally, and those that did not do so quickly were updated to a unique certificate generated at Cyberoam, with further instructions on local key generation. Cyberoam (admirably) thanked Tor for the vulnerability report, and also said that the CA certificate update makes its DPI products significantly more secure than its competitors'.
For the update to take effect, users on a Cyberoam-monitored network will need to import the newly-generated CA certificate into their browser's trusted certificate store. Whether they do so willingly depends on whether they have consented to be monitored by the device. For its part, Tor expresses little sympathy for organizations using DPI to intercept users' connections, repeatedly calling them "victims" in the security alert text, and adding the footnote: "In the corporate setting, willing victims are often known as 'employees'. Unwilling victims should not, of course, install the CA certificate, nor should they click through certificate warnings." On the other hand, the alert calls Cyberoam's approach "the only legitimate way to use these devices," in contrast with monitoring schemes that require persuading a CA to issue fake certificates.
Without knowing the details of the user who reported the issue initially, it is impossible to say whether or not Cyberoam devices are being used to monitor Internet users without their consent. There are legitimate uses for DPI, of course, such as protecting a corporate network. But the report hints at a different scenario, because the user reported that common web sites (such as Gmail and Twitter) showed the correct CA certificates — suggesting that torproject.org was being selectively targeted for interception.
In comments on the Tor blog, some readers questioned whether the case was one of consenting monitoring done at an employer, or unwilling surveillance. One anonymous reader commented that the Cyberoam devices in question only intercept SSL connections to check for malware, and that Tor had "raised a non-existing vulnerability." Sandvik replied that there were two separate issues at play: the use of the device to intercept torproject.org traffic, and the fact that all Cyberoam devices shipped with the same CA certificate.
Cyberoam's response should settle the second issue, assuming that future devices ship only with unique keys. A master CA certificate controlled by Cyberoam could still be used to sign the individual device certificates: Cyberoam would sign a separate "intermediate" CA certificate for each device, so users would only need to install the original Cyberoam root certificate in the browser's trust store and the certificate trust chain would still validate. Device administrators who regenerate a device's CA certificate would then need to have Cyberoam sign it, but they would not need to have every user install something new.
But as Sandvik asked in the comment thread, "How can you be sure that the device being used in this case is not doing DPI, but 'just' HTTPS scanning for antivirus?" Implicitly, the answer is "you can't," which is one of the fundamental justifications for Tor and similar privacy-protecting projects. How the user in Jordan came to be behind a Cyberoam scanning device is unknown, but if he or she agreed to have web traffic monitored, particularly to the point of manually installing a CA certificate in the browser, then there is little or nothing to prevent the monitoring party from engaging in all sorts of mischief. In an interesting side note, several anonymous commenters on the Tor blog posted what they claim to be the Cyberoam devices' default private key. So even if few people learn a lesson about the dangers of consenting to traffic monitoring, perhaps a few others will learn a lesson about leaving the default security settings in place.
[...] We tried to call Barclays’ security expert R0b Ste!nway for a comment, but he was not available for 24 hours, having answered his phone incorrectly three times in succession.
After being challenged by his lab, the DHS dared Humphreys’ crew to hack into a drone and take command. Much to their chagrin, they did exactly that.
I agree that Intel's hardware is very probably not backdoored, but that's simply not a standard by which threats should be measured in this field. Treating a backdoor scenario as outside the realm of possibility based on appeals to reputation given such obvious, massive, and recent precedent to the contrary is... not a typical security mindset, to put it mildly.
Created: July 9, 2012. Updated: July 11, 2012.
Description: From the Mageia advisory:
Cross-site scripting (XSS) vulnerability in View.pm in BackupPC 3.0.0, 3.1.0, 3.2.0, 3.2.1, and possibly earlier allows remote attackers to inject arbitrary web script or HTML via the num parameter in a view action to index.cgi, related to the log file viewer
Created: July 9, 2012. Updated: July 11, 2012.
Description: From the Mageia advisory:
Disallow width/height changing with frame threads
Created: July 10, 2012. Updated: April 29, 2015.
Description: From the CVE entry:
JRuby before 1.6.5.1 computes hash values without restricting the ability to trigger hash collisions predictably, which allows context-dependent attackers to cause a denial of service (CPU consumption) via crafted input to an application that maintains a hash table.
Created: July 10, 2012. Updated: April 10, 2013.
Description: From the CVE entry:
The pidfile_write function in core/pidfile.c in keepalived 1.2.2 and earlier uses 0666 permissions for the (1) keepalived.pid, (2) checkers.pid, and (3) vrrp.pid files in /var/run/, which allows local users to kill arbitrary processes by writing a PID to one of these files.
Package(s): kernel. CVE #(s): CVE-2012-2744, CVE-2012-2745.
Created: July 10, 2012. Updated: October 24, 2012.
Description: From the Red Hat advisory:
* A NULL pointer dereference flaw was found in the nf_ct_frag6_reasm() function in the Linux kernel's netfilter IPv6 connection tracking implementation. A remote attacker could use this flaw to send specially-crafted packets to a target system that is using IPv6 and also has the nf_conntrack_ipv6 kernel module loaded, causing it to crash. (CVE-2012-2744, Important)
* A flaw was found in the way the Linux kernel's key management facility handled replacement session keyrings on process forks. A local, unprivileged user could use this flaw to cause a denial of service. (CVE-2012-2745, Moderate)
Created: July 10, 2012. Updated: July 13, 2012.
Description: From the Red Hat advisory:
* The fix for CVE-2011-1083 (RHSA-2012:0150) introduced a flaw in the way the Linux kernel's Event Poll (epoll) subsystem handled resource clean up when an ELOOP error code was returned. A local, unprivileged user could use this flaw to cause a denial of service. (CVE-2012-3375, Moderate)
Created: July 11, 2012. Updated: August 29, 2012.
Description: From the Novell bugzilla:
libgdata doesn't validate ssl certificates for all connections
Created: July 11, 2012. Updated: July 11, 2012.
Description: From the Fedora advisory:
ModSecurity Multipart Bypasses fixed by this upstream release. [v2.6.6] Upgrade to the latest stable upstream release. Upgraded mod_security package.
Created: July 11, 2012. Updated: July 11, 2012.
Description: From the Fedora advisory:
ModSecurity Core Rule Set Multipart Bypasses fixed by this upstream release. [v2.2.5] Updated spec file. ModSecurity Rules
Created: July 11, 2012. Updated: June 19, 2013.
Description: From the Red Hat advisory:
An input validation flaw, leading to a heap-based buffer overflow, was found in the way OpenJPEG handled the tile number and size in an image tile header. A remote attacker could provide a specially-crafted image file that, when decoded using an application linked against OpenJPEG, would cause the application to crash or, potentially, execute arbitrary code with the privileges of the user running the application.
Created: July 9, 2012. Updated: March 15, 2013.
Description: From the Debian advisory:
Ulf Härnhammar found a buffer overflow in Pidgin, a multi protocol instant messaging client. The vulnerability can be exploited by an incoming message in the MXit protocol plugin. A remote attacker may cause a crash, and in some circumstances can lead to remote code execution.
Created: July 10, 2012. Updated: July 11, 2012.
Description: From the Ubuntu advisory:
Julia Lawall discovered that Pidgin incorrectly cleared memory contents used in cryptographic operations. An attacker could exploit this to read the memory contents, leading to an information disclosure. This issue only affected Ubuntu 10.04 LTS.
Created: July 11, 2012. Updated: August 13, 2012.
Description: From the Novell bugzilla:
In the utf-16 decoder after calling unicode_decode_call_errorhandler aligned_end is not updated. This may potentially cause data leaks, memory damage, and crash. The bug introduced by implementation of the issue #4868. In a similar situation in the utf-8 decoder aligned_end is updated.
Package(s): wireshark. CVE #(s): CVE-2012-3825, CVE-2012-3826.
Created: July 11, 2012. Updated: July 11, 2012.
Description: From the CVE entries:
Multiple integer overflows in Wireshark 1.4.x before 1.4.13 and 1.6.x before 1.6.8 allow remote attackers to cause a denial of service (infinite loop) via vectors related to the (1) BACapp and (2) Bluetooth HCI dissectors, a different vulnerability than CVE-2012-2392. (CVE-2012-3825)
Multiple integer underflows in Wireshark 1.4.x before 1.4.13 and 1.6.x before 1.6.8 allow remote attackers to cause a denial of service (loop) via vectors related to the R3 dissector, a different vulnerability than CVE-2012-2392. (CVE-2012-3826)
Created: July 10, 2012. Updated: April 11, 2013.
Description (from the CVE entry):
Format string vulnerability in the LogVHdrMessageVerb function in os/log.c in X.Org X11 1.11 allows attackers to cause a denial of service or possibly execute arbitrary code via format string specifiers in an input device name.
Page editor: Jake Edge
Brief items
The current kernel prepatch was released on July 7. "There's mainly some btrfs and md stuff in here, with the normal driver changes, arm updates and some networking changes. And a smattering of random stuff (including docs etc). None of it looks very scary, it's all pretty small, and there aren't even all that many of those small changes." Linus also notes that the 3.6 merge window is likely to hit when a lot of developers are on vacation, so the 3.6 kernel might contain a relatively small set of changes.
There's also a technological threshold: once RAM in a typical smartphone goes above 2GB the pain of a 32-bit kernel becomes significant. We are only about a year away from that point.
So are you *really* convinced that the colorful ARM SoC world is not going to go 64-bit and will all unify behind a platform, and that we can actually force this process by not accepting non-generic patches? Is such a platform design being enforced by ARM, like Intel does it on the x86 side?
Kernel development news
Happily, that situation appears to be about to change, as Alexander Block's btrfs send/receive patch set has been well received by the development community. In short, with this patch set (and the associated user space tools), btrfs can be instructed to calculate the set of changes made between two snapshots and serialize them to a file. That file can then be replayed elsewhere, possibly at some future time, to regenerate one snapshot from the other.
This functionality is implemented with the new BTRFS_IOC_SEND ioctl() command. In its simplest form, this operation accepts a file descriptor representing a mounted volume and the subvolume ID corresponding to the snapshot of interest; it will then find the changes between the snapshot and the "parent" snapshot it was generated from. There are more options, though:
The generated file is essentially a set of instructions for converting the parent snapshot into the one being "sent." The list of commands is surprisingly long, including operations like create a file (or directory, device node, FIFO, symbolic link, ...), rename or link a file, unlink a file, set and remove extended attributes, write data, clone data blocks, truncate a file, change ownership and permissions, set file times, and so on. The code that generates this file is also surprisingly long, being several thousand lines of complex, nearly uncommented functions (some of the comments that do exist, saying things like "magic happens here," are not entirely helpful).
Interestingly, according to the patch introduction, the custom file format was not in the original plan. Instead, the output was meant to be in something close to the tar file format — close enough that the tar command could be used to extract data from it. Tar turned out not to have the needed capabilities, though, so a new format was created. The format should be considered to be in flux still, though, clearly, it will need to stabilize before this feature can be considered ready for production use. As it happens, the playback of this file can be done almost entirely in user space, so there is no need for a BTRFS_IOC_RECEIVE operation.
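The overall idea - serialize the difference between two snapshots as a list of instructions, then replay that list elsewhere to rebuild one snapshot from the other - can be sketched with a toy in-memory model. Everything below (the file structures, the three-command instruction set, the function names) is invented for illustration; the real send stream carries a far richer command set and a binary format.

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Toy in-memory model of a snapshot: a flat set of named files.
 * All structures and names here are invented for illustration. */
#define MAX_FILES 16

struct file { char name[32]; char data[32]; int used; };
struct snapshot { struct file files[MAX_FILES]; };

enum op { OP_CREATE, OP_WRITE, OP_UNLINK };
struct insn { enum op op; char name[32]; char data[32]; };

static struct file *find(struct snapshot *s, const char *name)
{
	for (int i = 0; i < MAX_FILES; i++)
		if (s->files[i].used && strcmp(s->files[i].name, name) == 0)
			return &s->files[i];
	return NULL;
}

static void set_file(struct snapshot *s, const char *name, const char *data)
{
	struct file *f = find(s, name);
	if (!f)
		for (int i = 0; i < MAX_FILES && !f; i++)
			if (!s->files[i].used)
				f = &s->files[i];
	f->used = 1;
	snprintf(f->name, sizeof(f->name), "%s", name);
	snprintf(f->data, sizeof(f->data), "%s", data);
}

/* "send": emit the instructions that turn parent into child */
static int send_diff(struct snapshot *parent, struct snapshot *child,
		     struct insn *out)
{
	int n = 0;

	for (int i = 0; i < MAX_FILES; i++) {	/* new and changed files */
		struct file *cf = &child->files[i], *pf;
		if (!cf->used)
			continue;
		pf = find(parent, cf->name);
		if (!pf) {
			out[n].op = OP_CREATE;
			snprintf(out[n].name, sizeof(out[n].name), "%s", cf->name);
			out[n].data[0] = '\0';
			n++;
		}
		if (!pf || strcmp(pf->data, cf->data) != 0) {
			out[n].op = OP_WRITE;
			snprintf(out[n].name, sizeof(out[n].name), "%s", cf->name);
			snprintf(out[n].data, sizeof(out[n].data), "%s", cf->data);
			n++;
		}
	}
	for (int i = 0; i < MAX_FILES; i++) {	/* files that went away */
		struct file *pf = &parent->files[i];
		if (pf->used && !find(child, pf->name)) {
			out[n].op = OP_UNLINK;
			snprintf(out[n].name, sizeof(out[n].name), "%s", pf->name);
			n++;
		}
	}
	return n;
}

/* "receive": replay the instructions on a copy of the parent */
static void receive(struct snapshot *s, struct insn *in, int n)
{
	for (int i = 0; i < n; i++) {
		if (in[i].op == OP_UNLINK)
			find(s, in[i].name)->used = 0;
		else
			set_file(s, in[i].name, in[i].data);
	}
}
```

A real stream would also carry renames, clones, ownership and attribute changes, and so on, and would be applied by the user-space receive tool rather than by code like this.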
At the command level, using this feature can be as simple as:
btrfs send snapshot
This will send the given snapshot (in its entirety) to the standard output stream. Writing the command as:
btrfs send -i oldsnap snapshot
will cause the creation of an incremental send containing just the differences from oldsnap. The receive command can be used to apply a file created by btrfs send to an existing filesystem.
The primary use case for this feature (which is clearly patterned after the ZFS send/receive functionality) is backups in various forms. A cron job could easily send a snapshot to a remote server on a regular basis, maintaining a mirror of a filesystem there. The send files can simply be stored as backups; an entire volume can be sent as a full backup, while snapshots are easily sent as incrementals. With some additional tooling, the send/receive feature could develop into an advanced backup capability with low-level support from the underlying filesystem.
That is for some time in the future, though; the feature is currently experimental, and Alexander warns potential users:
That said, there seems to be a fair amount of interest in this feature (btrfs creator Chris Mason described it as "just awesome"), so chances are it will be worked into reasonable shape relatively quickly. Then btrfs will have one more useful feature and one less reason to be concerned about comparisons with that other filesystem.
One might well wonder whether a 64-bit ARM processor is truly needed. 64-bit computing seems a bit rich even for the fanciest handsets or tablets, much less for the kind of embedded controllers where ARM processors predominate. But mobile devices are beginning to push the memory-addressing limits of 32-bit systems; even a 1GB system requires the use of high memory in most configurations. So, even if the heavily foreshadowed ARM server systems never materialize, there will be a need for 64-bit ARM processors just to be able to efficiently use the memory that future mobile devices will have. "Mobile" and "embedded" no longer mean "tiny."
Naturally, Linux support is an important precondition to a successful 64-bit ARM processor introduction, so ARM has been supporting work in that area for some time. The initial GCC patches were posted back in May, and the first set of kernel patches was posted by Catalin Marinas on July 6. All this code exists despite the fact that no 64-bit ARM hardware is yet available; it all has been developed on simulators. Once the hardware shows up, with luck, the software will work with a minimum of tweaking required.
64-bit ARM support involves the addition of thousands of lines of new code via a 36-part patch set. There are some novel features, such as the ability to run with a 64KB native memory page size, and a lot of important technical decisions to be reviewed. So the kernel developers did what one would expect: they started complaining about the name given to the architecture. That name ("AArch64") strikes many as simultaneously redundant (of course it is an architecture) and uninformative (what does "A" stand for?). Many would prefer either ARMv8 (which is the actual hardware architecture name—"AArch64" is ARMv8's 64-bit operating mode) or arm64.
Arguments in favor of the current name include the fact that it is already used to identify the architecture in the ELF triplet used in binaries; using the same name everywhere should help to reduce confusion. But, then, as Arnd Bergmann noted: "If everything else is aarch64, we should use that in the kernel directory too, but if everyone calls it arm64 anyway, we should probably use that name for as many things as possible." Jon Masters added that, in classic contrarian style, he likes the name as it is; Fedora is planning to use "aarch64" as the name for its 64-bit ARM releases. Others, such as Ingo Molnar, argue in favor of changing the name now when it is relatively easy to do. Catalin seems inclined to keep the current name but says he will think about it before posting the next version of the patch series.
An arguably more substantive question was raised by a number of developers: wouldn't it make more sense to unify the 32-bit and 64-bit ARM implementations from the outset? A number of other architectures (x86, PowerPC, SPARC, and MIPS) all started with separate implementations, but ended up merging them later on, usually with some significant pain involved. Rather than leave that pain for future ARM developers, it has been suggested that, perhaps, it would be better to start with a unified implementation.
There are a lot of reasons given for the separate 64-bit ARM architecture implementation. Much of the relevant thinking can be found in this note from Arnd. The 64-bit ARM instruction set is completely different from the 32-bit variety, to the point that there is no possibility of writing assembly code that works on both architectures. The system call interfaces also differ significantly, with the 64-bit version taking a more standard approach and leaving a lot of legacy code behind. The 64-bit implementation hopes to leave the entire 32-bit ARM "platform" concept behind as well; indeed, as Jon put it, there are hopes that it will be possible to have a single kernel binary that runs on all 64-bit ARM systems from the outset. In general, it is said, giving AArch64 a clean start in its own top-level hierarchy will make it possible to leave a lot of ARM baggage behind and will result in a better implementation overall.
Others were quick to point out that most of these arguments have been heard in the context of other architectures. x86_64 was also meant to be a clean start that dumped a lot of old i386 code. In the end, things have turned out otherwise. It may be possible that things are different here; 32-bit ARM has rather more legacy baggage than other architectures did, and the processor differences seem to be larger. Some have said that the proper comparison is with x86 and ia64, though one gets the sense that the AArch64 developers don't want to be seen in the same light as ia64 in general.
This decision will come down to what the AArch64 developers want, in the end; it's up to them to produce a working implementation and to maintain it into the future. If they insist that it should be a separate top-level architecture, it is unlikely that others will block its merging for that reason alone. Of course, it will also be up to those developers to manage a merger of the two in the future, should that prove to be necessary. If nothing else, life as a separate top-level architecture will allow some experimentation without the fear of breaking older 32-bit systems; the result could be a better unified architecture some years from now, should things move in that direction.
Thus far, there has been little in the way of deeper technical criticism of the AArch64 patch set. Things may stay that way. The code has already been through a number of rounds of private review involving prominent developers, so the worst problems should already have been found and addressed. Few developers have the understanding of this new processor that would be necessary to truly understand much of the code. So it may go into the mainline kernel (perhaps as early as 3.7) without a whole lot of substantial changes. After that, all that will be needed is actual hardware; then things should get truly interesting.

Last week we discussed three elements that might serve to guide the creation of introductory technical documentation. This week we put those elements to the test by using them to create some introductory documentation for Linux power management. For me, this exercise precisely answers the question "What were you looking for that you didn't find?", as it is the documentation I would have liked to read.
This documentation is necessarily incomplete, partly because my own experience is not yet broad enough to provide a comprehensive document, and partly because doing so might try the patience of the present readership. As such it stops short of delving into the details of hibernation and completely omits any treatment of quality-of-service and wakeup sources, all of which would have an important place in a more complete document. Fortunately there are still sufficient topics to showcase the presentation of structure, purpose, and examples.
The power management infrastructure in Linux is quite complex, but hopefully not intractably so. To get a handle on this complexity it is helpful to view it from three different perspectives. The first perspective highlights the different holistic states of the system which roughly divide into "in use", "not in use", and "indefinitely not in use", corresponding to "run time power management", "suspend" and "hibernate". One of the distinctions between these is the size of the power switch. The first uses lots of little power switches at different times, while the last turns off everything all at once (except maybe a real-time clock or similar).
The second of these states is somewhat harder to define. It covers a range of states which are not easy to clearly differentiate. At one end of the spectrum we have the traditional "suspend" mode of a laptop, which is almost like hibernation but uses a little more power and is a little quicker to get into and out of. Once the laptop has entered suspend it really must stay there using minimal power until it is explicitly wakened, as it might have been placed in a padded case for transport and any increase in power usage could result in over-heating and damage. This state is often entered with help from BIOS firmware so, to the OS, it is a bit like a single power switch which transitions from "on" to "suspend".
At the other end of the spectrum is the way that "suspend" is used in the Android mobile platform and similar devices. These devices are expected to wake up spontaneously for various reasons, whether due to an incoming phone call, a reminder alarm, or just a periodic check for new email or software updates. Management of power and temperature is generally better than notebooks so the risk of over-heating is not present. There is normally little or no firmware and the entire power-management transition is handled by the OS, so it is responsible for turning off each individual device in the correct order, and then restoring them again later.
Between these extremes of a light hibernation and a heavy snooze there is room for other possibilities. A server might use a BIOS-based suspend to save power after arranging for wake-ups via wake-on-LAN or a realtime clock alarm. This can be seen as a deeper sleep than an Android phone normally enters, but not as deep as the laptop in its padded case. The "suspend" mode in Linux attempts to cater to all of these and that flexibility leads to some of the complexity.
The second perspective highlights the broad variety in components that need to be managed. Some, like rotating disk drives, have a high cost in power and time for turning off and on again, while others like an LED have essentially no cost. Some, such as a UART, need to either be off or sufficiently on to be able to accept full-rate data at any moment. Others, such as USB, can enter intermediate states where they can receive external signaling, but are free to take some time to fully wake up.
Other sources of variability include the level of independence from other devices, the degree of involvement of user space in management of the device, and how power is routed - whether through the same bus over which commands and data flow, or through some separate regulator or "power domain". These are just some of the ways that devices can vary, and thus some of the issues that Linux power management needs to be prepared for.
The final perspective highlights the different stages on the way towards a low-power state, and on the way back to full functionality. The key elements of the low-power transition are to move all relevant components to a quiescent state, to record that state, then to stop powering some or all of the components; similar elements apply on the way back up. The details of managing all the aforementioned complexity through this simple transition means that we have quite a few stages as we will shortly see.
Part of understanding the solution to managing this complexity is understanding the balance that has been chosen between a "mid-layer" solution and a "library" solution. That is, how much responsibility for correct behavior and sequencing is taken by the core code and imposed on the drivers, and how much of the responsibility is left in the hands of the drivers. Centralizing responsibility tends to be safe but inflexible, while distributing it is risky but versatile. Linux power management takes a middle road, so it is important to understand where each responsibility lies.
The main imposition made by the PM core is the over-all sequencing of suspend and resume. Allowing individual drivers to take a more active role in this process would probably require a general dependency solver and would undoubtedly make debugging a lot harder. In contrast, choices that are local to a specific device, such as timeouts before power management activates, or the use of a separate thread for performing power management actions are actively supported by the core without being imposed on drivers that don't want them.
One other imposition, which will be raised again later, involves interaction with interrupts. The PM core strongly encourages a specific sequencing, but does provide hooks for a driver to escape it if absolutely necessary.
Understanding Linux power management also requires knowing how devices are classified in Linux. The most obvious classification is shown by the "subsystem" link that can be found in the sysfs entry for the device. This points to either a "bus" or a "class" that the device belongs to. This subsystem roughly describes the interface that the device provides. Together with this can be a "device type" which allows further specialization. A simple example is that members of the class "block" - which are block devices such as disk drives - can be of type "disk" or type "partition", reflecting the fact that both the whole device and each individual partition are block devices, but that they have some specific behaviors that are quite different.
Finally each device can have a "power domain" (or pm_domain). This is an abstraction that is currently only used for ARM SoC modules and represents the fact that different collections of devices within the SoC can be powered on or off together, thus the power domain may need to know when each device changes power state so it can re-evaluate or adjust the overall state of the domain.
These classifications are used to direct all the power management calls that are described below. If a device has a power domain, it gets to handle the call. If not, but the type, class, or bus declares any PM operations, those operations get to handle the call, otherwise the call is handled by the driver for the device. The PM core doesn't attempt to call all of the possible handlers for a particular device, just the first that is found. This is an example of distribution of responsibility. The first handler has the freedom to call more specific handlers, or to do all the work itself, and equally has the responsibility to ensure all required handlers are called.
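This "first handler found" rule can be sketched as a simple dispatch function. The structures below are simplified stand-ins (in the kernel, the decision is made over struct dev_pm_ops pointers reachable from struct device); the ordering shown - power domain first, then type, class, and bus, then the driver - follows the description above.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Simplified stand-in for a PM callback table */
struct pm_ops {
	const char *owner;	/* who provided these operations */
};

struct toy_device {
	struct pm_ops *domain_ops;	/* "power domain", if any */
	struct pm_ops *type_ops;	/* device type */
	struct pm_ops *class_ops;	/* device class */
	struct pm_ops *bus_ops;		/* bus */
	struct pm_ops *driver_ops;	/* fallback: the driver itself */
};

/* The "first handler found" rule: the power domain wins if present;
 * otherwise the first of type, class, or bus that declares PM
 * operations; otherwise the driver.  Only this one handler is called;
 * it is then responsible for invoking anything more specific itself. */
static struct pm_ops *pick_handler(struct toy_device *dev)
{
	if (dev->domain_ops)
		return dev->domain_ops;
	if (dev->type_ops)
		return dev->type_ops;
	if (dev->class_ops)
		return dev->class_ops;
	if (dev->bus_ops)
		return dev->bus_ops;
	return dev->driver_ops;
}

static struct pm_ops toy_domain = { "power domain" };
static struct pm_ops toy_bus    = { "bus" };
static struct pm_ops toy_driver = { "driver" };
```

This mirrors the OMAP case: when a power-domain handler exists it is chosen in preference to any subsystem handler, and it is then up to that handler to call into the driver.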
For example, the power domain handler for the OMAP platform (in arch/arm/plat-omap/omap_device.c) calls the driver-specific handler (bypassing any subsystem handlers) before or after doing any OMAP-specific handling. The MMC bus handlers call into driver-specific handlers which are stored in a non-standard location - presumably for historical reasons.
With these perspectives and understandings in place, we can move on to some specifics.
Runtime power management has the fewest states and so is probably the best place to start digging into details. This is the part of Linux PM that manages power for individual devices without taking the whole system into a low-power state.
In this case the most interesting stage of the transition to lower power is "move to quiescent state". Once that is done there is one method call (runtime_suspend()) which combines "record current state" and "remove power", and another (runtime_resume()) which must restore power and reload any needed device state.
For runtime PM, the "move to quiescent state" transition is a cause, not an effect - the new state isn't requested, it is simply noticed. The PM core keeps track of the activity of each device using two counters and an optional timer. One counter (usage_count) counts active references to the device. These may be external references such as open file handles, or other devices that are making use of this one, or they may be internal references used to hold the device active for the duration of some operation. The other counter (child_count) counts the number of children that are active. The timer can be used to add a delay between the counters reaching zero and the device being considered to be idle. This is useful for devices with a high cost for turning on or off.
This "autosuspend" timer is not widely used at present, with only nine drivers calling pm_runtime_put_autosuspend() to start the timer, while 14 call pm_runtime_set_autosuspend_delay() which sets the timeout (though that can be set via sysfs). One user is the omap_hsmmc driver for the High Speed Multi-Media Card interface in OMAP processors. It sets a 100ms delay before declaring a device to be truly idle, presumably due to costs in activating and deactivating the cards.
The counter of active children can optionally be ignored when determining whether a device is idle. Normally the parent is needed to access the child - typically the parent is a bus sending commands to the child - so powering down the parent while children are active would be counterproductive. Sometimes it is useful though.
One good example is an I2C bus. I2C (inter-integrated circuit) is a very simple two-wire bus for signaling between integrated circuits on a board. It doesn't carry power, only a clock signal and a bidirectional data signal. The bus is entirely master-driven. Slaves (which appear as children in the Linux device tree) cannot signal the master directly at all; they simply respond to commands from the master.
As an I2C controller is very cheap to turn on before a command is sent, and off after the response is received, there is no need to keep it powered just because its child (which could be a sensor that is monitoring the environment and may have a higher turn-on cost) is left on. Consequently some I2C controllers, such as i2c-nomadik and i2c-sh_mobile, use pm_suspend_ignore_children() to allow them to report as idle even when they have active children.
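The idle test built from these two counters, the ignore-children flag, and the autosuspend delay can be modeled in a few lines. The structure and function names below are invented for illustration; in the kernel the counters live in struct dev_pm_info and the checks are spread through the runtime-PM core.

```c
#include <assert.h>

/* Invented stand-in for the runtime-PM bookkeeping described above */
struct toy_dev {
	int usage_count;	/* open files, other users of this device */
	int child_count;	/* children that are still active */
	int ignore_children;	/* cf. pm_suspend_ignore_children() */
	int autosuspend_ms;	/* extra delay once idle; 0 = immediate */
};

/* A device is idle when nothing is using it and either no child is
 * active or active children are deliberately ignored. */
static int dev_is_idle(const struct toy_dev *d)
{
	if (d->usage_count > 0)
		return 0;
	if (d->child_count > 0 && !d->ignore_children)
		return 0;
	return 1;
}

/* Delay in milliseconds before actually suspending, or -1 if the
 * device is not idle at all. */
static int suspend_delay_ms(const struct toy_dev *d)
{
	return dev_is_idle(d) ? d->autosuspend_ms : -1;
}
```

In this model, an i2c-nomadik-style controller sets ignore_children and so reports idle even with an active sensor attached, while an omap_hsmmc-style device sets a 100ms autosuspend delay before it is considered truly idle.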
When a device is deemed to be idle by the above criteria its runtime_idle() method is called. This function will normally perform any further checks (as does usb_runtime_idle()) and possibly call pm_runtime_suspend() to initiate the change in power state. For a slight variation, lnw_gpio_runtime_idle() in the gpio-langwell.c driver doesn't call pm_runtime_suspend() directly but rather calls pm_schedule_suspend() with a 500ms delay. Presumably this design predates the introduction of the autosuspend feature.
There is one class of devices that does not follow this structure for power management, and that is the CPU. The general pattern of entering a quiescent state, recording state information, and reducing power usage is the same, however the particular implementation is vastly different. This is partly due to the uniquely central role that the CPU plays, and partly due to the fact that a CPU typically has many more levels and styles of power saving. Runtime PM for the CPU is implemented using the cpuidle, cpufreq, and CPU hotplug subsystems, which will not be discussed further here; see this article for an introduction to cpuidle.
It can be helpful to view the "suspend" process as forcing all devices into a quiescent state, and then simply allowing runtime power management to put them all to sleep. The last to go to sleep would be the CPU (or CPUs) under the guidance of "cpuidle". While this isn't the way it is actually implemented, it provides a perspective which exposes the relationship between suspend and runtime PM quite well.
There are several reasons for not implementing it this way. Possibly the most unavoidable is that PM_RUNTIME and SUSPEND are separate kernel config options and there is a desire to keep it that way, so neither can rely on the other being present. There is also the fact that a BIOS (such as ACPI) might be involved in one or the other and may impose different handling requirements. Finally, individual drivers might want to make different decisions based on what sort of power management is happening, so it is generally best to tell them what is actually happening, rather than pretending that one thing is a form of another.
Forcing devices into a quiescent state has an important difference from just allowing them to get there on their own - any interdependencies between devices need to be explicitly handled. Linux PM has chosen to manage this by having a clear sequence of steps for transitioning to low power, and an explicit ordering of devices so that they make each step in a well defined order.
The ordering (stored in dpm_list linked through dev->power.entry) is normally the order in which devices are registered, with new devices added to the end, thus being after any devices that they depend on. However it is possible to reorder the list using device_move() which gives a device a new parent, and can place it directly after that parent, or at the end of the list. For example, when an rfcomm tty-over-bluetooth device is opened, a bluetooth connection is created and the tty device is reparented under the relevant bluetooth device and placed at the end of the device list.
The first stage of suspend, after some preliminaries like calling "sync" to flush out dirty data and switching to a separate virtual console, is to move all processes into a quiescent state. Devices which interact closely with processes need a chance to have one last chat before their process goes to sleep and this is achieved by registering a "notifier" which gets called before processes are put to sleep, and again when they are woken up.
This is variously used to:
and a few other minor tasks. Once these notifiers have run, all processes are sent a special signal which results in them being moved to the "freezer", where they are forced to wait for system resume to happen.
Once all processes are quiescent, the next step is to instruct all devices to also become quiescent. To do this we need to walk the list in reverse order, putting children to sleep before their parents - as the parent may be needed to help put the child to sleep. However as a new child could be born at any moment (e.g. due to a device being plugged in), and as children get added to the end of the list, we might miss some children on the first pass. To avoid this, the PM core makes two passes over the list. The first pass starts at the beginning and simply asks devices to stop adding children by calling their "prepare()" method. If children are born during this time they can only be added after the current pointer in the list, and so will not be missed. Once this is complete we know that no new devices will be added, so the list is walked in the reverse order calling the "suspend" method.
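The two-pass walk can be sketched with a simple ordered list standing in for dpm_list: a forward pass calls prepare() on every device (parents first, blocking new children), then a reverse pass calls suspend() (children first). All names and the logging below are invented for illustration.

```c
#include <assert.h>
#include <string.h>

/* A stand-in for dpm_list: devices kept in registration order, so
 * parents always precede their children. */
#define MAXDEV 8
static const char *dpm_list[MAXDEV];
static int ndev;
static char pm_log[256];

static void register_device(const char *name)
{
	dpm_list[ndev++] = name;	/* new devices go at the end */
}

static void log_call(const char *method, const char *name)
{
	strcat(pm_log, method);
	strcat(pm_log, ":");
	strcat(pm_log, name);
	strcat(pm_log, " ");
}

/* Two passes: forward calling prepare() so no new children can
 * appear, then reverse calling suspend() so each child is suspended
 * before its parent. */
static void suspend_all(void)
{
	pm_log[0] = '\0';
	for (int i = 0; i < ndev; i++)
		log_call("prepare", dpm_list[i]);
	for (int i = ndev - 1; i >= 0; i--)
		log_call("suspend", dpm_list[i]);
}
```

With a bus registered before its child device, the log shows prepare() running parent-first and suspend() running child-first, which is exactly the property the real two-pass walk guarantees.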
The "suspend" method is actually three separate methods, suspend(), suspend_late(), and suspend_noirq(), which can share among themselves the three tasks of making the device quiescent, saving any state, and reducing power usage. How much of which task is allocated to which methods is largely up to each driver providing that the division works with the calling patterns of the three methods.
Calls to these methods are made to all devices in child-before-parent order and the sets of calls are interleaved with system-wide suspend operations, made largely through the suspend_ops dispatch table. The ordering is roughly:
Note that it is possible for the sequence from the system-wide prepare() onwards to be repeated (after being unwound by corresponding "resume" actions) without going all the way up to fully awake and starting the sequence from the top. This happens if the suspend_again() suspend operation requests it. Currently this is only requested by the charger manager, which often needs to wake up parts of the system to check battery charging state without wanting the cost of a full wakeup.
Deducing the purpose of these method calls by looking for example usage in the code is problematic for a number of reasons.
For the system-wide operations (begin(), prepare(), prepare_late()), there are few users and those that exist do not make their purpose clear to an untrained observer. The most complete user is ACPI, so possibly a full understanding of that specification would help. Unfortunately that is beyond the scope of this article (and of this author).
In general, ACPI recommends specific procedures for entering and leaving system sleep states (such as suspend) and Linux PM was modeled on that and then adjusted to meet broader needs. For example, prepare_late() was added to resolve a conflict between the needs of ACPI and the needs of the ARM platform.
Examining the documentation brings a little more useful information.
One observation from the code that seems to be important before we try to paint the big picture is that, after calling the suspend() method on a device, runtime power management is disabled. The purpose of this seems to be to stop runtime PM from racing with system suspend PM - we really don't want two threads trying to power off a device at once, and this is the interlock that prevents that. It also prevents runtime PM from powering the device back on again, so any device that might be needed in the late states of power management needs to be left on when runtime PM is disabled.
Tying all these threads together we get that:
The suspend() method should cause the device to stop doing anything, and enter a state much like it would be just before runtime PM might decide to turn it off. So it should wait for any DMA requests to complete and ensure new ones won't start. It should stop transmitting information and ensure that incoming information is either ignored, or triggers a wake-from-suspend (possibly marking the interrupt for wakeups). It should cancel any timers and generally prepare for nothing to happen for a while.
If the device might be needed to power down other devices, such as an I2C controller that might be needed to tell some regulator to turn off, then the device should be activated for runtime PM purposes so that it will still be active when runtime PM is disabled.
Part of the task of ignoring incoming information is to ensure that no new children will be created, much as the prepare() method does. Having new devices appear after suspend() would be awkward.
The suspend_late() method should power off the device in much the same way that runtime_suspend() does, and it may be exactly the same routine as runtime_suspend(). Occasionally preparing the device to wake up may differ between the system suspend and runtime PM cases. This would be one situation where suspend_late() might need to be different from runtime_suspend().
The only case where suspend_late() should not be used is where interrupts might still be delivered, but the interrupt handler cannot tolerate the device being off. In many cases the suspend() routine will have put the device in a state in which it will not generate interrupts. Likely exceptions to this are when the interrupt line is shared, or when the device supports wake-from-suspend and so deliberately does not disable interrupts.
If the platform that the device runs on uses BIOS support to enter suspend, then it is possible that this support will power off the device, so suspend_late() does not need to bother. If it doesn't, it could still be that the device gets powered off by instructing the BIOS to effect the state change, and it may require different power-off procedures for runtime PM and for entering suspend. If this is the case, then suspend_late() will quite likely be very different from runtime_suspend().
The suspend_noirq() method is an alternative to suspend_late() that is run with interrupts disabled. It is unlikely that any driver will provide both methods.
Having interrupts disabled means not only that an interrupt cannot arrive at an awkward time, but also that any functionality requiring interrupts will not work. So if the driver uses an I2C bus or similar to tell the device to turn off, and the I2C bus uses interrupts to indicate completion (which is normal), then either the device must be powered off in suspend_late(), or the I2C interrupt must be marked IRQF_NO_SUSPEND.
Paired with each of these methods is a method that is called on the way back to full functionality: resume_noirq(), resume_early(), and resume(). These simply reverse what the corresponding suspend-side method did.
Structure, purpose, and examples - these seem to be the elements that distinguish good documentation and enable the reader not just to collect knowledge but to gain understanding. I'll leave you, dear reader, to be the judge of whether their presence here is sufficient to bring an understanding of power management, or indeed an understanding of quality documentation.
I would like to thank Rafael Wysocki for valuable review of an early draft of this article.
Page editor: Jonathan Corbet
On July 3, Debian Project leader Stefano Zacchiroli announced the launch of a new effort to clarify why Debian is not endorsed on the Free Software Foundation's (FSF's) free distribution list, and perhaps even to make changes to Debian so that it meets the FSF's requirements. That effort has spawned a mailing list where the two projects are discussing the differences in their goals and principles, but a plan of action has yet to emerge.
Zacchiroli cited three reasons for pursuing inclusion on the FSF distribution list. First, Debian's absence on the list has historically led to a duplication of effort, with derivative distributions created "that are essentially Debian, modulo the changes necessary to be listed." Second, many in the Debian community choose the distribution because of its rigorous stance on software freedom, and there is likely to be a large overlap between them and FSF supporters. Third, Debian's goals in software freedom are essentially self-reviewed, so measuring the distribution against an external standard could reveal valuable information about Debian's successes or failures and its general perception by outsiders.
Although one of the possible outcomes of the effort is getting Debian included on the FSF distribution list, Zacchiroli stated at the outset that documenting Debian's position on why it does not meet the criteria listed by the FSF might also be an acceptable result. He proposes "to work with the FSF to review the issues they claim apply to Debian" in bug-triage fashion. "Some of the bugs will be valid, some of them will be not, and on some there will be disagreement between submitter and 'maintainer'." Should Debian and FSF be unable to resolve the "bug validity" of the outstanding issues that keep Debian off of the FSF distribution list, Zacchiroli said, "at that point we will have obtained a list of blockers, that could than be used as documentation for Debian users who wonder why Debian and FSF disagree on the Free-ness of Debian."
Accepting the possibility that the two projects might not reach common ground is important, because the biggest obstacle to Debian's inclusion on the list is the FSF's requirement that distributions "not steer users towards obtaining any nonfree information for practical use, or encourage them to do so," and the projects are definitely divided on how that guideline applies. To the FSF, not only must the distribution not have any repositories containing non-free software, but it must not refer to third-party repositories that are not committed exclusively to free software "even if they only have free software today," and individual applications cannot suggest installing non-free plugins or documentation. The latter requirement, for example, disqualifies Mozilla Firefox, because its official add-ons site contains proprietary extensions and plugins — and it disqualifies Iceweasel, Debian's rebranded version of Firefox.
Therein lies the tricky part. Iceweasel is a repackaged version of Firefox built by Debian to cope with incompatibilities between Mozilla's trademark guidelines and the Debian Free Software Guidelines (DFSG). Although Iceweasel complies with the DFSG, it does not meet FSF's distribution guidelines. Conversely, many FSF documents are under the GNU Free Documentation License (GFDL), which does not meet DFSG requirements. Debian's explanation of GFDL's incompatibility notes "that this does not imply any hostility towards the Free Software Foundation" or that the project dissuades others from using GFDL.
Debian intentionally segregates non-free software into a separate repository (named non-free), which it states is not part of the Debian system, but in its explanation of Debian's status, FSF argues that this is not enough: the non-free repository is hosted on Debian project servers, and there are references to it in the online documentation. A related problem is the contrib repository, which includes some packages that FSF claims "exist to load separately distributed proprietary programs." Finally, although Debian no longer includes binary firmware blobs in its kernels, FSF points out that the installer still recommends some of them for specific hardware.
Assessing the content of those repositories is a natural first step. Practically speaking, there is no list of exactly which packages in non-free or contrib violate the FSF guidelines. Paul Wise pointed out an older project to document Debian's non-free packages and said that "recent policy changes added the requirement for the debian/copyright file to document why something is non-free." The information in the non-free tracking system is quite old (early 2008); updating it could take considerable time, but Zacchiroli suggested reviving it: turning each tracking-system entry into a bug report against the relevant package, tagging the reports, and linking each report to the appropriate policy that clarifies why the package is non-free.
Early on in the list discussion, Thorsten Alteholz proposed rolling a "Debian" distribution that strictly follows the FSF guidelines and separating it from a "Debian Extended" distribution that includes access to the non-free and contrib repositories. That idea did not gain significant traction. Bryan Quigley suggested looking for packages in non-free that might be encouraged to relicense, and compiled a list of "low-hanging fruit" including several varieties of non-software package: firmware, fonts, documentation, data, and so forth. Daniel Kahn Gillmor liked the concept, but said that most projects have reasons for choosing the licenses they use: "Convincing the upstream of every package in non-free to change their license seems implausible, so that means that some packages would likely remain."
But Henry Jensen contended that fixing up non-free and contrib would not be enough on its own, because of the "steer users towards nonfree" requirement: "So, every explicit mentioning of non-free software could be interpreted as recommendation." He posted a list of the Debian components he believed needed fixing. In addition to Iceweasel and other programs that use plugins, he listed the Linux kernel (because it logs the names of proprietary firmware files it expects to see but finds missing), the official Debian web and wiki sites (because they mention non-free software), and the official forums and mailing lists (which lack a moderation system to discourage users from asking about or discussing non-free software). He cited references for the kernel and forum issues, including a 2010 message in which Richard Stallman said a distribution's official forums should not include advice on how to run non-free programs.
The discussion sparked (perhaps predictably) a brief flurry of debate over the merits of FSF's guidelines, and specifically whether or not they go too far when they ban discussion of non-free software. As is typical of debates over free software ideals, there was a wide spectrum of opinion. But personal opinions are not the issue. As Mason Loring Bliss put it, "we're not here to discuss my standards. :P The FSF has, effectively, drawn a line in the sand, and it's their line to draw." Ian Jackson encouraged participants to refrain from dogmatic arguments, and for everyone to treat each other as allies.
But Jackson's appeal for respectful disagreement also conceded that full agreement between the projects might be unattainable. "If you can't convince your ally on some point then the right thing to do is not to browbeat them harder. The right thing to do is to agree to differ, and move onto a topic where cooperation is possible." So far, there appears to be little progress on the underlying issue of whether the non-free and contrib repositories are suitably disconnected from the Debian distribution. That issue is the most fundamental one, and it is what led to the brief philosophical debate. Documenting the contents of the repositories may be helpful, but ultimately it is their availability that FSF finds objectionable. Jason Self asked if moving the non-free and contrib repositories to a different virtual host would satisfy the requirements, but so far there has been no reply.
Exactly where FSF decides to draw its lines ultimately involves some judgment calls by humans, of course (I am reminded of Matthew Garrett's 2008 list of things in your computer that you do not have the source code for, including a great many firmware and microcontroller examples), but it draws those lines clearly. If the presence of any information about non-free software on any Debian site or service disqualifies Debian from meeting FSF's distribution guidelines, then it is hard to see how the two projects will find middle ground. Which is not to say that there is no hope: Michael Gilbert pointed to an FSF statement in which Stallman presents a more nuanced approach to balancing the pros and cons of non-free games than he is often given credit for. But these are clearly two projects with firm beliefs about their own ideals, and well-established rationales to back them up. Compromise can hardly be simple.
Fedora

"Many of you know Jaroslav from his contributions to the Fedora Community both as an Ambassador in EMEA, as well as his work as a Fedora Board member; his previous role within Red Hat was as part of the Base OS Development Team in the Brno office, working on Matahari and KDE, and I'm sure that his past experiences in development will be incredibly helpful to him as he takes on this role."
Mandriva Linux

The Mandriva distribution will get a new name. "Starting now, we have opened a poll that will let you pick the name of the future distribution (and its foundation). In the future, Mandriva as a brand name will remain the name of the company (Mandriva S.A.) but the community itself will have a different name and a different branding, although it is also possible that the brand and the name will keep a tight connection with Mandriva. We had to prepare the available choices; we came up with some names during the meeting in Paris, we also listened to some ideas expressed on the Foundation mailing list. Last but not least we left the possibility to send us suggestions for other names. If a suggestion appears to be really popular we will consider it provided it’s available of course."
Page editor: Rebecca Sobol
The effort to make KWin, KDE's window manager, more flexible goes back more than two years, but it is now bearing fruit: scripting support allows less technical users to change the behavior and appearance of KDE without having to resort to C++. KWin hacker Martin Gräßlin talked about the history and status of KWin scripting on July 1 at Akademy.
The idea of KWin scripting goes back to the Tokamak IV sprint in February 2010. The intent is to make the window rules more flexible. At that time, KWin rules were static and there was no way to say "I want all GIMP windows to go to a certain virtual desktop", for example. Gräßlin discussed the idea with Nuno Pinheiro, who leads the Oxygen theme project, at Akademy 2010, with Pinheiro making it clear that not requiring C++ was important.
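A rule of that sort can now be expressed as a short script. The sketch below assumes the KDE 4.9-era KWin scripting interface (the global workspace object, its clientAdded signal, and the resourceClass and desktop client properties); treat the exact names as illustrative rather than authoritative:

```javascript
// Hypothetical KWin script: send every GIMP window to virtual desktop 2
// as soon as it appears.  The signal and property names are assumed from
// the KDE 4.9-era scripting API.
function placeGimpWindow(client) {
    if (client.resourceClass == "gimp") {
        client.desktop = 2;  // move the window to virtual desktop 2
    }
}

// Inside KWin, the global "workspace" object emits clientAdded for each
// new window; the guard lets the file load outside KWin as well.
if (typeof workspace !== "undefined") {
    workspace.clientAdded.connect(placeGimpWindow);
}
```

The appeal is that a rule like this is a few lines of script rather than a patch to the C++ core.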
A 2010 Google Summer of Code project added scripting support to KWin, but it suffered from a number of drawbacks. The API was hand-crafted and the documentation was hand-written. In addition, it "interweaved" the scripting and KWin core components in undesirable ways. It was, Gräßlin said, a prototype and one that should never have been merged.
One thing that was needed to support scripting was a generic animation effect. Previously, the same kinds of effects for window state changes (e.g. window close or minimize) were implemented by copying code into many places, which meant that bugs got copied as well. A base implementation for window effects was added to react to state changes; that implementation has now been made available for use in scripts.
Supporting a touch-based interface like KDE's Plasma Active was a "completely new world to us as a window manager", he said. KWin was highly mouse and keyboard-driven, which is not suitable for touch interfaces. Also, Plasma Active needs windows to always start maximized, which was a big difference. KWin started off supporting Plasma Active using ifdefs, but that became unwieldy. What was needed was a way to put a new interface on top of KWin without changing the core.
The same goes for desktop switching: it uses the same framework as the window-switching support and will be scriptable using QML in KDE 4.9. Right now only one layout is available, as it is "not a focus" for the KWin project, but more could be added.
The KWin team likes to eat its own dog food, but that was difficult with the prototype scripting implementation. Most of the team did not know how to use the prototype. The newer scripting support is better understood, so the team is using it to find and fix bugs early on.
KWin is preparing to support Wayland by adding client properties to its API. These QProperties describe the client interface so that non-X clients can be handled. That means there will be one code base that can handle both window manager and compositing modes.
Multi-screen handling has also been changed substantially. For historical reasons, there used to be lots of options governing the behavior of windows on multiple screens, but those have all been replaced. Now there is a single script for the only sensible choice: a video wall spanning all the screens. Other use cases could be scripted if needed, he said.
Scripting much of the functionality that used to be done in C++ has had a dramatic effect on the code size. KWin lost 5000 lines of C++, which was replaced with a few hundred lines of QML, Gräßlin said.
Various third parties have started using KWin scripting. KWin scripts can be shared through Get Hot New Stuff, KDE's framework for distributing desktop resources. A project to do window tiling for KWin exists as well. A tiling window manager struck some in the audience as rather amusing, but the implementation works quite well and is not something that would have been done in C++, he said.
The "most impressive" user of KWin scripting is the Arctos Dashboard. It gives an overview of the open windows, has a KRunner interface, and an activity switcher. The developer has been collaborating with the KWin team and it has been a nice experience to have someone outside the project using and testing the scripting support, he said.
There may be other projects interested in KWin, beyond just KDE's Plasma. Razor-qt is interested, and Unity may want KWin because there are problems with its window manager (Compiz), he said. KWin has lost its static Plasma dependencies as they have been moved to runtime. There are still a few dependencies on kdelibs, but KWin could move to Qt-only in the future now that some KDE functionality is moving down into Qt.
Down the road, there are plans for integrating scripting with PlasMate, a development environment for creating Plasma widgets, themes, and more. There is a GSoC project underway to do that. Support for desktop thumbnails (e.g. for desktop switching) is already working though there are a few "glitches" with OpenGL rendering that need to be worked out.
Adding the ability to do unit testing with scripts is also in the works. Unit testing window managers has traditionally been difficult, so being able to inject test scripts simulating user interaction into a running KWin instance will be quite useful. Gräßlin said he is "really looking forward to that" as it will lead to a better tested code base for KDE 4.10 and 4.11.
The KWin team is looking for more people to try out the scripting additions and report on problems that they find. In addition, they are interested in hearing about use cases that they haven't thought about. It is "really easy to add things" and the team is open to new ideas, he said, so "give it a try". Scripting support has simplified KWin while also providing new capabilities; more input and testing can only help to make it better.
[ The author would like to thank KDE e.V. for travel assistance to Tallinn for Akademy. ]
Page editor: Nathan Willis
Articles of interest

Mozilla has published a blog post expressing its support for the Internet-based campaign on behalf of Bassel Khartabil, an open source developer who has been detained in Syria. "Mozilla supports efforts to obtain the release of Bassel Khartabil (also known as Bassel Safadi), a valuable contributor to and leader in the technology community. Bassel’s expertise and focus across all aspects of his work has been in support of the development of publicly available, free, open source computer software code and technology." In addition to Mozilla work, Khartabil has contributed to Creative Commons, Open Clip Art Library, and several other projects. The campaign centers on a petition and letter-writing drive at freebassel.org.

GitHub has reportedly raised $100 million in venture funding. "The startup will use the funding to hire additional employees and expand to new platforms such as mobile. CEO Tom Preston-Werner said the company hopes to develop new features but also improve existing ones, such as web applications for different operating systems. The idea is to make GitHub useful for a broad range of clients, from individual hackers to large enterprises, and from software developers to designers or authors."

Richard Fontana's GPLv3 fork is being developed on GitHub. "To date, Fontana's changes mostly relate to the concerns of the corporate voices in the GPLv3 process. The preamble -- 'an inspiring and important political statement,' according to Fontana -- has been removed, as has the 'How to Apply' appendix. Those simple steps alone dramatically shorten the text. The rest of the changes seem to fix apparently redundant compromises that made their way into the text as part of the negotiation process among all the corporate participants."

Eugeni Dodonov, a former Mandriva developer who was most recently working for the Intel Open-Source Technology Center, has died; he was killed in a bicycle accident, and details are still under investigation. This article (in Brazilian Portuguese) has a bit more information.

There is also an interview with Richard Posner, the US federal appeals court judge who presided over Apple's lawsuit against Google's Motorola Mobility and other software patent cases. "Posner said some industries, like pharmaceuticals, had a better claim to intellectual property protection because of the enormous investment it takes to create a successful drug. Advances in software and other industries cost much less, he said, and the companies benefit tremendously from being first in the market with gadgets - a benefit they would still get if there were no software patents."

Finally, there is coverage of Jolla Mobile, a six-man startup that aims to release MeeGo phones. "Jolla will soon be making announcements about its version of MeeGo as well as a brand new smartphone it’s bringing to market. The company so far is composed of “N9 core professionals and MeeGo community alumni,” the [Twitter] page reads. While N9 folks will be at the helm, there is no intention to offer support for Nokia’s N9 smartphones — more on those later."
Upcoming Events

"This year we're running sprints as an integral part of the conference, instead of being tacked on afterwards. Each sprint will have an Introductory talk on Friday 28th, then have sessions throughout the weekend, which you will be able to drop in to between any talks you can't miss. For the sprint diehards, sessions will continue into Monday to wrap up any loose ends."
- Libre Software Meeting / Rencontres Mondiales du Logiciel Libre: Geneva, Switzerland
- Wikimania: Washington, DC, USA
- Linux Symposium: Ottawa, Canada
- Community Leadership Summit 2012: Portland, OR, USA
- OSCON: Portland, OR, USA
- GNOME Users And Developers European Conference: A Coruña, Spain
- Texas Linux Fest: San Antonio, TX, USA
- 21st USENIX Security Symposium: Bellevue, WA, USA
- PyCon Australia 2012: Hobart, Tasmania
- Conference for Open Source Coders, Users and Promoters: Taipei, Taiwan
- YAPC::Europe 2012 in Frankfurt am Main: Frankfurt/Main, Germany
- Debian Day 2012 Costa Rica (August 25): San José, Costa Rica
- GStreamer conference: San Diego, CA, USA
- XenSummit North America 2012: San Diego, CA, USA
- Kernel Summit: San Diego, CA, USA
- Ubuntu Developer Week: IRC
- LinuxCon North America: San Diego, CA, USA
- 2012 Linux Plumbers Conference: San Diego, CA, USA
- Linux Security Summit: San Diego, CA, USA
- Electromagnetic Field: Milton Keynes, UK
- Kiwi PyCon 2012: Dunedin, New Zealand
- Panel Discussion Indonesia Linux Conference 2012 (September 1): Malang, Indonesia
- VideoLAN Dev Days 2012: Paris, France
- Foundations of Open Media Standards and Software: Paris, France
- DjangoCon US: Washington, DC, USA
- Magnolia Conference 2012: Basel, Switzerland
- Hardening Server Indonesia Linux Conference 2012: Malang, Indonesia
If your event does not appear here, please tell us about it.
Page editor: Rebecca Sobol
Copyright © 2012, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds