
LWN.net Weekly Edition for July 26, 2012

WHATWG severs collaboration with W3C on HTML

By Nathan Willis
July 25, 2012

Fans of HTML will be either thrilled or annoyed by the news that there will soon be two independently maintained standards claiming to be the authoritative definition of HTML. The Web Hypertext Application Technology Working Group (WHATWG), a team made up of representatives from various browser makers, announced that it is terminating its collaboration with the World Wide Web Consortium (W3C) on the standardization of HTML 5. In WHATWG's account of the split, it is continuing to develop the "living standard" of HTML, while W3C's HTML 5 specification is a frozen "snapshot" of HTML. Based on the public statements and history between the two groups, the underlying issue has more to do with the standardization process than it does with technical differences between their versions of HTML. Nevertheless, competition over who has the right to declare their vision of HTML the official standard will likely cause headaches for web developers.

WHATWG founder and chief public spokesman Ian Hickson posted the news to the WHATWG mailing list on July 19. Hickson (who is a Google employee) had been the primary editor for both the WHATWG and for W3C's HTML Working Group. According to the announcement, Hickson will continue to be the lead editor of WHATWG's HTML work, but leave the W3C editor position. WHATWG will formally be a W3C "community group" (CG), and will continue to use a W3C-hosted issue tracker. On the latter point, however, Hickson and the W3C did go through the existing bugs filed against the previously-unified HTML specification and clone separate copies, one for each of the now distinct specifications.

The upshot is that WHATWG now regards its version of HTML as the definitive standard, which will continue to evolve, without declaring numbered versions. Hickson concluded "My hope is that the net effect of all this will be that work on the HTML Living Standard will accelerate again, resuming the pace it had before we started working with the W3C working group."

Hickson cited two reasons for the split. First, the W3C separated out several parts of the HTML 5 specification into distinct sub-specifications (such as the 2D canvas element, postMessage, and server-sent events). The result, he said, was "an increasing confusion of versions" of the specification, in response to which WHATWG "went back to just having a single spec on the WHATWG side which contains everything I work on." Second is a divergence between the WHATWG and W3C processes. Attempting to explain what that means, Hickson described WHATWG's process as "fixing bugs as we find them, adding new features as they become necessary and viable, and generally tracking implementations." In contrast, he said, the W3C HTML working group is focused on "creating a snapshot developed according to the venerable W3C process."

What, work in a group?

To the untrained ear, those reasons might sound like WHATWG simply does not want to participate in a standards process at all. But there is more pointed criticism of the W3C on the WHATWG site, which calls the term HTML 5 a "buzzword" and enumerates several specific differences between the two standards, linking most of those to W3C HTML working group decisions that WHATWG evidently found disagreeable. The decisions that WHATWG critiques date back to mid-2010, although many of them seem to be connected to the working group's recent attempts to finalize HTML 5 over the course of 2011-2012 — such as disagreements over the actual wording of the specification and its inline advice (as opposed to, say, contradicting definitions of HTML elements or attributes).

WHATWG's lack of interest in finalizing the standard was also evident in a blog post Hickson made in January 2011. That post announced that WHATWG's version of HTML would drop version numbers altogether, on the grounds that "the technology is not versioned and instead we just have a living document that defines the technology." The HTML "Living Standard" terminology persists in the group's current communication. W3C, in contrast, is still moving forward with finalizing the HTML 5 specification and its successors. In an April message to the HTML Working Group list, Maciej Stachowiak said the W3C has begun to extend the HTML Working Group's charter to tackle "HTML.Next" and will proceed to examine proposals, including "the WHATWG HTML specification, which we anticipate will be one such proposal."

Viewed in that sense, the split between the two groups is not so much a forking of HTML into separate proposals as W3C's HTML 5 is a "frozen branch" of WHATWG's trunk. Or it would be, were it not for both groups' claims to represent the official standard for HTML. Hickson made this claim explicitly in his email, calling WHATWG's HTML Living Standard the "canonical description of HTML and related technologies". W3C's claim to ownership is less overt, but the group and its founder Tim Berners-Lee have written and formalized HTML since its inception in the 1990s. The first draft was written in 1993, with subsequent revisions published as IETF RFCs before the W3C process took over.

WHATWG was founded by a group of browser makers who felt that W3C's process was too slow and bureaucratic, so perhaps it is miraculous that the two groups were able to collaborate so successfully as long as they did. In any case, the practical question is what the split means for HTML developers — and, by extension, web users.

The great thing about standards is ...

The principal reason for concern is that if the two specifications drift in different directions, web developers could be forced to add even more workarounds to match both, or else sites could be branded "WHATWG HTML compatible" or "W3C HTML compatible," reminiscent of a return to the dark days of serious browser incompatibilities. In addition, WHATWG's "living standard" approach has its own critics, principally on the grounds that a constantly-in-flux document makes for a poor standard. A commenter named Mike asked on the 2011 blog post "How do you make a test suite and a browser compatibility chart for a “living standard”? It sounds like HTML is becoming a sort of Wikipedia revision style chaotic nightmare." Similarly, Jukka Korpela said:

“Living specification” sounds like a draft that may and will change at any moment and is probably not even complete at any moment of time but is still called a “specification”, since that sounds cool, technological, and impressive.

I can’t believe I feel the need to explain such a trivial thing, but really, a “specification” is a complete, consistent, stable, and published normative description. It can be cited and referred to as a requirement, e.g. in contracts and product descriptions. Typos, apparent mistakes etc. can and should be corrected, but the content is not changed as you go just because someone or some committee changes its mind. The specification definitely does not “live”; its life is in serving various purposes _as it is_. Development work takes place elsewhere and may eventually lead to a new specification.

So “living specification” is about as oxymoronic as you can get.

Indeed, the current presentation of WHATWG's incarnation of HTML does sport a Wikipedia-style "last updated" timestamp at the top (the last update was July 24, 2012 as of press time), plus floating tooltip-style markers that point out certain paragraphs as "ready for first implementations" from the margins — both features that hardly inspire confidence in the specification's stability.

Steve Faulkner replied to Hickson on the WHATWG list and took direct issue with WHATWG's assertion of canonical-ness. "The claim that HTML the living standard is canonical appears to imply that the requirements and advice contained within HTML the living standard is more correct than what is in the HTML5 specification." In particular, Faulkner pointed out that WHATWG's specification, like W3C's, covers only browser implementations and not authoring recommendations (which tackle critical issues like accessibility in addition to stylistic advice). On that front, Faulkner said, the W3C's specification has the "more accurate set of requirements and advice, that takes into account current implementation realities, thus providing [authors] with more practical advice and thus end users with a better experience."

Hickson and WHATWG have not responded publicly to Faulkner's message, but perhaps that is to be expected. It seems clear that WHATWG's primary motivation is continuing to add to HTML and related technologies as quickly and as often as its members wish. Of course, that highlights another thorny issue. As a standards-setting organization, WHATWG is not particularly accountable; membership is by invitation-only and Hickson can only be removed as HTML editor by the members. Other members of the public are welcome to join the mailing lists as "contributors," however. W3C may not be particularly democratic either, but its flavor of bureaucracy is more diffuse, with various working groups, interest groups, coordination groups, and boards.

The H postulated in 2011 that the seeds for the divergence of the two groups could be traced back to WHATWG's dislike of W3C's HTML 5 logo and related branding effort. One would hope that a core web technology like HTML would be above that level of triviality, but the alternative reasons given in public are not much more satisfying. With any luck, though, the web will eventually route around the damage — one way or another.

Comments (26 posted)

Akademy: KDE successes and areas for improvement

By Jake Edge
July 25, 2012

As with most conferences, Akademy offered a wide variety of interesting talks. Some of those talks have already been covered over the last few weeks, but there is still a bit more to say about Akademy. It was an energetic conference, in a beautiful city. [Tallinn]

Akademy was broken up into a two-day conference portion followed by five days of meetings, workshops, hackfests, and BoFs. Most of the latter sessions are difficult to write about, particularly if one is helping out on a nearly full day workshop on one of those days. But a couple of the talks from the first two days stand out. In some ways, they complement each other well; one looks at some of the successes the KDE project has had, while the other looks at ways its governance could be improved.

Success stories

In the community keynote, Agustin Benito Bethencourt gave several examples of "success stories" that make him optimistic for the future of KDE. It is important to focus on successes because, in uncertain times in the software industry, those stories give the project "confidence in what the future will bring us as a community and as a free software project". He started with the example of the idea that the Qt framework should be free. When he first attended Akademy in 2005, many people were talking about a free Qt, but most people outside of the KDE community never thought it would happen. But inside KDE, "the idea remained strong", and it did eventually happen.

[Agustin Benito Bethencourt]

Now Qt has an open governance model, which came about because the KDE community "helped by pushing", he said. It is an example of "active patience", where KDE continued "making good stuff" and pushing for a more open Qt development model, which led to the desired outcome. In the future, if the project can confidently "keep pushing in the same direction", similar results can be achieved.

The KDE development process is unique, and there are "very few big companies that are as efficient as we are", Bethencourt said. Companies often say that KDE isn't succeeding, but they can rarely match it in the ability to develop new code. It is important for the project to recognize that and "to keep the efficiency that we have".

Another good sign for the future is how good KDE has gotten at incubating businesses. There are more and more freelancers and entrepreneurs in the community, with KDE-related companies springing up frequently. KDE provides the "right ecosystem", so the limits on what can be done simply disappear. It is important to "keep that wheel rolling", he said. Developers, translators, and designers will be building more new companies, and the project will be even more business-friendly in the future.

The project has always had a vision of where it is going, which is important. He recalled the days when the discussions about KDE 4 started, and when it was released he put it on a laptop. It worked, but was not perfect—over time that changed. The project has "a clear vision of where we want to go and we deliver after some time and work", Bethencourt said. That makes others want to "develop under our umbrella". Having a vision and sticking to it "will bring in outside energy".

The project also has longevity going for it. It is very difficult for a software business or a project to keep going for a long time, and KDE has been "innovating for the last 15 years". What KDE is doing now is "very different from what we were doing before", but that is a strength. There will be more changes over the coming years and if it keeps that "innovation mode", KDE has a bright future ahead of it. There is very little that can stop KDE from being here in another 15 years, he said—other than limitations that come from within the project itself.

Is life peachy?

Mirko Böhm started his talk with a reference to Mathias Klang's keynote that had come just before. Klang mentioned the "anti-homeless" benches that were installed in Tokyo to make it nearly impossible for anyone to sleep on them. Böhm said that he thinks KDE has some of those park benches, metaphorically, in that the project's governance is designed to keep in people that the project likes, but makes it "hard for those we don't [like] to join us". He wanted attendees to start thinking about "where in KDE those park benches are".

[Mirko Böhm]

Böhm has been contributing to KDE since 1997 and was a KDE e.V. board member from 1999 until 2006. He helped organize last year's Desktop Summit, and is currently doing research on free software and intellectual property issues at Technical University Berlin.

He has heard the claim that KDE is good at sticking its head in the sand. To illustrate, he put up a flowchart with a bubble that asked "Is life peachy?". If the answer is "yes", then "go do something" (e.g. write code), but if the answer is "no", "stick head in the sand", then "go do something". That is obviously not the right way to approach things, as there needs to be a way to take action to fix the problems. Changing the "stick head in sand" bubble to "change something" and re-routing back to the question is a better way. In order to decide what the "something" is that needs change, one must first observe and then reflect on the problems.

"Is life peachy at KDE?", he asked. For the most part, it is. KDE is the largest meritocratic volunteer-driven free software project; other large projects are mostly not run by volunteers, he said. KDE has an "outstanding product", with a fairly stable contributor base. "As a community, we are pretty healthy." The KDE code of conduct has been copied by other organizations and the project serves as a role model for other communities in free software.

Most folks want to stop at that point with the idea that "everything's fine"—except that it isn't, Böhm said. There are a number of areas where things aren't "peachy". To start with, users are complaining about a lack of technical improvements in areas they care about. At times, the feedback that they are providing "does not lead to tangible improvements".

There are also complaints about a lack of transparency in the governance of the project. Böhm is concerned that contributors are leaving for "non-technical reasons". It doesn't worry him if someone leaves because they graduate and get a job, but if they are "trying to make a difference, but can't" and leave because of that, it's a problem. Similarly, the project suffers from a failure to integrate a commercial ecosystem; it is "good at pissing off companies" so that they leave. That also leads to problems getting vendors to pick up KDE and ship it with their products or distributions. When making a software product, what matters in the end is that someone is using it, he said.

[Akademy group photo]

Böhm and Sebastian Kügler recently scored the project using the Open Governance Index (OGI) to try to get a sense of where it ranks. They found that KDE ranked second in openness, behind Eclipse, but ahead of projects like the Linux kernel, WebKit, Mozilla, and others. That shows that the project is doing well, overall, but "we could do better", he said. His goal would be for KDE to rank first.

There are positive points about the KDE community, but some reflection on them is needed. It is a meritocracy with self-directed contributors. That means that whoever does the coding work makes the decisions. But it leaves those who don't contribute code without a voice in the decision-making. That may not be quite right, he said, as there are others that make non-code contributions who may deserve a voice.

KDE e.V. is not "just" a support organization, but is set up to "support and foster the development of KDE". It does a good job of that, but there are some problems. Only individuals are allowed to be members, not companies or organizations. Those entities can sponsor KDE e.V., but they get no vote. People can only become members of KDE e.V. by invitation, which may leave out people new to the project at times. However, those requirements (individual-only, invite-only) are necessary to protect the assets of KDE e.V. as it is set up such that each member becomes a shareholder in the company.

One of the bigger problems that he sees with KDE e.V. is that it is "secret by default". That leads to downgrades in the OGI score, but it also may give the impression that the project is being run by a secret cabal. It is often said that KDE e.V. makes "no important decisions", but he doesn't agree. Akademy was held in Estonia because KDE e.V. decided it would be, for example, and the organization decides on who gets funding for things like sprints as well. Böhm believes that the "secret by default" approach is hurting the KDE community.

Some reflection on the role of commercial contributors is also needed. Currently, there are only individual contributors, mostly volunteers. That can be contrasted with Android, where there are few or no volunteers. But in order to use all of the available effort, there needs to be some thought about how to involve companies in the community. KDE is just about the only community where companies have no voice, he said, which is "not all bad"—indeed, it may be part of why KDE is still around after all these years.

Proposals

That is "enough reflection", Böhm said, and proceeded to outline some proposed actions to rectify some of the problems he observed. First off, he would like to see KDE stop sticking its head in the sand and try to address problems that crop up. Every year at Akademy, it would be good to see improvements in the areas that have been identified. For example, he proposed that there be a corporate liaison for KDE e.V. to work with companies. He also proposed that users get a voice. There is a KDE e.V. user working group that is being formed, but it should be made more formal in order to give users some say.

The Community Code of Conduct should also be extended, he said. It currently lists the responsibilities of contributors, without covering the rights of contributors. Those two things "normally go hand in hand". Contributors' rights would cover things like the ability to influence development, which is something that is not true for companies that contribute today.

The right to be treated as an "equal among peers" is another right that should be enumerated. Currently, there is something of a divide between technical contributors and non-technical contributors that could be addressed by this right. There is an overall need to minimize discrimination within the community between various sub-groups: technical vs. non-technical, volunteer vs. companies, users vs. contributors vs. KDE e.V members, and so on.

He also proposed that KDE e.V. adopt an "open by default" approach. Always defaulting to secret discussions on a closed mailing list, even for topics that have no need for secrecy, results in less open governance. That is bad for KDE, he said. Next year at Akademy, he hopes to have a similar talk, but one that looks at what has been accomplished in these and other areas.

Well-organized, interesting, and fun

[Tallinn old town]

Overall, Akademy was very well done. The talks were engaging, and the location was stunning. It is always nice to get a feel for what is happening in a large development project like KDE, and Akademy provided a great way to do that. KDE is a vibrant project and its conference reflected that. Once again, thanks to KDE e.V. for travel assistance so that I could go and have an interesting—and fun—trip to Tallinn.

Comments (1 posted)

Page editor: Jonathan Corbet

Security

Stealthy network penetration

By Jake Edge
July 25, 2012

Following up on the success of its Pwn Plug, a plug-computer-based network penetration tool, Pwnie Express has recently announced a power-strip-based successor: Power Pwn. Both products (and another that lives inside an N900 smartphone) are examples of the increasing capabilities of small, innocuous-looking packages—ones that can gather an enormous amount of sensitive data. But, Power Pwn is interesting for another reason: its development was partially funded by the US government.

For those not up on "leetspeak" (an alternative "language" used by the cracking/hacking and other subcultures), "pwn" may need some explanation. It is essentially a misspelling of "own" and in the cracking community is used to mean compromising or controlling a computer system of some kind. So, "pwning" a system is often the goal of attackers. The term is used widely in security circles as well, such as the Pwnie Awards that are given out at the Black Hat security conference.

So, while Pwnie Express's products are described as penetration testing (pentesting) tools, their names and capabilities make it obvious that they are quite suitable for more offensive tasks as well. Power Pwn is designed to look like (and act like) an eight-outlet power strip or surge protector, with "convenient" Ethernet ports, as well as a USB connector. Even when plugged into the network, it could easily be overlooked behind a desk or in a crowded server room.

But the device has no need to be connected to the network to be useful. It contains high-gain antennas for both Bluetooth and 802.11b/g/n, along with an external 3G/GSM network adaptor. Beyond that, it has a 1.2 GHz ARM processor with 512MB of RAM and a 16GB flash disk. It runs Debian 6 ("Squeeze") and comes with an impressive array of security and penetration tools.

It's clear that Pwnie Express has done more than just load a bunch of tools on top of the hardware and Debian, though. The device will call home via SSH either over the wired connection or 3G/GSM. There is also the ability to send shell commands to the device via SMS text messages. It can tunnel through firewalls and intrusion prevention systems (IPS). And so on. It could clearly be of use to those of any hat shade—white, gray, or black.

Those interested in the device will have to wait a while, though, as it is currently only available via pre-order (at a hefty $1295), with expected delivery at the end of September. Most of the same features can be found in the Pwn Plug that is available now (though not inexpensively: $795). That device looks like a cross between a wall-wart power supply and a plug-in air freshener—also easily overlooked.

Power Pwn was developed using money from the US Defense Advanced Research Projects Agency's (DARPA) new Cyber Fast Track (CFT) program:

CFT is designed to fund research to be performed by boutique security companies, individuals, and hacker/maker-spaces, and allow them to keep the commercial Intellectual Property for what they create. The goal is not to have these entities focus on solving DoD problems, but rather to fund research efforts these organizations would have considered on their own but are not pursuing due to complexity/cost/time/etc. Where it is an effort that may help the community at large it is almost by definition within the running lanes of CFT to consider. What's good for the community is good for DARPA.

It's tempting to speculate about the uses that the US government might have for a tool like Power Pwn. It's a bit hard to imagine that other, more secretive organizations, such as the National Security Agency (NSA), don't have similar—stealthier—devices already in hand, though. So, DARPA's thinking is likely along the lines of what Pwnie Express CEO Dave Porcello told Wired: "taking the tools that the hackers are using and putting them in the hands of the people that need to defend against the hackers".

Over time, of course, these kinds of devices are only going to get smaller and more stealthy. There are some limits, though, particularly in terms of power and wired networking connections—at least today. But it is clear that attackers are going to have better and better tools over time. In a somewhat different context (remote scanning), Bruce Schneier recently observed:

All sorts of remote surveillance technologies -- facial recognition, remote fingerprint recognition, RFID/Bluetooth/cell phone tracking, license plate tracking -- are becoming possible, cheaper, smaller, more reliable, etc. [...]

We're at a unique time in the history of surveillance: the cameras are everywhere, and we can still see them. Fifteen years ago, they weren't everywhere. Fifteen years from now, they'll be so small we won't be able to see them.

Keeping network intrusion devices from gathering sensitive data—or causing mayhem—is only going to get more difficult over time. Devices like Power Pwn and Pwn Plug are just the beginning. Widespread strong encryption, which will likely need to be deployed on wired networks as well, can help. But that just makes guarding the keys that much more important, of course. It's an arms race.

Comments (none posted)

Brief items

Security quotes of the week

Hopefully, Microsoft is in bed with various governments to allow them to listen in on our calls. This sounds crazy, but no. It would be an ironic twist, but if it were the case, Microsoft would be required to keep the quality high so everyone doesn't bail out and go elsewhere.
-- John C. Dvorak on Skype

With AI systems becoming more common, we have to start worrying about security. A network intrusion may be all the more serious if it is a neural net that is affected. New results indicate that it may be easier than we thought to provide data to a learning program that causes it to learn the wrong things.

If you like SciFi you will have seen or read scenarios where the robot or computer, always evil, is defeated by being asked a logical program that has no solution or is distracted by being asked to compute Pi to a billion billion digits. The key idea is that, given machine intelligence, the trick to defeating it is to feed it the wrong data.

-- Alex Armstrong on poison attacks against AI systems

Comments (5 posted)

New vulnerabilities

asterisk: two denial of service flaws

Package(s): asterisk        CVE #(s): CVE-2012-3863 CVE-2012-3812
Created: July 20, 2012      Updated: September 18, 2012
Description:

From the Fedora advisory:

CVE-2012-3863: If Asterisk sends a re-invite and an endpoint responds to the re-invite with a provisional response but never sends a final response, then the SIP dialog structure is never freed and the RTP ports for the call are never released. If an attacker has the ability to place a call, they could create a denial of service by using all available RTP ports.

CVE-2012-3812: If a single voicemail account is manipulated by two parties simultaneously, a condition can occur where memory is freed twice causing a crash.

Alerts:
Gentoo 201209-15 asterisk 2012-09-26
Debian DSA-2550-2 asterisk 2012-09-26
Debian DSA-2550-1 asterisk 2012-09-18
Fedora FEDORA-2012-10324 asterisk 2012-07-20

Comments (none posted)

bash: buffer overflow

Package(s): bash            CVE #(s): CVE-2012-3410
Created: July 23, 2012      Updated: September 25, 2014
Description: From the openSUSE advisory:

Bash was fixed to avoid a possible buffer overflow when expanding the /dev/fd prefix with e.g. the test builtin

Alerts:
SUSE SUSE-SU-2014:1214-1 bash 2014-09-25
Mandriva MDVSA-2013:019 bash 2013-04-04
Mandriva MDVSA-2013:032 bash 2013-04-05
Gentoo 201210-05 bash 2012-10-19
Mageia MGASA-2012-0184 bash 2012-07-29
openSUSE openSUSE-SU-2012:0898-1 bash 2012-07-23
Mandriva MDVSA-2012:128 bash 2012-08-09

Comments (none posted)

chromium: multiple vulnerabilities

Package(s): chromium        CVE #(s): CVE-2012-2842 CVE-2012-2843 CVE-2012-2844 CVE-2012-2822 CVE-2012-2824 CVE-2012-2828 CVE-2012-2832 CVE-2012-2833
Created: July 23, 2012      Updated: August 15, 2012
Description: From the CVE entries:

Use-after-free vulnerability in Google Chrome before 20.0.1132.57 allows remote attackers to cause a denial of service or possibly have unspecified other impact via vectors related to counter handling. (CVE-2012-2842)

Use-after-free vulnerability in Google Chrome before 20.0.1132.57 allows remote attackers to cause a denial of service or possibly have unspecified other impact via vectors related to layout height tracking. (CVE-2012-2843)

The PDF functionality in Google Chrome before 20.0.1132.57 does not properly handle JavaScript code, which allows remote attackers to cause a denial of service (incorrect object access) or possibly have unspecified other impact via a crafted document. (CVE-2012-2844)

The PDF functionality in Google Chrome before 20.0.1132.43 allows remote attackers to cause a denial of service (out-of-bounds read) via unspecified vectors. (CVE-2012-2822)

Use-after-free vulnerability in Google Chrome before 20.0.1132.43 allows remote attackers to cause a denial of service or possibly have unspecified other impact via vectors related to SVG painting. (CVE-2012-2824)

Multiple integer overflows in the PDF functionality in Google Chrome before 20.0.1132.43 allow remote attackers to cause a denial of service or possibly have unspecified other impact via a crafted document. (CVE-2012-2828)

The image-codec implementation in the PDF functionality in Google Chrome before 20.0.1132.43 does not initialize an unspecified pointer, which allows remote attackers to cause a denial of service or possibly have unknown other impact via a crafted document. (CVE-2012-2832)

Buffer overflow in the JS API in the PDF functionality in Google Chrome before 20.0.1132.43 allows remote attackers to cause a denial of service or possibly have unspecified other impact via unknown vectors. (CVE-2012-2833)

Alerts:
Mageia MGASA-2012-0177 chromium 2012-07-21
Gentoo 201208-03 chromium 2012-08-14
openSUSE openSUSE-SU-2012:0993-1 chromium 2012-08-15

Comments (none posted)

kdepim: disable code execution by default in HTML email

Package(s): kdepim          CVE #(s): CVE-2012-3413
Created: July 19, 2012      Updated: July 27, 2012
Description:

From the Fedora advisory:

It was reported [1],[2] that kdepim enabled Java, JavaScript, and plugin support by default. This could allow for the execution of Java/JavaScript or the loading of remote images in KMail's rendering of HTML email.

Alerts:
Fedora FEDORA-2012-10411 kdepim 2012-07-26
Ubuntu USN-1512-1 kdepim 2012-07-19
Fedora FEDORA-2012-10410 kdepim 2012-07-19

Comments (none posted)

nsd3: denial of service

Package(s): nsd3            CVE #(s): CVE-2012-2978
Created: July 19, 2012      Updated: August 10, 2012
Description:

From the Debian advisory:

Marek Vavruša and Lubos Slovak discovered that NSD, an authoritative domain name server, is not properly handling non-standard DNS packets. This can result in a NULL pointer dereference and crash the handling process. A remote attacker can abuse this flaw to perform denial of service attacks.

Alerts:
Fedora FEDORA-2012-11203 nsd 2012-08-09
Fedora FEDORA-2012-10887 nsd 2012-07-30
Fedora FEDORA-2012-10893 nsd 2012-07-30
Fedora FEDORA-2012-11207 nsd 2012-08-09
Debian DSA-2515-1 nsd3 2012-07-19

Comments (none posted)

php: multiple vulnerabilities

Package(s): php             CVE #(s): CVE-2012-2688 CVE-2012-3365
Created: July 23, 2012      Updated: February 28, 2013
Description: From the Mandriva advisory:

Unspecified vulnerability in the _php_stream_scandir function in the stream implementation in PHP before 5.3.15 and 5.4.x before 5.4.5 has unknown impact and remote attack vectors, related to an overflow (CVE-2012-2688).

The SQLite functionality in PHP before 5.3.15 allows remote attackers to bypass the open_basedir protection mechanism via unspecified vectors (CVE-2012-3365).

Alerts:
Scientific Linux SLSA-2013:1814-1 php 2013-12-11
Oracle ELSA-2013-1814 php 2013-12-11
CentOS CESA-2013:1814 php 2013-12-11
Red Hat RHSA-2013:1814-01 php 2013-12-11
Scientific Linux SLSA-2013:1307-1 php53 2013-10-10
Oracle ELSA-2013-1307 php53 2013-10-02
Red Hat RHSA-2013:1307-01 php53 2013-09-30
SUSE SUSE-SU-2013:1351-1 PHP5 2013-08-16
CentOS CESA-2013:0514 php 2013-03-09
Scientific Linux SL-php-20130228 php 2013-02-28
Oracle ELSA-2013-0514 php 2013-02-28
Red Hat RHSA-2013:0514-02 php 2013-02-21
Gentoo 201209-03 php 2012-09-23
Ubuntu USN-1569-1 php5 2012-09-17
SUSE SUSE-SU-2012:1034-1 php5 2012-08-24
openSUSE openSUSE-SU-2012:0976-1 php5 2012-08-09
Mageia MGASA-2012-0186 php 2012-07-30
Mandriva MDVSA-2012:108 php 2012-07-23
Debian DSA-2527-1 php5 2012-08-13
Fedora FEDORA-2012-10908 php-eaccelerator 2012-08-05
Fedora FEDORA-2012-10908 maniadrive 2012-08-05
Fedora FEDORA-2012-10936 maniadrive 2012-08-05
Fedora FEDORA-2012-10908 php 2012-08-05
Fedora FEDORA-2012-10936 php 2012-08-05

Comments (none posted)

tiff: code execution

Package(s): tiff            CVE #(s): CVE-2012-3401
Created: July 19, 2012      Updated: August 10, 2012
Description:

From the Ubuntu advisory:

Huzaifa Sidhpurwala discovered that the tiff2pdf utility incorrectly handled certain malformed TIFF images. If a user or automated system were tricked into opening a specially crafted TIFF image, a remote attacker could crash the application, leading to a denial of service, or possibly execute arbitrary code with user privileges.

Alerts:
Mandriva MDVSA-2013:046 libtiff 2013-04-05
Scientific Linux SL-libt-20121219 libtiff 2012-12-19
Oracle ELSA-2012-1590 libtiff 2012-12-19
Oracle ELSA-2012-1590 libtiff 2012-12-18
CentOS CESA-2012:1590 libtiff 2012-12-19
CentOS CESA-2012:1590 libtiff 2012-12-19
Red Hat RHSA-2012:1590-01 libtiff 2012-12-18
Debian DSA-2552-1 tiff 2012-09-26
Gentoo 201209-02 tiff 2012-09-23
Mandriva MDVSA-2012:127 libtiff 2012-08-08
Fedora FEDORA-2012-11000 libtiff 2012-07-26
Mageia MGASA-2012-0181 libtiff 2012-07-24
Ubuntu USN-1511-1 tiff 2012-07-19
Fedora FEDORA-2012-10978 libtiff 2012-08-09
openSUSE openSUSE-SU-2012:0955-1 tiff 2012-08-06

Comments (none posted)

Page editor: Jake Edge

Kernel development

Brief items

Kernel release status

The 3.5 kernel was released on July 21; see Linus's announcement for some low-level details. Headline features in 3.5 include the CoDel queue management algorithm (a piece of the solution to the bufferbloat problem), the seccomp filters sandboxing mechanism, autosleep (an alternative to Android's opportunistic suspend mechanism), the uprobes user-space probe subsystem, the contiguous memory allocator, the new kcmp() system call, metadata checksumming in the ext4 filesystem, and a lot more. See the KernelNewbies 3.5 page for more information.

Stable updates: 3.0.38 and 3.4.6 were released on July 19 with the usual set of important fixes. The 3.2.24 update is in the review process as of this writing; its release can be expected any time.

Comments (none posted)

Quotes of the week

The magic constant police and the whitespace police are the TSA of the Linux kernel. In theory they are there to make things safer, but silently everyone thinks that these were the little kids that always got bullied in kindergarten, and this is their revenge to the rest of the population.
-- Arjan van de Ven

FWIW, I'm all for performance backports. They do have a downside though (other than the risk of bugs slipping in, or triggering latent bugs).

When the next enterprise kernel is built, marketeers ask for numbers to make potential customers drool over, and you _can't produce any_ because you wedged all the spiffy performance stuff into the crusty old kernel.

-- Mike Galbraith

Comments (none posted)

CRtools 0.1 released

The OpenVZ blog has the announcement of the release of CRtools 0.1. "It is our ultimate goal to merge all bits and pieces of OpenVZ to the mainstream Linux kernel. It's not a big secret that we failed miserably trying to merge the checkpoint/restore [CPT] functionality (and yes, we have tried more than once). The fact that everyone else failed as well soothes the pain a bit, but is not really helpful. The reason is simple: CPT code is big, complex, and touches way too many places in the kernel. So we* came up with an idea to implement most of CPT stuff in user space, i.e. as a separate program not as a part of the Linux kernel. In practice this is impossible because some kernel trickery is still required here and there, but the whole point was to limit kernel intervention to the bare minimum. Guess what? It worked even better than we expected. As of today, after about a year of development, up to 90% of the stuff that is needed to be in the kernel is already there, and the rest is ready and seems to be relatively easy to merge."

Comments (37 posted)

Kernel development news

3.6 merge window part 1

By Jonathan Corbet
July 25, 2012
Linus traditionally waits for a day or so after a major release before beginning to merge patches for the next cycle, but, with 3.6, he started right in. As of this writing, some 4,300 non-merge changesets have been pulled into the mainline; much of the activity thus far has been from the networking and ARM subsystems. Significant user-visible changes include:

  • The perf events subsystem now has support for the "uncore" performance measurement unit on Intel Nehalem and Sandy Bridge CPUs.

  • The x86 architecture now supports the reboot=bios and reboot-cpu command-line options on 64-bit processors (as well as on 32-bit, which has been supported for a long time).

  • "Suspend to both" support allows the system to be suspended after writing a hibernation image to disk. Then, should power run out before the suspended system is resumed, it can be restarted from the disk image instead.

  • The CANFD extension to the controller area network (CAN) protocol is now supported.

  • Numerous netfilter modules have gained proper namespace support. The netfilter user-space connection tracking helper infrastructure has also been merged.

  • The Bluetooth layer now has "three-wire UART" support, enabling Bluetooth operations over serial port connections.

  • The TCP small queues patch set, another piece of the solution to the bufferbloat problem, has been merged.

  • The TCP fast open protocol extension has been merged. TCP fast open is a patch out of Google that reduces the overhead of TCP connection setup, hopefully making protocols like HTTP go faster; a brief client-side sketch appears after this list.

  • A long effort to remove the IPv4 routing cache from the networking subsystem has come to its conclusion. David Miller wrote:

    The ipv4 routing cache is non-deterministic, performance wise, and is subject to reasonably easy to launch denial of service attacks. The routing cache works great for well behaved traffic, and the world was a much friendlier place when the tradeoffs that led to the routing cache's design were considered.

    What it boils down to is that the performance of the routing cache is a product of the traffic patterns seen by a system rather than being a product of the contents of the routing tables.

    The replacement code simplifies the networking subsystem and, hopefully, gives better performance on high-volume systems.

  • New hardware support includes:

    • Processors and systems: Freescale BSC9131RDB reference boards, Altera SOCFPGA Cyclone V systems, Marvell Armada 370 and Armada XP boards, TI OMAP5 processors, TI EVMC6678LE evaluation boards, and Freescale (Motorola) Coldfire 5251/5253 and 5441x processors.

    • Audio: TI Isabelle audio ICs, ST-Ericsson AB8500 codecs, Dialog DA732x audio codecs, Wolfson Micro WM5102 and WM5110 audio controllers, and ST STA529 audio amplifiers.

    • Input: Lenovo ThinkPad USB keyboards with trackpoint and Roccat Savu gaming mice.

    • Miscellaneous: Samsung S2MPS11 voltage regulators, Maxim 77686 voltage regulators, TI/National Semiconductor LP8720/LP8725 voltage regulators, Dialog Semiconductor DA9052 PMICs, Honeywell Humidicon HIH-6130/HIH-6131 humidity sensors, Wolfson Micro WM831x and WM832x PMICs, and NVIDIA Tegra20 APB DMA controllers.

    • Networking: RealTek rt3290 WiFi controllers, Sony PaSoRi contactless reader NFC controllers, Atmel RF230/231 radio transceivers, Broadcom BCM8706 and BCM8727 PHYs, and Asix AX88172A USB 2.0 Ethernet interfaces.
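
As a concrete illustration of the TCP fast open item above, here is a minimal client-side sketch. It is not taken from the patch set itself: it relies on the MSG_FASTOPEN sendto() flag that the extension adds (defined by hand below in case the C library headers do not yet carry it), the address and request are placeholders, and on some configurations the net.ipv4.tcp_fastopen sysctl may also need to be enabled.

    /* tfo_client.c: hedged sketch of a TCP-fast-open-aware client. */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    #ifndef MSG_FASTOPEN
    #define MSG_FASTOPEN 0x20000000   /* value used by the kernel patches */
    #endif

    int main(void)
    {
        const char *req = "GET / HTTP/1.0\r\n\r\n";
        struct sockaddr_in addr;
        int fd = socket(AF_INET, SOCK_STREAM, 0);

        if (fd < 0) {
            perror("socket");
            return 1;
        }

        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_port = htons(80);
        inet_pton(AF_INET, "192.0.2.1", &addr.sin_addr);  /* placeholder address */

        /*
         * Instead of connect() followed by write(), hand the request to
         * sendto() with MSG_FASTOPEN so that the data can ride in the SYN.
         * Without a cached fast-open cookie, the kernel simply falls back
         * to a normal three-way handshake and queues the data.
         */
        if (sendto(fd, req, strlen(req), MSG_FASTOPEN,
                   (struct sockaddr *)&addr, sizeof(addr)) < 0)
            perror("sendto");

        close(fd);
        return 0;
    }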

Changes visible to kernel developers include:

  • The obsolete static_branch() interface has been removed in favor of static_key_true() and static_key_false(). Some information on this interface can be found in this article; a brief usage sketch appears after this list.

  • Some initial work has been done to separate the dynamic tick code from the idle task, laying the groundwork for stopping the timer tick on non-idle CPUs.

  • The power domains subsystem has seen some integration with the cpuidle code to handle situations where devices share power lines with CPU cores.

  • The VFS layer has seen some significant changes. There is a new atomic_open() inode operation that combines the process of looking up, possibly creating, and opening a file into a single, atomic operation. The whole "open intents" mechanism has been removed. Numerous other operations have had prototype changes. The deferred fput() changes have been merged, simplifying the process of cleaning up file structures.

  • The PowerPC architecture now supports the jump label mechanism.

  • The NLMSG_NEW() and NLMSG_PUT() macros have been removed from the netlink interface.

  • The input subsystem has a new interface for the creation of user-space drivers; see Documentation/hid/uhid.txt for details.

  • There is a new grouping mechanism for I/O memory management units intended to help enable safe device access to virtualized guests.
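
As a rough illustration of the static key interface mentioned in the list above, here is a hedged sketch in the form of a tiny kernel module; the key and function names are invented, and the snippet aims to show the shape of the API rather than document any particular in-tree user.

    #include <linux/init.h>
    #include <linux/jump_label.h>
    #include <linux/kernel.h>
    #include <linux/module.h>

    /* A hypothetical feature flag that is almost always disabled. */
    static struct static_key demo_key = STATIC_KEY_INIT_FALSE;

    static void hot_path(void)
    {
            /*
             * static_key_false() compiles to a no-op branch that is patched
             * at run time when the key is enabled, so the disabled case
             * costs essentially nothing on the hot path.
             */
            if (static_key_false(&demo_key))
                    pr_info("demo: slow path taken\n");
    }

    static int __init demo_init(void)
    {
            hot_path();                      /* slow path skipped */
            static_key_slow_inc(&demo_key);  /* enable from a slow path */
            hot_path();                      /* slow path taken */
            static_key_slow_dec(&demo_key);
            return 0;
    }

    static void __exit demo_exit(void)
    {
    }

    module_init(demo_init);
    module_exit(demo_exit);
    MODULE_LICENSE("GPL");

The payoff of the interface is that rarely-enabled tracing or debugging checks can sit on fast paths without imposing a measurable cost when they are turned off.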

This merge window can be expected to last until sometime around August 4, so there is quite a bit of code that can be expected to find its way into the mainline before the -rc1 release happens. See next week's Kernel Page for coverage of the continuation of the 3.6 merge window.

Comments (2 posted)

The UAPI header file split

By Michael Kerrisk
July 25, 2012

Patches that add new software features often gain the biggest headlines in free software projects. However, once a project reaches a certain size, refactoring work that improves the overall maintainability of the code is arguably at least as important. While such work does not improve the lives of users, it certainly improves the lives of developers, by easing later work that does add new features.

With around 15 million lines of code (including 17,161 .c files and 14,222 .h files) in the recent 3.5 release, the Linux kernel falls firmly into the category of projects large enough that periodic refactoring is a necessary and important task. Sometimes, however, the sheer size of the code base means that refactoring becomes a formidable task—one that verges on being impossible if attempted manually. At that point, an enterprising kernel hacker may well turn to writing code that refactors the kernel code. David Howells's UAPI patch series, which has been proposed for inclusion during the last few kernel merge windows, was created using such an approach.

The UAPI patchset was motivated by David's observation that when modifying the kernel code:

I occasionally run into a problem where I can't write an inline function in a header file because I need to access something from another header that includes this one. Due to this, I end up writing it as a #define instead.

He went on to elaborate that this problem of "inclusion recursion" in header files typically occurs with inline functions:

Quite often it's a case of an inline function in header A wanting a struct [or constant or whatever] from header B, but header B already has an inline function that wants a struct from header A.
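
To make the problem concrete, consider a hypothetical pair of headers (all names invented) in which each header's inline function needs the structure defined by the other:

    /* foo.h (hypothetical) */
    #ifndef _FOO_H
    #define _FOO_H
    #include "bar.h"                /* needed for struct bar below */

    struct foo {
            int refcount;
    };

    static inline int foo_owner(const struct bar *b)
    {
            return b->owner;        /* needs the full definition of struct bar */
    }
    #endif

    /* bar.h (hypothetical) */
    #ifndef _BAR_H
    #define _BAR_H
    #include "foo.h"                /* needed for struct foo below; but if foo.h
                                       is what included us, its guard makes this
                                       a no-op and struct foo is not yet defined */

    struct bar {
            int owner;
    };

    static inline void bar_hold(struct foo *f)
    {
            f->refcount++;          /* can fail to compile: struct foo may be
                                       incomplete at this point */
    }
    #endif

Whichever header a source file includes first, the other header's inline function ends up being compiled against a type that has not been defined yet. Rewriting the function as a macro, for example #define bar_hold(f) ((f)->refcount++), sidesteps the problem because the macro body is only expanded at call sites where both structures are visible, which is exactly the workaround David describes falling back to.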

As is the way of such things, a small itch can lead one to thinking about more general problems, and how to solve them, and David has devised a grand nine-step plan of changes to achieve his goals, of which the current patch set is just the first step. However, this step is, in terms of code churn, a big one.

What David wants to do is to split out the user-space API content of the kernel header files in the include and arch/xxxxxx/include directories, placing that content into corresponding headers created in new uapi/ subdirectories that reside under each of the original directories. David notes that disintegrating the header files, besides being a step toward solving his original problem and enabling a number of other useful code cleanups, has several other benefits. It simplifies and reduces the size of the kernel-only headers. More importantly, splitting out the user-space APIs into separate headers has the desirable consequence that it "simplifies the complex interdependencies between headers that are [currently] partly exported to userspace".

There is one other benefit of the UAPI split that may be of particular interest to the wider Linux ecosystem. By placing all of the user-space API-related definitions into files dedicated solely to that task, it becomes easier to track changes to the APIs that the kernel presents to user space. In the first instance, these changes can be discovered by scanning the git logs for changes in files under the uapi/ subdirectories. Easing the task of tracking user-space APIs would help many other parts of the ecosystem, for example, C library maintainers, scripting language projects that maintain language bindings for the user-space API, testing projects such as LTP, documentation projects such as man-pages, and perhaps even LWN editors preparing summaries of changes in the merge window that occurs at the start of each kernel release cycle.

The task of disintegrating each of the header files into two pieces is in principle straightforward. In the general case, each header file has the following form:

    /* Header comments (copyright, etc.) */

    #ifndef _XXXXXX_H     /* Guard macro preventing double inclusion */
    #define _XXXXXX_H

    [User-space definitions]

    #ifdef __KERNEL__

    [Kernel-space definitions]

    #endif /* __KERNEL__ */

    [User-space definitions]
  
    #endif /* End prevent double inclusion */

Each of the above parts may or may not be present in individual header files, and there may be multiple blocks governed by #ifdef __KERNEL__ preprocessor directives.

The part of this file that is of most interest is the code that falls inside the outermost #ifndef block that prevents double inclusion of the header file. Everything inside that block that is not nested within a block governed by a #ifdef __KERNEL__ block should move to the corresponding uapi/ header file. The content inside the #ifdef __KERNEL__ block remains in the original header file, but the #ifdef __KERNEL__ and its accompanying #endif are removed.

A copy of the header comments remains in the original header file, and is duplicated in the new uapi/ header file. In addition, a #include directive needs to be added to the original header file so that it includes the new uapi/ header file, and of course a suitable git commit message needs to be supplied for the change.

The goal is to modify the original header file to look like this:

    /* Header comments (copyright, etc.) */

    #ifndef _XXXXXX_H     /* Guard macro preventing double inclusion */
    #define _XXXXXX_H

    #include <include/uapi/path/to/header.h>

    [Kernel-space definitions]

    #endif /* End prevent double inclusion */

The corresponding uapi/ header file will look like this:

    /* Header comments (copyright, etc.) */

    #ifndef _UAPI__XXXXXX_H     /* Guard macro preventing double inclusion */
    #define _UAPI__XXXXXX_H

    [User-space definitions]

    #endif /* End prevent double inclusion */

Of course, there are various details to handle in order to correctly automate this task. First of all, sometimes the script should produce only one result file. If there is no #ifdef __KERNEL__ block in the original header, the original header file is in effect renamed to the uapi/ file. Where the header file is disintegrated into two files, there are many other details that need to be handled. For example, if there are #include directives that are retained at the top of the original header file, then the #include for the generated uapi/ file should be placed after those #include directives (in case the included uapi/ file has dependencies on them). Furthermore, there may be pieces of the original header that are explicitly not intended for kernel space (i.e., they are for user-space only)—for example, pieces governed by #ifndef __KERNEL__. Those pieces should migrate to the uapi/ file, retaining the guarding #ifndef __KERNEL__.
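
To make those details concrete, here is a hypothetical header (all names invented) alongside roughly what the split would leave behind; this is an illustration of the rules just described, not actual output from David's scripts:

    /* include/linux/foo.h -- before the split (hypothetical) */
    #ifndef _LINUX_FOO_H
    #define _LINUX_FOO_H

    #include <linux/types.h>

    #define FOO_MAX_NAME 32                 /* part of the user-space ABI */

    #ifndef __KERNEL__
    #define foo_name_fits(n) (strlen(n) < FOO_MAX_NAME)   /* user space only */
    #endif

    #ifdef __KERNEL__
    struct foo_dev {                        /* kernel-only state */
            char name[FOO_MAX_NAME];
    };
    #endif /* __KERNEL__ */

    #endif /* _LINUX_FOO_H */

    /* include/linux/foo.h -- after the split */
    #ifndef _LINUX_FOO_H
    #define _LINUX_FOO_H

    #include <linux/types.h>                /* retained include stays first */
    #include <uapi/linux/foo.h>             /* generated include goes after it */

    struct foo_dev {
            char name[FOO_MAX_NAME];
    };

    #endif /* _LINUX_FOO_H */

    /* include/uapi/linux/foo.h -- after the split */
    #ifndef _UAPI_LINUX_FOO_H
    #define _UAPI_LINUX_FOO_H

    #define FOO_MAX_NAME 32

    #ifndef __KERNEL__
    #define foo_name_fits(n) (strlen(n) < FOO_MAX_NAME)   /* guard retained */
    #endif

    #endif /* _UAPI_LINUX_FOO_H */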

David's scripts handle all of the aforementioned details, and many others as well, including making corresponding changes to .c source files and various kernel build files. Naturally, no scripting can correctly handle all possible cases in human-generated files, so part of the current patch set includes pre-patches that add markers to "coach" the scripts to do the right thing in those cases.

Writing scripts to automate this sort of task becomes a major programming project in its own right, and the shell and Perl scripts (.tar.xz archive) written to accomplish the task total more than 1800 lines. (Using scripting to generate the patch set has the notable benefit that the patch set can be automatically refreshed as the relevant kernel source files are changed by other kernel developers. Given that the UAPI patches touch a large number of files, this is an important consideration.)

Knowing the size of those scripts, and the effort that must have been required to write them, gives us a clue that the scale of the actual changes to the kernel code must be large. And indeed they are. In its current incarnation, the UAPI patch series consists of 74 commits, of which 65 are scripted (the scripted changes produce commits to the kernel source tree on a per-directory basis). Altogether, the patches touch more than 3500 files, and the diff of the changes amounts to over 300,000 lines.

The scale of these changes brings David to his next problem: how to get the changes accepted by Linus. The problem is that it's impossible to manually review source code changes of this magnitude. Even a partial review would require considerable effort, and would not provide ironclad guarantees about the remaining unreviewed changes. In the absence of such reviews, when Linus received David's request to pull these patches in the 3.5 merge window, he employed a time-honored strategy: the request was ignored.

Although David first started working on these changes around a year ago, Linus has not to date directly commented on them. However, back in January Linus accepted some preparatory patches for the UAPI work, which suggests that he's at least aware of the proposal and possibly willing to entertain it. Other kernel developers have expressed support for the UAPI split (1 and 2). However, probably because of the magnitude of the changes, getting actual reviews and Acked-by: tags has to date proved to be a challenge. Given the impossibility of a complete manual review of the changes, the best hope would seem to be to have other developers review the conceptual approach employed by David's scripts, possibly review the scripts themselves, perform a review of a sample of the changed kernel source files, and perform kernel builds on as many different architectures as possible. (Aspiring kernel hackers might note that much of the review task on this quite important piece of kernel work does not require deep understanding of the workings of the kernel.)

Getting sufficient review of any set of kernel patches, let alone a set this large, is a perennial difficulty. Things at least took a step forward with David's request to Linus to have the patches accepted for the currently open 3.6 merge window, when Arnd Bergmann provided his Acked-by: for the entire patch series. Whether that will prove enough, or whether Linus will want to see formal agreement from additional developers before accepting the patches is an open question. If it proves insufficient for this merge window, then perhaps a rethink will be required next time around about how to have such a large change accepted into the mainline kernel.

Comments (13 posted)

Who wrote 3.5

July 25, 2012

This article was contributed by Greg Kroah-Hartman.

Now that the 3.5 Linux kernel has been released, it's time for the traditional look at who wrote it. Here we'll try to summarize who did all of the work that went into this release.

Fastest-changing kernel ever

The 3.5 kernel was released in 62 days, one day faster than the 3.4 kernel. The last time a kernel was released this quickly was back in 2005 with the 2.6.14 kernel release (61 days).

In those 62 days, the kernel developers crammed in a record-breaking 176.73 changes per day (7.36 changes per hour). This is the fastest rate of change that has been recorded since I started keeping track of this development metric back in the 2.5 kernel release series.

These changes resulted in the following overall changes:

Changes in 3.5
571,987 lines added
358,836 lines removed
135,848 lines modified

The kernel is still growing at a fairly constant rate of 1.37% in the number of lines and files, which is similar to the growth rate of the past three kernel releases.

Individual contributions

A total of 1,195 different developers contributed patches to the 3.5 kernel; those developers worked for at least 194 different companies. The names of the contributing developers are pretty familiar to those who track these statistics:

Most active 3.5 developers
By changesets
Greg Kroah-Hartman          239   2.2%
Axel Lin                    191   1.7%
Mark Brown                  187   1.7%
H. Hartley Sweeten          135   1.2%
David S. Miller             131   1.2%
Daniel Vetter               130   1.2%
Al Viro                     128   1.2%
Stephen Warren              121   1.1%
Tejun Heo                   112   1.0%
Eric Dumazet                105   1.0%
Hans Verkuil                102   0.9%
Paul Mundt                  102   0.9%
Johannes Berg               102   0.9%
Shawn Guo                   102   0.9%
Thomas Gleixner              98   0.9%
Dan Carpenter                86   0.8%
Sam Ravnborg                 84   0.8%
Chris Wilson                 79   0.7%
Trond Myklebust              74   0.7%
Eric W. Biederman            73   0.7%
Jiri Slaby                   73   0.7%
Arnaldo Carvalho de Melo     71   0.6%
Artem Bityutskiy             68   0.6%
Hans de Goede                68   0.6%
Takashi Iwai                 64   0.6%
By changed lines
Paul Gortmaker            44000   5.7%
Viresh Kumar              20425   2.7%
Steven Rostedt            14615   1.9%
H. Hartley Sweeten        13083   1.7%
Dave Airlie               12217   1.6%
Sakari Ailus              10835   1.4%
Dong Aisheng              10574   1.4%
Sonic Zhang               10494   1.4%
Paul Walmsley             10084   1.3%
Ben Skeggs                10000   1.3%
Rob Herring                9886   1.3%
Sascha Hauer               9602   1.3%
Stephen Warren             9365   1.2%
Parav Pandit               8846   1.2%
Nicholas Bellinger         8704   1.1%
Linus Walleij              8496   1.1%
Shawn Guo                  7797   1.0%
David S. Miller            7445   1.0%
Phil Edworthy              7189   0.9%
Sam Ravnborg               6752   0.9%
Hans Verkuil               6718   0.9%
Alexander Shishkin         6668   0.9%
Tejun Heo                  6579   0.9%
Greg Kroah-Hartman         6524   0.9%
Vladimir Serbinenko        6451   0.8%

In the quantity category (remember, we don't judge quality), I did a large number of cleanup patches removing old USB logging macros from the system, which resulted in the majority of my changes in the 3.5 kernel. Axel contributed a great number of regulator driver fixes and enhancements, and Mark Brown did the majority of his work in the sound system-on-a-chip drivers area. H. Hartley Sweeten has been working on cleaning up the Comedi (data acquisition) drivers to get them ready to move out of the staging area of the kernel. This work has him showing up in these statistics for the first time. And rounding out the top five is David Miller with a large number of networking core and driver patches.

Along with H. Hartley Sweeten, Daniel Vetter is also a newcomer to the "top changesets" list. His contributions came from numerous changes and enhancements to the Intel graphics drivers. Although Hans Verkuil is also a name that might not be familiar to many, his contributions to the Video4Linux drivers and core code show he is a core contributor to a subsystem that many users rely on every day.

Turning to the lines-changed statistics, Paul Gortmaker leads by virtue of having deleted all of the old Token Ring drivers from the kernel. Viresh Kumar did a lot of SPEAr processor and driver work, adding numerous new drivers for the platform. Steven Rostedt did a large amount of development on ftrace and ktest (a kernel-testing tool). H. Hartley Sweeten did the aforementioned Comedi driver cleanup work, and Dave Airlie made major changes in the area of graphics drivers.

Reviewing the work

All kernel patches are reviewed and signed off (with a Signed-off-by tag) by a subsystem maintainer before they are committed to the Linux kernel. The developers with the most sign-offs for the 3.5 kernel were as follows:
Developers with the most signoffs (total 20391)

    Greg Kroah-Hartman          1216   6.0%
    David S. Miller              922   4.5%
    Mauro Carvalho Chehab        605   3.0%
    Mark Brown                   549   2.7%
    John W. Linville             493   2.4%
    Linus Torvalds               424   2.1%
    Andrew Morton                373   1.8%
    Daniel Vetter                268   1.3%
    Dave Airlie                  255   1.3%
    Al Viro                      197   1.0%
    Axel Lin                     191   0.9%
    Trond Myklebust              173   0.8%
    Arnaldo Carvalho de Melo     165   0.8%
    James Bottomley              164   0.8%
    Artem Bityutskiy             157   0.8%
    Kyungmin Park                156   0.8%
    Samuel Ortiz                 154   0.8%
    Linus Walleij                153   0.8%
    Ingo Molnar                  150   0.7%
    Wey-Yi W Guy                 146   0.7%
    Thomas Gleixner              139   0.7%
    Stephen Warren               136   0.7%
    H. Hartley Sweeten           135   0.7%
    Shawn Guo                    131   0.6%
    Paul Mundt                   128   0.6%
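
As a reminder of what those sign-offs look like in practice, a patch carries trailer lines that accumulate as it passes from its author to a subsystem maintainer; the counts above are derived from the Signed-off-by lines. The names and addresses below are invented for illustration:

    Signed-off-by: Random Developer <rdev@example.com>
    Reviewed-by: Another Reviewer <reviewer@example.com>
    Signed-off-by: Subsystem Maintainer <maintainer@example.com>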

I ended up doing the most sign-offs for this kernel release because of many changes in the staging and USB subsystems. David Miller follows with his work in the networking and networking driver trees, as well as in the IDE drivers. Mauro is the maintainer of the Video4Linux subsystem, Mark Brown is the maintainer of the embedded sound drivers, and John Linville is the maintainer of the wireless driver subsystem.

These numbers reflect the picture of what has been happening in the past few kernel releases, with the majority of changes happening in the staging and networking areas of the kernel.

Who sponsored this work

Here is the list of companies that sponsored the developers doing the work for this kernel release, along with the number of changes attributed to each:
Top changeset contributors by employer

    (None)                      1343  12.3%
    Red Hat                     1123  10.2%
    Intel                       1061   9.7%
    (Unknown)                    860   7.8%
    Linaro                       519   4.7%
    Novell                       440   4.0%
    Texas Instruments            313   2.9%
    IBM                          282   2.6%
    Linux Foundation             279   2.5%
    Google                       265   2.4%
    Samsung                      251   2.3%
    Oracle                       204   1.9%
    Renesas Electronics          201   1.8%
    MiTAC                        191   1.7%
    NVIDIA                       188   1.7%
    Wolfson Microelectronics     187   1.7%
    (Consultant)                 160   1.5%
    NetApp                       153   1.4%
    Vision Engraving Systems     135   1.2%
    Qualcomm                     121   1.1%

Longtime readers of this series of articles will notice that Linaro has appeared in the top five contributing companies for the first time. This is due both to the increased number of patches Linaro has been contributing and to the organization's wish to have its member companies' employees counted as Linaro contributors, rather than having their work attributed to the member companies themselves, as we had previously done.

A newcomer to the top 20 companies is Vision Engraving Systems, thanks to the Comedi development work from H. Hartley Sweeten. With his work, hopefully this subsystem can move out of the staging area of the kernel in a future release.

Other than the large jump from Linaro, the other companies in the top 25 are well known. Even NVIDIA—despite Linus's well-publicized, and in my opinion well-deserved, criticism of its Linux graphics driver development efforts—continues to be a large contributor to the kernel in the area of embedded processor support for its products. Texas Instruments, Samsung, MiTAC, Wolfson Microelectronics, Qualcomm, Renesas, and Nokia are also primarily focused in the embedded Linux area, showing the wide range of ongoing company support for Linux in embedded systems.

Work continues as usual

With the 3.5 kernel release, the number of contributors remains as high as in previous releases, the rate of contribution (measured in patches per day) is greater than ever, and the growth in the size of the kernel code remains the same as it has been for the past year. This shows that the kernel development community is still thriving and still maintaining its incredibly rapid development cycle, keeping Linux one of the largest collaborative software engineering projects ever undertaken.

Comments (18 posted)

Patches and updates

Kernel trees

Linus Torvalds: Linux 3.5 released
Greg KH: Linux 3.4.6
Steven Rostedt: 3.4.4-rt14
Steven Rostedt: 3.2.23-rt37
Greg KH: Linux 3.0.38
Steven Rostedt: 3.0.36-rt58

Architecture-specific

Core kernel code

Development tools

Device drivers

Documentation

Filesystems and block I/O

Memory management

Networking

Michael S. Tsirkin: tun zerocopy support
Yuchung Cheng: TCP Fast Open client

Virtualization and containers

Miscellaneous

Page editor: Jonathan Corbet

Distributions

Gentoo debates recruitment schemes

By Nathan Willis
July 20, 2012

Cultivating contributors is a tricky endeavor; large projects like Linux distributions need to add developers, testers, packagers, and more on an ongoing basis. But recruiting involves more than extending an invitation; training new talent on the ins-and-outs of the development process is vital, as is deftly handling volunteers that don't quite have their act together. Debian, for instance, has a multi-step process for joining that requires both contribution and a recommendation from existing members.

Gentoo has long taken the recruiting game seriously as well, but it recently decided to shut down its developer-recruitment web application, and return to its previous method of email-submitted "quizzes." Opinion is divided as to which direction the recruitment process should take; the distribution has historically found success with its structured process, pairing new volunteers with mentors for training. But if the mechanics of maintaining that process become a burden, it can drive off new contributors and mentors.

In the past, the Gentoo recruitment training process centered around a set of quizzes that each new recruit had to complete successfully before getting commit privileges. There were two quizzes: one for those who would be working with the distribution's ebuild build system or Portage tree, and another for those who would be working on infrastructure and other non-build components. Both quizzes include a mix of policy and technical questions, requiring (sometimes lengthy) essay-style answers. For example, the ebuild quiz asks:

What is the proper method for suggesting a wide-ranging feature or enhancement to Gentoo? Describe the process for getting this feature approved and implemented.

in the "Organizational structure" section, and:

You find a package that will not build on some architectures without PIC (-fPIC) code in the shared libraries. What is the proper way to handle this situation?

in the "Ebuild technical" section. The recruit would send the completed quiz to his or her mentor and, upon receiving a satisfactory grade, would advance to the next step: opening a recruitment "bug" to track the recruit's progress, which eventually culminates in account setup.

The term "quiz" might be slightly confusing to some who associate the word with brief and/or quick tests. In Gentoo's case, the recruit could take as much time as necessary to complete the questions, and might be asked by the mentor to try again through several iterations. On the plus side, this technique emphasizes letting the new developer get familiar with the documentation and the distribution's way of doing things. But it also led to new recruits taking months or even years to complete the quizzes, often while they continued to contribute back in "unofficial" ways.

The web application (at recruiting.gentoo.org) was deployed in 2010, with the goal of streamlining the process by storing the questions and answers online. Mentors could provide feedback through the application, without juggling email threads. But the web application has not proven so useful in practice: it is buggy, it has UI issues, it runs on an outdated version of Rails, and there is insufficient developer-power in the project to fix it up. All of those factors combined to convince the recruiters to move back to the old-style, email-driven quiz process.

Markos Chandras from the Gentoo Recruiters team announced the decision on July 14, saying that new recruits (not including those who have already started using the web application) should use the old quizzes instead. "We understand that quizzes is not an ideal way to 'hire' people either, but they worked ok for all these years and it is the only alternative we have at the moment." Chandras added that hopefully a future Google Summer of Code (GSoC) project will be able to improve the application.

But not everyone agreed that the old-style quizzes worked acceptably, or that the web application was the only alternative to consider. Ben de Groot said that the time involved takes away from time the new recruit could contribute to Gentoo:

The first time I did the quizzes, it took me 9 months. After having been away for a couple of years, I recently returned as Gentoo dev, and the second time I did the quizzes it took me 3 months. I've seen others take a long time doing them as well. Davide (pesa), one of our most valued contributors in the Qt team, took close to two years I think.

I think this way we lose much valuable developer time. These people could have had commit access and done much valuable work so much earlier, if there wasn't this obstacle of the quizzes...

[...]What I noticed in my own experience as lead of our Qt team, is that getting people started on the real work, being part of the developer community and process, is a good way to introduce them to how we do things in Gentoo. The Qt team has its official overlay, and it is easy for us to give new contributors access to it. That way they can learn to write ebuilds and eclasses, and how to improve them, commit them, and get used to a good workflow.

De Groot proposed improving the wiki documentation to cover the quiz material, and having mentors walk their recruits through the documentation while simultaneously helping them learn development work. Alternatively, he said, mentors could assign tasks for recruits to complete. Chandras replied that the quizzes cover a specific set of material to ensure consistency between mentors, and that doing away with them would necessitate someone else monitoring the mentors to ensure they cover the proper work. On the other hand, he did like the idea of improving the wiki, and suggested that the post-quiz review steps could be simplified, particularly if a recruit has already been contributing.

Rich Freeman expressed surprise at the lengths of time taken by some recruits, but agreed that the quizzes have weaknesses, saying "I did struggle because policies were not always spelled out" and "sometimes the indirectness of some of the questions was frustrating", but that he completed his quiz in eight hours, and learned a lot in the process. He also suggested improving documentation, in particular by creating step-by-step tutorials for ebuild, which could guide new recruits through learning the system (in contrast to the existing documentation, which is predominantly reference material).

Several people responded that the absolute time required to complete the quizzes was not the issue, rather it was finding the free time to devote to the process amidst all other responsibilities (including Gentoo contributions). Peter Stuge commented, jokingly, that "the idiots that the quizzes are designed to keep out can spend two (or four/eight if they need) days to pass anyway with a little dedication, while less idiotic idiots such as perhaps myself need years because we're doing whatever work as opposed to learning foundation bylaws by heart."

Freeman also speculated in several messages on the impact the recruitment process has on the overall Gentoo culture, apart from the method used to indoctrinate new developers. On one hand, he suggested that the need to ensure a training regimen stemmed from technical choices like using CVS (where commit access is required). Using Git instead would alleviate some of the concern about adding new developers, because they could still do useful work in their own trees. More developers with more freedom to improve packages, he said, "would be a good thing. The all-or-nothing model too often turns out to be nothing."

On the other hand, Freeman argued that the non-technical topics in the quizzes — such as learning to work within teams, ask questions, and build consensus — are more important in the long term:

What really causes havoc around here is when people change ebuilds without consulting with the maintainer, or when they go tweaking system packages without a great deal of care and being part of the appropriate team, and so on. [...] Many of these issues have dwindled in recent years, and I think it is precisely because teams like the recruiters have been paying more careful attention to them.

Freeman may have summed up the feelings of many Gentoo developers: the specifics of the training process are less important than the fact that it is deliberate and guided by active mentors and recruiters, because the end goal is to integrate new developers into the Gentoo community, not to train them on a particular suite of tools. On that front, the web application still has its share of fans. Theo Chatzimichos said he prefers it to the email-driven quizzes because it simplifies keeping track of recruits' answers. Chatzimichos said he mentors two or three recruits at a time, and proposed putting out a call for volunteers to revamp the web application, while making sure not to let it get out of sync with the quizzes.

In an email, Chandras said that Gentoo averages 10 to 15 recruits per year. That may not be many when compared to large distributions like Ubuntu or Fedora, but in a sense it only makes the recruitment process more critical. It is clear from the discussion that neither the old email-driven quizzes nor the recent web application quite meets everyone's needs, but at least the recruiters and mentors are committed to sticking with the process despite its awkward points.

Comments (31 posted)

Brief items

Distribution quote of the week

I guess this is a matter of opinion, but on Gentoo I don't think we're really at much risk of driving people away by OVER-communicating.
-- Rich Freeman

Comments (none posted)

GNU Linux-libre 3.5-gnu: Free and a half

The 3.5-gnu Linux-libre kernel has been released. The Linux-libre kernel meets the Free Software Foundation's criteria for free software and is suitable for distributions that aim to include only free software.

Comments (90 posted)

Slackware 14.0 beta

The July 22, 2012 entry in the slackware-current changelog (i386, x86_64) has the announcement that Slackware 14.0 has reached beta. "Howdy! Lots of shiny stuff here, including the long awaited Xfce 4.10! Thanks to Robby Workman for the initial set of build scripts, and lots of testing (plus some very helpful notes about things such as the proper build order). I'm calling this a beta (finally!), and it's really very close to what we expect to release. Test away."

Comments (3 posted)

Distribution News

Fedora

Fedora Summer of Open Hardware and Fun Sweepstakes

The Fedora project has launched a sweepstakes for Fedora contributors with Open Hardware for prizes.
Unfortunately, we don't have enough hardware to give something to every Fedora Contributor, so this is a sweepstakes, and sweepstakes come with all sorts of rules and restrictions.

This sweepstakes is for Fedora Contributors (defined as users in the Fedora Account System who have signed the FPCA and are in one additional group). There are some geographic and age restrictions, the reason for this is that it is extremely costly and time-consuming to determine whether or not it is possible to run a sweepstakes in a given country. Sweepstakes laws and regulations vary considerably from country to country, and many countries have strict registration requirements and fees associated with running sweepstakes. Other countries simply prohibit sweepstakes entirely. As a result, we are only offering this sweepstakes in countries where we know that the sweepstakes is lawful. We sincerely apologize for any inconvenience this may cause you.

Full Story (comments: none)

Newsletters and articles of interest

Distribution newsletters

Comments (none posted)

Bergeron: The Future of FUDCons

On her blog, Fedora project leader Robyn Bergeron considers a different approach for the Fedora Users and Developers Conference (FUDCon). In the post, she suggests having a single FUDCon per year (rather than four), timing the conference so that it works well with the release schedule, and renaming (and repurposing) it to "DoCon". "Make this event be focused on the “do-ers” – and not the users. I mentioned previously in this post that it does not make the best use of our face-to-face bandwidth, and I’m sticking to that — and moreover, I think that trying to plan a parallel “user track” just winds up taking people away from getting things done. This is not a “we don’t care about the users” statement in any way, so don’t jump down my throat. But I think that mixing up the event tends to leave casual users/potential users/non-contributor users unsure about what to attend, and I haven’t seen any evidence on any large scale that users magically become contributors at a FUDCon. And there is NO REASON IN THE UNIVERSE why we can’t come up with a type of event that costs significantly less to host, requires fewer numbers of contributors to attend, and is geared solely towards users/potential users/potential contributors, and can be made repeatable in many places. The fact that a FUDCon in Pune can draw in a crowd of 500+ shows that there is absolutely interest."

Comments (9 posted)

HealthCheck Mandriva - Rebooting the company (The H)

Over at The H, Richard Hillesley has an in-depth look at the history and future of Mandriva, both the company and the distribution. "Mandriva is hoping that 'contributors will come to the new distribution because it will be a fun place to be. The new distribution will bring very much the same care for the end user that the Mandriva line of distributions have always brought, and will hope to be as technically advanced as Fedora,' says [Charles H.] Schulz, 'and this will be exciting and new.' Mandriva the company will contribute but does not expect to be in charge. 'We're not going to hog the governance. We won't be the ones that decide,' says Schulz. There has already been a community proposal for hosting the servers, and Mandriva and Rosa Labs will also contribute some servers, 'but we prefer to lead by example rather than domination.'"

Comments (none posted)

New desktops for Fedora

The H looks at the inclusion of the Cinnamon desktop in the Fedora 17 repositories. "Now that Muffin and the Cinnamon package have been approved, the desktop has been included in the Fedora 17 standard repositories; from there, it can be installed using a command such as yum install cinnamon and then selected when signing in via GDM or another login manager. Provided that the desktop continues to be maintained, it will likely also be part of Fedora 18, which is scheduled to be released this autumn." The article also mentions that there is an add-on repository for Ubuntu's Unity desktop.

OMG! Ubuntu! warns that installing Unity on top of Fedora 17 "will replace some core GNOME components with Unity-compatible versions" and should be done with caution.

Comments (none posted)

Page editor: Rebecca Sobol

Development

Command-line publishing with Easybook

By Nathan Willis
July 25, 2012

The web and e-books were both supposed to kill off paper-based publishing, but the reality is that authors and publishers often need to produce editions for all three formats instead. Easybook is one of several open source tools for doing just that: it is a PHP program that lets an author write book content once, then export it to a variety of output formats (currently EPUB, HTML, and PDF). Easybook certainly makes the actual rendering of content a simple affair, but there are other issues to consider, including the ease of editing content and Easybook's reliance on proprietary software under the hood.

Easybook itself is a script written in PHP, but designed to be called from the command line. The actual book content is written in Markdown format, stored one-chapter-per-file, with a separate YAML file holding the book's configuration settings and a description of its structure. The configuration settings allow you to define several "editions" of each book, which incorporate output templates plus variations in the content. For example, you might want to include a "list of figures" in the PDF edition of a book, but omit it from the HTML version where such things are uncommon. Whenever you are happy with the content, you can export it to one of the defined editions with the Easybook script. The script generates HTML and EPUB (which of course is derived from HTML) directly, and calls PrinceXML to do the HTML-to-PDF conversion for PDF output.

As is the case in any typesetting program, it is the quality of the templates and styles that make or break the final output. The "editions" that you define in an Easybook project's configuration file are themes that incorporate page settings, typography, CSS styles, and structure. Easybook defines four themes by default: EPUB, PDF, and two varieties of HTML (single-page HTML and "HTML chunked," which splits the book into separate pages for each chapter). Even four sizes do not fit all, however, so you can (and indeed should) edit and extend them for your own work.

You can download Easybook as a Zip archive or check it out via Git. The current release is version 4.4. PHP 5.3.2 is required, and the download bundles in most of the required PHP packages, such as the Symfony component framework and Twig templating system. Easybook itself is available under the MIT license and most of the other components are open source as well. PrinceXML is proprietary, however: you must either install the free-for-noncommercial edition, which watermarks the first page of each PDF, or buy a license.

Book creation 101

Starting a new book project with Easybook is as simple as executing the ./book new "My Title" command, which creates a skeleton directory structure for the new book located beneath your Easybook installation directory, wherever that may be, and with the book title morphed into a more Unix-like lowercase-and-hyphens form for the subdirectory name. The command also populates it with a default configuration file and some blank chapter files. The directory structure looks like:

    ./my-title/
               config.yml
               Contents/
                        chapter1.md
                        chapter2.md
                        images/
               Output/

As mentioned above, book text is written in Markdown format (hence the .md file extensions). I am not a huge fan of Markdown; in my experience its not-quite-HTML syntax requires just as much mental effort as HTML, but subsequently requires you to process your output before reading it. But it does have its supporters. In any case, when writing the meat of your text you can use any editor or combination of editors you choose. The chapter1.md and chapter2.md file names are there merely to guide you; you can name your files anything you wish, because you must edit the config.yml file to tell Easybook what your book consists of, and how it will look.

The config.yml file contains a header stanza that includes general-purpose information like title, author, and publication date. Watch out for the edition option, though: in the header, this refers to the publication edition, which is what will enable rare-book collectors decades from now to recognize your valuable first editions and pay more to own them. Further down, the editions (note the plural) option is where you list and describe the Easybook "editions" mentioned above. The default file created by new includes the basic four theme types, although they have different names: "ebook" means EPUB, "web" means single-file HTML, "website" means HTML chunked, and "print" means PDF.
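
For illustration, the overall shape of config.yml looks something like the following sketch; the author name is a placeholder, the comments are mine, and the exact set of options written out by the new command may differ between Easybook versions:

    book:
        title:            "My Title"
        author:           "Jane Doe"
        edition:          "First edition"
        language:         en
        publication_date: ~

        contents:
            # chapter, toc, and other element entries go here

        editions:
            ebook:      # EPUB output
                # ...
            print:      # PDF output
                # ...
            web:        # single-page HTML
                # ...
            website:    # "HTML chunked" output
                # ...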

The name of the edition you want is the argument you pass to Easybook when generating output. So, for instance,

    ./book publish my-title print
generates the "print" edition PDF, and places it in the Output/ subdirectory.

The next major section of the config.yml file is taken up by the contents stanza, in which you list the elements comprising your book and the files in which its data is contained. Every element that goes into the book has its own element: option in this section. Easybook understands about twenty different elements at present. Some of them can be included simply by listing the element, such as a table of contents:

        - { element: toc   }

Because the table of contents is generated automatically at export time, it requires no other configuration. Chapters, however, need to point to the correct file:
        - { element: chapter, number: 1, content: chapter1.md }
        - { element: chapter, number: 2, content: blahblahblahblah.md }
        - { element: chapter, number: 3, content: thebutlerdidit.md }

Since you specify the filename, it can be anything you want, and it is simple to rearrange the chapters. Some of the other elements work the same way as chapter, such as introduction and epilogue. There are two advantages to using a separate element for these components, though: you can style them differently for output, and you can include them in or omit them from the different editions of your book. Other elements, such as list-of-tables (lot), are automatically generated. You can also include higher-level divisions of your text with the part element.

Finally, down in the editions stanza you will see each of the editions defined for the book. The four defaults mentioned earlier each have an indented list of options, and you can add directives to adjust their output or to alter the way they interpret individual book elements. For instance, the toc element can take a deep directive telling it how many levels deep to index content. It takes a value from 1 to 6, corresponding to HTML's <h1> through <h6> headings. By default, toc searches only chapter, part, and appendix elements to create its index, but you can add others by listing them after an elements: directive underneath toc. For example:

    toc:
        deep:       4
        elements:   ["appendix", "chapter", "preface", "afterword", "conclusion"]

Because toc's deep directive is used only in the editions stanza, rather than up in the contents stanza, you can define your print, web, and ebook editions to have different depths in their tables of contents. There are also directives that are unique to a specific output format; for example, a PDF edition can specify all four margin widths, whether pages are single- or double-sided, and an ISBN, as in the sketch below.
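
The following is a rough sketch only: the format, margin, two_sided, and isbn key names here are assumptions based on the options described above, so the generated config.yml and the Easybook documentation remain the authoritative reference:

    editions:
        print:
            format:     pdf
            isbn:       ~           # fill in if the book has one
            two_sided:  true        # assumed key name for double-sided output
            margin:                 # assumed key names for the four margin widths
                top:    25mm
                bottom: 25mm
                inner:  30mm
                outer:  20mm
            toc:
                deep:   2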

Themes with variation

The simplest way to customize Easybook's output is to create your own editions. You can add them to the editions stanza and specify every option, or extend the basic set with different options. For example, you could either create a new edition called booklet with different font and page sizes from print, or you could put the extends: print directive in your new edition and automatically inherit all of print's settings.
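
A minimal sketch of the second approach, assuming the print edition shown earlier; extends: print is the directive mentioned above, while the page_size override is an invented example:

    editions:
        booklet:
            extends:    print       # inherit all of print's settings
            page_size:  A5          # hypothetical override; check the documentation for real option names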

A more complicated option is to override the way the default theme handles specific content types. You will recall that in the contents stanza, the chapter elements pointed to a file, but most other elements required no further attributes. In those cases, the default theme already has a Twig template defined that tells the Easybook renderer how to handle the element. For example, the license element has a boilerplate "All Rights reserved" license buried within Easybook's theme directory. But you can point to your own as well, such as:

        - { element: license, content: GNU-FDL.md }

The default behavior of the theme for each element type is stored in a Contents subdirectory of the edition type, which is itself a subdirectory of the theme, all of which lives beneath your personal Easybook installation location. Throw in the twenty-odd element types, and that adds up to a large set of files. For instance, the license element for the default PDF type is found in $your_easybook_install_dir/app/Resources/Themes/Base/Pdf/Contents/license.md.twig.

You are (clearly) not meant to find and modify these files by hand, but the online documentation lacks a complete reference for the default themes and what they produce. In some cases, this is a cosmetic issue, but in others it is significant. If you put a cover element in your book, Easybook will generate it based on the book title, author, and edition given in the header. If you want it to include the publication date, too, you have to find and modify the Twig file.

Because themes are a collection of Twig template files, you can also create your own. The Easybook documentation has a separate chapter on the process, which is good because there are a considerable number of pieces to assemble. Each theme requires Twig templates for every content element type, plus templates for tables, figures, source code listings, and the book as a whole. HTML themes require extra templates for basic layout, and EPUB themes require other templates to handle creating EPUB's metadata files. In addition, you must create a CSS stylesheet that defines the styles referenced in the Twig templates.

Admittedly, this is advanced stuff, and Easybook attempts to provide you with simpler methods to modify your output merely by adding attributes and options in the config.yml file. I suspect that if you found the time required to develop an Easybook theme from scratch, you could just as easily use LaTeX or another typesetting system.

Easybook's real competition is other lightweight (in terms of user interface) formatting systems like Sourcefabric's Booktype (which we covered in February). Between the two, Booktype edges out Easybook for ease-of-editing. It provides a web-based WYSIWYG editor rather than requiring Markdown, and it works automatically with distributed teams of authors. While Easybook's locally-installed CLI option is easy to use, the fact that it relies on storing the book's contents and configuration file in a single location does not lend itself well to working with others. Despite the romanticized notion of authors toiling away in remote cabins in the woods, few if any book projects are single-user affairs.

It is also a major strike against Easybook that its PDF export functionality comes from a proprietary library. There are certainly free software alternatives; perhaps the Easybook project was unimpressed with their output, but considering the fact that the free version of PrinceXML watermarks output, it is hardly a viable option anyway. That said, what Easybook does do is provide a straightforward way to maintain format-independent text works and rapidly generate output suitable for consumption. For a lot of armchair publishers, that may be enough.

Comments (11 posted)

Brief items

Quotes of the week

Being the most widely used browser in the world is irrelevant. It's important that Mozilla software will continue to be available and compatible when people need it, now and in the future. Other browser makers must always be aware that they will never be able to abuse a market dominance, because Mozilla will always be waiting for users to come back with our arms wide open to make them feel home in the web.
Kai Engert

Or you could simply remember that it is always haystack,needle for strings and needle,haystack for arrays. Not that complicated.
Rasmus Lerdorf

Comments (6 posted)

Bison 2.6 released

Version 2.6 of the GNU bison parser generator has been released. Changes include new declarations and guards in generated parser headers and a new api.prefix variable. The latter will allow bison to automatically rename constants with a user-defined prefix, rather than a generic name like YYSTYPE as in previous versions. Several older features have also been flagged for deprecation in the next stable release, so careful reading is advised.

Full Story (comments: 4)

GNU MPC 1.0 "Fagus silvatica" released

The GNU MPC 1.0 release is out. "GNU MPC is a C library for the arithmetic of complex numbers with arbitrarily high precision and correct rounding of the result. 'Fagus silvatica' is our first release as a GNU package, and it is marked by a license change to LGPL version 3 or later for the code, and GFDL version 1.3 or later (without invariant sections) for the documentation. Now each line of code is covered by a test."

Full Story (comments: 14)

PhoneGap 2.0 released

The Adobe PhoneGap 2.0 release is available. "PhoneGap allows developers to build cross-platform mobile applications using HTML5, CSS3 and Javascript. With PhoneGap, you can re-use your existing web developer skills and use the PhoneGap API to gain access to native features that aren’t accessible in mobile browsers." New features include a cross-platform command-line interface, better documentation, and integration with Apache Cordova.

Comments (10 posted)

Newsletters and articles

Development newsletters from the last week

Comments (none posted)

Romanick: The zombies cometh...

In his blog, Ian Romanick writes about a recent visit to Valve by Intel graphics engineers. They went to assist in making the "Left 4 Dead 2" game work well with Intel hardware and drivers. "The funny thing is Valve guys say the same thing about drivers. There were a couple times where we felt like they were trying to convince us that open source drivers are a good idea. We had to remind them that they were preaching to the choir. :) Their problem with closed drivers (on all platforms) is that it's such a blackbox that they have to play guess-and-check games. There's no way for them to know how changing a particular setting will affect the performance. If performance gets worse, they have no way to know why. If they can see where time is going in the driver, they can make much more educated guesses."

Comments (59 posted)

Page editor: Nathan Willis

Announcements

Brief items

Petition: Share government-developed software under an open source license

A petition is available for signing urging the US government to "Maximize the public benefit of federal technology by sharing government-developed software under an open source license." One of the top three reasons given is "Openness: Open Sourcing ensures basic fairness and transparency by making software and related artifacts available to the citizens who provided funding, consistent with the President’s 2009 declaration that “Information maintained by the Federal Government is a national asset.”" Registration is required to sign the petition. The goal is 25,000 signatures by August 16, and the petition has a long way to go to reach it.

Comments (none posted)

X.Org Foundation achieves non-profit public charity status

The X.Org Foundation has announced that it has been recognized as a non-profit public charity in the U.S., making tax-deductible donations possible. "Contributions to the X.Org Foundation are used to further our mission to support the development of the open source graphics stack based on the X Window System, and to educate students and developers about the technology in the stack."

Full Story (comments: 4)

New Books

Deploying Rails--New from Pragmatic Bookshelf

Pragmatic Bookshelf has released "Deploying Rails" by Tom Copeland and Anthony Burns.

Full Story (comments: none)

Upcoming Events

GStreamer Conference schedule announced

The schedule for this year's GStreamer conference is now available. The conference will be held August 27-28 in San Diego, CA just prior to LinuxCon North America. "In addition to talks about core GStreamer topics like GStreamer 1.0, the GStreamer SDK and hardware enablement with GStreamer 1.0, we can also offer a great selection of talks on related multimedia technologies such as ALSA, Wayland, V4L, OpenGL and Mesa and the Opus Audio Codec this year."

Full Story (comments: none)

Bitcoin Conference

The Bitcoin Conference will take place September 15-16, 2012 in London, UK. "This conference will be bringing Bitcoin to the mainstream. We will reach out to the general public with primer talks and workshops. We hope for this conference to bring greater opportunities to the community by promoting upcoming talent, and bringing investment to this fledgling economy."

Full Story (comments: none)

The Linux Foundation Announces Keynote Speakers, Co-located Events for LinuxCon Europe

The Linux Foundation has announced the keynote speakers and the co-located events for LinuxCon Europe. LinuxCon Europe and the Embedded Linux Conference Europe (ELCE) will take place in Barcelona, Spain, November 5-7, 2012. The call for papers for both is open until August 1. Other co-located events include EFL Dev Day, KVM Forum, Yocto Project Developer Day, and more.

Comments (none posted)

PyCon Argentina 2012

PyConAR will take place November 12-17, 2012 in Buenos Aires, Argentina. A PostgreSQL mini-conference will be held in parallel with PyConAR sprints.

Full Story (comments: none)

LCA2013 call for proposals closes

linux.conf.au has closed its call for proposals, receiving a near-record number of submissions in the conference's history, which dates back to 1999. Successful proposals will be announced in September 2012. LCA will be held January 28 - February 2, 2013 in Canberra, Australia.

Full Story (comments: none)

Events: July 26, 2012 to September 24, 2012

The following event listing is taken from the LWN.net Calendar.

Date(s)                  Event (Location)

July 26-29               GNOME Users And Developers European Conference (A Coruña, Spain)
August 3-4               Texas Linux Fest (San Antonio, TX, USA)
August 8-10              21st USENIX Security Symposium (Bellevue, WA, USA)
August 18-19             PyCon Australia 2012 (Hobart, Tasmania)
August 20-22             YAPC::Europe 2012 in Frankfurt am Main (Frankfurt/Main, Germany)
August 20-21             Conference for Open Source Coders, Users and Promoters (Taipei, Taiwan)
August 25                Debian Day 2012 Costa Rica (San José, Costa Rica)
August 27-28             XenSummit North America 2012 (San Diego, CA, USA)
August 27-28             GStreamer conference (San Diego, CA, USA)
August 27-29             Kernel Summit (San Diego, CA, USA)
August 28-30             Ubuntu Developer Week (IRC)
August 29-31             2012 Linux Plumbers Conference (San Diego, CA, USA)
August 29-31             LinuxCon North America (San Diego, CA, USA)
August 30-31             Linux Security Summit (San Diego, CA, USA)
August 31-September 2    Electromagnetic Field (Milton Keynes, UK)
September 1-2            Kiwi PyCon 2012 (Dunedin, New Zealand)
September 1-2            VideoLAN Dev Days 2012 (Paris, France)
September 1              Panel Discussion Indonesia Linux Conference 2012 (Malang, Indonesia)
September 3-8            DjangoCon US (Washington, DC, USA)
September 3-4            Foundations of Open Media Standards and Software (Paris, France)
September 4-5            Magnolia Conference 2012 (Basel, Switzerland)
September 8-9            Hardening Server Indonesia Linux Conference 2012 (Malang, Indonesia)
September 10-13          International Conference on Open Source Systems (Hammamet, Tunisia)
September 14-16          Debian Bug Squashing Party (Berlin, Germany)
September 14-21          Debian FTPMaster sprint (Fulda, Germany)
September 14-16          KPLI Meeting Indonesia Linux Conference 2012 (Malang, Indonesia)
September 15-16          Bitcoin Conference (London, UK)
September 15-16          PyTexas 2012 (College Station, TX, USA)
September 17-19          Postgres Open (Chicago, IL, USA)
September 17-20          SNIA Storage Developers' Conference (Santa Clara, CA, USA)
September 18-21          SUSECon (Orlando, Florida, US)
September 19-20          Automotive Linux Summit 2012 (Gaydon/Warwickshire, UK)
September 19-21          2012 X.Org Developer Conference (Nürnberg, Germany)
September 21             Kernel Recipes (Paris, France)
September 21-23          openSUSE Summit (Orlando, FL, USA)

If your event does not appear here, please tell us about it.

Page editor: Rebecca Sobol


Copyright © 2012, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds