Weekly Edition for April 18, 2013

LGM: Collaboration and animation

By Nathan Willis
April 17, 2013

There were not any major new software releases at this year's Libre Graphics Meeting (LGM) in Madrid, although there were updates from most of the well-known open source graphics and design projects. But the annual gathering also offers an instructive look at where development is heading, both based on what new problems the programmers are tackling and on what the creative professionals report as their areas of concern. This year, two themes recurred in many of the sessions: collaboration (often in real-time) is coming to many creative-suite tools, and 2D animation is gaining momentum.

LGM was hosted at Medialab Prado, a brand-new facility in central Madrid, from April 10 through 13. As is typical of the event, talks were divided up among software development teams, artists, educators, and theorists exploring more abstract topics like consensus-building in communities. By coincidence, the best-known application projects from the graphics community (GIMP, Krita, Inkscape, Blender, Scribus, MyPaint) all happen to be in between stable releases, but there was still plenty to discuss. The conference runs a single track for all sessions (plus side meetings and workshops); altogether there were 68 presentations this year—which represented a broad sample of the open source graphics and design community.

Join together

One oft-repeated subject this year was development work to enable collaboration. Ben Martin of FontForge showcased the font editor's recently added ability to share a simultaneous editing session between multiple users. The work was underwritten by Dave Crossland, principally for use in type-design workshops, although Martin pointed out that it has other applications as well, such as running task-specific utilities in the background (for example, ttfautohint), or tweaking a font in the editor while showing a live preview of the result in a typesetting program.

It is sometimes assumed that collaborative editing requires deep changes to an application, but the FontForge implementation touches relatively little of the existing codebase. It hooks into the existing undo/redo system, which already handles serializing and recording operations. The collaboration feature is enabled by the first user session starting a server process; subsequent client sessions connect to the server. The server sends a copy of the open file to each new client; afterward any change on any client is relayed to all of the others. The code assumes a low-latency network at present, which simplifies synchronization, but the ZeroMQ library which handles the underlying networking is capable of running over almost any channel (including TLS and other secure options).
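The server-relay design described above can be sketched in a few lines. The classes and method names below are illustrative, not FontForge's actual API, and the in-process relay stands in for the ZeroMQ transport the real implementation uses:

```python
# Minimal sketch of the relay architecture: the first session acts as a
# server, later sessions connect as clients, and any change made in one
# session is re-broadcast to all the others.  In FontForge the relayed
# operations come from the undo/redo system and travel over ZeroMQ
# sockets; here plain Python method calls model the same flow.

class CollabServer:
    def __init__(self, document):
        self.document = document          # canonical copy of the open file
        self.clients = []

    def connect(self, client):
        # Each new client first receives a full copy of the open file.
        client.document = list(self.document)
        self.clients.append(client)

    def relay(self, sender, op):
        # Apply the operation to the canonical copy, then forward it
        # to every client except the one that produced it.
        self.document.append(op)
        for client in self.clients:
            if client is not sender:
                client.apply(op)

class CollabClient:
    def __init__(self, server):
        self.document = []
        server.connect(self)
        self.server = server

    def edit(self, op):
        # A local edit changes the local document and goes to the server.
        self.document.append(op)
        self.server.relay(self, op)

    def apply(self, op):
        # A remote edit updates the document but not the local undo stack,
        # which, as Martin noted, must be tracked separately.
        self.document.append(op)
```

With two clients connected, an edit in one session appears in the other, mirroring the behavior Martin demonstrated.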

Though the overall design of the collaboration feature is straightforward, there are several challenges. First, undo/redo does not always correspond to a single action that needs to be relayed. There are no-ops, such as selection and deselection, which do not alter the file but still need to be tracked locally. FontForge also has seven different internal flavors of "undo" to handle special cases—such as when moving a spline point affects the total width of a glyph. When that happens, the user sees it only as moving a point on the canvas, but fonts store the width and side-bearings of a glyph as separate data. So one operation affects several parts of the file, but must still be undoable as a single operation. And users must be able to use the existing Undo and Redo commands from the menu, so the local undo/redo stack must be tracked separately from the collaboration stack.

There are also some editing tools that have not yet been hooked into the collaboration feature, as well as non-canvas editing tasks like adding layers or editing metadata tables. In addition, at the moment only FontForge's native SFD file format is supported. But Martin argued that more open source graphics applications ought to pursue real-time collaboration features. Proprietary applications by and large do not offer it, but with a robust undo/redo stack, implementation using ZeroMQ is within reach.

The 2D animation studio Tupi also supports real-time collaborative editing, as Gustav Gonzalez explained. Gonzalez's talk did not go into as much detail as the FontForge talk did, but that was in part because the Tupi team was making its first LGM appearance, and had far more ground to cover. But Gonzalez told the audience that nothing would prepare them for the strangeness of collaborative editing, when the elements in a video frame suddenly start to move without their intervention.

Real-time editing, with multiple users altering one file at the same time, certainly has its uses—Gonzalez observed that animation projects deal with far more data than still image editors, seeing as they have dozens of frames per second to worry about. But enabling better collaboration between teams working asynchronously came up in multiple presentations, too. Julien Deswaef presented a session on using distributed version control in design projects. Deswaef noted that many of the file formats which graphics designers use today are text-based, including SVG, DXF, OBJ, and STL, and that they are often accustomed to using version control on web-design projects.

But while version control is quite easy to get started with when developing a web site based on Bootstrap.js or another GitHub-hosted framework, Git support is not integrated into most desktop tools. Consequently, Deswaef has started his own branch of Inkscape that features Git integration. The basic commands are in place, such as init, commit, and branch, but he is still working on the more complicated ones, such as rolling back.
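Deswaef's code was not shown, but the basics of such an integration need little more than shelling out to git from the editor. This is a hedged sketch, not his implementation; the function names are invented for illustration:

```python
# A sketch of how a desktop editor might wrap basic Git commands,
# e.g. committing the working SVG file on each "Save" action.
# Requires a git binary on the PATH.

import subprocess

def git(workdir, *args):
    """Run a git command in the project directory and return its output."""
    result = subprocess.run(["git", "-C", workdir, *args],
                            capture_output=True, text=True, check=True)
    return result.stdout.strip()

def commit_design(workdir, filename, message):
    # Stage the saved file and record a commit with the given message.
    git(workdir, "add", filename)
    git(workdir, "commit", "-m", message)
```

The harder problems Deswaef mentioned, such as rolling back and visual diff, sit above this plumbing layer in the user interface.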

The "holy grail," he said, is a "visual diff" that will allow users to highlight changes in a file. How best to implement visual diff in the interface is an ongoing debate in design circles, he said. Currently Adobe offers only screenshot-like snapshots of files in its versioning system, which is not a solution popular with many users. SVG's XML underpinnings ought to allow Inkscape to do better, perhaps via CSS styling, he said. Ultimately, not every file format would integrate well with Git (raster formats in particular), but illustrators sharing and reusing vector art on GitHub could be as important as web designers using Git for site designs.

It's sunny; let's share

Of course, collaboration is not a concept limited to end users. Tom Lechner spoke about "shareable tools," his proposal that graphics applications find a common way to define on-canvas editing tools, so that they can be more easily copied between programs. Lechner is famous for the constantly evolving interface of Laidout, his page-imposition program; in another session he showcased several new Laidout editing tools, which included a new on-canvas interface for rotating and reflecting objects. Graphics applications' tools change regularly, so perhaps there is hope that the proposal will attract interest—during a workshop session later in the week, developers from MyPaint, Krita, GIMP, and other applications did hash out the basics of a scheme to share paintbrushes across applications.

Several other sessions touched on collaboration features. Susan Spencer expressed interest in adding real-time collaboration to Tau Meta Tau Physica, her textile-pattern-making application, and several artists commented that collaboration had become a critical part of their workflows. But without doubt, the most unusual demonstration of collaborative features was the Piksels and Lines Orchestra, which performed a live "opera" on its own branches of Scribus, MyPaint, and Inkscape.

The applications were modified to hook sound events into each editing action (cut/copy/paste, painting, transforming images, and so on); the sounds were mixed together live using PulseAudio. Four artists on stage drew and edited in the various applications, while Jon Nordby of MyPaint conducted and Brendan Howell of PyCessing narrated. If you are having a hard time imagining the performance, you would be forgiven—experiencing it is really the only solution. Video of the sessions is scheduled to be archived at the Medialab Prado site shortly.

Animated discussions

The Tupi animation program was a welcome addition to the LGM program. Gonzalez provided an overview of the application (which began as a fork of the earlier project KToon), showed a video made with Tupi (featuring the voice of Jon "Maddog" Hall), and discussed the development roadmap. An Android version is currently in the works, and plug-ins are planned to simplify creating and importing work from Inkscape, MyPaint, and Blender.

There was not a session from the Synfig team, the other active 2D animation tool, but there was both a Synfig birds-of-a-feather meeting and a hands-on Synfig workshop, which was run by Konstantin Dmitriev from the Morevna Project. Perhaps the highest-profile discussion about 2D animation came in the talk delivered by animator Nina Paley, who lamented the state of the open source 2D animation programs. Paley is quite experienced and fairly well known as an animator. But she expressed frustration with the open source tools available, particularly in terms of their usability and discoverability.

Paley has been trying to move to an open source animation suite ever since Adobe canceled its Flash Studio product, she said, eager to avoid being locked into another proprietary tool that could be discontinued. But the difficulty involved in figuring out Synfig, Blender, and the other options makes staying on Flash on an old Mac machine preferable. People tell her she should just learn to program, she said, but that kind of comment misses the point: both her passion and her talent are for creating the animations; advising her to stop doing that and start programming instead would not result in good animation or good software.

As one might expect, Paley's dissatisfaction with Synfig elicited a passionate response from Dmitriev during the question and answer section; he argued that Synfig was perfectly usable, which he had demonstrated to Paley during the workshop, and repeatedly shows by holding training sessions for children. Paley replied by agreeing that the workshop had been useful, but said it also revealed the trouble: Synfig currently requires hands-on education. But, she said, she was willing to keep learning. As it stands, she is stuck on Flash, but she emphasized that this is a purely practical choice. Personally, she is very committed to free and open culture, which she demonstrated by showing an "illegal" animated short she had made that used music clips which were still under copyright.

Paley's talk expressed frustration, but in the larger scheme of things, the fact that 2D animation was discussed at all was a new development. Several other talks touched on it, including Thierry Ravet's session about teaching stop-motion animation at Numediart, and Jakub Steiner's presentation about creating animated new-user help for GNOME's Getting Started. Getting Started used Blender for its animation; Numediart used Pencil (which was also mentioned in passing during several other sessions).

While it is true that 2D animation with open source tools is currently a trying endeavor, not too long ago it was impossible. The growth of the topic for LGM 2013 bodes well for the future; not too many years ago, the pain-points at LGM were things like color management and professional print-ready output—now those features are a given.

Hearing criticisms about the projects can be uncomfortable, but it is part of the value of meeting in person. As Steiner observed on his blog: "Feedback from an animator struggling to finish a task is million times more valuable than online polls asking for a feature that exists in other tools." Collaborative editing tools are an area where open source may be a bit ahead of the proprietary competition, while in animation, open source may be a bit behind. But considering their frequency in this year's program, one should expect both to be major growth areas in the years ahead.

[The author wishes to thank Libre Graphics Meeting for assistance with travel to Madrid.]


Surveying open source licenses

By Michael Kerrisk
April 17, 2013

In adjacent slots at the 2013 Free Software Legal and Licensing Workshop, Daniel German and Walter van Holst presented complementary talks that related to the topic of measuring the use of free software and open source (FOSS) licenses. Daniel's talk considered the challenges inherent in trying to work out which are the most widely used FOSS licenses, while Walter's talk described his attempts to measure license proliferation. Both of those talks served to illustrate just how hard it can be to produce measurements of FOSS licenses.

Toward a census of free and open source software licenses

Daniel German is a Professor in the Department of Computer Science at the University of Victoria in Canada. One of his areas of research is a topic that interests many other people also: which are the most widely used FOSS licenses? His talk considered the methodological challenges of answering that question. He also looked at how those challenges were addressed in studying license usage in a subset of the FOSS population, namely, Linux distributions.

Finding the license of a file or project can be difficult, Daniel said. This is especially so when trying to solve the problem for a large population of files or projects. "I'm one of the people who has probably seen the most different licenses in his lifetime, and it's quite a mess." The problem is that projects may indicate their license in a variety of ways: via comments in source code files, via README or COPYING files, via project metadata (e.g., on SourceForge or Launchpad), or possibly other means. Other groups then abstract that license data. For example, Red Hat and Debian both do this, although they do it in different ways that are quite labor intensive.

"I really want to stress the distinction between being empirical and being anecdotal." Here, Daniel pointed at the widely cited statistics on FOSS license usage provided by Black Duck Software. One of the questions Daniel asks himself is: can he replicate those results? In short, he cannot. From the published information, he can't determine the methodology or tools that were used. It isn't possible to determine the accuracy of the tools used to identify the licenses. Nor is it possible to determine the names of the licenses that Black Duck used to develop its data reports. (For example, one can look at the Black Duck license list and wonder whether "GPL 2.0" means GPLv2-only, GPLv2-or-later, or possibly both.)

Daniel then turned to consider the challenges that are faced when trying to take a census of FOSS licenses. When one looks at the question of what licenses are most used, one of the first questions to answer is: what is "the universe of licenses" that are considered to be FOSS? For example, the Open Source Initiative has approved a list of around 70 licenses. But, there are very many more licenses in the world that could be broadly categorized as free, including rather obscure and little-used licenses such as a Beerware license. "I don't think anyone knows what the entire universe is." Thus, one must begin by defining the subset of licenses that one considers to be FOSS.

Following on from those questions is the question of what constitutes "an individual" for the purpose of a census. Should different versions of the same project be counted individually? (What if the license changes between versions?) Are forks individual? What about "like" forks on GitHub? Do embedded copies of source code count as individuals? Here, Daniel was referring to the common phenomenon of copying source code files in order to simplify dependency management. And then, is an individual defined at the file level or at the package level? It's very important from a methodological point of view that we are told what is being counted, not just the numbers, Daniel said.

Having considered the preceding questions, one then has to choose a corpus on which to perform the census. Any corpus will necessarily be biased, Daniel said, because the fact that the corpus was gathered for some purpose implies some trade-offs.

Two corpuses that Daniel likes are the Red Hat and Debian distributions. One reason that he likes these distributions is that they provide a clearly defined data set. "I can say, I went to Debian 5.0, and I determined this fact." Another positive aspect of these corpuses is that they are proxies for "successful" projects. The fact that a project is in one of those distributions indicates that people find it useful. That contrasts with a random project on some hosting facility that may have no users at all.

While presence in a Linux distribution can be taken as a reasonable proxy of a successful project, a repository such as Maven Central is, by contrast, a "big dumpster" of Java code, but "it's Java code that is actually being used by someone". On the other hand, Daniel called SourceForge the "cemetery for open source". In his observation, there is a thin layer of life on SourceForge, but nobody cares about most of the code there.

Then there are domain-specific repositories such as CPAN, the Perl code archive. There is clearly an active community behind such repositories, but, for the purpose of taking a FOSS license census, one must realize that the contents of such repositories often have a strong bias in favor of particular licenses.

Having chosen a corpus, the question is then how to count the licenses in the corpus. Daniel considered the example of the Linux kernel, which has thousands of files. Those files are variously licensed GPLv2, GPLv2+, LGPLv2, LGPLv2.1, BSD 3 clause, BSD 2 clause, MIT/X11, and more. But the kernel as a whole is licensed GPLv2-only. Should one count the licenses on each file individually, or just the individual license of the project as a whole, Daniel asked. A related question comes up when one looks at the source code of the FreeBSD kernel. There, one finds that the license of some source files is GPLv2. By default, those files are not compiled to produce the kernel binary (if they were, the resulting kernel binary would need to be licensed GPL). So, do binaries play a role in a license census, Daniel asked.

When they started their work on studying FOSS licenses, Daniel and his colleagues used FOSSology, but they found that it was much too slow for studying massive amounts of source code. So they wrote their own license-identification tool, Ninka. "It's not user-friendly, but some people use it."

Daniel and his colleagues learned a lot writing Ninka. They found it was not trivial to identify licenses. The first step is to find the license statement, which may or may not be in the source file header. Then, it is necessary to separate comments from any actual license statement. Then, one has to identify the license; Ninka uses a sentence-based matching algorithm for that task.
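The three steps Daniel described can be illustrated with a toy version of the approach. Ninka's real matcher is far more sophisticated, and the phrase table below is invented for illustration, not taken from Ninka:

```python
# Toy illustration of sentence-based license matching: strip comment
# markers and punctuation from a source header, then look for
# normalized sentences that match known license phrases.

import re

# Invented sample phrases keyed to license names; Ninka ships a much
# larger, curated sentence database.
KNOWN_SENTENCES = {
    "this program is free software you can redistribute it and or modify "
    "it under the terms of the gnu general public license": "GPL",
    "redistribution and use in source and binary forms with or without "
    "modification are permitted": "BSD",
}

def normalize(text):
    # Drop comment markers, lowercase, strip punctuation, collapse spaces.
    text = re.sub(r"[/*#]", " ", text)
    text = re.sub(r"[^\w\s]", " ", text.lower())
    return re.sub(r"\s+", " ", text).strip()

def identify_license(header):
    cleaned = normalize(header)
    for sentence, name in KNOWN_SENTENCES.items():
        if sentence in cleaned:
            return name
    return "Unknown"   # like Ninka, prefer "Unknown" over a guess
```

The final fallback mirrors the design decision discussed below: when in doubt, the tool reports "Unknown" rather than mis-identifying a license.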

Daniel then talked about some results that he and his colleagues have obtained using Ninka, although he emphasized repeatedly that his numbers are very preliminary. In any case, one of the most interesting points that the results illustrate is the difficulty of getting accurate license numbers.

One set of census results was obtained by scanning the source code of Debian 6.0. The scan covered source code files for just four of the more popular programming languages that Daniel found particularly interesting: C, Java, Perl, and Python.

In one of the scans, Ninka counted the number of source files per license. Unsurprisingly, GPLv2+ was the most common license. But what was noteworthy, he said, is that somewhat more than 25% of the source code files have no license, although there might be a license file in the same directory that allows one to infer what the license is.

In addition, Ninka said "Unknown" for just over 15% of the files. This is because Ninka has been consciously designed to have a strong bias against mis-identifying licenses. If it has any doubt about the license, Ninka will return "Unknown" rather than trying to make a guess; the 15% number is an indication of just how hard it can be to identify the licenses in a file. Ninka does still occasionally make mistakes. The most common reason is that a source file has multiple licenses and Ninka does not identify them all; Daniel has seen a case where one source code file had 30 licenses.

The other set of results that Daniel presented for Debian 6.0 measured packages per license. In this case, if at least one of the source files in a package uses a license, then that use is counted as an individual for the census. Again, the GPLv2+ is the most common of the identified licenses, but comparing this result against the "source files per license" measure showed some interesting differences. Whereas the Eclipse Public License version 1 (EPLv1) easily reached the list of top twenty most popular source file licenses, it did not appear in the top twenty packages licenses. The reason is that there are some packages—for example, Eclipse itself—that consist of thousands of files that use the EPLv1 license. However, the number of packages that make any use of the EPLv1 as a license is relatively small. Again, this illustrated Daniel's point about methodology when it comes to measuring FOSS license usage: what is being measured?
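The difference between the two counting methods is easy to reproduce. The package data below is invented to mirror the EPLv1 effect: one package with many EPLv1 files dominates the per-file count but adds only one to the per-package count:

```python
# Per-file counting versus per-package counting, where a package is
# counted once for each license that any of its files uses.

from collections import Counter

def count_per_file(packages):
    counts = Counter()
    for files in packages.values():
        counts.update(files)              # every file counts
    return counts

def count_per_package(packages):
    counts = Counter()
    for files in packages.values():
        counts.update(set(files))         # each license once per package
    return counts

# Invented sample data: one Eclipse-like package with many EPLv1 files.
packages = {
    "eclipse":   ["EPLv1"] * 1000,
    "hello":     ["GPLv2+"],
    "coreutils": ["GPLv2+", "GPLv2+"],
}
```

Here EPLv1 wins the per-file count by a wide margin yet registers only a single package, which is the shape of the discrepancy Daniel reported for Debian 6.0.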

Daniel then looked at a few other factors that illustrated how a FOSS license census can be biased. In one case, he looked at the changes in license usage in Debian between version 5.0 and 6.0. Some licenses showed increased usage that could be reasonably explained. The GPLv3 was one such license: as a new, well-publicized license, the reasons for its usage are easily understood. On the other hand, the EPLv1 license also showed significant growth. But, Daniel explained, that was at least in part because, for legal reasons, Java code that uses that license was for a long time under-represented in Debian.

Another cause of license bias became evident when Daniel turned to look at per-file license usage broken down across three languages: Java, Perl, and Python. Notably, around 50% of Perl and Python source files had no license; for Java, that number was around 12%. "Java programmers seem to be more proactive about specifying licenses." Different programming language communities also show biases towards particular licenses: for example, the EPLv1 and Apache v2 licenses are much more commonly used with Java than with Python or Perl; unsurprisingly the "Same as Perl" license is used only with Perl.

In summary, Daniel said: "every time you see a census of licenses, take it with a grain of salt, and ask how it is done". Any license census will be biased, according to the languages, communities, and products that it targets. Identifying licenses is hard, and tools will make mistakes, he said. Even a tool such as Ninka that tries to very carefully identify licenses cannot do that job for 15% of source files. For a census, 15% is a huge amount of missing data, he said.

License proliferation: a naive quantitative analysis

Walter van Holst is a legal consultant at the Dutch IT consulting company mitopics. His talk presented what he describes as "an extremely naive quantitative analysis" of license proliferation.

The background to Walter's work is that in 2009 his company sponsored a Master's thesis on license proliferation that produced some contradictory results. The presumption going into the research was that license proliferation was a problem. But some field interviews conducted during the research found that the people in free software communities didn't seem to consider license proliferation to be much of a problem. Four years later, it seemed to Walter that it was time for a quantitative follow-up to the earlier research, with the goal of investigating the topic of license proliferation further.

In trying to do a historical analysis of license proliferation, one problem that Walter encountered is that there were few open repositories that could be used to obtain historical license data. Thus, trying to use one of the now popular FOSS project-hosting facilities would not allow historical analysis. Therefore, Walter instead chose to use data from a software index, namely Freecode (formerly Freshmeat, before an October 2011 name change). Freecode provides project licensing information that is available for download from FLOSSmole, which acts as a repository for dumps of metadata from other repositories. FLOSSmole commenced adding Freecode data in 2005, but Walter noted that the data from before 2009 was of very low quality. On the other hand, the data from 2009 onward seemed to be of high enough quality to be useful for some analysis.

How does one measure license proliferation? One could, Walter said, consider the distribution of license choices across projects, as Daniel German has done. Such an analysis may provide a sign of whether license proliferation is a problem or not, he said.

Another way of defining license proliferation is as a compatibility problem, Walter said. In other words, if there is proliferation of incompatible licenses, then projects can't combine code that technically could be combined. Such incompatibility is, in some sense, a loss in the value of that FOSS code. This raises a related question, Walter said: "is one-way license compatibility enough?" (For example, there is one-way compatibility between the BSD and GPL licenses, in the direction of the GPL: code under the two licenses can be combined, but the resulting work must be licensed under the GPL.) For his study, Walter presumed that one-way compatibility is sufficient for two projects to be considered compatible.
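One-way compatibility as Walter described it can be modeled as a small lookup: permissive code can flow into a copyleft work, and the combined work takes the more restrictive license. The tiny table below is illustrative only, not a legal reference:

```python
# A sketch of one-way license compatibility: each entry maps an ordered
# pair of licenses to the license of the combined (derived) work.

ONE_WAY_COMPATIBLE = {
    ("BSD", "GPL"): "GPL",    # BSD code may be combined into a GPL work
    ("MIT", "GPL"): "GPL",
    ("BSD", "MIT"): "MIT",
}

def combine(license_a, license_b):
    """Return the license of the combined work, or None if incompatible."""
    if license_a == license_b:
        return license_a
    return (ONE_WAY_COMPATIBLE.get((license_a, license_b))
            or ONE_WAY_COMPATIBLE.get((license_b, license_a)))
```

Under Walter's assumption, any pair for which combine() returns a license counts as compatible, regardless of which direction the code has to flow.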

Going further, how can one assign a measure to compatibility, Walter asked. This is, ultimately, an economic question, he said. "But, I'm still not very good at economics", he quipped. So, he instead chose an "extremely naive" measure of compatibility, based on the following assumptions:

  1. Treat all open source projects in the analysis as nodes in a network.
  2. Consider all possible links between pairs of nodes (i.e., combinations of pairs of projects) in the network.
  3. Treat each possible combination as equally valuable.

This is, of course, a rather crude approach that treats a combination between, say, the GNU C library (glibc) and some obscure project with few users as equal in importance to the combination of glibc and gcc. This approach also completely ignores language incompatibilities, which is questionable, since it seems unlikely that one would want to combine Lisp and Java code, for example.

Given a network of N nodes, the potential "value" of the network is the maximum number of possible combinations of two nodes. The number of those combinations is N*(N-1)/2. From a license-compatibility perspective, that potential value would be fully realized if each node was license-compatible with every other node. So, for example, Walter's 2009 data set consisted of 38,674 projects, and, following the aforementioned formula, the total possible interconnections would be approximately 747.8 million.
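The figures in this section follow directly from the pairwise-combination formula, as a quick check shows:

```python
# Walter's "potential value" of a network: the number of distinct pairs
# that N nodes can form, N*(N-1)/2.

def potential_value(n_projects):
    return n_projects * (n_projects - 1) // 2

# 2009 data set: 38,674 projects in total, of which 38,171 were
# compatible under "any version of the GPL" (numbers from the talk).
total = potential_value(38674)       # about 747.8 million pairs
compatible = potential_value(38171)  # about 728.5 million pairs
loss = total - compatible            # about 19.3 million pairs lost
```

These are the 747.8 million, 728.5 million, and 19.3 million figures cited in the surrounding text.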

Walter's measure of license incompatibility in a network is then based on asking two questions:

  • For each license in the network, how many combinations of two nodes in the network can produce a derived work under that license? For example, how many pairs of projects under GPL-compatible licenses can be combined in the network?
  • Considering the license that produces the largest number of possible connections for a derived work, how does the number of connections for that license measure up against the total number of possible combinations?

Perhaps unsurprisingly, the license that allows the largest number of derived work combinations is "any version of the GPL". By that measure, 38,171 projects in the data set were compatible, yielding 728.5 million interconnections.

Walter noted that the absolute numbers don't matter in and of themselves. What does matter is the (proportional) difference between the size of the "best" compatible network and the theoretically largest network. For 2009, that loss is the difference between the two numbers given above, which is 19.3 million. Compared to the total potential connections, that loss is not high (expressed as a proportion, it is 2.5%). Or to put things another way, Walter said, these figures suggest that in 2009, license proliferation appears not to have been too much of a problem.

Walter showed corresponding numbers for subsequent years, which are tabulated below. (The percentage values in the "Value loss" column are your editor's addition, to try and make it easier for the reader to get a feel for the "loss" value.)

Year   Potential value (millions)   Value loss (millions)   GPL market share
2009   747.8                        19.3  (2.5%)            72%
2010   534.6                        30.8  (5.7%)            63%
2011   565.9                        56.4  (9.9%)            61%
2012   599.6                        79.8 (13.3%)            59%
2013   621.6                        60.3  (9.7%)            58%

The final column in the table shows the proportion of projects licensed under "any version of the GPL". In addition, Walter presented pie charts that showed the proportion of projects under various common licenses. Notable in those data sets was that, whereas in 2009 the proportion of projects licensed GPLv2-only and GPLv3 was respectively 3% and 2%, by 2013, those numbers had risen to 7% and 5%.

Looking at the data in the table, Walter noted that the "loss" value rises from 2010 onward, suggesting that incompatibility resulting from license proliferation is increasing.

Walter then drew some conclusions that he stressed should be treated very cautiously. In 2009, license proliferation appears not to have been much of a problem. But looking at the following years, he suggested that the increased "loss" value might be due to the rise in the number of projects licensed GPLv2-only or GPLv3-only. In other words, incompatibility rose because of a licensing "rift" in the GPL community. The "loss" value decreased in 2013, which he suggested may be due to an increase in the number of projects that have moved to Apache License version 2 (which has better license compatibility with the GPL family of licenses).

Concluding remarks

In questions at the end of the session, Daniel and Walter both readily acknowledged the limitations of their methodologies. For example, various people raised the point that the Freecode license information used by Walter tends to be out of date and inaccurate. In particular, the data does not seem to be too precise on which version of the GPL a project is licensed under; the license for many projects is simply recorded as "GPL", which is the source of Walter's "any version of the GPL" measure above. Walter agreed that his source data is dirty, but pointed out that the real question is how to get better data.

As Walter also acknowledged, his measure of license incompatibility is "naive". However, his goal was not to present highly accurate numbers. Instead, he wants to get some clues about possible trends and suggest some ideas for future study. It is easy to see other ways in which his results might be improved. Comparing his presentation with Daniel's, one can immediately come up with ideas that could lead to improvements. For example, approaches that consider compatibility at the file level or bring programming languages into the equation might produce some interesting results.

Inasmuch as one can find faults in the methodologies used by Daniel and Walter, that is only possible because, unlike the widely cited Black Duck license census, they have actually published their methodologies. In revealing their methodologies and the challenges they faced, they have shown that any FOSS licensing survey that doesn't publish its methodology should be treated with considerable suspicion. Clearly, there is room for further interesting research in the areas of FOSS license usage, license proliferation, and license incompatibility.

Comments (6 posted)

Current challenges in the free software ecosystem

By Michael Kerrisk
April 17, 2013

Given Eben Moglen's long association with the Free Software Foundation, his work on drafting the GPLv3, and his role as President and Executive Director of the Software Freedom Law Center, his talk at the 2013 Free Software Legal and Licensing Workshop promised to be thought-provoking. He chose to focus on two topics that he saw as particularly relevant for the free software ecosystem within the next five years: patents and the decline of copyleft licenses.

The patent wars

"We are in the middle of the patent war, and we need to understand where we are and where the war is going." Eben estimates the cost of the patent war so far at US$40 billion—an amount spread between the costs of ammunition (patents and legal maneuvers) and the costs of combat (damage to business). There has been no technical benefit of any kind from that cost, and the war has reached the point where patent law is beginning to distort the business of major manufacturers around the world, he said.

The effort that gave rise to the patent war—an effort primarily driven by the desires of certain industry incumbents to "stop time" by preventing competitive development in, for example, the smartphone industry—has failed. And, by now, the war has become too expensive, too wasteful, and too ineffective even for those who started it. According to Eben, now, at the mid-point of the patent war, the costs of the combat already exceed any benefit from the combat—by now, all companies that make products and deliver services would benefit from stopping the fight.

The nature of the war has also begun to change. In the US, hostility to patents was previously confined mainly to the free software community, but has now widened, Eben said. Richard Posner, a judge on the US Court of Appeals for the Seventh Circuit, has spoken publicly against software patents (see, for example, Posner's blog post and article in The Atlantic). The number of American-born Nobel Prize winners who oppose software patents is rising every month. The libertarian wing of the US Republican party has started to come out against software patents (see, for example, this article, and this article by Ramesh Ponnuru, a well-known Republican pundit).

Thus, a broader coalition against software patents is likely to make a substantial effort to constrain software patenting for the first time since such patenting started expanding in the early 1990s, Eben said. The dismissal of the patent suit by Uniloc against Red Hat and Rackspace was more than a victory for the Red Hat lawyers, he said. When, as in that case, it is possible to successfully question the validity of a patent in a motion to dismiss, this signals that the economics of patent warfare are now shifting in the direction of software manufacturers. (An explanation of some of the details of US legal procedure relevant to this case can be found in this Groklaw article.)

Illustrating the complexities and international dimensions of the patent war, Eben noted that even as the doctrine of software patent ownership is beginning to collapse in the US, the patent war is spreading that doctrine to other parts of the world. Already, China, the second largest economy in the world, is issuing tens of thousands of patents a year. Before the end of the patent war—which Eben predicts will occur two to four years from now—China's software industry will be extensively patented. The ownership of those patents is concentrated in the hands of a few people and organizations with extremely strong ties to government in a system of weak rule of law, he said.

Long before peace is reached, the strategists and lawyers who got us into the patent war will be asking how to get out of the mess that the war has gotten them into, and everyone else in the industry is going to feel like collateral damage, Eben said. As usual, the free (software) world has been thinking about this problem longer than the business world. "We are going to save you in the end, just as we saved you by making free software in the first place."

We're at the mid-point of the patent war over mobile, Eben said. The "cloud services ammunition dumps [patents] will begin to go up in flames" about a year and a half from now. Those "ammunition dumps" are the last ones that have not yet been exploited in the patent wars; they're going to be exploited, he said. He noted that some companies will be feeling cornered after IBM's announcement that its cloud services will be based on OpenStack. Those companies will now want to use patents to stop time.

As the patent wars progress, we're going to become more dependent on organizations such as the Open Invention Network (OIN) and on community defense systems, Eben said. OIN will continue to be a well-funded establishment; SFLC will continue to scrape by. Anyone in the room who isn't contributing to SFLC through their institutions is making a serious mistake, Eben said, because "we're able to do things you [company lawyers] can't do, and we can see things you cannot; you should be helping us, we're trying to help you".

The decline of copyleft

Eben then turned to a discussion of copyleft licenses, focusing on the decline in their use and the implications of that decline for industry sectors such as cloud services.

The community ecosystem of free software that sustains the business of "everyone in this room" is about to have too little copyleft in it, Eben said. He noted that from each individual firm's perspective, copyleft is an irritation. But seen across the industry as a whole, there is a minimum quantity of copyleft that is desirable, he said.

Up until now, there has been sufficient copyleft: the toolchain was copyleft, as was the kernel. That meant that companies did not invest in product differentiation in layers where such differentiation would cost more than it would benefit the company's bottom line. While acknowledging that there is a necessary lower bound to trade secrecy, Eben noted the "known problem" that individual firms always over-invest in trade secrecy.

The use of copyleft licenses has helped all major companies by allowing them to avoid over-investment in product differentiation, Eben said. In support of that point, he noted that the investments made by most producers of proprietary UNIX systems were an expensive mistake. "It was expensive to end the HP-UX business. It cost a lot to get into AIX, and it cost even more to get out." Such experiences meant that the copyleft-ness of the Linux kernel was welcomed, because it stopped differentiation in ways that were more expensive than they were valuable.

Another disadvantage of excess differentiation is that it makes it difficult to steal one another's customers, Eben said. And as businesses move from client-server architectures to cloud-to-mobile architectures, "we are entering a period where everyone wants to steal everyone else's customers". One implication of these facts is that more copyleft right now at layers where new infrastructure is being developed would prevent over-investment in (unnecessary) differentiation, he said. In Eben's view, people will come to look on OpenStack's permissive licensing with some regret, because they're going to over-invest in orchestration and management software layers to compete with one another. "I am advising firms around the world that individually are all spending too much money on things they won't share, which will create problems for them in the future." Eben estimates that several tens of millions of dollars are about to be invested that could have been avoided if copyleft licenses were used for key parts of the cloud services software stack.

There are other reasons that we are about to have too little copyleft, Eben said. Simon Phipps is right that young developers are losing faith in licensing, he said. Those developers are coming to the conclusion that permission culture is not worth worrying about and that licenses are a small problem. If they release software under no license, then "everyone in this room" stands to lose a lot of money because of the uncertainty that will result. Here, Eben reminded his audience of Stefano Zacchiroli's point that the free software community needs help in explaining why license culture is critically important. (Eben's talk at the Workshop immediately followed the keynote speech by Stefano "Zack" Zacchiroli, the outgoing Debian Project Leader, which made for a good fit, since one of Eben's current roles is to act as pro bono legal counsel for the Debian community distribution.)

Eben also noted that SFLC is doing some licensing research on over three million repositories and said that Aaron Williamson is presenting the results at the Linux Foundation Collaboration Summit. Some people may find the results surprising, he said.

Another cause of trouble for copyleft is the rise in copyright trolling around the GPL. That is making people nervous that the license model that has served them well for twenty years is now going to cause them problems. Asked if he could provide some examples of bad actors doing such copyright trolling, Eben declined: "you know how it is"; one presumes he has awareness of some current legal disputes that may or may not become public. However, Eben is optimistic: he believes the copyright trolling problem will be solved and is not overly worried about it.

Eben said that all of the threats he had described—educating the community about licenses, copyright trolls, and over-investment in differentiation in parts of the software stack that should be copyleft but are instead licensed permissively—are going to be problems, but he believes they will be solved. "I'm going to end on a happy note by explaining a non-problem that many people are worrying about unnecessarily."

The OpenStack revolution is putting companies into the software-as-a-service business, which means that instead of distributing binaries they are going to be distributing services. Because of this, companies are worrying that the Affero GPL (AGPL) is going to hurt them. The good news is that it won't, Eben said. The AGPL was designed to work positively with business models based on software-as-a-service in the same way that the GPL was designed to work with business adoption of free software, he said. "We will teach people how the AGPL can be used without being a threat and how it can begin to do in the service world what the GPL did in the non-services software world."

Your editor's brief attempt at clarifying why the AGPL is not a problem but is instead a solution for software-as-a-service is as follows. The key effect of the AGPL is to make the GPL's source-code distribution provision apply even when providing software as a service over a network. However, the provision applies only to the software that is licensed under the AGPL, and to derived works that are created from that software. The provision does not apply to the other parts of the provider's software stack, just as, say, the Linux kernel's GPLv2 license has no effect on the licensing of user-space applications. Thus, the AGPL has the same ability to implicitly create a software development consortium effect in the software-as-a-service arena that the GPL had in the traditional software arena. Consequently, the AGPL holds out the same promise as the GPL of more cheaply creating a shared, non-differentiated software base on which software-as-a-service companies can then build their differentiated software business.

As Eben noted in response to a question later in the morning, if businesses run scared of the AGPL, and each company builds its own specific APIs for network services, then writing software that talks to all those services will be difficult. In addition, there will be wasteful over-investment in duplicating parts of the software stack that don't add differential value for a company and it will be difficult for companies to steal each other's customers. There are large, famous companies whose future depends on the AGPL, he said. "The only question is if they will discover that too late."

Eben concluded on a robustly confident note. "Everything is working as planned; free software is doing what Mr. Stallman and I hoped it would do over the last twenty years." The server market belongs to free software. The virtualization and cloud markets belong to free software. The Android revolution has made free software dominant (as a platform) on mobile devices. The patent wars are a wasteful annoyance, but they will be resolved. The free software communities have answers to the questions that businesses around the world are going to ask. "When Stefano [Zacchiroli] says we are going to need each other, he is being modest; what he means is you [lawyers] need him, and because you need him, and only because you need him, you need me."

Comments (56 posted)

Looking in at GNOME 3.8

April 17, 2013

This article was contributed by Linda Jacobson

On March 27, the GNOME project announced the release of GNOME 3.8. It has a variety of new features, including new privacy settings, desktop clutter reduction, improved graphics rendering and animation transitions, new searching options, and, perhaps most significantly, a Classic mode that restores some of the appearance and usability features of GNOME 2. With these additions, the GNOME team is attempting to broaden the appeal of GNOME 3, so that it will be more attractive to old-time GNOME 2 users, while also being a viable alternative to proprietary systems in the business and professional world.

GNOME 3 was designed to be flexible and highly configurable. As part of that effort, the GNOME extensions web site was introduced six months after GNOME 3. The new Classic mode bundles some of those extensions to give users a way to configure their desktop to look more like GNOME 2. In Classic mode, there are menus for applications and places; windows have minimize and maximize buttons; there is a taskbar that can be used to restore minimized windows and to switch between them; and Alt-Tab shows windows instead of applications. [Classic mode]

A video of Classic mode shows some of these features from a pre-release version of GNOME 3.8. Some rough spots in the interface are on display, but they have likely been fixed in the final release. Although GNOME 3.8 has been released, its general availability will be governed by the release dates of the underlying distributions; for now, it can only be had by running a testing distribution or by building it yourself.

As part of the goal of increasing its attractiveness to the business world, GNOME 3.8 continues to work to keep the desktop uncluttered and streamlined. To this end, the application-launching view has a new "Frequent/All" toggle that will display either recently used programs or all of those available. In addition, some applications are grouped into folders. Each folder icon is a black box that displays mini-icons representing the folder's contents. The effect is that all of the available applications can be seen at a glance. [All]

This interface choice can produce a few problems. The initial state of the Frequent/All toggle is "Frequent", but new users won't have any entries there. Since there is insufficient contrast between the toggle and the default wallpaper, a person who just wades in to use the system can easily miss it, and then no applications will display.

Another problem is that "Help" has been moved into one of the new groups of programs, in this case the "Utilities" group. Former GNOME users who know what Help looks like, and that it is considered an application, will not take long to find it, since there are, at the moment, only two groups. For a new user, however, there is insufficient information to suggest where to look for Help when it is needed most.

Searching from the "Activities" overview has been improved in several ways, both in the way the results are presented and in the new settings available. These allow you to specify a subset of applications and limit your search to that subset. This is useful because it allows you to quickly narrow in on an application of interest.

One of the bigger new items in GNOME 3.8 is the privacy settings. These are designed for people whose desktop is not in a physically private space, allowing a person to keep her name, activities, and viewing history private. There are a number of practical uses for this. [Privacy settings] If you are in a public space, such as working in a coffee shop or on an airplane, you might want to preserve your anonymity. There is a new setting that ensures that your name is not displayed on the computer. When "Name & Visibility" is marked "Invisible", the name in the upper right corner disappears. Beyond that are settings for web and usage history retention, screen locking, and Trash and temporary file cleanup.

There are many other new usability features. For example, better rendering of animated graphics provides smoother transitions in the interface. There is also greater support for internationalization: many more languages are supported, and GNOME's input methods have been improved and expanded. And, of course, there were numerous bug fixes throughout.

In an interview with GNOME 3 designer Jon McCann that appeared on GNOME's website along with the release announcement, he said the overriding goal of GNOME, starting with GNOME 3, is to make it more accessible to application developers. To that end, GNOME 3.8 uses a number of new interfaces and widgets for GTK+ that were not present in 3.6, Allan Day of the GNOME team explained. "These widgets are not available to application developers yet, but will become available in future releases." The new widgets also take advantage of GNOME's improved graphics and animation.

There is a new Weather application for viewing current weather conditions and forecasts for various locations. Per the decisions made at the February hackfest, Weather is written in JavaScript.

The GNOME developers have also begun work on future plans, including creating a testable development environment, so that application developers can develop for and test against soon-to-be released versions of GNOME. In the medium term, adding application bundling and sandboxing is in the works, and, in the long term, providing better coding and user interface development tools is planned.

Comments (7 posted)

Page editor: Jonathan Corbet


Mixed web content

By Jake Edge
April 17, 2013

There are essentially two modes for retrieving content from the web: unencrypted plaintext using HTTP and encrypted with SSL/TLS using HTTPS. But quite a number of web pages out there mix the two modes, so that some content is retrieved securely while other parts are not. These mixed content pages are of concern because it is difficult to properly warn users that an HTTPS page is potentially insecure when it is getting some of its content from outside the encryption protection. Mozilla is gearing up to block some mixed content by default and to alert users to its presence.

In a blog post, Mozilla security engineer Tanvi Vyas describes the background of the problem and Mozilla's plans for mixed content. Mixed content can be categorized by whether the insecure content is active or passive. Active content covers web resources like scripts, CSS, iframes, objects, and fonts. These are generally things that can affect the document object model (DOM) of the page. Passive content, on the other hand, cannot affect the DOM and includes things like images, audio, and video.
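The active/passive distinction can be made concrete with a small scanner for insecure subresources on a page. This is only a rough sketch in Python's standard html.parser; the tag-to-category mapping below is an approximation of the categories Vyas describes, not Firefox's actual classifier, and the example URLs are hypothetical:

```python
from html.parser import HTMLParser

# Approximate mapping of (tag, attribute) pairs to mixed-content
# categories, following the article's description: active content can
# affect the DOM, passive content cannot.
ACTIVE = {("script", "src"), ("iframe", "src"),
          ("object", "data"), ("link", "href")}
PASSIVE = {("img", "src"), ("audio", "src"), ("video", "src")}

class MixedContentScanner(HTMLParser):
    """Flag http:// subresources referenced from an https:// page."""
    def __init__(self):
        super().__init__()
        self.active, self.passive = [], []

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if not (value or "").startswith("http://"):
                continue
            if (tag, name) in ACTIVE:
                self.active.append(value)
            elif (tag, name) in PASSIVE:
                self.passive.append(value)

# A hypothetical HTTPS page pulling in one insecure script and one
# insecure image.
page = """<html><head>
<script src="http://cdn.example.com/app.js"></script>
<img src="http://images.example.com/logo.png">
</head></html>"""

scanner = MixedContentScanner()
scanner.feed(page)
print("active:", scanner.active)    # would be blocked by default
print("passive:", scanner.passive)  # would merely be flagged
```

In the Firefox 23 scheme described above, the script in this page would be blocked while the image would load but cost the page its lock icon.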

For fairly obvious reasons, mixed active content is a bigger problem than mixed passive content. That doesn't mean there are no issues with mixed passive content, as there are still privacy and other concerns, but mixed active content is much worse. A seemingly secure page with an https:// URL can instead be infiltrated by a man in the middle to steal credentials, web browsing history, secure cookies, and more.

Starting in Firefox 23, which should be released in mid-August, mixed active content will be blocked by default. Mixed passive content will not be blocked by default, at least partly to try not to contribute to "security warning fatigue" (i.e. warning so frequently that users get overwhelmed and just click "continue" all the time).

According to Vyas, code for mixed content blocking started landing in Firefox 18, but a user interface was not added until Firefox 21 (which is still in beta and should be released in mid-May). Those who want to try out the feature can set the security.mixed_content.block_active_content (active) and security.mixed_content.block_display_content (passive) about:config options to true. Prior to Firefox 21, though, changing those values and reloading the page will be the only way to override the settings.

[Mixed content blocker]

In her blog post, Vyas goes into a fair amount of detail about the changes made to the user interface in support of mixed content blocking. For one thing, mixed content pages will no longer get the "lock" icon in the address bar, so users will hopefully be less complacent about them. A new shield icon indicates content that has been blocked, with user-interface elements to disable the blocking for the page (seen at right). That interface is one of the pieces missing from earlier versions of Firefox, where per-page blocking and unblocking of mixed content cannot be done.

There are some edge cases to consider, including frames and fonts, both of which have been classified as active content by Firefox (though Chrome, for example, considers frames to be passive). While technically a frame can't alter the DOM of the page, it can use various tricks to fool users into entering sensitive information into insecure frames. Other tricks are possible too. Fonts are another case that is treated as active even though fonts cannot change the DOM. A malicious font could change what a page says, though, and blocking an HTTP font on a secure page won't break anything, since the browser will fall back to a default font. In any case, mixed font content is believed to be rare.

Many web users will remember the "HTTP content on an HTTPS page" complaint that browsers pop up—some may still be seeing them—though most have probably disabled the message because it is shown too frequently. Instead of a pop-up, simply blocking the content is likely to prove a much better experience, both from a security and a usability perspective. It will also hopefully help site owners and designers find ways to avoid mixed content on the web.

Comments (9 posted)

Brief items

Security quote of the week

Linux Hangman Rules
You take turns putting setuid root onto files in /usr/bin, /usr/sbin, etc., and if your opponent can use that to get root, even via a convoluted scenario, then you lose. The goal is to create a system running with MAXIMUM PRIVILEGE.
Dave Aitel

Comments (9 posted)

Hijacking airplanes with an Android phone (Help Net Security)

Help Net Security has a report of a rather eye-opening talk from the Hack in the Box conference in Amsterdam. Security researcher Hugo Teso demonstrated exploits of two aircraft communication systems (ADS-B and ACARS), though purposely only in a virtual environment. "By taking advantage of two new technologies for the discovery, information gathering and exploitation phases of the attack, and by creating an exploit framework (SIMON) and an Android app (PlaneSploit) that delivers attack messages to the airplanes' Flight Management Systems (computer unit + control display unit), he demonstrated the terrifying ability to take complete control of [aircraft] by making virtual planes 'dance to his tune.'"

Comments (24 posted)

New vulnerabilities

curl: cookie information disclosure

Package(s):curl CVE #(s):CVE-2013-1944
Created:April 16, 2013 Updated:June 10, 2013
Description: From the advisory:

YAMADA Yasuharu discovered that libcurl was vulnerable to a cookie leak when doing requests across domains with matching tails. curl did not properly restrict cookies to domains and subdomains. If a user or automated system were tricked into processing a specially crafted URL, an attacker could read cookie values stored by unrelated webservers.
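The "matching tails" bug class is easy to sketch. The following hypothetical Python illustration (not libcurl's actual code) shows how a naive suffix comparison lets a cookie set for example.com leak to the unrelated host badexample.com, while requiring the match boundary to fall on a dot does not:

```python
def naive_tailmatch(cookie_domain: str, host: str) -> bool:
    # Buggy: "example.com" is a plain suffix of "badexample.com",
    # so the cookie would be sent to an unrelated site.
    return host.endswith(cookie_domain)

def fixed_tailmatch(cookie_domain: str, host: str) -> bool:
    # A cookie domain should match the host itself or a true
    # subdomain -- the boundary must fall on a dot.
    if host == cookie_domain:
        return True
    return host.endswith("." + cookie_domain)

assert naive_tailmatch("example.com", "badexample.com")      # the leak
assert not fixed_tailmatch("example.com", "badexample.com")  # rejected
assert fixed_tailmatch("example.com", "www.example.com")     # subdomain ok
```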

Gentoo 201401-14 curl 2014-01-20
openSUSE openSUSE-SU-2013:0879-1 curl 2013-06-10
openSUSE openSUSE-SU-2013:0876-1 curl 2013-06-10
Fedora FEDORA-2013-7797 curl 2013-05-25
Fedora FEDORA-2013-7813 curl 2013-05-15
Fedora FEDORA-2013-6766 curl 2013-05-06
openSUSE openSUSE-SU-2013:0726-1 curl 2013-04-30
Mandriva MDVSA-2013:151 curl 2013-04-26
Scientific Linux SL-curl-20130425 curl 2013-04-25
Oracle ELSA-2013-0771 curl 2013-04-24
Oracle ELSA-2013-0771 curl 2013-04-25
CentOS CESA-2013:0771 curl 2013-04-24
CentOS CESA-2013:0771 curl 2013-04-24
Red Hat RHSA-2013:0771-01 curl 2013-04-24
Debian DSA-2660-1 curl 2013-04-20
Fedora FEDORA-2013-5618 curl 2013-04-18
Mageia MGASA-2013-0121 curl 2013-04-18
Ubuntu USN-1801-1 curl 2013-04-15

Comments (none posted)

drupal7-ctools: access bypass

Package(s):drupal7-ctools CVE #(s):CVE-2013-1925
Created:April 15, 2013 Updated:April 17, 2013
Description: From the Drupal advisory:

The module doesn't sufficiently enforce node access when providing an autocomplete list of suggested node titles, allowing users with the "access content" permission to see the titles of nodes which they should not be able to view.

Fedora FEDORA-2013-4980 drupal7-ctools 2013-04-14
Fedora FEDORA-2013-4937 drupal7-ctools 2013-04-14

Comments (none posted)

freeipa: denial of service

Package(s):freeipa CVE #(s):CVE-2013-0336
Created:April 11, 2013 Updated:April 17, 2013

From the Fedora advisory:

Sumit Bose discovered that FreeIPA's directory server (dirsrv) would segfault if an unauthenticated user attempted to connect to it with a missing username/dn. According to RFC 3062, connecting without specifying the username/dn is valid.

Fedora FEDORA-2013-4460 freeipa 2013-04-11

Comments (none posted)

gsi-openssh: unauthorized account access

Package(s):gsi-openssh CVE #(s):
Created:April 16, 2013 Updated:April 17, 2013
Description: From the GSI-OpenSSH Security Advisory:

GSI-OpenSSH is a modified version of OpenSSH that adds support for RFC 3820 proxy certificate authentication and delegation. GSI-OpenSSH is provided by NCSA and is not associated with the OpenSSH project. GSI-OpenSSH is provided as both a standalone package and as a patch to OpenSSH.

The PermitPAMUserChange feature added to GSI-OpenSSH in August 2009 [1] based on an earlier OpenSSH patch [2] contains a memory management bug that may allow an authenticated user to log in to an unauthorized account. The PermitPAMUserChange feature is disabled by default and must be explicitly enabled by the system administrator. It is used primarily with MEG (MyProxy Enabled GSISSHD) [3].

The PermitPAMUserChange feature allows users to log in to a system using a username that need not correspond to a local system account, provided that PAM accepts the username, authenticates the user, and then maps the user to an existing local system account via PAM_USER. The memory management bug can cause the authenticated user to be mapped to an account different than PAM_USER.

Fedora FEDORA-2013-5051 gsi-openssh 2013-04-15
Fedora FEDORA-2013-5057 gsi-openssh 2013-04-15

Comments (none posted)

kernel: privilege escalation

Package(s):kernel CVE #(s):CVE-2013-1929
Created:April 12, 2013 Updated:May 28, 2013

From the Red Hat bugzilla entry:

A Linux kernel built with the Broadcom tg3 ethernet driver is vulnerable to a buffer overflow. This could occur when the tg3 driver reads and copies the firmware string from the hardware's product data (VPD), if it exceeds 32 characters.

A user with physical access to a machine could use this flaw to crash the system or, potentially, escalate their privileges on the system.

openSUSE openSUSE-SU-2014:0766-1 Evergreen 2014-06-06
openSUSE openSUSE-SU-2013:1971-1 kernel 2013-12-30
openSUSE openSUSE-SU-2013:1950-1 kernel 2013-12-24
Scientific Linux SLSA-2013:1645-2 kernel 2013-12-16
openSUSE openSUSE-SU-2013:1773-1 kernel 2013-11-26
Red Hat RHSA-2013:1645-02 kernel 2013-11-21
Oracle ELSA-2013-1645 kernel 2013-11-26
SUSE SUSE-SU-2013:1474-1 Linux kernel 2013-09-21
SUSE SUSE-SU-2013:1473-1 Linux kernel 2013-09-21
Oracle ELSA-2013-1034 kernel 2013-07-10
CentOS CESA-2013:1034 kernel 2013-07-10
Scientific Linux SL-kern-20130710 kernel 2013-07-10
Red Hat RHSA-2013:1034-01 kernel 2013-07-10
Oracle ELSA-2013-2546 enterprise kernel 2013-09-17
Mandriva MDVSA-2013:176 kernel 2013-06-24
Oracle ELSA-2013-2546 enterprise kernel 2013-09-17
Oracle ELSA-2013-2525 kernel 2013-06-13
Oracle ELSA-2013-2525 kernel 2013-06-13
Ubuntu USN-1838-1 linux-ti-omap4 2013-05-30
Ubuntu USN-1839-1 linux-ti-omap4 2013-05-28
Ubuntu USN-1836-1 linux-ti-omap4 2013-05-24
Ubuntu USN-1834-1 linux-lts-quantal 2013-05-24
Ubuntu USN-1833-1 linux 2013-05-24
Ubuntu USN-1835-1 linux 2013-05-24
Red Hat RHSA-2013:0829-01 kernel-rt 2013-05-20
Mageia MGASA-2013-0151 kernel-vserver 2013-05-17
Mageia MGASA-2013-0150 kernel-rt 2013-05-17
Mageia MGASA-2013-0149 kernel-tmb 2013-05-17
Mageia MGASA-2013-0148 kernel-linus 2013-05-17
Mageia MGASA-2013-0147 kernel 2013-05-17
Debian DSA-2669-1 linux 2013-05-15
Debian DSA-2668-1 linux-2.6 2013-05-14
Fedora FEDORA-2013-5368 kernel 2013-04-11

Comments (none posted)

kernel: multiple vulnerabilities

Package(s):kernel CVE #(s):CVE-2012-6542 CVE-2012-6546 CVE-2012-6547 CVE-2013-1826
Created:April 17, 2013 Updated:April 19, 2013
Description: From the CVE entries:

The llc_ui_getname function in net/llc/af_llc.c in the Linux kernel before 3.6 has an incorrect return value in certain circumstances, which allows local users to obtain sensitive information from kernel stack memory via a crafted application that leverages an uninitialized pointer argument. (CVE-2012-6542)

The ATM implementation in the Linux kernel before 3.6 does not initialize certain structures, which allows local users to obtain sensitive information from kernel stack memory via a crafted application. (CVE-2012-6546)

The __tun_chr_ioctl function in drivers/net/tun.c in the Linux kernel before 3.6 does not initialize a certain structure, which allows local users to obtain sensitive information from kernel stack memory via a crafted application. (CVE-2012-6547)

The xfrm_state_netlink function in net/xfrm/xfrm_user.c in the Linux kernel before 3.5.7 does not properly handle error conditions in dump_one_state function calls, which allows local users to gain privileges or cause a denial of service (NULL pointer dereference and system crash) by leveraging the CAP_NET_ADMIN capability. (CVE-2013-1826)

SUSE SUSE-SU-2014:0536-1 Linux kernel 2014-04-16
Scientific Linux SLSA-2013:1645-2 kernel 2013-12-16
Red Hat RHSA-2013:1645-02 kernel 2013-11-21
Oracle ELSA-2013-1645 kernel 2013-11-26
Mandriva MDVSA-2013:176 kernel 2013-06-24
Oracle ELSA-2013-2525 kernel 2013-06-13
Oracle ELSA-2013-2525 kernel 2013-06-13
Ubuntu USN-1829-1 linux-ec2 2013-05-16
Ubuntu USN-1824-1 linux 2013-05-15
Debian DSA-2668-1 linux-2.6 2013-05-14
Ubuntu USN-1808-1 linux-ec2 2013-04-25
Oracle ELSA-2013-2520 kernel-2.6.32 2013-04-25
Oracle ELSA-2013-2520 kernel-2.6.32 2013-04-25
Oracle ELSA-2013-0744 kernel 2013-04-24
Scientific Linux SL-kern-20130424 kernel 2013-04-24
CentOS CESA-2013:0744 kernel 2013-04-24
Red Hat RHSA-2013:0744-01 kernel 2013-04-23
Ubuntu USN-1805-1 linux 2013-04-19
Oracle ELSA-2013-0747 kernel 2013-04-18
Oracle ELSA-2013-0747 kernel 2013-04-18
Scientific Linux SL-kern-20130417 kernel 2013-04-17
CentOS CESA-2013:0747 kernel 2013-04-17
Red Hat RHSA-2013:0747-01 kernel 2013-04-16

Comments (none posted)

krb5: denial of service

Package(s):krb5 CVE #(s):CVE-2013-1416
Created:April 17, 2013 Updated:June 11, 2013
Description: From the Red Hat advisory:

A NULL pointer dereference flaw was found in the way the MIT Kerberos KDC processed certain TGS (Ticket-granting Server) requests. A remote, authenticated attacker could use this flaw to crash the KDC via a specially-crafted TGS request.

Ubuntu USN-2310-1 krb5 2014-08-11
Gentoo 201312-12 mit-krb5 2013-12-16
openSUSE openSUSE-SU-2013:0967-1 krb5 2013-06-10
openSUSE openSUSE-SU-2013:0746-1 krb5 2013-05-03
Mageia MGASA-2013-0131 krb5 2013-05-02
Mandriva MDVSA-2013:158 krb5 2013-04-30
Mandriva MDVSA-2013:157 krb5 2013-04-30
Fedora FEDORA-2013-5286 krb5 2013-04-18
Fedora FEDORA-2013-5280 krb5 2013-04-18
openSUSE openSUSE-SU-2013:0904-1 krb5 2013-06-10
CentOS CESA-2013:0748 krb5 2013-04-17
Oracle ELSA-2013-0748 krb5 2013-04-16
Scientific Linux SL-krb5-20130416 krb5 2013-04-16
Red Hat RHSA-2013:0748-01 krb5 2013-04-16

Comments (none posted)

libapache-mod-security: file disclosure, denial of service

Package(s): libapache-mod-security     CVE #(s): CVE-2013-1915
Created: April 11, 2013     Updated: May 3, 2013
Description: From the Debian advisory:

Timur Yunusov and Alexey Osipov from Positive Technologies discovered that the XML files parser of ModSecurity, an Apache module whose purpose is to tighten the Web application security, is vulnerable to XML external entities attacks. A specially-crafted XML file provided by a remote attacker, could lead to local file disclosure or excessive resources (CPU, memory) consumption when processed.

openSUSE openSUSE-SU-2013:1342-1 apache2-mod_security2 2013-08-14
openSUSE openSUSE-SU-2013:1336-1 apache2-mod_security2 2013-08-14
openSUSE openSUSE-SU-2013:1331-1 apache2-mod_security2 2013-08-14
Mageia MGASA-2013-0129 apache-mod_security 2013-05-02
Mandriva MDVSA-2013:156 apache-mod_security 2013-04-29
Fedora FEDORA-2013-4834 mod_security 2013-04-14
Fedora FEDORA-2013-4831 mod_security 2013-04-14
Debian DSA-2659-1 libapache-mod-security 2013-04-10

Comments (none posted)

phpmyadmin: cross-site scripting

Package(s): phpmyadmin     CVE #(s): CVE-2013-1937
Created: April 17, 2013     Updated: April 22, 2013
Description: From the CVE entry:

Multiple cross-site scripting (XSS) vulnerabilities in tbl_gis_visualization.php in phpMyAdmin 3.5.x before 3.5.8 might allow remote attackers to inject arbitrary web script or HTML via the (1) visualizationSettings[width] or (2) visualizationSettings[height] parameter.

Gentoo 201311-02 phpmyadmin 2013-11-04
openSUSE openSUSE-SU-2013:1065-1 phpmyadmin 2013-06-21
Fedora FEDORA-2013-5623 phpMyAdmin 2013-04-21
Fedora FEDORA-2013-5620 phpMyAdmin 2013-04-21
Mageia MGASA-2013-0122 phpmyadmin 2013-04-18
Mandriva MDVSA-2013:144 phpmyadmin 2013-04-16

Comments (none posted)

xen: privilege escalation

Package(s): xen     CVE #(s): CVE-2013-1920
Created: April 17, 2013     Updated: April 17, 2013
Description: From the CVE entry:

Xen 4.2.x, 4.1.x, and earlier, when the hypervisor is running "under memory pressure" and the Xen Security Module (XSM) is enabled, uses the wrong ordering of operations when extending the per-domain event channel tracking table, which causes a use-after-free and allows local guest kernels to inject arbitrary events and gain privileges via unspecified vectors.

SUSE SUSE-SU-2014:0470-1 Xen 2014-04-01
SUSE SUSE-SU-2014:0446-1 Xen 2014-03-25
SUSE SUSE-SU-2014:0411-1 Xen 2014-03-20
Gentoo 201309-24 xen 2013-09-27
openSUSE openSUSE-SU-2013:1392-1 xen 2013-08-30
SUSE SUSE-SU-2013:1075-1 Xen 2013-06-25
openSUSE openSUSE-SU-2013:0912-1 xen 2013-06-10
Fedora FEDORA-2013-4927 xen 2013-04-14
Fedora FEDORA-2013-4952 xen 2013-04-14

Comments (none posted)

Page editor: Jake Edge

Kernel development

Brief items

Kernel release status

The current development kernel is 3.9-rc7, released on April 14. Linus says: "This is mostly random one-liners, with a few slightly larger driver fixes. The most interesting (to me, probably to nobody else) fix is a fix for a rather subtle TLB invalidate bug that only hits 32-bit PAE due to the weird way that works."

Stable updates: 3.8.7, 3.4.40, and 3.0.73 were released on April 12; 3.8.8, 3.4.41, and 3.0.74 followed on April 16.

Comments (none posted)

Quotes of the week

Greg, I'm shocked! Surely you've been doing this long enough to know that we don't use that kind of language on lkml?

To restore the list's reputation as a hostile pressure cooker powered by the smouldering remains of flame-roasted newcomers, allow me to correct your reply:

"NAK. And you smell."

Crisis averted,

Rusty Russell

Anything that loses data with alarming regularity is not ready for use in production systems.
— Dave Chinner (at the LF Collaboration Summit)

Comments (6 posted)


By Jonathan Corbet
April 17, 2013
The -rc7 stage of the kernel development cycle is normally considered to be the wrong time to add new driver API functions; most developers attempting such a thing could expect to get a response that is less than 100% encouraging. But, if you're Linus, you can get away with such things. So, after looking at too much messy driver code, Linus added a new helper function for drivers that need to map a physical memory range into a user-space process's memory:

    int vm_iomap_memory(struct vm_area_struct *vma, phys_addr_t start,
                        unsigned long len);

This function is a wrapper around io_remap_pfn_range() that takes care of a number of details specific to the virtual memory subsystem so that driver code need not worry about them. No drivers have been converted for 3.9, but one can expect that process to begin in the 3.10 cycle.
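
As a concrete illustration, a driver's mmap() handler built on the new helper might look like the following sketch; the mydev structure and its regs_phys/regs_len fields are hypothetical, and this fragment naturally only builds inside the kernel tree:

```c
/* Hypothetical driver sketch: map a device register window into the
 * calling process's address space from the driver's mmap() handler. */
static int mydev_mmap(struct file *file, struct vm_area_struct *vma)
{
	struct mydev *dev = file->private_data;

	/*
	 * vm_iomap_memory() validates the vma's size and offset against
	 * the physical range, then calls io_remap_pfn_range() with the
	 * necessary pfn arithmetic handled internally.
	 */
	return vm_iomap_memory(vma, dev->regs_phys, dev->regs_len);
}
```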

Comments (3 posted)

Video from the Collaboration Summit Kernel Report

If this week's Kernel Page seems a bit thin, it is because your editor has been busy at the 2013 Linux Foundation Collaboration Summit, to be followed by the Filesystem, Storage, and Memory Management Summit. In lieu of more content, sufficiently masochistic readers are directed to your editor's Collaboration Summit talk, posted on YouTube. Normal, text-only service will return next week.

Comments (none posted)

Kernel development news

Statistics from the 3.9 development cycle

By Jonathan Corbet
April 17, 2013
As of this writing, Linus has stated that 3.9-rc7 should be the last prepatch for the 3.9 development cycle. If that prediction holds, the final 3.9 release can be expected sometime around April 21, after a 62-day development cycle. That is not the shortest cycle ever, but it is getting close; in general, the community has been producing kernels more quickly in the last year, with no kernel after 3.3 taking more than 71 days. No kernel has gone past -rc8 since the release of 3.1-rc10 in October 2011 — and that was a special case caused by the kernel.org break-in. At this point, everybody seems to know how the process works, and things go pretty smoothly.

3.8 was the most active development cycle ever. At 11,746 non-merge changesets (as of this writing), 3.9 will not beat that record, but it will set one of its own: the 1,364 developers who contributed to this kernel are the most ever. The most active of those developers were:

Most active 3.9 developers

By changesets:

  Takashi Iwai              265  2.3%
  H Hartley Sweeten         259  2.2%
  Al Viro                   208  1.8%
  Tejun Heo                 186  1.6%
  Johannes Berg             178  1.5%
  Kees Cook                 177  1.5%
  Daniel Vetter             128  1.1%
  Alex Elder                119  1.0%
  Eric W. Biederman         109  0.9%
  Laurent Pinchart          109  0.9%
  Mark Brown                107  0.9%
  Yinghai Lu                 98  0.8%
  Peter Huewe                95  0.8%
  Kevin McKinney             95  0.8%
  Vineet Gupta               94  0.8%
  Rafael J. Wysocki          90  0.8%
  Hideaki Yoshifuji          85  0.7%
  Jingoo Han                 81  0.7%
  Sachin Kamat               76  0.7%
  Mauro Carvalho Chehab      75  0.6%

By changed lines:

  Paul Gortmaker          34927  4.7%
  Laurent Pinchart        32137  4.3%
  James Hogan             27808  3.7%
  Johannes Berg           25451  3.4%
  Takashi Iwai            20096  2.7%
  Vineet Gupta            19886  2.7%
  Ralf Baechle            15210  2.0%
  Manjunath Hadli         14527  1.9%
  George Zhang            10154  1.4%
  H Hartley Sweeten        8796  1.2%
  Sony Chacko              8781  1.2%
  Ariel Elior              8590  1.1%
  Joe Thornber             7724  1.0%
  Prashant Gaikwad         7558  1.0%
  Al Viro                  6749  0.9%
  Christoffer Dall         6402  0.9%
  Andy King                6063  0.8%
  Ben Skeggs               5563  0.7%
  Ian Minett               4943  0.7%
  Bob Moore                4542  0.6%

H. Hartley Sweeten continues to work on the cleanup of the Comedi drivers, but, for the first time since 3.5, he has been pushed out of the top position by Takashi Iwai, who merged a vast amount of ALSA sound driver work for 3.9. Al Viro has been working on the cleanup of a number of virtual filesystem APIs, but much of his work this time around was also focused on making the signal code more generic and architecture-independent. Tejun Heo's work is divided between improving the control group subsystem, improving workqueues, and simplifying the IDR API. Johannes Berg is highly active in wireless networking, and with the core mac80211 subsystem in particular.

Paul Gortmaker got to the top of the "lines changed" column through the removal of a number of old, obsolete network drivers; the kernel lost over 34,000 lines of code as the result of his work. Laurent Pinchart did a lot of low-level embedded architecture cleanup and improvement work, and James Hogan added the new Meta architecture.

One could look at the development statistics and conclude that the average kernel developer contributed eight or nine changesets during the 3.9 cycle. The truth of the matter is a little different, as can be seen in this plot:

[Plot: patch counts per developer]

Just over one third of the developers working on 3.9 contributed a single patch, and the median developer contributed two. Meanwhile, the 100 most active developers contributed more than half of all the patches merged in this cycle. This pattern, in which a relatively small group of developers is responsible for the bulk of the changes, has not changed much in recent years.

219 companies (that we know of) supported development of the 3.9 kernel. The most active of these companies were:

Most active 3.9 employers

By changesets:

  Red Hat                    1050  9.0%
  Texas Instruments           367  3.1%
  Vision Engraving Systems    259  2.2%
  Renesas Electronics         203  1.7%
  Wolfson Microelectronics    129  1.1%
  Inktank Storage             128  1.1%
  Arista Networks             109  0.9%

By lines changed:

  Renesas Electronics       66290  8.8%
  Wind River                50740  6.8%
  Red Hat                   48424  6.5%
  Texas Instruments         32333  4.3%
  Imagination Technologies  27883  3.7%
  Vision Engraving Systems  10731  1.4%

For the first time ever, Intel finds itself at the top of the chart in both columns, displacing Red Hat and even exceeding the total of contributions from volunteers (those marked as "(None)" above); chances are, though, that if all the developers in the "unknown" category were known, they would push the volunteer group back to the top of the list. In general, the percentage of contributions from volunteers continues its slow decline. In today's job market, it seems, anybody who is able to get code into the kernel has to be fairly determined to reject job offers to remain a volunteer.

In summary, the kernel development community remains healthy and vibrant, delivering vast amounts of work to Linux users via a process that appears to run like a well-oiled machine. There are very few projects, either free or proprietary, that can sustain this kind of pace for years at a time. Given the kernel's history, it seems likely that things will continue in this vein for some time; it is going to be fun to watch.

Comments (9 posted)

Memory power management, 2013 edition

By Jonathan Corbet
April 17, 2013
When developers talk about power management, they are almost always concerned with the behavior of the CPU, since that is where the largest savings tend to be found. Computers are made up of more than just CPUs, though, and the other components require power as well. Seemingly, about once per year, attention turns to reducing the power demands of the RAM on the system; since RAM can take up to one third of a system's total power budget, this focus makes sense. Accordingly, LWN has looked at this issue once in 2011 and again in 2012. Now there is a new memory power management patch set in circulation, so another look seems warranted.

The most recent patch set comes from Srivatsa S. Bhat; it differs from previous approaches in a number of ways. For example, it targets memory controllers that have automatic, content-preserving power management modes. Such controllers divide memory into a set of regions, each of which can be powered down independently when the controller detects that there have been no memory accesses to the region in the recent past. The strategy to use is fairly obvious: try to keep as many memory regions as possible empty so that they will stay powered down.

The first step is to keep track of those regions in the memory management subsystem. Previous patches have used the zone system (which divides memory with different characteristics — high and low memory on 32-bit systems, for example) to track regions. The problem with this approach is that it causes an explosion in the number of zones; that leads to more memory management overhead and challenges in keeping memory usage balanced across all those zones. Srivatsa's patch, instead, tracks regions as separate entities in parallel with zones, avoiding this problem.

Once the kernel knows where the regions are, the trick is to concentrate memory allocations on a relatively small number of those regions whenever possible. To that end, the patch set causes the list of free pages to be sorted by region, so that allocations from the head of the list will come from the lowest-numbered region with available pages. Note that sorting within a region is not necessary; it is sufficient that all pages in a given region are grouped together. A set of pointers into the free list, one per region, helps newly-freed pages to be quickly added to the list in the correct location.

Region-aware allocation can help to keep active pages grouped together, but, in the real world, allocated pages will still end up being spread across physical memory over time. Unless other measures are taken, most regions will end up with active pages even when the system is under relatively light memory load; that will make powering down those regions difficult or impossible. So, inevitably, Srivatsa's patch set includes a mechanism for migrating pages out of regions.

Vacating regions of memory is not a new problem; the contiguous memory allocator (CMA) mechanism must sometimes take active measures to create large contiguous blocks, for example. So this particular problem has already been solved in the kernel. Rather than add a new compaction scheme, Srivatsa's patch set modifies the CMA implementation to make it suitable for memory power management uses as well. The result is a simple compact_range() function that can be invoked by either subsystem to move pages and free a range of memory.

There is still the question of when the kernel should try to vacate a memory region. If it does not happen often enough, power consumption will be higher than it needs to be. Excessive page migration will simply soak up CPU time, though, with no resulting power savings. Indeed, overly aggressive compaction could result in higher power usage than before. So some sort of control mechanism is required.

In this patch set, the page allocator has been enhanced to notice when it starts allocating pages from a new memory region. That new region, by virtue of having been protected from allocations until now, should not have many pages allocated; that makes it a natural target for compaction. But it makes no sense to attempt that compaction when the page is being allocated, since, clearly, no free pages exist in the lower-numbered regions. So the page allocator does not attempt compaction at that time; instead, it sets a flag indicating that compaction should be attempted in the near future.

The "near future" means when pages are freed. When some pages are given back to the allocator, it might be possible to use those pages to free a lightly-used region of memory. So that is the time when compaction is attempted; a workqueue function will be kicked off to attempt to vacate any regions that had previously been marked by the allocator. That code will only make the attempt, though, if a relatively small number of pages (32 in the current patch) would need to be migrated. Otherwise the cost is deemed to be too high and the region is left alone.

The patch set is still young, so there is not a lot of performance data available. In the introduction, though, a 6% power savings is claimed when running on a 2GB Samsung Exynos board, with the potential for more held out if other parts of the memory management subsystem can be made more power aware. One question that is not answered in the patch set is this: on a typical Linux system, very few pages are truly "free"; instead, they are occupied by the page cache. To be able to vacate regions, it seems like a more aggressive approach to reclaiming page-cache pages would be required. There are undoubtedly other concerns that would need to be addressed as well; perhaps they will be discussed in the 2014 update, if not before.

Comments (2 posted)

Page editor: Jonathan Corbet


Upstart for user sessions

By Nathan Willis
April 17, 2013

Ubuntu has announced plans to use Upstart, its event-driven init replacement, to manage the desktop user session as well. The functionality will be available starting with the 13.04 release, but it will be disabled by default. The plan is to complete the transition in time for the Ubuntu 13.10 release. Upstart works by executing actions in response to events or signals emitted when there are changes in system state. The goal of the change is to migrate many of the desktop session's processes to be on-demand services, thus saving memory and power currently used to keep idle services running persistently.

Upstart maintainer James Hunt wrote about the change on his blog in April, although the nucleus of the idea goes back further than that. In October 2012, Didier Roche suggested eliminating long-running daemons as a way to reduce memory footprint. Ted Gould replied that the idea of using Upstart to replace some of these daemons had been floated for the Ubuntu 12.10 release, but was not pursued.

What's a session, anyway?

Currently, an Ubuntu desktop session starts when the display manager LightDM launches gnome-session in response to a successful login attempt. The window manager, panel, and other basic components are defined by a file in /usr/share/gnome-session/sessions/, but there are multiple other mechanisms through which the auxiliary programs that round out a full desktop session (such as system monitors, panel applets, and notification daemons for the network, Bluetooth, printers, and so forth) can be started. LightDM can be configured to run scripts, and the user can define startup applications through the GNOME preferences. But the bulk of the auxiliary programs auto-started for a session are defined in /etc/xdg/autostart/; on the Ubuntu 12.04 system I explored, there are 33 auxiliary programs, from AT-SPI to Zeitgeist. The rationale for introducing Upstart to this equation is that many of the persistent daemons listed in /etc/xdg/autostart/ are merely watching for file changes or DConf configuration-key changes.

Early on, Upstart supported only event types needed for system initialization, such as filesystem mounting, network startup, run-level changes, and so forth. But recent releases have extended the set of supported events in order to facilitate managing desktop session tasks. In particular, version 1.7 introduced upstart-event-bridge, which relays system-level events down to the user session (thus allowing the desktop session to react to system-level events), and version 1.8 introduced upstart-file-bridge, which allows jobs to react to file changes (including file creation, deletion, modification, or anything else supported by inotify).

Upstart 1.3 had already introduced the notion of "User Jobs"—unprivileged jobs handled by Upstart and limited to a user's login session. Having Upstart manage the entire user session is an extension of this earlier functionality, which is now being renamed "Session Jobs." As Hunt explains in his post, the initial targets for migration to Upstart are programs like update-notifier. The update-notifier daemon watches for newly available package updates by monitoring a fixed set of filesystem locations in /var/ and watching for the insertion of removable media containing software packages.

update-notifier does seem to be ripe for conversion to an on-demand service; inserting a CD or DVD can be detected through D-Bus events, and the upstart-file-bridge can emit an event upon changes to the filesystem locations it monitors. These locations include /var/lib/update-notifier/user.d/, which the separate update-manager modifies whenever it finds a new package in one of the enabled package repositories. Querying those package repositories is a relatively infrequent event; a few times a day might even be considered overkill.
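
For illustration, an on-demand session job for update-notifier might look something like the following sketch; the job name and the exact file-event syntax are assumptions to be checked against the upstart-file-bridge documentation, not a tested configuration:

```
# update-notifier.conf (hypothetical Upstart session job)
description "start update-notifier when the update stamps change"

# The "file" event, with its FILE and EVENT environment variables,
# is emitted by upstart-file-bridge (introduced in Upstart 1.8).
start on file FILE=/var/lib/update-notifier/user.d/ EVENT=create

exec update-notifier
```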

The same could be said for the network manager daemon (since few people change networks more than a few times per day), the backup manager, and several of the other persistent processes. But there are others that one might expect to be responding to events frequently, such as the Orca screen reader or the Gwibber communication server. Consequently, the full memory and power savings of moving to Upstart's on-demand daemon model are a bit hard to quantify, but they are almost certainly non-zero.

Changes to Ubuntu

Hunt does not go into detail regarding what other services might migrate soon (although he mentions the crash reporter "whoopsie" as one possibility)—he spends most of the rest of the post explaining how to write Upstart jobs. But there is a blueprint on the distribution's wiki that describes more of the plan.

The existing Upstart init process will remain unchanged, running as PID 1. When the user session starts, Upstart will launch a child process with init --user. This child process will handle the session startup duties, receiving signals from the system init process through upstart-event-bridge. Apart from emitting those signals, the system init has no knowledge of the session init's activity.

The session init does need to listen for SIGCHLD signals from the daemons that it launches, however, so it can detect when those daemons exit. The design is currently to use the PR_SET_CHILD_SUBREAPER process control option, so that the session init remains the parent of any double-forking daemons it launches. The downside is that this option is Linux only, and that it may necessitate rewrites of some daemons that expect to run as the child of PID 1.

The presence of both system and session init processes has also necessitated some changes to Upstart's event naming scheme. System events are now prefixed with :sys:, so custom jobs will need to be rewritten, and the new syntax will need to be used from the command-line utilities (such as initctl).

Further out, of course, there are many more changes slated to come to Ubuntu desktop sessions, starting with the Mir display server and Unity Next environment. Ultimately, Mir and Unity Next will replace LightDM as the display manager. Before that time, though, migrating away from gnome-session might be viewed by critics as the distribution further diverging its system configuration from that of its contemporaries.

It will be interesting to see whether Chrome OS (the largest non-Ubuntu-related distribution to use Upstart) also shows interest in using Upstart for session management. It is not likely to look appealing to systemd-based distributions, of course, although the notion of an init system expanding to assume control over other parts of the system stack might sound familiar.

On that point, GNOME 3.8 is the first GNOME release to support systemd as an option, so when the next development cycle completes for those distributions that use GNOME Shell, gnome-session could be slated for replacement on those systems as well. If that does happen, it will offer an opportunity to compare raw numbers for the two init replacements on session-management tasks. The predicted change for Ubuntu users is that an Upstart-based session will have fewer daemons running, consuming less power (precious battery power on mobile systems) and less RAM. How the actual statistics play out will no doubt depend on many factors, both in Upstart's performance against gnome-session and in its inevitable comparison to systemd.

Comments (18 posted)

Brief items

Distribution quote of the week

> What is needed to review my patch and include it?

...42 rainbow cupcakes, world peace + special star constellation.

-- Marcel Partap

Comments (none posted)

A new life for Fuduntu

The developers of the Fuduntu distribution have announced the end of development on Fuduntu, but also the possibility of a new distribution rising from the ashes. "Following that decision, however, most of the team members then discussed the idea of creating a 'new' Fuduntu. Andrew Wyatt, however will not be a part of the new distro in an official capacity. After the decision to EOL, Andrew, the founder and lead developer of Fuduntu, announced his plans to retire after the final Fuduntu release. Andrew will be missed and his hard work and dedication is appreciated by all. While he will not be serving in an official capacity, Andrew will be serving as an advisor to the leadership and team of the new distro." Fuduntu was reviewed by LWN in 2011.

Comments (23 posted)

openSUSE ARMs More Hardware, Gets More Build Power

The openSUSE ARM team has announced the immediate availability of openSUSE 12.3 based ARM images with support for the Calxeda Highbank ARM solution as well as a variety of other SoC's. "The openSUSE ARM team has also been making steady progress on AArch64 support. Currently we provide over 5700 packages readily built for AArch64, which means openSUSE currently delivers the biggest software pool for AArch64, including Java, Python, Perl, PHP and related packages."

Full Story (comments: none)

Distribution News

Debian GNU/Linux

Debian Project Leader Election 2013 Results

Debian Project Secretary, Kurt Roeckx, has announced the results of the 2013 Debian Project Leader elections. Lucas Nussbaum is the winner. His term will start on April 17, 2013.

Full Story (comments: 4)

bits from the DPL: March-April 2013

Stefano Zacchiroli presents his final bits as Debian Project Leader. Topics include Debian participation in the Outreach Program for Women, finances, talks, DPL helpers, legal matters, and more. "Now, before I get sentimental, let me thank Gergely, Lucas, and Moray for running in the recently concluded DPL election. Only thinking of running and then go through a campaign denote a very high commitment to the Project; we should all be thankful to them."

Full Story (comments: none)

Mandriva Linux

OpenMandriva: Meet our Face

The OpenMandriva Association has announced a new logo. The name of the community distribution has not yet been decided.

Comments (none posted)

Newsletters and articles of interest

Distribution newsletters

Comments (none posted)

Debian base for first Pardus Community Edition (The H)

Pardus is a Turkish distribution with the desktop and applications localized for its Turkish audience. The H covers the release of the first Community Edition. "Previous versions of Pardus were based on Gentoo Linux, but future releases of two new branches [Corporate and Community] will be based on Debian. With the split, the developers are clearly delineating between the version of Pardus designed to be used by government organisations and companies and a more streamlined version for end users. Pardus was originally developed by a division of the Technological Research Council of Turkey (TÜBİTAK) with the goal of creating an independent, secure and cost-effective operating system for the Turkish government. Current versions of the distribution are developed by community volunteers."

Comments (none posted)

The future of Cinnarch

The Cinnarch project has announced a change in direction. The project's original goal was to create an Arch Linux based system with the Cinnamon desktop. "While Cinnamon is a great user interface and we’ve had a lot of fun implementing it, it’s become too much a burden to maintain/update going forward. We’d like to remain faithful and compatible to our parent distro, Arch Linux, and further support of Cinnamon would strain that by causing incompatibilities/hacks in the entirety of the Gnome packageset. It is almost impossible to maintain software developed by Linux Mint in a rolling release as we are. They’re 1 year behind with upstream code. Arch Linux is going to have soon Gnome 3.8 and Cinnamon is not compatible with it. The Cinnamon team still have to migrate some of their tools to fully work with Gnome 3.6." The project is also contemplating a name change as they switch to the GNOME desktop.

Comments (none posted)

Manjaro 0.8.5 introduces a graphical installer (The H)

The H covers the Arch Linux derivative Manjaro. "The latest version of the Manjaro Linux distribution, version 0.8.5, introduces a graphical installer in the form of a fork of the Linux Mint installer. Other features of the recently announced Linux include the newly introduced "Manjaro Settings", which makes the installation of language packs, system-wide language settings, managing user accounts, and the configuration of keyboard layouts more accessible. The update brings the developers one step closer to their goal of creating an Arch-Linux-based distribution that is usable by beginners. Those beginners will find a 61-page Beginner User Guide to the desktop to help them."

Comments (none posted)

Page editor: Rebecca Sobol


A taste of Rust

April 17, 2013

This article was contributed by Neil Brown

Rust, the new programming language being developed by the Mozilla project, has a number of interesting features. One that stands out is the focus on safety. There are clear attempts to increase the range of errors that the compiler can detect and prevent, and thereby reduce the number of errors that end up in production code.

There are two general ways that a language design can help increase safety: it can make it easier to write good code and it can make it harder to write bad code. The former approach has seen much success over the years as structured programming, richer type systems, and improved encapsulation have made it easier for a programmer to express their goals using high-level concepts. The latter approach has not met with quite such wide success, largely because it requires removing functionality that easily leads to bad code. The problem there, of course, is that potentially dangerous functionality can also be extremely useful. The classic example is the goto statement, which has long been known to be an easy source of errors, but which most major procedural languages still include as they would quickly hit difficulties without it (though Rust, like Python, has avoided adding goto so far).

Nevertheless it is this second approach that Rust appears to focus on. A number of features that are present in languages like C, and some that are still present in more safety-conscious languages like Java, are disallowed or seriously restricted in Rust. To compensate, Rust endeavors to be a sufficiently rich language that those features are not necessary. One way to look at this focus is that it endeavors to produce compile-time errors where other languages would generate runtime errors (or core dumps). This not only reduces costs at runtime, but should decrease the number of support calls from customers.

A large part of this richness is embodied in the type system. Rust has many of the same sorts of types that other languages do (structures, arrays, pointers, etc.), but they are often, as we shall see, somewhat narrower or more restricted than is common. Coupled with this are various ways to broaden a type, in a controlled way, so that its shape fits more closely to what the programmer really wants to say. And for those cases where Rust turns out not to be sufficiently rich, there is a loophole.

Rigidly defined areas of doubt and uncertainty

In a move that strongly echoes the "System" module in Modula-2 (which provides access to so-called "low-level" functionality), Rust allows functions and code sections to be declared as "unsafe". Many of the restrictions that serve to make Rust a safer language do not apply inside "unsafe" code.

This could be viewed as an admission of failure to provide a truly safe language; that view would be overly simplistic, however. In the first place, Gödel's incompleteness theorem seems to suggest that any programming language rich enough to say all that we might want to say is also so rich that it cannot possibly be considered "safe". But on a more practical level, a moment's thought will show that code inside the language compiler itself is deeply "unsafe" in the context of the program it generates. A bug there could easily cause incorrect behavior without any hope of the compiler being able to detect it. So complete safety is impossible.

Given that the reality of "unsafe" code is unavoidable, it is not unreasonable to allow some of it to appear in the target program, or in libraries written in the target language, instead of all of it being in the compiler. This choice to have the language be safe by default allows most code to be safe while providing clearly defined areas where unchecked code is allowed. This, at the very least, makes it easier to identify which areas of code need particular attention in code reviews.

If we look at the Rust compiler, which itself is written in Rust, we find that of the 111 source code files (i.e. the src/librustc directory tree, which excludes the library and runtime support), 33 of them contain the word "unsafe". 30% of the code being potentially unsafe may seem like a lot, even for a compiler, but 70% being guaranteed not to contain certain classes of errors certainly sounds like a good thing.

Safer defaults

As we turn to look at the particular ways in which Rust encourages safe code, the first is not so much a restriction as a smaller scale version of the choice to default to increased safety.

While many languages have keywords like "const" (C, Go) or "final" (Java) to indicate that a name, once bound, will always refer to the same value, Rust takes the reverse approach. Bindings are immutable by default and require the keyword "mut" to make them mutable, thus:

    let pi = 3.1415926535;
will never allow the value of pi to change while:
    let mut radius = 1.496e11;
will allow radius to be altered later. The choice of mut, which encourages the vowel to sound short, in contrast to the long vowel in "mutable", may be unfortunate, but encouraging the programmer to think before allowing something to change is probably very wise.

For some data structures, the "mutable" setting is inherited through references and structures to member elements, so that if an immutable reference is held to certain objects, the compiler will ensure that the object as a whole will remain unchanged.
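This can be illustrated with a small sketch. It uses post-0.6 syntax (the language changed substantially after this article was written), but the mutability rules it demonstrates are the ones described above.

```rust
struct Point { x: f64, y: f64 }

// A field is only assignable when the binding (here, the parameter) is `mut`.
fn shift(mut p: Point, dx: f64) -> Point {
    p.x += dx; // allowed: `p` is declared `mut`
    p
}

fn main() {
    let origin = Point { x: 0.0, y: 0.0 };
    // origin.x = 1.0; // compile-time error: `origin` is not declared mutable
    let moved = shift(origin, 2.5);
    assert_eq!(moved.x, 2.5);
    assert_eq!(moved.y, 0.0);
}
```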

Pointers are never NULL

Pointer errors are a common class of errors in code written in C and similar languages, and many successors have taken steps to control them by limiting or forbidding pointer arithmetic. Rust goes a step further and forbids NULL (or nil) pointers as well, so that any attempt to dereference a pointer will always find a valid object. The only time that a variable or field does not contain a valid pointer is when the compiler knows that it does not, in which case a dereference is a compile-time error.

There are of course many cases where "valid pointer or NULL" is a very useful type, such as in a linked list or binary tree data structure. For these cases, the Rust core library provides the "Option" parameterized type.

If T is any type, then Option<T> is a variant record (sometimes called a "tagged union", but called an "enum" in Rust) that either contains a value of type T associated with the tag "Some", or contains just the tag "None". Rust provides a "match" statement, a generalization of "case" or "typecase" from other languages, which can be used to test the tag and access the contained value if it is present.

Option is defined as:

    pub enum Option<T> {
        None,
        Some(T),
    }

so a structure such as:

    struct element {
        value: int,
        next: Option<~element>,
    }

could be used to make a linked list of integers. The tilde in front of element indicates a pointer to the structure, similar to an asterisk in C. There are different types of pointers, which will be discussed later: tilde (~) introduces an "owned" pointer, while the at sign (@) introduces a "managed" pointer.

Thus nullable pointers are quite possible in Rust, but the default is the safer option of a pointer that can never be NULL; when a nullable pointer is used, it must always be explicitly tested for NULL before that use. For example:

    match next {
        Some(e) => io::println(fmt!("%d", e.value)),
        None    => io::println("No value here"),
    }
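The same structure can be sketched in runnable form, though only in post-0.6 syntax: Rust's surface syntax changed considerably after this article, so the sketch below uses Box (the later spelling of the owned pointer written here as ~) and println! in place of io::println.

```rust
// `Box` stands in for the owned pointer this article writes as `~`.
struct Element {
    value: i64,
    next: Option<Box<Element>>,
}

// `match` forces both the Some and None cases to be handled explicitly.
fn sum(list: &Option<Box<Element>>) -> i64 {
    match list {
        Some(e) => e.value + sum(&e.next),
        None => 0,
    }
}

fn main() {
    let list = Some(Box::new(Element {
        value: 1,
        next: Some(Box::new(Element { value: 2, next: None })),
    }));
    assert_eq!(sum(&list), 3);
}
```

The point survives the syntax change: the only way to reach the value behind a nullable link is to test the tag first.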

In contrast to this, array (or, as Rust calls them, "vector") accesses do not require the compiler to prove that the index is in range before the element is accessed. It would presumably be possible to use the type system to ensure that indexes are in range in many cases, and to require explicit tests for cases where the type system cannot be sure, similar to the approach taken for pointers.

Thus, while it is impossible to get a runtime error for a NULL pointer dereference, it is quite possible to get a runtime error for an array bounds error. It is not clear whether this is a deliberate omission, or whether it is something that might be changed later.
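The contrast can be seen directly. In the sketch below (post-0.6 syntax; the slice "get" method is a later library addition used here for illustration), an out-of-range index through "get" yields an Option that must be tested, mirroring the explicit-test style described above for pointers, while plain indexing would compile and then fail at run time.

```rust
// `get` returns an Option instead of panicking on a bad index, so the
// caller is forced to handle the out-of-range case explicitly.
fn safe_lookup(v: &[i32], i: usize) -> Option<i32> {
    v.get(i).copied()
}

fn main() {
    let v = [10, 20, 30];
    assert_eq!(safe_lookup(&v, 1), Some(20));
    assert_eq!(safe_lookup(&v, 3), None); // out of range: no crash, just None
    // Writing `v[3]` here would compile, then panic at run time with an
    // index-out-of-bounds error -- the runtime failure the article describes.
}
```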

Parameterized Generics

Unsurprisingly, Rust does not permit "void" pointers or down casts (casts from a more general type, like void, to a more specific type). The use case that most commonly justifies them, the writing of generic data structures, is supported using parameterized types and parameterized functions. These are quite similar to generic classes in Java and templates in C++. Generics in Rust have a simple syntax; any non-scalar type, and any function or method, can have type parameters.

A toy example might be:

    struct Pair<t1,t2> {
        first: t1,
        second: t2,
    }

    fn first<t1:Copy,t2:Copy>(p: Pair<t1,t2>) -> t1 {
        return copy p.first;
    }
We can then declare a variable:
    let foo = Pair { first:1, second:'2'};
noting that the type of foo is not given explicitly, but instead is inferred from the initial value. The declaration could read:
    let foo:Pair<int, char> = Pair { first:1, second:'2'};
but that would be needlessly verbose. We can then call:
    first(foo)
and the compiler will know that this returns an int. In fact it will return a copy of an int, not a reference to one.

Rust allows all type declarations (struct, array, enum), functions, and traits (somewhat like "interfaces" or "virtual classes") to be parameterized by types. This should mean that the compiler always knows enough about the type of every value to avoid the need for down casts.
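For comparison, here is the same toy example in runnable post-0.6 syntax, a sketch only: the Clone bound and .clone() call stand in for the 0.6 "Copy" bound and "copy" keyword used above.

```rust
struct Pair<A, B> {
    first: A,
    second: B,
}

// `Clone` plays the role of the old `t1:Copy` bound; `.clone()`
// replaces the Rust 0.6 `copy` keyword.
fn first<A: Clone, B>(p: &Pair<A, B>) -> A {
    p.first.clone()
}

fn main() {
    let foo = Pair { first: 1, second: '2' }; // inferred as Pair<i32, char>
    assert_eq!(first(&foo), 1); // a copy of the int, not a reference to it
    assert_eq!(foo.second, '2');
}
```

As in the article's version, the type of foo is never written out; the compiler infers it from the initializer and instantiates first accordingly.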

Keep one's memories in order

The final safety measure from Rust that we will examine relates to memory allocation and particularly how that interacts with concurrency.

Like many modern languages, Rust does not allow the programmer to explicitly release (or "free" or "dispose" of) allocated memory. Rather, the language and runtime support take care of that. Two mechanisms are provided which complement the inevitable "stack" allocation of local variables. Both of these mechanisms explicitly relate to the multiprocessing model that is based on "tasks", which are somewhat like "goroutines" in Go, or Green threads. They are lightweight and can be cooperatively scheduled within one operating system thread, or across a small pool of such threads.

The first mechanism provides "managed" allocations which are subject to garbage collection. These will be destroyed (if a destructor is defined) and freed sometime after the last active reference is dropped. Managed allocations are allocated on a per-task heap and they can only ever be accessed by code running in that task. When a task exits, all allocations in that task's heap are guaranteed to have been freed. The current implementation uses reference counting to manage these allocations, though there appears to be an intention to change to a mark/sweep scheme later.

The second mechanism involves using the type system to ensure that certain values only ever have one (strong) reference. When that single reference is dropped, the object is freed. This is somewhat reminiscent of the "Once upon a type" optimization described by David Turner et al. for functional languages. However, that system automatically inferred which values only ever had a single reference, whereas Rust requires that this "once" or "owned" (as Rust calls it) annotation be explicit.

These single-reference allocations are made on a common heap and can be passed between tasks. As only one reference is allowed, it is not possible for two different tasks to both access one of these owned objects, so concurrent access, and the races that so often accompany it, are impossible.
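A minimal sketch of the idea, with the caveat that it uses later syntax: Box is the later spelling of an owned (~) allocation, and thread::spawn stands in for spawning a Rust 0.6 task.

```rust
use std::thread;

// Once the owned allocation is moved into the spawned thread, the original
// thread can no longer touch it, so two threads can never race on it.
fn sum_in_other_thread(data: Box<Vec<i64>>) -> i64 {
    let handle = thread::spawn(move || data.iter().sum::<i64>());
    // Using `data` here would be a compile-time "use of moved value" error.
    handle.join().unwrap()
}

fn main() {
    assert_eq!(sum_in_other_thread(Box::new(vec![1, 2, 3])), 6);
}
```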

May I borrow your reference?

Owned values can have extra references provided they are "borrowed" references. Borrowed references are quite weak and can only be held in the context of some specific strong reference. As soon as the primary reference is dropped or goes out of scope, any references borrowed from it become invalid, and any use of them is a compile-time error.

It is often convenient to pass borrowed references to functions and, more significantly, to return these references from functions. This is particularly useful as each of owned, managed, and on-stack references can equally be borrowed, so passing a borrowed reference can work independently of the style of allocation.

The passing of borrowed references is easily managed by annotating the type of the function parameter to say that the reference is a borrowed reference. The compiler will then ensure nothing is done with it that is not safe.

Returning borrowed references is a little trickier as the compiler needs to know not only that the returned reference is borrowed, but also needs to know something about the lifetime of the reference. Otherwise it cannot ensure that the returned reference isn't used after the primary reference has been dropped.

This lifetime information is passed around using the same mechanism as type-parameters for functions. A function can be declared with one or more lifetime parameters (introduced by a single quote: '), and the various arguments and return values can then be tagged with these lifetimes. Rust uses type inference on the arguments to determine the actual values of these parameters and thus the lifetime of the returned value. For example, the function choose_one declared as:

    fn choose_one<'r>(first: &'r Foo, second: &'r Foo) -> &'r Foo
has one lifetime parameter (r) which applies to both arguments (first and second), and to the result. The ampersand (&) here indicates a borrowed reference.

When it is called, the lifetime parameter is implicit, not explicit:

    let baz = choose_one(bar, bat);
The compiler knows that the lifetime of r must be compatible with the lifetimes of bar and bat so it effectively computes the intersection of those. The result is associated with baz as its lifetime. Any attempt to use it beyond its lifetime will cause a compile-time error.

Put another way, Rust uses the lifetime parameter information to infer the lifetime of the function's result, based on the function's arguments. This is similar to the inference of the type of foo in our Pair example earlier. A more thorough explanation of these lifetimes can be found in the "Borrowed Pointers Tutorial" referenced below.
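The declaration above can be made concrete. In this sketch the article's hypothetical Foo type is replaced by a plain string slice, and the surrounding code uses later syntax, but the lifetime annotation itself is unchanged.

```rust
// `&str` stands in for the article's hypothetical `Foo` type.
fn choose_one<'r>(first: &'r str, second: &'r str) -> &'r str {
    if first.len() >= second.len() { first } else { second }
}

fn main() {
    let bar = String::from("longer");
    let bat = String::from("short");
    let baz = choose_one(&bar, &bat);
    assert_eq!(baz, "longer");
    // The compiler ties baz's lifetime to those of bar and bat; any use of
    // baz after either had been dropped would be rejected at compile time.
}
```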

This extra annotation may feel a little clumsy, but it seems a relatively small price to pay for the benefit of being able to give the compiler enough information that it can verify all references are valid.

Time for that loophole

Allowing objects to be allocated only on the stack, in a per-task heap, or in a common heap (provided there is only ever one strong reference) ensures that there will be no errors due to races between multiple tasks accessing the same data. But the price seems a little high to pay: it seems to mean that there can be no sharing of data between tasks at all.

The key to solving this conundrum is the "unsafe" code regions that were mentioned earlier. Unsafe code can allocate "raw" memory which has no particular affiliation or management, and can cast "raw" pointers into any other sort of pointer.

This allows, for example, the creation of a "queue" from which two owned references could be created: one of a type that allows references to be added to the queue, and one that allows references to be removed. The implementation of this queue would require a modest amount of "unsafe" code, but any client of the queue could be entirely safe. The endpoints can be passed around from task to task, stored in data structures, and used to move other owned references between tasks. When the last reference is dropped, the "queue" implementation will be told and can respond appropriately.

The standard library for Rust implements just such a data structure, which it calls a "pipe". Other more complex data structures could clearly be implemented to allow greater levels of sharing, while making sure the interface is composed only of owned and managed references, and thus is safe from unplanned concurrent access and from dangling pointer errors.
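The shape of such a "pipe" survives in later Rust as the standard mpsc channel, which this sketch uses as a stand-in (the 0.6 pipe API itself no longer exists): one owned sending endpoint, one owned receiving endpoint, with the unsafe parts hidden inside the library so that callers stay entirely in safe code.

```rust
use std::sync::mpsc;
use std::thread;

// One task owns the sender, the other owns the receiver; values are moved
// through the channel, so the two tasks never share access to them.
fn produce_and_collect(n: i32) -> Vec<i32> {
    let (tx, rx) = mpsc::channel();
    let producer = thread::spawn(move || {
        for i in 0..n {
            tx.send(i).unwrap(); // each value is moved into the channel
        }
        // `tx` is dropped here, which ends the receiver's iterator below
    });
    let received: Vec<i32> = rx.iter().collect();
    producer.join().unwrap();
    received
}

fn main() {
    assert_eq!(produce_and_collect(3), vec![0, 1, 2]);
}
```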

Is safety worth the cost?

Without writing a large body of code in Rust it is impossible to get a good feel for how effective the various safety features of the language really are. Having a general ideological preference for strong typing, I expect I would enjoy the extra expressiveness and would find the extra syntax it requires a small price to pay. The greatest irritation would probably come from finding the need to write too much (or even any) unsafe code, or from having to give up all of the safety guarantees when wanting to sacrifice only one.

Of course, if that irritation led to practical ideas for making the language able to make richer assertions about various data, or richer assertions about the level of safety, and could thus enlarge the scope for safe code and decrease the need for unsafe code, then it might well result in some useful pearls.

After all, Rust is only at release "0.6" and is not considered to be "finished" yet.


Rust tutorial
Borrowed pointers tutorial
Rust Reference Manual
Blog post about Rust memory management
Another blog post about Rust

Comments (81 posted)

The future of Python 2

By Jake Edge
April 17, 2013

Python 3 has been out for close to four and a half years now—since December 2008—but its widespread adoption is still a ways off. Python 2.7 is younger—released in July 2010—but it is likely more commonly used than 3.x. In fact, usage of Python 2.x dwarfs that of the incompatible "next generation" Python. But with frameworks like Django and Twisted having been ported to Python 3.3, there is something of a feeling of inevitability for an eventual Python-3-only (or mostly) world. The only question is: how soon is eventual?

There are at least two questions floating around in the Python community with regard to future versions. One concerns how many more 2.7 releases there will be past the early April release of 2.7.4, while the other is a bit more amorphous: when will adoption of Python 3 really start to ramp up? The two questions are related, of course, but the problem is that the less knowable one will to a large extent govern how long 2.7 releases will be needed.

2.7 release manager Benjamin Peterson kicked off the discussion with a post provocatively titled "The end of 2.7". In it he posited that five years of maintenance was what was promised back when 2.7 was released in 2010, which would mean roughly another four six-monthly releases before 2.7 (and thus Python 2.x) would reach end of life. There were some questions about where Python 3 adoption would be by then, but there was no real opposition to Peterson's plan.

But there are the bigger questions. As was pointed out in the thread, there are still users of various end-of-life branches of the Python tree, including 2.4 and even 1.5. Those users are either happy with the existing version and unconcerned about security and other bug fixes—or they get support from one of the enterprise distributions such as RHEL, SUSE, or Ubuntu LTS. Contentment with those versions is not the only reason people stick with them, though, as there are real barriers to upgrading—especially to Python 3.

As Python creator (and the language's BDFL, benevolent dictator for life) Guido van Rossum put it, there are several issues that make migrating a large code base to a new Python version difficult and "they all conspire to make it hard to move forward, but not impossible". He goes on to outline those problems, which consist largely of the number of people and amount of time required, difficulties with external libraries and frameworks, and the general upheaval caused by such a move. Ultimately, though, he is optimistic:

But, despite all this, migrations happen all the time, and I am sure that Python 3 will prevail as time progresses. For many *users*, Python 3 may be a distraction. But for most *developers*, maintaining 2.7 is a distraction. By and large, users of 2.7 don't need new features, they just need it to keep working. And it does, of course.

Some suggestions in the thread that it is relatively straightforward to move to Python 3 by using tools like the six compatibility library were shot down by others. For example, Skip Montanaro noted that some code bases long predate the existence of six, or even Python 3. But it has gotten easier over time to migrate because new features have been added (or old features revived) to ease the transition. As Barry Warsaw pointed out, the Python developers have been learning from earlier porting efforts:

Python 3.3 is easier to port to than 3.2 is. I hope that we'll be able to take all of our experiences and funnel that into 3.4 to make it a better porting target still.

Van Rossum suggested that perhaps slowing down or stopping bug fixing in 2.7 might be the right course. Users have already worked around or otherwise dealt with any bugs they have run into. Ensuring that 2.7 continues to work on new releases of Windows and OS X (Linux too, but distributions normally take care of that) would instead be the focus. But there are some areas that do need new features; the IDLE IDE from the standard library has been mentioned as one area where bug fixes are needed.

Another interesting case is PyPy, which is an implementation of the Python language written in (a subset of) Python 2.7. Even PyPy3k, which will take Python 3 as input, is still written in RPython (restricted Python) based on 2.7. That means that the PyPy project will still be maintaining the 2.7 standard library for longer than the two years Peterson plans, as Maciej Fijalkowski noted.

When Martin von Löwis posited that it must be library support that leaves people "stuck" with Python 2, Fijalkowski was quick to point out another reason, at least for PyPy:

I'm stuck because I can't tell my users "oh, we didn't improve pypy for the last year/6 months/3 months, because we were busy upgrading sources you'll never see to python 3"

That problem is not limited to PyPy, of course; there are lots of Python applications where the version of the underlying source code is not of any particular interest to users. The users want an application that works and regularly gains new features, both of which could be negatively impacted by a Python version migration.

The consensus in the thread seems to be that Python 3 usage is increasing, and perhaps even reaching a tipping point because of recent migrations by frameworks like Django and Twisted. But Raymond Hettinger reminded everyone of the informal survey he conducted in his recent PyCon keynote, which found that out of roughly 2500 people, "almost everyone indicated that they had tried out Python 3.x and almost no one was using it in production or writing code for it". On the other hand, Ian Ozsvald has collected some data suggesting that Python 3 uptake is increasing.

In the final analysis, it won't be clear when (or if, though that outcome seems rather unlikely) a tipping point is reached until well after it happens. It is a bold step—some would say foolhardy—to change a language in a backward incompatible way as Python has done. It is not hard to see that any change of that sort will take some transitioning time. But the Python core developers are signaling that they believe the tipping point is at least in sight; anyone who has been blithely counting on 2.7 support continuing "forever" clearly needs to start at least thinking about their plans for the end of Python 2.7.

Comments (9 posted)

Brief items

Quotes of the week

But it can be worked around. Just use trial and error to find every Haskell library that does this, and then modify them to export all the symbols they use. And after each one, rebuild all libraries that depend on it.

You're very unlikely to end up with more than 9 thousand lines of patches. Because that's all it took me.

Joey Hess

And here, poor fool! with all my lore I stand, no wiser than before. All what I could derive from studying what lightweight means is, that one just has to claim to be lightweight. Bonus points if you include it into your name.
Martin Gräßlin

Comments (1 posted)

Wayland/Weston 1.1 released

The 1.1 release of the Wayland / Weston compositor system is out. New features include backends for the Raspberry Pi system and the Linux fbdev device, a number of performance improvements, a touchscreen calibration feature, a new software development kit, and more.

Full Story (comments: none)

NumaTOP launched

Intel has announced the availability of NumaTOP, a performance analysis tool for Non-Uniform Memory Access (NUMA) systems. The new tool "helps the user characterize the NUMA behavior of processes and threads and identify where the NUMA-related performance bottlenecks reside."

Full Story (comments: none)

Newsletters and articles

Development newsletters from the past week

Comments (none posted)

Faure: Report from the freedesktop summit

David Faure has posted a terse report from the free desktop summit held April 8 in Nuremberg. "Perhaps most importantly we have come to agreement on a plan for improving the maintenance of freedesktop specifications going forward. One representative from each of GNOME, KDE and Unity will form a joint maintainer team. This team will monitor and participate in conversations on the xdg list and decide when consensus has been reached. The intention is to revive the usefulness of the xdg list as the primary point of communication between desktop projects."

Comments (29 posted)

Stephenson: hackweek9: Lightweight KDE Desktop project

Will Stephenson describes the KLyDE project, which is aimed at the creation of a lightweight KDE desktop. "As has been repeated on Planet KDE over the past decade, KDE is not intrinsically bloated. At its core, it jumps through a lot of hoops for memory efficiency and speed, and is modular to a fault. But most packagings of KDE take a kitchen sink approach, and when you install your KDE distribution you get a full suite of desktop, applets and applications. The other major criticism of KDE is that it is too configurable. The KLyDE project applies KDE's modularity and configurability to the challenge of making a lightweight desktop. However, what I don't want to do is a hatchet job where functionality is crudely chopped out of the desktop to fit some conception of light weight."

Comments (62 posted)

Packard: DRI3K — First Steps

At his blog, Keith Packard reports on the progress of the next-generation Direct Rendering Infrastructure replacement, which despite his earlier pleas increasingly seems to be named DRI3000. "The biggest pain in DRI2 has been dealing with window resize. When the window resizes in the X server, a new back buffer is allocated and the old one discarded. An event is delivered to ‘invalidate’ the old back buffer, but anything done between the time the back buffer is discarded and when the application responds to the event is lost."

Comments (none posted)

Hall: Building Ubuntu SDK apps

Michael Hall announced the availability of 44 third-party applications written for the Ubuntu SDK, which will eventually be available to users running the Ubuntu Touch environment on phones or other devices. "We’re starting out by collecting a list of known apps, with information about where to find their source code, the status of packaging for the app, and finally whether they are available in the PPA or not. I seeded the list with the apps I’ve been blogging about, but it’s open to anybody who has an app, or knows about an app ..." In earlier posts, Hall has been reviewing many of the applications in depth.

Comments (none posted)

Semiotics in Usability: Guidelines for the Development of Icon Metaphors

The User Prompt blog is running a summary of its recent findings about icon design in the LibreOffice project. The work involved user testing to see how well common icon metaphors perform at the task of communicating to the user, and in many cases the results are not good news. "A lightning bolt can stand for ‘fast’ or ‘immediate’, but at the same time it can be interpreted as energy or hazard. The guiding principle should be deemed: The better known a function, the more abstract the metaphor can be. However, less known functions need explicit metaphors and a figurative support. In LibreOffice there are a few seldom used functions (e.g. the Navigator: Where does the Navigator lead you?), that should be placed less prominently or require a more pictorial icon."

Comments (none posted)

Page editor: Nathan Willis


Brief items

Xen becomes a Linux Foundation project

The Linux Foundation has announced that the Xen project has come under the Foundation's "collaborative project" umbrella. "The Xen Project is an open source virtualization platform licensed under the GPLv2 with a similar governance structure to the Linux kernel. Designed from the start for cloud computing, the project has more than a decade of development and is being used by more than 10 million users. As the project experiences contributions from an increasingly diverse group of companies, it is looking to The Linux Foundation to be a neutral forum for providing guidance and facilitating a collaborative network."

Comments (11 posted)

IFOSSLR 5.1 available

The seventh issue (Volume 5, No. 1) of the International Free and Open Source Software Law Review is now available. The article topics include an analysis and study of case law regarding APIs and the *GPL licenses, the rise of open source software foundations, and a study of the Lisp LGPL license.

Comments (none posted)

FSF: Tell W3C - We don't want the Hollyweb

The Free Software Foundation is circulating a petition to stop "Big Media moguls" from weaving Digital Restrictions Management (DRM) into HTML5. "Millions of Internet users came together to defeat SOPA/PIPA, but now Big Media moguls are going through non-governmental channels to try to sneak digital restrictions into every interaction we have online. Giants like Netflix, Google, Microsoft, and the BBC are all rallying behind this disastrous proposal, which flies in the face of the W3C's mission to "lead the World Wide Web to its full potential.""

Full Story (comments: none)

Articles of interest

Increasing participation of women in Free and Open Source Software covers the Outreach Program for Women, which now has sixteen organizations offering internships. "The GNOME project itself, which has had the longest experience with offering internships through the Outreach Program for Women, has seen a substantial increase in the participation of women. While women comprised only 4% of attendees at GNOME's yearly conference, GUADEC, in 2009, women comprised 17% of attendees in 2012. In a recent survey of newcomers who joined and stayed involved in 12 FOSS organizations, 50% of GNOME respondents were women whereas only 6% of the respondents from other organizations were women, with no other organization having more than 15%. The organizations that joined the Outreach Program for Women more recently will no doubt see similar changes."

Comments (24 posted)

New Books

"Hacking Secret Ciphers with Python" released

Hacking Secret Ciphers with Python is a new full-length book on Python programming that uses cryptography as the underlying subject matter. It is available under the Creative Commons noncommercial sharealike license.

Comments (4 posted)

The Modern Web--New from No Starch Press

No Starch Press has released "The Modern Web" by Peter Gasston.

Full Story (comments: none)

Upcoming Events

Events: April 18, 2013 to June 17, 2013

The following event listing is taken from the Calendar.

April 15–18: OpenStack Summit (Portland, OR, USA)
April 16–18: Lustre User Group 13 (San Diego, CA, USA)
April 17–18: Open Source Data Center Conference (Nuremberg, Germany)
April 17–19: IPv6 Summit (Denver, CO, USA)
April 18–19: Linux Storage, Filesystem and MM Summit (San Francisco, CA, USA)
April 19: Puppet Camp (Nürnberg, Germany)
April 20: Grazer Linuxtage (Graz, Austria)
April 21–22: Free and Open Source Software COMmunities Meeting 2013 (Athens, Greece)
April 22–25: Percona Live MySQL Conference and Expo (Santa Clara, CA, USA)
April 26–27: Linuxwochen Eisenstadt (Eisenstadt, Austria)
April 26: MySQL® & Cloud Database Solutions Day (Santa Clara, CA, USA)
April 27–28: WordCamp Melbourne 2013 (Melbourne, Australia)
April 27–28: LinuxFest Northwest (Bellingham, WA, USA)
April 29–30: 2013 European LLVM Conference (Paris, France)
April 29–30: Open Source Business Conference (San Francisco, CA, USA)
May 1–3: DConf 2013 (Menlo Park, CA, USA)
May 2–4: Linuxwochen Wien 2013 (Wien, Austria)
May 9–12: Linux Audio Conference 2013 (Graz, Austria)
May 10: CentOS Dojo, Phoenix (Phoenix, AZ, USA)
May 10: Open Source Community Summit (Washington, DC, USA)
May 14–17: SambaXP 2013 (Göttingen, Germany)
May 14–15: LF Enterprise End User Summit (New York, NY, USA)
May 15–19: DjangoCon Europe (Warsaw, Poland)
May 16: NLUUG Spring Conference 2013 (Maarssen, Netherlands)
May 22–24: Tizen Developer Conference (San Francisco, CA, USA)
May 22–25: LinuxTag 2013 (Berlin, Germany)
May 22–23: Open IT Summit (Berlin, Germany)
May 23–24: PGCon 2013 (Ottawa, Canada)
May 24–25: GNOME.Asia Summit 2013 (Seoul, Korea)
May 27–28: Automotive Linux Summit (Tokyo, Japan)
May 28–29: Solutions Linux, Libres et Open Source (Paris, France)
May 29–31: Linuxcon Japan 2013 (Tokyo, Japan)
May 30: Prague PostgreSQL Developers Day (Prague, Czech Republic)
May 31–June 1: Texas Linux Festival 2013 (Austin, TX, USA)
June 1–2: Debian/Ubuntu Community Conference Italia 2013 (Fermo, Italy)
June 1–4: European Lisp Symposium (Madrid, Spain)
June 3–5: Yet Another Perl Conference: North America (Austin, TX, USA)
June 4: Magnolia CMS Lunch & Learn (Toronto, ON, Canada)
June 6–9: Nordic Ruby (Stockholm, Sweden)
June 7–9: SouthEast LinuxFest (Charlotte, NC, USA)
June 7–8: CloudConf (Paris, France)
June 8–9: AdaCamp (San Francisco, CA, USA)
June 9: OpenShift Origin Community Day (Boston, MA, USA)
June 10–14: Red Hat Summit 2013 (Boston, MA, USA)
June 13–15: PyCon Singapore 2013 (Singapore, Republic of Singapore)

If your event does not appear here, please tell us about it.

Page editor: Rebecca Sobol

Copyright © 2013, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds