Trademarks and free software projects have something of a troubled history. On one hand, a trademark can protect the project from others using the name but offering sub-standard—or malicious—versions of the software. On the other, whoever holds the trademark also holds much of the power over the project as a whole. That can lead to conflicts, which is what we are seeing in a recent fork of the ZeroMQ (aka 0MQ or ØMQ) project.
ZeroMQ is a high-performance asynchronous messaging system that is based on BSD sockets. Two of the developers working on the project, Martin Sustrik and Martin Lucina, wrote about ZeroMQ for LWN in January 2010. Sustrik is the chief architect of ZeroMQ and, up until recently, served as the de facto "benevolent dictator" for the project, while Lucina has been working on the project since late 2009. On March 15, the two of them announced a fork of the LGPLv3-licensed ZeroMQ code as Crossroads I/O version 1.0.0.
There were two main reasons cited for the fork in that announcement, but based on an email exchange with Sustrik and Lucina, the trademark issue is clearly paramount, at least for Sustrik. The trademark is held by iMatix Corporation, which funded the early development of ZeroMQ. That trademark is governed by a policy that essentially gives iMatix full control over how the names (ZeroMQ, 0MQ, ØMQ, and zmq) can be used.
That worries Sustrik for a couple of reasons. He believes that the underlying protocols for ZeroMQ should be standardized via the Internet Engineering Task Force (IETF), but that is not something that is easily done using volunteers:
His solution to the funding problem is to allow companies to make money by releasing modified versions, plugins, and extensions that can use the ZeroMQ name, which is something he says the current policy does not allow:
Given that trademarks are at the center of the disagreement, it is ironic that the trademark policy was created at Sustrik's request in May 2011. In a message on the zeromq-dev mailing list, iMatix CEO Pieter Hintjens noted that he created a policy and asked for feedback. There were no major complaints about the policy in that thread, but a suggestion from Sustrik that the trademarks be handed to a foundation (like Software in the Public Interest, which holds the Debian trademark) was not something that Hintjens was interested in:
But Sustrik is concerned that some day the trademarks could fall into "hostile" hands, which could essentially take control of various ZeroMQ-related projects (like the language bindings for roughly 30 languages) by asserting a different trademark policy. That also may make it difficult for ZeroMQ to attract companies to work on the project: "If I was a commercial entity I would be hesitant about investing in such a project". One of the early goals for Crossroads I/O is to find a "neutral entity" to control the trademark by the end of 2012. Until then, the policy states: "'Crossroads I/O' is trademark of Martin Sustrik. Martin Sustrik grants everyone the right to use the trademark with products related to Crossroads I/O (packages, plugins, extensions, tools and similar)."
Since a trademark owner can decide what gets to be called "ZeroMQ", it can also make fairly sweeping decisions about how the project is governed and released. According to Sustrik, after the 3.1.0 beta release, he was "explicitly prohibited to use [the] ZeroMQ trademark, which gives me no other chance than to fork". That prohibition came in a private message from Hintjens, he said. Around the same time, Hintjens posted a proposal for maintenance of libzmq (the heart of ZeroMQ) that makes no mention of a single maintainer, which is the role that Sustrik thought he had been filling.
In a Google+ posting (which Lucina called an "objective writeup"), Ian Barber pointed to a problem that may have been part of what led to the new development process: "Martin Sustrik was effectively a bottleneck for getting improvements into core, and was clearly being pulled multiple directions on features and functionality in the project".
As might be guessed, Hintjens has a somewhat different view from that of Sustrik and Lucina. In an email exchange, he noted that Sustrik was "in danger of burning out", which is part of what led to the changes. In addition, he doesn't see the trademark policy as a problem:
According to Hintjens, Sustrik and Lucina "happily ignored about a year of consensus on process, and unilaterally decided to take back control over stable releases" when they made the 3.1 release. That didn't sit well with some of the other contributors to ZeroMQ, which is what led to codifying the process and, ultimately, the fork.
The new process (scroll down to "Contributing to libzmq") is more wide open than that of most free software development projects. Anyone can fork the GitHub repository, make changes, and request a pull into the mainline. The maintainers are only meant to enforce the process and ensure that the test cases pass; they are not allowed to impose their opinions on the quality of the code or features. If others in the community are unhappy with a particular patch, they are supposed to submit another patch that fixes or reverts it—the maintainers are just there to apply it. The underlying assumption is that community members will find and fix problems quickly by getting changes in front of them sooner.
Barber calls the process "incredibly open", in fact, more open than he is comfortable with, but it has worked well so far:
Crossroads I/O is going to use the more common "benevolent dictator" model, with Sustrik in that role. Barber says that makes sense, because it is a proven model for open source development, but he is concerned that Crossroads will hit the same problems that ZeroMQ ran into. He also notes that while Sustrik and Lucina have both contributed a great deal to ZeroMQ, they are far from the only contributors to the project. It's not clear, at least yet, if others from ZeroMQ will start working on Crossroads I/O, nor whether the language binding authors will support both projects. That leaves users in a bit of a quandary, he said.
In any case, the new process installed by Hintjens is something of an experiment: "We will try it for a while and if it proves broken, we will fix the breakage and continue to improve it." But that experiment, coupled with what Sustrik sees as a restrictive trademark policy, is enough to cause the fork. What remains to be seen is if Sustrik and Lucina can build enough of a community to continue with their plans.
It is, at least in some sense, a friendly fork. Hintjens greeted the Crossroads I/O announcement with: "Congratulations on this, guys. It's nice to see the LGPL in action." Based on the emails from Hintjens, Sustrik, and Lucina, there was certainly some initial hostility and hurt feelings because of the fork, but, in the end, both sides seem to wish the other well. The two code bases share the same license, so there is no reason that patches cannot flow between them.
Hintjens noted that some may criticize the fork because it will split the community, but he strongly defended Crossroads I/O for a number of reasons. For one, he sees it as a place to do more experimental work, while ZeroMQ focuses on stability. He is also interested in seeing which of the two development models works out better over the long run. It will also provide a choice of free solutions for users, he said.
This fork, like most, comes out of a difference in philosophies between two sub-groups of the project. Unlike other forks, though, it doesn't come about because of license disagreements. It is interesting for another reason, however, as it serves as a reminder that a trademark holder can exercise a great deal of control over a project. Since the owner of the trademark can effectively decide what gets to be called by that name, they can override the wishes of longtime developers.
From all appearances, much of the ZeroMQ community is not upset with the changes or the fork. But most also seem to be keeping an eye on what Crossroads I/O is up to. If Sustrik and Lucina can make the benevolent dictator model produce a better messaging library than the more open ZeroMQ model does, there may be a shift toward Crossroads. By taking the trademark question off the table with a more liberal policy, Crossroads may attract some of the companies that Sustrik worries are put off by the ZeroMQ policy. That remains to be seen, of course, but what we do see is yet another example of the problems inherent in the coexistence of trademarks and free software.
Fedora's advisory board is debating changing its long-followed tradition of selecting a code name for each new release according to a peculiar formula. Although the code names have never been an integral part of the distribution's marketing plan, the impending release of Fedora 17, with its crowd-selected moniker "Beefy Miracle", has stirred up several critics — some taking offense at the name itself, some finding it a burden to explain to outsiders, and some simply wanting to do away with release code names altogether.
Fedora's release code names have a history pre-dating the project itself; the tradition started at Red Hat, which assigned a code name to each Red Hat desktop release according to a peculiar pattern wherein each pair of consecutive code names shared some relationship, but that relationship differed for every pair of releases. In those days, however, the connection between each new name and its predecessor remained a secret; the code name originated behind closed doors and when the new release landed, figuring out the link was part of the game.
That all changed when Fedora picked up the code name mantle, and instituted a public procedure for suggesting, vetting, and voting on each new release name. The project now takes suggestions on its wiki for each new development cycle, where submitters list the connection between the previous code name and their proposed follow-up. Red Hat's legal department and the Fedora Advisory Board whittle down the options to a manageable number, and voting takes place on the Fedora project site.
Code name selection has not always been a smooth process, but no name was quite as divisive as "Beefy Miracle," which was first proposed as a code name for Fedora 16, and which lost out narrowly to the eventual winner "Verne" — despite a public marketing campaign run on the behalf of "Beefy Miracle". The name then resurfaced as a proposal for Fedora 17, and this time won the poll. Fedora 17 is slated for release on May 8.
Even during the Fedora 16 voting cycle, "Beefy Miracle" was a controversial choice, partly for its nonsensical nature, but also because its connection to the previous code name "Lovelock" stretched the rules. The official guidelines state that each pair of code names must share an "is a" relationship (e.g., Laughlin is a city in Nevada, and Lovelock is a city in Nevada). The submitted connection between "Lovelock" and "Beefy Miracle" was that both strings eventually produce the number four when fed through an iterative letter-counting formula. The code name lost out to "Verne," of course, but the submitted connection for the Fedora 17 release name was also tenuous: that both "Verne" and "Beefy Miracle" had been proposed as possible code names for Fedora 16.
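For the curious, the letter-counting game can be sketched in a few lines of Python. This assumes the common formulation of the puzzle: count the letters in a string, spell that count out in English, and repeat; every chain ends at "four," the only English number word whose letter count equals its own value. The word table here covers only the counts these particular names produce.

```python
# A sketch of the iterative letter-counting formula (an assumption about
# its exact formulation).  Every chain terminates at "four", since
# letters("four") == 4.
WORDS = {3: "three", 4: "four", 5: "five", 6: "six", 8: "eight", 12: "twelve"}

def letters(s):
    # Count alphabetic characters only, ignoring spaces and punctuation.
    return sum(c.isalpha() for c in s)

def fixed_point(name):
    word = name
    while True:
        n = letters(word)
        if WORDS[n] == word:  # reached the self-describing word
            return n
        word = WORDS[n]

print(fixed_point("Lovelock"), fixed_point("Beefy Miracle"))  # → 4 4
```

"Lovelock" runs 8 → "eight" → "five" → "four", while "Beefy Miracle" runs 12 → "twelve" → "six" → "three" → "five" → "four" — hence the claimed (if tenuous) connection.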
In October 2011, after the voting, there were several critics. Michael Schwendt called it impossible to explain to users; Christoph Wickert said it had no connection to the actual product, and another user named Roger lamented that it did not signify anything "important." But the most extreme reaction came from Bob Jensen, who stated his desire to end the use of code names altogether.
Jensen's 2011 call to end code names outright was terse and isolated to his sole email, but more users echoed the same sentiment as the planning cycle for Fedora 18 got underway in March. Stephen Smoogen called for an end to the naming cycle or changing to a different format in a message to the Fedora advisory-board list, saying "I am not sure what 'naming' does for us anymore. It is way too early in a release cycle to 'produce excitement'. [And if we can change out init systems, filesystem layout, we can surely examine whether naming does much or if there is something new we want to try.]"
In the resulting discussion, many of those who favor abolishing code names seemed to see no value in the actual names themselves. Seth Vidal, for example, said that selecting names consumes time and effort, but that no one remembers them. In a separate message, he said that release numbers were more useful because they enable one to quickly figure out how old a particular release is by counting backwards, which Fedora's unordered code names do not.
Fedora project leader Robyn Bergeron countered that the code names are incorporated into wallpapers, other design elements, and marketing materials, that participants in the naming process enjoy it, and further contended that release numbers are only "memorable" because people memorize how to count when they are children. Máirín Duffy (who leads the Fedora design team) said that the designers rarely find the code names inspirational, and although she was not in favor of abandoning them entirely (saying release numbers alone would be "cold"), she advocated selecting a consistent, ongoing theme like many other distributions use. "The current naming system usually results in awkward names that require lengthy explanation to those not involved."
Others pointed out additional value in having code names. Jason Brooks observed that when looking for help, code names make for more useful search terms than do numbers. Zoltan Hopper likened naming releases to naming children, and said each release is the creation of the community and thus ought to have a personality or identity.
Nevertheless, many of those who stood up for preserving release code names found room for improvement in the process itself. Clint Savage (who defended the use of code names as a part of Fedora's traditions and pointed to the value of non-artwork marketing such as "Beefy Miracle"-themed hot dog roasts), argued that the value of the code names outweighed the work involved in the process of selecting them, and that "maybe the problem is just a matter of streamlining."
Hopper, like Duffy, suggested picking a single theme. Choosing a theme wisely, such as scientists or inventors, he said, would also assist in marketing the distribution by lining up with Fedora's efforts to stay cutting-edge. Replacing the current process with a new one was also suggested by Matthias Clasen, who said "the naming thing started as a fun game, then it got 'standardized', and now it is just one more process that has stopped to be either fun or useful. [...] Time to reevaluate and come up with something fresh and fun that we can do for each release."
A distinct (and arguably more serious) issue raised on the advisory-board list is the difficulty of choosing a code name that does not offend the beliefs or sensibilities of anyone. That tricky subject was raised first in a non-public ticket filed on the Advisory Board's issue tracker. The bug filer, Rajesh Ranjan, subsequently agreed to make the ticket's comment thread public by posting it to the advisory-board list.
The specific problem Ranjan raises is that the name "Beefy Miracle" "is having a huge negative cultural, social, and political connotation with respect to India and several religions of India and [the] world." The cow is considered sacred in Hinduism, he said, so the term "beefy" is offensive because it refers to food made from beef, and it may be offensive to members of other religious groups as well as secular people from the Indian subcontinent. One commenter replied that as an international, non-religious and non-political project, Fedora should not seek to "appease" any specific belief systems — and that any code name chosen will inevitably offend someone.
That led others on the list to suggest picking a non-offensive code name "theme" — although upon further debate, finding a neutral theme is not so simple. Mario Juliano Grande Balletta suggested astronomy terms, but Josh Boyer pointed out that many astronomical objects are named for potentially offensive things like the Roman god of war, and that any names of deities can be offensive to atheists. Duffy suggested coffee drinks, but Richard Fontana observed that some religions find caffeine offensive. Fontana also pointed out that the suggestion of famous scientists will undoubtedly skew towards men for historical reasons, while Ranjan noted that even numbers themselves can have negative cultural connotations.
Of course, picking names that have positive connotations everywhere on the globe is no simple task; consider the Mer project's debate in October 2011, which produced gems like "Meer" (which sounds diminutive in English), "MerDE" (which is unflattering in French), "Mermade" (which looks like a typo), and "Mermer" (which sounds difficult to understand on a phone). Given the impossibility of pleasing all of the people all of the time, talk turned to practical measures that the project could take to weed out offensive names during the code name selection process. Ranjan proposed asking the translation mailing list to look for offensive terms, since it encompasses a wide cultural circle. Nicu Buculei said that sounded like too much work, and user John Rose (aka inode0) speculated that it would be too hard to get volunteers to tackle the necessary code name vetting.
Bergeron acknowledged the importance of avoiding offensive terms, but expressed her faith in the Fedora community to catch and call attention to such offensive terms in code names and elsewhere:
Nevertheless, she advocated forwarding the code name suggestion list to the translators for review, at the same time that it is sent to Red Hat's legal office for approval.
As to abandoning code names altogether, there has yet to be a real consensus on the board. Several people suggested adding an option to the Fedora 18 ballot to drop code names in future releases, while others felt that that question should be submitted to a separate vote. The window for suggesting code names for Fedora 18 is now closed, and the board has until March 30 to perform its own analysis of the suggestions. Voting will commence April 6; thus whichever route the board decides to take on the no-more-code-names question, the world will see it soon enough.
Fedora would not be the only distribution to make releases without a code name if it does choose that route; openSUSE, Slackware, and many others work without them, too. On the other hand, code names are an accepted practice — although most other projects that use them stick to a set theme: Debian uses Toy Story characters, and Sugar uses fruit. A dimension not raised in the advisory-board thread is that many projects stick to an alphabetic sequence for their code names, which makes them easier to sort into the correct order. That list includes Android's desserts, Ubuntu's adjective-animal pairs, and even Maemo's trade winds. Plenty of other open source projects also use release names, whether they are chosen to communicate a message or are fully tongue-in-cheek. Perhaps Fedora's experience with "Beefy Miracle" will prompt a change of pace, but whichever direction the project heads from here, it will at least have made a conscious decision about what role — if any — its code names ought to play in its overall message.
In its early days, the GNU project was forced to focus on a small number of absolutely crucial projects; that is why the first program released under the GNU umbrella was Emacs. Once the core was in place, though, the developers realized they would need a few other components to build their new system; a C library featured prominently on that list. So, back in 1987, Roland McGrath started development on the GNU C library; by 1988, it was seen as being sufficiently far along that systems could be built on top of it.
By the time the Linux kernel came around in 1991, GNU libc was, like many other GNU components, the obvious choice for those working to build the first distributions. But, while GNU libc was a good starting point, it quickly became clear that (1) it still needed a lot of work, and (2) the GNU project's management style was, in typical fashion, making it hard to get that work done and merged. So, early on in the history of Linux, the GNU C library was forked and a Linux-specific version ("Linux libc") was maintained and shipped with Linux distributions.
The GNU project was rather slow to come around to the idea that Linux was the system that they had been trying to build all these years; they continued to develop their C library on their own, despite the fact that Linux was not using it. Early on most of the work came from Roland, but, slowly, other contributors appeared; the first contribution from Ulrich "Uli" Drepper appears to have been made in September, 1995. Early 1997 saw the first glibc 2.0 release; by then, most commits were made by Ulrich (though the code sometimes came from others).
While Linux libc had been adequately maintained, it had not seen vast amounts of new development work. There came a point where it became clear that, in fact, glibc 2 was a better C library in many ways. Distributors of Linux came to the conclusion that the time had come to switch back to the GNU library and, for all practical purposes, simply abandon the Linux libc project. Those of us who lived through the subsequent transition (as seen in the Red Hat 5.0 and Debian 2.0 releases) still tend to shudder; it was a time when programs would not build and bugs abounded. But things eventually settled down and glibc has been the standard C library for most Linux distributions since.
Glibc has continued to evolve up to the present at varying speeds; during most of this time, the bulk of the work has been done by Ulrich Drepper. Of the nearly 19,000 commits found in the project's git repository (which contains changes back to 1995), over 12,000 were made by Ulrich. Never known for a willingness to defer to others, Ulrich quickly became the decision maker for glibc. The project's steering committee was formed by the FSF in 2001 as a way of asserting some control over the project, but, to a great extent, that control remained in Ulrich's hands. In 2008, Roland described the situation this way:
Four other developers (Andreas Jaeger, Jakub Jelinek, Richard Henderson, and Andreas Schwab) had the ability to commit "after approval." But that approval, in the end, had to come from Ulrich.
One need not look far to find complaints about how Ulrich managed the glibc project. He was never one to suffer fools, and, evidently, he saw himself surrounded by a lot of fools. Attempts to get changes into glibc often resulted in serious flaming or simply being ignored. Developers often despaired of getting things done and simply gave up trying. It is easy to criticize Ulrich's attitude at times, but one should also not lose track of all the work he got done. This includes steady improvements to glibc, big jumps in functionality (as with the Native POSIX Thread Library work done with Ingo Molnar) and lots of fixes. Ulrich's work has made all of our systems better.
That work notwithstanding, Ulrich is widely seen as, at best, an unpleasant maintainer who has turned the glibc project into an unwelcoming place. One need not challenge that characterization to point out that glibc actually had a number of problems associated with highly cathedral-style projects: a well-ensconced priesthood, a lack of interest in or awareness of the wider world's needs, etc. At some point, it seemingly became clear to a number of developers associated with glibc that it needed to become a more open project with more than a single point of control. A number of changes have been made to prod it in that direction.
The switch to Git for source code management in May, 2009 was one such change. At that time, the repository was made slightly more open so that non-core maintainers could keep branches there. Ulrich opposed that decision, but Roland responded:
The project started creating separate release branches and putting maintainers other than Ulrich in charge of them. The restricted-access "libc-hackers" mailing list fell into disuse and has been closed down. The use of Fedora as a sort of private testing ground has been halted after things broke one too many times. And a strong effort has been made to encourage more developers and to merge more code. The result is a release history that looks like this:
    Release   Date         Changes
    2.3       2002-10-03      3204
    2.4       2006-03-06      5276
    2.5       2006-09-29       348
    2.6       2007-05-15       357
    2.7       2007-10-18       446
    2.8       2008-04-12       281
    2.9       2008-11-13       293
    2.10      2009-05-09       344
    2.11      2009-10-30       408
    2.12      2010-05-03       389
    2.13      2011-01-17       261
    2.14      2011-05-31       256
    2.15      2011-12-23       582
(The 2.15 date is when the release was tagged; the announcement was, for whatever reason, not sent out for a few months). The 2.15 release suggests that development activity is on the increase after a sustained low point. Indeed, it is at its highest point since semi-regular releases began in 2006. Activity since 2.15 (483 commits, as of this writing) continues to be high by recent historical standards; perhaps more tellingly, only about 25% of those commits came from Ulrich. Given that he accounts for over 60% of the changes from 2.10 to 2.14, that suggests that a significant shift has taken place.
Also significant is the fact that Ulrich left Red Hat in September, 2010; according to his LinkedIn page, he is now "VP, Technology Division at Goldman Sachs." One can easily imagine that his new position might just have requirements that are unrelated to glibc maintenance. So a drop in participation is not entirely surprising; what is significant is that a growing community is more than taking up the slack.
The culmination of these trends came on March 26, when Roland announced that there was no longer any need for a glibc steering committee:
Glibc will remain a project under the GNU umbrella; Roland, along with Joseph Myers and Carlos O'Donell, will be the official GNU maintainers. But that role will give them no special decision-making powers; those powers are meant to be exercised by the project's development community as a whole. David Miller (an active glibc contributor) was quick to celebrate this change:
David also hinted that one other possible obstacle to contribution - the requirement to assign copyrights to the Free Software Foundation - is also being worked on, though the form the solution might take is unclear. Other developers are already talking about working to merge at least some of the changes from the EGLIBC fork back into the glibc mainline and reopening old bugs and change requests.
So the GNU C library project has moved into a new phase, one where, for the first time in its long history, it will not be under the control of a single developer. It will be interesting to watch where things go from here. If the project succeeds in building robust community governance and shedding its hostile-to-contributors image, the pace of development could pick up considerably. Given the crucial position occupied by glibc on so many systems, these changes are almost certainly a good thing.
Striking the delicate balance between usability and secure defaults has surfaced as an unexpected issue for the Apache OpenOffice (AOO) project in the closing days of its 3.4 development cycle. An AOO developer opened a bug report on March 18 forecasting trouble with the office suite's new encryption settings. The problem is that recent AOO builds switched from Blowfish to AES as the default cipher. Since previous releases of OpenOffice and most other ODF-reading applications do not support AES (LibreOffice added it in LO 3.5), encrypted files created with the new builds were unreadable in other programs. Complicating matters is ambiguity about how to interpret the ODF standard, how to expose new encryption options in the interface, and whether or not file encryption is implemented securely to begin with.
The ODF document format supported by AOO, LibreOffice, and other office suites allows users to password-encrypt any file (via the "Save with Password" option). Up until recently, Blowfish was the default choice in every ODF application. But, as original bug reporter Dennis Hamilton noted, starting with revision 1293550, AOO produces AES-256 cipher-block chaining (CBC) encrypted documents by default — which are then unreadable by LibreOffice 3.3 and 3.4 (prior to 3.4.5), the last stable release of OpenOffice, and by Lotus Symphony. Exacerbating matters, all three programs report that the encrypted file is malformed, rather than reporting that it uses a different encryption method.
The encryption algorithm used in a compliant ODF file is specified in the manifest, which is an XML file stored in the ODF ZIP-archive-based file format. Section 4.8.1 of the OpenDocument 1.2 specification defines only one value as compliant: Blowfish, in 8-bit cipher feedback (CFB) mode. However, the specification allows "extended" ODF 1.2 files to support other algorithms, and mandates a W3C-standardized syntax for identifying them. Thus, strictly speaking, the other ODF applications are failing to recognize a correctly-formatted ODF 1.2 "extended" file, which could hardly be construed as a bug in AOO.
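To make the manifest mechanics concrete, here is a rough sketch of how an application could inspect the declared cipher. The element and attribute names follow the ODF 1.2 manifest schema, but the sample document is fabricated for illustration; a real ODF package carries one `file-entry` per encrypted stream, along with initialization vectors, key-derivation parameters, and checksums that are omitted here.

```python
import io
import zipfile
import xml.etree.ElementTree as ET

MANIFEST_NS = "urn:oasis:names:tc:opendocument:xmlns:manifest:1.0"

# Minimal stand-in manifest.  Per ODF 1.2 section 4.8.1, the only
# algorithm-name value required for compliance is "Blowfish CFB";
# "extended" documents identify other ciphers by a W3C xmlenc URI,
# as with the AES-256-CBC identifier used here.
SAMPLE_MANIFEST = """<?xml version="1.0" encoding="UTF-8"?>
<manifest:manifest xmlns:manifest="{ns}">
 <manifest:file-entry manifest:full-path="content.xml"
                      manifest:media-type="text/xml">
  <manifest:encryption-data>
   <manifest:algorithm
     manifest:algorithm-name="http://www.w3.org/2001/04/xmlenc#aes256-cbc"/>
  </manifest:encryption-data>
 </manifest:file-entry>
</manifest:manifest>""".format(ns=MANIFEST_NS)

def encryption_algorithms(odf_bytes):
    """Return the declared algorithm-name of every encrypted stream."""
    with zipfile.ZipFile(io.BytesIO(odf_bytes)) as zf:
        root = ET.fromstring(zf.read("META-INF/manifest.xml"))
    return [alg.get("{%s}algorithm-name" % MANIFEST_NS)
            for alg in root.iter("{%s}algorithm" % MANIFEST_NS)]

# Build a throwaway "package" containing only the manifest.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("META-INF/manifest.xml", SAMPLE_MANIFEST)

print(encryption_algorithms(buf.getvalue()))
# → ['http://www.w3.org/2001/04/xmlenc#aes256-cbc']
```

A reader that checked this identifier before attempting decryption could report "unsupported cipher" instead of "file is corrupt," which is precisely the diagnostic the older applications failed to give.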
Several AOO developers observed that AOO was not to blame for other applications failing to understand the algorithm identifier in the file manifest, and consequently voted that the bug did not qualify as a blocker for the upcoming 3.4 release. Rob Weir, co-chair of the ODF 1.2 specification committee, said the problem was a security-versus-interoperability trade-off that could be handled by user education: users can be told about the interoperability issue and can manually select the Blowfish cipher if desired. Furthermore, Weir argued that Blowfish needed to be replaced by AES anyway, both because AES is a newer cipher and because it is a US-government-recommended standard.
Hamilton disagreed on both points. First, he said, the AOO builds do not offer the user any way to select the encryption algorithm: they use AES automatically. Rather than serving as a "default" (which implies that a setting is available), the encryption algorithm used is fixed at build time — consequently AOO appears to produce corrupted ODF files, which will result in "a support nightmare" if released. Second, using AES encryption instead of Blowfish may not really increase security, he added in a follow-up, because ODF provides message digests based on the same start key used to encrypt the file, and because ODF does not properly salt digests. That provides attackers with a much easier target than the encrypted message body, making it irrelevant which encryption cipher is used.
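The salting point is worth unpacking. The snippet below is not ODF's actual key-derivation scheme, just a generic illustration of why unsalted digests are the weak link Hamilton describes: without a salt, identical passwords always yield identical digests, so an attacker can precompute a dictionary of digests once and test it against every captured file at the same time, regardless of how strong the bulk cipher is.

```python
import hashlib
import os

password = b"correct horse"

# Unsalted: equal passwords always produce equal digests, enabling
# precomputed-dictionary attacks against many files at once.
unsalted_1 = hashlib.sha256(password).hexdigest()
unsalted_2 = hashlib.sha256(password).hexdigest()
assert unsalted_1 == unsalted_2

# Salted (with key stretching, here PBKDF2): a per-file random salt
# makes each file's digest unique, so each must be attacked separately.
salt_a, salt_b = os.urandom(16), os.urandom(16)
salted_a = hashlib.pbkdf2_hmac("sha256", password, salt_a, 100_000)
salted_b = hashlib.pbkdf2_hmac("sha256", password, salt_b, 100_000)
assert salted_a != salted_b
```

If the digest can be attacked this cheaply, the choice between Blowfish and AES for the message body is, as Hamilton argues, largely beside the point.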
Furthermore, Hamilton argued that ODF's XML contains extensive "boilerplate" text that can aid attackers in discovering a password, regardless of the cipher used:
Not everyone agrees with his analysis, but Hamilton has submitted formal proposals for ODF 1.3 to fix the digest problems, and proposes introducing "chaff" into the known-plaintext files to further deflect attack. More immediately, however, he attached a short patch to restore Blowfish as the encryption algorithm in AOO.
A discussion thread on the AOO development list ran in parallel to the one on the bug tracker. However, different facets of the issue cropped up on the list. There, Weir noted that LibreOffice, too, had enabled AES encryption, which should significantly increase the number of users able to decrypt AES-based files, and pointed to the lack of complaints or confusion from users of either office suite.
T.J. Frazier reiterated that the root issue was not that AES was a bad default choice, but that AOO did not present any UI for the user to select an encryption cipher. He also argued that introducing an incompatibility with older releases was a problem in and of itself. "It is *wrong* to break compatibilities as this does, without long lead-time, and opt-in possibilities, unless there exists some drastic need. That has not been shown. Improvement, yes; crucial, no." Finally, he proposed several methods of enabling knowledgeable users to manually select AES, and volunteered to do the UI work.
Hamilton replied that there had been complaints in the LibreOffice community, and observed that the LibreOffice project had back-ported AES support into its 3.4 release series (starting with 3.4.5) in order to restore compatibility. It should also be noted that the problem only affects users of the older tools trying to read password-protected files created with the newer ones; reading Blowfish-encrypted files is still supported in the new versions.
Ultimately, however, release manager Jürgen Schmidt had the final say; he accepted the issue as critical enough to warrant reverting to Blowfish in the AOO 3.4 release, and favored implementing a user-selectable encryption setting for the 4.0 series. As he added in a subsequent message, "most users don't care about the technical details and they will be simply confused if it won't work any more." Weir concurred with that plan, saying, "users who are smart enough to know they want AES will be smart enough to set that option."
That may be true, but of course introducing user-configurable encryption settings will be a UI challenge of its own. For its part, the LibreOffice team is also planning to institute a UI review for the next release cycle. As Michael Meeks pointed out, the changes affect document signing as well as password-encryption.
Meeks did not elaborate, but considering Hamilton's comment in the bug tracker outlining several different attacks on the encrypted files and digests, there may be no shortage of options. Some of those may require changes to the ODF format to fix completely, but all of them require a carefully-considered interface. After all, the "smart" users may be counted on to get it right more often than not, but making it difficult for the inexpert users to choose poor settings is also important. The more complexity users are presented with, the more of them are likely to simply stick with the defaults.
Year after year we've watched ICANN suddenly shift and sway like the proverbial bull in the china shop, smashing past promises and pronouncements in its wake. And now, like an out of control starship that has lurched beyond a black hole's event horizon, it is being sucked inexorably toward a dark chaos of greed, a maelstrom of its own creation.
Package(s): asterisk
CVE #(s): CVE-2012-1183 CVE-2012-1184
Created: March 28, 2012
Updated: May 4, 2012
Description: The asterisk telephony system prior to version 22.214.171.124 suffers from a stack overrun in milliwatt_generate() and a buffer overflow vulnerability in ast_parse_digest(). Either could be exploited to execute arbitrary code.
Package(s): chromium
CVE #(s): CVE-2011-3049 CVE-2011-3050 CVE-2011-3051 CVE-2011-3052 CVE-2011-3053 CVE-2011-3054 CVE-2011-3055 CVE-2011-3056 CVE-2011-3057
Created: March 26, 2012
Updated: November 7, 2012
Description: From the CVE entries:
Google Chrome before 17.0.963.83 does not properly restrict the extension web request API, which allows remote attackers to cause a denial of service (disrupted system requests) via a crafted extension. (CVE-2011-3049)
Use-after-free vulnerability in the Cascading Style Sheets (CSS) implementation in Google Chrome before 17.0.963.83 allows remote attackers to cause a denial of service or possibly have unspecified other impact via vectors related to the :first-letter pseudo-element. (CVE-2011-3050)
Use-after-free vulnerability in the Cascading Style Sheets (CSS) implementation in Google Chrome before 17.0.963.83 allows remote attackers to cause a denial of service or possibly have unspecified other impact via vectors related to the cross-fade function. (CVE-2011-3051)
The WebGL implementation in Google Chrome before 17.0.963.83 does not properly handle CANVAS elements, which allows remote attackers to cause a denial of service (memory corruption) or possibly have unspecified other impact via unknown vectors. (CVE-2011-3052)
Use-after-free vulnerability in Google Chrome before 17.0.963.83 allows remote attackers to cause a denial of service or possibly have unspecified other impact via vectors related to block splitting. (CVE-2011-3053)
The WebUI privilege implementation in Google Chrome before 17.0.963.83 does not properly perform isolation, which allows remote attackers to bypass intended access restrictions via unspecified vectors. (CVE-2011-3054)
The browser native UI in Google Chrome before 17.0.963.83 does not require user confirmation before an unpacked extension installation, which allows user-assisted remote attackers to have an unspecified impact via a crafted extension. (CVE-2011-3055)
Google Chrome before 17.0.963.83 allows remote attackers to bypass the Same Origin Policy via vectors involving a "magic iframe." (CVE-2011-3056)
Google V8, as used in Google Chrome before 17.0.963.83, allows remote attackers to cause a denial of service via vectors that trigger an invalid read operation. (CVE-2011-3057)
Package(s): expat
CVE #(s): CVE-2012-0876 CVE-2012-1148
Created: March 28, 2012
Updated: June 8, 2016
Description: The expat XML parsing library suffers from a memory leak and a hash table collision flaw; either could be exploited for denial-of-service purposes.
Created: March 28, 2012
Updated: March 28, 2012
Description: Expat suffers from a memory leak which may be exploited in a denial-of-service attack. See this message for (a little) more detail.
Created: March 23, 2012
Updated: July 7, 2014
Description: From the Mandriva advisory:
Multiple out-of-bounds heap-based buffer read flaws and invalid pointer dereference flaws were found in the way the file utility, used for determining file types, processed the header section of certain Composite Document Format (CDF) files. A remote attacker could provide a specially crafted CDF file that, once inspected by the victim's file utility, would lead to a crash of the file executable.
Package(s): freetype
CVE #(s): CVE-2012-1126 CVE-2012-1127 CVE-2012-1128 CVE-2012-1129 CVE-2012-1130 CVE-2012-1131 CVE-2012-1132 CVE-2012-1135 CVE-2012-1137 CVE-2012-1138 CVE-2012-1139 CVE-2012-1140 CVE-2012-1141 CVE-2012-1143
Created: March 23, 2012
Updated: April 24, 2012
Description: From the Ubuntu advisory:
Mateusz Jurczyk discovered that FreeType did not correctly handle certain malformed BDF font files. If a user were tricked into using a specially crafted font file, a remote attacker could cause FreeType to crash. (CVE-2012-1126)
Mateusz Jurczyk discovered that FreeType did not correctly handle certain malformed BDF font files. If a user were tricked into using a specially crafted font file, a remote attacker could cause FreeType to crash. (CVE-2012-1127)
Mateusz Jurczyk discovered that FreeType did not correctly handle certain malformed TrueType font files. If a user were tricked into using a specially crafted font file, a remote attacker could cause FreeType to crash. (CVE-2012-1128)
Mateusz Jurczyk discovered that FreeType did not correctly handle certain malformed Type42 font files. If a user were tricked into using a specially crafted font file, a remote attacker could cause FreeType to crash. (CVE-2012-1129)
Mateusz Jurczyk discovered that FreeType did not correctly handle certain malformed PCF font files. If a user were tricked into using a specially crafted font file, a remote attacker could cause FreeType to crash. (CVE-2012-1130)
Mateusz Jurczyk discovered that FreeType did not correctly handle certain malformed TrueType font files. If a user were tricked into using a specially crafted font file, a remote attacker could cause FreeType to crash. (CVE-2012-1131)
Mateusz Jurczyk discovered that FreeType did not correctly handle certain malformed Type1 font files. If a user were tricked into using a specially crafted font file, a remote attacker could cause FreeType to crash. (CVE-2012-1132)
Mateusz Jurczyk discovered that FreeType did not correctly handle certain malformed TrueType font files. If a user were tricked into using a specially crafted font file, a remote attacker could cause FreeType to crash. (CVE-2012-1135)
Mateusz Jurczyk discovered that FreeType did not correctly handle certain malformed BDF font files. If a user were tricked into using a specially crafted font file, a remote attacker could cause FreeType to crash. (CVE-2012-1137)
Mateusz Jurczyk discovered that FreeType did not correctly handle certain malformed TrueType font files. If a user were tricked into using a specially crafted font file, a remote attacker could cause FreeType to crash. (CVE-2012-1138)
Mateusz Jurczyk discovered that FreeType did not correctly handle certain malformed BDF font files. If a user were tricked into using a specially crafted font file, a remote attacker could cause FreeType to crash. (CVE-2012-1139)
Mateusz Jurczyk discovered that FreeType did not correctly handle certain malformed PostScript font files. If a user were tricked into using a specially crafted font file, a remote attacker could cause FreeType to crash. (CVE-2012-1140)
Mateusz Jurczyk discovered that FreeType did not correctly handle certain malformed BDF font files. If a user were tricked into using a specially crafted font file, a remote attacker could cause FreeType to crash. (CVE-2012-1141)
Mateusz Jurczyk discovered that FreeType did not correctly handle certain malformed font files. If a user were tricked into using a specially crafted font file, a remote attacker could cause FreeType to crash. (CVE-2012-1143)
Created: March 26, 2012
Updated: March 28, 2012
Description: From the Debian advisory:
Matthew Hall discovered that GNUTLS does not properly handle truncated GenericBlockCipher structures nested inside TLS records, leading to crashes in applications using the GNUTLS library.
Created: March 26, 2012
Updated: March 28, 2012
Description: From the Red Hat bugzilla:
Multiple insecure temporary file uses were found in iproute (in the checks for ATM technology support, Xtables extension support, and setns() system call support, and in the dhcp-client-script example script). A local attacker could use this flaw to conduct symbolic link attacks (modifying or removing files via specially crafted link names).
Created: March 22, 2012
Updated: May 1, 2012
Description: From the Red Hat bugzilla entry:
When running a binary with a lot of shared libraries, predictable base address is used for one of the loaded libraries.
This flaw could be used to bypass ASLR.
Created: March 26, 2012
Updated: September 26, 2012
Description: From the Debian advisory:
Matthew Hall discovered that many callers of the asn1_get_length_der function did not check the result against the overall buffer length before processing it further. This could result in out-of-bounds memory accesses and application crashes. Applications using GNUTLS are exposed to this issue.
Package(s): libzip
CVE #(s): CVE-2012-1162 CVE-2012-1163
Created: March 23, 2012
Updated: March 29, 2012
Description: From the Mandriva advisory:
libzip (version <= 0.10) uses an incorrect loop construct, which can result in a heap overflow on corrupted zip files (CVE-2012-1162).
libzip (version <= 0.10) has a numeric overflow condition, which, for example, results in improper restrictions of operations within the bounds of a memory buffer (e.g., allowing information leaks) (CVE-2012-1163).
Created: March 27, 2012
Updated: April 19, 2012
Description: From the Debian advisory:
It has been discovered that spoofed "getstatus" UDP requests are being sent by attackers to servers for use with games derived from the Quake 3 engine (such as openarena). These servers respond with a packet flood to the victim whose IP address was impersonated by the attackers, causing a denial of service.
Package(s): openssl
CVE #(s): CVE-2012-0884 CVE-2012-1165
Created: March 26, 2012
Updated: April 23, 2012
Description: From the openSUSE advisory:
The implementation of Cryptographic Message Syntax (CMS) and PKCS #7 in OpenSSL before 0.9.8u and 1.x before 1.0.0h does not properly restrict certain oracle behavior, which makes it easier for context-dependent attackers to decrypt data via a Million Message Attack (MMA) adaptive chosen ciphertext attack (CVE-2012-0884).
The mime_param_cmp function in crypto/asn1/asn_mime.c in OpenSSL before 0.9.8u and 1.x before 1.0.0h allows remote attackers to cause a denial of service (NULL pointer dereference and application crash) via a crafted S/MIME message, a different vulnerability than CVE-2006-7250 (CVE-2012-1165).
Created: March 26, 2012
Updated: March 28, 2012
Description: From the CVE entry:
The mime_hdr_cmp function in crypto/asn1/asn_mime.c in OpenSSL 0.9.8t and earlier allows remote attackers to cause a denial of service (NULL pointer dereference and application crash) via a crafted S/MIME message.
Created: March 22, 2012
Updated: March 28, 2012
Description: From the Red Hat bugzilla entry:
A security flaw was found in the way osc, the Python-based command-line client for the openSUSE Build Service, displayed build logs and build status for a particular build. A rogue repository server could use this flaw to modify the terminal window's title, or possibly to execute arbitrary commands or overwrite files, via specially crafted build log or build status output containing an escape sequence for a terminal emulator.
Package(s): PHP5
CVE #(s): CVE-2012-0781 CVE-2012-0789 CVE-2012-0807
Created: March 26, 2012
Updated: July 2, 2012
Description: From the CVE entries:
The tidy_diagnose function in PHP 5.3.8 might allow remote attackers to cause a denial of service (NULL pointer dereference and application crash) via crafted input to an application that attempts to perform Tidy::diagnose operations on invalid objects, a different vulnerability than CVE-2011-4153. (CVE-2012-0781)
Memory leak in the timezone functionality in PHP before 5.3.9 allows remote attackers to cause a denial of service (memory consumption) by triggering many strtotime function calls, which are not properly handled by the php_date_parse_tzfile cache. (CVE-2012-0789)
Stack-based buffer overflow in the suhosin_encrypt_single_cookie function in the transparent cookie-encryption feature in the Suhosin extension before 0.9.33 for PHP, when suhosin.cookie.encrypt and suhosin.multiheader are enabled, might allow remote attackers to execute arbitrary code via a long string that is used in a Set-Cookie HTTP header. (CVE-2012-0807)
Created: March 22, 2012
Updated: July 8, 2013
Description: From the Debian advisory:
It was discovered that Raptor, an RDF parser and serializer library, allows file inclusion through XML entities, resulting in information disclosure.
Created: March 28, 2012
Updated: March 28, 2012
Description: Wireshark prior to version 1.6.6 suffers from vulnerabilities in the ANSI A, 802.11, and MP2T dissectors, along with one in the pcap and pcap-ng file parsers. At least some of them look exploitable to run arbitrary code.
Page editor: Jake Edge
Kernel development news

In the 3.3 release announcement, Linus warned developers that he would be taking a bit of time off during the merge window; that did indeed happen over the last week. Still, he managed to pull some 4,000 changesets since last week's summary. Some of the more significant changes merged in the last week include:
Video4Linux: Afatech AF9005 based DVB-T/DVB-C receivers, and Keene FM transmitters.
Changes visible to kernel developers include:
bool poll_does_not_wait(const poll_table *p);
which returns true iff it is known that the poll() call will not block.
Also worthy of note is that there has been a vast amount of work done in the ARM architecture tree; the process of consolidating and cleaning up the ARM code continues at a high rate.
The 3.4 merge window would normally be expected to end around April 2. When he announced his vacation, Linus said that he would extend the merge window for a bit if necessary - though he warned that he would still only consider pull requests received during the window. Whether that will happen remains to be seen; either way, next week's Kernel Page will summarize the last new features merged for 3.4.
The "integrity" of a Linux system is based on whether it is running the code that the administrator expects. If not, a compromise of the system may have occurred. The Linux integrity subsystem is meant to detect those unexpected changes to files in order to protect systems against compromise. That is done by creating integrity "measurements" (hashes of contents and metadata) of files of interest.
Much of what is needed to do integrity management has already landed in the mainline, but there are a few remaining pieces. The integrity measurement architecture (IMA) appraisal extension patch set from Mimi Zohar and Dmitry Kasatkin fills in one missing piece: storing and validating the integrity measurement of files. A hash of a file's contents and metadata will be stored in the security.ima extended attribute (xattr) of the file, and the patch set will create and maintain those xattrs. In addition, it can enforce that the file contents are "correct" when the file is opened for reading or executing based on the integrity values that were stored.
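At its core, the "measurement" stored in security.ima is just a digest of the file's contents. A rough user-space illustration of that idea (illustrative only; the kernel actually stores a type-prefixed binary digest in the security.ima xattr, which user space cannot write directly):

```shell
# Sketch of what an IMA measurement is: a hash of the file's contents.
# The file name is arbitrary; the hash is what IMA would record for it.
printf 'hello\n' > demo.txt
sha1sum demo.txt
```

If the file's contents change, the recomputed hash no longer matches the stored one, which is exactly the mismatch the appraisal extension looks for.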
The integrity subsystem has taken a rather twisted path into the kernel. It was proposed as far back as 2005, but the subsystem has been broken up into smaller pieces several times along the way. Much of IMA was added to the kernel in 2.6.30, but another piece, the extended verification module (EVM) was not merged until 3.2. Digital signature support was added to EVM in 3.3, and IMA appraisal is currently under review.
As described on the Linux IMA web page, the integrity subsystem is meant to thwart various kinds of attacks against the contents of files, both on- and off-line. Unexpected changes to files, particularly executables, may be a sign that the system has been compromised. In addition, the subsystem allows the use of the "Trusted Platform Module" (TPM) to collect integrity measurements and sign them in such a way that the system can "attest" to its integrity. That attestation could be sent to another system to "prove" that the system is intact—only approved code is running.
Current kernels can generate an integrity measurement of files that are executed, collect and digitally sign them with keys from the TPM (or the kernel keyring), and use that information for remote attestation. EVM adds the ability to thwart offline attacks against the file contents or metadata by hashing the values of the security xattrs of the file (e.g. security.selinux, security.ima), signing that hash, and storing it as security.evm.
But, there is nothing in place that would stop a running system from executing or reading a file that has been changed. If a file with an IMA hash is opened for reading or executing, the appraisal extension will check to see if the contents match the stored hash. If they don't match, the ima_appraise kernel command-line parameter determines what happens. If it is set to "enforce", access to the file is denied, while "fix" will update the IMA xattr with the new value. In addition, "off" can be used to turn off any file appraisal.
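Those modes are selected on the kernel command line. A sketch of the boot parameters described above (the example boot line and device names are illustrative):

```shell
# Kernel command-line settings for IMA appraisal, per the description above:
#   ima_appraise=enforce   deny access when contents do not match security.ima
#   ima_appraise=fix       update the security.ima xattr with the new hash
#   ima_appraise=off       disable file appraisal entirely
#
# Example boot line (illustrative):
#   linux /vmlinuz root=/dev/sda1 ima_appraise_tcb ima_appraise=enforce
```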
In order to recognize that a file has changed while it is open, the appraisal extension requires the filesystem to support i_version, a counter that gets incremented any time the file's inode is updated. Filesystems must be mounted with the i_version option in order for the appraisal extension to work. That allows the extension to notice the change when the file is closed and either update the xattr or flag the file change as a policy violation.
In order to get the initial security.ima xattrs onto files that are to be appraised (by default, all files owned by root), one boots the kernel with ima_appraise_tcb (which enables appraisal) and ima_appraise=fix, and then opens all files of interest (e.g. via a find command as suggested on the IMA web page).
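That labeling pass can be as simple as reading one byte from every file, since each open in fix mode causes the kernel to write the security.ima xattr. A sketch along the lines of the IMA web page's suggestion (the filesystem type and root path are assumptions for this example):

```shell
# After booting with "ima_appraise_tcb ima_appraise=fix", open every
# root-owned regular file once; each open writes its security.ima xattr.
find / -fstype ext4 -type f -uid 0 -exec head -c 1 '{}' \; >/dev/null 2>&1
# Then reboot with ima_appraise=enforce to start denying mismatched files.
```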
The IMA appraisal extension will complete the off-line attack detection that EVM provides. Because the extension will create and maintain the security.ima xattr, EVM will be able to detect changes to the file contents.
In response to an earlier version of the patch set, James Morris asked if there were any distributions that were planning to use IMA and EVM once all the pieces are in place. George Wilson said that IBM plans to use it internally once distributions have incorporated it. In addition, Ryan Ware and Kasatkin said that the Tizen mobile distribution plans to use it for some product profiles.
But, before any of that can happen, the appraisal extension needs to find a way to change its locking behavior to get past a NAK by Al Viro. In the current patches, the final __fput() is deferred if a file is closed before munmap() is called in kernels using IMA appraisal. Viro is concerned that this changes the locking conditions based on whether the kernel is using IMA or not, which may make locking problems harder to spot. He also said that the overhead is too high for a commonly used path, and that not all of the places where __fput() is used were covered by the patch. So far, no solution to the problem has been found; Viro did suggest that a different mutex could possibly be used for changing xattrs, but said that it would take a fair amount of code review to determine whether that could be done.
Given that the patch set completes a job started by EVM, and will, for the most part, complete the integrity subsystem, it seems likely that a solution will be found. There are a few lingering pieces of IMA appraisal still to come, according to the "An Overview of the Linux Integrity Subsystem [PDF]" white paper. Two specific pieces are mentioned: one to add digital signature capabilities for vendor-signed files, and another to protect directory contents (e.g. filenames). While the currently proposed patches may still need some work before they can be considered for the mainline, those working on the integrity subsystem are probably finally starting to see the light at the end of a long tunnel.

We recently looked at Peter Zijlstra's NUMA scheduling patch set. As it happens, Peter is not the only developer working in this area; Andrea Arcangeli has posted a NUMA scheduling patch set of his own called AutoNUMA. Andrea's goal is the same - keep processes and their memory together on the same NUMA node - but the approach taken to get there is quite different. These two patch sets embody a disagreement on how the problem should be solved that could take some time to work out.
Peter's patch set works by assigning a "home node" to each process, then trying to concentrate the process and its memory on that node. Andrea's patch lacks the concept of home nodes; he thinks it is an idea that will not work well for programs that don't fit into a single node unless developers add code to use Peter's new system calls. Instead, Andrea would like NUMA scheduling to "just work" in the same way that transparent huge pages do. So his patch set seems to assume that resources will be spread out across the system; it then focuses on cleaning things up afterward. The key to the cleanup task is a bunch of statistics and a couple of new kernel threads.
The first of these threads is called knuma_scand. Its primary job is to scan through each process's address space, marking its in-RAM anonymous pages with a special set of bits that makes the pages look, to the hardware, like they are not present. If the process tries to access such a page, a page fault will result; the kernel will respond by marking the page "present" again so that the process can go about its business. But the kernel also tracks the node that the page lives on and the node the accessing process was running on, noting any mismatches. For each process, the kernel maintains an array of counters to track which node each of its recently-accessed pages were located on. For pages, the information tracked is necessarily more coarse; the kernel simply remembers the last node to access each page.
When the time comes for the scheduler to make a decision, it passes over the per-process statistics to determine whether the target process would be better off if it were moved to another node. If the process seems to be accessing most of its pages remotely, and it is better suited to the remote node than the processes already running there, it will be migrated over. This code drew a strenuous objection from Peter, who does not like the idea of putting a big for-each-CPU loop into the middle of the scheduler's hot path. After some light resistance, Andrea agreed that this logic eventually needs to find a different home where it would run less often. For testing, though, he likes things the way they are, since it causes the scheduler to converge more quickly on its chosen solution.
Moving processes around will only help so much, though, if their memory is spread across multiple NUMA nodes. Getting the best performance out of the system clearly requires a mechanism to gather pages of memory onto the same node as well. In the AutoNUMA patch, the first non-local fault (in response to the special page marking described above) will cause that page's "last node ID" value to be set to the accessing node; the page will also be queued to be migrated to that node. A subsequent fault from a different node will cancel that migration, though; after the first fault, two faults in a row from the same node are required to cause the page to be queued for migration.
Every NUMA node gets a new kernel thread (knuma_migrated) that is charged with passing over the lists of pages queued for migration and actually moving them to the target node. Migration is not unconditional - it depends, for example, on there being sufficient memory available on the destination node. But, most of the time, these migration threads should manage to pull pages toward the nodes where they are actually used.
Beyond the above-mentioned complaint about putting heavy computation into schedule(), Peter has found a number of things to dislike about this patch set. He doesn't like the worker threads, to begin with:
Andrea responds that the cost of these threads is small to the point that it cannot really be measured. It is a little harder to shrug off Peter's other complaint, though: that this patch set consumes a large amount of memory. The kernel maintains one struct page for every page of memory in the system. Since a typical system can have millions of pages, this structure must be kept as small as possible. But the AutoNUMA patch adds a list_head structure (for the migration queue) and two counters to each page structure. The end result can be a lot of memory lost to the AutoNUMA machinery.
The plan is to eventually move this information out of struct page; then, among other things, the kernel can avoid allocating it at all if AutoNUMA is not actually in use. But, for the NUMA case, that memory will still be consumed regardless of its location, and some users are unlikely to be happy even if others, as Andrea asserts, will be happy to give up a big chunk of memory if they get a 20% performance improvement in return. This looks like an argument that will not be settled in the near future, and, chances are, the memory impact of AutoNUMA will need to be reduced somehow. Perhaps, your editor naively suggests, knuma_migrated and its per-page list_head structure could be replaced by the "lazy migration" scheme used in Peter's patch.
NUMA scheduling is hard and doing it right requires significant expertise in both scheduling and memory management. So it seems like a good thing that the problem is receiving attention from some of the community's top scheduler and memory management developers. It may be that one or both of their solutions will be shown to be unworkable for some workloads to the point that it simply needs to be dropped. What may be more likely, though, is that these developers will eventually stop poking holes in each other's patches and, together, figure out how to combine the best aspects of each into a working solution that all can live with. What seems certain is that getting to that point will probably take some time.
Patches and updates
Core kernel code
Filesystems and block I/O
Virtualization and containers
Page editor: Jonathan Corbet
Debian Edu is one of the quiet success stories of free software. If you are North American, you might have barely heard of it. Depending on what country you are in, you might know it better as Skolelinux. Or perhaps you have heard of Extremadura, an autonomous region of Spain that has deployed tens of thousands of installations of LinEx, which is a Skolelinux partner. Under all these names, Debian Edu has worked for almost eleven years building a totally free-licensed distribution that balances the demands of complex hardware configurations with the educational needs of children.

Debian Edu began with two projects founded in 2001: Debian Edu, founded by Raphaël Hertzog in France, and Skolelinux, founded by a group of Norwegian developers. The projects merged in 2006, and today Debian Edu is technically used for the project name and Skolelinux for the distribution, although in practice the names are used interchangeably according to personal preference and previous regional branding.
The goals of Debian Edu are unchanged from when the two initial projects began. According to Petter Reinholdtsen, one of Skolelinux's founders, they are to educate school children while using free software that supports each student's native languages. Debian was chosen as the base distribution because it was known to have a package to create installation CDs — a rare feature, back then. Similarly, thin clients were supported first because Ragnar Wisløf, another Skolelinux founder, was familiar with the Linux Terminal Server Project (LTSP), which seemed a good match for the older hardware often found at schools.
"Later," Reinholdtsen said, "we added support for network booted workstations with all software running locally but without a local disk, providing a powerful desktop without any local maintenance."
Today the project is funded regularly by its legal entity, SLX Debian Labs Foundation, which is also the majority owner of Skolelinux Drift AS, a company that provides commercial support for schools and municipalities. In the past, the project has also received funding from other sources, including the Norwegian Ministry of Education and Ministry of Reform and Administration.
Currently, Debian Edu has an active group of about ten developers, and an unknown number of translators and other contributors. A member association, Fri Programvare i Skolen, organizes developer meetings and promotes the project. The project also works with Edubuntu, particularly on the LTSP project, and has some contact with similar projects in Brazil and other parts of the world, although "not as much as I would wish," Reinholdtsen added.
As with most free software projects, accurate installation numbers are impossible to find. However, Reinholdtsen calculated that a minimum of 120 schools use Skolelinux in Norway alone. Other large communities of users are in France and Germany, and some are known to exist in Japan and Taiwan as well.
Skolelinux uses the standard Debian installer, lightly rebranded and with a few additions. Contrary to the still-surviving myth, this installer is no longer much of a challenge to use, although it still offers more choices than the installation programs of most major distributions. In fact, according to Reinholdtsen, Debian Edu helped to fund Joey Hess's extensive rewriting of the Debian installer a few years ago.
All the same, Skolelinux is unusual enough that users should read the first few sections of the distribution's manual before beginning to install. Mercifully, documentation is one of the project's priorities, and the information provided is clear, precise, and mostly complete.
The Architecture section in particular could be used as a primer for a general understanding of thin clients and network-boot workstations. However, the section might benefit from more practical information about basic hardware setup. Common hardware alternatives, such as workstations that boot from an external drive, could also be discussed in more detail, although that could easily become an endless task.
Administrators might also want to familiarize themselves beforehand with what Debian calls flavors — the installation sets for different circumstances. A main server is required for administration and installing on other machines, but a complete setup might consist of any number of secondary servers, workstations, and thin clients, as well as, perhaps, roaming workstations (ones that are sometimes off the network). The installer also allows standalone installations for those who want to try Skolelinux without a network, perhaps on a student's notebook.
Each of these flavors differs in both its hardware requirements and the software installed. For example, thin client servers may require two Ethernet cards (one to connect to the main server and another to connect to the thin clients), while the firewall and administrative tools, unsurprisingly, are only installed on the server. Disk space requirements vary especially widely, with a minimum of 15GB required for a workstation and 30GB for the server. Similarly, the main server requires roughly 2GB of RAM for every 30 client machines. Since Skolelinux is often used to extend the useful years of older hardware, it is important to check that these varied requirements are met.
Once you start to install, the procedure is straightforward. However, pay attention to the final messages about hardware and software that remain unconfigured, such as Kerberos. These messages are probably unnecessary for most people likely to be installing Skolelinux, but they are useful reminders of what remains to be done before a fully functioning network exists.
Many of the administrative functions, such as installing packages or permitting automatic security upgrades, will be familiar to general users of Linux. However, the documentation also covers such features as suggestions about other tools to use, a script for creating directories simultaneously in each user's home directory, and brief descriptions of lesser-known alternate software sources such as Debian's stable-updates and backports repositories. Like the installation instructions, this material provides an overview that even non-Skolelinux users might find helpful.
For administrators, the new features of the recent 6.0.4+r0 release include a new web-based LDAP administration tool. Other new features are designed to save power and reduce waste, such as a default flush of print queues every night and client machines that automatically turn off at night and on again in the morning. In addition, stopped printer queues are restarted after an hour by default, in case the printer was turned off accidentally.
For users, Skolelinux defaults to KDE, but GNOME, LXDE, and Sugar are also available as desktop environments. Installations include a much larger selection of software than standard Debian installations do, including KDE applications such as Amarok, Marble, and K3B, and other standard applications such as Inkscape, Scribus, and OpenOffice.org (not LibreOffice, since Debian stable has not yet switched).
Other installed software is heavily oriented toward education and creativity. The ever-popular GCompris is included in the Favorites menu, while the Education menu includes everything from software for learning the alphabet and fractions to software for learning geometry and graph analysis. A drum machine and Rosegarden are included for musicians, as well as a chess game and at least two typing tutors.
Some might complain about the version numbers on Skolelinux's software, since they are based on Debian stable, and some are far from the latest available. However, Skolelinux has chosen stability over the cutting edge, as system administrators would generally prefer.
In addition, as Reinholdtsen points out, the emphasis on stability minimizes the chances of a disruption of service during the school year. Although the emphasis may mean that drivers are unavailable for some newer hardware, he suggested that "we have found a reasonably good balance between stability and offering the latest software."
My one quibble with the selection of user software is that it covers the complete range of pre-university educational software. This choice is perhaps suitable for schools that cover a wide range of students. Yet inevitably it means that much of the software will be irrelevant for any specific class. Perhaps a solution might be the ability to install educational software by age or ability. But, in general, Skolelinux offers an impressive showcase of educational software.
Officially, Skolelinux is a Debian Pure Blend — a distribution that is a subset of general Debian and remains compatible with it. However, unless you are doing a standalone installation, setting up Skolelinux is not a trivial task. Given the number of flavors that it supports and the fact that multiple-machine setups are the norm, Skolelinux might more realistically be described as several distributions in one — including some with conflicting needs and priorities. The quality of the documentation mitigates this complexity to a large degree, but the instructions still contain assumptions about users' knowledge that may be unwarranted.
Still, what is noticeable about Skolelinux is not that there are gaps in the documentation, but that there are so few. In fact, another educational aspect of Skolelinux is that it could serve as a distribution for training sysadmins about networks and thin clients.
More than anything else, this reduction of complexity to clarity is what makes Skolelinux a pioneer and continuing innovator in free software education, and what keeps it relevant today. The project is currently fundraising, so one way to help this veteran project would be to make a donation to help ensure that its annual meeting of core developers can take place this year.
Alternatively, instead of thousands of words being wasted in this thread, interested DDs could help with finalizing the Policy change for how upstart jobs should coexist with sysvinit scripts on the system.
Debian GNU/Linux

"If you've followed what the team, Enrico in particular, has been doing in the last months, you'll know that there was some intention to replace the current NM web interface. Due to security concerns that were brought to public attention just recently, we decided that it's time to actually replace the code."
Newsletters and articles of interest
Page editor: Rebecca Sobol
Until a few months ago, I hadn't found a good tool to handle a task list and track my time. Several were tried, but none of them could handle all the requirements to my satisfaction: the tool should be usable on the command line with straightforward commands that are easy to memorize, it should be able to use the same task data on different machines without having to install and maintain a dynamic website, the tool should be able to track the time I spend on tasks and projects, and it shouldn't force me to follow a specific time management philosophy such as Getting Things Done or the Pomodoro Technique.
At the beginning of this year, during my annual and always unsuccessful New Year's resolution to manage my time more efficiently, I tried a beta version of Taskwarrior 2.0. While it isn't perfect, it turns out that Taskwarrior is right for my use case. This command-line task list manager maintains a list of tasks that you can add, remove and annotate at will. It has a rich list of sub-commands to do advanced things with your tasks, including reports, charts, time tracking, and so on, and it is extensible with Lua scripts. After a year of development, version 2.0.0 has now been released.
Version 2.0 still has to trickle down to the repositories of various Linux distributions, but you can download the source and compile it (it has been tested on various Linux distributions, the BSDs, Solaris, Mac OS X, and Cygwin) or use the binary packages for i386 Debian 6, Ubuntu 11.10, or Fedora 16. Older versions are available in many repositories, but Taskwarrior 2.0 introduces significant changes, including to the command-line syntax, so it's probably worthwhile to use a 2.0 build downloaded from the project's web site until your distribution's repository updates its Taskwarrior package.
The first time you enter task in a terminal, it prompts to create an initial ~/.taskrc file with default settings and comments. After that, a task help command gives you a daunting list of sub-commands, syntax, and examples, but despite this apparent complexity the basics of Taskwarrior are quite easy to grasp if you follow the project's excellent documentation. The place to start is the 30-second tutorial, which teaches you how to add, list, and delete tasks, assign a priority to a task, and mark it as done. These are easily memorized commands like task add, task ls, task <ID> delete, and task <ID> done.
While the 30-second tutorial teaches you the most important Taskwarrior sub-commands, there's a lot more to the program, which would be difficult to find out on your own only from reading the man pages. Luckily, there's a long tutorial as well. After the basic usage, it explains projects, priorities, tags, undo and delete, information about tasks, statistics, annotations, due dates, calendars, and so on. For instance, when your tasks have due dates, you can see them highlighted on a calendar with task calendar. The program even provides sample holiday files for a couple of countries. If you include the holiday file in your ~/.taskrc, holidays are shown in another color on your calendar.
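To show what that looks like, here is a minimal sketch of the relevant ~/.taskrc line; the exact path and file name depend on where your installation ships its holiday files, so treat both as assumptions:

```
# ~/.taskrc -- pull in a shipped holiday file so that holidays are
# highlighted in the output of "task calendar"
# (the path below is illustrative; check your installation)
include /usr/share/doc/task/rc/holidays.en-US.rc
```

After adding the line, running task calendar again should show the listed holidays in their own color.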
The second part of the tutorial handles some advanced uses of Taskwarrior. Among the topics are recurring tasks (weekly, monthly, or yearly), waiting tasks (tasks with a due date far in the future that remain hidden until a certain date so as not to clutter your task list), dependencies between tasks, reports and time sheets, monthly and yearly charts, a summary showing your progress in all projects and sub-projects, filters, and import and export in CSV, YAML, vCalendar, or JSON format. It pays to follow both tutorials and try their examples before you start using Taskwarrior, and you should probably read them again after having used Taskwarrior for a while. There's also a gallery with some video tutorials, as well as some HOWTOs about specific functionality.
One particularly interesting feature of Taskwarrior is the built-in synchronization functionality, which lets you back up, restore, and synchronize task lists using ssh or rsync to share the same task list on multiple machines. So you can back up your local task list to a remote machine with a task push command, restore it with a task pull command, and merge a local and remote task list with task merge. Of course you can configure Taskwarrior to store its data (by default in the directory ~/.task/) in a Dropbox folder or any other directory that is synchronized between your machines, but the advantage of Taskwarrior's built-in synchronization feature is that it doesn't require you to install any third-party synchronization software; ssh access to a remote machine is enough. Note that the ~/.taskrc configuration file isn't synchronized by this method, just the data in ~/.task/.
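As a sketch of how the push, pull, and merge targets can be configured, the following ~/.taskrc fragment sets default remote locations so that the commands can be run without arguments; the user name, host, and path are hypothetical:

```
# ~/.taskrc -- default remote locations used by "task push",
# "task pull", and "task merge"
# (user@example.org and the .task/ path are hypothetical)
push.default.uri=user@example.org:.task/
pull.default.uri=user@example.org:.task/
merge.default.uri=user@example.org:.task/
```

With these set, a bare task push copies the local data files to the remote ~/.task/ directory over ssh, and task merge reconciles the two task lists.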
Taskwarrior is highly configurable. A task show command shows more than 200 configuration variables you can set in the ~/.taskrc file with their current value. Variables that you have changed from the default values are highlighted in color. The colors for many types of information can also be changed in the configuration file, or you can use one of the several standard color themes.
You can also add your own variables to the configuration file, for instance to create custom reports with specific columns, labels, sorting order, and filters. The custom reports you add even show up in the help output of the task command, and you can run them as sub-commands, just like the default reports. Taskwarrior 2.0 also adds the possibility of extending the program's functionality with Lua scripts, but unfortunately this is not documented yet.
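As an illustration, a custom report might be defined in ~/.taskrc like this; the report name, filter, and column choices are made up for the example:

```
# ~/.taskrc -- a hypothetical custom report, run with "task school"
report.school.description=Pending tasks in the school project
report.school.columns=id,priority,due,description
report.school.labels=ID,Pri,Due,Description
report.school.sort=due+,priority-
report.school.filter=status:pending project:school
```

The new report then appears in the task help output and runs like any built-in report.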
Beginning with version 2.0, Taskwarrior has been re-licensed from GPLv2+ to the more permissive MIT license. One reason the developers cite is that the MIT license won't require all plugins that ship with Taskwarrior to be GPL-compatible, as was required in the past. Another reason they cite is that "certain app stores don't accept GPL licensed software".
The developers welcome changes and give an overview of how you can contribute. There's a big issues list with feature requests and bug reports and the project has some big-picture architectural plans. For instance, they are working on a task server that will support synchronizing task data and later group collaboration. This task server will also be able to send out reminder notices for tasks that are due.
There's also a lot of development happening on Taskwarrior extensions and external scripts. One example is Taskhelm, a graphical interface for Taskwarrior. Others include VITtk, a vi-like interface for Taskwarrior; Taskwarrior-web, a web interface; and some conversion tools.
As a nice illustration of the development activity, the history of Taskwarrior 2.0 has been visualized with Gource, a version-control visualization tool: data extracted from the project's Git repository was rendered and converted to a YouTube video.
Taskwarrior will not appeal to everyone, but if you're spending a lot of time at the command line, it offers a frictionless window to your task list. It's always available because you don't have to switch to another window or open a specific web site. With an alias t=task in your ~/.bashrc, a couple of key presses are enough to view your current tasks and track your time. Of course it requires some time to get to know the right commands, but they are straightforward and the documentation and man pages (task, taskrc, task-color, task-faq, task-sync and task-tutorial) are excellent and comprehensive.
Our struggle today is not to have a female Einstein get appointed as an assistant professor. It is for a woman schlemiel to get as quickly promoted as a male schlemiel. ~ Bella Abzug
That's certainly how I personally sent in a patch or two to glibc.
-- Linus Torvalds (in the comments)

The Cairo graphics library is out. "The major feature of this release is the introduction of a new procedural pattern; the mesh gradient. This, albeit complex, gradient is constructed from a set of cubic Bezier patches and is a superset of all other gradient surfaces which allows for the construction of incredibily detailed patterns." There is also an X XCB backend, a lot of performance improvements, and more.

Another release brings proper time zone support, various security improvements, and more; see the release notes for details.

GCC 4.7 has been released. New features include software transactional memory support, more C11 and C++11 standard features, OpenMP 3.1 support, better link-time optimization, and support for a number of new processor architectures. See the GCC 4.7 changes page for lots more information.

Go 1 has been released. "Go 1 introduces changes to the language (such as new types for Unicode characters and errors) and the standard library (such as the new time package and renamings in the strconv package). Also, the package hierarchy has been rearranged to group related items together, such as moving the networking facilities, for instance the rpc package, into subdirectories of net." Lots of details can be found in the Go 1 release notes.

GNOME 3.4 has been released. "GNOME 3.4 is the second major update of GNOME 3. It builds on the foundations that we have laid with 3.0 and 3.2 and offers a greatly enhanced experience. The exciting new features and improvements in this release include a new virtual machine and remote access application, a completely revamped web browsing user experience, integrated document search, first-class web applications, better graphics tablet support, application menus, and many more." See the release notes for information and screenshots.

There is an announcement that the glibc steering committee, which oversaw development of the GNU C library, is dissolving. "With the recent reinvigoration of volunteer efforts, the community of developers is now able to govern itself. Therefore, the Steering Committee has decided unanimously to dissolve. The direction and policies of the project will now be governed directly by the consensus of the people active in glibc development." Joseph Myers, one of the volunteers who has helped to pick things up again, has added: "Everyone is welcome to join in the development, working in accordance with community-agreed practices and GNU policy, and commit access is routinely available for people making good contributions." He has also suggested that, over time, the eglibc fork may be merged back in to glibc. (Thanks to Michael Kerrisk.)

XBMC 11.0 has been released. "XBMC 11.0 Milestones include Addon Rollbacks, vast improvements in Confluence (the default skin), massive speed increases via features like Dirty-region rendering and the new JPEG decoder, a simpler, better library, movie set scraping, additional protocol handling, better networking support, better handling of unencrypted BluRay content and structures, adjustable display refresh rate in OSX (to match the already available feature in Windows and Linux), AirPlay support, an upgraded weather service with geoip lookup, and much, much more." (LWN reviewed this release in February.)
Newsletters and articles
Page editor: Jonathan Corbet
Articles of interest

An article takes a quick look at experimental collaborative editing capabilities in LibreOffice. "Telepathy is an open source instant messaging framework that supports multiple protocols. One of the key features of Telepathy is that it allows instant messaging protocols to be used as a medium for arbitrary communication between applications, like a form of real-time network IPC. Building LibreOffice's collaborative editing features on top of Telepathy eliminates the need to operate special servers for the purpose."

An interview with Guido Günther: "Guido Günther joined the Debian Project while completing his degree in physics at the University of Konstanz. He helped with development of Debian for new processor architectures, and co-initiated Debian's Groupware Meetings. He also enjoys contributing to the GNOME project, and advanced Free Software virtualisation technologies. He works as a professional Free Software developer and consultant."

A lengthy article about GPL enforcement, inspired by the discussions from earlier this year: "Behind the headline stories of litigation lies a calmer reality, where the great majority of companies who are notified by SFC or gpl-violations.org are happy to comply without a fuss, because the software works for them, and gives them reductions in cost, speed to market, collaborative opportunities and access to high quality code."

An article looks into revenue and salaries at various open source non-profit organizations. "With a revenue of $1,934,659, the Mozilla Foundation ranked fourth of the eighteen FLOSS-related non-profits researched for this report. But with a net cash flow loss of $1,333,815 for the 2010 fiscal year, the Mozilla Foundation was next to last on money lost for the year. [...] All of this information was obtained from the Federal income tax forms all U.S. non-profits are required to file with the IRS. Specifically, Form 990 (or the 990-EZ when applicable). Thirteen of the non-profits have publicly available information for their 2010 fiscal years, with the other five's information only up to date to their respective 2009 fiscal years."

Another article examines the implications of a recent US Supreme Court decision in Mayo Collaborative Services v. Prometheus Laboratories, Inc. [PDF], looking at the possible impact on software patent decisions down the road. "It also seems noteworthy that the Mayo Court outlined a balanced view of the patent system that took account of the risks it can pose for innovation. It wrote, 'Patent protection is, after all, a two-edged sword. On the one hand, the promise of exclusive rights provides monetary incentives that lead to creation, invention, and discovery. On the other hand, that very exclusivity can impede the flow of information that might permit, indeed spur, invention, by, for example, raising the price of using the patented ideas once created, requiring potential users to conduct costly and time-consuming searches of existing patents and pending patent applications, and requiring the negotiation of complex licensing arrangements.'"
Contests and Awards

The prize is awarded by the Free Software Foundation Europe (FSFE) and the Foundation for a Free Information Infrastructure e.V. (FFII). 1&1 received the award for automatically adding XMPP for all customers of its mail services. The Document Freedom Award is presented annually on the occasion of Document Freedom Day, the international day for Open Standards. Last year's winners include tagesschau.de, Deutschland Radio, and the German Foreign Office.
Calls for Presentations

"Refereed-track presentations are about 45 minutes in length, and should focus on a specific aspect of the 'plumbing' in the Linux system. The Linux system's plumbing includes kernel subsystems, core libraries, windowing systems, management tools, media creation/playback, and so on."
Upcoming Events

"This year's event is about CIFS, clustering and directories again - but with the difference that the first releases of Samba 4 is available in real business offerings: A Samba 4 ready Debian is available at SerNet for free."

"This year's conference motto is 'OpenCms 8 – Experiences, Possibilities, Potentials'. This 4th annual gathering of OpenCms users will focus on success stories and projects based on OpenCms 8. Experts from all around the world will share their experiences with this latest OpenCms release. Alkacon will also unveil Version 8.5 of its successful open source website content management system OpenCms which will contain several enhancements."
EclipseCon 2012, Washington D.C., USA
Wireless Battle of the Mesh (V5), Athens, Greece
Palmetto Open Source Software Conference 2012, Columbia, South Carolina, USA
March 29: Program your own open source system-on-a-chip (OpenRISC), London, UK
March 30: PGDay DC 2012, Sterling, VA, USA
April 2: PGDay NYC 2012, New York, NY, USA
LF Collaboration Summit, San Francisco, CA, USA
Android Open, San Francisco, CA, USA
Percona Live: MySQL Conference and Expo 2012, Santa Clara, CA, USA
SuperCollider Symposium, London, UK
Linux Audio Conference 2012, Stanford, CA, USA
European LLVM Conference, London, UK
April 13: Drizzle Day, Santa Clara, CA, USA
OpenStack "Folsom" Design Summit, San Francisco, CA, USA
Workshop on Real-time, Embedded and Enterprise-Scale Time-Critical Systems, Paris, France
OpenStack Conference, San Francisco, CA, USA
April 21: international Openmobility conference 2012, Prague, Czech Republic
Lustre User Group, Austin, TX, USA
Evergreen International Conference 2012, Indianapolis, Indiana, USA
Penguicon, Dearborn, MI, USA
April 28: Linuxdays Graz 2012, Graz, Austria
LinuxFest Northwest 2012, Bellingham, WA, USA
Libre Graphics Meeting 2012, Vienna, Austria
Utah Open Source Conference, Orem, Utah, USA
Tizen Developer Conference, San Francisco, CA, USA
Ubuntu Developer Summit - Q, Oakland, CA, USA
samba eXPerience 2012, Göttingen, Germany
Professional IT Community Conference 2012, New Brunswick, NJ, USA
Debian BSP in York, York, UK
C++ Now!, Aspen, CO, USA
PostgreSQL Conference for Users and Developers, Ottawa, Canada
Military Open Source Software - Atlantic Coast, Charleston, SC, USA
Croatian Linux Users' Convention, Zagreb, Croatia
Flossie 2012, London, UK
If your event does not appear here, please tell us about it.
Copyright © 2012, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds