
LWN.net Weekly Edition for February 2, 2012

A tempest in a toybox

By Jonathan Corbet
February 1, 2012
The eLinux.org web site is currently promoting a project to write a replacement for Busybox under a permissive license. Normally, the writing of more free software is seen as a good thing, but, in this case, there have been complaints about the perceived motivation behind the project. What this discussion shows is that there are some divisions within our community on how our licenses should be enforced - and even what those licenses say.

One could imagine a number of reasons for wanting to rewrite Busybox. Over time, that package has grown to the point that it's not quite the minimal-footprint tool kit that it once was. Android-based systems can certainly benefit from the addition of Busybox, but the Android world tends to be allergic to GPL-licensed software; a non-GPL Busybox might even find a place in the standard Android distribution. But the project page makes another reason abundantly clear:

Busybox is arguably the most litigated piece of GPL software in the world. Unfortunately, it is unclear what the remedy should be when a GPL violation occurs with busybox. Litigants have sometimes requested remedies outside the scope of busybox itself, such as review authority over unrelated products, or right of refusal over non-busybox modules. This causes concern among chip vendors and suppliers.

What seems to be happening in particular is that the primary Busybox litigant - the Software Freedom Conservancy (SFC) - has taken the termination language in GPLv2 to mean that somebody who fails to comply with the license loses all rights to the relevant software, even after they fix their compliance problems. (See Android and the GPLv2 death penalty for more information on this interpretation of the license, which is not universally held). Under this interpretation, they are withholding the restoration of a license to use and distribute Busybox based on conditions that are not directly related to Busybox; among other things, they require compliance for all other free software products shipped by that company, including the Linux kernel.

Thus, according to Matthew Garrett:

The reason to replace Busybox isn't because they don't want to hand over the source to Busybox - it's because Busybox is being used as a proxy to obtain the source code for more interesting GPLed works. People want a Busybox replacement in order to make it easier to infringe the kernel's license.

There is some truth to the notion that, on its own, license enforcement for Busybox is not hugely interesting. Encouraging compliance with the GPL is a good thing, but, beyond that, there is little to be gained by prying the source for a Busybox distribution from a vendor's hands. There just isn't anything interesting being added to Busybox by those vendors. Rob Landley, who was once one of the Software Freedom Law Center's plaintiffs (before the enforcement work moved to the Software Freedom Conservancy), wrote:

From a purely pragmatic perspective: I spent over a year doing busybox license enforcement, and a dozen lawsuits later I'm still unaware of a SINGLE LINE OF CODE added to the busybox repository as a result...

Rob has been working on the Toybox project, which happens to be the effort that would someday like to replace Busybox, since 2006.

So, beyond the generation of bad publicity for a violator and some cash for the litigants, Busybox enforcement on its own could perhaps be said to achieve relatively little for the community. But a vendor that can't be bothered to provide a tarball for an unmodified Busybox distribution is highly unlikely to have its act together with regard to other projects, starting with the kernel. And that vendor's kernel code can often be the key to understanding their hardware and supporting it in free software. So it is not surprising that a group engaging in GPL enforcement would insist on the release of the kernel source as well.

On its face, it does seem surprising that vendors would object to this requirement - unless they overtly wish to get away with GPL infringement. Tim Bird, who is promoting the Busybox replacement project, has stated that this is not the case. Instead, Tim says:

It is NOT the goal of this to help people violate the GPL, but rather to decrease the risk of some nuclear outcome, should a mistake be made somewhere in the supply chain for a product. For example, it is possible for a mistake made by an ODM (like providing the wrong busybox source version) could result in the recall of millions of unrelated products. As it stands, the demands made by the SFC in order to bring a company back into compliance are beyond the value that busybox provides to a company.

Mistakes do happen. Companies are often surprisingly unaware of where their code comes from or what version they may have shipped in a given product. Often, the initial violation comes from upstream suppliers, with the final vendor being entirely unaware that they are shipping GPL-licensed software at all. What is being claimed here is that SFC's demands are causing the consequences of any such mistakes to be more than companies are willing to risk.

What does the SFC require of infringers? The SFC demands, naturally enough, that the requirements of the GPL be met for the version of Busybox shipped by an infringer. There are also demands for an unknown amount of financial compensation, both to the SFC (The SFC's FY2010 IRS filings show just over $200,000 in revenue from legal settlements) and to the Busybox developer (Erik Andersen) that the SFC is representing. Then there are the demands for compliance for all other GPL-licensed products shipped by the vendor, demands that, it is alleged, extend to the source for binary-only kernel modules. The SFC also evidently demands that future products be submitted to them for a compliance audit before being shipped to customers.

Such demands may well be appropriate for habitual GPL infringers; they are, arguably, a heavy penalty for a mistake. Whether the cases filed by the SFC relate to habitual behavior or mistakes is not necessarily clear; there have been plenty of allegations either way. One person's mistake is another person's intentional abuse.

If Busybox is, for whatever reason, especially mistake-prone, then replacing it with a mistake-proof, BSD-licensed version might make sense. Not using the software at all is certainly a way to avoid infringing its license. What is a bit more surprising is that some developers are lamenting the potential loss of Busybox as a lever for the enforcement of conditions on the use of the kernel. There are a couple of concerns here:

  • The use of the GPL death penalty is worrisome, in that it gives any copyright holder extreme power over anybody who can be said to have infringed the license in any trivial way. Even if one is fully in agreement with the SFC's use of the termination clause, there are, beyond doubt, entities out there who would like to use it in ways that are not in the interests of the free software community.

  • One could argue that enforcement of the licenses for other software packages should be left up to the developers who wrote that code. They may have different ideas about how it should be done or even what compliance means. Any developer with copyrights in the kernel (or any other product) is entirely capable of going to the SFC if they want SFC-style enforcement of their rights.

For such a developer to go to the SFC is exactly what Matthew is asking for in his post. Thus far, despite a search for plaintiffs on the SFC's part, that has not happened. Why that might be is not entirely clear. Perhaps kernel developers are not comfortable with how the SFC goes about its business, or perhaps it's something else. It's worth noting that most kernel developers are employed by companies these days, with the result that much of their output is owned by their employers. For whatever reason, companies have shown remarkably little taste for GPL enforcement in any form, so a lot of kernel code is not available to be put into play in an enforcement action.

That last point is, arguably, a real flaw in the corporate-sponsored free software development model - at least, if the viability of copyleft licenses matters. The GPL, like almost any set of rules, will require occasional enforcement if its rules are to be respected; if corporations own the bulk of the code, and they are unwilling to be part of that enforcement effort, respect for the GPL will decrease. One could argue that scenario is exactly what is playing out now; one could also argue that it is causing Busybox, by virtue of being the only project for which active enforcement is happening, to be unfairly highlighted as the bad guy. If GPL enforcement were spread across a broader range of projects, it would be harder for companies to route around - unless, as some fear, that enforcement would drive companies away from GPL-licensed software altogether.

Situations like this one show that there is an increasing amount of frustration building in our community. Some vendors and some developers are clearly unhappy with how some license enforcement is being done, and they are taking action in response. But there is also a lot of anger over the blatant disregard for the requirements of the GPL at many companies; that, too, is inspiring people to act. There are a number of undesirable worst-case outcomes that could result. On one side, GPL infringement could reach a point where projects like the kernel are, for all practical purposes, BSD-licensed. Or GPL enforcement could become so oppressive that vendors flee to code that is truly BSD-licensed. Avoiding these outcomes will almost certainly require finding a way to enforce our licenses that most of us can agree with.

Comments (229 posted)

SCALE 10X: The trickiness of the education market

February 1, 2012

This article was contributed by Nathan Willis

The classroom presents a special challenge to Linux and open source advocates. At first glance it seems like a slam dunk: free software lowers costs, and provides students with unique opportunities to learn. But even as FOSS adoption grows into big business for enterprises, start-ups, and mom-and-pop shops, it continues to be a minority player in public schools and universities. There are outreach efforts fighting the good fight, but progress is slow, and learning how to adapt the message to the needs of educators is far from a solved problem.

Open Source Software In Education (OSSIE) was a dedicated Saturday track at SCALE 10X in Los Angeles. The sessions included talks about FOSS aimed at educators and talks about promoting open source usage in schools. The track was running concurrently with the rest of the conference, which made it difficult to attend every session, but the overlap between two of the talks raised more than enough questions for the open source community — namely, how to adapt outreach strategies for success in the often intransigent education sector.

Elizabeth Krumbach's "Bringing Linux into Public Schools and Community Centers" was an overview of the Partimus project's work in the San Francisco Bay area (and similar efforts), setting up and maintaining computer labs for K-12 students. Sebastian Dziallas's "Undergraduate Education Strategies" was a look at the Red Hat-run Professors' Open Source Summer Experience (POSSE), which is a workshop for college professors interested in bringing open source to the classroom.

Case studies from Partimus

[Elizabeth Krumbach]

Krumbach is both a volunteer and a board member with Partimus, a volunteer-driven nonprofit that accepts hardware donations, outfits the machines with free software, and provides them to San Francisco area schools. As she explained, Partimus's involvement includes not only the desktop systems used by students, but the network tools and system administration support required to keep the labs running. That frequently means setting up thin clients for the lab machines, plus network-mounted drives and imaging servers to provision or replace clients, and often setting up the infrastructure for the network itself: running Ethernet and power to all of the seats. The software stack is Ubuntu, Firefox, and LibreOffice on the client side, plus the OpenLDAP directory service and the DansGuardian filtering system — which fulfills a legal requirement for most schools.

The talk examined three education deployments in depth, and the lessons interested projects could draw from each. The Mount Airy Learning Tree (MALT) is a community center in Philadelphia, and a Partimus-inspired effort by the Ubuntu Pennsylvania team worked with the center to build its first-ever computer lab. The deployment was initially a success, but it did not end well when MALT relocated to a new venue on the other side of the city. The volunteers who had been supporting the lab found it impossible to make the numerous trips required to support the new facility on an ongoing basis, and the new MALT staff were uninterested in the lab. Although community centers are often easier to work with than public schools, Krumbach said, the MALT experience underlines the necessity of having on-the-ground volunteers available, and of having buy-in by the community center staff itself.

The Creative Arts Charter School (CACS) is a San Francisco charter school, meaning that it is publicly funded but can make autonomous decisions apart from the general school district. CACS is one of Partimus's flagship projects, an ongoing relationship that involves both labs and individual installs for various teachers. In the CACS case, Krumbach highlighted that supporting the computers required Partimus volunteers willing to go to the schools and inspect the machines in person. Teachers, being driven by the demands of the fixed academic calendar, rarely call in to report hardware or software failures: they simply work around them.

The ASCEND charter school in Oakland is another Partimus effort, but one with a distinctly different origin story. Robert Litt, a teacher at ASCEND, learned about Linux and open source from an acquaintance, and sought out help himself. Partimus donated a server to the school, but acts more like a technology consultancy, providing help and educational resources, while the labs are run and maintained by Litt. Krumbach used the example as evidence of the value of a local champion: Litt is a forward-thinking, technology-aware teacher in other respects as well; he runs multiple blogs to communicate with and provide assignments to his elementary-age classes.

Schools, grants, and budgets

A successful school deployment is not primarily a technological challenge, Krumbach said: the software is all there, and getting modern hardware donations is relatively easy. Instead, the challenges center on the people involved. She called attention to the "enthusiastic" leadership of Partimus director Christian Einfeldt, who is an effective advocate for the software and motivator of the volunteers. But on-the-ground supporters and strong allies at the schools themselves are vital as well. Finally, she emphasized that "selling" schools on open source software required demonstrating it and providing training classes so that the teachers could gain firsthand experience — not merely enumerating a list of benefits.

The audience in the session included many who either worked in education or who had firsthand experience advocating open source software in the classroom, which at times made for impassioned discussion. The topic that occupied the most time was how to respond when a Linux lab is challenged by the sudden appearance of a grant-funded (or corporate-donated) rival lab built on Windows. Apparently, in those situations it is common for the donation or the grant to stipulate that the new hardware be used only in a particular way — which precludes installing another operating system. Krumbach said that Partimus had encountered such a dilemma, and quoted Einfeldt as saying "it's wrong, but it sort of makes me glad when I walk into that lab and one third of the Windows computers don't boot. And they call us back in when half of them don't boot."

Grants and corporate-sponsored donations relate to another important issue, which is that public schools do not deal with budgets like businesses do. They do have a budget (even a technology budget), Krumbach said, but the mindset is completely different: a school's budget is fixed, it is determined by outsiders, and the school has very little input into the process.

In other words, schools don't deal with income and expenses like businesses do, and thus the "you'll start saving money now" argument common in the small business and enterprise market simply carries no weight. A better strategy is to directly connect open source software to opportunities to do new things: a new course, an optional extra-curricular activity, or a faster and simpler way to teach a particular subject. That approach makes charter schools an especially viable market, she said; anyone interested in promoting open source software would do well to pay attention to when local charter schools are in the planning stages.

The higher-ed gap

[Sebastian Dziallas]

While Partimus is interested in the primary and secondary education market (and generally only at the desktop-user level), Red Hat's POSSE targets college professors who teach computer science and software engineering. It has been run both as a week-long boot camp and as a weekend experience, but in either case, the professors are split into groups and learn about the open source development model by immersion: getting familiar with wikis, distributed source code management, and communicating only by online means. Dziallas mentioned that (in at least one case) the professors were instructed to communicate with each other only over IRC during the project; IRC, like other tools common in open source projects, is rarely used in academia.

At the end of a POSSE training course, the expectation is that the professors will use real-world open source projects as exercises and learning opportunities in their own classes — anywhere from serving as source material to assigning semester-long projects that get the students involved in actual development. In addition, the professors leave POSSE with valuable contacts in the open source community, including people who they can turn to when they have questions or when something goes wrong (such as a project delaying its next release to an inopportune time of year).

Dziallas is currently a student at Olin College, and worked as an intern at Red Hat in the summer of 2011. Based on that internship and his experience with POSSE, he presented his insights on the cultural differences between open source software and academia, and how understanding them could help bridge the gap.

For starters, he pointed out that open source and academia have radically different timing on a number of fronts. Many Linux-related open source projects now operate on steady, six-month release cycles, while universities typically only re-evaluate their curriculum every four years. Planning is also different: open source projects vary from those with completely ad-hoc roadmaps to those that plan a year in advance — but academia thinks in two-to-five-year cycles for everything from hardware refreshes to accreditation. The "execution time" of the two worlds differs, too, with the lifecycle of a typical software release being six to twelve months, but the lifespan of a particular degree taking four to five years.

As a result, he said, from the open source perspective the academic world seems glacially slow, but from academia's vantage point, open source is chaotic and unpredictable. But the differences do not stop there. In open source, jumping in and doing something without obtaining permission first is the preferred technique — while in academia it is anathema. Open source is always preoccupied with the problem of finding and recruiting more contributors, he said, while academia is currently interested in "mentoring," "portfolio material," and the "workplace readiness" of students. Industry has been quick to connect with universities, recruiting interns and new employees, but open source has so far not been as successful.

Challenges for POSSE

POSSE is Red Hat's effort to bridge the gap and find common ground between open source in the wild and academia. The professors are encouraged to find an existing project that they care about, not to simply pick one at random, in the hopes of building a sustainable relationship. The "immersion" method of learning the open source methodology is supposed to be a quicker path to understanding it than any written explanation can provide. But ultimately, building connections between the interested professors and actual developers is one of the biggest benefits of the program.

Dziallas calculated that of all of the college professors with an interest in learning more about open source, only 50% can make it to a POSSE event (for budgetary or time reasons). In addition, about 30% have some sort of "institutional blocker" that precludes their attendance beyond just logistical issues, and a tiny percentage drop out from loss of interest or for other reasons.

Thus POSSE is only reaching a fraction of the educators it would like to, but the challenge does not stop there. Among POSSE alumni, the challenge is maintaining a long-term relationship. The amount of support a professor receives after POSSE corresponds directly to their rate of success. Although some are able to use institutional funds to further their involvement with open source (such as travel support to attend a conference, or to bring in a developer to give a guest lecture), most are not. POSSE has only been in operation since 2009, so its long-term sustainability has yet to be proven. But, Dziallas noted, regardless of whether or not the current formula is sustainable, "we must keep trying."

As was the case with Krumbach's talk, the audience question-and-answer segment of the session was taken up largely by the question of how to make inroads into institutions where there is currently no Linux or open source presence. At the college level, of course, the specifics are different. One audience member asked how to combat purchasing decisions that locked out open source, to which Dziallas replied that there is a big difference between the software that students use to do their homework, and what shapes the education experience: if understanding open source and participating in the community is the goal, that goal can be accomplished on a computer running Microsoft software.

Another audience member weighed in on the topic by suggesting that open source advocates take a closer look at the community colleges and technical colleges in their area, not just the four year, "liberal arts" institutions. In the United States, "community" and "technical" colleges typically have a different mandate, the argument went, and one that puts more emphasis on job training and on learning real-world skills. As a result, they move at a different pace than traditional institutions and respond to different factors.

In both sessions, then, the speakers shared their successes, but the audience expressed an ongoing frustration with cracking into the educational computing space. Of course, selling Linux on the desktop has always been a tougher undertaking than selling it in the server room, but it is clear from the conversations at OSSIE that advocating open source in education is far more complicated than substituting "administrator" for "executive" and "classroom" for "office." Both Partimus and POSSE are gaining valuable insights through their own work about the distinct expectations, timing, and interactions required to present a compelling case to educators. They still have more information to gather, but even now other open source projects can learn from their progress.

Comments (3 posted)

Thoughts from LWN's UTF8 conversion

By Jonathan Corbet
February 1, 2012
There are a lot of things that one does not learn in engineering school. In your editor's case, anything related to character encodings has to be put onto that list. That despite the fact that your editor's first programs were written on a system with a six-bit character size; a special "shift out" mechanism was needed to represent some of the more obscure characters - like lower case letters. Text was not portable to machines with any other architecture, but the absence of a network meant that one rarely ran into such problems. And when one did, that was what EBCDIC conversion utilities were for.

Later machines, of course, standardized on eight-bit bytes and the ASCII character set. Having a standard meant that nobody had to worry about character set issues anymore; the fact that it was ill-suited for use outside of the United States didn't seem to matter. Even as computers spread worldwide, usage of ASCII stuck around for a long time. Thus, your editor has a ready-made excuse for not thinking much about character sets when he set out to write the "new LWN site code" in 2002. Additionally, the programming languages and web platforms available at the time did not exactly encourage generality in this area. Anything that wasn't ASCII by then was Latin-1 - for anybody with a sufficiently limited world view.

Getting past the Latin-1 limitation took a long time and a lot of work, but that seems to be accomplished and stable at this point. In the process, your editor observed a couple of things that were not immediately obvious to him. Perhaps those observations will prove useful to anybody else who has had a similarly sheltered upbringing.

Now, too, we have a standard for character representation; it is called "Unicode." In theory, all one needs to do is to work in Unicode, and all of those unpleasant character set problems will go away. Which is a nice idea, but there's a little detail that is easy to skip over: Unicode is not actually a standard for the representation of characters. It is, instead, a mapping between integer character numbers ("code points") and the characters themselves. Nobody deals directly with Unicode; they always work with some specific representation of the Unicode code points.

Suitably enlightened programming languages may well have a specific type for dealing with Unicode strings. How the language represents those strings is variable; many use an integer type large enough to hold any code point value, but there are exceptions. The abortive PHP6 attempt used a variable-width encoding based on 16-bit values, for example. With luck, the programmer need not actually know how Unicode is handled internally to a given language; it should Just Work.

But the use of a language-specific internal representation implies that any string obtained from the world outside a given program is not going to be represented in the same way. Of course, there are standards for string representations too - quite a few standards. The encoding used by LWN now - UTF8 - is a good choice for representing a wide range of code points while being efficient in LWN's still mostly-ASCII world. There are many other choices but, importantly, they are all encodings; they are not "Unicode."
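
A quick example makes the point; here is the single code point U+00E9 ("é") as rendered by three common encodings (Python 3 syntax, used here purely for illustration):

    >>> "é".encode("utf-8")
    b'\xc3\xa9'
    >>> "é".encode("latin-1")
    b'\xe9'
    >>> "é".encode("utf-16-le")
    b'\xe9\x00'

Same character, three different byte sequences; a program that is not told which encoding it has been handed cannot reliably tell them apart.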

So programs dealing in Unicode text must know how outside-world strings are represented and convert those strings to the internal format before operating on them. Any program which does anything more complicated to text than copying it cannot safely do so if it does not fully understand how that text is represented; any general solution almost certainly involves decoding external text to a canonical internal form first.

This is an interesting evolution of the computing environment. Unix-like systems are supposed to be oriented around plain text whenever possible; everything should be human-readable. We still have the human-readable part - better than before for those humans whose languages are not well served by ASCII - but there is no such thing as "plain text" anymore. There is only text in a specific encoding. In a very real sense, text has become a sort of binary blob that must be decoded into something the program understands before it can be operated on, then re-encoded before going back out into the world. A lot of Unicode-related misery comes from a failure to understand (and act on) that fundamental point.

LWN's site code is written in Python 2. Version 2.x of the language is entirely able to handle Unicode, especially for relatively large values of x. To that end, it has a unicode string type, but this type is clearly a retrofit. It is not used by default when dealing with strings; even literal strings must be marked explicitly as Unicode, or they are just plain strings.
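
The retrofit is easy to see in an interactive session; the two (hypothetical) strings below hold the "same" text, but one is a pile of UTF8 bytes and the other a sequence of code points:

    >>> s = 'caf\xc3\xa9'     # a plain string: really just bytes
    >>> u = u'caf\xe9'        # a unicode string: code points
    >>> len(s), len(u)
    (5, 4)
    >>> s.decode('utf-8') == u
    True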

When Unicode was added to Python 2, the developers tried very hard to make it Just Work. Any sort of mixture between Unicode and "plain strings" involves an automatic promotion of those strings to Unicode. It is a nice idea, in that it allows the programmer to avoid thinking about whether a given string is Unicode or "just a string." But if the programmer does not know what is in a string - including its encoding - nobody does. The resulting confusion can lead to corrupted text or Python exceptions; as Guido van Rossum put it in the introduction to Python 3, "This value-specific behavior has caused numerous sad faces over the years." Your editor's experience, involving a few sad faces for sure, agrees with this; trying to make strings "just work" leads to code containing booby traps that may not spring until some truly inopportune time far in the future.
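
The booby trap is easy to demonstrate: mixing a byte string containing non-ASCII data with a unicode string triggers an implicit promotion through the ASCII codec, which promptly blows up:

    >>> 'caf\xc3\xa9' + u'!'
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 3: ordinal not in range(128)

Note that whether the operation fails depends entirely on the bytes that happen to be in the string at run time; that is the "value-specific behavior" in question.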

That is why Python 3 changed the rules. There are no "strings" anymore in the language; instead, one works with either Unicode text or binary bytes. As a general rule, data coming into a program from a file, socket, or other source is binary bytes; if the program needs to operate on that data as text, it must explicitly decode it into Unicode. This requirement is, frankly, a pain; there is a lot of explicit encoding and decoding to be done that didn't have to happen in a Python 2 program. But experience says that it is the only rational way; otherwise the program (and programmer) never really know what is in a given string.
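
In practice, the Python 3 discipline looks something like this sketch (file names invented for the example): decode once at the input boundary, work with real text in the middle, encode once on the way out:

    # Decode at the boundary: bytes from the outside world...
    with open('input.txt', 'rb') as f:
        text = f.read().decode('utf-8')    # bytes -> str

    # ...operate on actual characters in the middle...
    text = text.upper()

    # ...and encode again on the way back out.
    with open('output.txt', 'wb') as f:
        f.write(text.encode('utf-8'))      # str -> bytes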

In summary: Unicode is not UTF8 (or any other encoding), and encoded text is essentially binary data. Once those little details get into a programmer's mind (quite a lengthy process, in your editor's case), most of the difficulties involved in dealing with Unicode go away. Much of the above is certainly obvious to anybody who has dealt with multiple character encodings for any period of time. But it is a bit of a foreign mind set to developers who have spent their time in specialized environments or with languages that don't recognize Unicode - kernel developers, for example. In the end, writing programs that are able to function in a multiple-encoding world is not hard; it's just one more thing to think about.

Comments (91 posted)

Page editor: Jonathan Corbet

Security

Format string vulnerabilities

By Jake Edge
February 1, 2012

A recent sudo advisory described a "format string vulnerability" that could be used for privilege escalation. Since sudo runs as setuid-root, that means that it could potentially be used by a regular user—not just one listed in the /etc/sudoers file—to compromise the system. As with many security flaws, format string vulnerabilities are the result of improper handling of user-supplied input. Given this latest report, it's probably worth taking a look at how these kinds of vulnerabilities come about.

For those who aren't C programmers, a little background may be in order. The standard C library function for printing things to stdout is printf()—other functions in the same family can be used to print to stderr, character buffers, or other files. That function takes a string as its first argument which can contain special formatting characters that describe the types of the rest of the arguments. For example:

    printf("hello, world\n");
    printf("%s\n", "hello, world");
    printf("%s, %s\n", "hello", "world");
would all print the canonical string to stdout. The "%s" is the format specifier for a string, so the function expects the corresponding argument to be a pointer to a null-terminated array of characters.

Members of the printf() family use the "varargs" (variable arguments) facility of the C language to take an arbitrary number of arguments. When the formatting string is parsed, values are popped off the stack in the order they are listed. The function expects those values to be there but, given the existing ABI, the compiler does not (in fact cannot) enforce that they be placed there by the caller. That's where the problem can occur.

In the easy case, compilers can and do warn when there is a mismatch between the format string and arguments. A call like:

    printf("hello, %s\n");
will cause a warning if the warning level is set high enough (like -Wall for GCC). But those kinds of problems are relatively easily found. A trickier problem occurs with something like:
    printf(str);
which is perfectly legal as long as str contains no formatting characters. If it does, however, the function will happily pop things off the stack that don't correspond to any actual arguments. For GCC, the "-Wformat -Wformat-nonliteral" flags can be used to detect this kind of thing. In the "best" case, having format specifiers in str will lead to a program crash; in the worst, it could end up executing code. If str comes from user-supplied input, an attacker may be able to arrange just the right formatting string to execute code of their choosing.

That may be bad enough for a program run as an unprivileged user (as the attacker's code might be the equivalent of "rm -rf $HOME"), but it is far worse if the program has root privileges as sudo does. According to Wikipedia, format string bugs were noted in 1990, but were not recognized as a security problem until a researcher auditing proftpd reported a way to exploit the bug. That exploit used the "%n" format specifier, which stores the number of characters printed so far into the location referenced by an integer pointer popped off the stack. By arranging just the right format string, the exploit would overwrite the current user ID.
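
For the curious, a legitimate use of "%n" looks like the following; the exploit technique consists of arranging for the pointer popped off the stack to land somewhere the attacker finds interesting:

    int count;

    printf("hello, world\n%n", &count);  /* stores 13, the count so far, in count */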

In the sudo case, the program name (which is stored in argv[0] for C programs) was being printed as part of an error message. As the advisory from the finder describes, the program name was "printed" into a buffer (using a variant of sprintf()), and that buffer was then handed off to a vfprintf() as the format string. That meant that the user-controlled program name—which could certainly contain format specifiers—was used as the format string for the vfprintf(). The fix for sudo is to ensure that the program name is printed with a "%s" specifier in the final print statement, rather than building it into the earlier buffer.
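
Reduced to a sketch, the bug class looks like this (an illustration of the pattern described above, not sudo's actual code):

    #include <stdarg.h>
    #include <stdio.h>

    /* Vulnerable: the program name is baked into the format string itself,
       so any format specifiers it contains will be interpreted. */
    void warn_bad(const char *progname, const char *fmt, ...)
    {
        char buf[256];
        va_list ap;

        snprintf(buf, sizeof(buf), "%s: %s", progname, fmt);
        va_start(ap, fmt);
        vfprintf(stderr, buf, ap);
        va_end(ap);
    }

    /* Fixed: the program name is passed as data for an explicit "%s",
       so it is printed verbatim, format specifiers and all. */
    void warn_good(const char *progname, const char *fmt, ...)
    {
        va_list ap;

        fprintf(stderr, "%s: ", progname);
        va_start(ap, fmt);
        vfprintf(stderr, fmt, ap);
        va_end(ap);
    }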

How can the user control the program name, especially for a setuid binary like sudo? That's not very hard either:

    $ ln -s /usr/bin/sudo %n

The sudo advisory notes that building sudo with -D_FORTIFY_SOURCE=2 will prevent these kinds of exploits, though the advisory from the finder notes an article in Phrack that may make it possible to bypass that protection.

The problem in sudo was introduced relatively recently, for version 1.8.0 released at the end of February 2011. It has now been fixed in 1.8.3p2 and affected distributions are starting to get updates out. These kinds of bugs are yet another lesson in the need for great care when handling user-controlled input.

Comments (23 posted)

Brief items

Security quotes of the week

Most people do not realize that any program they run can examine the memory of any other process run by them. Meaning the computer game you are running on your desktop can watch everything going on in Firefox or a programs like pwsafe or kinit or other program that attempts to hide passwords..
-- Dan Walsh

So, if we receive a block less than 10 seconds after the previous one and the previous block had a timestamp more than 24 hours in the past, we don't bother to verify any of the ECDSA signatures in it and will allow it to include transactions that spend random people's Bitcoins!
-- Aidan Thornton

Comments (16 posted)

Format string vulnerability in sudo

The sudo utility (version 1.8.0 and later) suffers from a format string vulnerability that can be easily shown to crash the program. There do not appear to be any publicly-posted privilege escalation exploits at this time, but that does not mean that such exploits do not exist. An update to version 1.8.3p2 is probably a good idea; expect advisories from the distributors in the near future.

Comments (31 posted)

Apache HTTP Server 2.2.22 released

Version 2.2.22 of the Apache web server is out. The main point of this release appears to be the fixing of six different CVE numbers, so people with their own Apache builds probably want to grab the update.

Full Story (comments: none)

KaKaRoTo: How the ECDSA algorithm works

On his blog, Youness Alaoui (aka KaKaRoTo) describes the Elliptic Curve Digital Signature Algorithm (ECDSA), which can be used to cryptographically sign messages or other data. He covers the math behind the algorithm in both a simplified and more detailed view. In addition, he discusses where Sony went wrong with its ECDSA implementation in early versions of the PlayStation 3 firmware: "Once you know the private key dA, you can now sign your files and the PS3 will recognize it as an authentic file signed by Sony. This is why it’s important to make sure that the random number used for generating the signature is actually “cryptographically random”. This is also the reason why it is impossible to have a custom firmware above 3.56, simply because since the 3.56 version, Sony have fixed their ECDSA algorithm implementation and used new keys for which it is impossible to find the private key.. if there was a way to find that key, then the security of every computer, website, system may be compromised since a lot of systems are relying on ECDSA for their security, and it is impossible to crack."

Comments (none posted)

New vulnerabilities

accountsservice: privilege escalation

Package(s): accountsservice  CVE #(s): CVE-2011-4406
Created: January 31, 2012  Updated: February 1, 2012
Description: From the Ubuntu advisory:

Hayawardh Vijayakumar discovered that AccountsService incorrectly handled privileges when modifying the language settings on Ubuntu. A local attacker could exploit this issue to modify arbitrary files, and possibly create a denial of service or obtain increased privileges.

Alerts:
Ubuntu USN-1351-1 accountsservice 2012-01-31

Comments (none posted)

chromium: multiple vulnerabilities

Package(s): chromium  CVE #(s): CVE-2011-3924 CVE-2011-3925 CVE-2011-3926 CVE-2011-3927 CVE-2011-3928
Created: January 30, 2012  Updated: February 1, 2012
Description: From the CVE entries:

Use-after-free vulnerability in Google Chrome before 16.0.912.77 allows remote attackers to cause a denial of service or possibly have unspecified other impact via vectors related to DOM selections. (CVE-2011-3924)

Use-after-free vulnerability in the Safe Browsing feature in Google Chrome before 16.0.912.75 allows remote attackers to cause a denial of service (heap memory corruption) or possibly have unspecified other impact via vectors related to a navigation entry and an interstitial page. (CVE-2011-3925)

Heap-based buffer overflow in the tree builder in Google Chrome before 16.0.912.77 allows remote attackers to cause a denial of service or possibly have unspecified other impact via unknown vectors. (CVE-2011-3926)

Skia, as used in Google Chrome before 16.0.912.77, does not perform all required initialization of values, which allows remote attackers to cause a denial of service or possibly have unspecified other impact via unknown vectors. (CVE-2011-3927)

Use-after-free vulnerability in Google Chrome before 16.0.912.77 allows remote attackers to cause a denial of service or possibly have unspecified other impact via vectors related to DOM handling. (CVE-2011-3928)

Alerts:
Gentoo 201201-17 chromium 2012-01-27

Comments (none posted)

curl: data injection

Package(s): curl  CVE #(s): CVE-2012-0036
Created: January 30, 2012  Updated: April 13, 2012
Description: From the Red Hat bugzilla:

libcurl is vulnerable to a data injection attack for certain protocols through control characters embedded or percent-encoded in URLs.

When parsing URLs, libcurl's parser is very laxed and liberal and only parses as little as possible and lets as much as possible through as long as it can figure out what to do.

In the specific process when libcurl extracts the file path part from a given URL, it didn't always verify the data or escape control characters properly before it passed the file path on to the protocol-specific code that then would use it for its protocol business.

This passing through of control characters could be exploited by someone who would be able to pass in a handicrafted URL to libcurl. Lots of libcurl using applications let users enter URLs in one form or another and not all of these check the input carefully to prevent malicious ones.

A malicious user might pass in %0d%0a to get treated as CR LF by libcurl, and by using this fact a user can trick for example a POP3 client to delete a message instead of getting it or trick an SMTP server to send an unintended message.

This vulnerability can be used to fool libcurl with the following protocols: IMAP, POP3 and SMTP.

This flaw only affects curl versions 7.20.0 up to and including 7.23.1 It is corrected in 7.24.0

Alerts:
Mandriva MDVSA-2012:058 curl 2012-04-13
Gentoo 201203-02 curl 2012-03-05
Fedora FEDORA-2012-0888 curl 2012-02-11
openSUSE openSUSE-SU-2012:0229-1 curl 2012-02-09
Debian DSA-2398-1 curl 2012-01-30
Fedora FEDORA-2012-0894 curl 2012-01-28

Comments (none posted)

ktsuss: privilege escalation

Package(s): ktsuss  CVE #(s): CVE-2011-2921 CVE-2011-2922
Created: January 27, 2012  Updated: February 1, 2012
Description: From the Gentoo advisory:

Two vulnerabilities have been found in ktsuss:

  • Under specific circumstances, ktsuss skips authentication and fails to change the effective UID back to the real UID (CVE-2011-2921).
  • The GTK interface spawned by the ktsuss binary is run as root (CVE-2011-2922).

A local attacker could gain escalated privileges and use the "GTK_MODULES" environment variable to possibly execute arbitrary code with root privileges.

Alerts:
Gentoo 201201-15 ktsuss 2012-01-27

Comments (none posted)

Mozilla products: multiple vulnerabilities

Package(s): thunderbird firefox seamonkey  CVE #(s): CVE-2011-3659 CVE-2011-3670 CVE-2012-0442 CVE-2012-0449 CVE-2012-0444
Created: February 1, 2012  Updated: July 23, 2012
Description: The Mozilla product suite (including Firefox, Thunderbird, and Seamonkey) suffers from a number of vulnerabilities, most of which are exploitable for arbitrary code execution.
Alerts:
openSUSE openSUSE-SU-2014:1100-1 Firefox 2014-09-09
Gentoo 201301-01 firefox 2013-01-07
Mageia MGASA-2012-0176 iceape 2012-07-21
SUSE SUSE-SU-2012:0326-1 libvorbis 2012-03-06
Ubuntu USN-1369-1 thunderbird 2012-02-17
Scientific Linux SL-libv-20120215 libvorbis 2012-02-15
CentOS CESA-2012:0136 libvorbis 2012-02-15
CentOS CESA-2012:0136 libvorbis 2012-02-15
CentOS CESA-2012:0136 libvorbis 2012-02-15
Red Hat RHSA-2012:0136-01 libvorbis 2012-02-15
SUSE SUSE-SU-2012:0221-1 Mozilla Firefox 2012-02-09
SUSE SUSE-SU-2012:0198-1 Mozilla XULrunner 2012-02-09
openSUSE openSUSE-SU-2012:0234-1 MozillaFirefox 2012-02-09
Debian DSA-2406-1 icedove 2012-02-09
Ubuntu USN-1353-1 xulrunner-1.9.2 2012-02-08
Ubuntu USN-1350-1 thunderbird 2012-02-08
Ubuntu USN-1355-3 ubufox, webfav 2012-02-03
Ubuntu USN-1355-2 mozvoikko 2012-02-03
Ubuntu USN-1355-1 firefox 2012-02-03
Mandriva MDVSA-2012:013 mozilla 2012-02-03
Debian DSA-2402-1 iceape 2012-02-02
Debian DSA-2400-1 iceweasel 2012-02-02
Scientific Linux SL-seam-20120201 seamonkey 2012-02-01
Scientific Linux SL-fire-20120201 firefox 2012-02-01
Scientific Linux SL-thun-20120201 thunderbird 2012-02-01
Scientific Linux SL-thun-20120201 thunderbird 2012-02-01
Oracle ELSA-2012-0080 thunderbird 2012-02-01
Oracle ELSA-2012-0085 thunderbird 2012-02-01
Oracle ELSA-2012-0084 seamonkey 2012-02-01
Oracle ELSA-2012-0079 firefox 2012-02-01
Oracle ELSA-2012-0079 firefox 2012-02-01
Oracle ELSA-2012-0079 firefox 2012-02-01
Fedora FEDORA-2012-1140 libvpx 2012-02-02
Fedora FEDORA-2012-1140 gstreamer-plugins-bad-free 2012-02-02
Fedora FEDORA-2012-1140 thunderbird-lightning 2012-02-02
Fedora FEDORA-2012-1140 thunderbird 2012-02-02
Fedora FEDORA-2012-1140 xulrunner 2012-02-02
Fedora FEDORA-2012-1140 firefox 2012-02-02
Red Hat RHSA-2012:0084-01 seamonkey 2012-02-01
Red Hat RHSA-2012:0079-01 firefox 2012-02-01
Red Hat RHSA-2012:0085-01 thunderbird 2012-02-01
CentOS CESA-2012:0084 seamonkey 2012-02-01
CentOS CESA-2012:0079 firefox 2012-02-01
CentOS CESA-2012:0079 firefox 2012-02-01
CentOS CESA-2012:0079 firefox 2012-02-01
CentOS CESA-2012:0080 thunderbird 2012-02-01
CentOS CESA-2012:0085 thunderbird 2012-02-01
CentOS CESA-2012:0085 thunderbird 2012-02-01
Red Hat RHSA-2012:0080-01 thunderbird 2012-02-01

Comments (none posted)

openttd: denial of service

Package(s): openttd  CVE #(s): CVE-2012-0049
Created: January 30, 2012  Updated: August 7, 2012
Description: From the OpenTTD advisory:

Using a slow read type attack it is possible to prevent anyone from joining a server with virtually no resources. Once downloading the map no other downloads of the map can start, so downloading really slowly will prevent others from joining. This can be further aggravated by the pause-on-join setting in which case the game is paused and the players cannot continue the game during such an attack. This attack requires that the user is not banned and passes the authorization to the server, although for many servers there is no server password and thus authorization is easy.

Alerts:
Debian DSA-2524-1 openttd 2012-08-06
Fedora FEDORA-2012-0623 openttd 2012-01-28
Fedora FEDORA-2012-0647 openttd 2012-01-28

Comments (none posted)

php5: arbitrary file writes

Package(s): php5  CVE #(s): CVE-2012-0057
Created: January 31, 2012  Updated: April 13, 2012
Description: From the Debian advisory:

When applying a crafted XSLT transform, an attacker could write files to arbitrary places in the filesystem.

Alerts:
SUSE SUSE-SU-2013:1351-1 PHP5 2013-08-16
Gentoo 201209-03 php 2012-09-23
CentOS CESA-2012:1046 php 2012-07-10
Scientific Linux SL-php-20120709 php 2012-07-09
Scientific Linux SL-php5-20120705 php53 2012-07-05
Scientific Linux SL-php-20120705 php 2012-07-05
Oracle ELSA-2012-1046 php 2012-06-30
Oracle ELSA-2012-1047 php53 2012-06-28
Oracle ELSA-2012-1045 php 2012-06-28
CentOS CESA-2012:1047 php53 2012-06-27
CentOS CESA-2012:1045 php 2012-06-27
Red Hat RHSA-2012:1047-01 php53 2012-06-27
Red Hat RHSA-2012:1046-01 php 2012-06-27
Red Hat RHSA-2012:1045-01 php 2012-06-27
SUSE SUSE-SU-2012:0496-1 PHP5 2012-04-12
SUSE SUSE-SU-2012:0472-1 PHP5 2012-04-06
openSUSE openSUSE-SU-2012:0426-1 php5 2012-03-29
SUSE SUSE-SU-2012:0411-1 PHP5 2012-03-24
Ubuntu USN-1358-1 php5 2012-02-09
Debian DSA-2399-2 php5 2012-01-31
Debian DSA-2399-1 php5 2012-01-31

Comments (none posted)

rubygem-actionpack: cross-site scripting

Package(s): rubygem-actionpack  CVE #(s): CVE-2011-4319
Created: January 26, 2012  Updated: March 19, 2012
Description:

From the Red Hat bugzilla entry:

A cross-site scripting (XSS) flaw was found in the way the 'translate' helper method of the Ruby on Rails performed HTML escaping of interpolated user input, when interpolation in combination with HTML-safe translations were used. A remote attacker could use this flaw to execute arbitrary HTML or web script by providing a specially-crafted input to Ruby on Rails application, using the ActionPack module and its 'translate' helper method without explicit (application specific) sanitization of user provided input.

Alerts:
Fedora FEDORA-2012-0643 rubygem-actionpack 2012-01-25
Fedora FEDORA-2012-0626 rubygem-actionpack 2012-01-25

Comments (none posted)

smokeping: cross-site scripting

Package(s): smokeping  CVE #(s): CVE-2012-0790
Created: February 1, 2012  Updated: March 21, 2013
Description: The smokeping CGI script does not properly sanitize input passed via the displaymode parameter, thus enabling cross-site scripting attacks.
Alerts:
Debian DSA-2651-1 smokeping 2013-03-20
Fedora FEDORA-2012-0801 smokeping 2012-01-31
Fedora FEDORA-2012-0813 smokeping 2012-01-31

Comments (none posted)

software-properties: man-in-the-middle attack

Package(s): software-properties  CVE #(s): CVE-2011-4407
Created: January 31, 2012  Updated: October 2, 2012
Description: From the Ubuntu advisory:

David Black discovered that Software Properties incorrectly validated server certificates when performing secure connections to download PPA GPG key fingerprints. If a remote attacker were able to perform a man-in-the-middle attack, this flaw could be exploited to install altered package repository GPG keys.

Alerts:
Ubuntu USN-1352-1 software-properties 2012-01-31

Comments (none posted)

sudo: privilege escalation

Package(s): sudo  CVE #(s): CVE-2012-0809
Created: February 1, 2012  Updated: February 1, 2012
Description: A format string vulnerability in sudo (versions 1.8.0 to 1.8.3p1) enables a local attacker to obtain root privileges; see this advisory for details.
Alerts:
Gentoo 201203-06 sudo 2012-03-05
Fedora FEDORA-2012-1028 sudo 2012-01-31

Comments (none posted)

usbmuxd: code execution

Package(s): usbmuxd  CVE #(s): CVE-2012-0065
Created: February 1, 2012  Updated: April 11, 2013
Description: It turns out that usbmuxd does not perform proper bounds checking when processing the SerialNumber field provided by USB devices. A local attacker with a suitably modified USB device could exploit this failure to run arbitrary code as the "usbmux" user.
Alerts:
Mandriva MDVSA-2013:133 usbmuxd 2013-04-10
Mageia MGASA-2012-0228 usbmuxd 2012-08-18
Mandriva MDVSA-2012:133 usbmuxd 2012-08-16
openSUSE openSUSE-SU-2012:0345-1 usbmuxd 2012-03-09
Gentoo 201203-11 usbmuxd 2012-03-05
Fedora FEDORA-2012-1213 usbmuxd 2012-02-17
Fedora FEDORA-2012-1192 usbmuxd 2012-02-17
Ubuntu USN-1354-1 usbmuxd 2012-02-01

Comments (none posted)

wireshark: multiple vulnerabilities

Package(s): wireshark  CVE #(s): CVE-2012-0066 CVE-2012-0067 CVE-2012-0068
Created: January 27, 2012  Updated: February 1, 2012
Description: From the Debian advisory:

Laurent Butti discovered a buffer underflow in the LANalyzer dissector of the Wireshark network traffic analyzer, which could lead to the execution of arbitrary code (CVE-2012-0068)

This update also addresses several bugs, which can lead to crashes of Wireshark. These are not treated as security issues, but are fixed nonetheless if security updates are scheduled: CVE-2011-3483, CVE-2012-0041, CVE-2012-0042, CVE-2012-0066 and CVE-2012-0067.

Alerts:
Oracle ELSA-2013-1569 wireshark 2013-11-26
Gentoo GLSA 201308-05:02 wireshark 2013-08-30
Gentoo 201308-05 wireshark 2013-08-28
Oracle ELSA-2013-0125 wireshark 2013-01-12
Scientific Linux SL-wire-20130116 wireshark 2013-01-16
CentOS CESA-2012:0509 wireshark 2012-04-24
Oracle ELSA-2012-0509 wireshark 2012-04-23
Scientific Linux SL-wire-20120423 wireshark 2012-04-23
Red Hat RHSA-2012:0509-01 wireshark 2012-04-23
openSUSE openSUSE-SU-2012:0295-1 wireshark 2012-02-23
Debian DSA-2395-1 wireshark 2012-01-27

Comments (none posted)

Page editor: Jake Edge

Kernel development

Brief items

Kernel release status

The current development kernel is 3.3-rc2, released on January 31 - a little later than would have ordinarily been expected. "The diffstat is pretty flat - indicative of mostly small changes spread out. Which is what I like seeing, and we don't always see at this point. There's some file movement (8250-based serial and the arm mx5 -> imx merge), but otherwise really not a lot of excitement. Good." That said, there are quite a few changes in this prepatch; see the short-form changelog in the announcement for details. Thirteen of those changes are reverts for patches that didn't work out.

Stable updates: there have been no stable updates in the last week. The 2.6.32.56, 3.0.19, and 3.2.3 stable updates are in the review process as of this writing; they can be expected on or after February 3.

Comments (none posted)

Quotes of the week

It wouldn't be the first time lockdep & ftrace live locked the system. Or made it so unbearably slow. Lockdep and ftrace do not play well together. They both are very intrusive. The two remind me of the United States congress. Where there is two parties trying to take control of everything, but nothing ever gets done. We end up with a grid/live lock in the country/computer.
-- Steven Rostedt

I can see some vindictive programmer doing that, while thinking "I'll show these people who pointed out this bug in my code, mhwhahahahaa! I'll fix their test-case while still leaving the real problem unaddressed", but I don't think compiler people are quite *that* evil. Yes, they are evil people who are trying to trip us up, but still..
-- Linus Torvalds

In that way, my philosophy of ext4 is that it should be like the Linux kernel; it's an evolutionary process and central planning is often overrated. People contribute to ext4 for many different reasons, and that means they optimize ext4 for their particular workloads. Like Linus for Linux, we're not trying to architect for "world domination" by saying, "hmm, in order to 'take out' reiserfs4, we'd better implement features foo and bar".
-- Ted Ts'o

You're making the assumption that users are informed and knowledgable, and all filesystem developers should know this is simply not true. Users repeatedly demonstrate that they don't know how filesystems work, don't understand the knobs that are provided, don't understand what their applications do in terms of filesystem operations and don't really understand their data sets. Education takes time and effort, but still users make the same mistakes over and over again.
-- Dave Chinner

Looks like there are more dragons and hidden trapdoors in the drm release path than actual lines of code.
-- Daniel Vetter

Comments (none posted)

Greg Kroah-Hartman moves to the Linux Foundation

The Linux Foundation has announced that Greg Kroah-Hartman has joined the organization as a fellow. "In his role as Linux Foundation Fellow, Kroah-Hartman will continue his work as the maintainer for the Linux stable kernel branch and a variety of subsystems while working in a fully neutral environment. He will also work more closely with Linux Foundation members, workgroups, Labs projects, and staff on key initiatives to advance Linux."

Comments (12 posted)

LSF/MM summit deadline approaching

The deadline for requests to attend the 2012 storage, filesystem, and memory management summit is February 5 (the event happens April 1-2 in San Francisco). Any developers who would like to be there and have not expressed their interest should do so in the very near future.

Full Story (comments: none)

Kernel development news

What happened to disk performance in 2.6.39

By Jonathan Corbet
January 31, 2012
Herbert Poetzl recently reported an interesting performance problem. His SSD-equipped laptop could read data at about 250MB/s with the 2.6.38 kernel, but performance dropped to 25-50MB/s on anything more recent. An order-of-magnitude performance drop is just not the sort of benefit that most people look forward to when upgrading their kernel, so this report quickly gained the attention of a number of developers. The resolution of the problem turned out to be simple, but it offers an interesting view of how high-performance disk I/O works in the kernel.

An explanation of the problem requires just a bit of background, and, in particular, the definition of a couple of terms. "Readahead" is the process of speculatively reading file data into memory with the idea that an application is likely to want it soon. Reasonable performance when reading a file sequentially depends on proper readahead; that is the only way to ensure that reading and consuming the data can be done in parallel. Without readahead, applications will spend more time than necessary waiting for data to be read from disk.
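
Applications can also nudge the readahead machinery from user space; a minimal sketch using the standard posix_fadvise() call (not part of Herbert's report, just an illustration) would be:

    #include <fcntl.h>

    /* Hint that this descriptor will be read sequentially; Linux
       responds by enlarging the readahead window for the file. */
    static void expect_sequential(int fd)
    {
        (void) posix_fadvise(fd, 0, 0, POSIX_FADV_SEQUENTIAL);
    }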

"Plugging," instead, is the process of stopping I/O request submissions to the low-level device for a period of time. The motivation for plugging is to allow a number of I/O requests to accumulate; that lets the I/O scheduler sort them, merge adjacent requests, and apply any sort of fairness policy that might be in effect. Without plugging, I/O requests would tend to be smaller and more scattered across the device, reducing performance even on solid-state disks.

Now imagine that we have a process about to start reading through a long file, as indicated by your editor's unartistic rendering here:

[Bad art]

Once the application starts reading from the beginning of the file, the kernel will set about filling the first readahead window (which is 128KB for larger files) and submit I/O for the second window, so the situation will look something like this:

[Reading begins]

Once the application reads past 128KB into the file, the data it needs will hopefully be in memory. The readahead machinery starts up again, initiating I/O for the window starting at 256KB; that yields a situation that looks something like this:

[Next window]

This process continues indefinitely, with the kernel working to stay ahead of the application so that the data is already in memory by the time the application gets around to reading it.

The 2.6.39 kernel saw some significant changes to how plugging is handled, with the result that the plugging and unplugging of queues is now explicitly managed in the I/O submission code. So, starting with 2.6.39, the readahead code will plug the request queue before it submits a batch of read operations, then unplug the queue at the end. The function that handles basic buffered file I/O (generic_file_aio_read()) also now does its own plugging. And that is where the problems begin.
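In outline, the explicit API works like this; submit_my_batch() is a stand-in for whatever requests the caller generates, so this is a sketch of the usage pattern rather than real submission-path code:

    struct blk_plug plug;

    blk_start_plug(&plug);
    /* Requests submitted here accumulate in a per-task list
     * rather than being sent straight to the device. */
    submit_my_batch();
    /* Unplugging flushes the accumulated batch down to the
     * I/O scheduler and the driver. */
    blk_finish_plug(&plug);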

Imagine a process that is doing large (1MB) reads. As the first large read gets into generic_file_aio_read(), that function will plug the request queue and start working through the file pages already in memory. When it gets to the end of the first readahead window (at 128KB), the readahead code will be invoked as described above. But there's a problem: the queue is still plugged by generic_file_aio_read(), which is still working on that 1MB read request, so the I/O operations submitted by the readahead code are not passed on to the hardware; they just sit in the queue.

So, when the application gets to the end of the second readahead window, we see a situation like this:

[Bummer]

At this point, everything comes to a stop. That will cause the queue to be unplugged, allowing the readahead I/O requests to be executed at last, but it is too late. The application will have to wait. That wait is enough to hammer performance, even on solid-state devices.

The fix is to simply remove the top-level plugging in generic_file_aio_read() so that readahead-originated requests can get through to the hardware. Developers who have been able to reproduce the slowdown report that this patch makes the problem go away, so this issue can be considered solved. Look for this fix to appear in a stable kernel release sometime soon.

Comments (15 posted)

Preparing for user-space checkpoint/restore

By Jonathan Corbet
January 31, 2012
The addition of checkpoint/restore functionality to Linux has been an ongoing topic of discussion and development for some years now. After the poor reception given to the in-kernel C/R implementation at the end of 2010, that particular project seems to have faded into the background. Instead, most of the interest is now in solutions that operate mostly in user space. Depending on the approach taken, most or all of the support needed to implement this functionality in user space already exists. But a complete solution is not yet there.

CRIU

Cyrill Gorcunov has been working to fill in some of the gaps with a preparatory patch set for user-space checkpointing/restore with the "CRIU" tool set. There are a number of small additions to the kernel ABI to be found here:

  • A new children entry in a thread's /proc directory provides a list of that thread's immediate children. This information allows a user-space checkpoint utility to find those child processes without needing to walk through the entire process tree.

  • /proc/pid/stat is extended to provide the bounds of the process's argument and environment arrays, along with the exit code. That allows this information to be reliably captured at checkpoint time.

  • A number of new prctl() options allow the argument and environment arrays to be restored in a way matching what was there at checkpoint time. The desired end result is that ps shows the same information about a process after a checkpoint/restore cycle as it did before.

Perhaps the most significant new feature, though, is the addition of a new system call:

    long kcmp(pid_t pid1, pid_t pid2, int type, unsigned long idx1, unsigned long idx2);

Checkpoint/restore is meant to work as well on a tree of processes as on a single process. One challenge in the way of meeting that goal is that some of those processes may share resources - files, say, or, perhaps, a whole lot more. Replicating that sharing at restore time is relatively easy; the clone() system call provides a nice set of flags controlling the sharing of resources. The harder part is knowing, at checkpoint time, whether that sharing is taking place.
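As a simple illustration (this is not CRIU's actual restore code), a restore tool that knows two processes shared their file table and filesystem context at checkpoint time could recreate that sharing with something like:

    #define _GNU_SOURCE
    #include <sched.h>
    #include <signal.h>
    #include <stdlib.h>

    static int restored_child(void *arg)
    {
        /* Shares the parent's file descriptor table and fs_struct,
         * just as the two checkpointed processes did. */
        return 0;
    }

    int main(void)
    {
        char *stack = malloc(64 * 1024);

        if (!stack)
                return 1;
        /* Pass the top of the stack area; stacks grow downward on
         * most architectures. */
        clone(restored_child, stack + 64 * 1024,
              CLONE_FILES | CLONE_FS | SIGCHLD, NULL);
        return 0;
    }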

One way for user space to determine whether, for example, two processes are sharing the same open file would be to query the kernel for the address of the associated struct file and see if they are the same in both processes. That kind of functionality sets off alarms among those concerned about security, though; learning where data structures live in kernel space is often an important precondition to an attack. There was talk for a while of "obfuscating" the pointers - through an exclusive-OR with a random value, for example - but the risk was still seen as being too high. So the compromise is kcmp(), which simply answers the question of whether resources found in two processes are the same or not.
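The comparison can still be made on scrambled values; a sketch of the idea follows (the cookie handling and the function name here are illustrative, not the patch's actual code):

    /* A per-boot random cookie scrambles pointers before they are
     * compared, so the result leaks no kernel addresses. Equality
     * survives the XOR even though the ordering is shuffled. */
    static unsigned long cookie;

    static int kptr_cmp(void *a, void *b)
    {
        unsigned long oa = (unsigned long)a ^ cookie;
        unsigned long ob = (unsigned long)b ^ cookie;

        return (oa < ob) ? 1 : ((oa > ob) ? 2 : 0);
    }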

kcmp() takes two process ID parameters, indicating the processes of interest; both processes must be in the same PID namespace as the calling process. The type parameter tells the kernel the specific item that is being compared:

  • KCMP_FILE: determines whether a file descriptor idx1 in the first process is the same as another descriptor (idx2) in the second process.

  • KCMP_FILES: compares the file descriptor arrays to see whether the processes share all files.

  • KCMP_FS: compares fs_struct structures (which hold the current umask, working directory, namespace root, etc.).

  • KCMP_IO: compares the I/O context, used mainly for block I/O scheduling.

  • KCMP_SIGHAND: compares the two processes' signal handler arrays.

  • KCMP_SYSVSEM: compares the list of undo operations associated with SYSV semaphores.

  • KCMP_VM: compares each process's address space.

The return value from kcmp() is zero if the two items are equal, one if the first item is "less" than the second, or two if the first is "greater" than the second. The ordered comparison may seem a little strange, especially when one looks at the implementation and sees that the pointers are obfuscated before comparison within the kernel. The result is, thus, an ordering that (by design) does not match the ordering of the relevant data structures in kernel space. It turns out that even a reshuffled (but consistent) "ordering" is useful for optimizing comparisons in user space when large numbers of open files are present.
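Until C libraries grow a wrapper, user space would invoke the new call through syscall(). In the sketch below, __NR_kcmp assumes whatever number is eventually assigned when the patch is merged, and the KCMP_FILE value is taken from the posted patch:

    #include <sys/syscall.h>
    #include <unistd.h>

    #define KCMP_FILE 0     /* from the posted patch */

    /* Returns 0 if fd1 in pid1 and fd2 in pid2 refer to the same
     * open file, 1 or 2 (the shuffled ordering) if they do not,
     * or -1 on error. */
    static long same_open_file(pid_t pid1, pid_t pid2, int fd1, int fd2)
    {
        return syscall(__NR_kcmp, pid1, pid2, KCMP_FILE,
                       (unsigned long)fd1, (unsigned long)fd2);
    }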

This patch set has been through a few cycles of review and seems to have addressed most of the concerns raised by reviewers. It may just find its way in through the next merge window. Meanwhile, people who want to see how the user-space side works can find the relevant code at criu.org.

DMTCP

CRIU is not the only user-space checkpoint/restore implementation out there; the DMTCP (Distributed MultiThreaded CheckPointing) project has been at it since roughly the 2.6.9 kernel era. DMTCP differs somewhat from CRIU; in particular, it is able to checkpoint groups of processes connected by sockets - even across different machines - and it requires no changes to the kernel at all. These features come with a couple of limitations, though.

Checkpoint/restore with DMTCP requires that the target process(es) be started with a special script; it is not possible to checkpoint arbitrary processes on the system. That script uses the LD_PRELOAD mechanism to place wrappers around a number of libc and (especially) system call implementations. As a result, DMTCP has no need to ask the kernel whether two processes are sharing a specific resource; it has been watching the relevant system calls and knows how the processes were created. The disadvantage to this approach - beyond having to run checkpointable processes in a special environment - is that, as can be seen in the table of supported applications, not all programs can be checkpointed.
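The wrapping technique itself is easy to demonstrate. The fragment below - illustrative only, and far simpler than DMTCP's real wrappers - interposes on open() in the usual LD_PRELOAD style; it would be built as a shared object (cc -shared -fPIC wrapper.c -ldl) and injected into the target program's environment:

    #define _GNU_SOURCE
    #include <dlfcn.h>
    #include <fcntl.h>
    #include <stdarg.h>
    #include <sys/types.h>

    /* Note each open() call, then pass it through to the real
     * libc implementation located with RTLD_NEXT. */
    int open(const char *path, int flags, ...)
    {
        static int (*real_open)(const char *, int, ...);
        mode_t mode = 0;

        if (!real_open)
                real_open = (int (*)(const char *, int, ...))
                                dlsym(RTLD_NEXT, "open");
        if (flags & O_CREAT) {
                va_list ap;

                va_start(ap, flags);
                mode = va_arg(ap, mode_t);
                va_end(ap);
        }
        /* ... record (path, flags) in checkpoint state here ... */
        return real_open(path, flags, mode);
    }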

The recent 1.2.4 release improves support, though, to the point that most of the applications users care about should be checkpointable. The system has been integrated with Open MPI and is able to respond to MPI-generated checkpoint and restore requests. DMTCP is available with the openSUSE, Debian Testing, and Ubuntu distributions. DMTCP may offer something good enough today for many users, who may not need to wait for one of the other projects to be ready sometime in the future.

Comments (14 posted)

Betrayed by a bitfield

By Jonathan Corbet
February 1, 2012
Developers tend to fear compiler bugs, and for good reason: such bugs can be hard to find and hard to work around. They can leave traps in a compiled program that spring on users at bad times. Things can get even worse if one person's compiler bug is seen by the compiler's developer as a feature - such issues have a tendency to never get fixed. It is possible that just this kind of feature has turned up in GCC, with unknown impact on the kernel.

One of the many structures used by the btrfs filesystem, defined in fs/btrfs/ctree.h, is:

    struct btrfs_block_rsv {
	u64 size;
	u64 reserved;
	struct btrfs_space_info *space_info;
	spinlock_t lock;
	unsigned int full:1;
    };

Jan Kara recently reported that, on the ia64 architecture, the lock field was occasionally becoming corrupted. Some investigation revealed that GCC was doing a surprising thing when the bitfield full is changed: it generates a 64-bit read-modify-write cycle that reads both lock and full, modifies full, then writes both fields back to memory. If lock is modified by another processor while that cycle is in flight, the modification will be lost when lock is written back. The chances of good things resulting from this sequence of events are quite small.
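In effect, the generated code behaves as if the C source had read something like the following, where rsv points to the structure above and FULL_BIT is invented for illustration; this shows the observed behavior, not actual compiler output:

    u64 word;

    word = *(u64 *)&rsv->lock;      /* reads lock and full together */
    word |= FULL_BIT;               /* sets the bitfield */
    *(u64 *)&rsv->lock = word;      /* writes lock back as well */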

One can imagine that quite a bit of work was required to track down this particular surprise. It is also not hard to imagine the dismay that results from a conversation like this:

I've raised the issue with our GCC guys and they said to me that: "C does not provide such guarantee, nor can you reliably lock different structure fields with different locks if they share naturally aligned word-size memory regions. The C++11 memory model would guarantee this, but that's not implemented nor do you build the kernel with a C++11 compiler."

Unsurprisingly, Linus was less than impressed by this response. Language standards are not written for the unique needs of kernels, he said, and can never "guarantee" the behavior that a kernel needs:

So C/gcc has never "promised" anything in that sense, and we've always had to make assumptions about what is reasonable code generation. Most of the time, our assumptions are correct, simply because it would be *stupid* for a C compiler to do anything but what we assume it does.

But sometimes compilers do stupid things. Using 8-byte accesses to a 4-byte entity is *stupid*, when it's not even faster, and when the base type has been specified to be 4 bytes!

As it happens, the problem is a bit worse than non-specified behavior. Linus suggested running a test with a structure like:

    struct example {
	volatile int a;
	int b:1;
    };

In this case, if an assignment to b causes a write to a, the behavior is clearly buggy: the volatile keyword makes it explicit that a may be accessed from elsewhere. Jiri Kosina gave it a try and reported that GCC is still generating 64-bit operations in this case. So, while the original problem is technically compliant behavior, it almost certainly results from the same decision-making that makes the second example go wrong.

Knowing that may give the kernel community more ammunition to flame the GCC developers with, but it is not necessarily all that helpful. Regardless of the source of the problem, this behavior exists in versions of the compiler that, almost certainly, are being used outside of the development community to build the kernel. So some sort of workaround is likely to be necessary even if GCC's behavior is fixed. That could be a bit of a challenge; auditing the entire kernel for 32-bit-wide bitfield variables in structures that may be accessed concurrently will not be a small job. But, then, nobody said that kernel development was easy.
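For structures like the one above, one plausible class of workaround - a sketch, not necessarily the fix btrfs will adopt - is to give the flag a storage unit of its own, since compilers access non-bitfield scalars at their natural size:

    struct btrfs_block_rsv {
	u64 size;
	u64 reserved;
	struct btrfs_space_info *space_info;
	spinlock_t lock;
	unsigned short full;    /* no longer a bitfield; a store to
	                         * "full" need not touch "lock" */
    };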

Comments (85 posted)

Patches and updates

Kernel trees

Architecture-specific

Core kernel code

Development tools

Device drivers

Filesystems and block I/O

Memory management

Networking

Security-related

Virtualization and containers

Page editor: Jonathan Corbet

Distributions

FreeBSD and release engineering

February 1, 2012

This article was contributed by Nathan Willis

On January 16, John Kozubik posted to the FreeBSD-hackers list and expressed his disappointment in some of the recent trends in the project: namely, an increasingly slow release cycle, too many overlapping "production" releases, and an estrangement between the core developers and end users when it comes to support issues like bug fixes. The list has since debated Kozubik's assessment in a heroically long thread; while the majority agree that FreeBSD would benefit from refocusing its energies and polishing its processes, the project has not yet developed a concrete plan of action.

Too many series, not enough releases

Kozubik himself is not a FreeBSD developer; he manages enterprise production environments (including rsync.net) that run almost 1,000 FreeBSD machines. As he explained in his initial message, he was disappointed to hear that the next point release of the OS, 8.3, has been pushed back to March 2012 — more than a year after 8.2. Such a long stretch between minor releases makes maintaining a stable server farm difficult, but paradoxically FreeBSD has also complicated the lives of its end users by simultaneously pushing out multiple "production releases" with different major numbers — at present including versions 7.4, 8.2, and 9.0.

Once a new "major" version is in development, he said, important patches to the previous major version start to languish, as developers lose interest in donating their time to the old code and to less-than-exciting maintenance tasks. As a result, enterprise users face a difficult choice: either go without new features and fixes in the "old" production release, or risk the instability of the "new" production release. He worries that FreeBSD may be "becoming an operating system by, and for, FreeBSD developers" — to the ultimate alienation of users.

Kozubik then outlined five underlying causes that contribute to the alienation problem, and proposed three fixes. First, there is a "widening gap of understanding" between the developers and end users, caused by developers frequently jumping between bleeding-edge snapshots of the code, rather than running the releases tagged as stable for production use. The disconnect makes discussing maintenance issues difficult. Second, maintaining multiple "production releases" simultaneously dilutes developer focus, which keeps the releases from ever truly maturing. Third, the simultaneous production releases drive away potential paid investment from enterprise customers, because they view each FreeBSD release with uncertainty.

Fourth, the slow pace of minor releases means that important fixes often sit unreleased long after they have been verified by maintainers. That not only hurts end users (by depriving them of regression fixes and security updates), but downstream projects as well. Finally, when the slow pace of minor releases is coupled with the multiple-major-releases problem, new code and fixes increasingly get bumped from the "old production" release to the "new production release," solely because the new release is what developers are interested in working on. This traps enterprise users in "the same bad choice again: make major investments in testing and people and processes every two years, or just limp along with old, buggy drivers and no support." Kozubik called this "the culture of 'that's fixed in CURRENT' or 'we built those changes into (insert next major release)'."

He suggested that the project take three steps to ameliorate the trouble. First, intentionally consider the processes and costs incurred by large FreeBSD deployments. "Think about the logistics of major upgrades and the difficulty of running snapshot releases, etc. Remember - if it's not fixed in the production release, it's NOT FIXED. Serious (large) end users have very little use for CURRENT." Second, the project should focus on just one production release at a time, and commit to a definite support schedule (Kozubik suggests five years as a production release, followed by five years as a "legacy" release). Third, in concert with the predictable major release schedule, the project should commit to doing smaller, more frequent minor updates, around three times per year.

Dissecting the problem

A majority of the people who joined in the discussion agreed that something like what Kozubik proposed would be beneficial. Opinion varied, however, on what the underlying causes are, and whether or not they can be fixed in the short term. Rich Healey observed that there are very few paid FreeBSD developers, and while volunteers have little motivation to undertake the unexciting maintenance tasks, the paid developers that exist are often contracted to implement specific new features.

Warner Losh said that no corporate sponsor has been pushing the project to keep the release process on schedule since the demise of Walnut Creek. Julian Elischer responded by suggesting that interested high-volume customers and downstream vendors could pool their resources and pay a developer to work specifically on release management — a suggestion that was met with approval.

FreeBSD release engineer Robert Watson described the dilemma as a release engineering problem — with several causes — that he and the other release team members could address. Historically, he said, the FreeBSD base and ports collection were on a single, tightly-coupled release cycle — which often resulted in ports getting very little attention. In the early days, he added, the project used CVS version control, which made branching very expensive. Last but not least, the project has come to rely on a single "head release engineer" to steer all of the release schedules, which results in bottlenecks and slipping release dates. The CVS problem has been addressed by moving to Subversion, he said, and there is growing support for de-coupling the base and ports releases, but the project still needs to fix the single-release-engineer issue. Watson suggested mentoring-in new release managers from the developer community, each of whom would take responsibility for one minor release.

On the other hand, Igor Mozolevsky argued that the FreeBSD Problem Report (PR) patch system is broken, because it effectively requires end users to undertake a nagging campaign to even get a patch examined by a developer. That drives away users and is clearly sub-optimal for the project. Not everyone agreed with that assessment, although Matthew Fuller cited an example of a manpage fix collecting dust for three years.

Adam Vande More speculated that a bounty system would motivate quicker PR merges — but Matt Olander from iXsystems pointed out that he has set one up already. Watson again suggested changes that the project could make to its existing processes, starting with replacing the GNATS issue-tracker that no one seems to like, and adopting a more formal policy for trawling PRs and triaging bug reports.

Pushback

Not everyone was on board with Kozubik's reading of the situation, however. Ivan Voras disagreed completely, saying that "the situation is actually quite good," and that "nobody would mind" if there were no more stable releases at all. "The 'releases' are for many people simply a periodical annoyance due to freezes," he said. Kozubik replied that "I could not have illustrated my point better, RE: FreeBSD becoming an OS by, and for, FreeBSD developers," and explained that most businesses will simply not use software that is not "officially" released.

Andriy Gapon defended the concept of a "by the developers, for the developers" mindset as the equivalent to being a community project. Projects that exist "for the users," he argued, tend to exist for commercial purposes, and be backed by profit-driven corporations. FreeBSD is more akin to the Debian project, he said, which is very much a "for the developers" effort, but still quite successful.

Adrian Chadd took issue with the question being raised at all, saying:

If you care this much to comment on it, please consider caring enough to step up and assist. Or, pay a company like iXsystems for FreeBSD support and get them to do this for you. Otherwise you're just re-iterating the same stuff I'm sure all the developers know but are just out of manpower/time/money/resources to do anything about.

But Doug Barton quickly responded "let's do away with the whole, 'If you step up to help, your help will be accepted with open arms' myth. That's only true if the project leadership agrees with your goals." The FreeBSD project needs to do some serious thinking about things like the role of "committer" and the meaning of "release," he said, including the difference between major and minor releases, all of which are terms with no formal definition. "We also need to take a step back and ask if throwing more person-hours at the problem is the right solution, or if redesigning the system to better meet the needs of the users *as a first step* is the right answer."

Engineering release-engineering

Barton suggested defining minor "point releases" as updates only to the FreeBSD base, not the ports collection or documentation. "The other thing I think has been missing (as several have pointed out in this thread already) is any sort of planning for what should be in the next release," he added. The current release schedule is a hold-over from the trouble-filled days the project experienced in the FreeBSD 5.x era, he said, but "the pendulum has swung *way* too far in the wrong direction, such that we are now afraid to put *any* kind of plan in place for fear that it will cause the release schedule to slip."

Watson concurred, adding that "there's been an over-swing caused by the diagnosis 'it's like herding cats' into 'cats can't be herded, so why try?'" — and asking list members if they could come up with a tentative release schedule.

The debate continues, without a consensus yet on a release schedule. For his part, Kozubik favored immediately declaring FreeBSD 7.x end-of-life, marking 8.x as "legacy," and tagging 9.x as the only production release. Not everyone agrees with Kozubik's five-production-years-plus-five-legacy-years timetable, though there is support for a five-year total support life, and most developers seem to think that making minor releases every four months is doable.

Such a schedule would be roughly in line with what the commercial Linux distributors offer, and as Freddie Cash observed, the maintenance of a production release alongside a legacy release is similar to the process used by Debian. The big question remains whether the project will be able to hash out all of the details in time to commit to a concrete release schedule for 9.x itself — or whether the new release process will have to wait for FreeBSD 10.0.

Comments (13 posted)

Brief items

Distribution quotes of the week

You know what I love?

When reboots don’t go horribly wrong.

-- Seth Vidal

Fellow Anti-mergers, I understand the pain and anguish that systemd has caused you personally, and your families. Your hopes and dreams crushed, by someone with all the charm of a cheese grater across the knuckles. Your remaining life tainted by this putrescent subhuman who forced himself upon your internet.

Despite the privation we have all endured, please find strength to stop this nightmarish ravaging of our once-pure filesystems. For if he’s not stopped now, what hope for /usr/sbin vs /usr/bin?

-- Rusty Russell

At that time, Debian developers were busy breaking unstable as much as they could, as it’s tradition on the weeks following a major release...
-- Jordi Mallach

Comments (none posted)

Debian 6.0.4 released

The Debian Project has released the fourth update of Debian 6.0 (squeeze). "Please note that this update does not constitute a new version of Debian 6.0 but only updates some of the packages included. There is no need to throw away 6.0 CDs or DVDs but only to update via an up-to-date Debian mirror after an installation, to cause any out of date packages to be updated."

Comments (none posted)

Red Hat extends RHEL5 and 6 support

Red Hat has sent out a press release stating, in pure marketing style, that the support period for versions 5 and 6 of Red Hat Enterprise Linux has been extended to ten years. "Many of our customers have come to realize that standardizing on Red Hat Enterprise Linux improves efficiency and helps lower costs. With a ten-year life cycle, customers now have additional choices when planning their Red Hat Enterprise Linux deployment and overall IT strategy. We are pleased that customers are looking far into the future with Red Hat."

Comments (53 posted)

Red Hat Enterprise Linux 4 - 30 day End Of Life Notice

Red Hat Enterprise Linux 4 will reach its end-of-life on February 29, 2012. "For customers who are unable to migrate off Red Hat Enterprise Linux 4 before its end-of-life date and require software maintenance and/or technical support, Red Hat offers an optional support extension called the Extended Life-cycle Support (ELS) Add-On Subscription. The ELS Subscription provides up to three additional years of limited Software Maintenance (Production 3 Phase) for Red Hat Enterprise Linux 4 with unlimited technical support, critical Security Advisories (RHSAs) and selected Urgent Priority Bug Advisories (RHBAs). For more information, contact your Red Hat sales representative or channel partner."

Full Story (comments: none)

Distribution News

Ubuntu family

UDS Sponsorship Now Open

The next Ubuntu Developer Summit (UDS) will take place in Oakland, California May 7-11, 2012. "UDS is the most important event in the Ubuntu calendar. It is where we get together to discuss, design, and plan the next version of Ubuntu; in this case the Ubuntu 12.10 release." Canonical will sponsor the hotel and accommodation for a limited number of people to attend UDS. Applications for sponsorship must be submitted by February 22.

Full Story (comments: none)

Newsletters and articles of interest

Distribution newsletters

Comments (none posted)

The case for the /usr merge

Lennart Poettering has announced the posting of a summary of the motivations for merging several root-level directories into /usr. "A unified filesystem layout (as it results from the /usr merge) is more compatible with UNIX than Linux’ traditional split of /bin vs. /usr/bin. Unixes differ in where individual tools are installed, their locations in many cases are not defined at all and differ in the various Linux distributions. The /usr merge removes this difference in its entirety, and provides full compatibility with the locations of tools of any Unix via the symlink from /bin to /usr/bin."

Comments (267 posted)

Page editor: Rebecca Sobol

Development

Linux screen recording

By Jake Edge
February 1, 2012

Turning an on-screen session into a video is a widely used technique for a number of purposes, but one of the main use cases is education. One can find any number of "screencasts" on YouTube or elsewhere that show how to do particular tasks using tools like Gimp or Blender. Depending on the task, it's generally much easier to show people how to do something than to try to explain the required steps in words. There are a few different Linux tools available for doing screen recording.

Perhaps the best-known choice is recordMyDesktop, which is actually a command-line tool that provides a means to turn a region of the desktop into a video—in Ogg-Theora-Vorbis format. As the man page describes, there are lots of different options that can be used to control the area that will be recorded, the video format and frame rate, sound options, and so on. But, since screen recording is an inherently visual task, there are both GTK and Qt front-ends available.

[recordMyDesktop window]

Both of the front-ends are written in Python using either pyGtk or pyQt4, and both, not surprisingly, provide a graphical interface to the recordMyDesktop command line. As one might guess, the front-ends pretty much look and act the same (the Qt version is seen at right). In the simplest case, one indicates the region of interest on the preview pane; once the region is selected, hitting the "Record" button starts the video capture. Amusingly, because the preview pane shows the entire screen, including the qt-recordMyDesktop window itself, one gets an "infinite" regression in the pane.

One can also choose a particular window, rather than a region, by using the "Select Window" option. But, in either case, it is really the rectangular region of the visible screen that is being recorded. If another window is moved into the region, or a different virtual desktop is chosen, that's what goes into the video. Depending on the use, that may be exactly what is called for, but, for other uses, like recording an online lecture or webcast, it may require extra care. Popping up an email client or web browser to quickly check something may result in unwanted video artifacts.

Audio can also be recorded, either from the audio that accompanies the content or from a local microphone. For the specific recording I was doing (a Go class lecture on the KGS Go Server), it required switching the sound source in recordMyDesktop from "DEFAULT" to "pulse" and using pavucontrol to change the recording source for the application to the "Monitor" of my sound card. It was a little non-obvious, at least to me, but there were multiple solutions that Google led me to, including this.

[recordMyDesktop Advanced window]

The "Advanced" settings allow changing things like the sound source mentioned above, but also frame rate, on-the-fly encoding, mouse cursor settings, and more. As with video in general, files can get fairly large for a 90-minute class, so reducing the size of the recording area and the frame rate will produce smaller video while still capturing the information needed. A 2 frames per second (fps) setting, for example, may be fine for showing human-initiated screen changes and can make a big reduction in size from the default 30fps.

The recording is stopped from a panel applet (or a signal to the recordMyDesktop process), which will then start the encoding process (unless on-the-fly encoding is chosen). The end result is an out.ogv file that can be played in Totem, Dragon Player, or other video players. Sadly, sharing Ogg format video with much of the rest of the world is difficult, so transcoding to WebM (maybe) or H.264 is probably required. Transmageddon or Arista (for those not running the GStreamer bleeding edge) fit the transcoding bill nicely.

Another option for recording is Istanbul, which has a minimal "GUI" via the menu from its panel applet. It lacks many of the advanced features of recordMyDesktop, including the ability to directly record audio. One can use a separate audio recorder and try to synchronize the audio and video (as the Fedora wiki suggests), but that may prove somewhat painful. Another option is Byzanz, which is a command-line-only tool that also lacks audio support (and, seemingly, a home page).

From discussions with other students who use OS X or Windows, the choices there are more varied, but generally not free (in either sense of the term). I found recordMyDesktop to be more than adequate for the task at hand, and it would seem that lots of others are using it as well. If audio is not required (or can be recorded separately, perhaps by narrating while watching the video for example), Istanbul or Byzanz may suit as well. Foolishly, I set out on this task thinking that Linux options might be difficult to find or use—video applications for Linux have had that reputation in the past—but was happily surprised to find that it wasn't the case.

Comments (23 posted)

Brief items

Quotes of the week

I really urge people to think about openness and freedom, two amazingly important concepts, beyond the boundaries of simple software licensing. Licensing is important, and we take it pretty damn seriously .. but we ought to look at bigger picture and really think about how to make our digital tools open and free in all sorts of ways.
-- Aaron Seigo

But whereas I previously held for Java a cordial dislike borne of having only a cursory notion of how it worked, now my dislike for the language can no longer be called at all "cordial", for familiarity has bred contempt.
-- Tom Christiansen

Comments (39 posted)

Firefox 10 released

The Firefox 10 release is out. New features include better add-on compatibility, anti-aliasing for WebGL, CSS 3D transforms, a full-screen API for HTML5 applications, and more; see the release notes for details.

Comments (1 posted)

Freemyipod

The freemyipod project aims to provide tools and documentation that will allow you to jailbreak your iPod so that you can run alternative firmware such as Rockbox. So far only clickwheel devices are supported. (Thanks to Ashley Hull)

Comments (none posted)

Git 1.7.9 released

The Git 1.7.9 release includes a long list of new features, including localization support with gettext, the ability to work with platform-level key management mechanisms, the ability to pull from a signed tag (and keep the relevant metadata), a new option to create GPG-signed commits, a side-by-side diff display in gitweb, and more.

Full Story (comments: none)

ImageZero for lossless photo compression

Christoph Feck has announced the first release of ImageZero, a lossless photo compression tool. "Being twice as fast as PNG when decompressing (and more than 20 times faster when compressing) it achieves compression ratios that are near or better than PNG for natural photos, sometimes even better than JPEG-LS for very high quality photos." The code is available on gitorious. (Thanks to Paul Wise).

Comments (25 posted)

Mercurial 2.1 released

Version 2.1 of the Mercurial source code management system is out. The big feature this time around is phases: "Phases improve safety of history rewriting and provide control over changesets exchanged among different repositories. Phases are intended to transparently 'just work' for most users." There are also some bookmark improvements, better copy detection, and more; details are in the WhatsNew page.

Full Story (comments: 1)

ownCloud 3 released

Version 3 of the ownCloud personal cloud system has been announced. New features include a browser-based text editor, an integrated PDF viewer, a photo gallery application, an improved calendar application, and, inevitably, an application store. "The browser based text editor supports 35 programming languages for syntax highlighting, keyboard shortcuts, drag and drop text, automatic indent and outdent, unstructured/user code folding and Live syntax checker (for JavaScript, Coffee and CSS). The editor is based on the ACE JavaScript Editor. The editor supports basic text files. Editing more advanced formats like doc(x) and ODT is planned for future releases." LWN looked at ownCloud in early January.

Comments (none posted)

Newsletters and articles

Development newsletters from the last week

Comments (none posted)

Almost There - PyPy's ARM Backend

The PyPy Status Blog has an update on the status of the PyPy port to the ARM architecture. "The current results on ARM, as shown in the graph below, show that the JIT currently gives a speedup of about 3.5 times compared to CPython on ARM. The benchmarks were run on the before mentioned BeagleBoard-xM with a 1GHz ARM Cortex-A8 processor and 512MB of memory. The operating system on the board is Ubuntu 11.04 for ARM."

Comments (none posted)

Page editor: Jonathan Corbet

Announcements

Brief items

CERN OHL v.1.2 call for comments

The CERN Open Hardware License version 1.2 is available for comments and feedback. "The main changes were introduced in article 3 of the licence. One point in particular is still under discussion and concerns article 3.3(e) – attempting to send modifications to the Licensors whose design was modified and those who requested it. On the one hand questions of practicalities arise – does every minor modification/debugging need to be sent to everyone? – while on the other it may be perceived as a fair return, for contributors, to be notified of modifications that were made. Your input and suggestions on this point are most welcome!" (Thanks to Paul Wise)

Comments (9 posted)

The Document Foundation will be based in Berlin

The Document Foundation has announced that its long-awaited legal entity will be based in Berlin. "'After many months of work in close cooperation with the authorities, we were able to keep the spirit of the community bylaws, and incorporate them into legally binding statutes, that ensure the promises that TDF has made in its manifesto', says Michael (Mike) Schinagl, a Berlin-based lawyer and contributor to various free software projects, who has been driving the legal aspects of the foundation set-up from the very beginning."

Full Story (comments: none)

GNU Project renews focus on free software in education

The GNU Project has announced the relaunch of its worldwide volunteer-led effort to bring free software to educational institutions of all levels. "The newly formed GNU Education Team is being led by Dora Scilipoti, an Italian free software activist and teacher. Under her leadership, the Team has developed a list of specific goals to guide their work..."

Full Story (comments: none)

LPI joins Linux Foundation

The Linux Professional Institute (LPI) has become a member of the Linux Foundation. ""LPI represents many Linux professionals from around the globe and we have been promoting the professional use of Linux and Open Source since 1999. Our membership in The Linux Foundation is a natural partnership for us given our long-standing history of industry and community cooperation. We look forward to working with The Linux Foundation to enhance the open source ecosystem that supports innovation and evolution in this dynamic industry," said Jim Lacey, president and CEO of LPI."

Full Story (comments: none)

Articles of interest

Opponents protest signing of ACTA without adequate debate (ars technica)

ACTA (Anti-Counterfeiting Trade Agreement) was called "more dangerous than SOPA" by US Sen. Ron Wyden (D-OR), as ars technica reports. "Kader Arif, a French member of the European Parliament from the Socialist Party, had been assigned to be a rapporteur on ACTA, meaning that he was asked to study the issue and deliver a report on the subject. But he resigned in protest on Thursday. 'I want to denounce in the strongest possible manner the entire process that led to the signature of this agreement,' he said, according to one translation. 'No inclusion of civil society organisations, a lack of transparency from the start of the negotiations, repeated postponing of the signature of the text without an explanation being ever given, exclusion of the EU Parliament's demands that were expressed on several occasions in our assembly.'"

Comments (25 posted)

FOSDEM speaker interviews

The last set of interviews with FOSDEM speakers has been released. This list includes Juan David Gonzalez Cobas and Javier Serrano (open hardware), Bryan Østergaard (community management), Ben Klang (Adhearsion), Soren Hansen (monitoring), Kristian Høgsberg (Wayland), Anil Madhavapeddy (UNIX I/O), Carl-Daniel Hailfinger (coreboot), and Claire Corgnou (average Jane and Joe).

Comments (none posted)

Garrett: The ongoing fight against GPL enforcement

Matthew Garrett has posted a complaint about an attempt to create a permissively-licensed busybox and calls for kernel developers to be more aggressive in enforcing their copyrights. "The real problem here is that the [Software Freedom Conservancy's] reliance on Busybox means that they're only able to target infringers who use that Busybox code. No significant kernel copyright holders have so far offered to allow the SFC to enforce their copyrights, with the result that enforcement action will grind to a halt as vendors move over to this Busybox replacement. So, if you hold copyright over any part of the Linux kernel, I'd urge you to get in touch with them. The alternative is a strangely ironic world where Sony are simultaneously funding lobbying for copyright enforcement against individuals and tools to help large corporations infringe at will."

Comments (221 posted)

Kuhn: Some Thoughts on Conservancy's GPL Enforcement

Bradley Kuhn has posted a lengthy explanation of the Software Freedom Conservancy's GPL enforcement activities and the demands they make. "I started using this request regularly around 2002 because violators express a concern that, if they came into compliance due to my efforts, what was to stop others from coming to complain, in sequence, and wasting their time? I suggested that if they came into compliance all at once, on all FLOSS licenses involved, it would be easy for me to be on their side, should someone else complain. Namely, I'd come to their defense and say: 'Yes, they were out of compliance, but we've checked everything and they're now in compliance throughout this product. Those who are now complaining are being unfair, since — while this violator had trouble initially — their compliance with all FLOSS licenses is now adequate'."

Comments (1 posted)

Seigo: The reveal

KDE developer Aaron Seigo writes about the "Spark", an upcoming unlocked €200 tablet that runs the KDE Plasma Active system. "This is more than just another piece of hardware on the market, though. This is a unique opportunity for Free software. Finally we have a device coming to market on our terms. It has been designed by and is usable by us on our terms. We are not waiting for some big company to give us what we desire, we're going out there and making it happen together. Just as important: the proceeds will be helping fuel the efforts that make this all possible."

Comments (14 posted)

Calls for Presentations

“CeBIT for all!”: Submit your entries now

CeBIT 2012 takes place March 6-10 in Hannover, Germany. Univention is organizing an Open Source stage at this year’s CeBIT in Hall 2. "This stage will play host to an extensive stage programme on all trade fair days with contributions from important Open Source projects and companies, discussions and expert interviews. Under the motto “CeBIT for all!” Univention is also offering projects and small companies the chance to apply for a free presentation slot. The call for papers will run until the 25th February 2012."

Full Story (comments: none)

Ohio LinuxFest Opens 10th Call for Talks

Ohio Linuxfest 2012 will take place September 28-30 in Columbus, Ohio. The call for talks closes July 6, 2012.

Full Story (comments: none)

Upcoming Events

XFS Developers meeting in San Francisco

There will be a meeting of XFS developers on April 3, 2012 in San Francisco, California. The meeting will take place during the 6th Annual Linux Foundation Collaboration Summit.

Full Story (comments: none)

Libre Graphics Meeting 2012

The 2012 Libre Graphics Meeting (LGM) will take place May 2-5 in Vienna, Austria. "LGM gives software developers, artists, designers and other graphics professionals the opportunity to collaborate and learn from each other."

Comments (none posted)

Events: February 2, 2012 to April 2, 2012

The following event listing is taken from the LWN.net Calendar.

  • January 31-February 2: Ubuntu Developer Week (#ubuntu-classroom, irc.freenode.net)

  • February 4-5: Free and Open Source Developers Meeting (Brussels, Belgium)

  • February 6-10: Linux on ARM: Linaro Connect Q1.12 (San Francisco, CA, USA)

  • February 7-8: Open Source Now 2012 (Geneva, Switzerland)

  • February 10-12: Linux Vacation / Eastern Europe Winter session 2012 (Minsk, Belarus)

  • February 10-12: Skolelinux/Debian Edu developer gathering (Oslo, Norway)

  • February 13-14: Android Builder's Summit (Redwood Shores, CA, USA)

  • February 15-17: 2012 Embedded Linux Conference (Redwood Shores, CA, USA)

  • February 16-17: Embedded Technology Conference 2012 (San José, Costa Rica)

  • February 17-18: Red Hat, Fedora, JBoss Developer Conference (Brno, Czech Republic)

  • February 24-25: PHP UK Conference 2012 (London, UK)

  • February 27-March 2: ConFoo Web Techno Conference 2012 (Montreal, Canada)

  • February 28: Israeli Perl Workshop 2012 (Ramat Gan, Israel)

  • March 2-4: Debian BSP in Cambridge (Cambridge, UK)

  • March 2-4: BSP2012 - Moenchengladbach (Mönchengladbach, Germany)

  • March 5-7: 14. German Perl Workshop (Erlangen, Germany)

  • March 6-10: CeBIT 2012 (Hannover, Germany)

  • March 7-15: PyCon 2012 (Santa Clara, CA, USA)

  • March 10-11: Open Source Days 2012 (Copenhagen, Denmark)

  • March 10-11: Debian BSP in Perth (Perth, Australia)

  • March 16-17: Clojure/West (San Jose, CA, USA)

  • March 17-18: Chemnitz Linux Days (Chemnitz, Germany)

  • March 23-24: Cascadia IT Conference (LOPSA regional conference) (Seattle, WA, USA)

  • March 24-25: LibrePlanet 2012 (Boston, MA, USA)

  • March 26-April 1: Wireless Battle of the Mesh (V5) (Athens, Greece)

  • March 26-29: EclipseCon 2012 (Washington D.C., USA)

  • March 28-29: Palmetto Open Source Software Conference 2012 (Columbia, South Carolina, USA)

  • March 28: PGDay Austin 2012 (Austin, TX, USA)

  • March 29: Program your own open source system-on-a-chip (OpenRISC) (London, UK)

  • March 30: PGDay DC 2012 (Sterling, VA, USA)

If your event does not appear here, please tell us about it.

Page editor: Rebecca Sobol


Copyright © 2012, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds