
LWN.net Weekly Edition for February 3, 2011

LCA 2011

By Jonathan Corbet
February 1, 2011
Our community has a number of volunteer-organized events; some of them are rather more organized than others. Anybody who thinks that volunteers cannot produce a professional-quality (or better) event, though, has never been to linux.conf.au. LCA is not just run by volunteers; it is organized by a completely different group of volunteers every year, but it still comes off reliably, every time, as a top-quality conference. Each year's organizers clearly deserve a lot of credit, but there is also a lot of value in the LCA "ghosts" institution, whereby organizers from previous years give advice and keep a watch for red flags as the planning and preparations go forward. Without the ghosts, LCA would not be what it is.

Now imagine that you have been planning an event for over a year. Two weeks before the conference, venues, equipment, accommodations, transportation, social events, and more are all in place. Then the host city is hit by catastrophic floods, the venues for both the conference and the social events are taken out of commission, and the routers for the wireless network are soaking at the wrong end of a flooded warehouse. Even if a new venue can be found, it will no longer be within walking distance of the accommodations, so transportation must be arranged on short notice.

That is the point where the ghosts run out of useful experience to share. It is also the point where an insufficiently determined group would simply give up.

The organizers of LCA 2011, held in Brisbane, would appear to be a determined bunch indeed. They found a new venue, reprinted the conference maps, found new locations for the social events, swam through the warehouse to recover the routers, arranged new transportation for the attendees, and, beyond any doubt, did a thousand things that nobody else saw. The end result was a conference which, to anyone who did not know better, would have seemed to be exactly what had been planned all along. LCA 2011 didn't just work - it worked just as well as its predecessors. One easily runs out of superlatives when describing the job this group did; your editor only hopes that, after they have slept for a solid week or so, they will arrange a major party to celebrate what they accomplished.

There were a number of interesting sessions at this conference, many of which have been covered in these pages. Here, your editor will summarize some of the talks which, for various reasons (including simple lack of time), were not discussed in a separate article.

Andrew 'Tridge' Tridgell has developed a reputation for energetic LCA talks focused on the simple joy of hacking; his LCA 2011 talk did not disappoint. Tridge, it seems, has become something of a coffee snob, so he has taken to roasting his own beans. That turns out to be an attention-intensive process which takes too much time away from the hacking that coffee is meant to support, so he built a Linux-powered coffee roaster out of an old bread maker, a temperature sensor, a heat gun, and a hand-made circuit for power regulation.

While demonstrating the device and hoping the fire alarms did not sound, he went into the specifics of coffee roasting and the details of how one uses LD_PRELOAD to reverse engineer a Windows temperature driver running under VirtualBox on Linux. A good time was clearly had by all. Bdale Garbee's session on the creation of a large, Linux-powered milling machine had a similar feel. Both talks will be well worth watching once the videos become available.
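For those unfamiliar with the LD_PRELOAD trick Tridge used: the loader can be told to load a small shared library first, letting it interpose its own versions of C library functions so that a driver's I/O can be watched without changing the target program at all. A minimal sketch of such an interposer (illustrative only, and in no way Tridge's actual code) might look like this:

    /*
     * Log all read() traffic so a proprietary driver's serial protocol
     * can be observed.  Build and run with:
     *   gcc -shared -fPIC -o snoop.so snoop.c -ldl
     *   LD_PRELOAD=./snoop.so some-program
     */
    #define _GNU_SOURCE
    #include <dlfcn.h>
    #include <stdio.h>
    #include <unistd.h>

    ssize_t read(int fd, void *buf, size_t count)
    {
        static ssize_t (*real_read)(int, void *, size_t);
        ssize_t n;

        if (!real_read)  /* look up the libc version on first call */
            real_read = dlsym(RTLD_NEXT, "read");

        n = real_read(fd, buf, count);
        if (n > 0)
            fprintf(stderr, "read(fd=%d): %zd bytes\n", fd, n);
        return n;
    }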

Daniel Bentley and Daniel Nadasi talked about the challenges that go with opening up code at Google. Internal programs tend to be heavily used and have a lot of internal contributors; these people often have a lot of worries when they are approached about releasing their code to the world. They have to be sold on the business case for opening the code, and they have to be talked past worries that their code is too ugly to see the light of day. There are also some real concerns that opening code might reveal internal information and that working with the community might slow the project down. Changing source control and build systems can also be a challenge; apparently few people at Google still remember how to write a classic makefile.

An important question is: where is the home for the code's further development? If it's developed internally, the internal folks are happy because things are working as they were before. Outsiders, who see a series of code dumps, may be less impressed. If development happens publicly then outside developers will be happier, but it can be harder for internal developers. An added factor is that any project, no matter how successfully it is opened, will be dominated by internal developers during the first part of its open existence; that dominance tends to favor the internal development model, but that, in turn, can slow (or prevent) the development of a community around the code.

Daniel and Daniel's response to this problem is a tool called "make open easy," or "moe." With moe, internal developers can mark sections of code which should not be visible to the outside world; markings can take the form of function annotations or preprocessor-like directives. The tool can then extract the code from the internal repository, edit it according to the directives, and load it into a public repository. Importantly, it can also move code in the other direction, merging external changes while retaining the scrubbing directives. Moe makes life easier on both sides of the wall, and is in active use with a number of projects; it can be obtained from code.google.com.
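The talk did not show moe's actual directive syntax, so the marker names in this sketch are invented, but an annotated internal source file might look something like this, with everything between the markers scrubbed on the way to the public repository:

    /* Public code passes through the scrubber untouched. */
    int checksum(const char *buf, int len)
    {
        int sum = 0;
        while (len-- > 0)
            sum += *buf++;
        return sum;
    }

    /* MOE:begin_strip
     * Everything between the markers is removed before publication and
     * re-inserted when external changes are merged back in.
     */
    static int query_internal_quota_server(void)
    {
        return 0;   /* stand-in for internal-only logic */
    }
    /* MOE:end_strip */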

Carl Worth gave a well-attended session on the notmuch mail system. Notmuch has been reviewed here in the past; your editor was mostly interested in the current and future state of this search-oriented mail tool. Recent changes include the ability to search on mail folder names - useful for migrating from a folder-based mail client. There is also synchronization with maildir flags, which is helpful for people using both notmuch and a more traditional client. There are now a few supported output styles for search operations, which should make it easier to create a web-based notmuch front-end, among other things.

In the near future, notmuch users should expect the ability to search on arbitrary mail headers and some relief from the rather inflexible date format which must be used now. Further ahead, there will be more work toward synchronization with remote mail spools; the hard part here is moving tags back and forth. Options for a solution include the addition of a special header to the messages themselves (but that could be problematic if the header leaks in a forwarded message, revealing to all the tags one uses for mail from the special people in one's life), the use of custom maildir flags, or the addition of some sort of journal replay mechanism. There is also talk of storing mail in git packs and using the git protocol to move messages (and tags) around. Even further ahead might be a notmuch backend for mutt.
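For the curious, a few searches against the notmuch command line of this era might look like the following (the exact syntax shown here is a best guess and may differ in detail); the last one shows the inflexible date format in action, as a range of raw Unix timestamps:

    $ notmuch search folder:lists/lkml and tag:unread
    $ notmuch tag +patch -- subject:PATCH and tag:inbox
    $ notmuch search 1296518400..1296604800 and from:tridge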

Meanwhile, the project has a number of interested users but, by Carl's admission, it could benefit from a more present maintainer.

Kirk McKusick is one of the creators of BSD Unix. His fast-paced session in an overflowing room covered much of the history of the Berkeley Software Distribution, the ups and downs of hacking with Bill Joy, the AT&T lawsuit, his refusal to work for the just-starting Sun Microsystems (because Apollo had the workstation market completely sewed up), and much more. The talk should eventually appear with the rest of the conference videos; there is also apparently a DVD available on Kirk's web page for those who want more.

There were far more interesting talks than your editor could possibly attend, much less write up. The good news is that the conference organizers are making the videos available quickly; they can be found (in several formats) on this blip.tv page, but this wiki page has them in a much better-organized fashion.

In summary: LCA 2011 was another great success; it would have been judged favorably against its predecessors even in the absence of natural disasters. LCA 2012 will, perhaps surprisingly, be held in Ballarat, a small city outside Melbourne. The Ballarat organizers have a hard act to follow, but history suggests they will be up to the task.

Comments (11 posted)

Debugging conference anti-harassment policies

By Jonathan Corbet
January 31, 2011
Linux.conf.au 2011 distinguished itself in a number of ways, one of which was the uniformly interesting and thought-provoking nature of its keynote talks, two of which have already been covered on LWN. Mark Pesce's keynote was no exception, but this talk also stood out as the only one at the conference to trigger the newly-adopted anti-harassment policy, leading to apologies from the organizers and the speaker. This action was controversial on all fronts; perhaps the only clear conclusion is that we have not yet come to a real consensus on what harassment means or on the best way to prevent it.

The source: As of this writing, the talk is not available on the LCA 2011 videos page. Mark has posted the text of the talk and the slides [ODP] for the curious.
The talk itself was about freedom and privacy on the net. There was much discussion of the evils of sites like Facebook and a bit of talk about the Plexus project which is trying to create alternatives which are more free. To your editor, the most chilling point was that the net itself is not free; the crusade against WikiLeaks and the Internet shutdown in Egypt were given as examples. The net, he said, functions at the whim of government; we need to build alternative transports - using smoke signals, if necessary - to ensure our right to communicate. We are at war for our freedom, he said, and we need to start approaching the problem that way.

The message clearly resonated with many people in the audience, but the presentation of that message was less than pleasing to many. The speaker aimed for a high level of drama, made heavy use of profanity, and put up some slides that struck some attendees as overtly sexual in nature. In your editor's opinion, the presentation style, which was clearly intended to shock and disturb, detracted from the message which was being delivered. It also ensured that much of the subsequent talk would be about the slides and the language, and not about what was really said. Your editor, who, at the outset, wondered if he could learn something from the speaker to spice up his own talks (which are notably less dramatic), concluded at the end that there was indeed something to learn, but the lessons were all negative.

A number of attendees complained, and the organizers, in response, apologized (to applause) at the closing session. Mark later posted an apology of his own. It seemed like a reasonable handling of the situation, and the discussion could have stopped there - but it didn't.

The lca-chat mailing list, which had mostly occupied itself with (1) making Brisbane's public transportation system seem much more complicated than it really is and (2) discussing the lack of toilet paper in one of the lodging choices, hosted several threads on whether the response to the talk was right. Interested parties are encouraged to read through the threads - which remained civil throughout - for the full discussion. But there are a few things which can be summarized:

  • Some attendees were upset by the talk and fully supported the apology.

  • Some participants, while supporting the posted anti-harassment policy, felt that the talk did not violate that policy.

  • Others went further, saying that the language and imagery used were effective and necessary for the talk to attain its objective of making attendees uncomfortable with the current state of affairs.

  • Still others objected to the entire conversation, claiming that a discussion of whether the policy applied made them feel unsafe and asking people to stop.

Your editor disagrees with the last group and feels that the discussion is absolutely necessary. We are partway through a process - likely to take years - aimed at making our community and its gatherings more welcoming for all those we would like to have attend. LCA 2011 adopted a new style of policy on harassment which had not been used before, and Mark Pesce's talk was the first time it was invoked. The idea that we have everything right and that no further discussion is required is, frankly, laughable. Some debugging will certainly be necessary - once we are sure we have the core design right.

While evaluating the design and pondering debugging, there are a couple of viewpoints from LCA organizers that warrant reading in full. The first is from LCA 2011 organizer Russell Stuart, who opposed the policy from the outset - though, having lost that battle, he argued for apologizing when the policy was violated. He says:

One of the roles of LCA organisers is to bring popular, enlightening and if we get very lucky even inspiring talks. By two measures Mark Pesce's talk was one of those. It received one of the longest, if not the longest, acclamations of any talk at LCA 2011. And if the chatter on our lists is any guide, it caused more people to stop, think and act than any other talk. And yet we have a small minority of people who evidently take offence at images and words that would be perfectly acceptable on Australian broadcast TV, and are now suggesting the vast bulk of the LCA attendees who enjoyed the talk should not have been allowed to see it because they object to it. And they got very close to achieving just that.

Russell fears that the policy heads toward outright censorship and should not be used by other conferences until it has been "substantially reworked." He found agreement from Susanne Ruthven, one of the lead organizers of LCA 2010 and the author of that conference's anti-harassment policy. That policy was aimed at preventing broadly-described "harassment or discrimination" and, seemingly, would not have been invoked for this talk:

As organisers of LCA2010, Andrew and I have discussed this current situation and think some of Mark's slides could be inappropriate and considered bad taste, but they have certainly achieved their purpose of making us all sit up and think, and more importantly, to question. In our view, Mark's talk was not discriminatory or harassment. It obviously offended some people, but then he is entitled to shock, horrify and offend under his right to freedom of expression (as long as his actions aren't breaking any laws, like discrimination laws etc).

Clearly there is a balance to be found here; outright harassment is not a freedom of speech issue, but the desire to create a more welcoming environment in general will almost certainly require curtailing certain types of speech. Those who see speech freedom as fundamental will resist such moves. Those who have suffered assault, or who simply do not want to circulate in a highly sexualized environment, will push in the other direction. Conference organizers - and speakers - may find themselves caught in the middle.

The problems addressed by anti-harassment policies are real. Conference attendees have had to put up with some horrifying experiences which - hopefully! - do not reflect what our community is about. Practices like the employment of booth babes or the use of women as sexually-charged attention magnets on slides do not create an environment which is conducive to the acceptance of women as equal participants. We absolutely need to clean up our act. But doing so will be an iterative process which must also respect other, equally fundamental freedoms. It's a design and debugging problem, and we are far from the final release on this bit of code.

Comments (181 posted)

LCA: Lessons from 30 years of Sendmail

By Jonathan Corbet
February 2, 2011
The Sendmail mail transfer agent tends to be one of those programs that one either loves or hates. Both its supporters and its detractors will agree, though, that Sendmail played a crucial role in the development of electronic mail before, during, and after the explosion of the Internet. Sendmail creator Eric Allman took a trip to Brisbane to talk to the LCA 2011 audience about the history of this project. Sendmail is, he said, 30 years old now; in those three decades it has survived without corporate support, changed the world, and thrived in a world which was changing rapidly around it.

The history

Sendmail had its start at the University of California, Berkeley, in 1980; it was initially something Eric did while he was supposed to be working on the Ingres relational database management system. In those days, the Computer Science department had a dozen machines, but the main system was "Ernie CoVAX," which was accessed via ASCII terminals. There was a limited number of ports, so users had to connect via a patch panel in the mail room; contention for available ports was often intense.

Things got more interesting when the Ingres project got an ARPAnet connection; a single PDP-11 machine, with two ports, was the only way to access the net at that time. There was no way the entire department was going to share those two ports without somebody getting hurt, so another solution was required. Eric looked at the problem, concluded that what everybody really wanted was the ability to send mail through the gateway machine, and decided that he would provide a way to access email from the other machines on campus. From this beginning delivermail was born.

There was a set of design principles that Eric adopted at that time. There was only one of him, so programming time was a truly finite resource. Redesigning user agents and mail stores was out of the question. Delivermail had to adapt to the world around it, not the other way around. The resulting program worked, but was not without its problems. The compiled-in configuration lacked flexibility, there was no address translation as messages moved between networks, and the parsing was simple and opaque. But it succeeded in moving mail around and giving the entire department access to the net.

Then the department got the BSD contract. Bill Joy needed a mail transfer agent to connect to the network, so he talked Eric into taking on the job. After all, how hard could it be? Among other things, the new MTA needed to support the SMTP mail protocol - which wasn't specified yet. Supporting SMTP also forced the addition of a mail queue, a job which turned out to be much harder than it looked. Eric hacked away, and Sendmail was shipped with 4.1BSD in 1982 with support for SMTP, header rewriting, queueing, and runtime configuration.

After that, Eric left Berkeley for a "lucrative" (heavy on the quotes) career in industry. Sendmail, meanwhile, was picked up by the Unix vendors. The Unix wars were in full force at that time; the inevitable result was a proliferation of different versions of Sendmail. The program became balkanized and incompatible across systems.

Eric returned to Berkeley in 1989 and started hacking on Sendmail again; the immediate need was support for the ".cs" subdomain at the university. That work snowballed into a major rewrite culminating in Sendmail 8; this version integrated a great deal of code from both the industry and the community. It added support for ESMTP, a number of new protocols, delivery status notifications, LDAP integration, eight-bit mail, and a new configuration package. Uptake increased after the Sendmail 8 release as a result of these features, but also as the result of the publication of the O'Reilly "bat" book. Documentation, it turns out, really matters.

Sendmail Inc. was created in 1998 with the fantasy that it would let Eric get back to coding. In reality, starting a company is more about marketing, sales, and money than about technology - a lesson many of us have learned. It was one of the first companies trying to mix open source and proprietary offerings; in those days, the prevailing wisdom was that a company needed proprietary lock-in to have any chance of success. Over time, though, functionality migrated to the free version; thus Sendmail gained support for encryption, authentication, milters (mail filters), virtual hosting, spam filtering, and more. And that's where things stand today.

Lessons learned

As one might expect, 30 years of experience have led to a number of lessons worth passing on. Eric shared a few of them.

One is that requirements change all the time. The original delivermail program had reliability as its primary focus - few things are more hazardous to one's academic career than losing a professor's grant proposal. Over time, the requirements shifted toward functionality and performance; Sendmail had to scale up in speed and features as the Internet took off. Then users were demanding protection from spam and malware; that shifted Sendmail development toward keeping mail out. We have, Eric noted, gone full circle toward unreliable mail service. After that came requirements around legal and regulatory compliance - that is where a great deal of Sendmail Inc.'s business lies. There is currently an increasing focus on controlling costs, mobility, and social network integration. Without the ability to adapt to meet these shifting requirements, Sendmail would not have thrived through all these years.

With regard to Sendmail's design decisions, Eric said that some turned out to be right, some were wrong, and some were right at the time but are wrong now. One criticism that has been made is that Sendmail is an overly general solution; it can route and rewrite messages in ways which are generally unneeded in these days of Internet monoculture. Eric defended that generality by saying that the world was in great flux when Sendmail was designed; there was no way to really know how things were going to turn out. And, he said, he would do it again: "the world is still ugly."

Rewriting rules for addresses are a part of that generality; even at the time, they seemed like overkill, but he couldn't come up with anything better. It was, he said, probably the right thing to do. That said, the decision to use tabs as active characters was the stupidest thing he has ever done. That's how makefiles did it, and it seemed cool at the time. As a whole, he said, the concept was right, but the syntax and flow control could have been a lot better. Even so, he's glad he did matching based on tokens; basing Sendmail configuration around regular expressions would have been far worse.
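For readers who have never opened a sendmail.cf file, a rewriting rule has the general shape shown below: a pattern of tokens on the left, then the fateful active tab character, then a replacement. The specific rule here is a representative invention, not a quote from the talk; $* and $+ match sequences of tokens, and $2 copies whatever the second pattern matched:

    # The separators between the three fields below must be tabs:
    R$* < $+ > $*		$2		strip the outermost angle brackets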

If he were doing the configuration system now, it would look a lot more like the Apache scheme.

The message munging feature was needed for the rewriting of headers; it facilitated interoperability between different networks. It is still used a lot, he said, though it's arguably not necessary. Sendmail could benefit from a pass-through mode which shorts out the message munging, but that leaves open the question of what should be done with non-compliant messages. Should they be fixed, rejected, or just dropped? There is, he said, no obvious answer.

The embedding of SMTP and queueing in the mail daemon was the right thing to do; he does not agree with the Postfix approach of proliferating lots of small daemons. The queue structure itself involves two files for every message: one with the envelope, and one with the body. That forces the system to scan large numbers of small files on a busy system, which is not always optimal. At the time it was the right way to go; now he would probably use some sort of database for the envelopes. The decision to use plain text for all internal files was right, though; it makes debugging much easier.
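Concretely, a busy queue directory contains a pair of files per message; the file names here are illustrative:

    /var/spool/mqueue/qfp1B2C3D4E5   # queue control file: the envelope
    /var/spool/mqueue/dfp1B2C3D4E5   # data file: the message body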

With regard to the use of the m4 macro preprocessor for configuration, Eric admitted that the syntax is painful. But he needed a macro facility and didn't want to reinvent the wheel. The "damned dnl lines" for comments were a mistake, though, and completely unnecessary. In summary, some sort of tool was needed; m4 might not have been the best choice, but it's not clear what would have been.
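A representative .mc fragment shows both the macro facility and the dnl annoyance; the particular macros chosen here are just common examples, not anything from the talk:

    dnl "dnl" deletes everything through the end of the line - it is the
    dnl only way to write a comment without leaving a blank line behind.
    divert(-1)
    # Text in this diverted region is discarded as well.
    divert(0)dnl
    OSTYPE(`linux')dnl
    FEATURE(`access_db')dnl
    MAILER(`smtp')dnl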

With regard to extending or changing features: Sendmail has tended toward extending features and maintaining compatibility, and that has not always been the right thing to do. The hostname masquerading facility was one example; that feature was simply done wrong the first time around. Rather than fixing it, though, Eric papered over the problems with new features. It would have been better to inflict some short-term pain on users, perhaps aided by a migration tool, and be done with it. The unwillingness to replace mistaken features has a lot to do with why Sendmail is difficult to configure.

Sendmail goes out of its way to accept and fix bogus input; that was in compliance with the robustness principle ("be conservative in what you send but liberal in what you accept") that was widely accepted at the time. It increases interoperability, but at the cost of allowing broken software to persist indefinitely, leading to large costs down the road. Nonetheless, it was the right idea at the time for the simple reason that everything was broken then. But he should have tightened things up later on.

What would he have done differently? At the top of the list is trying to fix problems as soon as possible. These include tabs in the configuration file and the V7 mailbox format. He's really tired of seeing ">From" in messages; he said he could have fixed it and expressed his apologies for not having taken the opportunity. He would make more use of modern tools; Sendmail has its own build script, which is not something he would do today. He would use more privilege separation, though he would not go as far as Postfix. He would have made a proper string abstraction; strings are by far the weakest part of the C language.

There are also a number of things he would do the same, starting with the use of C as the implementation language. It is, he said, a dangerous language, but the programmer always knows what is going on. Object-oriented programming, he said, is a mistake; it hides too much. Beyond that, he would continue to do things in small chunks. The creation of syslog (initially as a way of getting debugging information out) was obviously the right thing to do; he was surprised that there was no centralized way of dealing with logging data on Unix systems. He would still implement rewriting rules, albeit with a different syntax. And he would continue not to rely too heavily on outside tools. There is a cost to adding dependencies on tools; sometimes it's better to just build what you need. There are, he said, projects using lex when all they really need is strtok().
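The lex point is easily illustrated; splitting a line into whitespace-separated tokens needs nothing more than:

    #include <stdio.h>
    #include <string.h>

    /* Tokenize a line with strtok() - no generated scanner required. */
    int main(void)
    {
        char line[] = "deliver to: postmaster root";

        for (char *tok = strtok(line, " \t"); tok; tok = strtok(NULL, " \t"))
            printf("token: %s\n", tok);
        return 0;
    }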

There were a number of "takeaways" to summarize the talk:

  • The KISS (keep it simple, stupid) principle works.
  • If you don't know what you are doing, advance designs will not help.
  • The world is messy; just plan on it.
  • Flexibility trumps performance when the world changes every day.
  • Fix things early; your installed base will only get larger if you succeed, and the pain of not fixing things will only get worse.
  • Use plain text for internal files and protocols.
  • Good documentation is the key to broad acceptance; most projects, he said, have not yet figured this out.

The talk was evidently based on a chapter from an upcoming book on the architecture of open-source applications.

One member of the audience asked Eric which MTA he would recommend for new installations today. His possibly surprising answer was Postfix. He talked a lot with Postfix author Wietse Venema during its creation, and was impressed. Postfix is, he said, nice work, even if he doesn't agree with all of the design decisions that were made.

Comments (107 posted)

Page editor: Jonathan Corbet

Security

The end of OpenID?

By Jake Edge
February 2, 2011

Last week's Security page had a quote from 37signals about its decision to drop support for OpenID. Since then there have been several postings that purport to explain the problems with OpenID and why it never gained much traction. One of the better analyses comes from Wired's webmonkey blog, which calls OpenID "The Web's Most Successful Failure". So, why hasn't OpenID taken the world by storm?

OpenID set out to solve, or help solve, the "single sign-on" (SSO) problem, so that users could have a single identity that they used with multiple web sites. But OpenID is more than that, because it allows users, rather than web sites, to decide how much personal information needs to be shared. It is this user-centric nature of OpenID that may be leading to its downfall.

We have looked at OpenID several times over the years, including an overview in 2006, and a look at OpenID 2.0 in 2007. By the time we looked at the OpenID Connect proposal back in June, the problems with users being able to control the amount of information provided to web sites were becoming evident. Those problems were, in fact, a major reason that OpenID Connect was proposed.

While OpenID is by no means perfect, the resistance to its adoption is not necessarily completely technical. Other OAuth-based schemes have become much more popular at least in part because web site operators get access to much more personal information by default than they get when users log in with OpenID. Even site-specific registration tends to extract more information (email address, full name, and so on). Because that kind of information is valuable to web site operators—and willingly given up by the vast majority of users—OpenID users are seen to be "less valuable", as OpenID Connect developer Chris Messina pointed out. The Wired blog post put it this way:

Web publishers never warmed to OpenID since it allows a user to log in to a website and leave a comment on a story, a blog post or a photo while essentially remaining anonymous to the publisher. That anonymous aspect has made OpenID less attractive to publishers who want to collect more data about their readers or interact with them — whether that means following them on Twitter, connecting with them on Facebook or sending them e-mail.

But one of the main alternatives to OpenID—one that has seen much more adoption—is Facebook Connect (though the "Connect" part of the name has largely been dropped). As that name would imply, it is run by Facebook, which is an organization that is not noted for its interest in preserving user privacy. One hopes that the pervasiveness of Facebook sign-ons will have some boundaries. While it does solve the SSO problem for Facebook users, in a fairly uncomplicated way, it would be horrifying to be greeted by your bank's log-in screen asking for your Facebook ID.

OpenID suffers from some design flaws, using a URL as the OpenID identifier being one of the most prominent, but its Achilles heel is that it is complicated for users, beyond just remembering their OpenID URL(s). An additional problem is that some of the larger web services were only interested in being OpenID providers (i.e. using their URLs to log in elsewhere), and weren't particularly interested in being "relying parties" (i.e. taking OpenID URLs from elsewhere to allow users to log in). This asymmetric "support" for OpenID further muddied the waters for users.

At this point, though, we may well have seen the crest of the OpenID wave. Wired posits that it will be incorporated into Mozilla's (and other browser makers') efforts to move identity management into the browser itself. That would allow the browser to route around the individual web site log-in screens and authenticate the user behind the scenes, so OpenID could be used in a far less complicated manner.

In the end, OpenID is targeted at users who value their privacy and want to take control of their internet identities—two traits that seem to be in short supply for many users. Facebook Connect (and the Twitter equivalent) leverage huge user bases to make adoption by other web sites very attractive. Though there is evidently still some user confusion about using those authentication methods, the experience is more straightforward than OpenID.

So, where do we go from here? The US government is starting to make noise about trusted internet identities, which might provide an alternative SSO solution—though not without privacy (and other) concerns of its own. LWN has implemented OpenID relying party support, though there is still some work and testing to do before we can roll it out. The 37signals announcement and the related chatter seem likely to turn off some other sites that were considering OpenID support.

It is tempting to call OpenID a failure, and to some extent it is, but it has some compelling ideas, at least for technically (and privacy) savvy users. But the features that are most attractive to those users are precisely those that web site operators wish to avoid—anonymous/pseudonymous authentication doesn't play well with their business models. For sites like LWN, where registration doesn't require any personal information, the barriers to adoption are likely to be things like available developer time (that's certainly the case here). In addition, there has always been some interest from our readers in OpenID support but it never seemed to garner a critical mass clamoring for it. If OpenID had taken off the way many hoped it would, supporting it would have become a much higher priority for LWN and lots of other sites.

As Wired notes, OpenID was ahead of its time. It suffered from some technical problems—what new protocol doesn't?—but those could have been fixed had there been a groundswell of interest from users or web sites. Since that didn't happen, it's probably time to start thinking about other SSO options that aren't controlled by companies or governments. Without a solution that is under individual control, we risk being herded into systems that cater to the needs of these large organizations—with all the dangers to internet freedom that implies.

Comments (29 posted)

Brief items

Security quote of the week

Shutting WikiLeaks down won't stop government secrets from leaking any more than shutting Napster down stopped illegal filesharing.
-- Bruce Schneier

Comments (none posted)

Sony Wins TRO, Impoundment (Groklaw)

Groklaw has an in-depth look at the temporary restraining order [PDF] granted on January 26 to Sony against George Hotz for restoring the ability to run Linux on PlayStation 3 consoles. "Hotz is also ordered to hand over to Sony "any computers, hard drives, CD-roms, DVDs, USB stick, or any other storage devices on which any Circumvention Devices are stored" in his "possession, custody or control." I guess it's off with his head, too, then, because he surely knows how to do what he did. People who live in countries that don't have the DMCA also know. Just saying. [...] I would have thought Sony would be more technically clueful about the Internet, but what they do well is get the law to help them out. That's the purpose of the DMCA, if you think about it, to scare people so they won't do what they otherwise can do. So Hotz is in some hot water at the moment, I'd say, an object lesson, and it'll stay that way until the hearing, a date for which is not yet chosen. And from my reading, I'd say after that too, at least with this judge."

Comments (22 posted)

Egypt Leaves the Internet (Renesys blog)

In an unprecedented move, Egypt has completely removed itself from the internet, presumably in response to gathering unrest there, as reported by Renesys. US (and other) politicians will undoubtedly look on this as a validation of the "internet kill switch" idea (pushed by Connecticut senator Joe Lieberman among others). "At 22:34 UTC (00:34am local time), Renesys observed the virtually simultaneous withdrawal of all routes to Egyptian networks in the Internet's global routing table. Approximately 3,500 individual BGP routes were withdrawn, leaving no valid paths by which the rest of the world could continue to exchange Internet traffic with Egypt's service providers. Virtually all of Egypt's Internet addresses are now unreachable, worldwide."

Comments (38 posted)

Sourceforge Attack: Full Report

Sourceforge.net briefly reported an attack on its infrastructure on Thursday January 27 that resulted in some services (CVS, interactive ssh shells, and others) being suspended. More details were released on January 29, which show that the attack exploited a privilege escalation to root in one of the Sourceforge services. "It’s better to be safe than sorry, so we’ve decided to perform a comprehensive validation of project data from file releases, to SCM commits. We will compare data [against] pre-attack backups, and will identify changed and added. We will review that data, and will also refer anything suspicious to individual project teams for further assessment as needed. [...] The validation work is a precaution, because while we don’t have evidence of any data tampering, we’d much prefer to burn a bunch of CPU cycles verifying everything than to discover later that some extra special trickery lead to some undetected badness."

Comments (3 posted)

Nmap 5.50 released

With an amusing title, "Nmap 5.50: Now with Gopher protocol support!", Nmap lead Fyodor announced the most recent release of the network exploration tool on January 28. It does indeed come with Gopher support, but other new features may be of wider interest: "A primary focus of this release is the Nmap Scripting Engine, which has allowed Nmap to expand up the protocol stack and take network discovery to the next level. Nmap can now query all sorts of application protocols, including web servers, databases, DNS servers, FTP, and now even Gopher servers! Remember those? These capabilities are in self-contained libraries and scripts to avoid bloating Nmap's core engine."
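Trying the scripting engine out is simple; -sC runs the default script set alongside version detection, and scanme.nmap.org is the project's sanctioned test target:

    $ nmap -sV -sC scanme.nmap.org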

Comments (none posted)

New vulnerabilities

calibre: cross-site scripting and file disclosure

Package(s):calibre CVE #(s):
Created:February 2, 2011 Updated:February 2, 2011
Description: The calibre ebook management program suffers from directory traversal and cross-site scripting vulnerabilities; see this advisory for more information.
Alerts:
openSUSE openSUSE-SU-2011:0086-1 calibre 2011-01-31

Comments (none posted)

chm2pdf: two insecure tmp file flaws

Package(s):chm2pdf CVE #(s):CVE-2008-5298 CVE-2008-5299
Created:January 28, 2011 Updated:February 2, 2011
Description:

From the Red Hat bugzilla entries [1, 2]:

chm2pdf 0.9 uses temporary files in directories with fixed names, which allows local users to cause a denial of service (chm2pdf failure) of other users by creating those directories ahead of time. (CVE-2008-5298)

chm2pdf 0.9 allows user-assisted local users to delete arbitrary files via a symlink attack on .chm files in the (1) /tmp/chm2pdf/work or (2) /tmp/chm2pdf/orig temporary directories. (CVE-2008-5299)

Alerts:
Fedora FEDORA-2011-0454 chm2pdf 2011-01-17
Fedora FEDORA-2011-0467 chm2pdf 2011-01-17

Comments (none posted)

kernel: denial of service

Package(s):linux-2.6 kernel CVE #(s):CVE-2010-4342
Created:January 31, 2011 Updated:August 9, 2011
Description: The econet protocol implementation can enable a remote attacker to oops the kernel with a maliciously-crafted UDP packet.
Alerts:
openSUSE openSUSE-SU-2013:0927-1 kernel 2013-06-10
Ubuntu USN-1187-1 kernel 2011-08-09
Ubuntu USN-1167-1 linux 2011-07-13
Ubuntu USN-1159-1 linux-mvl-dove 2011-07-13
Ubuntu USN-1162-1 linux-mvl-dove 2011-06-29
Ubuntu USN-1164-1 linux-fsl-imx51 2011-07-06
Ubuntu USN-1141-1 linux, linux-ec2 2011-05-31
Ubuntu USN-1133-1 linux 2011-05-24
SUSE SUSE-SA:2011:017 kernel 2011-04-18
openSUSE openSUSE-SU-2011:0346-1 kernel 2011-04-18
Ubuntu USN-1111-1 linux-source-2.6.15 2011-05-05
SUSE SUSE-SA:2011:015 kernel 2011-03-24
SUSE SUSE-SA:2011:012 kernel 2011-03-08
Ubuntu USN-1081-1 linux 2011-03-02
Ubuntu USN-1119-1 linux-ti-omap4 2011-04-20
SUSE SUSE-SA:2011:008 kernel 2011-02-11
openSUSE openSUSE-SU-2011:0399-1 kernel 2011-04-28
Debian DSA-2153-1 linux-2.6 kernel 2011-01-30

Comments (none posted)

kernel: privilege escalation

Package(s):linux-2.6 kernel CVE #(s):CVE-2010-4346
Created:January 31, 2011 Updated:August 9, 2011
Description: A kernel vulnerability allows an attacker to bypass the mmap_min_addr restriction and map user-space memory at the null address.
Alerts:
Oracle ELSA-2013-1645 kernel 2013-11-26
Ubuntu USN-1187-1 kernel 2011-08-09
Ubuntu USN-1167-1 linux 2011-07-13
Ubuntu USN-1164-1 linux-fsl-imx51 2011-07-06
SUSE SUSE-SA:2011:017 kernel 2011-04-18
openSUSE openSUSE-SU-2011:0346-1 kernel 2011-04-18
CentOS CESA-2011:0429 kernel 2011-04-14
Red Hat RHSA-2011:0429-01 kernel 2011-04-12
Red Hat RHSA-2011:0421-01 kernel 2011-04-07
Ubuntu USN-1105-1 linux 2011-04-05
Ubuntu USN-1093-1 linux-mvl-dove 2011-03-25
Red Hat RHSA-2011:0330-01 kernel-rt 2011-03-10
Fedora FEDORA-2011-2134 kernel 2011-02-24
SUSE SUSE-SA:2011:012 kernel 2011-03-08
Ubuntu USN-1119-1 linux-ti-omap4 2011-04-20
Ubuntu USN-1080-2 linux-ec2 2011-03-02
Ubuntu USN-1081-1 linux 2011-03-02
Ubuntu USN-1080-1 linux 2011-03-01
openSUSE openSUSE-SU-2011:0399-1 kernel 2011-04-28
Mandriva MDVSA-2011:029 kernel 2011-02-17
Fedora FEDORA-2011-1138 kernel 2011-02-07
Debian DSA-2153-1 linux-2.6 kernel 2011-01-30

Comments (none posted)

kernel: privilege escalation

Package(s):linux-2.6 kernel CVE #(s):CVE-2010-4527
Created:January 31, 2011 Updated:August 9, 2011
Description: Two vulnerabilities in the OSS sound card drivers can facilitate local information disclosure or privileged code execution.
Alerts:
Ubuntu USN-1187-1 kernel 2011-08-09
Ubuntu USN-1167-1 linux 2011-07-13
Ubuntu USN-1164-1 linux-fsl-imx51 2011-07-06
Scientific Linux SL-kern-20110216 kernel 2011-02-16
Ubuntu USN-1133-1 linux 2011-05-24
SUSE SUSE-SA:2011:017 kernel 2011-04-18
openSUSE openSUSE-SU-2011:0346-1 kernel 2011-04-18
Ubuntu USN-1111-1 linux-source-2.6.15 2011-05-05
Ubuntu USN-1093-1 linux-mvl-dove 2011-03-25
SUSE SUSE-SA:2011:015 kernel 2011-03-24
SUSE SUSE-SA:2011:012 kernel 2011-03-08
Ubuntu USN-1080-2 linux-ec2 2011-03-02
Ubuntu USN-1081-1 linux 2011-03-02
Ubuntu USN-1080-1 linux 2011-03-01
Ubuntu USN-1119-1 linux-ti-omap4 2011-04-20
Red Hat RHSA-2011:0263-01 kernel 2011-02-16
SUSE SUSE-SA:2011:008 kernel 2011-02-11
openSUSE openSUSE-SU-2011:0399-1 kernel 2011-04-28
Debian DSA-2153-1 linux-2.6 kernel 2011-01-30

Comments (none posted)

kernel: information disclosure

Package(s):linux-2.6 kernel CVE #(s):CVE-2010-4529
Created:January 31, 2011 Updated:August 9, 2011
Description: A vulnerability in the IrDA socket implementation (on non-x86 systems) can leak some kernel memory to user space.
Alerts:
openSUSE openSUSE-SU-2013:0927-1 kernel 2013-06-10
Ubuntu USN-1187-1 kernel 2011-08-09
Ubuntu USN-1167-1 linux 2011-07-13
Ubuntu USN-1159-1 linux-mvl-dove 2011-07-13
Ubuntu USN-1162-1 linux-mvl-dove 2011-06-29
Ubuntu USN-1164-1 linux-fsl-imx51 2011-07-06
Ubuntu USN-1160-1 kernel 2011-06-28
Ubuntu USN-1141-1 linux, linux-ec2 2011-05-31
Ubuntu USN-1133-1 linux 2011-05-24
SUSE SUSE-SA:2011:017 kernel 2011-04-18
openSUSE openSUSE-SU-2011:0346-1 kernel 2011-04-18
Ubuntu USN-1111-1 linux-source-2.6.15 2011-05-05
SUSE SUSE-SA:2011:015 kernel 2011-03-24
Ubuntu USN-1119-1 linux-ti-omap4 2011-04-20
SUSE SUSE-SA:2011:012 kernel 2011-03-08
SUSE SUSE-SA:2011:008 kernel 2011-02-11
openSUSE openSUSE-SU-2011:0399-1 kernel 2011-04-28
Debian DSA-2153-1 linux-2.6 kernel 2011-01-30

Comments (none posted)

kernel: information disclosure

Package(s):linux-2.6 kernel CVE #(s):CVE-2010-4565
Created:January 31, 2011 Updated:August 9, 2011
Description: The CAN protocol implementation can leak the address of a kernel data structure, possibly making exploitation of another vulnerability easier.
Alerts:
Oracle ELSA-2013-1645 kernel 2013-11-26
Ubuntu USN-1202-1 linux-ti-omap4 2011-09-13
Ubuntu USN-1187-1 kernel 2011-08-09
Ubuntu USN-1167-1 linux 2011-07-13
Ubuntu USN-1159-1 linux-mvl-dove 2011-07-13
Ubuntu USN-1162-1 linux-mvl-dove 2011-06-29
Ubuntu USN-1164-1 linux-fsl-imx51 2011-07-06
Ubuntu USN-1160-1 kernel 2011-06-28
Ubuntu USN-1141-1 linux, linux-ec2 2011-05-31
Red Hat RHSA-2011:0330-01 kernel-rt 2011-03-10
Red Hat RHSA-2011:0498-01 kernel 2011-05-10
Mandriva MDVSA-2011:029 kernel 2011-02-17
Debian DSA-2153-1 linux-2.6 kernel 2011-01-30

Comments (none posted)

kernel: denial of service

Package(s):linux-2.6 kernel CVE #(s):CVE-2010-4649
Created:January 31, 2011 Updated:October 24, 2012
Description: A buffer overflow in the InfiniBand subsystem may allow local users to corrupt memory and oops the system.
Alerts:
Ubuntu USN-1204-1 linux-fsl-imx51 2011-09-13
Ubuntu USN-1202-1 linux-ti-omap4 2011-09-13
Ubuntu USN-1187-1 kernel 2011-08-09
Ubuntu USN-1186-1 kernel 2011-08-09
Scientific Linux SL-kern-20110715 kernel 2011-07-15
CentOS CESA-2011:0927 kernel 2011-07-18
Red Hat RHSA-2011:0927-01 kernel 2011-07-15
Ubuntu USN-1167-1 linux 2011-07-13
SUSE SUSE-SA:2011:017 kernel 2011-04-18
openSUSE openSUSE-SU-2011:0346-1 kernel 2011-04-18
Ubuntu USN-1093-1 linux-mvl-dove 2011-03-25
Red Hat RHSA-2011:0330-01 kernel-rt 2011-03-10
Fedora FEDORA-2011-2134 kernel 2011-02-24
Ubuntu USN-1080-2 linux-ec2 2011-03-02
Ubuntu USN-1081-1 linux 2011-03-02
Ubuntu USN-1080-1 linux 2011-03-01
Red Hat RHSA-2011:0498-01 kernel 2011-05-10
Fedora FEDORA-2011-1138 kernel 2011-02-07
openSUSE openSUSE-SU-2011:0399-1 kernel 2011-04-28
Debian DSA-2153-1 linux-2.6 kernel 2011-01-30

Comments (none posted)

kernel: privilege escalation

Package(s):kernel CVE #(s):CVE-2010-4656
Created:January 31, 2011 Updated:August 9, 2011
Description: A buffer overflow in the I/O-Warrior driver may enable a privilege escalation exploit by local users.
Alerts:
openSUSE openSUSE-SU-2013:0927-1 kernel 2013-06-10
Ubuntu USN-1202-1 linux-ti-omap4 2011-09-13
Ubuntu USN-1187-1 kernel 2011-08-09
Ubuntu USN-1164-1 linux-fsl-imx51 2011-07-06
Ubuntu USN-1160-1 kernel 2011-06-28
Ubuntu USN-1146-1 kernel 2011-06-09
Ubuntu USN-1141-1 linux, linux-ec2 2011-05-31
Red Hat RHSA-2011:0421-01 kernel 2011-04-07
SUSE SUSE-SA:2011:019 kernel 2011-04-28
Red Hat RHSA-2011:0330-01 kernel-rt 2011-03-10
openSUSE openSUSE-SU-2011:0399-1 kernel 2011-04-28
Debian DSA-2153-1 linux-2.6 kernel 2011-01-30

Comments (none posted)

kernel: denial of service

Package(s):kernel CVE #(s):CVE-2011-0521
Created:January 31, 2011 Updated:August 9, 2011
Description: The AV7110 driver does not properly check user input, enabling the corruption of memory and a local denial-of-service attack.
Alerts:
Oracle ELSA-2013-1645 kernel 2013-11-26
openSUSE openSUSE-SU-2013:0927-1 kernel 2013-06-10
Ubuntu USN-1202-1 linux-ti-omap4 2011-09-13
Ubuntu USN-1187-1 kernel 2011-08-09
Ubuntu USN-1167-1 linux 2011-07-13
Ubuntu USN-1164-1 linux-fsl-imx51 2011-07-06
Ubuntu USN-1160-1 kernel 2011-06-28
Scientific Linux SL-kern-20110216 kernel 2011-02-16
Ubuntu USN-1141-1 linux, linux-ec2 2011-05-31
Ubuntu USN-1133-1 linux 2011-05-24
Ubuntu USN-1111-1 linux-source-2.6.15 2011-05-05
SUSE SUSE-SA:2011:017 kernel 2011-04-18
openSUSE openSUSE-SU-2011:0346-1 kernel 2011-04-18
CentOS CESA-2011:0429 kernel 2011-04-14
Red Hat RHSA-2011:0429-01 kernel 2011-04-12
Red Hat RHSA-2011:0421-01 kernel 2011-04-07
SUSE SUSE-SA:2011:019 kernel 2011-04-28
SUSE SUSE-SA:2011:015 kernel 2011-03-24
openSUSE openSUSE-SU-2011:0416-1 kernel 2011-04-29
Red Hat RHSA-2011:0330-01 kernel-rt 2011-03-10
Fedora FEDORA-2011-2134 kernel 2011-02-24
Red Hat RHSA-2011:0263-01 kernel 2011-02-16
Fedora FEDORA-2011-1138 kernel 2011-02-07
openSUSE openSUSE-SU-2011:0399-1 kernel 2011-04-28
Debian DSA-2153-1 linux-2.6 kernel 2011-01-30

Comments (none posted)

myproxy: invalid certificate hostname check

Package(s):myproxy CVE #(s):
Created:January 27, 2011 Updated:February 2, 2011
Description:

From the MyProxy advisory:

The myproxy-logon program (also called myproxy-get-delegation) in MyProxy versions 5.0 through 5.2 does not abort connections when it finds that the myproxy-server's certificate is valid and signed by a trusted certification authority but the certificate does not contain the expected hostname (or identity given in the MYPROXY_SERVER_DN environment variable), unless the myproxy-logon -T or myproxy-logon -b options are given.

Alerts:
Fedora FEDORA-2011-0514 myproxy 2011-01-18
Fedora FEDORA-2011-0512 myproxy 2011-01-18

Comments (none posted)

openjdk: privilege escalation

Package(s):openjdk CVE #(s):CVE-2011-0025
Created:February 2, 2011 Updated:June 15, 2011
Description: The IcedTea openjdk implementation does not properly verify signatures on JAR files in some situations, allowing an attacker to run code which appears to be from a trusted source.
Alerts:
Gentoo 201406-32 icedtea-bin 2014-06-29
Mandriva MDVSA-2011:054 java-1.6.0-openjdk 2011-03-27
SUSE SUSE-SR:2011:003 gnutls, tomcat6, perl-CGI-Simple, pcsc-lite, obs-server, dhcp, java-1_6_0-openjdk, opera 2011-02-08
Debian DSA-2224-1 openjdk-6 2011-04-20
openSUSE openSUSE-SU-2011:0102-1 java-1_6_0-openjdk 2011-02-07
Ubuntu USN-1055-1 openjdk-6, openjdk-6b18 2011-02-01

Comments (none posted)

pango: code execution

Package(s):pango CVE #(s):CVE-2011-0020
Created:January 27, 2011 Updated:April 1, 2011
Description:

From the Pango advisory:

An input sanitization flaw, leading to a heap-based buffer overflow, was found in the way Pango displayed font files when using the FreeType font engine back end. If a user loaded a malformed font file with an application that uses Pango, it could cause the application to crash or, possibly, execute arbitrary code with the privileges of the user running the application. (CVE-2011-0020)

Alerts:
Gentoo 201405-13 pango 2014-05-17
SUSE SUSE-SR:2011:005 hplip, perl, subversion, t1lib, bind, tomcat5, tomcat6, avahi, gimp, aaa_base, build, libtiff, krb5, nbd, clamav, aaa_base, flash-player, pango, openssl, subversion, postgresql, logwatch, libxml2, quagga, fuse, util-linux 2011-04-01
openSUSE openSUSE-SU-2011:0221-1 pango 2011-03-24
Ubuntu USN-1082-1 pango1.0 2011-03-02
Pardus 2011-42 pango pango-docs 2011-02-14
CentOS CESA-2011:0180 pango 2011-02-04
Red Hat RHSA-2011:0180-01 pango 2011-01-27

Comments (none posted)

perl-CGI-Simple: HTTP response splitting

Package(s):perl-CGI-Simple CVE #(s):CVE-2010-4410
Created:January 28, 2011 Updated:December 9, 2011
Description:

From the CVE entry:

CRLF injection vulnerability in the header function in (1) CGI.pm before 3.50 and (2) Simple.pm in CGI::Simple 1.112 and earlier allows remote attackers to inject arbitrary HTTP headers and conduct HTTP response splitting attacks via vectors related to non-whitespace characters preceded by newline characters, a different vulnerability than CVE-2010-2761 and CVE-2010-3172.

Alerts:
Oracle ELSA-2011-1797 perl 2011-12-08
Oracle ELSA-2011-1797 perl 2011-12-08
Scientific Linux SL-perl-20111208 perl 2011-12-08
CentOS CESA-2011:1797 perl 2011-12-09
CentOS CESA-2011:1797 perl 2011-12-09
Red Hat RHSA-2011:1797-01 perl 2011-12-08
SUSE SUSE-SR:2011:005 hplip, perl, subversion, t1lib, bind, tomcat5, tomcat6, avahi, gimp, aaa_base, build, libtiff, krb5, nbd, clamav, aaa_base, flash-player, pango, openssl, subversion, postgresql, logwatch, libxml2, quagga, fuse, util-linux 2011-04-01
SUSE SUSE-SR:2011:003 gnutls, tomcat6, perl-CGI-Simple, pcsc-lite, obs-server, dhcp, java-1_6_0-openjdk, opera 2011-02-08
Red Hat RHSA-2011:0558-01 perl 2011-05-19
Ubuntu USN-1129-1 perl 2011-05-03
Fedora FEDORA-2011-0654 perl-CGI 2011-01-21
Fedora FEDORA-2011-0653 perl-CGI-Simple 2011-01-21
Fedora FEDORA-2011-0631 perl-CGI-Simple 2011-01-21
openSUSE openSUSE-SU-2011:0083-1 perl-CGI-Simple 2011-01-28

Comments (none posted)

proftpd: code execution

Package(s):proftpd CVE #(s):CVE-2010-4652
Created:January 28, 2011 Updated:March 15, 2011
Description:

From the Red Hat bugzilla entry:

A heap-based buffer overflow flaw was found in the way ProFTPD FTP server prepared SQL queries for certain usernames, when the mod_sql module was enabled. A remote, unauthenticated attacker could use this flaw to cause proftpd daemon to crash or, potentially, to execute arbitrary code with the privileges of the user running 'proftpd' via a specially-crafted username, provided in the authentication dialog.

Alerts:
Gentoo 201309-15 proftpd 2013-09-24
Debian DSA-2191-1 proftpd-dfsg 2011-03-14
Mandriva MDVSA-2011:023 proftpd 2011-02-08
Fedora FEDORA-2011-0610 proftpd 2011-01-20
Fedora FEDORA-2011-0613 proftpd 2011-01-20

Comments (none posted)

wireshark: denial of service

Package(s):wireshark CVE #(s):CVE-2011-0445
Created:February 1, 2011 Updated:April 19, 2011
Description:

From the Pardus advisory:

The ASN.1 BER dissector in Wireshark 1.4.0 through 1.4.2 allows remote attackers to cause a denial of service (assertion failure) via crafted packets, as demonstrated by fuzz-2010-12-30-28473.pcap.

Alerts:
Gentoo 201110-02 wireshark 2011-10-09
SUSE SUSE-SR:2011:007 NetworkManager, OpenOffice_org, apache2-slms, dbus-1-glib, dhcp/dhcpcd/dhcp6, freetype2, kbd, krb5, libcgroup, libmodplug, libvirt, mailman, moonlight-plugin, nbd, openldap2, pure-ftpd, python-feedparser, rsyslog, telepathy-gabble, wireshark 2011-04-19
Fedora FEDORA-2011-0450 wireshark 2011-01-17
Fedora FEDORA-2011-0460 wireshark 2011-01-17
Pardus 2011-21 wireshark 2011-01-31

Comments (none posted)

Page editor: Jake Edge

Kernel development

Brief items

Kernel release status

The current development kernel is 2.6.38-rc3, released on February 1. "Nothing hugely special in here, and I'm happy to say that most of the pull requests have been nice clear bug-fixes and fixing regressions. Thanks to most of you for that." As always, see the full changelog for all the details.

Stable updates: no stable updates have been released in the last week. The 2.6.35.11 update is in the review process as of this writing; it could be released as early as February 3.

Comments (none posted)

Quotes of the week

If you ever have a function with the string "check" in its name, it's a good sign that you did something wrong.
-- Andrew Morton

In GNOME 3.0, we're defaulting to suspending the computer when the user shuts the lid, and not providing any preferences combobox to change this. This is what the UI designers for GNOME 3.0 want, and is probably a step in the right direction. We really can't keep working around bugs in the kernel with extra UI controls.
-- Richard Hughes

I don't normally do acked-by's. I think it's my way of avoiding getting blamed when it all blows up.
-- Andrew Morton

Comments (71 posted)

Undertaker 1.0

By Jonathan Corbet
February 1, 2011
As anybody who has ever sat through an "allyesconfig" build - or a build using a distributor configuration - understands, there is a lot of code in the kernel. Most of the time, creating the perfect kernel is a matter of excluding code until the size becomes reasonable. So it's a rare kernel build that actually compiles a majority of the code found in the tree.

The kernel configuration mechanism makes it possible to perform this selection. Part of this mechanism is wired into the build system; it allows source files to be passed over entirely if they contain nothing of interest. The other half, though, is implemented with preprocessor symbols and conditional compilation. Kernel developers may be discouraged from using #ifdef, but there are still a lot of conditional blocks in the code.

Sometimes, the logic which leads to the inclusion or exclusion of a specific block is complex and not at all clear. There are many configuration options in the kernel, and they can depend on each other in complicated ways. As a result, dead code - code which will not be compiled regardless of the selected configuration - may escape notice for years. Dead code adds noise to the source tree and, since nobody ever runs it, it is more than likely to contain bugs. If that code is re-enabled or copied, those bugs could spread through the tree in surprising ways.
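As a contrived example (the configuration symbols and helper functions below are invented for illustration), consider a block whose condition contradicts a Kconfig dependency:

    /*
     * Suppose Kconfig declares:
     *     config FOO
     *             depends on !BAR
     * Then the inner block can never be compiled, regardless of the
     * chosen configuration - it is dead code.
     */
    extern void setup_foo(void);
    extern void setup_foo_bar_bridge(void);

    void init_subsystem(void)
    {
    #ifdef CONFIG_FOO
            setup_foo();
    #ifdef CONFIG_BAR       /* contradicts FOO's dependency on !BAR */
            setup_foo_bar_bridge();    /* never built, never run, never tested */
    #endif
    #endif
    }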

So it would be good to be able to identify dead code and get it out of the tree. The newly-released undertaker tool was designed to do a number of types of static analysis, including dead code identification. Developers can run it on their own to find dead blocks in specific files; there is also a web interface which allows anybody to browse through the tree and find the dead sections. That should lead to patches hauling away the bodies and cleaning up the tree, which is a good thing.

Comments (none posted)

Kconfiglib

By Jonathan Corbet
February 2, 2011
The kernel configuration system is a complex bit of code in its own right; many people who have no trouble hacking on kernel code find reasons to avoid going into the configuration subsystem. There is value in being able to work with the complicated data structure that is a kernel configuration, though. Ulf Magnusson has recently posted a library, Kconfiglib, which, he hopes, will make that easier.

Kconfiglib is a Python library which is able to load, analyze, and output kernel configurations; care has been taken to ensure that any configuration it creates is identical to what comes out of the existing kernel configuration system. With Kconfiglib, it becomes straightforward to write simple tools like "allnoconfig"; it is also possible to ask questions about a given configuration. One possible tool, for example, would answer the "why can't I select CONFIG_FOO" question - a useful feature indeed.

There are currently no Python dependencies in the kernel build system; trying to add one could well run into opposition. But Kconfiglib could find a role in the creation of ancillary tools which are not required to configure and build a kernel as it's always been done. For the curious, there's a set of examples available.

Comments (8 posted)

Kernel development news

Using the perf code to create a RAS daemon

By Jake Edge
February 2, 2011

Monitoring a system for "reliability, availability, and serviceability" (RAS) is an important part of keeping that system, or a cluster of such computers, up and running. There is a wide variety of things that could be monitored for RAS purposes—memory errors, CPU temperature, RAID and filesystem health, and so on—but Borislav Petkov's RAS daemon is targeted just at gathering information on any machine check exceptions (MCEs) that occur. The daemon uses trace events and the perf infrastructure, which requires a fair amount of restructuring of that code to make it available not only to the RAS daemon, but also to other kinds of tools.

The first step is to create persistent perf events, which are events that are always enabled, and will have event buffers allocated, even if there is no process currently looking at the data. That allows the MCE trace event to be enabled at boot time, before there is any task monitoring the perf buffer. Once the boot has completed, the RAS daemon (or some other program) can mmap() the event buffers and start monitoring the event. This will allow the RAS daemon to pick up any MCE events that happened during the boot process.

To do that, the struct perf_event_attr gets a new persistent bitfield that is used to determine whether or not to destroy the event buffers when they are unmapped. In addition, persistent events can be shared by multiple monitoring programs because they can be mapped as shared and read-only. Once the persistent events are added, the next patch then changes the MCE event to become a persistent event.
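
For context, here is a minimal sketch of how a tracepoint event can be opened and mapped through the existing perf system call interface. This is not Petkov's code: it omits the new persistent bit (which exists only in his patches), error reporting, and the debugfs lookup that produces the tracepoint ID.

   #include <stdint.h>
   #include <string.h>
   #include <unistd.h>
   #include <sys/mman.h>
   #include <sys/syscall.h>
   #include <linux/perf_event.h>

   /* Open the tracepoint with the given ID on one CPU, for all tasks,
    * and map one header page plus eight data pages of event buffer. */
   static void *map_tracepoint(uint64_t id, int cpu, int *fdp)
   {
           struct perf_event_attr attr;
           size_t len = 9 * sysconf(_SC_PAGESIZE);
           void *buf;
           int fd;

           memset(&attr, 0, sizeof(attr));
           attr.size = sizeof(attr);
           attr.type = PERF_TYPE_TRACEPOINT;
           attr.config = id;            /* e.g. from .../mce/mce_record/id */
           attr.sample_type = PERF_SAMPLE_RAW;
           attr.sample_period = 1;

           fd = syscall(__NR_perf_event_open, &attr, -1, cpu, -1, 0);
           if (fd < 0)
                   return NULL;
           buf = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
           if (buf == MAP_FAILED) {
                   close(fd);
                   return NULL;
           }
           *fdp = fd;
           return buf;
   }

A persistent event would differ mainly in that the buffer already exists in the kernel, so a monitoring program could map it (read-only, in the shared case) without losing events generated before it started.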

With the stage set, Petkov then starts to rearrange the perf code so that the RAS daemon and other tools can access some of the code that is currently buried in the tools/perf directory. That includes things like the trace event utilities, which move from tools/perf/util to tools/lib/trace, and some debugfs helper functions, which move to tools/lib/lk. These were obviously things that were needed when creating the RAS daemon, but they were not easily accessible.

A similar patch moves the mmap() helper functions from the tools/perf directory to another new library: tools/lib/perf. These functions handle things like reading the head of the event buffer queue, writing at the tail of the queue, and reading and summing all of the per-cpu event counters for a given event.
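
Those helpers encapsulate the lock-free protocol between the kernel (the producer) and user space (the consumer). A rough sketch of the two sides of that protocol, using GCC's __sync_synchronize() as a full memory barrier, might look like this:

   #include <stdint.h>
   #include <linux/perf_event.h>

   /* Fetch the producer's head; the barrier ensures that the event data
    * written before the kernel updated data_head is visible to us. */
   static uint64_t read_data_head(struct perf_event_mmap_page *pc)
   {
           uint64_t head = pc->data_head;

           __sync_synchronize();
           return head;
   }

   /* Advance the consumer's tail; the barrier ensures we have finished
    * reading the records before the kernel is allowed to overwrite them. */
   static void write_data_tail(struct perf_event_mmap_page *pc, uint64_t tail)
   {
           __sync_synchronize();
           pc->data_tail = tail;
   }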

In response to the patch moving the mmap() helpers, Arnaldo Carvalho de Melo pointed out that he had already done some work to rework that code, and that it would reduce the size of Petkov's patch set once it gets merged into the -tip tree. He also noted that he had created a set of Python bindings and a simple perf-event-consuming twatch daemon using those bindings. While Petkov had some reasons for writing the RAS daemon in C rather than Python, mostly so that it would work on systems without Python or with outdated versions, he did seem impressed: "twatch looks almost like a child's play and even my grandma can profile her system now :)."

But the Python bindings aren't necessarily meant for production code, as Carvalho de Melo describes. Because the Python bindings are quite similar to their C counterparts, they can be used to ensure that the kernel interfaces are right:

I.e. one can go on introducing the kernel interfaces and testing them using python, where you can, for instance, from the python interpreter command line, create counters, read its values, i.e. test the kernel stuff quickly and easily.

Moving to a C version then becomes easy after the testing phase is over and the kernel bits are set in stone.

There are some additional patches that move things around within the tools tree before the final patch actually adds the RAS daemon. The daemon is fairly straightforward, with the bulk of it being boilerplate daemonizing code. The rest parses the MCE event format (from the mce/mce_record/format file in debugfs), then opens and maps the debugfs mce/mce_recordN files (where N is the CPU number). The main program sits in a loop, checking for MCE events every 30 seconds and printing to a log file the CPU, MCE status, and address of any events that have occurred. Petkov mentions decoding of the MCE status as something he is currently working on.
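
In outline (this is a paraphrase rather than the posted source; handle_mce() is a placeholder for the code that drains one CPU's mapped buffer), the daemon's main loop amounts to:

   #include <stdio.h>
   #include <unistd.h>

   #define NR_CPUS 4     /* illustrative; the real daemon discovers this */

   static void handle_mce(int cpu, FILE *log)
   {
           /* Walk the mmap()ed mce_recordN buffer for this CPU and log
            * the CPU number, MCE status, and address of each record. */
   }

   int main(void)
   {
           FILE *log = fopen("/var/log/ras.log", "a");

           if (!log)
                   return 1;
           for (;;) {
                   for (int cpu = 0; cpu < NR_CPUS; cpu++)
                           handle_mce(cpu, log);
                   fflush(log);
                   sleep(30);    /* poll for new MCE events every 30 seconds */
           }
   }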

Obviously, the RAS daemon itself is not the end result Petkov is aiming for. Rather, it is just a proof of concept for persistent events and demonstrates one way to rearrange the perf code so that other tools can use it. There may be disagreements about the way the libraries were arranged, or the specific location of various helpers, but the overall goal seems like a good one. Whether tools like ras actually end up in the kernel tree is, perhaps, questionable—the kernel hackers may not want to maintain a bunch of tools of this kind—but making the utility code more accessible will make it much easier for others to build these tools on their own.

Comments (none posted)

LCA: Rationalizing the wacom driver

By Jonathan Corbet
February 1, 2011
Wacom tablets are often the tool of choice for those who need accurate and flexible input devices; they seem to be especially favored by artists. Like a mouse, these tablets can report position and movement, but they can also present multiple input devices to the system (one for each of several different types of pens, for example) and report variables like pen angle, pressure, and more. Support in Linux for these devices has not been as good as one might like, but, as Peter Hutterer described in his talk at the linux.conf.au Libre Graphics Day miniconf, it is getting better quickly. How that came to be is a classic example of how to (or how not to) manage kernel driver development.

Peter is the maintainer for the bulk of the graphical input drivers. He has, he says, rewritten most of that subsystem, so he is to blame for the bugs which can be found there. Most input devices are easily handled through the evdev abstraction, but the Wacom driver is an exception. The things which are unique to these tablets (multiple input "devices," one associated with each pen, the pressure, tilt, and rotation axes, and the relatively high resolution) require a separate driver for their support. Thus, Wacom users must have the linuxwacom driver in their systems.

[Peter Hutterer] There is some confusion about the linuxwacom driver, because there are multiple versions of it, all of which can be found on SourceForge. One version (0.8.8) is created by Wacom itself; it is a classic vendor driver, Peter said, with everything that usually implies about the development process (code dumps) and the quality of the code itself. This driver ships as a tarball containing a wild set of permutations of kernel and X.org versions; it's a mess. But it's Wacom's mess, and the company has been resistant to efforts to clean it up.

Peter got fed up with this situation in 2009 and forked the driver. His version is now the default driver in a number of distributions, and is the only one which supports newer versions of the X server. Looking at the repositories, Peter found 78 commits total before the fork, all from Wacom. After the fork, there are 788 commits, 65% from Red Hat, and 12% from Wacom. Extracting the driver from its vendor-dominated situation has definitely helped to increase its rate of development.

Surprisingly, the original vendor driver is still under development by Wacom, despite the fact that it does not support current X servers and is not shipped by any distributors. The original mailing list is still in business, but, Peter warned, one should not ask questions about the new driver there. Kernel development, he said, should be done on the linux-kernel mailing list. There is also little point in talking to him about problems with the older driver; Wacom insists on keeping control over that code.

Update: Peter tells us that there are three mailing lists (linuxwacom-announce, linuxwacom-discuss and linuxwacom-devel) which are still the place to go for general questions, including hardware-specific questions. X driver development for the forked driver happens exclusively on linuxwacom-devel and all patches are sent there. So the mailing lists are definitely the place to ask questions, at least in regards to the X driver. The kernel driver is the exception here. Kernel driver development should happen on LKML, not on linuxwacom lists.

Much of the work Peter has done so far has been toward the goal of cleaning up the driver. That has involved throwing out a number of features. Some of those needed to go - the original driver tries to track the resolution of the screen, for example, which it has no business knowing. Support for the "twinview" approach to dual monitors has also been taken out. In some cases, the removed features are things that people want; support should eventually be restored once it can be done in the right way. Sometimes, Peter said, things have to get worse before they can get better.

Also gone is the wacomcpl configuration tool. It is, Peter said, some of the worst code that he has ever seen.

Peter gave this talk to update the graphics community on the state of support for this driver, but he was also looking for input. He described his development attitude as "if it doesn't crash the server, it works." In other words, he is not a graphic artist, so he has no deep understanding of how this hardware is used. To get that understanding, he needs input from the user community regarding development priorities and what does not work as well as it should.

So artists making use of Wacom tablets should make sure that their needs are known; the developer in charge of the driver is ready to listen. Meanwhile, bringing a more open development process to the driver has increased the pace of development and is improving the quality of the code. If the usual pattern holds, before long Linux should have support for these tablets which is second to none.

Comments (6 posted)

Using KernelShark to analyze the real-time scheduler

February 2, 2011

This article was contributed by Steven Rostedt

The last LWN article on Ftrace described trace-cmd, a front-end tool for interfacing with Ftrace. trace-cmd is entirely command-line based, which works well for embedded devices or tracing on a remote system, but reading the output as text can be overwhelming and makes it hard to see the bigger picture. A GUI can help users see what is happening at a global scale and understand how processes interact. KernelShark was written to fill this need: it is a graphical front end to trace-cmd.

KernelShark is distributed in the same repository as trace-cmd:

  git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/trace-cmd.git
To build it, you just need to type make gui; typing make alone will only build trace-cmd. The two tools have been kept separate because many embedded devices do not have the libraries needed to build KernelShark. Full HTML documentation is included in the repository and is installed with make install_doc. After installing the documentation, you can access the help directly from KernelShark's "Help" menu.

This article is not a tutorial on using KernelShark, as everything you need to know about the tool is kept up-to-date in the KernelShark repository. Instead, this article will describe a use case that KernelShark was instrumental in helping to solve.

Analyzing the real-time scheduler

Some time ago, when the push/pull algorithm of the real-time scheduler in Linux was being developed, a decision had to be made about what to do when a high priority process wakes up on a CPU running a real-time process of lower priority, where both processes have affinity to multiple CPUs and both could be migrated to a CPU running a non-real-time task. One would think that the proper thing to do would be to simply wake up the high priority process on that CPU, causing the lower priority process to be pushed off of it. The theory, though, was that doing so would move a cache-hot real-time process onto a cache-cold CPU, possibly replacing it with a cache-cold process.

After some debate, the decision was made to migrate the high priority process to the CPU running the lowest priority task (or no task at all) and wake it there. Some time later, after the code was incorporated into mainline, I started to question this decision, even though I was the one who had fought for it. With the introduction of Ftrace, we now have a utility to truly examine the impact that decision has made.

The decision to move the higher priority task was based on the assumption that a task which is waking up is more likely to be cache cold than a task that is already running. Thinking more about this case, one must consider what would cause a high priority task to wake up in the first place. If it is woken periodically to do some work, then it may very well be cache cold: any task scheduled in the meantime can easily push the high priority task's data out of the cache. But what if the high priority task was blocked on a mutex? If it blocked on a mutex and another RT task was scheduled in its place, then when the high priority task wakes up again there is a good chance that it will be cache hot.

A mutex in most real-time programs will usually be held for a short period of time. The PREEMPT_RT patch, which this code was developed from, converts spinlocks into mutexes, and those mutexes are held for very small code segments, as all spinlocks should be. Migrating a task simply because it blocked on a mutex increases the impact these locks have on the throughput. Why punish the high priority task even more because it blocked and had to wait for another task to run?

Before making any decision to change the code, I needed a test case that could show that migrating a high priority task, instead of preempting the lower priority task, causes the high priority task to ping-pong around the CPUs when there is lock contention. A high priority task should not be punished (migrated) simply because it encounters lock contention with lower priority real-time tasks. It would also be helpful to know how changing this decision affects the total number of migrations for all the tasks under lock contention.

First try

Having a four-processor box to play with, I started writing a test case that might trigger this scenario, using Ftrace to analyze the result. The first attempt was to create five threads (one more than the number of CPUs) and four pthread mutex locks: all of the threads wake up from a barrier wait, then loop 50 times, grabbing each lock in sequence and doing a small busy loop. This test is called migrate.c.

The test application uses trace_marker as explained in previous articles to write what is happening inside the application to synchronize with kernel tracing.
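
The overall shape of such a test is simple; here is a compressed sketch (not the actual migrate.c source: the trace_marker writes, error checking, and result counting are omitted, and SCHED_FIFO requires root privileges):

   #include <pthread.h>
   #include <sched.h>

   #define NR_THREADS 5
   #define NR_LOCKS   4
   #define NR_LOOPS   50

   static pthread_mutex_t locks[NR_LOCKS];
   static pthread_barrier_t start_barrier;

   static void busy_loop(void)
   {
           for (volatile int i = 0; i < 100000; i++)
                   ;
   }

   static void *thread_fn(void *arg)
   {
           long id = (long)arg;
           struct sched_param param = { .sched_priority = 1 + id };

           /* Each thread is SCHED_FIFO; the higher the thread number,
            * the higher the real-time priority. */
           pthread_setschedparam(pthread_self(), SCHED_FIFO, &param);
           pthread_barrier_wait(&start_barrier);   /* all wake up together */

           for (int loop = 0; loop < NR_LOOPS; loop++)
                   for (int l = 0; l < NR_LOCKS; l++) {
                           pthread_mutex_lock(&locks[l]);
                           busy_loop();
                           pthread_mutex_unlock(&locks[l]);
                   }
           return NULL;
   }

   int main(void)
   {
           pthread_t threads[NR_THREADS];

           pthread_barrier_init(&start_barrier, NULL, NR_THREADS);
           for (int l = 0; l < NR_LOCKS; l++)
                   pthread_mutex_init(&locks[l], NULL);
           for (long i = 0; i < NR_THREADS; i++)
                   pthread_create(&threads[i], NULL, thread_fn, (void *)i);
           for (int i = 0; i < NR_THREADS; i++)
                   pthread_join(threads[i], NULL);
           return 0;
   }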

Running the following with trace-cmd:

   # trace-cmd record -e 'sched_wakeup*' -e sched_switch -e 'sched_migrate*' migrate
   # kernelshark
[KernelShark] Like trace-cmd report, KernelShark will, by default, read the file trace.dat. You can specify another file by naming it as the first argument to KernelShark. While the KernelShark display images may be difficult to read fully in the article, clicking any of them will bring up a full-resolution version.

Since all tasks have been recorded, even trace-cmd itself, we want to filter out any tasks that we do not care about. Selecting Filter->Tasks from the KernelShark menu, and then choosing only the migrate threads will remove the extraneous tasks. Note that events that involve two tasks, like sched_switch or sched_wakeup, will not be filtered out if one of the tasks should be displayed.

[KernelShark post-filtering]

In the default graph view, each on-line CPU is represented by a plot line. Each task is represented by a different color; the color is determined by running the process ID through a hash function and converting the resulting number into an RGB value.

  • The purple colored bar represents thread 4, the highest priority task.
  • The orange(ish) colored bar represents thread 3.
  • The turquoise colored bar represents thread 2.
  • The brown colored bar represents thread 1.
  • The light blue colored bar represents thread 0, the lowest priority task.

The lines sticking out of the top of the bars represent events that appear in the list below the graph.

By examining the graph we can see that the test case was quite naive. The lowest priority task, thread 0, never got to run until the other four tasks were finished. This makes sense as the machine only had four CPUs and there were four higher priority tasks running. The four running tasks were running in lock step, taking the locks in sequence. From this view it looks like the tasks went out of sequence, but if we zoom in to where the migrations happened, we see something different.

[KernelShark zoom]

To zoom into the graph, press and hold the left mouse button; a line will appear. Drag the mouse to the right, and a second line will follow the pointer. When you release the mouse button, the view zooms in so that the region between the two lines fills the window.

Repeating the above procedure, we can get down to the details of the migration of thread 3. Double clicking on the graph brings the list view to the event that was clicked on; a green line appears at the location that was clicked.

[KernelShark event selection]

On CPU 0, thread 3 was preempted by the watchdog/0 kernel thread. Because we filtered out all threads but the migration test tasks, we see a small blank spot on the CPU 0 line; it would have been filled in with a colored bar representing the watchdog/0 thread if the filters were not enabled. The watchdog/0 thread runs at priority 99, which we can see from the sched_switch event: the priority of each task appears between the two colons. The priority shown is the kernel's view of priority, which is the inverse of the user-space scale (user-space priority 99 is kernel priority zero).

When the watchdog/0 thread preempted thread 3, the push/pull algorithm of the scheduler pushed it off to CPU 3, which had the lowest priority running task. Zooming into the other migrations that happened on the other CPUs shows that the watchdog kernel threads were responsible for them as well. If it were not for the watchdog kernel threads, this test would not have had any migrations.

Test two, second failure

The first test took the naive approach of just setting up four locks and having the tasks grab them in order, but that just kept the tasks in sync. The next approach tries to mix things up a little more. The concern about the real-time scheduler is how it affects the highest priority task, so the next test creates the four locks again (as there are four CPUs) and five tasks of increasing priority. This time, only the highest priority task grabs all the locks in sequence; each of the other four tasks grabs a single lock, so each lock is shared between one low priority task and the highest priority task. To force contention, pthread barriers are used. For those unfamiliar with them, barriers are synchronization primitives used to serialize threads: a barrier is initialized with a count, and all threads that hit the barrier block until that number of threads have arrived, at which point all of them are released.

This test case creates two barriers for each lock (lock_wait and lock_go), each initialized with the count 2 for the two tasks (one low priority task and the high priority task) that will take the lock. The low priority task takes the lock and then waits on the lock_wait barrier. The high priority task hits that barrier before taking the corresponding lock; because the low priority task is already waiting there, the high priority task's arrival releases both of them. The high priority task will then most likely try to take the mutex while the low priority task still holds it. The low priority task releases the mutex and waits on the other barrier (lock_go), letting the high priority task take the mutex.
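
Compressed to a single lock and two threads (the real-time priorities and trace_marker writes are left out, and the names are illustrative), the handoff looks like this:

   #include <pthread.h>

   static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
   static pthread_barrier_t lock_wait, lock_go;

   static void *low_prio(void *arg)
   {
           pthread_mutex_lock(&lock);
           pthread_barrier_wait(&lock_wait);  /* wait for the high-prio thread */
           pthread_mutex_unlock(&lock);
           pthread_barrier_wait(&lock_go);    /* let it take the lock */
           return NULL;
   }

   static void *high_prio(void *arg)
   {
           pthread_barrier_wait(&lock_wait);  /* releases the holder... */
           pthread_mutex_lock(&lock);         /* ...so contention is likely here */
           pthread_barrier_wait(&lock_go);
           pthread_mutex_unlock(&lock);
           return NULL;
   }

   int main(void)
   {
           pthread_t lo, hi;

           pthread_barrier_init(&lock_wait, NULL, 2);
           pthread_barrier_init(&lock_go, NULL, 2);
           pthread_create(&lo, NULL, low_prio, NULL);
           pthread_create(&hi, NULL, high_prio, NULL);
           pthread_join(lo, NULL);
           pthread_join(hi, NULL);
           return 0;
   }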

Running this test under trace-cmd yields the following from KernelShark after filtering out all but the migrate test tasks.

As the colors of the tasks are determined by their process IDs, this run has the following: [KernelShark second try]

  • The initial green bar is the main thread that is setting up all the locks and barriers.
  • The light purple bar is the lowest priority thread 0.
  • The red bar is the next-higher priority thread 1.
  • The yellow(ish) bar is the next-higher priority thread 2.
  • The blue bar is the next-higher priority thread 3.
  • The light turquoise bar is the highest priority thread 4.

[KernelShark zoom view] Looking at the graph, it seems that the highest priority thread stayed on the same CPU and was not affected by the contention. Considering that the scheduler is set to migrate a waking real-time task if it is woken on a CPU that is running another real-time task, regardless of the priorities, one would think the high priority task would have migrated more. Zooming in on the graph brings to light more detail about what is occurring.

What we can see from the graph, and from the list, is that the high priority thread did have contention on the lock. But because all threads are waiting for the high priority process to come around to its lock, the other threads are sleeping when the high priority process wakes up. The high priority process is only contending with a single thread at a time. Threads 0 and 2 share CPU 2 without issue, while threads 1 and 3 each still have a CPU for themselves.

The test to force migration

The second test was on the right track: it was able to produce contention, but it failed to keep the CPUs busy enough to cause the highest priority task to wake up on a CPU running another real-time task. What is needed is more tasks. The final test adds twice as many running threads as there are CPUs.

This test goes back to all tasks grabbing all locks in sequence. To prevent the synchronization that happened before, each thread holds a lock for a different amount of time: the higher the priority of a thread, the shorter the time it holds the lock. In addition, the threads now sleep after releasing a lock; the higher the priority of a task, the longer it sleeps:

   lock_held  = 1 ms * ((nr_threads - thread_id) + 1)
   sleep_time = 1 ms * thread_id
The lowest priority thread never sleeps, and it holds the lock for the longest time. To make things even more interesting, the mutexes have been given the PTHREAD_PRIO_INHERIT attribute: when a higher priority thread blocks on a mutex held by a lower priority thread, the lower priority thread inherits the priority of the thread it is blocking.
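
Requesting priority inheritance is a one-line change to the mutex attributes; a minimal sketch:

   #include <pthread.h>

   static pthread_mutex_t lock;

   static int init_pi_mutex(void)
   {
           pthread_mutexattr_t attr;

           pthread_mutexattr_init(&attr);
           /* A waiter's priority is lent to the holder of this mutex. */
           pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
           return pthread_mutex_init(&lock, &attr);
   }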

The test records the number of times each task voluntarily schedules, the number of times it is preempted, the number of times it migrates, and the number of times it successfully acquires all the locks. When the test finishes, it outputs these counts for each thread; the higher the task number, the higher the priority of the thread it represents.

   Task    vol   nonvol   migrated   iterations
      0     43     3007       1571          108
      1    621     1334       1247          108
      2    777      769       1072          108
      3    775       17        701          108
      4    783       50        699          108
      5    788        2        610          109
      6    801       89        680          109
      7    813        0        693          110
  Total   5401     5268       7273          868

[KernelShark success] Running this test under trace-cmd and viewing it with KernelShark yields a graph with lots of pretty colors, which means we likely succeeded in our goal. To prove that the highest priority thread did indeed migrate, we can plot the thread itself.

Using the "Plots" menu and choosing "Tasks" brings up the same type of dialog as the task filter that was described earlier. I selected the highest priority thread (migrate-2158), and zoomed in to get a better view. The colors on a task plot are determined by the CPU number it was running on. When a task migrates, the colors of the plot changes.

[KernelShark task plot]

This test now demonstrates how a high priority task can migrate substantially when other RT tasks are running on the system. Changes to the real-time scheduler can now be tested. The commit changes the decision about which thread migrates when a real-time task wakes up on a CPU running another real-time task. The original behavior was to always move the task that is waking up if there is a CPU available running a task that is lower in priority than both tasks. The commit changes this to simply wake the real-time task on its own CPU if it is higher in priority than the real-time task that is currently running there.

The migrate test now shows:

   Task    vol   nonvol   migrated   iterations
      0     52     2923       2268          108
      1    569     1529       1457          109
      2    801     1961       2194          109
      3    808      789       1274          109
      4    810       61        155          109
      5    813       10         57          109
      6    827       35         81          110
      7    824        0          4          110
  Total   5504     7308       7490          873

The total number of migrations has stayed about the same (several runs will yield fluctuations of a few hundred), but the number of migrations for the highest priority task has dropped substantially, as it no longer migrates simply because it woke up on a CPU running another real-time task. Note that the reason the highest priority task migrated at all is that it woke up on a CPU that was running the task holding the mutex it was blocked on. As these are priority-inheritance mutexes, the owner would have the same priority as the highest priority process it is blocking, and the wakeup will not preempt a real-time task of equal priority. Perhaps that can be the next change to the real-time scheduler: make the wakeup code aware of priority-inheritance mutexes.

[KernelShark after change]

The highest priority thread (migrate-21412) was woken on CPU 3, which was running thread 1 (migrate-21406), the task on which thread 7 originally blocked. CPU 2 happened to be running thread 0 (migrate-21405), which was the lowest priority thread running at the time. Note that the empty green box at the start of the task plot represents the time between when the task was woken and when it was actually scheduled in.

Using KernelShark allowed me to analyze each of my tests to see if they were doing what I expected them to do. The final test was able to force a common scenario where a high priority process is woken on a CPU running another real-time task, forcing the scheduler to decide whether or not to migrate the waking task. This allowed me to see how changes to that decision affected the results.

This article demonstrates a simple use case for KernelShark, but there are a lot more features that aren't explained here. To find out more, download KernelShark and try it out. It is still in beta and is constantly being worked on. Soon there will be plugins that will allow it to read other file formats and even change the way it displays the graph. All the code is available and under the GPL, so you can add your own features as well (hint hint).

Comments (6 posted)

Patches and updates

Kernel trees

Architecture-specific

Build system

Core kernel code

Development tools

Device drivers

Filesystems and block I/O

Memory management

Virtualization and containers

Page editor: Jonathan Corbet

Distributions

Ubuntu and Qt, MeeGo and GTK+

February 2, 2011

This article was contributed by Nathan Willis

The waning days of January brought two intriguing developments in the world of Linux distributions and their application frameworks. On January 18th, Canonical's Mark Shuttleworth announced that starting with Ubuntu 11.10, the distribution would ship Qt libraries in the base ISO image, and was underwriting some development work to make it easier for Qt applications to tie in to the rest of the desktop. That news largely overshadowed the GNOME Foundation's January 17th announcement that it had hired Igalia to integrate the GTK+ toolkit into the MeeGo Handset platform, and to merge components from Maemo's Hildon framework upstream into GTK+ itself.

On the surface, both efforts mark the availability of a new framework in a distribution that, until now, has seemingly sat entirely on one side of the "Qt/GTK+" fence (Ubuntu being GNOME-based, and MeeGo being Qt-based). But neither situation is as clear-cut as that. Canonical has always provided Qt libraries and Qt-based applications through its repositories, just not on the Ubuntu ISO image itself. MeeGo officially supports Qt as the third-party application developer platform, but it also includes many components from the GNOME stack.

Ubuntu plus Qt

Shuttleworth introduced the Qt announcement by saying that ease of use and effective integration are the key values in Ubuntu's user experience, and that although the distribution has historically given very strong preference to GTK+ applications, a toolkit itself is merely a means to an end. When evaluating whether or not to make a particular application part of the default install, he continued, the questions to ask are whether it is free software, whether it is "best-in-class," whether it integrates with the system settings, preferences, and other applications, whether it is accessible, and whether it looks and feels consistent with the rest of the desktop.

Although there are plenty of excellent Qt applications that meet most of those requirements, the sticking point in previous releases has been the fact that GTK+ applications all use the same centrally-managed preferences store (dconf), while Qt applications typically use KDE's. Aside from storing its own preferences in a separate location, a Qt application on Ubuntu does not have access to system-wide settings that affect its integration — font rendering, sound, peripheral settings, etc.

To fix this, Canonical has contracted developer Ryan Lortie to write dconf bindings for Qt. It is not yet clear what form Lortie's dconf work will take — some have suggested Qt's QSettings, others KDE's KConfig. Currently the plan for Ubuntu seems limited just to shipping Qt libraries in the 11.10 ISO, but Shuttleworth left the door open for individual Qt-based applications to be included, too.

It does not sound like Ubuntu is considering core KDE applications in this category, but rather standalone Qt applications. As Ryan Paul pointed out at ars technica, KDE applications "come with heavy KDE infrastructure dependencies and have KDE-centric behaviors," while Ubuntu continues to develop a GNOME-based desktop. On that topic, Shuttleworth even explicitly said that the decision to add Qt libraries was "in no way a criticism of GNOME," and reiterated that the distribution is making GNOME the focus of its design work. Nevertheless, some in the comments on Shuttleworth's post appeared to read the announcement as a "move" away from GNOME and towards KDE.

There were also vocal reactions from many KDE supporters that seemed to interpret the announcement as an attempt to coerce KDE developers into altering their code to support Ubuntu specifically. KDE's Aaron Seigo called it "dictating" to Qt developers and "not that much different from saying that Qt apps should just use Gtk+ for rendering so they fit in better." He also said that KDE and Qt have led the way in defining standards (citing freedesktop.org), and contrasted the project with Canonical, saying the company had "historically taken rather heavy-set stances that worked against" giving developers the best choice of applications.

Seigo's blog is frequently inflammatory, of course — ironically, several commenters on Shuttleworth's announcement linked to an older post by Seigo in which he lambastes freedesktop.org as "messed up" and a "self-important disappointment" for developing dconf in the first place. A lower-key criticism came from openSUSE's community manager Jos Poortvliet, who referred to the project as "creating a special Ubuntu world" by keeping the dconf bindings Ubuntu-only, rather than integrating them with upstream GNOME and GTK+.

But it is not clear whether Lortie's dconf work truly will remain Ubuntu-only, or whether it will be embraced upstream — there are still simply too many unknowns. Ubuntu's community manager Jono Bacon posted a FAQ entry about the plan, but it offered no elaboration on the development process itself. Jim Campbell pointed to the contributor agreement that Canonical uses for other projects as a concern and suggested that it might cause the work to be Ubuntu-only.

Campbell also raises another interesting question in his post: whether GNOME developers will be enticed to write applications with Qt in general. Despite long-standing divisions between the Qt and GNOME frameworks, there is no reason an application must use GTK+ for its widget toolkit simply because it uses other GNOME libraries. It rarely happens, though, perhaps because no major distribution installs both Qt and GTK+ libraries by default; indeed, a fact frequently overlooked in the discussion is that when Ubuntu ships both, it will be the first major distribution to do so. Even distributions like openSUSE that offer users a choice of desktop environments at install time typically install one complete stack or the other.

Having both frameworks available at the same time could indeed make mixed-framework applications possible, which Paul also observed. That prospect does not thrill everyone, though. Blogger Martin Espinoza told LinuxInsider it amounts to more bloat and more dependencies in an already tight ISO image. Several suggested to Shuttleworth that it may be time for Ubuntu to move from a CD to a DVD image to cope with the increasing bulk of the default install.

MeeGo plus GTK

While the Canonical project is an example of a distribution choosing to draw in another framework, the GNOME Foundation's announcement is the opposite: a framework cozying up to a distribution. Prior to MeeGo's birth, the Maemo distribution for Nokia handsets was based primarily on GNOME frameworks, including the GTK+ widget toolkit. After Nokia's acquisition of Trolltech, however, the company changed directions, and rewrote Maemo 5 with Qt.

Things got more complicated with the combined Maemo-plus-Moblin-equals-MeeGo stack. The MeeGo project officially recognizes Qt as the third-party development platform on which the SDK is based, and around which its marketing to device makers centers. But the overall MeeGo architecture still depends heavily on other, non-Qt components, including Cairo, Clutter, Pango, GConf, Telepathy, GLib, D-Bus, GStreamer, ATK, and Evolution Data Server. So it should not be surprising that the GNOME Foundation was interested in funding development to bring the GTK+ portion of the GNOME platform to MeeGo as well.

The Foundation put out a call for bids in October of 2010, detailing three requirements: ensuring GTK+ applications would run on the MeeGo Handset UX (User eXperience), adding upstream components to GTK+ to facilitate running GTK+ applications on MeeGo, and merging the functionality of Maemo's Hildon framework into upstream GTK+. The contractor chosen in January's announcement, Igalia, is a contract company based in Spain that contributes to GNOME, GStreamer, WebKit, and other open source projects. In the announcement, the Foundation said that Igalia's application "focused the most on integrating elements of Hildon into GTK+ upstream," and that this emphasis would make it easier to port desktop-based GTK+ and Maemo applications to MeeGo.

Hildon is the application framework originally created by Nokia for Maemo. It includes desktop components, an input system suited for touch-based and onscreen keyboard input, finger-friendly menu and user interface widgets, kinetic scrolling, and other handset-oriented features. Igalia started working on Hildon when Nokia shifted its Maemo attention to Qt. The GNOME-funded work will support two developers at the company, Claudio Saavedra and Carlos García Campos.

The announcement and the Igalia site are both short on details, but it does sound like the emphasis will be merging existing Hildon and mobile technologies into GTK+ proper, rather than maintaining a separate project. With the MeeGo project's self-proclaimed "upstream first" philosophy, that approach would make the most sense. But it is the reaction from MeeGo that is the biggest unanswered question. The project's support for Qt as a development framework is enthusiastic — which is what one would expect from Qt's corporate parent Nokia.

Whether or not the project will support GTK+ in future releases remains to be seen. When MeeGo launched in 2010, the architecture diagram included both widget toolkits, though it does not anymore. The FAQ still states that MeeGo will include GTK+, but it is absent from the developer documentation.

Rumors are that the Clutter-based interface on the Netbook UX will be replaced. Of course, the Netbook UX is already more GTK+-heavy, including desktop GNOME applications like Evolution, Banshee, and Empathy. Perhaps the real story there is merely how different the various MeeGo UXes really are: they are not simply finger-, keyboard-, or remote-based recasts of the same interface; they have very different components. The Netbook UX is still largely derived from Moblin, and the Handset UX comes from Maemo 5.

There is nothing wrong with that approach; MeeGo is most accurately described as a meta-distribution encompassing several distinctly different siblings. But at 2010's MeeGo Conference in Dublin, one of the key messages was that all MeeGo releases would come with a compliance-testing guarantee: an application that runs on one MeeGo device will run on any MeeGo device.

That guarantee rested largely on outside developers using Qt and QtMobility as the development framework, so one has to wonder how the project — particularly the program managers — will react when presented with a revamped and actively-developed GTK+ for Handsets that competes for developer attention with the official solution. MeeGo's governance structure is always described as a meritocracy, where anyone can contribute. Hopefully, as is true on the desktop, that will prove true, and developers can take their pick of frameworks.

Oranges and Oranges

In the space of 24 hours, a GNOME-based distribution announced that it would start shipping Qt libraries enabled, and the GNOME Foundation announced that it would pay to develop GTK+ for the Qt-based MeeGo. It would be nice if the open source community saw both situations the same way: as big players in the Linux ecosystem doing their best to give developers more choices for how to create their applications.

In neither situation does the new development work indicate that the distribution is "moving" away from one framework to the other. Unfortunately, however, the often dichotomous KDE-versus-GNOME mindset contributed a distracting amount of noise to the discussion. Partly that is because people erroneously equate Qt with KDE, and just as erroneously equate GTK+ to Qt. Neither comparison is apt. GTK+ is just the widget toolkit; the proper parallel to Qt is the entire GNOME Platform. KDE is a distinct project from Qt, and is an environment built on top of the Qt platform.

It also does way more harm than good to speculate on things like Ubuntu "switching" from GNOME to KDE (or even from GNOME to Qt). Commenters pointed to the 2D, fallback version of Unity as the secret reason why Shuttleworth decided to add Qt to 11.10. I personally suspect it has more to do with Shuttleworth's recent infatuation with Scribus, although I lack hard evidence. Considering that no specific Qt applications have been discussed for inclusion, it seems like the Qt inclusion is designed more to reach out to the "opportunistic developers" Ubuntu wants to attract than it is to bend existing Qt developers to Canonical's will.

Shuttleworth hits the nail on the head when he calls a toolkit a means to an end. That's inherent in the idea of a "toolkit." Whatever you may think of Canonical's motivations in funding Qt dconf work, or the GNOME Foundation's motivations in funding MeeGo GTK+ work, both projects are going to be empowering for developers — which is what the community usually cares most about in the long run.

Comments (14 posted)

Brief items

Distribution quote of the week

If you're wondering why people don't follow your instructions to help you with your project, go hit your local library and check out a cookbook. Bake something you've never baked before. Then, while eating it, open your documentation again and take a look at it with this in mind.
-- Mel Chua from her FUDCon lightning talk

Comments (4 posted)

TurnKey Linux 11 released

TurnKey Linux 11, which is based on Ubuntu 10.04.1, has been released. "TurnKey Linux is a popular virtual appliance library that helps users save time and money by discovering and leveraging the best free open source software. 45+ ready-to-use solutions can be deployed in minutes to bare metal, virtual machines or launched on-demand in the cloud. [...] TurnKey makes open source just work. Solutions are built and pre-tested by a community of experts, prioritizing ease of use and security. Standard features include sleek web management interfaces, smart backup and restore automation and daily security updates."

Comments (none posted)

Foresight Linux 2.5.0 ALPHA 2 GNOME Edition released

Foresight Linux 2.5.0 Alpha 2, GNOME edition has been released. "Well known for being a desktop operating system featuring an intuitive user interface and a showcase of the latest desktop software, this new release brings you the latest GNOME 2.32 release, a newer Linux kernel 2.6.35.10, Xorg-Server 1.8, Conary 2.2 and a ton of updated applications!"

Full Story (comments: none)

Distribution News

Debian GNU/Linux

Debian plans "live commenting" of the Squeeze release

Debian plans to do "live commenting" of the Squeeze (6.0) release process on Identi.ca. In addition, it is looking for "fun Debian facts" to fill in: "However, several steps of the release process are quite boring (e.g. waiting for the CD, DVDs and blue rays for 11 archs are builds). Therefore we would also like to fill this emptiness with funny or otherwise interesting facts about Debian (e.g. the 150'000 bugs closed in the two years since lenny got released)." (Thanks to Paul Wise.)

Comments (none posted)

Debian administrators for the Google Summer of Code

Debian project leader Stefano Zacchiroli has announced that three people will be the administrators for the project's participation in the Google Summer of Code project: Ana Guerrero, Obey Arthur Liu, and Sylvestre Ledru. "GSoC admins coordinate Debian participation, interact with the students who are often new to Debian, and indirectly deal with the money who are used to sponsor the initiative. It is a role of responsibility and involves representing Debian in various ways. I'm therefore pleased to properly delegate the role to this year GSoC admins; see delegation text reported below. [...] I'm confident we will hear back soon from the GSoC admins about how we can help. In the meantime you can start thinking at your project proposal to both improve Debian and reach out to new contributors."

Full Story (comments: none)

Fedora

An anthropologist's view of an open source community (Opensource.com)

Opensource.com reports on a talk by anthropologist Diana Harrelson at FUDCon [Fedora Users and Developers Conference], which was held at the end of January in Tempe, Arizona. The talk focused on Harrelson's study on the Fedora community. "'My entire research was just to find out why you guys do it,' Diana said in her talk. Motivation may seem more obvious to those within communities, but from the outside, it looks more like doing a lot of hard work for no pay. [...] High on the list of reasons were learning for the joy of learning and collaborating with interesting and smart people. Motivations for personal gain, like networking or career benefits, were low on the list. Self motivation, however, is important, as seen in comments from multiple contributors who said things like, 'Mainly I contribute just to make it work for me.'"

Comments (16 posted)

Mandriva Linux

Mandriva 2011 delayed by two weeks, Technology Preview available

Mandriva has announced a delay in the release of Mandriva 2011: "Due to a huge number of big changes in Mandriva 2011 so far, combined with rpm5 migration both in the repositories and inside the build system, we have decided to shift the release dates for Mandriva 2011 by two weeks, to give us a better time period to fit the remaining pieces." Mandriva 2011 final is now scheduled for June 13. In the meantime, a Technology Preview is now available: "The Technology Preview showcases what will be inside the first Mandriva 2011 Alpha version. It already comes with rpm5, native systemd, networkmanager support, KDE 4.6.0, kernel 2.6.37, firefox 4b10, X.org server 1.9, clementine 0.6 and lots of updated packages everywhere."

Full Story (comments: none)

Newsletters and articles of interest

Distribution newsletters

Comments (none posted)

ArchBang Linux 2011.01 brings new look (The H)

The H briefly looks at the ArchBang 2011.01 release. "The ArchBang project has released the 2011.01 edition of its ArchBang Linux distribution, code named "Symbiosis". Like Arch Linux, upon which it is based, ArchBang is a simple and lightweight Linux distribution for i686 and x86-64 platforms aimed at Linux users who want to create "their own ideal environment" and install only what they need. However, ArchBang uses the minimalistic Openbox window manager with support for its pseudo-tiling functions."

Comments (none posted)

Page editor: Rebecca Sobol

Development

Facebook helps establish Supercell testing infrastructure

February 2, 2011

This article was contributed by Joe 'Zonker' Brockmeier.

Oregon State University's Open Source Lab (OSU-OSL) has gotten a hand from Facebook to create an on-demand testing infrastructure for open source projects called Supercell. The idea behind Supercell is to provide limited-duration hosting for open source projects that need to test on specific operating systems, as well as providing facilities for projects to test software in a large cluster with several VMs running concurrently. When finished, Supercell will provide test infrastructure for open source projects that don't have their own server farm and testing infrastructure.

The project was announced on Thursday, January 20 by Facebook's Scott MacVicar. MacVicar wrote that there's a disparity in development resources between many open source projects and companies doing in-house development of software: namely that many open source projects lack the kind of hardware and testing infrastructure that companies have at their disposal.

To help solve the problem, Facebook has decided to donate hardware and funding to OSU-OSL to develop Supercell, a service for projects to test on multiple operating systems and architectures. The Open Source Lab provides hosting to quite a few open source projects and communities, so it's not surprising that Facebook would look to OSL for assistance with Supercell. Why is Facebook interested? OSL's Leslie Hawthorn said that, while she didn't want to speak for Facebook, her conversations with the company indicated that Facebook's goal with open source is to "let people make useful stuff," and that OSL was a natural partner because "they know we're neutral, and we're here for the benefit of open source."

Currently the hardware is x86 and AMD64, with a number of guest OSes available. At present, Supercell supports Debian Lenny (5.0), CentOS 5.5, Gentoo, Gentoo Hardened, and Ubuntu Lucid (10.04), Karmic (9.10), and Maverick (10.10). According to MacVicar, some Mac OS X servers (two Apple Xserves) are also available "for those projects that explicitly need to test on Mac OS." The current hardware for Supercell is two Dell servers with 4 Opteron 2.1GHz 12-core CPUs and 128GB of RAM each, another with 2 Intel E5620 2.4GHz 4-core CPUs and 12GB of RAM, and 12TB of disk for NFS storage.

Plans are also on the table to support Fedora, FreeBSD, and OpenBSD in the near future, and OSU-OSL is evaluating feedback from the community in deciding on additional OSes and architectures. According to the FAQ, Supercell may support Alpha, ARM, ARMel, PowerPC, SPARC, and others as a longer term goal.

The hardware cluster is being managed by Ganeti and Ganeti Web Manager on top of Linux and KVM. The entire stack under Supercell is, of course, open source. Ganeti, which was originally developed by Google, is a tool for virtualization management that handles deploying and managing virtual machine instances on top of KVM or Xen. According to its documentation, Ganeti can deploy a new virtual machine running Ubuntu in under 30 seconds, including hostname, networking, and serial console setup. Images are gzipped tarballs or filesystem dumps of an operating system, usually between 200 and 400MB in size.

Ganeti has been in development for some time, with its initial public release in August 2007 under the GPLv2. Google started the project in its Zurich office for cluster management of virtual servers on commodity hardware — pretty much the same thing that OSU-OSL and Facebook were looking for.

The Ganeti Web Manager is a bit newer, the result of a collaboration between OSL and students from the 2010 Google Code-In. Ganeti Web Manager is a Django-based application that provides web-based management for Ganeti clusters. It's still maturing, but the 0.4 release from December 22, 2010 is considered "enough to get people to start using it in production," according to OSL's Lance Albertson. The 0.4 release implements basic VM management, a VNC console, a permissions system for managing clusters and virtual machines, and SSH key management.

Since the service is "on-demand," what happens when a project comes back for a second round of testing? According to OSL's operations manager Jeff Sheltren, OSL plans to tie into a configuration management framework such as Puppet so projects can save and reuse configurations. "This will allow OSL to provide a base set of standard configurations people can use (think: 'I need a LAMP stack') as well as giving projects the ability to fine tune their environment and re-use that configuration for future VMs."

Projects eager to get hands-on with Supercell will have to wait a few more months, at least if they're hoping to use OSU's hardware and services. The service is considered early alpha at this point, with a projection that it will be ready by the third quarter of 2011. In the interim, OSU is looking for additional sponsors for Supercell. Sheltren says that Facebook's donation amounts to about $50,000 in hardware and funds. This will support a fair amount of development, but there's plenty of work ahead.

Currently there is no discussion list for the Supercell service, but OSL is looking for feedback on which other operating systems Supercell could support, along with other requests from the community. By submitting the feedback form, interested parties can sign up for the Supercell announcement list as well. Hawthorn did say that OSL may set up a discussion list, and will be providing regular updates about Supercell via the blog and Twitter.

But projects looking to implement their own "Supercell" can start today: just add hardware. Developers interested in helping with Ganeti and Ganeti Web Manager can find more information on the wiki, including the mailing list and documentation.

Though Ganeti Web Manager is considered "production ready" by Albertson, it still has a lengthy roadmap of features that OSL plans to integrate: templates for virtual machines, the ability to modify or reinstall VMs, support for noVNC instead of the Java VNC client, and serial console support.

With any luck, Facebook won't be the sole supporter of Supercell outside of OSU-OSL. The project has a lot of potential to provide a much-needed facility for short-term testing resources that many projects simply couldn't afford.

Comments (none posted)

Brief items

Quotes of the week

To be honest, I don't really understand how Emacs development works -- it seems to be an anarchic free-for-all, but I have trouble accepting that this could possibly be the case.
-- Tom Tromey

I'm coming to think of POSIX compatibility more as a legacy matter than something genuinely useful in a Unicode world.
-- Tom Christiansen

Comments (none posted)

Celery 2.2 released

Celery is "an asynchronous task queue/job queue based on distributed message passing. It is focused on real-time operation, but supports scheduling as well." The version 2.2 release adds support for eventlets and gevents, Jython support, built-in virtual transports, messaging with Kombu, and more.

Full Story (comments: 1)

crosstool-NG 1.10.0 released

crosstool-NG is a system for the easy building of cross-compiler toolchains. "crosstool-NG can build from generic, general purpose toolchains, to very specific and dedicated toolchains. Simply fill in specific values in the adequate options." The 1.10.0 release is out; it adds support for gcc 4.5.2, the gold linker, gcc plugins, and building statically-linked toolchains.

Comments (1 posted)

HTSQL 2.0-FINAL released

HTSQL is a gateway allowing the encoding of SQL queries in URL syntax. "The target audience for HTSQL is the accidental programmer -- one who is not a SQL expert, yet needs a usable, comprehensive query tool for data access and reporting." The idea is interesting but one hopes that the security aspects have been thought through; there is little mention of security on the project web site.

Full Story (comments: none)

KDevelop 4.2

KDevelop 4.2 has been released. There are many improvements in this release, including search-and-replace functionality, better documentation, improved C++ support, a "rename assistant," and more.

Comments (1 posted)

MPlayer 1.0rc4 released

The MPlayer 1.0rc4 release is out. "Notable additions are VP8 decoding, H.264 bug fixes and speedups, unencrypted Blu-ray support. Network streams can now be played through FFmpeg, there has been quite a bit of subtitle work and Ogg and Matroska demuxer defaults were switched to libavformat. The window position is now decided by the window manager."

Full Story (comments: 1)

Pyramid 1.0 released

The developers of repoze.bfg have come to the credible conclusion that "Pyramid" is a better name for their web application framework project. The Pyramid 1.0 release is now available; new features (beyond the name change) include a number of template improvements, built-in session support, support for Mako templating, improved configuration, and more. See the "what's new" page for more information.

Comments (none posted)

Newsletters and articles

Development newsletters from the last week

Comments (none posted)

Openmoko Community Update

The Openmoko Community Update for February 1 has been released with news of new Openmoko hardware. "GTA04 is a project by the long time distributor and hw developer, German company Golden Delicious. The name is loaned from Openmoko project because of the spiritual continuation - GTA01 was the codename for Neo1973, GTA02 was the Neo FreeRunner, and GTA03 was the canceled successor product. Besides offering improved versions of Neo FreeRunner (better battery life, better audio output), they've a complete replacement board planned to fit an existing Neo FreeRunner case and use the existing display." The new board has an OMAP3530 ARMv7 CPU, UMTS/3G (HSPA), USB 2.0 OTG, WLAN, Bluetooth, an FM transceiver, a barometric altimeter, an accelerometer, a compass, a gyroscope, and optionally a camera. The update also includes information on various distribution releases for Openmoko devices. (Thanks to Sam Tygier.)

Comments (5 posted)

Open source powers new Aussie space race (Computerworld)

Computerworld reports on a linux.conf.au talk about an open hardware/software effort targeting a moon landing. The Lunar Numbat project aims to put a Linux-powered robotic marsupial on the moon. "The Australia defence force has donated a stockpile of rockets like the Zuni missile to ASRI for experiments. 'We are designing and implementing electronic controllers for the rockets using open software and open hardware,' [Luke] Weston said. Back in 2004 researchers at the University of NSW also worked on putting Linux in space for one of its satellite projects. The software is generally licenced under the GPL and the hardware under the TAPR Open Hardware licences."

Comments (none posted)

Phipps: The Open-By-Rule Governance Benchmark

Simon Phipps offers some suggestions on how to judge a project's governance. "Can you find everything about the community, including why things happened as well as what happened? Are all the governance conversations visible (apart from the bits where personal privacy is appropriate)? Can you track all the commits and find out why each was made? An open-by-rule community will have it all there somewhere, including the dirty laundry (arguments, trolls and the like)."

Comments (10 posted)

Zander: Views and a Conversion

Thomas Zander presents his vision of KOffice on the KOffice.org blog. The Calligra suite split off from KOffice back in December, and Zander is describing what he sees for KOffice going forward. "My idea for KOffice is to make a set of applications that help people get their work done by being fun and with those plugins there are unimagined possibilities to get users to have fun and to get more types of work done quicker. Specifically the idea of what an office suite is should be [challenged]. Its useless to try to compete in an already saturated market by just copying the competition. Just like with KDE4 its much more fun to build a platform that allows both the old and imaginative new usages."

Comments (7 posted)

Page editor: Jonathan Corbet

Announcements

Brief items

The last IPv4 address blocks allocated

APNIC [Asia-Pacific Network Information Center] has announced that it has received the last two freely available IPv4 address blocks from the Internet Assigned Numbers Authority (IANA). Under the existing plan, IANA will distribute the five remaining address blocks, one to each of the five Regional Internet Registries (RIRs). The RIRs will then distribute addresses within those blocks to organizations in their regions. That means that IANA is out of IPv4 address space, and the RIRs won't be too far behind: "APNIC expects normal allocations to continue for a further three to six months. After this time, APNIC will continue to make small allocations from the last /8 block". Furthermore, "APNIC reiterates that IPv6 is the only means available for the sustained ongoing growth of the Internet, and urges all Members of the Internet industry to move quickly towards its deployment."
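
For a sense of scale: each of those blocks is a /8, meaning one fixed octet and 2^24 possible addresses. A quick back-of-the-envelope calculation (plain arithmetic, not allocation data):

    # Each /8 block covers 2**(32-8) addresses.
    per_slash8 = 2 ** 24
    print "addresses per /8:  %d" % per_slash8        # 16777216
    print "in the final five: %d" % (5 * per_slash8)  # 83886080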

Comments (188 posted)

Re-branding Blender (Blender Foundation)

The Blender Foundation has put out a press release about two companies that are re-branding the Blender 3D content creation suite and selling it. While that is not a GPL violation of any kind, the companies are playing fast and loose with copyright: "The companies IllusionMage and 3Dmagix resell via their websites Blender under their own name. Both websites are probably managed by the same person or company. [...] On their web pages they intentionally hide that the products are distributions of GNU GPL licensed software, and that the software is freely downloadable as well. More-over, even after contacting them several times, they don't remove copyrighted content from their websites. A lot of text and images have been copied from blender.org and random images - not even from blender - were copied from various CG [Computer Graphics] websites." (Thanks to Paul Wise.)

Comments (16 posted)

EFF Urges Supreme Court to Crack Down on Bad Patents

The Electronic Frontier Foundation, along with Public Knowledge and the Apache Software Foundation, has asked the US Supreme Court to make it easier to invalidate bad patents. "In an amicus brief filed in Microsoft v. i4i, EFF argues that the existing high standard of proof for invalidating a patent in federal court unfairly gives the owners of bad patents the upper hand. Currently, when a defendant is accused of infringing a patent, the Federal Circuit wants to see "clear and convincing" evidence that that patent is illegitimate and the case against it unfounded. This is in contrast to the standard of proof for most civil cases, which is a "preponderance of the evidence" -- or a showing that more likely than not the allegations are true. In software cases, "clear and convincing" evidence of patent invalidity can be hard to come by, as source code is constantly changing over the life of a product and much of the original code is often unavailable. This is a particular problem with free and open source software, as the collaborative nature of the projects make documentation even harder."

Full Story (comments: none)

Articles of interest

Predictions for 2011 (Freedom to Tinker)

A little belated, perhaps, but the Freedom to Tinker blog (from Princeton University's Center for Information Technology Policy, which is directed by Ed Felten) has put out its predictions for the year. It's always an interesting read; this year there are 25 separate predictions, including: "2011 will see the outbreak of the first massive botnet/malware that attacks smartphones, most likely iPhone or Android models running older software than the latest and greatest. If Android is the target, it will lead to aggressive finger-pointing, particularly given how many users are presently running Android software that's a year or more behind Google's latest—a trend that will continue in 2011."

Comments (10 posted)

Microsoft Phone 7 Is Dead in the Water (pcmag.com)

For a little Thursday amusement, take a peek at John C. Dvorak's latest column at pcmag.com. He has an—ummm—interesting view of Linux and open source software, but he thinks it is time for Microsoft to adopt it: "The fact is Microsoft is zigging when it should be zagging. It needs to open a new division that has nothing to do with the rest of the company, so Open Source code can't come into contact with its commercial code. Here it can evolve an Open Source and Linux policy with products for sale and support services. The company needs to get back to an even footing with Google in the phone and, soon, the pad business. It may not catch up with Apple insofar as innovation is concerned, but it can't afford to languish and constantly be humiliated by seemingly pointless and dead-end rollouts."

Comments (84 posted)

Final round of FOSDEM speaker interviews

The last round of FOSDEM speaker interviews is now available. It covers Manik Surtani (Infinispan), David Chisnall (Objective-C), Nicolas Spiegelberg (Facebook Messages), Chris Hofmann (Mozilla Firefox), Jos van den Oever (WebODF), Michael Meeks (LibreOffice), Chris Lattner (LLVM), and Andrew Gerrand (Go). From Van den Oever's interview: "The talk will explain what the WebODF project is about and how it can be used to add ODF support to your website or desktop application. There are several good Free Software solutions for working with ODF on the desktop and on mobile devices, notably LibreOffice and Calligra. These are written in C++, are compiled natively, and need to be installed on each machine on which they are used. Cloud solutions can be run in the browser, but there was no Free Software ODF software for the browser." FOSDEM will be held February 5-6 in Brussels, Belgium.

Comments (none posted)

New Books

Book sprint results in "An Open Web"

FLOSS Manuals has coordinated an effort to produce a free book called An Open Web, which is now available. The book was made with free software and is open to contributions from anyone. "The process for making the book is known as a 'Book Sprint.' It is an intensive and innovative methodology for the rapid development of books. It took five people and locked them in a room in Berlin's CHB for five days with the goal to produce a book with the sole guiding meme being the title — An Open Web. The authors had to create the concept, write the book, and output it to print in 5 days."

Full Story (comments: none)

Arduino: A Quick-Start Guide--New from Pragmatic Bookshelf

Pragmatic Bookshelf has released Arduino: A Quick-Start Guide by Maik Schmidt.

Full Story (comments: none)

Resources

Bufferbloat.net launches

The bufferbloat.net web site is up. It is meant to be a place for developers and administrators to work on solving bloat-related problems; it currently hosts a few mailing lists and a talk on bufferbloat by Jim Gettys.

Comments (2 posted)

Tiny Linux Plug Computers: Wall Wart Linux Servers (LinuxPlanet)

Over at LinuxPlanet, there's a brief introduction to Linux-based "plug" computers. "Fortunately, there's a class of computers ideally suited to that sort of job: "plug computers", sometimes called Sheevaplugs after an early model. The whole computer is built into the bit that plugs into the wall, so they're barely bigger than a normal "wall wart" power supply. They use power-efficient ARM CPUs, so you can run a server with only 5 watts. They're inexpensive, usually just over $100 for a plug with 512M RAM and 512M flash. Best of all, they come with Linux installed right out of the box."

Comments (7 posted)

Contests and Awards

Call for nominations for the 13th Annual Free Software Awards

The Free Software Foundation and GNU project are seeking nominations for the 13th annual Free Software Awards. Nominations are open until February 16. "The Free Software Foundation Award for the Advancement of Free Software is presented annually by FSF president Richard Stallman to an individual who has made a great contribution to the progress and development of free software, through activities that accord with the spirit of free software. [...] Nominations are also open for the 2010 Award for Projects of Social Benefit. The Social Benefit award recognizes a project that intentionally and significantly benefits society through collaboration to accomplish an important social task."

Full Story (comments: none)

Education and Certification

The Linux Foundation offers Android and MeeGo developer training

The Linux Foundation has announced new training courses in Android and MeeGo development. "The Android and MeeGo developer courses will help meet new demands for Linux training and help to fill open positions at a variety of The Linux Foundation's member companies. These courses will give professionals lucrative job skills while helping to advance Linux in this space."

Full Story (comments: none)

Linux Professional Institute Hosts Exam Labs at SCALE and Indiana Linux Fest

The Linux Professional Institute will be hosting exams for LPI certification (LPIC) at two upcoming conferences. All three levels of LPIC exams will be offered at SCALE 9x (Los Angeles, California: February 27, 2011) and Indiana Linux Fest (Indianapolis, Indiana: March 27, 2011). Click below for more information.

Full Story (comments: none)

Calls for Presentations

OSCON Call for Proposals closes February 7

The call for proposals for OSCON is coming to a close on February 7. "OSCON (O'Reilly Open Source Convention), the premier Open Source gathering, will be held in Portland, OR July 25-29. We're looking for people to deliver tutorials and shorter presentations."

Full Story (comments: none)

Upcoming Events

Events: February 10, 2011 to April 11, 2011

The following event listing is taken from the LWN.net Calendar.

Date(s): Event (Location)

February 7-11: Global Ignite Week 2011 (several, worldwide)
February 11-12: Red Hat Developer Conference 2011 (Brno, Czech Republic)
February 15: 2012 Embedded Linux Conference (Redwood Shores, CA, USA)
February 25: Build an Open Source Cloud (Los Angeles, CA, USA)
February 25-27: Southern California Linux Expo (Los Angeles, CA, USA)
February 25: Ubucon (Los Angeles, CA, USA)
February 26: Open Source Software in Education (Los Angeles, CA, USA)
March 1-2: Linux Foundation End User Summit 2011 (Jersey City, NJ, USA)
March 5: Open Source Days 2011 Community Edition (Copenhagen, Denmark)
March 7-10: Drupalcon Chicago (Chicago, IL, USA)
March 9-11: ConFoo Conference (Montreal, Canada)
March 9-11: conf.kde.in 2011 (Bangalore, India)
March 11-13: PyCon 2011 (Atlanta, Georgia, USA)
March 19: Open Source Conference Oita 2011 (Oita, Japan)
March 19-20: Chemnitzer Linux-Tage (Chemnitz, Germany)
March 19: OpenStreetMap Foundation Japan Mappers Symposium (Tokyo, Japan)
March 21-22: Embedded Technology Conference 2011 (San Jose, Costa Rica)
March 22-24: OMG Workshop on Real-time, Embedded and Enterprise-Scale Time-Critical Systems (Washington, DC, USA)
March 22-25: Frühjahrsfachgespräch (Weimar, Germany)
March 22-24: UKUUG Spring 2011 Conference (Leeds, UK)
March 22-25: PgEast PostgreSQL Conference (New York City, NY, USA)
March 23-25: Palmetto Open Source Software Conference (Columbia, SC, USA)
March 26: 10. Augsburger Linux-Infotag 2011 (Augsburg, Germany)
March 28-April 1: GNOME 3.0 Bangalore Hackfest | GNOME.ASIA SUMMIT 2011 (Bangalore, India)
March 28: Perth Linux User Group Quiz Night (Perth, Australia)
March 29-30: NASA Open Source Summit (Mountain View, CA, USA)
April 1-3: Flourish Conference 2011! (Chicago, IL, USA)
April 2-3: Workshop on GCC Research Opportunities (Chamonix, France)
April 2: Texas Linux Fest 2011 (Austin, Texas, USA)
April 4-5: Camp KDE 2011 (San Francisco, CA, USA)
April 4-6: SugarCon ’11 (San Francisco, CA, USA)
April 4-6: Selenium Conference (San Francisco, CA, USA)
April 6-8: 5th Annual Linux Foundation Collaboration Summit (San Francisco, CA, USA)
April 8-9: Hack'n Rio (Rio de Janeiro, Brazil)
April 9: Linuxwochen Österreich - Graz (Graz, Austria)
April 9: Festival Latinoamericano de Instalación de Software Libre

If your event does not appear here, please tell us about it.

Page editor: Rebecca Sobol


Copyright © 2011, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds