Now imagine that you have been planning an event for over a year. Two weeks before the conference, venues, equipment, accommodations, transportation, social events, and more are all in place. Then the host city is hit by catastrophic floods, the venues for both the conference and the social events are taken out of commission, and the routers for the wireless network are soaking at the wrong end of a flooded warehouse. Even if a new venue can be found, it will no longer be within walking distance of the accommodations, so transportation must be arranged on short notice.
That is the point where the ghosts run out of useful experience to share. It is also the point where an insufficiently determined group would simply give up.
The organizers of LCA 2011, held in Brisbane, would appear to be a determined bunch indeed. They found a new venue, reprinted the conference maps, found new locations for the social events, swam through the warehouse to recover the routers, arranged new transportation for the attendees, and, beyond any doubt, did a thousand things that nobody else saw. The end result was a conference which, to anyone who did not know better, would have seemed to be planned that way all along. LCA 2011 didn't just work - it worked just as well as its predecessors. One easily runs out of superlatives when describing the job this group did; your editor only hopes that, after they have slept for a solid week or so, they have arranged a major party to celebrate what they accomplished.
There were a number of interesting sessions at this conference, many of which have been covered in these pages. Here, your editor will summarize some of the talks which, for various reasons (including simple time) were not discussed in a separate article.
Andrew 'Tridge' Tridgell has developed a reputation for energetic LCA talks focused on the simple joy of hacking; his LCA 2011 talk did not disappoint. Tridge, it seems, has become something of a coffee snob, so he has taken to roasting his own beans. That turns out to be an attention-intensive process which takes too much time away from the hacking that coffee is meant to support, so he built a Linux-powered coffee roaster out of an old bread maker, a temperature sensor, a heat gun, and a hand-made circuit for power regulation.
While demonstrating the device and hoping the fire alarms did not sound, he went into the specifics of coffee roasting and the details of how one uses LD_PRELOAD to reverse engineer a Windows temperature driver running under VirtualBox on Linux. A good time was clearly had by all. Bdale Garbee's session on the creation of a large, Linux-powered milling machine had a similar feel. Both talks will be well worth watching once the videos become available.
Daniel Bentley and Daniel Nadasi talked about the challenges that go with opening up code at Google. Internal programs tend to be heavily used and have a lot of internal contributors; these people often have a lot of worries when they are approached about releasing their code to the world. They have to be sold on the business case for opening the code, and they have to be talked past worries that their code is too ugly to see the light of day. There are also some real concerns that opening code might reveal internal information and that working with the community might slow the project down. Changing source control and build systems can also be a challenge; apparently few people at Google still remember how to write a classic makefile.
An important question is: where is the home for the code's further development? If it's developed internally, the internal folks are happy because things are working as they were before. Outsiders, who see a series of code dumps, may be less impressed. If development happens publicly then outside developers will be happier, but it can be harder for internal developers. An added factor is that any project, no matter how successfully it is opened, will be dominated by internal developers during the first part of its open existence; that tends to drive the internal development model, but that, in turn, can slow (or prevent) the development of a community around the code.
Daniel and Daniel's response to this problem is a tool called "make open easy," or "moe." With moe, internal developers can mark sections of code which should not be visible to the outside world; markings can take the form of function annotations or preprocessor-like directives. The tool can then extract the code from the internal repository, edit it according to the directives, and load it into a public repository. Importantly, it can also move code in the other direction, merging external changes while retaining the scrubbing directives. Moe makes life easier on both sides of the wall, and is in active use with a number of projects; it can be obtained from code.google.com.
Carl Worth gave a well-attended session on the notmuch mail system. Notmuch has been reviewed here in the past; your editor was mostly interested in the current and future state of this search-oriented mail tool. Recent changes include the ability to search on mail folder names - useful for migrating from a folder-based mail client. There is also synchronization with maildir flags, which is helpful for people using both notmuch and a more traditional client. There are now a few supported output styles for search operations, which should make it easier to create a web-based notmuch front-end, among other things.
In the near future, notmuch users should expect the ability to search on arbitrary mail headers and some relief from the rather inflexible date format which must be used now. Further ahead, there will be more work toward synchronization with remote mail spools; the hard part here is moving tags back and forth. Options for a solution include the addition of a special header to the messages themselves (but that could be problematic if the header leaks in a forwarded message, revealing to all the tags one uses for mail from the special people in one's life), the use of custom maildir flags, or the addition of some sort of journal replay mechanism. There is also talk of storing mail in git packs and using the git protocol to move messages (and tags) around. Even further ahead might be a notmuch backend for mutt.
Meanwhile, the project has a number of interested users but, by Carl's admission, it could benefit from a more present maintainer.
Kirk McKusick is one of the creators of BSD Unix. His fast-paced session in an overflowing room covered much of the history of the Berkeley Software Distribution, the ups and downs of hacking with Bill Joy, the AT&T lawsuit, his refusal to work for just-starting Sun Microsystems (because Apollo had the workstation market completely sewn up), and much more. The talk should eventually appear with the rest of the conference videos; there is also apparently a DVD available on Kirk's web page for those who want more.
There were far more interesting talks than your editor could possibly attend, much less write up. The good news is that the conference organizers are making the videos available quickly; they can be found (in several formats) on this blip.tv page, but this wiki page has them in a much better-organized fashion.
In summary: LCA 2011 was another great success; it would have been judged favorably against its predecessors even in the absence of natural disasters. LCA 2012 will, perhaps surprisingly, be held in Ballarat, a small city outside Melbourne. The Ballarat organizers have a hard act to follow, but history suggests they will be up to the task.
The message clearly resonated with many people in the audience, but the presentation of that message was less than pleasing to many. The speaker aimed for a high level of drama, made heavy use of profanity, and put up some slides that struck some attendees as overtly sexual in nature. In your editor's opinion, the presentation style, which was clearly intended to shock and disturb, detracted from the message which was being delivered. It also ensured that much of the subsequent talk would be about the slides and the language, and not about what was really said. Your editor, who, at the outset, wondered if he could learn something from the speaker to spice up his own talks (which are notably less dramatic), concluded at the end that there was indeed something to learn, but the lessons were all negative.
A number of attendees complained, and the organizers, in response, apologized (to applause) at the closing session. Mark later posted an apology of his own. It seemed like a reasonable handling of the situation, and the discussion could have stopped there - but it didn't.
The lca-chat mailing list, which had mostly occupied itself with (1) making Brisbane's public transportation system seem much more complicated than it really is and (2) discussing the lack of toilet paper in one of the lodging choices, hosted several threads on whether the response to the talk was right. Interested parties are encouraged to read through the threads - which remained civil throughout - for the full discussion. But there are a few things which can be summarized:
Your editor disagrees with the last group and feels that the discussion is absolutely necessary. We are partway through a process - likely to take years - aimed at making our community and its gatherings more welcoming for all those we would like to have attend. LCA 2011 adopted a new style of policy on harassment which had not been used before, and Mark Pesce's talk was the first time it was invoked. The idea that we have everything right and that no further discussion is required is, frankly, laughable. Some debugging will certainly be necessary - once we are sure we have the core design right.
While evaluating the design and pondering debugging, there are a couple of viewpoints from LCA organizers that warrant reading in full. The first is from LCA 2011 organizer Russell Stuart, who opposed the policy from the outset - though, having lost that battle, he argued for apologizing when the policy was violated. He says:
Russell fears that the policy heads toward outright censorship and should not be used by other conferences until it has been "substantially reworked." He found agreement from Susanne Ruthven, one of the lead organizers of LCA 2010 and the author of that conference's anti-harassment policy. That policy was aimed at preventing broadly-described "harassment or discrimination" and, seemingly, would not have been invoked for this talk:
Clearly there is a balance to be found here; outright harassment is not a freedom of speech issue, but the desire to create a more welcoming environment in general will almost certainly require curtailing certain types of speech. Those who see speech freedom as fundamental will resist such moves. Those who have suffered assault, or who simply do not want to circulate in a highly sexualized environment, will push in the other direction. Conference organizers - and speakers - may find themselves caught in the middle.
The problems addressed by anti-harassment policies are real. Conference attendees have had to put up with some horrifying experiences which - hopefully! - do not reflect what our community is about. Practices like the employment of booth babes or the use of women as sexually-charged attention magnets on slides do not create an environment which is conducive to the acceptance of women as equal participants. We absolutely need to clean up our act. But doing so will be an iterative process which must also respect other, equally fundamental freedoms. It's a design and debugging problem, and we are far from the final release on this bit of code.
Sendmail had its start at the University of California, Berkeley, in 1980; it was initially something Eric did while he was supposed to be working on the Ingres relational database management system. In those days, the Computer Science department had a dozen machines, but the main system was "Ernie CoVAX," which was accessed via ASCII terminals. There was a limited number of ports, so users had to connect via a patch panel in the mail room; contention for available ports was often intense.
Things got more interesting when the Ingres project got an ARPAnet connection; a single PDP11 machine, with two ports, was the only way to access the net at that time. There was no way the entire department was going to share those two ports without somebody getting hurt, so another solution was required. Eric looked at the problem, concluded that what everybody really wanted was the ability to send mail through the gateway machine, and decided that he would make a way to access email from other machines on campus. From this beginning delivermail was born.
There was a set of design principles that Eric adopted at that time. There was only one of him, so programming time was a truly finite resource. Redesigning user agents and mail stores was out of the question. Delivermail had to adapt to the world around it, not the other way around. The resulting program worked, but was not without its problems. The compiled-in configuration lacked flexibility, there was no address translation as messages moved between networks, and the parsing was simple and opaque. But it succeeded in moving mail around and giving the entire department access to the net.
Then the department got the BSD contract. Bill Joy needed a mail transfer agent to connect to the network, so he talked Eric into taking on the job. After all, how hard could it be? Among other things, the new MTA needed to support the SMTP mail protocol - which wasn't specified yet. Supporting SMTP also forced the addition of a mail queue, a job which turned out to be much harder than it looked. Eric hacked away, and Sendmail was shipped with 4.1BSD in 1982 with support for SMTP, header rewriting, queueing, and runtime configuration.
After that, Eric left Berkeley for a "lucrative" (heavy on the quotes) career in industry. Sendmail, meanwhile, was picked up by the Unix vendors. The Unix wars were in full force at that time; the inevitable result was a proliferation of different versions of Sendmail. The program became balkanized and incompatible across systems.
Eric returned to Berkeley in 1989 and started hacking on Sendmail again; the immediate need was support for the ".cs" subdomain at the university. That work snowballed into a major rewrite culminating in Sendmail 8; this version integrated a great deal of code from both the industry and the community. It added support for ESMTP, a number of new protocols, delivery status notifications, LDAP integration, eight-bit mail, and a new configuration package. Uptake increased after the Sendmail 8 release as a result of these features, but also as the result of the publication of the O'Reilly "bat" book. Documentation, it turns out, really matters.
Sendmail Inc. was created in 1998 with the fantasy that it would let Eric get back to coding. In reality, starting a company is more about marketing, sales, and money than about technology - a lesson many of us have learned. It was one of the first companies trying to mix open source and proprietary offerings; in those days, the prevailing wisdom was that a company needed proprietary lock-in to have any chance of success. Over time, though, functionality migrated to the free version; thus Sendmail gained support for encryption, authentication, milters (mail filters), virtual hosting, spam filtering, and more. And that's where things stand today.
As one might expect, 30 years of experience have led to a number of lessons worth passing on. Eric shared a few of them.
One is that requirements change all the time. The original delivermail program had reliability as its primary focus - few things are more hazardous to one's academic career than losing a professor's grant proposal. Over time, the requirements shifted toward functionality and performance; Sendmail had to scale up in speed and features as the Internet took off. Then users were demanding protection from spam and malware; that shifted Sendmail development toward keeping mail out. We have, Eric noted, gone full circle toward unreliable mail service. After that came requirements around legal and regulatory compliance - that is where a great deal of Sendmail Inc.'s business lies. There is currently an increasing focus on controlling costs, mobility, and social network integration. Without the ability to adapt to meet these shifting requirements, Sendmail would not have thrived through all these years.
With regard to Sendmail's design decisions, Eric said that some turned out to be right, some were wrong, and some were right at the time but are wrong now. One criticism that has been made is that Sendmail is an overly general solution; it can route and rewrite messages in ways which are generally unneeded in these days of Internet monoculture. Eric defended that generality by saying that the world was in great flux when Sendmail was designed; there was no way to really know how things were going to turn out. And, he said, he would do it again: "the world is still ugly."
Rewriting rules for addresses are a part of that generality; even at the time, it seemed like overkill, but he couldn't come up with anything better. It was, he said, probably the right thing to do. That said, the decision to use tabs as active characters was the stupidest thing he has ever done. That's how makefiles did it, and it seemed cool at the time. As a whole, he said, the concept was right, but the syntax and flow control could have been a lot better. Even so, he's glad he did matching based on tokens; basing Sendmail configuration around regular expressions would have been far worse.
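A schematic rewriting rule shows both the token-based matching and the tab problem he mentioned (the rule below is illustrative only; real rulesets vary from configuration to configuration):

```
# A schematic sendmail.cf rewriting rule.  The whitespace separating the
# pattern, the rewrite, and the trailing comment must be tab characters.
R$+ @ $+	$: $1 < @ $2 >	focus on the host part
```

Here `$+` matches one or more tokens on the left-hand side, `$1` and `$2` substitute what was matched, and the `$:` prefix makes the rule fire only once rather than looping.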
If he were doing the configuration system now, it would look a lot more like the Apache scheme.
The message munging feature was needed for the rewriting of headers; it facilitated interoperability between different networks. It is still used a lot, he said, though it's arguably not necessary. Sendmail could benefit from a pass-through mode which shorts out the message munging, but that leaves open the question of what should be done with non-compliant messages. Should they be fixed, rejected, or just dropped? There is, he said, no obvious answer.
The embedding of SMTP and queueing in the mail daemon was the right thing to do; he does not agree with the Postfix approach of proliferating lots of small daemons. The queue structure itself involves two files for every message: one with the envelope, and one with the body. That forces the system to scan large numbers of small files on a busy system, which is not always optimal. At the time it was the right way to go; now he would probably use some sort of database for the envelopes. The decision to use plain text for all internal files was right, though; it makes debugging much easier.
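The two-file layout he described looks roughly like this on a traditional system (the file names below are schematic):

```
/var/spool/mqueue/qfp12AbC34    # "qf": envelope and control data, plain text
/var/spool/mqueue/dfp12AbC34    # "df": the message body
```

It is that doubling of small files - and the resulting directory scans - that makes a database for the envelopes look attractive today.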
With regard to the use of the m4 macro preprocessor for configuration, Eric admitted that the syntax is painful. But he needed a macro facility and didn't want to reinvent the wheel. The "damned dnl lines" for comments were a mistake, though, and completely unnecessary. In summary, some sort of tool was needed; m4 might not have been the best choice, but it's not clear what would have been.
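The complaint is easy to see in a typical .mc fragment; without the trailing dnl, m4 copies the newline through into the generated sendmail.cf (the macro names here follow the standard sendmail m4 package):

```
dnl  "dnl" discards everything through the end of the line, newline included
define(`confDOMAIN_NAME', `example.com')dnl
MAILER(`smtp')dnl
```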
With regard to extending or changing features: Sendmail has tended toward extending features and maintaining compatibility, and that has not always been the right thing to do. The hostname masquerading facility was one example; that feature was simply done wrong the first time around. Rather than fixing it, though, Eric papered over the problems with new features. It would have been better to inflict some short-term pain on users, perhaps aided by a migration tool, and be done with it. The unwillingness to replace mistaken features has a lot to do with why Sendmail is difficult to configure.
Sendmail goes out of its way to accept and fix bogus input; that was in compliance with the robustness principle ("be conservative in what you send but liberal in what you accept") that was widely accepted at the time. It increases interoperability, but at the cost of allowing broken software to persist indefinitely, leading to large costs down the road. Nonetheless, it was the right idea at the time for the simple reason that everything was broken then. But he should have tightened things up later on.
What would he have done differently? At the top of the list is trying to fix problems as soon as possible. These include tabs in the configuration file and the V7 mailbox format. He's really tired of seeing ">From" in messages; he said he could have fixed it and expressed his apologies for not having taken the opportunity. He would make more use of modern tools; Sendmail has its own build script, which is not something he would do today. He would use more privilege separation, though he would not go as far as Postfix. He would have made a proper string abstraction; strings are by far the weakest part of the C language.
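The ">From" complaint above refers to the V7 mbox convention: messages in a mailbox file are separated by lines beginning with "From ", so any body line that happens to start that way must be escaped, for example:

```
From alice@example.com Wed Feb  2 10:15:03 2011
Subject: status update

>From my point of view, the patch is ready to merge.
```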
There are also a number of things he would do the same, starting with the use of C as the implementation language. It is, he said, a dangerous language, but the programmer always knows what is going on. Object-oriented programming, he said, is a mistake; it hides too much. Beyond that, he would continue to do things in small chunks. The creation of syslog (initially as a way of getting debugging information out) was obviously the right thing to do; he was surprised that there was no centralized way of dealing with logging data on Unix systems. He would still implement rewriting rules, albeit with a different syntax. And he would continue not to rely too heavily on outside tools. There is a cost to adding dependencies on tools; sometimes it's better to just build what you need. There are, he said, projects using lex when all they really need is strtok().
There were a number of "takeaways" to summarize the talk:
The talk was evidently based on a chapter from an upcoming book on the architecture of open-source applications.
One member of the audience asked Eric which MTA he would recommend for new installations today. His possibly surprising answer was Postfix. He talked a lot with Postfix author Wietse Venema during its creation, and was impressed. Postfix is, he said, nice work, even if he doesn't agree with all of the design decisions that were made.
Last week's Security page had a quote from 37signals about its decision to drop support for OpenID. Since then there have been several postings that purport to explain the problems with OpenID and why it never gained much traction. One of the better analyses comes from Wired's webmonkey blog, which calls OpenID "The Web's Most Successful Failure". So, why hasn't OpenID taken the world by storm?
OpenID set out to solve, or help solve, the "single sign-on" (SSO) problem, so that users could have a single identity that they used with multiple web sites. But OpenID is more than that, because it allows users, rather than web sites, to decide how much personal information needs to be shared. It is this user-centric nature of OpenID that may be leading to its downfall.
We have looked at OpenID several times over the years, including an overview in 2006, and a look at OpenID 2.0 in 2007. By the time we looked at the OpenID Connect proposal back in June, the problems with users being able to control the amount of information provided to web sites were becoming evident. It was, in fact, a major reason that OpenID Connect was proposed.
While OpenID is by no means perfect, the resistance to its adoption is not necessarily completely technical. Other OAuth-based schemes have become much more popular at least in part because web site operators get access to much more personal information by default than they get when users log in with OpenID. Even site-specific registration tends to extract more information (email address, full name, and so on). Because that kind of information is valuable to web site operators—and willingly given up by the vast majority of users—OpenID users are seen to be "less valuable", as OpenID Connect developer Chris Messina pointed out. The Wired blog post put it this way:
But one of the main alternatives to OpenID—one that has seen much more adoption—is Facebook Connect (though the "Connect" part of the name has largely been dropped). As that name would imply, it is run by Facebook, which is an organization that is not noted for its interest in preserving user privacy. One hopes that the pervasiveness of Facebook sign-ons will have some boundaries. While it does solve the SSO problem for Facebook users, in a fairly uncomplicated way, it would be horrifying to be greeted by your bank's log-in screen asking for your Facebook ID.
OpenID suffers from some design flaws, using a URL as the OpenID identifier being one of the most prominent, but its Achilles heel is that it is complicated for users, beyond just remembering their OpenID URL(s). An additional problem is that some of the larger web services were only interested in being OpenID providers (i.e. using their URLs to log in elsewhere), and weren't particularly interested in being "relying parties" (i.e. taking OpenID URLs from elsewhere to allow users to log in). This asymmetric "support" for OpenID further muddied the waters for users.
At this point, though, we may well have seen the crest of the OpenID wave. Wired posits it being incorporated into Mozilla's (and other browser makers') efforts to move identity management into the browser itself. That would allow the browser to route around the individual web site log-in screens and authenticate the user behind the scenes, so OpenID could be used in a far less complicated manner.
In the end, OpenID is targeted at users who value their privacy and want to take control of their internet identities—two traits that seem to be in short supply for many users. Facebook Connect (and the Twitter equivalent) leverage huge user bases to make adoption by other web sites very attractive. Though there is evidently still some user confusion about using those authentication methods, the experience is more straightforward than OpenID.
So, where do we go from here? The US government is starting to make noise about trusted internet identities, which might provide an alternative SSO solution - though not without privacy (and other) concerns of its own. LWN has implemented OpenID relying party support, though there is still some work and testing to do before we can roll it out. The 37signals announcement and the related chatter seem likely to turn off some other sites that were considering OpenID support.
It is tempting to call OpenID a failure, and to some extent it is, but it has some compelling ideas, at least for technically (and privacy) savvy users. But the features that are most attractive to those users are precisely those that web site operators wish to avoid—anonymous/pseudonymous authentication doesn't play well with their business models. For sites like LWN, where registration doesn't require any personal information, the barriers to adoption are likely to be things like available developer time (that's certainly the case here). In addition, there has always been some interest from our readers in OpenID support but it never seemed to garner a critical mass clamoring for it. If OpenID had taken off the way many hoped it would, supporting it would have become a much higher priority for LWN and lots of other sites.
As Wired notes, OpenID was ahead of its time. It suffered from some technical problems—what new protocol doesn't?—but those could have been fixed if there was some groundswell of interest from users or web sites. Since that didn't happen, it's probably time to start thinking about other SSO options that aren't controlled by companies or governments. Without a solution that is under individual control, we risk being herded into systems that cater to the needs of these large organizations—with all the dangers to internet freedom that implies.
Created: February 2, 2011; Updated: February 2, 2011
Description: The calibre ebook management program suffers from directory traversal and cross-site scripting vulnerabilities; see this advisory for more information.
Package(s): chm2pdf; CVE #(s): CVE-2008-5298, CVE-2008-5299
Created: January 28, 2011; Updated: February 2, 2011
chm2pdf 0.9 uses temporary files in directories with fixed names, which allows local users to cause a denial of service (chm2pdf failure) of other users by creating those directories ahead of time. (CVE-2008-5298)
chm2pdf 0.9 allows user-assisted local users to delete arbitrary files via a symlink attack on .chm files in the (1) /tmp/chm2pdf/work or (2) /tmp/chm2pdf/orig temporary directories. (CVE-2008-5299)
Package(s): linux-2.6 kernel; CVE #(s): CVE-2010-4342
Created: January 31, 2011; Updated: August 9, 2011
Description: The econet protocol implementation can enable a remote attacker to oops the kernel with a maliciously-crafted UDP packet.
Package(s): linux-2.6 kernel; CVE #(s): CVE-2010-4346
Created: January 31, 2011; Updated: August 9, 2011
Description: A kernel vulnerability allows an attacker to bypass the mmap_min_addr restriction and map user-space memory at the null address.
Package(s): linux-2.6 kernel; CVE #(s): CVE-2010-4527
Created: January 31, 2011; Updated: August 9, 2011
Description: Two vulnerabilities in the OSS sound card drivers can facilitate local information disclosure or privileged code execution.
Package(s): linux-2.6 kernel; CVE #(s): CVE-2010-4529
Created: January 31, 2011; Updated: August 9, 2011
Description: A vulnerability in the IrDA socket implementation (on non-x86 systems) can leak some kernel memory to user space.
Package(s): linux-2.6 kernel; CVE #(s): CVE-2010-4565
Created: January 31, 2011; Updated: August 9, 2011
Description: The CAN protocol implementation can leak the address of a kernel data structure, possibly making exploitation of another vulnerability easier.
Package(s): linux-2.6 kernel; CVE #(s): CVE-2010-4649
Created: January 31, 2011; Updated: October 24, 2012
Description: A buffer overflow in the InfiniBand subsystem may allow local users to corrupt memory and oops the system.
Created: January 31, 2011; Updated: August 9, 2011
Description: A buffer overflow in the I/O-Warrior driver may enable a privilege escalation exploit by local users.
Created: January 31, 2011; Updated: August 9, 2011
Description: The AV7110 driver does not properly check user input, enabling the corruption of memory and a local denial-of-service attack.
Created: January 27, 2011; Updated: February 2, 2011
From the MyProxy advisory:
The myproxy-logon program (also called myproxy-get-delegation) in MyProxy versions 5.0 through 5.2 does not abort connections when it finds that the myproxy-server's certificate is valid and signed by a trusted certification authority but the certificate does not contain the expected hostname (or identity given in the MYPROXY_SERVER_DN environment variable), unless the myproxy-logon -T or myproxy-logon -b options are given.
Created: February 2, 2011; Updated: June 15, 2011
Description: The IcedTea OpenJDK implementation does not properly verify signatures on JAR files in some situations, allowing an attacker to run code which appears to be from a trusted source.
Created: January 27, 2011; Updated: April 1, 2011
From the Pango advisory:
An input sanitization flaw, leading to a heap-based buffer overflow, was found in the way Pango displayed font files when using the FreeType font engine back end. If a user loaded a malformed font file with an application that uses Pango, it could cause the application to crash or, possibly, execute arbitrary code with the privileges of the user running the application. (CVE-2011-0020)
Created: January 28, 2011; Updated: December 9, 2011
From the CVE entry:
CRLF injection vulnerability in the header function in (1) CGI.pm before 3.50 and (2) Simple.pm in CGI::Simple 1.112 and earlier allows remote attackers to inject arbitrary HTTP headers and conduct HTTP response splitting attacks via vectors related to non-whitespace characters preceded by newline characters, a different vulnerability than CVE-2010-2761 and CVE-2010-3172.
|Created:||January 28, 2011||Updated:||March 15, 2011|
From the Red Hat bugzilla entry:
A heap-based buffer overflow flaw was found in the way ProFTPD FTP server prepared SQL queries for certain usernames, when the mod_sql module was enabled. A remote, unauthenticated attacker could use this flaw to cause proftpd daemon to crash or, potentially, to execute arbitrary code with the privileges of the user running 'proftpd' via a specially-crafted username, provided in the authentication dialog.
|Created:||February 1, 2011||Updated:||April 19, 2011|
From the Pardus advisory:
The ASN.1 BER dissector in Wireshark 1.4.0 through 1.4.2 allows remote attackers to cause a denial of service (assertion failure) via crafted packets, as demonstrated by fuzz-2010-12-30-28473.pcap.
Page editor: Jake Edge
Brief items

The latest development kernel prepatch was released on February 1. "Nothing hugely special in here, and I'm happy to say that most of the pull requests have been nice clear bug-fixes and fixing regressions. Thanks to most of you for that." As always, see the full changelog for all the details.
Stable updates: no stable updates have been released in the last week. The next stable update is in the review process as of this writing; it could be released as early as February 3.
The kernel configuration mechanism makes it possible to select which code is included in a build. One part of this mechanism is wired into the build system; it allows source files to be passed over entirely if they contain nothing of interest. The other part, though, is implemented with preprocessor symbols and conditional compilation. Kernel developers may be discouraged from using #ifdef, but there are still a lot of conditional blocks in the code.
Sometimes, the logic which leads to the inclusion or exclusion of a specific block is complex and not at all clear. There are many configuration options in the kernel, and they can depend on each other in complicated ways. As a result, dead code - code which will not be compiled regardless of the selected configuration - may escape notice for years. Dead code adds noise to the source tree and, since nobody ever runs it, it is more than likely to contain bugs. If that code is re-enabled or copied, those bugs could spread through the tree in surprising ways.
So it would be good to be able to identify dead code and get it out of the tree. The newly-released undertaker tool was designed to perform a number of types of static analysis, including dead code identification. Developers can run it on their own to find dead blocks in specific files; there is also a web interface which allows anybody to browse through the tree and find the dead sections. That should lead to patches hauling away the bodies and cleaning up the tree, which is a good thing.

Working with the kernel's configuration language from scripts and tools has never been easy; Ulf Magnusson has announced Kconfiglib, which, he hopes, will make that easier.
Kconfiglib is a Python library which is able to load, analyze, and output kernel configurations; care has been taken to ensure that any configuration it creates is identical to what comes out of the existing kernel configuration system. With Kconfiglib, it becomes straightforward to write simple tools like "allnoconfig"; it also is possible to ask questions about a given configuration. One possible tool, for example, would answer the "why can't I select CONFIG_FOO" question - a useful feature indeed.
There are currently no Python dependencies in the kernel build system; trying to add one could well run into opposition. But Kconfiglib could find a role in the creation of ancillary tools which are not required to configure and build a kernel as it's always been done. For the curious, there's a set of examples available.
Kernel development news
Monitoring a system for "reliability, availability, and serviceability" (RAS) is an important part of keeping that system, or a cluster of such computers, up and running. There is a wide variety of things that could be monitored for RAS purposes—memory errors, CPU temperature, RAID and filesystem health, and so on—but Borislav Petkov's RAS daemon is targeted just at gathering information on any machine check exceptions (MCEs) that occur. The daemon uses trace events and the perf infrastructure, which requires a fair amount of restructuring of that code to make it available not only to the RAS daemon, but also to other kinds of tools.
The first step is to create persistent perf events, which are events that are always enabled, and will have event buffers allocated, even if there is no process currently looking at the data. That allows the MCE trace event to be enabled at boot time, before there is any task monitoring the perf buffer. Once the boot has completed, the RAS daemon (or some other program) can mmap() the event buffers and start monitoring the event. This will allow the RAS daemon to pick up any MCE events that happened during the boot process.
To do that, the struct perf_event_attr gets a new persistent bitfield that is used to determine whether or not to destroy the event buffers when they are unmapped. In addition, persistent events can be shared by multiple monitoring programs because they can be mapped as shared and read-only. Once the persistent events are added, the next patch then changes the MCE event to become a persistent event.
With that stage set, Petkov then starts to rearrange the perf code so that the RAS daemon and other tools can access some of the code that is currently buried in the tools/perf directory. That includes things like the trace event utilities, which move from tools/perf/util to tools/lib/trace and some helper functions for debugfs that move to tools/lib/lk. These were obviously things that were needed when creating the RAS daemon, but not easily accessible.
A similar patch moves the mmap() helper functions from the tools/perf directory to another new library: tools/lib/perf. These functions handle things like reading the head of the event buffer queue, writing at the tail of the queue, and reading and summing all of the per-cpu event counters for a given event.
In response to the patch moving the mmap() helpers, Arnaldo Carvalho de Melo pointed out that he had already done some work to rework that code, and that it would reduce the size of Petkov's patch set once it gets merged into the -tip tree. He also noted that he had created a set of Python bindings and a simple perf-event-consuming twatch daemon using those bindings. While Petkov had some reasons for writing the RAS daemon in C rather than Python, mostly so that it would work on systems without Python or with outdated versions, he did seem impressed: "twatch looks almost like a child's play and even my grandma can profile her system now :)."
But the Python bindings aren't necessarily meant for production code, as Carvalho de Melo describes. Because the Python bindings are quite similar to their C counterparts, they can be used to ensure that the kernel interfaces are right:
Moving to a C version then becomes easy after the testing phase is over and the kernel bits are set in stone.
There are some additional patches that move things around within the tools tree before the final patch actually adds the RAS daemon. The daemon is fairly straightforward, with the bulk of it being boilerplate daemonizing code. The rest parses the MCE event format (from the mce/mce_record/format file in debugfs), then opens and maps the debugfs mce/mce_recordN files (where N is the CPU number). The main program sits in a loop checking for MCE events every 30 seconds, printing the CPU, MCE status, and address for any events that have occurred to a log file. Petkov mentions decoding of the MCE status as something that he is currently working on.
Obviously, the RAS daemon itself is not the end result Petkov is aiming for. Rather, it is just a proof-of-concept for persistent events and demonstrates one way to rearrange the perf code so that other tools can use it. There may be disagreements about the way the libraries were arranged, or the specific location of various helpers, but the overall goal seems like a good one. Whether tools like ras actually end up in the kernel tree is, perhaps, questionable—the kernel hackers may not want to maintain a bunch of tools of this kind—but making the utility code more accessible will make it much easier for others to build these tools on their own.
Peter Hutterer is the maintainer for the bulk of the graphical input drivers. He has, he says, rewritten most of that subsystem, so he is to blame for the bugs which can be found there. Most input devices are easily handled through the evdev abstraction, but the Wacom driver is an exception. The things which are unique to these tablets (multiple input "devices," one associated with each pen, the pressure, tilt, and rotation axes, and the relatively high resolution) require a separate driver for their support. Thus, Wacom users must have the linuxwacom driver in their systems.
There is some confusion about the linuxwacom driver, because there are multiple versions of it, all of which can be found on SourceForge. One version (0.8.8) is created by Wacom itself; it is a classic vendor driver, Peter said, with everything that usually implies about the development process (code dumps) and the quality of the code itself. This driver ships as a tarball containing a wild set of permutations of kernel and X.org versions; it's a mess. But it's Wacom's mess, and the company has been resistant to efforts to clean it up.
Peter got fed up with this situation in 2009 and forked the driver. His version is now the default driver in a number of distributions, and is the only one which supports newer versions of the X server. Looking at the repositories, Peter found 78 commits total before the fork, all from Wacom. After the fork, there are 788 commits, 65% from Red Hat, and 12% from Wacom. Extracting the driver from its vendor-dominated situation has definitely helped to increase its rate of development.
Surprisingly, the original vendor driver is still under development by Wacom, despite the fact that it does not support current X servers and is not shipped by any distributors. The original mailing list is still in business, but, Peter warned, one should not ask questions about the new driver there. Kernel development, he said, should be done on the linux-kernel mailing list. There is also little point in talking to him about problems with the older driver; Wacom insists on keeping control over that code.
Update: Peter tells us that there are three mailing lists (linuxwacom-announce, linuxwacom-discuss and linuxwacom-devel) which are still the place to go for general questions, including hardware-specific questions. X driver development for the forked driver happens exclusively on linuxwacom-devel and all patches are sent there. So the mailing lists are definitely the place to ask questions, at least in regards to the X driver. The kernel driver is the exception here. Kernel driver development should happen on LKML, not on linuxwacom lists.
Much of the work Peter has done so far has been toward the goal of cleaning up the driver. That has involved throwing out a number of features. Some of those needed to go - the original driver tries to track the resolution of the screen, for example, which it has no business knowing. Support for the "twinview" approach to dual monitors has also been taken out. In some cases, the removed features are things that people want; support should eventually be restored once it can be done in the right way. Sometimes, Peter said, things have to get worse before they can get better.
Also gone is the wacomcpl configuration tool. It is, Peter said, some of the worst code that he has ever seen.
Peter did this talk to update the graphics community on the state of support for this driver, but he was also looking for input. His attitude toward development was described as "if it doesn't crash the server, it works." In other words, he is not a graphic artist, so he has no deep understanding of how this hardware is used. To get that understanding, he needs input from the user community regarding development priorities and what does not work as well as it should.
So artists making use of Wacom tablets should make sure that their needs are known; the developer in charge of the driver is ready to listen. Meanwhile, bringing a more open development process to the driver has increased the pace of development and is improving the quality of the code. If the usual pattern holds, before long Linux should have support for these tablets which is second to none.
The last LWN article on Ftrace described trace-cmd, a front-end tool for interfacing with Ftrace. trace-cmd is entirely command-line based, which works well for embedded devices or for tracing on a remote system. But reading the output as text can be overwhelming and makes it hard to see the bigger picture; a GUI can help humans see, at a global scale, how processes interact. KernelShark has been written to fulfill this requirement: it is a GUI front end to trace-cmd.
KernelShark is distributed in the same repository as trace-cmd:
git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/trace-cmd.git

To build it, you just need to type make gui, as typing make alone will only build trace-cmd. The two tools have been kept separate because many embedded devices lack the libraries needed to build KernelShark. Full HTML help is included in the repository and is installed with make install_doc. After installing the documentation, you can access the help directly from KernelShark's "Help" menu.
This article is not a tutorial on using KernelShark, as everything you need to know about the tool is kept up-to-date in the KernelShark repository. Instead, this article will describe a use case that KernelShark was instrumental in helping to solve.
Some time ago, when the push/pull algorithm of the real-time scheduler in Linux was being developed, a decision had to be made about what to do when a high priority process wakes up on a CPU running a lower real-time priority process, where both processes have multiple CPU affinity, and both can be migrated to a CPU running a non-real-time task. One would think that the proper thing to do would be to simply wake up the high priority process on that CPU which would cause the lower priority process to be pushed off the running CPU. But a theory was that by doing so, we move a cache hot real-time process onto a cache cold CPU and possibly replace it with a cache cold process.
After some debate, the decision was made to migrate the high priority process to the CPU running the lowest priority task (or no task at all) and wake it there. Some time later, after the code was incorporated into mainline, I started to question this decision even though I was the one that fought for it. With the introduction of Ftrace, we now have a utility to truly examine the impact that decision has made.
The decision to move the higher priority task was based on the assumption that a task that is waking up is more likely to be cache cold than a task that is already running. Thinking more about this case, one must consider what would cause a high priority task to wake up in the first place. If it is woken up periodically to do some work, then it can very well be cache cold: any task scheduled in between can easily push out the cache of this high priority task. But what if the high priority task was blocked on a mutex? If the task was blocked on a mutex and another RT task was scheduled in its place, then when the high priority task wakes up again, there is a good chance that it will be cache hot.
A mutex in most real-time programs will usually be held for a short period of time. The PREEMPT_RT patch, which this code was developed from, converts spinlocks into mutexes, and those mutexes are held for very small code segments, as all spinlocks should be. Migrating a task simply because it blocked on a mutex increases the impact these locks have on the throughput. Why punish the high priority task even more because it blocked and had to wait for another task to run?
Before making any decision to change the code, I needed a test case that could show that migrating a high priority task, instead of preempting the lower priority task, will cause the high priority task to ping-pong around the CPUs when there is lock contention. A high priority task should not be punished (migrated) if it simply encounters lock contention with lower priority real-time tasks. It would also be helpful to know how changing this decision affects the total number of migrations for all the tasks under lock contention.
Having a 4-processor box to play with, I started writing a test case that might cause this scenario, using Ftrace to analyze the result. The first test case creates five threads (one more than CPUs) and four pthread mutex locks. All threads wake up from a barrier wait, then loop 50 times, grabbing each lock in sequence and doing a small busy loop. The test is called migrate.c.
The test application uses trace_marker as explained in previous articles to write what is happening inside the application to synchronize with kernel tracing.
Running the following with trace-cmd:
# trace-cmd record -e 'sched_wakeup*' -e sched_switch -e 'sched_migrate*' migrate
# kernelshark

Like trace-cmd report, KernelShark will, by default, read the file trace.dat. You can specify another file by naming it as the first argument to KernelShark. While the KernelShark display images may be difficult to read fully in the article, clicking any of them will bring up a full-resolution version.
Since all tasks have been recorded, even trace-cmd itself, we want to filter out any tasks that we do not care about. Selecting Filter->Tasks from the KernelShark menu, and then choosing only the migrate threads will remove the extraneous tasks. Note that events that involve two tasks, like sched_switch or sched_wakeup, will not be filtered out if one of the tasks should be displayed.
In the default graph view, each on-line CPU is represented by a plot line. Each task is represented by a different color. The color is determined by running the process ID through a hash function and then parsing that number into a RGB format.
The lines sticking out of the top of the bars represent events that appear in the list below the graph.
By examining the graph we can see that the test case was quite naive. The lowest priority task, thread 0, never got to run until the other four tasks were finished. This makes sense as the machine only had four CPUs and there were four higher priority tasks running. The four running tasks were running in lock step, taking the locks in sequence. From this view it looks like the tasks went out of sequence, but if we zoom in to where the migrations happened, we see something different.
To zoom into the graph, press and hold the left mouse button. A line will appear, then drag the mouse to the right. As the mouse moves off the line, another line will appear that follows the mouse. When you let go of the mouse button, the view will zoom in making the locations of the two lines the width of the new window.
Repeating the above procedure, we can get down to the details of the migration of thread 3. Double clicking on the graph brings the list view to the event that was clicked on. A green line appears at the location that was clicked.
On CPU 0, thread 3 was preempted by the watchdog/0 kernel thread. Because we filtered out all threads but the migration test tasks, we see a small blank on the CPU 0 line. This would have been filled in with a colored bar representing the watchdog/0 thread if the filters were not enabled. The watchdog/0 thread runs at priority 99, which we can see from the sched_switch event, where the priority of the tasks appears between the two colons. The priority shown is the kernel's view of priority, which is the inverse of what user space uses (user-space priority 99 is kernel priority zero).
When the watchdog/0 thread preempted thread 3, the push/pull algorithm of the scheduler pushed it off to CPU 3, which had the lowest-priority running task. Zooming into the other migrations that happened on the other CPUs shows that the watchdog kernel threads were responsible for them as well. If it wasn't for the watchdog kernel threads, this test would not have had any migrations.
The first test took the naive approach of just setting up four locks and having the tasks grab them in order. But this just kept the tasks in sync. The next approach tries to mix things up a little more. The concern about the real-time scheduler is how it affects the highest priority task. The next test creates the four locks again (as there are four CPUs) and five tasks of increasing priority. This time, only the highest priority task grabs all the locks in sequence; each of the other four tasks grabs a single lock, so each lock is contended by one low priority task and the highest priority task. To try to force contention, pthread barriers are used. For those unfamiliar with pthread barriers, they are a synchronization method for serializing threads: a barrier is initialized with a number, and all threads that hit the barrier block until that number of threads have arrived, at which point all of the threads are released.
This test case creates two barriers for each lock (lock_wait and lock_go), each initialized with the number 2, for the two tasks (the unique low priority task and the high priority task) that will take the lock. The low priority task will take the lock and wait on a barrier (lock_wait). The high priority task will hit that barrier before it takes the corresponding lock. Because the low priority task is already waiting on the barrier, the high priority task will trigger the barrier to release both tasks, the barrier having a task limit of two. The high priority task will most likely try to take the mutex while the low priority task already has it. The low priority task will release the mutex and then wait on the other barrier (lock_go), letting the high priority task take the mutex.
Running this test under trace-cmd yields the following from KernelShark after filtering out all but the migrate test tasks.
Looking at the graph, it seems that the highest priority thread stayed on the same CPU and was not affected by the contention. Considering that the scheduler is set to migrate a waking real-time task if it is woken on a CPU that is running another real-time task, regardless of the priorities, one would think the high priority task would have migrated a bit more. Zooming in on the graph brings to light more detail about what is occurring.
What we can see from the graph, and from the list, is that the high priority thread did have contention on the lock. But because all threads are waiting for the high priority process to come around to its lock, the other threads are sleeping when the high priority process wakes up. The high priority process is only contending with a single thread at a time. Threads 0 and 2 share CPU 2 without issue, while threads 1 and 3 each still have a CPU for themselves.
The second test was on the right track. It was able to produce a contention but failed to have the CPUs busy enough to cause the highest priority task to wake up on a CPU running another real-time task. What is needed is to have more tasks. The final test adds twice as many running threads as there are CPUs.
This test goes back to all tasks grabbing all locks in sequence. To prevent the synchronization that has happened before, each thread will hold a lock a different amount of time. The higher the priority of a thread, the shorter time it will hold the lock. Not only that, but the threads will now sleep after they release a lock. The higher the priority of a task, the longer it will sleep:
lock_held  = 1 ms * ((nr_threads - thread_id) + 1)
sleep_time = 1 ms * thread_id

The lowest priority thread will never sleep and it will hold the lock for the longest time. To make things even more interesting, the mutexes have been given the PTHREAD_PRIO_INHERIT attribute. When a higher priority thread blocks on a mutex held by a lower priority thread, the lower priority thread will inherit the priority of the thread it blocks.
The test records the number of times each task voluntarily schedules, the number of times it is preempted, the number of times it migrates, and the number of times it successfully acquired all locks. When the test finishes, it gives an output of these for each thread. The higher the task number the higher the priority of the thread it represents.
Task    vol  nonvol  migrated  iterations
 0       43    3007      1571         108
 1      621    1334      1247         108
 2      777     769      1072         108
 3      775      17       701         108
 4      783      50       699         108
 5      788       2       610         109
 6      801      89       680         109
 7      813       0       693         110
Total  5401    5268      7273         868
Running this test under trace-cmd and viewing it with KernelShark yields a graph with lots of pretty colors, which means we likely succeeded in our goal. To prove that the highest priority thread did indeed migrate, we can plot the thread itself.
Using the "Plots" menu and choosing "Tasks" brings up the same type of dialog as the task filter that was described earlier. I selected the highest priority thread (migrate-2158), and zoomed in to get a better view. The colors on a task plot are determined by the CPU number it was running on. When a task migrates, the colors of the plot changes.
This test now demonstrates how a high priority task can migrate substantially when other RT tasks are running on the system. Changes to the real-time scheduler can now be tested. The commit changes the decision on which thread migrates when a real-time task wakes up on a CPU running another real-time task. The original way was to always move the task that is waking up if there is a CPU available that is running a task that is lower in priority than both tasks. The commit changes this to simply wake up the real-time task on its CPU if it is higher priority than the real-time task that is currently running.
The migrate test now shows:
Task     vol  nonvol  migrated  iterations
 0:       52    2923      2268         108
 1:      569    1529      1457         109
 2:      801    1961      2194         109
 3:      808     789      1274         109
 4:      810      61       155         109
 5:      813      10        57         109
 6:      827      35        81         110
 7:      824       0         4         110
total:  5504    7308      7490         873
The total number of migrations has stayed around the same (several runs will yield a fluctuation of a few hundred), but the number of migrations for the highest priority task has dropped substantially, as it will not migrate simply because it woke up on a CPU running another real-time task. Note that the reason the highest priority task migrated at all was because it woke up on a CPU that was running the task that owned the mutex it was blocked on. As these are priority inheritance mutexes, the owner would have the same priority as the highest priority process that it is blocking. The wake up will not preempt a real-time task of equal priority. Perhaps that can be the next change to the real-time scheduler: have the wake up be aware of priority-inheritance mutexes.
The highest priority thread (migrate-21412) was woken on CPU 3, which was running thread 1 (migrate-21406) which is the task that thread 7 originally blocked on. CPU 2 happened to be running thread 0 (migrate-21405) which was the lowest priority thread running at the time. Note that the empty green box that is at the start of the task plot represents the time between when a task was woken and the time it actually was scheduled in.
Using KernelShark allowed me to analyze each of my tests to see if they were doing what I expected them to do. The final test was able to force a common scenario where a high priority process is woken on a CPU running another real-time task, and cause the decision to be made, whether to migrate the waking task or not. This test allowed me to see how the changes to that decision affected the results.
This article demonstrates a simple use case for KernelShark, but there are a lot more features that aren't explained here. To find out more, download KernelShark and try it out. It is still in beta and is constantly being worked on. Soon there will be plugins that will allow it to read other file formats and even change the way it displays the graph. All the code is available and under the GPL, so you can add your own features as well (hint hint).
Patches and updates
Core kernel code
Filesystems and block I/O
Virtualization and containers
Page editor: Jonathan Corbet
The waning days of January brought two intriguing developments in the world of Linux distributions and their application frameworks. On January 18th, Canonical's Mark Shuttleworth announced that starting with Ubuntu 11.10, the distribution would ship Qt libraries in the base ISO image, and was underwriting some development work to make it easier for Qt applications to tie in to the rest of the desktop. That news largely overshadowed the GNOME Foundation's January 17th announcement that it had hired Igalia to integrate the GTK+ toolkit into the MeeGo Handset platform, and to merge components from Maemo's Hildon framework upstream into GTK+ itself.
On the surface, both efforts mark the availability of a new framework in a distribution that up until now has seemingly shipped entirely on one side of the "Qt/GTK+" fence (Ubuntu being GNOME-based, and MeeGo being Qt-based). But neither situation is as clear-cut as that. Canonical has always provided Qt libraries and Qt-based applications through its repositories — just not on the Ubuntu ISO image itself. MeeGo officially supports Qt as the third-party application developer platform, but it also includes many components from the GNOME stack.
Shuttleworth introduced the Qt announcement by saying that ease of use and effective integration are the key values in Ubuntu's user experience, and that although the distribution has historically given very strong preference to GTK+ applications, a toolkit itself is merely a means to an end. When evaluating whether or not to make a particular application part of the default install, he continued, the questions to ask are whether it is free software, whether it is "best-in-class," whether it integrates with the system settings, preferences, and other applications, whether it is accessible, and whether it looks and feels consistent with the rest of the desktop.
Although there are plenty of excellent Qt applications that meet most of those requirements, the sticking point in previous releases has been the fact that GTK+ applications all use the same centrally-managed preferences store (dconf), while Qt applications typically use KDE's. Aside from storing its own preferences in a separate location, a Qt application on Ubuntu does not have access to system-wide settings that affect its integration — font rendering, sound, peripheral settings, etc.
To fix this, Canonical has contracted developer Ryan Lortie to write dconf bindings for Qt. It is not yet clear what form Lortie's dconf work will take — some have suggested Qt's QSettings, others KDE's KConfig. Currently the plan for Ubuntu seems limited just to shipping Qt libraries in the 11.10 ISO, but Shuttleworth left the door open for individual Qt-based applications to be included, too.
It does not sound like Ubuntu is considering core KDE applications in this category, but rather standalone Qt applications. As Ryan Paul pointed out at ars technica, KDE applications "come with heavy KDE infrastructure dependencies and have KDE-centric behaviors," while Ubuntu continues to develop a GNOME-based desktop. On that topic, Shuttleworth even explicitly said that the decision to add Qt libraries was "in no way a criticism of GNOME," and reiterated that the distribution is making GNOME the focus of its design work. Nevertheless, some in the comments on Shuttleworth's post appeared to read the announcement as a "move" away from GNOME and towards KDE.
There were also vocal reactions from many KDE supporters that seemed to interpret the announcement as an attempt to coerce KDE developers into altering their code to support Ubuntu specifically. KDE's Aaron Seigo called it "dictating" to Qt developers and "not that much different from saying that Qt apps should just use Gtk+ for rendering so they fit in better." He also said that KDE and Qt have led the way in defining standards (citing freedesktop.org), and contrasted the project with Canonical, saying the company had "historically taken rather heavy-set stances that worked against" giving developers the best choice of applications.
Seigo's blog is frequently inflammatory, of course — ironically, several commenters on Shuttleworth's announcement linked to an older post by Seigo in which he lambastes freedesktop.org as "messed up" and a "self-important disappointment" for developing dconf in the first place. A lower-key criticism came from openSUSE's community manager Jos Poortvliet, who referred to the project as "creating a special Ubuntu world" by keeping the dconf bindings Ubuntu-only, rather than integrating them with upstream GNOME and GTK+.
But it is not clear whether Lortie's dconf work truly will remain Ubuntu-only, or whether it will be embraced upstream — there are still simply too many unknowns. Ubuntu's community manager Jono Bacon posted a FAQ entry about the plan, but it offered no elaboration on the development process itself. Jim Campbell pointed to the contributor agreement that Canonical uses for other projects as a concern and suggested that it might cause the work to be Ubuntu-only.
Campbell also raises another interesting question in his post: whether GNOME developers will be enticed to write applications with Qt in general. Despite long-standing divisions between the Qt and GNOME frameworks, there is no reason an application must use GTK+ for its widget toolkit simply because it uses other GNOME libraries. It rarely happens, but that may be because no major distribution installs both Qt and GTK+ libraries by default; a fact frequently overlooked in the discussion is that when Ubuntu ships both, it will be the first to do so. Even those distributions like openSUSE that offer users a choice of desktop environments at install-time typically install either one complete stack or the other.
Having both frameworks available at the same time could indeed make mixed-framework applications possible, which Paul also observed. That prospect does not thrill everyone, though. Blogger Martin Espinoza told LinuxInsider it amounts to more bloat and more dependencies in an already tight ISO image. Several commenters suggested to Shuttleworth that it may be time for Ubuntu to move from a CD to a DVD image to cope with the increasing bulk of the default install.
While the Canonical project is an example of a distribution choosing to draw in another framework, the GNOME Foundation's announcement is the opposite: a framework cozying up to a distribution. Prior to MeeGo's birth, the Maemo distribution for Nokia handsets was based primarily on GNOME frameworks, including the GTK+ widget toolkit. After Nokia's acquisition of Trolltech, however, the company changed directions and began moving Maemo to Qt.
Things got more complicated with the combined Maemo-plus-Moblin-equals-MeeGo stack. The MeeGo project officially recognizes Qt as the third-party development platform on which the SDK is based, and around which its marketing to device makers centers. But the overall MeeGo architecture still depends heavily on other, non-Qt components, including Cairo, Clutter, Pango, GConf, Telepathy, GLib, D-Bus, GStreamer, ATK, and Evolution Data Server. So it should not be surprising that the GNOME Foundation was interested in funding development to bring the GTK+ portion of the GNOME platform to MeeGo as well.
The Foundation put out a call for bids in October of 2010, detailing three requirements: ensuring GTK+ applications would run on the MeeGo Handset UX (User eXperience), adding upstream components to GTK+ to facilitate running GTK+ applications on MeeGo, and merging the functionality of Maemo's Hildon framework into upstream GTK+. The contractor chosen in January's announcement, Igalia, is a contract company based in Spain that contributes to GNOME, GStreamer, WebKit, and other open source projects. In the announcement, the Foundation said that Igalia's application "focused the most on integrating elements of Hildon into GTK+ upstream," and that this emphasis would make it easier to port desktop-based GTK+ and Maemo applications to MeeGo.
Hildon is the application framework originally created by Nokia for Maemo. It includes desktop components, an input system suited for touch-based and onscreen keyboard input, finger-friendly menu and user interface widgets, kinetic scrolling, and other handset-oriented features. Igalia started working on Hildon when Nokia shifted its Maemo attention to Qt. The GNOME-funded work will support two developers at the company, Claudio Saavedra and Carlos García Campos.
The announcement and the Igalia site are both short on details, but it does sound like the emphasis will be on merging existing Hildon and mobile technologies into GTK+ proper, rather than maintaining a separate project. With the MeeGo project's self-proclaimed "upstream first" philosophy, that approach would make the most sense. But the reaction from MeeGo is the biggest unanswered question. The project's support for Qt as a development framework is enthusiastic — which is what one would expect from a project co-founded by Qt's corporate parent, Nokia.
Whether or not the project will support GTK+ in future releases remains to be seen. When MeeGo launched in 2010, the architecture diagram included both widget toolkits, though it does not anymore. The FAQ still states that MeeGo will include GTK+, but it is absent from the developer documentation.
Rumors are that the Clutter-based interface on the Netbook UX will be replaced. Of course, the Netbook UX is already more GTK+-heavy, including desktop GNOME applications like Evolution, Banshee, and Empathy. Perhaps the real story there is merely how different the various MeeGo UXes really are: they are not simply finger-, keyboard-, or remote-based recasts of the same interface; they have very different components. The Netbook UX is still largely derived from Moblin, and the Handset UX comes from Maemo 5.
There is nothing wrong with that approach; MeeGo is most accurately described as a meta-distribution encompassing several distinctly different siblings. But at 2010's MeeGo Conference in Dublin, one of the key messages was that all MeeGo releases would come with a compliance-testing guarantee: an application that runs on one MeeGo device will run on any MeeGo device.
That guarantee rested largely on outside developers using Qt and Qt Mobility as the development framework, so one has to wonder how the project — particularly the program managers — will react when presented with a revamped and actively-developed GTK+ for handsets that competes for developer attention with the official solution. MeeGo's governance structure is always described as a meritocracy where anyone can contribute. Hopefully that will prove true here, and developers will be able to take their pick of frameworks, just as they can on the desktop.
In the space of 24 hours, a GNOME-based distribution announced that it would start shipping Qt libraries, and the GNOME Foundation announced that it would pay to develop GTK+ for the Qt-based MeeGo. It would be nice if the open source community saw both situations the same way: as big players in the Linux ecosystem doing their best to give developers more choices for how to create their applications.
In neither situation does the new development work indicate that the distribution is "moving" away from one framework to the other. Unfortunately, however, the often dichotomous KDE-versus-GNOME mindset contributed a distracting amount of noise to the discussion. Partly that is because people erroneously equate Qt with KDE, and just as erroneously equate GTK+ to Qt. Neither comparison is apt. GTK+ is just the widget toolkit; the proper parallel to Qt is the entire GNOME Platform. KDE is a distinct project from Qt, and is an environment built on top of the Qt platform.
It also does far more harm than good to speculate on things like Ubuntu "switching" from GNOME to KDE (or even from GNOME to Qt). Commenters pointed to the 2D, fallback version of Unity as the secret reason why Shuttleworth decided to add Qt to 11.10. I personally suspect it has more to do with Shuttleworth's recent infatuation with Scribus, although I lack hard evidence. Considering that no specific Qt applications have been discussed for inclusion, it seems like the Qt inclusion is designed more to reach out to the "opportunistic developers" Ubuntu wants to attract than to bend existing Qt developers to Canonical's will.
Shuttleworth hits the nail on the head when he calls a toolkit a means to an end. That's inherent in the idea of a "toolkit." Whatever you may think of Canonical's motivations in funding Qt dconf work, or the GNOME Foundation's motivations in funding MeeGo GTK+ work, both projects are going to be empowering for developers — which is what the community usually cares most about in the long run.
Debian GNU/Linux

The Debian project plans to do "live commenting" of the Squeeze (6.0) release process on Identi.ca. In addition, it is looking for "fun Debian facts" to fill in the gaps: "However, several steps of the release process are quite boring (e.g. waiting for the CDs, DVDs, and blu-rays for 11 archs to be built). Therefore we would also like to fill this emptiness with funny or otherwise interesting facts about Debian (e.g. the 150'000 bugs closed in the two years since lenny got released)." (Thanks to Paul Wise.)

Separately, this year's Google Summer of Code (GSoC) administrators for Debian have been delegated: "GSoC admins coordinate Debian participation, interact with the students, who are often new to Debian, and indirectly deal with the money that is used to sponsor the initiative. It is a role of responsibility and involves representing Debian in various ways. I'm therefore pleased to properly delegate the role to this year's GSoC admins; see delegation text reported below. [...] I'm confident we will hear back soon from the GSoC admins about how we can help. In the meantime you can start thinking about your project proposal to both improve Debian and reach out to new contributors."
Fedora

A report from FUDCon [Fedora Users and Developers Conference], which was held at the end of January in Tempe, Arizona, covers a talk by anthropologist Diana Harrelson. The talk focused on Harrelson's study of the Fedora community. "'My entire research was just to find out why you guys do it,' Diana said in her talk. Motivation may seem more obvious to those within communities, but from the outside, it looks more like doing a lot of hard work for no pay. [...] High on the list of reasons were learning for the joy of learning and collaborating with interesting and smart people. Motivations for personal gain, like networking or career benefits, were low on the list. Self motivation, however, is important, as seen in comments from multiple contributors who said things like, 'Mainly I contribute just to make it work for me.'"
Mandriva Linux

The Mandriva 2011 schedule has slipped: "Due to a huge number of big changes in Mandriva 2011 so far, combined with rpm5 migration both in the repositories and inside the build system, we have decided to shift the release dates for Mandriva 2011 by two weeks, to give us a better time period to fit the remaining pieces." Mandriva 2011 final is now scheduled for June 13. In the meantime, a technology preview is available: "The Technology Preview showcases what will be inside the first Mandriva 2011 Alpha version. It already comes with rpm5, native systemd, networkmanager support, KDE 4.6.0, kernel 2.6.37, firefox 4b10, X.org server 1.9, clementine 0.6 and lots of updated packages everywhere."
Newsletters and articles of interest
Page editor: Rebecca Sobol
Oregon State University's Open Source Lab (OSU-OSL) has gotten a hand from Facebook to create an on-demand testing infrastructure for open source projects called Supercell. The idea behind Supercell is to provide limited-duration hosting for open source projects that need to test on specific operating systems, as well as to provide facilities for projects to test software in a large cluster with several VMs running concurrently. When finished, Supercell will provide test infrastructure for open source projects that don't have their own server farm and testing setup.
The project was announced on Thursday, January 20 by Facebook's Scott MacVicar. MacVicar wrote that there's a disparity in development resources between many open source projects and companies doing in-house development of software: namely that many open source projects lack the kind of hardware and testing infrastructure that companies have at their disposal.
To help solve the problem, Facebook has decided to donate hardware and funding to OSU-OSL to develop Supercell, a service for projects to test on multiple operating systems and architectures. The Open Source Lab provides hosting to quite a few open source projects and communities, so it's not surprising that Facebook would look to OSL for assistance with Supercell. Why is Facebook interested? OSL's Leslie Hawthorn said that, while she didn't want to speak for Facebook, her conversations with the company indicated that Facebook's goal with open source is to "let people make useful stuff," and that OSL was a natural partner because "they know we're neutral, and we're here for the benefit of open source."
Currently the hardware is x86 and AMD64, with a number of guest OSes available. At present, Supercell supports Debian Lenny (5.0), CentOS 5.5, Gentoo, Gentoo Hardened, and Ubuntu Lucid (10.04), Karmic (9.10), and Maverick (10.10). According to MacVicar, some Mac OS X servers (two Apple Xserves) are also available "for those projects that explicitly need to test on Mac OS." The current hardware for Supercell consists of two Dell servers, each with four 2.1GHz 12-core Opteron CPUs and 128GB of RAM; another server with two 2.4GHz four-core Intel E5620 CPUs and 12GB of RAM; and 12TB of disk for NFS storage.
Plans are also on the table to support Fedora, FreeBSD, and OpenBSD in the near future, and OSU-OSL is evaluating feedback from the community in deciding on additional OSes and architectures. According to the FAQ, Supercell may support Alpha, ARM, ARMel, PowerPC, SPARC, and others as a longer term goal.
The hardware cluster is being managed by Ganeti and Ganeti Web Manager on top of Linux and KVM. The entire stack under Supercell is, of course, open source. Ganeti, which was originally developed by Google, is a tool for virtualization management that handles deploying and managing virtual machine instances on top of KVM or Xen. According to its documentation, Ganeti can deploy a new virtual machine running Ubuntu in under 30 seconds, including hostname, networking, and serial console set up. Images are a gzipped tarball or filesystem dump of an operating system, usually running between 200 and 400MB in size.
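As a rough illustration of what that deployment step looks like from the command line, the following sketch uses hypothetical node and instance names; the exact flags depend on the cluster's storage template and installed OS definitions:

```shell
# Create a KVM-backed instance from an OS definition (all names are examples).
#   -t plain                 simple LVM-backed storage (no redundancy)
#   --disk 0:size=10G        one 10GB disk
#   -o debootstrap+default   OS definition used to build the image
gnt-instance add -t plain --disk 0:size=10G -o debootstrap+default \
    -n node1.example.com instance1.example.com

# Attach to the instance's serial console, then list cluster instances.
gnt-instance console instance1.example.com
gnt-instance list
```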
Ganeti has been in development for some time, with its initial public release in August 2007 under the GPLv2. Google started the project in its Zurich office for cluster management of virtual servers on commodity hardware — pretty much the same thing that OSU-OSL and Facebook were looking for.
The Ganeti Web Manager is a bit newer, a result of collaboration between OSL and students from the 2010 Google Code-In. Ganeti Web Manager is a Django-based application that provides web-based management for Ganeti clusters. It's still maturing, but the 0.4 release from December 22, 2010 is considered "enough to get people to start using it in production" according to OSL's Lance Albertson. The 0.4 release implements basic VM management, a VNC console, a permissions system for managing clusters and virtual machines, and SSH key management.
Since the service is "on-demand," what happens when a project comes back for a second round of testing? According to OSL's operations manager Jeff Sheltren, OSL plans to tie into a configuration management framework such as Puppet so projects can save and reuse configurations. "This will allow OSL to provide a base set of standard configurations people can use (think: 'I need a LAMP stack') as well as giving projects the ability to fine tune their environment and re-use that configuration for future VMs."
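As a sketch of what such a reusable base configuration might look like, here is a minimal Puppet manifest for the "LAMP stack" case; the class name and package list are assumptions for illustration, not OSL's actual configuration:

```puppet
# Hypothetical base class that a project could apply to any test VM.
class supercell::lamp {
  # Debian/Ubuntu package names; other guest OSes would differ.
  package { ['apache2', 'mysql-server', 'php5', 'libapache2-mod-php5']:
    ensure => installed,
  }

  service { 'apache2':
    ensure  => running,
    enable  => true,
    require => Package['apache2'],
  }

  service { 'mysql':
    ensure  => running,
    enable  => true,
    require => Package['mysql-server'],
  }
}
```

A project could apply such a base class as-is, then layer its own tweaks on top and save the result for future VMs — the reuse model Sheltren describes.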
Projects eager to get hands-on with Supercell will have to wait a few more months, at least if they're hoping to use OSU's hardware and services. The service is considered early alpha at this point, with a projection that it will be ready by the third quarter of 2011. In the interim, OSU is looking for additional sponsors for Supercell. Sheltren says that Facebook's donation amounts to about $50,000 in hardware and funds. That will support a fair amount of development, but there's plenty of work ahead.
Currently there is no discussion list for the Supercell service, but OSL is looking for feedback on what other operating systems Supercell could support, along with other requests from the community. By submitting the feedback form, interested parties can sign up for the Supercell announcement list as well. Hawthorn did say that OSL may set up a discussion list, and the lab will provide regular updates about Supercell via its blog and Twitter.
But projects looking to implement their own "Supercell" can start today, just add hardware. Developers interested in helping with Ganeti and Ganeti Web Manager can find more info on the wiki including the mailing list and documentation.
Though Ganeti Web Manager is considered "production ready" by Albertson, it still has a lengthy roadmap of features that OSL plans to integrate: templates for virtual machines, the ability to modify or reinstall VMs, support for noVNC in place of the Java VNC client, and serial console support.
With any luck, Facebook won't be the sole supporter of Supercell outside of OSU-OSL. The project has a lot of potential to provide a much-needed facility for short-term testing resources that many projects simply couldn't afford.
Newsletters and articles
Page editor: Jonathan Corbet
Brief items

APNIC has announced that it has received the last two freely available IPv4 address blocks from the Internet Assigned Numbers Authority (IANA). Under the existing plan, IANA will distribute the five remaining address blocks to each of the five Regional Internet Registries (RIRs). The RIRs will then distribute addresses within those blocks to organizations within their regions. That means that IANA is out of IPv4 address space, and the RIRs won't be too far behind: "APNIC expects normal allocations to continue for a further three to six months. After this time, APNIC will continue to make small allocations from the last /8 block". Furthermore, "APNIC reiterates that IPv6 is the only means available for the sustained ongoing growth of the Internet, and urges all Members of the Internet industry to move quickly towards its deployment."

The Blender project has put out a press release about two companies that are re-branding the Blender 3D content creation suite and selling it. While that is not a GPL violation of any kind, the companies are playing fast and loose with copyright: "The companies IllusionMage and 3Dmagix resell via their websites Blender under their own name. Both websites are probably managed by the same person or company. [...] On their web pages they intentionally hide that the products are distributions of GNU GPL licensed software, and that the software is freely downloadable as well. More-over, even after contacting them several times, they don't remove copyrighted content from their websites. A lot of text and images have been copied from blender.org and random images - not even from blender - were copied from various CG [Computer Graphics] websites." (Thanks to Paul Wise.)

In an amicus brief filed in Microsoft v. i4i, the EFF argues that the existing high standard of proof for invalidating a patent in federal court unfairly gives the owners of bad patents the upper hand.
"Currently, when a defendant is accused of infringing a patent, the Federal Circuit wants to see "clear and convincing" evidence that that patent is illegitimate and the case against it unfounded. This is in contrast to the standard of proof for most civil cases, which is a "preponderance of the evidence" -- or a showing that more likely than not the allegations are true. In software cases, "clear and convincing" evidence of patent invalidity can be hard to come by, as source code is constantly changing over the life of a product and much of the original code is often unavailable. This is a particular problem with free and open source software, as the collaborative nature of the projects make documentation even harder."
Articles of interest

A set of predictions for the year has been posted. It's always an interesting read; this year there are 25 separate predictions, including: "2011 will see the outbreak of the first massive botnet/malware that attacks smartphones, most likely iPhone or Android models running older software than the latest and greatest. If Android is the target, it will lead to aggressive finger-pointing, particularly given how many users are presently running Android software that's a year or more behind Google's latest—a trend that will continue in 2011."

A columnist at pcmag.com has an—ummm—interesting view of Linux and open source software, but he thinks it is time for Microsoft to adopt it: "The fact is Microsoft is zigging when it should be zagging. It needs to open a new division that has nothing to do with the rest of the company, so Open Source code can't come into contact with its commercial code. Here it can evolve an Open Source and Linux policy with products for sale and support services. The company needs to get back to an even footing with Google in the phone and, soon, the pad business. It may not catch up with Apple insofar as innovation is concerned, but it can't afford to languish and constantly be humiliated by seemingly pointless and dead-end rollouts."
The last round of FOSDEM speaker interviews is now available. The subjects include: Manik Surtani (Infinispan), David Chisnall (Objective-C), Nicolas Spiegelberg (Facebook Messages), Chris Hofmann (Mozilla Firefox), Jos van den Oever (WebODF), Michael Meeks (LibreOffice), Chris Lattner (LLVM), and Andrew Gerrand (Go). From Van den Oever's interview: "The talk will explain what the WebODF project is about and how it can be used to add ODF support to your website or desktop application. There are several good Free Software solutions for working with ODF on the desktop and on mobile devices, notably LibreOffice and Calligra. These are written in C++, are compiled natively, and need to be installed on each machine on which they are used. Cloud solutions can be run in the browser, but there was no Free Software ODF software for the browser." FOSDEM will be held February 5-6 in Brussels, Belgium.
New BooksFLOSS Manuals has coordinated an effort to produce a free book called An Open Web, which is now available. The book was made with free software and is open to contributions from anyone. "The process for making the book is known as a 'Book Sprint.' It is an intensive and innovative methodology for the rapid development of books. It took five people and locked them in a room in Berlin's CHB for five days with the goal to produce a book with the sole guiding meme being the title — An Open Web. The authors had to create the concept, write the book, and output it to print in 5 days."
Resources

The bufferbloat.net web site is up. It is meant to be a place for developers and administrators to work on solving bloat-related problems; it currently hosts a few mailing lists and a talk on bufferbloat by Jim Gettys.

Also available is a brief introduction to Linux-based "plug" computers. "Fortunately, there's a class of computers ideally suited to that sort of job: "plug computers", sometimes called Sheevaplugs after an early model. The whole computer is built into the bit that plugs into the wall, so they're barely bigger than a normal "wall wart" power supply. They use power-efficient ARM CPUs, so you can run a server with only 5 watts. They're inexpensive, usually just over $100 for a plug with 512M RAM and 512M flash. Best of all, they come with Linux installed right out of the box."
Contests and Awards

Nominations are open for the FSF's annual awards: "The Free Software Foundation Award for the Advancement of Free Software is presented annually by FSF president Richard Stallman to an individual who has made a great contribution to the progress and development of free software, through activities that accord with the spirit of free software. [...] Nominations are also open for the 2010 Award for Projects of Social Benefit. The Social Benefit award recognizes a project that intentionally and significantly benefits society through collaboration to accomplish an important social task."
Education and Certification

New developer training courses have been announced: "The Android and MeeGo developer courses will help meet new demands for Linux training and help to fill open positions at a variety of The Linux Foundation's member companies. These courses will give professionals lucrative job skills while helping to advance Linux in this space."
Calls for Presentations

The call for proposals for OSCON closes on February 7. "OSCON (O'Reilly Open Source Convention), the premier Open Source gathering, will be held in Portland, OR July 25-29. We're looking for people to deliver tutorials and shorter presentations."
Global Ignite Week 2011 (several, worldwide)
Red Hat Developer Conference 2011 (Brno, Czech Republic)
February 15: 2012 Embedded Linux Conference (Redwood Shores, CA, USA)
February 25: Build an Open Source Cloud (Los Angeles, CA, USA)
Southern California Linux Expo (Los Angeles, CA, USA)
February 25: Ubucon (Los Angeles, CA, USA)
February 26: Open Source Software in Education (Los Angeles, CA, USA)
Linux Foundation End User Summit 2011 (Jersey City, NJ, USA)
March 5: Open Source Days 2011 Community Edition (Copenhagen, Denmark)
Drupalcon Chicago (Chicago, IL, USA)
ConFoo Conference (Montreal, Canada)
conf.kde.in 2011 (Bangalore, India)
PyCon 2011 (Atlanta, Georgia, USA)
March 19: Open Source Conference Oita 2011 (Oita, Japan)
Chemnitzer Linux-Tage (Chemnitz, Germany)
March 19: OpenStreetMap Foundation Japan Mappers Symposium (Tokyo, Japan)
Embedded Technology Conference 2011 (San Jose, Costa Rica)
OMG Workshop on Real-time, Embedded and Enterprise-Scale Time-Critical Systems (Washington, DC, USA)
UKUUG Spring 2011 Conference (Leeds, UK)
PgEast PostgreSQL Conference (New York City, NY, USA)
Palmetto Open Source Software Conference (Columbia, SC, USA)
March 26: 10. Augsburger Linux-Infotag 2011 (Augsburg, Germany)
GNOME 3.0 Bangalore Hackfest | GNOME.ASIA SUMMIT 2011 (Bangalore, India)
March 28: Perth Linux User Group Quiz Night (Perth, Australia)
NASA Open Source Summit (Mountain View, CA, USA)
Flourish Conference 2011! (Chicago, IL, USA)
Workshop on GCC Research Opportunities (Chamonix, France)
April 2: Texas Linux Fest 2011 (Austin, Texas, USA)
Camp KDE 2011 (San Francisco, CA, USA)
SugarCon 11 (San Francisco, CA, USA)
Selenium Conference (San Francisco, CA, USA)
5th Annual Linux Foundation Collaboration Summit (San Francisco, CA, USA)
Hack'n Rio (Rio de Janeiro, Brazil)
April 9: Linuxwochen Österreich - Graz (Graz, Austria)
April 9: Festival Latinoamericano de Instalación de Software Libre
If your event does not appear here, please tell us about it.
Page editor: Rebecca Sobol
Copyright © 2011, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds