LWN.net Weekly Edition for July 08, 2010
A line in the sand for graphics drivers
Support for certain classes of hardware has often been problematic for the Linux kernel, and 3D graphics chips have tended to be at the top of the list. Over the last few years, through a combination of openness at Intel and AMD/ATI and reverse engineering for NVIDIA, the graphics problem has mostly been solved - for desktop systems. The situation in the fast-growing mobile space is not so comforting, though. As can be seen in recent conversations, free support for mobile graphics looks like the next big problem to be solved.

At first glance, the announcement of a 2D/3D driver for Qualcomm "ES 3D" graphics cores (found in the Snapdragon processor which, in turn, is found in a number of high-end smartphones) seems like a good thing. Graphics support for this core is one of the binary blobs which is necessary to run Android on that processor, and it seemed like Qualcomm was saying the right things:
Advice and comment is what he got. The problem is that, while the kernel driver is GPL-licensed, it is only a piece of the driver. The code which does the real work of making 3D function on that GPU runs in user space, and it remains locked-down and proprietary. Dave Airlie, the kernel graphics maintainer, made it quite clear what he thinks of such drivers:
If you aren't going to create an open userspace driver (either MIT or LGPL) then don't waste time submitting a kernel driver to me.
Dave's message explains his reasoning in detail; little of it will be new to most LWN readers. He is concerned about possible licensing issues and, at several levels, about the kernel community's ability to verify the driver and to fix it as need be. Dave has also expressed his resentment at how the mobile chipset vendors are getting great value from Linux but seem entirely unwilling to give back to the kernel they have come to depend on so heavily.
This move may strike some people as surprising. There has been a lot of pressure to get Android-related code into the mainline, but now an important component is being rejected - again. The fact that user-space code is at issue is also significant. The COPYING file shipped with the kernel begins with this text:
Normally, kernel developers see user space as a different world with its own rules; it is not at all common for kernel maintainers to insist on free licensing for user-space components. Dave's raising of licensing issues might also seem, on its face, to run counter to the above text: he is saying explicitly that closed user-space graphics drivers might be a work derived from the kernel and, thus, a violation of the GPL. These claims merit some attention.
The key text above is "normal system calls." A user-space graphics driver does not communicate with its kernel counterpart with normal system calls; it will use, instead, a set of complex operations which exist only to support that particular chipset. The kernel ABI for graphics drivers is not a narrow or general-purpose interface. The two sides are tightly coupled, a fact which has made the definition of the interface between them into a difficult task - and the maintenance of it almost as hard. While a program using POSIX system calls is clearly not derived from the kernel, the situation with a user-space graphics driver is not nearly so clear.
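To make the contrast concrete, here is a purely hypothetical sketch of the sort of interface a user-space 3D driver uses to talk to its kernel half. The structure layout, field names, and ioctl number below are invented for illustration (they do not correspond to Qualcomm's driver or any other real one), but the shape is typical: a command-stream submission call whose layout only makes sense for one particular chip.

    /*
     * Hypothetical example only: the structure layout, field names, and
     * ioctl number are invented to illustrate how chipset-specific a 3D
     * driver's kernel interface tends to be.
     */
    #include <stdint.h>
    #include <sys/ioctl.h>

    struct example_gpu_submit {
        uint64_t cmdbuf_ptr;    /* user pointer to the GPU command stream */
        uint32_t cmdbuf_size;   /* size of the command stream, in bytes */
        uint32_t num_relocs;    /* buffer-object relocations to patch in */
        uint64_t relocs_ptr;    /* user pointer to the relocation table */
        uint32_t ring;          /* which hardware ring to execute on */
        uint32_t fence;         /* returned fence for completion tracking */
    };

    #define EXAMPLE_GPU_SUBMIT _IOWR('G', 0x10, struct example_gpu_submit)

    /*
     * The user-space driver builds a chip-specific command stream and hands
     * it to its kernel counterpart; nothing here resembles read() or write().
     */
    static int example_submit(int fd, struct example_gpu_submit *sub)
    {
        return ioctl(fd, EXAMPLE_GPU_SUBMIT, sub);
    }

Nothing in POSIX describes an operation like this; the two sides must agree, in detail, on what the hardware expects, which is why the kernel and user-space halves are so tightly coupled.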
It should also be pointed out that, while the kernel community does not normally try to dictate licensing in user space, that community has also never felt bound to add interfaces for the sole use of proprietary code. The resistance to the addition of hooks for anti-malware programs is a classic example.
But licensing is not the only issue here. In a sense, user-space 3D graphics drivers are really kernel components which simply happen to be running in a separate address space. They necessarily have access to privileged functionality, and they must program a general-purpose processor (the GPU) with the ability to easily hose the system. Without the user-space component, the kernel will not function well. Like other pieces of the kernel, the full 3D driver must be carefully examined to be sure that there are no security problems, fatal bugs, or portability issues. The kernel developers must be able to make changes to the kernel-side driver with full knowledge of what effect those changes will have on the full picture. A proprietary user-space driver clearly makes all of this more difficult - if not impossible.
User-space binary blob drivers also miss out on many of the important benefits of the free software development process. They will contain bugs and a great deal of duplicated code.
What Dave (and others) are clearly hoping is that, by pushing back in this way, they will be able to motivate vendors to open up their user-space drivers as well. The history in this regard is encouraging, but mixed. Over time, hardware vendors have generally come to realize that the value they are selling is not in the drivers and that they can make their lives easier by getting their code out into the open. What did it gain all of those wireless networking vendors to implement and maintain their own 802.11 stacks? One can only imagine that they must be glad to be relieved of that burden. But getting them to that point generally required pressure from the kernel development community.
Hopefully, this pressure will convince at least some mobile 3D vendors to open up as well. That pressure would be increased and made far more effective if at least some device manufacturers would insist on free software support for the components they use. There are companies working in this area which make a lot of noise about their support for Linux. They could do a lot to make Linux better by insisting on that same support from their suppliers.
Over the years, we have seen that pushing back against binary-only drivers has often resulted in changes for the better; we now have free support for a lot of hardware which was once only supported by proprietary drivers. Some vendors never relent, but enough usually do that the recalcitrant ones can simply be routed around. One shudders to think about what our kernel might look like now had things not gone that way. The prevalence of binary-only drivers in the mobile space shows that this fight is not done, though. 3D graphics drivers are unique in many respects, including their use of user-space components. But, if we want to have a free kernel in the coming years, we need to hope that they will be subject to the same pressures.
Trying out Firefox 4
It should come as no big surprise that a large part of what we do here at LWN involves web browsing. We are, after all, a web publication, and many of our sources and other information come from various places across the web, so we tend to spend a lot of time using a web browser. When Mozilla announced the availability of the first beta of Firefox 4 on July 6, it seemed like a perfect opportunity to take the new browser for a spin.
![[Full-screen mode]](https://static.lwn.net/images/2010/ff4-fullscreen-sm.png)
One of the biggest—most visible—changes is the overall browser user interface. For Linux and Mac users, though, those UI changes have not yet been made, though some of the new functionality can be seen. By using the "Tabs on top" selection in the View -> Toolbars menu, something of a limited preview of the new Firefox look can supposedly be achieved. Based on the descriptions from Windows users, though, there is much more to the changes than just moving the tabs. Full-screen mode in Firefox 4 (FF4) may give a better preview of what the default look will be: tabs all the way at the top that hide themselves when they aren't needed.
![[HTML5 WebM video]](https://static.lwn.net/images/2010/ff4-video-sm.png)
In any case, the UI is only one part of the changes that come with FF4. Another important addition, at least for some folks, is the ability to play HD-quality video in the browser using Google's new WebM format. In my testing of the beta, videos seemed to work reasonably well—at least as well as they do using Flash in Firefox 3.5—but only at 360p. It may have been a bandwidth problem at this end, but 720p videos played poorly, if at all. It should be noted that the Flash version of the YouTube videos also suffered from some playback problems at 480p in FF4, but didn't seem to in Firefox 3.5.10.
On the other hand, in nearly a full day's worth of web surfing, FF4 was quite solid, unlike 3.5, which seems plagued by some kind of hang that happens more-or-less daily. I've always suspected Flash as the culprit, but never narrowed it down to that for sure. One of the more interesting new features in FF4 is crash protection, which is meant to disable a misbehaving plugin after it becomes unresponsive for 45 seconds by default. As far as I could tell, that never occurred on any of the pages visited with FF4. In the end, though, there were no crashes or hangs in five or more hours of pretty continuous use, which makes for a pretty stable beta in my book.
Another nice addition to the UI (one that Linux users do get to see in the beta) is turning the "Addons Manager" into a full-fledged tab, rather than a small pop-up window. It makes for much easier interaction when installing, updating, and configuring addons and plugins. One wonders if Preferences will eventually go that route, as the relatively small tabbed window that is used currently has gotten pretty cluttered. A reworking along the lines of the new Addons Manager would be welcome.
Installing the beta was completely straightforward: download a tar file, untar it, and type ./firefox. It picked up the settings, bookmarks, and so on from my current yum-installed version and, crucially, didn't rewrite those files in such a way that the older version could no longer read them. It is a well-behaved guest and, based on its performance so far, could easily become my default going forward. Undoubtedly, typing that will doom me to some horrible, unrecoverable crash right in the middle of pushing out this week's edition—luckily I have the laptop as a fall-back position.
There are lots of little things that will come in handy. Changes have been made to stop the CSS browser history leak by altering how getComputedStyle() works in JavaScript. That will be a boon for anyone concerned about privacy in their browsing history. There is a new "Heads Up Display" that will be useful to web developers. It looks very similar to the console provided by the ever-useful Firebug addon. And so on.
![[Feedback tool]](https://static.lwn.net/images/2010/ff4-feedback-sm.png)
Mozilla is making a big push to get feedback on FF4. By default, the Feedback addon is installed, which puts a prominent button in the upper right. The two main choices ("Firefox Made Me Happy/Sad Because...") take you to a page to quickly fill out information about what worked or didn't. One can also include the URL of a misbehaving web site in the report. It is a quick, nearly painless way to report problems (or give kudos) on the beta.
One of the things that the Mozilla folks concentrated on with FF4 was performance. Firefox has, somewhat deservedly, gotten the reputation of being slow, and that is something that the development team is working very hard to change. There are numerous improvements in FF4 to speed up the browser, but I didn't find them to be particularly noticeable. While I use the browser constantly, it may well be that I don't push it as hard as others, or perhaps am more forgiving. In any case, FF4 certainly didn't seem slower than its predecessors; maybe over time the performance increase will become more evident.
Other than certain addons not (yet, presumably) being available for FF4, I could pretty easily make the switch, which is a little surprising to me. Other reports have found FF4 to be much less stable than I did. In any case, it seems that Mozilla is on the right track with a fairly large incremental update to its browser. FF4 is scheduled to be released late this year and it looks to be a great browser to head into 2011 with.
Debian declassification delayed
In 2005, the Debian project voted to declassify messages on the debian-private mailing list after a period of three years. That is easier said than done, apparently. The General Resolution (GR) calls for volunteers to do the work of declassification, and few Debian Developers seem eager to do the work required to make it happen.
The debian-private list is, as the name suggests, a non-public list that is used by Debian Developers to discuss issues without the prying eyes of users, press, or anyone else outside the Debian project. The list and its archive are only available to Debian Developers, and anything sent to debian-private is not to be spread to other lists.
Former Debian Project Leader (DPL) Steve McIntyre says that the traffic on debian-private "varies quite a lot, from a couple of dozen messages in some months to hundreds in others", depending on whether there's a large and sensitive discussion. But most of the traffic is mundane, according to McIntyre:
Most of the traffic is quite boring these days: vacation messages as you've heard about, plus related discussions. We do have occasional sensitive discussions where, for a variety of reasons, people would rather not have them in public: discussions about relationships with upstream developers, people joining or leaving the project, etc. Normally nothing too juicy, I'm afraid, but it's often kept private to avoid offence as much as anything else.
Plus, as you'll see on a lot of geek mailing lists and newsgroups, there's quite a tendency to wander totally off-topic or into humour. And then a similar amount of stuff from people asking "why is this on -private?" Quite boring, really...
The GR to declassify was put forward by Anthony Towns prior to his stint as DPL, with an amendment proposed by Daniel Ruoso. Towns' GR called for a volunteer team to declassify messages on the debian-private mailing list three years after posting, with a set of exceptions. The volunteers are required to contact authors and give them four to eight weeks to object to their messages being made public. Posts with any financial information about outside organizations would not be published, and the posts are to be made available to all Debian Developers two weeks prior to publication. The developer body can overrule publication by another General Resolution, even if the author consents.
Towns reasoned that debian-private went against the Debian Social Contract. According to Towns, the list has hosted important discussions in the evolution of Debian. The discussions should be open for examination at some point for academics and other projects to see how Debian has dealt with those issues.
Ruoso's amendment, which was approved, applied the GR only to messages sent after the GR passed. Thus, no messages prior to the end of 2005 are eligible for declassification.
In theory, the process of declassifying messages from 2006 onwards should be well underway. In practice, the GR seems to be an unfunded mandate. Volunteers to do the work seem to be in short supply. McIntyre asked for volunteers in January 2009, but little interest was shown. In May, current DPL Stefano Zacchiroli posted a request for volunteers for the declassification team to debian-project.
Since volunteer bodies seemed in short supply, Martin Krafft offered a simpler method. Krafft suggested that "archive chunks" be made available at monthly periods, with two months for authors to delete posts they don't wish disclosed. This was rejected as it does not fit with the original GR, and because some participants on debian-private in 2006 may no longer be Debian Developers — thus lacking access to delete messages.
The amount of work required may scare off the few developers actually interested in taking on the task. Don Armstrong replied to Zacchiroli's call for volunteers by saying that he had considered taking on the task but gave up:
I had actually glanced at working on this earlier, but stopped after a small bit of time, because it wasn't particularly useful, and because the sheer amount of work that it would require to satisfy the terms of the GR. (And frankly, the majority of the conversations in the archive either aren't interesting enough to bother publishing, or are on topics that such a large number of people will want their messages redacted, that it's kind of useless.)
Giacomo A. Catenazzi suggests, in a recent posting, that much of the discussion on debian-private is private because "we don't want to show all world about our vacation dates and destinations, about health and children, about personal issues we have with other people (in and outside Debian), etc." Russ Allbery echoed Armstrong, saying that many developers seem unwilling to declassify anything of interest:
The GR was an interesting idea, but based on the number of debian-private participants who, for anything that would be of any interest whatsoever after three years, have said they don't want their messages ever disclosed, I think in practice participants have spoken and have basically vetoed any sort of effective disclosure.
As it stands, it looks like the declassification GR will result in few or no messages being made public. The only action thus far is a status page reiterating the call for Debian Developers to take up the task. Zacchiroli posted an update on June 25 saying that some volunteers had been located, but "no one with actually enough free time to start doing the declassification right now."
This may be no great loss. The final GR, with the amendment constraining declassification to messages sent after January 1, 2006, means that messages showing the early evolution of Debian would not be made public. The provision that developers can veto release of messages after that date ensures that little of a controversial nature would be released, leaving little worth reading. As Andreas Tille suggests, it may be a better use of developers' time to fix RC bugs than spend time slogging through old debian-private discussions to prove just how open Debian is as a project.
One might also wonder why the project does not simply abolish debian-private altogether, in the spirit of openness. However, that would likely move sensitive discussions off of a project list altogether. It may be that the best option is discussion on a list open to all Debian Developers, but closed to the larger public, rather than discussions held out of view of the majority of the project.
Security
Vulnerability disclosure policies
Security vulnerability disclosure policy is contentious. Vendors typically argue for "responsible disclosure", while some in the security community think that "full disclosure" is the only way to fully protect the users of vulnerable software. There are other disclosure policies as well, but it is important to note that security researchers are under no obligation to disclose flaws that they find at all, which is something that all entities distributing software—vendors, projects, distributions, and so forth—should keep in mind. Anyone who finds a vulnerability and points it out is doing so as a favor, regardless of how the disclosure is done.
Full disclosure is the policy of immediately disclosing the details of a vulnerability as soon as it is found. That may alert attackers to the flaw, but it also alerts users and allows them to make choices about mitigating the problem. It also puts enormous pressure on the maker of the software to produce a fix in a timely fashion.
Responsible disclosure, on the other hand, puts the choices largely in the hands of the vendor or project. The idea is to give the software maker some amount of time (usually on the order of weeks) to fix the problem and release a patch before any disclosure of the flaw is made. It is meant to be "responsible" to users, so that attackers don't get a leg up on the installed, vulnerable software before there is a fix available. But, responsible disclosure presupposes that attackers are not already aware of—and exploiting—the flaw.
There has also been a trend toward "partial disclosure" in the last few years. Typically practiced by those looking to make a name for themselves—and/or bring publicity to their research firm—partial disclosures are pretty much what they sound like: the announcement of the existence of a flaw with as few details as possible. But there is a fine, and difficult to draw, line between providing enough details to convince the security community that there is a flaw and not disclosing so much that others can figure out what that flaw is. Eventually, partial disclosures become some other kind of disclosure, either through the efforts of the original finder, or because other researchers were able to figure out the flaw from the clues.
Something of a new wrinkle in disclosure policy is zero (or no) disclosure—at least without payment. VUPEN Security has announced two vulnerabilities in Microsoft Office 2010 on its security blog, but is unwilling to disclose them to Microsoft until and unless the software giant ponies up for the information. The H quotes VUPEN CEO Chaouki Bekrar: "Why should security services providers give away for free information aimed at making paid-for software more secure?"
While there is nothing that requires security researchers to alert software makers to bugs in their code, it is a longstanding tradition to disclose those flaws. But security companies may now be more focused on mitigating any vulnerabilities they find for their customers, leaving the rest of the user community high and dry. At least until some deep-pocketed organization steps up and pays. While some in our community might be amused that Microsoft (and its users) are being treated this way, it may not be so funny if it starts happening to Linux or free software projects.
Microsoft is hardly blameless here. For years it treated security vulnerabilities as little more than a public relations problem. It has also had a rocky relationship with the security community, which has led to more than one exasperated disclosure of a "zero day" vulnerability. A recent privilege escalation in Vista and Server 2008 is just such a disclosure; researchers annoyed by criticism of Tavis Ormandy for the release of a Windows vulnerability formed the "Microsoft-Spurned Researcher Collective"—MSRC, just like the Microsoft Security Response Center—to anonymously make these kinds of zero day disclosures. It should be noted, though, that Microsoft is not alone; the Linux kernel community has also had a combative relationship with security researchers at times.
While there is little direct harm in security researchers keeping their knowledge of specific vulnerabilities to themselves, there is certainly the potential for harm with partial disclosures. This relatively new zero disclosure policy is, in reality, just a form of partial disclosure, and may provide attackers with just enough information to focus their efforts. If these researchers truly want to be paid for their efforts, they would be much better served by working with the established players in the vulnerability buying business (Tipping Point and others) or by approaching the affected vendor privately. For vulnerabilities in Linux and other free software, though, it's not particularly clear who would be willing to pick up the tab. We will just have to hope that, if that happens, any loud zero disclosure of a flaw like that provides enough clues for the "white hats" to track down the problem in short order.
Brief items
Quotes of the week
New vulnerabilities
acroread: multiple arbitrary code execution vulnerabilities
Package(s): acroread
CVE #(s): CVE-2010-1240 CVE-2010-1285 CVE-2010-1295 CVE-2010-1297 CVE-2010-2168 CVE-2010-2201 CVE-2010-2202 CVE-2010-2203 CVE-2010-2204 CVE-2010-2205 CVE-2010-2206 CVE-2010-2207 CVE-2010-2208 CVE-2010-2209 CVE-2010-2210 CVE-2010-2211 CVE-2010-2212
Created: July 1, 2010    Updated: January 21, 2011
Description: From the CVE entry:

CVE-2010-2203: Adobe Reader and Acrobat 9.x before 9.3.3 on UNIX allow attackers to execute arbitrary code or cause a denial of service (memory corruption) via unspecified vectors.

Note that all of the other CVEs are listed as Windows and MacOS X only. In addition, they are all as opaque as the above.
avahi: denial of service
Package(s): avahi
CVE #(s): CVE-2010-2244
Created: July 6, 2010    Updated: May 19, 2011
Description: From the Red Hat bugzilla:

A deficiency in the way avahi daemon processed packets with corrupted checksum(s). A remote attacker on the same local area network (LAN) could send a DNS packet with broken checksum, that would cause avahi-daemon to exit unexpectedly due to a failed assertion check.
bugzilla: information disclosure
Package(s): bugzilla
CVE #(s): CVE-2010-1204
Created: July 6, 2010    Updated: August 27, 2010
Description: From the CVE entry:

Search.pm in Bugzilla 2.17.1 through 3.2.6, 3.3.1 through 3.4.6, 3.5.1 through 3.6, and 3.7 allows remote attackers to obtain potentially sensitive time-tracking information via a crafted search URL, related to a "boolean chart search."
gcc: directory traversal
Package(s): gcc
CVE #(s): CVE-2010-0831 CVE-2010-2322
Created: July 6, 2010    Updated: September 28, 2012
Description: From the Red Hat bugzilla:

Dan Rosenberg reported a directory traversal flaw in fastjar that allows an attacker, who is able to convince a victim to extract a malicious .jar file, to overwrite arbitrary files on disk without prompting the victim. The files to be overwritten must be writable by the user extracting the .jar file.
libtiff: out-of-bounds writes
Package(s): libtiff
CVE #(s): CVE-2010-2233
Created: July 2, 2010    Updated: August 9, 2010
Description: From the Red Hat bugzilla:

A flaw was found in a way libtiff handled vertically flipped (i.e. with negative toskew) images on 64bit platforms. Numeric computation used to calculate buffer pointer was done in the way that incorrectly extended from 32bit type to 64bit, resulting in out-of-bounds writes.
mahara: multiple vulnerabilities
Package(s): mahara
CVE #(s): CVE-2010-1667 CVE-2010-1668 CVE-2010-1670 CVE-2010-2479
Created: July 2, 2010    Updated: August 23, 2010
Description: From the Debian advisory:

Several vulnerabilities were discovered in mahara, an electronic portfolio, weblog, and resume builder. The following Common Vulnerabilities and Exposures project ids identify them:

Multiple pages performed insufficient input sanitising, making them vulnerable to cross-site scripting attacks. (CVE-2010-1667)

Multiple forms lacked protection against cross-site request forgery attacks, therefore making them vulnerable. (CVE-2010-1668)

Gregor Anzelj discovered that it was possible to accidentally configure an installation of mahara that allows access to another user's account without a password. (CVE-2010-1670)

Certain Internet Explorer-specific cross-site scripting vulnerabilities were discovered in HTML Purifier, of which a copy is included in the mahara package. (CVE-2010-2479)
mediawiki: multiple vulnerabilities
Package(s): mediawiki
CVE #(s): CVE-2010-1189 CVE-2010-1190
Created: July 6, 2010    Updated: July 7, 2010
Description: From the Fedora advisory:

Three security issues are fixed in this update: A CSS validation issue was discovered which allows editors to display external images in wiki pages. A data leakage vulnerability was discovered in thumb.php which affects wikis which restrict access to private files using img_auth.php, or some similar scheme. MediaWiki was found to be vulnerable to login CSRF. The upstream authors recommend that all public wikis should be upgraded if possible. The fix includes a breaking change to the API login action. Any clients using it will need to be updated.
mediawiki: multiple vulnerabilities
Package(s): mediawiki
CVE #(s): CVE-2010-1647 CVE-2010-1648
Created: July 6, 2010    Updated: July 7, 2010
Description: From the Fedora advisory:

CVE-2010-1647: Cross-site scripting (XSS) vulnerability in MediaWiki 1.15 before 1.15.4 and 1.16 before 1.16 beta 3 allows remote attackers to inject arbitrary web script or HTML via crafted Cascading Style Sheets (CSS) strings that are processed as script by Internet Explorer.

CVE-2010-1648: Cross-site request forgery (CSRF) vulnerability in the login interface in MediaWiki 1.15 before 1.15.4 and 1.16 before 1.16 beta 3 allows remote attackers to hijack the authentication of users for requests that (1) create accounts or (2) reset passwords, related to the Special:Userlogin form.
rpm: privilege escalation
Package(s): rpm
CVE #(s): CVE-2010-2059 CVE-2010-2198
Created: July 6, 2010    Updated: September 22, 2010
Description: From the CVE entries:

lib/fsm.c in RPM 4.8.0 and unspecified 4.7.x and 4.6.x versions, and RPM before 4.4.3, does not properly reset the metadata of an executable file during replacement of the file in an RPM package upgrade, which might allow local users to gain privileges by creating a hard link to a vulnerable (1) setuid or (2) setgid file. (CVE-2010-2059)

lib/fsm.c in RPM 4.8.0 and earlier does not properly reset the metadata of an executable file during replacement of the file in an RPM package upgrade or deletion of the file in an RPM package removal, which might allow local users to gain privileges or bypass intended access restrictions by creating a hard link to a vulnerable file that has (1) POSIX file capabilities or (2) SELinux context information, a related issue to CVE-2010-2059. (CVE-2010-2198)
Page editor: Jake Edge
Kernel development
Brief items
Kernel release status
The current development kernel is 2.6.35-rc4, released on July 4. "I've been back online for a week, and at least judging by the kinds of patches and pull requests I've been getting, I have to say that I think that having been strict for -rc3 ended up working out pretty well. The diffstat of rc3..rc4 looks quite reasonable, despite the longer window between rc's. And while there were certainly some things that needed fixing, I'm hoping that we'll have a timely 2.6.35 release despite my vacation [...]". For all of the details, both the short-form changelog and the full changelog are available.
The stable kernel floodgates opened up and five separate kernels poured out: 2.6.27.48, 2.6.31.14, 2.6.32.16, 2.6.33.6, and 2.6.34.1. There will be no more 2.6.31 stable kernels, and there will only be one more 2.6.33 release. The others will still see updates for some time to come.
Quotes of the week
Kernel development news
Ask a kernel developer
One in a series of columns in which questions are asked of a kernel developer and he tries to answer them. If you have unanswered questions relating to technical or procedural things around Linux kernel development, ask them in the comment section, or email them directly to the author.

How do I figure out who to email about a problem I am having with the kernel? I see a range of messages in the kernel log; how do I go from that to the proper developers who can help me out?
For example, right now I am seeing the following error:
[ 658.831697] [drm:edid_is_valid] *ERROR* Raw EDID:
[ 658.831702] 48 48 48 48 50 50 50 50 20 20 20 20 4c 4c 4c 4c HHHHPPPP LLLL
[ 658.831705] 50 50 50 50 33 33 33 33 30 30 30 30 36 36 36 36 PPPP333300006666
[ 658.831709] 35 35 35 35 0a 0a 0a 0a 20 20 20 20 20 20 20 20 5555....
Where do I start with tracking this down?
The kernel log is telling you where the problem is, so the real trick is going to be in tracking it down to who is responsible for the messages. There are different ways to go about this. You could first try to just grep the kernel source tree for the error string:
    $ cd linux-2.6
    $ git grep "\*ERROR\* Raw EDID"

Ok, that didn't work, let's try to narrow down the string some:
$ git grep "Raw EDID" drivers/gpu/drm/drm_edid.c: DRM_ERROR("Raw EDID:\n");Ok, now you have a file to look at. But who is responsible for this file? As mentioned previously you can use the get_maintainer.pl script by passing it the filename you are curious about:
    $ ./scripts/get_maintainer.pl -f drivers/gpu/drm/drm_edid.c
    David Airlie <airlied@linux.ie>
    Dave Airlie <airlied@redhat.com>
    Adam Jackson <ajax@redhat.com>
    Zhao Yakui <yakui.zhao@intel.com>
    dri-devel@lists.freedesktop.org
    linux-kernel@vger.kernel.org

This shows that the main DRM developer, David Airlie, is the best person to ask, but other developers may also be able to help out. Sending mail to David, and CCing the others listed (including the mailing lists) will get it in front of those who are most likely to be able to assist.
Another way to find out what code is responsible for the problem is to look at the name of the function that was writing out the error:
    [ 658.831697] [drm:edid_is_valid] *ERROR* Raw EDID:

The function name is in the [] characters, and we can look for that to see what code is calling it:
    $ git grep edid_is_valid
    drivers/gpu/drm/drm_edid.c: * drm_edid_is_valid - sanity check EDID data
    drivers/gpu/drm/drm_edid.c:bool drm_edid_is_valid(struct edid *edid)
    drivers/gpu/drm/drm_edid.c:EXPORT_SYMBOL(drm_edid_is_valid);
    drivers/gpu/drm/drm_edid.c:        if (!drm_edid_is_valid(edid)) {
    drivers/gpu/drm/radeon/radeon_combios.c:        if (!drm_edid_is_valid(edid)) {
    include/drm/drm_crtc.h:extern bool drm_edid_is_valid(struct edid *edid);

This points again at the drivers/gpu/drm/drm_edid.c file as being responsible for the error message.
In looking at the function drm_edid_is_valid there are a number of other messages that could have been produced in the kernel log right before this one:
    if (csum) {
            DRM_ERROR("EDID checksum is invalid, remainder is %d\n", csum);
            goto bad;
    }
    if (edid->version != 1) {
            DRM_ERROR("EDID has major version %d, instead of 1\n", edid->version);
            goto bad;
    }

So when you email the developers and mailing lists found by the get_maintainer.pl script, it is always important to provide all of the kernel log, not just the last few lines with the error, because there might be more information a bit higher up that the developers can use to help debug the problem.
[ Thanks to Peter Favrholdt for sending in this question. ]
On the scalability of Linus
The Linux kernel development process stands out in a number of ways; one of those is the fact that there is exactly one person who can commit code to the "official" repository. There are many maintainers looking after various subsystems, but every patch they merge must eventually be accepted by Linus Torvalds if it is to get into the mainline. Linus's unique role affects the process in a number of ways; for example, as this article is being written, Linus has just returned from a vacation which resulted in nothing going into the mainline for a couple of weeks. There are more serious concerns associated with the single-committer model, though, with scalability being near the top of the list.

Some LWN readers certainly remember the 2.1.123 release in September, 1998. That kernel failed to compile due to the incomplete merging (by Linus) of some frame buffer patches. A compilation failure in a development kernel does not seem like a major crisis, but this mistake served as a flash point for anger which had been growing in the development community for a while: patches were increasingly being dropped and people were getting tired of it. At the end of a long and unpleasant discussion, Linus threw up his hands and walked away:
Go away, people. Or at least don't Cc me any more. I'm not interested, I'm taking a vacation, and I don't want to hear about it any more. In short, get the hell out of my mailbox.
This was, of course, the famous "Linus burnout" episode of 1998. Everything stopped for a while until Linus rested a bit, came back, and started merging patches again. Things got kind of rough again in 2000, leading to Eric Raymond's somewhat sanctimonious curse of the gifted lecture. In 2002, as the 2.5 series was getting going, frustration with dropped patches was, again, on the rise; Rob Landley's "patch penguin" proposal became the basis for yet another extended flame war on the dysfunctional nature of the kernel development process and the "Linus does not scale" problem.
Shortly thereafter, things got a whole lot smoother. There is no doubt as to what changed: the adoption of BitKeeper - for all the flame wars that it inspired - made the kernel development process work. The change to Git improved things even more; it turns out that, given the right tools, Linus scales very well indeed. In 2010, he handles a volume of patches which would have been inconceivable back then and the process as a whole is humming along smoothly.
Your editor, however, is concerned that there may be some clouds on the horizon; might there be another Linus scalability crunch point coming? In the 2.6.34 cycle, Linus established a policy of unpredictable merge window lengths - though that policy has been more talk than fact so far. For 2.6.35, quite a few developers saw the merge window end with no response to their pull requests; Linus simply decided to ignore them. The blowup over the ARM architecture was part of this, but quite a few other trees remained unpulled as well. We have not gone back to the bad old days where patches would simply disappear into the void, and perhaps Linus is just experimenting a bit with the development process to try to encourage different behavior from some maintainers. Still, silently ignoring pull requests does bring back a few memories from that time.
Too many pulls?
A typical development cycle sees more than 10,000 changes merged into the mainline. Linus does not touch most of those patches directly, though; instead, he pulls them from trees managed by subsystem maintainers. How much attention is paid to specific pull requests is not entirely clear; he does look at each closely enough to ensure that it contains what the maintainer said would be there. Some pulls are obviously subjected to closer scrutiny, while others get by with a quick glance. Still, it's clear that every pull request and every patch will require a certain amount of attention and thought before being merged.
The following table summarizes mainline merging activity by Linus over the last ten kernel releases (the 2.6.35 line is through 2.6.35-rc3):
    Release     Pulls               Patches
                MergeWin   Total    Direct   Total
    2.6.26        159       426       288     1496
    2.6.27        153       436       339     1413
    2.6.28        150       398       313      954
    2.6.29        129       418       267      896
    2.6.30        145       411       249      618
    2.6.31        187       479       300      788
    2.6.32        185       451       112      789
    2.6.33        176       444       104      605
    2.6.34        118       393        94      581
    2.6.35        160       218        38      405
The two columns under "pulls" show the number of trees pulled during the merge window and during the development cycle as a whole. Note that it's possible that these numbers are too small, since "fast-forward" merges do not necessarily leave any traces in the git history. Linus does very few fast-forward merges, though, so the number of missed merges, if any, will be small.
Linus still directly commits some patches into his repository. The bulk of those come from Andrew Morton, who does not use git to push patches to Linus. In the table above, the "total" column includes changes that went by way of Andrew, while the "direct" column only counts patches that Andrew did not handle.
Some trends are obvious from this table: the number of patches going directly into the mainline has dropped significantly; almost everything goes through somebody else's tree first. What's left for Linus, at this point, is mostly release tagging, urgent fixes, and reverts. Andrew Morton remains the maintainer of last resort for much of the kernel, but, increasingly, changes are not going through his tree. Meanwhile, the number of pulls is staying roughly the same. It is interesting to think about why that might be.
Perhaps there is no need for more pulls despite the increase in the number of subsystem trees over time. Or perhaps we're approaching the natural limit of how many subsystem pull requests one talented benevolent dictator can pay attention to without burning out. After all, it stands to reason that the number of pull requests handled by Linus cannot increase without bound; if the kernel community continues to grow, there must eventually be a scalability bottleneck there. The only real question is where it might be.
If there is a legitimate concern here, then it might be worth contemplating a response before things break down again. One obvious approach would be to change the fact that almost all trees are pulled directly into the mainline; see this plot for a sense of just how flat the structure is for 2.6.35. Subsystem maintainers who have earned sufficient trust could possibly handle more lower-level pull requests and present a result to Linus that he can merge with relatively little worry. The networking subsystem already works this way; a number of trees feed into David Miller's networking tree before being sent upward. Meanwhile, other pressures have led to the opposite thing happening with the ARM architecture: there are now several subarchitecture trees which go straight to Linus. The number of ARM pulls seems to have been a clear motivation for Linus to shut things down during the 2.6.35 merge window.
Another solution, of course, would be to empower others to push trees directly into the mainline. It's not clear that anybody is ready for such a radical change in the kernel development process, though. Ted Ts'o's 1998 warning to anybody wanting a "core team" model still bears reading nearly twelve years later.
But if Linus is to retain his central position in Linux kernel development, the community as a whole needs to ensure that the process scales and does not overwhelm him. Doing more merges below him seems like an approach that could have potential, but the word your editor has heard is that Linus is resistant to too much coalescing of trees; he wants to look stuff over on its way into the mainline. Still, there must be places where this would work. Maybe we need an overall ARM architecture tree again, and perhaps there could be a place for a tree which would collect most driver patches.
The Linux kernel and its development process have a much higher profile than they did even back in 2002. If the process were to choke again due to scalability problems at the top, the resulting mess would be played out in a very public way. While there is no danger of immediate trouble, we should not let the smoothness of the process over the last several years fool us into thinking that it cannot happen again. As with the code itself, it makes sense to think about the next level of scalability issues in the development process before they strike.
Bcache: Caching beyond just RAM
Kent Overstreet has been working on bcache, which is a Linux kernel module intended to improve the performance of block devices. Instead of using just memory to cache hard drives, he proposes to use one or more solid-state storage devices (SSDs) to cache block data (hence bcache, a block device cache).
The code is largely filesystem agnostic as long as the filesystem has an embedded UUID (which includes the standard Linux filesystems and swap devices). When data is read from the hard drive, a copy is saved to the SSD. Later, when one of those sectors needs to be retrieved again, the kernel checks to see if it's still in page cache. If so, the read comes from RAM just like it always has on Linux. If it's not in RAM but it is on the SSD, it's read from there. It's like we've added 64GB or more of - slower - RAM to the system and devoted it to caching.
The design of bcache allows the use of more than one SSD to perform caching. It is also possible to cache more than one existing filesystem, or choose instead to just cache a small number of performance-critical filesystems. It would be perfectly reasonable to cache a partition used by a database manager, but not cache a large filesystem holding archives of old projects. The standard Linux page cache can be wiped out by copying a few large (near the size of your system RAM) files. Large file copies on that project archive partition won't wipe out an SSD-based cache using bcache.
Another potential use is using local media to cache remote disks. You could use an existing partition/drive/loopback device to cache any of the following: AOE (ATA-over-Ethernet) drive, SAN LUN, DRBD or NBD remote drives, iSCSI drives, local CD, or local (slow) USB thumb drives. The local cache wouldn't have to be an SSD, it would just have to be faster than the media you're caching.
Note that this type of caching is only for block devices (anything that shows up as a block device in /dev/). It isn't for network filesystems like NFS, CIFS, and so on (see the FS-cache module for the ability to cache individual files on an NFS or AFS client).
Implementation
To intercept filesystem operations, bcache hooks into the top of the block layer, in __generic_make_request(). It thus works entirely in terms of BIO structures. By hooking into the sole function through which all disk requests pass, bcache doesn't need to make any changes to block device naming or filesystem mounting. If /dev/md5 was originally mounted on /usr/, it continues to show up as /dev/md5 mounted on /usr/ after bcache is enabled for it. Because the caching is transparent, there are no changes to the boot process; in fact, bcache could be turned on long after the system is up and running. This approach of intercepting bio requests in the background allows us to start and stop caching on the fly, to add and remove cache devices, and to boot with or without bcache.
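As a very rough illustration of that interception idea, here is a small, self-contained toy model. The types and the trivial in-memory "cache" below are invented for the example; they have nothing to do with the kernel's or bcache's real data structures, but the decision made for each request is the one just described.

    /*
     * Toy model of cache interception; types and the trivial "cache" are
     * invented for illustration and are not bcache's actual structures.
     */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define CACHE_SLOTS 8

    struct io_request {
        uint64_t sector;    /* starting sector on the cached device */
        bool     is_write;
    };

    static uint64_t cached[CACHE_SLOTS];
    static int      cached_count;

    static bool cache_contains(uint64_t sector)
    {
        for (int i = 0; i < cached_count; i++)
            if (cached[i] == sector)
                return true;
        return false;
    }

    static void cache_insert(uint64_t sector)
    {
        if (cached_count < CACHE_SLOTS)
            cached[cached_count++] = sector;
    }

    /*
     * Conceptually, bcache sits at the one choke point all block requests
     * pass through: reads that hit the SSD are completed there; misses go
     * to the backing disk and a copy lands on the SSD for next time.
     */
    static void intercept_request(struct io_request *rq)
    {
        if (!rq->is_write && cache_contains(rq->sector)) {
            printf("sector %llu: served from SSD cache\n",
                   (unsigned long long)rq->sector);
            return;
        }
        printf("sector %llu: %s backing device\n",
               (unsigned long long)rq->sector,
               rq->is_write ? "written to" : "read from");
        if (!rq->is_write)
            cache_insert(rq->sector);   /* populate the cache on a read miss */
    }

    int main(void)
    {
        struct io_request first  = { .sector = 42, .is_write = false };
        struct io_request second = { .sector = 42, .is_write = false };

        intercept_request(&first);    /* miss: read from disk, cache it */
        intercept_request(&second);   /* hit: served from the SSD */
        return 0;
    }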
bcache's design focuses on avoiding random writes and playing to SSD's strengths. Roughly, a cache device is divided up into buckets, which are intended to match the physical disk's erase blocks. Each bucket has an eight-bit generation number which is maintained in a separate array on the SSD just past the superblock. Pointers (both to btree buckets and to cached data) contain the generation number of the bucket they point to; thus to free and reuse a bucket, it is sufficient to increment the generation number.
This mechanism allows bcache to keep the cache device completely full; when it wants to write some new data, it just picks a bucket and increments its generation number, thereby invalidating all the existing pointers to it. Garbage collection will remove the actual pointers eventually; there is no need for backpointers or any other infrastructure.
For each bucket, bcache remembers the generation number from the last time it had a full garbage collection performed. Once the difference between the current generation number and the remembered number reaches 64, it's time for another garbage collection. Because collection always happens well before the counter can wrap, an eight-bit generation number is sufficient.
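The generation arithmetic is simple enough to sketch in a few lines of C. This is only an illustrative model of the scheme just described, with invented names; it is not bcache's actual code.

    /*
     * Illustrative sketch of the generation-number scheme described above;
     * names and structures are invented, not taken from bcache itself.
     */
    #include <stdbool.h>
    #include <stdint.h>

    struct bucket {
        uint8_t gen;        /* current generation of this bucket */
        uint8_t last_gc;    /* generation at the last full garbage collection */
    };

    struct pointer {
        uint32_t bucket_index;
        uint8_t  gen;       /* generation the bucket had when this pointer was made */
    };

    /*
     * A pointer is stale (and its data reclaimable) as soon as the bucket's
     * generation has moved past the generation recorded in the pointer.
     */
    bool pointer_is_stale(const struct bucket *b, const struct pointer *p)
    {
        return b->gen != p->gen;
    }

    /*
     * Reusing a bucket is just an increment: every existing pointer into it
     * instantly becomes stale, and garbage collection removes them later.
     */
    void invalidate_bucket(struct bucket *b)
    {
        b->gen++;
    }

    /*
     * Once a bucket's generation has drifted 64 steps past the value recorded
     * at the last garbage collection, another collection is due, well before
     * the eight-bit counter could wrap and make a stale pointer look valid.
     */
    bool gc_needed(const struct bucket *b)
    {
        return (uint8_t)(b->gen - b->last_gc) >= 64;
    }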
Unlike standard btrees, bcache's btrees aren't kept fully sorted, so if you want to insert a key you don't have to rebalance the whole thing. Rather, they're kept sorted according to when they were written out; if the first ten pages are already on disk, bcache will insert into the 11th page, in sorted order, until it's full. During garbage collection (and, in the future, during insertion if there are too many sets) it'll re-sort the whole bucket. This means bcache doesn't have much of the index pinned in memory, but it also doesn't have to do much work to keep the index written out. Compare that to a hash table of ten million or so entries and the advantages are obvious.
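Here is a similarly hypothetical sketch of that idea: insertions only touch the last, still-in-memory set; older sets stay exactly as they were written out; and a lookup has to consult each set in turn. The sizes and names are made up, and the sketch assumes the last set still has room; it is only meant to show why insertion stays cheap.

    /*
     * Toy model of a btree node made up of several independently sorted
     * "sets" of keys; names and sizes are invented, and the sketch assumes
     * the last set still has room. It is not bcache's actual layout.
     */
    #include <stdint.h>

    #define SETS_PER_NODE   4
    #define KEYS_PER_SET   64

    struct key_set {
        uint64_t keys[KEYS_PER_SET];
        int      count;
    };

    struct node {
        struct key_set sets[SETS_PER_NODE];
        int            nr_sets;   /* sets[0..nr_sets-2] are already on disk */
    };

    /*
     * Insert only into the last, still in-memory set, keeping just that set
     * sorted; the older sets stay exactly as they were written to the SSD.
     */
    void node_insert(struct node *n, uint64_t key)
    {
        struct key_set *s = &n->sets[n->nr_sets - 1];
        int i = s->count++;

        while (i > 0 && s->keys[i - 1] > key) {
            s->keys[i] = s->keys[i - 1];
            i--;
        }
        s->keys[i] = key;
    }

    /*
     * A lookup has to consult every set, newest first, because each one is
     * sorted independently; garbage collection later merges them into one.
     */
    int node_contains(const struct node *n, uint64_t key)
    {
        for (int s = n->nr_sets - 1; s >= 0; s--)
            for (int i = 0; i < n->sets[s].count; i++)
                if (n->sets[s].keys[i] == key)
                    return 1;
        return 0;
    }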
State of the code
Currently, the code is looking fairly stable; it's survived overnight torture tests. Production is still a ways away, and there are some corner cases and IO error handling to flesh out, but more testing would be very welcome at this point.
There's a long list of planned features:
IO tracking: By keeping a hash of the most recent IOs, it's possible to detect sequential IO and bypass the cache - large file copies, backups, and RAID resyncs will all go straight to the backing device. This one's mostly implemented; a rough sketch of the idea appears after this list of features.
Write behind caching: Synchronous writes are becoming more and more of a problem for many workloads, but with bcache random writes become sequential writes. If you've got the ability to buffer 50 or 100GB of writes, many might never hit your RAID array before being rewritten.
The initial write behind caching implementation isn't far off, but at first it'll work by flushing out new btree pages to disk quicker, before they fill up - journaling won't be required because the only metadata that must be written out in order is a single index. Since we have garbage collection, we can mark buckets as in-use before we use them, and leaking free space is a non-issue. (This is analogous to soft updates - only much more practical.) However, journaling would still be advantageous: new keys could be flushed out sequentially, letting synchronous writes complete quickly, while updates to the btree happen only as pages fill up rather than as many smaller writes. Since bcache's btree is very flat, this won't be much of an issue for most workloads, but it should still be worthwhile.
Multiple cache devices have been planned from the start, and mostly implemented. Suppose you had multiple SSDs to use - you could stripe them, but then you have no redundancy, which is a problem for writeback caching. Or you could mirror them, but then you're pointlessly duplicating data that's present elsewhere. Bcache will be able to mirror only the dirty data, and then drop one of the copies when it's flushed out.
Checksumming is a ways off, but definitely planned; it'll keep checksums of all the cached data in the btree, analogous to what Btrfs does. If a checksum doesn't match, that data can be simply tossed, the error logged, and the data read from the backing device or redundant copy.
There's also a lot of room for experimentation and potential improvement in the various heuristics. Right now the cache functions in a least-recently-used (LRU) mode, but it's flexible enough to allow for other schemes. Potentially, we can retain data based on how much real time it saves the backing device, calculated from both the seek time and bandwidth.
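Returning to the IO tracking item above, a sequential-IO detector along those lines might look something like the following sketch. The table size, cutoff, and names are invented, and a real implementation would hash into the table rather than scan it; the point is only to show how a little per-stream state lets large sequential runs bypass the cache.

    /*
     * Rough sketch of sequential-IO detection via a small table of recent
     * requests; thresholds, sizes, and names are invented for illustration.
     */
    #include <stdbool.h>
    #include <stdint.h>

    #define RECENT_STREAMS     16
    #define SEQUENTIAL_CUTOFF  (4 * 1024 * 1024 / 512)  /* 4 MB, in sectors */

    struct io_stream {
        uint64_t next_sector;   /* where the next request would start if sequential */
        uint64_t run_length;    /* sectors seen in this run so far */
    };

    static struct io_stream recent[RECENT_STREAMS];

    /*
     * Returns true when this request should skip the SSD entirely: it extends
     * a run that is already long enough to look like a large copy or resync.
     */
    bool bypass_cache(uint64_t sector, uint32_t nr_sectors)
    {
        for (int i = 0; i < RECENT_STREAMS; i++) {
            if (recent[i].next_sector == sector) {
                recent[i].run_length += nr_sectors;
                recent[i].next_sector = sector + nr_sectors;
                return recent[i].run_length >= SEQUENTIAL_CUTOFF;
            }
        }

        /*
         * Not a continuation of anything we remember: start tracking a new
         * stream in an arbitrary slot (a real implementation would hash).
         */
        static unsigned next_slot;
        struct io_stream *s = &recent[next_slot++ % RECENT_STREAMS];
        s->next_sector = sector + nr_sectors;
        s->run_length  = nr_sectors;
        return false;
    }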
Sample performance numbers
Of course, performance is the only reason to use bcache, so benchmarks matter. Unfortunately, there's still an odd bug affecting buffered IO, so the current benchmarks don't yet fully reflect bcache's potential; they are more a measure of current progress. Bonnie isn't particularly indicative of real world performance, but has the advantage of familiarity and being easy to interpret; here is the bonnie output:
Uncached: SATA 2 TB Western Digital Green hard drive
    Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
    Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
    Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
    utumno          16G   672  91 68156   7 36398   4  2837  98 102864   5 269.3   2
    Latency             14400us    2014ms   12486ms   18666us     549ms     460ms
And now cached with a 64 GB Corsair Nova:
    Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
    Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
    Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
    utumno          16G   536  92 70825   7 53352   7  2785  99 181433  11  1756  15
    Latency             14773us    1826ms    3153ms    3918us    2212us   12480us
In these numbers, the per-character columns are mostly irrelevant for our purposes, as they're affected by other parts of the kernel. The write and rewrite numbers are only interesting in that they don't go down, since bcache isn't doing write behind caching yet. The sequential input is reading data bonnie previously wrote, and thus should all be coming from the SSD. That's where bcache is lacking; the SSD is capable of about 235 MB/sec. The random IO numbers are actually about 90% reads, 10% writes of 4k each; without write behind caching bonnie is actually bottlenecked on the writes hitting the spinning metal disk, and bcache isn't that far off from the theoretical maximum.
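Working the numbers from the two runs: sequential block input goes from 102864 K/sec to 181433 K/sec, an improvement of roughly 1.8x but still well short of the roughly 235 MB/sec the SSD can deliver on its own, while the random seek rate goes from 269.3 to 1756 seeks per second, about 6.5 times what the bare disk manages.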
For more information
The bcache wiki holds more details about the software, more formal benchmark numbers, and sample commands for getting started.
The git repository for the kernel code is available at git://evilpiepirate.org/~kent/linux-bcache.git. The userspace tools are in a separate repository: git://evilpiepirate.org/~kent/bcache-tools.git. Both are viewable with a web browser at the gitweb site.
Patches and updates
Kernel trees
Architecture-specific
Core kernel code
Development tools
Device drivers
Filesystems and block I/O
Memory management
Networking
Virtualization and containers
Miscellaneous
Page editor: Jake Edge
Distributions
The Hurd: GNU's quest for the perfect kernel
When Richard Stallman founded the GNU project in 1983, he started building a completely free Unix-like operating system bit by bit. Over the years, the GNU system got a compiler (GCC), a C library (glibc), a build system (make), a shell (Bash), an editor (Emacs), and so on. However, it wasn't until May 1991 that the project announced it would start working on a kernel, the last missing piece of its free operating system: the Hurd (note the definite article in the name, and that "Hurd" is the phonetic equivalent of the English word "herd").
By then, Linus Torvalds had already started working on his Linux kernel, which he would announce in August of that year. In his now famous debate with Andrew Tanenbaum on the newsgroup comp.os.minix, he wrote (in January 1992):
Ted Ts'o, one of the earliest Linux developers, made a similar remark:
So, with all due respect to the project's mission and philosophy, we should be happy that the Hurd wasn't ready in 1991, because in 2010 it still isn't (at least for production use). Reading the history of the Hurd (for example in the article that The H published in late June), a comparison to the everlasting development of the video game Duke Nukem Forever is not completely far-fetched. A release has been promised several times, but the schedule has slipped even more often, and the architecture has been completely redone several times.
No vaporware
So what is the state of the Hurd? Is it vaporware, like Duke Nukem Forever? Fortunately not: the code exists, there is still work going on (for instance as part of Google Summer of Code), and there are even some relatively functional Hurd distributions. Let's look first at the code and the current architecture, and then at the Hurd distributions.
The current architecture of the Hurd is a set of "servers" running on top of the GNU Mach microkernel. These servers implement file systems, network protocols, authentication, processes, terminals, and other features that traditional monolithic kernels implement in the kernel itself. The Hurd servers implement clear protocols that formalize how the different components of the Hurd kernel interact, which is designed to reduce mutual trust between components. To transfer information to each other, the servers use Mach's interprocess communication (IPC) system. Collectively, these loosely coupled servers implement the POSIX API, with each individual server implementing its part of the specification.
Both the logo and the name of the Hurd project reflect this architecture. The logo is called the Hurd boxes, and it shows some boxes (the Hurd kernel's servers) connected by arrows (Mach's IPC messages). The name is, in hacker culture tradition, a recursive acronym. Thomas (then Michael) Bushnell, the primary architect of the Hurd, explained the meaning like this:
For readers who want a technical overview of the Hurd's architecture and some critiques of the decisions made, your author points to the paper A Critique of the GNU Hurd Multi-server Operating System [PDF] by Neal Walfield and Marcus Brinkmann, published in ACM SIGOPS Operating Systems Review in July 2007. The two have also written a forward-looking proposal for how a future GNU Hurd might be architected: Improving Usability via Access Decomposition and Policy Refinement.
To fix some of these shortcomings of the Hurd while preserving the main design goals, there's already a Hurd-ng project in the works, although it is mostly research oriented. For a time there was an effort, led by Walfield and Brinkmann, to port the Hurd from Mach to the L4 microkernel family, but it never reached a releasable state. The two also considered the Coyotos microkernel, but eventually Walfield moved on to working on a new microkernel Viengoos, about which he published two academic papers.
Hurd distributions
Over the years, a handful of Hurd-based distributions have been born, all of them spin-offs of an existing Linux distribution. In March 2003, Jon Portnoy started a Gentoo GNU Hurd system, but it was abandoned at the end of 2006. There was also a Hurd live CD at superunprivileged.org, but it appears to have stalled in December 2007. Luckily, two Hurd distributions are still active projects: Debian GNU/Hurd and Arch Hurd.
Debian GNU/Hurd is an active subproject of Debian, in development since 1998, but it is not officially released yet ("and won't be for some time", according to the project's home page). Debian GNU/Hurd is calling for help in the development of the distribution, which amounts mostly to porting software. About 67 percent of the Debian packages have been ported to the Hurd kernel, and the Hurd project has a compilation of common porting problems and their solutions. It is worth pointing out that the GNU Hurd developers rely on Debian GNU/Hurd and endorse it on their web site. The last official release of the Hurd without the Debian parts was 0.2 in 1997. On the Hurd project's status page they explain:
People who want to try Debian GNU/Hurd can install the most recent version, called Debian GNU/Hurd L1, or run the live CD that Justus Winter created. There's also a Qemu image made by Jose Luis Alarcon Sanchez (and made smaller by Samuel Thibault), which makes it easy to test Debian GNU/Hurd virtualized with KVM. All these versions work on x86 and x86_64 (in 32-bit mode) PCs. There's also some information about the first steps after installation, such as getting networking running and configuring X, and the GNU/Hurd user's guide is focused on the Debian flavor.
Another Hurd-based distribution, started at the beginning of 2010, is Arch Hurd (with a logo that is a nice blend of the Arch Linux and Hurd logos). This project attempts to bring the spirit of Arch Linux (a minimalist and bleeding-edge operating system) to the Hurd kernel. With just four developers, the project has managed to put together a basic Arch Hurd system in a couple of months.
Recently a live CD and installer have been written and, as expected from an Arch project, there is extensive documentation about installing Arch Hurd using the live CD and installing from within another distribution. Xorg 1.8 has already been ported and, while there are not many graphical programs yet, the system has the Openbox window manager and the XTerm terminal emulator.
Because the Hurd runs on Mach, hardware compatibility for these distributions depends on what Mach supports. Currently, GNU Mach uses a maximum of 1 GiB of RAM. Video drivers are the ones from X.org, but drivers that depend on special Linux kernel interfaces do not work, which in practice leaves only the VESA driver. Sound cards and USB are not supported yet.
Google Summer of Code
The Hurd has been active in the Google Summer of Code project for some years now, sometimes as part of the GNU project, sometimes on its own. This year, the Hurd is participating with three projects under the GNU umbrella.
Jeremie Koenig, mentored by Samuel Thibault, is working on adapting the Debian Installer to produce up-to-date Debian GNU/Hurd installation images. Currently Debian GNU/Hurd is installed either using outdated CD images, or from an existing Debian GNU/Linux system using the crosshurd package.
Emilio Pozuelo Monfort, mentored by Carl Fredrik Hammar, is fixing compatibility problems exposed by projects' test suites when executed on the Hurd. This doesn't sound like an exciting task, but it is needed: test suites for programs like Perl, Python, GNU coreutils, and glib regularly fail on the Hurd because of shortcomings in the Hurd's implementation of system interfaces.
A third GSoC project, mentored by Sergio López, is being done by Karim Allah Ahmed: his goal is to bring the virtual memory management in The Hurd/Mach closer to that of other mainstream kernels, like Linux, FreeBSD, and XNU (the kernel of Mac OS X). He will look at those implementations to see if he can come up with improvements and optimizations for the Hurd.
Conclusion
All in all, the easiest way to run a Hurd system today is to install Debian GNU/Hurd. The system runs virtualized in Xen or Qemu, or on a physical system, provided that the hardware is supported by Mach. The lack of sound and USB support obviously means that it is of limited use for desktop users, but on the Hurd status page there is a testimonial from someone who has been using the Hurd for most of his everyday work for two years, so it is possible if your demands are low. A cursory look with apt-cache search by your author on his Debian GNU/Hurd installation in KVM confirmed that there's actually a lot of software available, including graphical programs.
Installing the Hurd is still not as simple as in the Linux world, but there are projects working on that. Although the Hurd is clearly not ready yet for general use as a desktop system, for people who are interested in the GNU operating system, now is a good time to give it a try.
New Releases
openSUSE 11.3 RC2 is now available
The second and final release candidate for openSUSE 11.3 is out. The final release is scheduled for July 15, 2010.
Red Hat Enterprise Linux 6 Beta 2 Now Available
Red Hat has announced the release of RHEL 6 beta2. "Customer and partner testing of Red Hat Enterprise Linux 6 Beta is in full swing, and we have been very pleased with the strong positive feedback that we have received from our testing community. We are on track to deliver a final product that we expect will meet customer needs for years to come. The first Beta was released in April, and incorporated a wide range of new and upgraded features. Today we have released Red Hat Enterprise Linux 6 Beta 2, which provides an updated installer, additional new technologies and resolutions to many of the issues that were reported in the initial Beta."
Ubuntu Maverick Alpha 2 released
Ubuntu's Maverick Meerkat Alpha 2 (10.10) is available for testing. This release includes the following editions: Ubuntu Desktop and Netbook, Ubuntu Server for UEC and EC2, Kubuntu Desktop and Netbook, Xubuntu, Ubuntu Studio, and Mythbuntu.
Distribution News
Debian GNU/Linux
bits from listmasters and [RFH] lists.d.o web archive
Martin Zobel-Helas has a few bits from the Debian listmasters. Topics include Request for help: web archives, List archive spam, Bug categorization, and How to help.
Fedora
Fedora Hall Monitors policy updated
The Fedora project has updated its "hall monitors" policy which empowers monitors to try to maintain a "be excellent to each other" atmosphere on the mailing lists. The biggest change seems to be that the monitors will no longer attempt to shut down list threads and will, instead, focus on specific problematic postings. "The Hall Monitors are not a solution to the problem of poisonous people, but rather, a method of addressing the symptom of the problem."
Fedora Board IRC Recap 2010-07-02
Click below for a recap of the July 2, 2010 meeting of the Fedora advisory board. Topics include FPL news, fedoracommunity.org, and Planet wiki page.
SUSE Linux and openSUSE
Maintenance for openSUSE 11.0 extended
The support period for openSUSE 11.0 has been extended by about a month in order to provide users with a smoother upgrade path to 11.3. The last updates for 11.0 are now scheduled for July 21, 2010. openSUSE 11.3 is scheduled for a final release on July 14, 2010.
openSUSE India IRC Channel
There is now an IRC channel for openSUSE lovers in India.
Newsletters and articles of interest
Quote of the Week
Distribution newsletters
- DistroWatch Weekly, Issue 361 (July 5)
- Fedora Weekly News Issue 232 (July 1)
- openSUSE Weekly News, Issue 130 (July 3)
- Ubuntu Weekly Newsletter, Issue 200 (July 3)
Duffy: Fedora Board Meeting, 2 Jul 2010
Máirín Duffy has written a summary of the July 2 meeting of the Fedora Board. "There is currently an ongoing discussion on the advisory-board list about having a fedoracommunity.org subdomain point to a fedoraproject.org/wiki page. Paul does not think it is the best idea because the point behind the fedoracommunity.org domain was to enable truly community-run sites that run outside of fedoraproject.org's policies. Not many Board members have disagreed on the thread so he wanted to bring up the topic to make sure it was being considered."
Mandriva's Future Rosy or Rose Colored? (Linux Journal)
Susan Linton takes a look at the delayed release of Mandriva Linux 2010 Spring. "Mandriva has been providing enterprise and educational systems and support as well as selling boxed sets and USB drives of its desktop system, but they've always had difficulty competing with other Linux companies such as Red Hat and Novell in terms of revenue and the ability to land long-term contracts. Experts question Mandriva's ability to construct a profitable business model and users hope a freely available version will continue to be a part of the business model. Speculation on Mandriva forums has them abandoning the PowerPack and splitting their offerings into enterprise and community versions, much like Redhat and Fedora or Novell and openSUSE." The final release is currently scheduled for July 8, 2010.
A Mint system based on Debian
The Linux Mint team is working on a Linux Mint desktop based on Debian testing. "The idea of a Linux Mint desktop based on top of Debian Testing is quite seducing. It's much faster than Ubuntu and the current Linux Mint desktops, it uses less resources, and it opens the door for a rolling distribution, with a continuous flow of updates and no jumps from one release to another. It's something we've always been tempted to do. Needless to say, whether it's been because of our lack of communication on that topic or not, this has been a source of numerous rumors within the community."
Final openSUSE 11.3 release candidate arrives (The H)
The H looks at the second and final release candidate of openSUSE 11.3. "The 11.3 release of the openSUSE Linux distribution is based on the 2.6.34 Linux kernel and features support for the Btrfs file system in the installer. Support for NVIDIA graphics cards has been improved with the integration of the Nouveau driver. Other changes include various KDE, GNOME and LXDE desktop updates and changes, and better support for netbooks."
Red Hat Enterprise Linux 6 Hits Beta 2, Targeting Year End Release (Datamation)
Datamation covers the release of RHEL 6 Beta 2 and suggests that the final RHEL 6 release will happen before the end of the year. "In addition to the enhancements to the kernel and installer, Red Hat is also restructuring the beta for some specific use-cases. There is now a RHEL 6 Server release as well as a RHEL 6 Workstation release. Additionally, for the server release, there are now Clustered Storage, Large File System, High Availability and Load Balance versions."
Sabayon Linux 5.3 (Linux Journal)
Susan Linton reviews Sabayon Linux 5.3, a Gentoo-based distribution. "To end users Sabayon Linux is a fully functioning, complete, and easy-to-use distribution. It ships with the latest in desktops and software with lots of tools for setup, maintenance, and configuration. In fact, it comes with lots and lots of software, including a nice selection of quality games. Users can choose from GNOME or KDE editions compiled for x86 or x86_64 systems. It also includes 3D acceleration and proprietary Wi-Fi drivers as well as codecs and plugins for full multimedia enjoyment."
Stuart Winter: I learnt more about Linux in two weeks of using Slackware than in two years of using Red Hat (The Slack World)
The Slack World talks with Stuart Winter, a Slackware developer and maintainer of the ARMedslack project. "I started using Red Hat Linux in 1996 for a few years, but had a shell account on a friend's Slackware box. I started reading the config files and the rc scripts and saw how well they were commented and how clean the system looked. All of my systems were ARM desktops back then, apart from the one PC with Red Hat on it. So I got hold of a 486 and installed Slackware v3.5. I learnt more about Linux in two weeks of using Slackware than two years of using Red Hat - purely because everything had to be done by hand."
Page editor: Rebecca Sobol
Development
"Open" phone call routing with OpenVBX
Silicon Valley startup Twilio released an open source phone-routing application framework named OpenVBX last month, branded as "the web-based, open source phone system for business." As is often the case with web services, though, the word "open" in the product name does not necessarily mean customers have access to all of the code.
Twilio itself runs a commercial phone routing service, currently supporting inbound and outbound U.S. phone calls and SMS messages and outbound-only international calls. Unlike consumer-oriented offerings such as Google Voice (which offers a turn-key application that routes incoming calls and text messages, among other things), Twilio's service has always targeted developers. The company's service offers an API, to which developers would write their own PHP-and-MySQL applications to dictate call routing, call logging, voicemail transcription, or other functionality.
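The division of labor is worth spelling out: when a call comes in, Twilio's cloud requests a URL belonging to the developer's application, and the application answers with TwiML instructions telling the service how to handle the call. Below is a minimal, hypothetical sketch of such an application, written in Python rather than the PHP the article mentions; the port, greeting, and forwarding number are placeholders, and only the <Response>, <Say>, and <Dial> verbs are taken from Twilio's public TwiML documentation.

```python
# Hypothetical sketch of a tiny Twilio-style call-routing application.
# Twilio requests this URL when a call arrives; the reply is TwiML telling
# the service what to do with the call. Python 2.x standard library only.
from BaseHTTPServer import HTTPServer, BaseHTTPRequestHandler

TWIML = """<?xml version="1.0" encoding="UTF-8"?>
<Response>
  <Say>Thanks for calling. Connecting you now.</Say>
  <Dial>+15551234567</Dial>
</Response>
"""

class CallHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self._reply()

    def do_POST(self):
        length = int(self.headers.getheader("content-length", "0") or "0")
        self.rfile.read(length)          # consume the posted call parameters
        self._reply()

    def _reply(self):
        self.send_response(200)
        self.send_header("Content-Type", "application/xml")
        self.end_headers()
        self.wfile.write(TWIML)

if __name__ == "__main__":
    HTTPServer(("", 8080), CallHandler).serve_forever()
```

OpenVBX's drag-and-drop flows, described below, are essentially a friendlier way of producing this kind of response without writing it by hand.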
Up until the launch of OpenVBX, Twilio users were building their own applications from scratch, with some reusable code-sharing among the community, and using Twilio's own collection of prefabricated application components called "Twimlets." Some developers even wrote full-fledged "virtual branch exchange" applications on top of the Twilio service, such as OtherNum.
OpenVBX is essentially one of those virtual branch exchange applications (hence the VBX) written by Twilio itself, and released under the Mozilla Public License. Users install OpenVBX on their own server, define call routing plans (called "flows" in Twilio's documentation), configure user accounts, write custom scripts and plugins, and even set outbound phone numbers. All of the actual call routing, however, is done on Twilio's cloud service, which charges per-minute and per-message rates and leases local and toll-free U.S. phone numbers.
Calling all servers
For those simply wanting to set up a voice-response system or basic business switchboard, OpenVBX is impressive both for its straightforward setup and its ease of management. The OpenVBX server requires nothing but a basic LAMP stack and a Twilio account to install and configure. Interested users can download the source code, or install it as a one-click-installer option if using the Dreamhost hosting service. For those not ready to pony up for a Twilio-provided phone number, the company offers a free "sandbox" number complete with $30 in calling credit. Apart from the sandbox, however, there is no way to test or deploy OpenVBX other than paying to lease a phone number from the company.
Once installed, administrators can use the OpenVBX dashboard to add user accounts and phones (both mobile and land line). Each user account has an associated inbox, which stores SMS messages and voicemails (in audio and transcribed form), and can initiate calls and send text messages through the web interface.
This level of service makes OpenVBX akin to a multi-user, in-house alternative to Google Voice. The real power comes from the ability to do more complex call response. Administrators can lease multiple Twilio-provided numbers and configure the behavior of each one individually, by connecting it to a particular "call flow" and/or "SMS flow."
[Screenshot: the OpenVBX call flow editor (https://static.lwn.net/images/2010/openvbx-flow-editor-sm.png)]
Flows are assembled in the browser with a drag-and-drop visual editor. The flow editor walks the administrator through the process, starting with the question "When a call begins, what should we do?" — which can be answered by dragging one of the available applets onto the empty outline box beneath the question. Playing a message, recording a voicemail, routing the call to one of the outbound numbers, and offering an interactive menu are among the basic applets. Each applet has its own simple configuration, allowing voice prompts to the caller via computer-synthesized voice, pre-recorded message, or digital audio file, and more.
The built-in applets offer enough functionality to route incoming calls based on caller responses like a traditional phone tree, plus a few modern niceties such as initiating conference calls, or sending an SMS to notify users of a new call.
But the really interesting possibilities come through third-party OpenVBX plugins, which can not only enable new routing functionality (such as routing based on keyword matching against incoming SMS messages), but can tie the OpenVBX system into any other web application. Plugins already exist to check whether the caller is in the Highrise customer relationship management (CRM) system, to check for the caller's location via Foursquare, and to open a ticket in the Zendesk customer service system, among others. Installing a plugin is as simple as unzipping the downloaded file to the OpenVBX /plugins/ directory and configuring it through the web administration panel. Third-party plugins are linked to, but not hosted by, Twilio, and may have their own licensing terms.
Performance-wise, OpenVBX delivers as advertised. In my own test, it introduces no noticeable lag to calls, processes scripts quickly, and provides good voice quality. The synthesized text-to-speech voice leaves something to be desired, but what it lacks in personality it makes up for in clarity.
Twilio provides detailed documentation for writing OpenVBX plugins, including message handling, the user interface toolkit, and the APIs for data, users, and groups. For more detailed help with the Twilio API that connects an OpenVBX instance to the Twilio service, there is additional documentation at twilio.com.
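For a rough sense of what the client side of that API looks like, here is a hedged sketch of placing an outbound call with nothing but the Python standard library. The account SID, auth token, phone numbers, and callback URL are placeholders, and the endpoint layout follows Twilio's publicly documented 2010-04-01 REST API rather than anything specific to OpenVBX.

```python
# Hypothetical sketch: ask Twilio's REST API to place an outbound call.
# The credentials, numbers, and URL below are placeholders.
import base64
import urllib
import urllib2

ACCOUNT_SID = "ACxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"   # placeholder account SID
AUTH_TOKEN = "your_auth_token_here"                   # placeholder auth token

def place_call(from_number, to_number, twiml_url):
    # POSTing to the Calls resource asks the service to dial out; Twilio then
    # fetches twiml_url for instructions, as in the previous sketch.
    url = "https://api.twilio.com/2010-04-01/Accounts/%s/Calls" % ACCOUNT_SID
    data = urllib.urlencode({
        "From": from_number,   # a number leased from Twilio
        "To": to_number,       # the person being called
        "Url": twiml_url,      # where Twilio fetches the TwiML call flow
    })
    request = urllib2.Request(url, data)
    credentials = base64.b64encode("%s:%s" % (ACCOUNT_SID, AUTH_TOKEN))
    request.add_header("Authorization", "Basic " + credentials)
    return urllib2.urlopen(request).read()

if __name__ == "__main__":
    print place_call("+15005550000", "+15551234567",
                     "http://example.com/callflow")
```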
Nice enough, but open enough?
Considering how easy OpenVBX is to use, it is unfortunate that it builds on a proprietary API and a closed flow description language (TwiML). Strictly speaking, it would be inaccurate to call OpenVBX an "open core" product, since the open layer is on the outside. Nor would it really be accurate to call it the opposite (open veneer?), because the Twilio service itself is a load-balanced farm of virtual servers running the open source Asterisk switching software on Amazon's EC2 cloud.
Clearly a cloud-based Asterisk or FreeSWITCH instance is something that a lot of customers want; Twilio has no shortage of competitors offering a similar service that frees the buyer from hardware requirements and allows per-minute billing. In addition, the layer that Twilio places in between Asterisk and OpenVBX simplifies configuration — but there is no reason for it to be proprietary. TwiML, for example, duplicates the functionality of the W3C's open VoiceXML standard.
Moreover, Twilio seems to discourage would-be platform developers from porting OpenVBX to a different backend. The FAQ page says in response to the question "Can I replace Twilio with X as the backend for OpenVBX?":
Elsewhere on the page, the company says it may accept patches into OpenVBX's trunk after evaluation by the project. A more open attitude would probably attract more developers to the project, because a friendly front-end is something that all Asterisk users, not just Twilio, would benefit from.
The race is on
In fact, keeping its middleware stack proprietary does not guarantee Twilio success. Already, cloud-phone-service competitor Teleku announced that it would support the full Twilio API and TwiML alongside its other services, hoping to attract OpenVBX users — something that it could not do before OpenVBX's release due to incomplete public documentation. Twilio provides extensive API documentation, bundled with a more liberal LICENSE document than is applied to the OpenVBX code (requiring only copyright notice reproduction), making such interoperability trivial to implement. There are also several independent developers building on the Twilio API who were put out of business by the release of OpenVBX itself and are now looking for something else to occupy their time — including the creator of OtherNum, mentioned above.
In addition, other Asterisk front-ends are available that can tie in to cloud-based call-routing services, such as OpenVoice, which is designed to work with the Tropo cloud service. Since the OpenVBX source is available, too, Twilio should probably expect ports to other backends, or complete forks, simply because interest in phone routing is on the rise. Despite what Twilio does offer through its cloud service, there are several big gaps: no live voice recognition, no integration with instant messaging services, and — most notably — no SIP support. Given those gaps, acting as though a fork will never happen is a formula for being left behind.
All things considered, OpenVBX is a pleasure to set up and run. Installation and administration are simple, the feature set is powerful enough to deploy in the field, and the quality is good. The only downside is the proprietary middle layer between an excellent open source telephony server and an excellent telephony front-end. Hopefully Twilio will see the strategic value of opening the entire stack in light of the highly competitive — and growing — voice application space.
Brief items
Quote of the week
GNOME 2.31.4 released
A development snapshot of the GNOME desktop is available for testing. "This is the first release featuring a number of modules that have switched from GTK+ 2.x to 3, and it also brings a surprise standalone gdk-pixbuf."
GNU Parallel makes parallel processing in shell easy
GNU Parallel has been released. "It gives shell users a generic tool for running jobs in parallel - either on the local computer or on a list of computers using SSH."
Mercurial 1.6 released
Version 1.6 of the Mercurial distributed version control system has been released. The new features and bug fixes from the release can be found on the What's New page. "The two most important new features of this release are: * pushable bookmarks. This lets you synchronize bookmarks between repositories using push and pull. * a powerful new revision query language." Click below for the full announcement.
PyTables 2.2 released: entering the multi-core age
The final release of PyTables 2.2 has been announced. "PyTables is a library for managing hierarchical datasets and designed to efficiently cope with extremely large amounts of data with support for full 64-bit file addressing. PyTables runs on top of the HDF5 library and NumPy package for achieving maximum throughput and convenient use." Click below for a list of new features.
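For readers who have not used it, here is a minimal sketch of what working with PyTables looks like, using the camelCase method names of the 2.x API; the file name and data are purely illustrative.

```python
# Illustrative PyTables 2.x usage; file name and data are made up.
import numpy
import tables

# Write a large array into an HDF5 file under a named group.
h5file = tables.openFile("example.h5", mode="w", title="Demo file")
group = h5file.createGroup("/", "measurements", "Sensor readings")
h5file.createArray(group, "pressure", numpy.arange(1000000.0),
                   "Pressure samples")
h5file.close()

# Read it back; slicing only pulls the requested rows from disk.
h5file = tables.openFile("example.h5", mode="r")
pressure = h5file.root.measurements.pressure
print pressure[:10]
h5file.close()
```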
Python 2.7 released
The Python development team has announced the release of Python 2.7. "Python 2.7 will be the last major version in the 2.x series. However, it will also have an extended period of bugfix maintenance." This release includes many features that were introduced in Python 3.1.
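Several of those backported features can be shown in a few lines; set literals, set and dictionary comprehensions, multiple context managers in one with statement, collections.OrderedDict, and argparse are all documented 2.7 additions.

```python
# A quick tour of features Python 2.7 picked up from the 3.x line.
from collections import OrderedDict   # new in 2.7
import argparse                        # new in 2.7, successor to optparse

# Set literals plus set and dictionary comprehensions.
primes = {2, 3, 5, 7}
squares = {n: n * n for n in primes}

# OrderedDict remembers the order in which keys were inserted.
settings = OrderedDict([("host", "localhost"), ("port", 8080)])

# argparse handles command-line options; optparse is now deprecated.
parser = argparse.ArgumentParser(description="Python 2.7 feature demo")
parser.add_argument("--shout", action="store_true")
args = parser.parse_args([])           # empty argument list for this demo

# Multiple context managers in a single 'with' statement.
with open("in.txt", "w") as f:
    f.write("python 2.7\n")
with open("in.txt") as src, open("out.txt", "w") as dst:
    text = src.read()
    dst.write(text.upper() if args.shout else text)

print squares, list(settings)
```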
Rhythmbox 0.13.0 "Albatross" released
Rhythmbox 0.13.0 "Albatross" is now available. "This is the start of a new series of releases [focused] on adding new features and addressing the latest round of platform updates."
Shotwell 0.6.1 released
The Shotwell 0.6.1 release is out. Shotwell is a photo management application; LWN reviewed it in June. The biggest changes include support for raw images, the ability to zoom images, and an "open in external application" feature.
Twisted 10.1.0 released
Twisted Matrix Laboratories has announced the release of Twisted 10.1.0. Highlights include support for canceling Deferreds, a new "endpoint" interface, inotify support on Linux, and support for transferring timestamps over AMP.
xorg-server 1.8.2
The stable release of xorg-server 1.8.2 is available. "This is the last regular 1.8 release unless someone else wants to take over as RM. Until that happens, the server-1.8-branch is open. If you have patches that you think are necessary for the 1.8 series, please push them there."
Newsletters and articles
Development newsletters from the last week
- Tcl-URL! (July 2)
- Caml Weekly News (July 7)
Tell Your Story with Celtx (Linux.com)
Joe "Zonker" Brockmeier looks at Celtx, which is a tool for doing media pre-production, over at Linux.com. "In short, Celtx is the tool you want if you're writing a book, script, play, or other media on Linux. It's a very specific tool that fits those tasks very well. It can also be bent slightly for other forms of media. Case in point, I've been toying with Celtx's storyboard template as a way to prep for talks. It's a lot easier to break down a talk into sections, and is a better way to organize the flow of a talk. At least in my opinion. There's still some duplication of work in creating the slides later, but it's not too bad."
Better multimedia support for OpenOffice.org on Unix systems
OpenOffice.org has announced a switch to GStreamer for multimedia support. "From a technical point of view, the new GStreamer backend will be enabled by default on Unix systems, but can be disabled via configure with the --disable-gstreamer switch. In this case or in cases where no appropriate library can be found during runtime, a fallback to the old JMF implementation will happen."
Overbite Project brings Gopher protocol to Android (ars technica)
Ars technica looks at the Overbite Project. "The Overbite Project is an open-source effort to produce browser plugins and client applications that enable support for Gopher, an early network protocol that preceded HTML and the contemporary World Wide Web. The lead developer behind the project is retrocomputing enthusiast Cameron Kaiser, one of the few remaining champions of gopherspace. His latest undertaking is a mobile Gopher client for Google's Android operating system."
Zimmerman: We've packaged all of the free software... what now?
Ubuntu CTO Matt Zimmerman ponders package managers on his blog. He is concerned that the success of the package manager approach has led to using that model for many kinds of deployments (data, embedded, client/server, and so on) that might best be solved in other ways. "No single package management framework is flexible enough to accommodate all of the needs we have today. Even more importantly, a generic solution won't account for the needs we will have tomorrow. I propose that in order to move forward, we must make it possible to solve packaging problems separately, rather than attempting to solve them all within a single system."
Peer into Firefox's future in latest beta (CNet)
CNet looks at the first beta for Firefox 4. "Firefox 4 beta also sees significant improvements to Firefox under the hood. There's a new HTML5 parser, and support for more HTML5 form controls than in previously released Firefox betas or the nightly builds. The HTML5 WebM video format for HD is natively supported, and APIs have been improved for Websockets, HTML History, and JS-ctypes, a foreign function interface for extensions. CSS transitions are partially supported too."
Page editor: Rebecca Sobol
Announcements
Non-Commercial announcements
Open Clip Art Library 2.3
The Open Clip Art Library team has announced the release of Open Clip Art Library 2.3. "The latest version, including greater than 32,300 completely scalable vector graphics, serves as a platform for more than 1,500 artists to collaborate and share work."
Commercial announcements
PostgreSQL nears parity with Oracle 11g
2ndQuadrant has issued a press release on the streaming replication and Hot Standby features in the upcoming release of PostgreSQL 9.0. "Simon Riggs, CTO and Founder of 2ndQuadrant, said the implementation of streaming replication and Hot Standby in the core of the database, will encourage companies to migrate from expensive proprietary databases, such as Oracle."
Red Hat Reports First Quarter Results
Red Hat has announced financial results for its fiscal year 2011 first quarter ended May 31, 2010. ""We had a strong start to our fiscal year with 20% organic revenue growth and 28% non-GAAP operating income growth," stated Jim Whitehurst, President and Chief Executive Officer of Red Hat. "We executed well and achieved a significant increase in the number of large deals booked year-over- year, including several with an initial consulting component which we believe is a positive indicator of new project spending and future subscription billings.""
Articles of interest
Cornelius and Vincent Join Different Games (KDE.News)
KDE.News has a joint interview with KDE e.V. President Cornelius Schumacher and GNOME Foundation President Vincent Untz. In it, they discuss Schumacher recently becoming a "Friend of GNOME" and Untz's reciprocal move to become a "KDE Supporting Member". "I don't remember who exactly brought it up, but somebody said that it would be cool to have somebody from GNOME join us. We are working together increasingly well after all and there are lots of good relationships between the two communities. GNOME and KDE share so much in terms of goals, philosophy, and technology, that it makes a lot of sense to stay together, and address the challenges for free software on the client. [...] So after a carefully crafted scientific selection process to find the coolest guy in GNOME, we came up with the idea to ask Vincent to join the game, and the other way around for me to become a friend of GNOME."
Linux at HP: A Decade of Leadership (OS News)
OS News covers a talk by Bdale Garbee, HP Open Source & Linux Chief Technologist. "According to Garbee, HP has a somewhat unique approach to open source. HP participates not just by funding projects but truly getting their hands dirty as a direct open source member, collaboratively supporting existing community values and behaviors and developing even more robust enterprise capabilities. HP is especially unique in the open source field in that they have only ever used existing licences: HP has never created its own license. In addition, HP was an early contender in fighting license proliferation, or when pieces of software cannot be combined because their licences are incompatible. Finally, HP combines its efforts with Linux distributions in order to bring the most effective solutions for its customers."
Russell: Superfreakonomics; Superplug for Intellectual Ventures
Rusty Russell looks at the last chapter of Superfreakonomics on his blog. That chapter is a glowing overview of the "business model" of the huge and well-funded patent troll, Intellectual Ventures. "Now, I don't really care if one company leeches off the others. But if they want to tax software, they have to attack free software otherwise people will switch to avoid their patent licensing costs. And if you don't believe some useful pieces of free software could be effectively banned due to patent violations, you don't think on the same scale as these guys."
Monty appeals Oracle's Sun merger (The Register)
The Register reports that Michael 'Monty' Widenius has submitted an appeal against the European competition watchdog's decision to clear Oracle's takeover of Sun Microsystems earlier this year. "Michael 'Monty' Widenius, who sold his chunk of the MySQL database to Sun Microsystems in 2008 for $1bn, told various news outlets including The Register on Friday that he would make a full statement about the appeal once Oracle had responded to it."
New Books
Building the Realtime User Experience--New from O'Reilly
O'Reilly has released "Building the Realtime User Experience - Creating Immersive and Interactive Websites". The book "shows you how to build realtime user experiences by adding chat, streaming content, and including more features on your site one piece at a time, without making big changes to the existing infrastructure. You'll also learn how to serve realtime content beyond the browser."
Resources
CE Linux Forum Newsletter: June 2010
The June edition of the CE Linux Forum Newsletter covers ELC Europe Keynotes Announced, eLinux Wiki Editor Contest Award Winners, Japan Technical Jamboree #33, NHK (Japan Broadcasting Corporation) TV Program on Linux, and Busybox Enhancements.
FSFE Newsletter - July 2010
The Free Software Foundation Europe newsletter for July 2010 is out. "This edition covers Neelie Kroes' statement about Open Standards, the Free Software discussion in Saxony (Germany), and the relicensing of WebM to be GPL compatible, and asks you all to keep in touch with your politicians about Free Software issues."
Contests and Awards
Calling all Open Source innovators
The Demo Cup will take place on October 1, 2010 in Paris, France at the 2010 Open World Forum. It is a 'demo' competition, open to all Open Source and free software innovators. "Twelve companies, chosen from written applications, will be nominated and invited to give an eight-minute presentation of their work to the large audience expected to attend the Open World Forum, including decision-makers with a keen interest in Open Source, and a jury of experts in innovation (investors, Open Source integrators and experts)." Entries are due by July 15, 2010.
Upcoming Events
Linux.conf.au requests 2012 bid proposals
The Linux Australia council has announced that they are now accepting proposals for a venue for the 2012 edition of linux.conf.au (LCA). "If you are thinking of bidding, please get in touch with the Linux Australia council. Michael Carden is acting as our LCA2012 Bid liaison and will be happy to have a chat to you about what's involved and pass on some example bids and a sample budget."
Key Speakers Named for Magnolia-CMS Conference 2010
The key speakers for the Magnolia-CMS Conference have been named. The conference takes place in Basel, Switzerland, September 16-17, 2010.
Events: July 15, 2010 to September 13, 2010
The following event listing is taken from the LWN.net Calendar.
Date(s) | Event | Location
---|---|---
July 12–July 16 | Ottawa Linux Symposium | Ottawa, Canada
July 15–July 17 | FUDCon | Santiago, Chile
July 17–July 24 | EuroPython 2010: The European Python Conference | Birmingham, United Kingdom
July 17–July 18 | Community Leadership Summit 2010 | Portland, OR, USA
July 19–July 23 | O'Reilly Open Source Convention | Portland, Oregon, USA
July 21–July 24 | 11th International Free Software Forum | Porto Alegre, Brazil
July 22–July 23 | ArchCon 2010 | Toronto, Ontario, Canada
July 22–July 25 | Haxo-Green SummerCamp 2010 | Dudelange, Luxembourg
July 24–July 30 | Gnome Users And Developers European Conference | The Hague, The Netherlands
July 25–July 31 | Debian Camp @ DebConf10 | New York City, USA
July 31–August 1 | PyOhio | Columbus, Ohio, USA
August 1–August 7 | DebConf10 | New York, NY, USA
August 4–August 6 | YAPC::Europe 2010 - The Renaissance of Perl | Pisa, Italy
August 7–August 8 | Debian MiniConf in India | Pune, India
August 9–August 10 | KVM Forum 2010 | Boston, MA, USA
August 9 | Linux Security Summit 2010 | Boston, MA, USA
August 10–August 12 | LinuxCon | Boston, USA
August 13 | Debian Day Costa Rica | Desamparados, Costa Rica
August 14 | Summercamp 2010 | Ottawa, Canada
August 14–August 15 | Conference for Open Source Coders, Users and Promoters | Taipei, Taiwan
August 21–August 22 | Free and Open Source Software Conference | St. Augustin, Germany
August 23–August 27 | European DrupalCon | Copenhagen, Denmark
August 28 | PyTexas 2010 | Waco, TX, USA
August 31–September 3 | OOoCon 2010 | Budapest, Hungary
August 31–September 1 | LinuxCon Brazil 2010 | São Paulo, Brazil
September 6–September 9 | Free and Open Source Software for Geospatial Conference | Barcelona, Spain
September 7–September 9 | DjangoCon US 2010 | Portland, OR, USA
September 8–September 10 | CouchCamp: CouchDB summer camp | Petaluma, CA, United States
September 10–September 12 | Ohio Linux Fest | Columbus, Ohio, USA
September 11 | Open Tech 2010 | London, UK
If your event does not appear here, please tell us about it.
Event Reports
KDE Akademy 2010 conference: First videos and slides published (The H)
The H reports that the first videos and slides from Akademy are available. "The initial slides and videos include a video of the first 2010 keynote, entitled "MeeGo redefining the Linux desktop landscape", presented by Valtteri Halla, Director of MeeGo Software at Nokia, and a presentation by KDE software developer Aaron Seigo discussing the future of Free Software and the KDE community titled "Reaching For Greatness"."
Page editor: Rebecca Sobol