LWN.net Weekly Edition for June 23, 2011
One more hop for Kermit
Way back in the Good Old Days, when your editor was a VMS system administrator, the ARPAnet was small, slow, and extremely limited in access. While some of the cooler folks were building USENET around uucp-based links, Unix systems were still not readily available to most of us. But we were beginning to get personal computers, bulletin-board systems were being set up, and the fortunate among us could afford 1200 baud modems. The tool of choice for using those modems in those days was often a little program called "Kermit," which was freely downloadable even then. Your editor doesn't remember when he last used a modem, but he was still interested to see that this 30-year-old project is about to go through a final transition; it will, among other things, be free software at last.

For many systems in those days - especially systems without a government budget behind them - the "network interface" was the serial port. That's how we connected to The Computer at work; that's also how we got onto the early services which were available in those days. The first thing any RS232 networking user needed was a way to move keystrokes between one interface (where they sat) and the interface connected to the modem. Unix systems often came with a tool called "cu" for that purpose, but, even on those systems, users tended to gravitate toward a newish tool called Kermit instead.
Kermit has its roots at Columbia University, where it was developed as a way to communicate between computers on a heterogeneous network. It would fit easily onto a floppy and was (unlike cu) easy to set up and use; one just needed to figure out how many data bits, how many stop bits, and what kind of parity to use (RS232 is a fun "standard"), type an appropriate ATD command at the modem, and go. Kermit could even handle things like translating between different character sets; talking to that EBCDIC mainframe was not a problem.
Over a short period of time, Kermit developed some reasonable file transfer capabilities. Its protocol was efficient enough, but it was also designed to deal with serial ports which might interpret control characters in unexpected ways, turn eight-bit data into seven-bit data, and more. The robustness of the protocol meant it "just worked" between almost any two arbitrary types of machines. So it's not surprising that, in those pre-Internet days, Kermit became a popular way of moving files around.
Columbia distributed a number of versions of Kermit in source form; it could be had from bulletin board sites, DECUS tapes, and more. The program was never released as free software, though. When Kermit was first coming into use, free software licenses as such didn't exist yet. The university considered releasing the code into the public domain but decided that it wasn't a good idea.
The license for Kermit varied over time, but was always of the "you may make noncommercial use of this program" variety. The final version of the Kermit license allowed bundling the program with free operating systems, but did not allow modifications to the source without permission. As a result, despite the fact that Kermit's license allowed distribution with systems like Linux, most distributors eventually shied away from it because it was not truly free software.
Anybody other than free operating system projects wanting to distribute Kermit commercially had to buy per-seat licenses (at a cost of $3-10 each) from Columbia.
Kermit has proved remarkably durable over the years. During Kermit's lifespan, the Internet has taken over, even our phones can run ssh, and RS232-based communications have mostly fallen by the wayside. Kernel developers are still known to use serial consoles for some types of debugging, but your editor would predict that a substantial portion of the rest of LWN's readership has never had to wonder where that null modem cable went to. Kermit's user base must certainly be shrinking, but it is still being maintained and sold to those who need it.
Or, at least, it was; in March the university announced that it was unable to continue supporting the Kermit project. One assumes that commercial license sales had finally dropped to the point where they weren't worth the administrative overhead of dealing with them. As of July 31, 2011, the university will put no further development effort into Kermit and will no longer provide support and maintenance services. A three-decade project is coming to a close.
Columbia University is doing one more thing with Kermit before the end, though - it is releasing the software under the BSD license. C-Kermit 9.0 will carry that license; the first beta release was made on June 15. The 9.0 release will also support the FORCE-3 packet protocol ("for use under severe conditions"), improved scripting, various fixes, and more. So the 9.0 release, presumably scheduled for sometime close to the July 31 deadline, will not just have new features; it will be free software for the first time.
As a result of this change, Kermit may soon show up in a distribution repository near you; most Linux users are, at this point, unlikely to care much. But, for many of us, there will yet come a time when the only way to talk to a system of interest is through a serial connection. Kermit is far from the only option we have at this point, needless to say, but it's a good one. Kermit's hop into the free software community is more than welcome.
On keys and users
Secure communication requires encryption keys, but key management and distribution are difficult problems to solve. They are also important problems to solve; solutions that can be easily used by those who are not technically inclined are especially needed. Turning encrypted communications into the default, rather than an option used mostly by the technically savvy, will go a long way toward protecting users from criminals or repressive regimes. Even if these secure communications methods are only used by a subset of internet users, that subset truly needs these tools.
HTTPS
For most users, the only encrypted communication they use is SSL/TLS for accessing web pages served over HTTPS. The keys used for HTTPS are normally stored on the server side in the form of certificates that are presented to the browser when the connection is made. In order to establish the identity of the remote server, to avoid phishing-style attacks, those certificates are signed by certificate authorities (CAs) such that the browser can verify the key that it receives.
This scheme suffers from a number of problems, but it works well enough in practice so that, by and large, users can safely enter credit card and other personal information into the sites. It relies on centralized authorities in the form of the CAs, however, which can be a barrier to web site owners or users who might otherwise be inclined to serve content over HTTPS. In addition, trusting CAs has its own set of dangers. But, contrary to some of the big web players' beliefs, the web is not the only form of communication.
Other protocols
For the most part, things like instant messages, email, peer-to-peer data transfers, and voice over IP (VoIP) are all transmitted over the internet in the clear. Solutions exist to encrypt those communications, but they are not widely used, at least partly because of the key problem. Each protocol generally has its own idea of key formats and storage, as well as how to exchange those keys.
For efforts like the Freedom Box, it will be extremely important to find a solution to this key problem, so that users can more-or-less effortlessly communicate in private. If multiple applications that used keys could agree on common key types and formats, it would reduce the complexity. That may be difficult for a number of reasons, some technical, others social, so perhaps finding a way to collect up all of a user's keys into a single "key bundle" makes sense.
Generally, encryption these days uses public key cryptography, which means that there are actually two keys being used. One is the public key that can be widely disseminated (thus its name), while the other is a private key that must be kept secure. A key bundle would thus have two (or more) parts, the bundle that's shown to the rest of the world for communication and authentication, and the other that's kept secret. This private bundle would require the strongest protections against loss or theft.
Generating the keys, and agreeing upon some kind of bundle format, are fairly straightforward technical problems. Those can be solved rather easily. But the real problems will lie in educating users about keys, the difference between public and private keys, and how to properly protect their private keys. User interfaces that hide most of that complexity will be very important, but there will be things that users need to understand (chief among them will be the importance of private bundle safety).
Storing the keys
It would be a simpler problem to solve if the private bundle could be kept, securely, in one location, but that's not really a workable scenario. Users will want (or need) to communicate from a wide variety of devices, desktops, laptops, tablets, mobile phones, etc., and will want to maintain their identities (i.e. keys) when using any or all of them.
Storing the bundles at some "trusted" location on the internet ("in the cloud" in the parlance of our times) might be possible but it would also centralize their location. If some entity can cut off access to your key bundle, it leaves you without a way to communicate securely. Spreading the bundles out to various locations, and caching them locally, would reduce those problems somewhat. Obviously, storing the bundles anywhere (including locally) would require that the bundles themselves be encrypted—using a strong password—which leads to another set of potential problems, of course.
Since these private key bundles will be so important, there will need to be some way to back them up, which would be an advantage of the cloud scenario. Losing one's well-established identity could be a very serious problem. A worse problem would be if someone of malicious intent were to gain control over the keys. For either of those problems, some kind of revocation mechanism for the corresponding public keys would need to be worked out.
There are already some solutions to these problems, but, like much of the rest of the key management landscape, they tend to be very key and application-specific. For example, GNU Privacy Guard (GPG) public keys are often registered at a key server so that encrypted email can be decrypted and verified. Those keys can also be revoked by registering a revocation certificate with the key server. But, once again, key servers centralize things to some extent. Likewise, SSL/TLS certificates can be revoked by way of a certificate revocation list that is issued by a CA.
Public keys
Most of the above is concerned with the difficulties in maintaining the private bundle, but there are problems to be solved on the public key side as well. Gathering a database of the public keys of friends, family, colleagues, and even enemies will be important in order to communicate securely with them. But, depending on how that public key is obtained, it may not necessarily correspond to the individual in question.
It is not difficult to generate a key that purports to be associated with any arbitrary person you choose. The SSL/TLS certificate signing process is set up to explicitly deal with that problem by having the CAs vouch that a given certificate corresponds to a particular site. That kind of centralized authority is not a desirable trait for user-centric systems, however, so something like GPG's (and others') "web of trust" is a better model. Essentially, the web of trust allows the user to determine how much trust to place in a particular key, but it requires a fair amount of user knowledge and diligence that may make it too complex for non-technical users.
Free software can make a difference
As can be seen, there are a large number of hurdles that need to be cleared in order to make secure communication both ubiquitous and (relatively) simple to use. Key management has always been the Achilles' heel of public key cryptography, and obvious solutions are not readily at hand. But these are important problems to solve, even if the solutions are only used by a minority of internet users. For many, the concern that their communications may be intercepted will not be enough to outweigh the extra hassle of communicating securely. For others, who are working to undermine repressive regimes for example, making it all Just Work will be very important.
This is clearly an area where free software solutions make sense. Proprietary software companies may be able to solve some of these problems, but the closed-source nature of their code will make it very worrisome to use for anyone with a life-and-death need for it. There have just been far too many indications that governments can apply pressure to have "backdoors" inserted into communications channels. With open standards and freely available code, though, activists and others can have a reasonable assurance of privacy.
These problems won't be solved overnight, nor will they all be solved at once. Public key cryptography has been with us for a long time without anyone successfully addressing the key management problem. Recent events in the Middle East and elsewhere have shown that internet communication can play a key role in thwarting repressive regimes. Making those communications more secure will further those aims.
Karen Sandler on her new role at GNOME
Eight months after Stormy Peters left the post to join Mozilla, the GNOME Foundation has chosen Karen Sandler as its new executive director. Sandler is leaving a position with the Software Freedom Law Center (SFLC) as general counsel and starting with the Foundation June 21, but will be working part-time for each organization during the transition.
Prior to joining the SFLC, Sandler was an associate with Gibson, Dunn & Crutcher LLP in New York and Clifford Chance in New York and London. Taking up a position as executive director seems a slight departure from practicing law, even if focused on free software — so what made Sandler choose to pursue working with the GNOME Foundation? Sandler says the appeal is GNOME itself: "It's an incredibly important and impressive software that's been entering a critical time. I can't wait to be a part of that and assist the GNOME community to develop and grow."

She does acknowledge that it's a departure from focusing on legal issues, but says that she's looking forward to the change. "As a lawyer you're generally working to avoid pitfalls and anticipate the worst case scenarios and I'm excited to help much more proactively than that."
So what will Sandler actually be doing as executive director? When Peters held the role, she says that she started by "asking everyone in the GNOME community what they thought I should do." During her time, Peters says that she ran the Foundation's day-to-day operations, served as a contact point for questions, helped build the marketing team and travel committee, and "helped make events happen" though she says events mostly happened thanks to the GNOME community.
Sandler, taking a cue from Peters, says that she'll ask what people think she'll be working on, but hopes to spend at least some time on advocacy. She notes that this isn't dissimilar to what she's been doing for the SFLC already.
Sandler also points out that there are likely to be a lot of housekeeping tasks that have gathered dust since Peters left for Mozilla and her first days will be "spent getting up to speed, getting to know people, taking care of the administrative backlog and ramping up for the Desktop Summit (and of course following up on items raised during the Summit). I'll also look to renew relationships and generally try to immerse myself in all things GNOME."
Peters left in November 2010, and the search committee for the executive director was announced at the end of December. The committee included Peters, IBM's Robert Sutor, GNOME Foundation director Germán Póo-Caamaño, Kim Weins of OpenLogic, Jonathan Blandford of Red Hat, Luis Villa of Mozilla, former GNOME board member Dave Neary, and Bradley Kuhn of the Software Freedom Conservancy (SFC). However, Kuhn says that he stepped down from the committee once Sandler emerged as a serious candidate, as they had worked closely together at the SFLC and SFC; Kuhn also considers her a personal friend.
One thing Sandler won't be doing is driving the technical direction of GNOME. Sandler says that, like Peters, she has "a limited role" in the technical direction of GNOME, and says "I'd support whatever the community and release team decided."
Another large part of the role is fundraising. The executive director is the point person for the advisory board and works to encourage members to sign up and donate not only the advisory board fees, but also to contribute to specific events like GNOME Hackfests.
Financially, the GNOME Foundation is doing well enough. In April 2009 there was a concern that the foundation would be unable to continue the executive director position due to lack of funds. In that message, John Palmieri said that Peters had managed to recruit a number of new corporate sponsors, but "we are still projecting that without a significant influx of steady contributions we will be unable to keep an Executive Director on the payroll without cutting into the activities budget." The foundation started leaning heavily on its Friends of GNOME program for individual donations, and doubled advisory board fees for corporate members from $10,000 per year to $20,000.
Despite the increased fees and additional income from Friends of GNOME, the budget for 2011 shows a decline in income of about $52,000, while the proposed expenditures are higher by about $64,000. It's worth noting, though, that the expenditures will likely be lower than planned, as it appears the budget was prepared with the expectation that an executive director would be hired by March.
The GNOME Foundation budget for 2011 is $518,000, with $145,510 earmarked for employees; the executive director position is the largest part of that budget. (The GNOME Foundation also has a part-time system administrator.) So, is the executive director position the best use of GNOME Foundation resources? Sandler says yes.
Sandler is joining the GNOME Foundation at an interesting time. The GNOME community is looking a bit fragmented at the moment. The GNOME 3.0 release has gotten mixed reviews, some users are feeling dissatisfied with the lack of ability to modify GNOME to suit their needs, and the relationship between GNOME and Canonical is strained at best. The GNOME Shell has not been widely embraced — Fedora 15 has shipped GNOME Shell, but Canonical has gone its own way with Unity, and other GNOME-centric distributions like Linux Mint chose to sit out the GNOME 3.0 release and ship GNOME 2.32.
In short, some would say that GNOME as a project has seen better days. Sandler is not convinced of that:
GNOME 3 has been controversial, but I think that's an exaggeration [that the project has seen better days]. I (and a whole lot of others based on the press I've been reading) think that the rewrite is really great. Some of the choices made in the rewrite were strong decisions and make for a different desktop experience but all were made with what is best for the user in mind. Some people will object to change no matter what it is - you can't make everyone happy. But you can never move forward if you are not prepared to take a few risks, even if it means some of your users will stay with the old version for a while. Honestly, I think GNOME 3 will win users over as it gets worked into the main distros, but it will take time for that to happen completely.
I've also read that some of the changes coming for GNOME 3 are geared towards developers, which hopefully will make it easier to write great applications for GNOME 3, not to mention just the attractiveness of the platform overall. As GNOME 3 applications improve so will adoption.
Whether GNOME 3 has time to evolve is another question. The Linux desktop, on traditional PCs and laptops, simply is not gaining much traction beyond its current audience. Linux is, however, being used in a number of mobile systems aimed at end users, though GNOME in its entirety is not yet there. Sandler says that she believes GNOME 3 will make GNOME more relevant on mobile devices.
Sandler is also unwilling to give up on the PC desktop just now. "It's also probably worth noting that desktop computing is still how most people use their computers (I think people forget that sometimes)!"
As for GNOME 3's slow adoption and potential fragmentation, Sandler says that the transition is "still underway and it will take time to see how things really shake out." She says that "strong decisions" will always alienate some people, but she hopes that GNOME can restore relationships. "I think that ultimately good software and good community will fuel increased participation rather than the fragmentation that seems [to] be arising now."
The upcoming Desktop Summit may be a good opportunity to mend some fences, says Sandler. "The GNOME board tells me that there's already a full list of sponsors and attendees for the Desktop Summit (and that Canonical in particular is planning to attend and sponsor the event). I believe that developers in the different distros are all talking to GNOME and I hope that we'll only see more cooperation going forward."
Ultimately, Sandler says she's optimistic about GNOME: "Coming to GNOME is obviously a vote of confidence from me personally. I love my work at SFLC and am only persuaded to leave because I think it's a great opportunity to be a part of a great organization."
One thing is certain: working with GNOME at this stage is likely to be an interesting job. We wish Sandler the best in her new role, and thank her for taking the time to talk with LWN ahead of the announcement.
Security
A hole in crypt_blowfish
A longstanding bug that was recently found in the crypt_blowfish password hashing library highlights the problems that can occur when a bug is found in a widely used low-level library. Because crypt_blowfish has been around for so long (this bug is said to go back to 1998 or possibly 1997), it has been used by various other packages (PHP for example) as well as some Linux distributions. The security impact is not likely to be huge, because it only affects passwords with somewhat uncommon characteristics, but the impact on those who have stored hashed passwords generated using the library may be a bit more painful.
Password hashing is a standard technique that is used by authentication mechanisms (for logging into a system or web application for example), so that the plaintext password does not need to be stored. Instead, something derived from the plaintext password is stored. Typically, a one-way construction built on a cryptographic hash function like SHA-1, or on a cipher like Blowfish, is used for that purpose. When the user presents the password, the same function is applied to that input and compared to what is stored. The idea is that even if an attacker gets access to the password database, they would need to "crack" the hashed values stored there. As its name implies, crypt_blowfish implements password hashing based on Blowfish.
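As a concrete illustration, here is a minimal sketch of how an application might use the traditional crypt() interface with a Blowfish-based ("$2a$") setting string. It assumes a C library or add-on that supports the $2a$ method (such as a libc patched with crypt_blowfish; stock glibc does not support it), and linking may require -lcrypt. The salt is fixed here purely for brevity; real code must generate it randomly.

    #define _XOPEN_SOURCE 700
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>     /* crypt(); on glibc, link with -lcrypt */

    int main(void)
    {
        /* "$2a$08$" selects the Blowfish-based method with a cost of 8;
         * the following 22 characters are the salt (fixed here only to
         * keep the example short). */
        const char *setting = "$2a$08$abcdefghijklmnopqrstuv";
        const char *hash;
        char *stored;

        /* Enrollment: hash the password and store the result. */
        hash = crypt("correct horse", setting);
        if (hash == NULL || hash[0] == '*')
            return 1;       /* failure, or $2a$ not supported here */
        stored = strdup(hash);

        /* Login: hash the presented password using the stored string
         * (which embeds method, cost, and salt) and compare. */
        hash = crypt("correct horse", stored);
        printf("%s\n", hash && strcmp(hash, stored) == 0 ? "match" : "no match");

        free(stored);
        return 0;
    }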
The bug itself is quite simple, and the change to fix it will likely make the problem obvious to C programmers:
tmp |= *ptr;
to:
tmp |= (unsigned char)*ptr;
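To see why the cast matters, here is a tiny standalone illustration (not the crypt_blowfish code itself; it assumes a platform where plain char is signed, as on x86, and the variable names simply mirror the fix):

    #include <stdio.h>

    int main(void)
    {
        const char *ptr = "\xa3";   /* '£' in ISO 8859-1 */
        unsigned int tmp;

        tmp = 0;
        tmp |= *ptr;                /* sign-extends: tmp becomes 0xffffffa3 */
        printf("without cast: %08x\n", tmp);

        tmp = 0;
        tmp |= (unsigned char)*ptr; /* only the low eight bits: 0x000000a3 */
        printf("with cast:    %08x\n", tmp);

        return 0;
    }

In crypt_blowfish's key setup, password characters are shifted and OR'd into a 32-bit word a few bytes at a time, so those stray high-order bits can clobber the bits contributed by neighboring characters; that is how characters preceding a high-bit one can end up being effectively ignored.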
The problem was actually discovered in the John the Ripper (JtR) password cracking program. As part of creating a test suite, "magnum" was attempting to crack a password that consisted of a single non-ASCII character (£ or 0xa3), which was hashed using a library other than crypt_blowfish. Because JtR and crypt_blowfish share their implementation of the Blowfish encryption algorithm, tests using both would pass, but independent implementations would generate the correct hash, thus failing to match what was generated by JtR.
JtR and crypt_blowfish developer Alexander Peslyak (aka Solar Designer) analyzed the effects of the bug and found that some password pairs would hash to the same value with only minimal differences (e.g. "ab£" hashed to the same value as "£"), which would make password cracking easier. A further analysis shows that some characters appearing just before one with the high bit set may be effectively ignored when calculating the hash. That would mean that a simpler password than that given by the user could be used and would still be considered valid—a significant weakening of the user's password.
It should be noted that Solar Designer has been very forthcoming with details of the problem and its effects. His CVE request is quite detailed, and he takes full responsibility for the error. Other projects that find security holes in their code would do well to follow his lead here.
There is no real problem for JtR, as it can get the updated code so that it can crack passwords with high-bit-set characters in them. In addition, it could continue to use the broken code to crack passwords that were hashed incorrectly. But for applications that use crypt_blowfish, it's a different story. Passwords with the high bit set may be fairly uncommon—at least for sites with typically ASCII-only users—but there is no easy way to determine whether stored hashes are invalid or not.
The safest solution for administrators with potentially bad password hashes (which could include those running Openwall Linux (Owl), SUSE, and ALT Linux, all of which can use crypt_blowfish for hashing the password database) is to invalidate all passwords and have their users input new ones. That could prove to be a logistical nightmare, however, depending on how easily, and securely, users can set new passwords without authenticating with their existing password. Requiring that users change their existing passwords after logging in is an alternative, but one that might give attackers a window in which to operate. It also leaves passwords unchanged for any users that do not log in.
The risks of attackers using this flaw to log in to a site are likely to be fairly small, but they do exist. If a site's login process is susceptible to brute force attacks, this bug makes such attacks somewhat easier, at least for specific password types. On the other hand, if the password database has been exposed, and some passwords were not crackable (at least for attackers using cracking programs other than JtR), this information would give them the means to crack those passwords. In the final analysis, it is an exploitable hole, but not the kind to send administrators into panic mode.
It is somewhat amazing that this bug has existed for 13+ years in a fairly widely used library. Up until now, no one has tested it in this way, at least publicly, which should serve as something of a warning to those who are using well-established software, both free and proprietary. With free software (crypt_blowfish has been placed into the public domain which may be a bit legally murky but is in keeping with free software ideals) there are more and easier opportunities to examine and test the code, but that's only effective if the testing is actually done.
There are, without doubt, bugs still lurking in various security-oriented libraries that we depend on every day, so testing those libraries (as well as code that is not being used for security purposes) regularly and systematically can only help find them. While it took more than a decade for this bug to come to light, it's worth pointing out that it certainly could have been discovered by others in the interim. Attackers clearly do their own testing, and are generally not inclined to release their results. That may not be the case here, but one should always recognize that public disclosure of a vulnerability is not necessarily tied to its discovery.
Brief items
Security quotes of the week
Actually, we are prolonging the security support for 4 and later, it's not just a minimum of six months any more, now it's "forever", just that the security updates always bring features and a new "version" as well. ;-)
Twenty minutes passed, and all four were still actively using Facebook.
The story behind the mysterious CyanogenMod update
A group called Lookout Mobile Security has put out an alert regarding malware which is targeting custom Android builds. "The application follows the common pattern of masquerading as a legitimate application, though a few extra permissions have been added. At first glance, it appears like other recent Android Trojans that tries to take control over the mobile phone by rooting the phone (breaking out of the Android security container), but instead jSMSHider exploits a vulnerability found in the way most custom ROMs sign their system images. The issue arises since publicly available private keys in the Android Open Source Project (AOSP) are often used to sign the custom ROM builds. In the Android security model, any application signed with the same platform signer as the system image can request permissions not available to normal applications, including the ability to install or uninstall applications without user intervention." This, it seems, is the vulnerability closed by the May 2011 CyanogenMod security update. (Seen on The H).
New vulnerabilities
ffmpeg mplayer: arbitrary code execution
Package(s): ffmpeg mplayer
CVE #(s): CVE-2011-1931
Created: June 21, 2011
Updated: September 20, 2011
Description: From the Pardus advisory: A vulnerability has been fixed in ffmpeg, which can result in an out-of-array write and potentially arbitrary code execution.
groff: insecure tmp file usage
Package(s): groff
CVE #(s): CVE-2009-5044
Created: June 16, 2011
Updated: June 8, 2012
Description: From the openSUSE advisory: groff created temporary files in an insecure way. Local attackers could potentially exploit that to overwrite files of other users.
kernel: multiple vulnerabilities
Package(s): kernel
CVE #(s): CVE-2011-1768 CVE-2011-2182
Created: June 20, 2011
Updated: March 7, 2012
Description: From the Debian advisory:
CVE-2011-1768: Alexecy Dobriyan reported an issue in the IP tunnels implementation. Remote users can cause a denial of service by sending a packet during module initialization.
CVE-2011-2182: Ben Hutchings reported an issue with the fix for CVE-2011-1017 that made it insufficient to resolve the issue.
klibc: command injection
Package(s): klibc
CVE #(s): CVE-2011-1930
Created: June 21, 2011
Updated: September 27, 2013
Description: From the Pardus advisory: A vulnerability has been fixed in klibc which allows attackers to inject commands.
libvirt: arbitrary host file access
Package(s): libvirt
CVE #(s): CVE-2011-2178
Created: June 16, 2011
Updated: July 12, 2011
Description: From the openSUSE advisory: A regression re-introduced automatic disk probing, which potentially allowed users to access arbitrary files (CVE-2011-2178).
libxml2: code execution
Package(s): libxml2
CVE #(s): CVE-2011-1944
Created: June 16, 2011
Updated: March 1, 2013
Description: From the Ubuntu advisory: Chris Evans discovered that libxml2 incorrectly handled memory allocation. If an application using libxml2 opened a specially crafted XML file, an attacker could cause a denial of service or possibly execute code as the user invoking the program.
moodle: multiple vulnerabilities
Package(s): moodle
CVE #(s): (none)
Created: June 16, 2011
Updated: June 22, 2011
Description: From the Debian advisory:
MSA-11-0011: Multiple cross-site scripting problems in media filter
MSA-11-0015: Cross Site Scripting through URL encoding
MSA-11-0013: Group/Quiz permissions issue
movabletype-opensource: multiple vulnerabilities
Package(s): movabletype-opensource
CVE #(s): (none)
Created: June 17, 2011
Updated: December 30, 2011
Description: From the Debian advisory: It was discovered that Movable Type, a weblog publishing system, contains several security vulnerabilities: A remote attacker could execute arbitrary code in a logged-in user's web browser. A remote attacker could read or modify the contents in the system under certain circumstances.
Mozilla products: multiple vulnerabilities
Package(s): firefox
CVE #(s): CVE-2011-2374 CVE-2011-2375 CVE-2011-2373 CVE-2011-2377 CVE-2011-2371 CVE-2011-2366 CVE-2011-2367 CVE-2011-2368 CVE-2011-2370 CVE-2011-2369 CVE-2011-2364 CVE-2011-0083 CVE-2011-2362 CVE-2011-0085 CVE-2011-2363 CVE-2011-2365 CVE-2011-2376
Created: June 22, 2011
Updated: July 23, 2012
Description: The firefox, thunderbird, and seamonkey teams have issued updated versions of their products to address a long list of "critical" security vulnerabilities. See the firefox 3.6.18 and firefox 5 "fixed vulnerability" lists for more.
nagios: cross-site scripting
Package(s): nagios3
CVE #(s): CVE-2011-1523 CVE-2011-2179
Created: June 16, 2011
Updated: April 2, 2012
Description: From the Ubuntu advisory: Stefan Schurtz discovered that Nagios did not properly sanitize its input when processing certain requests, resulting in cross-site scripting (XSS) vulnerabilities. With cross-site scripting vulnerabilities, if a user were tricked into viewing server output during a crafted server request, a remote attacker could exploit this to modify the contents, or steal confidential data, within the same domain.
pam_ssh: privilege escalation
Package(s): pam_ssh
CVE #(s): (none)
Created: June 21, 2011
Updated: June 22, 2011
Description: From the Red Hat bugzilla: It was found that pam_ssh, a PAM module for use with SSH keys and ssh-agent, did not properly drop root SGID privileges prior to executing the ssh-agent authentication agent. A local attacker could use this flaw to potentially escalate their privileges.
php: code execution
Package(s): php5
CVE #(s): CVE-2011-1938
Created: June 16, 2011
Updated: August 25, 2011
Description: From the Novell bug database: Stack-based buffer overflow in the socket_connect function in ext/sockets/sockets.c in PHP 5.3.3 through 5.3.6 might allow context-dependent attackers to execute arbitrary code via a long pathname for a UNIX socket.
SUSE Manager: multiple vulnerabilities
Package(s): SUSE Manager
CVE #(s): CVE-2009-4139 CVE-2011-1594
Created: June 20, 2011
Updated: June 22, 2011
Description: From the SUSE advisory:
CVE-2009-4139: A cross-site request forgery (CSRF) attack can be used to execute web-actions within the SUSE Manager web user interface with the privileges of the attacked user.
CVE-2011-1594: Open Redirect bug at the login page (Phishing).
SUSE Manager Proxy: insecure SSL settings
Package(s): SUSE Manager Proxy
CVE #(s): (none)
Created: June 20, 2011
Updated: June 22, 2011
Description: From the SUSE advisory: This security update of SUSE Manager Proxy improves the SSL cipher suite setting to only allow secure SSLCipher and SSLProtocols.
torque: remote code execution
Package(s): torque
CVE #(s): CVE-2011-2193
Created: June 21, 2011
Updated: September 5, 2012
Description: From the Red Hat bugzilla: The Torque server does not check the length of the "job name" argument before using it; this string is verified only on the client side. It is possible to use a modified Torque client or DRMAA interface to submit a job with an arbitrarily chosen job name in terms of length and content. Thus, it is possible for an attacker to overflow a buffer and overwrite some of the Torque server process's internal data, changing its behavior. Note that this data overwriting could lead to remote code execution.
unixODBC: buffer overflow
Package(s): unixODBC
CVE #(s): CVE-2011-1145
Created: June 20, 2011
Updated: June 22, 2011
Description: From the openSUSE advisory: A specially crafted reply from a malicious server could overflow a buffer in unixODBC.
Page editor: Jake Edge
Kernel development
Brief items
Kernel release status
The current development kernel is 3.0-rc4, released on June 20. It consists of a bunch of fixes (some for a couple of significant performance regressions) and a couple of new drivers. It also apparently has a new compilation error which may require the application of this patch to get around. The full changelog has all the details.

Stable updates: there have been no stable updates released in the last week.
Quotes of the week
- Do you need to bias your pin to 3.3 V with an 1MOhm resistor?
- Do you want to mux your pin with different functions?
- Do you want to set the load capacitance of your pin to 10pF?
- Do you want to drive your pin as open collector?
- Is all or some the above software-controlled in your ASIC?
- Do you despair from the absence of relevant frameworks?
DON'T WORRY! The pinctrl subsystem is here to save YOU!
In place of them, here's a lovely haiku to sooth you:
Sand castle contest
Rain falling sideways and cold
Summer in Oregon
Netconf 2011
The Linux networking developers held their roughly annual networking minisummit on June 14 and 15 in Toronto. There will probably not be a detailed report from the gathering, and no videotaping was done, but there is still some information out there. Interested readers can start with the agenda for the gathering, which includes the slides for almost all of the presentations. Also worth a read is Ben Hutchings's summary with notes from most of the talks:
[...]
David [Miller] wants to get rid of the IPv4 routing cache. Removing the cache entirely seems to make route lookup take about 50% longer than it currently does for a cache hit, and much less time than for a cache miss. It avoids some potential for denial of service (forced cache misses) and generally simplifies routing.
All told, it looks like an interesting and productive gathering; if past patterns hold, we will get a more thorough summary at the 2011 Kernel Summit in October.
Kernel development news
User-friendly disk names
Device names, particularly for disks, can be confusing to Linux administrators because they get assigned at boot time based on the order in which the disks are discovered. So the same physical disk can be assigned a different device name (in /dev) on each boot, which means that kernel log messages and the output of various utilities may not correspond with the administrator's view of the system. A recent patch set looks to change that situation, but it is meeting some resistance from kernel hackers who think it should be handled by user space.
The patches posted by Nao Nishijima are pretty straightforward. They just add a preferred_name entry to struct device, which can be updated via sysfs. The patches then change some SCSI log messages and /proc/partitions output to use the preferred name if it has been set. Greg Kroah-Hartman expressed concerns about changing /proc/partitions as various tools parse that file so it is part of the kernel's user-space interface. Adding the preferred name as a new field on each line might very well confuse utilities that parse the file.
More importantly, though, he notes that one could just change the tools so that they use the names as arguments or in their output. Any scheme that would map preferred names to specific disks requires some kind of mapping file, so tools that wanted to use these preferred names (things like mount, smartd, and other disk-related tools) could do so using that mapping without involving the kernel at all:
I still think this is the correct way to solve the problem as it is a userspace issue, not a kernel one.
While the patches only use preferred_name for disk devices, the idea is to allow them to be added to any device (and then change log messages and utilities to use them). It is modeled after the ifalias entry that was added for network devices back in 2008, but some don't see that as something to emulate. Allowing only one alias for network devices is generally not enough "because people want not a single but multiple names at the same time", Kay Sievers said, so ifalias is only used by some SNMP utilities. Currently, udev maintains a set of links in /dev/disk/by-* that relate disks to kernel devices by a variety of characteristics (ID, label, path, and UUID). James Bottomley would like to see that be extended for the preferred names:
This will ensure that kernel output and udev input are consistent. It will still require that user space utilities which derive a name for a device will need modifying to print out the preferred name.
But there are problems inherent in that idea. In order for udev to know that the preferred name was set, a uevent would have to be generated. That could be done, but it leads to other problems, as Sievers points out (instead of by-preferred, he uses by-pretty):
/dev/disk/by-pretty/foo and some tool later thinks the pretty name should better be 'bar', it writes the name to /sys, we get a uevent, the old link disappears, we get a new link, mount has no device node anymore for the mounted device ...
Essentially, udev keeps track of the devices present in the system (and their attributes like, potentially, preferred name), but doesn't have any concept of tracking "no longer valid names" as Sievers puts it. That means that udev can't just leave older entries around when user space changes the preferred name: "We can not just add stuff to /dev without a udev database entry, it would never get removed on device unplug and leave a real mess behind."
One possible solution for the renaming problem is to only allow one write to preferred_name, so that, once established, those aliases couldn't be changed without a reboot. udev could set up the proper links, and various tools could use the aliases as needed. That would solve the renaming problem at the cost of some flexibility. In general, no one was really opposed to the idea of some kind of more-mnemonic name for disks, it is more of a question of how to get there.
Sievers proposed adding a way for udev to list all of the symlinks that it creates during device discovery. Anyone (or any tool) that needed to associate an alias with a particular disk could use that output to determine the current device being used (based on the UUID for example), then make the substitution as appropriate. That would work, in general, but Bottomley sees it as overly complex for users:
The reason for storing this in the kernel is just that it's easier than trying to update all the tools, and it solves 90% of the problem, which makes the solution usable, even if we have to update tools to get to 100%.
But Sievers and driver core maintainer Kroah-Hartman see it as papering over more substantial issues. Sievers, at least, would like to see text-file-style debug and error message output replaced (or supplemented) with something more structured:
Adding just another single-name to the kernel just makes the much-too-dumb free-text printk() a bit more readable, but still sounds not like a solution. Pimping up syslog is not the solution to this problem, and it can't be solved in the kernel alone.
But, from the user's perspective, disks may already have names (with labels on the enclosures themselves for example) and it would be quite convenient for the kernel's messages to reflect them. In the end, Sievers isn't opposed to a disk-specific (rather than for all devices) solution, though he thinks it isn't really the right direction to go. Kroah-Hartman agrees and is adamant that this change not go into the driver core. Based on that, Nishijima plans to redo the patches, moving the name to struct gendisk, renaming the field to alias_name (rather than "preferred") to better reflect its purpose, and generating a uevent when the name changes.
But following the lead of the network ifalias means adding to the kernel's user-space interface, only this time for disks. While it may solve an immediate problem for administrators, it will also leave behind some legacy code when, or if, a better solution comes around. That's unfortunate, but, since it solves a real problem, and the change is restricted to a subsystem whose maintainer (Bottomley) is in favor of it, it may well turn up in the mainline before long. Any change to system error and debug logging along the lines of what Sievers described is certainly quite a ways off, though there have long been calls for structured kernel output. Sometimes it is just easier to make a change like this in one place, rather than trying to identify and fix all of the places outside of the kernel that would need it.
The platform device API
In the very early days, Linux users often had to tell the kernel where specific devices were to be found before their systems would work. In the absence of this information, the driver could not know which I/O ports and interrupt line(s) the device was configured to use. Happily, we now live in the days of busses like PCI which have discoverability built into them; any device sitting on a PCI bus can tell the system what sort of device it is and where its resources are. So the kernel can, at boot time, enumerate the devices available and everything Just Works.

Alas, life is not so simple; there are plenty of devices which are still not discoverable by the CPU. In the embedded and system-on-chip world, non-discoverable devices are, if anything, increasing in number. So the kernel still needs to provide ways to be told about the hardware that is actually present. "Platform devices" have long been used in this role in the kernel. This article will describe the interface for platform devices; it is meant as needed background material for a following article on integration with device trees.
Platform drivers
A platform device is represented by struct platform_device, which, like the rest of the relevant declarations, can be found in <linux/platform_device.h>. These devices are deemed to be connected to a virtual "platform bus"; drivers of platform devices must thus register themselves as such with the platform bus code. This registration is done by way of a platform_driver structure:
struct platform_driver {
int (*probe)(struct platform_device *);
int (*remove)(struct platform_device *);
void (*shutdown)(struct platform_device *);
int (*suspend)(struct platform_device *, pm_message_t state);
int (*resume)(struct platform_device *);
struct device_driver driver;
const struct platform_device_id *id_table;
};
At a minimum, the probe() and remove() callbacks must be supplied; the other callbacks have to do with power management and should be provided if they are relevant.
The other thing the driver must provide is a way for the bus code to bind actual devices to the driver; there are two mechanisms which can be used for that purpose. The first is the id_table argument; the relevant structure is:
struct platform_device_id {
char name[PLATFORM_NAME_SIZE];
kernel_ulong_t driver_data;
};
If an ID table is present, the platform bus code will scan through it every time it has to find a driver for a new platform device. If the device's name matches the name in an ID table entry, the device will be given to the driver for management; a pointer to the matching ID table entry will be made available to the driver as well. As it happens, though, most platform drivers do not provide an ID table at all; they simply provide a name for the driver itself in the driver field. As an example, the i2c-gpio driver turns two GPIO lines into an i2c bus; it sets itself up as a platform driver with:
static struct platform_driver i2c_gpio_driver = {
.driver = {
.name = "i2c-gpio",
.owner = THIS_MODULE,
},
.probe = i2c_gpio_probe,
.remove = __devexit_p(i2c_gpio_remove),
};
With this setup, any device identifying itself as "i2c-gpio" will be bound to this driver; no ID table is needed.
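For a driver that does want an ID table (to support several related device names with one driver, for example), the declaration might look like the following sketch; the "foomatic" names and driver_data values here are made up for illustration (the same fictional device reappears below):

    static const struct platform_device_id foomatic_ids[] = {
        { "foomatic",     0 },
        { "foomatic-mk2", 1 },  /* driver_data distinguishes the variants */
        { },                    /* terminating entry */
    };
    MODULE_DEVICE_TABLE(platform, foomatic_ids);

    static struct platform_driver foomatic_driver = {
        .driver = {
            .name  = "foomatic",
            .owner = THIS_MODULE,
        },
        .id_table = foomatic_ids,
        .probe    = foomatic_probe,
        .remove   = __devexit_p(foomatic_remove),
    };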
Platform drivers make themselves known to the kernel with:
int platform_driver_register(struct platform_driver *driver);
As soon as this call succeeds, the driver's probe() function can be called with new devices. That function gets as an argument a platform_device pointer describing the device to be instantiated:
struct platform_device {
const char *name;
int id;
struct device dev;
u32 num_resources;
struct resource *resource;
const struct platform_device_id *id_entry;
/* Others omitted */
};
The dev structure can be used in contexts where it is needed - the DMA mapping API, for example. If the device was matched using an ID table entry, id_entry will point to the specific entry matched. The resource array can be used to learn where various resources, including memory-mapped I/O registers and interrupt lines, can be found. There are a number of helper functions for getting data out of the resource array; these include:
struct resource *platform_get_resource(struct platform_device *pdev,
unsigned int type, unsigned int n);
struct resource *platform_get_resource_byname(struct platform_device *pdev,
unsigned int type, const char *name);
int platform_get_irq(struct platform_device *pdev, unsigned int n);
The "n" parameter says which resource of that type is desired, with zero indicating the first one. Thus, for example, a driver could find its second MMIO region with:
r = platform_get_resource(pdev, IORESOURCE_MEM, 1);
Assuming the probe() function finds the information it needs, it should verify the device's existence to the extent possible, register the "real" devices associated with the platform device, and return zero.
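Putting those helpers together, a probe() function for the fictional "foomatic" device used below might look something like this sketch; the mapping and error-handling details are illustrative assumptions, not code from a real driver:

    static int __devinit foomatic_probe(struct platform_device *pdev)
    {
        struct resource *regs;
        void __iomem *base;
        int irq;

        /* First MMIO region and first interrupt line of the device */
        regs = platform_get_resource(pdev, IORESOURCE_MEM, 0);
        irq = platform_get_irq(pdev, 0);
        if (regs == NULL || irq < 0)
            return -ENODEV;

        base = ioremap(regs->start, resource_size(regs));
        if (base == NULL)
            return -ENOMEM;

        /* Here the driver would verify that the hardware is actually
         * present, register the "real" devices it provides, and hook
         * up the interrupt handler. */
        platform_set_drvdata(pdev, base);
        return 0;
    }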
Platform devices
So now we have a driver for a platform device, but no actual devices yet. As was noted at the beginning, platform devices are inherently not discoverable, so there must be another way to tell the kernel about their existence. That is typically done with the creation of a static platform_device structure providing, at a minimum, a name which is used to find the associated driver. So, for example, a simple (fictional) device might be set up this way:
static struct resource foomatic_resources[] = {
{
.start = 0x10000000,
.end = 0x10001000,
.flags = IORESOURCE_MEM,
.name = "io-memory"
},
{
.start = 20,
.end = 20,
.flags = IORESOURCE_IRQ,
.name = "irq",
}
};
static struct platform_device my_foomatic = {
.name = "foomatic",
.resource = foomatic_resources,
.num_resources = ARRAY_SIZE(foomatic_resources),
};
These declarations describe a "foomatic" device with a one-page MMIO region starting at 0x10000000 and using IRQ 20. The device is made known to the system with:
int platform_device_register(struct platform_device *pdev);
Once both a platform device and an associated driver have been registered, the driver's probe() function will be called and the device will be instantiated. Registration of device and driver are usually done in different places and can happen in either order. A call to platform_device_unregister() can be used to remove a platform device.
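To make that ordering independence concrete, here is a minimal sketch of how the two registrations might be wired up; in practice the device side usually lives in board-setup code and the driver side in the driver module (the names are the hypothetical ones used above):

    /* Board or platform setup code: declare the device. */
    static int __init my_board_init(void)
    {
        return platform_device_register(&my_foomatic);
    }
    arch_initcall(my_board_init);

    /* Driver module: register and unregister the driver. */
    static int __init foomatic_driver_init(void)
    {
        return platform_driver_register(&foomatic_driver);
    }
    module_init(foomatic_driver_init);

    static void __exit foomatic_driver_exit(void)
    {
        platform_driver_unregister(&foomatic_driver);
    }
    module_exit(foomatic_driver_exit);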
Platform data
The above information is adequate to instantiate a simple platform device, but many devices are more complex than that. Even the simple i2c-gpio driver described above needs two additional pieces of information: the numbers of the GPIO lines to be used as i2c clock and data lines. The mechanism used to pass this information is called "platform data"; in short, one defines a structure containing the specific information needed and passes it in the platform device's dev.platform_data field.
With the i2c-gpio example, a full configuration looks like this:
#include <linux/i2c-gpio.h>
static struct i2c_gpio_platform_data my_i2c_plat_data = {
.scl_pin = 100,
.sda_pin = 101,
};
static struct platform_device my_gpio_i2c = {
.name = "i2c-gpio",
.id = 0,
.dev = {
.platform_data = &my_i2c_plat_data,
}
};
When the driver's probe() function is called, it can fetch the platform_data pointer and use it to obtain the rest of the information it needs.
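In the i2c-gpio case, the probe() function can pick up that structure roughly as follows; this is a simplified sketch modeled on what such a driver needs to do, not the actual i2c-gpio code:

    static int __devinit i2c_gpio_probe(struct platform_device *pdev)
    {
        struct i2c_gpio_platform_data *pdata = pdev->dev.platform_data;
        int ret;

        if (pdata == NULL)
            return -ENXIO;

        /* pdata->sda_pin and pdata->scl_pin name the GPIO lines that
         * will carry the i2c data and clock signals. */
        ret = gpio_request(pdata->sda_pin, "sda");
        if (ret)
            return ret;
        ret = gpio_request(pdata->scl_pin, "scl");
        if (ret) {
            gpio_free(pdata->sda_pin);
            return ret;
        }

        /* ... set up and register the i2c adapter here ... */
        return 0;
    }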
Not everybody in the kernel community is enamored with platform devices; they seem like a bit of a hack used to encode information about specific hardware platforms into the kernel. Additionally, the platform data mechanism lacks any sort of type checking; drivers must simply assume that they have been passed a structure of the expected type. Even so, platform devices are heavily used, and that's unlikely to change, though the means by which they are created and discovered is changing. The way of the future appears to be device trees, which will be described in the following article.
Platform devices and device trees
The first part of this pair of articles described the kernel's mechanism for dealing with non-discoverable devices: platform devices. The platform device scheme has a long history and is heavily used, but it has some disadvantages, the biggest of which is the need to instantiate these devices in code. There are alternatives coming into play, though; this article will describe how platform devices interact with the device tree mechanism.

The current platform device mechanism is relatively easy to use for a developer trying to bring up Linux on a new system. It's just a matter of creating the descriptions for the devices present on that system and registering all of the devices at boot time. Unfortunately, this approach leads to the proliferation of "board files," each of which describes a single type of computer. Kernels are typically built around a single board file and cannot boot on any other type of system. Board files sort of worked when there were relatively small numbers of embedded system types to deal with. Now Linux-based embedded systems are everywhere, architectures which have typically depended on board files (ARM, in particular) are finding their way into more types of systems, and the whole scheme looks poised to collapse under its own weight.
The hoped-for solution to this problem goes by the term "device trees"; in essence, a device tree is a textual description of a specific system's hardware configuration. The device tree is passed to the kernel at boot time; the kernel then reads through it to learn about what kind of system it is actually running on. With luck, device trees will abstract the differences between systems into boot-time data and allow generic kernels to run on a much wider variety of hardware.
This article is a good introduction to the device tree format and how it can be used to describe real-world systems; it is recommended reading for anybody interested in the subject.
It is possible for platform devices to work on a device-tree-enabled system with no extra work at all, especially once Grant Likely's improvements are merged. If the device tree includes a platform device (where such devices, in the device tree context, are those which are direct children of the root or are attached to a "simple bus"), that device will be instantiated and matched against a driver. The memory-mapped I/O and interrupt resources will be marshalled from the device tree description and made available to the device's probe() function in the usual way. The driver need not know that the device was instantiated out of a device tree rather than from a hard-coded platform device definition.
Life is not always quite that simple, though. Device names appearing in the device tree (in the "compatible" property) tend to take a standardized form which does not necessarily match the name given to the driver in the Linux kernel; among other things, device trees really are meant to work with more than one operating system. So it may be desirable to attach specific names to a platform device for use with device trees. The kernel provides an of_device_id structure which can be used for this purpose:
static const struct of_device_id my_of_ids[] = {
    { .compatible = "long,funky-device-tree-name" },
    { }
};
When the platform driver is declared, it stores a pointer to this table in the driver substructure:
static struct platform_driver my_driver = {
    /* ... */
    .driver = {
        .name = "my-driver",
        .of_match_table = my_of_ids
    }
};
The driver can also declare the ID table as a device table to enable autoloading of the module as the device tree is instantiated:
MODULE_DEVICE_TABLE(of, my_of_ids);
The one other thing capable of complicating the situation is platform data. Needless to say, the device tree code is unaware of the specific structure used by a given driver for its platform data, so it will be unable to provide that information in that form. On the other hand, the device tree mechanism is equipped to allow the passing of just about any information that the driver may need to know. Making use of that information will require the driver to become a bit more aware of the device tree subsystem, though.
Drivers expecting platform data should check the dev.platform_data pointer in the usual way. If there is a non-null value there, the driver has been instantiated in the traditional way and device tree does not enter into the picture; the platform data should be used in the usual way. If, however, the driver has been instantiated from the device tree code, the platform_data pointer will be null, indicating that the information must be acquired from the device tree directly.
In this case, the driver will find a device_node pointer in the platform device's dev.of_node field. The various device tree access functions (of_get_property(), primarily) can then be used to extract the needed information from the device tree. After that, it's business as usual.
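A hedged sketch of what that looks like in practice follows; the platform data structure, the "acme,threshold" property name, and the field names are made up for illustration and are not taken from any real binding:

#include <linux/platform_device.h>
#include <linux/of.h>

static int my_probe(struct platform_device *pdev)
{
    struct my_platform_data *pdata = pdev->dev.platform_data;  /* hypothetical type */
    struct device_node *np = pdev->dev.of_node;
    u32 threshold;

    if (pdata) {
        /* Instantiated from board code: use the structure directly. */
        threshold = pdata->threshold;
    } else if (np) {
        /* Instantiated from the device tree: read a (made-up) property. */
        const __be32 *prop = of_get_property(np, "acme,threshold", NULL);

        if (!prop)
            return -EINVAL;
        threshold = be32_to_cpup(prop);
    } else {
        return -ENODEV;
    }

    /* ... continue setting up the device using "threshold" ... */
    return 0;
}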
In summary: making platform drivers work with device trees is a relatively straightforward task. It is mostly a matter of getting the right names in place so that the binding between a device tree node and the driver can be made, with a bit of additional work required in cases where platform data is in use. The nice result is that the static platform_device declarations can go away, along with the board files that contain them. That should, eventually, allow the removal of a bunch of boilerplate code from the kernel while simultaneously making the kernel more flexible.
Patches and updates
Kernel trees
Architecture-specific
Core kernel code
Development tools
Device drivers
Filesystems and block I/O
Memory management
Networking
Virtualization and containers
Page editor: Jonathan Corbet
Distributions
Mageia bootstraps its package update policy
Mageia, the community-driven fork of Mandriva Linux, has made its initial release, just eight months after the project's inception. It is a major accomplishment, but the development team has already turned its attention to the next pressing challenge: how to set up and manage package updates, including how to handle security fixes, how to integrate backports and missing packages, and how to bootstrap a QA process.
The release of Mageia 1 marks a turning point for the distribution, because end users are now encouraged to install and use it on a daily basis — and up until now, most of those users were still running Mandriva, which continued to receive regular package updates.
A user switching to Mageia is likely to encounter several distinct package update scenarios: security updates to close specific vulnerabilities, incremental bugfix updates, and new packages that for one reason or another were not ready for inclusion with the initial release of Mageia 1. Each scenario has its peculiarities, although for simplicity's sake, it is in the project's interest to provide as uniform a testing and update process as possible.
Mageia's effort to hash out an update process is not simply a start-up problem. Many on the team were unhappy with Mandriva's process over the years. As Michael Scherer explained:
The Mageia team, therefore, has set goals for its own process to be more streamlined, more scalable, and to take security vulnerabilities more seriously. Many of the core team members are former Mandriva contributors (and quite a few were Mandriva employees), so they have the opportunity to learn directly from Mandriva's mistakes.
Thus far, Mageia has not pushed out any updated packages to the updates repository, although a June 19 message indicates that the first package updates have at least reached testing. Meanwhile, the outstanding list of "missing package" backport requests, which includes security updates made by Mandriva, stands at over a dozen. So, since the release of Mageia 1, security updates have languished along with other updates, though it appears that will be changing soon.
Updates (security and otherwise)
The discussion over how to structure the update process began on the mageia-dev mailing list in early June. As is the case for Mandriva, Mageia plans to run separate RPM repositories for main releases and package updates, plus testing repositories for use by developers and packagers. The draft update policy draws the distinction between the "updates" repository (which is to be used to provide bug-fix or security-fix updates to existing packages), and the "backports" repository (which is to be used to provide new releases of packages).
Nevertheless, there are a handful of exceptions. First, new versions of packages that require an update in order to work with a remote online service are allowed in the updates repository (the examples include networked games and virus scanners). Second, new versions of applications are permitted when the old version is no longer supported by the upstream project itself (a rule that primarily encompasses applications with a short lifecycle like the Chromium and Firefox web browsers).
The update release process follows the same broad plan as used by many other distributions: anyone can file a bug against a package; once the bug is triaged and assigned, the package maintainer can apply the relevant patches and rebuild the package. The maintainer is then responsible for submitting the new build to the updates_testing repository and re-assigning the bug to the QA team. The QA team tests the package to ensure that it fixes the bug, that it both installs and successfully updates the previous version of the package, and (in the case of security fixes) that it closes the security vulnerability.
A few obstacles remain to implementing this process. The first is that, as a new distribution, very few packages have official maintainers. Mageia may assign bugs to package committers as an initial step, but the understanding is that maintainership of a package is a voluntary task. The second obstacle is setting up QA communication channels, to ensure that QA team members do not accidentally step on each other's toes. Whether that takes the form of a read-only mailing list or a web announcement mechanism has yet to be determined.
The third obstacle is that while the QA team is significant in size (and growing), the security team is still much smaller, and less formalized. Conventional wisdom states that the security team has the time-consuming task of monitoring security vulnerability lists and databases, as well as the difficult job of assessing that a rebuilt package fixes the security vulnerability, as opposed to simply verifying a specific "normal" bug fix.
That is not to diminish the amount of work that goes into the QA process. Mageia, like Mandriva before it, prides itself on providing well-tested packages that both integrate with free software applications and play well with third-party (potentially proprietary) offerings. It supports both the GNOME and KDE desktop environments when many distributions choose either one or the other, and is available for both 32-bit and 64-bit architectures.
Backports and support
That desire to offer well-tested packages comes to the forefront in the project's discussion over how to handle "missing packages" and backports. The root of the issue is that Mageia 1 lacks several packages that were available in previous Mandriva releases. Due to personnel constraints, it simply isn't possible to package and test every application and library — so the project did its best with the Mageia 1 release, and is attempting to craft a policy for providing the missing pieces to users between now and the release of Mageia 2.
A case could be made that missing packages are essentially no different than other bugs, and if a package can be tested and added to the repository, it should go in the updates repository. The alternate viewpoint is that updates is by definition reserved for fixes to existing packages, and that the backports repository exists for the specific purpose of allowing users early access to packages slated for the next release — early access at their own risk.
But Mageia 1 is a special circumstance; many of the missing packages have long existed for Mandriva users, and in some cases are even stable and "expected" in a standard distribution (examples from the mailing list include passwd-gen and dd_rescue). Forcing users in need of those packages to enable an unstable repository like backports (which could inadvertently update other packages to new, un-QA-validated versions) puts them at an unacceptable risk. Yet treating backports as a fully-supported repository just to enable the addition of missing packages has its own risks, namely far more work for the QA and security teams.
Scherer argued that if the root cause of the trouble was that backports has historically been synonymous with "bug-filled," then fixing the backports process was the solution. Others suggested that the real pain-point was the notion that backports is "unsupported" — a term casually tossed around in many distributions, but rarely if ever defined. After all, in most cases, a user reporting a problem with a backports package still receives help fixing it from the community. Complicating the issue further is that Mandriva's package management tool RPMDrake allowed users to access specific packages from Mandriva's backports repository, but without enabling the repository as a whole — a behavior that carries its own set of QA and security trade-offs.
Samuel Verschelde suggested a plan of attack for bugs filed against backports that differentiates them from bugs in updates by establishing a "maintainer first, then upstream" process, relieving the regular QA team of the burden of integrating the fixes. Christiaan Welvaart recommended a way to tie bugs in the backports repository more closely in with the existing QA process by providing better step-by-step guidance to bug reporters.
Eventually, the team came to a consensus around a plan that makes a special exception for the transitional period between Mageia 1 and the release of Mageia 2. During the transition period, missing packages can be added to the updates repository provided that they follow the same QA process as every other update, and that there is a bug report filed formally requesting the package. After Mageia 2 is released, that process will be closed, and missing packages should go through the backports repository, or simply wait for the next release cycle.
Of course, Mageia is also in the process of working out the details of its release cycle, so the proposed backports plan has its small share of uncertainty. At the moment, the distribution has a bleeding-edge distribution named "Cauldron" that operates as a rolling release, but stable end-user releases are tentatively planned to follow the six-month cycle established by Mandriva.
Bubble bubble, hopefully with minimal toil and trouble
Bootstrapping a Linux distribution is always a tremendous amount of work, even for teams like Mageia's, with its healthy number of ex-Mandriva engineers and community contributors. But as many old Linux veterans will tell you, getting the first release out the door is not nearly as taxing as setting up a smooth development, testing, and release process that is sustainable over a multi-year period. Even established distributions struggle with QA and package testing issues from time to time.
The Mageia backports discussion touched on an intriguing issue: what precisely it means when an open source project declares something "supported" or "unsupported." It appears as if that thread has fallen by the wayside, and the team has moved on to the more practical issues of fleshing out the backports process and handling the corner-case of assisting Mandriva users as they transition to the first ever Mageia release. But perhaps Mageia's unusual position as a brand-new distribution staffed by team members that moved over collectively from Mandriva gives it a unique chance to address the supported/unsupported question. It has already allowed them to produce a full-featured distribution in well under a year's time.
Brief items
Distribution quotes of the week
AV Linux 5.0 Released
AV Linux is a live media distribution aimed at multimedia creation. From the 5.0 release announcement: "This release balances the rock-solid reliability of Debian's stable 'Squeeze' release and fortifies it with some carefully selected Sid and Custom packages to make it a state-of-the-art Multimedia Content Creation Powerhouse. This release will usher in a less frequent annual release cycle and shift the focus from Linux Audio/Video software testing to reliable Linux Audio/Video PRODUCTION. If you are someone who'd rather create content than experiment with alpha/beta software then this release is for you."
openSUSE releases milestone 2
The openSUSE project has announced the release of openSUSE 12.1 milestone 2 (of 6). "openSUSE is developed in a repository called Factory. Packages flow from the devel projects into Factory upon OK from the release team following the Factory Development Model. During the development cycle (more detailed model) periodic releases are made available for testing - these are the milestones. Six of them become available. After some several freezes go into effect, the component freeze just before the fourth milestone for instance. And about a couple of weeks after the last milestone the first of two Release Candidates is made ready for testing. The final openSUSE 12.1 release is expected on November 11th."
Scientific Linux 5.6
Scientific Linux has announced the release of SL 5.6. "Scientific Linux release 5.6 is based on the rebuilding of RPMS out of SRPMS's from Enterprise 5 Server and Client. It also has all errata and bugfixes up until May 13, 2011." See the release notes for details.
USU Linux and Jeoss Linux
Two distributions were added to the list this week. USU Linux hails from Bulgaria. It's Ubuntu-based and comes in three editions: the mini edition is suitable for home or business use, the desktop edition has a strong focus on educational uses, and the netbook edition is specially created for little mini portable computers. (Thanks to Tsvetomir Parvanov)
Jeoss Linux is a compact, install-everywhere, Ubuntu-based, server-oriented distribution. It is directly installable even on legacy, limited-resource, and embedded x86 platforms. (Thanks to Patrick Masotta)
Distribution News
Debian GNU/Linux
Planet Debian Derivatives
Paul Wise announces the creation of Planet Debian Derivatives. "For those of you who are interested in the activities of distributions derived from Debian, it aggregates the blogs and planets of all the distributions represented in the derivatives census. The list of feeds will be expanded semi-automatically as more distributions participate in the census and maintainers of census pages add new blog and planet URLs. Many thanks to Joerg Jaspert for doing the necessary setup procedures for the addition of the new sub-planet to Planet Debian. I'm glad that it was accepted alongside the sole language-based sub-planet (Planet Debian Spanish)."
New round of Debian IRC Training Sessions
The Debian project has announced the forthcoming start of a new round of IRC Training Sessions, which will be held on IRC by experienced community members. "Future sessions will be held on a regular basis, and will deal with various aspects of the Debian organization and different teams."
Gentoo Linux
Voting for the Gentoo Council 2011-2012 Election begins
The 2011-2012 Gentoo Council election is underway. Voting began June 20 and closes July 4, 2011.
Newsletters and articles of interest
Distribution newsletters
- DistroWatch Weekly, Issue 410 (June 20)
- openSUSE Weekly News, Issue 180 (June 18)
- Ubuntu Weekly Newsletter, Issue 221 (June 20)
Tiny Core Linux 3.7 brings "multi-Core" ISOs (The H)
The H reports on the release of Tiny Core Linux 3.7. New features include a 2.6.33.3 kernel with support for reading and writing NTFS partitions, font updates, control panel updates, and a "multi-core" ISO image that contains the Tiny Core, Micro Core, and the Network Tools editions in just 45M. "Tiny Core is a minimal Linux distribution that weighs in at just over 10 MB in size. The "tiny frugal" desktop distribution features the BusyBox tool collection and a minimal graphics system based on Tiny X and JWM. The core can run entirely in RAM, allowing for a very fast boot. With the help of online repositories, Tiny Core Linux can be expanded to include additional applications."
Page editor: Rebecca Sobol
Development
Making static analysis easier
It has been clear for some years now that static analysis tools have the potential to greatly increase the quality of the software we write. Computers are well placed to analyze source code and look for patterns which could indicate bugs. The "Stanford Checker" (later commercialized by Coverity) found a great many defects in a number of free software code bases. But within the free software community itself, the tools we have written are relatively scarce and relatively primitive. That situation may be coming to an end, though; we are beginning to see the development of frameworks which could become the basis for a new set of static analysis tools.
The key enabling changes have been happening in the compiler suites. Compilers must already perform a detailed analysis of the code in order to produce optimized binaries; it makes sense to make use of that analysis for other purposes as well. Some of that already happens in the compiler itself; GCC and LLVM can produce a much wider set of warnings than was once possible. These warnings are a good start, but much more can be done. That is especially true if projects can create their own analysis tools for project-specific checks; projects of any size tend to have invariants and rules of their own which go beyond the rules for the source language in general.
The FSF was, for years, hostile to the idea of making it easy to plug analysis modules into GCC, fearing that a plugin mechanism would enable the creation of proprietary modules. After some years of deliberation, the FSF rewrote the license exception for its runtime modules in a way that addressed the proprietary module worries; since then, GCC has had plugin module support. The use of that feature has been relatively low, so far, but there are signs that the situation may be beginning to change.
An early user of the plugin mechanism was the Mozilla project, which created two modules (Dehydra and Treehydra) to enable the writing of analysis code in JavaScript. These tools have seen some use within Mozilla, but development there seems to have slowed to a halt. The mailing list is moribund and the software does not appear to have seen an update in some time.
An alternative is GCC MELT. This project provides a fairly comprehensive plugin which allows the writing of analysis code in a Lisp-like language. This code is translated to C and turned into a plugin which can be invoked by the compiler. MELT is extensively documented; there are also slides from a couple of tutorials on its use.
MELT seems to be a capable system, but there do not appear to be a lot of modules written for it in the wild. One does not need to look at the documentation for long to understand why; the "basic hints" start with: "You first need to grasp GCC main internal representations (notably Tree & Gimple & Gimple/SSA)." MELT author Basile Starynkevitch's 130-slide presentation on MELT [PDF] does not get past the introductory GCC material until slide 85. MELT, in other words, requires a fairly deep understanding of GCC; it's not something that an outsider can pick up quickly. The lack of easy examples to work from is not helpful either.
More recently, David Malcolm has announced the release of a new framework which enables the creation of plugins as Python scripts which run within the compiler. His immediate purpose is to create tools for the development of the Python system itself; the most significant checker in his code tries to ensure that object reference counts are managed properly. But he sees the tool as potentially being useful for a number of other projects and even for prototyping new features for GCC itself.
At first glance, David's gcc-python-plugin mechanism suffers from the same difficulty as MELT - the initial learning curve is steep. It is also a very young and incomplete project; David has, by his own admission, only brought out the functionality he had immediate need for. The analysis code seems more approachable, though, and the mechanism for running scripts directly in the compiler seems more natural than MELT's compile-from-Lisp approach. It may be that this plugin will attract more users and developers than MELT as a result.
Or it may just be that your editor, being rather more proficient in Python than in Lisp, naturally likes the Python-based solution better.
In any case, one conclusion is clear: writing static analysis plugins for GCC is currently too hard; even capable developers who approach the problem will need to dedicate a significant chunk of time to understanding the compiler before they can begin to achieve anything in this area. The efforts described above are a big step in the right direction, but it seems clear that they are the foundations upon which more support code must be built. It's hard to say when it will reach the tipping point that inspires a flood of new analysis code, but it's easy to say that we are not there yet.
GCC is not where all the action is, though; there is also an interesting static analysis tool which has been built with the LLVM clang compiler. Documentation of this tool is scarce, but it appears to be capable of detecting some kinds of memory leaks, null pointer dereferences, the computation of unused values, and more. Some patches have been posted to add a plugin feature to this tool, but they do not seem to have proceeded very far yet.
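For a sense of the defect classes involved, here is a small contrived example (not taken from any of the scanned projects) containing the kinds of problems such an analyzer is meant to flag:

#include <stdlib.h>
#include <string.h>

char *make_copy(const char *s, int want_copy)
{
    char *buf = malloc(strlen(s) + 1);

    if (!want_copy)
        return NULL;        /* "buf" is leaked on this path */

    strcpy(buf, s);         /* "buf" may be NULL if malloc() failed */
    return buf;
}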
Back in May, John Smith ran the checker on several open source projects to see what kind of results would emerge. Those results have been posted on the net; they show the kind of potential problems that can be found and the nice HTML output that the checker can create. Some of the warnings are clearly spurious - always a problem with static analysis tools - but others seem worth looking into. In general, the clang static analyzer seems, like the other tools mentioned here, to be in a relatively early state of development. Things are moving fast, though; this tool is worth keeping an eye on.
Actually, that is true of the static analysis area in general. The lack of good analysis tools has been a bit of a mystery - given the number of developers we have, one would think that a few would scratch that particular itch. Your editor would not have minded living in a world with one less version control system but with better analysis tools. But the nature of free software development is that people work on problems that interest them. As the foundations of our static analysis tools get better, one can hope that more developers will find those foundations interesting to build on. The entire development community will benefit from the results.
Brief items
Quotes of the week
The Document Foundation ponders certification
The Document Foundation has posted a draft proposal for a certification program built around LibreOffice. "TDF provides LibreOffice Certification and promotes the ecosystem through his channels and with an aggressive marketing campaign targeted to corporate users, in order to increase LibreOffice adoption based on certified professional value added services for migration, integration, development, support and training. LibreOffice Certification is fee based, and fee might vary according to the value provided by the partner to The Document Foundation." It's worth noting that this discussion is just beginning; TDF is looking for comments on how such a program might best be designed.
MediaWiki 1.17.0 released
The MediaWiki 1.17.0 release is out. Changes in this release include a new installer, the "ResourceLoader" framework, better category sorting, and better Oracle database support. See the release notes for more information.
Mozilla Delivers New Version of Firefox
Mozilla has announced the release of Firefox 5. "The latest version of Firefox includes more than 1,000 improvements and performance enhancements that make it easier to discover and use all of the innovative features in Firefox. This release adds support for more modern Web technologies that make it easier for developers to build amazing Firefox Add-ons, Web applications and websites."
pari-2.5.0
Pari/GP is "a widely used computer algebra system designed for fast computations in number theory (factorizations, algebraic number theory, elliptic curves...), but also contains a large number of other useful functions to compute with mathematical entities such as matrices, polynomials, power series, algebraic numbers etc., and a lot of transcendental functions." Pari 2.5.0 is the first major release in five years. There are numerous new features which will doubtless make sense to math-intensive people; click below for the details.
Pyro 4.7 released
Pyro is "a library that enables you to build applications in which objects can talk to each other over the network, with minimal programming effort. You can just use normal Python method calls, with almost every possible parameter and return value type, and Pyro takes care of locating the right object on the right computer to execute the method. It is designed to be very easy to use, and to generally stay out of your way. But it also provides a set of powerful features that enables you to build distributed applications rapidly and effortlessly." The 4.7 release is out; new features include AutoProxy, asynchronous method calls, simplified server setup, and "part of" a new manual.
Harmattan Python is available
"Harmattan Python" is a Python environment for the "MeeGo 1.2 Harmattan Platform," as found in the newly-announced N9 phone. "This release is a culmination of several years of hard work and offers the most complete and full-featured Python programming language support on any mobile platform." More information can be found on the Harmattan Python page.
Tornado 2.0
Version 2.0 of the Tornado web server is out. "The framework is distinct from most mainstream web server frameworks (and certainly most Python frameworks) because it is non-blocking and reasonably fast. Because it is non-blocking and uses epoll or kqueue, it can handle thousands of simultaneous standing connections, which means it is ideal for real-time web services." This release includes templating changes, Python 3.2 support, and more; see the release notes for details.
Newsletters and articles
Development newsletters from the last week
- Caml Weekly News (June 21)
- LibreOffice development summary (June 21)
- Linux Audio Monthly Roundup (June)
- PostgreSQL Weekly News (June 19)
- Tahoe-LAFS Weekly News (June 17)
An Early Expedition with Media Explorer on Linux (Linux.com)
Over at Linux.com, Nathan Willis explores the Media Explorer, a Linux-based media center application. "The practical upshot is that Media Explorer is very lean, and runs very fast. The overview includes five tabs: one each for audio, video, and still images, plus a search tab and custom playlist tab where you can build up a queue. The interface is smoothly animated in Clutter, and is accessible with the keyboard, mouse, or an IR remote. There is even a handy set-up tool that walks you through the process of assigning keypresses on your remote to actions inside the app."
Phillips: Qi Hardware Releases Free Wireless Hardware
On his blog, Jon Phillips writes about free/libre hardware that implements the 802.15.4 wireless personal area network protocol. The devices come from Qi Hardware; one fits into the Micro-SD slot of a Ben Nanonote, the other fits into a standard USB slot. This is all part of the Ben WPAN project, which describes itself as follows: "Ben WPAN is a project to create an innovative patent-free wireless personal area network (WPAN) that is copyleft hardware. The primary protocol is 6LoWPAN, pronounced "SLoWPAN". The project lead is Werner Almesberger and it involves using the UBB, new testing software, and the Ben Nanonote to produce a next generation wireless personal area network." (Thanks to Paul Wise.)
Page editor: Jonathan Corbet
Announcements
Brief items
FSFE on AVM v. Cybits
Here's a press release from the Free Software Foundation Europe on an important GPL case going to trial in Berlin. "AVM claimed that when their customers install Cybits' filtering software on AVM routers it changes the routers' firmware and consequently infringes on AVM's copyright. In the opinion of AVM, even changing the Linux kernel components of the firmware is not allowed. The Court of Appeals of Berlin rejected this argument in its decision on the request for a preliminary injunction in September 2010, after Mr. Welte intervened in the case. Now, the District Court of Berlin will have to decide on the issue again, this time in the main proceedings." (See also Harald Welte's comments on the case.)
Nokia's N9 handset launched
Nokia has announced the availability of the "N9", its new MeeGo-based handset. "The Nokia N9 is the ultimate Qt-powered mobile device. I find it a pleasure to watch and play with. Its polymer unibody chassis complemented by a strong and scratch-resistant curved glass makes it both solid and smooth in the hand. Multitasking is pushed forward with a combination of open tasks, events and apps. You navigate through these views with a simple gesture, a swipe of a finger." The company is making some handsets available through its community device program.
Formally Launching the MeeGo Community Device Program
The MeeGo Blog has an announcement on the launch of the MeeGo Community Device Program. "We're still waiting for the long anticipated MeeGo handset, but in the meantime, Intel is providing ExoPCs for tablet development. To get them into your hands, we now have an online application to get your project proposals in front of the community approval committee."
Karen Sandler has been named GNOME foundation executive director
The GNOME foundation has announced the appointment of Karen Sandler as the new executive director for the organization. Sandler comes to GNOME from the Software Freedom Law Center, where she served as general counsel. "Sandler's dedication to software freedom, her non-profits experience and her involvement in a wide range of free and open source software communities distinguish her as the logical choice for GNOME. 'I'm very excited that Karen is joining the GNOME Foundation as Executive Director!', says Stormy Peters, former Executive Director who has recently joined the GNOME Board as a new Director, 'Karen brings a wealth of experience in free software projects and nonprofits as well as a passion for free software. That experience will be invaluable as GNOME continues to expand its reach with GNOME 3.0 and GNOME technologies.'" Look for an interview with Sandler about her new role later today here at LWN.
Articles of interest
Oracle v. Google - Updating the Reexams (Groklaw)
Groklaw has an update on the reexamination of Oracle's patents that are being asserted in its case against Google. "In the reexamination of U.S. Patent 6192476 the USPTO has issued an office action in which it rejects 17 of the patent's 21 claims... While Oracle has asserted seven different patents in its claims against Google, if this reexamination is exemplary of what Oracle can expect in each of the other reexaminations, Oracle will have a hard time finding claims that it can successfully assert against Google, and there lies Oracles conundrum. Oracle either has to agree with the court's directive to limit the number of claims it will assert at trial, or it is likely the court will simply stay the trial until the reexaminations are complete."
Keeping the Desktop Dream Alive (LinuxInsider)
LinuxInsider talks with Linux Foundation Executive Director Jim Zemlin. "What you call fragmentation is that core kernel, which is a multibillion-dollar investment, and what people are doing is taking that and building products in the marketplace based on it, whether it's Google Search, Android, Samsung TV, Facebook, a music service or the New York Stock Exchange. You could characterize all these things as fragmentation, but I'd characterize that as an efficient market -- in other words, the market is solving the problems today."
Keeping the Desktop Dream Alive, Part 2 (TechNewsWorld)
TechNewsWorld has the second half of a LinuxInsider interview with Jim Zemlin. "Let's take a non-high-tech marketplace like power production -- let's use power companies. They're basically setting up smart grid technology to meter people's [electricity] consumption on a 15-minute incremental basis so they can manage power patterns and make sure the grid is allocating energy effectively. If you're polling 12 million customers' power usage every 15 minutes, you're polling millions of transactions that have to be centralized, stored and analyzed, then have the data pushed out. You have power meters, servers that store and analyze the data, high-performance computers to crunch the data. In all those categories, Linux is either the No. 1 operating system or the fastest-growing operating system."
New Books
HTML5 & CSS3 For The Real World--New from SitePoint
SitePoint has released "HTML5 & CSS3 For The Real World", by Estelle Weyl, Louis Lazaris, and Alexis Goldstein.
Calls for Presentations
Call for Participation: Workshops and BoFs at the Desktop Summit 2011
The Desktop Summit 2011 will be held August 6-12 in Berlin, Germany. From August 10-14 there will be workshops and Birds of a Feather sessions (BoFs), and the wiki is open for people to register their sessions until July 3. "The Desktop Summit will have an exciting program of talks. But the most important part of the conference are the Workshop & BoF days. This is the part of the conference where the participants get together to discuss and work on the future of the Free Desktop. It is where the latest technology is demonstrated in a one-to-few setting and where decisions are made. The organisation committee would like to schedule as many of these sessions beforehand as possible. We expect over 1000 visitors and scheduling helps to ensure minimal overlap with other sessions and allows us to provide a clear timetable for the visitors."
PyCon Finland 2011 Call For Proposals
PyCon Finland will take place October 17-18, 2011 in Turku, Finland. "The presentation slots will be 40 minutes + 10 minutes of discussion at the end. Shared sessions are also possible. The language for the presentations should be English to encourage international participation." The proposal deadline is August 1.
2nd Call For Papers, 18th Annual Tcl/Tk Conference 2011
Tcl'2011 will take place October 24-28 in Manassas, Virginia. Abstracts and proposals are due August 26. "The program committee is asking for papers and presentation proposals from anyone using or developing with Tcl/Tk (and extensions)."
Upcoming Events
LinuxCon North America schedule released
The schedule for LinuxCon North America has been released. The conference will be held August 17-19 in Vancouver, British Columbia. Highlights include a conversation between Linus Torvalds and Greg Kroah-Hartman, keynotes from Clay Shirky, Allison Randal, Marten Mickos, Jim Whitehurst, and others, as well as a "20th anniversary of Linux" gala event: "a Roaring 20's themed special event open to all attendees on the first night of LinuxCon. Dinner, drinks, casino games, awards, prizes and more. Costumes & tuxedos encouraged!" More information can be found on the main web page.
Events: June 30, 2011 to August 29, 2011
The following event listing is taken from the LWN.net Calendar.
| Date(s) | Event | Location |
|---|---|---|
| June 29 - July 2 | 12º Fórum Internacional Software Livre | Porto Alegre, Brazil |
| July 9 - July 14 | Libre Software Meeting / Rencontres mondiales du logiciel libre | Strasbourg, France |
| July 11 - July 16 | SciPy 2011 | Austin, TX, USA |
| July 11 - July 12 | PostgreSQL Clustering, High Availability and Replication | Cambridge, UK |
| July 11 - July 15 | Ubuntu Developer Week | online event |
| July 15 - July 17 | State of the Map Europe 2011 | Wien, Austria |
| July 17 - July 23 | DebCamp | Banja Luka, Bosnia |
| July 19 | Getting Started with C++ Unit Testing in Linux | |
| July 24 - July 30 | DebConf11 | Banja Luka, Bosnia |
| July 25 - July 29 | OSCON 2011 | Portland, OR, USA |
| July 30 - July 31 | PyOhio 2011 | Columbus, OH, USA |
| July 30 - August 6 | Linux Beer Hike (LinuxBierWanderung) | Lanersbach, Tux, Austria |
| August 4 - August 7 | Wikimania 2011 | Haifa, Israel |
| August 6 - August 12 | Desktop Summit | Berlin, Germany |
| August 10 - August 12 | USENIX Security 11: 20th USENIX Security Symposium | San Francisco, CA, USA |
| August 10 - August 14 | Chaos Communication Camp 2011 | Finowfurt, Germany |
| August 13 - August 14 | OggCamp 11 | Farnham, UK |
| August 15 - August 16 | KVM Forum 2011 | Vancouver, BC, Canada |
| August 15 - August 17 | YAPC::Europe 2011 Modern Perl | Riga, Latvia |
| August 17 - August 19 | LinuxCon North America 2011 | Vancouver, Canada |
| August 20 - August 21 | PyCon Australia | Sydney, Australia |
| August 20 - August 21 | Conference for Open Source Coders, Users and Promoters | Taipei, Taiwan |
| August 22 - August 26 | 8th Netfilter Workshop | Freiburg, Germany |
| August 23 | Government Open Source Conference | Washington, DC, USA |
| August 25 - August 28 | EuroSciPy | Paris, France |
| August 25 - August 28 | GNU Hackers Meeting | Paris, France |
| August 26 | Dynamic Language Conference 2011 | Edinburgh, United Kingdom |
| August 27 - August 28 | Kiwi PyCon 2011 | Wellington, New Zealand |
| August 27 | PyCon Japan 2011 | Tokyo, Japan |
| August 27 | SC2011 - Software Developers Haven | Ottawa, ON, Canada |
If your event does not appear here, please tell us about it.
Page editor: Rebecca Sobol
