LWN.net Weekly Edition for July 1, 2010
GNOME finalizes its speaker guidelines
Various free software communities, from distributions to individual software projects, have codes of conduct that are meant to govern the behavior of their members, at least in the projects' common areas. The intent is to reduce friction—flamewars and other unproductive communication—between project participants and to present a more welcoming face to newcomers. The GNOME project has a code of conduct that it has been discussing for some time; more recently it has also taken up guidelines for speakers at GNOME conferences.
Based on the discussion, it seems clear that Richard Stallman's keynote at the Gran Canaria Desktop Summit (GCDS) was one of the main reasons that some felt speaker guidelines were needed. His talk ended with a segment about "Emacs virgins" (or the "Saint IGNUcius comedy routine" as Stallman calls it) that was considered to be sexist and thus offensive by some in the community. Stallman has been reprising that particular bit for many years, and he, at least, doesn't see it as sexist. But it certainly offended some, and set off a firestorm of controversy last July.
Sometime after that, concerned folks contacted the GNOME Foundation board to see what could be done to try to prevent that kind of thing from happening again. As part of that effort, Matthew Garrett drafted guidelines, Murray Cumming "improved the wording", and Vincent Untz posted a link to the guidelines for discussion back in March.
That earlier draft was a bit different from the current, adopted version, but quite similar in spirit. In the discussion back in March, there was some wordsmithing about the "Dealing With Problems" section, but the draft was mostly well received. Stallman, though, had a concern that the guidelines were overbroad:
Stallman noted that part of his GCDS talk was about the riskiness of using C#, but Sandy Armstrong was quick to point out that the guidelines were aimed at a different part of his presentation:
I don't think they are meant to prevent you from making critical statements on relevant subject matter based on technical or legal arguments.
The discussion tailed off at that point, but was rekindled when Untz announced the final guidelines. Complaints about the guidelines seem to break down into two basic categories: that the rules are too vague, much as Stallman argued in March, or that they constitute "censorship" of speakers. While, strictly speaking, it may not be "censorship" (depending on which of the many definitions is used), it is certainly meant to steer speakers away from certain topics—those that might offend the audience or community.
Patryk Zawadzki doesn't like the vagueness, but thinks that there are other ways to address the problem:
The guidelines are somewhat vague, but that is done on purpose:
There are six separate guidelines listed, some of which are, or should be, pretty obvious, for example: "Avoid things likely to offend some people. Your presentation content should be suitable for viewing by a wide range of audiences so avoid slides containing sexual imagery, violence or gratuitous swearing." Others, though, try to cut to the heart of the issue and, instead of proscribing conduct or topics, provide overarching advice:
Stallman's earlier concern was addressed in the revision process, and he is firmly behind the guidelines: "If the community wants these guidelines, I support them." While the particulars of the GCDS keynote kept popping up in the discussion, it's clear that only a few really want to continue that particular debate. As Brian Cameron put it in a message worth reading in its entirety:
There is something of a sense of resignation to the fact that a policy like this is needed. But, as Cameron noted, the board has been criticized for how quickly and effectively it has responded to offensive presentations in the past. The guidelines provide a solid footing for any action the board may wish to take in the future; one possibility is explicitly mentioned: "Furthermore, if necessary, the GNOME Foundation might publicly distance itself from your opinions." Michael Meeks summed up the situation well:
The final "Dealing With Problems" section is the part of the document that has drawn most of the comments this time around. Joanmarie Diggs is concerned that parts of that section are "neither 'positive' nor 'welcoming' to would-be speakers". In particular, the "disclaimer" paragraph, which is meant to head off anticipated complaints, may be reworked. There seems to be a consensus that changes are needed in that section, though it's not clear whether it should be expanded to fill in the gaps, reworked in more welcoming terms, or eliminated entirely. As Cameron said, the guidelines will likely be a "living document", and if there are problems with it, changes will be made.
While there is no specific enforcement language in the document ("Enforcement is subject to the judgment of the session overseer"), Garrett, at least, sees that as a possible hole. His original draft "suggested that event runners be able to stop presentations if they felt they were gratuitously in breach of the guidelines", but that was contentious and was removed. He is concerned that "guidelines mean little without enforcement", but does see the current language as a reasonable compromise. As he notes, there are those who will find the current watered-down version to be too intrusive, so something of a balance has been struck between the needs of speakers and their audiences.
There have been plenty of examples of presentations made at free software conferences that offended some subset of their audience. As it is unlikely that the speakers set out to do that, guidelines like these will be helpful to speakers by making them at least stop and think about their words and imagery. That is likely to lead to better presentations and happier audiences, which can only be a good thing.
Bilski: business as usual
For many months now, anybody who pays attention to the US patent system has been anxiously awaiting the decision in the Bilski case. This case started as a lawsuit against the US patent office over its rejection of a business method patent. As this case worked its way toward the US Supreme Court, it came to be seen by many as a vehicle by which, just maybe, patents on business methods and software could be struck down. Much energy - and many amicus briefs - were directed toward that goal. As the last possible date for a ruling approached, the Free Software Foundation observed: "For Supreme Court watchers, following Bilski has been like following the World Cup. Productivity has fallen and ulcers have grown." Alas, it seems that the World Cup analogy extends to bad calls as well.
The ruling is out; Groklaw has it. With the concurring dissents, it runs to 71 pages. Reading the whole thing can lead to a much better understanding of the history of patent law in the US, but, for those concerned about possible changes to the patent system, the conclusion is far more succinct:
In other words, the court chose to rule on the value of one specific patent application. In the process, it possibly loosened the criteria slightly by saying that the "machine or transformation" test is not the sole guide to patentability. But the court went out of its way to avoid deciding - either way - whether business methods as a whole could be patented. The only real mention of software patents was a passing note that relying too heavily on "machine or transformation" could "create uncertainty as to the patentability of software, advanced diagnostic medicine techniques, and inventions based on linear programming, data compression, and the manipulation of digital signals." But, even there, the court went out of its way to avoid having anything read into its words:
This refusal to face the issue can only come as a disappointment to anybody who was hoping that the court would make substantial changes to the current application of patent law in the US. But it can't have come as any real surprise to people who are familiar with the current court. The current chief justice - John Roberts - has been very clear from the outset that he is not interested in the writing of expansive rulings. The court was asked to decide on one specific patent, so that's what it did. No nonsense about, say, laying down a clear interpretation of the law that would eliminate the need for a long series of court cases stretching into the future.
There are many who would argue that this is exactly how it should be, that it's up to the legislature, not the courts, to write the laws. Others would argue that the American precedent-based legal system guarantees that the courts will have a hand in the writing of law that people actually live by in any case, and that the court should have taken the opportunity to reduce the amount of uncertainty in this area. Certainly, it would have been nice if the court had thought a little more broadly; now it seems that the only alternatives are more court cases or an attempt to get the Congress to do something constructive, or, likely, both. One could argue that the decision to do nothing was a bad call indeed.
That said, while it would be nice if the courts would just fix the situation, it may well be the case that rewriting the law to explicitly restrict the range of patentable inventions would be the best solution. Getting the US Congress to do something about the patent system is a daunting prospect, but it's not beyond the realm of possibility. There is an increasing awareness that the patent system is costing businesses a lot of money and is impeding the competitiveness of the country as a whole. While there are powerful interests in favor of the status quo, there are others pushing for reform. It might just happen, someday.
Meanwhile, we're stuck with the same situation we had before this decision was handed down. Software patents remain a threat in the US and they are looking increasingly threatening elsewhere. We will have to continue fighting them in all of the same ways, including what is arguably the most effective strategy of all: make free software so useful and so ubiquitous that the industry has no choice but to continue to try to protect Linux and, hopefully, find a way to address the patent threat for real.
Two GCC stories
The GNU Compiler Collection (GCC) project occupies a unique niche in the free software community. As Richard Stallman is fond of reminding us, much of what we run on our systems comes from the GNU project; much of that code, in turn, is owned by the Free Software Foundation. But most of the GNU code is relatively static; your editor wisely allowed himself to be talked out of the notion of adding an LWN weekly page dedicated to the ongoing development of GNU cat. GCC, though, is FSF-owned, is crucial infrastructure, and is under heavy ongoing development. As a result, it is subject to pressures seen in only a few other places. This article will look at a couple of recent episodes, related to licensing and online identity, from the GCC community.
Documentation licensing
Back in May, GCC developer Mark Mitchell started a discussion on the topic of documentation. As the GCC folks look at documenting new infrastructure - plugin hooks, for example - they would like to be able to incorporate material from the GCC source directly into the manuals. It seems like an obvious idea; many projects use tools like Doxygen to just that end. In the GCC world, though, there is a problem: the GCC code carries the GPLv3 license, while the documents are released under the GNU Free Documentation License (GFDL). The GFDL is unpopular in many quarters, but the only thing that matters with regard to this discussion is that the GFDL and the GPL are not compatible with each other. So incorporating GPLv3-licensed code into a GFDL-licensed document and distributing the result would be a violation of the GPL.
After some further discussion, Mark was able to get a concession from Richard Stallman on this topic:
This is a severely limited permission in a number of ways. To begin with, it applies only to comments in header files; the use of more advanced tools to generate documentation from the source itself would still be a problem. But there is another issue: this permission only applies to FSF-owned code. As Mark put it:
I find that consequence undesirable. In particular, what I did is OK in that scenario, but suddenly, now, you, are a possibly unwitting violator.
Dave Korn described this situation as being "laden with unforeseen potential booby-traps" and suggested that it might be better to just give up on generating documentation from the code. The conversation faded away shortly thereafter; it may well be that this idea is truly dead.
One might poke fun at the FSF for turning a laudable goal (better documentation) into a complicated and potentially hazardous venture. But the real problem is that we as a community lack a copyleft license that works well for both code and text. About the only thing that even comes close to working is putting the documentation under the GPL as well, but the GPL is a poor fit for text. Nonetheless, it may be the best we have in cases where GPL-licensed code is to be incorporated into documentation.
Anonymous contributions
Ian Lance Taylor recently described a problem which will be familiar to many developers in growing projects:
He also noted that a contributor who goes by the name NightStrike had offered to build a system which would track patches and help ensure that they are answered; think of it as a sort of virtual Andrew Morton. This system was never implemented, though, and it doesn't appear that it will be. The reason? The GCC Powers That Be were unwilling to give NightStrike access to the project's infrastructure without knowing something about the person behind the name. As described by Ian, the project's reasoning would seem to make some sense:
NightStrike, who still refuses to provide that information, was unimpressed:
Awesome or not, this episode highlights a real problem that we have in our community. We place a great deal of trust in the people whose code we use and we place an equal amount of trust in the people who work with the infrastructure around that code. The potential economic benefits of abusing that trust could be huge; it's surprising that we have seen so few cases of that happening so far. So it makes sense that a project would want to know who it is taking code from and who it is letting onto its systems. To do anything else looks negligent.
But what do we really know about these people? In many projects, all that is really required is to provide a name which looks moderately plausible. Debian goes a little further by asking prospective maintainers to submit a GPG key which has been signed by at least one established developer. But, in general, it is quite hard to establish that somebody out there on the net is who he or she claims to be. Much of what goes on now - turning away obvious pseudonyms but accepting names that look vaguely real, for example - could well be described as a sort of security theater. The fact that Ian thanked NightStrike for not making up a name says it all: the project is turning away contributors who are honest about their anonymity, but it can do little about those who lie.
Fixes for this problem will not be easy to come by. Attempts to impose identity structures on the net - as the US is currently trying to do - seem likely to create more problems than they solve, even if they can be made to work on a global scale. What we really need is processes which are robust in the presence of uncertain identity. Peer review of code is clearly one such process, as is better peer review of the development and distribution chain in general. Distributed version control systems can make repository tampering nearly impossible. And so on. But no solution is perfect, and these concerns will remain with us for some time. So we will have to continue to rely on feeling that, somehow, we know the people we are trusting our systems to.
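The tamper-evidence of distributed version control comes from hash chaining: each commit identifier covers its parent's identifier, so rewriting any old commit changes every later one. A toy sketch of that property (this is deliberately not git's actual object format, just an illustration of the chaining idea):

```python
import hashlib

def commit_id(parent_id, message):
    """A toy commit identifier: hash of the parent's ID plus the change."""
    return hashlib.sha1((parent_id + message).encode()).hexdigest()

def build_history(messages):
    """Chain a list of commit messages into a list of commit IDs."""
    ids = []
    parent = ""
    for msg in messages:
        parent = commit_id(parent, msg)
        ids.append(parent)
    return ids

original = build_history(["add feature", "fix bug", "release 1.0"])
tampered = build_history(["add backdoor", "fix bug", "release 1.0"])

# Altering the first commit changes every descendant ID, so the tip of
# the tampered history no longer matches the one developers have pulled.
assert original[-1] != tampered[-1]
```

Anyone holding the original tip ID can detect the substitution, no matter who controls the hosting server; that is what makes the process robust in the presence of uncertain identity.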
Security
HTTPS Everywhere brings HTTPS almost everywhere
Widespread end-to-end encryption for online communication often seems like a pipe dream: few email users bother with PGP, still fewer VoIP users ever use SRTP or ZRTP. But the one area where the general public has caught on to the need for secure transport channels is in web traffic, thanks to electronic commerce. The Electronic Frontier Foundation (EFF) recently released a Firefox extension called HTTPS Everywhere that leverages the widespread availability of HTTPS connections among popular Internet services. HTTPS Everywhere automatically rewrites URLs for a variety of providers, from software-as-a-service offerings to news outlets. The add-on is not configured to rewrite every URL by default, but it is a plug-and-play security enhancement.
In the initial HTTPS Everywhere announcement on June 17, Peter Eckersley said that the inspiration for the project was Google's launch of an HTTPS-encrypted search service in May. He later told ZDNet that the initial goal of the add-on was to create a tool to encrypt all Google searches (the Google HTTPS service initially worked only through the www.google.com domain and not the localized, international Google sites), but it was quickly extended to other sites once the team — which also includes volunteers from the Tor (aka The Onion Router) project — found how simple it was.
HTTPS Everywhere is built on top of code that originated in the NoScript project, modified both to be easier to use and with additional functionality. Thus far, the extension is only available on the EFF's project page, not through the official Mozilla Add-ons site. The latest release is 0.1.2, though unfortunately no Firefox version-compatibility information is provided.
When installed, the extension provides a very simple preferences interface: a single pop-up window with checkboxes for each supported site or service. The result is instantaneous rewriting of URL requests to keep traffic on TLS- or SSL-encrypted HTTPS connections — including the initial request and subsequent internal links. The effect of checking or unchecking a site takes hold with the next URL request; note, however, that previously-rewritten URLs already in the location bar or history are not "reverted" merely by changing the extension's preferences.
What it does
HTTPS Everywhere works by rewriting URLs based on matching requests against a series of regular-expression-based rules. Each rule is specific to a service, so that users can deactivate particular rules if they prove problematic. That is a valid concern, as some sites provide HTTPS connections, but do not offer the same services as they do over HTTP. Google Search, for example, supports web, video, news, books, blog, microblog, and forum content over HTTPS, but not image or shopping content. Many users have reported that using Facebook's HTTPS service disables the built-in chat client.
The current list of supported sites includes Google's search and services (such as Gmail and Google Voice) as separately-selectable options, as well as Facebook, Identi.ca, Twitter, the DuckDuckGo, Scroogle, and Ixquick search engines, Wikipedia, the New York Times, the Washington Post, the EFF, Mozilla, and Tor sites, San Francisco hacker space Noisebridge, and the Gentoo project's Bugzilla. Users can write their own URL matching and rewriting rules by following a tutorial at the HTTPS Everywhere site. Authors are encouraged to send in their creations to the project for possible inclusion in subsequent releases.
Rule sets use a simple XML format; each ruleset element can contain one or more rule elements with a "from" and "to" pattern to map the rewriting required. The patterns use JavaScript regular expressions, which is part of why HTTPS Everywhere can provide more redirects than NoScript's simple HTTP-to-HTTPS replacement.
An example from the site is Wikipedia, which runs an HTTPS server at secure.wikimedia.org, but not at the language-specific host names, such as sm.wikipedia.org or uk.wikipedia.org. HTTPS Everywhere's ruleset rewrites http://en.wikipedia.org/wiki/Example to https://secure.wikimedia.org/wikipedia/en/wiki/Example. HTTPS Everywhere also supports exclusion rules to work around HTTP-only subdomains in an otherwise HTTPS-supported domain, and it can gracefully downgrade to HTTP for sites that automatically redirect HTTPS requests to HTTP, without getting trapped in a loop.
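The rewriting behavior described above can be sketched in a few lines. The patterns below are illustrative, not the extension's actual rulesets: HTTPS Everywhere expresses its rules as XML "from"/"to" attributes and applies them with JavaScript regular expressions rather than Python.

```python
import re

# Illustrative rules modeled on the behavior described in the article;
# the real rulesets are XML <rule from="..." to="..."/> entries and may
# use different patterns.
RULES = [
    # Send language-specific Wikipedia hosts to secure.wikimedia.org.
    (r"^http://([a-z]+)\.wikipedia\.org/wiki/(.*)",
     r"https://secure.wikimedia.org/wikipedia/\1/wiki/\2"),
    # A simple HTTP-to-HTTPS upgrade for a single host.
    (r"^http://(www\.)?eff\.org/(.*)",
     r"https://www.eff.org/\2"),
]

def rewrite(url):
    """Apply the first matching rule; return the URL unchanged otherwise."""
    for pattern, template in RULES:
        rewritten, count = re.subn(pattern, template, url)
        if count:
            return rewritten
    return url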
Eckersley said that he hopes NoScript will be able to incorporate some of HTTPS Everywhere's enhancements back into its own extension, but for the foreseeable future he intends to keep offering HTTPS Everywhere as its own, easy-to-use alternative.
What it doesn't
HTTPS Everywhere simply rewrites the outgoing URL requested by the browser, so it is only of use with sites already running an HTTPS server. Tor, in contrast, provides an encrypted first-step channel into the anonymous Tor network for every site visited, though the last step link from Tor to HTTP-only web sites is, of course, not encrypted.
EFF points out that users using HTTPS Everywhere may still see the broken-lock icon in Firefox for some sites, because many services use HTTP servers for some of their own page content (such as images) and to include insecure third-party content.
It is also important to note that while HTTPS encrypts the connection to the server and the resource path portion of the requested URL, the server name portion of the request is still visible (not only through setting up the connection, of course, but also potentially via DNS lookup). In addition, although HTTPS Everywhere can encrypt cookie requests over HTTPS, it does not provide the stronger cookie-management policies of NoScript. Thus, while eavesdroppers and credential thieves will be set back by HTTPS Everywhere, it does not encompass every security and privacy feature.
Finally, the genuinely paranoid no doubt know that encryption does not mean anonymity. Your IP address is visible in every request, and user tracking can be performed in many esoteric ways without peeking at the contents of the sites you read. The latter danger is ingeniously displayed by EFF's own Panopticlick, which gathers potentially trackable information from request headers, browser plugins, fonts, and other system information.
Security everywhere
The HTTPS Everywhere page discusses several similar secure-browsing alternatives, in addition to the aforementioned NoScript and Tor. Sid Stamm's Force-TLS is a Firefox extension that implements Strict Transport Security (STS); STS itself does not encrypt the initial request, but only tells the user agent to use HTTPS for subsequent requests, making it marginally less secure. Stanford's ForceHTTPS also includes a custom database of URL rewriting schemes, but was only released as a prototype in 2008, supporting Gmail and a handful of banking web sites.
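With STS, in other words, the protection is announced by the server itself: after one successful HTTPS visit, a response header tells the browser to use HTTPS for all later requests to that host. A typical header, per the (then-draft) STS specification, with an illustrative lifetime value:

```
Strict-Transport-Security: max-age=31536000; includeSubDomains
```

Here max-age is the number of seconds the browser should remember the policy, and includeSubDomains extends it to the host's subdomains.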
The Chrome extension KB SSL Enforcer receives a little heat on the HTTPS Everywhere site, because it loads both HTTP and HTTPS requests for each page, thus potentially exposing the HTTP page to eavesdroppers. According to the developer, this is due to limitations in Chrome's APIs. Eckersley said that HTTPS Everywhere uses multiple Firefox APIs, including nsIObserver, nsIContentPolicy, and nsITraceableChannel, to try to capture every request path — even favicons and requests initiated by other add-ons — but still welcomes further networking testing by users.
The project reports that it has received dozens of user-contributed rulesets, including many for high-traffic sites, but that merging them all into a new default rule set for the next release will take some time. A 0.2.x "development" branch XPI installer incorporating some of these additions was uploaded to the site on June 29.
Privacy and security online is a non-stop arms race between exploit-crafters and those making tools to thwart them. In that context, HTTPS Everywhere is not a perfect solution, but for many people it is an excellent, easy-to-use way to secure a large chunk of their daily web traffic.
Brief items
Quotes of the week
Researcher 'Fingerprints' The Bad Guys Behind The Malware (dark reading)
Dark Reading looks at a presentation at the upcoming Black Hat conference about research on "fingerprinting" malware authors. Greg Hoglund, founder and CEO of HBGary, also plans to release a free fingerprinting tool at the conference. "A single clue alone might not mean much until you start combining multiple clues together, he says. His fingerprinting tool will help incident responders do exactly that: 'The fingerprint tool will tell them interesting clues as to the artifacts left behind in the [malware] development environment -- what version compiler was used, the original project name even if they changed the name of the file, which is common,' he says. 'A lot of attackers rename their attack to something that sounds innocuous, but sometimes you can extract the original project name, and find a path on the hard drive and libraries. When you combine all of this together, it creates a fingerprint [of the attacker].'"
New vulnerabilities
kvirc: multiple vulnerabilities
Package(s): kvirc
CVE #(s): CVE-2010-2451 CVE-2010-2452
Created: June 28, 2010
Updated: August 13, 2010
Description: From the Debian advisory: Two security issues have been discovered in the DCC protocol support code of kvirc, a KDE-based next generation IRC client, which allow the overwriting of local files through directory traversal and the execution of arbitrary code through a format string attack.
lftp: mysterious vulnerability
Package(s): lftp
CVE #(s): CVE-2010-2251
Created: June 30, 2010
Updated: October 27, 2010
Description: The lftp file transfer program has been updated due to a "multiple HTTP client download filename vulnerability."
libpng: buffer overflow and memory leak
Package(s): libpng
CVE #(s): CVE-2010-1205 CVE-2010-2249
Created: June 30, 2010
Updated: January 19, 2011
Description: The libpng library suffers from a buffer overflow and a memory leak exploitable via a malicious image file.
mozilla: multiple vulnerabilities
Package(s): mozilla
CVE #(s): CVE-2010-1201 CVE-2010-0183 CVE-2008-5913
Created: June 24, 2010
Updated: January 21, 2011
Description: From CVE-2010-1201: Unspecified vulnerability in the browser engine in Mozilla Firefox 3.5.x before 3.5.10, Thunderbird before 3.0.5, and SeaMonkey before 2.0.5 allows remote attackers to cause a denial of service (memory corruption and application crash) or possibly execute arbitrary code via unknown vectors.
From the Red Hat Bugzilla entry for CVE-2010-0183: Security researcher wushi of team509 reported that the frame construction process for certain types of menus could result in a menu containing a pointer to a previously freed menu item. During the cycle collection process this freed item could be accessed, resulting in the execution of a section of code potentially controlled by an attacker.
From the Red Hat Bugzilla entry for CVE-2008-5913: An unspecified function in the JavaScript implementation in Mozilla Firefox creates and exposes a "temporary footprint" when there is a current login to a web site, which makes it easier for remote attackers to trick a user into acting upon a spoofed pop-up message, aka an "in-session phishing attack."
perl-libwww: unexpected download filename
Package(s): perl-libwww
CVE #(s): CVE-2010-2253
Created: June 24, 2010
Updated: November 3, 2010
Description: From the oCERT advisory: Unsafe behaviours have been found in lftp and lwp-download handling the Content-Disposition header in conjunction with the 'suggested filename' functionality. Additionally, unsafe behaviours have been found in wget and lwp-download in the case of HTTP 3xx redirections during file downloading. The two applications automatically use the URL's filename portion specified in the Location header. Implicitly trusting the suggested filenames results in a saved file that differs from the expected one according to the URL specified by the user. This can be used by an attacker-controlled server to silently write hidden and/or initialization files under the user's current directory (e.g. .login, .bashrc).
python-paste: cross-site scripting/arbitrary javascript execution
Package(s): python-paste
CVE #(s): (none)
Created: June 28, 2010
Updated: June 30, 2010
Description: From the Fedora advisory: The only real change is to paste.httpexceptions, which was using insecure quoting of some parameters and allowed an XSS hole, most specifically with its 404 messages. The most notably WSGI application using this is paste.urlparse.StaticURLParser and PkgResourcesParser. By directing someone to an appropriately formed URL an attacker can execute arbitrary Javascript on the victim's client. paste.urlmap.URLMap is also affected, but only if you have no application attached to /. Other applications using paste.httpexceptions may be effected (especially HTTPNotFound). WebOb/webob.exc.HTTPNotFound is not affected.
ruby WEBrick: cross-site scripting
Package(s): ruby
CVE #(s): CVE-2010-0541
Created: June 30, 2010
Updated: August 15, 2011
Description: The ruby WEBrick web server suffers from a cross-site scripting vulnerability exploitable via error pages.
wireshark: multiple vulnerabilities
Package(s): wireshark
CVE #(s): CVE-2010-2283 CVE-2010-2284 CVE-2010-2285 CVE-2010-2286 CVE-2010-2287
Created: June 30, 2010
Updated: June 15, 2011
Description: Wireshark has a new set of dissector vulnerabilities with "unknown impact and remote attack vectors."
Page editor: Jake Edge
Kernel development
Brief items
Kernel release status
The current development kernel remains 2.6.35-rc3. Linus has returned from his vacation, though, and has resumed merging changes into the mainline.
Stable updates have not been seen since 2.6.32.15 (June 1) and 2.6.33.5 (May 26).
Quotes of the week
xstat() and fxstat()
POSIX has long defined variants on the stat() system call, which returns information about files in the filesystem. There are a couple of limitations associated with stat() which have been seen as a problem for a while: it can only return the information defined in the standard, and it returns all of that information, regardless of whether the caller needs it. David Howells has attempted to address both problems with a new set of system calls:
ssize_t xstat(int dfd, const char *filename, unsigned atflag,
struct xstat *buffer, size_t buflen);
ssize_t fxstat(int fd, struct xstat *buffer, size_t buflen);
The struct xstat structure resembles struct stat, but with some differences. It includes fields for file metadata like the creation time, the inode "generation number," and the "data version number" for filesystems which support this information, and it has a version number to provide for changes in the system call API in the future.
What also has been added, though, is a query_flags field where the caller specifies which fields are actually desired; if all that is needed is the file size or the link count, for example, the caller can say so. The kernel may return other information as well, but it does not have to go out of its way to ensure that it's accurate. There can be a real performance benefit to this behavior, especially for network-mounted filesystems where getting an updated value may require a conversation with the server. There is also a provision for adding "extra results" for types of metadata which are not currently envisioned.
The addition of this sort of stat() variant has been discussed for years, so something is likely to be merged. Chances are good, though, that the API will change somewhat before the patch is finalized. There were objections to the use of a version number in the xstat structure; the overhead of supporting another system call, should one become necessary, will be less than that of dealing with multiple versions. There were also complaints about the use of that structure as both an input and an output parameter, so the input portion (the query flags) may just become a direct system call parameter instead.
Update: there is already a new version of the patch available with some changes to the system call API.
Kernel development news
The return of EVM
The integrity measurement architecture (IMA) has been a part of Linux for roughly a year now—it was merged for 2.6.30—and it can be used to attest to the integrity of a running Linux system. But IMA can be subverted by "offline" attacks, where file data or metadata is changed out from under IMA. Mimi Zohar has proposed the extended verification module (EVM) patch set as a means to protect against these offline attacks.
In its default configuration, IMA calculates hash values for executables, files which are mmap()ed for execution, and files open for reading by root. That list of hashes is consulted each time those files are accessed anew, so that unexpected changes can be detected. In addition, IMA can be used with the trusted platform module (TPM) hardware, which is present in many systems, to sign a collection of these hash values in such a way that a remote system can verify that only "trusted" code is running (remote attestation).
But an attacker could modify the contents of the disk by accessing it under another kernel or operating system. That could potentially be detected by the remote attestation, but cannot be detected by the system itself. EVM sets out to change that.
One of the additions that comes with the EVM patch set is the integrity appraisal extension, which maintains the file's integrity measurement (hash value) as an extended attribute (xattr) of a file. The security.ima xattr is used to store the hash, which gets compared to the calculated value each time the file is opened.
EVM itself just calculates a hash over the extended attributes in the security namespace (e.g. security.ima, security.selinux, and security.SMACK64), uses the TPM to sign it, and stores it as the security.evm attribute on the file. Currently, the key to be used with the TPM signature gets loaded onto the root keyring by readevmkey, which just prompts for a password at the console. Because an attacker doesn't have the key, an offline attack cannot correctly modify the EVM xattr when it changes file data. Securing the key is important, so future work will entail using TPM sealed keys and encrypted symmetric keys so that the plaintext EVM key will never be visible to user space.
With all of that in place, a system administrator can be sure that the code running on the system is the same as that which was measured. Presumably, the initial measurement is done from a known good state. After that, any offline attack would need to either modify a file's contents, which would cause the IMA comparison to fail, or modify its security xattrs, which would cause the EVM comparison to fail.
These patches have been bouncing around in various forms for five years or more; we first looked at EVM in 2005. The EVM patch describes some of the changes that EVM has undergone along the way: "EVM has gone through a number of iterations, initially as an LSM module, subsequently as a LIM [Linux integrity module] integrity provider, and now, when co-located with a security_ hook, embedded directly in the security_ hook, similar to IMA."
That evolution reflects both changes suggested in the review process and a realization that, since Linux security modules (LSMs) don't stack, it would be impossible to have both EVM and SELinux, say, in one kernel. That led to adding IMA, and now EVM, as calls made from the appropriate security hooks or VFS code.
For EVM, the hooks affected are security_inode_setxattr(), security_inode_post_setxattr(), and security_inode_removexattr(), each of which embeds a call to the appropriate evm_* function. The evm_inode_setxattr() function protects the security.evm xattr from modification unless the CAP_MAC_ADMIN capability is held. The other two calls update the EVM hash associated with a file when xattrs are changed.
The patches aren't too intrusive outside of the security subsystem, though they do touch some other areas. Two new generic VFS calls (vfs_getxattr_alloc() and vfs_xattr_cmp()) were added to simplify xattr handling. Because various additional file attributes (beyond just the security xattrs: inode number, uid, mode, and so on) are used in the EVM hash, changes to those attributes need to cause a recalculation, which necessitated changes to fs/attr.c. And so on.
There are few comments on this iteration of the EVM patches. The idea has been through several rounds of review over the years, and the patches have picked up an ACK from Serge E. Hallyn. EVM closes the offline-attack hole in the protection that IMA provides and would thus seem to make a good addition to the mainline kernel. For those who want to try it out now, there are instructions available on the Linux integrity subsystem web page. Unless major complaints appear, one would think that EVM might well be a candidate for 2.6.36.
Slab allocator of the week: SLUB+Queuing
The SLUB allocator first made its appearance in April 2007, and went into the mainline shortly thereafter. This allocator was intended to provide better performance while being much more memory-efficient than the existing slab allocator. One of the key mechanisms for improving memory use was to get rid of the extensive object queues maintained by slab; with enough processors, those queues can grow to the point that they occupy a significant percentage of total memory even when there is nothing in them. SLUB works well in many workloads, but it has been plagued by regressions on certain benchmarks. So SLUB has never achieved its goal of displacing slab altogether, and developers have talked occasionally about getting rid of it.
But SLUB does better than slab on other benchmarks, and its code is widely held to be more readable than slab's - though that is widely held to be faint praise. So, over the years, attempts have been made to improve the SLUB allocator's performance. The latest such attempt is SLUB+Queuing which, according to its developer Christoph Lameter, beats slab on the all-important "hackbench" benchmark.
There are a couple of significant changes in the SLUB+Q patch set which are intended to improve the performance of SLUB. At the top of the list is the restoration of queues to the allocator. SLUB+Q does not use the elaborate queues found in slab, though; there is, instead, a single per-CPU queue containing pointers to free objects belonging to the cache. Allocation operations are now simple, at least when the queue is not empty: the last object in the queue is handed out, and the length of the queue is decreased by one. Freeing into a non-empty queue is similar. So the fast path, in both cases, should be fast indeed.
If a given CPU's queue goes empty, the SLUB+Q allocator must fall back to allocating objects out of pages, perhaps allocating more pages in the process. That, of course, is quite a bit slower. In an attempt to minimize the cost of this slow path, SLUB+Q will go ahead and pre-fill the queue, up to the "batch size" (half of the queue's total length, by default) with free objects. So, in a situation where many more objects are being allocated than freed, the fast allocation path will continue to be used most of the time.
If the queue overflows, instead, the allocator must push objects back into the pages they came from. Once again, the behavior chosen is to prune the queue back to a half-full state; the allocator will not push back all objects in the queue unless the kernel has indicated that it is under serious memory pressure. The default size of the queue is dependent on the object size, but it (along with the batch size) can be changed via a sysfs parameter.
The other significant change has to do with how free objects are handled when they are not stored in one of the per-CPU queues. In current mainline kernels, SLUB maintains a list of pages which contain some free objects. Note that it does not keep pages which are fully allocated (those can be simply forgotten about until at least one object contained therein is freed); it also does not keep pages which are fully free (those are handed back to the page allocator). The partial pages contain one or more free objects which are organized into a linked list, as is vaguely shown in the diagram to the right. There is a certain aesthetic value to doing things this way; it uses the free memory itself to keep track of free objects, minimizing the amount of overhead needed for object management.
Unfortunately, there is also a cost to storing list pointers in the freed objects. Chances are good that, by the time the kernel gets around to freeing an object, it will not have been used for a bit; it may well be cache-cold on the freeing CPU. Objects which are on the free list are even more likely to be cache-cold. Putting list pointers into that object will bring it into the CPU cache, incurring a cache miss and, possibly, displacing something which was more useful. The result is a measurable performance hit.
Thus, over time, it has become clear that memory management is more efficient if it can avoid touching the objects which are being managed. The SLUB+Q patches achieve this goal by using a bitmap to track which objects in a given page are free. If the number of objects which can fit into a page is small enough, this bitmap can be stored in the page structure in the system memory map; otherwise it is placed at the end of the page itself. Now the management of free objects just requires tweaking bits in this (small) bitmap; the objects themselves are not changed by the allocator.
The hackbench benchmark works by creating groups of processes, then quickly passing messages between them using sockets. SLUB has tended to perform worse on this benchmark than slab, sometimes significantly so. With the new patches, Christoph has posted benchmark results showing performance which is significantly better than what slab achieves. If these results hold, SLUB+Q will have overcome one of the primary problems seen by SLUB.
It should be noted, though, that performance on a single benchmark is not especially indicative of the performance of a memory allocator in general; SLUB already beat slab on a number of other tests. Memory management performance is a subtle and tricky area to work in. So a lot more testing will be required before it will be possible to say that SLUB+Q has truly addressed SLUB's difficulties without introducing regressions of its own. But the initial indications look good.
What makes a valid timespec?
The notion that one should be liberal in what one accepts while being conservative in what one sends is often expressed in the networking field, but it shows up in a number of other areas as well. Often, though, it can make more sense to be conservative on the accepting side; the condition of many web pages would have been far better had early browsers not been so forgiving of bad HTML. The tradeoff between being accepting and insisting on correctness recently came up in a discussion of a proposed API change for the futex() system call; "conservative" appears to be the winning approach in this case.
The futex() system call provides fast locking operations to user space. Callers will normally block until a lock becomes available, but they can also provide a struct timespec value specifying the maximum amount of time to wait:
struct timespec {
long tv_sec; /* seconds */
long tv_nsec; /* nanoseconds */
};
The interpretation of the timeout value is a little strange. For a FUTEX_WAIT command, the timeout is relative to the current time; for any other command, it is either ignored or treated as an absolute time. In particular, the operations like FUTEX_WAIT_BITSET and FUTEX_LOCK_PI use absolute timeouts.
Oleg Nesterov recently came to the kernel mailing list with an interesting glibc problem. If the tv_sec portion of the timeout is negative, the kernel will fail the futex() call with an EINVAL error. The POSIX thread code is not prepared for that to happen and shows its anger by going into an infinite loop - behavior which is not normally appreciated by user-space programmers. The glibc developers have concluded that this behavior is a kernel bug; to them, a negative absolute time value indicates a time before the epoch. Since the epoch is, for all practical purposes, the beginning of time, the response to a pre-epochal time should be ETIMEDOUT, which the library is prepared to deal with.
This position was not well received. Thomas Gleixner responded that times before the epoch cannot be programmed into the system clock and, thus, are not accepted by any Linux system call which deals with absolute times. Since some system calls cannot possibly accept such values, Thomas says, none should: "I'm strictly against having different definitions of sanity for different syscalls."
Linus, too, opposes accepting negative times, but for slightly different reasons:
In other words, a negative time value is an indication that something, somewhere has gone wrong. In such situations, rejecting the value may well be the best thing to do.
That leaves the glibc developers in the position of having to fix their code to deal with this (previously) unexpected return value. The good news, such as it is, is that they'll be working on that code anyway. It seems that the same function will also loop if it gets EFAULT back from futex(), and that is clearly a user-space bug.
Patches and updates
Core kernel code
Development tools
Device drivers
Filesystems and block I/O
Memory management
Networking
Security-related
Miscellaneous
Page editor: Jonathan Corbet
Distributions
News and Editorials
Kanotix 2010
Kanotix is a Debian-based Linux distribution aimed at new users or anyone wishing for an easy-to-use system. In 2005 Kanotix was looking good: it was stable, fast, and had some handy extras. But then it suddenly disappeared. Now it's back.
In the early days, Kanotix development moved right along, with fresh ISOs being released periodically. It received positive reviews and took its place alongside PCLinuxOS, SimplyMepis, and other smaller but well-respected projects. Then it began to experience internal strife. On November 30, 2006, it was reported that problems had arisen in the project and that co-founder Stefan Lippers-Hollmann had left, taking the Paypal donation button with him. He cited lack of innovation and releases, a slipping schedule, unequal workloads, and deteriorating communication as the main reasons. At that time, the last major release had been on December 31, 2005. Between then and January 2008 several development releases were announced, but no final release ever appeared, and nothing more was heard after that. Most considered the project dead.
Until now, that is; Kanotix Excalibur 2010 was released on June 8. As it turns out, the project was never really dead. Project founder and lead developer Jorg Schirottke said that test images and specialized builds had been made, but not released publicly, because they weren't installable. In fact, that was the main issue that caused the project to appear dormant to outsiders: the installer needed updates, and the developer responsible for it was preoccupied with other obligations. After several test builds and an installer fix, a public preview was released in December 2009, but not widely publicized. While the departure of Lippers-Hollmann didn't stop Kanotix, it certainly hobbled it - for a while anyway.
Schirottke states that it was his switch from Debian unstable to stable that caused the friction that resulted in the loss of his earlier team. Despite that, he said he would do it again because "it can be really stressful to support unstable" and with stable "I usually know what impact changes will have. As a result I was then basically the sole remaining developer."
The current Kanotix team is still quite small. Besides head honcho Schirottke, Andreas Loibl works on the installer, Debian Live, and things like Hal and PolicyKit, while Maximilian Gerhard works on announcements, the wiki, web site, and helps in the IRC and user forums. Kanotix also has a core set of loyal users that help with testing.
Trying it out
When booting the live DVD, the first thing one will notice is the lack of customization for KDE 4 and the same theme from 2005 for KDE 3. Again, Schirottke chalks this up to not having the manpower to build new themes right now. In fact, Schirottke welcomes help from anyone with a knack for making themes and wallpapers.
Like its contemporaries, Kanotix offers a nice line-up of software. Amarok and MPlayer are included for multimedia. Iceweasel, Skype, and Pidgin are there for Internet and communications. GIMP and OpenOffice.org 3.2 round out the necessities. Wine is also included. Kanotix uses Debian's APT for package management and is set up to pull from Kanotix repositories, but I'm not so sure there's a lot extra there. However, updates are current and seem to be maintained well.
Kanotix developers announced that they use a 2.6.32-21 kernel with the BFS scheduler, a low-latency desktop scheduler designed by Con Kolivas. Schirottke said that it provides "5% more speed. That may not sound [like] much, but that's usually the same diff from one cpu to the next faster one." Tests to determine any benefits of BFS consisted of application compile times, which were consistently 5% faster with a BFS kernel. "The latest Kanotix kernels also use the full ck patches [Con Kolivas's full patch set] with 1000 Hz setting - best for gamers and [it] was requested for this usage case." Schirottke states game play was noticeably faster and smoother using the 1000 Hz setting with those patches.
Schirottke says he doesn't develop for any one "specific kind of user", but that it is "mainly users without deep Linux knowledge" who use Kanotix. The team, he continued:
One thing notably missing is the control center. Back in Kanotix's heyday, there was a nice graphical tool for configuring some extras like network connections and 3D acceleration or installing Flash. Schirottke says it was merely a wrapper for his scripts, some of which are still present in /usr/local/bin. Others are stored at the site. Schirottke intends users to use the scripts directly now, and he notes, "I think it is a good idea when new Linux users begin to use the Konsole without fear as soon as possible. If somebody uses GUI for everything the learning effect is zero." He also suggests using the included Wicd for network connections. The web site has a FAQ and wiki to help users install or update the scripts, with hardware and software tips, administration tasks, networking procedures, backup suggestions, and lots more. Some of the information is a bit old, but much is still relevant, and updates to the site are being made as time permits.
Kanotix comes in KDE 3 and KDE 4 flavors for 32-bit and 64-bit architectures. Why offer both a KDE 3 and KDE 4 version? Schirottke explains:
When Kanotix first appeared, Ubuntu was a newcomer and Kanotix was a viable contender for the easier-to-use-Debian crown. They've lost a bit of ground, but it's possible they could rise to prominence again. Kanotix might be ideal for those looking for a personal touch because very few other projects have a lead developer willing to personally answer questions and help in real time on IRC. Kanotix is also ripe for new developers looking for a place to cut their teeth or those who feel under-appreciated in their current projects. Schirottke would welcome the help. And that's where they are. It's a nice start to a comeback, but it feels as though the distribution needs more hammering out and polishing up right now.
New Releases
Debian GNU/Linux 5.0 updated
The Debian project has announced the fifth update of its stable distribution Debian GNU/Linux 5.0 "lenny". "Those who frequently install updates from security.debian.org won't have to update many packages and most updates from security.debian.org are included in this update."
The MeeGo Handset Project Day 1 release
The MeeGo handset user experience project has announced the "day 1 release" of its code. "The MeeGo Handset Day 1 image is provided as a community developer preview and we are in a very early and active development state. While we don't recommend installing it on your primary phone just yet, we invite all developers who are interested to have an early look using a development device." More information can be found in the release notes; there's also a number of screenshots available.
Fedora's Bodhi 0.7.5 release
Bodhi is a web-based system used to test and publish updates for Fedora distributions. Click below for a look at the notable changes in this release.
Distribution News
Debian GNU/Linux
More collaboration among Debian-based distributions
The Debian Project has announced the opening of its Derivatives Front Desk, "a forum where contributors to Debian-based distributions can meet and discuss the best ways to push their changes back to Debian or otherwise ask for help on how to interact with Debian development."
bits from the DPL: meetings, derivatives, assets, sponsoring,...
Debian Project Leader Stefano Zacchiroli has an update on what he's been up to lately. He's been at LinuxTag, giving interviews, attending meetings, and much more.
Overview of Debian's financial flows
Debian auditor Luk Claes presents an overview of Debian's finances. The largest single expense was for DebConf housing, followed by the new archive server.
Python 2.6 migrated to Debian testing
Python 2.6 is now in Debian testing and therefore will be available in the upcoming "squeeze" release. "There's a price we have to pay for this fast transition: A few packages are uninstallable on one or another architecture because we couldn't get them to build fast enough. Also, first packages start to depend on other new packages (i.e. atlas), so they couldn't migrate. A few non-free packages had to be removed because they are not binNMUable by our current infrastructure; of course as always we're happy to help them to migrate again to testing once they're fixed."
Debian Policy 3.9.0.0 released
A new version of Debian Policy has been released. "The next time you upload your packages, please review them against the upgrading checklist in the debian-policy package and see if they require changes for the new version of Policy."
Fedora
Jared Smith is the new Fedora Project Leader
Outgoing Fedora leader Paul Frields has announced that his replacement will be Jared Smith. "Jared's been a long-time user of both Red Hat and Fedora, and an active participant in the Fedora community since 2007. He's primarily spent his time working with the infrastructure and documentation teams. He's helped with the development of Fedora Talk, our community VoIP telephony system.... Jared has also participated in community events such as various FUDCons and Fedora Activity Days. In addition, he has assisted with toolchain development, release materials, and steering duties as a member of the Fedora Docs team."
The Unofficial Fedora FAQ Returns
The Unofficial Fedora FAQ has been updated for Fedora versions 11, 12 and 13. "As always, fedorafaq.org contains lots of straightforward and useful information on how to play DVDs, listen to MP3s, install Flash, and lots of answers to other frequently asked questions."
Fedora 11 End of Life
Fedora 11 has reached its end of life for updates. "No further updates, including security updates, will be available for Fedora 11."
SUSE Linux and openSUSE
New Maintenance and Security Update Notifications
Marcus Meissner takes a look at some changes to openSUSE and SLE maintenance and security update notifications. "The method of delivery of these notifications differs between the Enterprise and openSUSE products".
Women of openSUSE
The Women of openSUSE project has been launched. "There are Debian Women, Ubuntu Women and Fedora Women. Where are the openSUSE Women? I'm sure, that there are women who use openSUSE and maybe program on some projects, too. I'm really interested in meeting you and talk about technical things. What about creating a community to bring all of us openSUSE interested women together?" Interested participants should subscribe to the project's new mailing list.
Ubuntu family
Ubuntu Developer Week: 12th-16th July 2010
The next Ubuntu Developer Week will be held July 12-16, 2010. "Ubuntu Developer Week is back again, which means five days of action-packed IRC sessions where you learn more about hacking on Ubuntu, developing Ubuntu and how to interact with other projects."
Newsletters and articles of interest
Distribution newsletters
- Debian Project News (June 28)
- DistroWatch Weekly, Issue 360 (June 28)
- Fedora Weekly News Issue 231 (June 23)
- openSUSE Weekly News Issue 129 (June 26)
- Ubuntu Weekly Newsletter, Issue 199 (June 26)
Page editor: Rebecca Sobol
Development
ownCloud 1.0: towards freedom in the cloud
Frank Karlitschek announced the ownCloud project during his Camp KDE presentation [PDF] in January in San Diego. After only five months, he has released ownCloud 1.0. It is meant to be a completely open source replacement, using the AGPL license, for proprietary cloud storage solutions like Dropbox or Ubuntu One.
OwnCloud, a KDE project (actually, a Social Desktop project), has a strong use case: because KDE runs on all kinds of devices (work and home PCs, notebooks and netbooks, mobile phones), it becomes more and more problematic for users to keep their data synchronized between these devices, or to create backups of their documents. A lot of users (not only in the KDE world) have therefore migrated to web-based applications like Google Docs, GMail and Flickr, or synchronization services like Dropbox and Ubuntu One.
Dropbox is a web-based file hosting service that enables users to store their files on the servers of the company Dropbox, synchronize them with their Windows, Mac or Linux computer or their mobile device, and share them with others. Canonical's Ubuntu One offers roughly the same functionality, but only has a Linux client for now. The unfortunate unifying theme in this trend is that all these web-based applications and services, including Canonical's, are proprietary. A few months ago, Frank described why it's important to create non-proprietary alternatives:
I think we have to make sure that our great KDE desktop applications support features like sharing data, accessing data from any device, automatic versioning, backuping and encryption.
A server in each house
The ownCloud project solves this by adding a relatively easy-to-install server package as a companion to the user's desktop, notebook, netbook, and mobile phone. Users can install ownCloud on their home PC that is publicly accessible via a static IP address or dynamic DNS, or on a VPS (virtual private server) or dedicated server that they rent. Files and other data can then be stored on their personal cloud storage and made accessible to all of their devices.
OwnCloud is a web application written in PHP. Users can choose between MySQL and SQLite for the database. The latter is especially nice, because it makes the setup easy for new users. Moreover, ownCloud uses a database abstraction layer, so support for additional databases is possible. The project is already working on PostgreSQL support. Your author installed ownCloud 1.0 on a fresh Turnkey LAMP 2009.10 server, which is based on Ubuntu 8.04 and offers a fully configured LAMP stack (Linux, Apache, MySQL, and PHP/Python/Perl).
After the installation, users can open the ownCloud directory in their favorite browser, which launches the "first run wizard" that helps with the initial configuration. For instance, this allows the user to choose between MySQL and SQLite, set up the administrative account, and it has some options for automatic backups, forcing SSL, and the location of the data directory.
When logged in on the web interface, the user can upload, download, or delete files, create directories, look at the log of actions, and change the settings. The possibilities are rather basic, though. For example, it's not yet possible to move files between directories on the server and the interface doesn't show any information about the files other than the file name. So all in all, ownCloud's web interface isn't all there yet.
Essentially, ownCloud is a simple web-based file server: it stores the user's files in some personal cloud storage. Users can then access these files via a web interface, even on their mobile phones. But ownCloud also supports WebDAV, so users are not limited to the web interface: their documents folder can be mounted on Linux, Mac OS X, and Windows. Your author mounted his ownCloud installation successfully on a Ubuntu 10.04 desktop with GNOME, using the "Connect to server..." menu. The WebDAV directory that needs to be entered is listed on the bottom of the ownCloud web interface, and if the web server supports SSL, access can be encrypted.
OwnCloud also supports the Open Collaboration Services API (OCS), so the user's ownCloud installation can push notifications to the KDE desktop if interesting events occur, such as someone who accesses a shared file or a full storage device. You have to use the latest KDE (KDE SC 4.5 RC1) to get notifications working, and the configuration is still a bit rough: the user has to add his ownCloud as a social desktop provider in the system settings "Account Details" - "Social Desktop" - "Add Provider" by entering the URL http://servername/ocs/providers.php. After this, add the "Social News" widget to the desktop or panel to see the notifications. This configuration setup will be made easier in ownCloud 1.1.
Future features
While the current release isn't overwhelming feature-wise, the project's roadmap gives an idea of some interesting features we can expect. For the next version, the plan is to support easy sharing of files and directories with other people. There is also an idea to use the Git version control system as a storage backend, providing a history of all files. A command-line synchronization client and a KDE front-end are also planned, so that files can be accessed offline. According to Frank, it should be easy to develop a GNOME front-end for ownCloud, and he is already talking with some GNOME developers about closer collaboration in the future. OCS is a Freedesktop specification, so it's not tied to KDE.
Moreover, one of KDE's Google Summer of Code students, Martin Sandsmark, is working on a client-side library for better integration of desktop applications with the ownCloud service. When this library is finished, KDE applications will be able to use it to easily store and share data like bookmarks, calendar entries, or configuration data across different systems. The library uses the OCS API as its data exchange protocol, so it works together with KDE's social desktop project.
OwnCloud has a basic plugin system which will be improved a lot in version 1.1 (expected in August 2010). This means that enterprising users can write their own additional services in the future. Frank shares some interesting examples in his release announcement:
- Music server. You can listen to your music from every device without copying it around.
- Podcast catcher. A central place to collect your audio and video podcasts and access them via an HTML5 interface or a native media player.
With respect to encrypted file storage, ownCloud is still in the planning phase. The idea is to develop an encryption backend using GPG, but this is not trivial; Frank told your author:
On July 4, Frank will give a talk about the future of ownCloud at Akademy 2010 in Tampere, Finland. Developers who are interested in contributing can find the code on Gitorious and discuss these matters on the mailing list or the #owncloud IRC channel on Freenode.
Taking back control
With the recent trend of many open source users migrating to proprietary cloud storage solutions, it's nice to see that the ownCloud project is working on a solution that should be able to compete with them. While ownCloud 1.0 doesn't look that spectacular and is not well documented, the project's vision is. When ownCloud 1.1 is released in a few months, the software will probably be a lot more polished, and users will then really be able to take back control over their own data.
Brief items
Firefox 3.6.6 released
The Firefox 3.6.6 release is out. It seems that the crash protection feature introduced in 3.6.4 caused some grief for Farmville players, so the frozen-plugin timeout has been increased.
k3b 2.0 released
Version 2.0 of the k3b optical disk recording application has been announced. "With a few exceptions, K3b keeps feature-parity with 1.0.x series, but it also introduces a number of new features. Perhaps the biggest among these is support for Blu-ray drives. Additionally a lot of work have been put into improving the overall user experience."
KDE SC 4.4.5 Released: Ceilidh
Version 4.4.5 of the KDE Software Collection is out. "This is expected to be the final bugfix and translation update to KDE SC 4.4. KDE SC 4.4.5 is a recommended update for everyone running KDE SC 4.4.4 or earlier versions. As the release only contains bugfixes and translation updates, it will be a safe and pleasant update for everyone."
End of support for PostgreSQL 7.4 and 8.0
The PostgreSQL project has announced that there's only one more update coming for the 7.4 and 8.0 releases. "We urge users still using 7.4 or 8.0 in production to begin planning migration to newer versions immediately." Additionally, 8.1 goes unsupported in November.
PyPy 1.3 released
Version 1.3 of the PyPy Python implementation is out. There are a lot of improvements and speedups in the just-in-time compiler, along with alpha-quality support for CPython extension modules written in C. LWN looked at PyPy back in May.
Python "newthreading" proof of concept released
The "newthreading" project within the Python community is a new attempt at improving concurrency in Python programs and facilitating the removal of the much-maligned global interpreter lock. A proof-of-concept implementation has just been released. "This pure Python implementation is usable, but does not improve performance. It's a proof of concept implementation so that programmers can try out synchronized classes and see what it's like to work within those restrictions." More information can be found on the newthreading page.
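The discipline that newthreading's synchronized classes impose can be approximated today with explicit locks. Here is a minimal sketch, using only the standard threading module rather than the newthreading API itself, of a class whose public methods all run under a per-instance lock, so concurrent callers never see a half-updated object:

```python
import threading

class Counter:
    """Toy illustration of per-object synchronization: every public
    method acquires the instance's lock, the kind of discipline a
    synchronized class would enforce automatically."""

    def __init__(self):
        self._lock = threading.Lock()
        self._value = 0

    def increment(self):
        with self._lock:
            self._value += 1

    def value(self):
        with self._lock:
            return self._value

# Four threads each increment the counter 1000 times; the lock
# guarantees a final count of exactly 4000.
c = Counter()
threads = [threading.Thread(target=lambda: [c.increment() for _ in range(1000)])
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(c.value())  # 4000
```

Writing this locking by hand for every class is tedious and error-prone, which is exactly the burden the synchronized-class restrictions aim to lift.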
RedNotebook 1.0 released
The 1.0 release of the RedNotebook graphical journal is out. "It includes a calendar navigation, customizable templates, export functionality and word clouds. You can also format, tag and search your entries."
Thunderbird 3.1 has been released
Mozilla Messaging has announced the release of the Thunderbird email client version 3.1. Based on Gecko 1.9.2, it has a number of new features and improvements including a new quick search toolbar, improvements for searching, phishing protection, an attachment reminder, a one-click address book, and more. "One-click Address Book is a quick and easy way to add people to your address book. Add people by simply clicking on the star icon in the message you receive. Two clicks and you can add more details like a photo, birthday, and other contact information." Click below for the full announcement.
Newsletters and articles
Development newsletters from the last week
- Caml Weekly News (June 29)
- PostgreSQL Weekly News (June 27)
Queru: The Froyo Code Drop
Jean-Baptiste Queru talks about the Android 2.2 code dump from Google. "In order to make it easier for device manufacturers and custom system builders to use Froyo, we've restructured our source tree to better separate closed-source modules from open-source ones. We've made many changes to the open-source code itself to remove unintentional dependencies on closed-source software. We've also incorporated into the core platform all the configuration files necessary to build the source code of Android Open-Source Project on its own. You can now build and boot a fully open-source system image out of the box, for the emulator, as well as for Dream (ADP1), Sapphire (ADP2), and Passion (Nexus One)." The post as a whole describes something which is beginning to look more like a real open source project.
FFmpeg gets its own implementation of Google's VP8 codec (ars technica)
Ars technica looks at an effort to create a native VP8 video codec implementation for the FFmpeg project. "Building on top of FFmpeg will allow them to take advantage of a substantial body of existing code. FFmpeg already supports previous iterations of the codec, such as VP5 and VP6, which share some common characteristics with VP8. According to [Ronald] Bultje, some of the optimizations developed for FFmpeg's H.264 and VP5/6 code can be shared seamlessly with the new VP8 implementation. This approach will lead to a smaller footprint than if the developers were to simply graft Google's code into FFmpeg."
Guido on the history of Python
Python creator Guido van Rossum has been writing a series on the history of the language; the latest installment is titled "From List Comprehensions to Generator Expressions". "Why the differences, and why the changes to a more restrictive list comprehension in Python 3? The factors affecting the design were backwards compatibility, avoiding ambiguity, the desire for equivalence, and evolution of the language. Originally, Python (before it even had a version :-) only had the explicit for-loop. There is no ambiguity here for the part that comes after 'in': it is always followed by a colon. Therefore, I figured that if you wanted to loop over a bunch of known values, you shouldn't be bothered with having to put parentheses around them."
Knowledge: A Different Approach to a Database on the Desktop (KDE.News)
KDE.News takes a look at Knowledge, a prototype of a flexible database. "Knowledge was coded in Python to save development time and, while incomplete, allows the ideas behind Information Management software to be explored. Knowledge goes beyond existing KDE applications for storing textual information in a searchable format (Knotes etc.) because the data is stored in a highly structured way, which provides many possibilities for both the GUI display of data, and for searches. The data is stored in records, but unlike most other databases these are flexible as fields within each can be arbitrarily added, removed or re-ordered."
Programming with Scratch (Linux Journal)
Linux Journal reviews an educational programming language for younger children, Scratch. "Scratch allows the user to write programs by dragging and connecting simple programming instructions. The programming instructions resemble puzzle pieces and will only "fit" together in ways that make semantic sense. For example, you can't put the "Start" instruction inside an "If" instruction. The instruction pieces are also color-coded according to what type of instruction they represent; all control structure pieces are yellow, while all motion pieces are blue. The program that the user creates controls one or more objects, or sprites. From a programmer's perspective, Scratch has a very sophisticated set of instructions, as we'll see soon."
Page editor: Jonathan Corbet
Announcements
Non-Commercial announcements
LinuxQuestions.org Turns Ten
The site LinuxQuestions.org has been around for ten years. "In the 3,654 days between then and now, LQ has exceeded my expectations in every way."
Articles of interest
GNU HURD: Altered visions and lost promise (The H)
The H has a lengthy historical look at the GNU HURD. "The dinner was prepared, but the meal has yet to appear. By 2006, there was an ongoing project to port the HURD from L4 to Coyotos, which was considered more secure, and more recently [Neal] Walfield has worked on moving the HURD to the viengoos microkernel. The HURD has never had the community and resources that amassed around Linux, and looking in from the outside, sometimes looks more like a focus for research into OS design than it does a working kernel."
ASCAP Declares War on Free Culture (ZeroPaid)
ZeroPaid covers a new ASCAP campaign against free content licenses. "'At this moment,' the letter says, 'we are facing our biggest challenge ever. Many forces including Creative Commons, Public Knowledge, Electronic Frontier Foundation and technology companies with deep pockets are mobilizing to promote 'Copyleft' in order to undermine our 'Copyright.' They say they are advocates of consumer rights, but the truth in these groups simply do not want to pay for the use of our music. Their mission is to spread the word that our music should be free.'"
Here's Bilski: It's Affirmed, But . . . (Groklaw)
The US Supreme Court's Bilski decision is out, finally, and Groklaw has it. Anybody hoping that Bilski would do away with software patents will have little to celebrate. "The lower court's decision is affirmed, and so no patent for Bilski. However, business methods are not found totally ineligible for patents, just this one. Not everyone on the court agrees in all particulars. So it's complicated, and obviously not all we hoped for. However, what is clear is that the 'machine or transformation test,' while useful, is not the sole test for eligibility to obtain a patent. That was what the US Court of Appeals for the Federal Circuit had decided was the sole test. The US Supreme Court decided not to decide today about the patentability of software."
Initial thoughts on Bilski (opensource.com)
Red Hat counsel Rob Tiller is not completely discouraged by the Bilski ruling. "The Court also made clear that the Federal Circuit's pre-Bilski approach is no longer valid. Justice Kennedy's majority opinion made explicit that the Court was not endorsing the 'useful, concrete, and tangible result test' of State Street, and Justice Stevens's concurring opinion stated that following that test would be 'a grave mistake.' Thus the Federal Circuit's test that contributed to opening the floodgates on software patents is no longer operative."
Has Oracle been a disaster for Sun's open source? (The H)
The H looks at the future of Sun's open source projects under Oracle. "The problem is that Oracle is naturally trying to optimise its acquisition of Sun for its own shareholders, but seems to have forgotten that there are other stakeholders too: the larger open source communities that have formed around the code. That may make sense in the short term, but is undoubtedly fatal in the long term: free software cannot continue to grow and thrive without an engaged community."
New Books
LPI Linux Certification in a Nutshell--New from O'Reilly
The book "LPI Linux Certification in a Nutshell" is available from O'Reilly.
Mahara 1.2 EPortfolios Beginner's Guide
The book "Mahara 1.2 EPortfolios Beginner's Guide" is available from Packt Publishing.
Resources
Linux Foundation Monthly Newsletter: June 2010
The June edition of the Linux Foundation Monthly Newsletter covers the LinuxCon program, a T-shirt design contest, LF's End User Summit, a new compliance white paper, and several other topics.
Calls for Presentations
Call For Papers, 17th Annual Tcl/Tk Conference 2010
The 17th Annual Tcl/Tk Conference (Tcl'2010) will take place October 11-15, 2010 in Chicago/Oakbrook Terrace, Illinois, USA. Abstracts and proposals are due by August 1, 2010.
XDS 2010, September 16-18 in Toulouse, France
The X Developers' Summit (XDS) 2010 will be held in Toulouse, France, September 16-18, 2010. The call for papers is open until August 16, 2010.
Upcoming Events
FPL travel to LATAM events
The newly appointed Fedora Project Leader, Jared Smith, will make his first official appearance at FUDCon (Fedora Users and Developers Conference) in Santiago de Chile, July 15-17, 2010. "In addition, just a few days afterward Jared will attend the FISL 11 conference in Porto Alegre, Brazil."
SCALE moves to larger venue in 2011
The Southern California Linux Expo (SCALE) will be moving to a larger venue for 2011. SCALE 9X is slated for February 18-20, 2011.
Events: July 8, 2010 to September 6, 2010
The following event listing is taken from the LWN.net Calendar.
| Date(s) | Event | Location |
|---|---|---|
| July 3–July 10 | Akademy | Tampere, Finland |
| July 6–July 9 | Euromicro Conference on Real-Time Systems | Brussels, Belgium |
| July 6–July 11 | 11th Libre Software Meeting / Rencontres Mondiales du Logiciel Libre | Bordeaux, France |
| July 9–July 11 | State Of The Map 2010 | Girona, Spain |
| July 12–July 16 | Ottawa Linux Symposium | Ottawa, Canada |
| July 15–July 17 | FUDCon | Santiago, Chile |
| July 17–July 24 | EuroPython 2010: The European Python Conference | Birmingham, United Kingdom |
| July 17–July 18 | Community Leadership Summit 2010 | Portland, OR, USA |
| July 19–July 23 | O'Reilly Open Source Convention | Portland, Oregon, USA |
| July 21–July 24 | 11th International Free Software Forum | Porto Alegre, Brazil |
| July 22–July 23 | ArchCon 2010 | Toronto, Ontario, Canada |
| July 22–July 25 | Haxo-Green SummerCamp 2010 | Dudelange, Luxembourg |
| July 24–July 30 | Gnome Users And Developers European Conference | The Hague, The Netherlands |
| July 25–July 31 | Debian Camp @ DebConf10 | New York City, USA |
| July 31–August 1 | PyOhio | Columbus, Ohio, USA |
| August 1–August 7 | DebConf10 | New York, NY, USA |
| August 4–August 6 | YAPC::Europe 2010 - The Renaissance of Perl | Pisa, Italy |
| August 7–August 8 | Debian MiniConf in India | Pune, India |
| August 9–August 10 | KVM Forum 2010 | Boston, MA, USA |
| August 9 | Linux Security Summit 2010 | Boston, MA, USA |
| August 10–August 12 | LinuxCon | Boston, USA |
| August 13 | Debian Day Costa Rica | Desamparados, Costa Rica |
| August 14 | Summercamp 2010 | Ottawa, Canada |
| August 14–August 15 | Conference for Open Source Coders, Users and Promoters | Taipei, Taiwan |
| August 21–August 22 | Free and Open Source Software Conference | St. Augustin, Germany |
| August 23–August 27 | European DrupalCon | Copenhagen, Denmark |
| August 28 | PyTexas 2010 | Waco, TX, USA |
| August 31–September 3 | OOoCon 2010 | Budapest, Hungary |
| August 31–September 1 | LinuxCon Brazil 2010 | São Paulo, Brazil |
If your event does not appear here, please tell us about it.
Mailing Lists
New MeeGo Announcement Mailing Lists
Two new mailing lists have been created for MeeGo announcements; meego-announce for major project releases and meego-security for security announcements. "Both of these are low volume mailing lists, and posting to these lists is restricted to a few people who are responsible for official project communications. These lists are a good way to keep up with major announcements for people who aren't involved in the day to day development of MeeGo."
Audio and Video programs
Videos from Red Hat Summit
Videos of keynotes and some sessions from the recent Red Hat Summit and JBoss World are available online. (Thanks to Martin Jeppesen)
Page editor: Rebecca Sobol
