There were two hardware-related predictions: that the awareness of the value of open hardware would grow, and that there would be a number of Linux-based tablets. Neither was realized in any complete sort of way. The success of Android shows some level of appreciation for openness; it can be instructive to look at second-hand sales of Android handsets and note how many of them are described as "rooted." The Free Software Foundation also tried to raise awareness with its endorsement program but, as your editor said at the time, that program appears unlikely to have much real-world effect.
As far as Linux-based tablets go: we have seen a few Android devices, with Samsung's Galaxy Tab being the most prominent. But Android on tablets has been surprisingly slow to arrive, and MeeGo, needless to say, is slower yet. Perhaps 2011 will be the year of the Linux tablet.
The prediction that software patents would be a problem was not particularly hard to make. Sure enough, a number of suits have been launched, mostly in the mobile computing area.
Copyright assignment policies: the prediction that there would be debate around such policies was accurate. The LibreOffice project, in particular, had a surprisingly high-volume (in both the amplitude and quantity sense) debate on copyright assignment, but the developers behind LibreOffice seem determined that they will be more successful without any such policy. The GNOME project and the newly-formed MeeGo project also came out strongly against copyright assignment. These policies remain firmly entrenched in many projects, but the trend appears to be against them.
That Oracle's acquisition of Sun would proceed was also a relatively easy prediction. Your editor said that MySQL would be treated with a relatively light hand, which has proved mostly to be the case. What your editor missed was how badly most of the rest of Sun's free software projects would fare. Significant forks of OpenSolaris and OpenOffice now exist, and there is discontent in other projects as well. Oracle's relationship with the kernel community remains good, but the company seems to care little about projects higher in the stack.
The browser wars: perhaps they have heated up again, as predicted; certainly Google Chrome seems to be gaining strength. Mozilla is competing with a number of interesting initiatives, including a mobile version of Firefox. Good stuff is happening - but Internet Explorer still hangs on to over half of all traffic.
The prediction that solid-state storage devices would go into wider use was boring and obvious. Perhaps more interesting was the claim that some distributors would be offering Btrfs. That has certainly happened; your editor did not foresee, though, that the MeeGo project would adopt Btrfs as its default filesystem.
The rumors of the death of the big kernel lock were only exaggerated by a little; the 2.6.37 kernel (which should come out just after the new year) can be built in a useful mode without the BKL entirely.
The growth of LLVM was another fairly obvious prediction; a number of interesting things have happened with that project in the last year. Identifying Unladen Swallow as one of those things turned out to be a bad choice, though; whether Unladen Swallow is dead or just resting remains to be seen, but it is not a hotbed of activity at the moment.
Your editor predicted a "scary security incident" involving mobile devices. There have been some examples of malicious Android applications, but nothing that qualifies as a truly scary incident - that we know about, anyway. The scary stuff, instead, happened at other levels, with the Google attacks and the Stuxnet worm being the most prominent examples. The "year of the sandbox" was also predicted, but nothing of any real interest seems to have happened in that area.
It's not surprising that there was a lot of discussion of cloud computing, as predicted. The sun also rose every morning. On the other hand, the predicted release of GNOME 3 did not happen. The predicted increase in Python 3 adoption is also hard to find; there does seem to be a little more interest, but most developers seem to be in no hurry to leave 2.x behind.
The last prediction - on the importance of community distributions - is hard to measure, but it's not clear that the situation has changed markedly. What we are seeing is a bit more attention to staying close to upstream projects and working more closely with them. In its own way, Oracle's decision to slip a 2.6.32-based kernel into its RHEL5 clone is an example of this. MeeGo's desire to push patches upstream rather than carrying them is another.
So what did your editor miss entirely? The seeming increase in high-profile forks (LibreOffice, Mageia, IllumOS, ...) is one of them. The creation of MeeGo through the merger of Moblin and Maemo was another. In retrospect, it's not surprising that the sharks would start to circle around Novell, but your editor certainly did not think that the company might be in different hands by the end of the year. The failure of PHP6 was also obvious in retrospect.
One other interesting omission might, at the beginning of the year, have been phrased something like "the embedded Linux world will begin to get its act together." In this year, we've seen the creation of the Linaro project to try to improve tools and support for the important ARM architecture. The Yocto project - meant to ease the process of creating embedded distributions - launched. A number of embedded vendors came together and decided to standardize on the 2.6.35 kernel, which will receive improved long-term support as a result. The number of embedded vendors contributing to free software projects is growing. There is plenty of room for improvement yet, but things seem to be headed in the right direction.
The most obvious prediction of all was that free software would be stronger than ever. Despite our ups and downs, our flame wars and lawsuits, our bugs and our forks, we're doing great. It's been another good year for Linux and free software, and it has been a pleasure covering it for this audience. Thanks to all of you for making this community happen.
In November I wrote about the apparent demise of the well-known mail delivery agent procmail, which has not been updated since 2001, but is still routinely packaged by Linux distributions. Whatever your feelings about procmail itself, the story prompted a discussion in the comment threads that we revisit periodically as a community: what exactly does it mean for a free or open source project to die anyway?
There is no one "right answer"; the context of the project, its governance, its user community, and the opinions of the debaters make for a wide spectrum of definitions. A single-vendor project shutting down can instantaneously switch off source code access, online documentation, and mailing lists. More often, the tools and trappings of a project atrophy one at a time; the documentation wiki slips behind the current release, the milestones never make it past "alpha" announcements on the mailing list and onto the Downloads page, and the user forum slowly transforms into a refugee camp where abandoned users help each other patch and shoehorn the aging code into compatibility with newer system libraries.
In most cases, however, the code itself is still available — somewhere. But if we look back at the projects that did close up shop during 2010, it is clear that source availability alone is not always sufficient to regard a project as still among the living. In the end, what makes a free software project dead comes down to practical questions. When was the last release? Are there any support plans for businesses? Is there reliable user-to-user support? Is there support for new developers wishing to leverage the code?
For all practical purposes, there are degrees of mortality to be considered. The owner of the code can walk away, shut down the resources, and fire the developers. Projects that meet that fate have little hope unless an entirely new team revives the work from scratch. But almost as serious is when the project owner or leadership pulls all of the developers and puts them on some successor project — however legitimate the succession is, there is always a risk that the successor project will never see the light of day, and users cannot make the jump until it does.
In any event, it is the end of the year, which puts many of us in a reflective frame of mind, so taking stock of the wreckage from the past twelve months can be illuminating — both with regard to how open source projects die, and the different directions events can take afterward.
The easiest casualties to identify are those marked by a corporate owner's official press release or a project's clear statement of discontinuation. A few common factors precede many of these cases — generally predictable ones like lack of consumer adoption or legal woes; risks every project endures on a daily basis.
Easily the most high-profile FOSS project to get the axe this year was Google's real-time collaborative editor Wave. Despite a highly-publicized fall 2009 launch, the subsequent releases of much of the code, and worldwide Wave developer events, the search behemoth pulled the plug in August 2010. For reasons that still baffle me (although the "highly-publicized" part is no doubt a key ingredient), this decision was met with sheer joy from many in the technology press, and outright celebration ensued in some darker quarters. The fact that many of those rejoicing continued to describe Wave as an "instant messaging" tool — which it was not — points to poor product management and muddled marketing as critical mistakes on Google's part.
The same blunders may have stricken the Open Source Applications Foundation (OSAF)'s Chandler, a cross-platform email-and-calendaring application, although its end was not nearly as widely observed. The last announced release was made in July of 2009, although commits trickled in until the end of the year, and while the user and developer mailing lists survive, they now consist solely of requests for install-time help. Despite a well-funded benefactor underwriting its development, the project never achieved a fraction of the mindshare enjoyed by Mozilla Thunderbird (and its Lightning calendar add-on), which itself remains a minority player. I'd be willing to wager less than half of the people who read this paragraph knew what Chandler was, much less had tried it.
In contrast, Linux on the PlayStation 3 met a quick demise in 2010, and it was big news. While the firmware that allowed installing Linux wasn't free software itself, of course, it did allow booting the device with a user-installed OS, and was supported by several Linux distributions. It was such big news because the PS3's corporate parent, Sony, knifed the project intentionally, publicly, and without remorse. Citing "security concerns," Sony pushed an April firmware update out to PS3s that disabled the "Other OS" feature the devices had supported since their launch in 2006. Rumors were that fear of Blu-Ray piracy enabled through PS3 hacking was the "concern" in question, although no such exploits were ever published.
A peculiar footnote to the PS3 Linux obituary was Sony's sudden announcement of an OpenStep-based application development framework it named SNAP, which was quickly followed by Sony's sudden announcement that SNAP was canceled. The stated idea was to create an open development framework for Apple's iOS, thus prying the tightly-closed lid off of the iPad/iPhone platform to allow in fresh rays of freedom. Considering that the dream of an open homebrew-development community was the initial justification for allowing Other OS on the PS3, SNAP's brief moment in the sun is probably no big surprise.
The LimeWire peer-to-peer (P2P) file sharing tool was scuttled in October, a move dictated by court order. Sources "close to the company" told PCMag that the application will be reborn as a "copyright-friendly service." Because the court order prevents LimeWire from distributing a client capable of uploading or downloading from the Gnutella P2P network, though, there is little chance that the open source "LimeWire Basic" version of the client will return at all.
Not all project terminations were the result of corporate mismanagement or copyright paranoia, however. Linux's HAL hardware abstraction layer, for example, is officially deprecated in favor of udev. Although this transition has been planned since 2008, both that year and 2009 continued to see additional HAL point releases. As of 2010, the major desktop distributions have migrated away from HAL, although several individual applications still pull it in as a dependency. HAL may continue to receive security patches, but its active life is essentially over.
Speaking of patches and active lives, several large projects fell into the awkward "dead but still claiming lots of users" category, which poses its own unique set of challenges. Consider the Moblin and Maemo siblings, for example. Intel's netbook distribution and Nokia's handset distribution were welded together into the brand-new MeeGo initiative in February of 2010, which bodes well for the future of the code itself. But both of the parent projects targeted embedded (or at least, non-standard-hardware) devices. Consumers who purchased an N900 phone from Nokia might be miffed to learn that there will be no MeeGo release for the device. The daring can boot MeeGo builds on the N900 from an SD card, but they do so at their own risk.
OpenSolaris was just one of many Sun projects acquired by the proprietary database vendor Oracle, and although several of the others (Java, OpenOffice, and MySQL) have had their fair share of headaches and battles since the acquisition, OpenSolaris is the only one to be scrapped outright. A leaked Oracle memo announced the move in September, under which upcoming "Solaris 11" releases might be available through a "technology partner program," but the open source version marches straight for the grave.
In November, the Symbian Foundation met an unceremonious end when majority stakeholder Nokia announced that it would re-absorb the Symbian unit and shut down all of the Foundation's web assets. Those assets disappeared on December 17th, though Nokia reportedly still employs the Symbian development team. Officially, Symbian will remain open source software, and what was the Symbian Foundation will morph into a "licensing body" — but the actual source code will disappear entirely sometime in March. One would be excused for thinking that that doesn't sound particularly open source; anyone who needs the code is encouraged to drop an email to firstname.lastname@example.org — a friendly offer, but not one that alleviates fears of abandonment.
Sometimes, of course, a corporate parent can cut a project loose, and the project can continue to survive or even grow. Such was the case for Etherpad, the web-based collaborative editor acquired by Google in 2009. Google opened up the code right after the acquisition, but snuffed out the service in May of 2010. Prior to the switch-off date, several replacement services sprouted up based on the Etherpad source code — Pirate Pad, PrimaryPad, OpenEtherpad, and more.
In addition to straight derivatives, the existence of forks sometimes makes it hard to determine when to declare a project dead, but at least one project is a plausible candidate in 2010.
The PHP-based content management system (CMS) Mambo suffered an acrimonious leadership battle in 2005 that led to the departure of the bulk of the developers, who started the Joomla CMS. As is often the case in such a fork, the remaining owners of the Mambo trademark and source code copyrights asserted that nothing was wrong and that development would continue unabated. Although that may have been true for a while, here at the end of 2010 it has been a full calendar year since there were any signs of life from Mambo (longer still since there was a release), apart from the occasional Twitter alert that the project's servers had been attacked. Joomla, on the other hand, seems fine.
The final category is made up of those projects that have disappeared or show no signs of life, but which, for one odd reason or another, are impossible to pronounce dead outright.
Take Xandros, for example. The commercial Linux distribution has not made a release since 2007, although it has acquired a handful of other companies since then, which indicates that capital is not the problem. One of those acquisitions was even fellow distribution Linspire, which has also failed to make a release since 2007. It's not clear whether or not the distribution is dead, though the company itself still exists and sells support contracts for existing Xandros Linux users. The company does have other products, but it also went all of 2010 without issuing a press release. Any new products the company may be developing are being worked on behind closed doors.
Snort also has a corporate parent that continues to do business, but the tool itself still faces uncertainty. Some in security circles worry that the popular intrusion detection system is on life support if not actually terminal. The reason is that the long-discussed 3.0 rewrite, in planning since 2007, still has yet to appear. The project continued to make incremental updates to the existing version of Snort in 2010, but that apparently was not good enough to satisfy the US government, which paid to have a Snort replacement written.
Raindrop was a combined-messaging inbox system developed by Mozilla Labs, and offered an unusual combination of features: merging email, instant messages, and microblogging discussions into a single stream, and intelligently filtering one-to-one, group, and automated messages. Despite optimistic beginnings, the project quietly stopped receiving updates in late spring, and the mailing lists fizzled. A Mozilla Labs developer told me in October that a Raindrop replacement would arrive "soon" ... but it never has.
The XUL-based cross platform media player Songbird did not shut down entirely in 2010, but it did drop all Linux support in April. Shortly thereafter, it looked as though things on the Linux front were going to be A-OK, when a group of contributors announced the Nightingale project that would pick up where Songbird left off. Eight months later, however, no code has been released.
The caveats on these seemingly expired projects vary. One has to give some leeway to Nightingale; starting a project from the ground floor, but with a large, pre-existing codebase, is never easy. With regard to Raindrop, Mozilla Labs is explicitly marketed as the experimental wing of the browser maker, where R&D happens, and actual projects come and go. Either project could still awaken from its slumber and lead a long, happy life. Snort 3.0 could drop tomorrow, of course; perhaps Uncle Sam is just impatient, and the 3.0 rewrite is close to perfection. Who knows what Xandros HQ could have up its sleeve; the ISO downloads are cryptically marked as "out of stock" so maybe it's as simple as a missing hard disk.
Looking back at the list of 2010's dearly departed, you see a snapshot of the open source ecosystem as a whole. Some projects, like Google Wave, Symbian, and Chandler, never found the user-base their creators were hoping for. Others, like LimeWire and PS3 Linux, were forced to walk the plank thanks to legal threats from the code-meets-commercial-media arena. Songbird and Xandros were both popular when they were available, but appear to have simply lost support among the people who write the paychecks at their respective companies (and who knows what happened to SNAP, but at the very least we can agree that "too much support among management" was not among its problems). Mambo got taken out by infighting between its leadership and core developers. If you looked at the active open source projects making the news today, you'd likely find the same kinds of problems.
What is interesting to note about 2010's obituary is that there was only one Oracle acquisition among the fallen. Despite the database company's dominance of the news cycle for lawsuits and anti-community practices, it did not actually succeed in killing that many open source projects. Whether that tells you something about the hype factor of the acquisition or the resilience of the free software community is anybody's guess.
The developers behind OpenIndiana, the community-driven replacement for OpenSolaris, would probably say the latter. That brings us to the other potential lesson from 2010: the number of open source projects that survived, in one form or another. Etherpad is positively flourishing, Joomla is more popular than ever, MeeGo is growing and expanding into new areas, and even the much-maligned Wave has been resurrected as an Apache project (presumably to the consternation of some members of the FOSS press).
Three years ago I looked at the projects that perished in 2007 for NewsForge. There were nine projects on that year's Big Sleep list, and although this is not an exact parallel (the 2007 article only covered projects I personally had written about during the preceding year), I can't help but notice that only one of them has survived in any form that I can identify today. There is reason to be hopeful about at least three or four of this year's victims.
The difference could be due to random variation, but it is also possible that the community has learned from experience. For example, there have been large-scale dump-the-code-over-the-wall releases in the past that did not work out as well as Etherpad; perhaps Etherpad's continued existence ought to make it a case study for other such "if the community wants it, the community can have it" divestments. It might not even be too late for some of this year's casualties, say, Raindrop and Symbian. Even though both of them have some prospect for survival, good intentions offer no guarantee either will still be here in 2011.
Here is LWN's thirteenth annual timeline of significant events in the Linux and free software world for the year.
In what is becoming a fairly standard pattern, 2010 brought various patent lawsuits, company acquisitions, new initiatives, and new projects. It also brought new releases of the software that we use on a daily basis. There were licensing squabbles and development direction disagreements—all things that we have come to expect from the Linux and free software world over a year's time. Also as expected, though, were the improvements in the kernel, applications, distributions, and so on that make up that world. Linux and free software just keep chugging along, and we are very happy to be able to keep on reporting about it.
Like last year, we broke things up into quarters, and this is our report on the final quarter, October-December 2010, though there may be an addition or two for December. The previous quarters can be found as follows:
This is version 0.8 of the 2010 timeline. There are almost certainly some errors or omissions; if you find any, please send them to email@example.com.
LWN subscribers have paid for the development of this timeline, along with previous timelines and the weekly editions. If you like what you see here, or elsewhere on the site, please consider subscribing to LWN.
For those with a nostalgic bent, our timeline index page has links to the previous twelve timelines and some other retrospective articles going all the way back to 1998.
The LLVM compiler project releases version 2.8, including major improvements to the Clang C++ support and two new projects: libc++ and LLDB (announcement).
Red Hat settles a patent case with the patent troll Acacia, but shares no details of the settlement terms (InternetNews blog posting).
Microsoft VP Scott Charney suggests barring computers without a "health certificate" from the internet as a way to fight botnets and other internet security threats. Of course, those certificates would have to be issued by Microsoft. (blog posting).
Ubuntu 10.10 ("Maverick Meerkat") is released (announcement).
Debian welcomes non-packaging contributors as project members in a landslide vote: 285-14 (vote results).
The Open Document Format Plugfest is held in Brussels, Belgium to discuss interoperability between ODF-supporting applications (LWN coverage).
The AsbestOS bootloader, which allows Playstation 3s to run Linux once again, is released (announcement).
The first ever GStreamer conference is held in Cambridge, UK (LWN coverage).
Mark Shuttleworth announces that Unity will be the default desktop for 11.04 ("Natty Narwhal") in preference to the GNOME 3 Shell (ars technica report).
The Consumer Electronics Linux Forum (CELF) announces a merger with the Linux Foundation at the Embedded Linux Conference Europe (ELCE), held in Cambridge, UK (CELF/LF merge blurb and ELCE coverage: The state of embedded Linux and Device trees).
The Yocto project for easing embedded Linux development is announced at ELCE (project home page).
A plugin for Firefox that sniffs web application credentials from wireless networks, called Firesheep, is released (LWN article).
MeeGo 1.1 is released (announcement).
The 2010 Kernel summit is held in Cambridge, MA (extensive LWN coverage).
Fedora 14 is released (announcement).
Stormy Peters announces that she is leaving her position as GNOME Foundation executive director to work at Mozilla on the open web (blog post).
Red Hat Enterprise Linux 6 is released (press release).
The Apache Software Foundation issues a warning that it will stop participating in the Java Community Process if the TCK tests are not made available to it; access to the TCK has been promised for some time (Apache statement).
AMD joins the MeeGo project (press release).
GNU's Savannah project hosting site suffers a SQL injection attack that reveals users' encrypted passwords (LWN blurb).
CentOS struggles with its efforts to release its rebranding of RHEL 6 (LWN coverage).
Novell puts out a message to assure those worried that Attachmate will retain the Unix copyrights even after the acquisition closes (brief message).
A generic anti-harassment policy for open source conferences is developed in the wake of numerous sexual (and other) harassment incidents (LWN article).
The Linux Foundation publishes its annual kernel development report (announcement).
A Linux client for the Ryzom MMORPG is released (LWN article).
The GRUB bootloader accepts code to support booting from ZFS and releases the code under the GPLv3, without a copyright assignment (LWN article).
KOffice forks (or splits) and becomes the Calligra Suite (LWN article).
The Hudson continuous integration server runs into Oracle interference when trying to change its development infrastructure in yet another example of the software giant not quite understanding free software communities (LWN blurb).
Matt Asay announces his resignation as Canonical's COO in order to join a mobile web application startup (blog post).
The Yocto project has a two-day summit in San Francisco involving 40 members of the embedded Linux community (LWN coverage).
An allegation is made that the US FBI paid to have a backdoor put into OpenBSD's IPSEC implementation, though it is still unclear whether there is any truth to it (LWN blurb, update from Theo de Raadt).
The Apache Software Foundation resigns from the Java Community Process executive board as it previously warned that it would over the availability of the TCK tests (LWN blurb).
Richard Purdie is named as a Linux Foundation fellow to work on the Yocto project and other related tools (announcement).
Several projects announce that they have become licensees of the Open Invention Network, which collects patents for the defense of free software projects (LWN blurbs: Gentoo, The Document Foundation (LibreOffice), and KDE).
FOSS.IN announces that 2010 will be the last year it is held; it has been the premier free and open source conference in India over the last decade or so (LWN posting).
X11R7.6 is released (announcement).
Openwall GNU/*/Linux 3.0 is released, which marks the ten year anniversary of the security-enhanced Linux distribution (announcement).
Linux capabilities are a sparsely used kernel facility to add granularity to the set of privileges that a process can have. By using capabilities, an administrator can grant a process a limited set of privileges, rather than the usual, essentially binary, choice between granting all privileges via setuid() or granting just those of the user running the program. Combining capabilities with user namespaces will allow administrators to apply those fine-grained privileges to containers, which is just what a patch set proposed by Serge E. Hallyn sets out to do.
We have looked at capabilities several times in the past, most recently in the context of adding capability sets to files, though an earlier article provides more details on the rules that govern how capabilities are applied and inherited. With the addition of file capabilities, Linux systems have all the tools needed to eliminate most setuid programs though, in practice, that hasn't happened. There is an effort underway to eliminate most setuid programs for Fedora 15, however.
Namespaces are part of the Linux containers implementation, which is a lightweight virtualization technique that allows groups of processes to run in their own little world, separate from the rest of the processes running on the system. These containers must not be able to see or interact with things outside, so various global resources (things like process IDs, network devices, filesystems, and so on) need to be wrapped in a namespace layer that provides the illusion that the container is its own system. User namespaces provide a container with its own set of UIDs, completely separate from those in the parent. Each of the different kinds of namespaces can be created by using flags to the clone() system call.
The idea behind Hallyn's patches, the core of which was originally developed by Eric Biederman, is to eventually allow unprivileged users to create namespaces. In order to do that, the capabilities of processes in a namespace must not leak out to parent (or even sibling) namespaces. In the core patch, Hallyn says that the proposed changes accomplish 90% of the goal to allow unprivileged namespace creation, with some UID confusion issues still to be addressed.
In the initial user namespace—the "normal" namespace that is created at boot time—capabilities for a task are calculated in the usual way, using the permitted, effective, and inheritable capability sets associated with the task. The proposed changes will restrict any capabilities in a child user namespace to only act within that namespace or on any of its descendants.
Each capabilities set is contained in a structure that references the user it corresponds to, and those user structures have a namespace to which they are attached. When checking to determine whether a particular set of capabilities should be used, the code looks at whether the user is part of the target namespace. If so, its capabilities are used; if not, each parent namespace is checked, all the way back to the initial user namespace. Since the capabilities can only be associated with one namespace (via a user in that namespace), they are only active in the namespace that contains them or any descendant of that namespace.
The user that creates the namespace will have all capabilities in that namespace, not just the set of capabilities they have in the parent. Essentially, the creator has the privileges of the root user in any namespace he or she creates.
In order to ensure that the namespace creator's capabilities don't leak out to the rest of the system, a new capability check is added in the patch:
    int ns_capable(struct user_namespace *ns, int cap);

The existing capable() function, which determines whether a task has a particular capability or not, has been changed to call ns_capable(), but it passes the initial user namespace for ns. That means that the existing calls to capable() currently sprinkled around the kernel do not suddenly change their semantics. In order to allow specific capabilities to function in a user namespace, calls to capable() need to be changed to ns_capable() while passing the appropriate namespace. The cap_capable() function, which is eventually called from ns_capable(), has been changed to properly handle capabilities in user namespaces.
In this way, kernel functionality that requires certain capabilities can be incrementally added to user namespaces while still protecting the rest of the kernel from being affected. Hallyn's patches enable three specific capabilities for user namespaces by making the change from capable() to ns_capable(). The first, and simplest, just allows the sethostname() system call to be successfully called if the user in the namespace has CAP_SYS_ADMIN. The second, which is slightly more complicated but still a pretty small change, alters check_kill_permission() to allow CAP_KILL-enabled tasks to send a signal to another task. The last patch allows CAP_SYS_PTRACE-capable tasks to use ptrace() on other tasks in the user namespace.
This is an incremental approach that will allow each addition of user namespace capabilities to be reviewed and tested separately before adding them into the mainline. Hallyn notes his current plans for enabling some additional capabilities from user namespaces:
Capabilities are something of a gnarly corner of the kernel, and one that has caused problems in the past (e.g. the "sendmail capabilities" bug). Combining them with namespaces is a bit of a delicate task. Clearly, if regular users are able to create these namespaces, it is imperative that any tricky interactions caused by capabilities in namespaces do not lead to privilege escalations. From that perspective, Hallyn's approach seems sound.
The immobiliser unit should be connected securely to the vehicle's electronic engine control unit, using the car's internal data network. But these networks often use weaker encryption than the immobiliser itself, making them easier to crack.
What's more, one manufacturer was even found to use the vehicle ID number as the supposedly secret key for this internal network. The VIN, a unique serial number used to identify individual vehicles, is usually printed on the car. "It doesn't get any weaker than that," Nohl says.
Created: December 20, 2010    Updated: December 22, 2010
Description: From the Gentoo advisory:
Multiple vulnerabilities were found in Chromium.
A remote attacker could trick a user to perform a set of UI actions that trigger a possibly exploitable crash, leading to execution of arbitrary code or a Denial of Service.
It was also possible for an attacker to entice a user to visit a specially-crafted web page that would trigger one of the vulnerabilities, leading to execution of arbitrary code within the confines of the sandbox, successful Cross-Site Scripting attacks, violation of the same-origin policy, successful website spoofing attacks, information leak, or a Denial of Service. An attacker could also trick a user to perform a set of UI actions that might result in a successful website spoofing attack.
Created: December 17, 2010    Updated: February 2, 2011
Description: From the Red Hat bugzilla:
A flaw was found in ISC's dhcpd where, if a server receives a TCP connection on a port that has been configured for communication with a failover peer, it would become unresponsive to all normal DHCP protocol traffic. This will result in the server no longer providing DHCP services to clients until it is restarted.
This flaw only affects DHCP version 4.2 and is corrected in DHCP 4.2.0-P2. Previous versions of DHCP are not vulnerable.
Created: December 17, 2010    Updated: December 22, 2010
Description: From the Ubuntu advisory:
It was discovered that Eucalyptus did not verify password resets from the Admin UI correctly. An unauthenticated remote attacker could issue password reset requests to gain admin privileges in the Eucalyptus environment.
Created: December 16, 2010    Updated: February 22, 2011
Description: From the Mandriva advisory:
A cross-site scripting (XSS) vulnerability in Gitweb allows remote attackers to inject arbitrary web script or HTML code via the f and fp variables (CVE-2010-3906).
Created: December 21, 2010    Updated: September 2, 2011
Description: From the Red Hat advisory:
It was found that some structure padding and reserved fields in certain data structures in QEMU-KVM were not initialized properly before being copied to user-space. A privileged host user with access to "/dev/kvm" could use this flaw to leak kernel stack memory to user-space.
Package(s): firefox, thunderbird, seamonkey    CVE #(s): CVE-2010-3778
Created: December 21, 2010    Updated: May 2, 2011
Description: From the CVE entry:
Unspecified vulnerability in Mozilla Firefox 3.5.x before 3.5.16, Thunderbird before 3.0.11, and SeaMonkey before 2.0.11 allows remote attackers to cause a denial of service (memory corruption and application crash) or possibly execute arbitrary code via unknown vectors.
Created: December 22, 2010    Updated: January 17, 2011
Description: Tor does not correctly handle data from the network, leading to buffer overflows which could possibly be exploited for remote code execution.
Page editor: Jake Edge
Brief items

Kernel release status: The current development kernel is 2.6.37-rc7, released on December 21. Linus says:
The full changelog can be found on kernel.org.
Stable updates: Willy Tarreau released 2.4.37.11 on December 18. "It fixes a number of minor security issues, mainly information leaks from the kernel stack on some 64-bit architectures, or possible NULL derefs and crashes in some less commonly used protocols (eg: econet, x25, irda)." He also notes that 2.4 will now be supported through the end of 2011.
The change is to obfuscate the names of the firmware files in the Linux source code. That way, if a user tracks down the needed firmware and installs it under the name that the code wants, it will be loaded. But Linux-libre will still not suggest installation of the nonfree firmware file to handle a particular device.
Accepting such things in mainline would weaken the very principle that has made open source in general and Linux in particular such a success, while refusing them isn't going to affect the survival of open source anyway. The compromise here would be only in the corporate world's favor. And, as history has shown in such cases, the open source way always ends up prevailing eventually, despite the lack of corporate assistance.
There are two main contenders to replace STGT: LIO and SCST. In the end, there's really only room in the kernel for one SCSI target implementation, so there naturally has been a fair amount of tension between these two projects. Whenever the discussion turned to choosing one, it tended toward the ugly side. SCSI maintainer James Bottomley has done his best to stay out of the flames, but, in the end, he must make a decision and merge one of them.
A few months back, it began to become clear that LIO was going to be the winner. More recently, James gave the green light to begin merging this code for the 2.6.38 kernel. Suffice it to say that SCST maintainer Vladislav Bolkhovitin did not take the decision well and did his best to restart the battle in a wider context. James has stuck with his decision, though, saying that there is not much to choose between the two technically, and that it came down to community:
The previous discussions appear to have worn down most other participants, so few people chose to join in this time around. There doesn't seem to be anything to suggest that the decision will change at this point; unless something surprising happens, LIO will be the in-kernel SCSI target subsystem as of 2.6.38.

In other news, a patch has been posted which allows ping to run as an unprivileged user. It implements a new type of socket protocol (IPPROTO_ICMP) which, despite its name, is not usable for ICMP communications in general. The only type of message which is allowed through is ICMP_ECHO (and the associated replies).
Interestingly, this patch has been trimmed down from the version which is applied to Openwall kernels. In the full version, the ability to create ICMP sockets is restricted to a specific group, which can be set by way of a sysctl knob. The ping binary is then installed setgid. In this way, full access to ICMP sockets is not given to unprivileged users, while ping only gets enough privilege to create such sockets. The group check was removed from the posted patch to make acceptance easier, but it seems likely to be added back before the next posting.
For more information about the thinking behind this design, see this message from Solar Designer.
Kernel development news
Most wireless networking happens in the 2.4 GHz frequency band; as many users will have noticed, that band tends to get crowded and noisy in places. For this reason, both 802.11a and 802.11n specify a number of channels in the 5GHz band as well. The relative lack of traffic at 5GHz makes it attractive for this use, even though the effective range of an access point is reduced somewhat. Pushing more wireless traffic to 5GHz will greatly increase the total bandwidth available.
Naturally, there is a catch. While other uses of that frequency range are few, among them are counted air traffic control and weather radars. Interfering with these radars will be frowned upon by regulators who have strange notions about how aviation safety should take priority over that post-lunch Twitter update. These regulators typically show a distinct lack of humor toward anybody who doesn't pay attention to their rules; once again we see how wireless networking often tends to be the leading edge of encounters between Linux and the regulatory environment.
To make the 5GHz band available for wireless networking in a safe manner, various agencies have laid out specifications for how a wireless device selects an operating channel. This scheme, called "dynamic frequency selection" (DFS), requires that a "master" station listen to a channel for a minimum period of time to ensure that no radars are operating there before transmitting. Thereafter, the station must continue to listen for radars; should one happen to move into the neighborhood, the station must shut down all communications and move to a different channel. In essence, wireless devices operating in the 5GHz band must actively avoid transmitting on channels where radars are operating.
Most Linux systems will not have to concern themselves directly with radar detection. A "slave" device, as might be found in a typical laptop, need only follow the master device's instructions with regard to where it can transmit. But any device which wants to function as a master - including access points and anything running in ad hoc mode - must notice radars and react accordingly.
Wireless adapters, having radio receivers tuned to the frequency range of interest, can help with this process. Should a blast of RF energy hit the antenna, the adapter can return an error to the host system indicating that a radar-like patch of interference was encountered. It's not quite that simple, though: random interference is far from unknown in the wireless world. If a wireless device bailed out of a channel every time it received some unexpected interference, communication would be painful at best. So something a little smarter needs to be done.
That something, of course, is to look for the specific patterns of interference that will be generated by a radar. Radars emit short bursts of RF radiation, followed by longer periods of listening for the returns. The good news is that these patterns are fairly well defined in terms of the radar's pulse width, pulse repetition interval, and frequency. The bad news is that these parameters vary from one regulatory domain to the next. So while the US has specified a specific set of patterns that a device must recognize, the European Union has defined something different, and Japan has a variant of its own. So radar detection must be specific to the environment in which the device is operating.
A group of developers, mostly representing wireless hardware companies, has started a project to implement DFS for Linux. A preliminary patch set has been posted by Zefir Kurtisi to show how DFS might be done. These patches add a simple function to the ieee80211 API:
void ieee80211_add_radar_pulse(u16 freq, u64 ts, u8 rssi, u8 width);
The hardware driver can use this function to inform the 802.11 core whenever the interface reports the detection of a radar pulse. These events will be tracked; if, over time, they match the pattern for radars defined by the regulatory environment, the code will conclude that a radar is operating and that evasive action is called for. If the hardware can do full radar detection directly, the driver can report the existence of a radar with:
void ieee80211_radar_detected(u16 freq);
The current patch is only able to detect one variety of European radar; it is meant as a sort of proof of concept. The means by which parameters will be loaded to describe radars in different jurisdictions is yet to be worked out; one assumes that the existing regulatory compliance mechanism will be used, but alternatives are being considered. One way or the other, Linux should be able to coexist with radars in the 5GHz band in the near future. A version which helps in the avoidance of speeding tickets may take a little longer.

We last visited this issue in 2009, when the addition of rtkit was put forward as the (pulseaudio-based) solution for casual audio use. Serious audio users - those using frameworks like JACK - have always wanted more direct access to realtime scheduling, though. That access has, for some years, been provided through resource limits. Now it seems that a feature merged for the 2.6.25 kernel is, two years later, beginning to cause grief for some JACK users. The resulting discussion is an interesting illustration of technical differences, how long it can take for new features to filter through to users, and how one should best deal with the kernel development community.
The combination of the RLIMIT_RTPRIO and RLIMIT_RTTIME resource limits allows the system administrator to give specific users the ability to run tasks with realtime priority for a bounded period of time. The feature is easily configured in /etc/security/limits.conf and will prevent casual users from locking up the system with a runaway realtime process. This feature is limited in its flexibility, though, and is relatively easy to circumvent, so it has never been seen as an ideal solution.
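For readers who have not used this mechanism, a limits.conf fragment granting such bounded realtime access might look like the following; the group name and values are purely illustrative:

```
# /etc/security/limits.conf -- illustrative values
# Members of the "audio" group may use realtime priorities up to 95,
# but a runaway realtime task is killed after 200ms of CPU time
# (rttime is expressed in microseconds).
@audio   -   rtprio   95
@audio   -   rttime   200000
```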
The better way, from the point of view of the scheduler developers, is to use realtime group scheduling. Group scheduling uses control groups to isolate groups of processes from each other and to limit the degree to which they can interfere with each other; there has been an increase in interest in group scheduling recently because this feature can be used to improve interactivity on loaded systems. But group scheduling can also be used to give limited access to realtime scheduling in a way which cannot be circumvented and which guarantees that the system cannot be locked up by a rogue process. It is a flexible mechanism which can be configured to implement any number of policies - even if the full feature set has not yet been implemented. More information on how this feature works can be found in sched-rt-group.txt in the kernel documentation tree.
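As a sketch of how such a configuration might look with the v1 "cpu" controller (all paths, names, and values here are illustrative, and CONFIG_RT_GROUP_SCHED must be enabled in the kernel):

```
# Give a hypothetical "jack" control group a realtime budget of
# 800ms out of every 1s period, then move a task into it.
mkdir /sys/fs/cgroup/cpu/jack
echo 1000000 > /sys/fs/cgroup/cpu/jack/cpu.rt_period_us    # 1s period
echo  800000 > /sys/fs/cgroup/cpu/jack/cpu.rt_runtime_us   # 800ms RT budget
echo  $$     > /sys/fs/cgroup/cpu/jack/tasks               # move this shell in
```

A task outside any group with a nonzero rt_runtime_us budget cannot enter a realtime scheduling class at all, which is exactly the behavior described below.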
If realtime group scheduling is enabled in the kernel configuration, access to realtime priority based on resource limits is subordinated to the limits placed on the control group containing any given process. So if a process is run in a control group with no access to realtime scheduling, that process will not be able to put itself into a realtime scheduling class regardless of any resource limit settings. And that is where the trouble starts.
The kernel, by default, grants realtime access to the "root" control group - the one which contains all processes in the absence of some policy to the contrary. So, with a default setup, processes will be able to use resource limits to run with realtime priority. If, however, (1) the libcgroup package has been installed, and (2) that package has been configured to put all user processes into a default (non-root) group, the situation changes. The libcgroup default group does not have realtime access, so processes expecting to be able to run in a realtime scheduling class will be disappointed.
As it happens, the upcoming Ubuntu 11.04 release installs and configures libcgroup in just this mode. That causes trouble for Ubuntu users running JACK-based audio configurations; audio dropouts are not the "perfect 10" experience they had been hoping for. In response, there has been quite a bit of complaining on the JACK list, most of which has been aimed at the kernel. But it is not, in fact, a kernel problem; the kernel is behaving exactly as intended - a fact which has not made JACK developers feel any better.
As libcgroup developer Dhaval Giani pointed out, there are a few ways to solve this problem. The easiest is to simply turn off the default group feature with a one-line configuration change; only slightly less easy is enabling realtime access for that default group. The best solution, according to Dhaval, is to create a separate control group for JACK which would provide realtime access to just the processes which need it. That solution is slightly trickier than he had imagined, mostly because JACK clients are not necessarily started by JACK itself, so they won't be in the special JACK group by default. There are ways of getting around this difficulty, but they may require Linux-specific application changes.
The JACK developers were not greatly mollified by this information; in their view, audio developers have been getting the short end of the stick from the kernel community for years, and this change is just more of the same. They would, it seems, rather stick with the solution they have, which has been working for a few years now. As Paul Davis put it:
Many of the other thoughts expressed on the list were rather less polite. The audio development community, it seems, feels that it is not being treated with the respect that it deserves.
It is true that the audio folks have had a bit of a hard time of it. They have made a few attempts to engage with the kernel community which have been less than successful; since then, they have mostly just had to accept what came their way. And what has come their way has not always been what they felt they needed. As expressed by Alex Stone, the audio community clearly feels that the kernel developers should be paying more attention:
Sort of confirms the indifference to jack/RT as a significant component in the linux audio/midi/video world, doesn't it?
One other sentence in Alex's message deserves special attention, though: "If we don't yell, we don't get considered?" The answer to that question is "yes." The kernel serves a huge community of users, many of whom are represented within the kernel development community. It is entirely unsurprising that groups which don't "yell" tend to find that their needs are not always met. Any group which declines to participate, feeling instead that it's so important that kernel developers should come to them, is bound to be disappointed with future kernels. We all have to yell when our toes are stepped on; the sooner we yell the better the results will be.
That said, no amount of yelling at the kernel will help when the problem is elsewhere. Ubuntu has created a configuration in which allowing unprivileged access to realtime scheduling requires a bit more administrative work than it did before. Fedora, which also installs libcgroup, has, perhaps accidentally, avoided this problem by not enabling the "default group" option. So one might say that Ubuntu would be an appropriate target for any yelling on this topic. But increased use of control groups is clearly on the horizon for a number of distributions; systemd depends on them heavily. So the realtime audio community will need to work with control groups, like it or not. The good news is that control groups provide the needed features, and they do it in a way which is more secure and which allows more control over policy.
The JACK community seems to have figured this out; there have already been some patches posted to give JACK an understanding of control groups. It would also appear that the libcgroup developers are working on the problem in the hope of producing a solution which doesn't require application changes. Then, hopefully, Linux audio developers will have a solution which they can expect to rely on for many years (though they will want to keep an eye on the progress of the deadline scheduling patches). Certainly this kind of solution is something they have been wanting for a long time.
(Thanks to David Nielson for the heads-up).
The idea behind Frederic's patch set is to enable a process to disable the timer interrupt while it is running. If a set of conditions can be met, this will allow the process to run without regular interference from the timer tick. If other sources of interrupts are directed away from the CPU as well, this process should be able to run uninterrupted for some time. There are a few complications, though.
Actually going into the tickless mode is relatively easy; the process need only write a nonzero value to /proc/self/nohz. The patch imposes a couple of conditions on these processes: (1) the process must be bound to the CPU it is running on, and (2) no other process can be running in the tickless mode on that CPU. If those conditions hold, the write to /proc/self/nohz will succeed and the kernel will try to disable the timer tick while that process runs.
The key word here is "try"; there are a number of things which can keep the disabling of the tick from happening. The first of those is any sort of contention for the CPU. If any other processes are trying to run on the same CPU, the scheduler tick must happen as usual so that decisions on preemption can be made. Since a process can be made runnable from anywhere in the system, Frederic's patch performs a potentially expensive inter-processor interrupt whenever a second process is made runnable on any CPU, regardless of whether that CPU is currently running in the no-tick mode or not.
Another thing that can gum up the works is read-copy-update (RCU). If there are any RCU callbacks which need to be processed on the CPU, that CPU will not go into the no-tick mode. RCU also needs to be notified whenever the CPU goes into a "quiescent state," so that it can know when it is safe to invoke RCU callbacks on other CPUs. If RCU has indicated an interest in knowing when the target CPU goes quiescent, once again, no-tick mode cannot be entered. The CPU can also be forced out of the no-tick mode if RCU develops a curiosity about quiescent states anywhere in the system.
Given that RCU is heavily used in contemporary kernels, one would think that its needs would prevent no-tick mode most of the time. Another part of the patch set tries to mitigate that problem with the realization that, if a process is running in user space with the timer tick disabled, the associated CPU is necessarily quiescent. When a CPU is running in this mode, it will enter an "extended quiescent state" which eliminates the need for notification to the rest of the system. The extended quiescent state will probably increase the amount of no-tick time on a processor considerably, but at a small cost: the architecture-level code must add hooks to notify the no-tick code on every kernel entry and exit.
Reviews of the code, so far, have focused on various details which need to be managed differently, but there has not been a lot of criticism of the concept. It's early-stage code, so it doesn't take care of everything that normally happens during the timer tick, a fact which reviewers have pointed out. The biggest gripe, perhaps, has to do with the conditions mentioned at the beginning of the article: the process must be bound to a single CPU, and there can only be one no-tick process running on that CPU. Peter Zijlstra said:
Frederic has indicated that the code can be changed to lift those restrictions, but at the cost of some added complexity. Once the restrictions are gone, it may make sense to just enable the no-tick mode whenever the workload is right for it, regardless of a request (or the lack thereof) from any specific process. That would make the no-tick mode more generally useful; it would also reduce the role of the timer tick just a little more. The kernel would still be far from a fully tickless system, but every step in that direction helps.
Patches and updates
Core kernel code
Filesystems and block I/O
Benchmarks and bugs
Page editor: Jonathan Corbet
Those who want to install alternative firmware on their router generally pick OpenWrt, DD-WRT, or Tomato, but Eric Bishop found their web interfaces to be too focused on power users. So he started tinkering with OpenWrt and built a new web interface on top of it. That became Gargoyle, which had its first stable release in July 2009. The project is meant for average users and focuses a lot on usability, but that doesn't mean it's short of features.
Eric started Gargoyle because there really wasn't an open source router firmware replacement that was easy to use. Gargoyle is a web front-end to OpenWrt, which makes it comparable to other projects like X-Wrt and LuCI (OpenWrt's new web interface). The latter two projects, though, want to provide maximum functionality in their web interface. According to Eric, they are designed to be easy for developers to improve, which means that it's easier to add new features. As a result, both X-Wrt and LuCI tend to be quite feature-rich, but aren't necessarily very easy for the typical end user to figure out:
Tomato and DD-WRT provide the source code to their web interfaces, but the license prohibits the distribution of modified versions without the author's permission, and thus neither project qualifies as open source. In contrast, the Gargoyle web interface is completely open source: it's released under the terms of the GPLv2, with a clarification that permits adapting the web interface to configure proprietary back-end software, provided that all modifications to the web interface portion remain covered by the GPL. The rationale behind this clarification is that it makes Gargoyle more attractive for companies to use in their hardware.
Gargoyle is based on the most recent Kamikaze (development) release of the OpenWrt firmware. It is even possible to install Gargoyle as a set of packages on top of an existing OpenWrt installation (with a simple opkg install gargoyle command after adding the Gargoyle repository to /etc/opkg.conf). But the project's web site also has some images for routers that have Broadcom or Atheros chipsets and use the MIPS architecture, which includes many popular routers. Full details about which routers are supported can be found on the OpenWrt wiki. If the router is supported by OpenWrt but Gargoyle doesn't have an image for its architecture, you have to build the image yourself. Installation instructions for some popular routers such as the Linksys WRT54G family and the Asus WL500G Premium are fairly straightforward, typically just involving the router's reset button, a computer with an Ethernet cable, and tftp to upload the firmware image. Interested users can choose to download Gargoyle's stable branch (currently 1.2.5) or the experimental branch (currently 1.3.8).
After a successful installation, the user connects to the router with an Ethernet cable, after which the router's web interface is accessible at http://192.168.1.1 or https://192.168.1.1 with a default administrator password. Gargoyle also allows SSH access by default for "root" with the same default password. After the first login into the web interface, the user is asked to change the root password, which is a smart move. The next page gives the user the choice between configuring the router as a gateway (if it's connected to a DSL or cable modem) or as a wireless bridge/repeater. Below this are the WAN and LAN options, and at the bottom the user configures the wireless network for things like the SSID, encryption type, and password/key. After that, the Ethernet cable is no longer needed.
The available settings are divided into five menus in a sidebar on the left of the page: Status, Connection, Firewall, System, and Logout. The base settings that the user entered after installation are found under "Connection->Basic", but other submenus of the Connection menu provide ways to configure DHCP, dynamic DNS, and routing. The Firewall menu name is a bit of a misnomer, as it covers all of the settings involving ports and restrictions. For example, this is the place where port forwarding and Quality of Service (QoS) are set up, as well as bandwidth quotas.
The latter is an especially interesting and unique feature that is not often found in open source router firmware: it allows the user to restrict specific computers to download or upload a specified amount of data. The settings are very flexible: administrators can choose to restrict the quotas only on specific days or hours and they can configure how often (hourly, daily, weekly, monthly) along with the hour at which the volume restrictions are reset. In the latest experimental branch, administrators can also throttle bandwidth when a device's bandwidth quota is reached, allowing a lower level of service in that case instead of blocking all network access. There's another interesting submenu, Restrictions, that goes further and can restrict all network access for specific time periods or block specific ports, protocols, or website URIs. All in all, these features are intuitive to use and perfect for, say, restricting your children's devices.
Where Gargoyle really shines in comparison to other open source router firmware are the graphs in the Status menu. The submenu "Bandwidth Usage" shows graphs of the bandwidth usage over the WAN interface — by default for the last 15 minutes, but the granularity of the view can be changed to 6 hours, 24 hours, 30 days, or a year. Moreover, it's also possible to show the bandwidth usage of up to three individual hosts in the same graph. At the bottom of the page, the same information is shown in tabular form and there's even a button to download the data as a CSV file, ready to be processed by other tools. Another interesting submenu of the Status menu is "B/W Distribution", which shows the relative use of the bandwidth by all network clients in a pie chart. The granularity of the time period can be changed for that chart as well.
The fact that Gargoyle is based on OpenWrt has the advantage that most of the tips and tutorials for OpenWrt also work on Gargoyle. So you don't have to sacrifice functionality for usability. If the web interface doesn't expose a specific function, just log in as root via SSH, install the needed packages, and run the right commands. There are around 1500 packages available to install using the opkg package manager.
Gargoyle isn't that well-known, so it shouldn't be a surprise that it doesn't have that many developers. Other than Eric, there is just one person who has been consistently contributing to the project: Paul Bixel. He is primarily interested in the QoS functionality in Gargoyle, and Eric is excited about Paul's main contribution:
The active congestion controller makes the QoS feature, which divides the available bandwidth between different classes of traffic, more flexible. The problem with QoS is that in order to allocate, for example, 25 percent of available bandwidth to HTTP traffic, the user needs to know how much bandwidth is available. According to Eric, all QoS schemes — including those in Tomato, DD-WRT, and OpenWrt/LuCI — have a setting where users need to enter the total amount of bandwidth that's going to be divided between the different classes of traffic. If ISPs provided a constant minimum amount of bandwidth to their customers this wouldn't be so bad — you would just enter whatever that amount is and move on. However, the amount of available bandwidth is usually not constant. Depending on how busy the ISP is at a given time, bandwidth available to an end user can fluctuate dramatically. The active congestion controller addresses this issue, Eric explains:
Gargoyle's QoS functionality and the active congestion controller are not just web-interface front ends; both involve code deep in the OpenWrt/Gargoyle stack. These changes have not been sent upstream but, as all of the code is GPL, anyone could add them to OpenWrt. However, the active congestion controller depends on the Gargoyle QoS strategy, which differs substantially from OpenWrt's, so this feature cannot be used in OpenWrt without also adopting Gargoyle's QoS code.
Besides Eric and Paul, a number of people have made smaller contributions, such as Artur Wronowsky, who implemented the Wake-on-LAN functionality that will appear in the next release, and Cezary Jackiewicz, who translated the entire interface into Polish. Unfortunately, the latter came in the form of a huge patch that only supports Polish; Eric wants to implement proper internationalization support in the experimental 1.5 branch some time after the stable 1.4 branch has been created.
According to Eric, the best way to contribute to Gargoyle is to clone his GitHub mirror of Gargoyle, commit your fix, and send him a pull request: "That makes it really easy for me to review changes, and merge them into the main repository."
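That workflow can be sketched as below. The real first step would be cloning the GitHub mirror; here a throwaway local "upstream" repository stands in for it (and the branch name, file, and commit message are invented) so the steps can be run anywhere:

```shell
#!/bin/sh
# Local demonstration of the clone / branch / commit / push flow that
# precedes a pull request. A throwaway local repository stands in for
# Eric's GitHub mirror; branch and file names are invented examples.
set -e
tmp=$(mktemp -d)
cd "$tmp"

# Stand-in for the GitHub mirror:
git init -q upstream
git -C upstream config user.email you@example.com
git -C upstream config user.name "You"
git -C upstream commit -q --allow-empty -m "initial"

git clone -q upstream gargoyle   # 1. clone the mirror
cd gargoyle
git config user.email you@example.com
git config user.name "You"
git checkout -q -b my-fix        # 2. do the work on a topic branch
echo "fix" > fix.txt
git add fix.txt
git commit -q -m "Fix something" # 3. commit the fix
git push -q origin my-fix        # 4. push, then open the pull request on GitHub
```

The topic branch is what the pull request is opened from; keeping each fix on its own branch is what makes the changes easy to review and merge.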
There isn't really a concrete roadmap for the project, but Eric explains that we'll see a new stable 1.4 branch within the next month or two.
In the longer term, Eric has been meaning to implement a captive portal, a technique that forces clients on the network to authenticate through a web page before they are able to use the network normally.
Your author has been using Gargoyle on his router at home for over a year and is rather surprised that Gargoyle is not as well-known as other router firmware. Indeed, it has a unique combination of properties: it's completely open source, it's easy for casual users, it offers pretty graphs, it has a flexible bandwidth-quota system, and the active congestion controller being worked on seems like a nice piece of technology. Granted, the development team is small, but it has a clear vision. Moreover, it's all based on OpenWrt, so there's a plethora of packages and documentation available.
Fedora

"I've known David for a number of years, and have no doubt that he will do a fantastic job. He's proven himself as an outstanding Fedora Ambassador and mentor, and shown his ability to be effective and tactful in his communications. He has also shown tremendous dedication and loyalty to the Fedora community."
SUSE Linux and openSUSE

A summary of the December 15 meeting of the openSUSE Board is available. Topics include the introduction of the new openSUSE Board chairman, Alan Clark, foundation creation, and membership approval concerns.
Newsletters and articles of interest

A review of a preview release of the forthcoming Jolicloud 1.1: "Netbooks are the obvious target of this distribution, and by default, it's setup as a browser for website and cloud based applications. However, it's easy to expand, and I think this could be a distribution with a lot of uses. It's possible to add applications, and it can also be installed on any hardware that standard Ubuntu can including desktop PCs. Even better, as well as focussing on convenience, it's easy to use, meaning that it might be a good platform for people who aren't very good at using computers."

Over on Linux Planet, one writer reveals her favorite distributions: "Arch is my new favorite no-frills Linux. Arch is well-maintained, and the one big feature that sets it apart from all other Linuxes is the Arch Linux Wiki. This is the best-documented Linux distro of all. Rather than wasting energy continually re-inventing poorly-designed GUI interfaces in place of good howtos, Arch relies on sensible design and good documentation. It is sleek, clean, and efficient, and thanks to good design and documentation it is easy to learn. It fits any role well-- desktop, server, router, and I like it as an audio production platform. It makes the most out of modest hardware, and supports a full range of audio applications."

Another "top 5" list features Arch, Salix, Slackware, Debian, and Unity Linux: "The Unity project, not to confuse with the desktop environment, has had their first full release in July 2010 and have recently updated with a second point release. I like small distributions that provide a minimal base for a custom install, and Unity excells at that. It has been designed with explicitly this aim in mind, while providing users with the Goodies that is the Mandriva set of tools, known as, or better combined in, the Mandriva Control Center."
Page editor: Rebecca Sobol
It's taken far longer than originally expected, but Xfce 4.8 is nearly here. Originally due in April, and then June, the 4.8 release is making slow and steady progress towards a final release. The second preview release (4.8pre2) came out on December 6th and is looking fairly solid. Xfce 4.8 is a modest update, but this release cycle has brought much more than a few features and bugfixes.
Xfce is meant to be a lightweight desktop environment, which is modular and compliant with standards from freedesktop.org. It's popular on Linux, but is meant to be run on just about any Unix-like OS. Xfce uses even version numbers to indicate stable releases, and odd version numbers to indicate development releases — much like GNOME. This is not accidental, since Xfce started using GTK+ and Glib from the GNOME project during the Xfce 3.0 cycle.
Xfce 4.8 is a relatively minor update on the surface. It doesn't bring extensive user interface changes like GNOME 3.0. But Xfce also has a much smaller developer community, and the 4.8 cycle has been plagued with developers bowing out of the project for one reason or another.
Jannis Pohlmann, one of the Xfce maintainers, addressed the delays in a post on his blog in January. This was not the first time that an Xfce release had been well past the release date. The 4.6 release was also delayed, and wound up being two years in the making when it was released in February 2009.
The developers have been busy. During this release cycle, many of the core components have been rewritten or replaced. For instance, the Xfce Panel was completely rewritten. The rewrite should provide much better support for users working with multi-head setups, as well as better launcher management. HAL and ThunarVFS have been removed or relegated to legacy status, and support for GIO, PolicyKit, ConsoleKit, and udev has been added.
In addition, Xfce replaced its old UI library (libxfcegui4) with a new library called libxfce4ui. This, of course, required other components to be ported to the new library. And the port to GIO also caused delays. With great changes come great delays in development cycles.
This release cycle also saw a transition to Transifex for Xfce translations. As of August, Xfce had received 4,012 submissions in 45 languages from 101 users in Transifex. Xfce also migrated to Git during the 4.8 development cycle, which probably slowed work a bit and caused at least one contributor to move their project to SourceForge rather than having to learn Git.
Finally, the release process underwent a revision to allow sub-projects (like the Thunar file manager, or the panel, window manager, etc.) to release separately. Though this release has been slow in coming, the idea is that future releases will be easier to manage without requiring all components to release simultaneously.
Pohlmann notes that the Xfce development team is "very small," and that the maintainer of three core components (xfdesktop, xfconf, and xfce4-session) was leaving due to a new job. Two existing Xfce maintainers stepped up to share responsibility for xfdesktop and xfce4-session, but Pohlmann also notes that his university work mostly limits his contribution to communicating about the status of the project rather than much hacking. In short, more developers would be welcome.
So would a little cash. Unlike GNOME or KDE, Xfce is a fairly informal project, without ready funds to support developer gatherings or other activities — at least for now. In October, Pohlmann announced his intent to form a non-profit for Xfce in Germany. Why Germany? Pohlmann says that it doesn't matter much where it's registered for the purposes of donations, and "there are a number of German Xfce contributors and users, so chances are good that there will always be someone to take care of things." The foundation is still in the works, but one hopes it will be finished in the early part of 2011.
It's been a while since I've spent any time using Xfce, and my first impression is that very little has changed in my absence. The desktop doesn't look any different than I remember it; testing the packages on Xubuntu, it would be easy to mistake Xfce for GNOME 2.x at first glance. To get the full effect, I got rid of the default Xfce configuration and ran the first-run setup wizard. It's hard to believe that this desktop was once a clone of the ugly duckling Common Desktop Environment (CDE).
Xfce is not quite as full-featured as GNOME or KDE, but then again, it's not meant to be. The basic desktop consists of the Xfce panel or panels, the desktop session, the Thunar file manager, and the Xfwm4 window manager. Everything "just works" without really getting in the way. Adding new launchers to the panel, or otherwise modifying the panel, works without any problem.
One longstanding complaint about Xfce is the lack of a proper menu editor. This release doesn't include a native Xfce menu editor, but it's now possible to use GNOME's Alacarte menu editor to edit the Xfce Panel menu. Whether the Xfce project will whip up its own menu editor at some point seems unclear, but there doesn't really seem to be any need — Alacarte does the job just fine.
Most of the changes in Xfce 4.8 are invisible, or nearly so, to the user. Yes, you can now use Gigolo to easily connect to remote and local filesystems, which is new. No, you really don't want to know why it's called that.
Thunar now has a "Network" item in the side panel, and the Trash icon is now optional. The panel length can be set as a percentage of the desktop it should consume, and some improvements have been made for vertical placement of the panel. Users won't notice, but Xfce now uses ConsoleKit to handle shutdown and startup. In general, there are lots of minor changes that one has to dig to notice. This is not a bad thing, though; Xfce wasn't in need of radical changes.
The final Xfce 4.8 release is scheduled for January 16, 2011, and it should appear in the next releases of all the major distributions that ship Xfce (Xubuntu 11.04, openSUSE 11.4, Fedora 15, etc.). If you're already using Xfce, there's no rush to upgrade — the changes are subtle enough that most users won't notice them unless a specific bug (or the inability to edit the menus) has been particularly annoying. It does look like a solid, no-frills release, though — and a welcome option for Linux users who want an old-school desktop environment that's fast and relatively light on resources.
Newsletters and articles

An ITPro article (obnoxiously split into six parts) compares GCC and LLVM from a licensing point of view. "In other words GCC is constructed in such a way that those who wish to provide extensions with licences that are incompatible with the GPL and copyleft are persuaded to contribute the software back to the community in the shape of the GPL - and this has been beneficial to the community - in that it has opened up architectures and languages that might not otherwise have been available to other users of GCC."
Page editor: Jonathan Corbet
Brief items

MariaDB has published a draft trademark policy. They try hard to cover all the bases. "Typical fair use of the trademarks is expected and no specific permission from us is needed. MariaDB is built by and for its community. We share access to the trademarks with the entire community for the purposes of discussion, development, and advocacy. Anyone should feel free to mention MariaDB or display our project's logos."

The Document Foundation has joined the Open Invention Network: "The Document Foundation is a major free software project, and LibreOffice a key office suite for creating, managing and sharing documents. By becoming a licensee of the Open Invention Network, we fight software patents - which stifle innovation and encourage predatory business practices - and at the same time we improve the protection of our software projects."

KDE has also taken an OIN license: "We view an OIN license as one of the key methods through which open source innovators can deter patent aggression," said Adriaan de Groot, vice president of KDE. "We are committed to freedom of action in Linux, and in taking a license we help to address the threat from companies that support proprietary platforms to the exclusion of open source initiatives, and whose behaviors reflect a disdain for inventiveness and collaboration."

The Software Freedom Conservancy has announced that the Git project has joined the Conservancy. "By joining the Conservancy, Git obtains the benefits of a formal non-profit organizational structure while keeping the project focused on software development and documentation."

The Linux Foundation has adopted an anti-harassment policy for its events. "Those of us who have attended Linux Foundation events will probably agree that their policy simply puts into writing what they were already doing. Other organizations which already have strong agreement on both standards of behavior and internal decision-making may be interested in adopting Linux Foundation's simpler, streamlined policy."

The European Commission has published a document aimed at promoting interoperability in the European public sector. "The document is the result of a prolonged and hard-fought process. Free Software Foundation Europe accompanied this process and offered input to the European Commission at various stages."

FSFE reports on its PDFreaders campaign: "Only one month after the letters for the PDFreaders campaign of FSFE were sent, 172 public institutions have removed advertisements for proprietary PDF readers from their websites."

The Ubuntu font family is now available from the Google Font Directory: "Through the magic of the Google Font API any web designer can now pick Ubuntu from the Google Font Directory and bring the beauty and legibility of the Ubuntu fonts to their web properties."
Articles of interest

A long discussion of why its organizer stepped up to run FOSS.in 2010 after saying that he was done. The key information is at the end, though: "So yes, you read correctly: There won't be a FOSS.IN next year. FOSS.IN/2010 is the last one. This is Team FOSS.IN's swansong." Here's hoping it doesn't turn out that way in the end; it would be sad to lose this important event.

An article covers the launch of an official YouTube channel for Google's Open Source Programs Office (OSPO). "According to Google Open Source Team member Ellen Ko, the new channel is aimed at organizing videos related to Google and other open source projects in a single place."

A report that CPTN Holdings LLC, which acquired the Novell patents (or will, when and if the sale closes), is owned by four industry heavyweights: "Twitter user @VM_gville just pointed me to the website of the German federal antitrust authority ("Bundeskartellamt"), which discloses a merger (or more precisely, joint venture) notification filed a week ago (on 09 December 2010), according to which the four companies behind CPTN Holdings LLC -- the acquirer of 882 Novell patents -- are Microsoft, Apple, EMC, and Oracle. The product market in which the newly formed company plans to operate is defined as "patents"."

An interview with Attachmate CEO Jeff Hawn: "What is Attachmate's history with open source projects? Attachmate does not have a corporate track record in the open source business. However, we recognize the importance of open source technology, particularly Linux, and the growing value it brings to enterprises globally. We also recognize and value the openSUSE project, the contribution that the community makes to the SUSE business and most importantly, the many ways in which the community benefits SUSE customers."

Coverage of Novell's Dister award winners: "Novell has announced the winners of its first annual 'Dister' awards, which celebrate "innovators and inventors" who use SUSE Studio to build creative SUSE Linux-based software offerings. Novell is handing out two $10,000 grand prizes to two companies: Radical Breeze and Anderware. Here is what they built, and how open source-focused incentive programs like this can really succeed."
Resources

The release of a report on "the international status of open source software 2010" has been announced. The report is a 150-page PDF file looking at open source use across the planet. "The result of this analysis is the identification of the factors that account for the differences in maturity and penetration of open source software in the different geographical regions. Among these factors, we must highlight the key role of Public Administrations in promoting open source software, both by developing policies to promote and encourage its use and by becoming a key user of this software, as happens in those European countries most advanced in the use and development of free technologies. Other factors that explain the different maturity levels among countries are the level of education and the access their citizens have to the information society. In this regard, as a result of its high level of technical training, India shows a high level of open source software development, despite the limited access the general population has to the information society."
Calls for Presentations
| ||PyPy Leysin Winter Sprint||Leysin, Switzerland|
|January 22||OrgCamp 2011||Paris, France|
| ||linux.conf.au 2011||Brisbane, Australia|
| ||Southwest Drupal Summit 2011||Houston, Texas, USA|
|January 27||Ubuntu Developer Day||Bangalore, India|
| ||FUDCon Tempe 2011||Tempe, Arizona, USA|
| ||Cloud Expo Europe||London, UK|
| ||FOSDEM 2011||Brussels, Belgium|
|February 5||Open Source Conference Kagawa 2011||Takamatsu, Japan|
| ||Global Ignite Week 2011||several, worldwide|
| ||Red Hat Developer Conference 2011||Brno, Czech Republic|
|February 15||2012 Embedded Linux Conference||Redwood Shores, CA, USA|
|February 25||Build an Open Source Cloud||Los Angeles, CA, USA|
| ||Southern California Linux Expo||Los Angeles, CA, USA|
|February 25||Ubucon||Los Angeles, CA, USA|
|February 26||Open Source Software in Education||Los Angeles, CA, USA|
If your event does not appear here, please tell us about it.
Page editor: Rebecca Sobol
Copyright © 2010, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds