
LWN.net Weekly Edition for April 29, 2010

Fedora, Mozilla, and trademarks

By Jake Edge
April 28, 2010

Trademarks and free software can make a volatile mix. It is understandable that a project would want to ensure that code shipping under its name is "the real McCoy", but modifying the source and distributing the result is a hallmark of free software. Trademark policy can place limits on what changes—for bugs, features, or even policy compliance—downstream projects can make and still use the trademarked names. The tension between the two has led some, like Debian, to re-brand Mozilla projects, so that they can ship the changes they want; some Fedora developers would like to see that distribution follow suit.

A Thunderbird crashing bug, reported by Felix Schwarz to the fedora-devel mailing list, is the proximate cause for the current controversy. Numerous Fedora users were running into the bug, and it had been patched upstream for several weeks, but there had been no release of Thunderbird for Fedora to fix the problem. Schwarz reported that the patch fixed the crash for him and others, and asked that it be pushed out: "However it is still not fixed in Thunderbird F-12 CVS. Can you please push the fix to CVS and push builds to testing/stable?"

Martin Stransky, one of the Fedora Mozilla maintainers, noted that "we're patching mozilla packages only for really critical issues because of mozilla trademarks", which caused concern that the trademarks were causing Fedora to ship a buggy Thunderbird. While the patch was available in the upstream repository, it hadn't been merged into the branch for the next release. Stransky said that he had requested, in Mozilla's bugzilla entry, that the fix be included in the next Thunderbird release, but that wasn't sufficient for some.

It turns out that the bug had been reported back in early March, but wasn't given a very high priority by the Thunderbird developers for two reasons: they couldn't reproduce it and it was not showing up with any frequency in their crash statistics. Meanwhile, since early April, it was crashing fairly frequently for Fedora users, leading to multiple bugs being filed, all of which were eventually collapsed into one with numerous commenters and "me too" posts.

But the idea that the trademark policy might prevent Fedora from shipping a working Thunderbird led to calls for rebranding the mail client (and the Firefox web browser) with different icons and names, as Debian has done. In fact, adopting the "Iceweasel" and "Icedove" names, if Debian is amenable, was one of the suggestions. Ralf Corsepius put it this way:

Thanks for providing evidence of how trademarks are being applied to void the benefits of "open source".

The obvious logical consequences of what you say would be
* either to remove the packages you are referring to from Fedora because they are effectively unmaintainable.
* or to remove the trademarks and re-brand the packages.

Fedora engineering steering committee (FESCo) member Kevin Kofler seems to be spearheading the effort to get out from under Mozilla's trademark policy. He agreed with Corsepius in a series of posts, which laid out multiple reasons that Fedora should consider renaming, including the Mozilla project's habit of bundling its own versions of system libraries—something that goes against Fedora packaging policies. He points to libpng as one example:

Another big issue is libpng: xulrunner is bundling a forked libpng for APNG [Animated PNG] support (which isn't even available for anything else to build against, so e.g. Konqueror can't support APNG). (APNG is a nonstandard extension to PNG which Mozilla is arbitrarily pushing instead of the existing MNG format which the PNG developers are supporting. [...]) Debian is patching it to use the system libpng (which removes APNG support, so it's unlikely to pass trademark approval, ever), we aren't. This is a blatant violation of our own packaging guidelines.

Several other examples were noted by Kofler, including Thunderbird bundling Gecko, rather than using the system xulrunner, and xulrunner bundling libffi. He also expressed frustration with the integration of Mozilla projects into desktops, in particular KDE. Overall, Kofler and others are completely fed up. Bruno Wolff III doesn't see any advantages to using the trademarks:

I don't see how using Mozilla trademarks provides significant benefit to Fedora. It seems to mostly benefit Mozilla. I don't see why we should be breaking our rules to help them.

Other posters defended Mozilla, but the loudest voices were clearly those who were unhappy with the status quo. Chris Tyler pointed out several reasons that Fedora should continue using the Mozilla trademarks, including the well-established Mozilla brands:

Mozilla's brands are very well-known: They have 350+ million users across multiple platforms (Windows, Mac, Linux), far more than we have in Fedora. The ability to use these apps in Fedora helps to assures new users that switching costs will be low.

Tyler also sees the trademark rules as reasonable to protect users from distributions that unintentionally introduce vulnerabilities when patching Firefox or Thunderbird. He is optimistic that things can be worked out:

Fedora has a great relationship with Mozilla. They're an amazing project filled with people that Get It, and we can work out issues with them in a cooperative way.

It is possible for Fedora to patch its version of Thunderbird or Firefox, but it must get approval from Mozilla for each patch. The team that maintains the Fedora packages for Mozilla projects only wants to go through that process for "really critical issues", as Stransky noted. Later in the thread, he outlines which issues qualify: zero-day vulnerabilities and crashes that affect everyone. Kofler is, unsurprisingly, not happy with that Mozilla policy either:

They're the ONLY Free Software upstream insane enough to require approval for EVERY SINGLE patch as a condition to use their trademark. Imagine if Linus Torvalds did that for the kernel Linux, or the GNOME and/or KDE developers for their desktop environments. It would be impossible to maintain a distro under such conditions! Why does Mozilla get that sort of preferential treatment?

Fedora also wants to control its trademarks, though, and has its own trademark guidelines which are substantially similar in spirit—at least—to those of Mozilla. Adam Williamson described it this way:

You can't modify Fedora under F/OSS principles and still call it Fedora, just like you can't modify Firefox under F/OSS principles and still call it Firefox. Both of us do this to protect the good name of the project. We'd be in an extremely glass house-y situation if we tried to 'call out' Mozilla over this. It'd be ridiculous.

Lead Mozilla package maintainer Christopher Aillon tried to clarify the situation somewhat. The "impact of the bug was misjudged", he said, which is frustrating users: "I think we have a responsibility to both Fedora and Mozilla to include a fix for it". He intends to get a fix into updates-testing for a few days to ensure there are no regressions for other users. He also defends the process that the packaging team uses:

The main purpose to get patches accepted by upstream before inclusion in Fedora is to make sure we are doing things the right way. For example, some patches may inadvertently break standards compliance, have ill side effects with JavaScript, may fix connecting to some mail servers at the expense of others, etc.

[...] We do have an agreement with Mozilla and as such, we are permitted to use the Firefox and Thunderbird trademarks. But even if we did not or it were decided those marks were not important to us, I strongly feel that we should continue do things the right way and get patches accepted upstream first.

Furthermore, Aillon stated that the trademark policy wasn't really an issue for this particular bug. In a comment on the ticket filed by Rahul Sundaram asking FESCo to look into the issue, Aillon said it was simply a misjudgment by the packaging team about the importance of the bug. FESCo discussed the matter at its April 27 meeting, but decided not to change anything with respect to the Mozilla packages.

There are a number of different issues swirling around this bug. It seems likely that if the packagers had noticed how many users were being affected, it would have been quickly patched and the larger issue might never have come up—or at least not this time. But, the problem that some—particularly strong free software advocates—have with Mozilla's trademark policy is not likely to go away.

There are some legitimate concerns regarding the ways in which the Fedora packaging guidelines are being routed around for Mozilla packages. There are also some odd, seemingly political, questions around the use of APNG, which will require Fedora to either patch APNG into libpng or ignore the "no bundled libraries" rule for Mozilla for the foreseeable future. It is, in short, something of a mess, but not enough of one to send Fedora down the Debian path.

There is hope that some of the other concerns that Kofler and others raised will improve in time. Tyler points to the recent addition of Fedora systems to the Mozilla test farm as a step in the right direction. Previously, CentOS systems, with older versions of the system libraries, were used. Testing on Fedora could well lead to better system integration as well as bundling fewer libraries in the Mozilla packages.

There doesn't seem to be any movement toward weakening the Mozilla trademark requirements, and that policy will always be anti-freedom to some. There are lots of other projects with much looser trademark guidelines, including high-profile ones like Linux itself. Some may feel that Mozilla is overreaching, but Fedora is in no position to lecture Mozilla about trademark policies. Those who are bothered by those policies will either need to avert their eyes or find another distribution.


Bringing open source to schools

April 28, 2010

This article was contributed by Joe 'Zonker' Brockmeier.

The best way to ensure the spread and success of open source is to introduce the next generations of users, and potential contributors, to open source at an early age. But this isn't trivial. Aside from software suitable for young users, it takes a lot of support materials to teach a class and spread the word to educators. One of the better documented attempts at reaching students is Máirín Duffy's eight-session Inkscape class and K12 Educator's Guide to Open Source Software.

The decentralized and loosely joined nature of the open source community has its advantages, but being easy to navigate isn't one of them. A case in point: While proprietary software companies have effective and plentiful resources to encourage educators to use their products, finding open source solutions is not as straightforward. This is particularly true for educators who are largely unaware of open source options. Duffy, an interaction designer for Red Hat and contributor to Fedora, has taken a stab at mitigating the problem by designing an eight-session Inkscape class for ten 7th grade students at Blanchard Middle School in Westford, Mass. and pulling together a guide for educators as well.

In October of last year, Duffy announced to the Fedora design team list that she'd be doing an Inkscape course at a local school on behalf of Red Hat, and was seeking help to put together the lesson plans. The class was held from early January through early February.

The course spans eight 40-minute sessions focusing on a single Inkscape project. Students learn about using Inkscape to create a logo for a band. Red Hat and EmbroidMe Chelmsford sponsored putting the logo on a T-shirt at the end of the project so students could actually have something to show from the course. In addition to the lesson plans and exercises developed by Duffy, the course materials also include the Inkscape manual from FLOSS Manuals and Tavmjong Bah's Inkscape: Guide to a Vector Drawing Program.

Even though the class is Inkscape-specific, the lesson plans and posts give some idea how to create a class from scratch or by adapting existing materials.

In the course of the project, Duffy also decided to create a two-page educator's guide. The guide and class materials are released under the Creative Commons Attribution 3.0 license, along with Inkscape files (for the guide) and ODF files (for the lesson plans), for anyone who would like to remix and redistribute the content. The guide also uses free and open source fonts, so there should be no barriers to redistribution or refining the guide.

The guide provides links to some prominent resources for open source in K12 (primary and secondary education in North America) such as the education channel on Red Hat's OpenSource.com, the Open Clip Art Library, and the Open Educational Resources Commons. It also points to sites for finding open source applications, like osalt.com, and the K12OpenSource.org application catalog.

Note that it is, as the title implies, a guide to open source and not a guide to Linux. As such, it is primarily a pointer to community resources and a few applications of interest. It also contains a few choice quotes from educators supporting open source, and links to the posts Duffy wrote to correspond with the Inkscape class she taught. Says Duffy:

In the end, functional and full-featured open source tools that can be used in the classroom today do exist, so this guide is meant to send that message as well as point to a little bit of the story of where those open source tools come from.

Though the guide and materials were created for the class, Duffy says that it should work for "any Red Hatter or really any person reaching out to K12 schools". Having these types of source materials can go a long way toward helping volunteers work with schools. Duffy says that "any time we can save those volunteers in having materials ready to go saves them time they can use to do more".

The students worked with Inkscape, but on Macs running Mac OS X and using Wacom tablets donated by Red Hat. Why not go all out and try to get the students and school using Linux as well? One thing Duffy cautions against is pushing educators too hard to embrace a fully open source stack all in one go:

An important thing to note is the guide does not reference ANY Linux distros - no openSUSE EDU, no Edubuntu, but also no Fedora Education spin or the Fedora Sugar on a Stick spin. The guide is focused on open source, not Linux. This is because as we found via direct experience in running our own open source outreach program with a local middle school - programs like this can be an educator's first experience with open source. If you push too hard for a 100% open source stack, you run greater risk of something going wrong and turning the educator / school off from open source entirely. It's better to take a measured approach - rather than coming in and changing the school district's operating system (which many times is simply not practically possible), it's better to vouch for cross-platform FLOSS applications like Inkscape and OpenOffice.org, taking a more gradual/measured approach.

Is there anything missing from the open source stack that schools need? According to Duffy, there's a gap when it comes to 2D animation applications:

If there is one thing in free & open source software that's missing that schools need, it's a simple, easy-to-use 2D animation program. It's the one thing the various teachers I've talked to always ask about, and I'm always sad to have no really great answer to give them. I know there are some projects working on this (e.g., Synfig) but we're not there yet.

It's hard to encompass the entire open source ecosystem for educators in two pages. Luckily, Duffy wasn't trying to. She points out that the guide is under a Creative Commons license, and she wanted to encourage others to build on it and revise it. What about a central source for this information? The guide points to several resources, but there's no over-arching authority to point educators to. Duffy acknowledges that there's not a site or project that includes all of the information educators need, but suggests that interested parties work with the K12 open source wiki referenced in the guide.

From the posts about the class, it seems that the sessions went well and the students took to Inkscape pretty happily. The choice of a relevant project that the students could become enthused about was probably a contributing factor, as was the fact that they were working with a creative tool like Inkscape. Having hands-on activities obviously helped a lot. One consideration Duffy mentioned is that it's important to plan plenty of time for exercises, as eight class periods is much less time than one might think.

Finally, Duffy encourages LWN readers to take the guide and lesson plans and run with them:

I'd like to ask your readers that, if they haven't considered it already, they should think about volunteering in a local school! It's extremely rewarding. Kids look at the world in a different way and that perspective shift is really refreshing and energizing.

This is encouraging work that has helped introduce ten students to open source and started them on the path to working with more open source software. However, there are still millions of school children in North America and around the world that haven't been reached yet. Duffy's guide and posts provide a good idea of what to expect when working with school kids and show that they take to FOSS pretty quickly when it is presented correctly. With any luck, Duffy's posts and example will help inspire more open source advocates to take the lead in the classroom and introduce kids to open source applications.


Barriers to London's open source adoption

April 28, 2010

This article was contributed by Tom Chance.

When the Greater London Authority, a city-wide body, started to look at free and open source software they had little to learn from other levels of British government. The pace of adoption has been glacial in the UK, despite recent interest in open data. Having rolled out cost-saving open source technology for some back-office systems and web sites, the GLA has found that partnership across the wider public sector introduces the biggest barriers. The need for government bodies to interface with each other has held back the aspirations of the UK's open source action plan.

The British Government and public bodies were slow in recognizing the benefits of free and open source software; the first, very vague, policy document promoting it was not published until 2004. Here in London there's little sign of progress six years on, as exposed by this question from politician Darren Johnson. In February he asked the Mayor of London how "the Greater London Authority and functional bodies [are] implementing the Government's open source action plan, recently re-issued by the Cabinet Office". Those "functional bodies" work on different parts of London's city governance: economic development, transport, policing, and fire and emergency services.

With the notable exception of the Greater London Authority (GLA) itself, which has "led the way in the use of Open Source Technology", the careful answers from the wider group of functional bodies betray the lack of movement. The London Development Agency "has not implemented any open source software to this point"; the Metropolitan Police Authority "review opportunities for the use of open source software... as part of their ongoing programme of continuous improvement"; Transport for London "eagerly awaits further direction as it emerges from Government".

A follow-up request by Johnson for proactive, published action plans from each of the bodies in the GLA Group was met with another vague promise to "review the Open Source plans of all the functional bodies with a view to encouraging greater take-up and increasing the level of savings".

Open source technology in the GLA

The GLA's Technology team support the elected Mayor, Assembly, and approximately 700 staff, who in turn serve a capital city of 7.56 million people. While desktop users will mostly see the standard Microsoft offering — Windows XP, Internet Explorer, Office 2003 — much of the back-office kit in the basement has long run on Red Hat Enterprise Linux.

The GLA also started to use Drupal for building web sites, starting with a few custom-made interactive web sites such as the open Datastore and the public consultation for the Climate Change Adaptation Strategy. More recently, the main GLA web site moved over to Drupal, drawing the attention of the London Assembly committee tasked with scrutinizing the GLA's budget. Why? Because they were able to drop a web site project that had run up a £1.2m bill using proprietary software and move over to a new Drupal web site that cost £150k.

Drupal offered other benefits in addition to the low price tag. Drupal's extensive library of modules, which enable them to easily offer more interactive web 2.0 sites, made it the most attractive option from a technical point of view. This wasn't the case in 2006, when the GLA couldn't find an open source Content Management System (CMS) that was sufficiently powerful. According to evidence [PDF] laid out to the London Assembly committee by Daniel Ritterband, the GLA's Director of Marketing, Drupal offered other long-term advantages:

In 2009 the GLA had to make a decision on whether or not it was going to continue with a project that would keep it tied into a contract with a proprietary CMS that has relatively high development costs and a limited number of support agencies in the UK, or whether or not it should start again on a free, open source platform used by millions of people globally, which would deliver better long-term value to Londoners.

David Munn, GLA Head of Technology, is certain that cost has been the major driver in these changes, and not just in the obvious headlines such as web site procurement costs. Virtualization not only reduced the hardware and electricity bills for the servers, but also reduced the need for air conditioning by two-thirds. This change also hugely reduced their carbon emissions, following a report into the total lifecycle impacts of their IT that was finished last year. The high level of scrutiny from the elected Assembly, the media, and the public forces them to bear down on costs and reduce their carbon emissions.

Within the GLA the IT Strategy barely mentions free and open source software, although Munn is certain that the approach to reducing costs and carbon emissions strongly favors it. Adoption of Linux and Drupal has largely come about because of his enthusiasm for open source, which is shared by a few other key members of staff.

Barriers to open source in the public sector

While back-office systems and web sites have been easy, progress elsewhere has been much slower.

Munn's Technology team has to enable GLA staff to work with the other functional bodies in the GLA group. Microsoft SharePoint was deployed to share project rooms across organizations, for example. The Mayor's Office also needs ready access to Transport for London's CCTV network so that it can be fully involved in any emergency situations.

The other functional bodies have their own complex relationships. The Metropolitan Police, for example, link up with 42 other territorial police forces, the national Government and a complex tapestry of other law enforcement agencies. The Police manage a poorly understood suite of databases held by multiple agencies that are accessed using custom-made software; they typically exchange basic information using quite complex Microsoft Office documents that OpenOffice.org struggles to work with; and have a range of custom software for specialized needs such as the forensic services. So far, they claim, open source technology hasn't met their needs.

These complex needs make a move to open source software difficult for the GLA as well, due to their need to interface with the Metropolitan Police. Munn and his team did consider rolling OpenOffice.org out across their desktops, but the probable backlash from staff who might, for example, struggle to work with complex Microsoft Office documents outlining police budgets running to hundreds of pages outweighed the cost savings it would achieve.

Another problem that forces the GLA's hand is the move towards "shared services", where different public bodies run off the same technology platform in order to "provide the GLA Group with a more concerted and cost-effective approach to serving London" (according to an internal GLA document). For example, the GLA has recently moved onto Transport for London's procurement and finance systems. Transport for London (TfL), who run the Tube, other parts of the public transport network, and the main road network, is substantially larger than the GLA, with some 20,000 staff managing more than 27 million journeys a day. When TfL procured those systems it went for robust, industry-standard proprietary technology. Again, they claim that open source technology hasn't met their needs.

Now the GLA is locked into these systems, with a dedicated fibre-optic cable running between the two buildings.

The GLA did investigate downsizing their in-house Technology team and sharing TfL's services, but found it would be considerably more expensive. Their small size allows them to stay nimble and low-cost, in comparison with the complexity and caution implied by the size of TfL. Had they switched, they would in all likelihood have shut down the Linux-based back-office systems and moved onto TfL's proprietary stack.

Munn sees cloud computing as a possible way of levering in more open source software in the future. He observed that younger staff will increasingly be familiar with cloud technology suites from the likes of Google, and will find the idea of the full Microsoft office suite on the computer an anachronism. Where those cloud stacks are open source, or where they break up the stranglehold that Microsoft and other proprietary vendors have, we might see progress. But they could equally see a shift from one proprietary platform to another.

In the meantime, it seems that free and open source technology won't move into the front office of the British public sector until central government forces everyone's hand, overcoming network effects that prevent bodies like the GLA from moving first.


Page editor: Jonathan Corbet

Security

IPFire 2.5: Firewalls and more

April 28, 2010

This article was contributed by Koen Vervloesem

IPFire is a special-purpose Linux distribution that makes it easy to set up a firewall, in particular for users who want a secure gateway between the internet and their home or small-business network. IPFire has an administrative web interface that aims to be clear to beginners but at the same time doesn't ignore experienced users.

The project started in 2005 as an IPCop derivative, but the 2.x version moved to Linux From Scratch as its base. There's no company behind IPFire, and the project is developed by a small but active core of developers (five core developers and about a dozen 'community developers' who develop new add-ons), under the supervision of project lead Michael Tremer. The latest release is IPFire 2.5 - Core 37, based on Linux kernel 2.6.27.42.

Straightforward installation

Your author downloaded IPFire 2.5 - Core 37. The download page offers the image in various formats: an installable CD image (via HTTP or a torrent; it is only 77 MB), images for a USB stick, flash images for embedded x86 devices, and an image to run as a Xen guest.

IPFire has rather modest system requirements, but it's too heavy for real embedded use. A Pentium-class processor is the minimum, with a recommended clock frequency of at least 333 MHz. The distribution requires 128 MB of RAM but recommends 512 MB. The hard drive needs 1 GB, but 2 GB is recommended to leave room for log files. Of course, if the firewall also acts as a file server, the hard drive should be much bigger, but combining those two purposes in one machine is not the most secure setup. And last but not least, the system needs at least two network cards: one for the WAN connection and one for the LAN. Add a wireless card for a wireless network.

The installation instructions guide the user through the installation process, which is rather straightforward and takes just a couple of minutes. It lets the user choose a target drive and a filesystem (ReiserFS by default, Reiser4, or ext3), partitions and formats the hard drive, and installs all files.

Colorful configuration

The configuration phase starts with the normal options, such as the keyboard layout, timezone, hostname and domain, and the root and admin passwords. After this comes the network configuration, but this introduces some new terminology; for "network type" there are the following possibilities: "GREEN + RED", "GREEN + RED + ORANGE", "GREEN + RED + BLUE", and "GREEN + RED + ORANGE + BLUE".

By default, IPFire configures a network of type "GREEN + RED", which means that it knows two networks: a green network for the LAN, and a red network for the WAN. Users that want a separate network for wireless clients should add a blue network, and users that want a separate server network (a "demilitarized zone"), should add the orange network.

Next, each of the chosen networks is assigned a network interface, IP address, and subnet mask. For the red interface, this obviously depends on how the connection to the internet provider works. Users can choose from a static IP address, DHCP, or PPP dialup. In the following step, the DNS and gateway settings are entered, though if the red interface uses DHCP the correct values are already filled in. In the last step, it's possible to enable the DHCP server and enter the IP range for the LAN. Users can redo the whole configuration procedure later using the command-line program setup.
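IPFire generates its own netfilter rule set from these zone assignments, but the color model is easy to picture in raw iptables terms. The sketch below is purely illustrative: the interface names (green0, red0, blue0) and the rules themselves are this author's assumptions, not what IPFire actually emits.

```shell
# Illustrative sketch of the zone model in plain iptables terms.
# Interface names are assumptions (green0 = LAN, red0 = WAN, blue0 = WLAN);
# this is NOT the rule set IPFire itself generates.

iptables -P FORWARD DROP                       # deny forwarding by default

# Let replies to established connections flow back in
iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT

# GREEN (trusted LAN) may reach RED (the internet)
iptables -A FORWARD -i green0 -o red0 -j ACCEPT

# BLUE (wireless) may reach RED, but not GREEN; the default
# DROP policy keeps wireless clients out of the LAN
iptables -A FORWARD -i blue0 -o red0 -j ACCEPT
```

The orange DMZ zone would follow the same pattern: servers there can be reached from outside via port forwarding, but cannot initiate connections into the green network.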

Add-ons and security

IPFire can be extended with add-ons, which are installed through the Pakfire package manager. Available add-ons include security and network tools like Tripwire, Guardian, Snort, or SquidClamav, but also file servers like Samba or NFS, Voice-over-IP packages such as Asterisk or TeamSpeak, and some miscellaneous tools such as iftop, htop, or rsync. Right now, there are 97 add-ons available. Pakfire can be used from the web interface or from the console.
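On the console, Pakfire follows the familiar package-manager pattern. The commands below reflect this author's understanding of the syntax; consult the IPFire documentation for the authoritative forms.

```shell
# Typical Pakfire console session on an IPFire box (run as root).
# Command names are this author's recollection, not verified output.
pakfire update           # refresh the list of available add-ons
pakfire list             # show which add-ons can be installed
pakfire install samba    # fetch and install the Samba file server add-on
pakfire upgrade          # apply pending updates to installed add-ons
```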

For a distribution that claims a focus on security, it's a bit strange to see that users have to do a full core update to get security fixes, but according to Tremer, users don't have to wait that long for fixes:

We release a new core update in intervals of about four weeks, sometimes earlier, sometimes later. Users will find a short notice in the web interface and just need one click on the Pakfire page to close security issues. Updates for add-ons are distributed in the same way.

Web interface

[IPFire utilization]

Most of the features of IPFire can be configured using the web interface, available over the green network interface. The administrator has to log in using the admin username and password entered during the configuration. The web interface is subdivided into various "tabs": System, Status, Network, Services, Firewall, IPFire, and Logs. Each of them is again subdivided into different pages. The System tab contains IPFire's main settings: users can change the look and feel of the web interface, activate ssh access (which runs on port 222 by default to reduce brute-force attacks), and save the configuration of their IPFire installation to a file to restore it later. The latter page even has an option that creates an ISO image with the current settings, which can be burned to a CD in order to reinstall the complete system with the same settings. It's also possible to back up the settings of the add-ons.

The Status tab gives access to graphs and tables about virtually anything users want to know about their firewall. There are graphs of the CPU usage and load, of the memory and swap usage, of the network traffic on each interface, of port scans and ping response times, and so on. There's even a page that tracks hard disk and case temperature over time, and another with SMART information and disk usage. Last but not least, the Services page gives a nice overview of the running services along with diagrams of the memory usage of their processes. This page also allows the user to enable or disable the installed add-ons, which are mostly extra services.

The Network tab offers a lot of configuration options, from DHCP and DNS servers to Wake-on-LAN and a web proxy server (using Squid). The web proxy page is really extensive and allows some advanced settings, including a transparent mode which generates iptables rules with the nice effect that clients don't have to be configured for proxy use, and a URL filter that blocks access to specified web sites or web sites containing offensive words.
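The kind of rule generated by transparent mode might look something like the following (a hypothetical illustration: "green0" as the LAN interface name and Squid's default port 3128 are assumptions, not values taken from IPFire's actual generated rules):

```sh
# Redirect HTTP traffic arriving on the LAN interface to the local
# Squid proxy port, so clients need no proxy configuration of their own.
iptables -t nat -A PREROUTING -i green0 -p tcp --dport 80 \
         -j REDIRECT --to-ports 3128
```

Because the redirection happens in the kernel's NAT table before routing, the client believes it is talking directly to the remote web server.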

[IPFire services]

In the Services tab, users can configure the default services of IPFire. To create a virtual private network, IPSec or OpenVPN can be used. Dynamic DNS can be used to allocate a domain name to the dynamic IP address IPFire gets from the ISP, and IPFire is able to synchronize its time to an external server via NTP and serve its time to machines in the local network. On the Quality of Service page users can specify different classes of services and grant them a specific bandwidth usage, and the Intrusion Detection System page enables Snort to help detect malicious behaviour on the network.

The Firewall tab has settings for port forwarding, external access to the IPFire machine, and firewall rules for outgoing traffic. Add-ons can be installed in the IPFire tab. And last but not least, the Logs tab has pages with graphs and log files of a lot of services, and the behavior of syslog can be configured here.

All in all, the web interface gives access to a lot of functionality, but the pages are not laid out in the most consistent way. For example, the IP addresses and other connection information of the network interfaces are shown on the Home page under the System tab, while they would be more appropriate under the Status tab. Information about the services is split between the Status - Services page and the pages under the Services tab. Moreover, services installed as add-ons can be started and stopped in the Status - Services menu, while the main services only show their status on that page and have to be enabled or disabled on their own pages. In addition, the pages in the Logs tab repeat a lot of information that is already shown in the Status tab, and show other information that would be more logically placed on other pages, such as the firewall logs and the logs of the intrusion detection system.

Development

In the meantime, the IPFire developers are working on the 3.x branch. It will probably be based on the stable Linux 2.6.32 kernel, patched by the IPFire developers with the grsecurity and PaX patches as well as the Open Cryptographic Framework for accelerated cryptographic primitives. The developers are also enabling the stack smashing protection of GCC to prevent buffer overflows.
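GCC's stack-smashing protection is enabled at build time with a compiler flag; a generic illustration (not IPFire's actual build flags) looks like this:

```sh
# GCC inserts a "canary" value before the return address on the stack
# and aborts the program if an overflow overwrites it.
gcc -fstack-protector -O2 -o server server.c
# -fstack-protector-all extends the check to every function,
# at a somewhat higher runtime cost.
```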

For this new release, the developers will focus on the networking part of the distribution. According to Michael, there will definitely be support for IPv6, VLANs, more than the current four independent network zones, port trunking to increase bandwidth, and many more VPN features (e.g. using StrongSwan). The web interface will also be entirely rewritten with a focus on usability and configurability, and it will be based on Python instead of Perl. Pakfire will also be rewritten in Python. The first alpha release is already available.

The network zones model with the four colors will change slightly in the next release. According to Michael, it will be possible to create multiple zones of the same type, which could be interesting in a couple of cases:

For example, think about a network green0 for the marketing staff and another network green1 for the administration staff. This also works well for wireless networks, for example one for the staff and another one for customers.

As a consequence, the blue zone will be dropped in IPFire 3.0, and an additional network type will be added for the management of IPFire. The internet zone will be red, any local network would be green, the management network will be grey, and a DMZ zone would be orange.

Conclusion

IPFire is a nice solution for people who need a secure network gateway with a web interface, but the web interface is precisely the part of the distribution that still needs the most work. It has too many options and doesn't present them consistently, which will turn off many beginners. Fortunately, the web interface for the 3.x version will be entirely rewritten with a focus on usability.

Comments (3 posted)

New vulnerabilities

cacti: SQL injection

Package(s):cacti CVE #(s):
Created:April 26, 2010 Updated:April 28, 2010
Description: From the Debian advisory:

It was discovered that Cacti, a frontend to rrdtool for monitoring systems and services missed input sanitizing, making an SQL injection attack possible.

Alerts:
Debian DSA-2039-1 cacti 2010-04-23

Comments (none posted)

kernel: denial of service

Package(s):kernel CVE #(s):CVE-2010-1188
Created:April 27, 2010 Updated:November 12, 2010
Description: From the Red Hat advisory:

A use-after-free flaw was found in the tcp_rcv_state_process() function in the Linux kernel TCP/IP protocol suite implementation. If a system using IPv6 had the IPV6_RECVPKTINFO option set on a listening socket, a remote attacker could send an IPv6 packet to that system, causing a kernel panic (denial of service).

Alerts:
Red Hat RHSA-2010:0882-01 kernel 2010-11-12
SUSE SUSE-SA:2010:036 kernel 2010-09-01
Red Hat RHSA-2010:0439-01 kernel 2010-05-25
Red Hat RHSA-2010:0424-01 kernel 2010-05-18
CentOS CESA-2010:0394 kernel 2010-05-08
Red Hat RHSA-2010:0394-01 kernel 2010-05-05
Red Hat RHSA-2010:0380-01 kernel 2010-04-27
Ubuntu USN-947-2 kernel 2010-06-04
Ubuntu USN-947-1 linux, linux-source-2.6.15 2010-06-03

Comments (none posted)

kernel: multiple vulnerabilities

Package(s):kernel kernel-pae CVE #(s):CVE-2010-1084 CVE-2010-1087 CVE-2010-1146
Created:April 27, 2010 Updated:September 23, 2010
Description: From the Pardus advisory:

Linux kernel 2.6.18 through 2.6.33, and possibly other versions, allows remote attackers to cause a denial of service (memory corruption) via a large number of Bluetooth sockets, related to the size of sysfs files in (1) net/bluetooth/l2cap.c, (2) net/bluetooth/rfcomm/core.c, (3) net/bluetooth/rfcomm/sock.c, and (4) net/bluetooth/sco.c. (CVE-2010-1084)

The nfs_wait_on_request function in fs/nfs/pagelist.c in Linux kernel 2.6.x through 2.6.33-rc5 allows attackers to cause a denial of service (Oops) via unknown vectors related to truncating a file and an operation that is not interruptible. (CVE-2010-1087)

The Linux kernel 2.6.33.2 and earlier, when a ReiserFS filesystem exists, does not restrict read or write access to the .reiserfs_priv directory, which allows local users to gain privileges by modifying (1) extended attributes or (2) ACLs, as demonstrated by deleting a file under .reiserfs_priv/xattrs/. (CVE-2010-1146)

Alerts:
openSUSE openSUSE-SU-2010:0664-1 Linux 2010-09-23
SUSE SUSE-SA:2010:035 kernel 2010-08-18
Red Hat RHSA-2010:0631-01 kernel-rt 2010-08-17
CentOS CESA-2010:0610 kernel 2010-08-11
Red Hat RHSA-2010:0610-01 kernel 2010-08-10
Debian DSA-2053-1 linux-2.6 2010-05-25
Pardus 2010-63 kernel kernel-pae 2010-05-18
rPath rPSA-2010-0037-1 kernel 2010-05-07
Pardus 2010-57 kernel kernel-pae 2010-04-27
CentOS CESA-2010:0504 kernel 2010-07-02
SUSE SUSE-SA:2010:031 kernel 2010-07-20
openSUSE openSUSE-SU-2010:0397-1 Linux Kernel 2010-07-19
Red Hat RHSA-2010:0504-01 kernel 2010-07-01
Ubuntu USN-947-2 kernel 2010-06-04
Ubuntu USN-947-1 linux, linux-source-2.6.15 2010-06-03

Comments (none posted)

krb5: arbitrary code execution

Package(s):krb5 CVE #(s):CVE-2010-1320
Created:April 22, 2010 Updated:July 21, 2010
Description:

From the Red Hat bugzilla entry:

A double-free vulnerability was found in the KDC in MIT krb5 versions 1.7 and later. This flaw could allow an authenticated remote attacker to crash the KDC by inducing the KDC to perform a double-free, or to possibly allow for the execution of arbitrary code (although the latter is believed to be difficult).

Alerts:
Gentoo 201201-13 mit-krb5 2012-01-23
Ubuntu USN-940-1 krb5 2010-05-19
SuSE SUSE-SR:2010:010 krb5, clamav, systemtap, apache2, glib2, mediawiki, apache 2010-04-27
Fedora FEDORA-2010-7130 krb5 2010-04-21
Ubuntu USN-940-2 krb5 2010-07-21

Comments (none posted)

nano: multiple vulnerabilities

Package(s):nano CVE #(s):CVE-2010-1160 CVE-2010-1161
Created:April 27, 2010 Updated:September 9, 2010
Description: From the Pardus advisory:

GNU nano before 2.2.4 does not verify whether a file has been changed before it is overwritten in a file-save operation, which allows local user-assisted attackers to overwrite arbitrary files via a symlink attack on an attacker-owned file that is being edited by the victim. (CVE-2010-1160)

Race condition in GNU nano before 2.2.4, when run by root to edit a file that is not owned by root, allows local user-assisted attackers to change the ownership of arbitrary files via vectors related to the creation of backup files. (CVE-2010-1161)

Alerts:
Fedora FEDORA-2010-13157 nano 2010-08-20
Gentoo 201006-08 nano 2010-06-01
Fedora FEDORA-2010-6776 nano 2010-04-16
Fedora FEDORA-2010-6775 nano 2010-04-16
Pardus 2010-58 nano 2010-04-27

Comments (none posted)

seamonkey: information disclosure

Package(s):seamonkey CVE #(s):CVE-2009-3385
Created:April 22, 2010 Updated:June 14, 2010
Description:

From the NVD entry:

The mail component in Mozilla SeaMonkey before 1.1.19 does not properly restrict execution of scriptable plugin content, which allows user-assisted remote attackers to obtain sensitive information via crafted content in an IFRAME element in an HTML e-mail message, as demonstrated by a Flash object that sends arbitrary local files during a reply or forward operation.

Alerts:
Fedora FEDORA-2010-7100 seamonkey 2010-04-21
SuSE SUSE-SR:2010:013 apache2-mod_php5/php5, bytefx-data-mysql/mono, flash-player, fuse, java-1_4_2-ibm, krb5, libcmpiutil/libvirt, libmozhelper-1_0-0/mozilla-xulrunner190, libopenssl-devel, libpng12-0, libpython2_6-1_0, libtheora, memcached, ncpfs, pango, puppet, python, seamonkey, te_ams, texlive 2010-06-14

Comments (none posted)

X.org Server: privilege escalation

Package(s):xorg-x11-server CVE #(s):CVE-2010-1166
Created:April 28, 2010 Updated:September 7, 2010
Description: The X.org render extension can be forced to crash by an authorized client, leading to denial of service and potential privilege escalation vulnerabilities.
Alerts:
Mandriva MDVSA-2013:260 x11-server 2013-10-28
openSUSE openSUSE-SU-2010:0583-1 xorg-x11-server 2010-09-07
openSUSE openSUSE-SU-2010:0561-1 xorg-x11-server 2010-08-30
SUSE SUSE-SR:2010:014 OpenOffice_org, apache2-slms, aria2, bogofilter, cifs-mount/samba, clamav, exim, ghostscript-devel, gnutls, krb5, kvirc, lftp, libpython2_6-1_0, libtiff, libvorbis, lxsession, mono-addon-bytefx-data-mysql/bytefx-data-mysql, moodle, openldap2, opera, otrs, popt, postgresql, python-mako, squidGuard, vte, w3m, xmlrpc-c, XFree86/xorg-x11, yast2-webclient 2010-08-02
CentOS CESA-2010:0382 xorg-x11-server 2010-05-28
Ubuntu USN-939-1 xorg-server 2010-05-18
Red Hat RHSA-2010:0382-01 xorg-x11-server 2010-04-28

Comments (none posted)

Page editor: Jake Edge

Kernel development

Brief items

Kernel release status

The current development kernel remains 2.6.34-rc5; there have been no prepatches released over the last week. The flow of changes into the mainline continues; it contains lots of fixes but also the VMware balloon driver (discussed briefly here in early April) and the ipheth driver, which facilitates USB tethering to iPhones. The 2.6.34-rc6 release can be expected soon - probably a few milliseconds after this page is published.

Stable updates: the 2.6.32.12 and 2.6.33.3 stable kernel updates were released on April 26. Both updates are massive, with well over 100 fixes in each.

Comments (3 posted)

Quotes of the week

I came to realize that if one wants his work (software) to be used globally, making it in-tree is not the goal but an important first step. Making software in-tree is technical, but affecting distributors decision should involve non-technical issues, I guess.
-- Toshiharu Harada

So, if your display switch button now just makes the letter "P" appear, say thanks to Microsoft. There's a range of ways we can fix this, although none of them are straightforward and those of you who currently use left-Windows-p as a keyboard shortcut are going to be sad. I would say that I feel your pain, but my current plan is to spend the immediate future getting drunk enough that I stop caring.
-- Matthew Garrett

One of the things that we sometimes have to tell people who are trying to navigate the maze of upstream submission is that sometimes you need to know who to ignore, and that sometimes rules are guidelines (despite pedants who will NACK based on rules like, "/proc, eeeeewwww", or "/debugfs must only strictly be for debug information").

Telling embedded developers who only want to submit their driver that they must create a whole new pseudo-filesystem just to export a single file that in older, simpler times, would have just been thrown into /proc is really not fair, and is precisely the sort of thing that may cause them to say, "f*ck it, these is one too many flaming hoops to jump through". If we throw up too many barriers, in the long run it's not actually doing Linux a service.

-- Ted Ts'o

Good heavens, what is EILSEQ? ... Why on earth are driver writers using this in the kernel??? Imagine the confusion which ensues when this error code propagates all the way back to some poor user's console. They'll be scrabbling around with language encodings not even suspecting that their hardware is busted.

People do this *a lot*. They go grubbing through errno.h and grab something which looks vaguely appropriate. But it's wrong. If your hardware is busted then return -EIO and emit a printk to tell the operator what broke.

-- Andrew Morton

Comments (none posted)

CPUS*PIDS = mess

By Jonathan Corbet
April 27, 2010
Mike Travis recently ran into a problem: if you have a system with a mere 2048 processors, there's only room for 16 processes on each CPU before the default 32K limit on process IDs is reached. Systems with lots of processors tend not to run large numbers of processes on each CPU, but 16 is still a bit tight - especially when one considers how many kernel threads run on each CPU. With 2K processors, the kernel threads alone may run the system out of process IDs; with 4K processors, the system will not even succeed in booting.

The proposed solution was a new boot-time parameter allowing the specification of a larger maximum number of process IDs. That idea did not get very far, though; there is not much interest in adding more options just to enable the system to boot. The fact that concurrency-managed workqueues should eventually solve this problem (by getting rid of large numbers of workqueue threads) hasn't helped either; that makes the kernel option look like a temporary stopgap. But the workqueue changes are only so helpful to people who are having this problem now; some form of this work will probably go in eventually, but it does not appear to be a fast process.

So there will most likely be a shorter-term fix merged. Instead of a kernel parameter, though, it will probably be some sort of heuristic which looks at the number of processors and ensures that a sufficient number of process IDs is available. If the default limit is too low, it will be raised automatically.
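Such a heuristic might look like this sketch (the per-CPU multiplier and the floor are assumptions for illustration, not the actual values any kernel patch uses):

```python
DEFAULT_PID_MAX = 32768  # the traditional 32K default

def scaled_pid_max(num_cpus, pids_per_cpu=1024):
    # Hypothetical heuristic: reserve a fixed number of process IDs
    # per CPU, never dropping below the traditional default, so small
    # systems behave exactly as before while big ones can boot.
    return max(DEFAULT_PID_MAX, num_cpus * pids_per_cpu)

print(scaled_pid_max(8))     # 32768: small systems keep the default
print(scaled_pid_max(4096))  # 4194304: large systems get more room
```

With 1024 IDs per CPU, a 2048-processor machine would get 2M process IDs instead of the 16-per-CPU squeeze described above.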

There is one remaining concern: what about ancient applications which store process IDs in signed, 16-bit integers? Apparently such applications exist. It is less clear, though, that such applications exist on 4096-processor systems. So this fear is unlikely to hold up this change. By the time the rest of us get those shiny, new, 4096-core desktop systems, hopefully, any remaining broken applications will have long since been fixed.

Comments (2 posted)

Suspend block

By Jonathan Corbet
April 28, 2010
One of the key points of contention when it comes to getting the kernel-level Android code merged has been the wakelock API. Wakelocks run counter to how some think power management should be done, so they have been hard to merge. But Android's drivers use wakelocks, so, in the absence of that API, those drivers also cannot be merged. They could be reworked to not use wakelocks, but then the mainline kernel would have a forked version of the driver code which nobody actually uses - not the best of outcomes. So coming to resolution on the wakelock issue has been a high priority for a while.

The result of work in that area can now be seen in the form of the suspend block patches recently posted by Arve Hjønnevåg. The name of the feature has been changed, as has the API, but the core point is the same: allow the system to automatically suspend itself when nothing is going on, and allow code to say "something is going on" at both the kernel and user-space levels.

The suspend block patches add a new sysfs file called /sys/power/policy; the default value found therein is "forced." When the policy is "forced," system state transitions will happen in response to explicit writes to /sys/power/state, as usual. If the policy is changed to "opportunistic," though, things are a bit different. The state written to /sys/power/state does not take effect immediately; instead, the kernel goes into that state whenever it concludes that the system is idle. The suspend blocker API can then be used to prevent the system from suspending when the need arises.

The two postings of this patch set have received a number of comments, causing various things to be fixed. More recently, though, responses have been of the "acked" variety. So one might conclude that suspend block has a reasonable chance of getting in during the 2.6.35 merge window. That, in turn, should open the doors for the merging of a lot of driver code from the Android project. With luck, the much-publicized disconnect between Android and the community may be a thing of the past - at the kernel level, at least.

Comments (2 posted)

Kernel development news

Kernel Hacker's Bookshelf: Generating Realistic Impressions for File-System Benchmarking

April 28, 2010

This article was contributed by Valerie Aurora

"File systems benchmarking is in a state of disarray." This stark and undisputed summary comes from the introduction to "Generating Realistic Impressions for File-System Benchmarking [PDF]" by Nitin Agrawal, Andrea Arpaci-Dusseau, and Remzi Arpaci-Dusseau. This paper describes Impressions, a tool for generating realistic, reproducible file system images which can serve as the base of new file system benchmarks.

First, a little history. We, the file systems research and development community, unanimously agree that most of our current widely used file system benchmarks are deeply flawed. The Andrew benchmark, originally created around 1988, is not solely a file system benchmark and is so small that it often fits entirely in cache on modern computers. Postmark (c. 1997) creates and deletes small files in a flat directory structure without any fsync() calls; often the files are deleted so quickly that they never get written to disk. The company that created Postmark, NetApp, stopped hosting the Postmark code and tech report on its web site, forcing developers to pass around bootleg Postmark versions in a bizarre instance of benchmark samizdat. fs_mark (c. 2003) measures synchronous write workloads and is a useful microbenchmark, but is in no way a general-purpose file system benchmarking tool. bonnie (c. 1990) and bonnie++ (c. 1999) tend to benchmark the disk more than the file system. In general, run any file system benchmark and you'll find a file system developer who will tell you why it is all wrong.

Why has no new general purpose file system benchmark gained widespread use and acceptance since Postmark? A new benchmark is a dangerous creature to unleash on the world: if it becomes popular enough, years of research and development can go into making systems perform better on what could, in the end, be a misleading or useless workload. "No benchmark is better than a bad benchmark," is how the thinking goes, at least in the academic file systems development community. I've seen several new benchmarks quashed over the years for minor imperfections or lack of features.

However, creating excellent new file systems benchmarks is difficult without intermediate work to build on, flawed though it may be. It's like demanding that architects go straight from grass huts to skyscrapers without building stone buildings in between because stone buildings would be an earthquake hazard. As a result, the file systems benchmarking community continues to live in grass huts.

Impressions: Building better file system images

One thing the file systems community can agree on: We need better file system images to run our benchmarks on - a solid foundation for any putative skyscrapers of the future. The most accurate and reproducible method of creating file system images is to make a copy of a representative real-world file system at the block level and write it back to the device afresh before each run of the benchmark. Obviously, this approach is prohibitively costly in time, storage, and bandwidth. Creating a tarball of the contents of the file system, and extracting it in a freshly created file system is nearly as expensive and also loses the information about the layout of the file system, an important factor in file system performance. Creating all the files at once and in directory order is a best case for the file system block allocation code and won't reflect the real-world performance of the file system when files are created and deleted over time. In all cases, it is impractical for other developers to reproduce the results using the same file system images - no one wants to download (and especially not host) several hundred gigabyte file system images.

This is where Impressions comes in. Impressions is a relatively small, simple, open-source tool (about 3500 lines of C++) that generates a file system image satisfying multiple sophisticated statistical parameters. For example, Impressions chooses file sizes using combinations of statistical functions with multiple user-configurable parameters. Impressions is deterministic: given the same set of starting parameters and random seeds, it will generate the same file system image (at the logical level - the on-disk layout may not be the same).

Impressions: The details

The directory structure of the file system needs to have realistic depth and distribution of both files and directories. Impressions begins with a directory creation phase that creates a target number of directories. The directories are distributed according to a function of the current number of subdirectories of a particular directory, based on a 2007 study of real-world file system directory structure. A caveat here is that creating the directories separately from the files will not properly exercise some important parts of file system allocation strategy. However, in many cases most of the directory structure is static, and most of the changes occur as creation and deletion of files within directories, so creating the directories first reflects an important real-world use case.
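The paper describes the placement function only in outline; one plausible reading is a preferential-attachment-style model, where a directory's chance of gaining a new subdirectory grows with the number of subdirectories it already has. This hypothetical sketch (not the actual Impressions code) shows the shape of such a generator, including the determinism-from-a-seed property the tool relies on:

```python
import random

def build_tree(num_dirs, seed=42):
    # Start with the root; each new directory picks a parent with
    # probability weighted by (subdirectory count + 1), so bushy
    # directories tend to get bushier.  The "+1" keeps leaf
    # directories eligible.  This is an illustration, not the
    # function Impressions actually uses.
    rng = random.Random(seed)
    subdirs = [0]      # subdirs[i] = subdirectory count of dir i
    parents = [None]   # parents[i] = parent index of dir i
    for _ in range(num_dirs - 1):
        weights = [count + 1 for count in subdirs]
        parent = rng.choices(range(len(subdirs)), weights=weights)[0]
        subdirs[parent] += 1
        subdirs.append(0)
        parents.append(parent)
    return parents

tree = build_tree(1000)
```

Because the generator draws only from a seeded random.Random instance, the same seed reproduces the same tree exactly, mirroring how Impressions regenerates identical logical images from recorded seeds.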

The distribution of file sizes can't be accurately modeled with any straightforward probability distribution function due to a second "bump" in the file size distribution, which in modern file systems begins around 512MB. This heavy tail of the file size distribution is usually due to video files and disk images, and can't be ignored if you care about the performance of video playback. Impressions combines two probability distribution functions, a log-normal and a Pareto, with five user-configurable parameters to produce a realistic file size distribution.
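The hybrid distribution can be sketched as a simple mixture: draw most sizes from a log-normal "body" and a small fraction from a Pareto tail starting around 512MB. All parameter values below are illustrative assumptions, not the defaults Impressions ships with:

```python
import random

def sample_file_size(rng, body_fraction=0.99,
                     mu=8.3, sigma=3.0,          # log-normal body (bytes)
                     pareto_alpha=1.1,
                     pareto_min=512 << 20):      # tail starts ~512MB
    # Most files come from the log-normal body; the rest come from a
    # heavy Pareto tail representing video files and disk images.
    # Parameters are invented for this sketch.
    if rng.random() < body_fraction:
        return int(rng.lognormvariate(mu, sigma))
    return int(pareto_min * rng.paretovariate(pareto_alpha))

rng = random.Random(1234)
sizes = [sample_file_size(rng) for _ in range(100000)]
```

The result is a distribution whose median is a few kilobytes but whose total bytes are dominated by a handful of multi-gigabyte files, which is roughly what real file system studies observe.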

Files are assigned a depth in the directory tree according to "an inverse polynomial of degree 2." Whatever that is (the code is available for the curious), Figure 2(f) in the paper shows that the resulting distribution of files by namespace depth is almost indistinguishable from that in a real-world file system. Impressions also supports user-configurable "Special" directories with an exceptionally large number of files in them, like /lib.

The authors of Impressions clearly understood the importance of realistic file data; the example use case in the paper is performance comparison of two desktop search applications, which depend heavily on the actual content of files. Filling all files with zeroes, or randomly generated bytes, or repeats of the same pieces of text would make Impressions useless for any benchmark that depends on file data, such as those testing file system level deduplication or compression. Impressions supports two modes of text file content generation, including a word popularity model suitable for evaluation of file search applications. It also creates files with proper headers for sound files, various image and video formats, HTML, and PDF.

Generation of file names is rudimentary but includes advanced support for realistic file name extensions, like .mp3. The file name itself is just a number incremented by one each time a file is created, but the extension is selected from a list of popular extensions according to percentiles observed in earlier file system studies. Popular extensions only account for about half of file names; the rest of the extensions are randomly generated.
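That scheme, a counter for the name plus an extension drawn from a popularity table, can be sketched as follows (the table's extensions and cumulative percentages are invented for this illustration; Impressions uses percentiles from real file system studies):

```python
import random

# Hypothetical cumulative popularity table covering ~50% of names;
# the remainder get a randomly generated extension, as in Impressions.
POPULAR = [(".txt", 0.10), (".jpg", 0.22), (".mp3", 0.30),
           (".html", 0.38), (".c", 0.44), (".h", 0.50)]

def make_name(counter, rng):
    # The file name proper is just an incrementing counter.
    r = rng.random()
    for ext, cumulative in POPULAR:
        if r < cumulative:
            return str(counter) + ext
    # Unpopular half: a short random extension.
    ext = "".join(rng.choice("abcdefghijklmnopqrstuvwxyz")
                  for _ in range(3))
    return "%d.%s" % (counter, ext)

rng = random.Random(0)
names = [make_name(i, rng) for i in range(10)]
```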

One case in which file names generated this way won't be useful is in evaluating a directory entry lookup algorithm. Sequential search of a directory for a particular directory entry isn't very efficient. Instead, most modern file systems have some way to quickly map a file name to its location in a directory, usually based on a hash of the characters of the file name. This mapping function may be more or less efficient on Impressions's sequential numerical file names compared to real-world names. File name length also influences performance, since it changes the number of directory entries that fit in a block. Overall, file name generation in Impressions is good enough, but there are opportunities for improvement.

One of the most important features of Impressions is its support for deliberate fragmentation of the file system. Impressions creates fragmentation by writing part of a file, creating a new file, writing another chunk of the file, and then deleting the new file. This cycle is repeated until the requested degree of fragmentation is achieved. Note that file systems with good per-file preallocation may never fragment in this scheme unless the disk space is nearly full or no contiguous free stretches of disk space are left. In this case, fragmenting a file system to the requested degree may take a while. More efficient methods of fragmenting a file system might be necessary in the future. Impressions could also use FIBMAP/FIEMAP to query the layout of file systems in a portable manner; currently calculation of the "fragmentation score" is only supported on ext2/3.
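The interleaving trick can be sketched in a few lines (chunk and file sizes are illustrative; as noted above, an allocator with good preallocation may place the blocks contiguously anyway):

```python
import os
import tempfile

def fragmented_write(path, total_bytes, chunk=64 * 1024):
    # Alternate between appending a chunk to the target file and
    # creating/deleting a filler file, so a naive block allocator
    # places the filler's blocks between the target's chunks.
    filler = path + ".filler"
    written = 0
    with open(path, "wb") as target:
        while written < total_bytes:
            target.write(b"\0" * chunk)
            target.flush()
            os.fsync(target.fileno())        # force allocation now
            written += chunk
            with open(filler, "wb") as f:    # allocate filler blocks...
                f.write(b"\0" * chunk)
                f.flush()
                os.fsync(f.fileno())
            os.remove(filler)                # ...then free them again

with tempfile.TemporaryDirectory() as d:
    fragmented_write(os.path.join(d, "victim"), 1 << 20)
```

The fsync() calls matter: without them the file system may buffer all the writes and allocate everything in one pass, defeating the interleaving.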

An interesting feature described in the paper but not available in the version 1 release of Impressions is support to run a specified number of rounds of the fragmentation code - sort of a fragmentation workload. This will show the difference in disk allocation strategies between file systems. For example, if one file system manages allocation well enough that it normally never exceeds 30% discontiguous blocks, and the other normally always exceeds 60% discontiguous blocks, it doesn't always make sense to compare their performance when they are both at 50% discontiguous blocks. Instead, running a set fragmentation workload would result in different "natural" fragmentation levels in both file systems, providing a more realistic baseline for performance comparison.

Impressions development

Impressions is open sourced under the GPLv3 and downloadable here. The original author, Nitin Agrawal, has graduated (now at NEC Labs) and does not currently have plans for developing Impressions further. This is a rare golden opportunity for a new maintainer to work on an influential, high-profile project. The code is, in my opinion, easy to understand and clearly written (although I've spent the last year working on e2fsprogs and the VFS, so take that with a grain of salt). Some open areas for contribution include:
  • Measure actual fragmentation using FIBMAP/FIEMAP
  • Smarter filename generation
  • Addition of hard links and symbolic links
  • Performance improvement
  • Scaling to larger file systems (> 100GB)
  • Packaging for distributions
  • More robust error checking and handling
Another possibility for future development is Lars Wirzenius's genbackupdata tool, written in Python. The goal of this tool is to generate a representative file system image for testing a backup tool. It already has some of the features of Impressions and others appear to be easy to add. Python may be a more suitable language for long-term development and maintenance of a file system creation tool.

Conclusion

Impressions is an enormous step forward in the file system benchmarking world. With a little polishing and a dedicated maintainer, it could become the de facto standard for creating file systems for benchmarking. Impressions can report the full set of parameters and random seeds it uses, which can then be used for another Impressions run to recreate the exact same logical file system (actual layout will vary some). Impressions can be used today by file system and application developers to create realistic, reproducible file system images for testing and performance evaluation.

Comments (4 posted)

Might 2.6.35 be BKL-free?

By Jonathan Corbet
April 27, 2010
The removal of the big kernel lock (BKL) has been one of the longest-running projects in kernel development history. The BKL has been a clear scalability and maintainability problem since its addition in the 1.3 development series; efforts to move away from it began in the 2.1 cycle. But the upcoming 2.6.34 kernel will still feature a big kernel lock, despite all the work that has been done to get rid of it. The good news is that 2.6.35 might just work without the BKL - at least, for a number of configurations.

Over the years, use of the BKL has been pushed down into ever lower levels of the kernel. Once a lock_kernel() call has been pushed into an individual device driver, for example, it is relatively easy to determine whether it is really necessary and, eventually, get rid of it altogether. There is, however, one significant BKL acquisition left in the core kernel: the ioctl() implementation. The kernel has supported a BKL-free unlocked_ioctl() operation for years, but there are still many drivers which depend on the older, BKL-protected version.

Clearly, fixing the ioctl() problem is a key part of the overall BKL solution. To that end, Frederic Weisbecker and Arnd Bergmann posted a patch to prepare the ground for change. This patch adds yet another ioctl() variant called locked_ioctl() to the file_operations structure. The idea was to have both ioctl() and locked_ioctl() in place for long enough to change all of the code which still requires the BKL, after which ioctl() could be removed. This new function was also made dependent on a new CONFIG_BKL configuration option.

That patch did not get very far; Linus strongly disliked both locked_ioctl() and CONFIG_BKL. So the search for alternatives began. In the end, it looks like locked_ioctl() may never happen, but the configuration option will eventually exist.

Linus's suggestion was to not bother with locked_ioctl(). Instead, every ioctl() operation should just be renamed to bkl_ioctl() in one big patch. That would allow code which depends on the BKL to be easily located with grep without adding yet another function to struct file_operations even temporarily. A patch which does this renaming has been posted; this patch may well be merged for 2.6.35.

Or perhaps not. Arnd has taken a more traditional approach with his patch which simply pushes the BKL down into every remaining ioctl() function which needs it. Once a specific ioctl() function handles BKL acquisition itself, it can be called from the core kernel as an unlocked_ioctl() function instead. When all such functions have been converted, the locked version of ioctl() can go away, and the BKL can be removed from that bit of core code. The pushdown is a bigger job than the renaming, but it accomplishes a couple of important goals.
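The shape of that pushdown can be sketched in a few lines. This is a userspace illustration, not real kernel code: the lock_kernel()/unlock_kernel() stand-ins and the hypothetical_ioctl() function are invented for the example, and the "device logic" is a placeholder. The point is only the structural change: the BKL acquisition moves from the core VFS code into the driver's own function, which can then be wired up as an unlocked_ioctl() instead.

```c
#include <assert.h>

/* Stand-ins for the real BKL primitives; a depth counter lets us
 * check that every lock is balanced by an unlock. */
static int bkl_depth;
static void lock_kernel(void)   { bkl_depth++; }
static void unlock_kernel(void) { bkl_depth--; }

/* Before the pushdown, the core kernel took the BKL around every
 * .ioctl() call.  After the pushdown, the driver takes it itself,
 * so the function can be registered as .unlocked_ioctl instead. */
static long hypothetical_ioctl(unsigned int cmd, unsigned long arg)
{
	long ret;

	lock_kernel();		/* moved here from the core kernel */
	ret = (cmd == 0) ? (long)arg : -1;	/* placeholder device logic */
	unlock_kernel();
	return ret;
}
```

Once every remaining ioctl() function looks like this, the BKL-taking call site in the core kernel has nothing left to protect and can be deleted.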

One of those goals is simply getting the BKL closer to the code which depends on it, facilitating its eventual removal. The other is to get that much closer to a point where the BKL can simply be configured out of the kernel altogether. That is where the CONFIG_BKL option comes in. Turning that option off will remove BKL support, causing any code which depends on it to fail to compile. That code can be annotated with its BKL dependency, again making it easier to find and fix.

On the face of it, configuring out the BKL may not seem like a hugely desirable thing to do; it takes little space, and the overhead seems small if nobody is actually using it. But there are small - but significant - savings to be had: currently the scheduler must check, at every context switch, whether the BKL must be released by the outgoing process and/or reacquired by the incoming process. Context switches happen often enough that it's worth making them as fast as possible; eliminating the BKL outright will make a small contribution toward that goal.

Making the BKL configurable will also be a motivating factor for anybody who finds that their BKL-free kernel build is blocked by one crufty old driver. Most of the remaining BKL-dependent drivers are unloved and unmaintained; many of them may be entirely unused. Those which are still being used may well be fixed once a suitably-skilled developer realizes that a small amount of work will suffice to banish the BKL from a specific system forevermore.

In the end, 2.6.35 will not be, as a whole, a BKL-free kernel. But, if this work gets in, and if some other core patches are accepted, it may just become possible to build a number of configurations without the big kernel lock. That, certainly, is an achievement worth celebrating.

Comments (9 posted)

The cpuidle subsystem

By Jonathan Corbet
April 26, 2010
Your editor recently had cause to dig around in the cpuidle subsystem. It never makes sense to let such work go to only a single purpose when it could be applied toward the creation of a Kernel Page article. So, what follows is a multi-level discussion of cpuidle, what it's for, and how it works. Doing nothing, it turns out, is more complicated than one might think.

On most systems, the processor is idle much of the time. We can't always be running CPU-intensive work like kernel builds, video transcoding, weather modeling, or yum. When there is nothing left to do, the processor will go into the idle state to wait until it is needed again. Once upon a time, on many systems, the "idle state" was literally a thread running at the lowest possible priority which would execute an infinite loop until the system found something better to do. Killing the idle process was a good way to panic a VAX/VMS machine, which had no clue of how to do nothing without a task dedicated to that purpose.

Running a busy-wait loop requires power; contemporary concerns have led us to the conclusion that expending large amounts of power toward the accomplishment of nothing is rarely a good idea. So CPU designers have developed ways for the processor to go into a lower-power state when there is nothing for it to do. Typically, when put into this state, the CPU will stop clocks and power down part or all of its circuitry until the next interrupt arrives. That results in the production of far more nothing per watt than busy-waiting.

In fact, most CPUs have multiple ways of doing nothing more efficiently. These idle modes, which go by names like "C states," vary in the amount of power saved, but also in the amount of ancillary information which may be lost and the amount of time required to get back into a fully-functional mode. On your editor's laptop, there are three idle states with the following characteristics:

                             C1      C2      C3
    Exit latency (µs)         1       1      57
    Power consumption (mW) 1000     500     100

On a typical processor, C1 will just turn off the processor clock, while C2 turns off other clocks in the system and C3 will actively power down parts of the CPU. On such a system, it would make sense to spend as much time as possible in the C3 state; indeed, while this sentence is being typed, the system is in C3 about 97% of the time. One might have thought that emacs could do a better job of hogging the CPU, but even emacs is no challenge for modern processors. The C1 state is not used at all, while a small amount of time is spent in C2.

One might wonder why the system bothers with anything but C3 at all; why not insist on the most nothing for the buck? The answer, of course, is that C3 has a cost. The 57µs exit latency means that the system must commit to doing nothing for a fair while. Bringing the processor back up also consumes power in its own right, and the ancillary costs - the C3 state might cause the flushing of the L2 cache - also hurt. So it's only worth going into C3 if the power savings will be real and if the system knows that it will not have to respond to anything with less than 57µs latency. If those conditions do not hold, it makes more sense to use a different idle state. Making that decision is the cpuidle subsystem's job.
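The tradeoff just described can be written down as a simple predicate. This is an illustrative sketch, not code from the kernel: the structure and helper are invented for the example, and the target-residency figure used in the test is made up (the table above only gives exit latencies and power numbers). A deep state is only worthwhile if the expected idle period is long enough to amortize the transition and the system can tolerate the wakeup latency.

```c
#include <assert.h>

/* Minimal description of an idle state, for illustration only. */
struct idle_state {
	int exit_latency_us;	/* time to get back to full function */
	int target_residency_us;	/* minimum stay to be worth it */
};

/* Enter a state only if (a) we expect to stay long enough and
 * (b) nothing in the system needs a faster wakeup than this state
 * can provide. */
static int state_is_worthwhile(const struct idle_state *s,
			       int predicted_idle_us, int max_latency_us)
{
	return predicted_idle_us >= s->target_residency_us &&
	       s->exit_latency_us <= max_latency_us;
}
```

Everything the cpuidle subsystem does is, in one way or another, in service of evaluating this predicate well.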

Every processor has different idle-state characteristics and different actions are required to enter and leave those states. The cpuidle code abstracts that complexity into a separate driver layer; the drivers themselves are often found in architecture-specific or ACPI code. On the other hand, the decision as to which idle state makes sense in a given situation is very much a policy issue. The cpuidle "governors" interface allows the implementation of different policies for different needs. We'll take a look at both layers.

cpuidle drivers

At the highest level, the cpuidle driver interface is quite simple. It starts by registering the driver with the subsystem:

    #include <linux/cpuidle.h>

    struct cpuidle_driver {
	char			name[CPUIDLE_NAME_LEN];
	struct module 		*owner;
    };

    int cpuidle_register_driver(struct cpuidle_driver *drv);

About all this accomplishes is making the driver name available in sysfs. The cpuidle core also will enforce the requirement that only one cpuidle driver exist in the system at any given time.

Once the driver exists, though, it can register a cpuidle "device" for each CPU in the system - it is possible for different processors to have completely different setups, though your editor suspects that tends not to happen in real-world systems. The first step is to describe the processor idle states which are available for use:

    struct cpuidle_state {
	char		name[CPUIDLE_NAME_LEN];
	char		desc[CPUIDLE_DESC_LEN];
	void		*driver_data;

	unsigned int	flags;
	unsigned int	exit_latency; /* in US */
	unsigned int	power_usage; /* in mW */
	unsigned int	target_residency; /* in US */

	unsigned long long	usage;
	unsigned long long	time; /* in US */

	int (*enter)	(struct cpuidle_device *dev,
			 struct cpuidle_state *state);
    };

The name and desc fields describe the state; they will show up in sysfs eventually. driver_data is there for the driver's private use. The next four fields, starting with flags, describe the characteristics of this sleep state. Possible flags values are:

  • CPUIDLE_FLAG_TIME_VALID should be set if it is possible to accurately measure the amount of time spent in this particular idle state.

  • CPUIDLE_FLAG_CHECK_BM indicates that this state is not compatible with bus-mastering DMA activity. Deep sleeps will, among other things, disable the bus cycle snooping hardware, meaning that processor-local caches may fail to be updated in response to DMA. That can lead to data corruption problems.

  • CPUIDLE_FLAG_POLL says that this state causes no latency, but also fails to save any power.

  • CPUIDLE_FLAG_SHALLOW indicates a "shallow" sleep state with low latency and minimal power savings.

  • CPUIDLE_FLAG_BALANCED is for intermediate states with some latency and moderate power savings.

  • CPUIDLE_FLAG_DEEP marks deep sleep states with high latency and high power savings.

The depth of the sleep state is also described by the remaining fields: exit_latency says how long it takes to get back to a fully functional state, power_usage is the amount of power consumed by the CPU when it is in this state, and target_residency is the minimum amount of time the processor should spend in this state to make the transition worth the effort.

The enter() function will be called when the current governor decides to put the CPU into the given state; it will be described more fully below. The number of times the state has been entered will be kept in usage, while time records the amount of time spent in this state.

The cpuidle driver should fill in an appropriate set of states in a cpuidle_device structure for each CPU:

    struct cpuidle_device {
	unsigned int		cpu;

	int			last_residency;
	int			state_count;
	struct cpuidle_state	states[CPUIDLE_STATE_MAX];
	struct cpuidle_state	*last_state;

	void			*governor_data;
	struct cpuidle_state	*safe_state;
	/* Others omitted */
    };

The driver should set state_count to the number of valid states and cpu to the number of the CPU described by this device. The safe_state field points to the deepest sleep which is safe to enter while DMA is active elsewhere in the system. The device should be registered with:

    int cpuidle_register_device(struct cpuidle_device *dev);

The return value is, as usual, zero on success or a negative error code.
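Filling in the state array might look something like the following. The structures here are local mirrors of the real ones, trimmed to the fields used; in an actual driver they come from <linux/cpuidle.h>, and the target-residency values below are invented for illustration (only the latency and power numbers come from the table earlier in the article).

```c
#include <assert.h>
#include <string.h>

#define NAME_LEN  16
#define STATE_MAX  8

/* Trimmed, local stand-ins for struct cpuidle_state / cpuidle_device. */
struct state {
	char name[NAME_LEN];
	unsigned exit_latency;		/* µs */
	unsigned power_usage;		/* mW */
	unsigned target_residency;	/* µs */
};

struct device {
	unsigned cpu;
	int state_count;
	struct state states[STATE_MAX];
};

/* Describe the three C-states from the table; residency values are
 * guesses made up for this sketch. */
static void describe_states(struct device *dev, unsigned cpu)
{
	static const struct state tbl[] = {
		{ "C1",  1, 1000,   1 },
		{ "C2",  1,  500,   1 },
		{ "C3", 57,  100, 100 },
	};

	dev->cpu = cpu;
	dev->state_count = 3;
	memcpy(dev->states, tbl, sizeof(tbl));
}
```

A real driver would do this once per CPU, then hand each filled-in device to cpuidle_register_device().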

The only other thing that the driver needs to do is to actually implement the state transitions. As we saw above, that is done through the enter() function associated with each state:

    int (*enter)(struct cpuidle_device *dev, struct cpuidle_state *state);

A call to enter() is a request from the current governor to put the CPU associated with dev into the given state. Note that enter() is free to choose a different state if there is a good reason to do so, but it should store the actual state used in the device's last_state field. If the requested state has the CPUIDLE_FLAG_CHECK_BM flag set, and there is bus-mastering DMA active in the system, a transition to the indicated safe_state should be made instead. The return value from enter() should be the amount of time actually spent in the sleep state, expressed in microseconds.
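The CHECK_BM fallback logic described above can be sketched as follows. Again, the types and the DMA-activity flag are stand-ins invented for the example; real drivers detect bus-master activity through platform-specific means. What matters is the two rules: fall back to safe_state when DMA is active, and always record the state actually used in last_state.

```c
#include <assert.h>

#define FLAG_CHECK_BM 0x1	/* stand-in for CPUIDLE_FLAG_CHECK_BM */

struct cstate {
	unsigned flags;
	int index;		/* which C-state this is, for the test */
};

struct cdev {
	struct cstate *safe_state;
	struct cstate *last_state;
};

static int bus_master_active;	/* pretend DMA-activity indicator */

/* The state-selection step an enter() implementation performs. */
static struct cstate *pick_state(struct cdev *dev, struct cstate *target)
{
	if ((target->flags & FLAG_CHECK_BM) && bus_master_active)
		target = dev->safe_state;	/* deep sleep unsafe: back off */
	dev->last_state = target;		/* record the real choice */
	return target;
}
```

After picking the state, a real enter() would execute the architecture-specific instructions to actually sleep, then return the time spent sleeping in microseconds.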

If the driver needs to temporarily put a hold on cpuidle activity, it can call:

    void cpuidle_pause_and_lock(void);
    void cpuidle_resume_and_unlock(void);

Note that cpuidle_pause_and_lock() blocks cpuidle activity for all CPUs in the system. It also acquires a mutex which is held until cpuidle_resume_and_unlock() is called, so it should not be used for long periods of time.

Power management for a specific CPU can be controlled with:

    int cpuidle_enable_device(struct cpuidle_device *dev);
    void cpuidle_disable_device(struct cpuidle_device *dev);

These functions can only be called with cpuidle as a whole paused, so one must call cpuidle_pause_and_lock() first.

cpuidle governors

Governors implement the policy side of cpuidle. The kernel allows the existence of multiple governors at any given time, though only one will be in control of a given CPU at any time. Governor code begins by filling in a cpuidle_governor structure:

    struct cpuidle_governor {
	char			name[CPUIDLE_NAME_LEN];
	unsigned int		rating;

	int  (*enable)		(struct cpuidle_device *dev);
	void (*disable)		(struct cpuidle_device *dev);
	int  (*select)		(struct cpuidle_device *dev);
	void (*reflect)		(struct cpuidle_device *dev);

	struct module 		*owner;
	/* ... */
    };

The name identifies the governor to user space, while rating is the governor's idea of how useful it is. By default, the kernel will use the governor with the highest rating value, but the system administrator can override that choice.

There are four callbacks provided by governors. The first two, enable() and disable(), are called when the governor is enabled for use or removed from use. Both functions are optional; if the governor does not need to know about these events, it need not supply these functions.

The select() function, by contrast, is mandatory; it is called whenever the CPU has nothing to do and wishes the governor to pick the optimal way of getting that nothing done. This function is where the governor can apply its heuristics, look at upcoming timer events, and generally try to decide how long the sleep can be expected to last and which idle state makes the most sense. The return value should be the integer index of the target state (in the dev->states array).
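A toy version of that decision might look like this. The real in-tree governors ("ladder" and "menu") are considerably more sophisticated, and the structure below is a trimmed stand-in; the sketch just picks the deepest state whose exit latency fits the current latency bound and whose target residency fits the predicted idle period.

```c
#include <assert.h>

struct gstate {
	int exit_latency_us;
	int target_residency_us;
};

/* Toy select(): walk the states in order of increasing depth and keep
 * the deepest one that satisfies both constraints.  State 0 is assumed
 * to always be safe to use. */
static int toy_select(const struct gstate *states, int count,
		      int predicted_idle_us, int max_latency_us)
{
	int i, best = 0;

	for (i = 1; i < count; i++)
		if (states[i].exit_latency_us <= max_latency_us &&
		    states[i].target_residency_us <= predicted_idle_us)
			best = i;
	return best;
}
```

The interesting (and hard) part of a real governor is producing predicted_idle_us in the first place; that is where timer information and past-behavior heuristics come in.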

When making its decision, the governor should pay attention to the current latency requirements expressed by other code in the system. The mechanism for the registration of these requirements is the "pm_qos" subsystem. A number of quality-of-service requirements can be registered with this system, but the one most relevant for cpuidle governors is the CPU latency requirement. That information can be obtained with:

    #include <linux/pm_qos_params.h>

    int max_latency = pm_qos_requirement(PM_QOS_CPU_DMA_LATENCY);

On some systems, an overly-deep sleep state can wreak havoc with DMA operations (trust your editor's experience on this), so it's important to respect the latency requirements given by drivers.

Finally, the reflect() function will be called when the CPU exits the sleep state; the governor can use the resulting timing information to reach conclusions on how good its decision was.

An aside: blocking deep sleep

For what it's worth, driver developers can use these pm_qos functions to specify latency requirements:

    #include <linux/pm_qos_params.h>

    int pm_qos_add_requirement(int qos, char *name, s32 value);
    int pm_qos_update_requirement(int qos, char *name, s32 new_value);
    void pm_qos_remove_requirement(int qos, char *name);

This API is not heavily used in current kernels; most of the real uses would appear to be drivers telling the system that transitions into deep sleep states would be unwelcome. Needless to say, a driver should only block deep sleep when it is strictly necessary; the latency requirement should be removed when I/O is not in progress.
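The usual pattern is to register a requirement just before latency-sensitive I/O begins and drop it as soon as the I/O completes. The sketch below stubs out the pm_qos calls (the real ones live in the kernel's pm_qos implementation) just to show that bracketing pattern; the driver name and the 50µs figure are invented for the example.

```c
#include <assert.h>

static int current_req = -1;	/* -1: no requirement registered */

/* Stubbed stand-ins for the real pm_qos functions declared above. */
static int pm_qos_add_requirement(int qos, const char *name, int value)
{
	(void)qos; (void)name;
	current_req = value;
	return 0;
}

static void pm_qos_remove_requirement(int qos, const char *name)
{
	(void)qos; (void)name;
	current_req = -1;
}

/* Bracket the latency-sensitive period, and no more than that. */
static void start_io(void) { pm_qos_add_requirement(0, "mydrv", 50); }
static void end_io(void)   { pm_qos_remove_requirement(0, "mydrv"); }
```

Leaving the requirement in place outside the I/O window would needlessly keep the processor out of its deeper sleep states.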

And that describes the 2.6.34 version of the cpuidle subsystem and API. For the curious, the core and governor code can be found in drivers/cpuidle, while cpuidle drivers live in drivers/acpi/processor_idle.c and a handful of ARM subarchitecture implementations. All told, it's a testament to the complexity of doing nothing properly on contemporary systems.

Comments (29 posted)

Patches and updates

Kernel trees

Greg KH Linux 2.6.33.3
Thomas Gleixner 2.6.33.3-rt16
Greg KH Linux 2.6.32.12

Architecture-specific

Core kernel code

Development tools

Device drivers

Filesystems and block I/O

Janitorial

Memory management

Networking

Security-related

Mimi Zohar EVM
Serge E. Hallyn p9auth fs: introduction

Virtualization and containers

Page editor: Jonathan Corbet

Distributions

News and Editorials

Tools and distributions for embedded Linux development

April 27, 2010

This article was contributed by Tom Parkin

The deployment of Linux on the desktop and in the server room is well served by the general-purpose distribution. In the embedded world things are very different: although Linux is used widely, the concept of the general-purpose distribution is much less in evidence. Many vendors rely on forked board support packages or home-grown builds to create their systems, effectively creating their own customized distribution in the process. While embedded platforms represent a challenge to the traditional Linux distribution, there is no shortage of community projects to support the development of embedded Linux systems.

What makes embedded systems different?

The primary challenge presented by embedded systems is their diversity. Embedded device hardware tends to vary both in terms of design and capability. Even between generations of the same product, it is not unusual for a major system component such as the CPU to change completely. In addition, most embedded devices contain a fair amount of custom, closed-source software which is not packaged by a distribution. The combined effect of this hardware and software diversity is that most embedded devices are unique in one way or another, meaning it is difficult to design an embedded distribution which is simultaneously general enough to be useful and targeted enough to be efficient across a range of embedded platforms.

To make matters worse, embedded Linux development often takes place behind closed doors. In some cases this is due to a lack of hardware to support wider community involvement, since development platforms for many embedded devices will only exist within the labs of the companies manufacturing them. In other cases, manufacturers of embedded Linux devices rely on keeping their designs as secret as possible in order to keep ahead of the competition. This secrecy tends to stifle the community engagement that helps fuel the development of many desktop and server distributions.

In addition to the closed nature of many embedded projects, embedded Linux distributions face the challenge of low perceived utility within developer communities. Given the heavily customized nature of much of the software within a given device, developers tend to view any external software as a starting point rather than a completed product. Rather than a distribution being a central provider, it is simply another component in a larger project.

Meeting the challenge

Despite the challenges of the embedded Linux ecosystem, embedded Linux tools and distributions can offer real value to developers working within that space. Various approaches are being developed by a number of embedded Linux projects, generating a suite of tools which may be brought to bear on a particular project according to its needs.

Dispensing with tools: the DIY approach

If complete control over configuration is important to an embedded developer, then the most efficacious approach is to build the whole system manually. This would typically involve hand-crafting a cross-compilation toolchain targeting the embedded device, then downloading and building the individual packages required to make up the software stack. Although a somewhat arduous process, the DIY approach to embedded Linux offers several key benefits, including the ability to fine-tune packages to requirements, an enforced understanding of the precise contents of the embedded root file system, and the ability to pick and choose system components in order to meet project goals.

The basics of putting together a DIY embedded Linux system are illustrated in Karim Yaghmour's book Building Embedded Linux Systems. In addition, the Cross Linux From Scratch guide covers the creation of a completely DIY cross-compiled Linux installation in greater detail.

Meta-distributions and configurable build architectures

The DIY approach to an embedded distribution offers the greatest control and flexibility, but it does require a large investment of time and no small amount of expertise to get a basic system up and running. To help mitigate this, a number of configurable build tools exist which simplify the process. This approach combines the flexibility of selecting, configuring, and building sources by hand with the convenience of a framework which hides much of that detail until a developer needs it. At the time of this writing, several different build architectures are in active development.

Buildroot is perhaps the simplest of the build framework projects. Buildroot provides a menu-driven configuration similar to that used by the Linux kernel, and an automated build based around makefiles. It can be used to build anything from a cross-compilation toolchain right up to a complete root filesystem for the device. It also provides a simple mechanism for developers to patch software packages as they are built. Since its initial release in 2005, Buildroot has achieved considerable popularity in a range of applications, which can include anything from chip manufacturer board support packages to embedded firmware for rugged displays.

OpenEmbedded is a more complex system which utilizes a custom build tool, Bitbake, to provide information on how to build individual packages. Bitbake is derived from Gentoo's portage system, and many of the concepts utilized in Bitbake will be familiar to Gentoo users. OpenEmbedded provides Bitbake build recipes for many packages, from cross-compilation tools to the Qt framework and X. It is used as the build tool for a number of Linux distributions, including Ångström and Openmoko, and is also used by Gumstix to create firmware for their devices.

In contrast, Scratchbox takes a different approach from both OpenEmbedded and Buildroot. Rather than providing build instructions for the cross-compilation of packages, Scratchbox provides a chrooted environment in which the default build tools produce cross-compiled code. In this environment, packages may be built as though they were being built natively, thereby avoiding much of the hassle of configuring packages for cross-compilation. In addition to providing a compilation environment, Scratchbox also integrates Qemu to provide an emulated platform for running cross-compiled code. This could be a useful feature early in the development of a new embedded device when the actual hardware is not yet available. Scratchbox has been used by the Maemo project, which is used in Nokia's Internet Tablet products, as well as the N900 phone.

Finally, Firmware Linux marries the simplicity of Buildroot to the native compilation approach of Scratchbox. Starting life in 2001 as a means of automating a Linux From Scratch build, Firmware Linux has developed into a general set of scripts for the creation of a native development environment for the target hardware. Similar to Scratchbox, Firmware Linux supports the use of Qemu as an emulation environment for running target binaries and compiling code.

Miniature desktop distributions

Automated build environments allow developers to build complete embedded systems from source. At the same time, a number of projects are developing distributions which approach embedded software from the opposite direction, by slimming down general-purpose distributions to fit the embedded space.

Emdebian aims to make Debian suitable for embedded devices. So far, this effort has taken two complementary paths. Firstly, Emdebian Crush provides a heavily modified set of packages based around a Busybox root filesystem to provide the smallest possible installation of Debian. At the time of this writing, Crush supports ARM platforms only. Parallel to working on Crush, the Emdebian team has been developing Emdebian Grip, a low-fat install which tracks the development of the forthcoming stable release of Debian. Emdebian Grip has been used in Toby Churchill's SL40 Lightwriter, and some of Digi's ARM-based embedded modules, as well as various silicon provider board support packages.

Similar to Emdebian, Gentoo Embedded (aka Embedded Gentoo) works to make the Gentoo distribution more suitable for embedded devices. This brings Gentoo's portage package management tool to bear on the embedded space, and also provides a handbook which guides users through the process of putting together a Gentoo Embedded system, a process which will be generally familiar to users of the Gentoo handbook for desktop and server systems. Readers may be familiar with Gentoo Embedded through its use in Google's Chromium OS project, where it has been employed as a cross-compilation tool to build AMD64 binaries on an x86 host.

Commercial offerings

Alongside freely available open source embedded Linux tools and distributions, there are a number of commercial embedded Linux distributions available. Just as with desktop Linux, a commercial agreement with a vendor may offer various incentives to reassure pointy-haired bosses: guaranteed integration support, a quicker time-to-market, enhanced development tools, etc. MontaVista Linux is one of the most prominent offerings in this space; however, a number of other commercial embedded distributions exist, including Wind River Linux and BlueCat Linux.

Complete software stacks

Recent initiatives have seen the development of complete, application-specific software stacks for embedded platforms. Examples include Google's Android and Nokia's Maemo project, which are geared toward mobile phones and internet tablets; or one of the many network router firmware projects (e.g. FreeWRT, Tomato, or OpenWrt). These projects can offer a complete solution for certain developments, although it is more likely that they are employed as a foundation to build upon. For example, Android is in use in the Archos 5 internet tablet and Barnes and Noble's "nook" e-book reader, and Maemo has been merged with Intel's Moblin to form MeeGo.

Many paths to enlightenment

Although the nature of embedded devices makes a general-purpose Linux distribution difficult to develop, there is nonetheless a range of projects which support embedded Linux development at every level. The Cross Linux From Scratch project provides a simple guide to hand-crafting a system, the various build frameworks and slimline versions of desktop distributions provide a sliding scale of automation for the creation of a distribution, and the complete stacks provided by the likes of Android and MeeGo provide an integrated foundation for further development.

Of the range of embedded Linux tools, the projects generating the most press are the likes of Android and MeeGo. This is partially due to the large corporate sponsors behind these projects, but it may also be because they are starting to change the face of embedded Linux by commoditizing the software stack for subsets of embedded devices. In so doing, Android and MeeGo may suggest a possible future development path for embedded Linux as a whole: for many applications, an off-the-shelf stack could deliver an enticingly short time-to-market along with the added incentive of high community involvement in the wider project. Whether or not this model can support enough diversity to make it appropriate across the embedded ecosystem is not clear, but the potential benefits to manufacturers and end users may make it an attractive way to deliver Linux in a range of other embedded markets.

Comments (20 posted)

New Releases

Release candidate for Ubuntu 10.04 LTS now available

The release candidate for Ubuntu 10.04 LTS (Lucid Lynx) is now available. See the release notes for "instructions and caveats" before installing. "The Ubuntu team is pleased to announce the Release Candidate for Ubuntu 10.04 LTS (Long-Term Support) Desktop and Server Editions and Ubuntu 10.04 LTS Server for Ubuntu Enterprise Cloud (UEC) and Amazon's EC2, as well as Ubuntu 10.04 Netbook Edition. Codenamed 'Lucid Lynx', 10.04 LTS continues Ubuntu's proud tradition of integrating the latest and greatest open source technologies into a high-quality, easy-to-use Linux distribution. [...] We consider this release candidate to be complete, stable, and suitable for testing by any user." Click below for the full announcement.

Full Story (comments: 27)

Distribution News

Debian GNU/Linux

DPL delegations

Debian Project Leader Stefano Zacchiroli has announced delegations for the Debian System Administration team and for the Debian auditor. Luca Filipozzi, Martin Zobel-Helas, Peter Palfrader and Stephen Gran have been appointed to the DSA team, and Luk Claes has been appointed as the Debian auditor.

Comments (none posted)

Bits from the 3rd Debian Groupware Meeting

Click below for a few bits from the Debian Groupware Meeting. Topics include getting Icedove 3 (Thunderbird), Iceowl (Sunbird) and Evolution ready for squeeze.

Full Story (comments: none)

Welcome to our 2010 Debian Google Summer of Code students!

Debian welcomes the students selected for this year's Summer of Code. Click below to see who the students are and what they are working on.

Full Story (comments: none)

Fedora

Fedora 13 Spotlight Features

The Fedora Team has started a series of blog posts highlighting features in Fedora 13. The first post looks at NetworkManager. "NetworkManager started in 2005 as the brainchild of Red Hat developer Dan Williams, as his answer to the challenge of making networking in Linux simple and painless for users. NetworkManager rapidly grew from primarily being a wireless helper into a full-featured solution that was still simple and elegant enough to meet the needs of desktop users."

The second post in the series explores new features for Python developers. "This second post dives into several new and innovative features planned for Fedora 13 that we believe will be useful to Python developers of all levels. Early in the development cycle, Fedora engineering team member and contributor David Malcolm started working on more intelligent tracing and debugging features that solve some common difficulties. This work serves as an example of how Fedora leads by solving technical problems through the power of open source."

Comments (none posted)

Quick heads-up: Ubuntu memory leak issue

Adam Williamson reports that a memory leak in the Ubuntu 10.04 GLX package is not present in Fedora. "Hey everyone, just a quick heads-up. Some of you may have read about a memory leak that cropped up very late in Ubuntu 10.04 development process. They kindly put this phrase in their explanation of the bug: "One possible solution is to roll back the GLX 1.4 enablement patches, and the patch which caused the memory leak to appear. These GLX patches were produced by RedHat and incorporated into Debian, they were not brought in due to Ubuntu-specific requirements" which can obviously create the impression that the patches in question actually come from Red Hat Enterprise Linux, or from Fedora."

Full Story (comments: none)

Fedora Elections: Nominations open Sat 2010-04-24

Nominations are open for seats on the Fedora Advisory Board and the Fedora Engineering Steering Committee (FESCo). "You can nominate yourself, or someone else. It's recommended if you intend to nominate someone else that you consult with that person first." Nominations are open until May 2, 2010.

Full Story (comments: none)

Upcoming Fedora IRC Classroom Sessions

Click below for a list of Fedora IRC Classroom Sessions from April 27 - May 10, 2010.

Full Story (comments: none)

Fedora 11 EOL

Fedora 13 is scheduled for release on May 18, 2010. Therefore the end-of-life (EOL) for Fedora 11 will be June 18, 2010. "When Fedora 11 reaches EOL at that time, bugs open for Fedora 11 will be closed as previously described."

Full Story (comments: none)

Mandriva Linux

New mailing list : mandriva-kde

Mandriva has a new mailing list for KDE related discussions, problems and issues.

Full Story (comments: none)

SUSE Linux and openSUSE

Advance discontinuation notice for openSUSE 11.0

SUSE Security has announced that support for openSUSE 11.0 will be discontinued on June 30, 2010. "As a consequence, the openSUSE 11.0 distribution directory on our server download.opensuse.org will be removed from /distribution/11.0/ to free space on our mirror sites. The 11.0 directory in the update tree /update/11.0 will follow, as soon as all updates have been published. Also the openSUSE buildservice repositories building openSUSE 11.0 will be removed."

Full Story (comments: none)

Ubuntu family

Shuttleworth: A global menu for Ubuntu 10.10 Netbook Edition

Mark Shuttleworth describes a global menu for the 10.10 Netbook Edition. "In the netbook edition for 10.10, we're going to have a single menu bar for all applications, in the panel. Our focus on netbooks has driven much of the desktop design work at Canonical. There are a number of constraints and challenges that are particular to netbooks, and often constraints can be a source of insight and inspiration. In this case, wanting to make the most of vertical space has driven the decision to embrace the single menu approach."

Comments (29 posted)

Regional Membership Board nominations

Mark Shuttleworth has announced that nominations are open for the Regional Membership Board. "We're seeking nominations to all three Regional Membership Boards. Ideal candidates have a track record good judgment - and a willingness to support positive contributions matched only by their willingness NOT to be drawn into supporting factions, personalities and cabals. In any community of scale (and Ubuntu is at a larger scale than most) there will always be people making fascinating and unexpected (and hard to evaluate) contributions, as well as people who want to further their own ambitions at the expense of others. Being able to tell the difference, and recognizing those who are going to continue to raise the bar for Ubuntu, is a skill."

Comments (none posted)

Distribution Newsletters

DistroWatch Weekly, Issue 351

The DistroWatch Weekly for April 26, 2010 is out. "It's the Ubuntu release week, but before we get a chance to swarm the download mirrors with our insatiable desire for new releases, there is still time to cover some of the lesser-known distributions. This week we'll take a look at Scientific Linux, a surprisingly popular Linux option - and not just for sever rooms of high-tech research rooms and leading laboratories. In the news section, Red Hat announces the first public beta of a Xen-less Red Hat Enterprise Linux 6, Fedora continues its strong march towards version 13 with a plethora of (mostly invisible) features, Edubuntu prepares for a launch of a much improved Ubuntu flavour for schools, and MEPIS releases a set of USB images with extra applications and language support. Also in this issue, a look at the current state of affairs at Xandros and an explanation about motives behind 6-month release cycles which many popular distributions adopted in recent years. All this and more in this week's issue of DistroWatch Weekly - happy reading!"

Comments (none posted)

Fedora Weekly News 222

The Fedora Weekly News for April 21, 2010 is out. "In project-wide announcements, we have news of a time extension on the Fedora Summer Coding project, a request for comments on a new Fedora contributor agreement to replace the CLA, and the start of the always-exciting naming process for the next Fedora release. From Planet Fedora we hear of a new IcedTea (Java) release, a PyGTK hackfest, some suggestions for sticking more closely to the release schedule, and an interesting essay from Dan Williams on CDMA 3G data adapters. QA tells us all about the recent Graphics Test Week, and some results of Fedora 13 Beta testing. From the Translation team we hear about the start of Fedora 13 User Guide translation and an improvement in the arrangements for updating the Translation Quick Start Guide (TQSG). Finally, the design team lets us know about an update to the Fedora 13 wallpaper. Enjoy FWN 222!"

Full Story (comments: none)

openSUSE Weekly News/120

The openSUSE Weekly News for April 24, 2010 is available. "Now the sixteenth Week has come to a close, and we are pleased to announce our new issue. Last Week started the openSUSE Kernel Review, and we're pleased to announce our new report from the testing Team. We wish you much enjoyment in reading it..."

Comments (none posted)

Ubuntu Weekly Newsletter #190

The Ubuntu Weekly Newsletter for April 24, 2010 is out. "In this issue we cover Announcing the Release Candidate for Ubuntu 10.04 LTS, Call for UDS Plenaries and Lightning Talks, Ubuntu Launchpad Single-Sign-On Now Open Source, Mark Shuttleworth promotes Ubuntu Party of Paris, Farewell to the notification area, New Ubuntu Members, LoCo Council Meeting, Ubuntu's success at Anime Boston 2010!, LoCo Stories: the Asturian team drives the Asturian Language Academy migration to Ubuntu, Ubuntu Turkey LoCo Team Interviews Canonical CTO, Matt Zimmerman, Ubuntu-NY Lucid release party, Launchpad in the second half of 2010, Full Circle Magazine Podcast Episode 5: Manual Dexterity, Ubuntu Lucid Lynx 10.04 RC released on time, OMG Interviews Elizabeth Krumbach, Ubuntu women, learning team, beginners team all star, Ubuntu Women World Play Day Competition - Win a new netbook!, Ubuntu Manual Project To Release Quickstart Booklets For Ubuntu 10.10, Ubuntu: How to Measure Canonical's Business Progress and much, much more!"

Full Story (comments: none)

Interviews

Keeping 1000 devs focused: new Debian leader speaks (ITWire)

ITWire has an interview with newly elected Debian Project Leader Stefano Zacchiroli. "Zacchiroli, a post-doctoral fellow at a university in Paris, thus has a fairly tough task ahead as he begins his term. He took some time out to speak to iTWire about his plans for the year."

Comments (none posted)

Distribution reviews

EasyPeasy and the Challenges of Linux Netbook Design (Datamation)

Bruce Byfield reviews EasyPeasy, an Ubuntu based distribution for netbooks. "Netbook desktops in free and open source software (FOSS) are in a state of rapid development. Should a netbook be treated as more as a mobile device than as a laptop? Should developers assume that netbooks are used for light computing such as social networking, rather than for productivity? These are just two of the questions whose answers affect the design of any netbook desktop. However, you would have to search long and hard to find any distribution that answers such questions as openly as the EasyPeasy distribution does in its 1.6 release candidate (RC)."

Comments (none posted)

Page editor: Rebecca Sobol

Development

What is Open Graph?

April 28, 2010

This article was contributed by Nathan Willis

Social networking site Facebook's latest API update, dubbed "Open Graph," is generating diverse debate — about its genuine "open"-ness, its impact on other web services, and (like other Facebook initiatives before it) its approach to user privacy and security. Those issues can easily get conflated, so it is worth taking a close look at what Open Graph actually is in order to make a solid critique.

Bird's eye view

Broadly speaking, Facebook's value is in its network effect — that is, there is only one Facebook, and a large number of people are using it, so a content producer has access to the entire audience at a single entry point. Up until now, content producers have primarily reached out to this audience with "Facebook Pages" — which, despite the vague name, are a very specific site feature. Any business, book, album, or product that wanted to connect with fans had to create a Facebook Page on the site, then collect "fans" — essentially re-creating a duplicate of its off-network identity, confined within Facebook itself. Page owners then marketed their product to fans in traditional ways: sending messages, posting updates, soliciting feedback, and so on.

Open Graph is designed to duplicate that marketing process but free the content producer from having to re-create the in-network Facebook Page for each product. The producer can instead add semantic tags to any web page that identify it as a particular entity in Facebook's database, and add widgets to the page with which Facebook-using visitors can interact (including adding themselves as fans).

The producer can also add widgets to the web page that extract data from Facebook's aggregated database, such as "recommendations" based on common fan-bases, news items posted by fans, and other activity. To extract that data, Facebook introduced an API alongside the Open Graph tagging syntax itself. The Graph API can only be used to request data from Facebook's servers — and although it is regarded as an improvement on its predecessor APIs, it is also the source of many critics' privacy concerns. It exposes more information about user accounts than many feel is appropriate, and Facebook silently changed its privacy policy to opt everyone in to the API without asking.

The other major criticism of the system is that it stores all of the user-to-product relationship data inside the Facebook silo itself, which creates a single point of failure, and stores it permanently, which affects user privacy. Finally, in spite of prepending the word "Open" to its name, Open Graph was not developed in the open, and at present there does not appear to be anyone outside of Facebook using it for anything. That, of course, does not mean that there is no room for others — including the open source software community — to make use of Open Graph in one form or another.

Tagging content

Open Graph Protocol (OGP) is Facebook's term for the semantic metadata content owners can add to their sites to enable Facebook's servers to reference them. In its current implementation, OGP is a Resource Description Framework in attributes (RDFa) ontology. RDF is the "semantic web" technology created by the W3C; RDFa is a technique for embedding it in a specific way, within special, structured HTML attributes. The ontology is the formal description of the objects, classes, and relationships being modeled. OGP defines RDFa properties in the "og:" namespace, which site creators place in <meta> tags inside the HTML document's <head> section.

A web page can be associated with only one "object," and the official schema defines a set of 38 object types, covering common activities, businesses, places, celebrities, and entertainment products. Several object properties are defined in the schema as well, from location information to contact information, plus general attributes such as "image" and "url."

For example, the movie "Primer" on the Internet Movie Database (IMDB) might include the following tags:

    <meta property="og:title" content="Primer"/>
    <meta property="og:type" content="movie"/>
    <meta property="og:url" content="http://www.imdb.com/title/tt0390384/"/>
    <meta property="og:site_name" content="IMDb"/>

Precisely what constitutes an "object" in OGP is not strictly defined — is the DVD release of "Primer" the same object as the theatrical release described on IMDB, for example? Are multiple editions of the same book separate objects? OGP does not currently take a position; Facebook's documentation is focused on the "user X likes product Y" relationship, which is not specific. Other OGP adopters may find the specification needs enhancement in this area.

In practice, Facebook's partner sites also make use of the Facebook Markup Language (FBML) namespace, which is not part of OGP itself, and would include properties such as "fb:admins" and "fb:app_id" that tie directly in to Facebook services.

As of today, OGP has only an informal specification, with examples but not a complete reference, hosted at opengraphprotocol.org. The site says that the specification has been released under the Open Web Foundation Agreement version 0.9, which specifies a royalty-free grant of copyright and patent non-assertion. The site also states that much of the syntax for OGP's properties mimics that found in the hCard microformat specification and the Dublin Core metadata ontology.

OGP's status as an "open" specification was challenged by Chris Messina, who directed his question to Facebook's Dave Recordon, in particular asking who (if anyone) outside of Facebook had participated in the specification's design. On the Open Web Foundation mailing list, Recordon eventually stated that OGP was created by Facebook engineers, with only "feedback" from others.

The API

Like OGP, the Graph API is simple in and of itself. It is RESTful and returns objects in JSON format. All Facebook objects — from people to pages, groups, applications, and events — are accessed the same way, with an https://graph.facebook.com/ID URL, where ID is the official page ID already found in Facebook URLs. Appending /CONNECTION_TYPE to the URL allows the application to query relationships, such as /friends, /photos, /notes, /attending, or /likes — with the valid relationship types varying depending on the object. The API supports basic object introspection, and uses OAuth 2.0 for authentication.
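The URL scheme described above is simple enough to sketch in a few lines of Python. The helper below is hypothetical (it is not part of any Facebook SDK), and a real request against these endpoints would also need a valid OAuth 2.0 access token; it is shown here only to illustrate how object IDs and connection types compose into REST URLs.

```python
# A minimal sketch of the Graph API URL scheme as the article describes it:
# every object lives at https://graph.facebook.com/ID, and appending
# /CONNECTION_TYPE (e.g. /friends, /likes, /attending) queries a relationship.
# graph_url() and fetch_object() are hypothetical helpers, not a real SDK.
import json
from urllib.request import urlopen

GRAPH_ROOT = "https://graph.facebook.com/"

def graph_url(object_id, connection=None, access_token=None):
    """Build the REST URL for an object or one of its connections."""
    url = GRAPH_ROOT + str(object_id)
    if connection:
        url += "/" + connection          # e.g. /friends, /photos, /likes
    if access_token:
        url += "?access_token=" + access_token
    return url

def fetch_object(object_id, connection=None, access_token=None):
    """Fetch and decode the JSON the server returns (network call)."""
    with urlopen(graph_url(object_id, connection, access_token)) as resp:
        return json.load(resp)

# URL construction only -- no network traffic:
print(graph_url("19292868552"))            # the object itself
print(graph_url("19292868552", "likes"))   # one of its connections
```

The same pattern covers every object type, which is the API's main selling point: people, pages, groups, and events all share one addressing scheme, differing only in which connection types are valid for them.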

The privacy concerns start with the fact that Facebook altered its privacy policy when the Graph API launched, setting all of its users' permissions to allow sites using the API to retrieve their personal information. But there are other concerns as well: the Graph API can not only read users' information, it can also publish new content (writing to a user's wall, creating comments, and responding to event invitations), and it makes data views accessible that were never available in previous APIs, such as the full list of events a user is attending.

The button

Bloggers have given considerable attention to the final piece of the puzzle, the "Like this" button that Open Graph-using sites are supposed to add to enable visitors to show their support for the page. As one might expect given OGP and Graph API, all the Like button does is send a Graph API call to Facebook's servers reporting the user ID of the visitor and the metadata about the page culled from the OGP properties in the page header.

To the content publisher, the result is exactly the same as the old Facebook Page fan-collecting process, but with less development overhead. IMDB can automatically collect "likes" from Facebook users by generating OGP headers for every page, rather than having to build separate in-network pages for each entity inside Facebook itself.

But there is more to it than that, at least in theory. OGP's "objects" are supposed to refer to real-world entities, such as Ben Affleck, not to pages, such as http://www.imdb.com/name/nm0000255/. Thus the Facebook server can parse property="og:name" content="Ben Affleck" on both IMDB and on Netflix, recognize that both sites refer to the same person, and react accordingly with advertising, recommendations based on other users' behavior, and so on.

It is not yet clear how external sites using Open Graph will have access to this aggregation of content, or if it will be limited to Facebook itself. Sites will, however, be able to use more complicated Graph API widgets to build on the behavior of their site visitors — the Recent Activity and Recommendations plugins are Facebook's existing examples. It is likely that it was this advanced usage that prompted Facebook to quietly do away with its previous policy of caching user behavior for a maximum of 24 hours — tracking long-term and large-scale behavior patterns requires more than that. That reversal was announced, with less fanfare, alongside Open Graph itself. Facebook is also offering more detailed user tracking to page owners as part of its Insights program, and has expanded the amount of user information it offers in this way.

Apart from the aggregation and user-tracking questions, the Like button's implementation has critics. ReadWriteWeb notes that a Like button can be written to link to a totally different page than the one on which it is placed, and advocates that users save and use a special JavaScript "bookmarklet" instead.

The rest of the Internet

Many advocates of open web technology decried Open Graph primarily on the grounds that if the "Like this" button becomes ubiquitous, Facebook will be the de-facto owner of all "like this"/"share this"-style conversations and behavior. That prospect should certainly worry businesses that rely on network effects for generating their own ad revenue or sales, such as Digg or Hotels.com. What else it threatens is less clear.

Almost immediately, a project named OpenLike appeared, setting out to replicate the Open Graph "Like this" button, but redirecting the API to another server. At present it appears to do little (if anything) more than the multi-service offerings of ShareThis.com or its competitors. Starting such a project is an understandable reaction, but regardless of who gets the API hit, if it is aggregated on the server, it is not necessarily more open or free than at Facebook. Making the data publicly accessible has its own privacy problems.

Alex Iskold at ReadWriteWeb believes that Open Graph's biggest effect will be that large sites will begin marking up their pages with semantic information, and nothing prevents other web or software developers from making good use of the RDFa tags. This is probably true; the initial list of partner sites is small (Microsoft, Pandora, and Yelp), but the OGP documentation alludes to many more, including reference sites like IMDB.

Many open source applications harvest information from reference sites like IMDB, and at present have to do so via screen-scraping code that requires an update every time the site markup or layout changes. Consider all of the collections-management and media center applications that scrape cover images and screenshots from IMDB, for example. If the site does implement OGP, the applications' task becomes much simpler overnight.
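To see why OGP parsing beats screen-scraping, consider how little code it takes to pull the "og:" properties out of a page using only the Python standard library. This is an illustrative sketch, not code from any existing application; the sample HTML simply repeats the article's IMDB example, and a real program would fetch the page over HTTP first.

```python
# Hypothetical sketch: extract OGP properties from <meta> tags in a page's
# <head>, rather than screen-scraping the site's layout-dependent markup.
from html.parser import HTMLParser

class OGPExtractor(HTMLParser):
    """Collect property/content pairs from <meta property="og:..."> tags."""
    def __init__(self):
        super().__init__()
        self.properties = {}

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attrs = dict(attrs)
        prop = attrs.get("property", "")
        if prop.startswith("og:") and "content" in attrs:
            # Strip the "og:" namespace prefix for convenient lookup
            self.properties[prop[3:]] = attrs["content"]

# The article's IMDB example, embedded in a minimal page:
page = """<html><head>
    <meta property="og:title" content="Primer"/>
    <meta property="og:type" content="movie"/>
    <meta property="og:url" content="http://www.imdb.com/title/tt0390384/"/>
    <meta property="og:site_name" content="IMDb"/>
</head><body>...</body></html>"""

parser = OGPExtractor()
parser.feed(page)
print(parser.properties["title"], parser.properties["type"])  # Primer movie
```

Unlike a scraper keyed to the site's visual layout, this only depends on the stable, documented meta tags, so it keeps working when the page design changes.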

That is not to say that OGP is perfect; far from it. It is not the only semantic web project by a long shot — it overlaps with Friend-of-a-Friend (FOAF), hCard, and Activity Streams in some respects, such as event and group descriptions, and competes with GoodRelations on product descriptions. In addition to that, WHATWG is working on an alternative semantic markup scheme called Microdata that encodes metadata in HTML attributes rather than in <meta> tags. This technique makes it possible to mark up multiple items in a single page, rather than limiting each page to a single object as in OGP.

Furthermore, if OGP is to become a useful ontology, Facebook will have to genuinely give up control over it, even when its customers do not like the direction the specification moves; whether Facebook is amenable to that is an open question. Certainly the opportunity is there for other big players (such as Google) to leverage a large pool of OGP semantics against its own hot properties (such as Gmail, Google Reader, and Google Maps) in ways that Facebook would not. Hopefully open source will not be late to the battle.

To others, the principal source of worry is that Open Graph morphs users' Facebook accounts into their general online identity on multiple sites. Clearly commercial services will resist this encroachment; Twitter, eBay, and PayPal have too much at stake in their in-house identity systems to cede that to another company — but a lot of smaller publishing outlets and individual blogs may be tempted to tie in to Facebook solely to increase hit counts. Here the biggest open source challenger is Mozilla, which has been working on lifting identity management out of the server and keeping it within the browser. On April 28, Mozilla announced the first release of its Account Manager add-on, which automates account creation for users and for sites.

There are certainly efforts to build decentralized "Facebook-like" social networking services in an open manner, such as the BuddyPress plugin for WordPress. StatusNet, the company behind the popular Identi.ca microblogging site, has created OStatus as a general-purpose activity-and-status-update extension of its original microblogging protocol that can exchange Facebook-like general traffic. The big challenge these and other open projects face is gaining a significant foothold with the user community — starting from scratch.

The privacy-sensitive have probably stopped using Facebook altogether, and of course, everyone is free to keep an eye on their own personal information's security by monitoring the site's privacy policy. But those concerned about the continued openness of the web itself would be wise to turn their attention to the data that will suddenly become available to them, rather than the button attracting all of the attention. The power of a network like the one Open Graph is designed to create is in the links between the elements, to be sure; with the data OGP adoption could make available, the race is on to do something better with it.

Comments (4 posted)

Brief items

Quotes of the week

Anyone who thinks that Perl 6 is fundamentally based on traditional compiler construction techniques taught in universities frankly has no clue as to what a fundamental paradigm shift Perl 6 represents to language design and implementation. It's this fundamental change that ultimately gives Perl 6 its power, but it's also why Perl 6 development is not a trivial exercise that can be done by a few dedicated undergraduates.
-- Patrick Michaud

I am stepping down as the [GCC] spu backend maintainer since Sony removed GNU/Linux (OtherOS) support from their newer PS3 firmware. The main reason is I will no longer have access to a machine to support the target. But really this is also a step backwards for free software support from Sony.
-- Andrew Pinski

I love Emacs, and I have come to really like Bazaar, but I can not help but feel that by helping the Emacs development team move to Bazaar I have actually done both of these communities a disservice. Bazaar has received nothing but bad publicity from the switch, and the Emacs development group appears to have been hampered more than helped--despite the fact that Emacs was switching from crufty CVS.
-- Jason Earl

Comments (none posted)

Doxer 10.04.0 available

Doxer is a documentation system based around a wiki-style markup parser and a Drupal module for online service. It also has tools for extracting documentation from source code. Version 10.04 is the first release of Doxer as a standalone package.

Full Story (comments: none)

The Fennec browser for Android

There is now a "pre-alpha" version of the Fennec browser available for Android 2.0+ platforms. It comes with a number of caveats - when a Firefox-based browser has explicit warnings about memory use one should pay attention - but it should be an interesting thing for adventurous users to test.

Comments (22 posted)

LLVM 2.7 released

Version 2.7 of the LLVM compiler is out. There's a lot of new stuff, including improved C++ and beginning Objective-C support in clang, support for the MicroBlaze architecture, "major progress" in the MC internal assembler (described briefly in this article), static analysis improvements, better virtual machine support, and more. Details can be found in the release notes.

Full Story (comments: 108)

Notmuch release 0.3 now available

Version 0.3 of the "notmuch" email client is available. "The major theme of this release is a huge number of improvements to the emacs interface to notmuch. There's now a lovely new 'welcome screen' that provides a search bar, recent searches, and saved searches. It looks nice, it's extremely convenient to use, and we think it makes a great model for what a 'search-based email interface' should look like. (So we're hoping that someone will imitate it in an upcoming graphical interface to notmuch)." Note that this release had a bug or two, so there is a 0.3.1 update with fixes available.

Full Story (comments: none)

Qt Mobility 1.0 released

Version 1.0.0 of the Qt Mobility package has been released. Qt Mobility provides a set of APIs aimed at mobile applications; they cover areas like connectivity, sensors, contact management, location, and more. This code will eventually find its way into a MeeGo release. See this white paper [PDF] for more information.

Comments (none posted)

sighttpd 1.0.0 released

Sighttpd is an HTTP server focused on the distribution of realtime streaming data in an embedded context. It is thus useful for applications like camera controllers. The 1.0.0 release has just been announced: "This release includes a new configuration file syntax allowing a single sighttpd instance to provide multiple streams. Examples are provided for streaming Ogg Theora, Motion JPEG and MPEG-4 AVC (H.264) video."

Full Story (comments: none)

x264 adds Blu-ray support

The developers of the x264 video encoding library have announced that the library is now capable of creating video in the Blu-ray format. It is, they say, the first free Blu-ray encoder. "With x264's powerful compression, as demonstrated by the incredibly popular BD-Rebuilder Blu-ray backup software, it's quite possible to author Blu-ray disks on DVD9s (dual-layer DVDs) or even DVD5s (single-layer DVDs) with a reasonable level of quality. With a free software encoder and less need for an expensive Blu-ray burner, we are one step closer to putting HD optical media creation in the hands of the everyday user."

Comments (4 posted)

Open source from the White House

The White House - the seat of the US presidency - has announced that it is releasing some of its improvements to the Drupal content management system. "By releasing some of our code, we get the benefit of more people reviewing and improving it. In fact, the majority of the code for WhiteHouse.gov is already open source as part of the Drupal project. The code we're releasing today adds to Drupal's functionality in three key ways." It is nice to see that the president's office cares about such things.

Comments (12 posted)

Newsletters and articles

Page editor: Jonathan Corbet

Announcements

Commercial announcements

Canonical Announces Strong ISV and Open Source Ecosystem Support for Ubuntu 10.04 LTS

Canonical has announced strong software vendor support for the upcoming Ubuntu 10.04 LTS (Long-term Support) release for both server and desktop. "'A strong and varied ISV ecosystem is critical for Ubuntu to thrive and grow both on user's desktops and in the world's datacentres,' said Jane Silber, CEO, Canonical. 'We expected Ubuntu 10.04 LTS to be popular with existing and prospective software partners, but response is fantastic. Users considering switching to Ubuntu or upgrading to 10.04 LTS will be encouraged by this industry support and reassured that they can use many of their favourite applications on what we are sure will become their favourite operating system.'"

Comments (none posted)

Articles of interest

German appeal court upholds Microsoft FAT patent (The H)

The H reports that Microsoft's FAT patent has been upheld in a German appeals court, and is thus, it seems, valid in Germany. "In judgement number X ZR 27/07, handed down on Tuesday, the tenth civil division of the Karlsruhe-based court confirmed the enforceability of the company's commercial rights in Germany. It has not yet published its reasoning, but has confirmed the decision in a short press release."

Comments (22 posted)

Why Making Money from Free Software Matters (The H)

Glyn Moody looks at making money with free software. "As companies like Red Hat have grown in size and profitability, so has the credibility of free software options among larger enterprises. Profits mean that other, smaller open source companies can be bought, providing a useful payback for entrepreneurs that encourages others to enter the fray by funding new open source start-ups. Money also means more influence at the political table through increased visibility and clout."

Comments (42 posted)

Microsoft claims Android steps on its patents (CNet)

CNet reports that Microsoft is now claiming that Android, too, infringes on its patents. "Microsoft and HTC announced they have inked a new patent deal that specifically provides the Taiwanese cell phone maker with the right to use Microsoft's patented technologies in phones running Google's Android operating system. Microsoft said it has been in talks with other phone makers." (Thanks to Amit Shah).

Comments (5 posted)

Report: Fair use in the US economy

The Computer & Communications Industry Association has released a report putting a value on the "fair use economy" in the US. "The research indicates that the industries benefiting from fair use — and other limitations and exceptions — make a large and growing contribution to the U.S. economy. The fair use economy in 2007 accounted for $4.7 trillion in revenues and $2.2 trillion in value added, roughly one-sixth of total U.S. GDP. It employed more than 17 million people and supported a payroll of $1.2 trillion. Fair use companies generated $281 billion in exports and rapid productivity growth." Of course, these are white-paper numbers, only so far removed from random. Perhaps more interesting is the long survey of industries which are said to rely on fair use.

Comments (1 posted)

Linux audio explained (Techradar)

Techradar tries to make sense of audio on Linux. "Linux's audio architecture is more like the layers of the Earth's crust than the network model, with lower levels occasionally erupting on to the surface, causing confusion and distress, and upper layers moving to displace the underlying technology that was originally hidden."

Comments (124 posted)

New Books

PHP: The Good Parts and Ubuntu: Up and Running--New from O'Reilly

O'Reilly has released two books, PHP: The Good Parts and Ubuntu: Up and Running.

Full Story (comments: none)

Resources

Linux Foundation Newsletter: April 2010

The April edition of the Linux Foundation newsletter covers * The Linux Foundation Welcomes Eight New Corporate Members * Collaboration Summit Keynote Videos Available Now * LinuxCon Early Registration Deadline Closes This Friday * MeeGo Developer Community Grows as Ecosystem Support Broadens * Linux.com T-shirt Design Finalists Announced * We're Linux Winner Revealed * The Linux Foundation in the News * Upcoming Training Courses from The Linux Foundation.

Full Story (comments: none)

Education and Certification

This Year's Google Summer of Code Students Announced

Google has made an announcement that the list of students that have been accepted for this year's Summer of Code is now available.

Comments (none posted)

Calls for Presentations

Linux Plumbers Conference 2010 Call for Track Ideas

The 2010 Linux Plumbers Conference will be held in Cambridge, MA, from November 3 to 5. The LPC 2010 organizers are now looking for ideas for the tracks to be run at the event. If you have in mind a problem which could be improved by having plumbing-level developers get together and talk about it for a few hours, the program committee would like to hear from you. "For example, if in order to get better performance, we need to get better information about low-level devices from the kernel, and that needs to be utilized by file system utilities, and the user needs to be able to involved by exposing options at the UI level in control panels and distribution installers --- the Plumbers conference might be a great place to get everyone in the same room for half a day to solve this particular problem."

Comments (none posted)

OOoCon 2010 Call for Papers

The 2010 OOoCon will be held in Budapest, Hungary, from August 31 - September 3, 2010. "Whether you are a dedicated developer, a contributor of any measure, or just interested in the Project and its technology, such as the OpenDocument Format (ODF), we want to hear from you. Please note the conference language is English, and all presentations must be delivered in that language."

Full Story (comments: none)

Upcoming Events

PyCon Australia 2010 update

Registration is open for PyCon Australia to be held June 26-27, 2010 in Sydney. Click below for more information.

Full Story (comments: none)

First Middle East and Africa Open Source Software Technology Forum

The first Middle East and Africa Open Source Software Technology Forum (MEAOSSF) will be held June 15-16, 2010 in Cairo, Egypt. It will be followed by tutorials on June 17, 2010.

Comments (none posted)

Linux Audio Conference program is out

The program for the Linux Audio Conference (LAC2010) is now available. LAC2010 will be held May 1-4 in Utrecht, The Netherlands.

Comments (none posted)

Events: May 6, 2010 to July 5, 2010

The following event listing is taken from the LWN.net Calendar.

May 3-7: SambaXP 2010, Göttingen, Germany
May 3-6: Web 2.0 Expo San Francisco, San Francisco, CA, USA
May 6: NLUUG spring conference: System Administration, Ede, The Netherlands
May 7-9: Pycon Italy, Firenze, Italy
May 7-8: Professional IT Community Conference, New Brunswick, NJ, USA
May 10-14: Ubuntu Developer Summit, Brussels, Belgium
May 17-21: Fourth African Conference on FOSS and the Digital Commons, Accra, Ghana
May 18-21: PostgreSQL Conference for Users and Developers, Ottawa, Ontario, Canada
May 24-25: Netbook Summit, San Francisco, CA, USA
May 24-30: Plone Symposium East 2010, State College, PA, USA
May 24-26: DjangoCon Europe, Berlin, Germany
May 27-30: Libre Graphics Meeting, Brussels, Belgium
June 1-4: Open Source Bridge, Portland, Oregon, USA
June 3-4: Athens IT Security Conference, Athens, Greece
June 7-10: RailsConf 2010, Baltimore, MD, USA
June 7-9: German Perl Workshop 2010, Schorndorf, Germany
June 9-12: LinuxTag, Berlin, Germany
June 9-11: PyCon Asia Pacific 2010, Singapore
June 10-11: Mini-DebConf at LinuxTag 2010, Berlin, Germany
June 12-13: SouthEast Linux Fest, Spartanburg, SC, USA
June 15-16: Middle East and Africa Open Source Software Technology Forum, Cairo, Egypt
June 19: FOSSCon, Rochester, New York, USA
June 21-25: Semantic Technology Conference 2010, San Francisco, CA, USA
June 22-25: Red Hat Summit, Boston, USA
June 23-24: Open Source Data Center Conference 2010, Nuremberg, Germany
June 26-27: PyCon Australia, Sydney, Australia
June 28-July 3: SciPy 2010, Austin, TX, USA
July 1-4: Linux Vacation / Eastern Europe, Grodno, Belarus
July 3-10: Akademy, Tampere, Finland

If your event does not appear here, please tell us about it.

Event Reports

Location as Platform at Where 2.0

O'Reilly wraps up this year's Where 2.0 conference. "The Where 2.0 program navigated along the three tracks of mobile innovations, mapping for consumers, and local opportunities and models. Participants explored topics such as the map as an information ecology, the new meaning of mapping, government's role as a clearinghouse for data, open source mapping, local data, local search, augmented reality, location sharing, mobile cloud, game mechanics, and web vs. native apps."

Full Story (comments: none)

Web sites

Launch of new site: OpenSource-News.com

OpenSource-News.com is a new place for news about the great Open Source Projects on the internet. "Keep track of the news of your favorite project. Submit your own/favorite project. Read the news through our rss feeds, twitter and facebook."

Full Story (comments: none)

Audio and Video programs

Videos from the LF Collaboration Summit

The Linux Foundation has posted a set of videos from the first day of its recently-concluded Collaboration Summit. There are talks from Chris DiBona, Josh Berkus, Dan Fry, Imad Sousou, and Ari Jaaksi, as well as the kernel developer panel. Theora versions are available.

Comments (7 posted)

Miscellaneous

Results for the OpenOffice.org Community Council Elections

The results of the OpenOffice.org Community Council elections are available. "Please join me in welcoming Olivier [Hallot] and Eike [Rathke]! I have no doubt they are all quite familiar to you, and I and everyone else on the Council, hope that you will grow more familiar with them and the working of the Council. The time in the Council gives us the chance to develop a good relationship."

Full Story (comments: none)

New TestPilot Privacy Policy

Mozilla Labs has updated its privacy policy for Test Pilot to give users a 7-day window to save their data before it is deleted from their systems. "This Mozilla Test Pilot Privacy Policy explains how Mozilla Corporation, a wholly-owned subsidiary of the non-profit Mozilla Foundation, collects and uses information from user tests conducted through Mozilla's Test Pilot program. It does not apply to other Mozilla websites, products or services, each of which have their own privacy policy."

Full Story (comments: none)

Page editor: Rebecca Sobol


Copyright © 2010, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds