
Leading items

BBC opens a little more content for Linux

November 18, 2008

This article was contributed by Tom Chance.

The British Broadcasting Corporation (BBC) has long dabbled with free software, starting a number of new projects and opening content via their backstage developer network. Now they've announced a bold new step forward, releasing an experimental service—initially just for Linux users—with open access to some multimedia content, which has already spun out in unexpected ways.

The BBC's Research and Innovation team followed a fairly conventional commissioning process for this experiment. Having identified the feature—help existing content to "surface" in multimedia applications, so users don't need to browse around the web site—they went on to find the right approach. George Wright and his team settled on integrating BBC content into the Totem media player with Canonical, aiming to get a first version out with the recent Intrepid release. Things then moved quickly. Discussions with the company contracted to do the Totem work (Collabora) started in spring 2008, although according to Christian Schaller from Collabora "it was probably around July things got concrete". Over a few autumn months the work was completed, opening up a large number of radio shows to Ubuntu users worldwide (although much of the content is restricted to the UK, since it is UK residents who pay the TV license that funds the BBC).

This great new feature, exclusive to Ubuntu, was promoted in the Intrepid press release but received little attention in the media. Given that it still only delivers a fraction of the content you can get through iPlayer (proprietary Windows software full of DRM technology) this is hardly surprising. That you can stream Dirac-encoded videos released under Creative Commons licenses is obviously still a bit geeky for most.

But that doesn't stop free software developers. Barely days after the Totem announcement, Nikolaj Hald Nielsen wrote a script to neatly integrate the content in Amarok 2.0. As a core Amarok developer his main motivation was familiar: "I wanted to inspire other people to write similar scripts for Amarok 2, and I think it is important to have some good example scripts ready when Amarok 2.0.0 final is released." I've been watching the Amarok 2 betas come along, and having given the "get more features" dialogs in KDE a miss over the past few years, I was pleasantly surprised at how well this worked. You just go to the script manager, click to get some more scripts, install the BBC script and—like magic—you get all the BBC content in the "internet" tab on the left.

Wright's team did all the hard low-level work to make this kind of adaptation straightforward. The Amarok script has delighted Wright, who is a long-time Amarok user; they've even been in touch with Nielsen to see how they can help improve the integration.

The question everyone wants an answer to is: will this ever match iPlayer for content range? Wright's team have a fairly wide remit, but they're not in charge of releasing content, so this is unlikely to change the Corporation's attitude towards DRM overnight. According to Wright, the content teams have given great feedback, but over the past five years we've seen promises of an open Creative Archive wither away, with a consumer-facing focus on proprietary products like iPlayer. Truly open content from the BBC, or even the volume of copyrighted-but-available archives released by National Public Radio (NPR) in the US (also integrated into Amarok), is probably still a long way off.

This new service is strictly experimental; as Wright says, "it's a way to experiment with distribution platforms and free software." They've also learned a lot more about developing in a free software community; although many of them have been Linux users for years, this was a first for them. Working to the feature freezes for Gnome and Ubuntu Intrepid meant the UI isn't as nice as they might have hoped, but it's a great start.

The open service is here to stay. They're not sure if they'll keep developing the Totem feature and patching against mainline in Ubuntu or Totem; time will tell. More work between Collabora, the BBC, and Canonical is also uncertain. But, since the code is all open, we can definitely expect the Totem and Amarok features to be maintained. We can also look forward to more open content integrated into free desktops in the future in a way that is extremely difficult to do with proprietary platforms.

Comments (10 posted)

NLnet Foundation seeks projects to fund

By Jake Edge
November 19, 2008

A little-known organization—at least outside of its native home in the Netherlands—has quietly been funding various free software projects to the tune of roughly €2.5 million a year. Most of those projects have been in the Netherlands or Europe, but it is looking to expand its reach to the rest of the world. It is "actively encouraging" submissions of funding proposals for projects that involve network technology and will be released as open source, according to NLnet Foundation Director Valer Mischenko.

The Foundation grew out of the Netherlands' first internet provider, NLnet, which laid the original backbone along the rails in that country. In 1998, it was sold to UUNet and the proceeds were invested into the Foundation. The intent of the money was to fund technology, particularly internet technology. Because the internet depends on interoperability, it just makes sense to require projects that are funded to release their code, Mischenko says.

The Foundation prides itself on being quick to answer requests for funding, as there are "not too many bureaucratic layers" to the organization. Projects that seek government funding often fall behind because it takes so much time and effort to secure a grant that the technology may well have moved on by the time it arrives. Depending on the size of the project, and the amount of funding required, answers can come as quickly as just a few weeks.

Each year, two themes are chosen to focus on so that projects in those areas get priority for funding. For 2008, those themes are "Identity, Privacy, and Presence" and "Open Document Format" (ODF). While ODF is not directly connected to network technology, the internet will be a poorer place without open formats that can be freely shared.

Part of the ODF effort was helping governments understand the importance of open formats in general and ODF in particular. One of the outcomes of that work was that all agencies in the Netherlands must start using open formats or justify why they cannot.

The ODF theme is just one area where the Foundation has broadly interpreted its mission. It has helped fund the FSF Europe (FSFE) Freedom Task Force project for several years. In addition, it provided €200,000 to help pay for Eben Moglen's time to work on GPLv3 at the FSF. Mischenko notes that it is important for the foundation to fund things that will help "protect the network"; he and the board see these efforts as important in that regard.

The bulk of funding this year has gone into the Identity, Privacy, and Presence theme. A list of the currently funded projects has a number of interesting entries from support for Tor hidden services and an improved routing algorithm for GNUnet to hardware projects such as RFID Guardian and e-Passport.

The current structure of funding is made up of four "layers", each corresponding to how much the Foundation will provide and for how long. The first layer is for things like funding trips for developers and other community members to attend conferences and the like. The second layer is for commitments of up to €30,000. Currently around 15% of proposals for second layer funding are granted.

For larger projects, the third layer can provide 2-4 years of funding of up to €500,000-600,000 per year. The fourth layer projects are currently fixed for the next five years, as the Foundation is funding DNSSEC work at NLnet Labs as well as work on intelligent agents at Vrije Universiteit Amsterdam.

Mischenko said that the board is "willing to hear about ideas that don't fit into the layers". He said that the Foundation will continue its current funding model "unless we hear a great world-changing idea that we put all our money in and then we are gone". It is not just projects that can be funded by the Foundation; any person, company, or organization can apply. "As long as it is a network technology and it will be put in open source", the Foundation will consider funding it.

[ Along those lines, the author would like to thank the NLnet Foundation for helping to fund his recent trip to the co-located NLUUG autumn Mobility conference and Embedded Linux Conference Europe in Ede, the Netherlands. ]

Comments (3 posted)

MinGW and why Linux users should care

By Jonathan Corbet
November 19, 2008

The Minimalist GNU for Windows (MinGW) project is a way to get GCC and tools like binutils working to build software for the Windows environment—something that might not sound very interesting to Linux users or developers. But there are a number of advantages to porting and regularly testing free software on Windows, as Red Hat's Richard Jones and Dan Berrange explain in the following interview. Richard and Dan also describe Red Hat's involvement, how developers can participate, as well as how it all helps the free software cause.

LWN: Could you describe the MinGW project? How did it get started?

Richard: For some time I have been making Windows builds of libvirt available and, frankly, it was a real chore. I needed a Windows virtual machine to do it. But Windows is so frustrating to use and maintain: it doesn't come with any of the tools such as shells or version control that we are used to, and because I was only doing builds once a month or so I'd go back to it and find something had gone wrong that would require maintenance or even reinstallation.

During this time, we didn't routinely build libvirt for Windows. New code would inevitably break something. I had to fix things on Windows, then copy the code back to Linux and check that my fixes didn't break the Linux build, then come up with a patch, and all of this was complicated by the fundamental incompatibility of Windows with the rest of the world -- even simply copying code back and forth is irritatingly difficult when one machine is a Windows machine. (There's no ssh or scp or tar, files get executable bits set or have CRLF line endings, etc.)

At the same time we were getting a strong demand for the rest of our virt tools on Windows. Enough was enough. We decided that the only way to deal with this was to remove Windows from the equation. We wanted to build and test libvirt and the virt tools for Windows routinely (daily or more often), from the Fedora host, using the normal development environment. The way to do this is through cross-compilation (the Fedora MinGW project) and testing under emulation (Wine).

Debian & Ubuntu have been shipping the MinGW cross-compiler for quite a while, but it's important to say that the cross-compiler itself is the easy bit. The hard part about this project is the 50+ libraries and development tools that we ship and maintain alongside. Without those, just having the cross-compiler is fairly useless.

Dan: The libvirt project started a few years ago to provide an API for managing Xen virtualization hosts. Initially it was just a locally accessed C library, but over time the project expanded in scope to allow remote RPC access to the management APIs, and to cover other virtualization technologies like QEMU, KVM, OpenVZ, LXC (native Linux containers) & User-Mode Linux. Shortly after we added support for RPC, a number of community members expressed an interest in using the client side from the Windows platform to manage their Unix hosts. Periodically people would contribute patches to make libvirt build on Windows, but soon after they were applied, new unrelated work would break the Windows build again.

It became clear that if the libvirt community was to officially support building a Windows client, then all developers needed to be able to easily test builds for Windows. The obvious stumbling block here is that most of our community developers do not use or even own Windows machines for testing. The MinGW project provides a cross compiler toolchain and stubs for the Win32 APIs to allow building of Windows executables and DLLs from a Linux host. Add in WINE and you can also run your cross-compiled build. MinGW and WINE are completely open source, so we can provide a very good level of support without ever having to purchase a Windows license or leave our primary Linux development environment.

We are not the first people to see the value in MinGW for supporting Windows platforms in open source software. Prior to the start of the Fedora MinGW effort, Fedora developers would have to build all the cross compilers & libraries themselves. This is not particularly hard, but it is a lot of wasted effort to have everyone duplicating the work. Providing the MinGW compiler toolchain, and important libraries such as libxml, gnutls, libpng, libjpeg, GLib, GTK, etc directly in the Fedora repositories enables developers to focus on their own code, rather than the cross-compilers.

LWN: What is Red Hat's involvement in MinGW?

Richard: Dan and I work for a Red Hat group responsible for fostering the development of new tools and technologies. We have an eye to productisation and I spend quite a lot of time going to customer conferences and asking them what they want to see, but as for whether MinGW will make it into some future supported Red Hat product I cannot say.

Dan: Red Hat initiated development on the libvirt project and supports its ongoing evolution with significant developer resources. Red Hat wants the libvirt project to be the de facto standard for managing virtualization hosts, and the project community members want Windows to be a supported client platform. The work we are doing on the MinGW project in Fedora is thus a response to demand from the libvirt community for better Windows support in our releases. It is just a small part of our day job, alongside major libvirt feature development for Linux systems and in particular KVM & Xen.

LWN: Why does Red Hat care? Are you going into the Windows software business now?

Richard: Red Hat certainly cares about libvirt, and making libvirt available on the widest range of platforms. The alternatives to libvirt are interfaces like XenAPI and VMWare's APIs, which lock customers into proprietary technologies. Any way we can make it easier to provide open APIs and open source software even on closed platforms like Windows is a win for Red Hat, the Linux community, and even for Windows users.

Dan: As Richard says, this effort isn't about any particular Red Hat product. It is a community focused effort to address demand from libvirt users for better Windows client support. People are interested in open source virtualization technology like Xen and KVM, as an alternative to closed source solutions. Open source exists in a heterogeneous world though, and even if someone decides to migrate their servers to virtual machines on a Linux KVM host, they may still need to manage these servers from a Windows desktop. The MinGW project helps us maintain a reliable client build for the Windows platform, and thus lets a broader spectrum of users take advantage of open source virtualization technology. Growing the size of the libvirt community, and encouraging use of virtualization is what is important to Red Hat, and the MinGW project is one small part of that effort.

LWN: Why should free software developers care about MinGW? Does it do anything for them?

Richard: There's been some opposition, along the lines of "why are we helping Windows?". IMHO people who say that are ignoring both history and reality. First the history bit: the GNU project started off as a set of better compilers and command-line tools for the proprietary Unix systems of the day. I remember before Linux was around that you'd get some horrible system like HP-UX or (in my case) OS-9, and the first thing you would do would be to install all the GNU tools. Without real GNU grep, make, awk, bash, those systems were less than useful. Eventually when GNU got a kernel (Linux) we moved over to that system because it came with all the good tools.

Second the reality bit: Windows users are locked into proprietary applications and file formats, everything from Photoshop to QuickBooks to MSN to Illustrator. No Windows user can switch without first switching all their applications, which is going to be a very long transition process. Therefore we need a way to enable the developers of Gimp, GnuCash, Pidgin, Inkscape (to pick four out of hundreds) to easily build and test their software for Windows, so they can ship their software for Windows, respond easily to bug reports, and break that proprietary lock-in. Fedora MinGW does this - in fact we already used our compiler and huge chain of libraries to port Inkscape.

Dan: The libvirt project started off with a strong Linux focus due to our immediate needs for a management API for Xen in Fedora and later RHEL-5. Over time the community has contributed patches to improve our portability to non-Linux platforms, in particular Solaris and more recently Windows. While Red Hat's focus is on Linux, enabling portability to other platforms is important because it grows the size of your developer community. Every significant open source project has a huge wishlist of features and nowhere near enough developers and testers to address them all. Cross-platform portability enlarges the pool of potential contributors. They may initially only send minor patches to fix portability bugs for Windows, but over time they can end up working on major new features that benefit every platform.

Another thing we've found in porting to other platforms, is that it can generally improve the quality of the codebase. Different compilers and runtime environments expose different bugs in an application. The more combinations you can regularly build & test on, the better the overall quality of your code.

LWN: Is there anything in particular that developers should keep in mind to make life easier for people building their code for MinGW?

Richard: My pet list would be:

  • Don't write your own build system. Use autoconf/automake/libtool or cmake. That's not to say I'm a great fan of autoconf, but these really do make cross-compilation almost trivial.

    Autoconf-based programs can generally be cross-compiled by doing:

         yum install mingw32-*
         ./configure --host=i686-pc-mingw32
         make
    
  • Don't try to run executables during the build phase. It doesn't work when you're cross-compiling.
  • Do use pkg-config. And if you can't use pkg-config, then make sure your *-config program is a shell script, not a binary.
  • Do use common, portable libraries such as glib, gtk, libvirt or any of our other libraries.
  • Please use Fedora MinGW to routinely cross-compile your own code for Windows.
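The pkg-config point in Richard's list deserves a concrete sketch: in a cross build, pkg-config must describe the Windows libraries in the MinGW sysroot, not the native ones in /usr/lib. The sysroot path below is the one used by Fedora's MinGW packages and is an assumption on other distributions.

```shell
# Point pkg-config at the cross-compiled .pc files only; without this,
# configure would happily pick up native Linux glib/gtk and the link
# step would fail.
export PKG_CONFIG_LIBDIR=/usr/i686-pc-mingw32/sys-root/mingw/lib/pkgconfig

# configure now resolves dependencies against the Win32 builds
./configure --host=i686-pc-mingw32
make
```

This is also why a binary `*-config` program breaks cross-compilation: a Linux host cannot run a Windows `foo-config.exe` to ask it for flags, whereas a shell script works on either side.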

Dan: I have been pleasantly surprised at just how easy it has been to build many open source libraries with MinGW. Despite almost universal dislike for autotools, the applications which use autotools have been some of the easiest to port, particularly when it comes to building DLLs. The apps with home-brewed build systems have been much more involved. I definitely echo Richard's suggestion to stick to a broadly supported build system like autotools or cmake.

Any project which is serious about enabling support for Windows in their releases should make sure they are running regular automated builds & tests of their codebase. This is actually just good sense for any software engineering project, regardless of whether Windows support is desired; it just happens to be particularly useful for catching otherwise unnoticed regressions in configurations that developers rarely test on a day-to-day basis.
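An automated cross-build of the kind Dan recommends can be as simple as a script run from cron. Everything below is illustrative: the paths, the project name, and the assumption that the project uses git and an autotools `autogen.sh` are all placeholders, not a description of libvirt's actual infrastructure.

```shell
#!/bin/sh
# Hypothetical nightly MinGW cross-build; adjust paths and VCS
# commands for a real project.
set -e

cd /srv/builds/myproject          # placeholder checkout location
git pull --quiet                  # assumes the project is in git

# Regenerate and configure for the MinGW target
./autogen.sh --host=i686-pc-mingw32
make clean
make

# If Wine is registered with binfmt_misc, "make check" can run the
# cross-compiled test binaries transparently under Wine
make check
```

Wiring the script's exit status into mail or an IRC bot turns it into a minimal continuous-integration loop: a commit that breaks the Windows build is flagged the next morning rather than months later.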

If you are not using a support library like GLib, Qt, or NSPR (which provides a degree of cross-platform portability), then seriously consider making use of Gnulib. This is a library of code which you can drop into an application, fixing POSIX API portability problems on various platforms. As an example, it replaces Winsock's socket() call so it returns real file descriptors that you can use in both read() and recvfrom(). It can't fix all problems - such as the lack of fork()/exec() on Windows - but if your application or library is written against POSIX, using Gnulib will significantly improve your portability across all Linux, UNIX and Windows platforms.
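Gnulib is imported per-module rather than linked as a normal library. A minimal sketch, assuming an autotools project and using gnulib's `socket` module as the example (the checkout location is arbitrary):

```shell
# Fetch gnulib itself; it is used as a tool, not installed system-wide
git clone git://git.savannah.gnu.org/gnulib.git ~/src/gnulib

# From the top of your own project, pull in the socket replacement.
# gnulib-tool copies the needed source into lib/ and autoconf macros
# into m4/, and prints the configure.ac/Makefile.am snippets to add.
cd ~/src/myproject               # placeholder project directory
~/src/gnulib/gnulib-tool --import socket
```

After the import, code written against the POSIX socket API builds on MinGW as well, with gnulib papering over the Winsock differences behind the scenes.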

LWN: What are the biggest challenges that your project faces now? How can the community help?

Richard: Scaling the project is a big challenge. Red Hat dedicates quite limited resources to this project. The only way we can scale it is if the application developers themselves start to use our tools to build and maintain their own programs. I would like to see everyone who has an important Linux app or library start building and shipping for Windows routinely. Bringing open APIs, apps and file formats to Windows users is important: It's important to Windows users because it breaks their lock-in and makes switching to a fully free platform easier down the road. It's important for you, because your potential audience of users will increase by a factor of 10 or 20.

Dan: Spreading the package maintenance job across a larger number of Fedora members is an important task. There is a limit to how many packages a single person can do a good job at maintaining. To make it manageable we track & pull patches from the native builds to the MinGW cross-compiled builds of common packages. Ultimately we still need more package maintainers to look after the cross-compiled builds.

There are some core pieces of the open source ecosystem which do not work, or are not fully portable, in a Win32 environment. The most obvious is D-Bus, which is used by an ever increasing number of apps for local RPC. There have been a number of efforts to port D-Bus, but none has ever been completely finished and merged into the official releases.

LWN: Anything else you'd like to say to LWN readers?

Richard: Get involved.

Dan: Cross platform portability is often beneficial to your project even if you personally only care about its use in Linux. In the libvirt case it is opening up use of libvirt & virtualization to a set of users who have only ever had access to closed source virtualization technology. Portability broadens the pool of potential contributors to your project. Open source developers on the various BSDs, OpenSolaris, and Windows all have the potential to make valuable contributions to your project.

[ We would like to thank Richard and Dan for taking time to answer our questions. ]

Comments (64 posted)

Page editor: Jake Edge


Copyright © 2008, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds