Puppet is an automated configuration management system intended to make life easier for system administrators; it can be seen as a competitor to venerable tools like cfengine. Over time, Puppet has attracted an active community of users and developers; it would appear to be a tool which is growing in capability and popularity. Puppet is managed by Reductive Labs, which has a clear commercial interest in providing training and support services for Puppet users.
Recently (January, 2009), a project named Chef announced its existence. Chef's developers, who have previously worked with the Puppet code, set out to solve a similar problem. Chef is not a fork of Puppet, though; it's a new project, developed from scratch. Among other things, the Chef developers decided to use Ruby as the configuration language and they chose the Apache License (Puppet, instead, is distributed under the GPL). This project claims to be in active, production use, but its community, at this point, is clearly small. As of this writing, the chef-dev mailing list shows a total of four messages over its entire history.
Initially, the Puppet developers responded confidently to the Chef announcement:
More recently, though, Puppet developer Luke Kanies posted to the project's user list that Chef wasn't competing entirely fairly:
My take is that if your participation in our community is *solely* for purposes of shrinking it by drawing people into your community at the expense of ours, then you should be kicked from our community.
In particular, it is said that one developer from the Chef project has been sending private mail to Puppet users - especially those experiencing problems with Puppet - suggesting that they should switch to Chef. Luke, clearly, sees this activity as a threat to his livelihood; every Puppet user who deserts is one less potential customer. Even without that incentive, though, it can be hard to stand by and watch as others try to woo users away from your project. One need only think back to the days when "Ubuntu is better" posts were a semi-regular feature of the Fedora mailing lists to see how galling it can be.
In this case, a cooler perspective quickly won out and it became clear that there was little to be done. If nothing else, the objectionable messages were private email; there is little that the project could do to stop them even if it wanted to. Beyond that, though, certain things are inherent in the running of a free software project, including:
Andrew Shafer summed up the situation nicely:
Projects which are focused on "awesome" tend, over the long term, to be rather more successful than projects which worry about what others might be saying about them. They are also likely to be more successful than projects which put their effort into trying to poach another project's users. Puppet appears to have good code and an active and engaged user community. If it can stay focused on that code and that community, this project need not fear what its competitors are doing.
(Thanks to Friedrich Clausen for calling our attention to this discussion).
In my last article on OpenStreetMap I looked at the recent mass imports of public data — everything from British oil wells to the entire road network for the United States. But for those interested in more than an alternative to Google Maps, the ability to extract or add data to the project is what really makes OpenStreetMap shine. Whether you want to get an SVG of a campus map or import a local government's database of every building in the city, Linux users will find plenty of tools that cater to their needs.
The export tab on the web site provides the simplest way to access data. Users can draw an area on the main map view and then grab an image (in PNG, JPEG, PDF, or PS formats); some HTML to embed the map into your web site; or the raw XML data. To further modify the data, either in the OpenStreetMap database or in a local copy (stored as an XML .osm file on your disk), download it using an editor like JOSM (the 'Java OpenStreetMap editor'). To make life easier when selecting the area to download, open up the preferences dialog and install the namefinder and slippy_map_chooser plugins.
Grabbing larger amounts of data would be difficult, slow and clumsy with these methods. More advanced users can get data directly through the API. Check the latitude and longitude coordinates for the area you want — an easy method for this is to use the export tab to draw an area, then note down the coordinates it records — then fire up wget or curl and download the data:
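For example, a bounding-box request against the 0.6 version of the API might look like the following (the coordinates here are arbitrary placeholders; substitute the values you noted from the export tab):

```shell
# Bounding box as left,bottom,right,top (longitude,latitude),
# copied from the export tab on the web site.
BBOX="-0.99,51.44,-0.96,51.46"

# Fetch the raw OSM XML for that area; wget works just as well.
curl -o map.osm "http://api.openstreetmap.org/api/0.6/map?bbox=$BBOX"
```

The result is a standard .osm file that can be opened directly in JOSM or fed to other tools.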
The main API only lets you grab 5,000 points per request; you have to page the request to get the additional data. To pull out a really large chunk of data, or to filter it (for example, to download just the pubs in a city), use the extended OSM API (XAPI, or 'zappy'). Access to really enormous amounts of data, such as the entire planet or a country, can be found in the frequently updated dumps listed on the Planet.osm wiki page.
Once you have the data there are all manner of uses - your GPS navigation device, rendering your own maps for the web or print, or converting the data into another standard GIS format with tools like the Ruby osmlib. The documentation for each tool varies enormously, but the toolchains tend to be relatively straightforward.
Of course, extracting data is only half the story. Not only should all good open source citizens be contributing back, but you will get the most value from the data if you collaborate with others in developing a rich data set that will lead to tools and use cases you can later replicate.
OpenStreetMap abounds with methods and tools for entering data. You might like the "old school" method of tracing a breadcrumb GPS trail — much more fun in the early days when I mapped much of Reading with some friends from a completely blank slate. Many mappers have traced basic road layouts and buildings from aerial imagery donated by Yahoo! so that others can go in and identify street names and points of interest. The main editing tools are Potlatch, a Flash interface on the main web site (just click on the 'Edit' tab once you're zoomed into your local area), and the previously-mentioned JOSM. The wiki has plenty of guidance.
When importing large sets of existing data, things get a little more complicated. The first step is to step back and have a good think. Imports can cause two kinds of headaches for other contributors if done wrong: you might put a load of new data over the top of somebody else's efforts and make a complete mess in the process; or worse, you might import data without proper permission, causing legal difficulties for the project and technical difficulties in taking the data back out again.
It's always best to begin by asking a few questions on the relevant mailing list; there are localized lists for many areas, a general (high traffic) "talk" list, and a "legal-talk" list for legal issues such as licensing for imports. It's especially important to avoid convenient interpretations of web site notices regarding copyright and database rights when deciding if you can import the data. You need to get written confirmation so that the OpenStreetMap project is immune from legal attacks. There are some nice general guidelines on the wiki, which are worth a read.
If you have data with written permission to use it, you can begin the import process. The first, and most laborious, step is to map out the data against standard OSM tags, as in this UK public transport example or this really comprehensive exercise for CanVec data. You'll notice that oftentimes source-specific data (like unique IDs for features and really niche data) is retained in a namespace like "CanVec:FID" and "naptan:StopAreaCode". This can also be useful where you don't want the data to appear until volunteers have gone through checking it against existing data in the database, for example to merge two bus stops (one crowdsourced, the other from the import).
For large chunks of data, importers have tended to write custom scripts to then bring the data in. If the data is in the OpenStreetMap format, and it is in a state suitable to go straight into the database, this bulk import script makes the process quick and painless. The Canvec2osm code shows how to pull in more complicated data; this converts 11 different shape files into themed osm files with correct tagging, which can then be worked into a suitable state for importing.
A more cautious approach can be appropriate in areas with a lot of existing data. One quite technically challenging route is to set up your own Web Map Service (WMS) using a tool like mapserver, and then set up the JOSM WMS plugin to pull those maps in as a layer underneath your map data so it can be traced. The Map Warper tool, currently in beta, tries to make this process easier. If the data is quite simple you could just put the source and editor side-by-side on your screen and use your judgment to copy over points of interest.
However you want to proceed, you're probably best off getting in touch with some local or more experienced community members. Interested people could even just lobby local government officers and public institutions to get the data, then pass it along to somebody with more of an appetite for the technical stage. Given 6 months to study, process, and import the data, you should find richly detailed maps and underlying data available under a Creative Commons BY-SA license; the license, incidentally, may soon change to one more suitable for databases. Whatever you do, just remember to have fun.
Software patents were rejected several years ago in the European Union (EU) and undermined last year by the Bilski case in the United States. Under these circumstances, what direction should anti-software patent activism take? Ciarán O'Riordan, the newly-appointed director of the End Software Patents (ESP) campaign, answers that now is the time to organize the arguments and legal documents used in the past so that they can be used to fight the next software patent battles around the world. This material might be useful not only in the EU and USA should the status of software patents change in either jurisdiction, but also in the rest of the world.
O'Riordan began his career as a software developer with a strong interest in free software. In fact, he has membership card #8 in the Free Software Foundation (FSF), which indicates that he was one of the first to take out membership when it was offered. Moving from Ireland to Brussels in 2003, he found night time work in a bar. Increasingly, however, he found his days being filled by lobbying members of the European parliament as the debate over whether to allow software patents in the EU intensified.
"It was very strange," O'Riordan recalls. "In Europe we had the habit of reading Slashdot, and reading about all the crazy patents in the USA, and we all had a good laugh. Then, very suddenly, we were faced with our own software patent problem."
At first, O'Riordan's lobbying was volunteer work, in which he was simply "looking for the most important thing to work on." However, several months before the European parliament rejected the idea of software patents, he was hired as a lobbyist by Free Software Foundation Europe (FSFE), a separate organization from the FSF.
After the vote in parliament, he continued to lobby for FSFE whenever an issue emerged. The work, he says, "was very interesting and very important, and I found it wasn't very difficult. There was a bit of a power vacuum in the European Parliament, because people in Europe are not very interested in European politics. So when I asked politicians if I could talk to them, they were very available. So I was able to talk to various politicians, and I was able to get deeply involved in the topic, despite not having a background in patents."
Recently, O'Riordan has been studying law at Facultés universitaires Saint-Louis in Brussels and taking a leave of absence from his FSFE work. But when offered the position at ESP by the FSF, the campaign's major sponsor, he jumped at it. "Since it's a legal topic and the FSF is a good institution, I decided to give it a try," he says.
As the new director, O'Riordan replaces Ben Klemens, who was hired in November 2007 when ESP was first organized, and quietly departed in spring 2008 after preparing an amicus curiae brief in the Bilski hearing. "When the Bilski case was over, there wasn't a similar case in sight, so I guess that at that point he decided to move on," O'Riordan says, although he has yet to talk to Klemens directly.
O'Riordan now refers to Klemens's time as director as "the first phase" of ESP. In discussing the directions in which he might take the campaign, O'Riordan concluded that "in the next phase it would be a good idea to document what happened in the EU before all the documents completely disappear, and then do the same for the Bilski case. The Bilski case did its job in terms of influencing the court's decision, but it can also do a second job of aiding people all around the world who are working on similar projects. It seemed that an obvious Phase 2 would be to move from the specific to the general, and try to turn the previous campaigns into a base for future campaigns."
O'Riordan argues that such cataloging is badly needed:
We have great documents that were published by Germany's monopoly commission, and we have economic studies published by universities in The Netherlands that were approved by the government. We have a lot of documents that people don't seem to know about. And when you're looking at the anti-software patents websites around the world, how could people know what's on these sites? There's dozens of websites, and some of them have changed names, and some of them have broken links now. It really is scattered. Considering the situation, he concludes that the contributions that ESP could make by adding more arguments "isn't as great as the contributions that could be made by assembling the arguments and cataloging the work that's already been done."
Admittedly, law can differ greatly between jurisdictions. All the same, O'Riordan suggests that ESP's new direction will be useful because most laws that concern software patents are based on international treaties. In Europe, for instance, most countries' patent laws are specific implementations of the European Patent Convention.
Similarly, given that patent law in most countries is often written ambiguously — it often pre-dates software — and is ill-equipped to deal with it, interpretation is essential. Most of the time, O'Riordan observes, interpretation is based on the question "'how do we harmonize with the rest of the world?'" — which, given the historical American dominance in trade, usually means "'how do we harmonize with the USA?'"
Even when laws and circumstances differ, O'Riordan adds, a global viewpoint can put matters into perspective. For example, the tendency of small and medium-sized American companies to support software patents — perhaps because they "are afraid of angering their mega-corporation business partners" — might be countered by pointing out that "small and medium enterprises aren't using software patents in Europe, Canada, or Australia. If we can build a picture from other countries, sometimes that can fill the gaps in the argument in one country like the USA."
In addition, O'Riordan hopes that ESP can provide a more accurate perspective. For instance, during the campaign in Europe, the fact that 77% of software patent applications in the EU were by American companies caused some observers to view the issue of software patents as a matter of American domination. However, if you take ESP's estimate on its home page that software patents cost the United States $11.2 billion, then you can establish that "it's not a case of one country taking over the world; it's a cost to everyone, and it's slowing down innovation. A lot of these arguments are actually improved by putting all the information together."
To help with Phase 2, O'Riordan plans to extend ESP's repository of information beyond the United States and Europe through a wiki that should be ready in the next few weeks.
"The first thing will be to find out what's happening already in people's countries," O'Riordan says. "For example, in the Philippines, does the patent office give out software patents? Well, I don't know. Who can I ask? So, in some cases, we're going to document what's not known, or at least which people, legal authorities, or organizations we know in an area. We'll start with that and, when we have time to dig into each jurisdiction, we can start asking them questions."
As O'Riordan points out, there is no way of knowing beforehand what information might be found:
Other content for the ESP site might include advice about how to conduct a campaign and lobby politicians. "There are certain ways to talk to politicians," O'Riordan says. "They like hearing about studies, and they like hearing about legislation, legal wordings, and comparisons between other countries. They like hearing about these things, but, if people start without having these resources, then sometimes they can get off on the wrong foot."
O'Riordan also points out that politicians are not just a source of support, but also of advice about how to conduct a campaign. For example, in his own lobbying, the Green Party's explanation of how the European legislature worked was as important as the eventual votes of its members.
O'Riordan does not rule out ESP's involvement in specific campaigns. Recently, for instance, O'Riordan and other activists distributed a one-page letter about Microsoft's patent case against TomTom at the company's Innovation and Growth Day in Brussels. "This is just a small way to keep the topic alive and always remind everyone that there are people against software patents."
However, ESP's main focus for now will remain education and gathering of information. Although the issue of software patents is relatively quiet now, O'Riordan does not assume that it will remain so. "The European Commission [the EU executive] will change in November, and the European patent office is having a consultation about this topic, so there's a chance that the topic will come back on the table. There's also a small chance that the [American] Supreme Court will review the Bilski decision. So now is a good time to take stock and to prepare for possible new campaigns."
Moving into Phase 2, O'Riordan counts on the support of the free software community. "The free software community tends to understand these issues very quickly, so it's very useful, because these people get active a lot easier than people who are new to the topics of freedom and software." At the same time, though, he stresses that ESP is not directly connected to the FSF, nor aimed only at free software users. The goal of ESP, O'Riordan says, is "to build a real coalition, to really convince the politicians that this is something that affects everyone — every computer user, and every business." And, for now, the best way to reach this goal, according to O'Riordan, is to prepare the ammunition for the next campaign.
Page editor: Jonathan Corbet
Copyright © 2009, Eklektix, Inc.