University of Waterloo Human-Computer Interaction (HCI) researcher
Michael Terry always has intriguing work on display when he comes to the Libre Graphics Meeting. Two years ago, it was ingimp, a GIMP derivative that collects real-time usability data, and Textured Agreements, a study that showed how semantic markup and illustration dramatically increase the percentage of users that read and understand end-user license agreements and other click-through "legalese."
2011 was no different. Terry and his students presented two talks that could change the way open source developers work. The first was an analysis of real-world frequently-asked-questions mined through the Google Suggest search auto-completion feature. The second was another version of GIMP that — drawing on ingimp results and the Google Suggest analysis — presents a task-oriented interface, where users choose a task set to work on, and the tool palette morphs to fit the job at hand. The result is less confusing for new users, and because the set of supported tasks is customizable, it can grow through community contributions and individual personalization.
How do I ...
The first talk, entitled Quick and Dirty Usability: Leveraging Google Suggest to Instantly Know Your Users, detailed an analysis project led by PhD candidate Adam Fourney. Fourney and his collaborators started with what many Google users already know: by typing a partial search query into Google Suggest, one can see the most popular complete search queries entered by Google users at large. Although this is often done for comedic purposes, the researchers instead scripted the process and automated search-query prospecting runs for Firefox, GIMP, Inkscape, and a small set of other open source applications, using an assortment of "how to" phrases and re-wording tricks to grab the broadest set of results.
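Fourney did not release his harvesting code (see below), but the general shape of such a script is easy to imagine. The sketch below is purely hypothetical: the suggestqueries.google.com endpoint, its client=firefox JSON response shape, and the stem list are assumptions for illustration, not the researchers' actual setup.

```python
import json
from urllib.parse import quote
from urllib.request import urlopen

def expand_prefixes(app, stems):
    """Pair an application name with "how to"-style stems, both ways."""
    prefixes = []
    for stem in stems:
        prefixes.append(f"{stem} {app}")    # e.g. "how to gimp"
        prefixes.append(f"{app} {stem}")    # e.g. "gimp how to"
    return prefixes

def fetch_suggestions(prefix):
    """Ask the (assumed) suggest endpoint for completions of a prefix."""
    url = ("https://suggestqueries.google.com/complete/search"
           "?client=firefox&q=" + quote(prefix))
    with urlopen(url) as resp:
        data = json.load(resp)    # assumed shape: [prefix, [completions]]
    return data[1]
```

A prospecting run would then loop expand_prefixes() over each application under study and feed every prefix to fetch_suggestions(), logging the completions for later classification.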
After collecting data for three months, the team analyzed and classified the results. Additional Google tools such as Google Insights and AdWords helped to turn the raw query results into a usable data set. The queries break down into six basic phrase categories (e.g., imperatives like "gimp rotate text" versus questions like "how to draw a line in gimp"), six "intents" (e.g., troubleshooting problems versus seeking instructions), and in many cases related words allow certain queries to be correlated (e.g., "bring the window back" and "lost my window").
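As a rough illustration of the classification step, a few regular expressions can already sort queries into phrase categories. The category names and patterns below are invented and far cruder than the paper's actual six-way taxonomy:

```python
import re

# Toy phrase-category classifier; categories and regexes are
# illustrative guesses, not the paper's taxonomy.
PATTERNS = [
    ("question",   re.compile(r"^(how|what|why|where|can)\b")),
    ("imperative", re.compile(r"^\w+ (rotate|draw|crop|resize|remove)\b")),
]

def classify(query):
    """Return the first matching phrase category, or "other"."""
    q = query.lower()
    for name, pattern in PATTERNS:
        if pattern.search(q):
            return name
    return "other"
```

With these patterns, "how to draw a line in gimp" lands in the question category while "gimp rotate text" is classified as an imperative.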
Analyzing the data may make for fascinating reading, but the researchers jumped straight to the potential practical applications for open source software projects. Among the findings that a Google Suggest query-study can uncover for a project are where the users' terminology and the project's diverge, functionality desired by the user base that might not be on the project's radar, and usability problems.
In the first category, the researchers observed that GIMP users frequently search for help making a picture "black-and-white." In reality, of course, the users are most likely not interested in a binary, two-color result: they want a grayscale image. Thus the project can help lead the user in the right direction by including "black-and-white" in the relevant tutorials and built-in help. The second and third categories are probably more self-explanatory: if users are consistently looking for help with a specific topic or error dialog message, it is simple to move from that knowledge to a feature request or bug.
Harvesting Google search results itself is not a new idea, but the talk (and the paper from which it originates) does an excellent job of explaining how to take a simple query and systematically glean useful data from it. An audience member asked Fourney if he planned to release his query-harvesting code; he declined on the grounds that Google's terms-of-use do not explicitly address using Google Suggest for this type of data mining, and he fears that publishing the code could lead to an obfuscation campaign from the search engine. Nevertheless, he said, the actual code was simple, and anyone with scripting experience could duplicate it in short order. The real genius, of course, comes in correctly interpreting the results in order to improve one's application.
A prime example of how developers could use search query analysis was on display in the second talk, Introducing AdaptableGIMP. AdaptableGIMP is built on top of GIMP 2.6 (the current stable release), and seeks to add a flexible, new-user-friendly interface to the image editor. Binary packages are available from the project for Windows and 32-bit Linux systems. The Linux builds are provided as Debian packages: one for AdaptableGIMP itself, and replacements for the standard gimp-data and libgimp2.0. Source tarballs and a Github repository are also provided for those who would prefer to install from source.
AdaptableGIMP does not remove any functionality from upstream GIMP. Rather, it replaces the standard toolbox palette, and links in to a web-based archive of "task sets," each of which loads a customized tool palette of its own. When you are not working with one of the loadable task sets, however, you can switch back to the default toolbox with its grid of tools. The custom palettes presented by each task set give you an ordered list of buttons to click that steps you through the process of performing the task at hand. It's a bit like having a built-in tutorial; no more getting lost or searching through the menus and tool-tips one by one.
At launch time, AdaptableGIMP asks you to log in with an AdaptableGIMP
"user account." This is optional, and is designed to tie in to the
project's wiki service (as well as to allow multiple users on a shared
system to maintain distinct identities, a feature that is probably more useful to Windows households). At the top of AdaptableGIMP's toolbox is a search box, into which you can type the name of a particular task or keywords. As you type, an overlay presents live search results retrieved from the AdaptableGIMP task set wiki. Each has a title and a short description; if the pop-up results are not helping, you can also launch a search dialog window that offers more information, including a preview of the steps involved in the task and the associated wiki page.
Whenever you select a task, it loads a custom list of commands into the toolbox. This is a bit like loading a recorded macro, except that the steps are not executed automatically; you perform each one in sequence. An "information" button next to the task title opens up a pop-up window explaining it in more detail. For example, the "Convert a Picture to a Sketch" task set involves four steps: converting the image to grayscale, adjusting the contrast in the Levels dialog, applying the Sobel filter, and inverting the result.
Each step is represented by a button on the task-set toolbox; as you click through them, the buttons take on a "pressed" look. It helps to have the Undo History palette open, because clicking a button again does not undo the completed step. In any event, clicking a button opens up the correct tool, filter, or adjustment window, pre-loaded with the correct settings (if applicable), but clicking does not automatically execute every step, because some require input — selecting the right portion of the image, for example.
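To make the four steps concrete, here is what they amount to computationally, sketched in plain Python on a tiny grayscale image represented as a list of rows of 0-255 values. This is an illustration of the image operations, not AdaptableGIMP's code, and the level endpoints 16 and 240 are arbitrary:

```python
def stretch_levels(img, lo, hi):
    """Levels-style contrast stretch: map [lo, hi] onto [0, 255]."""
    span = hi - lo
    return [[max(0, min(255, (p - lo) * 255 // span)) for p in row]
            for row in img]

def sobel_magnitude(img):
    """Sobel edge strength for interior pixels (border left at 0)."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = (img[y-1][x+1] + 2*img[y][x+1] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y][x-1] - img[y+1][x-1])
            gy = (img[y+1][x-1] + 2*img[y+1][x] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y-1][x] - img[y-1][x+1])
            out[y][x] = min(255, int((gx*gx + gy*gy) ** 0.5))
    return out

def invert(img):
    return [[255 - p for p in row] for row in img]

def picture_to_sketch(img):
    """Grayscale input assumed; apply levels, Sobel, then invert."""
    return invert(sobel_magnitude(stretch_levels(img, 16, 240)))
```

A flat region comes out white and edges come out dark, which is exactly the pencil-sketch effect the task set produces.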
Looking beyond the individual task sets, AdaptableGIMP allows the user
to personalize their experience by downloading local copies of frequently
used task sets. The interface shows the last-update-date of each task set,
which allows the user to compare it to the public version and retrieve updates. The task sets are stored on the project's wiki in XML. You can create your own task set on the wiki by using the built-in editor, but you can also create it or edit an existing task set within AdaptableGIMP, simply by clicking on the "Edit commands" button.
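The article does not reproduce the wiki's XML format, so the schema below is invented for illustration (the element names and GIMP action identifiers are guesses). Parsing such a file into an ordered command list takes only a few lines of standard-library code:

```python
import xml.etree.ElementTree as ET

# Hypothetical task-set markup; the real AdaptableGIMP schema and
# action identifiers may differ.
TASKSET_XML = """\
<taskset name="Convert a Picture to a Sketch">
  <command label="Convert to grayscale" action="image-convert-grayscale"/>
  <command label="Adjust the contrast" action="tools-levels"/>
  <command label="Apply the Sobel filter" action="plug-in-sobel"/>
  <command label="Invert the result" action="image-invert"/>
</taskset>
"""

def load_taskset(xml_text):
    """Return the task-set name and its ordered (label, action) steps."""
    root = ET.fromstring(xml_text)
    steps = [(c.get("label"), c.get("action"))
             for c in root.findall("command")]
    return root.get("name"), steps
```

Each (label, action) pair would become one button on the task-set toolbox, in wiki order.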
On the down side, the AdaptableGIMP team admits it is not experienced at building .debs — and even appeals for help with the packaging. Currently the Debian packages weigh in at 16.7MB, and installing them is less than trivial. The AdaptableGIMP package conflicts with standard GIMP 2.6 and will not install until GIMP is removed; the gimp-data and libgimp2.0 packages do install, but they break GIMP 2.6 as a result. Fortunately, all are easily removed and replaced with standard GIMP once testing is complete. Hopefully someone more experienced with packaging will lend a hand, because AdaptableGIMP is an interesting package that distributions may actually want to consider offering.
AdaptableGIMP's launch-time selection of public task sets came from the results of the Google search query study discussed above, plus the ongoing data collection provided by ingimp. The wiki is open to all, however, and the concept of AdaptableGIMP "user accounts" is advertised as enabling good task set authors to develop a positive reputation with the user community by sharing their work.
All together now!
At the moment, AdaptableGIMP is tied to the wiki for retrieving remote task sets, but Terry pointed out that this is not set in stone, and that in the future it should be possible to add in other remote "task repositories." The researchers have not been able to do this yet solely for lack of time, which is also why the adaptable interface and task-set framework have not yet been modularized so that they could be re-used in other applications.
Inkscape and Blender team members in the audience asked follow-up questions about that point, including how to stay in contact with the research group as development continued. Like GIMP, both applications support a vast array of unrelated use cases, to the point where casual users sometimes do not know where to get started. From the GIMP camp itself, Øyvind "pippin" Kolås was highly complimentary of the work, saying "we think you're on crack — but it's good crack." He went on to say that the GIMP UI team found a lot of the ideas interesting and useful, although AdaptableGIMP probably could not be directly incorporated into the upstream GIMP.
Terry agreed with the latter point after the talk was over, explaining that the AdaptableGIMP interface is not meant to replace the traditional GIMP interface. Also, in the present code base, the process of changing the GIMP toolbox was not straightforward enough to make AdaptableGIMP distributable as a GIMP plug-in because it removes and replaces parts of the UI, which are activities not covered by the plug-in API.
As for distant versions of GIMP, Inkscape, or Blender, who knows? Terry and his research students intend to keep developing the AdaptableGIMP software — including the communal, public task set architecture, which (more than the interface changes) makes up the backbone of the system. As the team stated at the beginning of the talk, years of ingimp research show that most users make use of only six GIMP tools on average — but everyone's six is different. That is why GIMP and the other creative applications are so complex: everyone uses a different subset of their functionality. By separating "the tasks" from "the toolbox," AdaptableGIMP shows that it is possible to carve a usable path through even the most complex set of features, provided that you make getting something done the goal, instead of showing off everything you can do.
Graphics tools are certainly not the only subset of open source applications that could learn from this approach. Just about any full-featured program offers more functionality than a single task requires of it. The other important lesson from AdaptableGIMP is that presenting a streamlined interface does not necessitate removing functionality or even hiding it — only offering a human- and task-centric view on the same underlying features.
I had the opportunity to sit down with Mark Shuttleworth, founder of Ubuntu
and Canonical, for a wide-ranging, hour-long conversation
while at Ubuntu Developer Summit (UDS) in Budapest. In his opening talk, Shuttleworth said that he wanted to "make the case"
for contributor agreements, which is something he had not been successful
in doing previously. In order to do that, he outlined a rather different
vision than he has described before of how to increase Linux and free
software adoption, particularly on
the desktop, in order to reach his goal of 200 million Ubuntu users in the
next four years. While some readers may not agree with various parts of
that vision, it is
definitely worth understanding Shuttleworth's thinking here.
Company participation in free software
In Shuttleworth's view, the participation of companies is vital to bringing the Linux desktop to
the next level, and there is no real path for pure software companies to
move from producing proprietary software toward making free software.
There is a large "spike-filled canyon" between the proprietary
and the free license
world. Companies that do not even try to move in a "more free" direction
are largely ignored by the community, while those which start to take some
tentative steps in that direction tend to be harassed,
"barbed", and "belittled". That means that
companies have to leap that canyon all in one go or face the wrath of the
"ideologues" in the community. It sets up a "perverse
situation where companies who are trying to engage get the worst
experience", he said.
The community tends to distrust the motives of companies and even fear
them, but it is a "childish fear", he said. If we make
decisions based on that fear, they are likely to be bad ones. Like
individuals, companies have varied motives, some of which align with the
interests of the community and some of which don't. Using examples like
Debian finding the GNU Free Documentation License to be non-free, while
Debian is not a free distribution under the FSF's guidelines, he noted that
the community can't even define what a "fully free" organization looks
like. Those kinds of disagreements make it such that we are "only
condemning ourselves to a lifetime of argument". In addition, because it is so unclear, "professional software
companies" aren't likely to run the gauntlet of community
unhappiness to start down the path that we as a community should want them to take.
Essentially, Shuttleworth believes that it is this anti-corporate,
free-license-only agenda that is holding free software back. For some,
"the idea of freedom is more important than the reality", and
those people may "die happy" knowing that their ideal was
never breached, but that isn't what's best for free software, its
adoption, and expansion. The "ideologues are costing free software
the chance" to get more corporate participation. What's needed is an
"understanding of how free software can actually grow", he said.
Existing company participation
There are, of course, companies that do contribute to free software, but
those companies "do something orthogonal" to software
development, he said. He pointed to Intel as a hardware vendor that wants
to sell more chips, and Google, which provides services, as examples of
these kinds of participants. There are also the distribution companies,
Red Hat, SUSE, Canonical, and others, but they have little interest in
seeing free software projects become empowered (by which he means able to
generate revenue streams of their own), he said, because that means
that anyone looking for support or "assurances about the
software" can only get it through the distribution companies.
Though some at Canonical disagree with the approach—because it will
reduce the company's revenues—Shuttleworth is taking a stand in favor of
contributor agreements to try to empower the components that make up distributions. By
doing that, "it will weaken Canonical", but will strengthen
the ecosystem. There needs to be more investment into the components, he
said, which requires that those components have more power, some of
which could come from the projects owning the copyright of the code.
Whether those projects are owned by Canonical, some other company, or by a
project foundation, owning the code empowers the components.
The other main reason that Shuttleworth is "taking a strong public
view" about contributor agreements is to provide some cover for
those who might want to use them. He has "thick skin" and
would like to move the free software ecosystem to getting more
"companies that are actually interested in software"
involved. So far, he has "seen no proposals from the
ideologues" on how to do that.
Companies may be more willing to open up their code and participate if they
know they can also offer the code under different terms. That requires
that, at least some of the time, contributors be willing to give their patches to
the project. Those who are unwilling to do so are just loaning their
patches to the project, and "loaning a patch is very uncool".
The "fundamentalists" who are unwilling to contribute their
code under a copyright assignment (while retaining broad rights to the code in question) are simply
not being generous, he said.
The state of free software today
The goal should be to "attract the maximum possible participation to
projects that have a free element", he said. He is "not arguing for
proprietary software", but he is tired of seeing "80%
done" software. In addition, the free software desktop applications
are generally far behind their proprietary counterparts in terms of
functionality and usability. He would like to "partner with companies that
get things done", specifically pointing to Mozilla as an
organization that qualifies.
The fear that our code will be taken proprietary is holding us back,
Shuttleworth said. In the meantime, we have many projects where the job is
only 80% done, and there is no documentation. A lot of those projects
eventually end up in the hands of new hackers who take over the project and
change everything, which results in a different, but still unfinished, application.
Involving software companies will not be without its own set of problems,
as those companies will still do "other things that we don't
like", but there is a need for professional software companies to help
get free software over the hump.
The "lone hacker" style of development is great as far as it
goes, but there are lots of other pieces that need to come together. He
pointed to the differences between Qt and GTK as one example. GTK is a
"hacker toolkit", whereas Qt is owned by a company that does
documentation, QA, and other tasks needed to turn it into a "professional
toolkit". Corporate ownership of the code will sometimes lead to
abuse, like "Oracle messing around with Java", but free
software needs to "use" companies in a kind of
"jujitsu" that leverages the use of the companies' code in
ways that are beneficial to the ecosystem.
He said that some of the biggest free software success stories come
from companies being involved with the code. MySQL and PostgreSQL are
"two great free
software databases", which have companies behind their
development or providing support. CUPS is a great
printing subsystem at least partly because it is owned and maintained by
Apple. Android is another example of an open source success; it has Google
maintaining strict control over the codebase.
Shuttleworth has a fairly serious disagreement with how the
OpenOffice.org/LibreOffice split came about. He said that Sun made a $100
million "gift" to the community when it opened up the
OpenOffice code. But
a "radical faction" made the lives of the OpenOffice
developers "hell" by refusing to contribute code under the Sun
agreement. That eventually led to the split, but furthermore led Oracle to
finally decide to stop OpenOffice development and lay off 100 employees.
He contends that the pace of development for LibreOffice is not keeping up
with what OpenOffice was able to achieve and wonders if OpenOffice would
have been better off if the "factionalists" hadn't won.
There is a "pathological lack of understanding" among some
parts of the community about what companies bring to the table, he said.
People fear and mistrust the companies on one hand, while asking
"where can I get a job in free software?" on the
other. Companies bring jobs, he said. There is a lot of "ideological
claptrap" that permeates the community and, while it is reasonable
to be cautious about the motives of companies, avoiding them entirely is a mistake, he said.
The Canonical contributor
agreement is "mediocre at best", but does have "some
elements which are quite generous", he said. It gives a wide license
back for code that is contributed so the code can be released under any
license the author chooses. In addition, Canonical will make at least one
release of the project using the patch under the license that governs
the project, he said. That guarantee does not seem to appear in the actual agreement, however.
These kinds of contributor agreements are going to continue to exist, he
said, and believing otherwise "denies the reality of the world we
live in". The problem is that there are so many different
agreements that are "all amateur in one form or another", so
there is a need to "distill the number of combinations and
permutations" of those agreements into a consistent set. That is
the role of Project Harmony, he said.
The project brought together various groups, companies, organizations, and
individuals with different ideas about contributor agreements, including
some who are "bitterly opposed" to copyright assignment. The
project has produced draft 1.0
agreements that have "wide recognition" that they
represent the set of options that various projects want.
The agreements will help the community move away from "ad hoc"
agreements to a standard set, which is "akin to Creative
Commons", he said. The idea is that it will become a familiar
process for developers so they don't have to figure out a different
agreement for each project they contribute to. Down the road, Shuttleworth
sees the project working on a 2.0 version of the agreements which would
cover more jurisdictions, and address any problems that arise.
In the hour that we spoke, Shuttleworth was clearly passionate about free
software, while being rather frustrated with the state of free software
applications today. He has a vision for the future of free software that
is different from the current approach. One can
certainly disagree with that vision, but it is one that he has carefully
thought out and believes in. One could also argue that huge progress
has been made with free software over the last two or three
decades—and Shuttleworth agrees—but will our current approach
take things to the "next level"? Or is some kind of different approach required?
As far as contributor agreements go, it seems a bit late to be making the
case for them at this point—something that Shuttleworth acknowledged
in his talk at the UDS opening. Opposition to the agreements, at least
those requiring copyright assignment, is fairly high, and opponents have
likely dug in their heels. While he bemoans ideology regarding contributor
agreements, there are procedural hurdles that make them unpopular as well;
few want to run legal agreements by their (or their company's) lawyers.
The biggest question, though, seems to be whether a more agreement-friendly
community would lead to more participation by companies. If the goal is to
get free software on some rather large number of desktops in a few short
years—a goal that may not be shared by all—it would certainly
seem that something needs to change. Whether that means including more
companies who may also be pursuing proprietary goals with the same code is
unclear, but it is
clear that Shuttleworth, at least, is going to try to make that happen.
On May 12, NLUUG held its Spring Conference with the theme "Open is efficient". With such a general theme, it won't be a surprise that the program was a mix of talks about policy matters, case studies, and technical talks in various domains. However, two talks were particularly interesting because they pinpointed some gaps in our current solutions for open (source) telephony.
Open SIP firmware
The Dutch cryptographer Rick van Rein presented [PDF] his project to build open source firmware for SIP (Session Initiation Protocol) telephony. His vision is that SIP holds great promises for the future of telephony, but that nobody is unleashing its potential:
Users say that they just want to make phone calls, so that's what phone manufacturers and telecom operators largely limit their features to, but if you ask for more advanced features the telcos say that not all phones support the feature and the phone manufacturers say that most telcos do not use the feature. And as the telecom operators still earn their money from analog calls, progress has come to a halt. At the end of the day the users remain stuck with some basic calling features, and current SIP phones are just a simulation of plain old analog phones.
All this holds for SIP devices, but there's another type of SIP phone
that currently has much more advanced functionality: the softphones (software implementing SIP functionality on a computer or smartphone). According to van Rein, the softphone market is where the real innovation is happening, with advanced features such as presence settings, IPv6 connectivity, and end-to-end encryption of phone calls with ZRTP (a cryptographic key-agreement protocol to negotiate the keys for encryption in VoIP calls). In short, open source is great at handling SIP functionality, but this doesn't help the people that have bought a SIP phone (the hardware), because the firmware of these devices remains "as open as an oyster".
Van Rein's goal is to build open source firmware that can be installed on such a SIP device instead of its original closed firmware, and ultimately it should be able to bring the advanced SIP features of softphones to these phones too. The project is called 0cpm, and is partially funded by NLnet Foundation. As a proof of concept, van Rein is now implementing his firmware for the Grandstream BT200, a fairly typical and affordable SIP phone for home and office use, but the framework is designed with portability in mind. The 0cpm firmware, called Firmerware, is GPLv3 licensed.
As some of these SIP phones have only 256K of RAM, Linux would be too
big an operating system to run on them; even its microcontroller cousin uClinux would be too large. So van Rein
wrote his own tickless realtime operating system, with a footprint of
around 13K. Together with the network stack and the application, this fits
well into the 512K NOR flash that is typical for smaller devices. According
to van Rein, it's important that 0cpm is able to run on cheap and energy
efficient phones (because they're always on) with limited resources, so he doesn't have the luxury to use Linux. At the bottom of the 0cpm firmware stack, you need drivers for all chips and peripherals of the phone, and on top of it there will be some applications running, such as a SIP phone application.
One of the main goals of the 0cpm project is to enable SIP on IPv6. For most end users, current SIP phones are too complex to configure due to a dependency on IPv4 and NAT routers. To tackle these issues, most SIP vendors end up passing all traffic through their own servers, but of course this isn't free. Van Rein believes that only an IPv6 SIP project will be able to offer an open but easy-to-configure SIP experience to end users. With IPv6, direct calls are always possible, and with technologies like ITAD (defined in RFC 3219) and ENUM (E.164 NUmber Mapping), SIP telephone numbers can be found using a DNS-based lookup. By combining all these existing pieces in the 0cpm project, users can finally call freely. Not only free as in beer (that's where the project's name comes from, "zero cents per minute"), but also free as in speech, van Rein emphasizes. For devices that have no IPv6 connectivity whatsoever, the 0cpm firmware will fall back to a suitable device-local tunnel for IPv6 access.
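The ENUM side of that DNS lookup is simple enough to show: RFC 6116 maps an E.164 number to a domain name by reversing its digits and appending e164.arpa; a resolver then queries NAPTR records at that name to find a SIP URI. A minimal sketch of the name construction:

```python
def enum_domain(e164_number):
    """Build the e164.arpa DNS name for an E.164 phone number,
    per RFC 6116: strip non-digits, reverse, dot-separate."""
    digits = [c for c in e164_number if c.isdigit()]
    return ".".join(reversed(digits)) + ".e164.arpa"
```

For example, a Dutch number such as +31 61 234 5678 becomes the query name 8.7.6.5.4.3.2.1.6.1.3.e164.arpa.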
But before we get there, there's a lot of reverse engineering to do. In
his talk at the NLUUG conference, van Rein gave some tips and tricks he
used to reverse engineer the inner workings of his Grandstream BT200
device. One of the tips he gave was to use the Logical Link Control
(LLC) network protocols, which are extremely easy to implement and come in
handy for reverse engineering. The LLC1 protocol offers a trivial datagram
service directly over Ethernet and it has minimal requirements: just
memory, network and booting code. A stand-alone LLC1 image is about 10K,
including console access. You can install a bootloader using TFTP over LLC1
instead of TFTP over UDP. In a similar way you can connect to the console
over LLC2 (a trivial stream service) instead of over TCP. It has the same
minimal requirements as LLC1 and adds about 200 lines of C code. Van Rein
calls LLC "a generally useful tool for reverse engineering",
emphasizing that with LLC2 it's even possible to show the boot logs before
the device gets an IP address.
But when reverse engineering phone hardware, you should first be able to figure out what each component is doing. In general, most phones contain a System-on-Chip, RAM, flash storage, Ethernet connectivity, and GPIO (General Purpose Input/Output) pins. Reverse engineering is more of an art than a science, and it includes identifying the components, gathering datasheets, and finding an open source compiler toolchain. Finally, you have to figure out a way to launch your own code on the device. Then you can start writing some "applications" to test your drivers, for instance an application that flashes the LEDs to test the drivers for timers and interrupts, and an application that shows typed numbers to test the drivers for the keys and the display. By building several of these simple applications, you can test the drivers individually. The 0cpm software contains these applications to make porting easier. Van Rein is currently still working on the drivers, and he has a simple application that gets an IPv6 address. He hopes to be able to show a working phone application in the second half of this year.
In short, van Rein truly believes that progress in SIP phones will come
from open source firmware, and with the 0cpm project he wants to build this
firmware. While the current proof of concept phone is the Grandstream
BT200, he invites anyone to port to their own hardware. For interested
developers, there's a Git
repository with the source code (git://git.0cpm.org/firmerware/,
which is only accessible over IPv6). Reverse
engineering current SIP phone hardware is a big task, and van Rein
emphasizes that 0cpm is not even alpha-quality code. If the project can
generate a critical mass, its vision of generic SIP firmware could come true, in much the same way as we now have free firmware projects such as OpenWrt and DD-WRT for wireless routers.
A nice by-product will be that 0cpm allows a truly secure way of calling each other, thanks to the direct IPv6 connectivity without any central server that can wiretap all media streams, as well as the encryption and mutual authentication offered by the ZRTP protocol. On a related note, GNU SIP Witch 1.0 was released last week, which offers a secure peer-to-peer SIP server that lets ZRTP phones talk without the need for an intermediate service provider.
An "open" GSM network operator
Another telephony-related project that was presented at the NLUUG
conference is Limesco [in Dutch], an "open" and transparent pan-European not-for-profit GSM network. Mark van Cuijck, one of the three founders, presented the rationale behind this project:
Telephony really is a closed world, especially in the domain of mobile networks. GSM operators are far from open and transparent, with many types of subscriptions. There are even operators that offer you different prices and different conditions based on whether you have bought the subscription in a physical store or on the operator's web site. Even more, the conditions are different if you have bought the subscription together with a device than if you bought a SIM-only subscription. And so on and so on. Moreover, international roaming costs, even in the "internal" European market, are very high, especially for data.
On the other hand, the telephones and some of the telecom infrastructure are becoming more and more open. Van Cuijck mentioned the popular Android platform, which is largely open source and has a big open applications ecosystem. There's also OsmocomBB, an open source GSM baseband software implementation, and OpenBTS, an open source software-based implementation of a GSM base station. And in the SIP domain, we have open source SIP softphones such as Ekiga and server software like Asterisk and FreeSWITCH. But what good are all these open source programs if the mobile network operators are very restrictive about what happens on their network? That's where Limesco would come in.
Limesco is still in the research phase, so even the founders aren't sure yet that it will become reality. They are currently researching whether it would be financially and technically viable to become a mobile virtual network operator (using another operator's infrastructure) aimed at a target audience of users who value freedom, openness, and transparency. They have published a survey that has been filled out by 1200 people, and they are talking with companies in the telecom market to get an idea of the costs. By June 1, they want to decide whether to continue with the project or abandon it.
A targeted approach
Whatever happens with the project, the idea is interesting, and it seems like it should appeal to enough people to make it a viable business model. There are already many mobile virtual network operators with a specific target audience. For instance, van Cuijck mentioned the Dutch mobile operator Telesur, which targets Surinamese inhabitants of the Netherlands. Many of these people still have relatives in Suriname, a former Dutch colony, and Telesur responds by offering them very cheap call rates between the Netherlands and Suriname. It's this kind of targeted approach that the Limesco founders have in mind, but aimed at the "hacker" community.
Van Cuijck presented some ideas. One of the goals of Limesco is to be more transparent about the call rates:
With other mobile operators, call rates vary a lot, especially for international calls, and it's not clear whether the differences are technical or organizational. When we decide on call rates for Limesco, we will communicate why our rates are what they are.
The preliminary results from the survey also show that respondents are interested in knowing what data Limesco would store about its subscribers. Every mobile network operator has to store certain data and cooperate with the police, and Limesco would have to obey these laws like any other operator, but it wants to set itself apart by being completely transparent about it.
Another goal is to give subscribers the freedom to manage their own
services. For instance, instead of offering services at the level of the
operator network, subscribers could get the ability to manage their own
voice mail application, their own conference call implementation, and so
on. Subscribers could also get the ability to block certain numbers, and
it should even be possible to link two phone numbers to one SIM card, or
two SIM cards to the same phone number. All of the examples van Cuijck
mentioned are features that are clearly uninteresting to most of the
public, but that could appeal to a niche audience of do-it-yourself users.
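To make "managing your own voice mail application" concrete: since Asterisk was mentioned earlier, a subscriber could, for example, run their own Asterisk box and have the operator route their number to it. A hypothetical dialplan fragment (the extension number, SIP peer name `alice`, and mailbox are all made up for illustration) might look like this in `extensions.conf`:

```
[subscriber-context]
; Ring the subscriber's own SIP phone for 20 seconds...
exten => 100,1,Dial(SIP/alice,20)
; ...then fall through to a self-hosted voice mail box if unanswered.
exten => 100,n,VoiceMail(100@default,u)
exten => 100,n,Hangup()
```

The point is that the voice mail greeting, storage, and retrieval would all live on the subscriber's own machine rather than in the operator's network.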
This all sounds interesting, but it's not yet reality, and the project may be a bit naïve. One audience member made an excellent observation after van Cuijck's talk: a mobile operator makes its profit from the difference between how much you think you call and how much you actually call. The various call plans all have one goal: to confuse subscribers into choosing a plan that is suboptimal for their situation. So if Limesco is completely transparent about its call rates, it gives up that information asymmetry and will make less profit than other mobile operators. While this observation may well be true, your author thinks that Limesco can have a competitive advantage over other mobile operators with its do-it-yourself approach: if its subscribers are not interested in the many services other operators offer (voice mail, conference calls, call forwarding, and so on), Limesco doesn't have to build or outsource those services, and that may be how it can lower the costs of the infrastructure it rents.
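That information-asymmetry argument is easy to make concrete with a little arithmetic. The plans, prices, and usage figures below are entirely hypothetical; the sketch just shows how underestimating one's own usage turns the apparently cheaper plan into the more expensive one:

```python
# Hypothetical plans and usage figures, purely for illustration.
def monthly_cost(plan, minutes_used):
    """Flat monthly fee plus per-minute overage beyond the included minutes."""
    overage = max(0, minutes_used - plan["included_minutes"])
    return plan["fee"] + overage * plan["overage_rate"]

plans = {
    "small": {"fee": 10.0, "included_minutes": 100, "overage_rate": 0.25},
    "large": {"fee": 20.0, "included_minutes": 400, "overage_rate": 0.25},
}

expected_minutes = 90   # what the subscriber *thinks* they call per month
actual_minutes = 250    # what they really call

# Based on the estimate, "small" looks cheaper (10.00 versus 20.00)...
best_guess = min(plans, key=lambda name: monthly_cost(plans[name], expected_minutes))

# ...but with real usage the choice turns out to be suboptimal:
# "small" costs 10 + 150 * 0.25 = 47.50, while "large" costs 20.00.
print(best_guess)                                       # small
print(monthly_cost(plans[best_guess], actual_minutes))  # 47.5
```

The operator pockets the 27.50 difference, which is exactly the asymmetry a fully transparent operator would give up.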
Filling the gaps in open telephony
Both 0cpm and Limesco are interesting projects in that they are filling
the few remaining gaps in our open telephony infrastructure. We have good
open source SIP softphones like Ekiga, we have good SIP server software
like Asterisk, we even have open source GSM base station software such as
OpenBTS and open source GSM baseband software like OsmocomBB, but we still
lack two important components to have a fully open and transparent
telephony experience: open firmware for SIP phones, and a mobile network
operator that doesn't hamper what we can do with our mobile phone
connection. Perhaps we will see that change in the relatively near future.
Page editor: Jonathan Corbet
Inside this week's LWN.net Weekly Edition
- Security: Seccomp: replacing security modules?; New vulnerabilities in exim, flash-plugin, pure-ftpd, tor, ...
- Kernel: Pointer hiding; Integrating memory control groups; ARM kernel consolidation; The platform problem.
- Distributions: UDS security discussions; Fedora, Mageia, Debian, ...
- Development: DVCS-autosync; NumPy, Perl, SIP Witch, TermKit, ...
- Announcements: Google "Chromebooks" launch, webOS, Groklaw, ...