Author and keen technology observer Clay Shirky came to LinuxCon in
Vancouver to impart his vision of how large-scale collaboration
works—and fails to work. In an energetic and amusing—if not
always 100% historically accurate—talk,
Shirky likened collaboration to "structured fighting" and
looked at how and why that is. It is, he said, the structure that makes
all the difference.
Shirky started things off with his "favorite bug report ever"
(#330884), which starts with the line: "This privacy flaw has
caused my fiancé and I to break-up after having dated for 5 years."
Because of the way Firefox was recording information about sites that were
blocked from ever storing the password, the woman who filed the bug found
out that her intended was still visiting dating sites. What was
interesting, Shirky said, was that the responses in the bug report not only
included technical advice, but also relationship advice that was presented
as if it were technical information. The report is proof that we can never
really "disentangle the hard technical stuff from the squishy human
stuff", he said.
He then put up a picture of the "most important
Xerox machine in the world" as it was the one that was sent to Richard
Stallman's lab without any source code for a driver. In "an epic fit
of pique", Stallman wrote a driver and has devoted the following 25
years of his life to fighting the strategy of releasing software without
the corresponding source code.
But GNU projects were tightly managed, and it wasn't until another project
came along, Linux, that the full power of large-scale collaboration was
unlocked. Eric Raymond had this idea that the talent pool for a project was
the entire world. Linus Torvalds took that idea and ran with it, he said.
(That probably isn't quite the order of those events that the rest of us
remember, but Shirky's point is still valid.)
One of the things that open source has given to the world is the
"amazing" ability to manage these large-scale collaborations.
It goes well beyond software, he said. If you look at the "cognitive
surplus" that is available for collaborative projects, it is truly
a huge resource. A back-of-the-envelope calculation in 2008 came up
with 100 million hours to create all of Wikipedia, including the talk
pages, revisions, and so on. But that pales in comparison to television
watching, which takes up an estimated 1 trillion hours per year. There is
an "enormous available pool of time and attention" that can be
tapped since people are now all connected to the same grid, Shirky said.
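The scale of that surplus is easy to make concrete with the figures Shirky cited; this is only back-of-the-envelope arithmetic, using the estimates from the talk:

```python
# Rough arithmetic on Shirky's figures: ~100 million person-hours to
# build all of Wikipedia (2008 estimate) versus ~1 trillion hours of
# television watched per year.
wikipedia_hours = 100e6
tv_hours_per_year = 1e12

print(tv_hours_per_year / wikipedia_hours)  # 10000.0
```

By that estimate, a single year of worldwide television watching represents the effort of roughly ten thousand Wikipedias.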
As an example, he pointed to the Red Balloon
Challenge that DARPA ran last year. They wanted to test new
collaboration models, so they tethered ten weather balloons in locations
across the US. The challenge was to gather a list of all ten and their
latitude/longitude to within a mile.
An MIT team won the challenge by saying they would share the prize money
with anyone who gave them information about the locations. But they also
took a cue from Amway, he said, and offered a share of the prize to people
who found a person that could give them location information. That led to
a network effect, where people were asking their friends if they had seen
any of the balloons. In the end,
the MIT team solved the problem in nine hours, when DARPA had allocated 30
days for the challenge. "That's the cognitive surplus in
action", Shirky said.
"When the whole world is potentially your talent pool, you can do
amazing things", Shirky said. lolcats is one of those
a "goodly chunk of cognitive surplus" goes into creating them,
which leads to criticism of the internet. But that always happens with new
media, he said, pointing out that the first erotic novel was written shortly
after the invention of the printing press but that it took 150 years to
think of using the invention for a scientific journal.
He showed several quotes from people reacting to new media like the
telegraph, telephone, and television at the time each was introduced. The
introduction of the television led the commenter to
believe that world peace would occur because it would allow us to
better connect with and understand other cultures. "Here's a hint of
what we get with new media—it's not world peace", he said. More people
communicating actually leads to more fighting, and the challenge is to
figure out how to structure that fighting.
Shirky believes that the transition from alchemy to chemistry was fueled by
the "decision to add structure to what the printing press made
possible". Instead of performing and recording experiments in
secret as alchemists did, the rise of the scientific journal changed the
focus to publishing results that others could test for themselves—or
argue about. The difference between the two is that alchemists hid their
discipline, while chemists published, he said.
Three observations about collaboration rounded out the rest of Shirky's
talk. While it's not a canonical list, he said, there are useful lessons
from the observations. The first is that "many large-scale
collaborations actually aren't". If you look at the page for Linux
on Wikipedia, there have been some 10,000 edits from 4,000 different
people. That equates to 2.5 edits per person, which is a pretty standard rate
for Wikipedia pages.
That might appear to be a very large-scale collaboration, but it's not, he
said. If you graph the contributions, you soon see that the most active
contributors are doing the bulk of the work, with the top contributor doing
around 500 edits of their own. The tenth highest contributor did 100
edits, and the 100th did 10 edits. Around 75% of contributors did only one edit.
That same pattern shows up in many different places, he said,
including Linux kernel commits. These seemingly large-scale collaboration
projects are really run by small, tight-knit groups that know each other
and care about the project. That group integrates lots of small fixes that
come from the wider community. Once we recognize that, we can
plan for it, Shirky said.
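The distribution Shirky describes is the familiar long tail: contribution counts fall off roughly as a power law of contributor rank. A toy model, with constants fit by eye to the tail of the numbers he quoted (the 10th and 100th contributors), not his actual data:

```python
def edits_at_rank(rank, k=1000.0, alpha=1.0):
    """Toy power-law model of edits per contributor by rank."""
    return k * rank ** -alpha

for r in (10, 100):
    print(r, round(edits_at_rank(r)))  # rank 10 -> 100 edits,
                                       # rank 100 -> 10 edits
```

The practical consequence is the one Shirky draws: plan for a small core doing most of the work, surrounded by a long tail of one-edit contributors.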
Shirky's second observation was that many of the people who want to collaborate shouldn't be allowed to, at
least easily. He pointed to stackoverflow and the related StackExchange sites as embodying some
of this philosophy. StackExchange was spun off from stackoverflow to
handle topic areas beyond the programming questions that stackoverflow
covers. Possible question and answer topics are "anything that is
geeky enough to have a right answer" and that people want to argue
about, Shirky said.
But creating new Q&A sites on StackExchange does not follow the model
that many other internet sites have used: "just let people do what
they want and see what sticks". Instead, it is difficult to start a
new site, which ensures that there is enough real interest in the topic.
The sites are "taking karma really seriously", and are
"stretching both ends of the karmic equations". New users are
not allowed to post either questions or answers right away, but must build
up karma by reading the site first. Net etiquette always said that new
users should do that, but "no one did it". At the other end
of the spectrum, users can build up enough karma that they get sysop-like
powers. These sites are an "attempt to say that we don't have to
treat all people the same", he said.
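The "both ends of the karmic equation" idea amounts to gating privileges on reputation. A toy sketch of that policy as described in the talk; the threshold values and action names here are invented for illustration, not Stack Exchange's actual numbers:

```python
# Hypothetical karma thresholds, low end (new users must earn the right
# to post) through high end (veterans gain sysop-like powers).
THRESHOLDS = {"post_answer": 10, "post_question": 50, "moderate": 10000}

def allowed(karma, action):
    """True if a user with this much karma may perform the action."""
    return karma >= THRESHOLDS[action]

print(allowed(5, "post_answer"))   # False: read the site first
print(allowed(15000, "moderate"))  # True: high karma earns extra powers
```

The point is not the specific numbers but the principle: the site's software refuses to treat all users identically, enforcing the etiquette that "no one did" voluntarily.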
Technology and human patterns need to match up, Shirky said, as
his third observation. This goes back to the bug report
at the beginning of his talk. It has taken a long time to align
the two, he said, because code dominated free software communities for so long.
As an example, he pointed to the saga of Linux kernel source code
management, which started out as tarballs and patches. Then BitKeeper
showed up, and then went away, which (Shirky said) caused Torvalds to go back to tarballs
and patches. Basically, Torvalds chose to use no source code manager at all
rather than use one whose functionality did not embrace the ideals of the GPL,
Shirky said. He was not making a licensing argument here, after all
Torvalds had been using the decidedly non-GPL BitKeeper, but instead was
arguing (perhaps somewhat inaccurately) that Torvalds chose BitKeeper, and
later Git, because the way they operate is in keeping with GPL ideals.
Git "lives up to the promise of the GPL", because it
decentralizes repositories and allows easy forking. Merging code should
always be a community decision, which Git also embodies, he said.
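The decentralization Shirky credits to Git can be demonstrated in a few commands: every clone is a complete repository, and merging someone else's work is an explicit act by the receiver. A minimal sketch using throwaway local repositories (names and commit messages are illustrative):

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"

# Create an "upstream" repository with one commit.
git init -q upstream
git -C upstream -c user.name=A -c user.email=a@example.com \
    commit -q --allow-empty -m "initial"

# "Forking" is just cloning: the fork is a full, independent repository.
git clone -q upstream fork
git -C fork -c user.name=B -c user.email=b@example.com \
    commit -q --allow-empty -m "proposed change"

# Merging back is the upstream maintainer's explicit decision.
git -C upstream pull -q ../fork HEAD
git -C upstream log --oneline | wc -l   # prints 2
```

Nothing in this workflow requires a central server; services like github layer social conventions on top of the model rather than replacing it.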
Once Git was released, there were other interesting developments. Source
code management systems had been around for decades, but were never used
for anything but source code, Shirky said. Because Git matches people's
mental model of how collaboration should work, it spawned things like github. But it doesn't stop there, he said,
noting that there are Git repositories for novels, and that someone had
checked in their genome to a public repository. The latter, of course,
spawned an immediate pull request for 20 upgrades. A joke, but one that
resulted in a scholarly discussion about caffeine sensitivity that
had participants from organizations like the National Institutes of Health.
There is also an effort called Open Knesset
[Hebrew] that is attempting to use Git to help people better understand
what they agree and disagree about. Essentially it takes laws proposed in the
Israeli Knesset and checks them into Git, then tells people to fork the law
and write it the way they would like to see it. "That will show
where the arguments are", Shirky said. It is "audacious
enough" that it probably won't work, but he also noted that
"audacity beats predictability over the long haul". He
believes we will see more of this kind of thing in the future.
One way to look at large-scale collaboration is that it is more people
pooling more ideas, and that's true, he said, but he would make an addition:
"after arguing about it for a really long time". Taking this
"structured argument approach" that free software (and other)
have and moving it into other areas of our world will be beneficial.
Applying some of the lessons learned from communities like StackExchange,
Open Knesset, and the Linux kernel, as well as lessons from things like
Mozilla bug entries will provide a means to take argumentation to the next
level—and actually make it matter.
[ I would like to thank the Linux Foundation for travel assistance to
attend LinuxCon. ]
The theme of the 2011 COSCUP
conference (Taipei, August 20-21) was "Gadgets beyond smartphones." Based
on a number of the talks and exhibits on offer, "beyond smartphones" seemed
"tablets" to a number of the people in attendance. Two talks by
representatives of competing desktop environments show some interesting
similarities and differences in how they see the tablet opportunity.
First up was GNOME developer Bastien Nocera, speaking on the theme "my sofa
wants a new form factor." That new form factor, naturally, is the tablet - an
ideal device, it seems, for the typical couch potato. Tablets, he said, are
"the new Eldorado"; everybody is trying to get there.
There are a number of options for software to run on tablets. One could
use Windows, but it is non-free and uninteresting. iOS, too, is entirely
proprietary; it's also unavailable for non-Apple hardware. WebOS was an option when
Bastien wrote his talk, though things had changed in the meantime - the
demise of WebOS shows what can happen to a proprietary platform owned by a
single company. Then there's Android, but the problem with Android,
according to Bastien, is that it's entirely owned by Google. It is not
truly open; one has to be one of Google's best friends to have any kind of
early access to the software. The result is that there are a lot of
tablets on the market running old versions of Android. MeeGo, he said, was
not really even worth mentioning; it is a "puppet" of Intel without any
real community governance.
What all this comes down to is that, at the moment, there is an opportunity
for something else in the tablet market. Unsurprisingly, Bastien thinks
that GNOME 3 would be a good something else.
GNOME 3, he says, is the result of an ongoing push for more vertical
integration in the platform. Increasingly, GNOME is seen to include
components like PulseAudio, NetworkManager, udev, and, more recently,
systemd. GNOME, in other words, is becoming more of an operating system in
its own right. Furthering that evolution, the project plans to start
shipping full operating system images to users. The full GNOME experience
is hard to produce if distributors change pieces of the platform - using
ConnMan instead of NetworkManager, for example. The project wants to
produce a single, unified experience for GNOME users.
And they want GNOME 3 to be an option for tablets. There are a number
of advantages to the platform: it's a community-based, 100% free project
with an open development model. But, he said, it lacks one thing:
hardware. So Bastien put out a call to hardware manufacturers: please talk
to the GNOME project about what they have to offer. And, if nothing else,
please send your drivers upstream and ensure that the hardware is supported
by free software.
Bastien was replaced at the lectern by KDE developer Aaron Seigo who had a
surprisingly similar story to tell. The existing platforms, he said, are
not free; he cited the result of some study which - using an unclear
methodology - came to the conclusion that iOS was 0% open while Android
did a little better at 23% open. Linux (for some value of "Linux") came in
at 71% open. KDE, he said, is going for 100% open.
Aaron introduced Plasma and Plasma Active (recently described in LWN); these projects have existed
in desktop and netbook variants for a while now. The tablet version is
more recent, but is well advanced regardless. The goals for all of the
variants are the same: an "amazing experience" which creates an "emotional
bond" in users, an efficient development framework, and the ability to run
the same application on all types of devices. Aaron noted that all three
variants share almost all of their code.
One part of the talk sounded quite different from Bastien's talk: Plasma,
Aaron said, has been designed as a set of components which can be assembled
in any number of ways. KDE is not shooting for the single unified
experience; it is aiming to build a platform with which others can create
any number of different experiences.
According to Aaron, there are seven companies working with Plasma now,
along with a lot of community developers. But the project is looking for
more developers, more designers, and more companies to work with; they are
especially interested in hardware partners. KDE, he said, has something
that is compelling and shippable today; all it needs is something to ship
that software on. (He had previously said that a couple months of
polishing were planned; perhaps a large value of "today" was intended).
In your editor's view, there does seem to be an opportunity in the tablet
space at the moment. Apple's offerings still own this category,
but that situation seems unlikely to last forever. Android is the logical
choice for a
second leading system, but Google's control may not sit well with all vendors,
especially now that Google is, through its acquisition of Motorola
Mobility, becoming a hardware vendor in its own right. The management of
Android, according to Google, will not change as a result of this
acquisition, but that is just the problem: companies like Motorola have
already tended to get privileged access to unreleased Android versions.
And, in any case, a duopoly is still a small set of options; Android is
clearly not going away, but it would not be surprising to see an appetite
for a third option among both vendors and customers.
Becoming that third option will not be an easy thing to do, though. There
are a number of contenders for that space beyond GNOME and KDE: they
include MeeGo, Ubuntu with the Unity shell and, naturally, Windows.
Even WebOS could possibly make a surprise comeback.
Perhaps one other Linux-based platform can establish itself as a viable
alternative on tablets; it seems unlikely that four or five of them will.
Competition between projects can be good for the exploration of different
ideas and as a motivation to get more done, but it's hard not to feel that,
if we want to create a viable third platform which is competitive with
Android and iOS, our community's efforts are a little too scattered at this point.
A related question is: can a tablet-based platform be competitive without
running on phone handsets as well? Neither of the desktop environment
presentations at COSCUP mentioned handsets; if the projects are thinking of
scaling down that far, they are not talking about it yet. There is clear
value in having the same interface - and the same applications - on both
types of device. Android and iOS offer that consistency; alternatives may
have to as well.
And, of course, there is the challenge of third-party applications; getting
this week's hot application ported to GNOME or KDE may not prove easy.
Sometimes one hears that HTML5 will save the day, but there are a couple of
objections that one could raise to that line of reasoning. One is that we
have been hearing that the web would replace local applications for at least
15 years now; maybe it is really true this time, but that has yet to be
seen. And if everything does move to HTML5, alternatives like
ChromeOS and Boot2Gecko may become more interesting, widening the field further.
So the desktop environments have given themselves a big challenge, to say
the least. It would be nice to see at least one of them succeed; we have
come too far to give up on the idea of a fully free, community-oriented
system on newer hardware. The technology to create a competitive
alternative is certainly there; what remains to be seen is whether it is
matched with an ability to woo hardware manufacturers and get real products
into the market. At this point, the success of Linux on the tablet
probably depends more on that sales job than on what the developers do.
[Your editor would like to thank COSCUP 2011 for assisting with his travel
to this event.]
The first day of LinuxCon 2011 started off with a keynote from the Linux Foundation's Jim Zemlin, in which he joked about the perpetually-next-year "year of the Linux desktop." Interestingly enough, that afternoon a smaller session with Userful Corporation's Timothy Griffin dealt with Linux on desktops in massive numbers. Userful deploys Linux in very large-scale "digital inclusion" projects — such as schools in second- and third-world environments — including the world's largest, a 500,000 seat deployment in Brazil.
Userful is a small, Calgary-based company that contracts with local system integrators to roll out Linux desktops, usually in schools, and often to fulfill government mandates to deploy open source software. Griffin showed a cartogram that colored the countries of the world by the relative price of a PC, and scaled the size of each country by its population. According to that graphic, the vast majority of the world population lives in countries where a computer costs the equivalent of 6 months' salary (or more), and the ratio of schoolchildren to computers is as high as 150 to 1.
In those countries, governments frequently undertake nation-wide computing initiatives (sometimes even creating national Linux distributions), for basic cost-saving reasons and to keep development and IT support jobs in-country. When deploying the machines into schools, Griffin said, the cost of the hardware accounts for but a fraction of the overall cost: power may be expensive and unreliable, the site may be several days' journey on difficult roads, and there may be no Internet connection for updates and IT support. As a result, Userful tailors its software solution to function in circumstances that ordinary Linux distributions do not.
The most visible difference seen in Userful deployments is multi-seat
PCs. Using commodity hardware, the company configures machines to serve up
five to ten front-ends (including monitor, keyboard/mouse, and sound) from
a single PC. Userful's multi-seat setup relies on USB hubs, using hardware
from HP, ViewSonic, and a number of other commodity peripheral vendors. While in
the past such multi-seat configurations would have required special-purpose
components, Griffin said that (ironically, perhaps) the popularity of
Microsoft's "Windows Multipoint" product led to a glut of easily available
hardware. The USB devices at each front end include simple graphics chips
of the same type used in laptop docks, and are capable of running applications at normal, "local" speed — unlike most remote-desktop thin client configurations. A "medium" strength PC with four CPU cores can serve ten front ends running normal office applications, and do so using less power than ten low-end PCs, plus offer simplified configuration management, printer sharing, updates, and support.
The Brazil deployment has been rolling out in phases since 2008, and
currently includes more than 42,000 schools in 4,000 cities. The base
distribution is one created by the Brazilian government, called Educational Linux [Portuguese], which is based
on Kubuntu. But a bigger component of the project, Griffin said, was the
support system that was also built by the government to provide teachers with classroom materials and software updates, and students with a social networking component. The computers are pre-loaded with multi-gigabyte data stores — from lesson plans to video content — and in rural areas without Internet access, updates are sent by mail on DVD.
As a case study, Griffin noted, the Brazil deployment reveals valuable lessons for the Linux and open source community as a whole, on subjects such as "sustainability," where too often the focus is on power consumption alone. But a genuinely "sustainable" deployment must sustain itself, he argued, including being resilient to the lack of an Internet connection, to the absence of predictably scheduled visits from IT staff, and to teachers who may have no more experience with computing than the schoolchildren do.
Griffin called these situations "green field" deployments, where there is no pre-existing computing environment at all. They are common in regions of the world where computers are expensive, he said, and where national governments often do studies and end up mandating the use of Linux and open source.
Where open source is silently ceding the field
Yet despite those mandates, he said, Microsoft Windows often ends up ultimately getting deployed instead. There are many reasons why, including lobbying efforts, entrenched players, politics, and money. But the troubling part is that the open source community has no response to these gambits, even when they are based entirely on FUD and distortion. The major commercial Linux distributions (Red Hat, SUSE, etc.) put no effort into competing for green field deployments, and offer no on-the-ground field support to those who lobby and bid for the contracts.
There is not an easy solution; what is needed to improve the situation includes better coverage of the large-scale success stories to counteract FUD and even outright lack-of-knowledge. Griffin told an audience member that there are many non-governmental organizations (NGOs) working in impoverished nations that run Windows on their computers solely because they have no idea that Linux even exists. In fact, he added, they pay full price for their licenses, when they could save considerable money just by telling Microsoft they were considering Linux and getting a steep discount in return.
The green field market is one that Linux and open source ought to fight hard to win, Griffin said, for precisely the reasons that Zemlin said Linux had been successful in the first world: its free availability enables innovation and experimentation in areas (inside and outside of the technology field) that are simply unpredictable. National governments regularly end up recommending and mandating open source, Griffin argued, because they see that by not buying into a proprietary solution owned by a foreign company, they put more power into the hands of their own people.
If you want to see the year of the Linux desktop, he said, look to the green field deployments. "The next billion computer users haven't even decided what their operating system is going to be." Brazil's roll-out of 500,000 desktops running Linux has put Linux into the hands of millions of students. In five to ten years, the open source community is going to see a return on that investment when those students enter the workplace, having been trained on a computer — easily the most powerful educational tool in the world — that runs free software. Microsoft recognizes that those stakes are huge, and has adopted a "win at any cost" strategy. Unfortunately the open source community is not nearly as organized, and lets many of those opportunities slip from its grasp.
As Griffin said repeatedly, there is no simple answer: his company works on software, but most of the work needing to be done is hands-on and in-the-field. But for all the talk at LinuxCon about how the PC era is over, it is a powerful reminder that the smartphone and tablet wars are a decidedly first-world problem, and that for most of the computer users of the future, the desktop battle is far from being over.
Page editor: Jonathan Corbet