LWN.net Weekly Edition for August 25, 2011
LinuxCon: Clay Shirky on collaboration
Author and keen technology observer Clay Shirky came to LinuxCon in
Vancouver to impart his vision of how large-scale collaboration
works—and fails to work. In an energetic and amusing—if not
always 100% historically accurate—talk,
Shirky likened collaboration to "structured fighting" and
looked at how and why that is. It is, he said, the structure that makes
all the difference.
![[Clay Shirky]](https://static.lwn.net/images/2011/lcna-shirky1-sm.jpg)
Shirky started things off with his "favorite bug report ever"
(Firefox bug
#330884), which starts with the line: "This privacy flaw has
caused my fiancé and I to break-up after having dated for 5 years."
Because of the way Firefox was recording information about sites that were
blocked from ever storing the password, the woman who filed the bug found
out that her intended was still visiting dating sites. What was
interesting, Shirky said, was that the responses in the bug report not only
included technical advice, but also relationship advice that was presented
as if it were technical information. The report is proof that we can never
really "disentangle the hard technical stuff from the squishy human
stuff", he said.
He then put up a picture of the "most important
Xerox machine in the world" as it was the one that was sent to Richard
Stallman's lab without any source code for a driver. In "an epic fit
of pique", Stallman wrote a driver and has devoted the following 25
years of his life to fighting the strategy of releasing software without
the corresponding source code.
But GNU projects were tightly managed, and it wasn't until another project
came along, Linux, that the full power of large-scale collaboration was
unlocked. Eric Raymond had this idea that the talent pool for a project was
the entire world. Linus Torvalds took that idea and ran with it, he said.
(That probably isn't quite the order of those events the rest of us
remember, but Shirky's point is still valid.)
One of the things that open source has given to the world is the
"amazing" ability to manage these large-scale collaborations.
Cognitive surplus
It goes well beyond software, he said. If you look at the "cognitive
surplus" that is available for collaborative projects, it is truly
a huge resource. A back-of-the-envelope calculation in 2008 came up
with 100 million hours to create all of Wikipedia, including the talk
pages, revisions, and so on. But that pales in comparison to television
watching which takes up an estimated 1 trillion hours per year. There is
an "enormous available pool of time and attention" that can be
tapped since people are now all connected to the same grid, Shirky said.
As an example, he pointed to the Red Balloon Challenge that DARPA ran last year. They wanted to test new collaboration models, so they tethered ten weather balloons in locations across the US. The challenge was to gather a list of all ten and their latitude/longitude to within a mile.
An MIT team won the challenge by saying they would share the prize money
with anyone who gave them information about the locations. But they also
took a cue from Amway, he said, and offered a share of the prize to people
who found a person that could give them location information. That led to
a network effect, where people were asking their friends if they had seen
any of the balloons. In the end,
the MIT team solved the problem in nine hours, when DARPA had allocated 30
days for the challenge. "That's the cognitive surplus in
action", Shirky said.
"When the whole world is potentially your talent pool, you can do
amazing things", Shirky said. lolcats is one of those
things and a "goodly chunk of cognitive surplus" goes into creating them,
which leads to criticism of the internet. But that always happens with new
media, he said, pointing out that the first erotic novel was written shortly
after the invention of the printing press but that it took 150 years to
think of using the invention for a scientific journal.
![[Clay Shirky]](https://static.lwn.net/images/2011/lcna-shirky2-sm.jpg)
He showed several quotes from people reacting to new media like the
telegraph, telephone, and television at the time each was introduced. The
introduction of the television led the commenter to
believe that world peace would occur because it would allow us to
better connect with and understand other cultures. "Here's a hint of
what happens with new media—it's not world peace", he said. More people
communicating actually leads to more fighting, and the challenge is to
figure out how to structure that fighting.
Shirky believes that the transition from alchemy to chemistry was fueled by
the "decision to add structure to what the printing press made
possible". Instead of performing and recording experiments in
secret as alchemists did, the rise of the scientific journal changed the
focus to publishing results that others could test for themselves—or
argue about. The difference between the two is that alchemists hid their
discipline, while chemists published, he said.
Three observations
Three observations about collaboration rounded out the rest of Shirky's
talk. While it's not a canonical list, he said, there are useful lessons
from the observations. The first is that "many large-scale
collaborations actually aren't". If you look at the page for Linux
on Wikipedia, there have been some 10,000 edits from 4,000 different
people. That equates to 2.5 edits per person, which is a pretty standard rate
for Wikipedia pages.
That might appear to be a very large-scale collaboration, but it's not, he said. If you graph the contributions, you soon see that the most active contributors are doing the bulk of the work, with the top contributor doing around 500 edits of their own. The tenth highest contributor did 100 edits, and the 100th did 10 edits. Around 75% of contributors did only one edit ever.
That same pattern shows up in many different places, he said, including Linux kernel commits. These seemingly large-scale collaboration projects are really run by small, tight-knit groups that know each other and care about the project. That group integrates lots of small fixes that come from the wider community. Once we recognize that, we can plan for it, Shirky said.
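The long-tail shape Shirky described is a classic power law. As a rough illustration (using only the ballpark figures cited above: the top contributor at ~500 edits, the 10th at ~100, the 100th at ~10), a least-squares fit in log-log space recovers the exponent:

```python
import math

# Approximate figures Shirky cited for the Wikipedia "Linux" page:
# rank of contributor -> number of edits. Illustrative, not exact data.
edits_by_rank = {1: 500, 10: 100, 100: 10}

# Fit edits ~ C * rank^(-alpha) by least squares in log-log space.
xs = [math.log(r) for r in edits_by_rank]
ys = [math.log(e) for e in edits_by_rank.values()]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
alpha = -sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
C = math.exp(my + alpha * mx)

print(f"edits(rank) ~= {C:.0f} * rank^-{alpha:.2f}")
```

An exponent near one is the familiar Zipf-like pattern seen across Wikipedia pages and, as the next paragraph notes, kernel commits as well.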
Shirky's second observation was that many of the people who want to collaborate shouldn't be allowed to, at
least easily. He pointed to stackoverflow and the related StackExchange sites as embodying some
of this philosophy. StackExchange was spun off from stackoverflow to
handle additional topic areas beyond just programming that the latter
covers. Possible question and answer topics are "anything that is
geeky enough to have a right answer" and that people want to argue
about, Shirky said.
But creating new Q&A sites on StackExchange does not follow the model
that many other internet sites have used: "just let people do what
they want and see what sticks". Instead, it is difficult to start a
new site, which ensures that there is enough real interest in the topic.
The sites are "taking karma really seriously", and are
"stretching both ends of the karmic equations". New users are
not allowed to post either questions or answers right away, but must build
up karma by reading the site first. Net etiquette always said that new
users should do that, but "no one did it". At the other end
of the spectrum, users can build up enough karma that they get sysop-like
powers. These sites are an "attempt to say that we don't have to
treat all people the same", he said.
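The karma mechanics described above amount to a simple threshold table: privileges unlock as reputation accumulates. A minimal sketch might look like the following; the privilege names and thresholds are invented for illustration, not StackExchange's actual values:

```python
# Reputation-gated privileges, StackExchange-style. Thresholds are
# made up for illustration; real sites tune these values per community.
PRIVILEGES = [
    (0, "read"),
    (10, "answer"),
    (50, "ask"),
    (2000, "edit others' posts"),
    (10000, "moderate"),  # the "sysop-like powers" at the top end
]

def allowed(karma: int) -> list:
    """Return every privilege a user with this much karma has earned."""
    return [name for threshold, name in PRIVILEGES if karma >= threshold]

print(allowed(0))      # brand-new users can only read
print(allowed(2500))   # established users can edit others' posts
```

The point of the design is at both ends: a zero-karma user gets nothing but read access, while sustained contribution eventually confers near-administrative power.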
Technology and human patterns need to match up, Shirky said, as his third observation. This goes back to the bug report at the beginning of his talk. It has taken a long time to align the two, he said, because code dominated free software communities for so long.
As an example, he pointed to the saga of Linux kernel source code
management, which started out as tarballs and patches. Then BitKeeper
showed up, and then went away, which (Shirky said) caused Torvalds to go back to tarballs
and patches. Basically, Torvalds chose to use no source code manager
rather than one whose functionality did not embrace the ideals of the GPL,
Shirky said. He was not making a licensing argument here (after all,
Torvalds had been using the decidedly non-GPL BitKeeper), but instead was
arguing (perhaps somewhat inaccurately) that Torvalds chose BitKeeper, and
later Git, because the way they operate is in keeping with GPL ideals.
Git "lives up to the promise of the GPL", because it
decentralizes repositories and allows easy forking. Merging code should
always be a community decision, which Git also embodies, he said.
Once Git was released, there were other interesting developments. Source code management systems had been around for decades, but were never used for anything but source code, Shirky said. Because Git matches people's mental model of how collaboration should work, it spawned things like github. But it doesn't stop there, he said, noting that there are Git repositories for novels, and that someone had checked in their genome to a public repository. The latter, of course, spawned an immediate pull request for 20 upgrades. A joke, but one that eventually resulted in a scholarly discussion about caffeine sensitivity that had participants from organizations like the National Institutes of Health.
There is also an effort called Open Knesset
[Hebrew] that is attempting to use Git to help people better understand
what they agree and disagree about. Essentially it takes laws proposed in the
Israeli Knesset and checks them into Git, then tells people to fork the law
and write it the way they would like to see it. "That will show
where the arguments are", Shirky said. It is "audacious
enough" that it probably won't work, but he also noted that
"audacity beats predictability over the long haul". He
believes we will see more of this kind of thing in the future.
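The heart of the Open Knesset idea, diffing a forked law against the original so the disagreements become visible, can be mimicked in a few lines without Git itself. The law text below is invented for illustration:

```python
import difflib

# A toy version of "fork the law and diff it": the unified diff shows
# exactly which clauses the forker wants changed. Text is made up.
original = [
    "1. Public records shall be published within 30 days.",
    "2. Requests may be denied for security reasons.",
]
fork = [
    "1. Public records shall be published within 7 days.",
    "2. Requests may be denied only with written justification.",
]

diff = list(difflib.unified_diff(original, fork,
                                 fromfile="proposed", tofile="forked",
                                 lineterm=""))
for line in diff:
    print(line)
```

Each `-`/`+` pair in the output is precisely "where the arguments are": the disputed deadline in clause 1 and the grounds for denial in clause 2.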
One way to look at large-scale collaboration is that it is more people
pooling more ideas, and that's true, he said, but he would make an addition:
"after arguing about it for a really long time". Taking this
"structured argument approach" that free software (and other) communities
have and moving it into other areas of our world will be beneficial.
Applying some of the lessons learned from communities like StackExchange,
Open Knesset, and the Linux kernel, as well as lessons from things like
Mozilla bug entries will provide a means to take argumentation to the next
level—and actually make it matter.
[ I would like to thank the Linux Foundation for travel assistance to attend LinuxCon. ]
The year of the Linux tablet?
The theme of the 2011 COSCUP conference (Taipei, August 20-21) was "Gadgets beyond smartphones." Based on a number of the talks and exhibits on offer, "beyond smartphones" seemed to mean "tablets" to a number of the people in attendance. Two talks by representatives of competing desktop environments show some interesting similarities and differences in how they see the tablet opportunity.
First up was GNOME developer Bastien Nocera, speaking on the theme "my sofa wants a new form factor." That new form factor, naturally, is the tablet - an ideal device, it seems, for the typical couch potato. Tablets, he said, are "the new Eldorado"; everybody is trying to get there.
There are a number of options for software to run on tablets. One could
use Windows, but it is non-free and uninteresting. iOS, too, is entirely
proprietary; it's also unavailable for non-Apple hardware. WebOS was an option when
Bastien wrote his talk, though things had changed in the meantime - the
demise of WebOS shows what can happen to a proprietary platform owned by a
single company. Then there's Android, but the problem with Android,
according to Bastien, is that it's entirely owned by Google. It is not
truly open; one has to be one of Google's best friends to have any kind of
early access to the software. The result is that there are a lot of
tablets on the market running old versions of Android. MeeGo, he said, was
not really even worth mentioning; it is a "puppet" of Intel without any
real community governance.
What all this comes down to is that, at the moment, there is an opportunity for something else in the tablet market. Unsurprisingly, Bastien thinks that GNOME 3 would be a good something else.
GNOME 3, he says, is the result of an ongoing push for more vertical integration in the platform. Increasingly, GNOME is seen to include components like PulseAudio, NetworkManager, udev, and, more recently, systemd. GNOME, in other words, is becoming more of an operating system in its own right. Furthering that evolution, the project plans to start shipping full operating system images to users. The full GNOME experience is hard to produce if distributors change pieces of the platform - using ConnMan instead of NetworkManager, for example. The project wants to produce a single, unified experience for GNOME users.
And they want GNOME 3 to be an option for tablets. There are a number of advantages to the platform: it's a community-based, 100% free project with an open development model. But, he said, it lacks one thing: hardware. So Bastien put out a call to hardware manufacturers: please talk to the GNOME project about what they have to offer. And, if nothing else, please send your drivers upstream and ensure that the hardware is supported by free software.
Bastien was replaced at the lectern by KDE developer Aaron Seigo who had a
surprisingly similar story to tell. The existing platforms, he said, are
not free; he cited the result of some study which - using an unclear
methodology - came to the conclusion that iOS was 0% open while Android
did a little better at 23% open. Linux (for some value of "Linux") came in
at 71% open. KDE, he said, is going for 100% open.
Aaron introduced Plasma and Plasma Active (recently described in LWN); these projects have existed in desktop and netbook variants for a while now. The tablet version is more recent, but is well advanced regardless. The goals for all of the variants are the same: an "amazing experience" which creates an "emotional bond" in users, an efficient development framework, and the ability to run the same application on all types of devices. Aaron noted that all three variants share almost all of their code.
One part of the talk sounded quite different from Bastien's talk: Plasma, Aaron said, has been designed as a set of components which can be assembled in any number of ways. KDE is not shooting for the single unified experience; it is aiming to build a platform with which others can create any number of different experiences.
According to Aaron, there are seven companies working with Plasma now, along with a lot of community developers. But the project is looking for more developers, more designers, and more companies to work with; they are especially interested in hardware partners. KDE, he said, has something that is compelling and shippable today; all it needs is something to ship that software on. (He had previously said that a couple months of polishing were planned; perhaps a large value of "today" was intended).
An opportunity?
In your editor's view, there does seem to be an opportunity in the tablet space at the moment. Apple's offerings still own this category, but that situation seems unlikely to last forever. Android is the logical choice for a second leading system, but Google's control may not sit well with all vendors, especially now that Google is, through its acquisition of Motorola Mobility, becoming a hardware vendor in its own right. The management of Android, according to Google, will not change as a result of this acquisition, but that is just the problem: companies like Motorola have already tended to get privileged access to unreleased Android versions. And, in any case, a duopoly is still a small set of options; Android is clearly not going away, but it would not be surprising to see an appetite for a third option among both vendors and customers.
Becoming that third option will not be an easy thing to do, though. There are a number of contenders for that space beyond GNOME and KDE: they include MeeGo, Ubuntu with the Unity shell and, naturally, Windows. Even WebOS could possibly make a surprise comeback. Perhaps one other Linux-based platform can establish itself as a viable alternative on tablets; it seems unlikely that four or five of them will. Competition between projects can be good for the exploration of different ideas and as a motivation to get more done, but it's hard not to feel that, if we want to create a viable third platform which is competitive with Android and iOS, our community's efforts are a little too scattered at this point.
A related question is: can a tablet-based platform be competitive without running on phone handsets as well? Neither of the desktop environment presentations at COSCUP mentioned handsets; if the projects are thinking of scaling down that far, they are not talking about it yet. There is clear value in having the same interface - and the same applications - on both types of device. Android and iOS offer that consistency; alternatives may have to as well.
And, of course, there is the challenge of third-party applications; getting this week's hot application ported to GNOME or KDE may not prove easy. Sometimes one hears that HTML5 will save the day, but there are a couple of objections that one could raise to that line of reasoning. One is that we have been hearing that the web would replace local applications for at least 15 years now; maybe it is really true this time, but that has yet to be seen. And if everything does move to HTML5, alternatives like ChromeOS and Boot2Gecko may become more interesting, widening the field even further.
So the desktop environments have given themselves a big challenge, to say the least. It would be nice to see at least one of them succeed; we have come too far to give up on the idea of a fully free, community-oriented system on newer hardware. The technology to create a competitive alternative is certainly there; what remains to be seen is whether it is matched with an ability to woo hardware manufacturers and get real products into the market. At this point, the success of Linux on the tablet probably depends more on that sales job than on what the developers do.
[Your editor would like to thank COSCUP 2011 for assisting with his travel to this event.]
LinuxCon: The world's largest Linux desktop deployment
The first day of LinuxCon 2011 started off with a keynote from the Linux Foundation's Jim Zemlin, in which he joked about the perpetually-next-year "year of the Linux desktop." Interestingly enough, that afternoon a smaller session with Userful Corporation's Timothy Griffin dealt with Linux on desktops in massive numbers. Userful deploys Linux in very large-scale "digital inclusion" projects — such as schools in second- and third-world environments — including the world's largest, a 500,000 seat deployment in Brazil.
Userful is a small, Calgary-based company that contracts with local system integrators to roll out Linux desktops, usually in schools, and often to fulfill government mandates to deploy open source software. Griffin showed a cartogram that colored the countries of the world by the relative price of a PC, and scaled the size of each country by its population. According to that graphic, the vast majority of the world population lives in countries where a computer costs the equivalent of 6 months' salary (or more), and the ratio of schoolchildren to computers is as high as 150 to 1.
![[Timothy Griffin]](https://static.lwn.net/images/2011/lcna-griffin-sm.jpg)
In those countries, governments frequently undertake nation-wide computing initiatives (sometimes even creating national Linux distributions), for basic cost-saving reasons and to keep development and IT support jobs in-country. When deploying the machines into schools, Griffin said, the cost of the hardware accounts for but a fraction of the overall cost: power may be expensive and unreliable, the site may be several days journey on difficult roads, and there may be no Internet connection for updates and IT support. As a result, Userful tailors its software solution to function in circumstances that ordinary Linux distributions do not.
The most visible difference seen in Userful deployments is multi-seat PCs. Using commodity hardware, the company configures machines to serve up five to ten front-ends (including monitor, keyboard/mouse, and sound) from a single PC. Userful's multi-seat setup relies on USB hubs, using hardware from HP, ViewSonic, and a number of other commodity peripheral vendors. While in the past such multi-seat configurations would have required special-purpose components, Griffin said that (ironically, perhaps) the popularity of Microsoft's "Windows Multipoint" product led to a glut of easily available hardware. The USB devices at each front end include simple graphics chips of the same type used in laptop docks, and are capable of running applications at normal, "local" speed — unlike most remote-desktop thin client configurations. A "medium" strength PC with four CPU cores can serve ten front ends running normal office applications, and do so using less power than ten low-end PCs, plus offer simplified configuration management, printer sharing, updates, and support.
Brazil
The Brazil deployment has been rolling out in phases since 2008, and currently includes more than 42,000 schools in 4,000 cities. The base distribution is one created by the Brazilian government, called Educational Linux [Portuguese], which is based on Kubuntu. But a bigger component of the project, Griffin said, was the support system that was also built by the government to provide teachers with classroom materials and software updates, and students with a social networking component. The computers are pre-loaded with multi-gigabyte data stores — from lesson plans to video content, and in rural areas without Internet access, updates are sent by mail on DVD.
As a case study, Griffin noted, the Brazil deployment reveals valuable lessons for the Linux and open source community as a whole, on subjects such as "sustainability," where too often the focus is on power consumption alone. But a genuinely "sustainable" deployment must sustain itself, he argued, including being resilient to lack of an Internet connection, predictable visits from IT staff, and teachers that may not have any more experience with computing than do the schoolchildren.
Griffin called these situations "green field" deployments, where there is no pre-existing computing environment at all. They are common in regions of the world where computers are expensive, he said, and where national governments often do studies and end up mandating the use of Linux and open source.
Where open source is silently ceding the field
Yet despite those mandates, he said, Microsoft Windows often ends up ultimately getting deployed instead. There are many reasons why, including lobbying efforts, entrenched players, politics, and money. But the troubling part is that the open source community has no response to these gambits, even when they are based entirely on FUD and distortion. The major commercial Linux distributions (Red Hat, SUSE, etc.) put no effort into competing for green field deployments, and offer no on-the-ground field support to those who lobby and bid for the contracts.
There is not an easy solution; what is needed to improve the situation includes better coverage of the large-scale success stories to counteract FUD and even outright lack-of-knowledge. Griffin told an audience member that there are many non-governmental organizations (NGOs) working in impoverished nations that run Windows on their computers solely because they have no idea that Linux even exists. In fact, he added, they pay full price for their licenses, when they could save considerable money just by telling Microsoft they were considering Linux and getting a steep discount in return.
The green field market is one that Linux and open source ought to fight hard to win, Griffin said, for precisely the reasons that Zemlin said Linux had been successful in the first world: its free availability enables innovation and experimentation in areas (inside and outside of the technology field) that are simply unpredictable. National governments regularly end up recommending and mandating open source, Griffin argued, because they see that by not buying into a proprietary solution owned by a foreign company, they put more power into the hands of their own people.
If you want to see the year of the Linux desktop, he said, look to the green field deployments. "The next billion computer users haven't even decided what their operating system is going to be". Brazil's roll-out of 500,000 desktops running Linux has put Linux into the hands of millions of students. In five to ten years, the open source community is going to see a return on that investment when those students enter the workplace, having been trained on a computer — easily the most powerful educational tool in the world — that runs free software. Microsoft recognizes that those stakes are huge, and has adopted a "win at any cost" strategy. Unfortunately the open source community is not nearly as organized, and lets many of those opportunities slip from its grasp.
As Griffin said repeatedly, there is no simple answer: his company works on software, but most of the work needing to be done is hands-on and in-the-field. But for all the talk at LinuxCon about how the PC era is over, it is a powerful reminder that the smartphone and tablet wars are a decidedly first-world problem, and that for most of the computer users of the future, the desktop battle is far from being over.
Security
LinuxCon: FreedomBox update and plans
Bdale Garbee is well-known in the free software world for a number of
different things: his work with Debian (including a term as project
leader), his work as HP's open source and Linux chief technologist,
membership on several boards (the Linux Foundation among them), and a lot more.
He's also known for giving talks at various conferences about another
passion of his, model rocketry, and specifically how open hardware and
software can be used to control and track those rockets. So when he said
that his LinuxCon talk
was a rare example of a "talk I would rather give than a rocket
talk", it's a pretty good indicator of how important he thinks the
topic, FreedomBox, is.
![[Bdale Garbee]](https://static.lwn.net/images/2011/lcna-garbee-sm.jpg)
The FreedomBox project is an effort to create personal servers that will run on cheap, "plug computer" hardware. While the software will be designed to run on hardware installed in the home or elsewhere, the focus is on in-home use. In some jurisdictions, Garbee said, there is a big difference between how data stored on a computer in the home vs. one elsewhere is treated in a legal sense.
The project also wants to "contribute to privacy-respecting
alternatives to social networking". In today's world, people are
uploading personal data to services like Facebook without any real
guarantees that the data will still be there in the future, and that they will
always have access to it. In addition, the terms of service can change
over time, as do the privacy settings and policies. Garbee was careful to
point out that the project (and the FreedomBox Foundation) would
not necessarily be creating these social networking alternatives, but would
be collaborating with those who are.
Another important part of the FreedomBox idea is to support mesh networking. As we have seen in the news recently, activists and political protestors in various places are too dependent on centralized services, especially communications services. We already have the technology to build mesh networks that could be used to route around repressive governments, or just repressive ISPs, he said. If two neighbors have different ISPs, with different filtering policies, a mesh network between them could potentially avoid those problems.
Debian and FreedomBox
There is a "high correlation" between the goals of the Debian
distribution and those of the FreedomBox, Garbee said. There is also
"no better place to find a strong technical infrastructure"
than in Debian. In something of an aside, he also noted that while Linux was celebrating its 20th
anniversary at the conference, Debian was celebrating its 18th
anniversary, which is truly "mind-boggling", he said. There
is no Debian company or corporation; it is made up of individual volunteers.
It also runs on all of the relevant architectures. All of these things explain
why the FreedomBox software is Debian-based.
In addition to all of that, there is a fair amount of truth to the
statement that "all free software gets packaged for Debian", he said,
which gives the project a good base. It can use the same bug tracker and
build environment that Debian uses as well. Many of the pieces that are
needed for FreedomBox are already packaged or being worked on within the
distribution.
But FreedomBox does not plan to be a Debian derivative, and will instead do
all of its work within the distribution. One of the goals is that every
stable release of Debian will have "everything needed to create
FreedomBoxes", Garbee said. So users can either buy a plug computer
and install FreedomBox themselves, buy an off-the-shelf plug computer with
the software pre-installed, or find a cast-off computer and install it
there. One of the big advantages of that approach, he said, is that no
matter how successful the FreedomBox project ends up being, all of the work
and code will always be available in Debian.
The foundation
The FreedomBox Foundation (FBF) was founded by Eben Moglen, who has "done a
great job articulating the need" for such a device. Moglen asked
Garbee to join the board of the foundation in order to establish and chair
a technical advisory committee (TAC). The TAC exists "to make the
board understand what the technical issues are", he said, and it is
not a "top-down design group". That work will be done in the
soon-to-be-established working groups.
The FBF is not a large organization with "a lot of resources and an
army of coders", Garbee said. The technology is not really the hard
part, he said, at least for most of the people in the room. The much harder
part will be the user experience because the FreedomBox has a "much
broader audience than just those who are building it". If those
others can't understand how to use it, "we will have failed".
So far, that's an area where, unfortunately, not a lot of work has been
done yet, he said.
There are other tasks that the FBF is taking on, such as fund-raising, outreach, and publicity. Those things are important and are a persistent problem for any non-profit organization, he said. Another non-obvious thing that the FBF can do is "industry relations". At some point, hardware vendors should be willing to build and ship products with FreedomBox pre-installed. That may require NDAs, which is not something that most free software developers want to deal with.
The TAC has been formed with Garbee as the chair. Five others are on the committee as well: Jacob Appelbaum, who is a security researcher and core member of the Tor project; Sam Hartman, a Debian developer and security consultant; Sascha Meinrath, author and mesh networking researcher; Rob Savoye, GNU toolchain hacker and embedded systems developer; and Matt Zimmerman, who is a Debian developer and former CTO at Canonical.
Over the coming weeks, Garbee said, various working groups will be established to work on the disparate pieces that make up FreedomBox. There are a lot of different conversations going on in the mailing list, and they are often getting derailed by people who are focusing on a different piece of the problem. These working groups will likely be "instantiated as separate mailing lists" and will be tasked with a specific piece of the problem. The output may be code, packages, or recipes, he said. Garbee is "looking forward to getting them going".
DreamPlug reference platform
The DreamPlug has been chosen as the initial reference platform for FreedomBox. Part of the requirements for the FBF's Kickstarter fundraising campaign was to deliver hardware to some donors, and the DreamPlug will fill that role. While the hardware is reasonable overall, he said, there are still some frustrating things from a free software perspective. Marvell created most of the hardware inside the DreamPlug, and has generally worked well with the community, but there were still some driver and source availability problems. Most of those have been resolved except for a firmware blob that is required to run the Marvell wireless uAP device.
The idea behind the choice of the DreamPlug is to pick a specific target, and the hardware is fairly capable. It has a 1.2 GHz ARM processor, with 512M of RAM, 2M flash for u-boot, and 2G of flash for filesystems. There are also lots of IO ports, including two gigabit Ethernet interfaces, two USB 2.0 ports, an eSATA 2.0 port, an SD socket, and more. It also has audio inputs which didn't seem useful at first, he said, until someone pointed out that they could be used for random number generation.
Technical progress
One of the areas that has been extensively discussed within the project is the idea of "establishing trust". OpenPGP keys are "about as good as it gets" in terms of storing public/private key pairs, he said, but the trust relationship problem still isn't solved. Noting that the target audience may be more likely to have smartphones, the project is homing in on solutions that would allow an initial key exchange using the displays and cameras of smartphones. A phone app could gather these keys up when people meet face-to-face and then allow them to be installed on the FreedomBox.
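The face-to-face comparison in such a scheme boils down to both parties deriving and displaying the same short digest of a public key. As a hedged sketch (hashing raw key bytes here is illustrative; real OpenPGP v4 fingerprints are computed over a formatted key packet, not the bare key material):

```python
import hashlib

def key_fingerprint(pubkey_bytes: bytes) -> str:
    # Illustrative only: a real OpenPGP fingerprint is the SHA-1 of a
    # v4 key packet. The point is that a short, camera-comparable
    # digest stands in for the full key during the exchange.
    return hashlib.sha1(pubkey_bytes).hexdigest().upper()
```

Two phones that compute and display the same string have, with high confidence, exchanged the same key; a QR rendering of that string is what the cameras would scan.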
In addition, lots of work on the FreedomBox went on at the hackfest that preceded DebConf11 in Banja Luka, Bosnia and Herzegovina at the end of July. The focus was on assembling an initial development image for the DreamPlug and identifying and integrating an application into that image. While lots of progress was made, and an application was identified (an XMPP-based secure chat client), they didn't quite get there during the hackfest. There were also several FreedomBox talks at the conference itself and Garbee recommended viewing the videos of those talks.
Going forward, he said, the team is "single-digit days" from releasing initial development images for both the DreamPlug and for x86 virtualization for those who don't have the hardware. There is ongoing work to use Monkeysphere for identity management with OpenPGP keys. Work on selecting and integrating specific applications that deliver "functionality implied by our vision" is underway, starting with the secure XMPP-based chat stack. The plan is to do periodic releases until "we achieve 1.0", Garbee said, but he won't say when that will happen, "Debian-style".
There are a number of ways for interested folks to get involved, starting with being "conscious about privacy and other freedoms in all that you do", he said. Experimenting with the software and helping to refine the list of alternatives to the proprietary cloud services would be helpful. Joining a working group or helping to select Debian packages (and determine the right configuration for them) are additional ways to help. Of course, financial contributions to the FBF are always welcome.
In answer to audience questions, Garbee reiterated that Debian was chosen for pragmatic reasons and there is no reason that others couldn't put the FreedomBox stack on top of other distributions. He did not want the FBF to have to set up distribution infrastructure or be saddled with long-term security updates, and basing on Debian avoided that. He also said that off-the-shelf FreedomBoxes are "at least a year away", and it could be longer than that.
[ I would like to thank the Linux Foundation for assistance with travel costs for LinuxCon. ]
Brief items
Security quotes of the week
Nasty Apache denial of service vulnerability
The Apache project has sent out an advisory warning of an easily-exploited denial of service vulnerability in all versions of the Apache server. "An attack tool is circulating in the wild. Active use of this tool has been observed. The attack can be done remotely and with a modest number of requests can cause very significant memory and CPU usage on the server. The default Apache HTTPD installation is vulnerable. There is currently no patch/new version of Apache HTTPD which fixes this vulnerability. This advisory will be updated when a long term fix is available." A fix is expected "within 48 hours"; a number of workarounds are provided in the advisory for those who cannot wait.
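The vulnerability in question is the byte-range attack (CVE-2011-3192): a single request carrying hundreds of overlapping Range values makes Apache allocate separate structures for each one. Pending the fix, a mod_rewrite workaround of the kind given in the advisory simply rejects requests with more than five ranges; this is a sketch, and the advisory's exact rules should be preferred:

```apache
# Reject any request whose Range header specifies more than five
# ranges; legitimate clients rarely send more than a handful, while
# the attack tool sends hundreds of overlapping ones.
RewriteEngine on
RewriteCond %{HTTP:range} !(^bytes=[^,]+(,[^,]+){0,4}$|^$)
RewriteRule .* - [F]
```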
New vulnerabilities
bugzilla: multiple vulnerabilities
Package(s): bugzilla
CVE #(s): CVE-2011-2379 CVE-2011-2380 CVE-2011-2979 CVE-2011-2381 CVE-2011-2978 CVE-2011-2977
Created: August 22, 2011
Updated: October 10, 2011
Description: From the CVE entries:
Cross-site scripting (XSS) vulnerability in Bugzilla 2.4 through 2.22.7, 3.0.x through 3.3.x, 3.4.x before 3.4.12, 3.5.x, 3.6.x before 3.6.6, 3.7.x, 4.0.x before 4.0.2, and 4.1.x before 4.1.3, when Internet Explorer before 9 or Safari before 5.0.6 is used for Raw Unified mode, allows remote attackers to inject arbitrary web script or HTML via a crafted patch, related to content sniffing. (CVE-2011-2379)
Bugzilla 2.23.3 through 2.22.7, 3.0.x through 3.3.x, 3.4.x before 3.4.12, 3.5.x, 3.6.x before 3.6.6, 3.7.x, 4.0.x before 4.0.2, and 4.1.x before 4.1.3 allows remote attackers to determine the existence of private group names via a crafted parameter during (1) bug creation or (2) bug editing. (CVE-2011-2380)
Bugzilla 4.1.x before 4.1.3 generates different responses for certain assignee queries depending on whether the group name is valid, which allows remote attackers to determine the existence of private group names via a custom search. NOTE: this vulnerability exists because of a CVE-2010-2756 regression. (CVE-2011-2979)
CRLF injection vulnerability in Bugzilla 2.17.1 through 2.22.7, 3.0.x through 3.3.x, 3.4.x before 3.4.12, 3.5.x, 3.6.x before 3.6.6, 3.7.x, 4.0.x before 4.0.2, and 4.1.x before 4.1.3 allows remote attackers to inject arbitrary e-mail headers via an attachment description in a flagmail notification. (CVE-2011-2381)
Bugzilla 2.16rc1 through 2.22.7, 3.0.x through 3.3.x, 3.4.x before 3.4.12, 3.5.x, 3.6.x before 3.6.6, 3.7.x, 4.0.x before 4.0.2, and 4.1.x before 4.1.3 does not prevent changes to the confirmation e-mail address (aka old_email field) for e-mail change notifications, which makes it easier for remote attackers to perform arbitrary address changes by leveraging an unattended workstation. (CVE-2011-2978)
Bugzilla 3.6.x before 3.6.6, 3.7.x, 4.0.x before 4.0.2, and 4.1.x before 4.1.3 on Windows does not delete the temporary files associated with uploaded attachments, which allows local users to obtain sensitive information by reading these files. NOTE: this issue exists because of a regression in 3.6. (CVE-2011-2977)
crypt_blowfish: crackable password hashing
Package(s): crypt_blowfish
CVE #(s): CVE-2011-2483
Created: August 19, 2011
Updated: November 15, 2013
Description: From the openSUSE advisory:
The implementation of the blowfish based password hashing method had a bug affecting passwords that contain 8bit characters (e.g. umlauts). Affected passwords are potentially faster to crack via brute force methods.
ecryptfs-utils: denial of service
Package(s): ecryptfs-utils
CVE #(s): CVE-2011-3145
Created: August 23, 2011
Updated: January 19, 2012
Description: From the Ubuntu advisory:
It was discovered that eCryptfs incorrectly handled permissions when modifying the mtab file. A local attacker could use this flaw to manipulate the mtab file, and possibly unmount arbitrary locations, leading to a denial of service.
gimp: heap corruption
Package(s): gimp
CVE #(s): CVE-2011-2896
Created: August 22, 2011
Updated: September 28, 2012
Description: From the Red Hat bugzilla:
GIF image file format readers in various open source projects are based on the GIF decoder implementation written by David Koblas. This implementation contains a bug in the LZW decompressor, causing it to incorrectly handle compressed streams that contain code words that were not yet added to the decompression table. LZW decompression has a special case (a KwKwK string) when code word may match the first free entry in the decompression table. The implementation used in this GIF reading code allows code words not only matching, but also exceeding the first free entry.
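The heap corruption follows directly from that missing bounds check. A minimal sketch of the guard a fixed decoder needs (the function name is hypothetical; real decoders perform this check inside the decompression loop):

```python
def lzw_check_code(code: int, next_free: int) -> bool:
    """Validate an LZW code word against the decompression table.

    A code equal to the first free table entry is the legal KwKwK
    special case; a code beyond it refers to an entry that cannot
    exist yet, so the stream is corrupt (or malicious) and must be
    rejected rather than used to index the table.
    """
    if code > next_free:
        raise ValueError("LZW code word exceeds first free table entry")
    return code == next_free  # True: caller handles the KwKwK case
```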
kernel: arbitrary command execution
Package(s): kernel
CVE #(s): CVE-2011-2905
Created: August 18, 2011
Updated: November 28, 2011
Description: From the Red Hat bugzilla:
It was reported that perf would look for configuration files in /etc/perfconfig, ~/.perfconfig, and ./config. If ./config is not a perf configuration file, perf could fail or possibly do unexpected things. If a privileged user was tricked into running perf in a directory containing a malicious ./config file, it could possibly lead to the execution of arbitrary commands.
kernel: denial of service
Package(s): kernel
CVE #(s): CVE-2011-2695
Created: August 23, 2011
Updated: September 13, 2011
Description: From the CVE entry:
Multiple off-by-one errors in the ext4 subsystem in the Linux kernel before 3.0-rc5 allow local users to cause a denial of service (BUG_ON and system crash) by accessing a sparse file in extent format with a write operation involving a block number corresponding to the largest possible 32-bit unsigned integer.
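To illustrate the bug class (this is not the kernel's actual code): a logical-block range check has to be written so that it neither wraps nor mishandles the block at the very top of the 32-bit range. One safe formulation:

```python
EXT4_MAX_LBLK = 2**32 - 1  # largest possible 32-bit logical block number

def lblk_in_range(lblk: int, length: int) -> bool:
    """Check that blocks [lblk, lblk + length) all fit in 32 bits.

    Written as a comparison against the remaining space rather than
    `lblk + length <= MAX`, the shape of check that invites
    off-by-one and wraparound errors at the last block.
    """
    if length < 1 or lblk > EXT4_MAX_LBLK:
        return False
    return length <= EXT4_MAX_LBLK - lblk + 1
```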
kiwi: multiple vulnerabilities
Package(s): kiwi
CVE #(s): CVE-2011-2225 CVE-2011-2226 CVE-2011-2644 CVE-2011-2645 CVE-2011-2646 CVE-2011-2647 CVE-2011-2648 CVE-2011-2649 CVE-2011-2650 CVE-2011-2651 CVE-2011-2652
Created: August 18, 2011
Updated: December 15, 2011
Description: From the SUSE advisory:
SUSE Studio was prone to several cross-site-scripting (XSS) and shell quoting issues.
nip2: privilege escalation
Package(s): nip2
CVE #(s): CVE-2010-3364
Created: August 23, 2011
Updated: January 27, 2014
Description: From the CVE entry:
The vips-7.22 script in VIPS 7.22.2 places a zero-length directory name in the LD_LIBRARY_PATH, which allows local users to gain privileges via a Trojan horse shared library in the current working directory.
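What makes a zero-length directory name exploitable is the dynamic loader's rule that an empty LD_LIBRARY_PATH element (a leading or trailing colon, or "::") means the current working directory. A sketch of that expansion, for illustration only:

```python
def search_path_dirs(ld_library_path: str):
    """Expand an LD_LIBRARY_PATH value roughly the way the dynamic
    loader does: an empty element means the current working
    directory. That is the vips-7.22 flaw: the script built a value
    with a zero-length element, so a trojan library in $PWD would be
    picked up ahead of the real one. (Sketch, not the loader itself.)
    """
    if not ld_library_path:
        return []
    return ["." if d == "" else d for d in ld_library_path.split(":")]
```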
system-config-printer: arbitrary code execution
Package(s): system-config-printer
CVE #(s): CVE-2011-2899
Created: August 23, 2011
Updated: September 23, 2011
Description: From the Red Hat advisory:
It was found that system-config-printer did not properly sanitize NetBIOS and workgroup names when searching for network printers. A remote attacker could use this flaw to execute arbitrary code with the privileges of the user running system-config-printer.
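The standard fix for this class of flaw is to validate externally supplied names against a strict whitelist before they are interpolated into any command or markup; a hedged sketch (the pattern and helper name are illustrative, not system-config-printer's actual fix):

```python
import re

# NetBIOS names are at most 15 characters; restrict to characters that
# are safe to pass onward. (Illustrative whitelist, not the real fix.)
SAFE_NETBIOS_NAME = re.compile(r"^[A-Za-z0-9._-]{1,15}$")

def safe_netbios_name(name: str) -> bool:
    """Return True only for names that cannot smuggle shell or markup
    metacharacters into whatever consumes them."""
    return bool(SAFE_NETBIOS_NAME.match(name))
```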
zabbix: cross-site scripting
Package(s): zabbix
CVE #(s): CVE-2011-2904
Created: August 18, 2011
Updated: August 24, 2011
Description: From the Red Hat bugzilla:
A vulnerability was reported in Zabbix where input passed to the "backurl" parameter in acknow.php is improperly sanitized before being returned to the user. This could be used to facilitate a cross-site scripting attack. This flaw is fixed in Zabbix 1.8.6.
Page editor: Jake Edge
Kernel development
Brief items
Kernel release status
The current development kernel is 3.1-rc3 (code-named "Divemaster Edition"), released on August 22. Linus says:
See the full changelog for all the details.
Stable updates: no stable updates have been released in the last week, and none are in the review process as of this writing.
Quotes of the week
I've talked to quite a few lawyers worldwide and they all think that downloading the software will give you a new license, so I wouldn't be too worried about these organisations.
Three Linux wireless summit videos
Videos of three talks at the recently concluded Linux wireless summit have been posted. These talks cover the implementation of dynamic frequency selection, 802.11s mesh networking, and mesh network testing with wmediumd.
Kernel development news
Merging the kvm tool
The "native Linux KVM tool" (which we'll call "NLKT") is a hardware emulation system designed to support virtualized guests running under the KVM hypervisor. It offers a number of nice features, but an attempt to get this code merged into the 3.1 kernel was deferred by Linus, who did not want to deal with another controversial development at that time. This tool's developers have let it be known that it will be back for the 3.2 merge window; controversy is sure to follow. The core question raised by this project is: what code is appropriate for the kernel tree, and which projects should live in their own repositories elsewhere?

NLKT was started in response to unhappiness about QEMU, the state of its code, and the pace of its development. It was designed with simplicity in mind; NLKT is meant to be able to boot a basic Linux kernel without the need for a BIOS image or much in the way of legacy hardware emulation. Despite its simplicity, NLKT offers "just works" networking, SMP support, basic graphics support, copy-on-write block device access, host filesystem access with 9P or overlayfs, and more. It has developed quickly and is, arguably, the easiest way to get a Linux kernel running on a virtualized system.
Everybody seems to think that NLKT is a useful tool; nobody objects to its existence. The controversy comes for other reasons, one of which is the name: the tool simply calls itself "kvm." The potential for confusion with the kernel's KVM subsystem is clear - that is why this article made up a different acronym to refer to the tool. "KVM" is already seen as an unfortunate name - searches for the term bring in a lot of information about keyboard-video-mouse switches - so adding more ambiguity seems like a bad move. It is also seen by some as a move to become the "official" hardware emulator for KVM. The NLKT developers have, thus far, resisted a name change, though.
The bigger fight is over whether NLKT belongs in the kernel at all. It is not kernel code; it is a program that runs in user space. The question of whether such code should be in the kernel's repository is certainly the one that will decide whether it is merged for 3.2 or not.
NLKT would not be the first user-space tool to go into the mainline kernel; several others can be found in the tools/ directory. Many of them are testing tools used by kernel developers, but not all. The "cpupower" tool was merged for 3.1; it allows an administrator to tweak various CPU power management features. The most actively developed tool in that directory, though, is perf, which has grown by leaps and bounds since being merged into the mainline. The developers working on perf have been very outspoken in their belief that putting the tool into the mainline kernel repository has helped it to advance quickly.
Proponents say that, like perf, NLKT is closely tied to the kernel and heavily used by kernel developers; like perf, it would benefit from being put into the same code repository. KVM, they say, is also under heavy development; having NLKT and KVM in the same tree would help both to improve more quickly. It would bring more review of any future KVM ABI changes, since a user of that ABI would be merged into the kernel as well. Keeping the hardware emulation code near the drivers that code has to work with is said to be beneficial to both sides. All told, they say, perf would not have been nearly as successful outside of the mainline tree as it has been internally; merging NLKT can be expected to encourage the same sort of success.
That success seems to be one of the things that opponents are worried about; some have worried that the main purpose is to increase the project's visibility so that it succeeds at the expense of competing projects. The ABI development benefits are challenged; any changes would clearly still have to work with tools like QEMU regardless of whether NLKT is in the kernel, so QEMU developers would have to remain in the loop. It is even better, some say, to separate the implementation of an ABI from its users; that forces the implementers to put more effort into documenting how the ABI should be used.
There is also concern that, once we start seeing more user-space tools going into the kernel tree, there will be an unstoppable flood of them. Where does it stop, they ask - should we pull in the C library, the GNU tools, or, maybe, LibreOffice? Linux is not BSD, they say; trying to put everything into a single repository is not the right direction to take. The answer to that complaint is that there is no interest in merging arbitrary tools; only those that are truly tied to the kernel would qualify. By this reasoning, NLKT is an easy decision. A C library is something that could be considered; perhaps even graphics if the relevant developers wanted to do that. But office suites are not really eligible; there are limits to what should go into the mainline.
That was where the discussion stood at the beginning of the 3.1 merge window; Linus decided not to pull NLKT at that time. Instead, he clearly wanted the discussion to continue; he told the NLKT developers that they would have to convince him in the 3.2 merge window instead. It looks like that process is about to begin; the NLKT repository is about to be added to linux-next in anticipation of a pull request once the merge window opens. This time, with luck, we'll have a resolution of the issue that gives some guidance for those who would merge other user-space tools in the future.
The udev tail wags the dog
It is not unheard of for kernel developers to refuse to support a particular user-space interface that, they think, is poorly designed or hard to maintain into the future. A user-space project refusing to use a kernel-provided interface in the hope of forcing the creation of something better is a rather less common event. That is exactly what is happening with the udev project's approach to device tree information, though; the result could be a rethinking of how that information gets to applications.

OLPC laptops have, among their other quirks, a special keyboard which requires the loading of a specific keymap to operate properly. For the older generations of laptops, loading this keymap has been easily handled with a udev rule:
ENV{DMI_VENDOR}=="OLPC", ATTR{[dmi/id]product_name}=="XO", \
    RUN+="keymap $name olpc-xo"
This rule simply extracts the name of the machine from the desktop management interface (DMI) data that has been made available in sysfs. If that data indicates that the system is running on an XO laptop, the appropriate keymap file is loaded. DMI is an x86-specific interface, though, and the upcoming (1.75) generation of the XO laptop is not an x86-based machine. There is no DMI information available on that laptop, so this rule fails; some other solution will be needed.
These days, the source for hardware description information - especially on non-discoverable platforms - is supposed to be the device tree structure. So Paul Fox's solution would seem to make sense: he created a new rule with a helper script to extract the machine identification from the device tree, which happens to be available in /proc/device-tree. It almost certainly came as a surprise when this solution was rejected by udev maintainer Kay Sievers, who said:
Of course, Paul wasn't adding the /proc/device-tree interface; criticism of such a move would not have been surprising. That file has a long history; it has been supported, under some architectures, since the 2.2 kernel. So one might think that it is a bit late to be complaining about it; there are a number of /proc files added in those days which would not be allowed into /proc now. In general, those files are considered to be part of the user-space ABI at this point; like it or not, we are stuck with them. The device tree file has been around for long enough that it almost certainly falls into that category; it's hard to imagine that it would have been maintained for so long if there were no programs making use of it. Whether or not the udev developers like it, /proc/device-tree is not likely to go away anytime soon.
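Whichever interface ultimately wins, the helper's job is small: read a NUL-terminated string property and compare it against known machine names. A sketch of that step (in Python for illustration; the actual helper is a shell script, and the path is the one from the article):

```python
def dt_model(path: str = "/proc/device-tree/model") -> str:
    """Return the machine model string from the flattened device tree.

    Device-tree string properties are NUL-terminated, so the trailing
    terminator has to be stripped before comparing against machine
    names such as the XO-1.75's.
    """
    with open(path, "rb") as f:
        return f.read().rstrip(b"\0").decode("ascii", "replace")
```

A udev rule could then match on the helper's output much as the DMI-based rule above matches on product_name.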
That still doesn't mean that the udev developers have to make use of it, though, and they seem determined to hold out for something better. Quoting Kay again:
Kay would like to see the machine identification information exposed separately somewhere under /sys; it has even been suggested that platforms using device trees could emulate the DMI directory found on x86 systems. That, to them, looks like a longer-term solution that doesn't put udev in the position of blocking an ABI cleanup sometime in the future.
In essence, what we have is a user-space utility telling the kernel that an interface it has supported for well over a decade is unacceptable and needs to be replaced. To force that replacement, udev is refusing to accept changes that make use of the existing interface. Whether that is a proper course of action depends on one's perspective. To some, it will look like a petty attempt to force kernel developers to maintain two interfaces with duplicate information in the hope that a long-lived /proc file will eventually go away, despite its long history. To others, it will seem like a straightforward attempt to help the kernel move toward interfaces that are more supportable in the long term.
In this particular case, it looks like udev probably wins. Adding the machine identification somewhere in sysfs will be easy enough that it is probably not worth the effort to fight the battle. In a more general sense, this episode shows that the kernel ABI is not just something handed down to user space from On High. User-space developers will have their say, even a dozen years after the interface has been established; that is a good thing. Having more developers look at these issues from both sides of the boundary can only help in the creation of better interfaces.
LinuxCon: x86 platform drivers
With his characteristically dry British humor, Matthew Garrett outlined the current situation with x86 platform drivers at LinuxCon. These drivers are needed to handle various "extra" hardware devices, like special keys, backlight control, extended battery information, fans, and so on. There is a wide range of control mechanisms that hardware vendors use for these devices, and, even when the controller hardware is the same, different vendors will choose different mechanisms to talk to the devices. It is a complicated situation that seems to require humor, and perhaps alcohol, to master.
![[Matthew Garrett]](https://static.lwn.net/images/2011/lcna-garrett-sm.jpg)
Garrett does a variety of things for Red Hat, including hardware support and firmware interfaces (e.g. for EFI). Mostly he does "stuff that nobody else is really enthusiastic about doing", he said. Platform drivers are "bits of hardware support code" that are required to make all of the different pieces of modern hardware function with Linux. Today's hardware is not the PC of old and it requires code to make things work, especially for mobile devices.
He started by looking at keys, those used to type with, but also those that alter display brightness or turn hardware (e.g. wireless) on and off. The "normal" way that keys have been handled is that a key press causes an interrupt, the kernel reads a value from the keyboard controller, and the keycode gets sent on to user space. The same thing happens for a key up event. This is cutting edge technology from "1843 or something", which is very difficult to get wrong, though some manufacturers still manage to do so. The first thing anyone writes when creating a "toy OS" is the keyboard driver because it is so straightforward.
In contrast to that simple picture, Garrett then described what goes on to get key event information on a Sony laptop. The description was rather baroque and spanned three separate slides. Essentially, the key causes an ACPI interrupt, which requires the kernel to carry out a multi-step process: executing "general purpose event" (GPE) code in the ACPI firmware and calling ACPI methods to eventually get a key code that ends up being sent to user space. "This is called value add", he said.
Manufacturers are convinced that you don't want to manage WiFi the same way on multiple devices. Instead, they believe you want to use the "Lenovo wireless manager" (for example) to configure the wireless device. "Some would call them insane", and Garrett is definitely in that camp. The motivation seems to be the opportunity for the device maker to splash its logo onto the screen when the manager program is run. As might be guessed, there is no documentation available, because that would allow others to copy the implementation, which obviates the supposed value add.
It is not just keyboards that require platform drivers, Garrett said. Controlling radios, ambient light sensors ("everyone wants the brightness to change when someone walks behind them"), extended battery information (using identical battery controller chips, with the interface implemented differently on each one), hard drive protection (which always uses the same accelerometer device), backlight control, CPU temperature, fan control, LEDs (e.g. a "you have mail" indicator that is "not really useful" but is exposed "for people who don't have anything better to do with their lives"), and more, all need these drivers.
Multiple control mechanisms
There are half-a-dozen different interfaces that these drivers will use to control the hardware, starting with plain ACPI calls. That is generally one of the easiest methods to use, because it is relatively straightforward to read the ACPI tables and generate a driver from that information. Events are sent to the driver along with an event type, and some reverse engineering is required to work out what the types are and what they do. There are specific ACPI calls to get more information about the event as well. Garrett's example showed two acpi_evaluate_object() calls for the AUSB ("attach USB") and BTPO ("Bluetooth power on") ACPI methods, which is all that is needed to turn on Bluetooth for a Toshiba device. "Wonderful", he said.
A small microcontroller with closed-source firmware, the embedded controller, is another means to control hardware. Ideally, you shouldn't have to touch the embedded controller, because ACPI methods are often provided to do so. But sometimes you need to access the registers of the controller to fiddle with GPIO lines or read sensor data stored there. The problem is that these register locations can and do change between BIOS versions. While it is "considered bad form to write a driver for a specific BIOS version", sometimes you have to do so. It is a fairly fragile interface, he said.
Windows Management Instrumentation (WMI) is a part of the Windows driver model that Microsoft decided would be nice to glue into ACPI. It has methods that are based on globally unique IDs (GUIDs) corresponding to events. A notify handler is registered for a GUID and it gets called when that event happens. The Managed Object Format (MOF) code that comes with a given WMI implementation is supposed to be self-documenting, but there is a problem: it is compressed inside the BIOS using a proprietary Microsoft compression tool "that we don't know how to decompress". As an example of a WMI-based driver, Garrett showed a Dell laptop keyboard handling driver that reports the exact same keycode that would have come from a normal keyboard controller, but was routed through WMI instead, "because this is the future", he said.
Drivers might also be required to make direct BIOS calls, which necessitates the use of a real-mode int instruction. This is "amazingly fragile" and incompatible with 64-bit processors. Currently, the only place BIOS interrupts are invoked from user space is in X servers, and Garrett suggests that drivers should "never do this". In fact, he went further than that: "If you ever find hardware that does this, tell me and I will send you money for new hardware". If you decide to write code that implements this instead, he said that he would pay someone else money to "set fire to your house".
System Management Mode (SMM) traps are yet another way to control hardware, but there seems to be a lot of magic involved. There are "magic addresses" that refer to memory that is hidden from the kernel. In order to use them, a buffer is set up and the address is poked, at which point the "buffer contents magically change". There have been various problems with the SMM implementations from hardware vendors, including some HP hardware that would get confused if SMM was invoked from anything other than CPU 0. Garrett did not seem particularly enamored of this technique, likening it to the business plan of the "Underpants Gnomes".
The last control mechanism Garrett mentioned is to use a native driver to access the hardware resources directly. Typically these drivers use ACPI to identify that the hardware exists. The hardware is accessed using port I/O calls (e.g. inb() and outb()), and native interrupts are used to signal events. Various models of Apple hardware use these kinds of drivers, Garrett said.
Consistent interfaces
While there are many ways to access the hardware, kernel hackers want to provide a consistent interface to these devices. We don't want "to have to install the Sony program to deal with WiFi". So, "hotkeys" are sent through the input system ("keys are keys"). Backlight control is done via the backlight class. Radio control is handled with rfkill, thermal and fan state via hwmon, and LED control using the LED class. That way, users are insulated from the underlying details of how their particular hardware implements these functions.
There are two areas that still have inconsistent interfaces, Garrett said. The hard drive protection feature that is meant to park the disk heads when an untoward acceleration is detected (e.g. the laptop is dropped) does not have a consistent kernel interface. Also, the ambient light sensors are lacking an interface. The latter has become something of a running joke in the kernel community, he said, because Linus Torvalds thinks it should be done one way, but the maintainer disagrees, so, as yet, there is no consistent interface.
How do I work this?
Garrett also had some suggestions on figuring out how new or unsupported hardware is wired up. A fair amount of reverse engineering must be done, but the starting point is to use the acpidump and acpixtract utilities to find out what is in the ACPI code in the hardware.
If the device is WMI-based, wmidump may also be useful. Extracting the event GUIDs and registering a handler for each will allow one to observe which ones fire for various external events. Then it is a matter of flipping switches to see what happens, parsing the data that is provided with the event, and figuring out how to do something useful. This may require alcohol, he said.
For embedded controllers or direct hardware access, there are sysfs files
that can be useful. The embedded controller can be accessed via
/sys/kernel/debug/ec/ec0/io (at least for those who have debugfs mounted), or by using the ec_access
utility. Once again, you need to hit buttons, throw various switches, and
listen for fan changes. In addition, you should test that the register
offsets are stable for various machine and BIOS version combinations, he
said. You can find the IDs of devices to access them directly via
the /sys/bus/pnp/devices/*/id files, register as a PNP bus driver
for devices of interest, and then "work out how to drive the hardware".
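As a rough illustration of that last step, here is a short Python sketch (not from the talk) that walks /sys/bus/pnp/devices and collects the ID strings exposed for each PNP device. The directory layout is as described above; the helper name and any specific IDs it might print are this article's invention:

```python
import glob
import os

def scan_pnp_ids(base="/sys/bus/pnp/devices"):
    """Collect the PNP/ACPI IDs exported for each PNP device.

    Each device directory contains an 'id' file that may list
    several IDs, one per line; these are the strings a driver
    would match against in its id table.
    """
    devices = {}
    for id_file in glob.glob(os.path.join(base, "*", "id")):
        dev = os.path.basename(os.path.dirname(id_file))
        with open(id_file) as f:
            devices[dev] = [line.strip() for line in f if line.strip()]
    return devices

if __name__ == "__main__":
    for dev, ids in sorted(scan_pnp_ids().items()):
        print(dev, ",".join(ids))
```

On a machine without the relevant hardware (or without sysfs), the scan simply returns an empty mapping, so it is safe to run while poking around.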
The overall picture that Garrett painted is one of needless complexity and inconsistency that is promulgated by the hardware vendors. But, it is something that needs to be handled so that all of the "extras" baked into today's hardware work reliably—consistently—with Linux. While it would be nice if all of these components were documented in ways that Linux driver writers could use, that doesn't seem likely to change anytime soon. Until then, Garrett and the rest of the kernel community will be wrestling with these devices so that we don't get locked into manufacturer-specific control programs.
[ I would like to thank the Linux Foundation for travel assistance to attend LinuxCon. ]
Patches and updates
Page editor: Jonathan Corbet
Distributions
LinuxCon: MeeGo architecture update
Day three of LinuxCon 2011 reserved a separate track to serve as a MeeGo project "mini-summit," consisting of update reports and sessions for application developers. Intel's Sunil Saxena kicked off the agenda with an update on the progress of the MeeGo architecture, including what components changed over the current (1.2) release's development cycle, and where they are headed for 1.3.
MeeGo 1.2, inside and out
Saxena led off with a block diagram of the system components planned for the 1.2 stack, then highlighted which pieces did not make it to the final release (which was made in May 2011). Security was the major hiccup, with several subsystems still flagged as gaps, but system control, resource policy, and backup frameworks remained incomplete as well. Later in the afternoon, Ryan Ware presented a more in-depth look at the security frameworks — worthy of separate discussion — but Saxena addressed the basics.
The old security architecture, Mobile Simplified Security Framework (MSSF), was drafted by Nokia several cycles ago, and put most of the emphasis on constructing a framework for operators to implement handset device lock-down. Since Nokia took MeeGo handset development off of its own product roadmap, MSSF did not reach maturity in time for 1.2 (partly due to the difficulties wrought by Nokia's attempt to make MSSF compatible with Symbian as well as MeeGo). Still, 1.2 did include other key security components, such as the Smack Linux kernel security module, which enables simple file and socket access control.
Other planned frameworks that slipped from the 1.2 release were the Mode Control Entity (MCE), Sharing framework, and some additional low-level APIs. MCE is used to monitor switch and key events from the device, and to control hardware states like LEDs and backlighting. The Sharing framework is a unified API for sharing files through a user-selectable set of channels (email, Bluetooth OBEX, and various web services). The miscellaneous APIs include a Profiles API to manipulate profile options, a Non-Graphic Feedback API to trigger vibration or audio indicators, and the QmSystem API to provide Qt-like access to various system services.
Saxena only briefly mentioned the backup framework and resource policy components that were marked as incomplete in the block diagram. As one might expect, the backup framework is intended to be a flexible API for backing up and restoring both data and device settings. The resource policy component is a bit more involved, allowing device makers to set up rule-based policies that reconfigure pieces of the system in response to events. The canonical example is a rule that automatically re-routes the audio output whenever a headset is plugged in to the device. Because the components listed above were not complete in time for the 1.2 release, Saxena said, they will not be part of the 1.2 compliance testing process.
He then discussed the elements of MeeGo 1.2 that underwent substitutions or significant changes during the development cycle. The first was the Personal Information Manager (PIM) Storage and Sync framework for address book contacts, calendar data, and email. The project has been using Evolution Data Server (EDS) for PIM storage since the 1.1 days, but had initially considered switching to Tracker for 1.2. Upon investigation, the developers found Tracker's privacy controls inadequate — essentially any application with access to Tracker storage would have full access to all (potentially private) PIM data — and its performance and SyncML support insufficient. Likewise, the Buteo framework was investigated as a possible replacement for SyncEvolution, but was found to be immature, so EDS and SyncEvolution remain the storage/synchronization platform for 1.2.
Similarly, the network time daemon timed was under consideration for 1.2, but was found to be too great a security risk: it requires privileged access to set the system time, but must be accessible to non-privileged applications. On top of that, it would have required reworking to support the remote time sources (such as cellular) used by handsets to synchronize the local clock. On the plus side, the ConnMan connection manager (which is already used by MeeGo to manage Internet connections) has added time synchronization functionality of its own, so in 1.2 it takes on the duty of clock management as well.
MeeGo 1.2 also introduced a new suite of default applications, written entirely in QML and Qt, and deprecated the MeeGo Touch Framework (MTF) versions of the PIM tools, email client, media player, and other "reference" applications. MTF does live on in several places, at least in the Tablet User Experience (UX), Saxena said, such as the MCompositor window manager and the input methods.
Progress
MCompositor remains a pain point moving forward, Saxena continued. It runs on top of the standard X.org server, which the project has "been struggling with" in trying to get good, flicker-free application switching. The root trouble, he said, is that X invalidates all graphics buffers on an application switch, which forces the window manager to reload them. Unfortunately, the X server and Qt 4.7 are both required components in MeeGo 1.2. Qt 4.8 introduces a new scene graph that relies on the Wayland display server, and Saxena said that the MeeGo project regards the Wayland/Qt 4.8 pairing as the ultimate solution, although it could be a ways off.
There are other changes in the works, most notably a switch from fastinit to systemd, and substantial work on the resource policy components for the more appliance-like IVI and Smart TV UXes. Saxena also described the effort required to rewrite the MeeGo reference applications from scratch using QML as "huge", and added that performance and stability work remains. The project is also interested in reducing the overall size of the MeeGo stack, which he described as noticeably larger than most embedded Linux distributions.
Saxena ended his talk with a discussion of "web app" support. MeeGo does not yet have a web runtime for third-party applications, but he added that the project knows it needs to integrate one "sooner, rather than later". He cited recent developments in Web APIs for audio, video, and local storage, plus improvements in multi-threaded JavaScript, as key functionality that makes a web runtime viable for full-fledged application development. On top of that, he pointed out, one of MeeGo's primary selling points with OEMs is the ability to present the same APIs across devices, from cars to smartphones to TVs, and HTML5 simplifies that process.
MeeGo 1.3 is currently slated to be released in October or November of 2011. It should be noted that Saxena's details of the development process and plans for the future apply to the MeeGo Core specification, which is common across all of the device UXes. The individual UX projects may have additional goals and milestones unique to their own device class.
WebOS, privacy, and more
Looking at the plan for the next development cycle (and possibly next few, considering the scope of some of the changes), the web runtime is particularly interesting. It has been in discussion since the 1.1 cycle (2010), and both the developer documentation and SDK mention it in various places — although they call it experimental. Yet it has not made it into the core MeeGo releases.
As luck would have it, just 24 hours before the MeeGo mini-summit, Hewlett-Packard dropped a bombshell on the industry by announcing its intention to shut down its WebOS hardware business. Immediately after the announcement most of the buzz was that this move spelled the end for WebOS, which was a Linux-based mobile device OS that used a web runtime as its sole application development framework. The remainder of the buzz was spent noting the irony that HP's announcement came mere hours after Phil Robb, director of the company's Open Source Program Office, delivered a keynote at LinuxCon highlighting WebOS's strengths (although it should be noted that no one seemed to have anything other than empathy for the awkward position that HP placed Robb in through this chain of events).
HP subsequently "clarified" its commitment to developing WebOS as a platform, although it is not clear who will be producing WebOS-based products. The senior vice president of Palm (which created WebOS and was acquired by HP) would not rule out the idea of selling WebOS to another company, but the effect that move would have on MeeGo, Android, and other Linux-based platforms depends entirely on the identity of the buyer.
Perhaps predictably, debate in the open source crowd quickly turned to the idea of HP releasing WebOS as free software. Despite containing the Linux kernel and a long list of open source utilities, the remainder of the stack was proprietary. One Identi.ca discussion speculated that there was little value in WebOS's proprietary components, which is ironic considering that its web runtime is a key component currently missing in MeeGo.
Without a vendor making products, HP has a tough fight ahead in pitching WebOS to application developers. In yet another irony, because HP made open standards like HTML and JavaScript the only development option, independent developers have little investment in WebOS itself as a platform. Unless a new champion picks up the cause, WebOS's legacy may end up being an illustration of how difficult it is to "go it alone" developing a mobile Linux platform. MeeGo is not yet shipping on consumer smartphones, but at least by counting on multiple vendors, its fate does not rest in the hands of a single product line.
Naturally, MeeGo faced a challenge in the first half of the year as one of its vendors suddenly changed directions. In his security talk, Ware called this turn of events a chance to redefine the platform, and in some cases, to differentiate MeeGo from the mobile Linux competition. One way to do that, he said, would be to design the security framework that replaces MSSF as one that puts end-user privacy first.
OEMs will no doubt want to include lock-down frameworks and mechanisms to ensure chipset security and trusted application updates. But MSSF is gone for good, Ware said, and the opportunity exists to develop something better. The replacement framework Ware and his team (some of whom are refugees from Nokia) are working on is called the Mobile Security Model (MSM). It includes access control (already implemented by Smack), but encompasses application sandboxing, userspace access to the kernel cryptographic APIs, and integrity-checking as well.
MSM is still in the early stages of development, but Ware also takes the security of the MeeGo project framework seriously. He and the others on the security team have implemented access restrictions on some of the tools, and even did a source audit of the Open Build System that uncovered two previously unidentified issues. They scour the CVE RSS feed and patch all defects, releasing security updates every six weeks.
Remaking MeeGo's security model without Nokia's MSSF certainly is not a trivial task — as Ware pointed out in his talk, the rapid growth of smartphones and the relatively sparse data-protection they tend to offer makes them an attractive target for malware writers. At least in the current plan, MeeGo device owners can count on someone trying to protect their privacy, whether they end up using a smartphone or some entirely different form-factor.
Between its general tendency toward working in the upstream projects and its collection of distinct UX platforms that often seem to operate independently, MeeGo can sometimes be a tricky project to keep tabs on. Now that Nokia's contribution comes almost entirely through Qt development, a much greater percentage of MeeGo's Core development happens at Intel and inside various UX-specific OEMs, such as set-top box vendor Amino and tablet vendors WeTab and Trinity Audio Group, which only makes the project harder to follow. Thus the mini-conference at LinuxCon made for a good check-up opportunity for those who don't follow the project closely. Like any distribution, it has its fits and starts, but development is moving ahead at the same pace as before.
Brief items
Arch Linux 2011.08.19
Arch Linux has released new installation media. "time for a much needed update to the Arch installation media, as the last release (2010.05) is not only quite outdated, but now yields broken installations if you do a netinstall (because the old installer is not aware of the changed kernel/initramfs filename in our new Linux 3.0 packages)." If you've been thinking about installing Arch, this would be a good time. The Official Arch Linux Install Guide has been updated for the 2011.08.19 release.
CeroWrt RC5 (beta) available
The beta-test release of CeroWrt (an OpenWRT derivative) has been announced. "CeroWrt is a project to resolve endemic problems in home networking today, and to push the state of the art of edge networks and routers forward. Projects include tighter integration with DNSSEC, wireless mesh networking (Wisp6), measurements of networking and censorship issues (BISMark), among others, notably reducing bufferbloat in both the wired and wireless components of the stack." Only the Netgear WNDR3700v2 router is supported at this time.
Announcing the release of Fedora 16 Alpha
The Fedora Project has announced the release of Fedora 16 "Verne" Alpha. "The Alpha release contains all the exciting features of Fedora 16 in a form that anyone can help test. This testing, guided by the Fedora QA team, helps us target and identify bugs. When these bugs are fixed, we make a Beta release available. A Beta release is code-complete, and bears a very strong resemblance to the third and final release. The final release of Fedora 16 is due in early November."
IPFire 2.9 Core Update 51 released
The IPFire firewall distribution has released version 2.9 Update 51. "Core Update 51 is addressing several security issues in the Linux kernel as well as stability fixes, performance optimization and driver updates. It is recommended to install this update as soon as possible and please take notice that a reboot is required to complete the installation."
Swift Linux 0.1.2 is now available
Swift Linux is a lightweight distribution aimed at older hardware. Swift Linux 0.1.2 is based on AntiX Linux M11. AntiX is a lightweight MEPIS derivative and MEPIS is a Debian derivative. Swift remains compatible with Debian, so packages from the Debian repositories can be added. "There are two plain vanilla editions (Diet Swift Linux and Regular Swift Linux) and three special editions (Taylor Swift Linux, Minnesota Swift Linux, and Chicago Swift Linux)." The project is seeking more developers.
Distribution News
Debian GNU/Linux
bits from the DPL for July 2011
Debian Project Leader Stefano "Zack" Zacchiroli has some bits on his July activities. "The main highlight for July is, of course, DebConf11. It's been a blast: hundreds of Debian Developers and contributors have flocked together in Banja Luka to have fun improving Debian. If you haven't attended, no problem, you could catch up with what happened at DebConf11 by perusing the videos of all events that the Video Team has made available since the very end of DebConf11." Other topics include Software Patents, GNOME trademark, Debian trademark, and more. Zack has also appointed Colin Watson to the Technical Committee.
Newsletters and articles of interest
Distribution newsletters
- DistroWatch Weekly, Issue 419 (August 22)
- Fedora Weekly News Issue 284 (August 17)
- Maemo Weekly News (August 22)
- openSUSE Weekly News, Issue 189 (August 20)
- Ubuntu Weekly Newsletter, Issue 229 (August 21)
First Look at Poseidon Linux, the Linux For Scientists (Linux.com)
Carla Schroder takes a look at Poseidon Linux, a distribution for the international scientific community. "Why a specialized science distro? Poseidon is more than just another Ubuntu respin because it includes a lot of specialized software that is not available in the Ubuntu or Debian repositories, and putting it all together in a nice distro is a real convenience. These extra packages are managed the usual way with Synaptic, Ubuntu Software Center, or apt-get, drawing from various repositories such as linux.dropbox.com/ubuntu, packages.medibuntu.org and ppa.launchpad.net."
Page editor: Rebecca Sobol
Development
An interview with Kovid Goyal of calibre
Growing up, Kovid Goyal planned to be a physicist working on quantum computers. However, while studying at the California Institute of Technology (Caltech), he began developing calibre, an ebook manager for GNU/Linux, OS X, and Windows. The project quickly grew, developing its own ecosystem of sub-projects, extensive documentation, and self-funding. Today, calibre is a full-time vocation for Goyal, as well as a partial source of income for other project members.
A GNU/Linux user from the age of fourteen, first on Red Hat and later on Gentoo, Goyal worked his way through grad school by administering the particle theory group's computers at Caltech. An avid reader, he was dismayed to learn that the Sony PRS-500, the first reader to use e-ink technology, did not support his operating system of choice. In response, he wrote the first version of calibre, which he originally named libprs500.
From this beginning, the project grew as Goyal discovered other missing tools for ebooks. The conversion features were added because Goyal wanted to convert ebooks for his reader. Similarly, collection management tools and a graphical interface were created as his ebook collection grew, and the news downloading tools were added when he could no longer get Newsweek delivered to his door. Other features came from contributors such as John Schember, who created the "Get Books" feature in calibre, which provides a comprehensive list of ebook publishers as well as a comparison shopper. Recently, Schember also became maintainer of Sigil, an ebook editor in early release.
In 2008, the project was renamed by Goyal's wife, Krittika Goyal. On the project's history page, Kovid Goyal explains that "the 'libre' in 'calibre' stands for freedom, indicating that calibre is a free and open source product, modifiable by all. Nonetheless, 'calibre' should be pronounced as 'cali-ber,' not 'ca-libre'."
Krittika Goyal is also responsible for Open Books, a portal site for ebooks unencumbered by Digital Rights Management (DRM).
Currently, the site lists several thousand titles by publishers ranging from small presses to Baen Books, a major science-fiction publisher.
Today, calibre has evolved into an active project, with new releases every Friday. Growth of the project remains steady, with many of the revisions consisting of drivers for readers, new magazine downloads, or interface refinements. However, major new features are still being developed as well. Recently, for instance, calibre added a plug-in structure, whose success can be seen from the "Plugins" menu under "Preferences". There is even an unofficial plugin for removing DRM.
Since calibre collects statistics based on unique IDs and IP addresses whenever an installation starts, Goyal can track current usage in some detail. He reports that calibre has about four million users who have used the application at least once, and is growing at a rate of three hundred thousand users every month. About eighty percent of users are on Windows, seventeen percent on OS X, and three percent on GNU/Linux. By any standard, calibre is a free software success story, an all-in-one application that is to ebooks what Amarok is to music players, or digiKam is to images.
Keys to success
The most obvious cause of calibre's success is its comprehensive feature set. However, another part of the project's success lies in two areas often neglected by other projects: documentation and fund-raising.
Help is available on the project site in forms ranging from an FAQ and an introductory video to step-by-step guides on conversion to and from various ebook formats and a complete user manual. This thoroughness "is a deliberate policy", according to Goyal:
Also, calibre is now so large (over 400K lines of code) that I cannot keep track of it all. I often find myself needing to read the docs to figure out how some obscure feature is supposed to work.
I have found that, because I place a premium on documentation personally, as time has passed, calibre's community, both users and developers, have contributed to that documentation.
Yet, as important as documentation is, probably the single most significant feature of the calibre project's culture is its active fund-raising. Unlike most free software applications, calibre includes a button for donations, as well as a discreet donation button on the project's home page.
"Donations are essential to my being able to work full-time on calibre," Goyal said. "I am continually amazed by the generosity of all of calibre's donors, many of whom have continued to donate repeatedly over the years."
However, donations are not the only source of calibre's funding. Hosting is paid for from a single sidebar ad on calibre websites. Goyal and some of the other most active project members also do some calibre-related consulting. Although confidentiality prevents him from being specific about this work, he explained that, "In general, it involves consultation for companies that are trying to enter the ebook market. Some [of it] involves customizing calibre for a particular organization/use case."
Yet another source of income for all project members is the Get Books store browser and comparison shopper feature. Most of the links on Get Books are part of publishers' affiliate programs, and revenue from such programs is divided, with seventy percent going to the developer who maintains a particular link and thirty percent to Goyal. "The two most active developers (by commits), John Schember and Charles Haley, get most of the income from Get Books," Goyal said.
To some, this emphasis on funding might seem opportunistic, but Goyal remains unapologetic.
Moreover, it is hard to argue with success. If calibre remained entirely a volunteer project, then it would undoubtedly be less advanced than it is. Unsurprisingly, Goyal anticipates other possible income sources, such as personal calibre clouds, with hosted services "for those who would rather not deal with maintaining their own cloud."
Future possibilities
Currently at version 0.8.15, calibre is still very much a work in progress. Although calibre already far exceeds the features of any of the software shipped with ebook readers, tentative future plans include a number of major additions. They include improved conversion to PDF and MS Word formats, a reworked ebook viewer with the ability to read HTML 5, and support for annotations across multiple operating systems and readers.
"But I must emphasize that things happen in calibre land very serendipitously", Goyal said. "People scratch their itches and calibre grows new capabilities. So please do not regard this as a list of promises."
Meanwhile, the continued success of calibre seems assured. Not only is it a feature-rich application with no major rivals, but its funding efforts allow Goyal and other developers to concentrate on their work while keeping their autonomy.
Brief items
Quote of the week
So, what are the implications? What I hope and believe will happen is that the project is already sufficiently healthy to stay alive and grow without Nokia acting as a midwife. This would imply the community to take a leading role in both the project guidance and actual development. This can of course happen in several ways. First, independent open-source developers may pitch in. Second, there are many companies using PySide by now. They might want to contribute to PySide due to their own interests. Third, there might be some companies providing support and services for PySide. (INdT?) Finally, their might be some company with sufficient interest to take the main responsibility for future development.
The Newtonator v0.5.0
The Newtonator is an LV2 synthesizer plugin with a graphical interface, three-note polyphony, and a "unique" synthesis algorithm. "The Newtonator specializes in making crazy, harsh sounds, so if you're looking for some sounds to produce the next Yanni album, keep looking." v0.5.0 is considered to be a beta release.
Fun with PHP releases
The PHP team has rushed out the 5.3.8 release to fix a 5.3.7 bug that broke crypt() for a lot of users. But it turns out that there is another problem: the behavior of is_a() has changed, with the results that (1) the autoloader can be triggered, and (2) the return value can change. This change appears to have caused problems with PEAR, at least; users may want to be careful about upgrading to this release.
PyPy 1.6 released
The PyPy 1.6 release is out. Most of the improvements are performance-related, but there is also a "JitViewer" tool allowing developers to see which code has been successfully compiled to assembler, improvements in the extension module API, and the rough beginnings of NumPy support. "Unfortunately, this does not mean that you can expect to take an existing NumPy program and run it on PyPy, because the module is still unfinished and supports only some of the numpy API. However, barring some details, what works should be blazingly fast :-)"
Mozilla launches WebAPI
The Mozilla project has announced the launch of the WebAPI project. "WebAPI is an effort by Mozilla to bridge together the gap, and have consistent APIs that will work in all web browsers, no matter the operating system. Specification drafts and implementation prototypes will be available, and it will be submitted to W3C for standardization." In particular, they want to create a standardized HTML5 API for tasks like accessing the dialer, the address book, the camera, etc.
Newsletters and articles
Development newsletters from the last week
- Caml Weekly News (August 23)
- LibreOffice Development Summary (August 23)
- PostgreSQL Weekly News (August 21)
Coghlan: Of Python and Road Maps (or the lack thereof)
Python core developer Nick Coghlan has put together a summary of what is happening in Python development based on his recent PyCon AU talk. "Personally, I think the status quo in this space is in a pretty good place, with python-dev and CPython handling the evolution of the language specification itself, as well as providing an implementation that will work reasonably well on almost any platform with a C compiler (and preferably some level of POSIX compliance), while the PyPy crew focus on providing a fast, customisable implementation for the major platforms without getting distracted by arguments about possible new language features."
Page editor: Jonathan Corbet
Announcements
Brief items
HP dropping webOS devices
HP has sent out an unhappy press release stating that it is exploring "strategic alternatives" for its Personal Systems Group. "In addition, HP reported that it plans to announce that it will discontinue operations for webOS devices, specifically the TouchPad and webOS phones. HP will continue to explore options to optimize the value of webOS software going forward."
FSF: Android GPLv2 termination worries: one more reason to upgrade to GPLv3
In a release seen by many as FUD, the Free Software Foundation is urging projects to use version 3 of the General Public License (GPLv3). They note that Android's commercial success has led to an increase in GPL violations from Android distributors. "[P]eople still seek out our opinions about the relevant parts of the GPL, and that discussion has recently turned to GPLv2's termination provisions. Section 4 of the license says, "You may not copy, modify, sublicense, or distribute the Program except as expressly provided under this License. Any attempt otherwise to copy, modify, sublicense or distribute the Program is void, and will automatically terminate your rights under this License." When we enforce the license of FSF-copyrighted software, we give violators back the rights they had after they come into compliance. In our experience, developers of Linux are happy to do the same. Unfortunately, even if we assume they all would restore these rights, it would be extremely difficult to have them all formally do so; there are simply too many copyright holders involved, some of whom haven't worked on the project in years or even decades."
Articles of interest
Android OEMs should hear Microsoft, Nokia out on Google-Motorola combo (ars technica)
Ars technica speculates on how Google's purchase of Motorola Mobility will change the Android market. "For the time being, at least, Google claims that Motorola will be operated as a "separate business unit"; it will be owned by Google, but operationally will function like any other Android licensee. However, a claim made in a Wall Street Journal profile of Andy Rubin, the founder of Android Inc. and Google's senior vice president of Mobile, suggests that Google may not be telling the whole story. Specifically, the article says that "People close to the deal said one of Google's motivations was its desire to design devices, not just the software that powers them, thus giving it the sort of influence that rival Apple enjoys with its iPhone and iPad." Such a move would change the nature of the Google-Motorola relationship radically, and it's difficult to see how this wouldn't give Motorola a substantial advantage over other Android OEMs."
Articles from LinuxCon
Joe 'Zonker' Brockmeier has several articles covering this week's LinuxCon. Jim Zemlin gave the opening keynote posing the question "What would the world be like without Linux?"
In "The Next 20 Years? Who Knows?", Joe covers a keynote speech by Jim Whitehurst, CEO of Red Hat.
Linus Torvalds and Greg Kroah-Hartman took the stage for a question and answer session, with Greg asking the questions.
Dr. Irving Wladawsky-Berger was formerly responsible for IBM's response to emerging technologies. In his keynote he talked about the disruptive force of Linux then and now, and IBM's relationship with Linux through the years.
Wirzenius: Linux at 20, some personal memories
Lars Wirzenius reminisces about the early days of Linux. "Linus's multitasking program grew, and grew, and gained features such as a hard disk driver, and memory management, and a filesystem. He had enough of a kernel to run some userspace programs, and he made it so that he could compile programs on Minix, and run them on his own kernel. By this time, summer of 1991, we had both started posting to Usenet. In August, Linus mentioned his kernel project on comp.os.minix for the first time. Later on, he decided to make the code available, and got one of the admins of ftp.funet.fi to put it there. For this, the project needed a name. Linus wanted to call it Freax, but Ari Lemmke, the ftp.funet.fi admin, decided to call it Linux instead. You can find the Freax name in the Makefile of the earliest Linux releases."
Upcoming Events
The openSUSE Conference Program and Keynote Speakers
The conference program and keynote speakers for the openSUSE conference (September 11-14, 2011 in Nuremberg, Germany) have been announced. "We have scheduled more than 100 contributions in the four days of the conference. More than 50% of those are interactive like birds of a feather sessions (BoFs) and workshops, that is along with our motto RWX³ which basically means that people should not only just listen, but also do things." Some social events have also been announced.
Events: September 1, 2011 to October 31, 2011
The following event listing is taken from the LWN.net Calendar.
Date(s) | Event | Location |
---|---|---|
August 30–September 1 | Military Open Source Software (MIL-OSS) WG3 Conference | Atlanta, GA, USA |
September 6–September 8 | Conference on Domain-Specific Languages | Bordeaux, France |
September 7–September 9 | Linux Plumbers' Conference | Santa Rosa, CA, USA |
September 8 | Linux Security Summit 2011 | Santa Rosa, CA, USA |
September 8–September 9 | Italian Perl Workshop 2011 | Turin, Italy |
September 8–September 9 | Lua Workshop 2011 | Frick, Switzerland |
September 9–September 11 | State of the Map 2011 | Denver, Colorado, USA |
September 9–September 11 | Ohio LinuxFest 2011 | Columbus, OH, USA |
September 10–September 11 | PyTexas 2011 | College Station, Texas, USA |
September 10–September 11 | SugarCamp Paris 2011 - "Fix Sugar Documentation!" | Paris, France |
September 11–September 14 | openSUSE Conference | Nuremberg, Germany |
September 12–September 14 | X.Org Developers' Conference | Chicago, Illinois, USA |
September 14–September 16 | Postgres Open | Chicago, IL, USA |
September 14–September 16 | GNU Radio Conference 2011 | Philadelphia, PA, USA |
September 15 | Open Hardware Summit | New York, NY, USA |
September 16 | LLVM European User Group Meeting | London, United Kingdom |
September 16–September 18 | Creative Commons Global Summit 2011 | Warsaw, Poland |
September 16–September 18 | Pycon India 2011 | Pune, India |
September 18–September 20 | Strange Loop | St. Louis, MO, USA |
September 19–September 22 | BruCON 2011 | Brussels, Belgium |
September 22–September 25 | Pycon Poland 2011 | Kielce, Poland |
September 23–September 24 | Open Source Developers Conference France 2011 | Paris, France |
September 23–September 24 | PyCon Argentina 2011 | Buenos Aires, Argentina |
September 24–September 25 | PyCon UK 2011 | Coventry, UK |
September 27–September 30 | PostgreSQL Conference West | San Jose, CA, USA |
September 27–September 29 | Nagios World Conference North America 2011 | Saint Paul, MN, USA |
September 29–October 1 | Python Brasil [7] | São Paulo, Brazil |
September 30–October 3 | Fedora Users and Developers Conference: Milan 2011 | Milan, Italy |
October 1–October 2 | WineConf 2011 | Minneapolis, MN, USA |
October 1–October 2 | Big Android BBQ | Austin, TX, USA |
October 3–October 5 | OpenStack "Essex" Design Summit | Boston, MA, USA |
October 4–October 9 | PyCon DE | Leipzig, Germany |
October 6–October 9 | EuroBSDCon 2011 | Netherlands |
October 7–October 9 | Linux Autumn 2011 | Kielce, Poland |
October 7–October 10 | Open Source Week 2011 | Malang, Indonesia |
October 8–October 9 | PyCon Ireland 2011 | Dublin, Ireland |
October 8–October 9 | Pittsburgh Perl Workshop 2011 | Pittsburgh, PA, USA |
October 8 | PHP North West Conference | Manchester, UK |
October 8–October 10 | GNOME "Boston" Fall Summit 2011 | Montreal, QC, Canada |
October 8 | FLOSSUK / UKUUG's 2011 Unconference | Manchester, UK |
October 9–October 11 | Android Open | San Francisco, CA, USA |
October 11 | PLUG Talk: Rusty Russell | Perth, Australia |
October 12–October 15 | LibreOffice Conference | Paris, France |
October 14–October 16 | MediaWiki Hackathon New Orleans | New Orleans, Louisiana, USA |
October 14 | Workshop Packaging BlankOn | Jakarta, Indonesia |
October 15 | Packaging Debian Class BlankOn | Surabaya, Indonesia |
October 17–October 18 | PyCon Finland 2011 | Turku, Finland |
October 18–October 21 | PostgreSQL Conference Europe | Amsterdam, The Netherlands |
October 19–October 21 | 13th German Perl Workshop | Frankfurt/Main, Germany |
October 19–October 21 | Latinoware 2011 | Foz do Iguaçu, Brazil |
October 20–October 22 | 13th Real-Time Linux Workshop | Prague, Czech Republic |
October 21–October 23 | PHPCon Poland 2011 | Kielce, Poland |
October 21 | PG-Day Denver 2011 | Denver, CO, USA |
October 23–October 25 | Kernel Summit | Prague, Czech Republic |
October 24–October 28 | 18th Annual Tcl/Tk Conference (Tcl'2011) | Manassas, Virginia, USA |
October 24–October 25 | GitTogether 2011 | Mountain View, CA, USA |
October 24–October 25 | GStreamer Conference 2011 | Prague, Czech Republic |
October 26–October 28 | Embedded Linux Conference Europe | Prague, Czech Republic |
October 26–October 28 | LinuxCon Europe 2011 | Prague, Czech Republic |
October 28–October 30 | MiniDebConf Mangalore India | Mangalore, India |
October 29 | buildroot + crosstool-NG Developers' Day | Prague, Czech Republic |
If your event does not appear here, please tell us about it.
Page editor: Rebecca Sobol