
LWN.net Weekly Edition for February 07, 2008

LCA: Bringing X into a two-handed world

By Jonathan Corbet
February 3, 2008
Our graphical interfaces, as implemented through the X Window System, are designed around a single keyboard and a single mouse. But humans are social creatures who want to work together and share systems; they also tend to design their activities around the fact that we have two hands. Moving X out of the single-device model is not a task for the faint of heart, but Peter Hutterer is making a go of it. His LCA talk on multi-pointer X was an interesting update on where this work stands.

The X device model is based on the idea of a core keyboard and a core pointer. Even in a situation where multiple input devices are present (a second mouse plugged into a laptop, say), the application still only sees a single, core device. There is no way to tell, using these core devices, which physical device generated any given event. This, of course, will be an obstacle for any application wanting to provide multi-device support.

As it happens, the XInput extension has provided basic multiple-device support for many years. XInput events look much like core device events, except that (1) applications must register to receive them separately, and (2) they include an ID number identifying the device which generated the event. XInput does not solve the problem by itself, though, for a couple of reasons. Beyond the fact that it does not provide a way for users to specify how different devices should be handled, XInput suffers from the little difficulty that approximately 100% of X applications do not make use of it. So nobody is listening to all those nice XInput events with associated device IDs. The one exception Peter mentioned is the GIMP, which uses XInput to deal with tablets.
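
For a flavor of the API, here is a minimal sketch of enumerating XInput devices and the IDs that tag their events, using the long-standing XListInputDevices() call; error handling is omitted, and the program assumes the XInput headers and -lX11 -lXi at link time.

    /* Minimal sketch: list XInput devices and their IDs. */
    #include <stdio.h>
    #include <X11/Xlib.h>
    #include <X11/extensions/XInput.h>

    int main(void)
    {
        Display *dpy = XOpenDisplay(NULL);
        int ndevices;
        XDeviceInfo *devices = XListInputDevices(dpy, &ndevices);

        for (int i = 0; i < ndevices; i++)
            printf("device %lu: %s\n",
                   (unsigned long) devices[i].id, devices[i].name);

        XFreeDeviceList(devices);
        XCloseDisplay(dpy);
        return 0;
    }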

Of course, multiple devices work on current systems; that is because the X server also generates core events for all devices. That causes the device ID to be lost, but, since applications do not care, this is not a problem, for now. But it does mean that we are still stuck in a world where systems have a single pointer and a single keyboard.

Luckily for us, says Peter, multi-pointer X is on the horizon. MPX extends X by introducing the concept of "master" and "slave" devices. Master devices are those which generate events seen by MPX-aware clients; they are virtual devices which can be created and destroyed by the user at will. Slave devices, instead, correspond to the physical devices attached to the system. Through the use of a modified xinput command, users can create masters and attach specific slaves to them.
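
For a sense of what this looks like below the command-line level, here is a hedged sketch using the hierarchy-manipulation calls as they eventually shipped in XInput 2 - not necessarily the exact interface Peter was demonstrating in 2008. The "second-user" name is invented for the example, and the new master's device ID must be looked up separately.

    /* Sketch: create a new master device, then attach a slave to it.
     * Uses the XInput 2 API as it later shipped; compile with -lX11 -lXi. */
    #include <X11/Xlib.h>
    #include <X11/extensions/XInput2.h>

    void create_master_and_attach(Display *dpy, int slave_id, int master_id)
    {
        XIAddMasterInfo add;
        add.type = XIAddMaster;
        add.name = "second-user";   /* the server creates "second-user pointer"
                                       and "second-user keyboard" */
        add.send_core = True;       /* masters also generate core events */
        add.enable = True;
        XIChangeHierarchy(dpy, (XIAnyHierarchyChangeInfo *) &add, 1);

        /* master_id must be looked up after creation (e.g. with
         * XIQueryDevice) before a slave can be reattached to it. */
        XIAttachSlaveInfo att;
        att.type = XIAttachSlave;
        att.deviceid = slave_id;
        att.new_master = master_id;
        XIChangeHierarchy(dpy, (XIAnyHierarchyChangeInfo *) &att, 1);
    }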

In the MPX world, one of three things will happen whenever something is done with a physical (slave) device:

  1. The X server will create an XInput event from the slave device and deliver it to any applications which have asked for such events.

  2. If that event is not delivered (because nobody was interested), a core event from the associated master device is created and queued for delivery.

  3. If the event is still undelivered, the server will create an XInput event from the master device to which the slave is attached and attempt to deliver that.

The end result is a scheme where multiple devices still work as expected with non-MPX-aware applications. But when an application which does take advantage of MPX shows up, it will have access to the real information about what the user is doing.
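
Expressed as rough pseudocode - all of the names below are invented for illustration, and the real server logic is considerably more involved - the fallback order looks something like this:

    /* Hypothetical sketch of MPX's event-delivery fallback. */
    typedef struct Device {
        struct Device *master;       /* the master a slave is attached to */
    } Device;
    typedef struct Event Event;

    int deliver_xinput_event(Event *ev, Device *dev);  /* 1 if delivered */
    int deliver_core_event(Event *ev, Device *dev);

    void route_slave_event(Event *ev, Device *slave)
    {
        if (deliver_xinput_event(ev, slave))           /* step 1: XInput, slave */
            return;
        if (deliver_core_event(ev, slave->master))     /* step 2: core, master */
            return;
        deliver_xinput_event(ev, slave->master);       /* step 3: XInput, master */
    }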

Peter ran a demo of some of the things he was able to do. By default, there is still only one pointer and one keyboard. Once a new master is created, though, and slave devices attached to it, things get more interesting. Two mouse pointers exist on the screen, each of which can be used independently. It's possible to be typing into two separate windows at the same time. Or, with the right window manager, the user can move windows simultaneously, or resize a window by grabbing two corners at the same time. It was great fun to watch.

MPX brings with it an API which can be used with multi-device applications. When applications use it, says Peter, the result is "eternal happiness." That just leaves the problem of "the other 100%" of the application base which lacks this awareness. To a certain extent, things just work, even when independent pointers are used in the same application. There are some exceptions, though, which have required some workarounds in the system.

For example, applications typically respond when the pointer enters a specific window - by illuminating a button, say. Things work fine when two pointers enter that button; but, likely as not, once the first pointer leaves, the button will go dark and refuse to respond to events from the other pointer. The solution is to nest enter and leave events, so that only the first entry and the final exit are reported to the application. Another problem results when a mouse button is pushed on one device while a button is being held down (for a drag operation, perhaps) on a different device. Do that within Nautilus, and the application simply locks up - not the eternal happiness Peter was hoping for. So, when an application holds a grab on one device (as happens when buttons are held down), button events from other devices are simply not reported. Also problematic is what to do when the application asks where the pointer is: which pointer should be reported? In this case, the server simply designates one pointer as the one to report on. All of this makes standard applications work - almost all the time.
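
The nesting trick amounts to reference-counting the pointers inside each window; a hypothetical sketch, with all names invented:

    /* Count pointers in a window; forward only the first enter and the
     * final leave to an MPX-unaware client. */
    typedef struct {
        int pointers_inside;
    } WindowState;

    int report_enter(WindowState *w)
    {
        return w->pointers_inside++ == 0;   /* only the first entry */
    }

    int report_leave(WindowState *w)
    {
        return --w->pointers_inside == 0;   /* only the final exit */
    }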

Some interesting problems remain, though. How, for example, should a window manager place new windows in a multi-user, multi-device situation? Users will want their windows in their part of the display space, but the window manager has no real way of knowing where that is - or even which user the window "belongs" to. In general, the whole paradigm under which desktop applications have been developed is unprepared to deal with a multi-device world.

Things will get worse as more types of input devices enter the picture. Touch screens are bad enough; they have no persistent state, so things change every time the user touches the device. But touch screens of the future will report multiple touch points simultaneously, and each of those will have attributes like the area of the touch, the pressure being applied, etc. Perhaps the device will sense elevation - a third dimension above the device itself. All of this is going to require a massive rethinking of how our applications work. There are going to be a lot of big problems. But that, says Peter, is what happens when one explores new areas. One gets the sense that he is looking forward to the challenge.


LCA: Disintermediating distributions

By Jonathan Corbet
February 6, 2008
One of the mini-confs which happened ahead of linux.conf.au proper was the "distribution summit," meant to be a place where representatives and users of all distributions could talk about issues of interest to all. The highlight of this event, perhaps, was Jeff Waugh's talk on disintermediating distributions - or, as he rephrased it, "distributed distributions." If his ideas take hold, they could be the beginning of a new relationship between free software projects and their users.

It all started, says Jeff, some years ago, when he ran into Mark Shuttleworth fresh from a visit to Antarctica. Mark's pitch, says Jeff, "sounded like crack" at the time. By 2003 or so, it just didn't seem like there was a whole lot of room for a new distribution. But Mark had some interesting ideas, and Jeff signed on; the result, of course, was Ubuntu.

Ubuntu has clearly had some success, but, in some important ways, it has failed to work out - at least for Jeff. He found himself distracted by Ubuntu's lack of participation in Debian, from which it derived its product. There was a real tension between tracking Debian and tracking upstream projects more directly. Despite Jeff's insistence that Ubuntu should be tracking (and pushing updates into) Debian's unstable distribution, Ubuntu often chose to go with upstream, resulting in what is, in effect, a fork of the Debian distribution - in terms of both the technology and the community.

What Ubuntu was doing was taking upstream packages, modifying them, bringing in shiny new features, and generally looking for ways to differentiate itself from the other distributors. So, for example, the first Ubuntu release contained a great deal of Project Utopia work (aimed at making hardware "just work" with Linux) which had been done by developers from other distributions; Ubuntu shipped it first, though, and got a lot of credit for it. Novell's behind-closed-doors development of Xgl was motivated primarily by the wish to keep Ubuntu from shipping it first. Meanwhile, Red Hat had slowly learned that trying to differentiate itself by diverging from upstream was a path to pain. So Red Hat's developers created AIGLX, in an open, community oriented manner; the result is that AIGLX has proved to be the winning technology.

Events like these led Jeff to wonder about just where the integration of packages should be done - upstream or downstream? From Jeff's (GNOME-based) upstream point of view, he wonders why he doesn't have a direct relationship with his users. While most projects deliver their code through middlemen (distributors), there is an example of a project which has managed to maintain a much more direct relationship: Firefox. Most Firefox users are direct clients of the project - though most of them are Windows users. The Firefox trademark has been used to ensure that, even when distributors are involved, the upstream developers get a say in what is delivered to users.

So, what happens if you take out the middleman? It's instructive to look back at what life was like before there were distributors. It was, Jeff says, much like pigs playing in mud; perhaps they enjoyed it, but it was messy. There are, in fact, a lot of good things that distributors have done for us. You can get a fully integrated stack of software from one source, and the distributor acts, in a way, as the user's advocate toward the upstream project. We don't want to lose out on all that.

But, if one were to look at facilitating a more direct relationship between development projects and their users, one would want to take advantage of a number of maturing technologies. These include:

  • OpenID. Any process of distributing distributions must look at distributed identity, and OpenID is the way to do it.

  • DOAP. "Sounds terrible" but it's a useful way of describing a project with XML. With a DOAP description, a user can find a project's mailing lists, bug tracker, source repository, etc.

  • Atom. This is how projects can distribute information about what they are doing.

  • XMPP. This is a Jabber-based message queueing and presence protocol. It can be used for more active publishing of information than Atom allows.

  • Distributed revision control. This offers lots of functionality for integration between projects, and between upstream and downstream. Jeff sees git as a step backward, though; some of the other offerings, he thinks, have much better user interfaces.

Also important are the packaging efforts which are underway in a number of places. These include Fedora, which is "becoming competitive with Debian" as a community project. OpenSUSE has put together a build system which can create packages for a number of distributions. Debian has had a community build system for years; there is interest in Debian in going the next step, though - ideas like building packages directly from a distributed version control system. Ubuntu's Launchpad was "a spectacular vision," though the reality is "a bit of a snore"; it didn't achieve its goal of helping upstream and downstream work together.

Then there's Bugzilla, which is the "bug filing gauntlet" between projects and their users. The Debian bug tracking system has done a better job of facilitating bug reports by allowing them to be submitted by email. But most big projects are using Bugzilla. It would be much improved by using OpenID (so that users would not have to register to file bugs) and some sort of Atom-based feed which would make querying bugs easy.

If you take out the distribution, what do you replace it with? How do we achieve consistency? We need to create standards for how we interact with each other. And we can, in fact, be very good at consistency and standards when the need is clear. Good release management is a step toward that goal. GNOME once had very bad release management, but has pulled it together. Doing time-based releases was a hard sell, but few developers would want anything else now. Now GNOME release management just works.

Consistency in source management is needed. Once upon a time that was done through CVS, but CVS is no longer up to the job, and now every project is using a different distributed version control system. But, sooner or later, one of the competing projects will win out and "hopefully we'll have clarity again." Autotools and pkgconfig can also go a long way toward creating consistency between projects.

So, if we can push the available tools up into the upstream projects, those projects can get better at producing packages for distributions themselves. Once the tools (like bug trackers) can talk to each other, people will start making more use of them and network effects will take over. But, at the moment, the knowledge about integration remains at the distribution level.

Debian, Jeff thinks, is well placed to take on a project like this and push its integration knowledge upstream. While Debian has typically been ten years ahead of everybody else in its packaging and integration abilities, it currently has a "relevancy problem." Finding ways to help upstream projects support their users more directly while maintaining overall integration and consistency would be a perfect way for Debian to maintain its leadership in this area. That could change the game for everybody, bringing projects closer to their users and making us all "happy as pigs in mud."


linux.conf.au 2008

By Jonathan Corbet
February 6, 2008
linux.conf.au has an interesting structure which differentiates it from most other events. Every year, a completely new set of organizers takes over the event, moves it to a new city, and puts its own stamp on it. They have a great deal of freedom in how they run LCA, but there is still a group of Linux Australia members and past organizers who keep an eye on things and help ensure that the event does not run into problems. The result is a conference which has a lot of fresh energy every year, but which is also reliably interesting. Many attendees consider it to be one of the best Linux events to be found anywhere in the world.

This year, LCA was held in Melbourne, Australia; the organizing team was led by Donna Benjamin. The now-familiar LCA formula was followed, but with some small changes. The tutorial day is no more, replaced by relatively short tutorial sessions on each day. The traditional auction for charity was also gone this year; instead, a raffle (with Greg Kroah-Hartman's 2.6.22 contributor poster as the main prize) yielded some $1000 for a local penguin refuge. The raffle was certainly a lower-pressure, less alcohol-fueled way of raising money, but LCA without Rusty Russell as auctioneer just isn't quite the same. That quibble notwithstanding, LCA 2008 was an interesting, well-organized, and well-attended event. Ms. Benjamin and company have certainly upheld the standards for this conference.

A number of LCA talks have been covered in separate LWN articles, and a few more may yet follow. This article will quickly review a few other high points, as seen from your editor's perspective. It's worth noting that videos for almost all of the talks have been posted on the conference web site.

Certainly one high point came on January 30, the day that LWN celebrated its tenth anniversary. The crowd sang a rousing - if not entirely harmonious - version of "happy birthday" after Bruce Schneier's keynote. The following morning tea featured special LWN muffins; they were, much to your editor's delight, of the intense chocolate variety. It is hard to imagine a better place or time to celebrate ten years of LWN.

While most LCA presentations are quite technical in nature, there are exceptions. Australian lawyer Kimberlee Weatherall's talk on legal issues was called "Stop in the name of law"; it covered a number of topics of interest to a global audience. Kimberlee, it's worth noting, was the recipient of the "Rusty Wrench" award for service to the free software community at last year's LCA in Sydney.

The Digital Millennium Copyright Act, she noted, is ten years old now. At this point, the debate on its anti-circumvention provisions is essentially done, and anti-circumvention has won; she is not expecting to see any major changes in countries which have adopted such laws. The music industry may be moving away from use of DRM, but "they were never very good at it anyway." DRM is still going strong in other areas, such as movies and subscription television.

Similarly, the fight to end software patents is over, and we have lost. There are incredible numbers of software patents issued every year; every one of those patents represents a significant investment by its owner. The total amount of investment in these patents is huge; that amount of money is almost impossible to displace. It is also very hard to define what a software patent really is; there are thousands of them in Europe, which ostensibly does not allow software patents. No matter how the rules are written, lawyers will find a way around them.

What is happening on the patent front, instead, is a more constructive engagement with the process. Some reform is happening in the US, as a result of the KSR decision and various attempts to mitigate the costs associated with patents. So the situation might improve slowly over time.

GPLv3 is out. It now has to pass two tests: the market test (will projects use it?) and any legal tests which might be brought. Kimberlee expressed some doubts on whether GPLv3 will really hold up in court, but did not elaborate on them.

There is a new threat out there which we should not underestimate: the push to force copyright enforcement duties onto ISPs. This effort takes two forms: getting "infringers" disconnected, and requiring ISPs to filter data passing through their networks. There are a lot of problems with either approach, but that is not stopping the industry (and others, such as anti-porn crusaders) from pushing hard for ISP responsibility. This is a fight to watch.

So what should the free software community do? Not much, says Kimberlee, except to keep coding. The production of good code brings us allies with money, and that's what we're going to need. As long as we are successful, people will go out of their way to protect us. Keep doing what we do, and things should come out OK.

Anthony Baxter is the Python release manager; he was also the keynote speaker for the third day of the conference. He is, to say the least, an entertaining speaker, so this would be a good one to watch on video. The talk was about coming changes in Python, and Python 3.0 in particular. The 3.0 release, he says, is "the one where we break all of your code." It's the first backward-incompatible update of the language (at least, if you don't deal in C extension modules).

There are a lot of changes to the language which your editor will not repeat here; they are well documented on the Python web sites. As noted, many of these changes will cause existing code to break. This is being done, says Anthony, because the Python language is now 16 years old. Like all 16-year-olds, it has a number of annoying features. It's time to clean out a lot of accumulated cruft and get back to the minimal, "there is one way to do it" vision that has always driven the language.

Perhaps what's most interesting is what won't be done. The language will not be bloated - it will stay Python. There will be no braces; white space will still be used to mark blocks of code. The much-criticized global interpreter lock will remain. And, importantly, this will be an incremental (if big) update - there will be no overall rewrite of the interpreter. The experience of certain other projects (namely Perl 6 and Mozilla) shows that total rewrites tend to be much longer, more painful affairs than anybody might envision at the outset.

There will be migration tools, of course, and warnings built into the forthcoming 2.6 release which will point out things that may cause migration difficulties. The 2.x series will be supported for some years into the future. And, says Anthony, there will be no Python 4.0 release. This is their one chance to break everything and start over, and they plan to get it right this time.

Dave Jones is the head maintainer for the Fedora kernel. At LCA 2008 he took a break from pointing out user-space problems and talked about "a day in the life of a distribution kernel maintainer." The real subject of the talk was the process that the Fedora project goes through to put together the kernels they ship.

There are currently three developers working on the Fedora kernel (Dave, Chuck Ebbert, and Kyle McMartin), and "several dozen" working on the RHEL kernels. Most of the RHEL folks are doing backports of fixes, drivers, etc. to the older kernels used by RHEL releases.

Once a kernel has been chosen for release, it's time to start adding patches. Some interesting numbers were put up at this point. Red Hat Linux 7 had 70 patches added to its 2.2.24 kernel. That number went slowly up, to the point where Fedora Core 6 had 191 patches. There are currently 63 patches added to the Fedora 8 kernel, though that may grow over the life of this release. By comparison, RHEL 5 is shipping a 2.6.18 kernel with 1628 patches added to it - a very different world.

There are all kinds of patches which go into a distributor kernel. These include security technologies (ExecShield) which have not made it into the mainline, changes to some default parameters, the silencing of certain "scary messages" which tend to provoke lots of needless bug reports, out-of-tree drivers, patches which help debug problems found in the field, stuff which has been vetoed upstream, and more. Then it's a matter of putting together the package and dealing with the subsequent bug reports - lots of them.

The closing ceremony included the traditional introduction of the organizer for next year's event. This event will go, for the first time ever, to Hobart, Tasmania; see MarchSouth.org for more information. There is some information on what this team is planning in the bid document [1.6MB PDF]; your editor is intrigued by the following: "The official Speakers' Dinner will be held at a mystery location south of Hobart following a 40 minute river cruise on a high speed luxury catamaran." It's never too soon to get that talk proposal together.

Finally, the last few LCA events have included the passing of the "Rusty Wrench" award to somebody who has performed a great service to the community. Recipients so far are Rusty Russell (after whom the award is named), Pia Waugh, and Kimberlee Weatherall. The Rusty Wrench was not awarded at LCA2008, though. It seems that, in the future, the Rusty Wrench will be part of an extensive set of awards which will be handed out at a separate "gala dinner" event held in the (Australian) winter. The awarding of the Rusty Wrench was a nice LCA feature which will be missed, but, then, there are advantages to having another excuse to visit Australia.


Page editor: Jake Edge

Inside this week's LWN.net Weekly Edition

  • Security: Security hardening for Debian; New vulnerabilities in gnatsweb, kernel, pcre, xdg-utils, ...
  • Kernel: More stuff for 2.6.25; CRFS and POHMELFS; Ticket spinlocks.
  • Distributions: An interview with the new openSUSE community manager; Terra Soft Releases YDL v6.0; Fedora 9 Alpha; Hardy Alpha 4; Indiana preview 2; Debian Lenny update
  • Development: PostgreSQL releases version 8.3, Apache Synapse becomes top-level project, new versions of Open1X, ZK, ALE Server, Zumastor, GNOME Development Release, GARNOME, KDE, ij-plugins Toolkit, osslsigncode, WorldVistA EHR VOE, wcnt, MediaInfo, Transform SWF, GCC, GNU CLISP, image4j, GIT.
  • Press: Aaron Seigo interviewed at LCA, Torvalds interview, Corbet's LCA talk, the Asus Eee line, Azingo's mobile Linux platform, interviews with Sebastian Kuegler, and Linus Torvalds, Zimbra's future after Microsoft takes over Yahoo.
  • Announcements: EFF challenges online gaming patent, ODMG.ORG consortium rehosted, Big Box Linux opens, LiMo announces Linux mobile platform, Logicworks and MySQL partner, Augustin and Asay join MindTouch board, sub-$100 Linux phone, Intel releases graphics manuals, Pizzigati Prize awarded, Interop Las Vegas, VON.x Europe, InsideRIA.com launched, linux.conf.au videos.

Copyright © 2008, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds