When thinking about user interface design, many will focus on the
application itself, but Claire Rowland, an interaction designer and
researcher, looks at things a bit differently. She came to the Desktop
Summit in Berlin to describe "service design", which encompasses more than
just the interface for a particular application. Looking at the service
that is being provided, and focusing on the "touchpoints" for that
service, makes for a more holistic view of interface design. That will become
increasingly important as we move into a world where more and more
"ordinary" devices become connected to the internet.
Rowland set the tone for her talk by playing a short video from the Smarcos project, which outlined
the kinds of devices and connectivity between them that we are likely to
see over the next few years. Things in the real world that have not been
connected to the internet, like toilets, pets, or bathroom scales, are
headed in that direction. Since February 2011, AT&T has had more new
machine subscribers (i.e. devices of various sorts) than human subscribers,
and it is estimated that there will be 50 billion connected devices by 2020.
The video described the challenge of making the
systems—services—surrounding these devices usable. It also
pointed out the problems with
ensuring that users are in control of the data that gets shared, as well as
the challenges in making the service understandable. Some of the
presumably fictional examples shown were a washing machine flashing an
"Error: update firmware" message and a coffee machine that wouldn't perform
its usual task because of a "caffeine allowance exceeded" condition.
The difficulty in designing these systems is to make them usable and
understandable, Rowland said, because many people "don't want to
fiddle around with tech". The number of things that need to
"connect up" is only increasing. Smartphones are outselling
PCs these days, TVs are connecting to the web, and more environmental
sensors are coming online, which presents an "interconnectivity
challenge", she said. "How do we get these things to play nicely together?"
Part of the answer may lie in "service design", which is what she works
on. A service simply delivers "something for users". That
could be a service in the traditional computer sense of the term, or
something more real world. She used the "Post" (i.e. Postal Service in the
US) as an example of the latter. There are multiple "touchpoints" for the
service, whether it is buying stamps or sending and receiving packages.
The value of the service is in "how the whole thing works
together", she said. For digital services, it doesn't matter how
well an application ("app") works in isolation, it needs to fit and work with the
service as a whole.
New design metaphor needed
There is a need for a new design metaphor, Rowland said, because the old
usability model of "one person sitting in front of one app" is
no longer valid. That model relies on there being one core device, the
screen, that creates a "work-centric" design. Those kinds of applications
are context-independent and passive, waiting for a single user to perform some action.
In contrast, future applications will have "interusability", she said.
There will be multiple devices involved, some without a screen, and the
applications will become context aware. The applications will be
"content and activity-centric", cloud-based, and will target
multiple users (e.g. web TV).
The key to designing these services will be in finding the right
touchpoints and the appropriate interaction type. Touchpoints need to be
right for the device being used, that is "doing the right thing on
the right device". The "right thing" is not necessarily based on
what the device can do, she said. While a TV can have a keyboard, that may
not be the right way to interact with it, because watching TV is generally
a more passive activity. Depending on the type of application, and the
device in use, it may make sense to design applications to be
"glanceable", and not require users to put their full
attention on the application.
Today's smartphone landscape takes an approach that Rowland called the
"bucket of apps". Instead of just offering a huge range of
different apps, the phone's capabilities should be used to
anticipate the user's needs. If the user is fumbling with their phone at a
bus stop, there is no technical reason that the bus stop couldn't identify
itself to the phone. That would allow the phone to present the bus
schedule app as a likely choice, rather than require the user to dig
it out of the bucket.
There are three elements that make a "service feel like a
service", Rowland said.
The first is to present a "clear mental model" of what the
service is and what it can do for the user. For example, she said that
Dropbox is not technically better than other alternatives, but it positions
itself as simply being about sharing folders. Other similar services talk
about "syncing and backup", which is "scary for users", she said.
Continuity is another important element, so that users get the same
experience on different devices. For example, an app could tag the Twitter
tweets that you have seen on a particular device, so that they don't have
to be downloaded on a different device. There is an effort to create
"migratory interfaces", she said, where the user can move from
device to device while keeping the same state and context in the service.
If a user is on a mobile device looking at banking information, and the
device runs low on power, the device could prompt whether to push the
information to a nearby desktop. There should also be continuity
"across interaction modes", so that a transaction started
elsewhere could be completed via a phone call, for example.
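The "migratory interface" idea can be sketched as capturing a small session snapshot on one device and restoring it on another. The following JavaScript is purely illustrative; the function names, state fields, and the banking scenario are invented for this sketch and do not reflect any real handoff API:

```javascript
// Hypothetical sketch of a migratory interface: session state is captured
// as a plain object, serialized, and restored on a second device.

function captureState(app) {
  // Snapshot only what is needed to resume: current view, scroll
  // position, and which account the user was examining.
  return JSON.stringify({
    view: app.view,
    scroll: app.scroll,
    account: app.account,
  });
}

function restoreState(serialized) {
  // Rebuild the minimal application context on the receiving device.
  const state = JSON.parse(serialized);
  return { view: state.view, scroll: state.scroll, account: state.account };
}

// Simulated handoff: in a real service this blob would travel through a
// cloud backend to the nearby desktop after the low-battery prompt.
const phoneApp = { view: 'transactions', scroll: 340, account: 'checking' };
const blob = captureState(phoneApp);
const desktopApp = restoreState(blob);
```

The point of the sketch is that only a compact, serializable description of the user's context needs to move between devices; the receiving device renders it with its own appropriate interface.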
The final piece of the service puzzle is consistency, Rowland said. No
matter what kind of device or application is used to interact with the
service, it should be consistent. If an appliance is to be controlled from
a mobile phone, that doesn't mean that there will be the exact same dials
and other control elements in the phone app, but that the labels, names, and
interaction logic should be the same, she said. The kind of controls used
should be appropriate to the device, but still be consistent with other
ways of interacting with the service.
The cloud user experience is a challenge for consumers, she said.
Connectivity is going to fail sometimes, and to a non-technical user, the
difference between losing the connection and a bug in the app is small.
Losing connectivity can also lead to bad user experience when it is
regained. She pointed to the Spotify music service, where users have to
log in again once the connection has been restored. There may be valid
security reasons for doing so, she said, but it leads to a bad user experience.
Instead of treating connection loss as an exceptional event, applications
should plan for periods of disconnection. Downloading content well ahead
of the time it is needed would be one example of that. The cloud also
brings with it a set of privacy issues and settings that are difficult for
users to get their heads around. There is a need for reasonable defaults,
she said, pointing to the recent issues with Fitbit
activity information showing up in Google searches. Users were
probably not expecting that their sexual activity (including date and time,
as well as duration) would show up there.
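The disconnection-planning advice above can be sketched as a content source that prefetches while the network is available and falls back to a cached copy when it is not. This is an illustrative JavaScript sketch; `makeOfflineTolerantSource` and its interface are invented names, not any real API:

```javascript
// Sketch of planning for disconnection rather than treating it as an
// exceptional event: content is prefetched into a local cache, and when
// the network fails the application serves the cached copy instead of
// forcing the user through an error (or a fresh login).

function makeOfflineTolerantSource(fetchFn) {
  const cache = new Map();
  return {
    // Called opportunistically while connectivity is good.
    async prefetch(key) {
      cache.set(key, await fetchFn(key));
    },
    // Called when the content is actually needed; falls back to the
    // cached copy on network failure instead of throwing.
    async get(key) {
      try {
        const fresh = await fetchFn(key);
        cache.set(key, fresh);
        return fresh;
      } catch (err) {
        if (cache.has(key)) return cache.get(key); // serve stale copy
        throw err; // nothing cached either: genuine failure
      }
    },
  };
}
```

A music application built this way could keep playing an already-fetched playlist through a connectivity gap, rather than interrupting the user the way the Spotify example does.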
The desktop certainly has a role to play and will be a part of this
ecosystem, Rowland said. Service design is partly about the user
interfaces on devices, but it is also about how to make all the different
parts work well together. Apple has staked out a claim to provide this
kind of experience, but she does not want to commit to only Apple products.
There "need to be alternatives" to Apple, she said, and that's
where the free software world can come in.
In response to a question from the audience, Rowland had some suggestions
on getting designers more involved with free software. "Designers
love a challenge", she said, and free software needs to "get
better at packaging itself to attract designers". She suggested
going to design conferences to present free software design problems as
challenges and asking for designers to step up to help solve them.
While Rowland's talk was not immediately applicable to free desktops, there
was much in it to ponder on. Like it or not, the vision of the
interconnected future is coming, and our mundane devices and appliances are
headed down that route as well. Making those things work well for users, while still
allowing user freedom, is important, and it's something the free software
community should be contemplating.
[ I would like to thank the GNOME Foundation and KDE e.V. for travel
assistance to attend the Desktop Summit. ]
Edward Naughton is at it again
Naughton is now claiming that most or all Android vendors have lost their right to
distribute the kernel as the result of GPL violations. Naturally, Florian
Mueller has picked up the claim and amplified it; he is amusingly surprised to learn that there
are GPL compliance problems in the Android world. As it happens, there is
no immediate prospect of Android vendors being unable to ship their
products - at least, not as a result of GPL issues - but there is a point
here which is worth keeping in mind.
First: please bear in mind while reading the following that your editor is
not a lawyer and couldn't
even plausibly play one on television.
Earlier this year, Jeremy Allison gave a talk on why the Samba project
moved to version 3 of the GNU General Public License. There were a
number of reasons for the change, but near the top of his list was the
"GPLv2 death penalty." Version 2 is an unforgiving license: any
violation leads to an automatic termination of all rights to the software.
A literal reading of this language leads to the conclusion that anybody who
has violated the license must explicitly obtain a new license from the
copyright holder(s) before they can exercise any of the rights given by the
GPL. For a project that does not require copyright assignment, there could
be a large number of copyright owners to placate before recovery from a
violation would be possible.
The Samba developers have dealt with their share of GPL violations over the
years. As has almost universally been the case in our community, the
Samba folks have never been
interested in vengeance or "punitive damages" from violators; they simply
want the offending parties to respect the license and come back into
compliance. When the GPL was rewritten to become GPLv3, that approach was
encoded into the license; violators who fix their problems in a timely
manner automatically have their rights reinstated. There is no "death
penalty" which could possibly shut violators down forever; leaving this
provision behind was something that the Samba team was happy to do.
Android phones are capable devices, but they still tend not to be shipped
with Samba servers installed. They do, however, contain the Linux kernel,
which happens to be a GPLv2-licensed body of code with thousands of
owners. Those who find it in their interest to create fear, uncertainty,
and doubt around Android have been happy to seize on the idea that a GPL
violation will force a vendor to locate and kowtow before all of those
owners before they can ship the kernel again. There can be no doubt that
this is a scary prospect.
One should look, though, at the history of how GPL violations have been
resolved in the past. There is a fair amount of case history - and a much
larger volume of "quietly resolved" cases - where coming into compliance
has been enough. Those who have pursued GPL violations in the courts have
asked for organizational changes (the appointment of a GPL compliance
officer, perhaps), payment of immediate expenses, and, perhaps, a small
donation to a worthy project. But the point has been license compliance,
not personal gain or disruption of anybody's business; that is especially
true of the kernel in particular.
Harald Welte and company won their first GPL court case in 2004; the practice of
quietly bringing violators into compliance had been going on for quite some
time previously. Never, in any of these cases, has a copyright-holding
third party come forward and claimed that a former infringer lacks
a license and is, thus, still in violation. The community as a whole has
not promised that licenses for violators will be automatically restored
when the guilty parties come back into compliance, but it has acted that
way with great consistency for many years. Whether a former violator could
use that fact to build a defense based on estoppel is a matter for lawyers
and judges, but the possibility cannot be dismissed out of hand. Automatic
reinstatement is not written into the license, but it's how things have worked in practice.
There is an interesting related question: how extensive is the termination
of rights? Each kernel release is a different work; the chances that any
given piece of code has been modified in a new release are pretty high.
One could argue that each kernel release comes with its own license; the
termination of one does not necessarily affect rights to other releases.
Switching to a different release would obviously not affect any ongoing
violations, but it might suffice to leave holdovers from previous
violations behind. Should this naive, non-lawyerly speculation actually
hold water, the death penalty becomes a minor issue at worst.
So Android vendors probably have bigger worries than post-compliance
hassles from kernel copyright owners. Until they get around to that little
detail of becoming a former violator, the question isn't even
relevant, of course. Afterward, software patents still look like a much bigger threat.
That said, your editor has, in the past, heard occasional worries about the
prospect of "copyright trolls." It's not too hard to imagine that somebody
with a trollish inclination might come into possession of the copyright on
some kernel code; that somebody could then go shaking down former violators
with threats of lawsuits for ongoing infringement. This is not an outcome
which would be beneficial to our community, to say the least.
One would guess that a copyright troll with a small ownership would succeed
mostly in getting his or her code removed from the kernel in record time.
Big holders could pose a bigger threat. Imagine a company like IBM, for
example; IBM owns the copyright on a great deal of kernel code. IBM also
has the look of one of those
short-lived companies that doesn't hang around for long. As this
flash-in-the-pan fades, its copyright portfolio could be picked up by a
troll which would then proceed to attack prior infringers. Writing IBM's
code out of the kernel would not be an easy task, so some other sort of
solution would have to be found. It is not a pretty scenario.
It is also a relatively unlikely scenario. Companies that have built up
ownership of large parts of the kernel have done so because they are
committed to its success. It is hard to imagine them turning evil in such
a legally uncertain way. But it's not a possibility which can be ignored
entirely. The "death penalty" is written into the license; someday,
somebody may well try to take advantage of that to our detriment.
What would happen then? Assuming that the malefactor is not simply
lawyered out of existence, other things would have to come into play.
Remember that the development community is currently adding more than one
million lines of code to the kernel every year. Even a massive rewrite job
could be done relatively quickly if the need were to arise. If things got
really bad, the kernel could conceivably follow Samba's example and move to
GPLv3 - though that move, clearly, would not affect the need to remove
problematic code. One way or another, the problem would be dealt with.
Copyright trolls probably do not belong at the top of the list of things we
lose sleep over at the moment.
Mozilla announced a new rapid-release cycle for its flagship applications earlier this year, but it has not slowed down on other fronts in the interim. New work in recent weeks includes the Boot to Gecko instant-on project, collaboration with Google developers on WebRTC and Web Intents, a new security review process, and an initiative aimed at meeting the distinct needs of enterprise IT departments.
To the cloud
Boot to Gecko (B2G) was announced
on July 27; it is aiming to build "a complete, standalone operating
system for the open web." Much like ChromeOS, the idea is to build
a low-resource-usage operating system for portable devices (e.g., tablets
and phones) that focuses on web-delivered applications instead of
locally-installed software. Notably, however, the initial announcement and
the main project page both discuss the web's ability to displace
proprietary "single vendor control" of application execution environments.
Obviously, Mozilla has believed in the web as an OS-agnostic delivery
platform for years, as its "open
web app ecosystem" and Drumbeat
outreach initiative demonstrate. But the project has never spearheaded the
development of an actual OS offering before. When third-party developers
launched the Webian Shell project — itself a
Mozilla-based desktop environment — Mozilla offered guidance and
technical assistance, but did not get directly involved in its development. At the time, some industry watchers speculated that Mozilla might be wary of stepping on Google's toes with its default-search-engine-placement deal coming up for renewal later this year.
B2G and Webian are very different, at least at the moment. Webian is a replacement for the desktop environment, not a complete OS, while B2G at least plans to adopt a full software stack. B2G is also still in the very early development stage, without demos to download. But the project has outlined a number of areas where it believes new APIs will need to be developed and structures will need to be put in place to build a fully Mozilla-based OS. These include APIs for accessing hardware devices not addressed by traditional browsers (telephony, Bluetooth, and USB device access, for example), and a new "privilege model" to make sure that these devices are accessible by pages and web applications without security risks.
Interestingly enough, the B2G project pages also discuss the need for an
underlying OS to boot into, and describe it as "a low-level substrate for an Android-compatible device." This suggests that B2G is going after the Android, not ChromeOS, class of hardware (although where it concerns angering Google, it is doubtful that the company would be less protective of one pet project than another).
Indeed, the B2G GitHub code currently builds only with the Android SDK and is installable only on the Nexus S 4G, although the mailing list thread in mozilla.dev.platform discusses other hardware targets. The thread (which for the moment is the only official email discussion forum for B2G) includes considerable debate about what the sub-Gecko OS needs to include, exactly what web APIs deserve top priority, and the relative merits of Android, MeeGo, webOS, and other open source operating systems as a platform.
Mozilla's Mike Shaver addressed the current use of Android as more of a project-bootstrapping move than a longer-term strategy:
We intend to use as little of Android as possible, in fact. Really, we want to use the kernel + drivers, plus libc and ancillary stuff. It's not likely that we'll use the Android Java-wrapped graphics APIs, for example. It's nice to start from something that's known to boot and have access to all the devices we want to expose.
In spite of that explanation, the debate over the OS underpinnings rolls on. Wherever the project heads, it makes for educational reading.
Tied in deeply to the B2G discussion is a new generation of web APIs on which to build the increasingly interactive and cross-domain web applications that the B2G vision relies on. On that front, Mozilla and Google seem to be working well together.
WebRTC is Google's recently open-sourced framework for real-time audio and video communication on the web. In addition to the media streaming components, it includes libraries to handle network buffering, error correction, and connection establishment, some of which are adapted from libjingle.
In early August, Mozilla announced it was going to adopt WebRTC as a core component of its Rainbow extension for Firefox. Rainbow allows web applications to access client-side audio- and video-recording hardware (i.e., microphones and webcams). Apart from the obvious use (person-to-person chat applications), Mozilla Labs reports that developers have written karaoke, QR code scanning, and photo booth applications. Unfortunately, even the most recent Rainbow release (0.4) does not support Linux, although the team claims it is a high priority. The Rainbow README says the project ultimately wants to not depend on any external libraries; a solid offering of audio- and video-handling through WebRTC should help.
While WebRTC occupies a low-level API slot, Web Intents implements a very high level of abstraction. The concept is inter-application communication and service discovery, so that (for example) a user could use an online image editor like Picnik to open and touch up photos hosted at another online service, like Flickr. Web Intents was announced by Google in November of 2010, based on the Intents API used by Android.
Web services "register" the actions they intend to support with
<intent> tags in their page's markup.
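As a rough illustration, the early webintents.org draft had a service declare the actions it handles directly in its markup; the snippet below follows that draft, but the paths and titles are made-up examples and the syntax was still in flux at the time:

```html
<!-- A hypothetical photo-editing service declares, in its page markup,
     that it can handle the "edit" action for any image type; the browser
     records this registration when the user visits the page. -->
<intent action="http://webintents.org/edit"
        type="image/*"
        href="/editor.html"
        title="Touch up a photo"></intent>
```

A client page would then construct an intent naming the same action and hand it to the browser, which presents the registered services to the user and routes the data to whichever one is chosen.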
Mozilla's proposal tackles much the same problem. It was initially referred to as Web Activities in a July blog post, then as Web Actions in August. In both cases, however, the same general protocol is used: each service advertises a set of actions that it will support from incoming applications, based on a generally-agreed-upon set of common actions.
In an August 4th blog post, Google announced
that it was "working closely with Mozilla engineers to unify our two
proposals into one simple, useful API." With little more than basic
demos to go on, the two APIs seem strikingly similar, although Mozilla's
"Web Actions" is regarded as the clearer name in several articles in the
technical press. It also includes a more definite mechanism for service discovery, which remains a fuzzy notion in the Google proposal. Currently applications needing to connect to a remote service must rely on either the user or the browser to locate compatible alternatives. Mozilla's proposal uses its Open Web App manifest storage to remember previously-discovered services. Everyone seems to agree on the value of a cross-web-application communication framework, so the protocol is worth watching, but it could be quite some time before there are any services able to make use of the system.
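For context, an Open Web App manifest of that era was a small JSON file along these lines; the application, paths, and values shown are illustrative, loosely following Mozilla's early manifest format:

```json
{
  "name": "Photo Toucher",
  "description": "Hypothetical image editor used as an example",
  "launch_path": "/editor.html",
  "icons": {
    "128": "/img/icon-128.png"
  },
  "developer": {
    "name": "Example Corp",
    "url": "https://example.com"
  }
}
```

Storing discovered services alongside data like this gives the browser a persistent record of which applications can handle which actions, which is the discovery mechanism the Google proposal leaves fuzzy.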
Freshening the security blanket
In late July, the Mozilla Security Blog posted an
outline for reworking and "evolving" the project's security review process.
The crux of the proposal is to better integrate security review with the
overall application development process: a smoother process results in less
disruption for the developers and fewer hangups for the users. As Mozilla
contemplates reaching a wider audience with the increased adoption of
Firefox for Mobile and its messaging products, getting the process right will help the organization grow its user base.
Specifically, the goals include performing reviews and flagging bugs
earlier, ensuring that reviews produce "paths" out of trouble and not just work
stoppages, more transparency in the content of reviews, and a more open and
transparent format for security team meetings. There is a sample outline
of the new review meeting process included in the blog post, and the team
has been using it for the past few months.
The experience has been a successful one so far; the new process preemptively caught security flaws in Firefox's CSS animation code and server-sent DOM event handling. The full schedule of security meetings is published as publicly-accessible HTML and iCalendar data, and the results are archived on the Mozilla wiki. The new approach has also resulted in some new features being added to the Mozilla Bugzilla instance and team status pages.
Ultimately, the security team says it wants to become "fleet of foot" enough that development teams will come to it to have a review done, rather than the security team needing to initiate the review process and interrupt development.
Enterprise outreach
In late June, PC Magazine reported that enterprise IT departments were upset by Mozilla's move to a short release cycle, arguing that the change negatively affected them by drastically shortening the support lifetime of each release. When a corporate IT consultant lamented the time it would take to test and validate multiple major releases each year, Mozilla's Asa Dotzler sparked controversy by commenting "Enterprise has never been (and I'll argue, shouldn't be) a focus of ours."
A month later, Mozilla's chief of developer engagement, Stormy Peters, announced the formation of an enterprise user working group where the project can interface with IT professionals and enterprise developers. The "enterprise developers" segment includes people who develop in-house web applications for enterprises, as well as those who use Mozilla components to develop their own software (including add-ons and XUL-based applications).
The group's wiki page lists general "help each other"-style objectives, but more importantly it outlines communication mechanisms, starting with a private mailing list and monthly phone call meetings. Each meeting has a specific topic, and both outlines and minutes are posted on the wiki. Understandably, the first few tackled the new release cycle and input from enterprise users on deploying Firefox and how it could be improved.
The output of the meetings also seems to be archived in
"resource" pages on the wiki, integrated with related information on each particular topic. Unfortunately, the minutes from the August 8th meeting on the new rapid-release cycle are not yet posted, and although the working group has its own issues in Mozilla Bugzilla, so far the only bugs filed deal with technical matters, such as configuration and LDAP support.
Nevertheless, the working group is a positive step. The brouhaha over enterprise support in June was primarily sparked by the attitude many read in Dotzler's comments; opening an ongoing conversation with more diplomatic overtones is arguably a better fix for that kind of problem than are Bugzilla issues. It would be nice to see the enterprise working group attempt to increase its openness and transparency by making its mailing list public, but that may simply be another one of those areas where "enterprises" and those of us who are merely "consumers" do not see eye-to-eye.
The list of recent projects undertaken at Mozilla demonstrates the organization's new-found interest in taking its mission beyond the traditional desktop browser. Certainly the new approach to security review and the enterprise working group directly affect Firefox development, but with B2G and the various Open Web Application projects, soon the oft-used term "browser maker" may fail to accurately describe Mozilla. But it is encouraging to see that the diversified interests of the project include exploring areas — like web-only operating systems — that might otherwise be ceded to commercial interests alone.
Page editor: Jonathan Corbet