The failure of the lengthy attempt to get Android's suspend blockers patch
set into the kernel offers a number of lessons at various levels. The
technical side of this episode has been covered in a Kernel-page article;
here we'll look, instead, at the process issues. Your editor argues that
this sequence of events shows both the best and the worst of how Linux
kernel development is done. With luck, we'll learn from both and try to
show more of the best in the future.
Suspend blockers first surfaced as wakelocks in February, 2009.
They were immediately and roundly criticized by the development community;
in response, Android developer Arve Hjønnevåg made a long series of
changes before eventually bowing to product schedules and
letting the patches drop for some months. After the Linux Foundation's
Collaboration Summit this year, Arve came back with a new version of the
patch set after being encouraged to do so by a number of developers.
Several rounds of revisions later, each seemingly driven by a new set of
developers who came in with new complaints, these patches failed to get
into the mainline and, at this point, probably never will.
In a number of ways, the situation looks pretty grim - an expensive failure
of the kernel development process. Ted Ts'o described it this way:
Keep in mind how this looks from an outsider's perspective; an
embedded manufacturer spends a fairly large amount of time
answering one set of review comments, and a few weeks later, more
people pop up, and make a much deeper set of complaints, and
request that the current implementation be completely thrown out
and that something new be designed from scratch --- and the new
design isn't even going to meet all of the requirements that the
embedded manufacturer thought were necessary. Is it any wonder a
number of embedded developers have decided it's Just Too Hard to
Work With the LKML?
Ted's comments point to what is arguably the most discouraging part of the
suspend blocker story: the Android developers were given conflicting advice
over the course of more than one year. They were told several times: fix X
to get this code merged. But once they had fixed X, another group of
developers came along and insisted that they fix Y instead. There never
seemed to be a point where the job was done - the finish line kept moving
every time they seemed to get close to it. The developers who had the most
say in the matter did not, for the most part, weigh in until the last week
or so, when they decisively killed any chance of this code being merged.
Meanwhile, in public, the Android developers were being
criticized for not getting their code upstream and having their code
removed from the staging tree. It can only have been demoralizing - and expensive too:
At this point we've spent more engineering time on revising this
one patchset (10 revisions to address various rounds of feedback)
and discussion of it than we have on rebasing our working kernel
trees to roughly every other linux release from 2.6.16 to 2.6.32
(which became much easier after switching to git).
No doubt plenty of others would have long since given up and walked away.
There are plenty of criticisms which can be directed against Android,
starting with the way they developed a short-term solution behind closed
doors and shipped it in thousands of handsets before even trying to
upstream the code. That is not the way the "upstream first" policy says
things should be done; that policy is there to prevent just this sort of
episode. Once the code has been shipped and applications depend on it,
making any sort of change becomes much harder.
On the other hand, it clearly would not have been reasonable to expect
the Android project to delay the shipping of handsets for well over
a year while the kernel community argued about suspend blockers.
In any case, this should be noted: once the Android developers decided to
engage with the kernel community, they did so in a persistent,
professional, and solution-oriented manner. They deserve some real credit
for trying to do the right thing, even when "the right thing" looks like a
different solution than the one they implemented.
The development community can also certainly be criticized for allowing
this situation to go on for so long before coming together and working out
a mutually acceptable solution. It is hard to say, though, how we could
have done better. While kernel developers often see defending the quality
of the kernel as a whole as part of their jobs, it's hard to tell them that
helping others find the right solutions to problems is also a part of their
jobs. Kernel developers tend to be busy people. So, while it is
unfortunate that so many of them did not jump in until motivated by the
imminent merging of the suspend blocker code, it's also an entirely
understandable expression of basic human nature.
Anybody who wants to criticize the process needs to look at one other
thing: in the end it appears to have come out with a better solution.
Suspend blockers work well for current Android phones, but they are a
special-case solution which will not work well for other use cases, and
might not even work well on future Android-based hardware. The proposed
alternative, based on a quality-of-service mechanism, seems likely to be
more flexible, more maintainable, and better applicable to other situations
(including realtime and virtualization). Had suspend blockers been
accepted, it would have been that much harder to implement the better
solution later on.
And that points to how one of the best aspects of the kernel development
process was on display here as well. We don't accept solutions which look
like they may not stand the test of time, and we don't accept code just
because it is already in wide use. That has a lot to do with how we've
managed to keep the code base vital and maintainable through nearly twenty
years of active development. Without that kind of discipline, the kernel would
have long since collapsed under its own weight. So, while we can certainly
try to find ways to make the contribution process less painful in
situations like this, we cannot compromise on code quality and
maintainability. After all, we fully expect to still be running (and
developing) Linux-based systems after another twenty years.
The fifth annual Libre Graphics Meeting (LGM) took place May 27-30 in Brussels, Belgium, bringing together the developers of the open source creative application suite: GIMP, Krita, Inkscape, Scribus, Rawstudio, Blender, and a dozen other related projects, such as the Open Font Library and Open Clip Art Library. As is tradition, most of the projects gave update presentations, and both time and meeting space were set aside for teams to work and make plans.
But apart from that, one of the most interesting facets of this year's
LGM was the emphasis placed on professional graphics users in the program.
Artists and designers from every part of the globe were there, not just to
listen, but to present — fully 20 of the 51 sessions were given by
creative professionals (though some, naturally, are developers as well), in
addition to the BOF meetings and interactive workshops. The result is that
LGM helps narrow the gap that sometimes grows between project teams and end
users, a goal that other open source communities could emulate.
Open source in the real world
For example, Ana Carvalho talked about Porto, Portugal-based Plana Press, a low-volume publisher
specializing in art and independent comic books. Plana's first two books
were laid out in Photoshop, but the company has been transitioning to an
entirely open-source production pipeline ever since. Carvalho discussed
the steps involved in taking a book from hand-drawn artwork to final print
production, including those steps still not covered by free software, such
as imposition, the
process of arranging multiple pages together onto large format paper to
facilitate high-speed professional printing.
Markus Holland-Moritz also discussed book production in his talk, which
outlined the self-publishing of his photographic book about traveling
through New Zealand. Holland-Moritz's book is primarily images; in the
process of developing it he used Perl to custom-process several repetitive
tasks and produced useful patches to Scribus that have since been
integrated into the project's trunk. One is an image cache, which reduced
the 16-minute wait he initially experienced when opening the
2.6-gigabyte Scribus file down to 20 seconds. The other is per-image
compression settings; previously Scribus allowed document creators to
select a lossy or lossless image compression when producing PDF output, but
Holland-Moritz needed to specify different settings for his photography and
graphical images. The process also led him to develop a custom text markup
language based on TeX and an accompanying Scribus import filter.
Christopher Adams presented a publisher's perspective on the need for high-quality open fonts, such as those developed under the Open Font License. He first described common restrictions placed on commercially-produced fonts, even for publishers, such as the inability to embed some fonts in a PDF, and the inability to extend a font to cover special characters or accent marks needed for a particular project. He then took a tour through the open font landscape, showing off what he considered to be the highest quality open fonts, and showed the differences between them in practical terms — character set coverage, suitability for print versus screen display, the availability of different weights and widths in the same font for visual consistency, and so forth.
Professional publishing is always a big topic at LGM, but it is not the only source of feedback from design professionals. Several talks focused on using open source in the classroom, such as Lila Pagola's discussion of her experiences and occasional frustrations working open source software into her graphic design and photography curriculum for art students in Córdoba, Argentina. Despite the assumption on some people's part that Adobe has a stranglehold on art students' lab time, Pagola has successfully taught students with free software alongside proprietary tools.
Others presented talks detailing their use of open source graphics in
disparate fields and contexts. New Zealand's Helen Varley Jamieson
showcased interactive, multi-user performance art with UpStage, and led a live demonstration during the workshop hours. UpStage is a unique combination of shared whiteboard, avatar-based interaction, and live text-and-audio communication channel.
A bit closer to the typical open source developer, Novell's Jakub Steiner presented an in-depth look at the icon design process he uses for GNOME, SUSE Studio, and other projects, and how it has changed over the years, from the need to hand-produce individual sizes of thousands of raster icon files, to the more streamlined workflow available today using vector graphics. He also pointed out areas that still need work, such as the incomplete scriptability of Inkscape. Steiner and other designers generally build sets of related icons in a single Inkscape file (such as all of the folder-based icons for a desktop icon set); this allows them to define a single color or gradient and reuse it in every icon through cloning, which makes adjusting all of the icons at once possible. But to produce the final output, external scripts are necessary, opening up the Inkscape file, selecting a particular icon by its SVG ID, and saving it at a particular size. It is a vast improvement over a decade ago, but still has a ways to go.
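The export scripting Steiner describes can be sketched roughly as follows. This is a hypothetical helper, not his actual script; it assumes the modern Inkscape 1.x command line (`--export-id`, `--export-id-only`, `--export-filename`), and the icon IDs, file names, and sizes are made up for illustration:

```python
# Sketch of the kind of external script described above: exporting
# individual icons from a single multi-icon Inkscape SVG by SVG ID,
# at several raster sizes. Flags assume Inkscape 1.x; IDs and file
# names here are hypothetical.

def export_commands(svg_file, icon_ids, sizes=(16, 24, 48)):
    """Build one inkscape invocation per (icon ID, size) pair."""
    commands = []
    for icon_id in icon_ids:
        for size in sizes:
            commands.append([
                "inkscape", svg_file,
                f"--export-id={icon_id}",
                "--export-id-only",          # render only that object
                f"--export-width={size}",
                f"--export-filename={icon_id}-{size}.png",
            ])
    return commands

# Each resulting list can be handed to subprocess.run() to do the export.
cmds = export_commands("folder-icons.svg", ["folder", "folder-open"])
print(cmds[0][:3])
```

Because every icon in the file clones a shared color or gradient, rerunning one script like this after a single edit regenerates the whole set consistently, which is the workflow improvement Steiner was pointing to.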
Open source in the abstract
The talks were not limited just to professional reports "from the field," however. Several of the most engaging and challenging sessions were more abstract, and came from artists discussing their work and software practices in principle. Mirko Tobias Schaefer discussed the changing metaphors used in technology (from how we describe computers and networks in terms of physical machines to icon imagery), and how they both push and pull on society as a whole — an area of particular value to open source software as it grapples with how to incorporate more input from interface and interaction designers in the development process. Eric Schrijver also touched on this theme, observing that graphic design as a practice ignored the web for years, focusing instead on traditional print media, and the result was years of bad design on the web, and a culture of design-by-programmers that allows it to persist.
Two talks related the culture of open source to other creative
communities. Barry Threw discussed his work in the music arts world,
including his technical projects aimed at recapturing
audience-performer-composer interaction that was common in centuries past,
but was lost as music developed into a prepackaged, "read only" medium.
Threw's projects include the K-Bow, a Bluetooth-equipped
string bow that captures a wealth of live performance information from the
violin or cello — pressure, acceleration, twist, and more —
beyond recorded sound. The result is a richer record of the performance
event, which opens up more possibilities for the listener to study and
reproduce the technique, and for programmers to tweak and manipulate the
recording. The value of capturing this richer experience, which is more
than what is contained in
the final recording, has analogs for the visual artist as well, he said.
Artist Pete Ippel presented an overview of his recent work exploring the visual design patterns that arise naturally in the open source and open culture movements (such as through Etsy, Instructables, and Make), and how they relate to folk art traditions found in every society for thousands of years. Folk art, he said, is just art created "by the people," and reflects the community that creates it. In the same way, open source software is developed by the people; consequently, the sense of community found in folk art through the centuries resonates with the open source movement today, and trends in programming are analogous to trends in folk design.
Several talks offered the open source community more than simple
feedback from the user base, but went so far as to present a challenge to
the community. Denis Jacquerye, widely known from his work with the DejaVu fonts, discussed font design and
features for African languages, encouraging the community to build more
such fonts. African languages, even those based on Latin scripts, have
distinct orthographies and few have adequate coverage in open fonts.
Jacquerye went over the design challenges, but also emphasized the
importance of free open fonts for education, freedom of the press, and
information access in Africa. He noted that African language support can
seem intimidating at first, given that there are more than 2000 languages
spoken on the continent, but observed that half of the population is
covered by just the top 25 of those languages, which makes it a much more
manageable goal for open source projects.
Designer Ginger Coons introduced the Open Colour Standard (with a "u," she emphasized to a round of applause) project, a new effort to standardize a color definition model not controlled by a corporate entity such as an ink manufacturer. The goal is to produce color definitions that can be translated to real-world physical output formulas as easily as to on-screen digital images, from printer inks to fabric dyes to any other format. The process is just starting, and looking for interested participants.
Susan Spencer put out a call for open source developers interested in working on fashion design and sewing software, an area currently unserved by free software. Fashion design software is a niche dominated by expensive proprietary applications — she mentioned some that retailed in the $3000-$4000 per seat range, and even then came with a limited set of models that cannot be extended by the user. This closed and expensive software niche locks out many young and unfunded designers, in addition to limiting what those with creativity can do. She outlined the basic needs of fashion design software, from pattern-making to integration with fabric cutters, and listed several interesting possibilities that an open source programmer could tackle that the proprietary vendors will not. One example is extending a pattern to a different size — the process involves complex transformations along key seams, often in non-straightforward ways. The methods to perform such pattern resizing are centuries old, but they have never been implemented in software. Spencer's talk elicited enough of a response that a BOF session to discuss it further was added to the program.
The usual suspects
The artist and design-led talks were not the only dishes on the menu, of course. Representatives from the different projects also showcased new developments in their applications, as is tradition. Peter Sikking showed off early designs for a new interface model in the upcoming GEGL-based branch of GIMP. GEGL, the generic graphics library, is a graph-based image processing library that will become the new core of GIMP. Because GEGL represents all image editing as a connected series of operations ("nodes") on a graph, this will mean two important changes for the editor. First, it will make completely lossless editing possible; the existing .XCF file format will go away and be replaced by a format that simply stores the GEGL operations graph. Second, though, this new paradigm of image editing will require rethinking the user interface. Since all operations are undo-able, and because all operations are (in a sense) equal, Sikking is working on a new interface that represents them as a stack of individual operations that can be individually activated, deactivated, or hidden — much like raw photo editors use today.
Jasper van de Gronde presented a new drawing tool for Inkscape, diffusion curves. Diffusion curves are "free-form gradients" that let color emanate in smooth gradients outward from a spline, with user-controllable parameters. They permit artists to draw complex, painting-like images with very few curves and control points. As with GIMP's new features, though, the user interface is still under construction. Hin-Tak Leung spoke about color management and other new features in Ghostscript, Lukáš Tvrdý showed off Krita's new brush engines, and Peter Linnell gave a preview of the next release of Scribus.
The Open Font Library's (OFL) new site was launched at the beginning of the conference, showcasing new features such as Web Open Font Format (WOFF) previews, and OFL members Dave Crossland and Nicolas Spalinger presented talks on font design. Jon Phillips of Open Clip Art showed off the project's new site and the special framework written to support it, Aiki. Finally, the Blender Institute held an evening session that took audience members through the workflow involved in creating a 3-D animated film, from character design to modeling, rigging, animation, lighting, and final rendering. The team used real examples from its upcoming open movie project Sintel and the in-progress Blender 2.5 code base, marking the world debut of the footage.
Crossland also led a hands-on font design workshop centered around an
interactive game called "A, B, C" — one of several workshop sessions
spread out over the four days of the event. Some were centered around
projects planning for their next development cycle, others were more
tutorial-driven. One of the most important for the future of open graphics
development was the OpenRaster
session, led by Krita's Boudewijn Rempt. OpenRaster is a cross-application
standard that several projects are collaborating on under the Freedesktop.org Create banner. The goal is to create a flexible raster image format that will be documented and well-supported by all of the tools. The need for such a format comes from the reality that no one application works in isolation in a creative workflow; with a common format, Krita, MyPaint, GIMP, and a host of other programs can all be used together depending on whichever has the right tool for the moment.
The far out
As always, LGM's program also featured several talks that debuted new and unusual applications or developments. Photographer Alexandre Prokoudine demonstrated Darktable, a new photographic workflow tool. Darktable incorporates image management, batch operations, and geotagging, and is plug-in driven, so it can be modified and extended to fit any photographer's process.
The most widely-celebrated session of the entire conference, though, was Tom Lechner's Laidout. Lechner is an independent cartoonist who has been self-publishing his own books for years, and is evidently a gifted programmer to boot. Laidout is a tool he developed entirely on his own to simplify the task of impositioning his books (as referenced above, a feature not yet found in any other open source application). Rather than simply allow repositioning of pages on a larger canvas, though, Lechner has extended the layout engine in a swath of new and surprising ways as he takes on new projects.
Laidout can imposition images on non-rectangular pages, including on Möbius strips and unfolded 3-D polyhedra. It can also arbitrarily
rotate and deform images, manipulate them with meshes, and can edit mesh
gradients (i.e., gradients defined across a 2-D grid of points that can be
individually moved and warped, rather than gradients defined along a
straight line) in place, arbitrarily subdividing them for further
refinement. Lechner performed a live demo of mapping a 360-degree
spherical panoramic photo onto a triangle-based polyhedron model, which he
then unwrapped into a flat, printable shape by selecting the triangular
faces at will. The applause from the audience lasted nearly a minute.
When asked during the subsequent Q&A what interface toolkit Laidout was
written in, Lechner casually replied, "oh, I wrote it myself."
Pushing conferences in a different direction
LGM has always placed more of an emphasis on connecting users and developers than other open source conferences, but this year the difference that emphasis made was more noticeable. It was not perfect; several artists and designers mentioned informally that they would have liked to have had more direct discussions with the development teams about the future of the projects, but did not find the opportunity. Finding a way to do that, and to make it easier for users to get involved with the projects themselves, is a possibility for next year, according to organizer Louis Desjardin.
But LGM is distinct for putting the users of the software behind the
podium to talk about what they do, how the projects help them, and where
the projects hinder them. Too many other, general open source conferences
draw a line between users and developers — they are viewed as
complementary sets, which ultimately can lead to the mistake of
underestimating the user set and treating it generically as
those-people-who-don't-understand-how-to-program. It would not appear, for
example, that any sessions will be given by users (i.e., those not
involved in developing the software) at this year's
or Akademy conferences, even
though several of them are ostensibly about "connecting with users." Is it
any wonder, then, that open source projects often struggle with building good user experiences? Perhaps all of the conferences could take a page from LGM's book and carve out time in the schedule to listen to what users are actually doing on a day-to-day basis with the software in question.
After all, open source is about creating the tools that allow people to build and do creative things. This year's LGM showcased how well that works, which ought to reinforce its value to all of the developers who were there, and the feedback ought to help ensure that the next round of development empowers that user base even more.
Running a subscription-oriented publication on the net is an interesting
challenge, to say the least. Here at LWN, thanks to the generosity of our
readers, we have actually made a qualified success of it. Even more
challenging, though, is asking those readers to pay more in uncertain
economic times and in an industry where prices normally fall. Please read
on for a discussion of what we're doing and why.
But first! Let us try to distract you with shiny stuff. We have added a
few new features to the site:
LWN moved to the subscription model in September 2002, well over seven years
ago. The basic individual subscription rate was established at $5/month
then, and has not changed since. Over that time, baseline inflation in the
US has added up to just over 20% (according to the US government, which
would never lie to us about a thing like this), so that $5 buys rather less
than it did then. The value of the dollar has also declined significantly
since 2002, so the large portion of our readership which pays in other
currencies has seen a nice price decrease. That remains true even for
people in the Euro zone.
Additionally, official inflation rates become totally irrelevant when it
comes to large expenses like health insurance, which went up 40% last year
alone. Much to our surprise, the current US administration has not actually
fixed that problem for us.
All this explains why LWN lost an editor in March despite the fact that our
readers have been incredibly loyal to us during the whole economic roller
coaster ride. We have stabilized our finances, but we find ourselves in a
position of working at a pace which will certainly lead to eventual burnout.
Something needs to change
to enable us to address those problems and not only keep LWN alive
but continue to make it better in the coming years.
So we will be increasing our subscription rates as of June 14, 2010. The
new individual "Professional Hacker" rate will be $7/month, with the other
rates scaled accordingly. This increase, we hope, will offset the
increases we have seen, enable us to rebuild our finances, and, eventually,
allow us to bring staff back to its previous level. But that only works if
our subscribers do not leave in disgust; needless to say, we will hope you
will stay with us. In return, we'll make the best of the increase and,
with any luck at all, not do it again for a very long time.
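The arithmetic behind the new rate works out as a quick back-of-the-envelope check (using the rough 20% cumulative inflation figure cited above; exact CPI numbers would differ slightly):

```python
# Back-of-the-envelope check of the rate change: a $5/month rate from
# September 2002, eroded by roughly 20% cumulative US inflation,
# compared against the new $7 rate.
old_rate = 5.00            # $/month, set in 2002
inflation = 0.20           # approximate cumulative inflation since then

# What the old rate would need to be today just to keep pace:
inflation_adjusted = old_rate * (1 + inflation)      # 6.00

new_rate = 7.00
real_increase = new_rate / inflation_adjusted - 1    # about 16.7%

print(f"inflation-adjusted old rate: ${inflation_adjusted:.2f}")
print(f"real increase beyond that:   {real_increase:.1%}")
```

In other words, most of the nominal increase merely restores the 2002 purchasing power; the real increase is modest, which is what leaves room to rebuild finances and staffing.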
To answer a couple of anticipated questions: prepaid subscriptions
remain valid for the purchased period; the increase only affects
subscriptions purchased on or after June 14. Monthly
subscriptions are a bit more complicated. We have never believed that
our readers wanted to give us permission to charge their cards forever, so
monthly subscriptions have always had a maximum number of authorized
charges associated with them. All monthly subscribers will continue to be
charged the old rate for the number of months they had authorized before
this announcement was posted. Only when those subscribers explicitly
authorize further charges will the new rate come into effect.
Rates for group subscriptions will change by a roughly proportional amount;
we will be contacting our group subscribers at renewal time to discuss the
details.
We're a little nervous about this change; it's hard to ask for more from
the people who have already supported us so well for so long. But we
cannot really find a way around it. We very much hope that you will stick
with us as we work to build an even better and more interesting LWN in the
coming years.
Page editor: Jonathan Corbet
Inside this week's LWN.net Weekly Edition
- Security: OpenID Connect; New vulnerabilities in clamav, kdenetwork, kernel, transmission,...
- Kernel: 2.6.35 merge window part 3; What comes after suspend blockers; Symbolic links in "sticky" directories.
- Distributions: Peppermint OS: Another member of "Team Linux"; Vinux 3.0; Mandriva Linux 2010 Spring RC2; Rethinking UDS; reviews of MeeGo and Slackware.
- Development: Giggle: A Git GUI; Ardour, GCC in C++, gst123, KOffice, Pylons, ...
- Announcements: Google asks for delay in WebM license consideration; OLPC announces XO 3.0 partnership; Novell's financial results; Facebook alternative...