Suspend blockers first surfaced as wakelocks in February 2009. They were immediately and roundly criticized by the development community; in response, Android developer Arve Hjønnevåg made a long series of changes before eventually bowing to product schedules and letting the patches drop for some months. Encouraged by a number of developers at this year's Linux Foundation Collaboration Summit, Arve came back with a new version of the patch set. Several rounds of revisions later - each seemingly driven by a new set of developers arriving with new complaints - these patches have failed to get into the mainline and, at this point, probably never will.
In a number of ways, the situation looks pretty grim - an expensive failure of the kernel development process. Ted Ts'o described it this way:
Ted's comments point to what is arguably the most discouraging part of the suspend blocker story: the Android developers were given conflicting advice over the course of more than one year. They were told several times: fix X to get this code merged. But once they had fixed X, another group of developers came along and insisted that they fix Y instead. There never seemed to be a point where the job was done - the finish line kept moving every time they seemed to get close to it. The developers who had the most say in the matter did not, for the most part, weigh in until the last week or so, when they decisively killed any chance of this code being merged.
Meanwhile, in public, the Android developers were being criticized for not getting their code upstream and having their code removed from the staging tree. It can only have been demoralizing - and expensive too:
No doubt plenty of others would have long since given up and walked away.
There are plenty of criticisms which can be directed against Android, starting with the way they developed a short-term solution behind closed doors and shipped it in thousands of handsets before even trying to upstream the code. That is not the way the "upstream first" policy says things should be done; that policy is there to prevent just this sort of episode. Once the code has been shipped and applications depend on it, making any sort of change becomes much harder.
On the other hand, it clearly would not have been reasonable to expect the Android project to delay the shipping of handsets for well over a year while the kernel community argued about suspend blockers.
In any case, this should be noted: once the Android developers decided to engage with the kernel community, they did so in a persistent, professional, and solution-oriented manner. They deserve some real credit for trying to do the right thing, even when "the right thing" looks like a different solution than the one they implemented.
The development community can also certainly be criticized for allowing this situation to go on for so long before coming together and working out a mutually acceptable solution. It is hard to say, though, how we could have done better. Kernel developers generally see defending the quality of the kernel as a whole as part of their jobs; it is harder to convince them that helping others find the right solutions to their problems is part of the job as well. Kernel developers tend to be busy people. So, while it is unfortunate that so many of them did not jump in until motivated by the imminent merging of the suspend blocker code, it is also an entirely understandable expression of basic human nature.
Anybody who wants to criticize the process needs to look at one other thing: in the end it appears to have come out with a better solution. Suspend blockers work well for current Android phones, but they are a special-case solution which will not work well for other use cases, and might not even work well on future Android-based hardware. The proposed alternative, based on a quality-of-service mechanism, seems likely to be more flexible, more maintainable, and better applicable to other situations (including realtime and virtualization). Had suspend blockers been accepted, it would have been that much harder to implement the better solution later on.
And that points to how one of the best aspects of the kernel development process was on display here as well. We don't accept solutions which look like they may not stand the test of time, and we don't accept code just because it is already in wide use. That has a lot to do with how we've managed to keep the code base vital and maintainable through nearly twenty years of active development. Without that kind of discipline, the kernel would have long since collapsed under its own weight. So, while we can certainly try to find ways to make the contribution process less painful in situations like this, we cannot compromise on code quality and maintainability. After all, we fully expect to still be running (and developing) Linux-based systems after another twenty years.
The fifth annual Libre Graphics Meeting (LGM) took place May 27-30 in Brussels, Belgium, bringing together the developers of open source creative applications: GIMP, Krita, Inkscape, Scribus, Rawstudio, Blender, and a dozen other related projects, such as the Open Font Library and Open Clip Art Library. As is tradition, most of the projects gave update presentations, and both time and meeting space were set aside for teams to work and make plans.
Beyond that, one of the most interesting facets of this year's LGM was the emphasis placed on professional graphics users in the program. Artists and designers from every part of the globe were there, not just to listen, but to present — fully 20 of the 51 sessions were given by creative professionals (though some, naturally, are developers as well), in addition to the BOF meetings and interactive workshops. The result is that LGM helps narrow the gap that sometimes grows between project teams and end users, an approach that other open source communities could emulate.
For example, Ana Carvalho talked about Porto, Portugal-based Plana Press, a low-volume publisher specializing in art and independent comic books. Plana's first two books were laid out in Photoshop, but the company has been transitioning to an entirely open-source production pipeline ever since. Carvalho discussed the steps involved in taking a book from hand-drawn artwork to final print production, including those steps still not covered by free software, such as imposition, the process of arranging multiple pages together onto large format paper to facilitate high-speed professional printing.
Markus Holland-Moritz also discussed book production in his talk, which outlined the self-publishing of his photographic book about traveling through New Zealand. Holland-Moritz's book is primarily images; in the process of developing it he used Perl to automate several repetitive tasks and produced useful patches to Scribus that have since been integrated into the project's trunk. One is an image cache, which reduced the sixteen-minute wait he initially experienced when opening the 2.6-gigabyte Scribus file down to 20 seconds. The other is per-image compression settings; previously Scribus allowed document creators to select lossy or lossless image compression when producing PDF output, but Holland-Moritz needed to specify different settings for his photographic and graphical images. The process also led him to develop a custom text markup language based on TeX and an accompanying Scribus import filter.
Christopher Adams presented a publisher's perspective on the need for high-quality open fonts, such as those developed under the Open Font License. He first described common restrictions placed on commercially-produced fonts, even for publishers, such as the inability to embed some fonts in a PDF, and the inability to extend a font to cover special characters or accent marks needed for a particular project. He then took a tour through the open font landscape, showing off what he considered to be the highest quality open fonts, and showed the differences between them in practical terms — character set coverage, suitability for print versus screen display, the availability of different weights and widths in the same font for visual consistency, and so forth.
Professional publishing is always a big topic at LGM, but it is not the only source of feedback from design professionals. Several talks focused on using open source in the classroom, such as Lila Pagola's discussion of her experiences and occasional frustrations working open source software into her graphic design and photography curriculum for art students in Córdoba, Argentina. Despite the assumption on some people's part that Adobe has a stranglehold on art students' lab time, Pagola has successfully taught students with free software alongside proprietary tools.
Others presented talks detailing their use of open source graphics in disparate fields and contexts. New Zealand's Helen Varley Jamieson showcased interactive, multi-user performance art with UpStage, and led a live demonstration during the workshop hours. UpStage is a unique combination of shared whiteboard, avatar-based interaction, and live text-and-audio communication channel.
A bit closer to the typical open source developer, Novell's Jakub Steiner presented an in-depth look at the icon design process he uses for GNOME, SUSE Studio, and other projects, and how it has changed over the years, from the need to hand-produce individual sizes of thousands of raster icon files, to the more streamlined workflow available today using vector graphics. He also pointed out areas that still need work, such as the incomplete scriptability of Inkscape. Steiner and other designers generally build sets of related icons in a single Inkscape file (such as all of the folder-based icons for a desktop icon set); this allows them to define a single color or gradient and reuse it in every icon through cloning, which makes adjusting all of the icons at once possible. But to produce the final output, external scripts are necessary, opening up the Inkscape file, selecting a particular icon by its SVG ID, and saving it at a particular size. It is a vast improvement over a decade ago, but still has a ways to go.
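The batch-export step Steiner describes can be sketched with a small script. This is a hedged illustration only: the directory layout and icon IDs are hypothetical, and the Inkscape command-line flags shown (`--export-id`, `--export-width`, `--export-png`) are assumed from Inkscape's CLI of this era.

```python
# Sketch of the icon batch-export workflow described above: every icon in a
# set lives in one master SVG file, identified by its SVG ID, and is exported
# at several raster sizes. File layout and IDs here are illustrative only.
import subprocess

def export_command(svg_file, icon_id, size, out_dir="icons"):
    """Build the argv for exporting one icon at one raster size."""
    out_png = "%s/%dx%d/%s.png" % (out_dir, size, size, icon_id)
    return ["inkscape",
            "--export-id=%s" % icon_id,      # select one icon by SVG ID
            "--export-width=%d" % size,      # rasterize at this size
            "--export-png=%s" % out_png,
            svg_file]

def export_icon_set(svg_file, icon_ids, sizes=(16, 22, 32, 48)):
    """Export every icon in the master file at every requested size."""
    for icon_id in icon_ids:
        for size in sizes:
            subprocess.check_call(export_command(svg_file, icon_id, size))
```

A single shared gradient in the master file then propagates to every exported icon automatically on the next run of the script.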
The talks were not limited just to professional reports "from the field," however. Several of the most engaging and challenging sessions were more abstract, and came from artists discussing their work and software practices in principle. Mirko Tobias Schaefer discussed the changing metaphors used in technology (from how we describe computers and networks in terms of physical machines to icon imagery), and how they both push and pull on society as a whole — an area of particular value to open source software as it grapples with how to incorporate more input from interface and interaction designers in the development process. Eric Schrijver also touched on this theme, observing that graphic design as a practice ignored the web for years, focusing instead on traditional print media, and the result was years of bad design on the web, and a culture of design-by-programmers that allows it to persist.
Two talks related the culture of open source to other creative communities. Barry Threw discussed his work in the music arts world, including his technical projects aimed at recapturing audience-performer-composer interaction that was common in centuries past, but was lost as music developed into a prepackaged, "read only" medium. Threw's projects include the K-Bow, a Bluetooth-equipped string bow that captures a wealth of live performance information from the violin or cello — pressure, acceleration, twist, and more — beyond recorded sound. The result is a richer record of the performance event, which opens up more possibilities for the listener to study and reproduce the technique, and for programmers to tweak and manipulate the recording. The value of capturing this richer experience, which is more than what is contained in the final recording, has analogs for the visual artist as well, he said.
Artist Pete Ippel presented an overview of his recent work exploring the visual design patterns that arise naturally in the open source and open culture movements (such as through Etsy, Instructables, and Make), and how they relate to folk art traditions found in every society for thousands of years. Folk art, he said, is just art created "by the people," and reflects the community that creates it. In the same way, open source software is developed by the people; consequently, the sense of community found in folk art through the centuries resonates with the open source movement today, and trends in programming are analogous to trends in folk design.
Several talks went beyond simple feedback from the user base to present a direct challenge to the community. Denis Jacquerye, widely known for his work on the DejaVu fonts, discussed font design and features for African languages, encouraging the community to build more such fonts. African languages, even those based on Latin scripts, have distinct orthographies, and few have adequate coverage in open fonts. Jacquerye went over the design challenges, but also emphasized the importance of free open fonts for education, freedom of the press, and information access in Africa. He noted that African language support can seem intimidating at first, given that there are more than 2000 languages spoken on the continent, but observed that half of the population is covered by just the top 25 of those languages, which makes it a much more manageable goal for open source projects.
Designer Ginger Coons introduced the Open Colour Standard (with a "u," she emphasized to a round of applause) project, a new effort to standardize a color definition model not controlled by a corporate entity such as an ink manufacturer. The goal is to produce color definitions that can be translated to real-world physical output formulas - from printer inks to fabric dyes to any other format - just as easily as to on-screen digital images. The project is just getting started and is looking for interested participants.
Susan Spencer put out a call for open source developers interested in working on fashion design and sewing software, an area currently unserved by free software entirely. Fashion design software is a niche dominated by expensive proprietary applications — she mentioned some that retailed in the $3000-$4000 per seat range, and even then came with a limited set of models that cannot be extended by the user. This closed and expensive software niche locks out many young and unfunded designers, and limits what those with creativity can do. She outlined the basic needs of fashion design software, from pattern-making to integration with fabric cutters, and listed several interesting possibilities that an open source programmer could tackle that the proprietary vendors will not. One example is extending a pattern to a different size — the process involves complex transformations along key seams, often in non-straightforward ways. The methods to perform such pattern resizing are centuries old, but they have never been implemented in software. Spencer's talk elicited enough of a response that a BOF session to discuss it further was added to the program.
The artist and design-led talks were not the only dishes on the menu, of course. Representatives from the different projects also showcased new developments in their applications, as is tradition. Peter Sikking showed off early designs for a new interface model in the upcoming GEGL-based branch of GIMP. GEGL, the generic graphics library, is a graph-based image processing library that will become the new core of GIMP. Because GEGL represents all image editing as a connected series of operations ("nodes") on a graph, this will mean two important changes for the editor. First, it will make completely lossless editing possible; the existing .XCF file format will go away and be replaced by a format that simply stores the GEGL operations graph. Second, though, this new paradigm of image editing will require rethinking the user interface. Since all operations are undo-able, and because all operations are (in a sense) equal, Sikking is working on a new interface that represents them as a stack of individual operations that can be individually activated, deactivated, or hidden — much like raw photo editors use today.
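The idea of storing an operations graph rather than flattened pixels can be illustrated with a toy sketch. This mimics the concept only - it is not GEGL's actual API, and the class and method names are invented for illustration.

```python
# Toy illustration of graph-based, non-destructive editing: the "document"
# keeps the original pixels plus an ordered stack of operations, each of
# which can be toggled; rendering replays only the active operations.
# This is a concept sketch, not GEGL's API.

class Op:
    def __init__(self, name, fn):
        self.name, self.fn, self.active = name, fn, True

class Document:
    def __init__(self, pixels):
        self.source = pixels          # untouched original data
        self.ops = []                 # the editing history

    def apply(self, name, fn):
        self.ops.append(Op(name, fn))

    def render(self):
        data = list(self.source)
        for op in self.ops:
            if op.active:
                data = [op.fn(p) for p in data]
        return data

doc = Document([10, 20, 30])
doc.apply("brighten", lambda p: p + 5)
doc.apply("double", lambda p: p * 2)
print(doc.render())                   # both operations active
doc.ops[0].active = False             # "undo" brighten without losing it
print(doc.render())                   # only the doubling is applied
```

Because every edit is just a node in the history, any operation can be deactivated, reactivated, or reordered at any time - which is exactly what forces the interface rethink Sikking described.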
Jasper van de Gronde presented a new drawing tool for Inkscape, diffusion curves. Diffusion curves are "free-form gradients" that let color emanate in smooth gradients outward from a spline, with user-controllable parameters. They permit artists to draw complex, painting-like images with very few curves and control points. As with GIMP's new features, though, the user interface is still under construction. Hin-Tak Leung spoke about color management and other new features in Ghostscript, Lukáš Tvrdý showed off Krita's new brush engines, and Peter Linnell gave a preview of the next release of Scribus.
The Open Font Library's (OFL) new site was launched at the beginning of the conference, showcasing new features such as Web Open Font Format (WOFF) previews, and OFL members Dave Crossland and Nicolas Spalinger presented talks on font design. Jon Phillips of Open Clip Art showed off the project's new site and the special framework written to support it, Aiki. Finally, the Blender Institute held an evening session that took audience members through the workflow involved in creating a 3-D animated film, from character design to modeling, rigging, animation, lighting, and final rendering. The team used real examples from its upcoming open movie project Sintel and the in-progress Blender 2.5 code base, marking the world debut of the footage.
Crossland also led a hands-on font design workshop centered around an interactive game called "A, B, C" — one of several workshop sessions spread out over the four days of the event. Some were centered around projects planning for their next development cycle; others were more tutorial-driven. One of the most important for the future of open graphics development was the OpenRaster session, led by Krita's Boudewijn Rempt. OpenRaster is a cross-application standard that several projects are collaborating on under the Freedesktop.org Create banner. The goal is to create a flexible raster image format that will be documented and well-supported by all of the tools. The need for such a format comes from the reality that no one application works in isolation in a creative workflow; with a common format, Krita, MyPaint, GIMP, and a host of other programs can all be used together, depending on whichever has the right tool for the moment.
As always, LGM's program also featured several talks that debuted new and unusual applications or developments. Photographer Alexandre Prokoudine demonstrated Darktable, a new photographic workflow tool. Darktable incorporates image management, batch operations, and geotagging, and is plug-in driven, so it can be modified and extended to fit any photographer's process.
The most widely-celebrated session of the entire conference, though, was Tom Lechner's Laidout. Lechner is an independent cartoonist who has been self-publishing his own books for years, and is evidently a gifted programmer to boot. Laidout is a tool he developed entirely on his own to simplify the task of imposing his books (as noted above, a feature not yet found in any other open source application). Rather than simply allowing repositioning of pages on a larger canvas, though, Lechner has extended the layout engine in a swath of new and surprising ways as he takes on new projects.
Laidout can impose images onto non-rectangular pages, including Möbius strips and unfolded 3-D polyhedra. It can also arbitrarily rotate and deform images, manipulate them with meshes, and edit mesh gradients (i.e., gradients defined across a 2-D grid of points that can be individually moved and warped, rather than gradients defined along a straight line) in place, arbitrarily subdividing them for further refinement. Lechner performed a live demo of mapping a 360-degree spherical panoramic photo onto a triangle-based polyhedron model, which he then unwrapped into a flat, printable shape by selecting the triangular faces at will. The applause from the audience lasted nearly a minute. When asked during the subsequent Q&A what interface toolkit Laidout was written in, Lechner casually replied, "oh, I wrote it myself."
LGM has always placed more of an emphasis on connecting users and developers than other open source conferences, but this year the difference that emphasis made was more noticeable. It was not perfect; several artists and designers mentioned informally that they would have liked more direct discussions with the development teams about the future of the projects, but did not find the opportunity. Finding a way to do that, and to make it easier for users to get involved with the projects themselves, is a possibility for next year, according to organizer Louis Desjardin.
But LGM is distinct for putting the users of the software behind the podium to talk about what they do, how the projects help them, and where the projects hinder them. Too many other, general open source conferences draw a line between users and developers — the two are viewed as complementary sets, which ultimately can lead to the mistake of underestimating the user set and treating it generically as those-people-who-don't-understand-how-to-program. It would not appear, for example, that any sessions will be given by users (i.e. those not involved in developing the software) at this year's GUADEC or Akademy conferences, even though several of them are ostensibly about "connecting with users." Is it any wonder, then, that open source projects often struggle with building user experiences? Perhaps all of the conferences could take a page from LGM's book and carve out time in the schedule to listen to what users are actually doing on a day-to-day basis with the software in question.
After all, open source is about creating the tools that allow people to build and do creative things. This year's LGM showcased how well that works, which ought to reinforce its value to all of the developers who were there, and the feedback ought to help ensure that the next round of development empowers that user base even more.
But first! Let us try to distract you with shiny stuff. We have added a few new features to the site:
Filtering options (including the list of readers to filter) are managed in the My Account area.
LWN moved to the subscription model in September 2002, well over seven years ago. The basic individual subscription rate was established at $5/month then, and has not changed since. Over that time, baseline inflation in the US has added up to just over 20% (according to the US government, which would never lie to us about a thing like this), so that $5 buys rather less than it did then. The value of the dollar has also declined significantly since 2002, so the large portion of our readership which pays in other currencies has seen a nice price decrease. That is still true even for people in the Euro zone.
Additionally, official inflation rates become totally irrelevant when it comes to large expenses like health insurance, which went up 40% last year alone. Much to our surprise, the current US administration has not actually fixed that problem for us.
All this explains why LWN lost an editor in March despite the fact that our readers have been incredibly loyal to us during the whole economic roller coaster ride. We have stabilized our finances, but we find ourselves in a position of working at a pace which will certainly lead to eventual burnout. Something needs to change to enable us to address those problems and not only keep LWN alive but continue to make it better in the coming years.
So we will be increasing our subscription rates as of June 14, 2010. The new individual "Professional Hacker" rate will be $7/month, with the other rates scaled accordingly. This increase, we hope, will offset the increases we have seen, enable us to rebuild our finances, and, eventually, allow us to bring staff back to its previous level. But that only works if our subscribers do not leave in disgust; needless to say, we hope you will stay with us. In return, we'll make the best of the increase and, with any luck at all, not do it again for a very long time.
To answer a couple of anticipated questions: prepaid subscriptions remain valid for the purchased period; the increase only affects subscriptions purchased on or after June 14. Monthly subscriptions are a bit more complicated. We have never believed that our readers wanted to give us permission to charge their cards forever, so monthly subscriptions have always had a maximum number of authorized charges associated with them. All monthly subscribers will continue to be charged the old rate for the number of months they had authorized before this announcement was posted. Only when those subscribers explicitly authorize further charges will the new rate come into effect.
Rates for group subscriptions will change by a roughly proportional amount; we will be contacting our group subscribers at renewal time to discuss the new rates.
We're a little nervous about this change; it's hard to ask for more from the people who have already supported us so well for so long. But we cannot really find a way around it. We very much hope that you will stick with us as we work to build an even better and more interesting LWN in the future.
When last we looked in on OpenID, it was close to finalizing the OpenID 2.0 specification. That was in 2007; since that time, various other identity management solutions have come about and have been more widely adopted, OAuth in particular. One of the architects of OpenID, David Recordon, has put out an idea (or "strawman" as he calls it) for a new API that combines the best of OpenID and OAuth into "OpenID Connect".
There are a number of shortcomings of OpenID that Recordon and others would like to see addressed. In the three years since the last revision, the internet has not stood still, but OpenID has. OpenID works reasonably well for web sites, but is much more difficult to use for things like desktop widgets or mobile applications. A bigger problem is that OpenID's user-centric nature has made those users less "valuable" to web sites, which results in fewer sites adopting OpenID.
OpenID is structured such that users need only share a limited amount of data (typically just a URL) with a site in order to register with it. That is good from a privacy perspective, but doesn't give site owners information that they want, like name, email address, photo, and so on. According to Chris Messina—who originated the OpenID Connect concept and name—that makes OpenID users into second-class citizens: "Because OpenID users share less information with third parties, they are perceived as being 'less valuable' than email-based registrants or users that connect to their Facebook or Twitter accounts."
More and more sites are farming out their identity management to big sites like Facebook and Twitter using OAuth. Messina and Recordon's idea is to reimplement OpenID atop OAuth 2.0, which would leverage that existing—widely adopted—API for identity management. It would also allow OpenID to become simpler for web site operators to implement. Recordon pointed out some of the problems he has heard about:
Based on that, one might wonder why OpenID doesn't just adopt OAuth, rather than build atop it, but there is an important distinction between the two. OpenID Connect would still decentralize the storage of user information and allow the user-centric nature of OpenID to survive. Users would be able to choose their provider or run their own that stored their personal information. That way, users would get to choose whom to trust or to only trust their own server.
Another problem that OpenID Connect hopes to solve is to simplify things for users. Right now, users have to remember and type in a URL that corresponds to their OpenID provider, or click on multiple buttons for popular providers (which leads to the so-called NASCAR problem where there are multiple logos as buttons). OpenID Connect would allow for simpler URLs or even email addresses as identifiers.
It is important to recognize that this proposal is being driven by the fact that OpenID adoption has largely stalled. That has happened because the sites that folks want to use want a little — or a lot — more information about those who are signing up for or using their services. There is a trade-off, clearly; it is not unreasonable for site owners to require more information as a kind of payment, as long as they are up front about it. But the privacy conscious are likely to remain marginalized as the demands for information increase.
While there currently is a lot of noise being made about privacy concerns for sites like Facebook, there appears to be little actual action about it by most users. Privacy just does not seem to be something that is high on most users' priority lists, or, perhaps, Scott McNealy was right: "You have zero privacy anyway ... Get over it." OpenID Connect seems like a reasonable idea, overall, but as long as the majority are happy with the current OAuth-based systems, it is a little hard to see it making any headway. Yes, it may be used by a small minority of internet users, but it is likely to require just enough effort that most will not take advantage of it. It would seem that many are already "over it".
The siphoned documents, supposedly stolen by Chinese hackers or spies who were using the Tor network to transmit the data, were the basis for Wikileaks founder Julian Assange's assertion in 2006 that his organization had already "received over one million documents from 13 countries" before his site was launched, according to the article in The New Yorker.
Package(s): clamav
CVE #(s): CVE-2010-1639 CVE-2010-1640
Created: May 27, 2010
Updated: March 14, 2011
From the Mandriva advisory:
The cli_pdf function in libclamav/pdf.c in ClamAV before 0.96.1 allows remote attackers to cause a denial of service (crash) via a malformed PDF file, related to an inconsistency in the calculated stream length and the real stream length (CVE-2010-1639).
Off-by-one error in the parseicon function in libclamav/pe_icons.c in ClamAV 0.96 allows remote attackers to cause a denial of service (crash) via a crafted PE icon that triggers an out-of-bounds read, related to improper rounding during scaling (CVE-2010-1640).
Created: May 27, 2010
Updated: April 21, 2011
From the KDE advisory:
In some versions of KGet (2.4.2) a dialog box is displayed allowing the user to choose the file to download out of the options offered by the metalink file. However, KGet will simply go ahead and start the download after some time - even without prior acknowledgment of the user, and overwriting already-existing files of the same name. (CVE-2010-1511)
Created: May 31, 2010
Updated: September 23, 2010
Description: From the Red Hat bugzilla:
The existing [btrfs] code would have allowed you to clone a file that was only open for writing. Not an expected behaviour.
Created: May 31, 2010
Updated: June 2, 2010
Description: From the Red Hat bugzilla:
A vulnerability was reported to Debian for POE::Component::IRC, where it did not remove carriage returns and line feeds. This affects tools or IRC bots using the perl module, and can be used to execute arbitrary IRC commands by passing an argument such as "some text\rQUIT" to the 'privmsg' handler, which would cause the client to disconnect from the server.
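The injection class described here stems from IRC being a line-based protocol: an argument containing a carriage return or line feed smuggles an extra command onto the wire. A minimal sketch of the idea behind the fix (in Python rather than Perl, and not the actual POE::Component::IRC patch):

```python
# IRC frames each command with CR-LF, so any CR or LF embedded in a message
# argument terminates the current command early and injects a new one.
# Stripping those characters before building the protocol line closes the
# hole. This is an illustration of the bug class, not the real fix.

def sanitize(arg):
    """Remove carriage returns and line feeds from a protocol argument."""
    return arg.replace("\r", " ").replace("\n", " ")

def privmsg(target, text):
    """Build a single, safe PRIVMSG protocol line."""
    return "PRIVMSG %s :%s\r\n" % (sanitize(target), sanitize(text))

# The malicious input from the advisory becomes a harmless literal string:
line = privmsg("#chan", "some text\rQUIT")
assert "\r" not in line[:-2] and "\n" not in line[:-2]
```

Without the sanitization step, the example input would reach the server as two lines, the second being a bare QUIT command - exactly the disconnect described in the advisory.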
Created: June 2, 2010
Updated: June 2, 2010
Description: The rhn-client-tools utilities fail to set secure permissions on the loginAuth.pkl file, allowing local users to manipulate it. The result can be unwanted package downloads or the manipulation of action lists associated with the system's profile.
Package(s): transmission
Created: June 1, 2010    Updated: June 2, 2010
Description: From the Gentoo advisory:
Multiple stack-based buffer overflows in the tr_magnetParse() function in libtransmission/magnet.c have been discovered. A remote attacker could cause a Denial of Service or possibly execute arbitrary code via a crafted magnet URL with a large number of tr or ws links.
Page editor: Jake Edge
Brief items

The 2.6.35-rc1 kernel was released on May 30. Changes merged since last week's summary include "ramoops" (for saving oops information in persistent memory), direct I/O support in Btrfs, and some changes to how truncate() is handled. See the separate article below for a summary of changes; the full changelog has all the details.
Stable updates: 2.6.32.15 was released on June 1. It removes two patches which created problems for 2.6.32.14 users; only those who have experienced difficulties need to think about upgrading.
How is anyone supposed to use this? What are the semantics of this thing? What are the units of its return value? What is the base value of its return value? Does it return different times on different CPUs? I assume so, otherwise why does sched_clock_cpu() exist? <looks at the sched_clock_cpu() documentation, collapses in giggles>
The ACPI BIOS is the standard way of getting at processor idle states in the x86 world. So why would Linux want to move away from ACPI for its cpuidle driver? Len explains:
The motivating factor appears to be a BIOS bug shipping on Dell systems for some months now which disables a number of idle states. As a result, Len's test system takes 100W of power when using the ACPI idle code; when idle states are handled directly, power use drops to 85W. That seems like a savings worth having. The fact that Linux now uses significantly less power than certain other operating systems - which are dependent on ACPI still - is just icing on the cake.
In general, it makes sense to use hardware features directly in preference to BIOS solutions when we have the knowledge necessary to do so. There can be real advantages in eliminating the firmware middleman in such situations. It's nice to see a chip vendor - which certainly has the requisite knowledge - supporting the use of its hardware in this way.

Ambient light sensors, meanwhile, still lack a common kernel interface; one developer noticed this problem and suggested that it should be fixed: "This is very important! We appear to be making a big mess which we can never fix up."
As it happens, the developers of drivers for these sensors tried to solve this problem earlier this year. That work culminated in a pull request asking Linus to accept the ambient light sensors framework into the 2.6.34 kernel. That pull never happened, though; Linus thought that these sensors should just be treated as another (human) input device, and others requested that it be expanded to support other types of sensors. This framework has languished ever since.
Perhaps the light sensor framework wasn't ready, but the end result is that its developers have gotten discouraged and every driver going into the system is implementing a different, incompatible API. Other drivers are waiting for things to stabilize; Alan Cox commented: "We have some intel drivers to submit as well when sanity prevails." It's a problem clearly requiring a solution, but it's not quite clear who will make another try at it or when that could happen.
Kernel development news

Linus Torvalds made the 2.6.35-rc1 release on May 30. A relatively small number of changes have been merged since last week's summary; the most significant are summarized here.
User-visible changes include:
Changes visible to kernel developers include:
void call_usermodehelper_setfns(struct subprocess_info *info, int (*init)(struct subprocess_info *info), void (*cleanup)(struct subprocess_info *info), void *data);
The new init() function will be called from the helper process just before executing the helper function. There is also a new function:
call_usermodehelper_fns(char *path, char **argv, char **envp, enum umh_wait wait, int (*init)(struct subprocess_info *info), void (*cleanup)(struct subprocess_info *), void *data)
This variant is like call_usermodehelper(), but it allows the specification of the initialization and cleanup functions at the same time.
As is always the case, a few things were not merged. In the end, suspend blockers did not make it; there was really no question of that given the way the discussion went toward the end of the merge window. The fanotify file notification interface did not go in, despite the lack of public opposition. Also not merged was the latest uprobes posting. Concurrency-managed workqueues remain outside of the mainline, as does a set of patches meant to prepare the ground for that feature. Transparent hugepages did not go in, but it was probably a bit early for that code in any case. The open by handle system calls went through a bunch of revisions prior to and during the merge window, but remain unmerged. A number of these features can be expected to try again in 2.6.36; others will probably vanish.
All told, some 8,113 non-merge changesets were accepted during the 2.6.35 merge window - distinctly more than the 6,032 merged during the 2.6.34 window. Linus's announcement suggests that a few more changes might make their way in after the -rc1 release, but that number will be small. Almost exactly 1000 developers have participated in this development cycle so far. As Linus noted in the 2.6.35-rc1 announcement, the development process continues to look healthy.

By the time the merge window opened, the suspend blocker patches appeared to be headed for the mainline, and a pull request was sent to Linus. All that remained was to see whether Linus actually pulled it. That did not happen; by the end of the merge window, the newly reinvigorated discussion had made that outcome unsurprising. But the discussion which destroyed any chance of getting that code in has, in the end, yielded the beginnings of an approach which may be acceptable to all participants. This article will take a technical look at the latest round of objections and the potential solution.
As a reminder, suspend blockers (formerly "wakelocks") came about as part of the power management system used on Android phones. Whenever possible, the Android developers want to put the phone into a fully suspended state, where power consumption is minimized. The Android model calls for automatic ("opportunistic") suspend to happen even if there are processes which are running. In this way, badly-written programs are prevented from draining the battery too quickly.
But a phone which is suspended all the time, while it does indeed run a long time on a single charge, is also painful to use. So there are times when the phone must be kept running; these times include anytime that the display is on. It's also important to not suspend the phone when interesting things are happening; that's where suspend blockers come in. The arrival of a key event, for example, will cause a suspend blocker to be obtained within the kernel; that blocker will be released after the event has been read by user space. The user-space application, meanwhile, takes a suspend blocker of its own before reading events; that will keep the system running after the kernel releases the first blocker. The user-space blocker is only released once the event has been fully processed; at that point, the phone can suspend.
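The overlapping handoff described above can be modeled with a simple counter. The following toy Python sketch is purely illustrative - the real mechanism lives in the kernel and Android's userspace libraries, and the SuspendBlockers class is a hypothetical stand-in - but it shows why the system stays awake until both the kernel and the application have released their blockers:

```python
import threading

class SuspendBlockers:
    """Toy model of suspend blocker counting (illustration only)."""
    def __init__(self):
        self._count = 0
        self._lock = threading.Lock()

    def acquire(self):
        with self._lock:
            self._count += 1

    def release(self):
        with self._lock:
            self._count -= 1

    def may_suspend(self):
        # The system may suspend only when no blockers are held
        with self._lock:
            return self._count == 0

blockers = SuspendBlockers()
blockers.acquire()              # kernel: key event arrives
blockers.acquire()              # app: taken before reading the event
blockers.release()              # kernel: event has been read by user space
print(blockers.may_suspend())   # False - the app is still processing
blockers.release()              # app: event fully handled
print(blockers.may_suspend())   # True - the phone may now suspend
```

Because the application takes its own blocker before the kernel drops the event's blocker, the count never reaches zero in between, closing the race where the phone could suspend with an unprocessed event.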
The latest round of objections included some themes which had been heard before: in particular, the suspend blocker ABI, once added to the kernel, must be maintained for a very long time. Since there was a lot of unhappiness with that ABI, it's not surprising that many kernel developers did not want to be burdened with it indefinitely. There are also familiar concerns about the in-kernel suspend blocker calls spreading to "infect" increasing numbers of drivers. And the idea that the kernel should act to protect the system against badly-written applications remains controversial; some simply see that approach as making a more robust system, while others see it as a recipe for the proliferation of bad code.
Many developers have long maintained that suspend should be handled like any other idle state, with the decision driven by quality-of-service (QOS) constraints. In other words, using cpuidle, current kernels already implement the "opportunistic suspend" idea - for the set of sleep states known to the cpuidle code now. On x86 hardware, a true "suspend" is a different hardware state than the sleep states used by cpuidle, but (1) the kernel could hide those differences, and (2) architectures which are more oriented toward embedded applications tend to treat suspend as just another idle state already. There are signs that x86 is moving in the same direction, where there will be nothing all that special about the suspended state.
That said, there are some differences at the software level. Current idle states are only entered when the system is truly idle, while opportunistic suspend can happen while processes are running. Idle states do not stop timers within the kernel, while suspend does. Suspend, in other words, is a convenient way to bring everything to a stop - whether or not it would stop of its own accord - until some sort of sufficiently interesting event arrives. The differences appear to be enough - for now - to make a "pure" QOS-based implementation impossible; things can head in that direction, though, so it's worth looking at that vision.
To repeat: current CPU idle states are chosen based on the QOS requirements indicated by the kernel. If some kernel subsystem claims that it needs to run with latencies measured in microseconds, the kernel knows that it cannot use a deep sleep state. Bringing suspend into this model will probably involve the creation of a new QOS level, often called "QOS_NONE", which specifies that any amount of latency is acceptable. If nothing in the system is asking for a QOS greater than QOS_NONE, the kernel knows that it can choose "suspend" as an idle state if that seems to make sense. Of course, the kernel would also have to know that any scheduled timers can be delayed indefinitely; the timer slack mechanism already exists to make that information available, but this mechanism is new and almost unused.
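The timer slack mechanism mentioned above is visible from user space via prctl(). This Linux-only sketch (the constants come from linux/prctl.h; the helper names are our own) lets a process declare how far its timers may be deferred - a very large value approximates "delay indefinitely":

```python
import ctypes
import ctypes.util

# Constants from linux/prctl.h
PR_SET_TIMERSLACK = 29
PR_GET_TIMERSLACK = 30

libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)

def set_timer_slack(nanoseconds):
    """Allow the kernel to defer this thread's timers by up to
    the given number of nanoseconds."""
    if libc.prctl(PR_SET_TIMERSLACK, ctypes.c_ulong(nanoseconds), 0, 0, 0) != 0:
        raise OSError(ctypes.get_errno(), "prctl(PR_SET_TIMERSLACK) failed")

def get_timer_slack():
    """PR_GET_TIMERSLACK returns the current slack as the return value."""
    return libc.prctl(PR_GET_TIMERSLACK, 0, 0, 0, 0)

set_timer_slack(50_000_000)   # permit up to 50ms of timer slack
print(get_timer_slack())
```

A QOS_NONE jail could, conceptually, combine a constraint like this with an effectively unbounded slack value, telling the kernel that nothing in the process is latency-sensitive.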
In a system like this, untrusted applications could be run in some sort of jail (a control group, say) where they can be restricted to QOS_NONE. In some versions, the QOS level of that cgroup is changed dynamically between "normal" and QOS_NONE depending on whether the system as a whole thinks it would like to suspend. Once untrusted applications are marked in this way, they can no longer prevent the system from suspending - almost.
One minor difficulty that comes in is that, if suspend is an idle state, the system must go idle before suspending becomes an option. If the application just sits in the CPU, it can still keep the system as a whole from suspending. Android's opportunistic suspend is designed to deal with this problem; it will suspend the system regardless of what those applications are trying to do. In the absence of this kind of forced suspend, there must be some way to keep those applications from keeping the system awake.
One intriguing idea was to state that QOS_NONE means that a process might be forced to wait indefinitely for the CPU, even if it is in a runnable state; the scheduler could then decree the system to be idle if only QOS_NONE processes are runnable. Peter Zijlstra worries that not running runnable tasks will inevitably lead to all kinds of priority and lock inversion problems; he does not want to go there. So this approach did not get very far.
An alternative is to defer any I/O operations requested by QOS_NONE processes when the system is trying to suspend. A process which is waiting for I/O goes idle naturally; if one assumes that even the most CPU-hungry application will do I/O eventually, it should be possible to block all processes this way. Another is to have a user-space daemon which informs processes that it's time to stop what they are doing and go idle. Any process which fails to comply can be reminded with a series of increasingly urgent signals, culminating in SIGKILL if need be.
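The "increasingly urgent signals" idea is easy to sketch in user space. This hypothetical helper (not any proposed kernel interface) asks a process to stop politely with SIGTERM and falls back to SIGKILL if it does not comply:

```python
import os
import signal
import subprocess
import time

def ask_then_force(pid, grace=0.5):
    """Send escalating signals until the process is gone."""
    for sig in (signal.SIGTERM, signal.SIGKILL):
        try:
            os.kill(pid, sig)
        except ProcessLookupError:
            return          # process already exited
        time.sleep(grace)   # give it a moment to comply

# Demonstration: a child that would otherwise run for a minute
child = subprocess.Popen(["sleep", "60"])
ask_then_force(child.pid, grace=0.2)
child.wait()
print(child.returncode < 0)   # True: terminated by a signal
```

A real suspend daemon would presumably track which processes hold it up and apply policy before escalating, but the signal mechanics are no more complicated than this.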
Approaches like this can be implemented, and they may well be the long-term solution. But it's not an immediate solution. Among other things, a purely QOS-based solution will require that drivers change the system's overall QOS level in response to events. When something interesting happens, the system should not be allowed to suspend until user space has had a chance to respond. So important drivers will need to be augmented with internal QOS calls - kernel-space suspend blockers in all but name, essentially. Timers will need to be changed so that those which can be delayed indefinitely do not prevent the system from suspending. It might also be necessary to temporarily pass a higher level of QOS to applications when waking them up to deal with events. All of this can probably be done in a way that can be merged, but it won't solve Android's problem now.
So what we may see in the relatively near future is a solution based on an approach described by Alan Stern. Alan's idea retains the use of forced suspend, though not quite in the opportunistic mode. Instead, there would be a "QOS suspend" mode attainable by explicitly writing "qos" to /sys/power/state. If there are no QOS constraints active when "QOS suspend" is requested, the system will suspend immediately; otherwise, the process writing to /sys/power/state will block until those constraints are released. Additionally, there would be a new QOS constraint called QOS_EVENTUALLY which is compatible with any idle state except full suspend. These constraints - held only within the kernel - would block suspend when things are happening.
In other words, Android's kernel-space suspend blockers turn into QOS_EVENTUALLY constraints. The difference is that QOS terms are being used, and the kernel can make its best choice on how those constraints will be met.
There are no user-space suspend blockers in Alan's approach; instead, there is a daemon process which tries to put the system into the "QOS suspend" state whenever it thinks that nothing interesting is happening. Applications could communicate with that daemon to request that the system not be suspended; the daemon could then honor those requests (or not) depending on whatever policy it implements. Thus, the system suspends when both the kernel and user space agree that it's the right thing to do, and it doesn't require that all processes go idle first. This mechanism also makes it easy to track which processes are blocking suspend - an important requirement for the Android folks.
In summary, as Alan put it:
Android developer Brian Swetland agreed, saying "...from what I can see it certainly seems like this model provides us with the functionality we're looking for." So we might just have the form of a real solution.
There are a number of loose ends to tie down, of course. Additionally, various alternatives are still being discussed; one approach would replace user-space wakelocks with a special device which can be used to express QOS constraints, for example. There is also the little nagging issue that nobody has actually posted any code. That problem notwithstanding, it seems like there could be a way forward which would finally break the roadblock that has kept so much Android code out of the kernel for so long.
Security problems that exploit badly written programs by placing symbolic links in /tmp are legion. This kind of flaw has existed in applications going back to the dawn of UNIX time, and new ones get introduced regularly. So a recent effort to change the kernel to avoid these kinds of problems would seem, at first glance anyway, to be welcome. But some kernel hackers are not convinced that the core kernel should be fixing badly written applications.
These /tmp symlink races are in a class of security vulnerabilities known as time-of-check-to-time-of-use (TOCTTOU) bugs. For /tmp files, typically a buggy application will check to see if a particular filename exists and/or if the file has a particular set of characteristics; if the file passes that test, the program uses it. An attacker exploits this by racing to put a symbolic link or different file in /tmp between the time of the check and the open or create. That allows the attacker to bypass whatever the checks are supposed to enforce.
For programs with normal privilege levels, these attacks can cause a variety of problems, but don't lead to system compromise. But for setuid programs, an attacker can use the elevated privileges to overwrite arbitrary files in ways that can lead to all manner of ugliness, including complete compromise via privilege escalation. There are various guides that describe how to avoid writing code with this kind of vulnerability, but the flaw still gets reported frequently.
Ubuntu security team member Kees Cook proposed changing the kernel to avoid the problem, not by removing the race, but by stopping programs from following the symlinks that get created. "Proper" fixes in applications will completely avoid the race by creating random filenames that get opened with O_CREAT|O_EXCL. But, since these problems keep cropping up after multiple decades of warnings, perhaps another approach is in order. Cook adapted code from the Openwall and grsecurity kernels that did just that.
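The "proper" fix is worth showing next to the buggy pattern. In this sketch (racy_create() and safe_create() are illustrative names, not anything from the patches), the first function has the classic TOCTTOU window, while the second creates a random name atomically with O_CREAT|O_EXCL - the same thing mkstemp() does under the hood:

```python
import os

def racy_create(path):
    # Buggy TOCTTOU pattern: between the existence check and the
    # open, an attacker can drop a symlink at the predictable path.
    if os.path.exists(path):
        raise FileExistsError(path)
    return open(path, "w")          # would happily follow a planted symlink

def safe_create(directory="/tmp"):
    # Proper fix: random name, created atomically with O_CREAT|O_EXCL;
    # the open fails rather than following anything left behind.
    fd = None
    while fd is None:
        path = os.path.join(directory, "app-%s" % os.urandom(8).hex())
        try:
            fd = os.open(path, os.O_CREAT | os.O_EXCL | os.O_WRONLY, 0o600)
        except FileExistsError:
            pass                    # name collision; extremely unlikely, retry
    return fd, path

fd, path = safe_create()
print(os.path.exists(path))   # True
os.close(fd)
os.unlink(path)
```

Because O_EXCL makes the kernel refuse to open an existing file (or a symlink), there is no window for an attacker to exploit - no check is ever separated from the use.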
Since the problems occur in shared directories (like /tmp and /var/tmp) which are world-writable, but with the "sticky bit" turned on so that users can only delete their own files, the patch restricts the kinds of symlinks that can be followed in sticky directories. In order for a symlink in a sticky directory to be followed, it must either be owned by the follower, or the directory and symlink must have the same owner. Since shared temporary directories are typically owned by root, and random attackers cannot create symlinks owned by root, this would eliminate the problems caused by /tmp file symlink races.
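The proposed check can be sketched in user space. Here would_follow() is a hypothetical helper mirroring the rule as described above - it is not the actual patch code, which lives in the kernel's path-walking logic:

```python
import os
import stat
import tempfile

def would_follow(link_path, follower_uid):
    """Hypothetical userspace mirror of the proposed rule: in a sticky,
    world-writable directory, follow a symlink only if the follower owns
    it, or if it has the same owner as the directory itself."""
    dir_st = os.stat(os.path.dirname(link_path))
    sticky_shared = (dir_st.st_mode & stat.S_ISVTX) and \
                    (dir_st.st_mode & stat.S_IWOTH)
    if not sticky_shared:
        return True                 # restriction applies only to /tmp-like dirs
    link_st = os.lstat(link_path)   # lstat: examine the link itself
    return link_st.st_uid == follower_uid or link_st.st_uid == dir_st.st_uid

# Demonstration in a private directory with the same mode as /tmp
d = tempfile.mkdtemp()
os.chmod(d, 0o1777)                 # sticky bit set, world-writable
link = os.path.join(d, "planted")
os.symlink("/etc/passwd", link)
print(would_follow(link, os.getuid()))   # True: we own the symlink
```

An attacker's symlink in root-owned /tmp would fail both tests - it is owned by neither the victim nor root - so the victim's open() would be refused.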
The first version of the patch elicited a few suggestions, and an ACK by Serge Hallyn, but no complaints. Cook obviously did a fair amount of research into the problem and anticipated some objections from earlier linux-kernel discussions, which he linked to in the post. He also linked to a list of 243 CVE entries that mention /tmp—not all are symlink races, but many of them are. When Cook revised and reposted the patch, though, a few complaints cropped up.
For one thing, Cook had anticipated that VFS developers would object to putting his test into that code, so he put it into the capabilities checks (cap_inode_follow_link()) instead. That didn't sit well with Eric Biederman, who said:
Alan Cox agreed that it should go into SELinux or some specialized Linux security module (LSM). He also suggested that giving each user their own /tmp mountpoint would solve the problem as well, without requiring any kernel changes: "Give your users their own /tmp. No kernel mods, no misbehaviours, no weirdomatic path walking hackery. No kernel patch needed that I can see."
But Cook and others are not convinced that there are any legitimate applications that require the ability to follow these kinds of symlinks. Given that following them has been a source of serious security holes, why not just fix it once and for all in the kernel? One could argue that changing the behavior would violate the POSIX standard—one of the objections Cook anticipated—but that argument may be a bit weak. Ted Ts'o believes that POSIX doesn't really apply because the sticky bit isn't in the standard:
Per-user /tmp directories might solve the problem, but come with an administrative burden of their own. Eric Paris notes that it might be a better solution, but it doesn't come for free:
Ts'o agrees: "I do have a slight preference against per-user /tmp mostly because it gets confusing for administrators, and because it could be used by rootkits to hide themselves in ways that would be hard for most system administrators to find." Based on that and other comments, Cook revised the patches again, moving the test into VFS, rather than trying to come in through the security subsystem.
In addition, he changed the code so that the new behavior defaulted "off" to address one of the bigger objections. Version 3 of the patch was posted on June 1, and has so far only seen comments from Al Viro, who doesn't seem convinced of the need for the change, but was nevertheless discussing implementation details.
It may be that Viro and other filesystem developers—Christoph Hellwig did not seem particularly in favor of the change for example—will oppose this change. It is, at some level, a band-aid to protect poorly written applications, but it also provides a measure of protection that some would like to have. As Cook pointed out, the Ubuntu kernel already has this protection, but he would like to see that protection extended to all kernel users. Whether that happens remains to be seen.
Patches and updates
Core kernel code
Filesystems and block I/O
Page editor: Jonathan Corbet
News and Editorials
The first question that springs to mind when hearing of a new Linux distribution is not "what does it do?" but "why?" It would seem by now that virtually every possible angle has been covered, and that a Linux distribution must exist for almost any use case one could conceive of. Yet the recently-announced Peppermint Linux is slightly different in that it seeks to bridge the gap between standard desktop computing and "cloud" computing.
Peppermint is a fourth-generation Linux distribution: it is based on Linux Mint, which is in turn based on Ubuntu, which is based on Debian. It uses the LXDE desktop and Mozilla Prism, Mozilla's Firefox-based site-specific browser, to run web-based applications more like standard applications. Aside from a different set of default applications, slight customization of Prism, and some pepperminty artwork, there's not a great deal of difference between Peppermint and Linux Mint's LXDE and Fluxbox editions. That's not surprising: Peppermint contributor Kendall Weaver also contributes to Linux Mint and Lubuntu. Shane Remington is responsible for web development and marketing for Peppermint, and Nick Canupp handles the forums and bug tracking.
One of Peppermint's most distinguishing features may be the attention paid to marketing. It's unusual for a fledgling distribution to focus intently on marketing, but Remington feels that a lack of marketing is one of the reasons that Linux has not won more converts.
Given that Weaver is already contributing to other distributions, why would another distribution be necessary?
Originally the concept was rather simple, we were going to take Linux Mint and make it "spicier" (hence, the name "Peppermint") by adding clean social network integration. I love the look of Sidux so we decided on a color scheme in that general neighborhood. I guess the single biggest inspiration is the fact that with more applications moving to the cloud, your OS serves less purpose as an OS and more of a portal. We decided that we wanted to build the best portal.
[...] You can have a super fast, lightweight, desktop - make it your own with whatever you want to install yet have the ability to fire off a web application in Prism which allows the SaaS [Software as a Service] or PaaS [Platform as a Service] to act as if it's installed locally.
To be clear, though, Weaver and Remington stressed that Peppermint is not about competing with other Linux distributions. Instead, they say they're part of "Team Linux." Remington says a main objective behind Peppermint is "to gain new users for Team Linux":
Peppermint installs easily and works out of the box. You don't need, and shouldn't need, any type of super-geek hacking skills to operate a Linux system and we set out to prove that one point. There hasn't been one person we've sat Peppermint in front of who [couldn't] pick up the mouse and figure it out almost instantly. That was our goal and we feel pretty strongly that we achieved it.
According to Weaver, desktop-only and cloud-only offerings both made less sense than a combination: Peppermint OS was envisioned as a "hybrid desktop" that brings the two together, giving "the user more freedom and more choices while offering a comfortable and familiar computing experience." Instead of taking the plunge directly from something like Ubuntu to ChromeOS, Weaver says that Peppermint is about "exposing a lot of the possibilities of what can be done in the cloud without taking away the ability to easily install local applications to handle all of the same functions."
No doubt this concept would not thrill Richard Stallman. Much of the software that Peppermint OS points to is free as in beer, but not free as in freedom. For example, Peppermint points to Google Docs for users who want to edit office documents and Pixlr for photo editing. Users wanting offline applications will need to install the standard Linux applications, since offline support is not available for the included web applications. Presumably this will change as Google and other providers introduce offline features based on HTML5, but for the time being there's not much support for offline use of web apps.
The social network integration that Weaver mentioned is minimal. Peppermint includes a Facebook link in the application menu, but beyond that there's not much integration so far. Ubuntu 10.04 goes much farther with the "Me Menu" and the selection of Gwibber to connect to Facebook, Twitter, Identi.ca, Flickr and others.
As its heritage suggests, there's not a great deal of difference between Peppermint and others in the Ubuntu/Mint family. Users who are looking for a light desktop with a lot of integration with web-based services can try Peppermint or just use Lubuntu or Linux Mint's LXDE release and install the Prism package. There doesn't seem to be a lot of "special sauce" in Peppermint, at least the current iteration, that isn't available in other distributions. Whether it takes off with Windows and Mac users is another story. According to Remington, appealing to Windows and Mac users isn't something "Team Linux" has done well thus far, but they hope to address that with Peppermint.
The Peppermint project doesn't have a strong commercial push yet, but Remington did hint that "there are some things on the table that we are working on but we are keeping a tight lid on for the moment." Peppermint will have a 64-bit version "soon" and "another special something that we will announce here in a few days' time, we hope."
As it is, Peppermint really doesn't have much to offer above and beyond current distributions for existing Linux users. It remains to be seen what the team does from here, but it's hard to see why the same work couldn't have been accomplished under the umbrella of another project. A custom edition of Lubuntu or Linux Mint, within the frameworks of those projects, would probably be more effective. Any marketing push and web development that Remington could supply to gain attention with Windows and Mac users for Peppermint could as easily be applied to an existing project. Perhaps Peppermint will evolve into something unique and compelling over time, but so far it's hard to see the reason for yet another entirely new Linux distribution.
New Releases

Vinux 3.0, a distribution designed for visually impaired users, has been released. "On behalf of the whole Vinux community I am happy to announce the 3rd release of Vinux - Linux for the Visually Impaired, based on Ubuntu 10.04 - Lucid Lynx. This version of Vinux provides three screen-readers, two full-screen magnifiers, dynamic font-size/colour-theme changing as well as support for USB Braille displays."

Mandriva has released the second release candidate of Mandriva 2010.1. "As announced previously, here comes the last development release for Mandriva Linux 2010 Spring. This is essentially a bug fix release."
Fedora

The Fedora Board writes: "We've had yet another in a long line of successful releases of the Fedora distribution. Now that the furor over the first few release days has passed, we on the Board want to recognize the outstanding efforts of our friends and colleagues in the Fedora Project."

Fedora Legal wishes to give the Fedora community a window of time for discussion and review of the revised FPCA. "Due to the fact that the changes are relatively minor, and the original draft has been open for comments for some time now, this second window is open until June 4, 2010 (2010-06-04). After that point, either another revised FPCA will be released for review, or we will begin the process of phasing in the FPCA and phasing out the Fedora ICLA."
Red Hat Enterprise Linux

Red Hat has published [PDF] a white paper covering the state of security for the first five years of Red Hat Enterprise Linux 4. "Red Hat Enterprise Linux 4 was released on February 15th, 2005. This report takes a look at the state of security for the first five years from release. We look at key metrics, specific vulnerabilities, and the most common ways users were affected by security issues. We will show some best practices that could have been used to minimize the impact of the issues and also take a look at how the included security innovations helped."
SUSE Linux and openSUSE

The openSUSE project reports on the recent openSUSE Strategy meeting. "Beside of the usual meeting things (introduction, ground rules, goals of the meeting) we wrapped up the stuff we did over the last months during our weekly IRC meetings. So we concentrated on our users, the strength and weakness openSUSE has, the competition we face and our expectations for future changes in the way we use computers. When building a strategy, you acknowledge that you can't be the best everywhere, you can't be everything to everybody, if you want to be successful, so you need to choose your focus - the already existing strength might be a good start to focus on." The team plans to present proposals to the community on June 8, 2010, which will then be open for 30 days of discussion.
Ubuntu family

One Ubuntu developer looks at the evolution of the Ubuntu Developer Summit (UDS) with an eye toward making it better in the future. He lists various problems with the status quo along with proposals for addressing some of those and is inviting the Ubuntu community to share its thoughts as well. "UDS produces many more blueprints than we need for a cycle. While some of these represent an explicit decision not to pursue a project, most of them are set aside simply because we can't fit them in. We have the capacity to implement over 100 blueprints per cycle, but we have *thousands* of blueprints registered today. We finished less than half of the blueprints we registered for 10.04. This means that we're spending a lot of time at UDS talking about things which can't get done that cycle (and may never get done)."

Ubuntu has also announced a change in how Firefox is supported on stable releases: "Why: Firefox 3.0 (and xulrunner 1.9) are now unsupported by Mozilla. Rather than backporting security fixes to these now, we are moving to a support model where we will be introducing major new upstream versions in stable releases. The reason for this is the support periods from Mozilla are gradually becoming shorter, and it will be more and more difficult for us to maintain our current support model in the future."
Distribution Newsletters

The Debian Project News is out. "Welcome to this year's fourth issue of DPN, the newsletter for the Debian community. Topics covered in this issue include: * Bits from the Debian Project Leader * Parallel booting enabled by default * DebConf Reconfirmation Deadline * Declassification of the debian-private mailing list * LILO about to be removed in Debian 6.0 "Squeeze" * Firmware support in Debian's installation system * ... and much more."

DistroWatch Weekly for May 31, 2010 is out. "Fedora 13 was finally released last week and, as promised, it is given prominent space in our weekly summary of events in the free OS world. Read the interview with leading Fedora personalities who discuss the many new characteristics of the release, then dip into our first-look review of the project's KDE edition. The news section also starts with a Fedora story, bringing attention to the large number of custom Fedora spins united under one web page for easy comparison and access. In other news, Red Hat focuses on green computing in the upcoming version of its enterprise Linux product, Sabayon developers prepare for a new release with a number of interesting enhancements, and a group of BSD hackers in Germany take over the development of DesktopBSD. Also in this issue, a reader's warning about the suitability of Qimo 4 Kids 2.0 for children, an update on the Mandriva 2010.1 roadmap, and a tutorial about creating PBI packages that can be installed on a PC-BSD system with one click. A big issue with something for everyone, happy reading!"

The Fedora Weekly News is out. "This week's issue kicks off with many announcements from the Fedora Project over the past week, including much detail on the release of Fedora 13, amongst many other items. In news from the Fedora Planet, some discussion on Google-sponsored new VP8/WebM open video standards, a last chance to vote in the various Fedora Board elections, and an article on "12 tips to getting things done in open source."
In this week's Fedora In the News, we cover previews and reviews about the brand-new Fedora 13 release from around the globe. In Ambassador news, lots of coverage from the recent Fedora Ambassador Day North America, including links to blog postings about last week's event held at Iowa State University. The QA Team brings some brief news focused around the lead-up to Fedora 13. Translation team news is next, including recent changes in the design of Fedora documentation structure, an overview of Fedora 13 tasks from this past week and a new member of the Fedora Localization Project for Arabic. Security Advisories covers the security-related packages released for Fedora 11, 12 and 13 over the past week. News from the KDE SIG is next, including arrival of KDE SC 4.5 beta to KDE-RedHat unstable repositories for Fedora 13, and recent work on a new Phonon backend for VLC. This issue wraps up with updates from the Fedora Summer Coding Project, with a status update on what students and their mentors are up to. Enjoy FWN 227 and Fedora 13!"

openSUSE Weekly News for May 29, 2010 is out. "Now the twentyfirst week goes to the end, and we are pleased to announce our new issue. This week was very busy. I've made my first step with Milestone 7, and I like it. So I propose that you try it out too. And please not forget to file founded bugs in our bugzilla. Through helping with testing, we all can make our distribution better and more stable. The other thing where I was busy was the move from our Weekly News pages to a new Place. From now on, you can find actual Weekly News under: http://wiki.opensuse.org/Weekly_news. So wish you many joy by reading this Issue :-)"

The Ubuntu Weekly Newsletter is also out: "In this issue we cover Track the Desktop Team and UNE in Maverick, Ubuntu Server update for Maverick Meerkat,
Ubuntu Foundations and Maverick Meerkat 10.10, Maverick Community Team Plans, Welcome: New Ubuntu Members, Winners of the 1st Annual Ubuntu Women World Play Announced, Ubuntu Stats, Ubuntu NC LoCo Team: Guitars to Goat Festivals: Ubuntu For All, Ubuntu Massachusetts LoCo Team: Ubuntu @ Intel LAN Party, Catalan LoCo Team: Ubuntu Lucid release party in Valencia, Why Launchpad Rocks: Great Bug Tracking, Ubuntu Forums News, Interview with Penelope Stowe, The behavioral economics of free software, Return of the Ubuntu Server papercuts, Rethinking the Ubuntu Developer Summit, Testing Indicator Application Menu Support, In The Press, In The Blogosphere, Landscape 1.5 Released with new Enterprise Features, Canonical Pushes Skype into Ubuntu Repository, Linux Security Summit 2010, Full Circle Magazine #37, Ubuntu UK Podcast: Three Friends, Upcoming Meetings and Events, Updates and Security and much much more!"
Newsletters and articles of interest

An article takes a look at Ubuntu spinoffs and beyond. "Curious users who have come to Linux through Ubuntu gain from the traditional virtues of a closer relationship with the software, the computer, and how it works, and may have become inquisitive about other versions of GNU/Linux and why they exist, the loyalties and animosities they arouse, and the experience and fun they bring."
Distribution reviews

One review examines MeeGo 1.0: "We conducted extensive testing of the MeeGo 1.0 Netbook User Experience on the same Mini 10v to see how it compares to its Moblin predecessor. The underlying design philosophy is largely unchanged, but a number of significant differences are apparent in the application stack. In the transition from Moblin to MeeGo, Intel seems to have significantly reined in its ambitions by making a number of pragmatic compromises. Several components from Moblin that were built largely from scratch have been discarded in MeeGo in favor of existing Linux software."

Another review of MeeGo: "Right from the get-go, MeeGo looks neat and very well integrated. Indeed it is! What it does is to turn a netbook into more of an appliance - a single purpose tool in a small, neat little package. Every hardware component on the Asus 1000HE worked perfectly, including the web camera, built-in microphone, bluetooth, wireless, touchpad (with two finger scroll), keyboard hotkeys and even suspend and resume! From the boot screen to the desktop it's certainly pretty. It's the best integrated Linux "desktop" I have ever seen. The simple icons are really striking and give the whole operating system a solid, unified look and feel."

A review of the latest Slackware release: "I think Slackware has a certain style that should be appreciated. Criticizing Slackware for lack of modernity would be like criticizing a well-maintained 1957 Chevy for not having power windows or satellite radio. You don't run Slackware to escape from the complexity or configurability of Linux; you run Slackware to embrace those things. Users turn to Slackware for a Linux distribution that doesn't get in the way. Package up the system software and make it relatively easy to shove on to a computer. Then get out of the way. And that's what Slackware does."
Page editor: Rebecca Sobol
In the roughly five years that the Git distributed version control system has been around, it has gained quite a following. But at its core, Git is command-line oriented, which doesn't necessarily suit all of its users. Along the way, various GUI interfaces to Git have been created, including two Tcl/Tk-based tools that come with Git itself. Another is Giggle, a GTK+-based GUI front-end for Git, which released its 0.5 version in late April.
The two tools that come with Git are oriented for two separate jobs: gitk is for browsing the repository, while git-gui provides a way to change the repository by committing files, merging, creating branches, and so on. The combination provides fairly full-featured access to Git but, because of its Tcl/Tk-based UI, lacks much in the way of eye appeal. In addition, those tools don't integrate well with a GNOME desktop, visually or functionally, which is what Giggle (and others) are trying to do.
Giggle combines both the repository browsing and changing functions into one program, but the feature set for the latter still lags git-gui. There are two modes in Giggle: "Browse" for looking through the source tree and "History" for looking at the commits in the repository.
Browse mode has a three-panel view, with the source tree on the left, any currently selected file's contents at the top right, and a log and graph of the revision history of the file at the bottom right. Clicking on earlier revisions in the history section changes the file pane to show that revision as one might expect. In addition, hovering over lines in the file pane brings up a pop-up with the commit information when that line was added, so you are essentially always seeing the equivalent of git blame.
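What Giggle shows on hover can be reproduced from the command line with git blame. A minimal sketch, assuming git is installed; the repository, file name, and commit message below are invented for illustration:

```shell
# Build a throwaway repository, then annotate a file the way
# Giggle's hover pop-up does: per line, show the commit that
# introduced it.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
echo "first line" > demo.txt
git add demo.txt
git -c user.name=demo -c user.email=demo@example.com commit -qm "add demo.txt"
# Each output line is prefixed with the commit hash, author, and
# date that last touched it.
git blame demo.txt
```

Giggle effectively runs this annotation continuously for whatever file is displayed, so the blame information is always a hover away.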
Other operations, like editing or committing a file, creating a branch or patch, and so on, are also available in browse mode. Double-clicking on a file name brings up an editor, though how it chooses which editor is a bit of a puzzle. For the Linux kernel repository, it decided that Emacs was a good choice, but for the LWN site code it settled on KWrite. Presumably the latter comes from some default editor choice down in the guts of the KDE preferences, but it's unclear where Emacs came from; perhaps the different implementation languages (Python vs. C) played a role.
That points to one of the areas that makes Giggle somewhat difficult to use: lack of any documentation. It's easy enough to click around and figure most things out, but a small users' manual would not be out of place either. In addition, the "click around" method of figuring out Giggle runs afoul of its other main deficiency: performance.
The performance of Giggle is rather poor, especially considering that the underlying tool has a definite focus on speed. Starting up Giggle in a Linux kernel repository takes 15-20 seconds of 99% CPU usage before there is a usable interface. That might be understandable for a large repository with many files and revisions, like the Linux kernel, but the performance was much the same on a much smaller repository.
It's not just startup that is slow, either. Switching from browsing to history mode can sometimes take up to ten seconds. When scrolling through history, Giggle will just pause and eat CPU for a while. Overall, it is a fairly painful experience, especially when compared with gitk, which seems quite snappy. Giggle also suffered from a few crashes and hangs in an hour's worth of using it.
History mode has the git log output in the top panel, along with the commit graph. Once a commit is chosen, the files affected are shown in the lower left. Each file can then be selected to show the diff output from each change made to that file in the lower right pane. There is no side-by-side comparison of old vs. new versions that other tools have, which might make a nice addition.
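Everything history mode displays maps onto a few plumbing-free git commands. A sketch in a throwaway repository (file names and commit messages are invented; assumes git is installed):

```shell
# Two commits to the same file give us a history worth browsing.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
echo "v1" > file.txt
git add file.txt
git -c user.name=demo -c user.email=demo@example.com commit -qm "first"
echo "v2" > file.txt
git -c user.name=demo -c user.email=demo@example.com commit -qam "second"
# The commit graph drawn in Giggle's top panel:
git log --graph --oneline
# The files affected by a selected commit (lower-left pane):
git show --stat HEAD
# The diff for one file in that commit (lower-right pane):
git show HEAD -- file.txt
```

The GUI's value is in wiring these views together with a click rather than a fresh command per question.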
The project was created at a January 2007 hackathon and has slowly added features since. Development releases have been fairly frequent of late, more-or-less monthly since January, but before then things seemed to have stagnated for almost a year. It is unclear what the plans are for 0.6 and beyond, though the list of open issues gives some idea of the kinds of bugs and features that will likely be addressed.
There are other fairly active Git GUI programs available including git-cola, a Python/Qt4-based program, and gitg, which is also based on GTK+. The latter is meant to track Git clients on Windows and MacOS to try to provide a consistent interface to Git on all three platforms. In particular, it closely tracks the GitX interface for MacOS X.
Other than purely visual attractiveness issues (and Tk definitely has a fairly clunky look and feel), it doesn't seem that Giggle and the other Git GUIs really provide very much beyond what gitk and git-gui do. That may explain the fairly slow pace of development for those tools as anyone who really wants a GUI interface to Git already has one at hand. It's also likely that standalone GUI interfaces are less interesting to those who are used to integrated development environments (IDEs) like Eclipse.
In the end, a GUI is supposed to make a tool easier to use, but Giggle does very little to make Git more approachable. The user still needs to understand a fair amount about Git in order to use the tool effectively. Once they do, using the command line may not be that much of a burden.
Newsletters and articles
Page editor: Jonathan Corbet
Non-Commercial announcements

The Open Source Initiative has been asked to delay its consideration of the WebM license (a review requested by Bruce Perens) for a couple of weeks; the company making the request has also asked for some changes in how the OSI does business. "This might sound strident, but I think that OSI needs to be more open about its workings to retain credibility in the space." The resulting discussion, unsurprisingly, seems mostly to be focused on the relative blackness of various pots and kettles; those who are interested can read the full thread.
Commercial announcements

One Laptop per Child announced a partnership with Marvell to develop the next-generation "XO 3.0" system - a tablet, naturally. "The new family of XO tablets will incorporate elements and new capabilities based on feedback from the nearly 2 million children and families around the world who use the current XO laptop. The XO tablet, for example, will require approximately one watt of power to operate (compared to about 5 watts necessary for the current XO laptop). The XO tablet will also feature a multi-lingual soft keyboard with touch feedback, enabling it to serve millions more children who speak virtually any language anywhere in the world."

Novell announced its financial results for its second fiscal quarter ended April 30, 2010. "For the quarter, Novell reported net revenue of $204 million. This compares to net revenue of $216 million for the second fiscal quarter of 2009. GAAP income from operations for the second fiscal quarter of 2010 was $20 million. This compares to GAAP income from operations of $18 million for the second fiscal quarter of 2009. GAAP net income in the second fiscal quarter of 2010 was $20 million, or $0.06 per share. This compares to GAAP net income of $16 million, or $0.05 per share, for the second fiscal quarter of 2009. Foreign currency exchange rates favorably impacted net revenue by $2 million and negatively impacted operating expenses by $6 million and income from operations by $4 million compared to the same period last year."
Articles of interest

An article takes a look at the Appleseed Project. "After I wrote about the Diaspora project a few weeks ago, I was contacted by Michael Chisari of the Appleseed Project, which is basically the same thing and predates Diaspora by several years. Reason virtually no one had heard of it was because Chisari had spent months developing the entire project himself and had to put it on hold because he, well, reached the point where he couldn't finish it alone with no one really interested in it." (Thanks to Martin Jeppesen)

Another article looks at the top 500 supercomputer list. "Of the 187 new entrants, all but one are running some variant of Linux and in fact 470 of the Top 500 run Linux, 25 some other Unix (mostly AIX) and the remaining 5 run Windows HPC 2008."
Interviews

An interview with David Reyes Samblas Martinez. "David Reyes Samblas Martinez is the founder of Spanish Copyleft Hardware store Tuxbrain, and attended the famous Open University of Catalunya. He's also the subject of this month's Fellowship interview, in which he answers questions on hardware manufacturing, e-learning and Free Software politics."
Calls for Presentations

The openSUSE project has announced that the 2nd international openSUSE Conference will take place in Nuremberg, Germany, October 20-23, 2010. The Call for Papers ends July 31, 2010.
|RailsConf 2010||Baltimore, MD, USA|
|PyCon Asia Pacific 2010||Singapore, Singapore|
|Mini-DebConf at LinuxTag 2010||Berlin, Germany|
|SouthEast Linux Fest||Spartanburg, SC, USA|
|Middle East and Africa Open Source Software Technology Forum||Cairo, Egypt|
|June 19||FOSSCon||Rochester, New York, USA|
|Semantic Technology Conference 2010||San Francisco, CA, USA|
|Red Hat Summit||Boston, USA|
|Open Source Data Center Conference 2010||Nuremberg, Germany|
|PyCon Australia||Sydney, Australia|
|SciPy 2010||Austin, TX, USA|
|Linux Vacation / Eastern Europe||Grodno, Belarus|
|Euromicro Conference on Real-Time Systems||Brussels, Belgium|
|11th Libre Software Meeting / Rencontres Mondiales du Logiciel Libre||Bordeaux, France|
|State Of The Map 2010||Girona, Spain|
|Ottawa Linux Symposium||Ottawa, Canada|
|EuroPython 2010: The European Python Conference||Birmingham, United Kingdom|
|Community Leadership Summit 2010||Portland, OR, USA|
|O'Reilly Open Source Convention||Portland, Oregon, USA|
|11th International Free Software Forum||Porto Alegre, Brazil|
|ArchCon 2010||Toronto, Ontario, Canada|
|Haxo-Green SummerCamp 2010||Dudelange, Luxembourg|
|Gnome Users And Developers European Conference||The Hague, The Netherlands|
|Debian Camp @ DebConf10||New York City, USA|
|PyOhio||Columbus, Ohio, USA|
|DebConf10||New York, NY, USA|
|YAPC::Europe 2010 - The Renaissance of Perl||Pisa, Italy|
|Debian MiniConf in India||Pune, India|
If your event does not appear here, please tell us about it.
Page editor: Rebecca Sobol
Copyright © 2010, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds