There is always a certain thrill that comes with upgrading a system and finding that important features no longer work. In this case, the problem was suspend and resume, which your editor uses heavily. In fact, the system would suspend just fine - as long as one failed to notice that, behind the cleverly darkened screen, the laptop's backlight had been left on. Needless to say, this new behavior is not helpful if one's goal is to save power while the system is suspended, but it gets worse than that. Your editor discovered this nice surprise after carrying the computer in a backpack for a few hours; by the time it came out, it was almost too hot to hold. Happily, no permanent damage appears to have been done.
Or, perhaps, unhappily. Your editor has been looking for an excuse to get a new laptop for a while.
The problem turned out to be a HAL configuration error combined with a strange internal model number which makes your editor's Thinkpad X31 different from, seemingly, every other X31 on the planet. Once your editor found the bug report and attached a "me too" comment, the solution was quick in coming. On the net, one can find complaints that Ubuntu is unresponsive to bug reports, but that was certainly not the experience here.
As an aside, it seems worth noting that life seems to have gotten more complicated, with a lot more code wrapped around the kernel than there once was. The problematic configuration file was /usr/share/hal/fdi/information/10freedesktop/20-video-quirk-pm-ibm.fdi - not a place where your editor, who is not a HAL expert, would have thought to look. That, it seems, is the price of more capable hardware and software, but sometimes your editor pines for the days when it seemed possible to carry a full understanding of the system within a single brain.
GNOME developers are (perhaps unjustly in recent years) known for taking a minimal approach to configuration options. That can be irritating, but just as annoying is their tendency to reset the options they do provide over major updates. Once suspend and resume work, your editor demands something else of a laptop when traveling: absolute silence. So the return of beeps to gnome-terminal was not appreciated. Those were easily silenced, but the GNOME developers also saw fit to bring back the blinking cursor - and they took away the configuration option which abolishes that intolerable feature.
Your editor first ran into the unstoppable blink with Rawhide; a query to the developers there turned up a quick answer. It seems that the GNOME developers have decided to create a single, system-wide parameter to control blinking cursors. Now, your editor approves of the concept of being able to turn off that behavior everywhere with a single switch - but only as long as that switch isn't hidden where nobody will ever find it. In this case, the GNOME developers have taken this feature, wrapped it in old newspapers, and stashed it behind the furnace in the basement; then they put a trunk on top of it. It is a rare user who will find it unassisted. In the hopes that it may save one or two readers some time spent with a search engine, your editor will now divulge the top-secret incantation which turns blinking cursors off:
    gconftool-2 --type bool --set /desktop/gnome/interface/cursor_blink false
Naturally, a terminal window is required to run this command. It would have been nice if the developers who packaged this code for Hardy Heron had found a way to smooth over this change, but no such luck; as far as your editor can tell, no distributor has made that effort.
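For the cautious, the key can also be read back to confirm that the change took; this is a hedged sketch, guarded because gconftool-2 exists only on GNOME 2-era systems:

```shell
# Read back the cursor_blink key to verify the setting stuck.
# Guarded: gconftool-2 is only present on GNOME 2-era systems.
key=/desktop/gnome/interface/cursor_blink
if command -v gconftool-2 >/dev/null 2>&1; then
    gconftool-2 --get "$key"    # should print "false" after the change
fi
```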
Another bit of fun is that your editor is no longer able to set the desktop background; the relevant configuration windows are ineffective. In this case, it would appear that the task of implementing the user's background choices has been moved to nautilus - just the place your editor would have thought to look for it. As it happens, your editor has no use for file managers and does not run nautilus - and is punished with an immutable Ubuntu-brown background for that sin. Happily, your editor still knows how to run xsetroot.
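For readers in the same boat, a one-line sketch of the xsetroot workaround (the color is an arbitrary example, and a running X server is assumed, hence the guard):

```shell
# Paint the root window directly, bypassing the desktop machinery.
# Guarded: only attempt it when an X display and xsetroot are available.
color=steelblue
if [ -n "$DISPLAY" ] && command -v xsetroot >/dev/null 2>&1; then
    xsetroot -solid "$color"
fi
```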
All of the above are relatively minor grumbles, all of which were rectified in short order. Once those details had been taken care of, the Hardy Heron release worked quite well. One of the biggest aggravations from previous upgrades - having OpenOffice.org reformat the slides in all of your editor's presentations - was not present this time around. Hopefully we are moving into an era where "it didn't mangle my documents" is not something considered worthy of mention.
There was one very nice surprise as well. Your editor's laptop previously required almost 12 watts of power when running unplugged. This laptop is not at the bleeding edge of current technology, so the amount of time it was able to run without a recharge has been dropping for a while. With the Hardy release, steady-state power consumption has dropped to just over 9 watts - a big improvement. The credit for this change belongs to developers at all levels: kernel, applications, distributors, etc. The end result is a system which runs much more efficiently, and that is a good thing.
All told, your editor is reasonably content; this distribution looks like one which might just be worth keeping around. That's a good thing, since Ubuntu plans to maintain it as a "long-term support" release. Not that your editor intends to make much use of that long-term support; there should be a new development series starting soon, after all. One of the nice things about development distributions is that support never ends as long as one stays on the treadmill and the project itself remains alive.
Technical conferences generally provide a wealth of choices, to the point where participants must sometimes make tough decisions about which session to sit in on. This year's Embedded Linux Conference was no exception; there were multiple slots where the author wished he could be in more than one place at a time. But he did manage to take notes in some of the sessions he attended; hopefully some of the conference flavor comes through in the following report.
MontaVista's Kevin Hilman presented an approach for handling power management on embedded devices that focused on changes that can be made to the kernel, but noted that there is much that can be done by applications too. Because of the time and money budgets available for embedded projects, many do not have the resources to do a complete job of tuning the kernel to get the best possible power performance. There is also no "one size fits all" solution for power management; there are too many device-specific issues to allow that.
Hilman's approach is to target specific "building blocks" that embedded developers can incorporate into their project. Each block will provide some savings, so the project can stop when the desired performance is reached—or it is time to ship the device. One of the easier steps is to customize the idle loop in the kernel, putting the processor to sleep when there is no work to be done. There are different kinds of sleep, though, generally trading off power savings against wakeup latency. The cpuidle subsystem provides a means to specify those values in an architecture-independent way, which, along with a platform-independent "governor", can put the processor into various sleep modes. The only platform-dependent pieces are the hooks to enter each of the different sleep states.
A similar approach is taken by the CPUfreq subsystem, which can reduce the clock frequency of the CPU to reduce power consumption using the Dynamic Voltage and Frequency Scaling (DVFS) feature of some processors. "Operating points" (OPs)—voltage and frequency tuples—are defined for the hardware. There are various generic CPUfreq governors that can then be used to determine when to change OPs and which to change to. The governor will invoke a platform-specific driver to effect that change. In addition, power management "quality of service" is currently being discussed to allow applications to request a certain level of performance that may override some of the lower-level sleep or frequency decisions.
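On the Linux side, the generic CPUfreq governors are visible through sysfs; here is a read-only sketch (the paths assume a kernel with CPUfreq support, which many systems lack, hence the guards):

```shell
# Inspect the CPUfreq governor and current operating point through sysfs.
# Each file is checked for readability; CPUfreq may be absent entirely.
cpu=/sys/devices/system/cpu/cpu0/cpufreq
for f in scaling_governor scaling_available_governors scaling_cur_freq; do
    if [ -r "$cpu/$f" ]; then
        printf '%s: %s\n' "$f" "$(cat "$cpu/$f")"
    fi
done
# Selecting a governor is a root-only write, e.g.:
#   echo ondemand > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
```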
SELinux has a well-earned reputation for being able to restrict processes to only those resources that have been specifically allowed by policy, but it is rather resource-intensive. Yuichi Nakamura presented Hitachi's research into bringing SELinux into a more resource-constrained embedded environment. One of the first problems they encountered was the need for flash filesystems that support extended attributes (xattrs), which is where SELinux stores labels for files. Only jffs2 currently supports xattrs, so that is the one they used.
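The label-in-xattr arrangement is easy to see on a running SELinux system; a hedged sketch follows (getfattr comes from the attr package and may be absent, and the file name is just an example):

```shell
# SELinux file labels live in the security.selinux extended attribute,
# which is why the flash filesystem must support xattrs at all.
f=/etc/hostname
if command -v getfattr >/dev/null 2>&1 && [ -e "$f" ]; then
    # Prints something like: security.selinux="system_u:object_r:etc_t:s0"
    getfattr --absolute-names -m security.selinux -d "$f" 2>/dev/null || true
fi
```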
The next big hurdle was trying to get a set of policies that were stripped down to the needs of an embedded platform. Nakamura started with the SELinux reference policy (refpolicy) and started removing rules. The sheer number of rules and policies that needed to be removed was daunting—as was the need to understand what was being removed. He also ran into strange dependencies: removing a sendmail policy caused a problem in the apache rules. The solution was to create a simplified policy language and policy editor that reduced the problem to something more tractable for the embedded world. In the process it greatly reduced the size of the policy files, from 4.6M down to 60K.
Another problem encountered was the performance and size of SELinux, which is a common embedded woe. Through some hand optimization of the read/write path, along with removing some unused permissions checks, they were able to increase the performance by a factor of ten on their SuperH reference platform. By changing some static buffers in SELinux to a dynamic allocation they also saved 250K of runtime memory. Much of that work was merged into 2.6.24. There is still work to be done, but with the changes, SELinux is viable for embedded platforms.
Two sessions provided various tips and tricks for embedded development, with Gene Sally of Timesys focusing on GCC, while IBM's Hugh Blemings shared some of the things he has learned from the kernel hackers he works with.
Sally discussed the different ways that developers could get a GCC toolchain for their target processor. One of the bigger hurdles that an embedded developer faces is getting a cross-compiler toolchain—one that runs on his development workstation, but generates code for the target platform. There are several ways to get the toolchain: as a tarball for popular development/target combinations, by using helper tools like crosstool or buildroot from uClibc, or by building it from source directly.
Building from source is the most difficult, of course, but allows for the most customization and flexibility. Sally went on to describe a handful of useful GCC command-line options that help debug cross-compilers or simply better understand what GCC is doing.
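The list itself is not reproduced here, but a few widely-known introspection flags give the flavor; these are illustrative examples, not necessarily the ones from the talk:

```shell
# Ask a (cross-)compiler about itself rather than guessing.
# Plain "gcc" here; substitute a cross-prefixed compiler as appropriate.
if command -v gcc >/dev/null 2>&1; then
    gcc -dumpmachine              # target triplet code will be generated for
    gcc -print-search-dirs        # where it finds its programs and libraries
    gcc -print-libgcc-file-name   # which libgcc a link will pull in
    gcc -v -E -x c /dev/null >/dev/null   # configure options and search paths
fi
```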
Blemings concentrated on the development infrastructure by describing the lab that he used to port the kernel to a Taishan PowerPC-based evaluation board. When undertaking a project like that, "get to know your hardware team" because they will have lots of important information and shortcuts that can be used as part of the board "bringup". At IBM in Canberra, where Blemings is based, they have gotten to the point where they can bring up Linux on any board where they can "access memory and point the PC [program counter] at it"; his tips have come out of that environment.
One of the most important things is to realize that you will be building kernels over and over again, so optimizing your environment for that will save lots of time. His suggestion was to start with a "honkin'" compile box; he described an IBM multi-processor box as an excellent choice but noted that the cost was so high he couldn't get one. It would, however, do "3k/sec"—that's compile 3 kernels per second. In the absence of something like that, he suggested borrowing cycles by using ccache and distcc to reduce and parallelize the compilation that needs to be done. Even adding relatively modest machines into the distcc pool can significantly reduce time spent waiting for a new kernel.
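As a sketch of that setup (the host names below are placeholders for machines on a local build network):

```shell
# ccache caches compilation results; CCACHE_PREFIX hands cache misses to
# distcc, which farms them out to the hosts listed in DISTCC_HOSTS
# (the /N suffix caps concurrent jobs per host).
export CCACHE_PREFIX=distcc
export DISTCC_HOSTS="localhost big-box/8 spare-desktop/4"
# A kernel build then uses both through CC, e.g.:
#   make -j12 CC="ccache gcc" vmlinux
```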
One of the hottest areas in embedded Linux these days is the mobile internet device (MID) market. There were two talks on MID-focused distributions, with Canonical's David Mandala giving an overview of Ubuntu Mobile and Embedded (UME) and Nokia's Kate Alhola talking about the status and future directions of Maemo Mobile Linux. UME is a relatively recent addition to the mobile device space—they are anxiously awaiting hardware to run on—whereas Maemo has been around for a while, powering the Nokia N770, N800, and N810 internet tablets.
UME is an effort to apply the Ubuntu distribution and philosophy to touchscreen devices. Mandala explained that they are taking existing Linux applications and adapting them for small screens that use fingers, rather than keyboard and mouse, as the input device. The resolution of the displays is typically something approaching that of low-end desktops, but the physical space they take up is far smaller (i.e. the dots per inch or DPI is high) making it difficult to do development without actual hardware.
The UME project is working with Intel's Moblin.org project to target systems based on the Atom processor. It uses the Hildon application framework atop GNOME Mobile, running on an Ubuntu 8.04 (Hardy Heron) distribution. Mandala stressed that Linux should be "invisible" on these devices, as users just want applications that work to browse the web, use email, and the like.
The main focus of UME has, so far, been on the user interface, though power consumption, memory footprint, and speeding up boot times are all on their radar. Canonical is very interested in fostering a community around UME, but that has been "a bit of a challenge", mostly due to a lack of hardware to run on. Mandala expects a few different hardware devices to be available "soon" and that will make it easier to attract a development community.
As should come as no surprise after Nokia's purchase of Trolltech early this year, Alhola announced that Maemo would be supporting both GTK and Qt in the near future. This is part of Nokia's belief that there is "no single truth", so Maemo supports multiple paths to development on the platform. Maemo directly supports C, C++, and Python, while the community has added support for Java, Objective C, Vala, and Mono.
Nokia makes a very clear distinction in its product line between phones, which are largely closed platforms, and tablets, which are open. Open source software is an essential part of their strategy as they want to build an application ecosystem around their products. "We are taking open source to the consumer mainstream," Alhola explains.
One of the interesting tools that Nokia is working on as part of Maemo is Scratchbox, which is a toolkit geared towards making cross-compilation easier. It does this by making the development environment look and act like the execution environment, using QEMU to simulate the target hardware. Scratchbox supports both ARM and x86 targets, with experimental support for additional architectures. It uses standard toolchains and distributions where possible and is released under the GPL.
LogFS is a flash filesystem that is targeted at the larger flash devices that are becoming more widespread. Unlike some filesystems currently in use, most notably jffs2, LogFS is specifically designed to avoid some of the performance and scalability problems that come with larger devices. Jörn Engel is the developer of LogFS, with some support from the Consumer Electronics Linux Forum (sponsor of ELC), so he gave an update on the status of the project.
Engel used an unconventional scale (the sucks/rules meter) to measure the progress that had been made in the last year. The scale runs from -10 to 10 and measures the "suckiness" of particular features of the filesystem. Taking a page from This Is Spinal Tap, the score for the mount speed of LogFS was measured at 11 both last year and this. It is clearly the feature that Engel is most proud of as it takes 10-60ms to mount a filesystem; a similarly sized jffs2 takes on the order of one second.
Engel looked at around ten separate attributes of the filesystem, first rating them on where LogFS was a year ago, then re-rating based on where it is today. The conclusion is that the average measure has moved from -2.75 to -0.55, so that "on average, it hardly sucks". He says he is getting confident enough to submit it to Andrew Morton for inclusion in his tree, hopefully on its way into the mainline. Engel is clearly somewhat frustrated with people who are waiting until it is "done" to start using LogFS—though there are some fairly serious usability problems that would tend to limit testers—proclaiming: "LogFS is finished, try it now, today!"
There were more talks, of course, as well as an active "hallway track" for the roughly 175 participants. ELC is a well-run and very interesting conference that is worth consideration for anyone who uses, or plans to use, Linux as an embedded operating system. This year's venue, the Computer History Museum, was a nice facility for a conference of this size. It also had some great exhibits that will bring back memories for anyone who has been using computers, calculators, or game systems over the past 50 years or so - well worth a visit when one is in Silicon Valley.

An ongoing thread on the project's development mailing list shows that quite a few participants are concerned about where things are going. To many, it seems, OLPC is about to go down as a noble failure.
These rumors may be just a bit premature, though. When considering what may really come of OLPC, it's worth keeping a few things in mind.
One of those is the fact that the project has just completed a major push to its first mass-production system. Your editor has watched the project closely enough to see that, as with many such efforts, the people involved have been putting in lots of long hours to get the job done. When this kind of pressure is lifted, it is natural to take a break, catch up on the housework, and, perhaps, find a new job. So the departure of some key staff at this stage is not entirely surprising.
A look at the state of OLPC's software suggests that the project had set an overly ambitious set of goals for its first release. When that happens, one must jettison some objectives; the later that this is done, the more likely it is that the wrong objectives will be tossed overboard. There are signs that OLPC tried to do too much for too long, with an end result which is not as stable, as fast, or as fully-featured as one would like. As many people close to the project, former president Walter Bender among them, have noted, the laptop's software remains immature.
Finally, the number of laptops delivered to children is far below the level the project had planned upon. Fewer deployments mean a lower impact for the project, but they also cannot be helping to create the economies of scale the project had counted on to push the cost down. There have also been some embarrassing failures along the way, including the misplacing of a large number of "Give one get one" orders, which were not found until it was too late to include them in the manufacturing run.
All of the above points to a need to make some changes in how the project is run. Changes always create uncertainty, so it would be surprising if OLPC participants were not a little nervous at the moment.
What happens in the next few months will likely determine OLPC's fate. The project's leadership has famously said in the past that OLPC is an education project, not a laptop project. Some people have recently expressed concerns that, in fact, OLPC is turning into a laptop project, with deployment numbers being the main goal. Nicholas Negroponte doesn't help when he allows himself to be quoted as being "mainly concerned with putting as many laptops as possible in children's hands." If OLPC becomes primarily a low-cost laptop vendor, and especially if it goes to proprietary operating systems as a means toward that end, it will lose much of the community that has grown up around the project.
And that would be a shame. There is great beauty in the idea of putting a well-designed learning tool into the hands of children and empowering those children by providing a system which is completely open and hackable. A large and motivated community of highly-capable people came together behind that vision and did their best to rethink how this technology should work and create something better. Deployment groups in a number of countries have gotten the resulting systems into the hands of thousands of children, and many of them are reporting good results. A lot of good things have happened here, and it doesn't have to end now.
But it might end soon. To pull things together, the project will have to communicate a clearer vision of where it plans to go with its software at all levels; Mr. Negroponte's statement of continued support for Sugar appears to be an attempt to start this process. The operational side of the project needs to get its act together. Some transparency on, for example, what is being done with donation money and what agreements have been made with outside corporations, would be most helpful. And, most of all, the group of volunteers working with this project have to be convinced anew that they are not wasting their time. If the project's leadership can manage all of that, there may well be great things coming from OLPC in the future.
Page editor: Jonathan Corbet
Copyright © 2008, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds