Perhaps the novelty should be wearing off, but the ability to update your
phone to code of your choosing is still rather amazing. For those who
have older phones, which are generally neglected by carriers and
manufacturers who quickly move on to the next "big thing", it can extend
the life of a fairly large investment. For others, who just want to
explore the capabilities of the phone hardware outside of the box created
by the industry, changing the firmware provides the freedom many of us have
come to expect from computers. The much-anticipated release of CyanogenMod 6.0 (CM6),
bringing Android 2.2 ("Froyo") along with a bunch of additional features to
many different Android phones, serves as a reminder of the possibilities
available with some phones today.
Amazing though it may be, reflashing a phone is always something of a
nerve-wracking experience. Since I had a "spare" ADP1 phone—fully supported by CM6—sitting around, I decided to try the
release on that phone. While the process went fairly smoothly, ADP1 owners
should be warned that actually using the phone with that code is a bit
painful—or was on my phone. The interface is fairly slow and
unresponsive at times. While poking around, I realized that the Dalvik
just-in-time (JIT) compiler was not turned on by default, but turning it on
and rebooting only made for minor speed improvements. It did
work well enough to convince me to try it on the Nexus One (N1) I use as my
regular phone though.
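The JIT toggle that CM6 exposes corresponds to a Dalvik system property. On Froyo-era builds it can also be flipped by hand; the sketch below assumes the build reads /data/local.prop at boot, which is a common convention rather than something verified against this particular release:

```
# Append to /data/local.prop (create the file if absent), then reboot.
# Other accepted values for this property are int:portable and int:fast.
dalvik.vm.execution-mode = int:jit
```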
The instructions for both phones
require installing a custom recovery image that gives more control over
updating various portions of the firmware. I put the Amon_Ra recovery
image on both, but not without a bit of consternation on the N1.
Perhaps naïvely, I didn't think I needed to unlock the bootloader on
the N1. The phone was given to me by Google at the Collaboration Summit,
and I expected it to essentially be the equivalent of the ADP1. So trying
to use the fastboot utility to flash the recovery image gave
an error and choosing recovery from the bootloader menu brought up a picture
of an unhappy android—pulling the battery seemed the only way
to get rid of it.
Unlocking the bootloader is a simple process using fastboot but it
does void the warranty—however much warranty there is on a free
phone—and it also "wipes" the phone, losing any settings, call
records, applications, and so on. I considered fiddling with various
backup solutions before deciding that a clean slate wouldn't be such a bad idea.
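For the curious, the unlock-and-flash sequence boils down to a few fastboot invocations. This sketch is hedged behind a dry-run switch since the real commands wipe the phone; the recovery image file name is a placeholder, and the commands assume the phone is booted into fastboot mode:

```shell
# Dry-run sketch of unlocking the N1 bootloader and flashing a custom
# recovery.  Set DRY_RUN=0 only on a phone you are prepared to wipe.
DRY_RUN=1
run() {
    if [ "$DRY_RUN" -eq 1 ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

run fastboot oem unlock                      # voids the warranty, wipes the phone
run fastboot flash recovery recovery-RA.img  # image name is a placeholder
run fastboot reboot-bootloader
```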
For the ADP1, three pieces were needed after the Amon_Ra recovery image:
DangerSPL, CyanogenMod itself, and the optional Google Apps package. DangerSPL
is a "second program loader" (SPL) that repartitions the system flash to
provide enough space for CM6. It was ported from the HTC Magic phone to
the Dream (aka G1, ADP1) and can brick the phone if used incorrectly, thus
the Danger moniker.
On an amusing side note, I still had the Vodafone pre-paid SIM card from my
recent trip to the Netherlands in the ADP1. I didn't pay too much
attention to the setup screens and suddenly found myself in a
Dutch-language interface. Given the SIM, that's a reasonable assumption
for the phone to make, of course, but it required another wipe to get it
into English. If I had been able to puzzle out enough Dutch to work
through the settings menus, I could presumably have switched it back, but
that proved challenging so I resorted to the wipe.
For the N1, just CM6 and a Google Apps package were required after Amon_Ra.
The zip files
for the various pieces need to be stored on the SD card of the phone and
the Amon_Ra recovery image then allows choosing files from SD to update the
device. While it all worked quite well, and there are detailed, though
somewhat scattered, instructions, it always seems like these upgrades have
a few unexpected, heart-stopping wrinkles. A second, unexplained
appearance of the
unhappy android on the N1 was one such wrinkle.
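The transfer itself is unremarkable; here is a sketch of the usual routine, with placeholder file names and an MD5 check (against the checksum published alongside the download) to rule out a corrupted zip before it ever reaches the recovery menu:

```shell
# Verify a downloaded update zip against its published MD5 sum before
# pushing it to the phone's SD card.  File names and checksums below are
# placeholders, not the actual CM6 release artifacts.
verify() {   # verify <file> <expected-md5>
    actual=$(md5sum "$1" | cut -d' ' -f1)
    if [ "$actual" = "$2" ]; then
        echo "OK: $1"
    else
        echo "MISMATCH: $1"
    fi
}

# Typical use, run only if verify reports OK:
#   verify update-cm6.zip <md5-from-the-download-page> &&
#       adb push update-cm6.zip /sdcard/
```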
The Google Apps package contains the closed-source applications that
normally come with the phone: Market, Gmail, Maps, and so on. CyanogenMod
created a separate package for those ("for copyright reasons"
according to the site) after receiving a cease-and-desist
letter from Google for distributing them as part of CyanogenMod. The
"gapps" package does come from a different site, but, at
least so far, there are no reports of nastygrams from the Googleplex. It
doesn't seem completely unreasonable to allow those applications to be
distributed this way, though; the applications won't run on anything other than
Android phones and phone owners are already licensed to use them. Newer
applications that aren't officially available for older handsets like the
ADP1 are perhaps more of a gray area.
So, after roughly 30 minutes of futzing with installation per phone, what
is CM6 like? It seems to be a very solid release, and I ran it for half a
day on the ADP1 and more than a day on the N1 with no problems (other than
the slowness on the ADP1, which is presumably due to older, slower
hardware). There are many features in CM6 that are different than the
stock Froyo that was previously running on the N1—too many to
fully discover in the short time since it was released.
But there are several obvious things that stand out, starting with
differences in the application "drawer" which shows the icons for all of
the applications installed. Instead of the stock Froyo
four-icon-width screen with 3D effects when scrolling, the CM6 drawer
has five icons across and leaves out the scrolling effects. That, like
many things in CM6, is configurable so that the number of columns can be
changed for both portrait and landscape orientations.
Configurability is definitely one of the strengths of CM6. There are
options for changing the user interface in various ways such as customizing
the status bar, changing the available screen rotations, configuring trackball
notifications, modifying how the buttons and trackball input work, and lots
more. While the ability to make so many changes is great for sophisticated
users, one can see why Google and the carriers might be interested in
reducing the number of calls they get from customers with wildly different
configurations.
It's not just the user interface that can be tweaked in CM6 as there are
options to change application storage behavior (allowing all applications
to be moved to SD and to set a default location for applications to be
stored) as well as performance tweaks. The performance settings screen
comes with a "Dragons ahead" warning because changing them could cause the
phone's performance to get worse. Furthermore, no bug reports will be
accepted for problems found when mucking with things like the VM heap size
or locking the home application into memory. Those options are purely for
those willing to experiment.
There are a whole host of other changes for CM6 that are listed in the changelog,
including things like support for FLAC playback for those with a
distaste for lossy audio formats (and lots of storage space). OpenVPN
support, music application enhancements, browser incognito mode, home-button tweaks,
themable UI elements, and many more are on the list. There is much to
discover in CM6, and I look forward to (too) many hours playing with the new
firmware.
Unsurprisingly, the N1 (and the ADP1, for that matter) still functions quite well
as a regular old cell phone since CM6 uses most of the Android code. Also
as expected, it still sends and receives text messages, browses the web,
and plays the bouncing cows game. It is, in short, a major enhancement to
the capabilities already present in Android.
Unlike other Froyo-based "mods", CM6 is built from the Android source,
rather than extracting various binaries from stock firmware. That makes it
easier to trust the CyanogenMod code—users could build their own
version for verification or customization purposes for example—but it
is also what allows CM6 to support so many different handsets. There are
nine separate phones listed as being supported by CM6.
As always, reflashing firmware, unlocking bootloaders, wiping settings, and
voiding warranties should be done with some care and thought. $500+ bricks
are not much fun. But the process has been successfully completed by many,
including notoriously fumble-fingered LWN editors such as myself, and the
capabilities that come with CM6 make it well worth the effort. I certainly
can't see any good reason to return to the firmware distributed by T-Mobile.
The Mozilla project officially re-launched its developer information and
outreach program this week. Previously known as the Mozilla Developer
Center (MDC), it has now been rebranded the Mozilla Developer Network (MDN),
which is a new site with refreshed content, a reorganized and expanded mission, and new community features.
Mozilla has discussed the renovation project in the open for the better
part of 2010 on Mozilla-hosted blogs as well as in its newsletters and
public call-in conferences. The driving principle is to broaden the focus
of the site from the inward-looking MDC — which effectively served
only those developers looking to build Firefox/Thunderbird extensions and
XUL applications — to a wider perspective. The MDN site is meant to
serve developers working on Mozilla add-ons and applications, but also to
be a resource for, as the site's tagline says, "everyone developing for the Open Web."
A "soft launch" of content for this wider target audience was rolled out with the 2009 debut of the Mozilla Hacks blog, which covered web development and open standards topics in addition to Mozilla-specific news, and placed an emphasis on providing demo code instead of text-only discussions. The switch-over from MDC to MDN took place on August 27th.
Former MDC participants will be pleased to learn that their existing account information has been preserved and will allow them to log in to MDN as well. Thus far, an MDN account only enables two features — the ability to post to the discussion forum and the ability to edit documentation.
Now that MDN has been officially unveiled, visitors can see content divided into four main developer categories: Web, Mobile, Add-ons, and Applications. Mozilla feels that these represent distinct and (for the most part) non-overlapping segments of the development community. Each section presents targeted content drawn from official Mozilla documentation, curated news articles and blog entries from external sites, and links to specific software projects. There is some variation between the sections; Mobile, for example, includes both "Favorite" and "Recent" article categories, and the Add-ons section includes a "Latest Comments from Our Community" box not found elsewhere, which suggests that the MDN platform is still evolving.
The Add-ons and Applications sections encompass most of the
documentation content that was previously featured at MDC. For add-on
developers, there are references and tutorials for the major APIs and
languages, case studies for extensions, plugins, and other add-ons, along
with validation and packaging help. The Application section provides an overview of the Mozilla platform, as well as guides to the tools related to building Mozilla-derived projects, including Bugzilla, Mercurial, Bonsai, Talkback, and other utilities.
The Mobile section focuses not only on Mozilla's Firefox Mobile browser, but also on developing location-aware web and mobile applications, and on using Mozilla technologies for other mobile software. A prime example of the latter is Mozilla's own Firefox Home application for the Apple iPhone, which is an iOS program that connects to Mozilla's Sync service.
Emphasizing the Open Web philosophy also ties in to Mozilla's Drumbeat initiative. Drumbeat is an umbrella project that encourages individuals to organize software projects and in-person events that advance Open Web adoption. It differs from MDN in its focus on non-developer community action, however. One of the goals of Drumbeat is to encourage online communities other than the "tech" circle to build their sites using open standards. On the other hand, both Drumbeat and MDN's Web section try to promote practical software projects (such as Universal Subtitles or Privacy Icons) that reinforce open standards.
Currently, all of the MDN sections place the primary emphasis on official, Mozilla-hosted documentation. News articles and blog entries appear lower in the page, and at the moment seem to be drawn entirely from external content sources (although many are from Mozilla blogs, and thus do originate from Mozilla authors). Blog coverage such as Mozilla intern Brian Louie's indicates that the content mix will expand and improve as more features are added to the site.
For example, community-created content is not yet included; even in the
Twitter sidebar, only MDN accounts are shown, as opposed to Mozilla or
MDN-related hash tags. Because most of the news headlines are links to
external blogs, direct commenting on the stories is not possible without
leaving the site. Louie mentions that, among other features, interactive tagging and rating of news stories are on the roadmap.
But potentially the biggest feature of the new site is the community discussion forum, which is already active. At the top of each page is a "Community" link to the phpBB-powered forum. The forum boards do not break down into quite the same categories as the MDN main sections, which is puzzling — there are separate boards for Open Web, Mozilla Platform, MDN Community, Mozilla Add-ons, and Mozilla Labs.
Nevertheless, hosting the discussion forums at MDN is a big step for the
organization. Previously, official Mozilla community interaction has taken
place entirely on mailing lists and newsgroups. The major discussion forum web site is Mozillazine, which is not affiliated with the Mozilla Foundation. MDN is following the lead taken earlier by the support.mozilla.com (a.k.a. SUMO) project, bringing discussion into a central location hosted by Mozilla itself.
There is a wealth of information already accessible at MDN, from the news articles to the documentation. Mozilla says that all of the content that was at MDC has been migrated to MDN; a direct link to the documentation landing page is available in the header of each page.
In addition to Louie's comments, Mozilla's Jay Patel has given a glimpse of where the organization intends to take MDN from here, via his blog. The first order of business is to replace the old, Mindtouch-based documentation backend with a new system built with Django. The effort is already underway for SUMO; MDN will simply "piggyback" on that tool. The plan is to migrate MDN to the new system over several months, with the goal of moving slowly enough that new content is entirely translated and localized.
Further out, user-given article ratings and comments are mentioned, which may indicate either that MDN-hosted original content is on the way, or else that comments on the Mozilla blogs will simply be integrated as RSS or Atom feed sources. In addition, Mozilla plans to hold topic-focused documentation sprints and developer focus groups over the last quarter of 2010 and into 2011.
Hopefully there are more changes coming still further out. It is particularly ironic that Mozilla's "Open Web" emphasis is launched on a site that dedicates its entire sidebar to the decidedly non-open Twitter service, and that its forums do not support OpenID logins. It is also a little bit troubling that the entire focus of MDN seems to be on Firefox and Firefox Mobile, to the exclusion of Thunderbird, Lightning, and other Mozilla applications. Perhaps that simply reflects the organizational divide between Mozilla Corporation and Mozilla Messaging, but it can hardly be healthy for non-Firefox projects in the long run.
Also in the long run, though, Mozilla is doing itself and the open web development community a great service by consolidating its documentation and developer resources into a single, unified whole, complete with the one thing that it has long lacked — a web-based open discussion forum.
Linus Torvalds rarely makes appearances at conferences, and it's even
less common for him to get up in front of the crowd and speak. He made an
exception for LinuxCon Brazil, though, where he and Andrew Morton appeared
in a question and answer session led by Linux Foundation director Jim
Zemlin. The resulting conversation covered many aspects of kernel
development, its processes, and its history.
Jim started things off by asking: did either Linus or Andrew ever expect
Linux to get so big? Linus did not; he originally wrote the kernel as a
stopgap project which he expected to throw away when something better came
along. Between the GNU Project and various efforts in the BSD camp, he
thought that somebody would surely make a more capable and professional
kernel. Meanwhile, Linux was a small thing for his own use. But, in the
end, nothing better ever did come along.
Andrew added that, as a kernel newbie (he has "only" been hacking on it for
ten years), he has less of a long-term perspective on things. But, to him,
the growth of Linux has truly been surprising.
How, Jim asked, do they handle the growth of the kernel? Andrew responded
that, as the kernel has grown, the number of developers has expanded as
well. Responsibility has been distributed over time, so that he and Linus
are handling a smaller proportion of the total work. Distributors have
helped a lot with the quality assurance side of things. At this point,
Andrew says, responsibilities have shifted to where the kernel community
provides the technology, but others take it from there and turn it into an
end product.
Linus noted that he has often been surprised at how others have used Linux
for things which do not interest him personally at all. For example, he
always found the server market to be a boring place, but others went for it
and made Linux successful in that area. That, he says, is one of the key
strengths of Linux: no one company is interested in all of the possible
uses of the system. That means that nobody bears the sole responsibility
of maintaining the kernel for all uses. And Linus, in particular, really
only needs to concern himself with making sure that all of the pieces come
together well. The application of a single kernel to a wide range of use
cases is something which has never worked well in more controlled
settings.
From there, Jim asked about the threat of fragmentation and whether it
continues to make sense to have a single kernel which is applicable to such
a wide range of tasks. Might there come a point where different versions
of the kernel need to go their separate ways?
According to Linus, we are doing very well with a single kernel; he would
hate to see it fragment. There are just too many problems which are
applicable in all domains. So, for example, people putting Linux into
phones care a lot about power management, but it turns out that server
users care a lot too. In general, people in different areas of use tend to
care about the same things, they just don't always care at the same time.
Symmetric multiprocessing was once only of interest to high-end server
applications; now it is difficult to buy a desktop which does not need SMP
support, and multicore processors are moving into phones as well. Therein
lies the beauty of the single kernel approach: when phone users need SMP
support, Linux is there waiting for them.
Andrew claimed that its wide range of use is the most extraordinary technical
attribute of the kernel. And it has been really easy to make it all work. It is
true, though, that this process has been helped by the way that "small"
devices have gotten bigger over time. Unfortunately, people who care about
small systems are still not well represented in the kernel community. But
the community as a whole cares about such systems, so we have managed to
serve the embedded community well anyway.
Next question: where do kernel developers come from, and how can Brazilian
developers in particular get more involved? Linus responded that it's
still true that most kernel developers come from North America, Europe, and
Australia. Cultural and language issues have a lot to do with that
imbalance. When you run a global project, you need to settle on a common
language, and, much to Linus's chagrin, that language wasn't Finnish. It
can be hard to find people in many parts of the world who are
simultaneously good developers and comfortable interacting in English.
What often works is to set up local communities with a small number of
people who are willing to act as gateways between the group and the wider
development community.
Andrew pointed out that participation from Japan has grown significantly in
recent years; he credited the work done by the Linux Foundation with
helping to make that happen. He also noted that working via email can be
helpful for non-native speakers; they can take as much time as needed at
each step in a conversation. As for where to start, his advice was to Just
Start: pick an interesting challenge and work on it.
Open source software, Linus said, is a great way to learn real-world
programming. Unlike classroom projects, working with an active project
means collaborating with people and addressing big problems. Companies frequently look at who
is active in open source projects when they want to find good technical
people, so working on such projects is a great way to get introduced to the
world. In the end, good programmers are hard to find; they will get paid
well, often for working on open source software. Andrew agreed that having
committed changes makes a developer widely visible. At Google, he is often
passed resumes by internal recruiters; his first action is always to run
git log to see what the person has done.
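That vetting step is easy to reproduce. The sketch below builds a throwaway repository with a single commit so that it is self-contained; in practice one would run the same two commands in a clone of the kernel tree, and the author address and commit message here are placeholders:

```shell
# Set up a throwaway repository standing in for a real project tree.
repo=$(mktemp -d)
git -C "$repo" init -q
git -C "$repo" -c user.name=Dev -c user.email=dev@example.com \
    commit -q --allow-empty -m "mm: example change"

# The actual vetting: list a contributor's commits, then count commits
# per author across the whole history.
git -C "$repo" log --oneline --author=dev@example.com
git -C "$repo" shortlog -sn HEAD
```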
Linus advised that the kernel might not be the best place for an aspiring
developer to start, though. The kernel has lots of developers, and it can
be kind of scary to approach sometimes. Smaller projects, instead, tend to
be desperate for new developers and may well be a more welcoming
environment for people who are just getting started.
At this point, a member of the audience asked about microkernel
architectures. Linus responded that this question has long since been
answered by reality: microkernels don't work. That architecture was seen
as an easy way to compartmentalize problems; Linus, too, originally thought
that it was a better way to go. But a monolithic kernel was easier to
implement back at the beginning, so that's what he did. Since then, the
flaw in microkernel architectures has become clear: the various pieces have
to communicate, and getting the communication right is a very hard
problem. A better way, he says, is to put everything you really need into
a single kernel, but to push everything possible into user space.
What about tivoization - the process of locking down Linux-based systems so
that the owner cannot run custom kernels? Linus admitted to having strong
opinions on this subject. He likes the fundamental bargain behind
version 2 of the GPL, which he characterizes as requiring an exchange
of source code but otherwise allowing people to do whatever they want with
the software. He does not like locked-down hardware at all, but, he says,
it's not his hardware. He did not develop it, and, he says, he does not
feel that he has any moral right to require that it be able to run any
kernel. The GPLv2 model, he feels, is the right one - at least, for him.
Licensing is a personal decision, and he has no problem with other projects
making different choices.
Another member of the audience questioned the single-kernel idea, noting
that Android systems are shipping kernels which differ significantly from
the mainline. Jim responded that people who created forked versions of the
kernel always come back - it's just too expensive to maintain a separate
kernel. Andrew said that the Android developers are "motivated and
anxious" to get their work upstream, both because it's the right thing to
do and because the kernel changes too quickly. Nobody, he says, has the
resources to maintain a fork indefinitely.
Linus cautioned that, while forks are generally seen as a bad thing, the
ability to fork is one of the most important parts of the open source
development model. They can be a way to demonstrate the validity of an
idea when the mainline project is not ready to try it out. At times, forks
have demonstrated that an approach is right, to the point that the kernel
developers have put in significant work to merge the forked code back into
the mainline. In the end, he says, the best code wins, and a fork can be a
good way to show that specific code is the best. Rather than being scary,
forks are a good way to let time show who is right.
Another audience member asked Linus if he would continue to
work on the kernel forever. Are there any other projects calling to him?
Linus said that "forever is a long time." That said, he'd originally
thought that the kernel was a two-month project; he is still doing it
because it stays interesting. There are always new problems to solve and new
hardware to support; it has been an exciting project for 19 years and he is
planning to continue doing it for a long time. He may have an occasional
dalliance elsewhere, like he did when writing Git, but he always comes back
to the kernel because that's where the interesting problems are.
Jim described Linus and Andrew as a couple of the most influential people
in technology. They are, he said, at the same level as people like Bill
Gates, Steve Jobs, and Larry Ellison. Those people are some of the richest
in the world. His questions to Linus and Andrew were: "are you crazy?" and
"what motivates you?"
Andrew replied that his work is about helping people get what they want
done. It is cool that this work affects millions of people; that is enough for him.
In typical fashion, Linus answered that he is just the opposite: "I don't
care about all you people." He is, he says, in this for selfish reasons.
He was worried about finding a way to have fun in front of a computer; the
choice of the GPL for Linux made life more fun by getting people involved.
He has been gratified that people appreciate the result - that, he says, gives
meaning to life in a way that money does not.
The session ended with a short speech from Jim on the good things that
Linus and Andrew have done, followed by a standing ovation from the
audience.
Page editor: Jonathan Corbet
Inside this week's LWN.net Weekly Edition
- Security: Thwarting internet censors with Collage; New vulnerabilities in gdm, openssl, phpMyAdmin, wireshark,...
- Kernel: Stable kernel statistics; Another union filesystem approach; Ocfs2
- Distributions: Can Fedora Ship on Time?; CyanogenMod 6.0; Debian, Fedora, ...
- Development: Syslog-ng license change; Diaspora, KDE 4.5.1, PostgreSQL 9.0 RC1, Akonadi,...
- Announcements: Google bails out of JavaOne; Fedora trademark defense; Contributor Agreements?, GCC, ...