The ARM architecture is growing in popularity and is expected to expand its
reach beyond the mobile and "small embedded" device space that it currently
occupies. Over the next few
years, we are likely to see ARM servers and, potentially, desktops.
Fedora has had at least some ARM support for the last few years, but
always as a secondary architecture (SA), which meant that the support lagged
that of the two primary architectures (32 and 64-bit x86) of the
distribution. Recently, though, there has been discussion of "elevating"
ARM to a primary architecture (PA), but, so far, there is a lot of resistance to
a move like that.
The subject came up at a meeting of the
Fedora engineering steering committee (FESCo) on March 19. Adding ARM as a primary
architecture for Fedora 18 was a late addition to the agenda, which annoyed some, but
the discussion was largely to "start the ball rolling and collect feedback from
everyone", as Kevin Fenzi put it.
There will be many other opportunities to discuss the idea, he said.
The meeting log bears that out, as the only vote taken (or even proposed) was to ask
for input from various teams (QA, release engineering, kernel, and
infrastructure) about the impact of a change like that.
The difference between primary and secondary architectures for Fedora is
rather large. Releases cannot be made without all of the packages building
and working for each primary architecture, whereas secondary architecture
packages can languish. In fact, the current release of Fedora for ARM is
based on Fedora 14—though there are alphas of Fedora 15 and
17—which is past its end-of-life on x86.
The meeting discussion focused mostly on the motivation for making ARM a
PA and why the project's goals couldn't be met, at least for now, by
remaining as an SA. Much of the motivation, it would seem, is for Fedora
to get out ahead of the curve on ARM support. Making ARM a "first class
citizen" would increase its visibility and put the full weight of the
Fedora community behind the effort. Therein, it seems, lies part of the problem.
One could argue that Fedora has already fallen behind the curve with
respect to ARM given what Ubuntu and Debian are doing to support the
architecture. There is a lot going on with Linux on ARM, and Fedora may
well find itself becoming less relevant if support for ARM does not
improve. But the question seems to be whether that support needs to
improve as an SA before even considering whether it can be a new PA.
Based on the discussion in the meeting, Matthew Garrett posted an RFC draft of the requirements to
promote an architecture to a PA. So far, there has been no architecture
that has transitioned from an SA to a PA, so some kind of ground rules need
to be established.
Garrett lists seven potential criteria, prefacing them with an overview of what promotion entails:
Promoting an architecture to primary architecture status is a
significant responsibility. It implies that the port is sufficiently
mature that little in the way of further architecture-specific changes
or rebuilds will be required, and also that it has enough development
effort to avoid it delaying the development of other primary
architectures. Further, it means that the architecture becomes part of
the overall Fedora brand. Fedora is an integrated Linux distribution
rather than a technology collection, and as such there are various
expectations that the overall Fedora experience will be consistent over
all primary architectures.
Much of the response to that posting concerns the amount of
time it (currently) takes to build ARM packages (vs. the time for x86 and
other architectures). Jakub Jelinek noted that GCC builds for 64-bit x86 are on
the order of two hours, while building for ARM takes much longer. He
followed that up with actual numbers from
GCC 4.7 builds for Fedora 17, which ranged from one-and-a-half hours for
x86_64 to more than 26 hours for armv5tel (and more than 24 for armv7hl).
Brendan Conoboy pointed out that plans for newer "enterprise" ARM hardware
in the build system would cut those build times in half, but that still leaves a sizable gap.
Slow builds are not just an annoyance, as there are some impacts for the
distribution if package building takes "too long".
Josh Boyer lists
two. If a package builds for the x86 family, but then fails to build
for ARM, the x86 build will have to be resubmitted after the ARM problem is
fixed. In addition, an update (for a security issue, for example) has to
wait for the slowest build to finish before it can go out. Adam Williamson also notes
another problem that could arise in the release verification process:
So it's not unusual for me to be bugging, say, the kernel team to give
us a new kernel build that fixes a blocker bug, so we can do a new
release candidate, so we can test the release candidate in twelve hours,
so we can make the go/no-go meeting deadline the next morning.
If builds get significantly slower, that could have a concrete impact on
the release validation process: it's plausible that we'd either need to
extend the validation period somewhat - earlier freezes - or we would
have to eat a somewhat higher likelihood of release slippages.
Build speed is a technical issue that can presumably be overcome
(eventually) with faster hardware. Other possibilities like cross-compiling
on faster x86_64 servers or parallelizing the Koji build system (perhaps
using something like distcc)
seem to have been ruled out by Fedora release engineering or the Fedora ARM
team. While some remain unconvinced, Conoboy is adamant that cross-compilation is not a good solution:
Look, even x86_64 is
topping out on speed and moving to a more-core and more-systems-per-rack
model. Cross compilation solves yesterday's problem, not tomorrow's. If
build speed truly is a fundamental issue to becoming PA the answer is to
harness multiple systems for a single build, not to use a somewhat faster
system to make up for the speed of a somewhat slower system. Scaling across
more cores than fit in a single SMP Linux environment is the only sensible
approach to future build speedups. Though [it] is an interesting challenge, it
is completely beyond the scope of primary architecture requirements.
But some question the wisdom of even having criteria for promoting SAs to
PAs, whether it makes sense for Fedora to even consider ARM for a PA, or
both. Kevin Kofler is definitely in the last category as he believes that the current list of PAs "should be set in stone unless a MAJOR change in hardware
landscape happens". Some would argue that the change is already
happening. But he is concerned that additional PAs put a
burden on all of the package maintainers, so he believes it should take an
extraordinary event (like "x86 gets discontinued by the hardware manufacturers
and everyone uses ARM instead") before any change like that is even
considered. He continues:
In the current state of things, I don't see a sufficient demand for making
ARM (or even less any other secondary architecture) a primary architecture.
If ARM is really the future as its proponents claim, we can revisit that in
a few years. Not now.
The focus should be on finding ways to make secondary architecture releases
more timely (i.e. it's not acceptable that e.g. the stable ARM release is
still Fedora 14 which doesn't even get security updates anymore), not to
cheat around the problem by making ARM a primary architecture (which does
not help all the other secondary architectures).
Kofler harps on the same points throughout the thread,
belittling the ARM market share (at least in the market segments that he
thinks should be targeted) and finding the build times for ARM packages to
be untenable. He considers the large
existing base of ARM devices to be unsuitable for installing Fedora, at
least at this
point. But, as Richard W.M. Jones points
out, that is changing rapidly:
It's a matter of time, and not very much time at that.
My £400 tablet has plenty enough power, storage and whatever else to
run Fedora. Fedora works pretty well on £200 Trim Slice servers.
Fedora is going to be shipped with £25 Raspberry Pi devices in the near future.
Others were also skeptical of the current ARM hardware being a good target
for Fedora, but Williamson points out that
getting Fedora ARM running does more than just target
those devices. The ARM project is looking toward the future, both on servers
and mobile devices. Getting the distribution running on one is a big step
toward having it available for the other.
But the speed of the build system is just one symptom of the problems that
another PA will bring. One of the bigger questions, which remains largely
unanswered as yet, is what making ARM a PA would do for Fedora as a
distribution. It's reasonably clear why it would help the Fedora ARM
project to have ARM as a primary, but the advantages to the distribution,
at least at this point, are less clear. As Garrett put it:
primary architecture isn't meant to be a benefit to the port - it's
meant to be a benefit to Fedora. Adding arm to the PA list means you'll
have to take on a huge number of additional responsibilities, deal with
more people who are unhappy about the impact upon their packages and so
on. You get very little out of it except that there's more people to
tell you that something's broken. The question is whether making arm a
primary architecture makes Fedora a better distribution, and yes, in an
ideal world arm would demonstrate that it was just as functional as x86
before we had to make that decision.
The only reward you'll get from being a primary architecture is basking
in the knowledge that the project thinks your work is good enough to be
a primary architecture. The better you can demonstrate that in advance,
the easier the process will be.
Peter Robinson outlined many of the advantages
that the Fedora ARM team sees in another fedora-devel thread. Essentially,
it would spread the load of responsibility throughout the Fedora
community. That is, of course, the underlying concern of many posters in the
threads. But Robinson sees it this way:
I'm fully aware that Primary Arch isn't the perfect panacea and that
once we're there the ARM team can't go and sit of the beach and do
nothing but it does spread the load, automate a lot of things because
it's part of core infra and processes and it then allows the ARM team
to concentrate on fixing corner case packages, working with various
components of the project, optimising the way things are done for ARM,
working with upstreams for HW etc and generally making ARM even better
rather than constantly chasing our tail trying to keep up with basic
things as building packages, running infrastructure, composing the
repos, dealing with branching and tags and targets in koji and all
those other things. Those advantages of being a primary arch can't be overstated.
There is, of course, nothing stopping the ARM team from achieving most of
its goals while staying a secondary architecture. It will be more
difficult and likely require more volunteers, but the Fedora project as a
whole needs to be convinced of the
advantages of taking on the "burden" of ARM as a primary. So far, the ARM
project doesn't seem to have made a convincing case for that, but, given
the importance of the architecture going forward, one might guess that the
situation may change in the next year or two. In the meantime, setting
some goal posts for any secondary architecture that wants to be promoted
seems like a good first step.
Mozilla raised eyebrows in mid-March when a patch materialized that
would allow Gecko to fall back on operating system or hardware media
decoders for multimedia content — in particular for patent- and
royalty-encumbered codecs like H.264, which are not supported natively in
Gecko. The project had fought hard to promote the adoption of unencumbered
alternatives (such as Ogg Theora or Google's WebM), so many on the mozilla.dev.platform discussion group saw enabling any support for H.264 as a violation of principle. Mozilla's Chief Technology Officer Brendan Eich argued, however, that the decision is an improvement over the existing Flash-fallback method, that Mozilla has more important fights to focus on, and that the blame for WebM's snail-paced adoption lies squarely at the feet of Google.
History of WebM, part 1
Eich posted his take on the situation in a round-up at his own blog (which was then syndicated at the official Mozilla Hacks blog), which started with a history lesson on H.264, WebM, and the HTML5 <video> element. As far back as 2007, Eich and Mozilla had argued for the standardization of the <video> and <audio> elements to include "unencumbered" baseline codecs — at the time, Ogg Vorbis for audio and Ogg Theora for video. Eich argued that the term "unencumbered" most accurately describes the state of the codecs needed to ensure that the open web remains open; the issue is not about open source (for there are open source implementations of encumbered codecs), nor is it about patents (for standards bodies can and will accept a patented codec if the patent holders agree to license it under royalty-free terms).
In 2007's <video> element fight, the main opponent of
Theora was H.264, which was being pushed by a royalty-collecting consortium
of companies. The protracted battle ultimately resulted in a non-decision,
with the default codec language being removed from the draft standard. But
the situation appeared to take a sudden shift in favor of unencumbered
codecs in 2009, when Google purchased codec-maker On2 and released the WebM codec under unencumbered terms.
WebM was far newer than Theora and offered quality roughly on par
with H.264; Theora's relative performance was a big reason Google did not
choose it as the default codec. Google subsequently announced its intention to transcode YouTube videos to WebM, and in 2010 Adobe announced that it would support WebM in its Flash products (meaning not just the browser plug-ins, but the content-creation tools and media-delivery servers as well). In January 2011, Google went a step further, and publicly announced that it would drop support for H.264 from its Chrome browser.
But that change never happened. 14 months later, Chrome still supports
H.264, and Eich and other Mozilla employees report that Google has remained
silent about the decision when asked. Adobe didn't implement the WebM
support that it promised either.
Meanwhile, Eich said, H.264 adoption has continued to spread, which has hampered Firefox's growth. For starters, Google's oft-cited promise to transcode YouTube's content to WebM is not all it is cracked up to be: to date Google has only transcoded half of the site's videos, and more importantly, YouTube only delivers WebM content to desktops, and only for those videos that serve no ads (which Eich said makes up a shrinking portion of the total). No other major sites have rejected H.264 in favor of WebM delivery either, while the consumer electronics industry builds more and more H.264 encoding support into video cameras.
On the desktop, Firefox falls back on the Flash plug-in's H.264 support, but on mobile devices, there is no such option. Mozilla believes that mobile browsers are clearly the battlefield deserving the most attention, and on top of that, the project's Boot to Gecko (B2G) effort stands no chance of making it into device-makers' products without H.264 support, Eich said.
Patches and a new API
The combination of Google not pushing WebM and content creators adopting
H.264 puts Mozilla in an untenable position, Eich said. Yet the majority
of phone and tablet designs ship with a pre-authorized (meaning the royalty
fees have been paid) H.264 decoder in silicon, which is what led developer
Andreas Gal to propose letting Gecko hand H.264 decoding (and perhaps other
encumbered formats like MP3 and AAC) down to the OS or hardware level.
Although other options have been discussed, Eich endorses Gal's solution.
The upshot is that Firefox and B2G users will be able to see H.264
content on platforms that support it, Mozilla will not have to pay royalty fees (or pass them on to users), and the project can turn its attention to fighting for unencumbered codecs in the next round of standardization battles.
Those battles are not far off, Eich said, starting with the WebRTC
real-time chat standard — which is already in-progress and looks
poised to recommend unencumbered codecs. There will be other fights, he
said, and other generations of video streaming codecs. Continuing to
ignore H.264, particularly on mobile devices, would ultimately risk making
Mozilla irrelevant further down the line, if not nonexistent altogether:
Losing a battle is a bitter experience. I won’t sugar-coat this pill. But
we must swallow it if we are to succeed in our mobile initiatives. Failure
on mobile is too likely to consign Mozilla to decline and irrelevance.
Gal's patch is attached to bug 714408.
In essence, it creates a new API (which Gal dubbed the Media Player API, or
MPAPI) for use by OS or hardware decoders. On desktop OSes like Linux,
MPAPI would likely be tied in to a media framework like GStreamer (which
can use encumbered codecs). It is also possible that the Mozilla Flash
plug-in could continue to serve as the H.264 decoding chain on desktop
systems, which would require work to expose MPAPI to Gecko's main
plug-in API, NPAPI. Of course, Adobe has also said that it will stop developing its NPAPI Flash
plug-in for Linux, which means Flash-fallback will not remain a solution
for free OSes in the long term.
As of now, the patch
itself is the main source of information on MPAPI, which is still very much
a work in progress. MPAPI would hand audio and video frames back to the
main Gecko rendering toolchain, however, meaning the content would be fully
integrated into the page through the standard <video> or <audio> element. The vast
majority of the energy expended over the move has been spent not on its technical points, but on whether or not Mozilla's decision itself is wise, foolish, too pessimistic, or long overdue.
The mozilla.dev.platform discussion thread about Gal's proposal is rife with critics arguing that Mozilla is making the wrong move, and with defenders inside and outside the project. The criticisms fall into three basic camps: those that think the H.264 situation is not as bad as described, those that feel Mozilla has not tried hard enough to advocate WebM (or that it should try Just One More Time), and those that object to enabling any form of H.264 playback on principle alone.
Critics who say that WebM has not lost to H.264 tend to point to Google's YouTube transcoding effort (which Eich countered in his blog post), or hold out hope that Google will indeed drop H.264 support from Chrome (often based on the number of WebM "supporters" listed). On the latter point, Eich argues that Chrome's H.264 support is moot, because the desktop browser would simply fall back on the Flash plug-in like Firefox does today. In fact, he said, Chrome's heavily optimized Flash plug-in amounts to a "practically custom" Flash offering "best-of-breed fallback."
Christopher Blizzard added
that ultimately, the content-delivery sites simply are not interested in WebM:
I keep talking to people building sites and there are only a couple of
organizations that are willing to embrace WebM because it's the right thing
to do. Transcoding & hosting costs are huge. Beyond that I've not really
run into anyone who wants to do WebM. It's just seen as a cost that
Firefox is incurring on web developers.
Justin Lebar typified the position that Mozilla is giving up too
easily, calling it "going down without a fight," and saying that he would:
publicly call on Google to fulfill its promises of old. I'd communicate
through official channels why we don't want to support H.264, MP3, etc,
and why we think Google is harming the web. [...] And I might set a
public deadline — if Google doesn't un-support H.264 by date X, then
we'll start supporting system H.264 and MP3 codecs.
Side-stepping the fact that Lebar's public deadline idea paradoxically threatens to increase support for H.264 if it is not abandoned, Mozilla's Robert O'Callahan calls it "grossly unfair" to suggest that Mozilla has not fought hard enough or long enough against H.264 adoption:
We have fought. We, alone of all major browsers (sorry Opera desktop), have held out against supporting patent-encumbered codecs for a long time. I feel it's grossly unfair to our efforts to describe that as "not a real fight".
We held the line in the hope that the industry would follow, and that Google would do a lot to improve and support WebM, especially removing H.264 support from Chrome. So we've held the line, and watched, and waited, and personally I am extremely disappointed by the results.
Likewise, Asa Dotzler confirmed that Mozilla has spent months trying to get a response from Google on the H.264 support question, only to be met with silence. O'Callahan also observed that there is already mobile hardware capable of decoding WebM video, but that Google does not enable it on Android devices. In the absence of support from the format's owners, Mozilla says, it alone cannot move WebM forward.
The objection on principle is trickier. Some in the discussion thread
expressed personal hurt that Mozilla was not standing its ground against
H.264 playback support, but more were concerned that relenting would make
it harder for the organization to lobby against encumbered formats in the
future. Eich argued that anyone who ignores the fact that Firefox users watch H.264 video via the Flash plug-in is "hiding behind Mother Adobe's skirts" and is not taking a "realistic view of the entire fallback logic chain, and of Firefox's current acute dependency on Flash," which is not different in kind from falling back on an OS decoder. Gal concurred, noting that the MPAPI proposal is only "using existing accelerated decoders that already are licensed and available on the system."
Regarding Mozilla's ability to advocate for open and unencumbered formats in the future, Dotzler said that the project has backed down on other lost battles in the past, such as the document.all DOM feature, but has maintained its credibility. Mozilla can influence the web, and from time to time kill a bad idea, he said in another message, but "sometimes the Web decides to go where we don't want it;" not supporting it only costs the project developers and users.
Ultimately, though, Dotzler argues that even in light of H.264's popularity, Mozilla's WebM advocacy does not constitute a wasted effort:
WebM is in much better shape because of Mozilla's efforts. Not only is WebM in better shape, but I think it actually proves that open codecs can compete. It didn't win, but it demonstrated viability and it may yet go on to claim a critical role in WebRTC.
Finally, I think there's something important in our having taken that stance. We've demonstrated that we don't default to "what's easy". We may not be able to win every battle, but we don't shy away from fighting the good fight.
A continuing story
It is worth remembering that however one feels about the H.264 lobby
and its royalty-collecting schemes, the presence of dedicated video
decoding chips is hardly an isolated situation. There are currently a
great many components in our computers which are covered by patents, and
many chips for which we do
not have source code. Yet enabling software access to those components
is not perceived as a violation of principle. Furthermore, it is hard to
argue that the Flash plug-in that Firefox currently falls back on is turf
worth fighting for — it has a history of bugs and security holes
unrelated to the video decoders it ships.
The debate over enabling H.264 playback via MPAPI shows little sign of calming down. Eich, Dotzler, Gal, and the other project members continue to argue that enabling the format is a purely pragmatic move, and that Mozilla's energy would be better spent combating encumbered codecs in the format battles still in development.
They got a boost on March 18 when Mozilla chief Mitchell Baker posted
her own blog
entry in support of the proposed change. Baker said that "giving
our users a great experience" is both one of the project's key
values, and a demanding goal that drives realistic product development.
It's possible to fall into the view that the only way to live up to Mozilla
values is to ship the product we think people should want. This aspect is
one element, but it's not the only one. Another critical element is
shipping products that work for people now so they can love them.
The comment thread on Baker's blog follows much the same pattern as the discussion group. There are supporters, commiserators, and vocal critics. But wherever H.264 itself ends up on future versions of Firefox and B2G, one thing is for sure: H.264-vs-WebM is not the last codec fight the software world will see. As several in the thread pointed out, progress on H.265 is already well underway, and the players involved are similar — there can be little doubt that the battle will be similar, too.
There is value in whining at times. At a recent conference, your editor
complained that he had been unable to get a sense for what MeeGo is really
like since nobody had ever sent him an N9 handset. Some time thereafter, a
shiny blue N9 showed up on the doorstep courtesy of the kind folks at
Nokia. What follows are various
impressions from playing with this new toy; your editor, normally an
Android user, has found a lot to both like and dislike in this seemingly
doomed smartphone platform.
The N9 is an attractive device, only slightly larger than a Nexus One. The
spouse, upon handling it, complained about the rather sharp corners - but
proved reluctant to hand the device back anyway. The corners do stand out
in an age when everything is supposed to be rounded, and they can dig into
the palm slightly, but it's all a matter of taste. The handset's
specifications are reasonably standard for this vintage of device;
there is a 1GHz processor, 1GB of RAM, and 16GB of storage. In a welcome
change from previous Nokia devices, the N9 uses a standard micro-USB
connector instead of something special Nokia made up for that specific
handset. The camera is quite nice; there is also a front-facing camera,
though the built-in Skype client is unable to use it. By all appearances,
the handset is sealed forevermore; replacing the battery does not appear to
be an option.
Android users will likely have gotten used to that environment's home screen,
which can be populated (especially with CyanogenMod builds) with a wide
variety of application launchers, contact shortcuts, active widgets, and
more. The N9 MeeGo experience is somewhat different, in that there are
three specialized home screens with limited potential for customization.
The first of these is the familiar matrix of icons providing access to
applications on the phone. Users can rearrange the icons (including
putting them into subfolders), but there is no way to put anything other
than application launchers on this screen.
It is also possible to remove applications via this screen.
Dishearteningly, one quickly learns that, as with many Android builds, some
applications have been rendered immortal and unremovable. Your editor has
little use for Facebook or Twitter applications, but they cannot be made to
go away. The best that can be done is to move them to a folder where, at
least, they can be kept out of sight.
The second "home" screen (accessible via a left or right swipe across the
screen) shows the running applications in a 2x2 grid. Their current
screens are visible, and specific applications can be killed if desired.
As one might expect, tapping on an application's screen brings it back to
the foreground. The third screen is a notification area; messages, weather
information, and the latest urgent Twitter spam will show up here.
Annoyingly, none of the home screens rotate when the phone is held in the
landscape orientation. Applications handle rotation without trouble, but
the home screens appear to be special.
The applications shipped with the phone are generally attractive and nice
to use - though sometimes they seem to get into dead end screens where a
"back" button would be nice to have. There is a mapping and navigation
application that works nicely and comes with suitably annoying voices in a
wide range of languages. The camera application is feature-rich and
responsive. There is a central account manager that organizes access
credentials; interestingly, it can hook into Google, but not for contact
information. Getting access to contacts will be one of the first things a
former Android user will want to do; fortunately it is possible by telling
the phone that Google is an Exchange server. WiFi tethering is built into
the phone but "forbidden" for US users; fortunately, one can install the
"SpotOn" application to get around that bit of obnoxiousness.
On the other hand, the web browser makes one wish for the Android
equivalent. Android's browser has a "fit page to screen" option that does
a nice job of rendering the interesting part of a web page in an optimally
readable form; the MeeGo browser, instead, just mashes the entire page,
unreadably, onto the screen, requiring zoom-in gestures and side-to-side
scrolling for almost every
page that has not been specifically designed for small screens. That
Android feature, arguably, is on its own responsible for the
fact that nobody at LWN has found the time to make a more mobile-friendly
version of the site; the N9 has made it clear that not everybody has as
good an experience.
The MeeGo on-screen keyboard, while being entirely functional, is also not
as nice as the Android equivalent. There appears to be no built-in
spelling correction or word prediction, making typing a longer and more
error-prone process. That is one of the bigger shortcomings of this
system. Typing on keyboard-less handsets is a painful enough procedure even
with a top-quality on-screen keyboard; this is not the place for a
second-rate solution. (Correction: there is a simple
prediction mechanism that only seems to appear some of the time; it is
better than nothing, but doesn't change the main point of this paragraph).
There is, naturally, an applications store full of things to add on to an
N9. A number of important programs are there, and, inevitably, the handset
comes with Angry Birds already installed. The range of available
applications falls far short of that found in the Android store, though.
That is far from surprising; given that MeeGo was a lame-duck platform from
the beginning, there will be little motivation for developers to put any
time into supporting it.
Inside the device
The MeeGo system is a far more Linux-like environment than Android
provides. A terminal application comes preinstalled on the device; it
works well enough for what it is, but the truth of the matter is that
trying to do command-line work with an on-screen keyboard is always going
to be painful. Fortunately, there's an easier way. If one puts the device
into developer mode (a simple menu tweak) and plugs it into a computer's
USB port, the device offers to connect in "SDK mode." In that mode, it
presents as a network interface; there is even a built-in DHCP server so
the computer side of the connection gets configured automatically.
After that, it's
just a matter of using SSH to obtain a shell on the handset. Unlike
Android handsets, the N9 has Busybox on it from the start, so the shell is
actually reasonably usable.
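For the curious, the connection step is quick; the following is a minimal sketch of what it looks like from a Linux host, where the address and account name are assumptions based on common SDK-mode defaults rather than anything documented here:

    # After selecting "SDK mode" when the phone is plugged in, the phone's
    # built-in DHCP server configures the USB network interface on the host.
    # The address and user name below are assumptions and may differ.
    $ ssh developer@192.168.2.15
    $ uname -a        # commands now run on the handset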
For the most part, the phone environment feels like Linux. There is,
however, no functioning su command; one is, instead, supposed to
use devel-su. The result is a shell that claims to be root, but
all it takes is a find command run from the top of the filesystem
to see that root is not all-powerful on this system. There are certain
things that one still cannot access or change.
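A hypothetical session illustrating the point; the protected paths vary by firmware, so no specific output is shown:

    $ devel-su                  # prompts for the device's root password
    Password:
    # find / -xdev > /dev/null
    (stdout is discarded; the "Permission denied" errors left on the terminal
    mark the places where even this "root" shell is locked out)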
This behavior is the result of the MeeGo
security framework in action. Through a combination of trusted
computing techniques and mandatory access control, Nokia keeps the device
locked down at a certain level. It wouldn't do, after all, to let those
pesky users have direct access to the media files that they think they
bought on their handset.
Of course, keeping the users away is not the only motivation for the
security framework; it is also intended to prevent applications from acting
against the users' interests. Applications are installed with "resource
tokens" describing the actions they are allowed to carry out; they include
the ability to query location information, access the camera, make
calls, etc. Superficially it looks a lot like the Android
permissions mechanism, but the implementation appears to be wired more
deeply in at the kernel level.
Notably, the application installer does not expose resource
tokens to the user, so there is no way to know what types of access a given
application will have - a major difference from Android. One suspects that
most Android users never look at the list of requested permissions, but a
subset of us tend to examine them closely indeed. The inability to know
what access has been granted to an application seems like a major
shortcoming. That will be doubly true anywhere outside of a strict
walled-garden application repository; on this system, applications from
outside Nokia's store, if they can be installed at all, can only have a
restricted set of permissions. But, restricted or not, the user should
have the chance to review the permissions requested by an application.
What if you want to bypass the mandatory access control and truly have full
access to the device? The answer would appear to be a tool called INCEPTION. It
allows the installation of applications with full privilege; one can also
disable the security framework altogether. Your editor has not had the
time to play with this tool, but it appears to be the ticket for those who
are eager to void their warranties and reach for full control of the device.
Perhaps a true measure of the freedom of a piece of hardware is the
existence of independent operating system distributions for it. In the
Android world, there is CyanogenMod along with a long list of less
well-known, often more dubious, "mods." For the N9 the alternatives on
offer are somewhat more restricted, but those who are truly adventurous
can give NemoN9
a try. Nemo is the
current incarnation of the "Mer" project; it is trying to continue the
development of the MeeGo framework as an independent effort.
Unfortunately, activity in this project seems to have slowed considerably,
though it is still producing regular
releases and its use in the upcoming Vivaldi tablet may spur development in
the future. What releases Nemo has made have not found their way over to
NemoN9, though, which was last updated in November, 2011.
The end of the line
Your editor has often said in the past that MeeGo could become a credible
challenger to Android and a strong force in the mobile world in general.
After some hands-on experience with a MeeGo device, that impression has not
changed. MeeGo provides a polished and pleasant user experience. It falls
short of current Android releases in some ways, but it is much nicer to use
than the early Android-based devices were. With a bit of work, MeeGo could
have been a truly competitive - and more community-friendly - alternative.
The fact that things did not turn out that way is a sad comment on the
state of the market and the management of certain companies.
The good news is that the developers who worked on this system are out
there; many of them are still employed at Nokia. MeeGo may even see some
further development for devices other than handsets. But the sad fact is that Nokia
has placed its bets on a proprietary operating system with uncertain
prospects in the mobile market. If that bet does not work out as hoped,
Nokia may yet rediscover the high-quality, free-software alternative at its
disposal. Then, perhaps, we'll see a new attempt to put MeeGo-based
handsets on the market. For now, though, the N9 has all the look of a
solid, sleek and polished platform with no future. In truth, it deserved
better than that.