By Jake Edge
June 19, 2013
The Raspberry Pi has clearly made
a splash since its debut as a consumer product in April 2012. Thanks to
the generosity of the Python Software Foundation, all of the attendees at
this year's PyCon were given one of the
diminutive ARM computers, a giveaway that was announced just prior to
Raspberry Pi
founder Eben Upton's keynote. While it has taken a
while to find time to give it a try—conference season is upon us—that has
finally come to pass.
Background
For anyone living under a rock (or, perhaps, just largely uninterested in
such things),
the Raspberry Pi—often abbreviated "RPi"—is a
credit-card-sized Linux computer that is targeted at children. While it
may have
been envisioned as an educational tool to teach kids about computers and
programming, there seem to be plenty of adults "playing" with the RPi as
well. It has modest hardware (a 700MHz ARM11 core with 512MB of RAM for
the Model B) by today's—even yesterday's—standards, but it is vastly more
powerful than the 8-bit microcomputers that served as something of a role
model in its design.
The original price tag was meant to be $25, but that couldn't quite be met,
so the Model B (which was the first to ship) was priced at $35.
Eventually, the Model A (without on-board Ethernet) did hit the $25 price
point. In either case, it is a low-cost device that is meant to be
affordable to students (or their parents) in both the developed and
developing world. It requires a monitor (either composite video or HDMI)
and a USB keyboard and mouse, which will add to the cost somewhat, but, at
least in some areas, cast-off televisions and input devices may not be all
that hard to find. Given its size, an RPi can be easily transported
between home and a computer lab at school as well.
The goal is to give students a platform on which they can easily begin
programming without having to install any software or do much in the way
of configuration; turn it on and start hacking. Because of the price, an
interested child could have their own RPi, rather than vying for time on a
shared computer at school or at home. That, at least, is the vision that the
project started with, but its reach quickly outgrew that vision as it has
been adopted by many in the "maker" community and beyond.
The "Pi" in the name stands for Python (despite the spelling), which is one
of the primary programming environments installed on the device. But
that's not all. The Raspbian
distribution that came on an SD card with the PyCon RPi also includes the Scratch visual programming environment
and the Smalltalk-based Squeak (which
is used to implement Scratch).
As its name would imply, Raspbian is based on Debian (7.0 aka "Wheezy"). It
uses the resource-friendly LXDE desktop environment and provides the Midori
browser, a terminal program, a local Debian reference manual, the IDLE
Python IDE (for both Python 2.7.3 and
3.2.3), and some Python games as launcher icons on the desktop.
Firing it up
Starting up the RPi is straightforward: hook up the monitor, keyboard, and
mouse, insert the SD card, and apply power. Using three of the general
purpose I/O (GPIO) pins on the device will provide a USB serial console,
but it isn't generally needed. Once it boots, logging in as "root" (with
no password) for the first time will land you in the raspi-config tool.
Or you can log in as "pi" with password "raspberry" to get to the command line.
The configuration tool allows changing settings for the device, such as the
time zone, "pi" user password, starting up X at boot, enabling
sshd (set a root password first), and changing the memory split
between Linux and the GPU.
From the command line, though, the venerable startx command will
bring up the LXDE environment. One note: when using an HDMI-to-VGA converter,
some tweaking of the video mode may be required.
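Those tweaks live in the config.txt file on the SD card's boot partition; a plausible set of settings looks something like the following (the specific mode values are illustrative and depend on the monitor; consult the RPi config.txt documentation):

```
# /boot/config.txt -- example settings for an HDMI-to-VGA converter
# (values are illustrative; the right mode depends on the monitor)
hdmi_force_hotplug=1    # treat HDMI as connected even if not detected
hdmi_group=2            # use DMT (computer monitor) timings, not CEA (TV)
hdmi_mode=16            # DMT mode 16: 1024x768 at 60Hz
config_hdmi_boost=4     # stronger HDMI signal for marginal adapters
```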
It should come as no surprise that, once configured, the system behaves
like a normal Debian
system. The initial "apt-get upgrade" took quite some time, as
there were lots of packages to pick up, but subsequent upgrades have been
quick. It is entirely suitable for its intended purpose, but can be
expanded with the packages available from the Raspbian (and other)
repositories.
NOOBS
Of course, there are other distribution choices to run on the RPi. In early
June, the Raspberry Pi Foundation (the organization behind the device) announced the "New Out
Of Box Software" (NOOBS) installer that makes it much easier to get
started. The NOOBS zip file needs to be downloaded and unpacked onto a 4GB
or larger SD card, but once that's done, multiple distributions can be
installed without needing network access or requiring special imaging
software to put a boot image onto the card.
NOOBS acts like a recovery image, in that it will prompt to
install one of several distributions on first boot, but it is always
available by holding down the shift key when booting. You can overwrite
the existing distribution on the card to recover from a corrupted
installation or to switch to one of the others. In addition, it has a tool
to edit the config.txt system configuration file for the currently
installed
distribution or to open a browser to get help right from NOOBS.
Using NOOBS is meant to be easy, and it was—once I could get it to boot.
My choice of using a VGA monitor (thus an HDMI-to-VGA converter) meant that
I needed a development version of NOOBS and the config.txt from
Raspbian.
NOOBS provides images for several different distributions: Arch
Linux ARM, OpenELEC, Pidora, Raspbian (which is recommended), RaspBMC, and RISC OS. OpenELEC and
RaspBMC are both XBMC-based media-centric
distributions, while Arch Linux ARM, Raspbian, and Pidora are derived from
their siblings in the desktop/server distribution world. RISC OS is the original
operating system for Acorn computers that used the first ARM processors. It
is a proprietary operating system (with source) that is made available
free of charge for RPi users.
Installing Pidora using NOOBS was simple, though it took some time for
NOOBS to copy
the distribution image to a separate SD card partition. Pidora seems to use
the video-mode information from the NOOBS config.txt, as there were no
problems on that score. Running startx appears to default to GNOME (which is not
even installed), so the desktop wouldn't start up; switching
the default desktop to Xfce in /etc/sysconfig/desktop may be
required. Once installed, booting gives a choice of NOOBS (by holding down
the shift key) or Pidora (or whatever other distribution is installed).
It is a fully functional installation, not LiveCD-style, so there is a
writable ext4 partition to store programs and other data (like the screen
shot at right) or to add and update packages on the system.
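Pidora presumably follows Fedora's convention for that file; the variable name below comes from Fedora's /etc/sysconfig/desktop handling and should be verified against the installed system:

```
# /etc/sysconfig/desktop -- make startx bring up Xfce instead of GNOME
DESKTOP="XFCE"
```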
There are a lot of people and projects using the RPi for various interesting
things. The front page blog at the RPi home page is regularly updated with
stories about things like an RPi lab in Ghana, a sailing robot using an
RPi for navigation and control, and the Onion Pi, a Tor proxy
running on an RPi. In his PyCon keynote, Upton listed numerous
projects that have adopted the RPi for everything from music synthesizers
to art installations and aerial photography from weather balloons.
The RPi is being used to research and test new technologies as well.
There are plans afoot to
switch from X to the Wayland display server protocol, which will make it a
useful testing ground for Wayland and Weston. Beyond that, the foundation
has been helping to fund PyPy, the Python
interpreter written in Python, as a way to improve the performance of
the language on the device.
It seems that some combination of capabilities, community, and, perhaps,
marketing has led to the RPi's popularity. The focus on providing a
platform to learn programming that was portable and easy to use has widened
far beyond that niche. It has
resulted in
an ecosystem of companies that are selling accessories for the RPi
(including things like cases, add-ons for controlling other devices using
the GPIO pins, sensors, and so on). But it is probably the "fun" aspect
that is the biggest push behind much of the RPi's momentum—the system
really does hearken back to the days of TRS-80s and other 8-bit computers,
but with color, sound, video, and a lot more power.
By Jonathan Corbet
June 17, 2013
The Ubuntu desktop has been committed to the Unity shell for some time;
more recently, Canonical also
announced
that Ubuntu will be moving over to
the new, in-house Mir display server. That decision raised a number of
eyebrows at the time, given that most of the desktop Linux community had
long since settled on Wayland as its way forward. As time passes, though,
the degree to which Canonical is breaking from the rest of the community is
becoming increasingly clear. The Linux desktop could never be described as
being "unified," but the split caused by projects like Mir and
SurfaceFlinger may prove to be more profound
than the desktop wars of the past.
Former Canonical developer Jonathan
Riddell started the most recent discussion with
some worries about the future of Kubuntu,
the KDE-based flavor of the Ubuntu distribution. KDE does not currently
run on Mir, and some KDE developers (such as KWin lead developer Martin
Gräßlin) have made
it clear that they are not interested in adding Mir support. So Ubuntu
will be shipping with a display server that does not support KDE in any
sort of native mode. While libraries providing X and Wayland protocol
support for Mir will certainly
exist, they are unlikely to provide the level of functionality needed by
desktop components like the KDE core. The result, Jonathan said, was that
"the
switch to Mir in Ubuntu seems pretty risky for the existence of
Kubuntu"; he wondered about how other Ubuntu flavors might be
affected as well.
Unsurprisingly, the developers working on Mir insist that they do not want
to throw out the non-Unity desktop environments. Ubuntu community manager
Jono Bacon was quick to say:
[I]t would be a failing of the Mir project if it meant that flavors
could no longer utilize Ubuntu as a foundation, but this is going
to require us to collaborate to find good solutions.
In other words, Canonical has a certain willingness to help make other
desktop environments work on Mir, but it will take some effort from the
developers of those environments as well. More specifically, Thomas Voß has offered to work with the developers of KWin
(the KDE window manager) to find ways to make it work within the Mir
environment. Assuming that a path forward is found, it is entirely
possible that Kubuntu will be able to run under Mir on an Ubuntu-based
system.
The problem is that such solutions are likely to be second-class citizens
in general, and there are reasons to believe that the situation could be more
acute in this case. The Mir
specification does not describe it as a display server for all desktop
environments; instead, it says "The purpose of Mir is to enable the
development of the next generation Unity." There are a number of
implications that come out of a position like that, not the least of which
being that Mir and Unity appear to be developed in lockstep with no
particular effort to standardize the protocol between them.
Canonical developer Christopher Halse Rogers described
the situation in fairly direct terms:
We kinda have an explicit IPC protocol, but not really. We don't
intend to support re-implementations of the Mir client libraries,
and will make no effort to not break them if someone tries.
This position differs significantly from that of the Wayland project, which
has based itself on a stable protocol specification. Leaving the system
"protocol-agnostic" (that's the term used in the Mir specification)
certainly gives a lot of freedom to the Mir/Unity developers, who can
quickly evolve the system as a whole. But it can only make life difficult
for developers of any other system who will not have the same level of
access to Mir development and who might like a bit more freedom to mix and
match different versions of the various components.
The result of this approach to development may well be that
Mir support from desktop environments other than Unity ends up being
half-hearted at best; it cannot be a whole lot of fun to develop for a
display server that exists primarily to support a competing system. Few
other distributions have shown interest in using Mir, providing another
disincentive for developers. So, as the
X Window System starts to fade away into the past, Ubuntu looks to be left
running a desktop stack that is not used to any significant degree anywhere
else. Ubuntu, increasingly, will be distinct from other distributions,
including the Debian distribution on which it is based.
The success of Android (which uses its own display server called
SurfaceFlinger) shows that reimplementing the stack can be a workable
strategy. But there must certainly be a limit to how many of these
reimplementations can survive in the long run, and the resources
required to sustain this development are significant. Canonical is taking a
significant risk by separating from the rest of the graphics development
community in this way.
Over the many years of its dominance, X has been both praised and
criticized from many angles. But, perhaps, we have not fully appreciated
the degree to which the X Window System has served as a unifying influence
across the Linux desktop environment. Running one desktop environment did
not preclude using applications from a different project; in the end, they
all talked to the X server, and they all worked well (enough). Over the
next few years we will see the process of replacing X speed up, but there
does not appear to be any
single replacement that can take on the same unifying role. We can expect
the desktop environment to fragment accordingly. Indeed, that is already
happening; very few of us run Android applications on our desktop Linux
systems.
"Fragmentation" is generally portrayed as a bad thing, and it certainly can
be; the proprietary changes made by each Unix vendor contributed to the
decline of proprietary Unix as a whole. But we should remember that the
divergence we are seeing now is all happening with free software. That
means that a lot of experimentation can go on, with the best ideas being
easily copied from one project to the next, even if licensing differences
will often prevent the movement of the code itself. If things go well, we
will see a quicker exploration of the space than we would have under a
single project and a lot of innovation. But the cost may be a long period
where nothing is as well-developed or as widely supported as we might like
it to be.
By Nathan Willis
June 18, 2013
An unfortunate drawback to the scratch-your-own-itch development
model on which many free software projects depend is that
creators can lose interest. Without a maintainer, code gets stale and
users are either stranded or simply jump ship to a competing project.
If the community is lucky, new developers pick up where the old ones
left off, and a project may be revived or even driven to entirely new
levels of success. On the other hand, it is also possible for
multiple people to start their own forks of the code base, which can
muddy the waters in a hurry—as appears to be happening at the
moment with the 2D animation tool Pencil. Plenty of people want to
see it survive, which has resulted in a slew of individual forks.
Pencil, for those
unfamiliar with it, is a "cell animation"
application, which means that it implements old-fashioned animation
drawn frame by frame (although obviously the software helps out
considerably compared to literally drawing each frame from scratch).
In contrast, the other popular open source animation tools Tupi and
Synfig are vector-based, where motion comes from interpolating and
transforming vector objects over a timeline. Despite its
old-fashioned ambiance, though, Pencil has proven itself to be a popular
tool, particularly for fast prototyping and storyboarding, even when
the animator may create the final work in a vector application.
Original Pencil maintainer Pascal Naidon drifted away from the
project by 2009. At that time, the latest release was
version 0.4.4, but there were newer, unpackaged updates in the
Subversion source repository. Version 0.4.4 eventually started showing
signs of bit-rot, however, particularly as newer versions of the Qt
framework (against which Pencil is built) came out. Users of the
application have nonetheless continued to maintain a community on the
official site's discussion forum.
A box of forks
Understandably, there was never a ton of Pencil users,
at least compared with the audience for a general-purpose desktop application. But
the dormant project picked up a dedicated follower when the Morevna
Project, an open source anime movie project, adopted it for its workflow. Morevna's
Konstantin Dmitriev began packaging his own fork
of Pencil in late 2009, based on the latest official Subversion code.
He added keybindings for commands and command-line options to integrate
Pencil with Morevna's scripted rendering system, and fixed a number of
bugs. Subsequently, he began adding new features as well:
user-selectable frame rates, some new editing tools, and support for
multiple-layer "onion skinning." Onion skinning in animation is the UI
technique of
overlaying several (usually translucent) frames onto the current
drawing, so the animator can visualize motion. But there are also a
lot of bug fixes in the Morevna fork that deal with audio/video import
and export, since the team uses Pencil to generate fill-in
sequences for unfinished shots. Since Morevna is a Linux-based
effort, only Linux packages are available, and they are still built
against Qt 4.6.
In contrast, the Pencil2D
fork started by Chris Share eschewed new features and focused squarely
on fixing up the abandoned Subversion code for all three major desktop
OSes. Share's fork is hosted at SourceForge. One
of the fixes he applied was updating the code for Qt 5, but that
decision caused major problems when Qt 5 dropped support for
pressure-sensitive Wacom pen tablets, which are a critical tool for
animators. In early June 2013, Matt Chang started his own fork also at the
Pencil2D site, using Qt 4.8.4. Whether Share's fork hit a brick wall
with the Qt 5 port or simply stagnated for other reasons is unclear;
Chang's, in any case, is still active, to the point where he has posted a
roadmap on the Pencil2D forum and is taking feature suggestions. Chang has only
released binaries for Windows, but he believes the code will run on
Linux and OS X as well, and maintains it for all three.
Both of the forks at Pencil2D headed off on their own, rather than
working with Dmitriev's Morevna fork. More to the point,
Chang's roadmap includes a different set of drawing tools and a
separate implementation of function keybindings. Luckily, the two forks'
editing-tool additions do not conflict: the Morevna fork adds a
"duplicate this frame" button and controls for moving layers, while
Chang's adds object transformations and canvas rotation.
In contrast to the other Pencil forks, the Institute for New Media Art
Technology (Numediart) at the University of Mons took its fork in an entirely
different direction as part of its "Eye-nimation"
project. Eye-nimation is used to produce stop-motion animation.
Numediart's Thierry Ravet integrated support for importing images
directly from a USB video camera into Pencil, where the images can be
traced or otherwise edited. It uses the OpenCV library to grab live input from
the camera, and adds image filters to reduce the input to black and
white (bi-level, not grayscale) and smooth out pixelation artifacts.
Ravet spoke about the project at Libre
Graphics Meeting in April. The work is cross-platform, although it is
built on top of an earlier release of the original Pencil code,
0.4.3.
As if three concurrent forks were not enough, many Linux
distributions still package the final official release from the
original project, 0.4.4. And there are several independent Pencil
users who maintain
their own builds of the unreleased Subversion code, some of which
refer to it as version 0.5.
Sharpening up
On the off chance that one might lose count, the total currently
stands at five versions of Pencil: the final release from the original
maintainer (0.4.4), the unreleased Subversion update, the Morevna
fork, Chang's Pencil2D fork, and Numediart's Eye-nimation. The
situation is a source
of frustration for fans of the program, but how to resolve it is
still quite up in the air. Dmitriev maintains the Morevna fork for
utilitarian reasons (to get things done for Morevna); his preference
is to work on Synfig and he does not have time to
devote to maintaining Pencil in the long run, too. Chang does seem
interested in working on Pencil and in maintaining his fork as an open
project that is accessible to outside contributors.
But combining the efforts could be a substantial headache. The
Morevna fork is considerably further along, but Chang has already
refactored his fork enough that merging the two (in either direction)
would be non-trivial, to say the least. And it is not clear whether
the Eye-nimation feature set is something that other Pencil users
want; Dmitriev expressed
some interest in it in his post-LGM blog report, though he was
concerned that Numediart had not based its work on the Morevna fork.
The primary competition for Pencil is the prospect that
cell-animation support will get added to another program. Krita has
a Google
Summer of Code (GSoC) student working on the feature (in addition to the
partial support already
added), while Dmitriev said in a private email that he hopes one
day to implement cell-animation features in Synfig. If either effort bears
fruit, that would be a positive development, but in the near term, news
of something like a GSoC project can sap energy from existing efforts
and yet might still ultimately fall short itself.
It is a fairly common problem in the free software community for a
well-liked project to fizzle out because the maintainers can no longer
spend the time required to develop it and no one else steps up. It is
rarer for multiple parties to independently take up the mantle and
produce competing derivatives—especially in an obscure niche
like traditional animation software. But when that does happen, the
surplus energy, if it remains divided, can still end up doing little
to revitalize a project that many users want to see make a return.
Page editor: Nathan Willis
Inside this week's LWN.net Weekly Edition
- Security: Tor Browser Bundle 3.0; New vulnerabilities in kernel, perl-Module-Signature, puppet, xen, ...
- Kernel: Power-aware scheduling; Tags and IDs; Merging Allwinner support
- Distributions: A Debian GNU/Hurd snapshot; Debian, openSUSE, RHEL.
- Development: The Places API for Firefox; Subversion 1.8.0; LLVM 3.3; LibreOffice progress under the hood; ...
- Announcements: SCO v. IBM reopened, TDF welcomes MIMO, OSI elections, ...