By Nathan Willis
February 27, 2013
The day before SCALE 11x in Los
Angeles was devoted to a series
of mini-summits; some centered around a single software project (such as
Puppet or Ubuntu), but others were topical—such as the FOSS
Mentoring summit. That summit was devoted to the
mechanics of working with the human aspects of a free software
project, from developers to end users. For example, Nathan Betzen
of XBMC discussed the communication tools, from forum packages to
external social networking services, and how the project makes use of
each. Karsten Wade spoke about infrastructure volunteers (e.g.
as system administrators), and how to recruit, train, and foster
increased participation. Robyn Bergeron spoke about organizing and
managing predictable release cycles in a project that relies on
unpaid—and often hard-to-predict—volunteer
contributors.
Communication tools
Betzen has been working with XBMC since 2008, and currently serves
as community manager. His talk dealt with end-user-facing
tools—blogs, forums, and so forth—as distinguished from
development-centric tools like source code management or build
systems. In large part, he said, this is of interest because XBMC is
a user-facing application with a large community of users who are not
engaged in software development.
He started off with general advice on "how not to suck" at user
community interaction. This advice included
sincerity in messaging (communicating why you honestly
feel your project is great, rather than crafting an artificial
message), restricting your posts to appropriate and relevant
information (regardless of how awesome your cat photos may be), and
seeking relevant input from the users. On that point, he said that
starting an open source project is a public statement that you expect other
people will want to work on the product. The earlier you accept
that—including the fact that these others will bring different
opinions and plans—the better your communication will be.
He then summarized the most common avenues for user communication,
and what XBMC has learned about their suitability for different
purposes. Web-based discussion forums are the "heart" of your
community, he said: where everyone is going to go to talk to and ask
questions of the development team, and where they will look for help.
He gave no advice on which specific software package to use, but said the
important thing is that users can register and leave their words on
your project's footprint on the Internet. He gave some "best
practices" advice, such as trying to shut down as few threads as
possible (other than lawbreaking topics and bots). Even if your
community becomes known for its off-topic discussions, people are
still coming to the site and learning about your software. He also
advised making forum rules early (and making them highly visible), and
making as much intra-team communication open as possible, even if that only
means posting read-only threads summarizing team meetings.
Blogs are distinctly different from forums as a community tool,
Betzen said. If comment threads on individual posts are the only
place where discussions occur, the project suffers. First, individual
posts come and go (including scrolling off the bottom of the first page
rather quickly); second, blogs allow only site administrators to start a
topic of discussion. That prevents community members from saying
what's on their minds, and it creates a bottleneck. "You want as many
people as possible to do the work of making conversations."
He then discussed the relative pros and cons of the current social
networking services, starting with "the great beast," Facebook.
Facebook is a "not terrible" way to gauge the reach of your project,
he said, and is considerably better than log-based web analytics. The
reason is that every Facebook account is bound to a real person.
Thus, in addition to calculating your own project's reach, you can
compare it to the reach of your competitor projects. It is also a
useful outreach tool, since it is one of the default places where people
unfamiliar with your project start their searches. It
works well as a "personality outreach" tool, where you can ask end
users to submit ideas and content of their own. For example, he said
that XBMC regularly asks users to post photos of their own XBMC
systems, and frequently gets submissions for themes, mascots, and
other mock-up designs that eventually become reality.
In contrast to Facebook, Betzen said, Twitter is a terrible way to
measure reach, since it is dominated by fake accounts and bots. "Half
the people on Twitter aren't real people," he said. "You can become a
'Twitter master' with thousands of followers, most of whom don't know
who you are and aren't even real." Still, he said, it functions well
as a front-line support service, and is useful to visibly communicate
with non-users and participate in the larger world.
Reddit can be a good tool, he continued, but it really works best
for already-engaged projects. He advised against creating your own
"subReddit," since that can be seen as self-aggrandizing, but added
that the site can be useful as a persistent news outlet and to
promote "causes" and campaigns. For example, he cited the case of a
third-party port of XBMC to the original Xbox hardware: when PayPal
unilaterally shut off the group's donation account without
explanation, a Reddit furor arose that was credited with prompting
PayPal's decision to mysteriously reinstate the account (again without explanation).
Finally, he discussed Google+ and YouTube. Google+ is popular
among developers but not with users, yet it has a killer feature in the
"Hangout" video chat system. This allows people to talk directly to a
project team, which is a great way to engage, and it can be used to
broadcast live meetings. Similarly, YouTube videos offer some
"magic", because there are frequently features or procedures that are
clear in a screencast, but terrible to try to read on a "list of
steps that was written on a wiki several years ago and hasn't been updated."
Infrastructure
Wade's session also dealt with the practicalities of managing a
project, in particular how to organize and maintain infrastructure:
the systems administration and support tasks that keep a project
functioning smoothly (as opposed to the development of the actual code).
He highlighted the importance of treating infrastructure as an avenue
of open participation, just like hacking.
Wade's advice had two sides. First, using open tools protects
the project (in that owning your data and other "stuff" is critical),
and it allows other open projects to see how you do things and learn
from them. Second, making infrastructure participation part of the
project gives interested administrators a way to pitch in and support
the project even if they are not prepared to be developers, and (as
with coding) it offers a good way for those interested in systems
administration to learn and equip themselves with skills that will be
valuable later.
He drew examples from his experiences on the Fedora distribution
and on the oVirt project. The first piece of advice was to divide
project infrastructure into three categories: core-essential,
core-nonessential, and non-core. Core-essential pieces are those that
are required to actually develop the software, such as source code
management, build systems, and testing tools. Core-nonessential
pieces are those necessary to keep project participation functioning,
but are not related to the project code itself, such as
configuration-management tools or mailing list software. Non-core
pieces are those concerned with data and metadata, and are generally
orthogonal to the system on which they run, such as wikis,
documentation, and blogs.
These categories should be treated separately, he said, and doing
so benefits both administrators and developers. Developers may prefer
to take care of the core-essential pieces themselves, but they do not
need to take time setting up Mailman configuration or provisioning
database servers. Administrators, too, can divide up tasks along these
layers, letting go of the control "down to the bare metal" that
administrators often want to keep.
Wade instead advised projects to treat the "holy grail" of
administration—root access on the server—just as they treat
the holy grail of development: commit access. By starting new
infrastructure volunteers on non-core tasks (even wiki maintenance or
web page editing), they can learn the system, gain experience and the
trust of other administrators, and work their way toward the core
systems. Having a transparent process and a meritocratic system are
ideals that define open source projects, he said, and they apply to
infrastructure administration just as they do to anything else.
A lot of administrators treat the prospect of sharing root
access warily, since it is tricky to trust volunteers whom they have
never met in person. But Wade argued that such trust-building is no
different from the trust-building process required of new developers.
It is just a mindset many administrators have not adopted. Projects
can and should consider sandboxing systems and other tools to guard
against accidental catastrophes, but ultimately administrators need to
remember that systems administration is not a life-or-death endeavor,
and if someone on the team does make a mistake, the project can always
roll back the change. Of course, that advice does assume that the
project is keeping backups, but that is hardly a new piece of advice.
He also offered several practical ideas for projects looking to get their
infrastructure into better shape. The first is to start small and
scale up as the project grows. Initially, the bare minimum that the
project requires will have to do, even if that is a bargain-rate web
hosting plan. The second idea is to find ways for interested sponsors
to contribute to the infrastructure as a means of supporting the
project. Providing tools or covering all or part of the hosting bills
is a good way to let companies visibly support the project, and by
allowing multiple companies to contribute, it shows that the project
has broad support (as well as giving all vendors equal
opportunities). He mentioned that several companies provide servers
to the Fedora project; whichever one is responsible for serving up a
particular page is indicated on the page itself.
Finally, he provided specific examples of how the oVirt and Fedora
projects each split up their infrastructure organization. Fedora's
core-essentials include the Koji build system, pkgDB database, Bodhi
updater, Yum update system, source code management, testing
infrastructure, and "Fedora people" web pages (the latter because they
are sometimes used to provide package repositories). Fedora's
core-nonessentials include Bugzilla, MediaWiki, WordPress, Planet,
elections infrastructure, and mirror management system. The non-core
pieces include web hosting and FTP mirroring. The oVirt
core-essentials are fewer in number: Git, Gerrit, Jenkins, and Yum.
Its core-nonessentials include Puppet, Foreman server manager, and
Mailman. Its non-core pieces include MediaWiki and to some extent
Mailman (here Wade observed that sometimes the borders between the
layers can be fuzzy), and external services like GitHub and Identi.ca.
The audience asked several questions, such as how IRC fits into the
mix. Wade replied that IRC is a tricky one, since it is used for
communication but is also integral to the development process. The
projects tend to use external IRC networks like OFTC and Freenode, but
run their own logging and announcement bots (which he considers pieces
of project infrastructure). Another audience member suggested that
"core" and "essential" might be taken as loaded words, which Wade
readily conceded (who wants to hear they have been deemed
nonessential, after all?). He said he was open to better suggestions,
but that the principle of layers and levels was the main point.
Release cycles
Bergeron discussed "project management" in what she called the
classical, business-lingo sense—specifically, addressing the
questions of "how do we get this puppy out the door on time?" and "how
do we juggle the logistics?" In the business world, there are project
management certifications and training classes, she said, but none of
them talk about how to apply their tactics to the open source
approach, which has its own peculiarities. First, open source projects are
composed of many people (often including critical team members) who are
volunteering their time, and cannot be dictated to like a salaried
employee. Second, such projects must cope with constant uncertainty
in turnover, such as people changing jobs, going away to college, or
losing interest for other reasons. Bergeron offered her own advice on
this type of project management based on her experiences as Fedora
project leader, specifically with regard to managing a development and
release cycle.
The first piece of advice is to have a schedule. Shipping
on time is the holy grail, she said, but it is important to recognize
that shipping on time at the cost of angering and demoralizing the
project's members is not worth it—ideally, you want your project
to grow continually. How the schedule is determined needs to scale
with the project, as does how it is communicated to project members as
a whole. For a three-person team, putting it on a single web page may
be sufficient; for a large project it is not. It does, however,
actually need to be published somewhere. Mentioning it in a
blog post is not good enough, as eventually it will scroll off the
bottom of the page.
The next piece of advice is to communicate everything. A lot of
people assume that open source projects are constantly communicating,
but there are things every project can do better. In brief, she said,
you cannot remind anyone too frequently about anything. But clarity
is the key, so although important things like the schedule need to
be communicated broadly to avoid the "well nobody told me
about that" problem, they also need to be published in a
consistent, predictable manner. In other words, emailing it and
posting it at wiki/Schedule one time, then blogging it and
posting at wiki/Calendar the next is a bad idea. An audience
member asked how to communicate constantly without it becoming
annoying; Bergeron replied that that was a risk but it could be
mitigated with a good tone and a smile.
The next piece of advice is to manage how much work takes place
during the release cycle. Projects need to agree on what features and
changes will receive attention, and not simply let everyone do
whatever they want when they want to do it. But these plans also need
to treat failure as a possibility, and to plan for incomplete efforts and
how to roll back changes. Change-planning can be contentious, she
said, but it is critical that the project have the conversations in
the community, or else people will leave. Sometimes people hear that
warning and think "I want X to leave, he's a damn fool," she
said, but you never know who he will take with him or where he might go.
The fourth piece of advice is to learn to live with Murphy (as in
Murphy's
Law). Problems will happen, she said, but when they do they
usually contain learning opportunities. In addition, coming through
the problem together can be good for the community, as it fosters a
sense of camaraderie. The final piece of advice was to consciously
set expectations for each cycle. The best way to do that is to be
transparent about the decision-making process and to be
communicative. Surprises can scare people off, she said, but
invitations to help out are awesome.
In the audience question-and-answer session, one person asked what
tools there are for project management. Bergeron recommended
TaskJuggler, among other open
source options. Another audience member asked how projects can
prevent "collisions" where more than one person wants to work on the
same thing. Bergeron replied that Fedora has several teams that
collaborate, which is one way to tackle the problem, but added that it
was rarely a practical concern. Most of the time, there is far more
work needing to get done than there are volunteers to do it.
All three sessions provided practical advice, and, judging by the
number of audience questions (of which only a fraction were recounted
above), that advice was useful for quite a few attendees. Since the
free and open source software ecosystem is one that exists to produce
software, it can be all too easy to spend too much time thinking about
revision control systems and contributor agreements and conclude that
those topics cover what it takes to manage a successful project. But
as Wade said, what really makes a project "open" is how it functions,
not what license it uses; that applies to communication,
infrastructure, and scheduling just as it does to development.
By Nathan Willis
February 27, 2013
Kyle Rankin is a systems administrator by trade, but a 3D printing
aficionado by hobby. At SCALE 11x in Los Angeles, he presented the
Sunday morning keynote
address, which looked at the history and present circumstances of
the 3D printing movement—and drew parallels with the rise of
Linux.
Rankin described himself as a "software guy," not a hardware hacker,
a fact that slowed down his entry into the 3D printing world, where
most projects demand quite a bit of fabrication and soldering-iron
skill. In many of his other hobbies, he said, he ends up using a
Raspberry Pi or other embedded Linux systems instead of the more
common Arduino microcontroller, since he can solve his problems with
software. Consequently, he followed the home 3D printer world for more
than a year before finding the right product and making a printer
purchase. That purchase was a Printrbot, an open hardware printer
that includes pre-assembled electronics.
Apart from finding the right hardware, Rankin said a big challenge
to getting started with 3D printing was justifying the up-front
expense to his wife's satisfaction. Initially this was a big
obstacle, he said, because the only answer to the question "So what
can you make with a 3D printer?" seemed to be "parts for another 3D
printer." As time went on, however, the array of printable object
possibilities broadened, including hard-to-find parts for fixing
broken appliances, household tools and objects, and baby toys.
How did we get here
Rankin then outlined the history of the home 3D printing movement,
which he said included a number of parallels to the growth of Linux.
For example, initially 3D printing was the exclusive domain of
high-end devices costing hundreds of thousands of dollars, and
affordable only to corporations. But as happened with "Big Iron"
Unix, eventually lower-cost devices (under US $30,000) became
available at universities, at which point do-it-yourselfers began
asking themselves whether they could build similar systems at
home.
In 2004, Adrian Bowyer announced the RepRap project, which Rankin said
was akin to Linus Torvalds's initial post to comp.os.minix announcing
Linux. RepRap was designed from the beginning to be an open source
software and hardware project to create a 3D printer that could be
built by anyone, without the need for specialty materials—any
parts that were not easily available off the shelf had to be printable
with a RepRap itself. As the project picked up speed, Rankin said it
evolved into the Debian of 3D printing, thanks in large part to its
commitment to avoid parts that were only available in certain regions.
In 2006, a RepRap device first printed a workable part for another
RepRap, and in 2007 the first device printed a complete set of RepRap
parts. Around this same time, community members started up side
businesses printing and selling these parts, which Rankin likened to
the Linux CD-pack businesses of the 1990s. In 2008, early adopter
Zach Smith founded Thingiverse, a site for publishing and sharing 3D
printable object models; the site proved to be the most important
contribution to the 3D printing movement since RepRap. In 2009, Smith
and others founded MakerBot, a for-profit company centered around
selling open source 3D printers, which Rankin said was akin to Red
Hat's foray into building a commercial business around a Linux
distribution. As with Red Hat Linux, derivatives began to appear, such
as the modified versions of MakerBot printer kits sold by MakerGear.
Over the following years, the designs and capabilities of 3D
printers evolved rapidly. The RepRap Prusa Mendel was released in
2010, with noticeable improvements in simplicity and buildability over
earlier designs—to the point where the Prusa Mendel is still the most
popular RepRap model today. In 2011, Printrbot appeared on
Kickstarter, with a funding goal of US $25,000. Printrbot's aim was
to create an end-user-focused printer that was easier to build and use
than the earlier designs, with the motto "a 3D printer in every home."
Rankin compared this to Ubuntu with its emphasis on creating a quality
end-user experience. Like Ubuntu, Printrbot proved popular, ultimately
raising US $830,000 and starting a deluge of 3D printer–related
Kickstarter projects.
December 2011 saw the release of the RepRap Mendel Max, which was
built on a tougher aluminum-extrusion frame that enabled
significantly faster printing by eliminating vibrations. In early
2012, MakerBot revealed another leap forward, with a dual-extrusion
option for its Replicator printer. Dual-extrusion printing allowed
for cosmetic options like multi-color prints, but it also allowed
users to print different types of objects (such as by using one print
head to print "support structures" in a water-soluble plastic). In
September, however, MakerBot announced that it was taking its printers
closed source to combat clones. Around the same time,
Thingiverse (which is owned by MakerBot) changed its terms of service
to say that it owned all user-uploaded designs.
There was backlash against both moves. Rankin compared it to the
backlash against Red Hat when it stopped providing public downloads of
its distribution and began taking trademark action against people who
redistributed Red Hat clones. The 3D printer community backlash was
greater against Thingiverse, he said, including an "Occupy
Thingiverse" movement spearheaded by Josef Prusa (who created the
popular Prusa Mendel model of RepRap). The Occupy Thingiverse
movement flooded the site with a manifesto document written by Prusa,
which can still be found in many Thingiverse design searches today.
3D printing today
Rankin concluded his talk by describing the current
state-of-the-art in 3D printing. He compared the most recent models
from most of the vendors in terms of size, cost, and printing
specifications. Prices continue to drop for the hardware, he noted,
but ironically the cost of the plastic filament used for printing has
shot up considerably—primarily because investors in the plastics
futures market saw the rise of 3D printing coming and bought into it
to turn a profit.
Nevertheless, Rankin described several recent changes that make the
investment in a printer worthwhile. Ease-of-use has improved
markedly, he said. Again comparing it to the Linux world, he said that
today it is no longer necessary to call in the local 3D printer expert
to guide you through the build process, just as it is no longer
necessary to go to an "install fest" to get Linux installed and a working
xorg.conf file crafted. There are also more interesting
designs for printable objects. He showed a set of LEGO block designs
collected from the original LEGO specifications, which are now in the
public domain, plus an array of connectors for joining LEGOs to other
brands of blocks. There are also extremely high-quality toy designs
available for printing now, such as the elaborate fairytale castle set
and a wide range of Nerf weaponry.
But 3D printing technology is still marching forward. Current work
focuses on faster print speeds and finer print resolutions, but
there are other interesting projects in the works, too. One is the
effort to create a cheaper alternative to the now-expensive plastic
filament used as printing material. The Filabot project attempts to
grind up recyclable plastic into usable filament, albeit with
less-than-perfect success so far, while the company Formlabs has been
working on a desktop printer that uses stereolithography
to print, eschewing plastic filament altogether. The next big topic
for research is 3D scanning, Rankin said; if affordable 3D scanners
become a reality, people will be able to print solid objects without
ever touching modeling software—or perhaps "3D fax" machines
will arise.
Finally, there are controversies to expect as 3D printing becomes
even more mainstream. One is copyright and patent law; as more and
more objects become reproducible without permission, people should
expect businesses to make moves to try to protect their revenue
streams. Rankin noted that some 3D-printable materials can be used to
make molds as in lost-wax
casting; this means functional metal objects are possible, which
are a more likely source of copyright and patent contention. Printed
objects may raise other legal questions as well: Rankin cited the case
of a Thingiverse user last year who posted models that could be used to
print firearm parts, including the lower
receiver—which the US government considers the "actual" gun
(and which is where the serial number is located). The designs were taken down by
Thingiverse, but as is always the case on the Internet, they are still
available elsewhere.
Home 3D printing remains largely a hobbyist activity today, but the
comparison to Linux is an eye-opening one. Linux, too, started off as
a hobbyist project, and it grew quickly to dominate computing. Rankin
emphasized that open source software powers most of the current 3D
printing revolution, from hardware designs to modeling to driving the
printers. It may still have a ways to go, but for a movement less
than a decade old, its progress is already remarkable.
By Nathan Willis
February 27, 2013
Developers have used the diminutive Raspberry Pi as a platform for
an assortment of computing tasks, but one of the most popular tasks
has been entertainment—including video gaming. At SCALE 11x in
Los Angeles, developer Guillermo Antonio Amaral Bastidas presented
his work on the Marshmallow
Entertainment System (MES), a retro-styled video game engine for
8- and 16-bit 2D games. He compared MES to the competition (both
open and closed) and explained what he has learned along the way.
Heroes and villains
Amaral does not simply talk the talk; his presentation was
delivered in the form of a 2D side-scrolling MES game in which he
navigated a character (which looked like him) through a game
world—a world in which his text notes were embedded in a
obstacle course (and, in some cases, floating by like clouds). He
started off with a rundown of the other "open-ish" game consoles,
comparing their openness and their specifications (both hardware and
software).
The first was the Uzebox,
a do-it-yourself retro gaming console based on an AVR
microcontroller. The Uzebox offers little in the way of power,
running at 30MHz, but it is 8-bit native, so game designers can build
actual 8-bit, "Nintendo-style" games with the project's software
development kit (SDK). The SDK includes a clock-perfect emulator,
which is vital for testing games during development, and the project
is completely open: open software and firmware, plus full hardware
schematics. It may lack power, but Uzebox is also very affordable at US
$60.
The GP2X is a slightly more powerful
device designed as a handheld akin to the PlayStation Portable. It
runs at 200MHz, which Amaral described as "mid-range" for such a
system, and it is marketed as a ready-to-use consumer device. The SDK
is open source, but Amaral said he was still unsure about the openness
of the hardware. The GP2X is sold only in South Korea, so it can be
difficult (and expensive) to find in other regions.
There are several Android-powered gaming devices on the market, he
said, such as the Ouya and the GameStick. Both are very powerful,
particularly for their price points (around US $80 for the GameStick,
$100 for the Ouya, which is currently in pre-order). But they
are both designed to play only Android games. So far, emulators have
been promised but are not yet available. Amaral said he does not
trust Android emulators to deliver a clock-perfect emulation
environment, which should concern game developers. Both projects describe
their SDKs as open source, but he said it was not clear exactly which
components are available under an open source license and which are
not. The hardware is proprietary for both products.
The challenger
MES has been two years in the making, he said. The early work was done
on BeagleBoard and PandaBoard hardware, with the goal of creating a
miniature game-focused distribution that anyone could download and
run from a memory card on an off-the-shelf product. The BeagleBoard
and PandaBoard were eventually discarded as being too slow at
graphics, at which point he turned his attention to pre-release
Raspberry Pi hardware. The Pi was an excellent fit because it can be
safely overclocked to 1GHz, because developers can write games for it in
pure C++, and because it introduces few dependencies. He spent a
considerable amount of time building the MES engine, plus the time
required to get the Raspbian distribution into a workable shape (which
included getting Qt4 running).
Lest there be any doubt, MES is entirely open source. It includes
a stripped-down version of Raspbian that is focused solely on
launching the Marshmallow game engine. He had initially intended each
MES game to be installed on a separate SD card, so that they would be
swapped in and out like the cartridges of 8-bit era game consoles.
But the final
builds used up just 110MB (for the OS and game engine),
so he now recommends people install as many games as they want on a
single card. The OS image uses a modified version of Buildroot and
launches into a game selector screen. Amaral described the game
selector as a work in progress, but the modified version of Buildroot
and other changes are all available in his GitHub
repository.
During the development process, Amaral learned a few things about
developing for the Raspberry Pi that he had not anticipated. The
graphics capabilities are "awesome," he said, to the point where MES
runs better on the Raspberry Pi than it does on his laptop. It even
runs fine on the open source video driver, for those who do wish to
avoid binary blobs. But audio support was less
pleasant. The device supports both pure ALSA and OpenAL, but OpenAL
runs too slowly to be useful. ALSA support proved
unsatisfactory as well; the device supports opening only one audio
channel at a time. To get around this limitation (for example, to
provide background music as well as sound effects), Amaral wrote his
own software audio mixer for MES.
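Amaral did not describe the internals of his mixer, but the usual
approach to software audio mixing is straightforward: sum the
corresponding samples from each active stream in a wider integer type,
clamp the result back into the 16-bit sample range, and hand the single
mixed buffer to the one ALSA stream the device allows. A minimal C++
sketch of that idea (hypothetical, not taken from the MES source) might
look like this:

    #include <algorithm>
    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // Sum several 16-bit PCM streams into one output buffer, accumulating
    // in 32 bits and clamping so that loud passages do not wrap around.
    // A real engine would also handle per-stream volume, resampling, and
    // buffering before writing the result to the single ALSA playback stream.
    std::vector<int16_t> mix_streams(const std::vector<std::vector<int16_t>> &streams,
                                     std::size_t frames)
    {
        std::vector<int16_t> out(frames, 0);
        for (std::size_t i = 0; i < frames; ++i) {
            int32_t acc = 0;
            for (const auto &s : streams)
                if (i < s.size())
                    acc += s[i];
            // Clamp to the representable 16-bit range.
            acc = std::min<int32_t>(32767, std::max<int32_t>(-32768, acc));
            out[i] = static_cast<int16_t>(acc);
        }
        return out;
    }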
A development wrinkle of a different sort is the Raspberry Pi's
power cycling. The device has no reboot switch; it starts up when
power is connected and shuts off when it is removed. That can be
annoying when using the device as a game system, and even more so while
developing for it. To work around this problem, he designed an add-on board that
sports a hardware reboot switch. The board is called the Raspberry Pi
Power Button, and the MES project sells it as a fundraiser, though
the schematics are
freely available on Amaral's GitHub site, so anyone can build their own.
MES is just getting started as a game development platform. He
described the engine as being "its own SDK," but so far there is not
much in the way of a development guide. The SD card images come
with a single demo game—though Amaral said he repeatedly
encourages fans to contribute more games. The platform is
BSD licensed, a decision he hoped would appeal to many
independent game developers looking to build products. Despite the
newness of the platform, he said three gaming companies have contacted
him to test it out, and it has attracted attention from open
source software and open hardware circles.
Raspberry Pi devices are a popular topic on the conference circuit
these days, especially for gaming. But the most common gaming
experience seems to be resurrecting '80s- and '90s-era
proprietary games for their nostalgia value. While that is certainly
a source of entertainment, writing original games is more interesting,
and MES shows that such games can still provide that 8-bit retro
feel, without the hassle of finding a legally questionable ROM image
from an old commercial game.