By Jake Edge
February 20, 2013
The second day of this year's Android
Builders Summit started off with a panel discussion of a
provocative question: Is Android the new embedded Linux? As moderator
Karim Yaghmour noted, it was not really meant as a "yes or no" question,
but rather as a "conversation starter". It certainly had that effect,
leading panel members to explain how they saw the relationship between
"traditional" embedded Linux and Android.
The panel
Four embedded Linux experts were assembled for the panel, with each
introducing themselves at the outset. David Stewart is an engineering
manager at the Intel Open Source Technology Center where he is focused on
the company's embedded Linux efforts, in particular the Yocto project.
Mike Anderson has been doing embedded work for 37 years and is now the CTO
for The PTR Group, which does embedded Linux consulting and training. Tim
Bird is a senior staff software engineer at Sony Network Entertainment as
well as being involved with the Linux Foundation's Consumer Electronics
Working Group. Linaro's Android lead Zach Pfeffer rounded out the group.
He has been working on Android "since it was a thing" and in embedded Linux
for twelve years.
What is "embedded Linux"?
Defining the term "embedded Linux" and whether it describes Android was
Yaghmour's first query to the
panel. Bird said that he didn't think that Android qualifies as embedded Linux.
Embedded means a "fixed function device" to him, so while Sony wants to
make a platform out of its TVs and other devices, which is "great stuff", he
doesn't see it as "real embedded". Real embedded systems are typified by
being "baked at the factory" for set functionality "and that's what
it does".
Pfeffer disagreed, noting that Android had helped get Linux into some
kinds of devices where it had been lacking. The Android model is a
"particularly efficient way" to support new systems-on-chip (SoCs), so it
provides a way for new systems to be built with those SoCs quickly. While
phones and other Android devices might not fit the profile of traditional
embedded devices, the Android kernel is providing a base for plenty of
other devices
on new SoCs as they become available.
What were the driving forces behind the adoption of embedded Linux,
Yaghmour asked. Anderson had a "one word" answer: royalties, or really the
"lack thereof". Bird agreed that the lack of royalties was a big deal, but the
availability of the source code may have been even more important. It
meant that the user didn't have to talk to the supplier again, which was
important, especially for smaller device makers, because they were never
able to get much support from vendors. With the source, they could fix
their own problems.
Stewart noted that people tend to make the assumption that embedded means
that a realtime operating system is required. Often that's not the case
and Linux is perfectly suited to handling embedded tasks. There is also a
certification ecosystem that has built up around embedded Linux for areas
like safety and health, which helps with adoption.
In addition to the other reasons mentioned, Pfeffer noted that "Linux is fun".
Often disruptive technology comes about because an engineer wants to do
something fun. With a manager who is "more enlightened or maybe
just cheap", they can pull their Linux hobby into work. It is much more
fun to work on embedded Linux than something like Windows Mobile, and he
has done both, he said.
Yaghmour then asked: what is it in Android that is attracting device makers in
that direction? Stewart said that he is "not an Android
guy", but he thinks it is the user interface familiarity that is drawing
manufacturers in. It is not so much the app store, apps, or services, but
that users are now expecting the "pinchy, zoomy, swirly" interface.
Anderson agreed, noting that it makes it much easier to drop a new device
onto a factory floor if users already understand how to interact with it.
Bird pointed to the silicon vendors as a big part of the move to Android. The
big silicon vendors do an Android port before anything else, he said.
Stewart (from Intel) noted that not all silicon vendors had that
Android-first strategy, to a round of chuckles. While there is the "thorny
issue" of free video driver support, Bird continued, many people are
"coattailing" on the Android support that the silicon vendors provide.
On the other hand, Android has been the "club" to bring some vendors to the
table in terms of open source drivers, Anderson said, using Broadcom as an
example.
But Pfeffer believes that the app ecosystem is the big draw. It is a "clear
value proposition" for a vendor who can build a platform that can be
monetized. The APIs provided by Android either won't change or will be
backward compatible, so vendors can depend on them. In fact, Google doesn't
document how the platform is put together because it doesn't want vendors
to depend on things at that level, he said.
But vendors who are
putting Android on their own hardware are going to have to understand
and adapt the platform, Bird said. Stewart noted that he heard that early
Android tablets had to just hide the phone dialer because there was no way
to get
rid of it. There was much agreement that customizing Android to make it
smaller or faster was difficult to do.
Drawbacks to Android
That led to the next question: what are the drawbacks for Android? Bird
said that it has a "really big footprint" and that "JITted code is slower
than native". That is a classic tradeoff, of course. As an example he
noted the first video ad in a print magazine, which used an "inexpensive"
Android
phone board in the magazine page. That board was around $50, so it only
appeared in the first 1000 issues of the magazine. Because of the size of
Android, you will not see a $5 board that can run the whole stack, he said.
Pfeffer countered that you can get Android to fit in 64MB on certain classes
of devices. Android doesn't prevent you from "going low", he said. Bird
noted that his camera project only has 32MB. Anderson described the Android
platform as having "seven layers that keep you from the hardware", which
adds to the complexity and size. In addition, you need a high-end GPU in order to run
Ice Cream Sandwich reasonably, he said. Pfeffer said that there was the
possibility of using
software rendering, but there was skepticism expressed about the performance of that option.
Beyond just the footprint and complexity, are there drawbacks in how the
Android community is put together and works, Yaghmour asked. Bird
mentioned that there isn't really a community around "headless Android" and
that there isn't really any way for one to spring up. Because "you get
whatever Google puts out next", there is little a community could do to
influence the direction of headless Android. If, for example, you wanted
to add something for headless Android that Google has no interest in, you
have to
maintain that separately as there isn't much of a path to get it upstream.
There are devices that are difficult to add to Android, Anderson said.
Adding a sensor that "Google never thought of" to libsensors is "not
trivial". Making a headless Android system is also not easy unless you
dive deeply into the Android Open Source Project (AOSP) code. Stewart
noted that Android adoption is generally a one-way street, so it is
difficult to switch away from it. Pfeffer agreed, noting that the
companies that do adopt Android reap a lot of benefits, but "it is a
one-way trip".
When he started looking at Android, Yaghmour thought it would be "easy", as
it was just embedded Linux. There was a lot more to it than that, with
various pieces of the Linux stack having been swapped out for Android
replacements. But that's a good thing, Bird said. His "strong opinion" was
that Android is a "breath of
fresh air" that didn't try to force Unix into the embedded space. Android
was able to discard some of the Unix baggage, which was a necessary step, he said.
There are some really good ideas in Android, especially in the app
lifecycle and the idea of intents, all of which added up to "good stuff".
Android was in the right place at the right time, Anderson said. For
years, embedded Linux couldn't make up its mind what the user interface
would be, but Android has brought one to the table. Android also takes into
account the skill set of programmers that are coming out of school today.
Constrained environments are not often taught in schools, so the "infinite
memory" model using Java may be appropriate.
Stewart noted that HTML5 still has the potential for cross-platform user
interfaces, and doesn't think the door should be closed on that
possibility. Yocto is trying to support all of the different user
interface possibilities (Qt, GTK, EFL, HTML5, ...). There is also the
question of the future of Java, Anderson said. The security issues that
Oracle has been slow to fix are worrisome, and no one really knows where
Oracle plans to take Java.
While embedded Linux has nothing to learn from Android on the technical
level, it could
take some higher-level lessons, Pfeffer said. Focusing on creating an
ecosystem from the app developer to the user is extremely important.
Beyond that, reducing the time to market, as Android has done, so that a
new widget using a new SoC can be in the hands of app developers and users
quickly should be a priority. Existence proofs are very powerful, so a
system which has a billion users that a device maker can just plug into is
compelling, he said.
Licensing
For the most part, Android started with a clean slate for licensing
reasons, Yaghmour said; what are the pros and cons of that decision? The
licenses do have an effect, Bird said, and the BSD/Apache license nature of
Android changes how companies perceive their responsibilities with respect
to the open source communities. Companies like BSD licenses, but it doesn't
encourage them to push their changes upstream—or to release them at
all. That means we don't really know how much interesting technology is
getting lost by not being shared, which "is a worry", he said.
Stewart noted that the BSD license seemed to remove
the "multiplicative effect" that you see in code bases that are licensed
under the GPL. He pointed out that the BSDs themselves seem to suffer from that
because sharing the code is not required. Anderson said that the vendors
hiding their code make it hard for his company to help its customers. If a
codec the customer wants doesn't work with the PowerVR GPU drivers, there
is little he can do to help them. Some of those vendors are just "hiding
behind the license", he said.
The license situation is a "red herring", according to Pfeffer, because
"market pressure will trump any licensing issues". If a GPLv3-licensed
component will help a device maker beat their competitor to market, "it
will ship".
Embedded Linux and Android can coexist both in the community and in the
same devices, the panel agreed. The key is the kernel, Anderson said; as
long as that is present, one could run the Android user interface atop a
realtime kernel, if you understand the architecture of both sides.
Another possibility would be to use virtualization, Stewart said, perhaps
in an automotive setting with Android games and apps in the back seat
running in a VM on a more traditional embedded Linux system to control the
critical systems.
Yaghmour's final question asked whether we will eventually see Android "wipe
embedded Linux off the map". All except Pfeffer had short "no" answers.
Pfeffer said he would hate to see traditional embedded Linux go away, but
that we may see it eventually. He likened Android to the invention of the
loom. Prior to that invention, textiles were crafted by hand, but the loom
standardized how you create textiles, and Android may do the same for Linux
devices. Anderson and Bird were quick to point out SoCs and platforms
where Android will never run as counter-examples. Stewart had the last
word on that question when he described Pfeffer's answer as something
like what "Bill Gates would have said"—to a round of laughter from
participants and audience alike.
[ Thanks to the Linux Foundation for assisting with travel costs to San Francisco for ABS. ]
By Jake Edge
February 20, 2013
The Linux Foundation's Rudolf Streif introduced one of the morning keynotes
at the 2013 Android
Builders Summit (ABS) by noting that
androids in space have a long history—at least in science fiction like
Star Wars. He was introducing Dr. Mark Micire of the US National Aeronautics and Space Administration
(NASA) Ames Research Center, who recently led a project that put the Android
operating system into space in the form of an "intelligent space robot"
that currently inhabits the International Space Station (ISS). Micire
brought the tale of how that came about to the first day of ABS on February
18 in San Francisco.
He started off by expressing amazement at what the community has done with
Android that takes it far beyond its mobile phone roots. He has several
different versions of his talk, but when he looked at the talk descriptions
for ABS, he quickly realized that the "geeky version" would be right for
the audience. A video provided the high-level view of the project,
starting with the liftoff of the last space shuttle, which carried the
first version of the robots to the ISS, and ending with a description of
using a Nexus S smartphone to communicate with and control the robot. The idea is to have
a robot
available to both the crew and the ground-based operations staff to
take over some of the menial tasks that astronauts currently have to perform.
Some history
The spherical light saber trainer seen on the Millennium Falcon in Star
Wars was the
inspiration for several different space robot projects over the years,
Micire said. That includes the "personal satellite assistant" (PSA) which
was developed by the NASA Ames Research Center. It had a display
screen, two-way audio, a camera, and useful tools like a flashlight in a
roughly spherical package. Similarly, the Johnson Space Center created the
AirCam that could fly around the shuttle in space to take photographs and
video of the spacecraft. The AirCam actually flew on the shuttle in 1997,
but both projects were eventually canceled.
Micire's project evolved from a senior project at MIT, which
created roughly spherical satellite simulators to be used for experimenting with
synchronized satellite maneuvers. The algorithms to do those kinds of
maneuvers need to be developed in some cases, but it is expensive to test new
algorithms
with actual
satellites. The MIT SPHERES (Synchronized Position Hold, Engage, Reorient
Experimental Satellites) project used volleyball-sized robots that could be
flown inside the ISS to test these algorithms.
The SPHERES robots have a tank of carbon dioxide to use as propellant, much
like a paintball gun. In fact, when they need refilling, Micire has sometimes
taken them to Sports Authority (a US sporting goods store) to the
puzzlement of the clerks there. The CO2 is routed to thrusters
that can move the robot in three dimensions.
A Texas Instruments DSP that is "a decade old at this point" is what runs
the SPHERES robot. There is a battery pack to run the CPU and some
ultrasonic receivers that are used for calculating position. That pack uses
standard AA batteries, he said, because lithium-ion and other battery types can
explode in worst-case scenarios, which makes it difficult to get them
aboard a spacecraft. It is easy to "fly AA batteries", though, so lots of
things on the ISS run using them.
Since the cost of getting mass to low earth orbit is high, he said that he
doesn't even want to contemplate the
amount being
spent on resupplying AA batteries to the ISS.
The robot also has an infrared transmitter that sends a pulse used by the
controller of
ultrasonic beacons installed in an experimental lab area of the ISS. The
IR pulse is seen by the controller, which responds by sending several
ultrasonic pulses at
a known rate. The receivers on the SPHERES pick that signal up; using the
known location of the transmitters and the speed of sound, it can then
triangulate its position within the experimental zone, which is a cubical
area six feet on a side. Micire
showed video of the SPHERES in action on the ISS. He played the video at
4-6x normal speed so that the movement wasn't glacial; NASA safety
engineers prefer not to have high-speed
maneuvering via CO2 jets inside spacecraft.
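That position fix is, strictly speaking, trilateration: each beacon's pulse delay, multiplied by the speed of sound, gives a range, and ranges from several beacons at known locations pin the robot down. The following is a minimal sketch of that calculation, with invented beacon coordinates and delays; it makes no claim to match the actual SPHERES code:

    import numpy as np

    SPEED_OF_SOUND = 343.0  # m/s in air at room temperature (an assumption)

    def trilaterate(beacons, delays):
        """Least-squares position from beacon locations (m) and pulse delays (s).

        Linearizes the sphere equations |x - b_i|^2 = r_i^2 by subtracting
        the first equation from the others, then solves A x = c.
        """
        b = np.asarray(beacons, dtype=float)
        r = SPEED_OF_SOUND * np.asarray(delays, dtype=float)
        A = 2.0 * (b[1:] - b[0])
        c = (r[0]**2 - r[1:]**2) + np.sum(b[1:]**2, axis=1) - np.sum(b[0]**2)
        pos, *_ = np.linalg.lstsq(A, c, rcond=None)
        return pos

    # Hypothetical beacons at corners of the ~1.83m (six-foot) work volume,
    # with delays measured from the IR sync pulse to each ultrasonic arrival.
    beacons = [(0, 0, 0), (1.83, 0, 0), (0, 1.83, 0),
               (0, 0, 1.83), (1.83, 1.83, 1.83)]
    delays = [0.0046, 0.0040, 0.0042, 0.0044, 0.0038]
    print(trilaterate(beacons, delays))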
The NASA Human Exploration and Telerobotics (HET) project that Micire runs
wanted to create robots that could handle a number of different tasks in
space that are currently done by astronauts. The idea is to provide both
the crew on the station and the team on the ground with a useful tool.
Right now, if there is an indicator light on a particular panel in the
station and the ground crew wants to know its state, they have to ask a
crew member to go look. But a robot could be flown over to the panel and
relay video back to the ground, for example.
The HET team was faced with the classic decision of either rolling its own
controller for the Smart SPHERES or buying something "commercial off the
shelf" (COTS). The team didn't have a strong opinion about which choice
was better,
but sat down to list their requirements. Those requirements included
sensors like a gyroscope, camera, accelerometer, and so on, in a package with a
reasonably powerful CPU and a fair amount of memory and storage. While
Micire was
worriedly
thinking "where are we going to find such a device?", he and the team were
all checking their email on their smartphones. It suddenly became obvious
where to find the device needed, he said with a chuckle. Even NASA can't
outrun the pace of the mobile phone industry in terms of miniaturization and
power consumption, he said.
Flight barriers
There are a lot of barriers to getting
a device "space rated" so that it can fly on the ISS (or other
spacecraft). The engineers at NASA are concerned about safety
requirements, and anything that could potentially "deorbit the station" is
of particular concern. HET wanted to go from a concept to flight in
roughly a year; "that's insane", Micire said, as it normally requires 2-3 years
from concept to flight because of safety and other requirements.
But using a mobile phone would help speed the process. Right about the time
a platform was needed, he heard about the Nexus S ("bless the internet!")
being released. It had just what was needed, so he and a colleague "camped out"
in line at the Mountain View Best Buy to get numbers 11 and 12 of the 13
that were delivered to that store.
The first thing they did to these
popular and hard-to-get new phones was to tear them apart to remove the
ability to transmit in the cellular bands. For flight safety, there must
be a hardware mechanism that turns off the ability to transmit. Removing the
driver from the kernel was not sufficient for the safety engineers, so a
hardware solution was needed. They decided to remove the transmit chip from
the board, but it was
a ball-grid-array (BGA) part, so they heated one of the boards to try to do
so. The first attempt resulted in an "epic fail" that ruined the phone,
but the attempt on the second board was successful. Now, pulling that chip
is the first
thing done to new phones to get around that "airplane mode problem".
The next problem they faced was the batteries. As he mentioned earlier,
lithium-ion is problematic for space; it takes two years to get those kinds
of batteries certified. Instead they used a "space certified" AA battery
holder, adding a diode that was used to fool the battery controller on the
phone. Micire said that he did a bit of "redneck engineering" to test the
performance of the AA batteries over time: he taped the phone to his laptop
and pointed its camera at voltage
and current meters hooked up to the battery pack. The phone ran a
time-lapse photo application, and he
transcribed the data from that video into a spreadsheet. He found that the
phone will
run well for seven hours using six AA batteries.
In the micro-gravity environment in the ISS, broken glass is a serious
problem. It can "become an inhalant", for example. Something had to be
done about the display glass so that breaking it would not result in glass
fragments. Micire thought he had the perfect solution by putting acrylic
tape over the display, but it turns out that tape is flammable, so it was
deemed unsuitable. In the
end, Teflon tape fit the bill. He showed some graphic photographic
evidence of what was done to a phone "in the interests of science" to prove
to NASA safety engineers that a broken screen would not cause a hazard.
The phone interfaces to the SPHERES over a USB serial connection because
the TI DSP
doesn't support anything else. The phone and battery holder are then
essentially taped to the side of the robot.
The team had "no time for software", Micire said, but "Cellbots saved our lunch" with a data
logging app for Android. In order to test the Nexus S sensors in space,
they needed a way to log the sensor data while the Smart SPHERES were
operating. It turns out that asking Samsung what its accelerometer does in
micro-gravity is not very fruitful ("we don't know, you're from NASA").
Sampling every sensor at high frequency and recording the data would allow
them to figure out which sensors worked and which didn't.
For any part that is used in an aircraft or spacecraft, a "certificate of
conformance" is required. That certificate comes from the supplier and
asserts that the part complies with the requirements. It's fairly easy to
get that from most suppliers, Micire said, but Best Buy is not in that
habit. In a bit of "social hacking", they showed up at the store five
minutes before closing time, cornered a very busy manager, and asked them
to sign a piece of paper that said "a Nexus S is a Nexus S"—after a
puzzled look as another store employee bugged them for attention, the
manager simply signed the certificate.
It turns out that all of the computers on the ISS run Windows XP SP 3,
which means there is no driver to talk to the Nexus S. Since it would take 2-3
years to get a driver certified to be installed on those machines, another
solution had to be found. They ended up writing an app that would kick the
phone's USB into mass storage mode prior to the cable being plugged into the
computer. Because Windows XP has a driver for a USB mass storage device,
it could be used to communicate with the Nexus S.
Testing
The first test units were launched on the final shuttle mission, and Micire
showed
video of the Smart SPHERES in action on the ISS. The light level was
rather low in the video because the fluorescent lights were turned down to
reduce jamming on the beacons. That was actually useful as it proved that
the camera produced reasonable data even in low-light situations. The
sensors on the phone (gyroscope, magnetometer, ...) worked well, as shown
in his graphs. The gravity
sensor showed near-zero gravity, which must mean that it was broken, he
joked. In reality, that is, of
course, the proper reading in a micro-gravity environment.
There are "lots of tubes" between the ISS and ground-based networks, so the
latency can be rather large. They were still able to do video transmission in
real time from the Smart SPHERES to the ground during the initial tests,
which was a bit of a surprise. After that test, the mission director
pulled the team aside; at first Micire was a little worried they were in
trouble, but it turned out that the director wanted to suggest adding Skype
so he could have a "free-flying robot that I can chase astronauts with".
In December 2012, another experiment was run. Once again, sped-up video
was shown of the robot navigating to a control panel to send video of its
state to controllers on the ground. Those controllers can do minor
adjustments to the orientation of the robot (and its camera) by panning from
side to side. There is no ability to navigate the robot in realtime from
the ground due to latency and potential loss-of-signal issues.
Other experiments are planned for this year and next, including having the
robot handle filming an interview with one of the astronauts. Currently
when a class of schoolchildren or other group has the opportunity to
interview the crew in space, two astronauts are required: one for the
interview and one to hold the camera. Since the Nexus S gives them "face
recognition for free", the robot could keep the camera focused on the crew
member being interviewed, which would free up the other crew member.
Micire's talk was an excellent example of what can happen when a device
maker doesn't lock down its device. It seems likely that no one at
Google or Samsung considered the possibility of the Nexus S being used to
control space robots when they built that phone. But because they didn't
lock it down, someone else did consider it—and then went out and actually
made it happen.
[ Thanks to the Linux Foundation for assisting with travel costs to San Francisco for ABS. ]
By Jake Edge
February 21, 2013
Andrew Chatham came up the peninsula to San Francisco from Google to talk
to the 2013 Embedded
Linux Conference about the self-driving car project. Chatham has
worked on the project since 2009 and seen it make great strides. It is by
no means a finished product, but the project has done 400,000 miles of
automated
driving so far.
History
"Cars are a miracle", he said. The 45-mile drive he did to Mountain View
yesterday would have taken our ancestors all day to do on a horse. But,
cars are also problematic, with more than 30,000 annual deaths in the US
due to car accidents. That number has "finally started dropping", likely
due to more seat belt usage, but it is still too high. Even if there
are no fatalities, accidents cost time, money, and more. We have done a
pretty good job figuring out how to survive accidents, he said, but it is time
to stop having them.
In the mid-2000s, the US Defense Advanced Research Projects Agency (DARPA) ran several
challenges for self-driving cars on a 150-mile course in
the Mojave
Desert. The first year, the best-performing vehicle went only seven
miles. But the next year, five teams actually completed the course, which
was eye-opening progress. In 2007, DARPA moved the challenge to a
simulated urban environment that featured a limited set of traffic
interactions (four-way stops, but no traffic lights, for example). After
that event, "DARPA declared victory" and moved on to other challenges,
Chatham said.
In 2009, Google stepped in to solve the problem "for real". Chatham noted
that people have asked why Google would get involved since driving cars doesn't
involve searching the internet or serving ads. The company thinks it is an
important problem that needs to be solved, he said. Google is qualified to
attack the problem even though it has never made cars because it is mostly
a software problem. Also, one major component of the problem involves maps,
which is an area where Google does have some expertise.
Broadly, there are two categories for self-driving cars: one for cars with
all the
smarts in the car itself and one where the smarts are in the road.
For cars that are self-contained, they need to be ready for anything and
cannot make assumptions about the obstacles they will face. That tends to
lead to cars that move slowly and drive cautiously, much differently than
humans. Smart roads allow for dumb cars, but there are some serious
obstacles to overcome. Infrastructure is expensive, so there is a
chicken-and-egg problem: who will build expensive smart roads (or even
lanes) for
non-existent dumb cars that can use them?
The Google approach is something of a hybrid. There are no actual
infrastructure changes, but the system creates a "virtual infrastructure".
That virtual infrastructure, which is built up from sensor readings and map
information, can be used by the car to make assumptions much like a human
does about what to expect, and what to do.
Sensors and such
The car's most obvious sensor is the laser rangefinder that lives in a
bubble on top of the car. It spins ten times per second and produces
100,000 3D points on each spin. Each of those points has 5cm accuracy.
The laser can only see so far, though, and can be degraded in conditions that
affect photons, such as rain.
The car also has radar, which is not as precise as the laser, but it can
see further. It can also see behind cars and other solid objects. Using
the doppler effect, speed information for other objects can be derived.
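The doppler relationship is worth spelling out: the reflected wave comes back shifted in frequency by twice the target's radial speed over the wavelength. A back-of-the-envelope sketch (the 77GHz carrier is a typical automotive radar band, not a figure from the talk):

    C = 2.998e8  # speed of light, m/s

    def radial_speed_mps(carrier_hz, doppler_shift_hz):
        """Radial speed of a radar target from its doppler shift.

        The factor of two accounts for the round trip: the wave is
        doppler-shifted once on the way out and once on reflection.
        """
        return doppler_shift_hz * C / (2.0 * carrier_hz)

    # A 77 GHz radar observing a ~5.1 kHz shift implies a closing
    # speed of roughly 10 m/s (about 22 mph).
    print(radial_speed_mps(77e9, 5130.0))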
There are also cameras on the car. The general "computer vision problem"
is hard, and still unsolved, but it isn't needed for the car's usage of the
camera. The camera is used for things that humans use as well, which means
they are generally rather obvious and are of known shapes, sizes, and
likely positions (e.g. traffic lights). Beyond that are the expected
sensors like gyroscope, accelerometer, GPS, compass, and so on.
There are two main computers in the car. One is a very simple "drive by
wire system" that has no operating system and is just in a tight loop
controlling the brakes, steering, and accelerator. The second is a
"workstation class system running FreeBSD", Chatham joked. In reality it
is running a lightly customized Ubuntu 12.04 LTS. It is not running the
realtime kernel, but uses SCHED_FIFO and control groups to provide
"realtime-ish" response.
There are several classes of processes that run on the system, with the
least critical being put into control groups with strict resource limits.
If any of the critical processes miss their deadlines, it is a red flag
event which gets logged and fixed. In the 400,000 miles the cars have
traveled (so far always with a human on board), those kinds of problems
have been largely eliminated. All of the data for those journeys has been
stored, so it can be played back whenever the code
is changed to try to find any regressions.
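Those resource limits would have been applied through the cgroup filesystem (v1, in the Ubuntu 12.04 era); a hedged sketch of the idea, with made-up group names and quotas rather than anything from the talk:

    import os

    # Hypothetical cgroup v1 setup: cap a low-priority logging process at
    # 20% of one CPU (20ms of runtime per 100ms scheduling period).
    CG = "/sys/fs/cgroup/cpu/lowprio"
    os.makedirs(CG, exist_ok=True)

    with open(os.path.join(CG, "cpu.cfs_period_us"), "w") as f:
        f.write("100000")
    with open(os.path.join(CG, "cpu.cfs_quota_us"), "w") as f:
        f.write("20000")

    # Move the current process into the group; critical processes stay in
    # the root group, where nothing throttles them.
    with open(os.path.join(CG, "tasks"), "w") as f:
        f.write(str(os.getpid()))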
From the "blank slate" state of the car, GPS data is added so that it knows
its location. That data is "awful really", with 5m accuracy "on a good
day" and 30m accuracy at other times. The car's sensors will allow it to
accurately know which way it is pointing and how fast it is going. From
there, it adds a logical description of the roads in that location derived
from the Google Maps data. It uses maps with 10cm resolution plus altitude
data, on top of which the logical information, like road locations, is layered.
All of that information is used to build a model of the surroundings. The
altitude data is used to recognize things like trees alongside the road, as
well as to determine the acceleration profile when climbing hills. The
goal is to stay close to the center of the lane in which the car is
traveling, but not if something is blocking the lane (or part of it). You
also don't want to hit the guy in front of you, so don't go faster than he
does. Once the model is built, driving is largely a matter of following
those two rules, he said.
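In caricature, those two rules reduce to a steering correction toward a target lateral offset plus a speed cap; the toy control step below uses invented gains and ignores everything the real planner must handle:

    def drive_step(offset_m, clearance_offset_m, lead_speed, own_speed, limit):
        """One toy control step for the two rules: track the lane center
        (shifted sideways if something partially blocks the lane), and
        never go faster than the vehicle ahead."""
        K_STEER, K_SPEED = 0.4, 0.5  # arbitrary illustrative gains
        steer = K_STEER * (clearance_offset_m - offset_m)
        target_speed = min(limit, lead_speed)
        accel = K_SPEED * (target_speed - own_speed)
        return steer, accel

    # 0.3m right of center, nothing blocking, lead car at 12 m/s while we
    # do 15 m/s: steer gently left and ease off the accelerator.
    print(drive_step(0.3, 0.0, 12.0, 15.0, 25.0))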
Problems and solutions
In California (unlike many other places) it is legal for motorcycles to
travel between the lanes, moving between the cars that are in each of those
lanes.
That was a difficult problem to solve because that situation could fool the
sensors to some extent.
One of the tests they ran was on a race track that was set up to see how
the car did versus Google employees. It reliably beat them in that test, though
Chatham believes the employees would have eventually beaten the car. It's
"a Prius, not a sports car", so there were limits to the kinds of
maneuvering that can be done, but the test really showed the "precision
with which we can
reliably drive the car", he said.
Lots about driving is "social", Chatham said. For example, there is a set
of rules that
are supposed to be followed at a four-way stop, but no one follows
them. The car had to learn to start to edge out to see when the others
would let it go through. Similarly, merging is social, and they have spent
a lot of time getting that right. Unlike human drivers, the car can't make
eye contact, so it is a matter of getting the speed and timing right for
what is expected.
The "bug on the window problem" is another difficult one. For the car, anything
that messes up its sensors needs to be handled gracefully. In those cases,
handing control back to the human in a sensible fashion is the right thing
to do.
Many people ask about how the car does in snow, but it hasn't been tried
yet. Currently, Chatham thinks it wouldn't do all that well, but thinks it
"could do OK eventually". One problem there is that snowbanks appear to be
giant walls of water to the lasers.
"People do stupid things", he said. If you drive 400K miles, you are going
to experience some of them. Normally the expectation is that other people
value their lives; if you didn't believe that, you would never leave home.
But there are exceptions, so
a self-driving car, like a regularly driving human, needs to be prepared
for some of that craziness.
The video of a blind
man using
the self-driving car is the kind of story that shows where this
technology could lead, Chatham said. There are a lot of people who can't
drive for one reason or another, so a self-driving car has the potential to
change their lives. "I wish it were done now, but it's not", he said.
Chatham answered a few questions after the talk. They have done very
little work on "evasive maneuvers", he said. Everyone overestimates their
ability in that area and the advice from police and others is to just use
the brakes. There are no plans as yet to release any of the source code,
nor are there any plans for a product at this point. Three states have
"legalized" self-driving cars, California, Nevada, and Florida. It is
furthest along in California where the Department of Motor Vehicles is
currently drafting rules to govern their use.
[ I would like to thank the Linux Foundation for travel assistance to attend ELC. ]
Page editor: Jonathan Corbet
Security
By Jonathan Corbet
February 19, 2013
A security-oriented firm called Trustwave recently sent out
a
preview of an upcoming report [PDF] that features some focused criticism of
how the Linux community handles security vulnerabilities. Indeed, it says:
"
Software developers vary greatly in their ability to respond and
patch zero-day vulnerabilities. In this study, the Linux platform had the
worst response time, with almost three years on average from initial
vulnerability to patch." Whether or not one is happy with how
security updates work with Linux, three years sounds like a rather longer
response time than most of us normally expect. Your editor decided to
examine the situation by focusing on two vulnerabilities that are said to
be included in the Trustwave report and one that is not.
Three years?
As of this writing, Trustwave's full report is not available, so a detailed
look at its claims is not possible. But, according to this
ZDNet article, the average response time was calculated from these two
"zero-day" vulnerabilities:
- CVE-2009-4307: a divide-by-zero crash
in the ext4 filesystem code. Causing this oops requires convincing
the user to mount a specially-crafted ext4 filesystem image.
- CVE-2009-4020: a buffer overflow in
the HFS filesystem, exploitable, once again, by convincing a user to
mount a specially-crafted filesystem image on the target system.
The ext4 problem was reported on
October 1, 2009 by R.N. Sastry, who had been doing some filesystem fuzz
testing. The report included the filesystem image that triggered the bug —
that is the "exploit code" that Trustwave used to call this bug a zero-day
vulnerability. Since the problem was limited to a kernel oops, and since
it required the victim's cooperation (in the form of mounting the
attacker's filesystem) to trigger, the ext4 developers did
not feel the need to drop everything and fix it immediately; Ted Ts'o
committed a
fix toward the end of November. SUSE was the first distributor to
issue an update containing the fix; that happened on January 17, 2010.
Red Hat did not put out an update until the end of March — nearly six
months after the problem was disclosed — and Mandriva waited until February
of 2011.
One might argue that things happened slowly, even for an extremely
low-priority bug, but where does "three years" come from? It turns out
that the fix did not work properly on the x86 architecture; Xi Wang
reported the problem's continued existence
on December 26,
2011, and sent a proper fix on
January 9, 2012. A new CVE number (CVE-2012-2100) was assigned for the problem
and the fix was promptly committed into the mainline. Distributors were a
bit slow to catch up, though; Debian issued an update in March, Ubuntu in
May, and Red Hat waited until mid-November — nearly eleven months after
disclosure — to ship the fix to its users. The elapsed time from the
initial disclosure until Red Hat's shipping an update that fixes the
problem properly is, indeed, just over three years.
The story for the HFS/HFS+ vulnerability is similar. An initial patch
fixing a buffer overflow in the HFS filesystem was posted by Amerigo Wang
at the beginning of December, 2009. The fix was committed by Linus on
December 15, and distributor updates began with Red Hat's on
January 19, 2010. Some distributors were rather slower, but it was
another hard-to-exploit bug that was deemed to have a low priority.
The problem is that the kernel supports another (newer) filesystem called
HFS+. It
is a separate filesystem implementation, but it contains a fair amount of
code that was cut-and-pasted from the original HFS implementation, much like ext4
started with a copy of the ext3 code. The danger of this type of code
duplication is well known: developers will fix a bug in one copy but not
realize that the same issue may be present in the other copy as well.
Naturally enough,
that was the case here; the HFS+ filesystem had the same buffer overflow
vulnerability, but nobody thought to do anything about it until Timo Warns
quietly told a few kernel developers about it at the end of April 2012.
Greg Kroah-Hartman committed
a fix on May 4, and the problem was publicly disclosed a few days
after that. Once again, a new CVE number (CVE-2012-2319) was assigned, and, once again,
distributors dawdled with the fixes; openSUSE sent an update in June, while
Red Hat waited until October, five months after the problem became known.
The time period from the initial disclosure of the HFS vulnerability until
Red Hat's update for the HFS+ problem was just short of three years.
One could look at this situation two ways. On one hand, Trustwave has clearly chosen
its vulnerabilities carefully, then applied an interpretation that yielded
the longest delay possible. Neither story above describes a zero-day
vulnerability knowingly left open for three years; for most of that time,
it was assumed that the problems had been fixed. That is doubly true for
the HFS+ filesystem, for which the vulnerability was not even disclosed
until May, 2012. Given the nature of the vulnerabilities, it is highly
unlikely that the black hats were jealously guarding them in the meantime;
the odds are good that no system has ever been compromised by exploiting
either one of them. Trustwave's claims, if they are indeed built on these
two vulnerabilities, are dubious and exaggerated at best.
On the other hand, even low-priority vulnerabilities requiring the victim's
cooperation should be fixed — and fixed properly — in a timely manner,
and it is not at all clear that happened with these problems. The
response to the ext4 problem was arguably fast enough given the nature of
the problem, but the fact that the problem persisted on the obscure x86
architecture suggests that the testing applied to that fix was, at best,
incomplete. In the HFS/HFS+ case, one could argue that somebody
should have thought to check for copies of the bug elsewhere. The fact
that the HFS and HFS+ filesystems are nearly unused and nearly unmaintained
did not
help in this case, but attackers do not restrict themselves to
well-maintained code. And, for both bugs, distributors took their time to get
the fixes out to their users. We can do better than that.
Meanwhile, in 2013
Perhaps the slowness observed above is the natural response to
vulnerabilities that nobody is actually all that worried about. Had they
been something more serious, it could be argued, the response would have
been better. As it happens, there is an open issue at the time of this
writing that can be examined to see how well we do respond; the answer
is a bit discouraging.
On January 20, a discussion on the private kernel security list went public
with this patch posting by Oleg Nesterov.
It seems that the Linux implementation of the ptrace() system call
contains a race condition: a traced process's registers can be changed in a
way that causes the
kernel to restore that process's stack contents to an arbitrary location.
The end result
is the ability to run arbitrary code in kernel mode. It is a local attack,
in that the attacker needs to be able to run an exploit program on the
target system. But, given the ability to run such a program, the attacker
can obtain full root privileges. That is the kind of vulnerability
that needs quick attention; it puts every system out there at the mercy of
any untrusted users that may have accounts there — or at the mercy of any
attacker that may be able to
compromise a network service to run an arbitrary program.
On February 15, the vulnerability was disclosed as such, complete with handy exploit
code for those who do not wish to write their own. Most victims' kernels
are unlikely to have the patch, included with the exploit, that makes the
race condition easier to
hit; the exploit also needs the ability to run a process with real-time
priority to win the race more reliably.
But, even without the patch or real-time scheduling, a sufficiently patient
attacker should be able to
time things right eventually. Solar Designer reacted to the disclosure this way:
I haven't looked into this closely yet, but at first glance it
looks like the worst Linux kernel vulnerability in a few years.
For distro vendor kernels (rather than mainline, which was patched
almost a month ago), this is a 0-day.
Arguably this should not be a zero-day vulnerability: the public discussion
of the fix is nearly one month old, and the private discussion had been
going on for some time before. But, as of this writing, no distributors
have issued updates for this problem. That leads to some obvious
questions; quoting Solar Designer again:
The mainline commits from January are by Oleg Nesterov of Red Hat.
Why wasn't(?) the issue handled with due severity within Red Hat,
then - such that Red Hat would at the very least have a statement
on whether and which of their kernels are affected by now.
One assumes that such a statement will be forthcoming in the near future. In the meantime,
users and system administrators worldwide need to be worried about whether
their systems are vulnerable and who might be exploiting the problem.
Once again, we can do better than that. This bug was known to be a serious
vulnerability from the outset; one of the developers who reported it
(Salman Qazi, of Google) also provided the exploit code to show how severe
the situation was. Distributors knew about the problem and had time to
respond to it — but that response did not happen in a timely manner. The
ptrace() problem will certainly be
straightened out in less than three years, but that still may not be a
reason for pride. Users should not be left wondering what the situation is
(at least) one month after distributors know about a serious vulnerability.
Brief items
I know some people don't "like" how the kernel team handles bug
reports and fixes, but seriously, this should have been pretty
obvious by anyone watching the stable kernel releases, which all
distros do. The fact that the distros didn't notify others is not
the kernel community's fault, sorry.
— Greg Kroah-Hartman
James Bottomley describes
the process of taking control of a UEFI secure boot system.
"Even if you only ever plan to run Windows or stock distributions of
Linux that already have secure boot support, I’d encourage everybody who
has a new UEFI secure boot platform to take ownership of it. The way you
do this is by installing your own Platform Key. Once you have done this,
you can use key database maintenance tools like keytool to edit all the
keys on the Platform and move the platform programmatically from Setup Mode
to User Mode and back again. This blog post describes how you go about
doing this."
Over at Linux.com, Linux Foundation (LF) system administrator Konstantin Ryabitsev
describes a joint effort by the LF and the Fedora project to support two-factor authentication in Linux. The article describes multi-factor authentication, some of the problems inherent in using hardware tokens, and notes that smartphones can provide much of the same functionality without requiring a dedicated device.
"Nearly all of us carry a powerful computer in our pocket that is more than capable of calculating and displaying TOTP [Time-based One-Time Password] tokens. Google recognized this a while back and released a free mobile app called 'Google Authenticator,' available on most mobile platforms. Anyone can set up two-factor authentication for their Google Account using the Authenticator, but the best part is that it's not just limited to Google's services. Since TOTP is an open standard, any infrastructure can use Google Authenticator to provision their own software tokens and implement TOTP-based two-factor authentication for their services."
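TOTP is small enough to show in full: RFC 6238 feeds a time-based counter into the RFC 4226 HOTP construction. A self-contained sketch using only the Python standard library (the base32 secret is an invented example, not a real credential):

    import base64, hashlib, hmac, struct, time

    def totp(secret_b32, period=30, digits=6):
        """RFC 6238 time-based one-time password (HMAC-SHA1 variant),
        matching Google Authenticator's defaults."""
        key = base64.b32decode(secret_b32, casefold=True)
        counter = int(time.time()) // period              # moving factor
        mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = mac[-1] & 0x0F                           # dynamic truncation
        code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10**digits).zfill(digits)

    # Server and phone share the secret at enrollment; afterward both sides
    # derive the same six-digit code from the secret and the current time.
    print(totp("JBSWY3DPEHPK3PXP"))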
New vulnerabilities
blender: privilege escalation
Package(s): blender
CVE #(s): CVE-2010-5105
Created: February 15, 2013
Updated: February 20, 2013
Description:
From the openSUSE bug tracker:
An insecure temporary file use flaw was found in the way
'undo save quit' routine of Blender kernel of Blender, a 3D
modeling, animation, rendering and post-production software
solution, performed management of 'quit.blend' temporary file,
used for session recovery purposes. A local attacker could use
this flaw to conduct symbolic link attacks, leading to ability
to overwrite arbitrary system file, accessible with the privileges
of the user running the blender executable.
boost: input validation bypass
Package(s): boost1.49
CVE #(s): CVE-2013-0252
Created: February 18, 2013
Updated: February 25, 2013
Description:
From the Ubuntu advisory:
It was discovered that the Boost.Locale library incorrectly validated some
invalid UTF-8 sequences. An attacker could possibly use this issue to
bypass input validation in certain applications.
dbus-glib: privilege escalation
Package(s): dbus-glib
CVE #(s): CVE-2013-0292
Created: February 18, 2013
Updated: March 11, 2013
Description:
From the Mageia advisory:
A privilege escalation flaw was found in the way dbus-glib, the D-Bus
add-on library to integrate the standard D-Bus library with the GLib
thread abstraction and main loop, performed filtering of the message
sender (message source subject), when the NameOwnerChanged signal was
received. A local attacker could use this flaw to escalate their
privileges.
gnome-online-accounts: information disclosure
Package(s): gnome-online-accounts
CVE #(s): CVE-2013-0240
Created: February 15, 2013
Updated: March 25, 2013
Description:
From the openSUSE bug tracker:
It was found that Gnome Online Accounts (GOA)
did not perform SSL certificate validation, when
performing Windows Live and Facebook accounts creation.
A remote attacker could use this flaw to conduct
man-in-the-middle (MiTM) attacks, possibly leading
to their ability to obtain sensitive information.
java: sandbox restriction bypass
Package(s): java
CVE #(s): CVE-2013-1486
Created: February 20, 2013
Updated: March 19, 2013
Description:
From the Red Hat advisory:
An improper permission check issue was discovered in the JMX component in
OpenJDK. An untrusted Java application or applet could use this flaw to
bypass Java sandbox restrictions.
java: sandbox restriction bypass
Package(s): java
CVE #(s): CVE-2013-1484, CVE-2013-1485
Created: February 20, 2013
Updated: February 21, 2013
Description:
From the Red Hat advisory:
Improper permission check issues were discovered in the JMX and
Libraries components in OpenJDK. An untrusted Java application or applet
could use these flaws to bypass Java sandbox restrictions. (CVE-2013-1484)
An improper permission check issue was discovered in the Libraries
component in OpenJDK. An untrusted Java application or applet could use
this flaw to bypass certain Java sandbox restrictions. (CVE-2013-1485)
jquery: cross-site scripting
Package(s): jquery
CVE #(s): CVE-2011-4969
Created: February 14, 2013
Updated: March 13, 2013
Description:
From the Ubuntu advisory:
It was discovered that jQuery incorrectly handled selecting elements using
location.hash, resulting in a possible cross-site scripting (XSS) issue.
With cross-site scripting vulnerabilities, if a user were tricked into
viewing a specially crafted page, a remote attacker could exploit this to
modify the contents, or steal confidential data, within the same domain.
kernel: denial of service
Package(s): kernel
CVE #(s): CVE-2013-0290
Created: February 18, 2013
Updated: March 22, 2013
Description:
From the Red Hat bugzilla:
A flaw was found in the way __skb_recv_datagram() processed skbs with no payload when MSG_PEEK was requested. An unprivileged local user could use this flaw to cause local denial of service.
mediawiki: session fixation flaw
Package(s): mediawiki
CVE #(s): CVE-2012-5391
Created: February 19, 2013
Updated: March 22, 2013
Description:
From the Red Hat bugzilla:
A session fixation flaw was found in the way MediaWiki, a wiki engine, performed maintenance of user session ids after user login / logout. A remote attacker could provide a specially-crafted URL that, when visited by an authenticated MediaWiki user, could allow the attacker to impersonate the victim.
mozilla: multiple vulnerabilities
Package(s): firefox thunderbird seamonkey
CVE #(s): CVE-2013-0784, CVE-2013-0772, CVE-2013-0765, CVE-2013-0773,
CVE-2013-0774, CVE-2013-0777, CVE-2013-0778, CVE-2013-0779, CVE-2013-0781
Created: February 20, 2013
Updated: March 11, 2013
Description:
From the CVE entries:
Multiple unspecified vulnerabilities in the browser engine in Mozilla Firefox before 19.0, Thunderbird before 17.0.3, and SeaMonkey before 2.16 allow remote attackers to cause a denial of service (memory corruption and application crash) or possibly execute arbitrary code via unknown vectors. (CVE-2013-0784)
The RasterImage::DrawFrameTo function in Mozilla Firefox before 19.0, Thunderbird before 17.0.3, and SeaMonkey before 2.16 allows remote attackers to obtain sensitive information from process memory or cause a denial of service (out-of-bounds read and application crash) via a crafted GIF image. (CVE-2013-0772)
Mozilla Firefox before 19.0, Thunderbird before 17.0.3, and SeaMonkey before 2.16 do not prevent multiple wrapping of WebIDL objects, which allows remote attackers to bypass intended access restrictions via unspecified vectors. (CVE-2013-0765)
The Chrome Object Wrapper (COW) and System Only Wrapper (SOW) implementations in Mozilla Firefox before 19.0, Firefox ESR 17.x before 17.0.3, Thunderbird before 17.0.3, Thunderbird ESR 17.x before 17.0.3, and SeaMonkey before 2.16 do not prevent modifications to a prototype, which allows remote attackers to obtain sensitive information from chrome objects or possibly execute arbitrary JavaScript code with chrome privileges via a crafted web site. (CVE-2013-0773)
Mozilla Firefox before 19.0, Firefox ESR 17.x before 17.0.3, Thunderbird before 17.0.3, Thunderbird ESR 17.x before 17.0.3, and SeaMonkey before 2.16 do not prevent JavaScript workers from reading the browser-profile directory name, which has unspecified impact and remote attack vectors. (CVE-2013-0774)
Use-after-free vulnerability in the nsDisplayBoxShadowOuter::Paint function in Mozilla Firefox before 19.0, Thunderbird before 17.0.3, and SeaMonkey before 2.16 allows remote attackers to execute arbitrary code or cause a denial of service (heap memory corruption) via unspecified vectors. (CVE-2013-0777)
The ClusterIterator::NextCluster function in Mozilla Firefox before 19.0, Thunderbird before 17.0.3, and SeaMonkey before 2.16 allows remote attackers to execute arbitrary code or cause a denial of service (out-of-bounds read) via unspecified vectors. (CVE-2013-0778)
The nsCodingStateMachine::NextState function in Mozilla Firefox before 19.0, Thunderbird before 17.0.3, and SeaMonkey before 2.16 allows remote attackers to execute arbitrary code or cause a denial of service (out-of-bounds read) via unspecified vectors. (CVE-2013-0779)
Use-after-free vulnerability in the nsPrintEngine::CommonPrint function in Mozilla Firefox before 19.0, Thunderbird before 17.0.3, and SeaMonkey before 2.16 allows remote attackers to execute arbitrary code or cause a denial of service (heap memory corruption) via unspecified vectors. (CVE-2013-0781)
mozilla: multiple vulnerabilities
Package(s): firefox thunderbird seamonkey
CVE #(s): CVE-2013-0775, CVE-2013-0776, CVE-2013-0780, CVE-2013-0782,
CVE-2013-0783
Created: February 20, 2013
Updated: March 11, 2013
Description:
From the CVE entries:
Use-after-free vulnerability in the nsImageLoadingContent::OnStopContainer function in Mozilla Firefox before 19.0, Firefox ESR 17.x before 17.0.3, Thunderbird before 17.0.3, Thunderbird ESR 17.x before 17.0.3, and SeaMonkey before 2.16 allows remote attackers to execute arbitrary code via crafted web script. (CVE-2013-0775)
Mozilla Firefox before 19.0, Firefox ESR 17.x before 17.0.3, Thunderbird before 17.0.3, Thunderbird ESR 17.x before 17.0.3, and SeaMonkey before 2.16 allow man-in-the-middle attackers to spoof the address bar by operating a proxy server that provides a 407 HTTP status code accompanied by web script, as demonstrated by a phishing attack on an HTTPS site. (CVE-2013-0776)
Use-after-free vulnerability in the nsOverflowContinuationTracker::Finish function in Mozilla Firefox before 19.0, Firefox ESR 17.x before 17.0.3, Thunderbird before 17.0.3, Thunderbird ESR 17.x before 17.0.3, and SeaMonkey before 2.16 allows remote attackers to execute arbitrary code or cause a denial of service (heap memory corruption) via a crafted document that uses Cascading Style Sheets (CSS) -moz-column-* properties. (CVE-2013-0780)
Heap-based buffer overflow in the nsSaveAsCharset::DoCharsetConversion function in Mozilla Firefox before 19.0, Firefox ESR 17.x before 17.0.3, Thunderbird before 17.0.3, Thunderbird ESR 17.x before 17.0.3, and SeaMonkey before 2.16 allows remote attackers to execute arbitrary code via unspecified vectors. (CVE-2013-0782)
Multiple unspecified vulnerabilities in the browser engine in Mozilla Firefox before 19.0, Firefox ESR 17.x before 17.0.3, Thunderbird before 17.0.3, Thunderbird ESR 17.x before 17.0.3, and SeaMonkey before 2.16 allow remote attackers to cause a denial of service (memory corruption and application crash) or possibly execute arbitrary code via unknown vectors. (CVE-2013-0783)
nss-pam-ldapd: code execution
Package(s): nss-pam-ldapd
CVE #(s): CVE-2013-0288
Created: February 18, 2013
Updated: March 25, 2013
Description:
From the Debian advisory:
Garth Mollett discovered that a file descriptor overflow issue in the
use of FD_SET() in nss-pam-ldapd, which provides NSS and PAM modules for
using LDAP as a naming service, can lead to a stack-based buffer
overflow. An attacker could, under some circumstances, use this flaw to
cause a process that has the NSS or PAM module loaded to crash or
potentially execute arbitrary code.
openconnect: code execution
Package(s): openconnect
CVE #(s): CVE-2012-6128
Created: February 15, 2013
Updated: February 25, 2013
Description:
From the Mageia advisory:
A stack-based buffer overflow flaw was found in the way OpenConnect, a
client for Cisco's "AnyConnect" VPN, performed processing of certain
host names, paths, or cookie lists, received from the VPN gateway.
A remote VPN gateway could provide a specially crafted host name, path,
or cookie list that, when processed by the openconnect client, would
lead to a crash of the openconnect executable.
Comments (none posted)
pidgin: multiple vulnerabilities
Package(s): pidgin
CVE #(s): CVE-2013-0271, CVE-2013-0272, CVE-2013-0273, CVE-2013-0274
Created: February 14, 2013
Updated: March 21, 2013
Description: From the Pidgin advisories:
CVE-2013-0271: The MXit protocol plugin saves an image to local disk using a filename that could potentially be partially specified by the IM server or by a remote user.
CVE-2013-0272: The code did not respect the size of the buffer when parsing HTTP headers, and a malicious server or man-in-the-middle could send specially crafted data that could overflow the buffer. This could lead to a crash or remote code execution.
CVE-2013-0273: libpurple failed to null-terminate user IDs that were longer than 4096 bytes. It's plausible that a malicious server could send one of these to us, which would lead to a crash.
CVE-2013-0274: libpurple failed to null-terminate some strings when parsing the response from a UPnP router. This could lead to a crash if a malicious user on your network responds with a specially crafted message.
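As a generic illustration of the missing-termination bug class behind CVE-2013-0273 and CVE-2013-0274 (this is a sketch, not Pidgin's actual code): strncpy() does not NUL-terminate the destination when the source is at least as long as the buffer, so the copy must be terminated explicitly:

    #include <string.h>

    /* Hypothetical helper: strncpy() leaves dst unterminated when src
     * fills the buffer, so a later strlen() or printf("%s") would read
     * past the end. */
    void copy_user_id(char *dst, size_t dstlen, const char *src)
    {
        strncpy(dst, src, dstlen - 1);
        dst[dstlen - 1] = '\0';  /* the step the vulnerable code omitted */
    }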
Comments (none posted)
polarssl: multiple vulnerabilities
Package(s): polarssl
CVE #(s): CVE-2013-1621, CVE-2013-1622
Created: February 14, 2013
Updated: February 20, 2013
Description: From the Debian advisory:
CVE-2013-1621:
An array index error might allow remote attackers to cause a denial
of service via vectors involving a crafted padding-length value
during validation of CBC padding in a TLS session.
CVE-2013-1622:
Malformed CBC data in a TLS session could allow remote attackers to
conduct distinguishing attacks via statistical analysis of timing
side-channel data for crafted packets.
These appear to be related to the "Lucky Thirteen" vulnerabilities.
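Timing attacks of this class work because processing time varies with secret data (here, the amount of CBC padding). As a generic sketch of the usual defensive pattern, not PolarSSL's actual fix, a constant-time comparison examines every byte regardless of where a mismatch occurs:

    #include <stddef.h>

    /* Returns nonzero if the two buffers are equal; the running time
     * does not depend on where (or whether) the buffers differ. */
    int ct_memeq(const unsigned char *a, const unsigned char *b, size_t n)
    {
        unsigned char diff = 0;
        size_t i;

        for (i = 0; i < n; i++)
            diff |= a[i] ^ b[i];
        return diff == 0;
    }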
Comments (none posted)
roundcubemail: cross-site scripting
Package(s): roundcubemail
CVE #(s): CVE-2012-6121
Created: February 18, 2013
Updated: February 20, 2013
Description: From the Red Hat bugzilla:
A cross-site scripting (XSS) flaw was found in the way RoundCube Webmail, a browser-based multilingual IMAP client, performed sanitization of 'data' and 'vbscript' URLs. A remote attacker could provide a specially-crafted URL that, when opened, would lead to arbitrary JavaScript, VBScript, or HTML code execution in the context of the RoundCube Webmail user's session.
Comments (none posted)
rubygem-rdoc: cross-site scripting
Package(s): rubygem-rdoc
CVE #(s): CVE-2013-0256
Created: February 15, 2013
Updated: February 20, 2013
Description: From the Ruby advisory:
RDoc documentation generated by the rdoc bundled with ruby is vulnerable to an XSS exploit. All ruby users are recommended to update ruby to a newer version which includes the security-fixed RDoc. If you are publishing RDoc documentation generated by rdoc, you are recommended to apply a patch for the documentation or re-generate it with the security-fixed RDoc.
RDoc documentation generated by rdoc 2.3.0 through rdoc 3.12 and prereleases up to rdoc 4.0.0.preview2.1 is vulnerable to an XSS exploit. This exploit may lead to cookie disclosure to third parties.
Comments (none posted)
xen: denial of service
Package(s): xen
CVE #(s): CVE-2013-0215, CVE-2013-0153
Created: February 18, 2013
Updated: February 20, 2013
Description: From the Red Hat bugzilla [1], [2]:
[1] The oxenstored daemon (the ocaml version of the xenstore daemon) does
not correctly handle unusual or malicious contents in the xenstore
ring. A malicious guest can exploit this to cause oxenstored to read
past the end of the ring (and very likely crash) or to allocate large
amounts of RAM.
A malicious guest administrator can mount a denial of service attack
affecting domain control and management functions.
[2] To avoid an erratum in early hardware, the Xen AMD IOMMU code by default chooses to use a single interrupt remapping table for the whole system. This sharing implies that any guest with a passed through PCI device that is bus mastering capable can inject interrupts into other guests, including domain 0.
Furthermore, regardless of whether a shared interrupt remapping table is in use, old entries are not always cleared, providing opportunities (which accumulate over time) for guests to inject interrupts into other guests, again including domain 0.
In a typical Xen system many devices are owned by domain 0 or driver domains, leaving them vulnerable to such an attack. Such a DoS is likely to have an impact on other guests running in the system.
A malicious domain which is given access to a physical PCI device can mount a denial of service attack affecting the whole system.
Comments (none posted)
Page editor: Jake Edge
Kernel development
Brief items
The 3.8 kernel was released on February 18; Linus
said: "
The release got delayed a couple
of days because I was waiting for confirmation of a small patch, but hey,
we could also say that it was all intentional, and that this is the special
'Presidents' Day Release'. It sounds more planned that way, no?"
Some of the headline features in this release include metadata integrity
checking in the xfs filesystem, the foundation for much improved NUMA
scheduling,
kernel memory usage accounting
and associated usage limits,
inline data
support for small files in the ext4 filesystem, nearly complete
user namespace support, and much more. See
the
KernelNewbies 3.8 page for
lots of details.
Stable updates:
3.7.8,
3.4.31, and 3.0.64 were released on February 14,
3.7.9,
3.4.32, and
3.0.65 were released on February 17,
and 3.2.39 came out on February 20.
Comments (none posted)
One person's bug is another person's fascinating invertebrate.
—
Neil Brown
Comments in XFS, especially weird scary ones, are rarely
wrong. Some of them might have been there for close on 20 years,
but they are our documentation for all the weird, scary stuff that
XFS does. I rely on them being correct, so it's something I always
pay attention to during code review. IOWs, when we add, modify or
remove something weird and scary, the comments are updated
appropriately so we'll know why the code is doing something weird
and scary in another 20 years time.
—
Dave Chinner
Just to get back at you though, I'll turn on an incandescent light
bulb every time I have to use -f.
—
Chris Mason (to Eric Sandeen)
Comments (none posted)
By Jonathan Corbet
February 20, 2013
The story of the "native Linux KVM tool" (or, more recently, "kvmtool") has
been playing out since early 2011. This tool serves as a simple
replacement for the QEMU emulator, making it easy to set up and run guests
under KVM. The kvmtool developers have been working under the assumption
that their code would be merged into the mainline kernel, as was done with
perf, but others have
disagreed
with that idea. The result has been a repetitive conversation every merge
window or two as kvmtool was proposed for merging.
The conversation for the 3.9 merge window has seemingly been a bit more
decisive, though. Ingo Molnar (along with kvmtool developer Pekka Enberg)
presented a long list of reasons why they
thought it made sense to put kvmtool into the mainline repository. Ingo
even compared kernel tooling to Somalia,
saying that it was made up of "disjunct entities with not much
commonality or shared infrastructure," though, presumably, with
fewer pirates. Few others came to the
defense of kvmtool, leaving Ingo and Pekka to carry forward the argument on
their own.
Linus responded that he saw no convincing
reason to put kvmtool in the mainline; indeed, he thought that tying
kvmtool with the kernel could be retarding its development. He concluded
with:
So here, let me state it very very clearly: I will not be merging
kvmtool. It's not about "useful code". It's not about the project
keeping to improve. Both of those would seem to be *better* outside
the kernel, where there isn't that artificial and actually harmful
tie-in.
That is probably the end of the discussion unless somebody can come up with
a new argument that Linus will find more convincing. At this point, it
seems that kvmtool is destined to remain out of the mainline kernel
repository.
Comments (4 posted)
Kernel development news
By Jonathan Corbet
February 20, 2013
The 3.9 merge window has gotten off to a relatively slow start, with a mere
1,200 non-merge change sets pulled into the mainline as of this writing.
The process may have been slowed a bit by a sporadic reboot problem that
crept in relatively early, and which has not yet been tracked down. Even
so, a number of significant changes have already found their way in for
3.9, with many more to follow.
Important user-visible changes include:
- Progress has been made toward the goal of eliminating the timer tick
while running in user space. The patches merged for 3.9 fix up the
CPU time accounting code, printk() subsystem, and irq_work
code to function without timer interrupts; further
work can be expected in future development cycles.
- A relatively simple scheduler
patch fixes the "bouncing cow problem," wherein, on a system with
more processors than running processes, those processes can wander
across the processors, yielding poor cache behavior.
For a "worst-case" tbench benchmark run, the result is a 15x
improvement in performance.
- The format of tracing events has been changed to remove some unused
padding. This change created problems
when it was first attempted in 2011, but it seems that the relevant
user-space programs have since been fixed (by moving them to the
libtraceevent library). It is worth trying again; smaller events
require less bandwidth as they are communicated to user space.
Anybody who observes any remaining problems
would do well to report them during the 3.9 development cycle.
- The ftrace tracing system has gained the ability to take a static
"snapshot" of the tracing buffer, controllable via a debugfs file. See
this ftrace.txt patch for documentation on how to use this feature; a
brief usage sketch appears after this list.
- The perf bench utility has a new set of benchmarks intended to help
with the evaluation of NUMA balancing patches.
- perf stat has been augmented to include the ability to print
out information at a regular interval.
- New hardware support includes:
- Systems and processors:
The "Goldfish" virtual x86 platform used for Android development,
Technologic Systems TS-5500 single-board computers, and
SGI Ultraviolet System 3 systems.
- Input:
Cypress PS/2 touchpads and
Cypress APA I2C trackpads.
- Miscellaneous:
ST-Ericsson AB8505, AB9540, and AB8540 pin controllers,
Maxim MAX6581, MAX6602, MAX6622,
MAX6636, MAX6689, MAX6693, MAX6694, MAX6697, MAX6698, and MAX6699
temperature sensor chips,
TI / Burr Brown INA209 power monitors,
TI LP8755 power management units,
NVIDIA Tegra114 pinmux controllers,
Allwinner A1X pin controllers,
ARM PL320 interprocessor communication mailboxes,
Calxeda Highbank CPU frequency controllers,
Freescale i.MX6Q CPU frequency controllers, and
Marvell Kirkwood CPU frequency controllers.
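Returning to the ftrace snapshot feature mentioned in the list above, here is a minimal user-space sketch, assuming debugfs is mounted at /sys/kernel/debug and the kernel includes the new snapshot support: writing "1" to the snapshot file captures the current trace buffer, and reading the same file returns the frozen copy.

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        char buf[4096];
        ssize_t n;
        int fd;

        /* Writing "1" swaps the live buffer into the snapshot buffer. */
        fd = open("/sys/kernel/debug/tracing/snapshot", O_WRONLY);
        if (fd < 0) {
            perror("open snapshot");
            return 1;
        }
        if (write(fd, "1", 1) != 1)
            perror("write snapshot");
        close(fd);

        /* Reading the same file dumps the frozen snapshot. */
        fd = open("/sys/kernel/debug/tracing/snapshot", O_RDONLY);
        if (fd < 0) {
            perror("open snapshot");
            return 1;
        }
        while ((n = read(fd, buf, sizeof(buf))) > 0)
            fwrite(buf, 1, n, stdout);
        close(fd);
        return 0;
    }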
Changes visible to kernel developers include:
- The workqueue functions work_pending() and
delayed_work_pending() have been deprecated; users are being
changed throughout the kernel tree.
- The "regmap" API, which simplifies management of device register sets,
now supports a "no bus" mode if the driver supplies simple "read" and
"write" functions. Regmap has also gained asynchronous I/O support.
If the usual schedule holds, the 3.9 merge window should stay open until
approximately March 5. As usual, LWN will list the most significant
changes throughout the merge window; tune in next week for the next
exciting episode.
Comments (none posted)
By Jonathan Corbet
February 20, 2013
The ARM "
big.LITTLE" architecture is an
interesting beast: it combines clusters of two distinct ARM-based CPU
designs into a single processor. One cluster contains relatively slow
Cortex-A7 CPUs that are highly power-efficient, while the other cluster is
made up of fast, power-hungry Cortex-A15 CPUs. These CPUs can be powered
up and down in any combination, but there are additional power savings if
an entire cluster can be powered down at once. Power-efficient scheduling
is currently a challenge for Linux even on homogeneous architectures;
big.LITTLE throws another degree of freedom into the mix that the scheduler
is, as yet, absolutely unprepared to deal with.
As a result, the initial approach to big.LITTLE is to treat each pair of
fast and slow CPUs as if it were a single CPU with high- and low-frequency
modes. That approach reduces the problem to writing an appropriate
cpufreq governor at the cost of forcing one CPU in each pair to be powered
down at any given time. The big.LITTLE patch set is more fully described
in the article linked above; that patch
set is coming along but is not yet ready for merging into the mainline.
One piece of the larger patch set that might be ready for 3.9, though, is
the "multi-cluster power management" (MCPM)
code.
The Linux kernel has reasonably good CPU power management, but that code,
like the scheduler, was not designed with multiple, dissimilar clusters in
mind. Fixing that requires adding logic that can determine when entire
clusters must be powered up and down, along with the code that actually
implements those transitions. The MCPM subsystem is concerned with the
latter part of the problem, which is not as easy as one might expect.
Multi-cluster power management involves the definition of a state machine
that implements a 2x3 table of states. Along one axis are the three states
describing the cluster's current power situation: CLUSTER_DOWN,
CLUSTER_UP, and CLUSTER_GOING_DOWN. The first two are
steady states, while the third indicates that the cluster is being powered
down, but that the power-down operation is not yet complete. The other
axis in the state table describes whether the kernel running on some CPU
has decided that the
cluster needs to be powered up or not; those states are called
INBOUND_NOT_COMING_UP and INBOUND_COMING_UP. The table
as a whole thus contains six states, along with a well-defined set of rules
describing transitions between those states.
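The state names below are taken directly from the patch set; expressing the table as a pair of C enumerations (an illustrative sketch, not the kernel's actual representation) may make the six combinations easier to see:

    enum cluster_state {
        CLUSTER_DOWN,          /* fully powered off */
        CLUSTER_UP,            /* fully operational */
        CLUSTER_GOING_DOWN,    /* power-down in progress, not yet complete */
    };

    enum inbound_state {
        INBOUND_NOT_COMING_UP, /* no CPU plans to power the cluster up */
        INBOUND_COMING_UP,     /* a power-up has been requested */
    };

    /* A cluster's full state is one of the 3 x 2 = 6 combinations. */
    struct mcpm_state {
        enum cluster_state cluster;
        enum inbound_state inbound;
    };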
Shutdown
To begin with, imagine a cluster confined to a small portion of the state
space: it is either fully powered up (CLUSTER_UP) or fully powered down
(CLUSTER_DOWN). In either of those state combinations, there is no plan to
bring up the cluster, so the inbound state is INBOUND_NOT_COMING_UP
(the INBOUND_COMING_UP substate would make no sense in a fully-running
cluster in any case).
If we start from the CLUSTER_UP state, we can then
trace out the sequence of steps needed to bring the cluster down. The
first of those, once the power-down decision has been made, is to determine
which CPU is (in the MCPM terminology) the "last man" that is in charge of
shutting everything down
and turning off the lights on its way out. Since the cluster is fully
operational, that decision is relatively easy; a would-be last man simply
acquires the relevant spinlock and elects itself into the position. Once
that has happened, the last man pushes the cluster through to the
CLUSTER_DOWN state; each transition in this sequence is executed by the last man
CPU. Once the decision to power down has been made, the cluster moves to
CLUSTER_GOING_DOWN, where the cleanup work is done. Among other
things, the last man will wait until all other CPUs in the cluster have
powered themselves down. Once everything is ready, the last man pushes the
cluster into CLUSTER_DOWN, powering itself down in the process.
Coming back up
Bringing the cluster back up is a similar process, but with an interesting
challenge: the CPUs in the cluster must elect a "first man" CPU to perform
the initialization work far enough that the kernel can run safely on all
the other CPUs. The problem is that, when a cluster first powers up, there
may be no memory coherence between the CPUs in that cluster, so spinlocks
are not a reliable mechanism for mutual exclusion. Some other mechanism
must be used to safely choose a first man; that mechanism is called "voting
mutexes" or "vlocks."
The core idea behind vlocks is that, while atomic instructions will not
work between CPUs, it is still possible to use memory barriers to ensure
that other CPUs can see a specific memory change. Acquiring a vlock in
this environment is a multi-step operation: a CPU will indicate that it is
about to vote for a lock holder, then vote for itself. Once (1) at
least one CPU has voted for itself, and (2) all CPUs interested in
voting have had their say, the CPU that voted last wins. The vlocks.txt documentation file included with
the patch set provides the following pseudocode to illustrate the
algorithm:
    int currently_voting[NR_CPUS] = { 0, };
    int last_vote = -1; /* no votes yet */

    bool vlock_trylock(int this_cpu)
    {
        /* signal our desire to vote */
        currently_voting[this_cpu] = 1;
        if (last_vote != -1) {
            /* someone already volunteered himself */
            currently_voting[this_cpu] = 0;
            return false; /* not ourself */
        }

        /* let's suggest ourself */
        last_vote = this_cpu;
        currently_voting[this_cpu] = 0;

        /* then wait until everyone else is done voting */
        for_each_cpu(i) {
            while (currently_voting[i] != 0)
                /* wait */;
        }

        /* result */
        if (last_vote == this_cpu)
            return true; /* we won */
        return false;
    }
Missing from the pseudocode is the use of memory barriers to make each
variable change visible across the cluster; in truth, the memory caches for
the cluster have not been enabled at the time that the first-man election
takes place, so few barriers are necessary. Needless to say, vlocks are
relatively slow, but that doesn't matter much when compared to a
heavyweight operation like powering up an entire cluster.
Once a first man has been chosen, it drives the cluster through a set of
states on its way back to full functionality; these transitions are
executed by the inbound, first-man CPU. When a decision is made to power the cluster up, the first
man will switch to the CLUSTER_DOWN / INBOUND_COMING_UP
combination. While the cluster is in this state, the first man is the only
CPU running; its job is to initialize things to the point that the other
CPUs can safely resume the kernel with properly-functioning mutual
exclusion primitives. Once that has been achieved, the cluster moves to
CLUSTER_UP / INBOUND_COMING_UP while the other CPUs come on line;
a final transition to CLUSTER_UP / INBOUND_NOT_COMING_UP happens
shortly thereafter.
That describes the basic mechanism, but leaves one interesting question
unaddressed: what happens when CPUs disagree about whether the cluster
should go up or down? Such disagreements will not happen during the
power-up process; the cluster is being brought online to execute a specific
task that will still need to be done. But it is possible for the kernel as
a whole to change its mind about powering a cluster down; an unexpected
interrupt or load spike could indicate that the cluster is still needed.
In that case, a new first man may make an appearance while the last man is
trying to clock out and go home. This situation is handled by having the
first man transition the cluster into the sixth state combination:
The CLUSTER_GOING_DOWN / INBOUND_COMING_UP state encapsulates the
conflicted situation where the CPUs differ on the desired state. The
eventual outcome needs to be a powered-up, functioning cluster.
The last man must occasionally check for this state transition as it goes
through its power-down rituals; when it notices that the cluster actually
wants to be up, it faces a choice:
The optimal solution would be to abort the power-down process, unwind any
work that has been done, and put the cluster into the CLUSTER_UP /
INBOUND_COMING_UP state, at which point the first man can finish the
job. Should that not be practical, though, the last man can complete the
job and switch to CLUSTER_DOWN / INBOUND_COMING_UP instead; the
first man will then go through the full power-up operation. Either way,
the end result will be a functioning cluster.
A few closing notes
The above text pretty much describes the process used to change a cluster's
power state; most of the rest is just architecture-specific details. For
the curious, a lot more information can be found in cluster-pm-race-avoidance.txt, included with
the MCPM patch set. It is noteworthy that the entire MCPM patch
set is contained within the ARM architecture subtree; indeed, the entire
big.LITTLE patch is ARM-specific. Perhaps that is how it needs to be, but
it is also not difficult to imagine that other architectures may, at some
point, follow ARM into the world of heterogeneous clusters. There may come
a time when many of the lessons learned here will need to be applied to
generic code.
Traditionally, ARM developers have confined themselves to working with a
specific ARM subarchitecture, leading to a lot of duplicated (and
substandard) code under arch/arm as a whole. More recently, there
has been a big push to work across the ARM subarchitectures; that has
resulted in a lot of cleaned up support code and abstractions for ARM as a
whole. But, possibly, the ARM developers are still a little bit
nervous about stepping outside of arch/arm and making changes to
the core kernel when those changes are needed. Given that there are
probably more Linux systems running on ARM processors than any other, it
would be natural to expect that the needs of the ARM architecture would
drive the evolution of the kernel as a whole. That is certainly happening,
but, one could argue, it could be happening more often and more
consistently.
One could instead argue that the big.LITTLE patch set is a short-term hack intended to get
Linux running on the relevant hardware until a proper solution can be
implemented. The "proper solution" is still likely to need MCPM, though,
and, in any case, this kind of hack has a tendency to stick around for a
long time. There is almost certainly a long list of use cases for which
the basic big.LITTLE approach gives more than adequate results, while
getting proper performance out of a true, scheduler-based solution may take
years of tricky work. Cpufreq-based big.LITTLE support may need to persist
for a long time while a scheduler-based approach is implemented and stabilized.
That work is currently underway in the form of the big LITTLE MP project; there are patches being passed around within Linaro
now. Needless to say, this work does touch the core scheduler, with over
1000 lines added to kernel/sched/fair.c. Thus far, though, this
work has been done by ARM developers with little code from core scheduler
developers and no exposure on the linux-kernel mailing list. One can only
imagine that, once the linux-kernel posting is made, there will be a
reviewer comment or two to address. So big LITTLE MP is probably not
headed for the mainline right away.
Big LITTLE MP may well be one of the first significant core kernel changes
to be driven by the needs of the mobile and embedded community. It will
almost certainly not be the last. The changing nature of the computing
world has already made itself felt by bringing vast numbers of developers
into the kernel community. Increasingly, one can expect those developers
to take their place in the decision-making process for the kernel as a
whole. Once upon a time, it was said that the kernel was entirely driven
by the needs of enterprises. To the extent that was true, the situation is
changing; we are currently partway through a transition
to where enterprise developers have a lot of help from the mobile and
embedded community.
Comments (1 posted)
Patches and updates
Kernel trees
- Linus Torvalds: Linux 3.8 (February 19, 2013)
Core kernel code
Development tools
Device drivers
Filesystems and block I/O
- Eric W. Biederman: [PATCH review 00/85] userns changes for 9p, afs, ceph, cifs, coda, gfs2, ncpfs, nfs, nfsd, and ocfs2 (February 18, 2013)
Memory management
Networking
Architecture-specific
Security-related
Virtualization and containers
- Rusty Russell: vringh (February 19, 2013)
Miscellaneous
Page editor: Jonathan Corbet
Distributions
By Nathan Willis
February 20, 2013
The Tizen project has
released version 2.0 of its consumer electronics–focused Linux
platform, accompanied by an updated software development kit (SDK).
The new release offers a host of updates and changes to the Web
runtime and the various HTML5 APIs that support it, but Tizen 2.0 also
addresses native application development—which the 1.0 release
largely skipped over.
What's in the box
The 2.0 release was previewed
in an alpha release back in September 2012, and was declared
final on February 18. The SDK is Eclipse-based, and installers are provided
for Ubuntu (32- and 64-bit) as well as for Windows and Mac OS X.
Platform source code is available through the project's git repository. As was the
case with the 1.0 release, the target platform is ARM-based mobile
phones.
As to the make-up of Tizen 2.0 itself, the platform consists of the
core operating system, the Web application framework, and the
newly-added native application framework. The core operating
system has changed little; the kernel has been bumped to 3.0, with
support added for the contiguous memory
allocator (CMA), external connector (extcon) class devices, and
some input/output memory management unit (IOMMU)-capable DMA
devices.
The other system frameworks have received bumps as well, of which
only the Smack security framework and the SyncML-based synchronization
framework are discussed in the documentation. Smack now supports
recursive behavior on directories with the "transmute" bit set, and the
Tizen source includes a reference Smack policy that does not
enforce access control for applications (and thus may not be a good
fit for real-world deployment). The synchronization
framework has been updated to version 1.2 of the Data Synchronization
(DS) and Device Management (DM) profiles, and has a plugin API. The
current plugins in the source release support plain text files,
vCards, and xCal files.
Web things
Tizen's primary application development message has been "HTML5 and
JavaScript" since day one, and the Web framework has received top
billing in the 2.0 development story as well. There are a
number of updated and newly-supported Web APIs, both from the World
Wide Web Consortium (W3C) and from Tizen's custom API stockpile. New
from the W3C side is support for the Vibration, Network Information,
Web Audio, HTML Media Capture, and Clipboard APIs. Network Information and
Web Audio are perhaps
less self-explanatory than the others; the former allows applications
to access connection information (specifically, bandwidth and whether
or not the connection is metered), while the latter is a high-level
API for synthesis and audio processing (as opposed to general sound
output or microphone capture).
In addition to the W3C APIs, Tizen supports several
platform-specific APIs for HTML5 applications. Some of these tackle
application development matters that may differ wildly between HTML5
platforms, such as application lifecycle management, which covers (among other
things) installation, update, and removal. Others, however, address
areas where there are competing APIs from the W3C or other third
parties, such as Alarms, Messaging, Bluetooth, and device power
management.
This is arguably the point at which the "write once, run
everywhere" promise of HTML5 web applications starts to show some
kinks. Other mobile platforms that base their application development
pitch on HTML5 support custom APIs, too. In some cases these APIs
overlap with W3C specifications, and in others they overlap with competing
vendors' offerings. Firefox OS, for example, boasts a long list of supported W3C
APIs as well, but it too has its share of original interfaces—such as WebSMS
for text messages, a feature which Tizen handles in
the Messaging API.
Still, the Tizen 2.0 release offers a detailed
changelog (as an ODS
spreadsheet file) covering the new and updated features of the
non-W3C-standard APIs. The spreadsheet enumerates all of the changed
methods and attributes in the new release; 479 changes in total. That
may sound daunting, but the availability of the changelog itself is a
welcome addition. Several of the non-W3C APIs are slated to be
addressed by one W3C Working Group or another, and the W3C prefers to
keep its discussions on mailing lists and publish its specifications
in updated documents. Compared to wading through multiple revisions
of a "draft specification," sifting through a spreadsheet is child's
play. In fact, Tizen's developer resources and documentation have
improved markedly since the 1.0 days; there are tutorial- and
reference-style documents for almost every API and feature, and the
wiki has filled in nicely.
Lastly, the Tizen Web runtime has received some minor updates. It
now supports running installed Web applications from external storage
(e.g., memory cards), and it supports the NPRuntime plugin standard
used by most other major web browsers. The runtime has also
dropped support for the W3C's Widget URI scheme, which
was used to address application components in a more secure manner
than simply linking to them with file:/// URIs. A
replacement URI scheme is said to be coming in a Tizen 2.1
update.
Here come the natives
The Tizen project's promotional material still emphasizes HTML5 as
the development method of choice, but the 2.0 release finally fills in
the missing or undocumented pieces of the native application
framework. The general outline has long been known, of course: Tizen
provides a Linux userland that is intentionally similar to that found
in desktop distributions (with D-Bus, GStreamer, ConnMan, GeoClue, and
other familiar components), and uses the Enlightenment Foundation
Libraries (EFL) as a GUI toolkit and framework. A few intrepid
developers set out to build or port applications to Tizen during the
1.0 period—some of whom were
quite successful—but 2.0 unveils the complete application
model, libraries and utilities, and an API covering most if not all of
the functionality provided to HTML5 application developers.
The SDK fully supports developing native C++ applications in the
Eclipse-based IDE. Both GUI applications and background services are
supported, and Tizen supports multitasking (although, as is common with
mobile phone platforms, only one GUI application can run in the
foreground at a time). There is a low-memory killer that will zap
background applications in least-recently-used order until memory
pressure has been relieved. The developer's guide points out a few
areas where Tizen native applications differ from typical C++
conventions, such as error handling (Tizen uses error results rather
than exceptions).
The native development platform offers a file I/O layer (with
several pre-defined locations like the media folder), an
"SQLite-compatible" database API, internationalization of strings and
numbers, symmetric and asymmetric cryptography, key and certificate
management, and all of the multimedia and networking APIs one would
expect. There are C++ APIs for accessing the device sensors and
telephony framework as well. EFL provides the user interface widgets,
complete with scene management and animation effects, but OpenGL ES is
available, too.
HTML who?
In fact, the list of features not supported via the native
API is considerably shorter than the list of features which
are. There are certainly several: so far, for example, there
is no bidirectional text support, map service, or secure SIM card file
storage. And that is not a comprehensive list by any means; rather,
what is striking is the extent to which Tizen is now supporting native
application development at all, when one considers the HTML5-only message
that dominated Tizen's early days. In contrast, in this month's 2.0
release, even the reference applications are written in native
code, not in HTML5.
A skeptic might take this influx of native development tools as a
change of heart—as
yet-another-OS-vendor-backing-away-from-HTML5. But it is not clear
that Tizen has lost any enthusiasm for HTML5; considering
that plenty of developers already doubt that HTML5 will
ever be able to compete seriously with native applications' speed and
power, the project may simply be reaching out to attract hordes of
those developers to the platform, rather than settling for just the
HTML5-only crowd.
After all, some application developers will no doubt be happy
just to write in C++ rather than JavaScript because they are more
comfortable doing so. In addition, the HTML5 application development
story is far from complete. Among the non-W3C specifications
supported by Tizen's Web API are well over a dozen APIs for which either
the System Applications, Web Apps, or Device API Working Group is
expected to eventually produce a W3C Recommendation. Since it is hard
to lose money betting against the speed of the W3C standardization
process, perhaps the Tizen project is simply interested in providing a
complete development framework that it can advertise as stable,
especially considering that project co-sponsor Samsung is rumored to be
bringing Tizen-powered devices to market this year.
In the short run, the challenge for Tizen will be seeing how its
native APIs fare in the wild when compared to Qt, which has both a
large installed base and the support of several independent mobile OS
projects (such as Ubuntu and Sailfish OS). The Tizen project recently
announced its second Tizen Developer Conference, to be held May 22
through 24 in San Francisco. It will be interesting to see how the
content of the program balances out between the HTML5 and native APIs.
Should the native API framework take off, that would no doubt be
welcome news for Tizen. For HTML5 fans, however, that would leave
Mozilla's Firefox OS as the sole remaining Web-application driven
mobile platform in the arena—and push the likelihood of a real
HTML5-versus-native showdown much further down the road.
Comments (11 posted)
Brief items
Canonical has
announced
an upcoming version of its distribution for tablets; it seems to have come
a long way since we
reviewed an early
release last November. "
Take calls in Skype while you work in a
document, make notes on the side while you surf the web, tweet while you
watch a movie. Or use apps collaboratively – drag content from one app to
another for a super-productive day. We’ve reinvented the tablet as a bridge
between phone and PC."
Comments (19 posted)
Canonical has
announced
that a preview version of its distribution for phones will be made
available on February 21. "
The release also marks the start of
a new era for Ubuntu, with true convergence between devices. When complete,
the same Ubuntu code will deliver mobile, tablet, desktop or TV
experiences depending on the device it is installed on, or where it is
docked. Ubuntu 13.10 (due in October) will include a complete entry-level
smartphone experience."
The initial images will be for Galaxy Nexus and Nexus 4 handsets.
Comments (19 posted)
The second point release of Ubuntu 12.04 LTS is available. "
To help
support a broader range of hardware, the 12.04.2 release adds an updated
kernel and X stack for new installations on x86 architectures, and matches
the ability of 12.10 to install on systems using UEFI firmware with Secure
Boot enabled."
Full Story (comments: none)
GNU Linux-libre 3.8-gnu is available, based on the recent 3.8 release of
the Linux kernel. "
The GNU Linux-libre project takes a minimal-changes approach to cleaning up Linux, making no effort to substitute components that need to be removed with functionally equivalent Free ones. Nevertheless, we encourage and support efforts towards doing so."
Full Story (comments: none)
Distribution News
Debian GNU/Linux
The first release candidate of the Debian installer for 7.0 (wheezy) is
available for testing.
Full Story (comments: none)
Fedora
Fedora 16 reached its end of life on February 12. There will be no further
updates. Users of Fedora 16 are encouraged to upgrade to Fedora 17.
Full Story (comments: none)
Newsletters and articles of interest
Comments (none posted)
Kororaa Linux has become the
Korora
Project. Lead developer Chris Smart
announced
the name change on his blog. "
The motivation for this was not only
the dropping of an excess letter ‘a’, but it’s also a reflection of the
community which is starting to grow nicely and I wanted something people
could better associate with and belong to." Korora 18 beta is
available for
testing.
Comments (none posted)
The H
takes
a look at Sabayon 11. "
The 64-bit live images of Sabayon 11 can now boot and install on UEFI systems with Secure Boot enabled, as the Sabayon developers decided to adopt Matthew Garrett's signed-shim. A SecureBoot key is in the /SecureBoot directory of the live media and can be used on initial booting, while a SecureBoot keypair is generated during installation and can then be added to the firmware's database, which allows users to sign their own binaries."
Comments (none posted)
Page editor: Rebecca Sobol
Development
By Nathan Willis
February 20, 2013
On February 4, the Mozilla Firefox and Google Chrome teams
demonstrated their interoperability by conducting a live video chat
between the two offices. This is possible because Firefox and Chrome
have both implemented support for WebRTC, the real-time multimedia
communication framework being developed by both projects, in
conjunction with the World Wide Web Consortium (W3C) and Internet
Engineering Task Force (IETF). WebRTC's JavaScript
API will allow web developers to write audio/video
chat applications that function without extensions or
plugins—and, in theory, with reliable interoperability between
browser implementations.
Mozilla documented
the test call on its Mozilla Hacks blog, and the Google team did
the same on the Chromium blog. A recording of the test call
is viewable in both posts as a YouTube video. It lasts about a minute, and
although it looks simple enough, as Mozilla Chief Innovation Officer
Todd Simpson explains, there are quite a few important details under
the surface. The call used the royalty-free VP8 and Opus codecs for video and audio,
respectively, used Interactive Connectivity Establishment (ICE), Session Traversal
Utilities for NAT (STUN), and Traversal
Using Relays around NAT (TURN) for firewall
traversal, and was encrypted with Secure Real-time Transport Protocol (SRTP) and Datagram
Transport Layer Security (DTLS). ICE is a
higher-level NAT-traversal protocol that makes use of STUN and TURN to
select from among several possible connection methods; DTLS is a
datagram-oriented secure transport layer that, in WebRTC, is used to
perform the key exchange for SRTP. The actual media streams are sent over a WebRTC PeerConnection.
The application used in the demo call is AppRTC, which runs on Google's
App Engine service. Interested parties can test it out for
themselves, but will need to use either a recent nightly build of
Firefox (the Desktop edition only, for now) or a Chrome 25 beta in
order to utilize the chat. For curious developers, AppRTC's source
code is available
for inspection; the cross-browser interoperability is made possible by
a short JavaScript adapter
that smooths over the differences between Firefox's
and Chrome's function names: Firefox prefixes its interfaces with
moz and Chrome with webkit (note that, at the moment,
Chrome appears to be the only WebKit-based browser with a WebRTC
implementation). Such prefixing behavior is a
familiar sight to web developers, although the WebRTC interoperability page says
both browsers will drop the prefixes when the specification gets "more
finalized." According to that page, there are a few other syntactic
differences between the browsers' implementations, as well as
differences in STUN support and SRTP connection negotiation. The
Mozilla blog entry also includes code snippets from another sample
application, which appears to be a Firefox-only affair.
Beyond words
Live video chatting is nice, and for Linux users in particular, having
the functionality "baked in" to two of the most popular cross-platform
browsers is a far sight more appealing than installing binary plugins. But
WebRTC's functionality offers more than just conversation. The
getUserMedia API used to access video and audio data through
webcam hardware can be used in other classes of applications. Mozilla
has a tutorial
implementing simple photo-booth functionality, for example, and Marko
Dugonjić recently speculated
that it could be used to implement proximity detection.
WebRTC also specifies a general-purpose DataChannel
API in addition to the PeerConnection media stream. Clients
can use any underlying data transport protocol they choose; WebRTC
only specifies that they agree on its setup, teardown, and
reliability. Mozilla is the first browser vendor to implement
DataChannels for WebRTC; back in November 2012, Simpson demonstrated
Firefox using DataChannels to share content over Firefox's Social
API, including live text chat and peer-to-peer file transfer.
The codec wars
The ability to use WebRTC with royalty-free codecs like Opus and
VP8 can also be seen as a partial vindication of Mozilla's 2012 decision to implement OS-fallback
support for the patent-encumbered H.264 codec. The decision enabled
playback of H.264 content by passing the necessary decoding duties
down to the operating system—including, particularly on mobile
clients, hardware video decoders. Prior to that decision, Mozilla had
argued that it would not support H.264 because doing so would require
it to pay royalties to H.264's patent holders. Mozilla instead fought
for the adoption of the royalty-free Theora and VP8 codecs, including
arguing for the inclusion of such a free codec as a requirement in the
HTML5 <video> element.
When it announced in March 2012 that it would implement a fallback
mechanism for H.264 playback, Mozilla justified the decision by
saying it needed to focus its resources on emerging media standards,
rather than by continuing to fight against an entrenched one. Brendan
Eich cited
WebRTC as the next major battlefield. The battle appears to be going
in favor of unencumbered codecs, as the IETF draft specification
requires
Opus, but it is clearly still not over. The corresponding draft that
addresses video
requirements mentions VP8, but it requires neither VP8 nor any
other specific codec.
No doubt proponents of H.264—particularly those who stand to
reap royalty payments—will continue to lobby in favor of H.264.
But the playing field is different; unlike the <video>
element, consumer video cameras (many of which record to H.264
directly via hardware encoders) do not factor into the basic WebRTC
use case. And, just as importantly, the development of WebRTC is
spearheaded by two free software browser projects. That gives
what-the-browsermakers-want an intrinsic head start against competing
codecs. The fact that users can download and use VP8-powered WebRTC
for free, real-time video chats today gives the royalty-free camp an even
bigger advantage: the sole working implementation.
Comments (1 posted)
Brief items
Want to visit an incomplete version of our website where you can't
zoom? Download our app!
—
Randall Munroe
Sometimes it looks like an IDE, sometimes it looks like an
operating system, sometimes it just looks like an editor. If you
simply must feel like you’re using an IDE, fire up Eclipse; then
there’ll be no doubt. Just don’t blame me if you still aren’t
happy.
—
Jon Snader, addressing
the question "Is Emacs an IDE?"
Comments (3 posted)
Opera has announced that it will stop using its own rendering engine and will migrate its browser to WebKit and the V8 JavaScript engine—specifically, the Chromium flavor of WebKit. Opera Mobile will be ported first, with the desktop edition to follow later. The announcement downplays the significance of the change, saying: "Of course, a browser is much more than just a renderer and a JS engine, so this is primarily an 'under the hood' change. Consumers will initially notice better site compatibility, especially with mobile-facing sites - many of which have only been tested in WebKit browsers."
Comments (92 posted)
Liberated Pixel Cup, the free software game-design contest, has finally revealed the winning entries. The overall grand prize went to "Lurking Patrol Comrades," with additional nods going to "Big Island," "Castle Defense," and "Laurelia's Polymorphable Citizens." In addition to the wrap-up, the announcement addressed the potential for more LPC-style contests in the future: "Despite the judging delay, one other sign of success is how excited many of the participants of this year's Liberated Pixel Cup have been to find out if there would be another one. The answer is simply: we aren't sure, but we are certainly interested in it."
Comments (1 posted)
The developers in the Samba Team are considering removing the SWAT
administration tool due to the series of security problems related to it.
"
The issue isn't that we can't write secure code, but that writing
secure Web code where we can't trust the authenticated actions of our
user's browser is a very different model to writing secure system code.
Frankly it just isn't our area." Unless somebody steps up to
maintain this tool properly, it may well be on its way out.
Full Story (comments: 15)
Karl Berry has released version 5.0 of Texinfo, the GNU documentation format. This is the first new release in four years. "By far the biggest change is
a complete reimplementation of makeinfo. It is now much more flexible
but also, sadly, much slower, despite all our optimization efforts. We
hope all the many improvements make the new version worthwhile for users
nevertheless."
Full Story (comments: none)
A new version of TinyCC, the tiny C compiler, has been released after a three-year development cycle. Changes include many improvements for ARM, multi-arch configuration, and out-of-tree builds.
Full Story (comments: none)
Firefox 19 is available, for Linux and other platforms. According to the release notes, notable changes include the built-in PDF renderer and several changes to the Web Console and debugging tools.
Full Story (comments: none)
Newsletters and articles
Comments (none posted)
February 16 marked two years of Guile 2.x, and in celebration the project held a "potluck"-style hackfest. The entries include portable Scheme bindings to Memcached, a distributed computation system based on ZeroMQ, and a "boot to Guile" image for QEMU. "The image boots a Linux-Libre kernel with an initrd containing a copy of Guile, and where the /init file is a Scheme program. The image's build process is fully automated with GNU Guix. This is a preview of what the Guix-based GNU distribution will soon use."
Full Story (comments: none)
Joseph S. Myers has published another update on the progress of GCC 4.8.0. Stabilization work continues, and "GCC trunk is likely to be ready some time in March for a 4.8 release
branch to be created and a first release candidate made from that
branch."
Full Story (comments: none)
Libre Graphics World takes
a look at recent GIMP developments that refresh the application's
capabilities for working with astrophotography. High bit-depth
support is a major factor, but so is supporting the Flexible Image Transport System
(FITS) data format and implementing key image processing algorithms. "Further work here is likely to involve porting the plugins to GEGL, so that tools like the rounding of stars and layers alignment would work directly on GEGL buffers."
Comments (none posted)
Page editor: Nathan Willis
Announcements
Brief items
The Linux Foundation and Dice have announced the results of the
2013
Linux Jobs Survey & Report. Nearly 1000 hiring managers and 2600
Linux professionals were surveyed. "
Tech is in, but nowhere is the opportunity for career advancement and big financial reward more evident than in the Linux jobs market where salaries for skilled Linux talent are rising at nearly double the rate of other tech professionals."
Full Story (comments: none)
The Creative Commons has
announced the
posting of the third and final draft of the 4.0 license suite and the
beginning of the last comment period. "
In this third discussion
period, we will be returning our attention to ShareAlike compatibility, the
centerpiece of our interoperability agenda. We will take a harder look at
the mechanism necessary to permit one-way compatibility out from BY-SA to
other similarly spirited licenses like GPLv3, and whether one-way
compatibility is, in fact, desired."
Comments (none posted)
Linaro has
announced
the formation of the Linaro Networking Group (LNG), with twelve
founding members. "
With ARM-based SoCs at the heart of the transformation occurring in cloud and mobile infrastructure applications such as switching, routing, base-stations and security, Linaro’s members are collaborating on fundamental software platforms to enable rapid deployment of new services across a range of converged infrastructure platforms. Developing the base platform for diverse and complex networking applications requires a significant amount of software that addresses common challenges. LNG will deliver this as an enhanced core Linux platform for networking equipment. Linaro has been providing common core software for ARM-Powered®, Linux-based mobile devices since June 2010 with recognized success, and it is now building on the collaborative working model that it has created to form special groups focusing on the server and networking segments."
Comments (none posted)
The Python Software Foundation (PSF) has
announced that the trademark on "Python" is at risk in the European Union. A company called Veber has applied for a community trademark on Python "
for all software, services, servers... pretty much anything having to do with a computer". The PSF is looking for help in opposing the application: "
According to our London counsel, some of the best pieces of evidence we can submit to the European trademark office are official letters from well-known companies 'using PYTHON branded software in various member states of the EU' so that we can 'obtain independent witness statements from them attesting to the trade origin significance of the PYTHON mark in connection with the software and related goods/services.' We also need evidence of use throughout the EU." (Thanks to Ben Boeckel and Sebastian Pipping.)
Comments (23 posted)
Articles of interest
The Southern California Linux Expo (SCALE) team has posted interviews
with the speakers. Interviewees include David Rodriguez, Nathan Betzen,
Jonathan Thomas, Jenn Greenaway, Jérôme Petazzoni, Roy Sutton, Dennis
Kibbe, Lance Albertson, Philip Ballew, Joe Brockmeier, Deb Nicholson,
Thomas Cameron, John Willis, Stuart Sheldon, Robyn Bergeron, Mark Hinkle,
Bob Reselman, Christophe Pettus, Brandon Burton, Jorge Castro, Matthew
Garrett, and Kyle Rankin. SCALE begins February 22 in Los Angeles,
California.
Comments (none posted)
Calls for Presentations
GNOME.Asia 2013 will take place May 24-25 in Seoul, Korea. The call for
papers is open until March 8. "
The conference follows the release of
GNOME 3.8, helping to bring new desktop paradigms that facilitate user
interaction in the computing world. It will be a great place to celebrate
and explore the many new features and enhancements to the groundbreaking
GNOME 3 release and to help make GNOME as successful as possible."
Full Story (comments: none)
Puppet Camp
2013 will take place April 19 in Nürnberg, Germany. The
call for
papers will be open until March 15.
Full Story (comments: none)
Registration and the Call for Papers are open for the openSUSE Conference
which takes place July 18-22, 2013 in Thessaloniki, Greece. The call for
papers closes April 3. "
Talk, Workshop and Birds of a Feather session
submissions should be focused in the following 3 areas; Community &
Project, Geeko tech, and openWorld."
Full Story (comments: none)
Upcoming Events
Akademy, the KDE community summit, will
be hosting the Qt Contributor Summit during the week of July 13-19 in Bilbao, Spain. "
A combined conference makes sense. There are many strong personal ties and working relationships among KDE and Qt contributors. Meeting face-to-face will be productive for both projects. 'As part of both the Qt and KDE communities, I've seen how the two have benefited from each other. In the last year and a half, the pace picked up when many KDE developers started working on Qt and certain features inspired by KDE were proposed and accepted into Qt 5. Akademy and the Qt Contributor Summit co-hosting this year means the two communities will have a much bigger opportunity for cross-pollination of ideas.' Thiago Macieira, Qt Core Maintainer, Software Architect at Open Source Technology Center, Intel Corporation." The core Akademy talks will be on Saturday and Sunday (July 13-14), while the Qt Contributor Summit will be "unconference" style in parallel with the Akademy Birds of a Feather (BoF) sessions on Monday and Tuesday (July 15-16).
Comments (none posted)
Events: February 21, 2013 to April 22, 2013
The following event listing is taken from the
LWN.net Calendar.
| Date(s) | Event | Location |
| February 20–22 | Embedded Linux Conference | San Francisco, CA, USA |
| February 22–24 | Mini DebConf at FOSSMeet 2013 | Calicut, India |
| February 22–24 | FOSSMeet 2013 | Calicut, India |
| February 22–24 | Southern California Linux Expo | Los Angeles, CA, USA |
| February 23–24 | DevConf.cz 2013 | Brno, Czech Republic |
| February 25–March 1 | ConFoo | Montreal, Canada |
| February 26–28 | ApacheCon NA 2013 | Portland, Oregon, USA |
| February 26–28 | O’Reilly Strata Conference | Santa Clara, CA, USA |
| February 26–March 1 | GUUG Spring Conference 2013 | Frankfurt, Germany |
| March 4–8 | LCA13: Linaro Connect Asia | Hong Kong, China |
| March 6–8 | Magnolia Amplify 2013 | Miami, FL, USA |
| March 9–10 | Open Source Days 2013 | Copenhagen, DK |
| March 13–21 | PyCon 2013 | Santa Clara, CA, US |
| March 15–16 | Open Source Conference | Szczecin, Poland |
| March 15–17 | German Perl Workshop | Berlin, Germany |
| March 16–17 | Chemnitzer Linux-Tage 2013 | Chemnitz, Germany |
| March 19–21 | FLOSS UK Large Installation Systems Administration | Newcastle-upon-Tyne, UK |
| March 20–22 | Open Source Think Tank | Calistoga, CA, USA |
| March 23 | Augsburger Linux-Infotag 2013 | Augsburg, Germany |
| March 23–24 | LibrePlanet 2013: Commit Change | Cambridge, MA, USA |
| March 25 | Ignite LocationTech Boston | Boston, MA, USA |
| March 30 | Emacsconf | London, UK |
| March 30 | NYC Open Tech Conference | Queens, NY, USA |
| April 1–5 | Scientific Software Engineering Conference | Boulder, CO, USA |
| April 4–5 | Distro Recipes | Paris, France |
| April 4–7 | OsmoDevCon 2013 | Berlin, Germany |
| April 6–7 | international Openmobility conference 2013 | Bratislava, Slovakia |
| April 8 | The CentOS Dojo 2013 | Antwerp, Belgium |
| April 8–9 | Write The Docs | Portland, OR, USA |
| April 10–13 | Libre Graphics Meeting | Madrid, Spain |
| April 10–13 | Evergreen ILS 2013 | Vancouver, Canada |
| April 14 | OpenShift Origin Community Day | Portland, OR, USA |
| April 15–17 | Open Networking Summit | Santa Clara, CA, USA |
| April 15–17 | LF Collaboration Summit | San Francisco, CA, USA |
| April 15–18 | OpenStack Summit | Portland, OR, USA |
| April 17–18 | Open Source Data Center Conference | Nuremberg, Germany |
| April 17–19 | IPv6 Summit | Denver, CO, USA |
| April 18–19 | Linux Storage, Filesystem and MM Summit | San Francisco, CA, USA |
| April 19 | Puppet Camp | Nürnberg, Germany |
If your event does not appear here, please
tell us about it.
Page editor: Rebecca Sobol