By Jake Edge
February 20, 2013
The second day of this year's Android
Builders Summit started off with a panel discussion of a
provocative question: Is Android the new embedded Linux? As moderator
Karim Yaghmour noted, it is not really a "yes or no" question; rather, it was
meant as a "conversation starter". It certainly had that effect and
led panel members to explain how they saw the relationship between
"traditional" embedded Linux and Android.
The panel
Four embedded Linux experts were assembled for the panel, with each
introducing themselves at the outset. David Stewart is an engineering
manager at the Intel Open Source Technology Center where he is focused on
the company's embedded Linux efforts, in particular the Yocto project.
Mike Anderson has been doing embedded work for 37 years and is now the CTO
for The PTR Group, which does embedded Linux consulting and training. Tim
Bird is a senior staff software engineer at Sony Network Entertainment as
well as being involved with the Linux Foundation's Consumer Electronics
Working Group. Linaro's Android lead Zach Pfeffer rounded out the group.
He has been working on Android "since it was a thing" and in embedded Linux
for twelve years.
What is "embedded Linux"?
Defining the term "embedded Linux" and whether it describes Android was
Yaghmour's first query to the
panel. Bird said that he didn't think that Android qualifies as embedded Linux.
Embedded means a "fixed function device" to him, so while Sony wants to
make a platform out of its TVs and other devices, which is "great stuff", he
doesn't see it as "real embedded". Real embedded systems are typified by
being "baked at the factory" for set functionality "and that's what
it does".
Pfeffer disagreed, noting that Android had helped get Linux into some
kinds of devices where it had been lacking. The Android model is a
"particularly efficient way" to support new systems-on-chip (SoCs), so it
provides a way for new systems to be built with those SoCs quickly. While
phones and other Android devices might not fit the profile of traditional
embedded devices, the Android kernel is providing a base for plenty of
other devices
on new SoCs as they become available.
What were the driving forces behind the adoption of embedded Linux,
Yaghmour asked. Anderson had a "one word" answer: royalties, or really the
"lack thereof". Bird agreed that the lack of royalties was a big deal, but the
availability of the source code may have been even more important. It
meant that the user didn't have to talk to the supplier again, which was
important, especially for smaller device makers, because they were never
able to get much support from vendors. With the source, they could fix
their own problems.
Stewart noted that people tend to make the assumption that embedded means
that a realtime operating system is required. Often that's not the case
and Linux is perfectly suited to handling embedded tasks. There is also a
certification ecosystem that has built up around embedded Linux for areas
like safety and health, which helps with adoption.
In addition to the other reasons mentioned, Pfeffer noted that "Linux is fun".
Often disruptive technology comes about because an engineer wants to do
something fun. With a manager who is "more enlightened or maybe
just cheap", they can pull their Linux hobby into work. It is much more
fun to work on embedded Linux than something like Windows Mobile, and he
has done both, he said.
Yaghmour then asked: what is it in Android that is attracting device makers in
that direction? Stewart said that he is "not an Android
guy", but he thinks it is the user interface familiarity that is drawing
manufacturers in. It is not so much the app store, apps, or services, but
that users are now expecting the "pinchy, zoomy, swirly" interface.
Anderson agreed, noting that it makes it much easier to drop a new device
onto a factory floor if users already understand how to interact with it.
Bird pointed to the silicon vendors as a big part of the move to Android. The
big silicon vendors do an Android port before anything else, he said.
Stewart (from Intel) noted that not all silicon vendors had that
Android-first strategy, to a round of chuckles. While there is the "thorny
issue" of free video driver support, Bird continued, many people are
"coattailing" on the Android support that the silicon vendors provide.
On the other hand, Android has been the "club" to bring some vendors to the
table in terms of open source drivers, Anderson said, using Broadcom as an
example.
But Pfeffer believes that the app ecosystem is the big draw. It is a "clear
value proposition" for a vendor who can build a platform that can be
monetized. The APIs provided by Android either won't change or will be
backward compatible, so vendors can depend on them. In fact, Google doesn't
document how the platform is put together because it doesn't want vendors
to depend on things at that level, he said.
But vendors who are putting Android on their own hardware are going to have
to understand and adapt the platform, Bird said. Stewart noted that he heard that early
Android tablets had to just hide the phone dialer because there was no way
to get
rid of it. There was much agreement that customizing Android to make it
smaller or faster was difficult to do.
Drawbacks to Android
That led to the next question: what are the drawbacks for Android? Bird
said that it has a "really big footprint" and that "JITted code is slower
than native". That is a classic tradeoff, of course. As an example he
noted the first video ad in a print magazine, which used an "inexpensive"
Android
phone board in the magazine page. That board was around $50, so it only
appeared in the first 1000 issues of the magazine. Because of the size of
Android, you will not see a $5 board that can run the whole stack, he said.
Pfeffer countered that you can get Android to fit in 64M on certain classes
of devices. Android doesn't prevent you from "going low", he said. Bird
noted that his camera project only has 32M. Anderson described the Android
platform as having "seven layers that keep you from the hardware", which
adds to the complexity and size. In addition, you need a high-end GPU in order to run
Ice Cream Sandwich reasonably, he said. Pfeffer said that there was the
possibility of using
software rendering, but there was skepticism expressed about the performance of that option.
Beyond just the footprint and complexity, are there drawbacks in how the
Android community is put together and works, Yaghmour asked. Bird
mentioned that there isn't really a community around "headless Android" and
that there isn't really any way for one to spring up. Because "you get
whatever Google puts out next", there is little a community could do to
influence the direction of headless Android. If, for example, you wanted
to add something for headless Android that Google has no interest in, you
have to
maintain that separately as there isn't much of a path to get it upstream.
There are devices that are difficult to add to Android, Anderson said.
Adding a sensor that "Google never thought of" to libsensors is "not
trivial". Making a headless Android system is also not easy unless you
dive deeply into the Android Open Source Project (AOSP) code. Stewart
noted that Android adoption is generally a one-way street, so it is
difficult to switch away from it. Pfeffer agreed, noting that the
companies that do adopt Android reap a lot of benefits, but "it is a
one-way trip".
When he started looking at Android, Yaghmour thought it would be "easy", as
it was just embedded Linux. There was a lot more to it than that, with
various pieces of the Linux stack having been swapped out for Android
replacements. But that's a good thing, Bird said. His "strong opinion" was
that Android is a "breath of
fresh air" that didn't try to force Unix into the embedded space. Android
was able to discard some of the Unix baggage, which was a necessary step, he said.
There are some really good ideas in Android, especially in the app
lifecycle and the idea of intents, all of which added up to "good stuff".
Android was in the right place at the right time, Anderson said. For
years, embedded Linux couldn't make up its mind what the user interface
would be, but Android has brought one to the table. Android also takes into
account the skill set of programmers that are coming out of school today.
Constrained environments are not often taught in schools, so the "infinite
memory" model using Java may be appropriate.
Stewart noted that HTML5 still has the potential for cross-platform user
interfaces, and doesn't think the door should be closed on that
possibility. Yocto is trying to support all of the different user
interface possibilities (Qt, GTK, EFL, HTML5, ...). There is also the
question of the future of Java, Anderson said. The security issues that
Oracle has been slow to fix are worrisome, and no one really knows where
Oracle plans to take Java.
While embedded Linux has nothing to learn from Android on the technical
level, it could
take some higher-level lessons, Pfeffer said. Focusing on creating an
ecosystem from the app developer to the user is extremely important.
Beyond that, reducing the time to market, as Android has done, so that a
new widget using a new SoC can be in the hands of app developers and users
quickly should be a priority. Existence proofs are very powerful, so a
system which has a billion users that a device maker can just plug into is
compelling, he said.
Licensing
For the most part, Android started with a clean slate for licensing
reasons, Yaghmour said; what are the pros and cons of that decision? The
licenses do have an effect, Bird said, and the BSD/Apache license nature of
Android changes how companies perceive their responsibilities with respect
to the open source communities. Companies like BSD licenses, but those
licenses don't encourage them to push their changes upstream—or to release them at
all. That means we don't really know how much interesting technology is
getting lost by not being shared, which "is a worry", he said.
Stewart noted that the BSD license seemed to remove
the "multiplicative effect" that you see in code bases that are licensed
under the GPL. He pointed out that the BSDs themselves seem to suffer from that
because sharing the code is not required. Anderson said that the vendors
hiding their code make it hard for his company to help its customers. If a
codec the customer wants doesn't work with the PowerVR GPU drivers, there
is little he can do to help them. Some of those vendors are just "hiding
behind the license", he said.
The license situation is a "red herring", according to Pfeffer, because
"market pressure will trump any licensing issues". If a GPLv3-licensed
component will help a device maker beat their competitor to market, "it
will ship".
Embedded Linux and Android can coexist both in the community and in the
same devices, the panel agreed. The key is the kernel, Anderson said; as
long as that is present, one could run the Android user interface atop a
realtime kernel, given an understanding of the architecture of both sides.
Another possibility would be to use virtualization, Stewart said, perhaps
in an automotive setting with Android games and apps in the back seat
running in a VM on a more traditional embedded Linux system to control the
critical systems.
Yaghmour's final question asked whether we will eventually see Android "wipe
embedded Linux off the map". All except Pfeffer had short "no" answers.
Pfeffer said he would hate to see traditional embedded Linux go away, but
that we may see it eventually. He likened Android to the invention of the
loom. Prior to that invention, textiles were crafted by hand, but the loom
standardized how you create textiles, and Android may do the same for Linux
devices. Anderson and Bird were quick to point out SoCs and platforms
where Android will never run as counter-examples. Stewart had the last
word on that question when he described Pfeffer's answer as something
like what "Bill Gates would have said"—to a round of laughter from
participants and audience alike.
[ Thanks to the Linux Foundation for assisting with travel costs to San Francisco for ABS. ]
By Jake Edge
February 20, 2013
The Linux Foundation's Rudolf Streif introduced one of the morning keynotes
at the 2013 Android
Builders Summit (ABS) by noting that
androids in space have a long history—at least in science fiction like
Star Wars. He was introducing Dr. Mark Micire of the US National Aeronautics and Space Administration
(NASA) Ames Research Center, who recently led a project that put the Android
operating system into space in the form of an "intelligent space robot"
that currently inhabits the International Space Station (ISS). Micire
brought the tale of how that came about to the first day of ABS on February
18 in San Francisco.
He started off by expressing amazement at what the community has done with
Android that takes it far beyond its mobile phone roots. He has several
different versions of his talk, but when he looked at the talk descriptions
for ABS, he quickly realized that the "geeky version" would be right for
the audience. A video provided the high-level view of the project, ranging
from the liftoff of the last space shuttle, which carried the first version
of the robots to the ISS, to a description of using a Nexus S smartphone to
communicate with and control the robot. The idea is to have
a robot
available to both the crew and the ground-based operations staff to
take over some of the menial tasks that astronauts currently have to perform.
Some history
The spherical light saber trainer seen on the Millennium Falcon in Star
Wars was the
inspiration for several different space robot projects over the years,
Micire said. That includes the "personal satellite assistant" (PSA) which
was developed by the NASA Ames Research Center. It had a display
screen, two-way audio, a camera, and useful tools like a flashlight in a
roughly spherical package. Similarly, the Johnson Space Center created the
AERCam that could fly around the shuttle in space to take photographs and
video of the spacecraft. The AERCam actually flew on the shuttle in 1997,
but both projects were eventually canceled.
Micire's project evolved from a senior project at MIT, which
created roughly spherical satellite simulators to be used for experimenting with
synchronized satellite maneuvers. The algorithms to do those kinds of
maneuvers need to be developed in some cases, but it is expensive to test new
algorithms
with actual
satellites. The MIT SPHERES (Synchronized Position Hold, Engage, Reorient
Experimental Satellites) project used volleyball-sized robots that could be
flown inside the ISS to test these algorithms.
The SPHERES robots have a tank of carbon dioxide to use as propellant, much
like a paintball gun. In fact, when they need refilling, Micire has sometimes
taken them to Sports Authority (a US sporting goods store) to the
puzzlement of the clerks there. The CO2 is routed to thrusters
that can move the robot in three dimensions.
A Texas Instruments DSP that is "a decade old at this point" is what runs
the SPHERES robot. There is a battery pack to run the CPU and some
ultrasonic receivers that are used for calculating position. That pack uses
standard AA batteries, he said, because lithium-ion and other battery types can
explode in worst-case scenarios, which makes it difficult to get them
aboard a spacecraft. It is easy to "fly AA batteries", though, so lots of
things on the ISS run using them.
Since the cost of getting mass to low earth orbit is high, he said that he
doesn't even want to contemplate the
amount being
spent on resupplying AA batteries to the ISS.
The robot also has an infrared transmitter that sends a pulse to the
controller of the ultrasonic beacons installed in an experimental lab area of
the ISS. The controller sees the IR pulse and responds by sending several
ultrasonic pulses at a known rate. The receivers on the SPHERES pick up that
signal; using the known locations of the transmitters and the speed of sound,
the robot can then triangulate its position within the experimental zone,
which is a cubical area six feet on a side. Micire
showed video of the SPHERES in action on the ISS. He played the video at
4-6x normal speed so that the movement wasn't glacial; NASA safety
engineers prefer not to have high-speed
maneuvering via CO2 jets inside spacecraft.
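For the curious, the position fix described above amounts to a least-squares
range problem. Here is a rough sketch of that kind of calculation; the beacon
layout, speed-of-sound constant, and solver are made up for illustration and
are not the actual SPHERES code.

    import numpy as np

    SPEED_OF_SOUND = 340.0  # m/s, roughly; the real system would calibrate this

    def locate(beacons, tofs, iterations=50):
        """Estimate a 3D position from beacon locations and ultrasonic
        time-of-flight measurements (Gauss-Newton least-squares fit)."""
        ranges = SPEED_OF_SOUND * np.asarray(tofs)    # convert times to distances
        pos = beacons.mean(axis=0)                    # start at the centroid
        for _ in range(iterations):
            diff = pos - beacons                      # vectors beacon -> estimate
            predicted = np.linalg.norm(diff, axis=1)  # predicted ranges
            jacobian = diff / predicted[:, None]      # d(range)/d(position)
            step, *_ = np.linalg.lstsq(jacobian, ranges - predicted, rcond=None)
            pos = pos + step
        return pos

    # Five hypothetical beacons around a ~1.8 m (six foot) cube.
    beacons = np.array([[0, 0, 0], [1.8, 0, 0], [0, 1.8, 0],
                        [0, 0, 1.8], [1.8, 1.8, 1.8]], dtype=float)
    true_pos = np.array([0.9, 0.6, 1.2])
    tofs = np.linalg.norm(beacons - true_pos, axis=1) / SPEED_OF_SOUND
    print(locate(beacons, tofs))   # prints approximately [0.9 0.6 1.2]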
The NASA Human Exploration and Telerobotics (HET) project that Micire runs
wanted to create robots that could handle a number of different tasks in
space that are currently done by astronauts. The idea is to provide both
the crew on the station and the team on the ground with a useful tool.
Right now, if there is an indicator light on a particular panel in the
station and the ground crew wants to know its state, they have to ask a
crew member to go look. But a robot could be flown over to the panel and
relay video back to the ground, for example.
The HET team was faced with the classic decision of either rolling its own
controller for the Smart SPHERES or buying something "commercial off the
shelf" (COTS). The team didn't have a strong opinion about which choice
was better,
but sat down to list their requirements. Those requirements included
sensors like a gyroscope, camera, accelerometer, and so on, in a package with a
reasonably powerful CPU and a fair amount of memory and storage. While
Micire was
worriedly
thinking "where are we going to find such a device?", he and the team were
all checking their email on their smartphones. It suddenly became obvious
where to find the device needed, he said with a chuckle. Even NASA can't
outrun the pace of the mobile phone industry in terms of miniaturization and
power consumption, he said.
Flight barriers
There are a lot of barriers to getting
a device "space rated" so that it can fly on the ISS (or other
spacecraft). The engineers at NASA are concerned about safety
requirements, and anything that could potentially "deorbit the station" is
of particular concern. HET wanted to go from a concept to flight in
roughly a year; "that's insane", Micire said, as it normally requires 2-3 years
from concept to flight because of safety and other requirements.
But using a mobile phone would help speed the process. Right about the time
a platform was needed, he heard about the Nexus S ("bless the internet!")
being released. It had just what was needed, so he and a colleague "camped out"
in line at the Mountain View Best Buy to get numbers 11 and 12 of the 13
that were delivered to that store.
The first thing they did to these
popular and hard-to-get new phones was to tear them apart to remove the
ability to transmit in the cellular bands. For flight safety, there must
be a hardware mechanism that turns off the ability to transmit. Removing the
driver from the kernel was not sufficient for the safety engineers, so a
hardware solution was needed. They decided to remove the transmit chip from
the board, but it was
a ball-grid-array (BGA) part, so they heated one of the boards to try to do
so. The first attempt resulted in an "epic fail" that ruined the phone,
but the attempt on the second board was successful. Now, pulling that chip
is the first
thing done to new phones to get around that "airplane mode problem".
The next problem they faced was the batteries. As he mentioned earlier,
lithium-ion is problematic for space; it takes two years to get those kinds
of batteries certified. Instead they used a "space certified" AA battery
holder, adding a diode that was used to fool the battery controller on the
phone. Micire said that he did a bit of "redneck engineering" to test the
performance of the AA batteries over time: he taped the phone to his laptop
and pointed its camera at voltage
and current meters hooked up to the battery pack. The phone ran a
time-lapse photo application, and he
transcribed the data from that video into a spreadsheet. He found that the
phone will
run well for seven hours using six AA batteries.
In the micro-gravity environment in the ISS, broken glass is a serious
problem. It can "become an inhalant", for example. Something had to be
done about the display glass so that breaking it would not result in glass
fragments. Micire thought he had the perfect solution by putting acrylic
tape over the display, but it turns out that tape is flammable, so it was
deemed unsuitable. In the
end, Teflon tape fit the bill. He showed some graphic photographic
evidence of what was done to a phone "in the interests of science" to prove
to NASA safety engineers that a broken screen would not cause a hazard.
The phone interfaces to the SPHERES over a USB serial connection because
the TI DSP
doesn't support anything else. The phone and battery holder are then
essentially taped to the side of the robot.
The team had "no time for software", Micire said, but "Cellbots saved our lunch" with a data
logging app for Android. In order to test the Nexus S sensors in space,
they needed a way to log the sensor data while the Smart SPHERES were
operating. It turns out that asking Samsung what its accelerometer does in
micro-gravity is not very fruitful ("we don't know, you're from NASA").
Sampling every sensor at high frequency and recording the data would allow
them to figure out which sensors worked and which didn't.
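The general idea—poll every sensor as fast as practical and write
timestamped rows to storage for later analysis—looks something like the
following sketch. This is not the Cellbots app; the sensor-reading function
is purely hypothetical and stands in for the platform sensor API.

    import csv
    import time

    def read_sensors():
        """Hypothetical stand-in for the platform sensor API; on the real
        device the data came from the accelerometer, gyroscope,
        magnetometer, and so on."""
        return {"accel_x": 0.0, "accel_y": 0.0, "accel_z": 0.0,
                "gyro_x": 0.0, "gyro_y": 0.0, "gyro_z": 0.0}

    def log_sensors(path, rate_hz=100, duration_s=10):
        """Sample all sensors at a fixed rate, appending timestamped CSV rows."""
        period = 1.0 / rate_hz
        with open(path, "w", newline="") as f:
            writer = None
            end = time.time() + duration_s
            while time.time() < end:
                row = {"timestamp": time.time(), **read_sensors()}
                if writer is None:
                    writer = csv.DictWriter(f, fieldnames=row.keys())
                    writer.writeheader()
                writer.writerow(row)
                time.sleep(period)

    log_sensors("sensors.csv")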
For any part that is used in an aircraft or spacecraft, a "certificate of
conformance" is required. That certificate comes from the supplier and
asserts that the part complies with the requirements. It's fairly easy to
get that from most suppliers, Micire said, but Best Buy is not in that
habit. In a bit of "social hacking", they showed up at the store five
minutes before closing time, cornered a very busy manager, and asked them
to sign a piece of paper that said "a Nexus S is a Nexus S"—after a
puzzled look as another store employee bugged them for attention, the
manager simply signed the certificate.
It turns out that all of the computers on the ISS run Windows XP SP 3,
which means there is no driver to talk to the Nexus S. Since it would take 2-3
years to get a driver certified to be installed on those machines, another
solution had to be found. They ended up writing an app that would kick the
phone's USB into mass storage mode prior to the cable being plugged into the
computer. Because Windows XP has a driver for a USB mass storage device,
it could be used to communicate with the Nexus S.
Testing
The first test units were launched on the final shuttle mission, and Micire
showed
video of the Smart SPHERES in action on the ISS. The light level was
rather low in the video because the fluorescent lights were turned down to
reduce jamming on the beacons. That was actually useful as it proved that
the camera produced reasonable data even in low-light situations. The
sensors on the phone (gyroscope, magnetometer, ...) worked well, as shown
in his graphs. The gravity
sensor showed near-zero gravity, which must mean that it was broken, he
joked. In reality, that is, of
course, the proper reading in a micro-gravity environment.
There are "lots of tubes" between the ISS and ground-based networks, so the
latency can be rather large. They were still able to do video transmission in
real time from the Smart SPHERES to the ground during the initial tests,
which was a bit of a surprise. After that test, the mission director
pulled the team aside; at first Micire was a little worried they were in
trouble, but it turned out that the director wanted to suggest adding Skype
so he could have a "free-flying robot that I can chase astronauts with".
In December 2012, another experiment was run. Once again, sped-up video
was shown of the robot navigating to a control panel to send video of its
state to controllers on the ground. Those controllers can do minor
adjustments to the orientation of the robot (and its camera) by panning from
side to side. There is no ability to navigate the robot in realtime from
the ground due to latency and potential loss-of-signal issues.
Other experiments are planned for this year and next, including having the
robot handle filming an interview with one of the astronauts. Currently
when a class of schoolchildren or other group has the opportunity to
interview the crew in space, two astronauts are required: one for the
interview and one to hold the camera. Since the Nexus S gives them "face
recognition for free", the robot could keep the camera focused on the crew
member being interviewed, which would free up the other crew member.
Micire's talk was an excellent example of what can happen when a device
maker doesn't lock down its device. It seems likely that no one at
Google or Samsung considered the possibility of the Nexus S being used to
control space robots when they built that phone. But because they didn't
lock it down, someone else did consider it—and then went out and actually
made it happen.
[ Thanks to the Linux Foundation for assisting with travel costs to San Francisco for ABS. ]
By Jake Edge
February 21, 2013
Andrew Chatham came up the peninsula to San Francisco from Google to talk
to the 2013 Embedded
Linux Conference about the self-driving car project. Chatham has
worked on the project since 2009 and seen it make great strides. It is by
no means a finished product, but the project has done 400,000 miles of
automated
driving so far.
History
"Cars are a miracle", he said. The 45-mile drive he did to Mountain View
yesterday would have taken our ancestors all day to do on a horse. But,
cars are also problematic, with more than 30,000 annual deaths in the US
due to car accidents. That number has "finally started dropping", likely
due to more seat belt usage, but it is still too high. Even if there
are no fatalities, accidents cost time, money, and more. We have done a
pretty good job figuring out how to survive accidents, he said, but it is time
to stop having them.
In the mid-2000s, the US Defense Advanced Research Projects Agency (DARPA) ran several
challenges for self-driving cars on a 150-mile course in
the Mojave
Desert. The first year, the winning team's vehicle went only seven
miles. But the next year, five teams actually completed the course, which
was eye-opening progress. In 2007, DARPA moved the challenge to a
simulated urban environment that featured a limited set of traffic
interactions (four-way stops, but no traffic lights, for example). After
that event, "DARPA declared victory" and moved on to other challenges,
Chatham said.
In 2009, Google stepped in to solve the problem "for real". Chatham noted
that people have asked why Google would get involved since driving cars doesn't
involve searching the internet or serving ads. The company thinks it is an
important problem that needs to be solved, he said. Google is qualified to
attack the problem even though it has never made cars because it is mostly
a software problem. Also, one major component of the problem involves maps,
which is an area where Google does have some expertise.
Broadly, there are two categories for self-driving cars: one for cars with
all the
smarts in the car itself and one where the smarts are in the road.
Cars that are self-contained need to be ready for anything and
cannot make assumptions about the obstacles they will face. That tends to
lead to cars that move slowly and drive cautiously, much differently than
humans. Smart roads allow for dumb cars, but there are some serious
obstacles to overcome. Infrastructure is expensive, so there is a
chicken-and-egg problem: who will build expensive smart roads (or even
lanes) for
non-existent dumb cars that can use them?
The Google approach is something of a hybrid. There are no actual
infrastructure changes, but the system creates a "virtual infrastructure".
That virtual infrastructure, which is built up from sensor readings and map
information, can be used by the car to make assumptions much like a human
does about what to expect, and what to do.
Sensors and such
The car's most obvious sensor is the laser rangefinder that lives in a
bubble on top of the car. It spins ten times per second and produces
100,000 3D points on each spin. Each of those points has 5cm accuracy.
The laser can only see so far, though, and can be degraded in conditions that
affect photons, such as rain.
The car also has radar, which is not as precise as the laser, but it can
see further. It can also see behind cars and other solid objects. Using
the Doppler effect, speed information for other objects can be derived.
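As a rough illustration of the principle only (the radar band and frequency
shift below are invented numbers, not details from the talk), the radial
speed of a target follows directly from the Doppler shift of the echo:

    # For a monostatic radar the echo is shifted by 2 * v * f0 / c, so the
    # radial speed can be recovered from the measured frequency shift.
    C = 3.0e8   # speed of light, m/s

    def radial_speed(f0_hz, doppler_shift_hz):
        """Target speed toward (+) or away from (-) the radar, in m/s."""
        return C * doppler_shift_hz / (2.0 * f0_hz)

    # A hypothetical 77 GHz radar seeing a 5.1 kHz shift -> ~10 m/s closing speed.
    print(radial_speed(77e9, 5.1e3))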
There are also cameras on the car. The general "computer vision problem"
is hard, and still unsolved, but solving it isn't needed for the car's use
of the camera. The camera is used to spot things that human drivers rely on
as well, which means they are generally rather obvious and are of known
shapes, sizes, and likely positions (e.g. traffic lights). Beyond that are the expected
sensors like gyroscope, accelerometer, GPS, compass, and so on.
There are two main computers in the car. One is a very simple "drive by
wire system" that has no operating system and is just in a tight loop
controlling the brakes, steering, and accelerator. The second is a
"workstation class system running FreeBSD", Chatham joked. In reality it
is running a lightly customized Ubuntu 12.04 LTS. It is not running the
realtime kernel, but uses SCHED_FIFO and control groups to provide
"realtime-ish" response.
There are several classes of processes that run on the system, with the
least critical being put into control groups with strict resource limits.
If any of the critical processes miss their deadlines, it is a red flag
event which gets logged and fixed. In the 400,000 miles the cars have
traveled (so far always with a human on board), those kinds of problems
have been largely eliminated. All of the data for those journeys has been
stored, so it can be played back whenever the code
is changed to try to find any regressions.
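A minimal sketch of that general technique on Linux—a realtime scheduling
class for a critical process and a CPU-share limit for a less-critical
one—might look like the following. The paths, priority, and process roles
are invented for illustration and are not Google's configuration.

    import os

    def make_realtime(priority=50):
        """Put the calling process into the SCHED_FIFO class so it preempts
        normal SCHED_OTHER tasks; requires root (or CAP_SYS_NICE)."""
        os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(priority))

    def confine(pid, cgroup="/sys/fs/cgroup/cpu/low_priority", shares=128):
        """Place a less-critical process in a CPU cgroup (v1 layout) with a
        reduced share so it cannot starve the critical processes."""
        os.makedirs(cgroup, exist_ok=True)
        with open(os.path.join(cgroup, "cpu.shares"), "w") as f:
            f.write(str(shares))
        with open(os.path.join(cgroup, "tasks"), "w") as f:
            f.write(str(pid))

    if __name__ == "__main__":
        make_realtime()          # e.g. the loop feeding the drive-by-wire box
        # confine(logger_pid)    # e.g. a data-logging helper process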
From the "blank slate" state of the car, GPS data is added so that it knows
its location. That data is "awful really", with 5m accuracy "on a good
day" and 30m accuracy at other times. The car's sensors will allow it to
accurately know which way it is pointing and how fast it is going. From
there, it adds a logical description of the roads in that location derived
from the Google Maps data. It uses maps with 10cm resolution plus altitude
data, on top of which the logical information, like road locations, is layered.
All of that information is used to build a model of the surroundings. The
altitude data is used to recognize things like trees alongside the road, as
well as to determine the acceleration profile when climbing hills. The
goal is to stay close to the center of the lane in which the car is
traveling, but not if something is blocking the lane (or part of it). You
also don't want to hit the guy in front of you, so don't go faster than he
does. Once the model is built, driving is largely a matter of following
those two rules, he said.
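A toy restatement of those two rules as code, with made-up signal names and
gains (this is nothing like the actual planner, just the stated logic):

    def plan_step(lane_offset_m, lane_blocked, lead_speed, own_speed,
                  speed_limit, k_steer=0.5, k_accel=0.3):
        """One control step for the two rules above; returns a
        (steering, acceleration) pair.  Units and gains are arbitrary."""
        # Rule 1: steer back toward the center of the lane, unless the lane
        # is blocked, in which case a higher-level planner would have to
        # choose a new path (not modeled here).
        steering = 0.0 if lane_blocked else -k_steer * lane_offset_m

        # Rule 2: don't go faster than the vehicle ahead (or the speed limit).
        target = speed_limit if lead_speed is None else min(speed_limit, lead_speed)
        acceleration = k_accel * (target - own_speed)
        return steering, acceleration

    # Drifting 0.4 m off center while closing on a slower car ahead:
    print(plan_step(0.4, False, lead_speed=22.0, own_speed=25.0, speed_limit=29.0))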
Problems and solutions
In California (unlike many other places) it is legal for motorcycles to
travel between the lanes, moving between the cars that are in each of those
lanes.
That was a difficult problem to solve because that situation could fool the
sensors to some extent.
One of the tests they ran was on a race track set up to see how the car did
versus Google employees. It reliably beat them in that test, though
Chatham believes the employees would have eventually beaten the car. It's
"a Prius, not a sports car", so there were limits to the kinds of
maneuvering that can be done, but the test really showed the "precision
with which we can
reliably drive the car", he said.
Lots about driving is "social", Chatham said. For example, there is a set
of rules that
are supposed to be followed at a four-way stop, but no one follows
them. The car had to learn to start to edge out to see when the others
would let it go through. Similarly, merging is social, and they have spent
a lot of time getting that right. Unlike human drivers, the car can't make
eye contact, so it is a matter of getting the speed and timing right for
what is expected.
The "bug on the window problem" is another difficult one. For the car, anything
that messes up its sensors needs to be handled gracefully. In those cases,
handing control back to the human in a sensible fashion is the right thing
to do.
Many people ask about how the car does in snow, but it hasn't been tried
yet. Currently, Chatham thinks it wouldn't do all that well, but thinks it
"could do OK eventually". One problem there is that snowbanks appear to be
giant walls of water to the lasers.
"People do stupid things", he said. If you drive 400K miles, you are going
to experience some of them. Normally the expectation is that other people
value their lives; if you didn't believe that, you would never leave home.
But there are exceptions, so
a self-driving car, like a regularly driving human, needs to be prepared
for some of that craziness.
The video of a blind
man using
the self-driving car is the kind of story that shows where this
technology could lead, Chatham said. There are a lot of people who can't
drive for one reason or another, so a self-driving car has the potential to
change their lives. "I wish it were done now, but it's not", he said.
Chatham answered a few questions after the talk. They have done very
little work on "evasive maneuvers", he said. Everyone overestimates their
ability in that area and the advice from police and others is to just use
the brakes. There are no plans as yet to release any of the source code,
nor are there any plans for a product at this point. Three states have
"legalized" self-driving cars, California, Nevada, and Florida. It is
furthest along in California where the Department of Motor Vehicles is
currently drafting rules to govern their use.
[ I would like to thank the Linux Foundation for travel assistance to attend ELC. ]