By Jake Edge
November 5, 2008
A company's response to security vulnerabilities is always interesting to
watch. Google has a reputation for being fairly cavalier about flaws
reported in its code;
the first security vulnerability reported
for the Android mobile phone software appears to follow that pattern.
Unfortunately for users of Android phones, though, Google's attitude and
relatively slow response might some day lead to an "in the wild" exploit
targeting the phones.
The flaw was first reported to Google on October 20 by Independent Security
Evaluators (ISE), but was not patched for the G1 phone—the only
shipping Android phone—until November 3. Details on the
vulnerability are thin, but it affects the web browser and is caused by
Google shipping an out-of-date component. Presumably a library or content
handler was shipped with a known security flaw that could lead to code
execution as the user ID under which the browser runs.
It should be noted that compromising the browser does not affect the rest
of the phone, due to Android's security architecture. Unlike on the iPhone,
separate applications run as different users, so phone functionality is
isolated from the browser, instant messaging, and other tools. On the
iPhone, compromising any one application can let the attacker make phone
calls and access private data belonging to any other application; clearly
Google made a better choice than Apple.
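What that isolation looks like from inside an application can be sketched
roughly as below. This is a minimal, hypothetical example, not code from
Android itself: the activity name is made up and the browser's data path is
an assumption, but the per-application Linux UID and the resulting
permission failure are the point.

    // Hypothetical Android activity: each installed package is assigned its
    // own Linux UID at install time, so one app cannot read another app's
    // private data directory.
    import java.io.File;

    import android.app.Activity;
    import android.os.Bundle;
    import android.os.Process;
    import android.util.Log;

    public class UidDemoActivity extends Activity {
        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);

            // The UID this application (and only this application) runs as.
            Log.i("UidDemo", "my uid = " + Process.myUid());

            // The browser's private data belongs to a different UID; an
            // attempt to read it from another (possibly compromised) app
            // fails with a permission error. The path is an assumption.
            File browserData =
                new File("/data/data/com.android.browser/databases");
            Log.i("UidDemo", "can read browser data? " + browserData.canRead());
        }
    }

So a browser compromise yields the browser's UID and its data, but not the
dialer's, the address book's, or any other application's.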
One interesting recent development, though, is the availability of an
application that provides a root-owned
telnet daemon. With that running, a simple telnet gets full access to
the phone's filesystem. From there, jailbreaking—circumventing the
restrictions placed by a carrier on applications—as well as unlocking
the phone from a specific carrier are possible. While it is easy to see
how that might be useful to the owner of an Android phone, even though it
opens the phone to rather intrusive attacks, it probably is not what
T-Mobile (and other carriers down the road) had in mind.
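The "simple telnet" really is that simple; the following rough Java sketch
stands in for a telnet client. It assumes the daemon listens on the
standard telnet port (23), performs no option negotiation, and requires no
authentication; the phone's address is made up, and a real session may need
a proper telnet client.

    // Minimal sketch of poking at a root-owned telnetd on an Android phone.
    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.io.PrintWriter;
    import java.net.Socket;

    public class TelnetPoke {
        public static void main(String[] args) throws Exception {
            String host = args.length > 0 ? args[0] : "192.168.1.50"; // phone's IP (assumed)
            try (Socket sock = new Socket(host, 23);
                 PrintWriter out = new PrintWriter(sock.getOutputStream(), true);
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(sock.getInputStream()))) {
                out.println("id");       // should report uid=0 if the daemon runs as root
                out.println("ls /data"); // off-limits to unprivileged users
                out.println("exit");
                String line;
                while ((line = in.readLine()) != null) {
                    System.out.println(line);
                }
            }
        }
    }

With root, the per-application UID boundaries described above no longer
mean much, which is precisely why the daemon is both handy and worrisome.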
Google's first
response to the vulnerability report was to whine that Charlie
Miller, who discovered the flaw, was not being "responsible" by talking
about it before a fix was ready. Miller did not disclose details, but did
report the existence of—along with some general information
about—the flaw. Google's previous reputation regarding vulnerability
reporting, as well as how it treated Miller, undoubtedly played a role in
his decision.
Perhaps the most galling thing is that the flaw was in a free software
component that had been updated prior to the Android release to, at least
in part, close that hole. It would seem that the Android team was not
paying attention to security flaws reported in the free software components
that make up the phone software stack. Hopefully, this particular
occurrence will serve as a wake-up call on that front.
Given that the fix was already known, it is a bit puzzling that it
would take two weeks for updates to become available. It was the first
update made for Android phones in the field, but one hopes the bugs in that
process were worked out long ago. Overall, Google's response leaves rather
a lot to be desired.
If Google wants security researchers to be more "responsible" in their
disclosure, it would be well served by looking at its own behavior. Taking
too much time to patch a vulnerability—especially one with a known
and presumably already tested fix—is not the way to show the security
community that it takes such bugs seriously. Whining about disclosure
rarely, if ever, goes anywhere; working in a partnership with folks who
find security flaws is much more likely to bear fruit.