
Android's first vulnerability

By Jake Edge
November 5, 2008

A company's response to security vulnerabilities is always interesting to watch. Google has the reputation of being fairly cavalier regarding flaws reported in its code; the first security vulnerability reported for the Android mobile phone software appears to follow that pattern. Unfortunately for users of Android phones, though, Google's attitude and relatively slow response might some day lead to an "in the wild" exploit targeting the phones.

The flaw was first reported to Google on October 20 by Independent Security Evaluators (ISE), but was not patched for the G1 phone—the only shipping Android phone—until November 3. Details on the vulnerability are thin, but it affects the web browser and is caused by Google shipping an out-of-date component. Presumably a library or content handler was shipped with a known security flaw that could lead to code execution under the user ID that runs the browser.

It should be noted that compromising the browser does not affect the rest of the phone, due to Android's security architecture. Unlike on the iPhone, separate applications run as different users, so phone functionality is isolated from the browser, instant messaging, and other tools. An iPhone compromise in any application can lead to the attacker being able to make phone calls and get access to private data associated with any application; clearly Google made a better choice than Apple.

One interesting recent development, though, is the availability of an application that provides a root-owned telnet daemon. With that running, a simple telnet gets full access to the phone's filesystem. From there, jailbreaking—circumventing the restrictions placed by a carrier on applications—as well as unlocking the phone from a specific carrier are possible. While it is easy to see how that might be useful for the owner of an Android phone (though it opens the device to rather intrusive attacks), it probably is not what T-Mobile (and other carriers down the road) had in mind.

Google's first response to the vulnerability report was to whine that Charlie Miller, who discovered the flaw, was not being "responsible" by talking about it before a fix was ready. Miller did not disclose details, but did report the existence of—along with some general information about—the flaw. Google's previous reputation regarding vulnerability reporting, as well as how it treated Miller, undoubtedly played a role in his decision.

Perhaps the most galling thing is that the flaw was in a free software component that had been updated prior to the Android release to, at least in part, close that hole. It would seem that the Android team was not paying attention to security flaws reported in the free software components that make up the phone software stack. Hopefully, this particular occurrence will serve as a wake-up call on that front.

Given that the fix was already known, it is a bit puzzling that it would take two weeks for updates to become available. It was the first update made for Android phones in the field, but one hopes the bugs in that process were worked out long ago. Overall, Google's response leaves rather a lot to be desired.

If Google wants security researchers to be more "responsible" in their disclosure, it would be well served by looking at its own behavior. Taking too much time to patch a vulnerability—especially one with a known and presumably already tested fix—is not the way to show the security community that it takes such bugs seriously. Whining about disclosure rarely, if ever, goes anywhere; working in a partnership with folks who find security flaws is much more likely to bear fruit.


Index entries for this article
Security: Mobile phones



2 weeks isn't completely unreasonable.

Posted Nov 6, 2008 6:14 UTC (Thu) by dw (subscriber, #12017) [Link] (9 responses)

The reason they were using an old library version in the first place was likely stability: it was known to work with the rest of the code base.

So aside from 'merely' updating to the latest library version, they need to ensure that component still functions as expected. Perhaps this involves as little as just running a whole bunch of automated tests, or perhaps it involves more extensive testing involving real people using the stuff.

Then there is the fact that the Android team having an updated release is not the same as the carrier having the updated release, and here is where I suspect the slowdown might have been.

As pointed out this is the first update to the phone, and most probably the first real test of a chain of command running from the security team, to the developers, to the release engineers, to the testers, then handed off to the carrier (possibly involving at least one face-to-face meeting), through their testing and release process, and finally pushed to the phones.

And for a first update, especially involving all the corporate hell that dealing with a long-established Telco is likely to entail, I think 2 weeks isn't all that bad.

Android's security design is such that the vulnerability was mitigated before anyone even knew about it, and so they are afforded a little extra slack in managing the risk of leaving the hole unpatched vs. mucking up a hastily pushed release.

There are possibly other factors involved too, like, what would happen if the first ever update did screw up the phone? This might cause unending pain in the future for the company if their Telco customers don't trust the updates coming downstream, and may delay future updates even longer to ensure their worthiness.

A final note is, for the above reasons, time-to-response isn't always the most useful metric to judge a product's security by. 2 weeks of a remote root hole is a little different to 2 weeks' ability to DoS a web browser sandbox, or perhaps, spy on a user's WAP sessions.

2 weeks isn't completely unreasonable.

Posted Nov 6, 2008 7:12 UTC (Thu) by jimparis (guest, #38647) [Link] (3 responses)

The root hole is different from the web browser issue, just to clear up any confusion.

It's very bizarre that the root hole exists. A Java app (e.g. pTerminal) can spawn local applications. This is done with the real UID (and the effective and saved UIDs) set to, e.g., 10040. No big deal. But if you execute /system/bin/telnetd, it acts as if it were setuid root and runs with euid=0 -- even though it's not setuid root. It almost seems like an intentional backdoor...

2 weeks isn't completely unreasonable.

Posted Nov 6, 2008 11:48 UTC (Thu) by alex (subscriber, #1355) [Link] (2 responses)

It may well be intentional as a development tool. They likely neglected to remove it from the release images, either by accident or by design.

2 weeks isn't completely unreasonable.

Posted Nov 6, 2008 14:16 UTC (Thu) by jimparis (guest, #38647) [Link] (1 responses)

A development tool would be a setuid telnetd binary, not a magic telnetd binary that grants euid=0 without being setuid.

Someone from Google has mentioned to me via IRC that it's definitely not intentional. I still haven't managed to track down the cause though.

2 weeks isn't completely unreasonable.

Posted Nov 6, 2008 16:21 UTC (Thu) by jimparis (guest, #38647) [Link]

Nevermind, I found it -- init spawns a root shell on /dev/console that picks up all keyboard input. Hilarious!
http://android.jim.sh/index.php/ConsoleShell
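The init.rc entry behind this would look something like the following (a reconstruction based on the description above, not the verbatim shipped file):

```
# Sketch of the offending console service: init spawns a root shell
# bound to /dev/console, and on the G1 that device also receives
# keyboard input -- so anything typed ends up in a root shell.
service console /system/bin/sh
    console
```

This also explains the telnetd mystery: a telnetd launched from that console shell inherits root credentials, with no setuid bit required.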

2 weeks isn't completely unreasonable.

Posted Nov 6, 2008 16:01 UTC (Thu) by pflugstad (subscriber, #224) [Link] (4 responses)

I totally agree; that was also my first thought on reading that second-to-last paragraph.

I think FOSS people (and security folks especially) tend to be extremely cavalier about the whole testing and release process that most commercial programs go through. The prevailing opinion seems to be that you can just update to the latest version of the library and away you go. And, for most FOSS software, this is usually not a problem.

But for commercial products this is just not realistic. On a product as complex as a mobile phone, it was almost certainly undergoing release testing by the time the vulnerability was known and simply upgrading an internal library is not feasible at that point in time.

Now, I certainly agree that Google probably didn't handle this especially well, and their response probably made things worse. A better response would have been: "whoops - okay, the current release is already in the process of going out. Please hold off disclosure until we can get an update out with the fix in it." But well, everyone makes mistakes - hopefully they'll learn from this.

2 weeks isn't completely unreasonable.

Posted Nov 6, 2008 19:36 UTC (Thu) by bfields (subscriber, #19510) [Link] (1 responses)

I think proprietary software people (and embedded folks especially) tend to be extremely cavalier about the whole support infrastructure that most FOSS programs are distributed through. The prevailing opinion seems to be that you can just ship one version of the software and then go away. And, for most proprietary software, this is usually not a problem.

But for products exposed to the internet, this is just not realistic. On a product as complex as a mobile phone, it should almost certainly have had a robust security triage and upgrade-distribution system in place by the time the vulnerability was known, and simply leaving a remote exploit in an internal library should not have been an option at that point in time.

2 weeks isn't completely unreasonable.

Posted Nov 7, 2008 4:33 UTC (Fri) by pflugstad (subscriber, #224) [Link]

I'm not entirely sure if you're agreeing with me, or mocking me.

Most FOSS programs don't have a support structure at all, and they most certainly do release a product and go away (or rather, move on to the next version). The most common answer to a problem is: are you running the latest and greatest? The prevailing opinion is that if you aren't running the latest, it's rarely worth the time to debug your problem. This is not necessarily a bad thing; it's a reality for situations where there are only one or a few people developing or supporting a product.

And even if the problem is in the latest and greatest, bugs can still remain outstanding for weeks or months from the time of report until the time a fix is available. Just go find one of the articles here on LWN about how long it takes the distros to fix a hole. The only good thing here is that the distros are getting better - but they still take time to fix bugs and most of that time probably ends up being regression testing.

Commercial products are no better - but FOSS is no panacea here.

And as far as this particular product is concerned, the mobile phone companies are really getting into new territory here: their standard model is to release one (highly proprietary) product and walk away - the lifetime of a standard mobile phone these days is what, 9 months? The concept of actually releasing a patch for a phone is probably pretty foreign to them.

Could Google have done better: probably - and they probably will in the future. Same for the telco. But 2 weeks from the time of being notified of a problem to the time of a patch being available is not outrageous.

2 weeks isn't completely unreasonable.

Posted Nov 7, 2008 0:02 UTC (Fri) by ewan (guest, #5533) [Link] (1 responses)

This makes no sense. OK, it's easy to see why no one wants to ship untested code that might have bugs in it, but in what world is it better to ship tested code that is known to have bugs in it?

Probably OK is clearly better than definitely broken.

2 weeks isn't completely unreasonable.

Posted Nov 7, 2008 4:15 UTC (Fri) by pflugstad (subscriber, #224) [Link]

Software is shipped with known bugs all the time. There have even been recent articles on LWN about the kernel devs having this type of debate and what to do about it (lack of testing vs. moving forward being the main argument, IIRC - not all that different from this discussion).

In this particular case, when was the flaw discovered in the underlying library, and when was this known to Google and where was the phone in the process? Given the lead time a lot of these products require (ship software to manufacture, start making and loading them onto phones, ship phones to retailers, etc), you could easily be talking months. So it's highly likely that it was not possible to actually fix the shipping Android code.

So, how long did the update process take: 2 weeks. Which, as I was agreeing with dw, is not an unreasonable amount of time to make sure that the new version of the library doesn't break anything (I expect regression tests on a web browser can take a while, much less on an entire mobile phone) and, from there, to push it through the whole carrier/telco process.

The only real bad part is Google's response.

Android's first vulnerability

Posted Nov 8, 2008 18:01 UTC (Sat) by kov (subscriber, #7423) [Link]

I bet the beer that is inside my fridge right now that this is the same vulnerability Google Chrome was released with: http://blogs.zdnet.com/security/?p=1843.


Copyright © 2008, Eklektix, Inc.
This article may be redistributed under the terms of the Creative Commons CC BY-SA 4.0 license
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds