
Stagefrightening

By Jake Edge
July 29, 2015

The recently reported Android Stagefright vulnerability is certainly scary-sounding. But, as with many vulnerability reports these days, there is something of a lack of detailed information—though plenty of hype. Evidently more details will become available after a presentation at Black Hat USA on August 5.

The vulnerability was announced on the Zimperium Mobile Security blog on July 27. The company found a number of flaws in the user-space Android Stagefright media framework that could lead to remote code execution. The flaws have been present since Android 2.2 ("Froyo"), which was released in May 2010. That means there are roughly a billion devices potentially vulnerable to the flaws.

According to the announcement, one of the nastier vectors for exploiting the vulnerability is through Multimedia Messaging Service (MMS) messages sent to a device. In many cases, just knowing the phone number is enough for an attacker to trigger the flaw—without any user action required. In fact, in some cases, the actual message can be deleted by the attack, so the only indication of an attack is in a notification message or the received-message logs.

Based on some CyanogenMod commits by Zimperium zLabs VP of Platform Research and Exploitation Joshua J. Drake (also known as "jduck"), the flaws are mostly integer overflow and underflow bugs in Stagefright. A Threatpost article indicates that fuzzing was used to find at least some of the flaws.

The bugs have been fixed in the Android open-source code and incorporated into updates for CyanogenMod, Firefox for Android (which uses Stagefright), and the Blackphone from Silent Circle. But the vast majority of Android devices have yet to see an over-the-air update to fix the problems. That likely puts a lot of phones at risk.

In fact, there are serious questions about whether older phones will even get updates. Given that the patches are public, it would seem only a matter of time before attacks are mounted using the flaws. Between now and whenever carriers and manufacturers decide to put out an update (if they ever do), most Android phone users will be vulnerable.

There is one mitigation technique that may help: turning off auto-loading of MMS messages. In many texting applications (and Google Hangouts, which is also affected), there are settings to turn off this convenience feature. Older phones, though, tend not to have those settings. But even if auto-loading is disabled, users will have to make a difficult decision when they receive an MMS message—the safest option is to never load it, which may be less than desirable from a few angles.

Part of the reason that this vulnerability is so dangerous is that Stagefright is evidently overly privileged on many Android devices. That means that compromising Stagefright allows the attacker to do many things that would be restricted for components with lesser privileges. According to Drake in the Threatpost article, that includes monitoring communications, using the internet, and other "nasty things". He speculated that some of that access might be required to be able to support various digital rights management (DRM) schemes.

It would appear that all unpatched Android devices (2.2 and newer) are vulnerable, though it is a little hard to tell. So far at least, there doesn't seem to be a test to determine whether a given device is vulnerable. It's also difficult to be sure just how Stagefright is used on a given device. Obviously not performing multimedia playback from untrusted sources (which is not a bad policy in any case) should avoid the bugs but, as the MMS auto-loading shows, it is not always clear when and how Stagefright code is being invoked.

Even though the fixes are public, the scope of the problem is still a bit murky. Screen shots from the blog post show a proof-of-concept attack against Android 5.1.1 ("Lollipop"), but are widespread attacks against multiple Android versions possible? One would guess they are, but we probably won't know more until after Black Hat. It has become something of an annual rite of (northern hemisphere) summer to see hyped security vulnerabilities ahead of the conference. Sometimes they turn out to be worse than early indications—and sometimes the actual exploitability doesn't live up to the hype.

If this is the "big one" that researchers have been predicting for Android for some time—as it certainly appears to be—it will be interesting to see how the ecosystem reacts. How much effort will carriers and handset makers expend to patch the problems for users of years-old devices? Given that many of those devices already have serious, known flaws that have yet to be fixed, there is little reason to believe the reaction to Stagefright will be all that much different. Eventually, though, that cavalier attitude toward serious security holes in customers' phones may well come back to bite Android and its ecosystem.

Index entries for this article
Security: Android



Stagefrightening

Posted Jul 30, 2015 9:31 UTC (Thu) by ortalo (guest, #4654) [Link] (24 responses)

I wonder if such things cannot be discovered by directly using the static analysis capabilities of compilers...
I am always surprised that fuzzing is the most popular tool nowadays, even when you have access to the source code.
In some sense, fuzzing is just "instrumented luck", no? As opposed to static analysis, which would be "rigorous verification". (Yes, I am being deliberately polemical.)

Anyway, when privileged libraries get deployed to a billion devices, I would expect the supplier to require controls in the build system. (I imagine Linux is not built with gcc -w.) Brown paper bag for Google & co. IMHO.
Unless all those smartphones are really disposable devices and all their users knowingly agree...

Stagefrightening

Posted Jul 30, 2015 10:03 UTC (Thu) by ms (subscriber, #41272) [Link] (3 responses)

If I were more cynical, I might suggest that a lot of the "engineers" writing this code would consider such tools to "get in their way" and even an affront to their "programming prowess". The fact that peer review apparently didn't catch this stuff either is terrifying. But I'm sure they probably did TDD or at least have 95%+ code coverage in their tests. So that makes everything OK.

As the product of our industry becomes ever more critical in maintaining every aspect of civilisation, we have *got* to get better at *proving* correctness, or at the very least at large-scale model checking. People who don't understand tools like quickcheck or similar are going to rapidly find themselves unemployable. As the recent Jeep issue shows (cars could be rooted and controlled remotely), sooner or later there are going to be massive financial consequences to getting this stuff wrong.

Stagefrightening

Posted Jul 30, 2015 13:18 UTC (Thu) by ortalo (guest, #4654) [Link]

I am not as optimistic as you. (Seriously!)

It seems to me that massive problems have already occurred but the culprits consistently manage to get out of harm's way and, sooner or later, secure computer systems may become luxury equipment.
Not that model checking cannot be done while yachting in the Caribbean, but it would leave me with a sense of guilt. ;-)

Stagefrightening

Posted Jul 31, 2015 9:45 UTC (Fri) by jezuch (subscriber, #52988) [Link] (1 responses)

> If I were more cynical, I might make suggestions that a lot of the "engineers" writing this code would consider such tools to "get in their way" and are even an affront to their "programming prowess".

I am not one of those "engineers". I get excited every time findbugs (static analysis tool for Java) adds a new detector and immediately start scanning all of the code under my care :) The same for pedantic warning settings in the compiler.

Stagefrightening

Posted Jul 31, 2015 15:46 UTC (Fri) by ortalo (guest, #4654) [Link]

Kudos to you. And when you become the master, do not forget to impose on all your padawans the rules needed to share this excitement.
After all, programs writing programs are at the heart of our art... ;-)

Stagefrightening

Posted Jul 30, 2015 11:41 UTC (Thu) by Fowl (subscriber, #65667) [Link] (6 responses)

Many fuzzers actually are much closer to "compile" time, eg. http://lcamtuf.coredump.cx/afl/

The update model for Android is just broken. Decades of experience in hardware independent software has been thrown out the window in the name of expediency. Worse is better, eh?

Stagefrightening

Posted Jul 30, 2015 11:49 UTC (Thu) by ms (subscriber, #41272) [Link] (5 responses)

It certainly benefits them and their partners that one of the only sure-fire ways to get a security update is to have to drop $500 on a new phone. The extent to which this is unethical and amoral beggars belief.

Stagefrightening

Posted Jul 30, 2015 12:37 UTC (Thu) by magnus (subscriber, #34778) [Link] (1 responses)

Maybe the manufacturers are hoping that Android is "too big to fail" and that they then get their costs for patching all old devices (or replacing them with new ones) covered by a government contract. :)

Stagefrightening

Posted Jul 30, 2015 12:41 UTC (Thu) by ms (subscriber, #41272) [Link]

Cute idea :) Struggling to think of one western government where > 1% of the ministers know what Android is though...

Stagefrightening

Posted Jul 30, 2015 12:59 UTC (Thu) by ms (subscriber, #41272) [Link] (1 responses)

s/amoral/immoral/. Learn something new every day...

immoral or amoral? It's all about perspective

Posted Jul 31, 2015 12:38 UTC (Fri) by pr1268 (guest, #24648) [Link]

I dunno... I myself, not knowing there was a difference in the two words' meanings, looked each one up. Slightly bending the definition of amoral from "neither moral nor immoral" to being both, then amoral would fit just fine.

Granted, from the device user's perspective, intentionally neglecting to patch a vulnerability in the interest of new device sales would surely be considered immoral. From Google's senior management team, it could be considered moral.

Of course, this could also be extended to the wireless providers... their complicity in this matter is certainly moral... I mean immoral... er, amoral... just pick one! ☺

Stagefrightening

Posted Jul 30, 2015 13:24 UTC (Thu) by ortalo (guest, #4654) [Link]

It's also highly ineffective: you *may* get some corrections but you *probably* get more new vulnerabilities too.

In fact, sometimes it seems the sure-fire way to get a security improvement is to downgrade to an old dumb phone; though this does nothing to try to solve the problem of course.

Stagefrightening

Posted Jul 30, 2015 16:58 UTC (Thu) by jhoblitt (subscriber, #77733) [Link] (3 responses)

There's no shortage of static analysis tools out there. However, it's a pretty poor story to tell developers that if they just run these 10+ tools and manually pick through the massive number of false positives, their code is _less likely_ to have a major security issue.

https://en.wikipedia.org/wiki/List_of_tools_for_static_co...

The analysis tools and languages need to evolve significantly to keep the development work-flow at a reasonable level of complexity. Start holding your breath...

Stagefrightening

Posted Jul 30, 2015 18:34 UTC (Thu) by ms (subscriber, #41272) [Link] (2 responses)

Why is that a poor story? Tools are awkward and tricky to use until they get regular use and get polished by that. Remember how awful the first editors were?
The tools will get better once they are mandated for any code that is ever fed untrusted inputs. I'd rather have fewer features and slower code but code that is flawless than a billion insecure and badly implemented features. Slowly, more people may come to agree with this.

Stagefrightening

Posted Jul 31, 2015 7:56 UTC (Fri) by NAR (subscriber, #1313) [Link] (1 responses)

The problem is that static analyzers don't make the code flawless. I remember a "sales pitch" for an Erlang product, they said that practically all of the reported bugs were the kind of "software did what the developer wanted to do, not what the user wanted to do". Static analyzers may decrease the number of some kind of bugs, but definitely not eliminate all bugs. So the choice is between "slower development, less features, (hopefully) less bugs" and "faster development, more features, (probably) more bugs".

Stagefrightening

Posted Aug 3, 2015 7:51 UTC (Mon) by jezuch (subscriber, #52988) [Link]

My experience with static analyzers is that they protect you from silly mistakes. Like using instanceof on a variable that can never refer to an object of this type. Or that this field is accessed from these methods but is inconsistently synchronized. Which is great but kind of underwhelming. I don't think I've seen anything that would detect serious architectural problems.

Stagefrightening

Posted Aug 1, 2015 9:57 UTC (Sat) by error27 (subscriber, #8346) [Link] (8 responses)

The article suggests that this bug was probably an integer overflow. If you run `git log --all-match --grep=integer --grep=overflow --no-merges` on the kernel source and tally the authors of the fixes, I am among the winners (Dan Carpenter).

1 25 Author: Thomas Meyer <thomas@m3y3r.de>
2 25 Author: Dan Carpenter <dan.carpenter@oracle.com>
3 24 Author: Xi Wang <xi.wang@gmail.com>
4 9 Author: Dan Carpenter <error27@gmail.com>
5 7 Author: Guenter Roeck <linux@roeck-us.net>
6 5 Author: Wenliang Fan <fanwlexca@gmail.com>
7 5 Author: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
8 4 Author: Haogang Chen <haogangchen@gmail.com>
9 4 Author: Brian Norris <computersforpeace@gmail.com>
10 4 Author: Andrey Ryabinin <a.ryabinin@samsung.com>

Thomas Meyer used a Coccinelle script to switch calls from kzalloc() to kcalloc(); that's a good thing to do for code hardening, but those commits aren't bug fixes. Xi Wang was doing PhD work at MIT (http://pdos.csail.mit.edu/~xi/).

I used custom Smatch (http://smatch.sf.net) checks to find my integer overflow bugs. Even though the Smatch checks were useful, I haven't released them because they have too many false positives. Of course, a bad person could easily spend a week looking through false positives and feel good if he found one real bug, but for a normal dev that would be a waste of time. These things are asymmetric. :/

Basically, there are few static checkers which can find integer overflows. It's harder than finding buffer overflows. You have to know which data can be trusted and which comes from untrusted sources. You have to track two variables instead of one. You have to do cross-function analysis. There are a lot of integer overflows which we don't care about. There are some integer overflows which are safe. Also, we often use integer overflows to test for integer overflows, like this: "if (foo + bar < foo)".

Of the open-source static checkers, Smatch comes the closest, but it is still too rough and it has only been tuned for the Linux kernel. Btw, there are some runtime integer overflow detection tools. PaX has done some work in this area, as has Xi Wang.

Stagefrightening

Posted Aug 1, 2015 23:56 UTC (Sat) by mathstuf (subscriber, #69389) [Link] (4 responses)

> Also we often use integer overflows to test for integer overflows like this "if (foo + bar < foo)".

Sounds like something worth a __builtin_will_overflow function to explicitly denote such uses.

Stagefrightening

Posted Aug 2, 2015 11:14 UTC (Sun) by kleptog (subscriber, #1183) [Link] (2 responses)

In gcc you can just say: if(__builtin_add_overflow(foo, bar, &sum)) { error("Overflow"); }

See: https://gcc.gnu.org/onlinedocs/gcc/Integer-Overflow-Built...

Other compilers have similar features: https://msdn.microsoft.com/en-us/library/windows/desktop/...

It's not that these features don't exist; it's that (a) they're not standardised and (b) people aren't using them.

Stagefrightening

Posted Aug 2, 2015 13:03 UTC (Sun) by mathstuf (subscriber, #69389) [Link] (1 responses)

Yep, much better interface there since you don't waste the computation.

> It's not these these features don't exist, it's that (a) they're not standardised

Why would this matter for, at least, the kernel?

Stagefrightening

Posted Aug 3, 2015 21:08 UTC (Mon) by kleptog (subscriber, #1183) [Link]

> > It's not these these features don't exist, it's that (a) they're not standardised

>Why would this matter for, at least, the kernel?

For the kernel it doesn't matter so much, other than that it's new (GCC 5.0 new, to be precise). But you could probably whip these up in an afternoon in assembly if you wanted to; I think it's telling that this hasn't happened. You don't need compiler support to make these functions, it just makes them easier. The kernel devs could have implemented them years ago if they had wanted to.

For all user space applications not being standardised makes it hard because you'd really rather not rely on special compiler features.

Stagefrightening

Posted Aug 2, 2015 11:31 UTC (Sun) by PaXTeam (guest, #24616) [Link]

Stagefrightening

Posted Aug 2, 2015 2:59 UTC (Sun) by spender (guest, #23067) [Link]

Another big problem (maybe added under the "safe" overflow category) when dealing with integer overflows via GCC plugins is dealing with the various translation units in GCC that introduce (intentional) integer overflows not present in the source code. This has been the main source of false-positives in the past for PaX's size_overflow plugin.

-Brad

Stagefrightening

Posted Aug 2, 2015 8:43 UTC (Sun) by peter-b (guest, #66996) [Link]

> Also we often use integer overflows to test for integer overflows like this "if (foo + bar < foo)".

When doing code reviews, I usually like to see "if (foo > UINT32_MAX - bar)", for example, because it detects overflows a priori rather than a posteriori. On the other hand, it requires you to use the correct limit macro. I agree with mathstuf that a "__builtin_will_overflow()" in the compiler would be nice to have.

Stagefrightening

Posted Aug 4, 2015 9:11 UTC (Tue) by ortalo (guest, #4654) [Link]

IMHO, we need much more work like yours.
I have tried to be an advocate of this need for several years now (possibly decades), but not very successfully to say the least. The fact that it is a difficult technical topic (arcane for the general public) and that many people in (or more precisely around) computer security either dream of or claim impossible things certainly does not help.

But anyway: Coccinelle plus Smatch, even if you factor in the compiler writers' efforts, Coverity, and one-time research efforts like Astrée, that's not enough (and it spans a decade...).

Note that, while thinking about it, this is a pretty general problem wrt computer security: investment is extremely ill-managed. Look at all those ordinary users happily paying every month for antivirus software (now for their Android smartphones too), at governmental or private funding for legal studies of cyberspace or for cyberwarfare attack tools, and at all the difficulties you have actually getting decent funding for secure communication libraries, static analyzer development, compiler enhancement, etc.

We lack some public authority with the capability to objectively evaluate this state of affairs in computer security at the macro level. (Similar to what CERTs do at the level of individual vulnerabilities, or maybe the IETF on Internet issues.)

Governance is our problem now in this field. (Which is the elegant way to say that the people who have the money are inadequate.)

Corporations only understand liability

Posted Jul 30, 2015 16:04 UTC (Thu) by brugolsky (guest, #28) [Link]

The solution to the problem of embedded firmware is liability throughout the manufacturing and distribution chain. The only safe harbor should be independently buildable open source, subject to a "standard of care" that requires little more than a USB/Bluetooth/WiFi/... connection to the device and "git clone ... && cd ... && make build install push" or similar. That should include fully user-replaceable keys for all secured components. Manufacturers that are fond of putting DRM in their products should state in appropriate documentation and marketing literature that user-modified software that impairs DRM will disable specific functionality.


Copyright © 2015, Eklektix, Inc.
This article may be redistributed under the terms of the Creative Commons CC BY-SA 4.0 license
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds