
Bad Binder: Android In-The-Wild Exploit (Project Zero)

Over on the Project Zero blog, Maddie Stone has a lengthy post about a zero-day exploit that was found and fixed in the Android Binder interprocess communication mechanism. The post details the search for the problem, which was apparently being used in the wild, its fix, and how it can be exploited. This is all part of an effort to "make zero-day hard"; one of the steps the project is taking is to disseminate more information on these bugs. "Complete detailed analysis of the 0-days from the point of view of bug hunters and exploit developers and share it back with the community. Transparency and collaboration are key. We want to share detailed root cause analysis to inform developers and defenders on how to prevent these types of bugs in the future and improve detection. We hope that by publishing details about the exploit and its methodology, this can inform threat intelligence and incident responders. Overall, we want to make information that’s often kept in silos accessible to all."


Bad Binder: Android In-The-Wild Exploit (Project Zero)

Posted Nov 22, 2019 14:55 UTC (Fri) by ncultra (✭ supporter ✭, #121511) [Link]

The classics (use after free) never go out of style and have real staying power.

Bad Binder: Android In-The-Wild Exploit (Project Zero)

Posted Nov 22, 2019 22:36 UTC (Fri) by roc (subscriber, #30627) [Link] (17 responses)

Did anyone fixing the bug upstream know that there were security implications? It's hard to believe the answer is "no"; use-after-free by default should be treated as a security bug.

If they did, but the security implications were not explicitly stated because of the kernel dev policy of "we don't talk about security bugs because 'a bug is a bug is a bug'", then that policy is culpable here.

Bad Binder: Android In-The-Wild Exploit (Project Zero)

Posted Nov 22, 2019 23:16 UTC (Fri) by clugstj (subscriber, #4020) [Link] (12 responses)

Really? Someone finds and fixes a bug and because they didn't determine the security implications, you want to blame them for the bug? Way to discourage bug fixes.

Bad Binder: Android In-The-Wild Exploit (Project Zero)

Posted Nov 23, 2019 0:32 UTC (Sat) by roc (subscriber, #30627) [Link] (10 responses)

Fixing the bug without understanding the implications would be unfortunate, but a genuine mistake. No big deal.

Fixing the bug, understanding the security implications but deliberately not communicating them for policy reasons would be an indictment of said policy.

Bad Binder: Android In-The-Wild Exploit (Project Zero)

Posted Nov 23, 2019 1:07 UTC (Sat) by clugstj (subscriber, #4020) [Link] (9 responses)

How can it be a mistake to fix a bug without understanding all possible implications of the bug? Isn't it more important to have a fix available sooner than to wait until all the possible misuses of the bug are understood?

I, for one, enjoy fixing bugs, but I don't wish to spend time twisting my brain to understand how some perverse individual might misuse the bug.

Bad Binder: Android In-The-Wild Exploit (Project Zero)

Posted Nov 23, 2019 2:18 UTC (Sat) by rahulsundaram (subscriber, #21946) [Link]

> How can it be a mistake to fix a bug without understanding all possible implications of the bug?

All possible implications may not be known, but the more important point here is that not knowing all of them doesn't mean one should hide the implications that *are* known, as some Linux kernel developers do.

Bad Binder: Android In-The-Wild Exploit (Project Zero)

Posted Nov 23, 2019 11:01 UTC (Sat) by roc (subscriber, #30627) [Link] (5 responses)

If a bug fix needs to be backported to released products as a matter of urgency, but no-one notices that, I think we should consider that a mistake.

This doesn't mean one needs to delay making a fix available until "all possible implications are understood".

Bad Binder: Android In-The-Wild Exploit (Project Zero)

Posted Nov 23, 2019 18:46 UTC (Sat) by tuna (guest, #44480) [Link] (4 responses)

Maybe it would be better for the makers of those devices to make sure you can use the latest versions of Linux instead of depending on backports.

Bad Binder: Android In-The-Wild Exploit (Project Zero)

Posted Nov 24, 2019 3:09 UTC (Sun) by roc (subscriber, #30627) [Link] (1 responses)

Upstream Linux kernel releases don't happen frequently enough for "update to the latest released upstream kernel" to be a viable security strategy. So at least you have to backport to the stable branches maintained by Greg K-H etc.

Bad Binder: Android In-The-Wild Exploit (Project Zero)

Posted Nov 24, 2019 8:28 UTC (Sun) by tuna (guest, #44480) [Link]

If you consider the stable Linux versions (e.g. 5.4.x) that are released between the major versions released by Torvalds, you should be getting all known stable bug fixes (including security fixes). That might be too many updates for Android devices though....

Bad Binder: Android In-The-Wild Exploit (Project Zero)

Posted Nov 25, 2019 19:04 UTC (Mon) by NYKevin (subscriber, #129325) [Link] (1 responses)

Google <a href="https://arstechnica.com/gadgets/2019/11/google-outlines-p...">recently proposed running Android on mainline kernels</a>. But they want a stable kernel ABI because Android (as realistically deployed on hardware that the typical consumer actually uses) is basically guaranteed to have a lot of binary blobs.

Bad Binder: Android In-The-Wild Exploit (Project Zero)

Posted Nov 25, 2019 22:55 UTC (Mon) by mfuzzey (subscriber, #57966) [Link]

Not sure it has many *kernel* binary blobs.
Userspace binary blobs, yes, sure, but a stable kernel API would be irrelevant to those.

Lots of out-of-tree kernel drivers too, unfortunately, but most do have source available, even if the quality, as is typical with non-mainlined vendor code, is poor.

Bad Binder: Android In-The-Wild Exploit (Project Zero)

Posted Nov 24, 2019 17:41 UTC (Sun) by ballombe (subscriber, #9523) [Link]

Alas, if one does not understand the implications of the bug, maybe one does not understand the implications of the fix either.

Bad Binder: Android In-The-Wild Exploit (Project Zero)

Posted Nov 29, 2019 12:11 UTC (Fri) by jezuch (subscriber, #52988) [Link]

Well, sure, but it's a good mindset to have anyway. How can this policy be abused? How can this piece of code fail, however silly or perverse the input need be? In some contexts you'd better assume you're constantly under attack.

Bad Binder: Android In-The-Wild Exploit (Project Zero)

Posted Nov 25, 2019 18:48 UTC (Mon) by raven667 (subscriber, #5198) [Link]

>> If they did [know], but the security implications were not explicitly stated because of the kernel dev policy of "we don't talk about security bugs because 'a bug is a bug is a bug'", then that policy is culpable here.

> Really? Someone finds and fixes a bug and because they didn't determine the security implications, you want to blame them for the bug? Way to discourage bug fixes.

I think you didn't read through and understand the entire comment or you are pretending to misunderstand, either way your comment doesn't follow the conversation.

Bad Binder: Android In-The-Wild Exploit (Project Zero)

Posted Nov 25, 2019 9:52 UTC (Mon) by Vipketsh (guest, #134480) [Link] (3 responses)

I see this reaction quite often, and it is generally a case of "the author of the fix failed to predict that the bug would be used in an exploit". All those other similar bugs not used in exploits don't seem to receive any such reactions.

No one giving a reaction like this is actually trying to come up with a procedure or definitions that would end up at the desired outcome and, I think, not by accident.

There are two ways this can go:

1. For every bug fix, some kind of full security analysis needs to take place to understand whether it could have security implications. This requirement generally ends up close to "write a proof-of-concept exploit using the bug".

Other than the obvious excruciating amount of effort required, it is also by no means foolproof: there is no telling whether the analysis missed something. You want it foolproof, or close to it; otherwise you end up with people making the same statement the parent made.

Considering that bug fixes have value on their own, even without any security analysis, there is little chance of this happening.

2. Define a category of bugs that have possible security implications.

The problem here is the definition of "security implications". Consider denial-of-service issues. Some believe they are security issues and, in certain contexts, they are right. Now consider how a simple bug is little different: it caused a user to be unable to do something (i.e. it failed to provide service). So now all bugs are security issues?

It's practically impossible to give a definition that (i) doesn't turn every bug into a "security implication" and (ii) covers a reasonable number of the bugs used in exploits.

Bad Binder: Android In-The-Wild Exploit (Project Zero)

Posted Nov 25, 2019 21:45 UTC (Mon) by roc (subscriber, #30627) [Link] (2 responses)

It is not difficult to identify classes of bugs that should be treated as security issues by default: use-after-free, buffer overflow, and wild read/write/jump bugs, for example. This binder bug was a UAF, a class that everyone should realize by now is usually exploitable. So my proposal would be to start with those classes, add any more that spring to mind, and have a policy of identifying them as security bugs by default (that is, unless a developer makes a strong argument that a particular bug cannot be exploited for some reason specific to that bug).

Your argument seems to boil down to "high-precision classification of bugs as security issues is really hard, so we shouldn't try to do any such classification", i.e. making the best the enemy of the good.

Bad Binder: Android In-The-Wild Exploit (Project Zero)

Posted Nov 26, 2019 9:33 UTC (Tue) by Vipketsh (guest, #134480) [Link] (1 responses)

Yes, high-precision classification of security bugs is hard and that was one of my points.

Once you start adding "possible security issue" labels to bugs, there is an implication that all other bugs are not security issues.

Then what happens is that some bug not bearing the label is used in an exploit, giving irrefutable evidence that it is a security issue, and immediately the developers are incompetent for not realising it before. What can the developers do then? Mark another type of bug as "possible security issue"? Repeat a couple of times and the result is that every bug bears the "possible security issue" label, which is no better than not marking any bug.

The only way to then bring value back to the "possible security issue" label is to do security analysis and it better be "high-precision" otherwise the developers risk being shamed for being incompetent the next time some exploit is published.

And that is my other original point: you either converge to "all bugs are security issues" or "only bugs shown to be exploitable are security issues". The former has no value and the latter is an unfeasible amount of work.

In summary, what I'm trying to say is: "please provide arguments why applying 'possible security issue' labels wouldn't converge to one of the above".

(As an aside: it is interesting how you use the words "add any more that *spring to mind*" [emphasis mine], implying there is little analysis or thought needed to add a type of bug to "possible security issue". "Denial-of-service" sprang to mind, as I argued before, and so now all bugs are "possible security issues". See how difficult this is?)

Bad Binder: Android In-The-Wild Exploit (Project Zero)

Posted Nov 26, 2019 20:48 UTC (Tue) by roc (subscriber, #30627) [Link]

> In summary, what I'm trying to say is: "please provide arguments why applying 'possible security issue' labels wouldn't converge to one of the above".

Sure. The experience of many projects is that that hasn't happened. In Firefox, for example, developers make good-faith efforts to call out bugs that are security issues, and a memory corruption bug is treated by default as exploitable, but the project certainly has not collapsed to "all bugs are security issues". In fact I can't think of any project other than the Linux kernel where developers are explicitly against trying to call out bugs as security issues. So I think the kernel community needs arguments to support the proposition that they are special and can't do what other projects do.

Bad Binder: Android In-The-Wild Exploit (Project Zero)

Posted Nov 25, 2019 10:43 UTC (Mon) by error27 (subscriber, #8346) [Link] (3 responses)

The process failure was that the original patch did not have a Fixes tag. It was just marked as "Cc: stable <stable@vger.kernel.org> # 4.14". What we have learned now is that it should have been backported to at least v4.9.

A second possible way to avoid this in the future would be to have syzbot running on the 4.9 kernel.

And, of course, it would help to have more eyeballs reviewing the kernel for security issues. Syzbot blamed me for a use-after-free last week. It was a bad git blame, because the bug was a race condition which didn't always trigger, so it picked me at random. I proposed a fix (possibly incorrectly, because it was in a subsystem I had never looked at before) but no one has had a chance to look at it yet. Everyone is overworked.

Bad Binder: Android In-The-Wild Exploit (Project Zero)

Posted Nov 26, 2019 13:52 UTC (Tue) by cyphar (subscriber, #110703) [Link] (1 responses)

I agree, and there really should be a push for folks to include Fixes tags (I'm surprised there isn't a rule about Cc-ing stable without a Fixes tag being a no-no). If you went through all the effort to fix a bug and test it, a quick blame to find where the bug was introduced is usually (in my experience) not too much extra effort. To be fair, this won't always be entirely accurate (and the most accurate test is actually try to exploit it) but it's almost certainly good enough for the vast majority of such patches.

A lot of the discussion about this bug is around classifying certain types of patches as security fixes, but GregKH has consistently said that stable considers all bugs to be security bugs. In fact the patch *was* backported to stable (just not to all the trees that needed it).

Bad Binder: Android In-The-Wild Exploit (Project Zero)

Posted Dec 5, 2019 14:17 UTC (Thu) by hmh (subscriber, #3838) [Link]

I understand there have been some cases of "bad feelings" over being the target of a "Fixes" tag when the target commit only exposed an underlying issue, etc. Especially when there is already some friction between the people (or companies, or teams, or tribes, or...) involved.

I'd propose using "Canary:" [insert bikeshedding here] instead of "Fixes" in that case, though :-) As in "if the commit listed in Canary is present, or a backport thereof, you very very likely also want this commit for the whole thing to work better, no specific reason implied".

Bad Binder: Android In-The-Wild Exploit (Project Zero)

Posted May 27, 2020 8:29 UTC (Wed) by dvyukov (guest, #57055) [Link]

FWIW syzbot is running on 4.4, 4.9, 4.14 and 4.19.

4.14 and 4.19 instances are working:
https://syzkaller.appspot.com/linux-4.14
https://syzkaller.appspot.com/linux-4.19
But the mere presence of these instances does not have the effect you are assuming it will :) Nobody is looking at these reports or doing anything with them (maybe with the exception of black hats).

4.4 and 4.9 instances are not functioning because syzbot was never ever able to get a successful booting build for these kernels:
https://syzkaller.appspot.com/linux-4.4
https://syzkaller.appspot.com/linux-4.9
Here are some of the trails of what happened:
https://groups.google.com/forum/#!searchin/syzkaller-lts-...
https://groups.google.com/forum/#!searchin/syzkaller-lts-...
Somebody first needs to fix build/boot for these kernels. I have mentioned it a number of times to various people, but so far nobody has had enough interest.

Bad Binder: Android In-The-Wild Exploit (Project Zero)

Posted Nov 25, 2019 21:49 UTC (Mon) by rweikusat2 (subscriber, #117920) [Link] (2 responses)

Something which deserves to be mentioned here, since it isn't explicitly said in the text (and what is said about it suggests that the person who wrote it doesn't know the details herself): while the use-after-free is necessary to do actual reads/writes of kernel memory, another crucial part of the exploit is the ability to get a pointer to a recently freed chunk of memory. This exploits the fact that the kmalloc allocator is a power-of-2 freelist allocator, creating slab caches for memory chunks whose size is a certain power of 2. I.e.,
if a struct something has a size > 1 << (n - 1) and <= 1 << n, a pointer to a recently freed memory chunk the structure used to reside in can be obtained by causing a kmalloc allocation of something in the same size class, e.g. an array of struct iovecs (as in the exploit).

Considering this, basing caches of memory chunks on object type rather than object size would make such exploits at least more difficult, if not outright impossible.

Bad Binder: Android In-The-Wild Exploit (Project Zero)

Posted Nov 27, 2019 12:33 UTC (Wed) by mfuzzey (subscriber, #57966) [Link] (1 responses)

Yes but I suspect the number of object types that are allocated by kmalloc() is much greater than the number of size classes used.

So that solution would likely have higher overhead from maintaining more freelists, and freelists of infrequently allocated objects would probably waste space.

But I'm not a MM expert so maybe it's optimizable.

Also, introducing something like that would be a pretty huge tree-wide change due to the need to pass the object type, though I suppose it could be done bit by bit with some macro magic assistance.

Bad Binder: Android In-The-Wild Exploit (Project Zero)

Posted Nov 27, 2019 13:02 UTC (Wed) by rweikusat2 (subscriber, #117920) [Link]

The original slab allocator, as conceived by Jeff Bonwick, was a generalized way to allocate typed objects and cache already-initialized but currently unused objects for future reuse. And that's what the slab/slub/slob allocators in Linux do as well. kmalloc is just a convenience interface on top of that which reintroduces size-based aggregation of memory chunks through the back door. One drawback of this is that it makes use-after-free errors fairly easy to exploit, because memory which used to hold some trusted information may be handed out to untrusted code to use as it sees fit while other trusted code still holds a pointer to it.


Copyright © 2019, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds