Bad Binder: Android In-The-Wild Exploit (Project Zero)

Posted Nov 22, 2019 22:36 UTC (Fri) by roc (subscriber, #30627)
Parent article: Bad Binder: Android In-The-Wild Exploit (Project Zero)

Did anyone fixing the bug upstream know that there were security implications? It's hard to believe the answer is "no"; use-after-free by default should be treated as a security bug.

If they did, but the security implications were not explicitly stated because of the kernel dev policy of "we don't talk about security bugs because 'a bug is a bug is a bug'", then that policy is culpable here.



Bad Binder: Android In-The-Wild Exploit (Project Zero)

Posted Nov 22, 2019 23:16 UTC (Fri) by clugstj (subscriber, #4020) [Link] (12 responses)

Really? Someone finds and fixes a bug and because they didn't determine the security implications, you want to blame them for the bug? Way to discourage bug fixes.

Bad Binder: Android In-The-Wild Exploit (Project Zero)

Posted Nov 23, 2019 0:32 UTC (Sat) by roc (subscriber, #30627) [Link] (10 responses)

Fixing the bug without understanding the implications would be unfortunate, but a genuine mistake. No big deal.

Fixing the bug, understanding the security implications but deliberately not communicating them for policy reasons would be an indictment of said policy.

Bad Binder: Android In-The-Wild Exploit (Project Zero)

Posted Nov 23, 2019 1:07 UTC (Sat) by clugstj (subscriber, #4020) [Link] (9 responses)

How can it be a mistake to fix a bug without understanding all possible implications of the bug? Isn't it more important to have a fix available sooner, than to wait until all the possible misuses of the bug are understood?

I, for one, enjoy fixing bugs, but I don't wish to spend time twisting my brain to understand how some perverse individual might misuse the bug.

Bad Binder: Android In-The-Wild Exploit (Project Zero)

Posted Nov 23, 2019 2:18 UTC (Sat) by rahulsundaram (subscriber, #21946) [Link]

> How can it be a mistake to fix a bug without understanding all possible implications of the bug?

All possible implications may not be known, but the more important point here is that just because all possible implications aren't known doesn't mean that one should hide known implications, as some Linux kernel developers do.

Bad Binder: Android In-The-Wild Exploit (Project Zero)

Posted Nov 23, 2019 11:01 UTC (Sat) by roc (subscriber, #30627) [Link] (5 responses)

If a bug fix needs to be backported to released products as a matter of urgency, but no-one notices that, I think we should consider that a mistake.

This doesn't mean one needs to delay making a fix available until "all possible implications are understood".

Bad Binder: Android In-The-Wild Exploit (Project Zero)

Posted Nov 23, 2019 18:46 UTC (Sat) by tuna (guest, #44480) [Link] (4 responses)

Maybe it would be better for the makers of those devices to make sure you can use the latest versions of Linux instead of depending on backports.

Bad Binder: Android In-The-Wild Exploit (Project Zero)

Posted Nov 24, 2019 3:09 UTC (Sun) by roc (subscriber, #30627) [Link] (1 responses)

Upstream Linux kernel releases don't happen frequently enough for "update to the latest released upstream kernel" to be a viable security strategy. So at least you have to backport to the stable branches maintained by Greg K-H etc.

Bad Binder: Android In-The-Wild Exploit (Project Zero)

Posted Nov 24, 2019 8:28 UTC (Sun) by tuna (guest, #44480) [Link]

If you consider the stable Linux versions (e.g. 5.4.x) that are released between the major versions released by Torvalds, you should be getting all known stable bug fixes (including security fixes). That might be too many updates for Android devices though....

Bad Binder: Android In-The-Wild Exploit (Project Zero)

Posted Nov 25, 2019 19:04 UTC (Mon) by NYKevin (subscriber, #129325) [Link] (1 responses)

Google <a href="https://arstechnica.com/gadgets/2019/11/google-outlines-p...">recently proposed running Android on mainline kernels</a>. But they want a stable kernel ABI because Android (as realistically deployed on hardware that the typical consumer actually uses) is basically guaranteed to have a lot of binary blobs.

Bad Binder: Android In-The-Wild Exploit (Project Zero)

Posted Nov 25, 2019 22:55 UTC (Mon) by mfuzzey (subscriber, #57966) [Link]

Not sure it has many *kernel* binary blobs.
Userspace binary blobs, yes, sure, but for those a stable kernel API would be irrelevant.

Lots of out-of-tree kernel drivers too, unfortunately, but most do have source available, even if the quality, as is typical with vendor non-mainlined code, is poor.

Bad Binder: Android In-The-Wild Exploit (Project Zero)

Posted Nov 24, 2019 17:41 UTC (Sun) by ballombe (subscriber, #9523) [Link]

Alas, if one does not understand the implications of the bug, maybe one does not understand the implications of the fix.

Bad Binder: Android In-The-Wild Exploit (Project Zero)

Posted Nov 29, 2019 12:11 UTC (Fri) by jezuch (subscriber, #52988) [Link]

Well, sure, but it's a good mindset to have anyway. How can this policy be abused? How can this piece of code fail, however silly or perverse the input need be? In some contexts you'd better assume you're constantly under attack.

Bad Binder: Android In-The-Wild Exploit (Project Zero)

Posted Nov 25, 2019 18:48 UTC (Mon) by raven667 (subscriber, #5198) [Link]

>> If they did [know], but the security implications were not explicitly stated because of the kernel dev policy of "we don't talk about security bugs because 'a bug is a bug is a bug'", then that policy is culpable here.

> Really? Someone finds and fixes a bug and because they didn't determine the security implications, you want to blame them for the bug? Way to discourage bug fixes.

I think you didn't read through and understand the entire comment, or you are pretending to misunderstand; either way, your comment doesn't follow the conversation.

Bad Binder: Android In-The-Wild Exploit (Project Zero)

Posted Nov 25, 2019 9:52 UTC (Mon) by Vipketsh (guest, #134480) [Link] (3 responses)

I see this reaction quite often and it is generally a case of "the author of the fix failed to predict that the bug would be used in an exploit". All those other similar bugs not used in exploits don't seem to receive any such reactions.

No one giving a reaction like this is actually trying to come up with a procedure or definitions that would end up at the desired outcome and, I think, not by accident.

There are two ways this can go:

1. For every bug fix, some kind of full security analysis needs to take place to understand whether it could have security implications. This requirement generally ends up close to "write a proof-of-concept exploit using the bug".

Other than the obvious excruciating amount of effort required, it is also by no means fool-proof: there is no telling whether the analysis missed something or not. You want it foolproof, or close to it, otherwise you end up with people making the same statement the parent made.

Considering that bug fixes have value on their own, even without any security analysis, there is little chance of this happening.

2. Define a category of bugs that have possible security implications.

The problem here is the definition of "security implications". Consider denial-of-service issues. Some believe that they are security issues and, in certain contexts, they are right. Now consider how a simple bug is little different: it caused a user to be unable to do something (i.e. it failed to provide service). So, now all bugs are security issues?

It's practically impossible to give a definition that (i) doesn't turn every bug into a "security implication" and (ii) covers a reasonable number of bugs used in exploits.

Bad Binder: Android In-The-Wild Exploit (Project Zero)

Posted Nov 25, 2019 21:45 UTC (Mon) by roc (subscriber, #30627) [Link] (2 responses)

It is not difficult to identify classes of bugs that should be treated as security issues by default: use-after-free, buffer overflow, and wild read/write/jump bugs, for example. This binder bug was UAF which everyone should realize, by now, are usually exploitable. So my proposal would be to start with those classes, add any more that spring to mind, and have a policy of identifying them as security bugs by default (that is, unless a developer makes a strong argument that a particular bug cannot be exploited for some reason specific to that bug).
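To make the "usually exploitable" point concrete, here is a minimal user-space C sketch of the use-after-free pattern, offered purely as illustration: this is not the actual binder code, and the struct and function names are invented for the example. The idea is that a dangling pointer plus attacker-influenced reuse of the freed chunk turns a memory-management slip into control over what the program later reads or calls.

    /* Minimal, self-contained sketch of a use-after-free (illustrative only;
     * the real binder bug lived in kernel code, and struct session and its
     * fields are made up for this example). */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    struct session {
        void (*on_close)(void);    /* function pointer stored in the object */
        char  name[24];
    };

    static void legit_close(void) {
        puts("closing session");
    }

    int main(void) {
        struct session *s = malloc(sizeof *s);
        if (!s)
            return 1;
        s->on_close = legit_close;
        strcpy(s->name, "alice");

        free(s);                   /* object freed ... */

        /* ... but `s` is never cleared, so a dangling pointer survives.
         * A same-sized allocation is likely to reuse the freed chunk;
         * whoever controls its contents now controls `on_close`. */
        unsigned char *reuse = malloc(sizeof(struct session));
        if (reuse)
            memset(reuse, 0x41, sizeof(struct session));

        /* Use after free: with typical allocators this jumps through
         * attacker-chosen bytes (0x4141...), i.e. the program crashes here,
         * which is exactly why such bugs are treated as exploitable by
         * default rather than as mere reliability issues. */
        s->on_close();

        return 0;
    }

The kernel case has the same shape; the "attacker fills the reused chunk" step just goes through kernel heap grooming rather than a plain malloc().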

Your argument seems to boil down to "high-precision classification of bugs as security issues is really hard, so we shouldn't try to do any such classification", i.e. making the best the enemy of the good.

Bad Binder: Android In-The-Wild Exploit (Project Zero)

Posted Nov 26, 2019 9:33 UTC (Tue) by Vipketsh (guest, #134480) [Link] (1 responses)

Yes, high-precision classification of security bugs is hard and that was one of my points.

Once you start adding "possible security issue" labels to bugs, there is an implication that all other bugs are not security issues.

Then what happens is that some bug not bearing the label is used in an exploit, giving irrefutable evidence that it is a security issue, and immediately the developers are incompetent for not realising it before. What can the developers do then? Mark another type of bug as "possible security issue"? Repeat a couple of times and the result is that every bug bears the "possible security issue" label, which is no better than not marking any bug.

The only way to then bring value back to the "possible security issue" label is to do security analysis and it better be "high-precision" otherwise the developers risk being shamed for being incompetent the next time some exploit is published.

And that is my other original point: you either converge to "all bugs are security issues" or "only bugs shown to be exploitable are security issues". The former has no value and the latter is an infeasible amount of work.

In summary, what I'm trying to say is: "please provide arguments why applying 'possible security issue' labels wouldn't converge to one of the above".

(As an aside: interesting how you use the words "add any more that *spring to mind*" [emphasis mine] implying there is little analysis or thought needed to add a type of bug to "possible security issue". "Denial-of-service" sprang to mind as I argued before and so now all "bugs are possible security issues". See how this is difficult?)

Bad Binder: Android In-The-Wild Exploit (Project Zero)

Posted Nov 26, 2019 20:48 UTC (Tue) by roc (subscriber, #30627) [Link]

> In summary, what I'm trying to say is: "please provide arguments why applying 'possible security issue' labels woudn't converge to one of the above".

Sure. The experience of many projects is that that hasn't happened. In Firefox, for example, developers make good-faith efforts to call out bugs that are security issues, and a memory corruption bug is treated by default as exploitable, but the project certainly has not collapsed to "all bugs are security issues". In fact I can't think of any project other than the Linux kernel where developers are explicitly against trying to call out bugs as security issues. So I think the kernel community needs arguments to support the proposition that they are special and can't do what other projects do.

