
Bad Binder: Android In-The-Wild Exploit (Project Zero)

Posted Nov 25, 2019 21:45 UTC (Mon) by roc (subscriber, #30627)
In reply to: Bad Binder: Android In-The-Wild Exploit (Project Zero) by Vipketsh
Parent article: Bad Binder: Android In-The-Wild Exploit (Project Zero)

It is not difficult to identify classes of bugs that should be treated as security issues by default: use-after-free, buffer overflow, and wild read/write/jump bugs, for example. This binder bug was a use-after-free, a class that everyone should realize, by now, is usually exploitable. So my proposal would be to start with those classes, add any more that spring to mind, and have a policy of treating them as security bugs by default (that is, unless a developer makes a strong argument that a particular bug cannot be exploited for some reason specific to that bug).
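To make the class concrete, here is a minimal user-space C sketch of a use-after-free. It is not the binder bug itself; the struct and names are purely illustrative, and whether the freed chunk is actually reused depends on the allocator:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Illustrative only: a tiny user-space use-after-free, not the binder bug. */
struct session {
    void (*on_close)(void);  /* the kind of pointer an exploit hijacks */
    char name[56];
};

static void legit_close(void) { puts("closing session"); }

int main(void) {
    struct session *s = malloc(sizeof(*s));
    if (!s)
        return 1;
    s->on_close = legit_close;

    free(s);  /* the object is freed here... */

    /* ...but the stale pointer 's' is kept. An attacker who can trigger
     * an allocation of the same size may get the freed chunk back and
     * fill it with chosen bytes, overwriting the function pointer that
     * the dangling 's' still points at. */
    char *reuse = malloc(sizeof(struct session));
    if (reuse)
        memset(reuse, 'A', sizeof(struct session));

    /* Use after free: reading through 's' now observes whatever occupies
     * the freed memory. A real exploit would call s->on_close() here,
     * turning the stale pointer into a controlled jump. */
    printf("on_close is now %p\n", (void *)s->on_close);

    free(reuse);
    return 0;
}

With common allocators, same-size reuse of a just-freed chunk is routine rather than exotic, which is exactly why this class should default to "exploitable".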

Your argument seems to boil down to "high-precision classification of bugs as security issues is really hard, so we shouldn't try to do any such classification", i.e. making the best the enemy of the good.



Bad Binder: Android In-The-Wild Exploit (Project Zero)

Posted Nov 26, 2019 9:33 UTC (Tue) by Vipketsh (guest, #134480) [Link] (1 response)

Yes, high-precision classification of security bugs is hard and that was one of my points.

Once you start adding "possible security issue" labels to bugs, there is an implication that all other bugs are not security issues.

Then what happens is that some bug not bearing the label is used in an exploit, giving irrefutable evidence that it is a security issue, and immediately the developers are incompetent for not realising it before. What can the developers do then? Mark another type of bug as "possible security issue"? Repeat this a couple of times and the result is that every bug bears the "possible security issue" label, which is no better than not marking any bug.

The only way to then bring value back to the "possible security issue" label is to do security analysis, and it had better be "high-precision", otherwise the developers risk being shamed as incompetent the next time some exploit is published.

And that is my other original point: you either converge to "all bugs are security issues" or to "only bugs shown to be exploitable are security issues". The former has no value and the latter is an infeasible amount of work.

In summary, what I'm trying to say is: "please provide arguments why applying 'possible security issue' labels wouldn't converge to one of the above".

(As an aside: interesting how you use the words "add any more that *spring to mind*" [emphasis mine], implying there is little analysis or thought needed to add a type of bug to "possible security issue". "Denial-of-service" sprang to mind, as I argued before, and so now all bugs are "possible security issues". See how this is difficult?)

Bad Binder: Android In-The-Wild Exploit (Project Zero)

Posted Nov 26, 2019 20:48 UTC (Tue) by roc (subscriber, #30627) [Link]

> In summary, what I'm trying to say is: "please provide arguments why applying 'possible security issue' labels wouldn't converge to one of the above".

Sure. The experience of many projects is that this hasn't happened. In Firefox, for example, developers make good-faith efforts to call out bugs that are security issues, and a memory-corruption bug is treated by default as exploitable, but the project certainly has not collapsed to "all bugs are security issues". In fact, I can't think of any project other than the Linux kernel where developers are explicitly against trying to call out bugs as security issues. So I think the kernel community needs arguments to support the proposition that it is special and can't do what other projects do.

