Bad Binder: Android In-The-Wild Exploit (Project Zero)
Posted Nov 25, 2019 9:52 UTC (Mon) by Vipketsh (guest, #134480)
In reply to: Bad Binder: Android In-The-Wild Exploit (Project Zero) by roc
Parent article: Bad Binder: Android In-The-Wild Exploit (Project Zero)
Each time, the reaction amounts to "the author of the fix failed to
predict that the bug would be used in an exploit". All those other
similar bugs that were not used in exploits don't seem to receive any
such reaction.
No one giving a reaction like this actually tries to come up with a
procedure or definition that would lead to the desired outcome and,
I think, not by accident.
This can go one of two ways:
1, For every bug fix, some kind of full security analysis needs to
   take place to understand whether it could have security
   implications. This requirement generally ends up close to "write a
   proof-of-concept exploit using the bug".
   Beyond the obvious, excruciating amount of effort required, it is
   also by no means fool-proof: there is no telling whether the
   analysis missed something. It needs to be foolproof, or close to
   it, otherwise you end up with people making the same statement the
   parent made.
   Considering that bug fixes have value on their own, even without
   any security analysis, there is little chance of this happening.
2, Define a category of bugs that have possible security implications.
   The problem here is the definition of "security implications".
   Consider denial-of-service issues. Some believe they are security
   issues and, in certain contexts, they are right. Now consider how a
   simple bug is little different: it caused a user to be unable to do
   something (i.e. it failed to provide service). So now all bugs are
   security issues? (A sketch below makes this concrete.)
   It's practically impossible to give a definition that (i) doesn't
   turn every bug into a "security implication" and (ii) covers a
   reasonable number of the bugs used in exploits.
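
To make that boundary problem concrete, here is a purely illustrative
C sketch of my own (hypothetical code, not the binder driver and not
taken from the article): a lifetime bug that a maintainer might quite
reasonably fix as a routine crash, yet which is simultaneously a
denial of service and, potentially, an exploitable use-after-free.

    #include <stdlib.h>
    #include <stdio.h>

    /* Hypothetical example, not real kernel code.  A "session" object
     * is freed on one path while another path still holds a pointer
     * to it.  Filed as a crash bug, this is also a use-after-free: a
     * denial of service at minimum, and possibly code execution if
     * the freed memory can be reallocated under attacker control. */
    struct session {
        int fd;
        void (*on_close)(int fd);
    };

    static void log_close(int fd) { printf("closed fd %d\n", fd); }

    static void teardown(struct session *s)
    {
        free(s);                  /* the object dies here ... */
    }

    int main(void)
    {
        struct session *s = malloc(sizeof(*s));
        if (!s)
            return 1;
        s->fd = 42;
        s->on_close = log_close;

        teardown(s);
        s->on_close(s->fd);       /* ... but is still used here */
        return 0;
    }

Nothing in the code itself marks it as "security"; whether it is one
depends entirely on who can reach the freed object, which is exactly
the analysis that option (1) demands.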
Posted Nov 25, 2019 21:45 UTC (Mon) by roc (subscriber, #30627)
Your argument seems to boil down to "high-precision classification of bugs as security issues is really hard, so we shouldn't try to do any such classification", i.e. making the best the enemy of the good.
Posted Nov 26, 2019 9:33 UTC (Tue) by Vipketsh (guest, #134480)
Once you start adding "possible security issue" labels to bugs, there is an implication that all other bugs are not security issues.
Then what happens is some bug not bearing the label is used in an exploit, giving irrefutable evidence that it is a security issue, and immediately the developers are incompetent for not realising it beforehand. What can the developers do then? Mark another type of bug as "possible security issue"? Repeat a couple of times and the result is that every bug bears the "possible security issue" label, which is no better than not marking any bug.
The only way to then bring value back to the "possible security issue" label is to do security analysis, and it had better be "high-precision", otherwise the developers risk being shamed as incompetent the next time some exploit is published.
And that is my other original point: you either converge on "all bugs are security issues" or on "only bugs shown to be exploitable are security issues". The former has no value and the latter is an infeasible amount of work.
In summary, what I'm trying to say is: "please provide arguments why applying 'possible security issue' labels wouldn't converge to one of the above".
(As an aside: it is interesting how you use the words "add any more that *spring to mind*" [emphasis mine], implying that little analysis or thought is needed to add a type of bug to "possible security issue". "Denial of service" sprang to mind, as I argued before, and so now all bugs are "possible security issues". See how this is difficult?)
Posted Nov 26, 2019 20:48 UTC (Tue) by roc (subscriber, #30627)
Sure. The experience of many projects is that this hasn't happened. In Firefox, for example, developers make a good-faith effort to call out bugs that are security issues, and a memory-corruption bug is treated by default as exploitable, but the project certainly has not collapsed into "all bugs are security issues". In fact I can't think of any project other than the Linux kernel where developers are explicitly against trying to call out bugs as security issues. So I think the kernel community needs arguments to support the proposition that it is special and can't do what other projects do.
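
As an illustration of the default described here (hypothetical code of
my own, not anything from Firefox): a one-byte heap overflow that
would plausibly be filed as an ordinary crash bug but, being memory
corruption, would be triaged as potentially exploitable under such a
policy, with no proof-of-concept required.

    #include <stdlib.h>
    #include <string.h>

    /* Hypothetical example.  The buffer is sized without room for the
     * NUL terminator, so the copy writes one byte past the end of the
     * allocation.  Under a "memory corruption is exploitable until
     * proven otherwise" default, this is a security bug on sight. */
    char *dup_name(const char *name)
    {
        size_t len = strlen(name);
        char *buf = malloc(len);      /* off by one: no room for '\0' */
        if (!buf)
            return NULL;
        memcpy(buf, name, len + 1);   /* writes len + 1 bytes */
        return buf;
    }

    int main(void)
    {
        char *copy = dup_name("binder");  /* silent heap corruption */
        free(copy);
        return 0;
    }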
