that'd be because you carefully cut the part i quoted from you and responded to. let's see it again:
> because of this good guys need to apply all the patches, not try to
> cherry-pick the ones that they think are 'important'.
you're effectively saying that all patches are of equal quality. if that were the case, reverts would never be needed, yet they happen; that was my point. then there are the regressions that may or may not result in a revert, but either way they show that one cannot blindly backport every patch.
> you can't have it both ways. either the tagging of patches as security
> issues is accurate
what does 'accurate' mean here? that only security issues are marked as such, and all of them at that? who has that kind of expectation? except citizens of your (non-existent) black&white world, i mean.
beyond that, why do developers mark patches that fix, say, file system corruption? is that 'accurate' in the sense you meant above? hint: it's *not*: any kernel heap overflow or similar memory corruption (not to mention several other classes of security bugs) can very well result in such file system (meta)data corruption, yet that fact is almost never mentioned in a commit. still, developers don't stop (let alone argue about) marking file system corruption fixes as such. how do you explain that discrepancy?
> (so that you can have a secure system by only applying those patches and
> not applying the other fixes),
again: there's no such thing as a secure system. if you believe you have one, give everyone on the internet remote shell access to it and see how long it survives, despite having all your precious patches applied. so, coming back to reality, people strive to strike a balance (between accepting some risk and putting effort into backporting/applying patches), and marking security fixes helps that process.
> or it's not (in which case you need to apply all patches to get all the
> possible security fixes)
are you saying that applying all patches (whatever that means, by the way) will result in a secure system? seriously? why don't you give everyone remote shell access to your most important personal box(es)? could it be because, horribile dictu, you don't actually consider your system (with all those patches applied) secure enough to withstand attacks?
> bad guys don't need a security park to investigate a commit, but if
> things have a security mark they know that they will find
> vulnerabilities in those commits, there is no chance that they will
> investigate the implications and then discover that they wasted their
did you even read what you responded to? bad guys do *not* care one whit about explicitly marked security fixes because their value diminishes quickly (as people apply them). the real value is in bugs that either aren't fixed at all or whose fixes aren't widely known and hence don't propagate as well as they could.
last but not least, since security fixes aren't marked as such on the basis of thorough and careful examination of their impact, there's a very good chance that a given bug cannot be exploited beyond DoS. that has next to no value compared to a full privilege escalation bug. so no, marking a commit as security related doesn't automatically hand anyone a full-blown weaponized exploit; it takes a lot of time to reach that stage (or to determine why it's not possible), and you save the bad guys no time by not explicitly calling attention to security fixes.
> I know this isn't a perfect analogy, but it does illustrate why full
> public exposure of all the details of every security problem is not
> necessarily a good thing. with commits, the details are by definition
> available (the code changes are visible to everyone), so it can be a
> reasonable approach to not then hold up a sign that says 'target this'
real life analogies *never* hold up in the digital world; yours is bleeding from so many wounds that it's not even funny.
1. a high-security building has been built as such; nothing remotely similar can be said of linux. in fact, save for the L4 derivatives and coyotos, i can't think of any contemporary public effort that even tries.
2. an unlocked door is trivial to (ab)use; nothing remotely similar can be said of contemporary kernel bugs. case in point: the vmsplice exploit, which actually abused not one but two bugs, and the kernel devs, with all their knowledge of the kernel, initially managed to miss the second (and more important) one.
3. building managers' role is nothing like that of kernel developers. builders would be a closer match, except a building is eventually finished (because it's built to a plan), whereas the kernel never will be.
that's a couple of fundamental flaws, and i haven't even finished parsing your first sentence. as for what i quoted above, you have yet to explain why bad guys would target explicitly marked security fixes in the first place (i told you why they wouldn't) and why those marks would help them (i told you why they wouldn't).