I am not saying that it's inherently evil to label a security fix as such, but I am saying that it's also not inherently evil to not label a security fix.
there are very few reverted commits after a release (excluding -rc releases). if you are running a pre-release kernel then you have bigger problems, but if you stick with the released kernels (especially the -stable releases) then reverts are rare, so I don't see how that is relevant.
>> tagging some of them as having security implications strongly implies
>> that ones that are not tagged do not have security implications, and
>> that is incorrect.
> why does it imply that? because you say so? ;) in the real world out
> there, people maintaining kernels for their users are not dumb and they
> know that any security labeling can at most be best effort, not a
you can't have it both ways. either the tagging of patches as security issues is accurate (in which case you can have a secure system by applying only those patches and skipping the other fixes), or it's not (in which case you need to apply all patches to get all the possible security fixes).
> there's *zero* reduction in effort for the bad guys. if you have evidence
> otherwise, i'm all ears.
bad guys don't need a security mark to investigate a commit, but if a commit does carry a security mark they know they will find a vulnerability there; there is no chance of investigating the implications only to discover that they wasted their time.
it's like discovering an unlocked door in a high-security building, where your only way to communicate with the building managers is a public message, with an unknown delay before the owners see it.
you can say 'door X is vulnerable', in which case the bad guys can go directly to that door and get in.
or you can say 'I found an unlocked door', in which case the building owner needs to check every door (potentially finding other problems), while the bad guy would probably have to try several doors before finding the open one. and in a high-security building there would be cameras on the doors, giving the owner some chance of spotting the bad guy before they get in, even without having seen the message.
I know this isn't a perfect analogy, but it does illustrate why full public exposure of all the details of every security problem is not necessarily a good thing. with commits, the details are by definition available (the code changes are visible to everyone), so it can be a reasonable approach to not then hold up a sign that says 'target this'.