
Kernel vulnerabilities: old or new?

Posted Oct 20, 2010 0:24 UTC (Wed) by martinfick (subscriber, #4455)
Parent article: Kernel vulnerabilities: old or new?

> There may be some comfort in knowing that a large proportion of 2010's known security vulnerabilities are not a product of 2010's development.

Hmm, I am not sure that is comforting. Could it not simply mean that we are doing a really poor job of finding the new bugs (and have thus not found them yet)?

This whole article, while interesting, just seems a little bit silly. It really is impossible to draw any valuable conclusions whatsoever about the rate of introduction versus the rate of fixing from this data, since it ignores all the potentially unknown/unfixed bugs. The only thing that can be concluded is that many bugs lurk unfixed for a LONG time.



Kernel vulnerabilities: old or new?

Posted Oct 20, 2010 1:49 UTC (Wed) by bfields (subscriber, #19510) (2 responses)

> Could it not simply mean that we are doing a really poor job of finding the new bugs (and have thus not found them yet)?

So, say we want to determine the age distribution of kernel bugs. Given infinite time, we could find every kernel bug, then determine the age of each bug we find, and get an exact answer to our question. But of course we can only afford to investigate a sample of the kernel bugs.

So one way to phrase your criticism is: the sample chosen (of kernel vulnerabilities found this year) is biased towards older bugs, because it takes time for kernel bugs to be found.

So the problem is to find a sample that we think is more representative.

One approach might be to look just at bugs discovered by one new technique. If we believe the technique is sufficiently novel that very few of the bugs it discovers would have been discovered without it, then we could hope that the set of bugs it discovers has the characteristics of a random sample. (And perhaps we could test the novelty of the technique by looking through previously discovered bugs to see whether any of them would have been caught by the new technique.)
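
As a rough illustration of the discovery-delay bias described above, here is a toy simulation; it is not drawn from the article's data, and every parameter in it is invented. Bugs are introduced at a perfectly constant rate with an exponentially distributed time to discovery, yet the set of bugs found in any one year skews heavily toward old bugs.

    # Toy simulation (all numbers invented) of the sampling bias discussed
    # above: bugs are introduced at a constant rate, each with a random
    # discovery delay, yet the bugs discovered in any one year skew old.
    import random

    random.seed(1)

    INTRO_PER_YEAR = 100        # hypothetical constant introduction rate
    MEAN_DELAY = 5.0            # hypothetical mean years until discovery

    bugs = []                   # (year introduced, year found)
    for year in range(2000, 2011):
        for _ in range(INTRO_PER_YEAR):
            delay = random.expovariate(1.0 / MEAN_DELAY)
            bugs.append((year, year + delay))

    # The sample the article looks at: bugs *found* in 2010.
    found_2010 = [intro for intro, found in bugs if 2010 <= found < 2011]
    ages = [2010 - intro for intro in found_2010]

    print(f"bugs found in 2010: {len(ages)}")
    print(f"mean age at discovery: {sum(ages) / len(ages):.1f} years")
    print(f"introduced in 2010 itself: {sum(a < 1 for a in ages) / len(ages):.0%}")

Even with a flat introduction rate, most of the 2010 "finds" in this toy model date from years earlier; that is exactly the bias a genuinely representative sample would need to avoid.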

Kernel vulnerabilities: old or new?

Posted Oct 20, 2010 3:27 UTC (Wed) by martinfick (subscriber, #4455) (1 response)

> So one way to phrase your criticism is: the sample chosen (of kernel vulnerabilities found this year) is biased towards older bugs, because it takes time for kernel bugs to be found.

No, my criticism is that the sample is biased towards the known bugs. It is not very useful to attempt to compare the known to the unknown. There could be an infinite number of bugs being introduced; we have no idea. Each "fix" could even introduce more bugs than it fixes. The discovery rate is unrelated to the introduction rate! Looking at a subset of the possible bugs tells you nothing conclusive about the total, except that the total includes the subset. It's like trying to determine when we will have mastered intergalactic space travel from the rate of scientific papers published during the 20th century. :)

Kernel vulnerabilities: old or new?

Posted Oct 20, 2010 8:36 UTC (Wed) by nix (subscriber, #2304)

Quite so. You'd get the same results if there were one very numerous class of security holes that almost always took at least ten years to track down, and other, less numerous classes that were normally found faster. The same results, but the rate of hole introduction would have been going *up* since 2.6.12, because the rate of kernel growth shot up after the introduction of git: we just haven't found any of those bugs yet.

(For the record, this is a rather unlikely scenario -- 'digital kuru', if you will -- but it is a valid interpretation of the data, I think.)
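
That scenario is easy to mock up. The sketch below is not a model of the real kernel and every number in it is made up: it assumes a numerous "slow" class of holes that takes roughly ten years to find, a smaller "fast" class, and an introduction rate that quadruples after 2005. The holes found in 2010 still look mostly pre-2005, while most of the post-git holes are simply still waiting to be discovered.

    # Toy model (all numbers invented) of the scenario above: a numerous
    # "slow" class of holes taking ~10 years to find, a less numerous
    # "fast" class, and an introduction rate that jumps after 2005.
    import random

    random.seed(2)

    def introduced_per_year(year):
        return 50 if year < 2005 else 200   # hypothetical post-git growth

    found_2010, still_unfound = [], 0
    for year in range(1995, 2011):
        for _ in range(introduced_per_year(year)):
            if random.random() < 0.9:       # numerous slow class
                delay = max(0.0, random.gauss(10.0, 2.0))
            else:                           # less numerous fast class
                delay = random.expovariate(1.0)
            if 2010 <= year + delay < 2011:
                found_2010.append(year)
            elif year + delay >= 2011:
                still_unfound += 1

    old = sum(y < 2005 for y in found_2010)
    print(f"holes found in 2010: {len(found_2010)} "
          f"({old / len(found_2010):.0%} introduced before 2005)")
    print(f"holes still undiscovered at the end of 2010: {still_unfound}")

The same old-skewed 2010 sample falls out of a model in which the introduction rate shot up in 2005, which is the point about not reading an introduction rate off a discovery rate.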

