The headline metrics in the Security Innovation report don't seem to match the stats we keep ourselves. It would be interesting to see their raw data, to check whether there is some error in their methods, or whether the difference is an effect of sampling in the package or vulnerability set they used.
The Red Hat Security Response Team publishes all of our raw data, so you can run your own metrics of interest against any set of vulnerabilities. For every CVE name we also give the impact rating we assigned to the issue (and if you run the metrics you'll see that we do fix the most important things more quickly).
A link to the data and a small Perl script to run metrics are available from http://blogs.redhat.com/people/archive/000201.html
But "days of risk" only tells you a portion of the story: it doesn't tell you how long a vendor knew about an issue before it was known to the public, or how long an issue was being exploited before it was reported. We're trying to be a bit more transparent on that too: if you follow the Bugzilla links in the RHSA or Fedora Core advisories we've issued this year, you'll see we have started adding "reported" date fields to the status_whiteboard section.
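The kind of metric the raw data supports can be sketched as follows. This is a minimal Python illustration, not the published Perl script, and the record layout and field names here are hypothetical, not the actual data format:

```python
from datetime import date

# Hypothetical per-CVE records; the real published data uses its own layout.
advisories = [
    {"cve": "CVE-2005-0001", "impact": "critical",
     "reported": date(2005, 1, 3),    # when the vendor learned of the issue
     "public":   date(2005, 1, 10),   # when the issue became public
     "fixed":    date(2005, 1, 12)},  # when an update was released
    {"cve": "CVE-2005-0002", "impact": "low",
     "reported": date(2005, 2, 1),
     "public":   date(2005, 2, 1),
     "fixed":    date(2005, 3, 15)},
]

for a in advisories:
    # "Days of risk": time between public disclosure and the fix.
    days_of_risk = (a["fixed"] - a["public"]).days
    # What "days of risk" misses: time the vendor knew before disclosure.
    known_before_public = (a["public"] - a["reported"]).days
    print(a["cve"], a["impact"],
          "days of risk:", days_of_risk,
          "known before public:", known_before_public)
```

Grouping the days-of-risk figures by impact rating is how you would check the claim that higher-impact issues are fixed more quickly.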
Copyright © 2017, Eklektix, Inc.