
Coverity's kernel code quality study


Posted Dec 15, 2004 0:42 UTC (Wed) by MathFox (guest, #6104)
In reply to: Coverity's kernel code quality study by brouhaha
Parent article: Coverity's kernel code quality study

You should realise that any bug-checking process, automated or manual, only finds a subset of the bugs that exist in a program. There ain't no silver bullet! The Stanford/Coverity bug checker will find some of the bugs and be blind to the others.
Running an automated checker on a program will find a lot of bugs on the first run... but it will miss the bugs that hide in the checker's blind spot. You can fix the bugs the scanner finds and rerun it, but it cannot help you with the bugs it is blind to. So there will always be an unknown(!) number of bugs left even after a perfect scan.
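[Editorial aside: the blind-spot point can be illustrated with a deliberately silly toy checker — a sketch bearing no resemblance to Coverity's real analysis. It knows exactly one bug pattern, so a semantically identical bug written differently sails straight through.]

```python
import re

def toy_checker(source: str) -> list[int]:
    """Flag lines that divide by a literal zero -- the only bug pattern
    this toy checker knows about."""
    return [i for i, line in enumerate(source.splitlines(), 1)
            if re.search(r"/\s*0\b", line)]

code = """\
x = 10 / 0  # bug 1: matches the checker's pattern, so it gets flagged
d = 0
y = 10 / d  # bug 2: the same crash at runtime, but in the blind spot
"""

print(toy_checker(code))  # only line 1 is reported; line 3 goes unnoticed
```

Fixing bug 1 and rerunning gives a clean report, yet bug 2 remains — which is why a clean scan says nothing about the residual bug count.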

It makes more sense to compare, between projects, the number of bugs found on the scanner's first run, or the total number of bugs the scanner spots, than the number of *residual* bugs it still finds after several iterations of bug fixing.

The holy grail in software testing is knowing the exact number of bugs in the product. :)
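[Editorial aside: that number can at least be estimated. One classic statistical trick — not mentioned in the comment above — is capture-recapture: let two independent reviewers or tools hunt for bugs, then apply the Lincoln-Petersen estimator, which infers the total population from the size of the overlap between the two catches. A minimal sketch:]

```python
def lincoln_petersen(found_by_a: set, found_by_b: set) -> float:
    """Estimate the total number of bugs from two independent bug hunts.

    Lincoln-Petersen estimator: N ~ (n1 * n2) / overlap, where n1 and n2
    are the catch sizes and overlap is the number of bugs found by both.
    """
    overlap = len(found_by_a & found_by_b)
    if overlap == 0:
        raise ValueError("no overlap between the two hunts: cannot estimate")
    return len(found_by_a) * len(found_by_b) / overlap

# Hypothetical numbers: reviewer A found 10 bugs, reviewer B found 8,
# and 4 bugs were found by both.
print(lincoln_petersen(set(range(10)), set(range(6, 14))))  # -> 20.0
```

The estimate assumes the hunts are independent and every bug is equally catchable — assumptions that rarely hold exactly, which is part of why the "holy grail" stays out of reach.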



Coverity's kernel code quality study

Posted Dec 15, 2004 0:48 UTC (Wed) by emkey (guest, #144)

Shouldn't the holy grail be knowing exactly where all those bugs are? :-)


Copyright © 2026, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds