Back in January, LWN reported
on a grant awarded to Coverity by the U.S. Department of Homeland
Security. Coverity (working with Stanford) would apply its static analysis
tools to the code bases of a large set of free software projects and report
on the results. The effort was designed to help provide a sense of the
quality of free software while simultaneously helping to improve that
quality.
Coverity has now announced
its first set of results in the form of a press release, a table of defect counts, and a glossy
report. The main point made in the report - and picked up on by most of
the media coverage - is that the software which makes up the "LAMP stack"
(kernel, Apache, MySQL, PostgreSQL, PHP, Perl, Python) has a significantly
lower rate of defects than the larger set of projects reviewed. From this
result, one might well conclude that the most heavily-used and
carefully-reviewed projects tend to have better code. Perhaps not a
breathtaking result, but it's still nice to know.
The projects with the lowest defect density include Ethereal, OpenVPN,
Perl, and xmms; the all-time winner is xmms, with a total of six detected
errors. At the other end of the scale, one finds Amanda, Firebird,
NetSNMP, OpenLDAP, Samba, X, and Xine. The MySQL code base turned up 136
defects (a density of 0.224 per thousand lines of code), while PostgreSQL
turned up 295 (a density of 0.362). Those results are interesting in the context
of this quote from the report:
For example, MySQL, PostgreSQL, and Berkeley DB have certified
versions of their software that contain zero Coverity defects.
We asked Coverity CTO Ben Chelf about the discrepancy between this claim
and the published results, and heard back:
We are working with the community now to determine exactly why that
is. Obviously the code changes over time so that is one potential
factor for the new issues. We hope that by opening up this mainline
access, we can assure that all _future_ versions of many of these
packages will contain zero Coverity defects.
Unfortunately, that response does not really answer the question. The
possibilities would seem to be: (1) whoever paid for the "certified
versions" has not fed the resulting fixes back into the mainline;
(2) all of the detected defects have been introduced into the code
base since the certification run was done; or (3) the tests run on the
"certified versions" were less comprehensive. None of those ideas is
entirely satisfying.
That notwithstanding, the work being done at Coverity is clearly helping to
clean up the code of the projects being surveyed. Patches for some bugs
found in the kernel are already circulating, and various
other projects are looking at the results as well.
With regard to Samba, the Coverity
folks provided us with a quote from Jeremy Allison:
Coverity has found bugs in parts of Samba that we had previously
considered completely robust and tested. It's like having a
developer on the team with an inhuman attention to detail, who
points out all the corner cases and boundary conditions you hadn't
considered when you first wrote the code. It's making a *major*
contribution to the code quality of the Samba project.
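The "corner cases" Allison mentions are often mundane-looking patterns. A hypothetical C fragment (our own construction, not taken from Samba) of the sort a checker like Coverity's flags:

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical example: the pointer is dereferenced (inside strlen)
 * before it is checked, so the NULL test below is dead code - a
 * classic check-after-use defect that static analysis catches and
 * casual review often misses. */
size_t name_length(const char *name)
{
    size_t len = strlen(name);  /* dereference happens here... */
    if (name == NULL)           /* ...so this check comes too late */
        return 0;
    return len;
}
```

A human reviewer sees the NULL check and assumes the case is handled; the tool tracks the order of operations and reports that it is not.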
Running static analysis tools on the code is a clear win for software
quality, and Coverity, by chasing down the resources to pay for this kind of
work, is helping the free software community. Even so, we could not resist
asking Mr. Chelf this question: wouldn't it help the community even more to
release the checker under a free license, so that the community could do
its own analysis and improve the tool as well? He responded:
We want to have a very strong relationship with the open source
community for a long time to come. We recognize that open source
software is a more and more critical part of many organizations'
(commercial and non-commercial) infrastructure. As we keep a
healthy finger on the heartbeat of what the community wants from
this type of technology, we feel we'll be the best ones to provide
it, regardless of form. Does that mean open source? It's too early
to say at this point.
In other words, we'll have to content ourselves with the reports from
Coverity - when Coverity sees fit to provide them - for the foreseeable
future. That is still vastly preferable to not having those reports at all.
Still, there would be a great advantage to having static analysis tools
which did not depend on any one corporation's generosity to run. The
community seems to be a bit slow in the development of these tools,
however. The "sparse" utility, written by Linus Torvalds, is regularly
used to find certain types of bugs in the kernel. It has seen little use
beyond the kernel, however, and has not developed anything close to the
capabilities of Coverity's tools. The once-promising smatch project seems to have
stalled for the last two years. Various other projects exist (Wikipedia
list), but none seem to have reached any sort of critical mass.
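For reference, sparse works from annotations in the kernel source. A minimal sketch of the idea (the stubbed macro lets the fragment build with an ordinary compiler as well; sparse itself defines __CHECKER__):

```c
/* Under sparse (__CHECKER__), __user marks pointers that live in a
 * different address space and may not be dereferenced directly;
 * with a normal compiler the annotation compiles away to nothing. */
#ifdef __CHECKER__
# define __user __attribute__((noderef, address_space(1)))
#else
# define __user
#endif

/* sparse would warn here: dereference of noderef expression */
int deref_user(int __user *p)
{
    return *p;
}
```

The checks are narrow - address-space confusion, context imbalances, and the like - which is why sparse has found little use outside the kernel.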
The free software community prides itself on the quality of its code.
Static analysis techniques will clearly be an important part of maintaining
that quality in the future. Many eyeballs do indeed shake out bugs; adding
some automated eyeballs to the mix will help find even more of them. We have
been lucky that a company which has developed some interesting static
analysis techniques has - for a few years, now - shared the
results of its analysis with parts of the free software community. We
should hope that this generosity continues for a long time, but we may also
want to think about creating some tools of our own for the day when that
generosity runs out.