
Some notes from the Coverity survey

Back in January, LWN reported on a grant awarded to Coverity by the U.S. Department of Homeland Security. Coverity (working with Stanford) would apply its static analysis tools to the code bases of a large set of free software projects and report on the results. The effort was designed to help provide a sense of the quality of free software while simultaneously helping to improve that quality.

Coverity has now announced its first set of results in the form of a press release, a table of defect counts, and a glossy report. The main point made in the report - and picked up on by most of the media coverage - is that the software which makes up the "LAMP stack" (kernel, Apache, MySQL, PostgreSQL, PHP, Perl, Python) has a significantly lower rate of defects than the larger set of projects reviewed. From this result, one might well conclude that the most heavily-used and carefully-reviewed projects tend to have better code. Perhaps not a breathtaking result, but it's still nice to know.

The projects with the lowest defect density include Ethereal, OpenVPN, Perl, and xmms; the all-time winner is xmms, with a total of six detected errors. At the other end of the scale, one finds Amanda, Firebird, NetSNMP, OpenLDAP, Samba, X, and Xine. The MySQL code base turned up 136 defects (a density of 0.224 per thousand lines of code), while PostgreSQL has 295 (density of 0.362). Those results are interesting in the context of this quote from the report:

For example, MySQL, PostgreSQL, and Berkeley DB have certified versions of their software that contain zero Coverity defects.

We asked Coverity CTO Ben Chelf about the discrepancy between this claim and the published results, and heard back:

We are working with the community now to determine exactly why that is. Obviously the code changes over time so that is one potential factor for the new issues. We hope that by opening up this mainline access, we can assure that all _future_ versions of many of these packages will contain zero Coverity defects.

Unfortunately, that response does not really answer the question. The possibilities would seem to be: (1) whoever paid for the "certified versions" has not fed the resulting fixes back into the mainline; (2) all of the detected defects have been introduced into the code base since the certification run was done, or (3) the tests run on the "certified versions" were less comprehensive. None of those ideas is particularly reassuring.

That notwithstanding, the work being done at Coverity is clearly helping to clean up the code of the projects being surveyed. Patches for some bugs found in the kernel are already circulating, and various other projects are looking at the results as well. With regard to Samba, the Coverity folks provided us with a quote from Jeremy Allison:

Coverity has found bugs in parts of Samba that we had previously considered completely robust and tested. It's like having a developer on the team with an inhuman attention to detail, who points out all the corner cases and boundary conditions you hadn't considered when you first wrote the code. It's making a *major* contribution to the code quality of the Samba project.

Running static analysis tools on the code is a clear win for software quality and Coverity, by chasing down the resources to pay for this kind of work, is helping the free software community. Even so, we could not resist asking Mr. Chelf this question: wouldn't it help the community even more to release the checker under a free license, so that the community could do its own analysis and improve the tool as well? He responded:

We want to have a very strong relationship with the open source community for a long time to come. We recognize that open source software is a more and more critical part of many organizations' (commercial and non-commercial) infrastructure. As we keep a healthy finger on the heartbeat of what the community wants from this type of technology, we feel we'll be the best ones to provide it, regardless of form. Does that mean open source? It's too early to say at this point.

In other words, we'll have to content ourselves with the reports from Coverity - when Coverity sees fit to provide them - for the foreseeable future. It is vastly preferable to not having those reports.

Still, there would be a great advantage to having static analysis tools which did not depend on any one corporation's generosity to run. The community seems to be a bit slow in the development of these tools, however. The "sparse" utility, written by Linus Torvalds, is regularly used to find certain types of bugs in the kernel. It has seen little use beyond the kernel, however, and has not developed anything close to the capabilities of Coverity's tools. The once-promising smatch project seems to have stalled for the last two years. Various other projects exist (Wikipedia has a list), but none seem to have reached any sort of critical mass.

The free software community prides itself on the quality of its code. Static analysis techniques will clearly be an important part of maintaining that quality in the future. Many eyeballs do indeed shake out bugs; adding some automated eyeballs to the mix will help find even more of them. We have been lucky that a company which has developed some interesting static analysis techniques has - for a few years, now - shared the results of its analysis with parts of the free software community. We should hope that this generosity continues for a long time, but we may also want to think about creating some tools of our own for the day when that generosity runs out.



Something to buy...

Posted Mar 8, 2006 21:01 UTC (Wed) by ebirdie (guest, #512) [Link]

If I were a prosperous software vendor facing the FOSS community as a competitor to take down, I would buy Coverity, get my own stuff fixed, feed the public with reports serving my purposes and, were I bad enough, feed some crackers first-hand knowledge to produce showcases. The battle on the security front would be over.

No! That would be way too easy to figure out, and too transparent for an anti-competitive measure.

Whatever the motives above, I think there is now a certain lack of free tools to produce equivalent first-hand knowledge through code auditing.

We need an RMS, a Linus, or Tridge or Alan or ...

Posted Mar 8, 2006 22:41 UTC (Wed) by AnswerGuy (guest, #1256) [Link] (7 responses)

We, the free software community, need to find a Linus, or Tridge, or Alan Cox, or someone with those considerable talents who also develops the passion to nurture something like sparse or smatch to maturity.

Also I see Coverity as a picture perfect case of the pressures on the FSF to devise a new version of the GPL. Coverity has apparently followed the letter of the law in their use of gcc to create their xgcc/MetaL static checker. By providing only a service they are never obligated to release their work even though they benefit, extensively and commercially, from the derivation ... from all the work that GCC creators and maintainers have poured into the base software for over a decade.

(This is not to slight Coverity. They have poured their own sweat and tears into their product for a few years --- and they've complied with the license so far as I know. They have a need to make their money, too. But clearly the world will be a better place when we all have access to a top notch static testing tool. I wish I could say that people would use saner languages like Python and Perl for most of their work --- but they won't and we'll need a core of C for the foreseeable future).

JimD

We need an RMS, a Linus, or Tridge or Alan or ...

Posted Mar 9, 2006 4:43 UTC (Thu) by kirkengaard (guest, #15022) [Link]

AnswerGuy raises a good point.

" Running static analysis tools on the code is a clear win for software quality and Coverity, by chasing down the resources to pay for this kind of work, is helping the free software community. Even so, we could not resist asking Mr. Chelf this question: wouldn't it help the community even more to release the checker under a free license, so that the community could do its own analysis and improve the tool as well? He responded:

We want to have a very strong relationship with the open source community for a long time to come. We recognize that open source software is a more and more critical part of many organizations' (commercial and non-commercial) infrastructure. As we keep a healthy finger on the heartbeat of what the community wants from this type of technology, we feel we'll be the best ones to provide it, regardless of form. Does that mean open source? It's too early to say at this point. "

Notice the evasion.
Q: Would the community benefit from having the source to the checker? Wouldn't you also benefit from that community access?
A: We like our market, we feel we have a good hold on this market, and we feel no compulsion to invite competition in this market. We'll release when we feel we have to.

corbet, I know I've sounded naysaying on this once already, it just feels icky to me. I get that feeling every time somebody has to reach for the letter of the law in defense of questionably-ideal actions. And they are giving back to the community, and maybe it is a perfectly good open-source business model, all above-board by license terms. I just have this feeling, in the back of my IANAL brain, that says to me "the free-on-release concept is designed to prevent proprietary appropriation of GPL code on grand scale, and here's a nice grand scale, where people are importing their code to this 'in-house' use of unreleased, obviously release-quality proprietary extensions of GPL code, to benefit as end-users of that code. What would RMS say?"

Short: it feels like a Wrong Thing, as far as foreclosure is concerned, because it feels as though the users of the program do not have the freedoms guaranteed by the GPL; they only have the freedoms allowed by Coverity, the operator of the program. If Coverity is running the Checker of their own volition and then offering the results for sale, the results are a work product separate from the use of the program. If someone is paying them to operate the program over a specific set of code, then the "user" gets fuzzier to me. Maybe not in the absolute legal language sense, but certainly in the ideal.

We need an RMS, a Linus, or Tridge or Alan or ...

Posted Mar 9, 2006 6:00 UTC (Thu) by bos (guest, #6154) [Link] (1 responses)

When Stanford spawned Coverity, Coverity dropped gcc altogether and switched to the EDG front end, a commercial C/C++ parser that is basically the only thing you can buy for this sort of work (or if you're writing a new compiler).

http://www.stanford.edu/~engler/spin05-coverity.pdf

So your comments about them taking advantage of the GPL are not strictly accurate; they apply to what some of the Coverity people *used* to do back when they were still in a research group at Stanford.

My apologies ...

Posted Mar 9, 2006 6:47 UTC (Thu) by AnswerGuy (guest, #1256) [Link]

I was unaware that they had made that switch. (Though the fact remains that they did prototype their work on the backs of free software ... and they never released any of the source fruits of *that* work).

Still, it's useful that they provide the service, occasional and patchy as those audits have been.

It still underscores the need for us to find a talented, motivated source for this in the FOSS community.

Perl, sane?

Posted Mar 9, 2006 8:09 UTC (Thu) by ncm (guest, #165) [Link] (1 responses)

How strange it is to find the words "Perl" and "saner" in such close proximity, with no apparent sense of irony. I doubt that Larry himself could have managed it.

Perl, sane?

Posted Mar 9, 2006 13:20 UTC (Thu) by bronson (subscriber, #4806) [Link]

Perl is totally sane. It's the universe that's insane...

We need an RMS, a Linus, or Tridge or Alan or ...

Posted Mar 9, 2006 9:40 UTC (Thu) by Zelatrix (guest, #5163) [Link] (1 responses)

I wish I could say that people would use saner languages like Python and Perl for most of their work

However nice those languages are (well, Python is, anyway), their semantics and dynamic nature make them extremely difficult to statically check, compared to C.

The ultimate language for static-checkability is of course SPARK (see this Slashdot article), which allows you to prove the absence of run-time errors in your program relatively easily and lets you go all the way up to full formal proof of correctness should you so desire (disclaimer: I work for Praxis; the submitter of that article, as far as I know, has no links with the company).

We need an RMS, a Linus, or Tridge or Alan or ...

Posted Mar 12, 2006 10:24 UTC (Sun) by Xman (guest, #10620) [Link]

How does SPARK's support for static analysis compare to say ML and similar functional languages?

Python Beat Perl by a Tiny Margin; TCL Loses against even PHP

Posted Mar 8, 2006 22:52 UTC (Wed) by AnswerGuy (guest, #1256) [Link] (11 responses)

Somehow I'm not surprised that Python and Perl were close, and that Python beat out Perl in this test by a small margin. Likewise I'm not at all surprised that Tcl lost by quite a margin to PHP, which itself paled in comparison to the other two P's of the LAMP crew.

Ruby is conspicuously absent.

The worst omission is OpenSSH!!!

OpenSSH

Posted Mar 8, 2006 22:58 UTC (Wed) by corbet (editor, #1) [Link] (6 responses)

I didn't get this into the article, but I asked them about conspicuously missing packages. Both OpenSSH and KDE are evidently on the list for future attention, along with quite a few others.

Non-Free competitors

Posted Mar 9, 2006 0:00 UTC (Thu) by ewan (guest, #5533) [Link] (4 responses)

Also notable by their absence are the non-Free alternatives. This study was done for the US Department of Homeland Security, and I would imagine that the US government would have access to the source for much of the proprietary software they use, and would be just as interested in the results for that as for Free software.

The alternative is that the Free tools will have a 'checked by the DHS' stamp of approval (as it were) and the proprietary ones won't.

Non-Free competitors

Posted Mar 9, 2006 0:35 UTC (Thu) by drj826 (guest, #7352) [Link] (3 responses)

I wonder if the DHS is expressly prohibited from objectively evaluating the quality of proprietary code and posting the results? Hmmm...

Non-Free competitors

Posted Mar 9, 2006 1:11 UTC (Thu) by corbet (editor, #1) [Link]

Could be, but a simpler possibility exists: purveyors of proprietary software are being encouraged to pay for evaluations of their wares, and have the right to control the dissemination of the results.

Non-Free competitors

Posted Mar 9, 2006 1:19 UTC (Thu) by JoeBuck (subscriber, #2330) [Link] (1 responses)

Coverity works on source code, and proprietary software vendors are not in the habit of giving out their source code. A number of proprietary software companies use Coverity in-house, but they don't tend to let the world know how many bugs are found (though hopefully they fix the bugs).

Non-Free competitors

Posted Mar 9, 2006 10:15 UTC (Thu) by ewan (guest, #5533) [Link]

They hand out their source when the alternative is to be excluded from government contracts on the grounds that uninspected code can't be trusted. Microsoft, at least, has been doing it for some time.

OpenSSH

Posted Mar 9, 2006 12:08 UTC (Thu) by pointwood (guest, #2814) [Link]

Did you get an answer? OpenSSH would be quite interesting and since I prefer KDE over GNOME, I'm sorry to see it being left out.

I'm surprised

Posted Mar 9, 2006 7:26 UTC (Thu) by davidw (guest, #947) [Link] (3 responses)

I am actually quite surprised that Tcl didn't do better. It has a pretty rigorous set of tests, and an active group of very good developers. If anything, I've always thought it moved a bit too slowly.

Here's a good article about Tcl, by the way:

"Tcl the Misunderstood":
http://antirez.com/articoli/tclmisunderstood.html

I'm surprised

Posted Mar 9, 2006 9:24 UTC (Thu) by macc (guest, #510) [Link] (1 responses)

I picked one random report (tcl:69) and it was a false positive. Nothing more is known at the moment.

They're working with beta code!

Posted Mar 9, 2006 20:10 UTC (Thu) by davidw (guest, #947) [Link]

Hrm. It seems SourceForge's archives are not caught up yet, but it appears that, at least with Tcl, they were testing the beta release. Of course, that's better for Tcl, but less a reflection of the defect rate in solid, released code...

Tcl

Posted Mar 14, 2006 17:15 UTC (Tue) by rwmj (subscriber, #5474) [Link]

Funnily enough at university I did a mix of static and dynamic analysis on Tcl's C source code (in fact on the full Tcl/Tk), and found it to be of very high quality. Of course my tools didn't compare at all to Coverity - nevertheless I was unable to find any problems at all automatically in Tcl/Tk, although I did in some other (now obsolete) packages - eg. microemacs.

Rich.

No need for generosity

Posted Mar 9, 2006 8:09 UTC (Thu) by NAR (subscriber, #1313) [Link]

Still, there would be a great advantage to having static analysis tools which did not depend on any one corporation's generosity to run.

I don't think it depends on generosity. I'd think some good hard cash would also make the Coverity tool run on the free software project of your choice.

Bye,NAR

Some notes from the Coverity survey

Posted Mar 9, 2006 8:52 UTC (Thu) by k8to (guest, #15413) [Link] (11 responses)

It is interesting that when I worked for a now-defunct competitor to Coverity, we too wanted to provide useful information to the open source world, but failed to get any attention.

The method my employer attempted to use was to simply provide the tools for use by various organizations like Red Hat, SuSE, etc. They weren't open source, but we wanted to at least put the tools in their hands to run on whatever projects they felt needed attention, as well as provide a public scanning facility for open source projects to use.

Strangely, we were unable to get anyone to reply with interest.

Obviously, having free and open tools to improve software quality is an ideal I believe in, but I think many software developers and software development organizations will not adopt new tools -- especially process improvement tools -- without being pushed. So I believe that having coverity summarily distribute reports is an essential piece of drawing attention to this kind of technology.

Some notes from the Coverity survey

Posted Mar 9, 2006 10:33 UTC (Thu) by nelljerram (subscriber, #12005) [Link] (2 responses)

The method my employer attempted to use was to simply provide the tools for use by various organizations like Red Hat, SuSE, etc. They weren't open source, but we wanted to at least put the tools in their hands to run on whatever projects they felt needed attention, as well as provide a public scanning facility for open source projects to use.

Strangely, we were unable to get anyone to reply with interest.

The only thing strange here is that some people simply refuse to get the point despite an abundance of material everywhere explaining it. The point is freedom.

(Ever heard of a project called BitKeeper?)

Some notes from the Coverity survey

Posted Mar 10, 2006 1:32 UTC (Fri) by k8to (guest, #15413) [Link] (1 responses)

This is a reasonable guess as to what happened, but not accurate.

No objections were raised over the closed source nature of the tools. No objections were raised over lack of control. Through my years of working in different parts of the software industry I had direct contacts to many of the right people to consider allocating time to improving security in open tools.

What I got was not objections but utter silence. No counteroffers, no comments, nothing.

The bulk of the staff who work at Linux organizations simply do not, or did not, understand the important benefits of static security analysis on tools they are very much interested in selling as secure and reliable.

This is why I say that a summary report comparing hard numbers is much more likely to get attention on this important field.

Some notes from the Coverity survey

Posted Mar 11, 2006 8:40 UTC (Sat) by LetterRip (guest, #6816) [Link]

Well, it appears Coverity analysed first, then reported the results. Also, I believe that for the kernel, when they did the first version (a couple of years ago), they provided some fixes too. That is why they have gotten the attention they have - they proved their value first.

LetterRip

Some notes from the Coverity survey

Posted Mar 9, 2006 13:33 UTC (Thu) by bronson (subscriber, #4806) [Link] (4 responses)

It sounds like you were asking Red Hat et al to donate their engineering time to test your closed-source program. And you're surprised that they were not interested??

Some notes from the Coverity survey

Posted Mar 9, 2006 16:21 UTC (Thu) by sepreece (guest, #19270) [Link] (3 responses)

What in his posting suggested that the purpose of this was to test their tools? I'd guess they were probably interested in getting some good press and some PR attention as a result of making their tools available, but the projects would have gotten some benefit (reduction in latent problems) out of the exercise, too.

On the other hand, running the tools does take some time, analyzing the results does take some time, and many of the problems reported by most tools are "possible" problems rather than operational problems. And it's less satisfying than spending the same time writing code for new functionality.

And, of course, we don't know who they offered them to or how hard they tried to get projects' attention, etc.

Some notes from the Coverity survey

Posted Mar 10, 2006 1:44 UTC (Fri) by k8to (guest, #15413) [Link]

Bingo. My company would have loved to do shared press releases with Red Hat or SuSE or whatever, saying how the tool helped them and also ran well on their systems and so on. There were even more cross-marketing fits than I should go into here, which made it a better idea than it sounds for both parties.

And yes, of course using static analysis tools takes time and energy, which is why we wanted to make them available to open source projects as much as possible while still remaining saleable tools. They didn't need testing; they already worked and had been selling for some time. It was a matter of making them as available as possible. Certainly the 6 engineers who made the product and were continuing to improve it didn't have time to review hundreds of open source projects, but you'd think SuSE, Red Hat, and other organizations would have an interest in eliminating security problems in their systems. And if it generated enough interest in the field that the open source world cloned the functionality, I honestly don't think any of us would have minded.

As it turned out, the run ended sooner than expected, but all we wanted was a few years of income from a few years of work, and that's what we got. Making it all free software might have been nice, but it would have required a much larger investment of cash up front.

It's possible that I'm totally wrong about this, but in my years working for Linux companies, I heard many rumours about it being nearly impossible to get Red Hat's attention, and my experiences match. SuSE I had direct contacts with, and I called and emailed many appropriate parties, but got no real reply at all. I would say we spent about 4 months periodically attempting to contact various parties before giving up on the idea. Novell central was definitely interested but couldn't communicate internally within their organization.

Anyway, details details, my takeaway is that you're not going to get a development organization's interest by saying "here is a tool you can use to find bugs". You have to say "here are lots of bugs in your software we already found".

Some notes from the Coverity survey

Posted Mar 10, 2006 1:48 UTC (Fri) by k8to (guest, #15413) [Link] (1 responses)

Oh, regarding possible vs. operational, the sexy part of the tools was a more or less zero-false-positives track record. It was very good at identifying real problems in an obvious way. That's what made it so efficient vs. older approaches.

Some notes from the Coverity survey

Posted Mar 16, 2006 12:31 UTC (Thu) by jpetso (subscriber, #36230) [Link]

So, what has become of your tool? I guess when your firm became defunct it was sent into the void without a thought of open-sourcing it, now that no money can be made from it anymore?

Some notes from the Coverity survey

Posted Mar 10, 2006 9:15 UTC (Fri) by ortalo (guest, #4654) [Link]

Which tool or company was it?
If you have real experience in this field, do you think it would be easy to provide some general guidelines for an open-source project targeted at building a real-world checker?
IMHO static analysis subjects are pretty difficult, and the main blockage currently for an open tool is getting a good specification or design, not necessarily the development energy for actually building the tool (or maybe my brain is simply getting too old to work on such theoretical subjects).

Some notes from the Coverity survey

Posted Mar 10, 2006 11:10 UTC (Fri) by BenR (guest, #30999) [Link] (1 responses)

I think it's just simple free-software geek psychology: give us a tool and we just see more work to do, but give us bug reports and we're desperate to fix them to keep our geek pride.

Some notes from the Coverity survey

Posted Mar 10, 2006 12:58 UTC (Fri) by emj (guest, #14307) [Link]

Well, you are right, but it's really a lot easier if you get the bug reports instead of having to produce them yourself. It would take some time to set up the tool to run on the source code archives, and to understand the tool.

No, this way is a lot better; it's just sad for k8to that his company didn't do this.

PostgreSQL defects

Posted Mar 9, 2006 12:40 UTC (Thu) by alvherre (subscriber, #18730) [Link] (7 responses)

Unfortunately, that response does not really answer the question. The possibilities would seem to be: (1) whoever paid for the "certified versions" has not fed the resulting fixes back into the mainline; (2) all of the detected defects have been introduced into the code base since the certification run was done, or (3) the tests run on the "certified versions" were less comprehensive. None of those ideas are particularly reassuring.

There's a fourth possibility: the new test runs are not nearly as polished as the paid runs were and thus have a lot of false positives.

I'm a PostgreSQL developer. It also surprised me initially that the count was so high, for there was a previous run some months ago (sponsored by EnterpriseDB). A fellow hacker, Neil Conway, was given access to the results and propagated the fixes to the open code. So I had a look at the current reports to see how we could fare so badly.

Turns out that the vast majority of the reports are probably false positives. In the PostgreSQL code, it is quite widespread to use a "bail out" function after checking for unexpected or erroneous conditions, so the code following it is never executed. However, the checker took no notice of that; I expect every single appearance of said pattern may be reported as a bug, when it's not.

Apparently, the EnterpriseDB guys had their run "configured" so that these false positives did not appear in the report. They are working on getting the configuration propagated to this new run.

Mr. Chelf is already aware of this, and the issue is being worked on. With some luck we should have a better report soon.

PostgreSQL defects

Posted Mar 9, 2006 16:58 UTC (Thu) by madscientist (subscriber, #16861) [Link] (5 responses)

Coverity does a "deep dive" into the code, and it recognizes all standard functions that never return (abort(), exit(), exec(), etc.) So, if your "bail out" function eventually invoked one of those known functions, it would have been marked as never returning. The vast majority of such functions do in fact call a standard never-returns function ultimately (how else do you get out?) and so this isn't generally a problem.

The only ways this could be a problem are (a) your code invokes some way of bailing that isn't a recognized standard "never returns" function, or (b) the code that invokes the never-returns function was not included in the Coverity database (maybe it was part of a base library that wasn't checked by Coverity). As you mention, notes can be added to Coverity to have it recognize other "never returns" functions.

I wonder if the EDG frontend groks GCC's __attribute__(()) settings... it would be nice if it did. Do you mark your functions as never returning with these?

PostgreSQL defects

Posted Mar 9, 2006 17:15 UTC (Thu) by corbet (editor, #1) [Link]

From the discussion on the postgres list, I gather that there is a "returns sometimes" function which may have confused the situation a bit.

PostgreSQL defects

Posted Mar 9, 2006 17:15 UTC (Thu) by alvherre (subscriber, #18730) [Link] (3 responses)

Actually, our "bail out" function only does a longjmp, cleans up, and continues execution somewhere else (having aborted the current transaction, etc). It would be a pretty bad database server if it called exit() because of a problem!

Actually it is slightly more complex, because the same function is invoked if you want to issue a warning or harmless notice (which continues normal execution after sending the message text to the client), a local error condition (which sends the message and longjmps), a harder error (which kills the current process), or a very critical problem (which closes all connections and restarts the database server). (These correspond to ereport(NOTICE), ereport(ERROR), ereport(FATAL) and ereport(PANIC), respectively.)

The fix that's currently being discussed involves using some #ifdef that would only be activated if the static checker tool is in use (rather than the regular compiler), which would conditionally call exit() at the end of emitting the error message. The static checker can easily detect this and act accordingly. See http://archives.postgresql.org/pgsql-hackers/2006-03/msg0...

We don't use __attribute__(()). Not sure if it would be useful with our setup.

PostgreSQL defects

Posted Mar 9, 2006 17:22 UTC (Thu) by nix (subscriber, #2304) [Link] (2 responses)

Well, you certainly could use __attribute__((noreturn)) in this situation: it doesn't mean `function never returns anywhere'; just `function never returns to its caller', so functions that always longjmp() are candidates.

It's really easy to make __attribute__'s invisible to non-GCC compilers:

#ifndef __GNUC__
#define __attribute__(x)
#endif

__attribute__((noreturn)) at least has been supported for donkey's years, so you don't have to worry about versions of GCC that support __attribute__ but not noreturn.

PostgreSQL defects

Posted Mar 9, 2006 20:05 UTC (Thu) by kleptog (subscriber, #1183) [Link] (1 responses)

You miss the point. You can't mark the function "noreturn" because it's a *sometimes returns* function, depending on the arguments. If it's an ERROR or greater, it doesn't return; if less, it does.

Actually, even this isn't quite true. Under some situations WARNINGs don't return either, but that's not relevant to the static checking under discussion.

The reason Coverity misses it is probably that ereport() is not a plain function but a macro which expands to something like:

push_new_error_on_error_stack(error_level);
set_optional_error_values();
act_on_errors();

See how the error level is passed to the first function but the third function is the one that doesn't return depending on the error level. It would take a pretty clever static checker to pick this up. The proposed solution is to add an explicit:

if( error_level >= ERROR ) exit(0);

at the end of the macro. It will never get executed but it helps the static checker out.

PostgreSQL defects

Posted Mar 12, 2006 1:02 UTC (Sun) by nix (subscriber, #2304) [Link]

Ah, yes, agreed; that makes a lot of sense.

PostgreSQL defects

Posted Mar 9, 2006 17:12 UTC (Thu) by bkw1a (subscriber, #4101) [Link]

Another possibility is that Coverity has improved its own code since it tested the "certified" versions. It would be interesting to re-run the tests on the certified versions using the current Coverity code.

Sourceforge-like approach to semi-FOSS-friendly software

Posted Mar 15, 2006 3:58 UTC (Wed) by pm101 (guest, #3011) [Link]

An intermediate approach would be to have Coverity provide a free service where one uploads code to their web site, and they run the bug checker on it, with two caveats:
* The user agrees the code is under one of n free software licenses
* Coverity posts the code on the web site for some period of time

This would prevent commercial use, while permitting free software use.


Copyright © 2006, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds