Some notes from the Coverity survey
Coverity has now announced its first set of results in the form of a press release, a table of defect counts, and a glossy report. The main point made in the report - and picked up on by most of the media coverage - is that the software which makes up the "LAMP stack" (kernel, Apache, MySQL, PostgreSQL, PHP, Perl, Python) has a significantly lower rate of defects than the larger set of projects reviewed. From this result, one might well conclude that the most heavily-used and carefully-reviewed projects tend to have better code. Perhaps not a breathtaking result, but it's still nice to know.
The projects with the lowest defect density include Ethereal, OpenVPN, Perl, and xmms; the all-time winner is xmms, with a total of six detected errors. At the other end of the scale, one finds Amanda, Firebird, NetSNMP, OpenLDAP, Samba, X, and Xine. The MySQL code base turned up 136 defects (a density of 0.224 per thousand lines of code), while PostgreSQL has 295 (density of 0.362). (Defect density is simply defects divided by thousands of lines of code scanned; working backward, those figures imply roughly 600,000 lines of MySQL code and roughly 800,000 lines of PostgreSQL.) Those results are interesting in the context of this quote from the report:
We asked Coverity CTO Ben Chelf about the discrepancy between this claim and the published results, and heard back:
Unfortunately, that response does not really answer the question. The possibilities would seem to be: (1) whoever paid for the "certified versions" has not fed the resulting fixes back into the mainline; (2) all of the detected defects have been introduced into the code base since the certification run was done, or (3) the tests run on the "certified versions" were less comprehensive. None of those ideas is particularly reassuring.
That notwithstanding, the work being done at Coverity is clearly helping to clean up the code of the projects being surveyed. Patches for some bugs found in the kernel are already circulating, and various other projects are looking at the results as well. With regard to Samba, the Coverity folks provided us with a quote from Jeremy Allison:
Running static analysis tools on the code is a clear win for software quality and Coverity, by chasing down the resources to pay for this kind of work, is helping the free software community. Even so, we could not resist asking Mr. Chelf this question: wouldn't it help the community even more to release the checker under a free license, so that the community could do its own analysis and improve the tool as well? He responded:

    We want to have a very strong relationship with the open source community for a long time to come. We recognize that open source software is a more and more critical part of many organizations' (commercial and non-commercial) infrastructure. As we keep a healthy finger on the heartbeat of what the community wants from this type of technology, we feel we'll be the best ones to provide it, regardless of form. Does that mean open source? It's too early to say at this point.

In other words, we'll have to content ourselves with the reports from Coverity - when Coverity sees fit to provide them - for the foreseeable future. That is still vastly preferable to not having those reports at all.
Still, there would be a great advantage to having static analysis tools which did not depend on any one corporation's generosity to run. The community seems to be a bit slow in the development of these tools, however. The "sparse" utility, written by Linus Torvalds, is regularly used to find certain types of bugs in the kernel. It has seen little use beyond the kernel, however, and has not developed anything close to the capabilities of Coverity's tools. The once-promising smatch project seems to have stalled for the last two years. Various other projects exist (Wikipedia has a list), but none seem to have reached any sort of critical mass.
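For those who have not seen it in action, here is a rough sketch of the kind of annotation sparse checks, loosely modeled on the kernel's __user markings; the kernel's real macros are more involved, so treat this as an illustration only:

/* sparse defines __CHECKER__; a plain compiler sees the attribute vanish. */
#ifdef __CHECKER__
# define __user __attribute__((noderef, address_space(1)))
#else
# define __user
#endif

int get_value(int __user *p)
{
        /* sparse warns here ("dereference of noderef expression"):
           user-space pointers must go through copy_from_user()
           rather than being dereferenced directly. */
        return *p;
}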
The free software community prides itself on the quality of its code. Static analysis techniques will clearly be an important part of maintaining that quality in the future. Many eyeballs do indeed shake out bugs; adding some automated eyeballs to the mix will help find even more of them. We have been lucky that a company which has developed some interesting static analysis techniques has - for a few years, now - shared the results of its analysis with parts of the free software community. We should hope that this generosity continues for a long time, but we may also want to think about creating some tools of our own for the day when that generosity runs out.
Something to buy...
Posted Mar 8, 2006 21:01 UTC (Wed) by ebirdie (guest, #512):

If I were a prosperous software vendor facing the FOSS community as a competitor to take down, I would buy Coverity, get my own stuff fixed, feed the public reports that serve my purpose and, were I bad enough, feed some crackers first-line knowledge to produce showcases. The battle on the security front would be over.

No! That would be way too easy to figure out, and too transparent, for an anti-competitive measure.

Whatever the above motives were, I think there is now a certain lack of free tools for producing equivalent first-line knowledge by code auditing.
We need an RMS, a Linus, or Tridge or Alan or ...
Posted Mar 8, 2006 22:41 UTC (Wed) by AnswerGuy (guest, #1256):

We, the free software community, need to find a Linus, or Tridge, or Alan Cox, or someone with those considerable talents, who also develops the passion to nurture something like sparse, or smatch, or any of those to maturity.

Also I see Coverity as a picture-perfect case of the pressures on the FSF to devise a new version of the GPL. Coverity has apparently followed the letter of the law in their use of gcc to create their xgcc/MetaL static checker. By providing only a service they are never obligated to release their work even though they benefit, extensively and commercially, from the derivation ... from all the work that GCC creators and maintainers have poured into the base software for over a decade.

(This is not to slight Coverity. They have poured their own sweat and tears into their product for a few years --- and they've complied with the license so far as I know. They have a need to make their money, too. But clearly the world will be a better place when we all have access to a top-notch static testing tool. I wish I could say that people would use saner languages like Python and Perl for most of their work --- but they won't, and we'll need a core of C for the foreseeable future.)

JimD
Posted Mar 9, 2006 4:43 UTC (Thu) by kirkengaard (guest, #15022):

AnswerGuy brings a good point.

"Running static analysis tools on the code is a clear win for software quality and Coverity, by chasing down the resources to pay for this kind of work, is helping the free software community. Even so, we could not resist asking Mr. Chelf this question: wouldn't it help the community even more to release the checker under a free license, so that the community could do its own analysis and improve the tool as well? He responded:

We want to have a very strong relationship with the open source community for a long time to come. We recognize that open source software is a more and more critical part of many organizations' (commercial and non-commercial) infrastructure. As we keep a healthy finger on the heartbeat of what the community wants from this type of technology, we feel we'll be the best ones to provide it, regardless of form. Does that mean open source? It's too early to say at this point."

Notice the evasion.

Q: Would the community benefit from having the source to the checker? Wouldn't you also benefit from that community access?
A: We like our market, we feel we have a good hold on this market, and we feel no compulsion to invite competition in this market. We'll release when we feel we have to.

corbet, I know I've sounded naysaying on this once already; it just feels icky to me. I get that feeling every time somebody has to reach for the letter of the law in defense of questionably-ideal actions. And they are giving back to the community, and maybe it is a perfectly good open-source business model, all above-board by license terms. I just have this feeling, in the back of my IANAL brain, that says to me: "the free-on-release concept is designed to prevent proprietary appropriation of GPL code on a grand scale, and here's a nice grand scale, where people are importing their code to this 'in-house' use of unreleased, obviously release-quality proprietary extensions of GPL code, to benefit as end-users of that code. What would RMS say?"

Short: it feels like a Wrong Thing, as far as foreclosure is concerned, because it feels as though the users of the program do not have the freedoms guaranteed by the GPL; they only have the freedoms allowed by Coverity, the operator of the program. If Coverity is running the checker of their own volition and then offering the results for sale, the results are a work product separate from the use of the program. If someone is paying them to operate the program over a specific set of code, then the "user" gets fuzzier to me. Maybe not in the absolute legal-language sense, but certainly in the ideal.
Posted Mar 9, 2006 6:00 UTC (Thu) by bos (guest, #6154):

When Stanford spawned Coverity, Coverity dropped gcc altogether and switched to the EDG front end, a commercial C/C++ parser that is basically the only thing you can buy for this sort of work (or if you're writing a new compiler).

http://www.stanford.edu/~engler/spin05-coverity.pdf

So your comments about them taking advantage of the GPL are not strictly accurate; they apply to what some of the Coverity people *used* to do back when they were still in a research group at Stanford.
My apologies ...
Posted Mar 9, 2006 6:47 UTC (Thu) by AnswerGuy (guest, #1256):

I was unaware that they had made that switch. (Though the fact remains that they did prototype their work on the backs of free software ... and they never released any of the source fruits of *that* work.)

Still, it's useful that they provide the service, occasional and patchy as those audits have been.

It still underscores the need for us to find a talented, motivated source for this in the FOSS community.
Perl, sane?
Posted Mar 9, 2006 8:09 UTC (Thu) by ncm (guest, #165):

How strange it is to find the words "Perl" and "saner" in such close proximity, with no apparent sense of irony. I doubt that Larry himself could have managed it.
Posted Mar 9, 2006 13:20 UTC (Thu) by bronson (subscriber, #4806):

Perl is totally sane. It's the universe that's insane...
Posted Mar 9, 2006 9:40 UTC (Thu) by Zelatrix (guest, #5163):

"I wish I could say that people would use saner languages like Python and Perl for most of their work"

However nice those languages are (well, Python is anyway), their semantics and dynamic nature make them extremely difficult to statically check, compared to C.

The ultimate language for static-checkability is of course SPARK (see this Slashdot article), which allows you to prove the absence of run-time errors in your program relatively easily and lets you go all the way up to full formal proof of correctness should you so desire (disclaimer: I work for Praxis; the submitter of that article, as far as I know, has no links with the company).
Posted Mar 12, 2006 10:24 UTC (Sun) by Xman (guest, #10620):

How does SPARK's support for static analysis compare to, say, ML and similar functional languages?
Python Beat Perl by a Tiny Margin; TCL Loses against even PHP
Posted Mar 8, 2006 22:52 UTC (Wed) by AnswerGuy (guest, #1256):

Somehow I'm not surprised that Python and Perl were close, and that Python beat out Perl in this test by a small margin. Likewise I'm not at all surprised that TCL lost by quite a margin to PHP, which itself paled in comparison to the other two P's of the LAMP crew.

Ruby is conspicuously absent.

The worst omission is OpenSSH!!!
OpenSSH
Posted Mar 8, 2006 22:58 UTC (Wed) by corbet (editor, #1):

I didn't get this into the article, but I asked them about conspicuously missing packages. Both OpenSSH and KDE are evidently on the list for future attention, along with quite a few others.
Non-Free competitors
Posted Mar 9, 2006 0:00 UTC (Thu) by ewan (guest, #5533):

Also notable by their absence are the non-Free alternatives. This study was done for the US Department of Homeland Security, and I would imagine that the US government would have access to the source for much of the proprietary software they use, and be just as interested in the results for that as for Free software.

The alternative is that the Free tools will have a "checked by the DHS" stamp of approval (as it were) and the proprietary ones won't.
Posted Mar 9, 2006 0:35 UTC (Thu) by drj826 (guest, #7352):

I wonder if the DHS is expressly prohibited from objectively evaluating the quality of proprietary code and posting the results? Hmmm...
Posted Mar 9, 2006 1:11 UTC (Thu) by corbet (editor, #1):

Could be, but a simpler possibility exists: purveyors of proprietary software are being encouraged to pay for evaluations of their wares, and have the right to control the dissemination of the results.
Posted Mar 9, 2006 1:19 UTC (Thu) by JoeBuck (subscriber, #2330):

Coverity works on source code, and proprietary software vendors are not in the habit of giving out their source code. A number of proprietary software companies use Coverity in-house, but they don't tend to let the world know how many bugs are found (though hopefully they fix the bugs).
Posted Mar 9, 2006 10:15 UTC (Thu) by ewan (guest, #5533):

They hand out their source when the alternative is to be excluded from government contracts on grounds that uninspected code can't be trusted. Microsoft, at least, have been doing it for some time.
Posted Mar 9, 2006 12:08 UTC (Thu) by pointwood (guest, #2814):

Did you get an answer? OpenSSH would be quite interesting, and since I prefer KDE over GNOME, I'm sorry to see it being left out.
I'm surprised
Posted Mar 9, 2006 7:26 UTC (Thu) by davidw (guest, #947):

I am actually quite surprised that Tcl didn't do better. It has a pretty rigorous set of tests, and an active group of very good developers. If anything, I've always thought it moved a bit too slowly.

Here's a good article about Tcl, by the way: "Tcl the Misunderstood": http://antirez.com/articoli/tclmisunderstood.html
Posted Mar 9, 2006 9:24 UTC (Thu) by macc (guest, #510):

I picked one random report (tcl:69) and it was a false positive. Nothing more is known at the moment.
They're working with beta code!
Posted Mar 9, 2006 20:10 UTC (Thu) by davidw (guest, #947):

Hrm. Seems sourceforge's archives are not caught up yet, but it appears that, at least with Tcl, they were testing it with the beta release. Of course, that's better for Tcl, but less a reflection of the defect rate in solid, released code...
Tcl
Posted Mar 14, 2006 17:15 UTC (Tue) by rwmj (subscriber, #5474):

Funnily enough, at university I did a mix of static and dynamic analysis on Tcl's C source code (in fact on the full Tcl/Tk), and found it to be of very high quality. Of course my tools didn't compare at all to Coverity; nevertheless I was unable to find any problems at all automatically in Tcl/Tk, although I did in some other (now obsolete) packages, e.g. microemacs.

Rich.
No need for generosity
Posted Mar 9, 2006 8:09 UTC (Thu) by NAR (subscriber, #1313):

"Still, there would be a great advantage to having static analysis tools which did not depend on any one corporation's generosity to run."

I don't think it depends on generosity. I'd think some good hard cash would also make the Coverity tool run on the free software project of your choice.
Posted Mar 9, 2006 8:52 UTC (Thu) by k8to (guest, #15413):

It is interesting that when I worked for a now-defunct competitor to Coverity, we too wanted to provide useful information to the open source world, but failed to get any attention.

The method my employer attempted to use was to simply provide the tools for use by various organizations like Red Hat, SuSE, etc. They weren't open source, but we wanted to at least put the tools in their hands to run on whatever projects they felt needed attention, as well as provide a public scanning facility for open source projects to use.

Strangely, we were unable to get anyone to reply with interest.

Obviously, having free and open tools to improve software quality is an ideal I believe in, but I think many software developers and software development organizations will not adopt new tools -- especially process improvement tools -- without being pushed. So I believe that having Coverity summarily distribute reports is an essential piece of drawing attention to this kind of technology.
Posted Mar 9, 2006 10:33 UTC (Thu) by nelljerram (subscriber, #12005):

"Strangely, we were unable to get anyone to reply with interest."

The only thing strange here is that some people simply refuse to get the point despite an abundance of material everywhere explaining it. The point is freedom.

(Ever heard of a project called BitKeeper?)
Posted Mar 10, 2006 1:32 UTC (Fri) by k8to (guest, #15413):

This is a reasonable guess as to what happened, but not accurate. No objections were raised over the closed-source nature of the tools. No objections were raised over lack of control. Through my years of working in different parts of the software industry I had direct contacts with many of the right people to consider allocating time to improving security in open tools.

What I got was not objections but utter silence. No counteroffers, no comments, nothing.

The bulk of the staff who work at Linux organizations simply do not, or did not, understand the important benefits of static security analysis on tools they are very much interested in selling as secure and reliable.

This is why I say that a summary report comparing hard numbers is much more likely to get attention in this important field.
Posted Mar 11, 2006 8:40 UTC (Sat) by LetterRip (guest, #6816):

Well, it appears Coverity analysed first, then reported the results. Also, I believe that for the kernel, when they did the first version (a couple of years ago), they provided some fixes too. So that is why they have gotten the attention they have - they proved their value first.

LetterRip
Posted Mar 9, 2006 13:33 UTC (Thu) by bronson (subscriber, #4806):

"The method my employer attempted to use was to simply provide the tools for use by various organizations like Red Hat, SuSE, etc."

It sounds like you were asking Red Hat et al to donate their engineering time to test your closed-source program. And you're surprised that they were not interested??
Posted Mar 9, 2006 16:21 UTC (Thu) by sepreece (guest, #19270):

What in his posting suggested that the purpose of this was to test their tools? I'd guess they were probably interested in getting some good press and some PR attention as a result of making their tools available, but the projects would have gotten some benefit (reduction in latent problems) out of the exercise, too.

On the other hand, running the tools does take some time, analyzing the results does take some time, and many of the problems reported by most tools are "possible" problems rather than operational problems. And it's less satisfying than spending the same time writing code for new functionality.

And, of course, we don't know who they offered them to or how hard they tried to get the projects' attention, etc.
Posted Mar 10, 2006 1:44 UTC (Fri) by k8to (guest, #15413):

And yes, of course using static analysis tools takes time and energy, which is why we wanted to make them available to open source projects as much as possible, while still remaining saleable tools. They didn't need testing; they already worked and had already been being sold for some time. It was a matter of making them as available as possible. Certainly the six engineers who made the product and were continuing to improve it didn't have time to review hundreds of open source projects, but you'd think SuSE, Red Hat, and other organizations would have an interest in eliminating security problems in their systems. And if it had generated enough interest in the field that the open source world cloned the functionality, I honestly don't think any of us would have minded.

As it turned out, the run ended shorter than expected, but all we wanted was a few years of income from a few years of work, and that's what we got. Making it all free software might have been nice, but it would have required a much larger investment of cash up front.

It's possible that I'm totally wrong about this, but in my years working for Linux companies, I heard many rumours about it being nearly impossible to get Red Hat's attention, and my experiences match. SuSE I had direct contacts with, and I called and emailed many appropriate parties, but got no real reply at all. I would say we spent about four months periodically attempting to contact various parties before giving up on the idea. Novell central was definitely interested but couldn't communicate internally within their organization.

Anyway, details, details; my takeaway is that you're not going to get a development organization's interest by saying "here is a tool you can use to find bugs". You have to say "here are lots of bugs in your software we already found".
Posted Mar 10, 2006 1:48 UTC (Fri) by k8to (guest, #15413):

Bingo. My company would have loved to do shared press releases with Red Hat or SuSE or whatever, saying how the tool helped them and also ran well on their systems and so on. There were even more cross-marketing fits than I should go into here that made it a better idea than it sounds for both parties.

Oh, regarding "possible" vs. operational: the sexy part of the tools was a more or less zero false-positives track record. It was very good about identifying real problems in an obvious way. That's what made it so efficient vs. older approaches.
Posted Mar 16, 2006 12:31 UTC (Thu) by jpetso (subscriber, #36230):

So, what has become of your tool? I guess when your firm became defunct it was sent into the void, without a thought of open sourcing it now that no money can be made out of it anymore?
Posted Mar 10, 2006 9:15 UTC (Fri) by ortalo (guest, #4654):

Which tool or company was it?

If you have true experience in this field, do you think it would be easy to provide some general guidelines for an open-source project targeted at building a real-world checker?

IMHO, static analysis subjects are pretty difficult, and the main blockage currently for an open tool is getting a good specification or design, not necessarily the development energy for actually building the tool (or maybe my brain is simply getting too old to work on such theoretical subjects).
Posted Mar 10, 2006 11:10 UTC (Fri) by BenR (guest, #30999):

I think it's just simple free-software geek psychology: give us a tool and we just see more work to do, but give us bug reports and we're desperate to fix them to keep our geek pride.
Posted Mar 10, 2006 12:58 UTC (Fri) by emj (guest, #14307):

Well, you are right, but it's really a lot easier if you get the bug reports instead of having to produce them yourself. It would take some time to set up the tool to run on the source code archives, and to understand the tool. No, this way is a lot better; just sad for k8to that his company didn't do this.
PostgreSQL defects
Posted Mar 9, 2006 12:40 UTC (Thu) by alvherre (subscriber, #18730):

"Unfortunately, that response does not really answer the question. The possibilities would seem to be: (1) whoever paid for the 'certified versions' has not fed the resulting fixes back into the mainline; (2) all of the detected defects have been introduced into the code base since the certification run was done, or (3) the tests run on the 'certified versions' were less comprehensive. None of those ideas is particularly reassuring."

There's a fourth possibility: the new test runs are not nearly as polished as the paid runs were, and thus have a lot of false positives.

I'm a PostgreSQL developer. It also surprised me initially that the count was so high, for there was a previous run some months ago (sponsored by EnterpriseDB). A fellow hacker, Neil Conway, was given access to the results and propagated the fixes to the open code. So I had a look at the current reports to see how we could fare so badly.

It turns out that the vast majority of the reports are probably false positives. In the PostgreSQL code, it is quite widespread to use a "bail out" function after checking for unexpected or erroneous conditions, so the code following it is never executed. However, the checker took no notice of that; I expect every single appearance of said pattern may be reported as a bug when it's not.

Apparently, the EnterpriseDB guys had their run "configured" so that these false positives did not appear in the report. They are working on getting the configuration propagated to this new run.

Mr. Chelf is already aware of this, and the issue is being worked on. With some luck we should have a better report soon.
Posted Mar 9, 2006 16:58 UTC (Thu) by madscientist (subscriber, #16861):

Coverity does a "deep dive" into the code, and it recognizes all standard functions that never return (abort(), exit(), exec(), etc.). So, if your "bail out" function eventually invoked one of those known functions, it would have been marked as never returning. The vast majority of such functions do in fact ultimately call a standard never-returns function (how else do you get out?), so this isn't generally a problem.

The only ways this could be a problem are (a) your code invokes some way of bailing that isn't a recognized standard "never returns" function, or (b) the code that invokes the never-returns function was not included in the Coverity database (maybe it was part of a base library that wasn't checked by Coverity). As you mention, notes can be added to Coverity to have it recognize other "never returns" functions.

I wonder if the EDG frontend groks GCC's __attribute__(()) settings... it would be nice if it did. Do you mark your functions as never returning with these?
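[To make the distinction concrete, here is a hypothetical pair of bail-out functions - invented names, not PostgreSQL code. The first is easy for a checker to model, because every path ends in a standard never-returns call; the second escapes via longjmp(), which is much harder to see.]

#include <setjmp.h>
#include <stdio.h>
#include <stdlib.h>

extern jmp_buf recovery_point;

/* Every path ends in exit(), so a checker that knows the standard
   never-returns functions can mark hard_bail() as never returning. */
void hard_bail(const char *msg)
{
        fprintf(stderr, "fatal: %s\n", msg);
        exit(1);
}

/* This one escapes via longjmp() instead; without modeling the
   jmp_buf, a checker may assume the code after a call to it is
   reachable, producing false positives. */
void soft_bail(const char *msg)
{
        fprintf(stderr, "error: %s\n", msg);
        longjmp(recovery_point, 1);
}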
Posted Mar 9, 2006 17:15 UTC (Thu) by corbet (editor, #1):

From the discussion on the postgres list, I gather that there is a "returns sometimes" function which may have confused the situation a bit.
Posted Mar 9, 2006 17:15 UTC (Thu) by alvherre (subscriber, #18730):

Actually, our "bail out" function only does a longjmp, cleans up, and continues execution somewhere else (having aborted the current transaction, etc). It would be a pretty bad database server if it called exit() because of a problem!

It's actually slightly more complex, because the same function is invoked if you want to issue a warning or harmless notice (which continues normal execution after sending the message text to the client), a local error condition (which sends the message and longjmps), a harder error (which kills the current process), or a very critical problem (which closes all connections and restarts the database server). (These correspond to ereport(NOTICE), ereport(ERROR), ereport(FATAL) and ereport(PANIC), respectively.)

The fix that's currently being discussed involves using some #ifdef that would only be activated if the static checker tool is in use (rather than the regular compiler), which would conditionally call exit() at the end of emitting the error message. The static checker can easily detect this and act accordingly. See http://archives.postgresql.org/pgsql-hackers/2006-03/msg0...

We don't use __attribute__(()). Not sure if it would be useful with our setup.
Posted Mar 9, 2006 17:22 UTC (Thu) by nix (subscriber, #2304):

Well, you certainly could use __attribute__((noreturn)) in this situation: it doesn't mean "function never returns anywhere", just "function never returns to its caller", so functions that always longjmp() are candidates.

It's really easy to make __attribute__'s invisible to non-GCC compilers:

#ifndef __GNUC__
#define __attribute__(x)
#endif

__attribute__((noreturn)) at least has been supported for donkey's years, so you don't have to worry about versions of GCC that support __attribute__ but not noreturn.
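[Putting the two together, the suggestion would look roughly like this sketch, with invented names: non-GCC compilers see the attribute vanish, while GCC - and any checker that understands the attribute - is told that the function never comes back to its caller, even though execution continues elsewhere after the longjmp().]

#include <setjmp.h>

#ifndef __GNUC__
#define __attribute__(x)
#endif

extern jmp_buf recovery_point;

/* Never returns to its caller; it always longjmps to the recovery
   point, so noreturn is accurate. */
void bail_out(void) __attribute__((noreturn));

void bail_out(void)
{
        longjmp(recovery_point, 1);
}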
Posted Mar 9, 2006 20:05 UTC (Thu) by kleptog (subscriber, #1183):

You miss the point. You can't mark the function "noreturn" because it's a *sometimes returns* function, depending on the arguments. If it's an ERROR or greater, it doesn't return; if it's less, it does return.

Actually, even this isn't quite true. Under some situations WARNINGs don't return either, but that's not relevant to the static checking under discussion.

The reason Coverity misses it is probably that ereport() is not just a function but a macro which expands to something like:

push_new_error_on_error_stack(error_level);
set_optional_error_values();
act_on_errors();

See how the error level is passed to the first function, but the third function is the one that doesn't return, depending on the error level. It would take a pretty clever static checker to pick this up. The proposed solution is to add an explicit:

if( error_level >= ERROR ) exit(0);

at the end of the macro. It will never get executed, but it helps the static checker out.
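[Assembled into one place, the pattern under discussion might look something like this. All helper names here are invented for illustration; PostgreSQL's real ereport() machinery is considerably more elaborate (see the pgsql-hackers link above).]

#include <stdlib.h>

enum error_level { NOTICE, WARNING, ERROR, FATAL, PANIC };

void push_new_error_on_error_stack(enum error_level level);
void set_optional_error_values(void);
void act_on_errors(void);   /* longjmps or kills the process for ERROR and above */

#define ereport(elevel) \
        do { \
                push_new_error_on_error_stack(elevel); \
                set_optional_error_values(); \
                act_on_errors(); \
                /* dead code in practice, but visible to the checker: */ \
                if ((elevel) >= ERROR) \
                        exit(0); \
        } while (0)

[A checker that expands the macro can now see that ereport(ERROR) and above never fall through, while ereport(NOTICE) does.]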
Posted Mar 12, 2006 1:02 UTC (Sun) by nix (subscriber, #2304):

Ah, yes, agreed; that makes a lot of sense.
Posted Mar 9, 2006 17:12 UTC (Thu) by bkw1a (subscriber, #4101):

Another possibility is that Coverity has improved its own code since it tested the "certified" versions. It would be interesting to re-run the tests on the certified versions, using the current Coverity code.
Sourceforge-like approach to semi-FOSS-friendly software
Posted Mar 15, 2006 3:58 UTC (Wed) by pm101 (guest, #3011):

An intermediate approach would be to have Coverity provide a free service where one uploads code to their web site, and they run the bug checker on it, with two caveats:

* The user agrees the code is under one of n free software licenses
* Coverity posts the code on the web site for some period of time

This would prevent commercial use, while permitting free software use.