Lessons from Log4j
Posted Dec 16, 2021 20:48 UTC (Thu) by rahulsundaram (subscriber, #21946)
In reply to: Lessons from Log4j by HenrikH
Parent article: Lessons from Log4j
Far less adoption would have made a large difference.
Posted Dec 17, 2021 9:04 UTC (Fri) by eru (subscriber, #2753)
On an alternate timeline with no open source, small shared components like log4j would not exist (never mind trivialities like the infamous left-pad). Licensing them would be too much bother, so purchases would be done only for larger pieces of software. Instead, companies would use the facilities in the OS or language runtime they use, and if not sufficient, roll their own.
In general, the alternate timeline would have less software, and more expensive software. It is hard to say whether it would be of higher quality.
Posted Dec 28, 2021 15:40 UTC (Tue) by jd (guest, #26381)
I found a report from 2013 which states: "Code quality for open source software continues to mirror code quality for proprietary software: For the second consecutive year, code quality for both open source and proprietary software code was better than the generally accepted industry standard defect density for good quality software of 1.0 (defect density, defects per 1,000 lines of code, is a commonly used measurement for software quality). Open source software averaged a defect density of .69, while proprietary code (a random sampling of code developed by enterprise users) averaged .68."
Open source did better, according to another report: "In fact, the most recent report (2013) found open source software written in C and C++ to have a lower defect density than proprietary code. The average defect density across projects of all sizes was 0.59 for open source, and 0.72 for proprietary software."
Yet other reports give other figures. "Defect density (defects per 1,000 lines of code) of open source code and commercial code has continued to improve since 2013: When comparing overall defect density numbers between 2013 and 2014, the defect density of both open source code and commercial code has continued to improve. Open source code defect density improved from 0.66 in 2013 to 0.61 in 2014, while commercial code defect density improved from 0.77 to 0.76."
Bear in mind that all three reports are basing their 2013 figures on the same 2013 analysis by Coverity and all three manage to give different numbers. Since the link to Coverity's report no longer works, I cannot tell you if any of them are correct.
Nonetheless, two of the three give better defect density figures to open source software, with the third essentially equal. We can certainly use that to say that the commercial software examined was no better and may have been worse. Of course, a lot can happen in eight years, almost nine, and I can't find anything later than 2014.
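For reference, the metric all three reports argue over is simple to compute: defects per 1,000 lines of code (KLOC). A minimal sketch, with illustrative numbers that are not taken from the Coverity reports:

```python
def defect_density(defects: int, lines_of_code: int) -> float:
    """Return defect density: known defects per 1,000 lines of code (KLOC)."""
    return defects / (lines_of_code / 1000)

# Hypothetical codebase: 500,000 lines with 345 known defects.
# 345 / 500 KLOC = 0.69, the open source average quoted above.
print(round(defect_density(345, 500_000), 2))  # 0.69
```

Note that the metric says nothing about defect severity, so two codebases with identical densities can carry very different security risk.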
So if we can't rely on tech articles, maybe we can look at methodology. Power of Ten and the CERT secure coding standards would seem logical places to start. I do not personally know anyone who adheres to either, and I've worked for a decent selection of companies. However, anecdotal evidence isn't worth much, and it could be that everywhere else on the planet does adhere. My sample may be a decent selection, but it's not really random and it's certainly not verifiable. Are there any surveys out there? Then there are rulesets like MISRA, which has both fans and haters.
PRQA seems to have been acquired by Perforce, and its previously free-to-read coding standards now seem to be locked up, so I have no idea what they currently are. (The 2005 JSF rules are here: https://www.stroustrup.com/JSF-AV-rules.pdf - if they're developing as rapidly as Lockheed Martin implies in one online presentation, these are well out of date.) All I can tell you with any confidence is that no open source coders, and very few professionals, have bought the Perforce suite as their version control system and are using the code analyzer to spot defects. It may be possible to use Helix QAC with Git, but I don't see anything to indicate that.
What I'm getting out of this is that the scene is messy, that a fair amount of the advice is apocryphal or at least wildly inaccurate (so a great starting point for budding galactic hitchhikers), that very few people use the tools that do exist, and that even when everything meshes just right, nobody seems to know what the results are.
I do sincerely hope that it's not as bleak as all that, but I'm worried it might actually be worse.
Posted Dec 18, 2021 2:25 UTC (Sat) by ssmith32 (subscriber, #72404)
Posted Dec 18, 2021 10:06 UTC (Sat) by Wol (subscriber, #4433)
Recognise a Turing Machine for what it is - a security nightmare. And DON'T USE IT WHEN IT'S NOT APPROPRIATE.
Cheers,
Wol