
Assessing risk with the Core Infrastructure Initiative

By Nathan Willis
July 22, 2015

The Linux Foundation's Core Infrastructure Initiative (CII) exists to "fortify" critical open-source software projects with funding, code reviews, and other kinds of support, with a particular eye toward shoring up those packages to prevent serious security crises. CII was formed in response to the memorable "Heartbleed" vulnerability discovered in OpenSSL, which was the first adopted project. Recently, CII unveiled its Census Project, a semi-automated ranking of open-source projects by security risk. The numbers make for some interesting reading—although the conclusions subsequently drawn by the CII can be puzzling.

The Census Project was announced on July 9, at which time the CII presented the results of its project-analysis work. There is a multi-page HTML table on the Census Project page, as well as a white paper [PDF] co-authored with the Institute for Defense Analyses (IDA) that goes into detail on the methods and metrics considered and used. The upshot is that each project examined in the census is assigned an integer score on a scale from 0 to 16, with higher numbers indicating the greatest risk that the project could be the source of an undiscovered security hole. The peculiar aspect to the story, however, is that the CII appears to have amassed a list of high-risk projects that has little to do with the results of the Census Project.

Scoring projects

The process used to determine the scores did not involve any inspection of the code itself, only a look at project "metadata" of various flavors. As described on the web site, the Census Project counts eight factors when compiling its scores. How these factors are measured requires a more detailed examination (below), but the list itself is short:

  • The number of CVEs filed (worth from 0 to 3 points)
  • The project's contributor count over the past 12 months (2 to 5 points)
  • The project's ranking in the Debian popularity list (point value unspecified)
  • Whether or not the project has a known web site (0 or 1 point)
  • Whether or not the package is exposed to the network (0 or 2 points)
  • Whether or not the package processes network data (0 or 1 point)
  • Whether or not the package could be used for local privilege escalation (0 or 1 point)
  • Whether or not the project includes an executable or only provides data (0 or -3 points)

The number of points assigned for popularity in Debian is not specified. The other factors, however, only add up to a maximum of 13, so perhaps popularity is scored from 0 to 3, which would account for the full 16-point scale; it would also appear that a high popularity ranking corresponds to more "risk" points. In addition, CII's Emily Ratliff noted that only CVEs since 2010 were counted.
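
Taken at face value, the factors above describe a simple additive model. The sketch below shows how such a tally might be computed in Python; the field names, the contributor and popularity breakpoints, and the assumption that popularity is worth 0 to 3 points are illustrative guesses based on the list above, not the Census Project's actual code (which lives in its GitHub repository).

    # Hypothetical tally of the census risk factors listed above. The field
    # names and thresholds are guesses for illustration only; the real
    # cii-census code differs.
    from dataclasses import dataclass

    @dataclass
    class ProjectMetadata:
        cve_count: int                # CVEs filed since 2010
        contributors_12mo: int        # contributors over the past 12 months
        popularity_rank: int          # Debian popularity rank (1 = most popular)
        has_website: bool
        network_exposed: bool
        processes_network_data: bool
        potential_privilege_escalation: bool
        application: bool             # False if the package only ships data

    def risk_score(p: ProjectMetadata) -> int:
        score = min(p.cve_count, 3)                 # CVEs: 0 to 3 points
        # Fewer recent contributors means more risk; the breakpoints are
        # guesses consistent with the stated 2-to-5-point range.
        if p.contributors_12mo <= 1:
            score += 5
        elif p.contributors_12mo <= 3:
            score += 4
        elif p.contributors_12mo <= 10:
            score += 3
        else:
            score += 2
        # Popularity: assumed to be worth 0 to 3 points, more for popular packages.
        if p.popularity_rank <= 1000:
            score += 3
        elif p.popularity_rank <= 10000:
            score += 2
        elif p.popularity_rank <= 50000:
            score += 1
        score += 0 if p.has_website else 1          # no known web site: +1
        score += 2 if p.network_exposed else 0      # exposed to the network: +2
        score += 1 if p.processes_network_data else 0
        score += 1 if p.potential_privilege_escalation else 0
        if not p.application:                       # data-only packages: -3
            score -= 3
        return max(score, 0)

    # Example: a popular, network-facing package with one lone contributor
    # and a couple of CVEs scores 14 under these assumptions.
    print(risk_score(ProjectMetadata(
        cve_count=2, contributors_12mo=1, popularity_rank=500,
        has_website=False, network_exposed=True, processes_network_data=True,
        potential_privilege_escalation=False, application=True)))

With those guesses, the maximum possible score is 3+5+3+1+2+1+1 = 16, which at least matches the stated scale.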

Individual pages for each project assessment provide a bit more detail (see, for example, the page for tcpd), noting which language the program is implemented in, so other factors may be part of the scoring formula. Ultimately, of course, the score is the product of a human assessment of the project, as the CII web site makes plain. While some of the input data is harvested from Debian and from Black Duck's OpenHub, other factors clearly involve some qualitative judgment—such as whether or not a package could be used for local privilege escalation—and the white paper mentions that the speed with which CVEs are fixed played a role in the rankings.
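
Some of that harvesting is easy to reproduce independently. As one example, the Debian popularity ranking can be pulled from the popularity-contest statistics; the minimal sketch below assumes the plain-text "by_inst" listing published at popcon.debian.org and is not necessarily how the census scraper itself does it.

    # Rough sketch: look up a package's rank in Debian's popularity-contest
    # data. Assumes the plain-text "by_inst" listing at popcon.debian.org;
    # the census project's own scraper may work differently.
    import urllib.request
    from typing import Optional

    POPCON_URL = "https://popcon.debian.org/by_inst"

    def popularity_rank(package: str) -> Optional[int]:
        with urllib.request.urlopen(POPCON_URL) as response:
            for raw in response:
                line = raw.decode("utf-8", errors="replace")
                if line.startswith("#"):
                    continue                      # header/comment lines
                fields = line.split()
                # Expected columns: rank, package, inst, vote, old, recent, ...
                if len(fields) >= 2 and fields[1] == package:
                    try:
                        return int(fields[0])
                    except ValueError:            # e.g. the trailing "Total" line
                        return None
        return None

    print(popularity_rank("tcpd"))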

Of the packages assessed so far, the first big cliff in the scoring occurs between the packages scoring 9 or above and those scoring 8 or below. This top-scoring class of packages includes the following:

  Package              Score
  tcpd                    11
  whois                   11
  ftp                     11
  netcat-traditional      11
  at                      10
  libwrap0                10
  traceroute              10
  xauth                   10
  bzip2                    9
  hostname                 9
  libacl1                  9
  libaudit0                9
  libbz2-1.0               9
  libept1.4.12             9
  libreadline6             9
  libtasn1-3               9
  linux-base               9
  telnet                   9

Regrettably, the raw numbers that make up each package's score do not appear to be available. It would have been interesting to see the exact point values assigned for number of contributors, for example. It is also not entirely clear how some of the factors are scored: does "could be used for local privilege escalation" mean simply "is installed setuid," for example? The project has a GitHub repository where some of the data-scraping code can be inspected, but the CII site and white paper both indicate that human assessment of the data plays a major role in the final process (starting with cleaning up the "noisy" raw data).
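
For what it is worth, one crude stand-in for that last factor would be to check whether a package ships any setuid or setgid executables. The sketch below assumes a Debian system with dpkg installed; it is only one possible reading of the census criterion, not the project's actual test.

    # Crude proxy for "could be used for local privilege escalation": does an
    # installed Debian package ship any setuid or setgid files? This is just
    # one possible interpretation of the census factor.
    import os
    import stat
    import subprocess

    def has_setuid_or_setgid_files(package: str) -> bool:
        # 'dpkg -L <package>' lists the files installed by the package;
        # it raises CalledProcessError if the package is not installed.
        listing = subprocess.run(["dpkg", "-L", package],
                                 capture_output=True, text=True, check=True)
        for path in listing.stdout.splitlines():
            try:
                mode = os.stat(path).st_mode
            except OSError:
                continue                          # missing or inaccessible entry
            if stat.S_ISREG(mode) and mode & (stat.S_ISUID | stat.S_ISGID):
                return True
        return False

    for pkg in ("at", "xauth", "bzip2"):
        print(pkg, has_setuid_or_setgid_files(pkg))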

Beyond scores

In the end, though, the oddest thing about the scoring is that these raw scores do not indicate which projects CII will invest in. The white paper, after a lengthy (60-page) explanation of the methodologies employed, comes up with a different set of human-selected "riskiest" projects based on the authors' "knowledge of how the programs are used" and on which projects "appear to be relatively unmaintained". The human-identified project list includes: xauth, bzip2, libaudit0, libtasn1-3, bind9, exim4, isc-dhcp, gnutls26, gpgme, openldap, pam, openssl, net-tools, openssh, rsyslog, wget, apr-util, coolkey, ntp, gnupg, gzip, expat, freetype, libgcrypt11, keyutils, xz-utils, p11-kit, pcre3, cyrus-sasl2, libxml2, shadow, tar, zlib, apr, libjpeg8, libpng, libressl, unzip, giflib, mod-gnutls, postfix, and cryptsetup.

This list contains little that is surprising. The projects highlighted are those that must deal with untrusted network connections, those that are responsible for processing potentially malicious data file formats, and those that are responsible for enforcing security measures for the system as a whole or for application programs. This may seem a bit anticlimactic, since it varies little from the list that any security-conscious user might come up with on their own.

Nevertheless, it is good to see someone attempt systematic analysis to reach a conclusion about the riskiness of common programs. The troubling factor is that, so far, the analysis only underscores common sense. The larger question is what CII intends to do with this information. The first few CII-supported projects (ntpd, GnuPG, Frama-C, OpenSSL, OpenSSH, Debian's reproducible builds, and The Fuzzing Project) were selected before there was a formal process in place.

The Census Project is a first step toward assembling such a process. Still, the web page makes a point of saying that "the decision to fund a project in need is not automated by any means." The white paper concludes by saying only that CII participants "believe the next step is to further investigate these OSS projects for security and project healthiness."

Interestingly enough, outsiders are invited to participate in the CII's project-identification process by contributing patches or suggestions to the Census Project code on GitHub or by writing to one of the CII mailing lists. Thus far, two other projects have been suggested for consideration on the cii-census list (the archives of which are visible only to subscribers): the Trinity fuzz tester and the PaX patch set. Both suggestions were referred to the CII steering committee, which includes one representative each from the supporting companies: Amazon Web Services, Adobe, Bloomberg, Cisco, Dell, Facebook, Fujitsu, Google, Hitachi, HP, Huawei, IBM, Intel, Microsoft, NetApp, NEC, Qualcomm, RackSpace, Salesforce.com, and VMware.

The CII itself is still finding its footing. Apart from the Census Project, multiple pages on the site invite projects interested in funding to contact the CII with a grant request, but they point only to a contact page; no formal application process has yet been defined. Time will tell how CII goes about selecting which projects to support from among the high-risk prospects. Hopefully, much of that selection process will take place in the open. As this census shows, there is no shortage of important projects that are in need of additional support; transparency in determining which ones merit support is as important as the ability to study the resulting improvements to the source code.

Index entries for this article
Security: Core Infrastructure Initiative
Security: Research



Assessing risk with the Core Infrastructure Initiative

Posted Jul 23, 2015 16:02 UTC (Thu) by ortalo (guest, #4654) [Link] (2 responses)

Ouch. Such a nice website (indeed), but not even a mention of NVD CVSS (https://nvd.nist.gov/cvss.cfm), nor of all the existing work toward software security evaluation metrics (e.g. NIST, SANS, or CERT work in this area, to name only the most evident ones), or even of risk-analysis principles (the latter I could so easily forgive... ;-).
Not to speak of the fact that all these approaches frequently fail to provide usable output [1], and many people recommend focusing on security requirements (and thus critical programs) in order to allocate resources (though that requires justification and controlled, even if subjective, assessment).
Seems to me they are reinventing the wheel, and building it square this time, too.

Am I getting old and bitter or just old? Help!

Well, in order to try to save myself, let me just suggest a definitive fund-allocation method for software security improvement: wisely choose a wise project leader to wisely select the targeted programs, pay people for auditing code and proving they improved the programs' security, rinse the project leader (change if unwise or exhausted), reiterate.
If you do not have funds for several iterations: simply rinse everyone generously at the nearest meeting place, or save the money for your favorite charity. Well, just my $0.02 on that method, I am in a hurry. ;-)

[1] Why do you think MITRE never tried to go further than atomic vulnerability evaluation? My guess is they closed the can as soon as they saw what was inside.

Assessing risk with the Core Infrastructure Initiative

Posted Jul 23, 2015 17:02 UTC (Thu) by david.a.wheeler (subscriber, #72896) [Link] (1 responses)

CVSS is for evaluating specific vulnerabilities, not for evaluating an entire project, so CVSS doesn't really help directly. The CII census does use CVE counts. CVSS could be sort-of used to weight the CVEs, but it's not clear it would help that much.

As far as "other evaluation techniques", there's a lot more detail here:
https://www.coreinfrastructure.org/sites/cii/files/pages/...

But a lot of these other evaluation techniques are hard to apply to the specific problem of ranking OSS projects for security investments. For example, third-party Common Criteria evaluations cost a lot of time and money for a single project; it's hard to imagine applying them to hundreds or thousands of OSS projects. NIST runs the FIPS 140-2 process, but that's focused only on crypto modules (and yes, we mention that). If you DO know of something else that would help rank OSS projects for security investments, please let us know! The issue tracker is a good way to let us know:
https://github.com/linuxfoundation/cii-census

Assessing risk with the Core Infrastructure Initiative

Posted Jul 24, 2015 12:58 UTC (Fri) by ortalo (guest, #4654) [Link]

[Disclaimer] This is just an immediate comment in reply. I *will* read your 88 pages asap, think again, and go to the issue tracker if I have something interesting to propose. You have obviously done a lot of work on the topic, so do not let some random commenter stop you with his skepticism!

Sorry for not being clear, but I really think there is no real objective ranking technique at this granularity level. (But I would be happy to be proven wrong, so please do not refrain from trying.)

Hence my initial pun (note [1]), which meant that MITRE was IMHO right not to go further than single-vulnerability CVSS scoring, because more complex things would probably be erroneous.
Hence your own remark that other evaluation techniques are inadequate for software components: not only for cost reasons, but also because they target either full systems, like Common Criteria, or a specific type of component, like FIPS (or specific CC profiles, by the way).
I would say this is due to the fact that (user-dependent) security requirements are necessary to evaluate a single system component's rank/level.
Most system owners or managers like to hide that fact, but that's cowardice or liability denial. They would do better to put more effort into explaining what they really want to protect and how much it is worth (to them). Note that's what *you* are trying to do, so kudos for that.

My fast advice is to acknowledge this difficulty and not try too hard to rank pieces of software based on some hypothetical absolute security measure.
I'd rather simply see someone try to rank the common security requirements of a Linux system, I mean the properties users want from the various pieces of software, and then select the associated list of software projects targeted for some effort, possibly sorted subjectively if needed (by you or from some other expert's point of view).

Note that (security) requirements ranking may necessitate as much black magic as the way you approach the problem currently, so nothing has changed much. :-)

But this could also even be fully subjective. And then?
Select a few different users if you want perspective. Select another expert if the first one was not so great after all. Assume full responsibility for your spending choices if you spend more on vi than emacs, favor KDE over GNOME, or the opposite. Also, defeat the costly Common Criteria approach by iterating much faster and shamelessly choosing the software targets fast enough to spend most of the money on the security improvement itself rather than on the managing process.
A good end-to-end evaluation metric would be wonderful for ensuring accurate selection, or that the feedback loop progresses, but I am sure we can do something even without it, and I suspect a bad one may even mislead us (e.g. into trying to improve bad software where vulnerabilities get introduced faster than they are found).

Scoring visibility

Posted Jul 23, 2015 18:35 UTC (Thu) by david.a.wheeler (subscriber, #72896) [Link]

"Regrettably, the raw numbers that make up each package's score do not appear to be available. It would have been interesting to see the exact point values assigned for number of contributors, for example."

The information is there. The data for those values is in the results file, and the scoring is documented on the site. But I see your point; it's not immediately obvious how the final score got to be that way... you have to do some (re)analysis.

I've added an issue to the issue tracker proposing this functionality:
https://github.com/linuxfoundation/cii-census/issues/29


Copyright © 2015, Eklektix, Inc.
This article may be redistributed under the terms of the Creative Commons CC BY-SA 4.0 license
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds