The burden of knowledge: dealing with open-source risks
Organizations relying on open-source software have a wide range of tools, scorecards, and methodologies to try to assess security, legal, and other risks inherent in their so-called supply chain. However, Max Mehl argued recently in a short talk at FOSS Backstage in Berlin (and online) that all of this objective information and data is insufficient to truly understand and address risk. Worse, this information doesn't provide options to improve the situation and encourages a passive mindset. Mehl, who works as part of the CTO group at DB Systel, encouraged better risk assessment using qualitative data and direct participation in open source.
Mehl started with a few assumptions about the audience and open-source usage at the organizations they worked at. The first assumption was that audience members were in some way responsible for the use of open source in their organization. Next, those organizations have a five- to seven-digit number of open-source packages in use, spread out among a three- to five-digit number of internal projects. Many of the packages in use at those organizations are direct dependencies—the software the organization's developers actively chose to use—but the majority are indirect dependencies that are required for the software the organization wants to use.
Understanding risk
Those working with open source know that there are potential risks inherent in open-source projects. A project might have security vulnerabilities, or it might change licenses at some point and no longer be considered safe for the organization to use. Projects might also have sustainability issues, he said, which could take the form of an inactive maintainer, a completely dead project, or "other things that indicate that the longevity is not there".
Naturally, those responsible for open-source use need ways to measure the risk and communicate it to the organization. Mehl noted that there are many frameworks to choose from when assessing risk, but he chose to talk specifically about four methodologies: the Cybersecurity and Infrastructure Security Agency's (CISA) Framework for Measuring Trustworthiness, the OpenSSF Scorecard, the Community Health Analytics in Open Source Software (CHAOSS) framework metrics, and DB Systel's Open Source Red Flag Checker, which examines repositories both for red flags and for activity and licensing conditions that the group considers good.
Mehl put up a slide that highlighted a few quotes from the CISA blog post about its framework that, he said, helped to "understand the mindsets at play" in trying to codify risk around open-source software. For example, CISA claims that it is more complex to judge the trustworthiness of open-source software than proprietary software because there is "no direct relationship between the authors of software and those who use that software". Mehl said that the CISA framework is a bad framework for measuring risk, with a very narrow view on trust. "It puts me in a very passive relationship with open source, [it assumes] I have no direct relationship; I cannot change anything about this."
Passive metrics are not sufficient
Mehl said the problem with relying exclusively on the frameworks is that they only measure what can be measured and that passive metrics were not enough. He had a hot take on health metrics for open-source projects: "the people in this room can better assess the health of an open-source project than all the metrics". Metrics cannot replace an experienced "gut feeling" about a project. He did not say that organizations should not use them at all, but that they should not be the sole authority.
He brought up a paper from 2023, "Do Software Security Practices Yield Fewer Vulnerabilities?", by Nusrat Zahan, Shohanuzzaman Shohan, Dan Harris, and Laurie Williams. The OpenSSF Scorecard is an automated tool that assesses various factors in open-source projects hosted on GitHub and assigns scores from 0 to 10. It is meant to be used by projects to assess and improve their security posture, or by organizations to make decisions about the projects they use or may want to use. The paper found that a project's OpenSSF score "only explains 12% of vulnerabilities". In other words, the scorecard may be missing other factors that predict vulnerabilities.
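The Scorecard's per-check breakdown is where most of the nuance lives, and it is easy to consume programmatically. As a rough illustration (not something from the talk), here is a short Python sketch that pulls the low-scoring checks out of a Scorecard result; the field names (`score`, `checks`, `name`) follow the JSON that the scorecard CLI emits in its JSON output format, but the exact shape should be verified against the version in use:

```python
# Sketch: list the low-scoring checks in an OpenSSF Scorecard result.
# The dictionary shape is assumed from the CLI's JSON output; verify
# against your scorecard version before relying on it.

def low_scoring_checks(result: dict, threshold: float = 5.0) -> list[str]:
    """Return the names of checks scoring below the threshold.

    Checks that could not be run are reported with a score of -1;
    those are skipped here rather than counted as failures.
    """
    return [
        check["name"]
        for check in result.get("checks", [])
        if 0 <= check.get("score", -1) < threshold
    ]

if __name__ == "__main__":
    sample = {  # trimmed example of the assumed JSON shape
        "repo": {"name": "github.com/example/project"},
        "score": 4.2,
        "checks": [
            {"name": "Maintained", "score": 3},
            {"name": "License", "score": 10},
            {"name": "Fuzzing", "score": -1},  # could not be determined
        ],
    }
    print(low_scoring_checks(sample))  # ['Maintained']
```

Even a sketch like this shows the interpretation problem Mehl raised: whether a low "Maintained" score is alarming depends entirely on context that no field in the JSON captures.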
The OpenSSF Scorecard and other metrics simply do not take into account many factors that are important when assessing an open-source project. For example, how is the maintainer behaving? How do they react to bug reports or pull requests? Is there a connection to an open-source foundation, or is the project a single-vendor effort? If a project is funded by venture-capital money, Mehl argued, that is not sustainable and is predictive of license changes that will make the software non-free. He pointed out that the CHAOSS framework covers some of these things, but it doesn't weight them.
Most importantly, Mehl said, "those passive metrics do not make us active". A huge problem inherent in using open source is that an organization does not have an alternative if one of these frameworks finds that an open-source package scores badly. "Most of the open source we're using is not controlled by ourselves." He said that he had software bills of materials (SBOMs) generated for software being used by DB Systel, and the software had about 125 dependencies on average. Many packages had more than 1,000 dependencies. In total, the project found 117,000 individual packages in use—and that is without considering versions of packages. If versions were taken into account, Mehl said that the number of packages would increase dramatically. Even worse, nine out of ten of the packages in use score worse than an aggregate five on the OpenSSF Scorecard. "If our rule were that you can only use open-source projects that score better than five out of ten, we would have a huge problem."
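To make the scale of that problem concrete, here is a hypothetical sketch (not from the talk) of the kind of blanket policy check Mehl warned against: walk the components of a CycloneDX-style SBOM (the `components` array is part of the CycloneDX JSON format) and report what fraction a "must score better than five" rule would reject. The per-package score source is stubbed out as a plain dictionary, since the real source would vary by organization:

```python
# Hypothetical policy check: reject SBOM components at or below a
# score threshold. The "components" array follows the CycloneDX JSON
# layout; the scores dictionary stands in for whatever score source
# (Scorecard API, internal database) an organization would use.

def rejected_share(sbom: dict, scores: dict[str, float],
                   threshold: float = 5.0) -> float:
    """Fraction of SBOM components scoring at or below the threshold.

    Unknown packages default to 0.0, i.e. they are rejected, which is
    what a naive "fail closed" policy would do.
    """
    names = [c["name"] for c in sbom.get("components", [])]
    if not names:
        return 0.0
    rejected = sum(1 for n in names if scores.get(n, 0.0) <= threshold)
    return rejected / len(names)

if __name__ == "__main__":
    sbom = {"components": [{"name": "left-pad"}, {"name": "openssl"},
                           {"name": "somelib"}, {"name": "otherlib"}]}
    scores = {"left-pad": 3.1, "openssl": 7.5,
              "somelib": 4.0, "otherlib": 2.2}
    print(rejected_share(sbom, scores))  # 0.75
```

With DB Systel's reported nine-in-ten distribution, a check like this would reject the bulk of an organization's dependency tree, which is exactly why Mehl called the hard-threshold rule unworkable.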
What do we do?
The frameworks can provide a lot of insight and knowledge, he said, which in some ways can be a burden. Now the organization has information to act on, but without clear recommendations. Using the frameworks to assess risks may grant an understanding that a project is flawed, but they do not provide the answers for addressing that risk. Replacing dependencies is not easy or economical. In organizations that are not experienced with open-source use, there may be a temptation to downplay the risks. "I wouldn't recommend that", he said. Another path is to become active and ask developers to replace dependencies, or even contact maintainers and ask them to update. "Or we could force them to fill out forms. You're laughing, but it has happened. Projects have been bombarded with forms to fill out." (Curl creator and maintainer Daniel Stenberg has written about exactly this scenario.)
Instead, Mehl suggested, "let's get more honest". Start with assessing the organization's risk profile and what is specifically important to it. Maybe the mere existence of a contributor license agreement is a risk, or dependence on a single vendor is a risk. "Risk profiles and risk assessment can be very individual. You should identify qualitative and quantitative data that matters to you" and group assets into different risk classes. For example, he said that DB Systel had some software that could be deployed for up to 60 years in critical infrastructure. Organizations could create risk "themes" instead of measuring all software use equally.
Organizations might come to the conclusion that they want to fund some projects financially by supporting maintainers or projects directly. He suggested that it would be wiser to do that through entities like Germany's Sovereign Tech Agency, to nudge software foundations to support specific projects, or for organizations to come together and fund things collaboratively, rather than making one-off funding attempts. Money, however, does not solve everything. Mehl observed that some developers are not looking for or motivated by funding. Money can complicate things in open-source projects, and companies usually want something in return for their funding, which can be off-putting.
Another option is for organizations to contribute code to projects, perhaps even having employees become co-maintainers of projects. He also recommended that organizations could set up teams that provide support for open source and coordinate contributions to external projects, or even partner up with other organizations.
Recommended toolset
All of those options, though, are on the reactive side: ways to respond once it is clear that an open-source project already in use carries greater risk. Mehl encouraged organizations to be more proactive instead. He proposed coming up with criteria for selecting projects based on risk assessments beforehand, so developers could make educated choices and choose open-source projects more wisely. Since he had knocked the CISA framework earlier, he highlighted what he considered a good example from the Biden administration's 2024 Cybersecurity Priorities. That memo, in part, recommends that agencies should improve open-source-software security and sustainability by directly contributing to maintaining open-source projects.
Organizations should have open-source program offices (OSPOs), he said, not for the sake of having an OSPO, but to help define roles and responsibilities and methods of engagement with open-source projects. "The opposite of CISA, this provides an active framework and mindset." Each organization should have tools in its toolbox to allow it to do four things: assess, sponsor, select, and engage with open-source projects that are important to the organization.
His final thoughts for the audience were that organizations need to collaborate more on assessment criteria and on how to share those efforts with other organizations. Too often, Mehl complained, that work is duplicated across thousands of organizations that could benefit from sharing.
Get active. Time's over for passive consumption of open source. We see in the world we can't rely on others to fix issues for us. We have to collaborate. We have to get active. We should do this in general but especially in open source.
Questions
The first question was, since Mehl was encouraging collaboration and reuse, whether DB Systel's internal guide on choosing open source was available. Mehl admitted that it wasn't, "but we should publish it, we should do that. It's a sensitive issue and it gets complicated, but yeah, I think we should share more."
Another audience member wanted to know how to tell internal developers "no" if a project doesn't meet the criteria, and asked if Mehl had seen pushback when telling someone that a developer's choice doesn't meet guidelines. Mehl said that DB Systel does not centralize that choice; it gives guidance, but the teams themselves make risk assessments. "This is a matter of an organization's risk assessment. It makes sense to centralize this in some organizations, but we have a different attack surface".
One member of the audience said that he was happy to hear someone say you can't judge risk solely by the numbers. He wanted to know if Mehl could publish the guidance that these decisions "have to be from the gut" somewhere that he could link to. Mehl didn't respond directly to the question of publishing the guidance, but reiterated that "sometimes it makes sense to use something that scores badly" and then plan for the organization to engage with the project to improve it. "There is no other way around than getting active" in open source.
Some projects die over time, an audience member observed. He wanted to know what experience Mehl had with that and how to spot a project that is dying. Mehl said that there are a number of indicators that can hint a project is in trouble. Those might include how many commits and active contributors a project has over time, or some of the OpenSSF metrics. But "sometimes software is just complete". Some projects do not need daily activity to sustain themselves; it might be good enough to have an update once per year. It depends on the project. Kubernetes "should be maintained more than once every 90 days", but other projects do not need that kind of activity. Of course, it also depends on where a project is being used. He said that an organization had to consider its strategy, identify the software it depends on most, and look at various things—such as the health of the community, the behavior of the maintainer, and CHAOSS metrics—as "a starting point of what you look at". Ultimately, it depends on each organization's risk profile.
While many of Mehl's guidelines can be boiled down to the unsatisfying answer no one likes ("it depends"), it's refreshing to see someone telling organizations they require more in-depth analysis to assess risk than can be had with one-size-fits-all frameworks and scorecards. It is even more encouraging that Mehl pushes organizations to be active in participating in open source, rather than treating projects like another link in the supply chain that can be managed like any other commodity.
| Index entries for this article | |
|---|---|
| Conference | FOSS Backstage/2025 |
Posted Mar 14, 2025 22:09 UTC (Fri)
by roc (subscriber, #30627)
Posted Mar 15, 2025 16:18 UTC (Sat)
by raven667 (subscriber, #5198)
Posted Mar 15, 2025 20:55 UTC (Sat)
by Wol (subscriber, #4433)
Eggsackerly. Iirc, the legislation actually *bars* FLOSS authors from issuing CEs (unless, as you state, there are formal contracts in place).
Cheers,
Posted Mar 17, 2025 2:31 UTC (Mon)
by pabs (subscriber, #43278)
Posted Mar 17, 2025 10:40 UTC (Mon)
by farnz (subscriber, #17727)
There's also quite a bit of language in the CRA that prevents open source licensors from picking up liability without being very deliberate about who they're liable to, and what use that entity is making of the code; you can't just certify that you'll take on all liability for any use or misuse of the software (whereas a commercial entity can, and can even be required to take on liability for expected use of the software).
Posted Mar 17, 2025 16:12 UTC (Mon)
by Wol (subscriber, #4433)
Because it doesn't work like that. Nobody can "get their project certified". It's all *self*-certification - with legal teeth.
Let's say I want to use your FLOSS in my product. It's FLOSS - I can do so no problem, I self certify my product, everything's fine ... until something goes wrong, and I'm legally on the hook for your product. I don't want that.
So I go to you, and say "here's this legal checklist, can you tick these boxes?". One of which is the bus factor! As a lone developer you simply can't tick the boxes, end of. And bearing in mind the legal liability, would you even *want* to tick the boxes without being paid for it?
So you could set up "Pabs Software Consultancy LLC" with a few like-minded FLOSS developers, sign a support contract, and you can issue your CE and get paid to work on your product doing what you want.
Or, what I could do in these circumstances, is sign a retainer with you about consultancy rates, you're legally obligated to drop everything and fix it for me if I need you to, etc etc, and you still don't issue a CE (you can't), but I'm happy underwriting my CE because I know I've got you on the hook if there's a problem.
The (fraudulent) alternative, which is what pizza was worried about, is if I pretty much blackmail you into issuing a CE for nothing even if you're a lone developer. That won't happen, because most FLOSS guys have much more than two brain cells, and if I try that you'll keep the evidence. So when something *does* go wrong, and there's an investigation by the authorities, not only will I be on the hook for the liability I tried to dodge, but I'll be in very hot water for trying to dodge it.
At the end of the day, the legislation is about legal liability for fixing things, and the authorities are wise to people making promises they can't keep ...
Cheers,
Posted Mar 15, 2025 13:26 UTC (Sat)
by mb (subscriber, #50428)
>* ⚠️ Contributions: The top contributor has contributed more than 75% of the contributions of the next 10 contributors
It basically is a single-author project and it basically hasn't been changed in two years.
This tool is dangerous.
What I experience (personally) is that attacks from companies on Open Source projects are ever increasing.
And then, some guy comes and insults me that me being the only maintainer is a red flag.
Posted Mar 16, 2025 10:41 UTC (Sun)
by LtWorf (subscriber, #124958)
In the end my workday's open sourced things are way more likely to be abandoned than many single contributor projects. Despite having funding and more contributors. Just because some VC might decide that is no longer something they want to see.
VCs consider LGPL licensed intellectual property owned by the company itself as way more risky than MIT licensed. Which is insane. But these rational actors do not really act rationally.
Posted Mar 16, 2025 23:25 UTC (Sun)
by ballombe (subscriber, #9523)
>* ⚠️⚠️ Software is funded by Google.
At this point the life expectancy of a software developed by an unpaid developer is higher than the life expectancy of a software receiving funding from Google.
Posted Mar 17, 2025 8:51 UTC (Mon)
by mxmehl (guest, #104271)
Author of the talk here. Joe unfortunately did not mention that I called our Red Flag Checker a PoC. We don't even use it internally at a large scale; basically it is just used to check whether a repo somehow requires a Contributor License Agreement (CLA), which we consider to be a red flag.
The activity measurement came on top, to explore how this tool could be further developed and whether it actually yields results that are useful to us. And as I said during the talk, I came to the conclusion that the mere activity of a project is not a strong indicator for potential risks.
I don't get where I insulted you or other maintainers in any way. I also don't understand where LtWorf sensed a sales pitch, as neither I nor DB Systel are selling anything to anyone in this room. If you're interested in my motivation and background, let's talk. If you're just here to bash anyone with any connection to a company in the broadest sense, have fun.
Posted Mar 17, 2025 13:48 UTC (Mon)
by vicky@isc.org (subscriber, #162218)
I know some checkers (and, again, the Best Practices badge) penalize projects that have contributors from only one organization. This is also odd from a security perspective: I can see a concern for project longevity, but not for code security, if commits are managed by a small, well-integrated team.
A colleague and I did an informal survey last summer asking users what they look at to determine the risk level of OSS. The results are published here: https://ec.europa.eu/eusurvey/publication/RIPE88OpenSourc...
Bottom line: it depends.
Posted Mar 17, 2025 14:05 UTC (Mon)
by neverpanic (subscriber, #99747)
Don't trust me on this, check what others say, too. Here's for example Daniel Stenberg on record at https://mastodon.social/@bagder/113673188062525753 or https://github.com/curl/curl/discussions/12609.
Posted Mar 18, 2025 10:11 UTC (Tue)
by mxmehl (guest, #104271)
For me/us, a CLA is a red flag (so an indicator of a potential risk) because it poses the risk that an Open Source project becomes proprietary without any notice or debate amongst the contributors. Such a license change fundamentally changes our relationship to a certain piece of software, not only from a commercial/financial perspective but also in terms of our possibilities to engage actively or sponsor in order to contribute to the long-term stability of the project.
But again, this is a red flag, not a no-no. There are "good CLAs", e.g. for projects in trustworthy foundations such as Apache or GNU. In those contexts, a CLA may even be considered an advantage, as it allows projects to improve the licensing situation, e.g. by dual-licensing, for the benefit of all users, preserving and even extending freedoms, as opposed to single-vendor projects where a CLA is rather used to restrict freedoms.
Ultimately, this was the gist of my talk: metrics in whatever form are useless unless you interpret them within context. In Open Source, context can be the single project, the ecosystem around it, and also my very own risk assessment as an individual user or organisation.
Posted Mar 18, 2025 11:05 UTC (Tue)
by farnz (subscriber, #17727)
In many dialects of English, "red flag" is nearly synonymous with "no-no"; the distinction is that a certain behaviour is a "no-no" if you are considering doing it, while it's a "red flag" if you observe someone else doing it. In both cases, it refers to a behaviour that should deter other people from interacting with you.
So, saying that something's a "red flag, not a no-no" is a bit weird - it can be read as "this is awful behaviour, not awful behaviour", which is nonsense.
Posted Mar 18, 2025 11:59 UTC (Tue)
by paulj (subscriber, #341)
This isn't some abstract, vague symbol really. ;)
Posted Mar 18, 2025 12:52 UTC (Tue)
by vicky@isc.org (subscriber, #162218)
Vicky
Posted Mar 17, 2025 18:31 UTC (Mon)
by mb (subscriber, #50428)
> I don't get where I insulted you or other maintainers in any way.
Well, I'll describe my perspective.
A "red flag" is another word for a big no-go. It flags something that cannot be tolerated and must be fixed.
I maintain a couple of projects that many companies use for various purposes. They probably earn good money with them and that is perfectly fine.
For all of them I am the only maintainer.
I think this tool is dangerous, because if executed by the wrong people with the wrong understanding about what "single maintainer" actually means it can lead to very bad decisions. Both for the company and also for the project.
But it's also true that my reaction is based on my recent bad experience.
This is not a good base for the next one to come along and flag the project as a "red flag".
Posted Mar 18, 2025 5:38 UTC (Tue)
by pabs (subscriber, #43278)
Probably the tool needs to be reworked to point out correct solutions to the potential risks it uncovers.
For example: a single maintainer working on a project in their spare time, with a donation form => donate money, offer to pay for work on the project, and/or assign employees to contribute back.
That of course won't deter bad actors but it should make most uses of the tool result in positive outcomes.
Posted Mar 18, 2025 8:43 UTC (Tue)
by kleptog (subscriber, #1183)
In other words, it's just like basing important decisions on any other single metric, because a single metric can never cover all the nuances needed for a good decision. See also GDP, GDP per capita, life expectancy, etc. If decision makers take all the human judgement out of a decision, they get what they deserve.
Posted Mar 18, 2025 10:33 UTC (Tue)
by mxmehl (guest, #104271)
That's probably the main misunderstanding. For me, a red flag is an indicator of a potential risk. As written in an earlier comment above, CLAs are a good example. There are CLAs that I consider to be an immense risk, e.g. in the context of a single-vendor project led by a company that a) is unwilling to cooperate with a community and b) is not specifically trustworthy. However, there may also be good CLAs, e.g. for trustworthy foundations that can use CLAs to secure and extend freedoms for the software users, and take effective measures to avoid any abuse of CLAs.
> I think this tool is dangerous, because if executed by the wrong people with the wrong understanding about what "single maintainer" actually means it can lead to very bad decisions. Both for the company and also for the project.
I am sorry to disappoint you, but this "metric" is probably the most used in all kinds of tools and has been implemented in methodologies and tools for at least 10 years. As with all metrics, it can be misinterpreted and stupid decisions can be made based on it -- which is one of the core reasons why I gave this presentation at FOSS Backstage. And hey, basically all of my own Open Source projects also turn out to score badly on these metrics, whatever that means ;)
And let's be honest: in some contexts, I might indeed come to the conclusion that a project led by a single maintainer is a risk that I cannot accept without any countermeasures, for example in high-risk environments that need to stay stable for 20+ years. But again, a red flag is not a no-no. Nothing prevents such an organisation in this case from approaching the project to offer contributions, co-maintainership, sponsorship, contracted work, or even a permanent job. And again, outlining these possibilities, also within the context of upcoming legal requirements from the Cyber Resilience Act and similar legislation, was the main motivation for my talk.
That said, I am sorry you have had such bad experiences with some individuals from companies. Unfortunately, the CRA might lead to an increase in such stupid requests and demands, but I do hope that in the long term it will rather turn companies into good Open Source citizens who acknowledge their responsibilities towards the huge Open Source ecosystem they depend on and, ultimately, the customers of their products.
Posted Mar 18, 2025 12:37 UTC (Tue)
by pm215 (subscriber, #98099)
Posted Mar 18, 2025 14:23 UTC (Tue)
by Wol (subscriber, #4433)
As for the black flag, I have seen that for real in motor racing. It tells a driver to come off the track. I think this guy would have been in real trouble because he basically ignored it ...
Cheers,
Posted Mar 18, 2025 15:04 UTC (Tue)
by paulj (subscriber, #341)
Posted Mar 17, 2025 9:49 UTC (Mon)
by taladar (subscriber, #68407)
Posted Mar 17, 2025 15:28 UTC (Mon)
by mathstuf (subscriber, #69389)
Posted Mar 18, 2025 17:26 UTC (Tue)
by NAR (subscriber, #1313)
Posted Mar 19, 2025 9:31 UTC (Wed)
by taladar (subscriber, #68407)
monetization opportunity
AFAICT, the exact rule is that there has to be a contractual relationship of some form (a B2B sale is enough of a contract here, but acceptance of an open source licence is called out as not enough) to allow liability to pass along the chain. And the CRA "certification" is you saying that you accept liability for certain classes of faults in your software; open source gets a special exception, where you can offer the software to all interested parties without accepting liability for relevant faults (where commercial software providers, including those providing open source under a paid contract, can't escape liability in some cases).
Red flag checker
>* ⚠️ Contributions: The last commit made by a human is more than 90 days old (192 days)
Pretty red, eh? At least according to their own standards.
I see that companies take stuff, violate my licenses, and then get angry if I don't support them in my free time.
Thanks.
> Pretty red, eh? At least according to their own standards.
>
> This tool is dangerous.
But again, this is a red flag, not a no-no
That is the base for my reaction.
I really don't see what is wrong with that (a.k.a. red flag) per se.
Almost all smaller Open Source projects are like that.
As long as the maintainer has the resources and can do the job properly then there is no problem at all.
I can already see the new rule in the development process that mandates there being no red flags. Yes, I understood that there's no such rule at DB. I'm actually talking about other companies/people here which adopt such a tool. I see similar harmful metrics being enforced daily in my day job.
I'm not saying that all companies behave bad. In fact, most companies have a very good style of communication with the Open Source community.
But there also is the occasional company which uses Open Source software for bad purposes such as illegal fake products while also violating the license (of course) and then requesting support from the Open Source maintainer. This is what *actually* happened to me recently. Quote from their communication: "its actually working we will just rebrand it [...] Entitled Dick Dev". And this is not the worst part. I don't want to put the more harsh insults here.
So, yes. I was probably overreacting. However, I also wanted to make clear that I don't find such a tool acceptable.
> That is the base for my reaction.
Which bit about this only applies to open source?
