
The burden of knowledge: dealing with open-source risks

By Joe Brockmeier
March 14, 2025

FOSS Backstage

Organizations relying on open-source software have a wide range of tools, scorecards, and methodologies to try to assess security, legal, and other risks inherent in their so-called supply chain. However, Max Mehl argued recently in a short talk at FOSS Backstage in Berlin (and online) that all of this objective information and data is insufficient to truly understand and address risk. Worse, this information doesn't provide options to improve the situation and encourages a passive mindset. Mehl, who works as part of the CTO group at DB Systel, encouraged better risk assessment using qualitative data and direct participation in open source.

Mehl started with a few assumptions about the audience and open-source usage at the organizations they worked at. The first assumption was that audience members were in some way responsible for the use of open source in their organization. Next, those organizations have a five- to seven-digit number of open-source packages in use, spread out among a three- to five-digit number of internal projects. Many of the packages in use at those organizations are direct dependencies—the software the organization's developers actively chose to use—but the majority are indirect dependencies that are required for the software the organization wants to use.

Understanding risk

Those working with open source know that there are potential risks inherent in open-source projects. A project might have security vulnerabilities, or it might change licenses at some point and no longer be considered safe for the organization to use. Projects might also have sustainability issues, he said, which could take the form of an inactive maintainer, a completely dead project, or "other things that indicate that the longevity is not there".

Naturally, those responsible for open-source use need ways to measure the risk and communicate it to the organization. Mehl noted that there are many frameworks to choose from when assessing risk, but he chose to talk specifically about four methodologies: the Cybersecurity and Infrastructure Security Agency's (CISA) Framework for Measuring Trustworthiness, the OpenSSF Scorecard, the Community Health Analytics in Open Source Software (CHAOSS) framework metrics, and DB Systel's Open Source Red Flag Checker, which examines repositories both for red flags and for activity and licensing conditions the group considers good.
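Checks of that kind are largely mechanical. As a rough, hypothetical sketch (not the Red Flag Checker's actual code), two of the heuristics it reports, a last commit older than 90 days and a top contributor who dominates the next ten, could be computed from the public GitHub REST API roughly as follows; a real tool would also need authentication, bot filtering, and error handling:

    import json
    import urllib.request
    from datetime import datetime, timezone

    def fetch(url):
        # Unauthenticated GitHub API calls are heavily rate-limited; a real
        # tool would send a token in the Authorization header.
        with urllib.request.urlopen(url) as resp:
            return json.load(resp)

    def red_flags(owner, repo):
        base = f"https://api.github.com/repos/{owner}/{repo}"
        flags = []

        # Age of the most recent commit (this sketch does not try to tell
        # human commits from bot commits, as the actual checker does).
        commits = fetch(f"{base}/commits?per_page=1")
        date = commits[0]["commit"]["committer"]["date"].replace("Z", "+00:00")
        age = (datetime.now(timezone.utc) - datetime.fromisoformat(date)).days
        if age > 90:
            flags.append(f"last commit is {age} days old")

        # Contributor concentration: the top contributor versus the next ten.
        contributors = fetch(f"{base}/contributors?per_page=11")
        if contributors:
            top = contributors[0]["contributions"]
            rest = sum(c["contributions"] for c in contributors[1:])
            if rest and top > 0.75 * rest:
                flags.append("top contributor made more than 75% of the "
                             "contributions of the next ten combined")
        return flags

    print(red_flags("curl", "curl"))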

Mehl put up a slide that highlighted a few quotes from the CISA blog post about its framework that, he said, helped to "understand the mindsets at play" in trying to codify risk around open-source software. For example, CISA claims that it is more complex to judge the trustworthiness of open-source software than proprietary software because there is "no direct relationship between the authors of software and those who use that software". Mehl said that the CISA framework is a bad framework for measuring risk, with a very narrow view on trust. "It puts me in a very passive relationship with open source, [it assumes] I have no direct relationship; I cannot change anything about this."

Passive metrics are not sufficient

Mehl said the problem with relying exclusively on the frameworks is that they only measure what can be measured and that passive metrics were not enough. He had a hot take on health metrics for open-source projects: "the people in this room can better assess the health of an open-source project than all the metrics". Metrics cannot replace an experienced "gut feeling" about a project. He did not say that organizations should not use them at all, but that they should not be the sole authority.

He brought up a paper from 2023, "Do Software Security Practices Yield Fewer Vulnerabilities?", by Nusrat Zahan, Shohanuzzaman Shohan, Dan Harris, and Laurie Williams. The OpenSSF Scorecard is an automated tool that assesses various factors in open-source projects hosted on GitHub, and assigns scores from 0-10. It is meant to be used by projects to assess and improve their security posture, or to be used by organizations to make decisions about the projects they use or may want to use. The paper found that a project's OpenSSF score "only explains 12% of vulnerabilities". In other words, the scorecard may be missing other factors that predict vulnerabilities.
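For readers who want to see how a given project scores, precomputed results can be fetched without running the tool locally. The following minimal Python sketch assumes the public Scorecard results API at api.securityscorecards.dev, which serves scores for many GitHub-hosted repositories (the scorecard command-line tool can compute them directly instead):

    import json
    import urllib.request

    def scorecard(owner, repo):
        # The results API returns the aggregate 0-10 score plus per-check scores.
        url = f"https://api.securityscorecards.dev/projects/github.com/{owner}/{repo}"
        with urllib.request.urlopen(url) as resp:
            data = json.load(resp)
        print(f"{owner}/{repo}: aggregate score {data['score']}")
        for check in data.get("checks", []):
            print(f"  {check['name']}: {check['score']}")

    scorecard("curl", "curl")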

The OpenSSF Scorecard and other metrics simply do not take into account many factors that are important when assessing an open-source project. For example, how is the maintainer behaving? How do they react to bug reports or pull requests? Is there a connection to an open-source foundation, or is the project a single-vendor effort? If it is funded by venture capital money, that is not sustainable and is predictive of license changes that will make the software non-free. Mehl pointed out that the CHAOSS framework covers some of these things, but it doesn't weight them.

Most importantly, Mehl said, "those passive metrics do not make us active". A huge problem inherent in using open source is that an organization does not have an alternative if one of these frameworks finds that an open-source package scores badly. "Most of the open source we're using is not controlled by ourselves." He said that he had software bills of materials (SBOMs) generated for software being used by DB Systel, and the software had about 125 dependencies on average. Many packages had more than 1,000 dependencies. In total, the project found 117,000 individual packages in use—and that is without considering versions of packages. If versions were taken into account, Mehl said that the number of packages would increase dramatically. Even worse, nine out of ten of the packages in use score worse than an aggregate five on the OpenSSF Scorecard. "If our rule were that you can only use open-source projects that score better than five out of ten, we would have a huge problem."
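Numbers like those can be reproduced from SBOM data with a few lines of code. The following is an illustrative sketch only, not DB Systel's tooling: it assumes a directory of CycloneDX JSON SBOMs, counts dependencies per SBOM, and counts distinct packages across all of them, ignoring versions by stripping the version portion of each package URL (purl):

    import json
    from pathlib import Path

    def summarize(sbom_dir):
        per_sbom = []       # dependency count of each SBOM
        packages = set()    # distinct packages across all SBOMs, versions ignored
        for path in Path(sbom_dir).glob("*.json"):
            components = json.loads(path.read_text()).get("components", [])
            per_sbom.append(len(components))
            for comp in components:
                purl = comp.get("purl") or comp.get("name", "")
                packages.add(purl.rsplit("@", 1)[0])  # drop "@<version>" if present
        if per_sbom:
            avg = sum(per_sbom) / len(per_sbom)
            print(f"{len(per_sbom)} SBOMs, {avg:.0f} dependencies on average")
        print(f"{len(packages)} distinct packages, versions not counted")

    summarize("sboms/")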

What do we do?

The frameworks can provide a lot of insight and knowledge, he said, which in some ways can be a burden. Now the organization has information to act on, but without clear recommendations. Using the frameworks to assess risks may grant an understanding that a project is flawed, but they do not provide the answers for addressing that risk. Replacing dependencies is not easy or economical. In organizations that are not experienced with open-source use, there may be a temptation to downplay the risks. "I wouldn't recommend that", he said. Another path is to become active and ask developers to replace dependencies, or even contact maintainers and ask them to update. "Or we could force them to fill out forms. You're laughing, but it has happened. Projects have been bombarded with forms to fill out." (Curl creator and maintainer Daniel Stenberg has written about exactly this scenario.)

Instead, Mehl suggested, "let's get more honest". Start with assessing the organization's risk profile and what is specifically important to it. Maybe the mere existence of a contributor license agreement is a risk, or dependence on a single vendor is a risk. "Risk profiles and risk assessment can be very individual. You should identify qualitative and quantitative data that matters to you" and group assets into different risk classes. For example, he said that DB Systel had some software that could be deployed for up to 60 years in critical infrastructure. Organizations could create risk "themes" instead of measuring all software use equally.
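What such risk classes look like in practice will differ per organization; the classes, criteria, and thresholds in the sketch below are invented for illustration rather than taken from DB Systel's guidance, but they show how qualitative flags (a CLA, a single vendor) and quantitative scores can be weighted differently per class, with flags prompting a closer look rather than an outright ban:

    # Invented example risk classes; the names and thresholds are placeholders.
    RISK_CLASSES = {
        "critical-infrastructure": {   # long-lived, safety-relevant deployments
            "min_scorecard": 8.0,
            "cla_acceptable": False,
            "single_vendor_acceptable": False,
        },
        "internal-tooling": {          # short-lived, easily replaced software
            "min_scorecard": 3.0,
            "cla_acceptable": True,
            "single_vendor_acceptable": True,
        },
    }

    def needs_review(project, risk_class):
        """Return the reasons a project should get a closer (human) look."""
        rules = RISK_CLASSES[risk_class]
        reasons = []
        if project["scorecard"] < rules["min_scorecard"]:
            reasons.append("scorecard below class threshold")
        if project["has_cla"] and not rules["cla_acceptable"]:
            reasons.append("CLA present")
        if project["single_vendor"] and not rules["single_vendor_acceptable"]:
            reasons.append("single-vendor project")
        return reasons

    print(needs_review({"scorecard": 6.2, "has_cla": True, "single_vendor": True},
                       "critical-infrastructure"))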

Organizations might come to the conclusion that they want to fund some projects financially by supporting maintainers or projects directly. He suggested that it would be wiser to do that through entities like Germany's Sovereign Tech Agency, to nudge software foundations to support specific projects, or for organizations to come together and fund things collaboratively, rather than making one-off funding attempts. Money, however, does not solve everything. Mehl observed that some developers are not looking for or motivated by funding. Money can complicate things in open-source projects, and companies usually want something in return for their funding, which can be off-putting.

Another option is for organizations to contribute code to projects, perhaps even having employees become co-maintainers of projects. He also recommended that organizations could set up teams that provide support for open source and coordinate contributions to external projects, or even partner up with other organizations.

Recommended toolset

All of those options are on the reactive side, though: they are responses for when it is already clear that an open-source project in use carries greater risk. Mehl encouraged organizations to be more proactive. He proposed coming up with criteria for selecting projects, based on risk assessments done beforehand, so that developers could make educated choices and pick open-source projects more wisely. Since he had knocked the CISA framework earlier, he highlighted what he considered a good example from the Biden administration's 2024 Cybersecurity Priorities. That memo, in part, recommends that agencies should improve open-source-software security and sustainability by directly contributing to maintaining open-source projects.

Organizations should have open-source program offices (OSPOs), he said, not for the sake of having an OSPO, but to help define roles and responsibilities and methods of engagement with open-source projects. "The opposite of CISA, this provides an active framework and mindset." Each organization should have tools in its toolbox to allow it to do four things: assess, sponsor, select, and engage with open-source projects that are important to the organization.

His final thoughts for the audience were that organizations need to collaborate more on assessment criteria and on how to share those efforts with other organizations. Too often, Mehl complained, that work is duplicated across thousands of organizations that could instead benefit from sharing it.

Get active. Time's over for passive consumption of open source. We see in the world we can't rely on others to fix issues for us. We have to collaborate. We have to get active. We should do this in general but especially in open source.

Questions

The first question was, since Mehl was encouraging collaboration and reuse, whether DB Systel's internal guide on choosing open source was available. Mehl admitted that it wasn't, "but we should publish it, we should do that. It's a sensitive issue and it gets complicated, but yeah, I think we should share more."

Another audience member wanted to know how to tell internal developers "no" if a project doesn't meet the criteria, and asked if Mehl had seen pushback when telling someone that a developer's choice doesn't meet guidelines. Mehl said that DB Systel does not centralize that choice, that it gives guidance but the teams themselves make risk assessments. "This is a matter of an organization's risk assessment. It makes sense to centralize this in some organizations, but we have a different attack surface".

One member of the audience said that he was happy to hear someone say you can't judge risk solely by the numbers. He wanted to know if Mehl could publish the guidance that these decisions "have to be from the gut" somewhere that he could link to. Mehl didn't respond directly to the question of publishing the guidance but reiterated that "sometimes it makes sense to use something that scores badly" and then plan for the organization to engage with the project to improve it. "There is no other way around than getting active" in open source.

Some projects die over time, an audience member observed. He wanted to know what experience Mehl had with that and how to spot a project that is dying. Mehl said that there are a number of indicators that can hint a project is in trouble. That might include how many commits and active contributors a project has over time, or some of the OpenSSF metrics. But "sometimes software is just complete". Some projects do not need daily activity to sustain themselves; it might be good enough to have an update once per year. It depends on the project. Kubernetes "should be maintained more than once every 90 days", but other projects do not need that kind of activity. Of course, it also depends on where a project is being used, too. He said that an organization had to consider its strategy, identify the software it depends on most, and look at various things—such as the health of the community, the behavior of the maintainer, and CHAOSS metrics—as "a starting point of what you look at". Ultimately, it depends on each organization's risk profile.

While many of Mehl's guidelines can be boiled down to the unsatisfying answer no one likes ("it depends"), it's refreshing to see someone telling organizations they require more in-depth analysis to assess risk than can be had with one-size-fits-all frameworks and scorecards. It is even more encouraging that Mehl pushes organizations to be active in participating in open source, rather than treating projects like another link in the supply chain that can be managed like any other commodity.


Index entries for this article
Conference: FOSS Backstage/2025



monetization opportunity

Posted Mar 14, 2025 22:09 UTC (Fri) by roc (subscriber, #30627) [Link] (5 responses)

Surely there is an opportunity here to solve the open-source monetization problem: make the software open-source but charge for compliance. Require commercial software vendors to collect certificates that their dependencies are properly maintained, and let maintainers charge to provide those certificates.

monetization opportunity

Posted Mar 15, 2025 16:18 UTC (Sat) by raven667 (subscriber, #5198) [Link] (4 responses)

Isn't that effectively the EU response, commercial vendors need to take responsibility for whatever they ship including open source components, but open source developers don't have any particular burden unless they have a formal relationship to provide support, if a vendor wants to delegate maintenance to the original open source developers then they need to be willing to pay for it.

monetization opportunity

Posted Mar 15, 2025 20:55 UTC (Sat) by Wol (subscriber, #4433) [Link] (3 responses)

> if a vendor wants to delegate maintenance to the original open source developers then they need to be willing to pay for it.

Eggsackerly. Iirc, the legislation actually *bars* FLOSS authors from issuing CEs (unless, as you state, there are formal contracts in place).

Cheers,
Wol

monetization opportunity

Posted Mar 17, 2025 2:31 UTC (Mon) by pabs (subscriber, #43278) [Link] (2 responses)

Surely if a FLOSS author has the funding and inclination, they should be able to get their project certified and release that info publicly? Why would that be banned?!

monetization opportunity

Posted Mar 17, 2025 10:40 UTC (Mon) by farnz (subscriber, #17727) [Link]

AFAICT, the exact rule is that there has to be a contractual relationship of some form (a B2B sale is enough of a contract here, but acceptance of an open source licence is called out as not enough) to allow liability to pass along the chain. And the CRA "certification" is you saying that you accept liability for certain classes of faults in your software; open source gets a special exception, where you can offer the software to all interested parties without accepting liability for relevant faults (where commercial software providers, including those providing open source under a paid contract, can't escape liability in some cases).

There's also quite a bit of language in the CRA that prevents open source licensors from picking up liability without being very deliberate about who they're liable to, and what use that entity is making of the code; you can't just certify that you'll take on all liability for any use or misuse of the software (whereas a commercial entity can, and can even be required to take on liability for expected use of the software).

monetization opportunity

Posted Mar 17, 2025 16:12 UTC (Mon) by Wol (subscriber, #4433) [Link]

> Surely if a FLOSS author has the funding and inclination, they should be able to get their project certified and release that info publicly? Why would that be banned?!

Because it doesn't work like that. Nobody can "get their project certified". It's all *self*-certification - with legal teeth.

Let's say I want to use your FLOSS in my product. It's FLOSS - I can do so no problem, I self certify my product, everything's fine ... until something goes wrong, and I'm legally on the hook for your product. I don't want that.

So I go to you, and say "here's this legal checklist, can you tick these boxes?". One of which is the bus factor! As a lone developer you simply can't tick the boxes end of. And bearing in mind the legal liability, would you even *want* to tick the boxes without being paid for it?

So you could set up "Pabs Software Consultancy LLC" with a few like-minded FLOSS developers, sign a support contract, and you can issue your CE and get paid to work on your product doing what you want.

Or, what I could do in these circumstances, is sign a retainer with you about consultancy rates, you're legally obligated to drop everything and fix it for me if I need you to, etc etc, and you still don't issue a CE (you can't), but I'm happy underwriting my CE because I know I've got you on the hook if there's a problem.

The (fraudulent) alternative, which is what pizza was worried about, is if I pretty much blackmail you into issuing a CE for nothing even if you're a lone developer. That won't happen, because most FLOSS guys have much more than two brain cells, and if I try that you'll keep the evidence. So when something *does* go wrong, and there's an investigation by the authorities, not only will I be on the hook for the liability I tried to dodge, but I'll be in very hot water for trying to dodge it.

At the end of the day, the legislation is about legal liability for fixing things, and the authorities are wise to people making promises they can't keep ...

Cheers,
Wol

Red flag checker

Posted Mar 15, 2025 13:26 UTC (Sat) by mb (subscriber, #50428) [Link] (16 responses)

So, let's run it on itself then

>* ⚠️ Contributions: The top contributor has contributed more than 75% of the contributions of the next 10 contributors
>* ⚠️ Contributions: The last commit made by a human is more than 90 days old (192 days)

It basically is a single-author project and it basically hasn't been changed in two years.
Pretty red, eh? At least according to their own standards.

This tool is dangerous.

What I experience (personally) is that attacks from companies on Open Source projects are ever increasing.
I see that companies take stuff, violate my licenses, and then get angry if I don't support them in my free time.

And then, some guy comes and insults me that me being the only maintainer is a red flag.
Thanks.

Red flag checker

Posted Mar 16, 2025 10:41 UTC (Sun) by LtWorf (subscriber, #124958) [Link]

Yeah sales pitches at these conferences are sad.

In the end my workday's open sourced things are way more likely to be abandoned than many single contributor projects. Despite having funding and more contributors. Just because some VC might decide that is no longer something they want to see.

VCs consider LGPL licensed intellectual property owned by the company itself as way more risky than MIT licensed. Which is insane. But these rational actors do not really act rationally.

Red flag checker

Posted Mar 16, 2025 23:25 UTC (Sun) by ballombe (subscriber, #9523) [Link]

Also it is missing:

>* ⚠️⚠️ Software is funded by Google.

At this point the life expectancy of a software developed by an unpaid developer is higher than the life expectancy of a software receiving funding from Google.

Red flag checker

Posted Mar 17, 2025 8:51 UTC (Mon) by mxmehl (guest, #104271) [Link] (13 responses)

> It basically is a single-author project and it basically hasn't been changed in two years.
> Pretty red, eh? At least according to their own standards.
>
> This tool is dangerous.

Author of the talk here. Joe unfortunately did not mention that I called our Red Flag Checker a PoC. We don't even use it internally at a large scale, basically just to check whether a repo somehow requires a Contributor License Agreement (CLA), which we consider to be a red flag.

The activity measurement came on top to explore how this tool could be further developed and whether it actually yields results which are useful to us. And as I said during the talk, I came to the conclusion that the mere activity of a project is not a strong indicator for potential risks.

I don't get where I insulted you or other maintainers in any way. I also don't understand where LtWorf sensed a sales pitch, as neither I nor DB Systel are selling anything to anyone in this room. If you're interested in my motivation and background, let's talk. If you're just here to bash anyone with any connection to a company in the broadest sense, have fun.

Red flag checker

Posted Mar 17, 2025 13:48 UTC (Mon) by vicky@isc.org (subscriber, #162218) [Link] (5 responses)

It is funny that the CLA is a red flag in your system. We added a CLA to one of our projects because *not having one* was considered a negative security factor by the OpenSSF Best Practices badge.

I know some checkers - and again the Best Practices badge, penalize projects that have contributors from only one organization. This is also odd from a security perspective - I can see a concern for project longevity, but not code security - if commits are managed by a small, well-integrated team.

A colleague and I did an informal survey last summer asking users what they look at to determine the risk level of OSS. results are published here: https://ec.europa.eu/eusurvey/publication/RIPE88OpenSourc...

bottom line is, it depends.

Red flag checker

Posted Mar 17, 2025 14:05 UTC (Mon) by neverpanic (subscriber, #99747) [Link]

The OpenSSF best practices, and the OpenSSF Scorecard especially, are questionable.

Don't trust me on this, check what others say, too. Here's for example Daniel Stenberg on record at https://mastodon.social/@bagder/113673188062525753 or https://github.com/curl/curl/discussions/12609.

Or, see https://www.youtube.com/watch?v=J2X1yItdxvo&t=644.

Red flag checker

Posted Mar 18, 2025 10:11 UTC (Tue) by mxmehl (guest, #104271) [Link] (3 responses)

Indeed, I also wonder why OpenSSF Scorecard with a security focus makes recommendations regarding licensing. They recommend either DCO or CLA. The former is a sane recommendation, although it's questionable whether it's legally helpful.

For me/us, a CLA is a red flag (so an indicator for a potential risk) because it poses the risk that an Open Source project becomes proprietary without any notice or debate amongst the contributors. Such a license change fundamentally changes our relationship to a certain piece of software, not only from a commercial/financial perspective but also in terms of our possibilities to engage actively or sponsor in order to contribute to the long-term stability of the project.

But again, this is a red flag, not a no-no. There are "good CLAs", e.g. for projects in trustworthy foundations such as Apache or GNU. In those contexts, a CLA may even be considered to be an advantage as it allows projects to improve the licensing situation, e.g. by dual-licensing, for the benefit of all users, so preserving and even extending freedoms, as opposed to single-vendor projects where a CLA is rather used to restrict freedoms.

Ultimately, this was the gist of my talk: metrics in whatever form are useless unless you interpret them within context. In Open Source, context can be the single projects, the ecosystem around it, and also my very own risk assessments as individual user or organisation.

Red flag checker

Posted Mar 18, 2025 11:05 UTC (Tue) by farnz (subscriber, #17727) [Link] (1 responses)

But again, this is a red flag, not a no-no

In many dialects of English, "red flag" is nearly synonymous with "no-no"; the distinction is that a certain behaviour is a "no-no" if you are considering doing it, while it's a "red flag" if you observe someone else doing it. In both cases, it refers to a behaviour that should deter other people from interacting with you.

So, saying that something's a "red flag, not a no-no" is a bit weird - it can be read as "this is awful behaviour, not awful behaviour", which is nonsense.

Red flag checker

Posted Mar 18, 2025 11:59 UTC (Tue) by paulj (subscriber, #341) [Link]

Not just English. In many, many contexts (motorsports, beaches, firing ranges), a red flag indicates imminent danger and a very strong discouragement, if not outright prohibition, on proceeding any further.

This isn't some abstract, vague symbol really. ;)

Red flag checker

Posted Mar 18, 2025 12:52 UTC (Tue) by vicky@isc.org (subscriber, #162218) [Link]

I was thinking of the DCO requirement on the OpenSSF's 'Best Practices' Badge app. I didn't bother clarifying because I thought this thread had run its course. I don't expect the Best Practices badge to 'improve security' or 'guarantee security' but I do think it provides some useful information, for someone who is looking. From my little survey though, users do not seem to be looking at that badge or equivalent scorecards. Maybe this behavior will change over time....

Vicky

Red flag checker

Posted Mar 17, 2025 18:31 UTC (Mon) by mb (subscriber, #50428) [Link] (6 responses)

Thanks for your reply.

> I don't get where I insulted you or other maintainers in any way.

Well, I'll describe my perspective.

A "red flag" is another word for a big no-go. It flags something that cannot be tolerated and must be fixed.
That is the base for my reaction.

I maintain a couple of projects that many companies use for various purposes. They probably earn good money with them and that is perfectly fine.

For all of them I am the only maintainer.
I really don't see what is wrong with that (a.k.a. red flag) per se.
Almost all smaller Open Source projects are like that.
As long as the maintainer has the resources and can do the job properly then there is no problem at all.

I think this tool is dangerous, because if executed by the wrong people with the wrong understanding about what "single maintainer" actually means it can lead to very bad decisions. Both for the company and also for the project.
I can already see the new rule in the development process that mandates there being no red flags. Yes, I understood that there's no such rule at DB. I'm actually talking about other companies/people here which adopt such a tool. I see similar harmful metrics being enforced daily in my day job.

But it's also true that my reaction is based on my recent bad experience.
I'm not saying that all companies behave badly. In fact, most companies have a very good style of communication with the Open Source community.
But there also is the occasional company which uses Open Source software for bad purposes such as illegal fake products while also violating the license (of course) and then requesting support from the Open Source maintainer. This is what *actually* happened to me recently. Quote from their communication: "its actually working we will just rebrand it [...] Entitled Dick Dev". And this is not the worst part. I don't want to put the more harsh insults here.

This is not a good base for the next one to come along and flag the project as a "red flag".
So, yes. I was probably overreacting. However, I also wanted to make clear that I don't find such a tool acceptable.

Red flag checker

Posted Mar 18, 2025 5:38 UTC (Tue) by pabs (subscriber, #43278) [Link]

What you experienced sounds pretty horrible...

Probably the tool needs to be reworked to point out correct solutions to the potential risks it uncovers.

For example: a single maintainer working on a project in their spare time, with a donation form => donate money, offer to pay for work on the project, and/or assign employees to contribute back.

That of course won't deter bad actors but it should make most uses of the tool result in positive outcomes.

Red flag checker

Posted Mar 18, 2025 8:43 UTC (Tue) by kleptog (subscriber, #1183) [Link]

> I think this tool is dangerous, because if executed by the wrong people with the wrong understanding about what "single maintainer" actually means it can lead to very bad decisions.

In other words, it's just like basing important decisions on any other single metric, because a single metric can never cover all the nuances needed for a good decision. See also GDP, GDP per capita, life expectancy, etc. If decision makers take all the human judgement out of a decision, they get what they deserve.

Red flag checker

Posted Mar 18, 2025 10:33 UTC (Tue) by mxmehl (guest, #104271) [Link] (3 responses)

> A "red flag" is another word for a big no-go. It flags something that cannot be tolerated and must be fixed.
> That is the base for my reaction.

That's probably the main misunderstanding. For me, a red flag is an indicator for a potential risk. As written in an earlier comment above, CLAs are a good example. There are CLAs which I consider to be an immense risk, e.g. in the context of a single-vendor project led by a company that a) is unwilling to cooperate with a community and b) is not specifically trustworthy. However, there may also be good CLAs, e.g. for trustworthy foundations that can use CLAs to secure and extend freedoms for the software users, and take effective measures to avoid any abuse of CLAs.

> I think this tool is dangerous, because if executed by the wrong people with the wrong understanding about what "single maintainer" actually means it can lead to very bad decisions. Both for the company and also for the project.

I am sorry to disappoint you but this "metric" is probably the most-used in all kinds of tools and has been implemented in methodologies and tools for at least 10 years. As with all metrics, it can be misinterpreted and stupid decisions can be made based on it -- which is one of the core reasons why I gave this presentation at FOSS Backstage. And hey, basically all of my own Open Source projects also turn out to score badly on these metrics, whatever that means ;)

And let's be honest: In some contexts, I might indeed come to the conclusion that a project led by a single maintainer is a risk that I cannot accept without any countermeasures, for example in high-risk environments that need to stay stable for 20+ years. But again, a red flag is not a no-no. Nothing prevents such an organisation in this case from approaching the project to offer contributions, co-maintainership, sponsorship, contracted work or even a permanent job. And again, outlining these possibilities, also within the context of upcoming legal requirements from the Cyber Resilience Act and similar legislation, was the main motivation for my talk.

That said, I am sorry you had such bad experiences with some individuals from companies. Unfortunately, the CRA might lead to an increase of such stupid requests and demands, but I do hope that in the long-term it will rather turn companies into good Open Source citizens who acknowledge their responsibilities towards the huge Open Source ecosystem they depend on and ultimately the customers of their products.

Red flag checker

Posted Mar 18, 2025 12:37 UTC (Tue) by pm215 (subscriber, #98099) [Link] (2 responses)

I think the more usual terminology for the flags analogy is that "indicators of potential risk" are *yellow* flags -- indicating "this might be fine, but it's something to investigate", and red flags are reserved for the more serious deal-breaker indicators, things that either mean "avoid" or "this must be addressed immediately". A bit of googling seems to indicate that some people have an extra higher tier of "black flag", but that's new to me.

Red flag checker

Posted Mar 18, 2025 14:23 UTC (Tue) by Wol (subscriber, #4433) [Link]

To me, a red flag is an indication of risk. Yes, often it's a warning to "don't do this", but ?the original? red flag was waved by a man in front of a motor car when they were new and considered dangerous. Then there's the red traffic light, which just means "stop".

As for the black flag, I have seen that for real in motor racing. It tells a driver to come off the track. I think this guy would have been in real trouble because he basically ignored it ...

Cheers,
Wol

Red flag checker

Posted Mar 18, 2025 15:04 UTC (Tue) by paulj (subscriber, #341) [Link]

Black flag probably is from motor racing. It indicates a specific participant must leave the race track, as they are disqualified.

Which bit about this only applies to open source?

Posted Mar 17, 2025 9:49 UTC (Mon) by taladar (subscriber, #68407) [Link] (3 responses)

Most of those risks seem very real for commercial dependencies or software products too, not just open source ones. If anything, you can judge the state of proprietary software less well than you can with open source projects.

Which bit about this only applies to open source?

Posted Mar 17, 2025 15:28 UTC (Mon) by mathstuf (subscriber, #69389) [Link]

I wonder if it falls into the same category of behavior as the "if we don't measure COVID infections, we can ignore the problem" behaviors witnessed during the pandemic.

Which bit about this only applies to open source?

Posted Mar 18, 2025 17:26 UTC (Tue) by NAR (subscriber, #1313) [Link] (1 responses)

With commercial dependencies there are contracts (e.g. service level agreements, support, etc.) in place which might mitigate the problem (even if not technically, then legally and/or financially).

Which bit about this only applies to open source?

Posted Mar 19, 2025 9:31 UTC (Wed) by taladar (subscriber, #68407) [Link]

I have certainly seen a lot of companies work with proprietary projects where the original creator had gone away and nobody else had any source code and so they are just ignoring the problem that the software does not get any updates and might stop working any day for a variety of reasons (e.g. compliance with changing laws, newly discovered security issues or critical bugs,...)


Copyright © 2025, Eklektix, Inc.
This article may be redistributed under the terms of the Creative Commons CC BY-SA 4.0 license
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds