
Libxml2's "no security embargoes" policy

By Joe Brockmeier
June 25, 2025

Libxml2, an XML parser and toolkit, is an almost perfect example of the successes and failures of the open-source movement. In the 25 years since its first release, it has been widely adopted by open-source projects, commercial software vendors, and governments. It also illustrates that while many organizations love using open-source software, far fewer see value in helping to sustain it. That has led libxml2's current maintainer to reject security embargoes and has sparked a discussion about maintenance terms for free and open-source projects.

A short libxml2 history

The original libxml, also known as gnome-xml, was written by Daniel Veillard for the GNOME project. He also developed its successor, libxml2, which was released in early 2000 under the MIT license, even though GNOME applications tended to be under the GPLv2.

In the early 2000s, Veillard seemed eager to have others adopt libxml2 outside the GNOME project. It was originally hosted on its own site rather than on GNOME infrastructure. Libxml2 is written in C, but has had language bindings for C++, Java, Pascal, Perl, PHP, Python, Ruby, and more. The landing page listed a slew of standards implemented by libxml2, as well as the variety of operating systems that it supported, and boasted that it "passed all 1800+ tests from the OASIS XML Tests Suite". The "reporting bugs and getting help" page gave extensive guidance on how to report bugs, and also noted that Veillard would attend to bugs or missing features "in a timely fashion". The page, captured by the Internet Archive in 2004, makes no mention of handling security reports differently from bug reports, but those were simpler times.

One can see why organizations felt comfortable, and even encouraged, to adopt libxml2 for their software. Why reinvent the extremely complicated wheel when someone else has not only done it but also bragged about their wheel's suitability for purpose and given it a permissive license to boot?

By the late 2000s, the project had matured, and the pace of releases slowed accordingly. Veillard continued to maintain the project, but skimming through the GNOME xml mailing list shows that his attention was largely elsewhere. Nick Wellnhofer began to make regular contributions to the project around 2013, and by 2017 he was doing a great deal of work on the project, eventually doing most of the work on releases, though Veillard was still officially sending them out. Wellnhofer was also making similar contributions to a related project, libxslt, a processor for Extensible Stylesheet Language Transformations (XSLT), which are used to transform XML documents into other XML documents, HTML, plain text, and more.

I want my libxml2

In April 2021, Stefan Behnel complained that it had been almost 18 months since the last libxml2 release. "There have been a lot of fixes during that time, so, may I kindly ask what's hindering a new release?" Veillard replied that the reason was that he was too busy with work, and there was "something I would need to get in before a release". That something seems to have been a security fix for CVE-2021-3541, a flaw in libxml2 that could lead to a denial of service. The releases of libxml2 2.9.11, which fixed the CVE, and 2.9.12 seem to have been Veillard's last contributions to the project.

Wellnhofer had become the de facto maintainer of libxml2 and libxslt as Veillard was fading away from them, but he temporarily stepped down in July 2021. He had been able to fund his work through Chrome bug bounties and other Google programs, but: "returns from security research are diminishing quickly and I see no way to obtain a minimal level of funding anymore".

Veillard thanked Wellnhofer for his work, and said he was not sure that he would be able to ensure the same level of care for the projects on his own: "that's obvious for anybody monitoring those lists lately".

In January 2022, Wellnhofer announced that he was able to resume maintenance of libxml2 and libxslt through 2022, thanks to a donation from Google. He planned to move the projects to GNOME's infrastructure and resume releases, plus set up an official way to sponsor libxml2 development. Ultimately, he chose Open Source Collective as a fiscal host. (LWN covered Open Source Collective in 2024.) To date, it appears that the project has received the immense sum of $11,000, most of which was in the form of a $10,000 donation from Google, which appears to be the funding Wellnhofer received for maintenance of libxml2 through 2022.

Irresponsible behavior

Fast-forwarding to 2025, Wellnhofer opened an issue on May 8 in the libxml2 GitLab repository to announce a new security policy for the project. He said that he was spending several hours each week dealing with security issues, and that was unsustainable for an unpaid volunteer.

As an example of what Wellnhofer was faced with, and a hint as to what may have been the final straw, there are currently four bugs marked with the security label in the libxml2 issue tracker. Three of those were opened on May 7 by Nikita Sveshnikov, a security researcher who works for a company called Positive Technologies. One of the issues is a report about a null-pointer dereference that could lead to a denial of service. It includes a request for Wellnhofer to provide a CVE number for the vulnerability and information about an expected patch date. Note that neither libxml2 nor GNOME is a CVE Numbering Authority (CNA).

One can debate whether the vulnerabilities reported by Sveshnikov and other researchers have much value. Wellnhofer argues he has fixed about 100 similar bugs and does not consider that class of bugs to be security-critical. Even if it is a valid security flaw, it is clear why it might rankle a maintainer. The report is not coming from a user of the project, and it comes with no attempt at a patch to fix the vulnerability. It is another demand on an unpaid maintainer's time so that, apparently, a security research company can brag about the discovery to promote its services.
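
To make the class of bug concrete, here is a minimal, hypothetical C sketch of the pattern (it is not code from libxml2): an allocation fails, either under real memory pressure or because a fuzzer injected the failure, the error path is missing, and a later dereference of the null pointer crashes the process. That is a denial of service, but usually not memory corruption:

    #include <stdlib.h>
    #include <string.h>

    struct buf {
        char   *data;
        size_t  len;
    };

    /* Buggy version: if either malloc() fails, NULL is dereferenced. */
    struct buf *buf_dup(const char *src, size_t len)
    {
        struct buf *b = malloc(sizeof(*b));
        b->data = malloc(len);        /* crashes here if b is NULL */
        memcpy(b->data, src, len);    /* crashes here if b->data is NULL */
        b->len = len;
        return b;
    }

    /* Fixed version: every allocation is checked and unwound on failure. */
    struct buf *buf_dup_checked(const char *src, size_t len)
    {
        struct buf *b = malloc(sizeof(*b));
        if (b == NULL)
            return NULL;
        b->data = malloc(len);
        if (b->data == NULL) {
            free(b);
            return NULL;
        }
        memcpy(b->data, src, len);
        b->len = len;
        return b;
    }

Each individual fix is mechanical; finding and patching hundreds of such paths in a large C code base is the kind of unglamorous work that lands on the maintainer.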

If Wellnhofer follows the script expected of a maintainer, he will spend hours fixing the bugs, corresponding with the researcher, and releasing a new version of libxml2. Sveshnikov and Positive Technologies will put another notch in their CVE belts, but what does Wellnhofer get out of the arrangement? Extra work, an unwanted CVE, and negligible real-world benefit for users of libxml2.

So, rather than honoring embargoes and dealing with deadlines for security fixes, Wellnhofer would rather treat security issues like any other bug; the issues would be made public as soon as they were reported and fixed whenever maintainers had time. Wellnhofer also announced that he was stepping down as the libxslt maintainer and said it was unlikely that it would ever be maintained again. It was even more unlikely, he said, with security researchers "breathing down the necks of volunteers."

Treating security flaws as regular bugs might make some downstream users nervous, but Wellnhofer hopes it will encourage more contributions:

The more I think about it, the more I realize that this is the only way forward. I've been doing this long enough to know that most of the secrecy around security issues is just theater. All the "best practices" like OpenSSF Scorecards are just an attempt by big tech companies to guilt trip OSS maintainers and make them work for free.

GNOME contributor Michael Catanzaro worried that security flaws would be exploited in the wild if they were treated like regular bugs, and suggested alternate strategies for Wellnhofer if he was burning out. He agreed that "wealthy corporations" with a stake in libxml2 security issues should help by becoming maintainers. If not, "then the consequence is security issues will surely reach the disclosure deadline (whatever it is set to) and become public before they are fixed".

Wellnhofer was not interested in finding ways to put a band-aid on the problem; he said that it would be better for the health of the project if companies stopped using it altogether:

The point is that libxml2 never had the quality to be used in mainstream browsers or operating systems to begin with. It all started when Apple made libxml2 a core component of all their OSes. Then Google followed suit and now even Microsoft is using libxml2 in their OS outside of Edge. This should have never happened. Originally it was kind of a growth hack, but now these companies make billions of profits and refuse to pay back their technical debt, either by switching to better solutions, developing their own or by trying to improve libxml2.

The behavior of these companies is irresponsible. Even if they claim otherwise, they don't care about the security and privacy of their users. They only try to fix symptoms.

He added that he would love to mentor new maintainers for libxml2, "but there simply aren't any candidates".

The viewpoint expressed by Wellnhofer is understandable, though one might argue about the assertion that libxml2 was not of sufficient quality for mainstream use. It was certainly promoted on the project web site as a capable and portable toolkit for parsing XML. Open-source proponents spent much of the late 1990s and early 2000s trying to entice companies to trust the quality of projects like libxml2, so it is hard to blame those companies now for believing it was suitable for mainstream use at the time.

However, Wellnhofer's point that these companies have not looked to improve or care for libxml2 in the intervening years is entirely valid. It seems to be a case of "out of sight, out of mind"; as long as there are no known CVEs plaguing the many open-source libraries that their applications depend on, nobody at Apple, Google, Microsoft, or any of the other companies seems to care much about the upkeep of these projects. When a vulnerability is found, the maintainer is seemingly expected to spring into action out of a sense of responsibility to the larger ecosystem.

Safe to say no

Wellnhofer's arguments about corporate behavior have struck a chord with several people in the open-source community. Ariadne Conill, a long-time open-source contributor, observed that corporations using open source had responded with "regulatory capture of the commons" instead of contributing to the software they depend on.

She suggested that maintainers lacked the "psychological safety" to easily say no. They can say no to corporate requests; doing so, however, means weighing the risk that "the cost of doing so may negatively impact the project's ability to meet its end goal". In that light, maintainers may opt to concede to requests for free labor rather than risking the unknown consequences.

In response to Wellnhofer's change in security policy for libxml2, Mike Hoye proposed that projects adopt public maintenance terms that would indicate "access to code is no promise of access to people". The terms for a project would be included as a MAINTENANCE-TERMS.md file in the top-level directory, similar to the README.md and CONTRIBUTING.md files included with many projects these days. The sample maintenance terms that Hoye provided state that the software is provided as-is and disclaim any promises, including response time, disclosure schedules, or any "non-contractual obligations or conventions, regardless of their presumed urgency or severity".

Hoye said that the point of the maintenance terms is to deliberately build a culture of social permission where maintainers feel safe saying "no". Otherwise, he said:

Someday, somebody's going to come to you and say, I'm from Apple, I'm from Amazon, I'm from Project Zero and you need to drop everything because your project is the new heartbleed or Log4j or who knows what and the world is falling over and if that psychological offramp isn't there, if you haven't laid out clearly what PROVIDED AS-IS means and how you're going to act about it ahead of time, saying "I'll be at my kid's recital" or "I'm on vacation" or just "no" is extremely difficult.

Chris Siebenmann said that he thinks Wellnhofer's rejection of security embargoes is "an early sign of more of this to come, as more open source maintainers revolt". The current situation, Siebenmann said, is increasingly bad for the maintainers involved and is not sustainable. He now draws a sharp distinction between the corporate use of open-source software and independent projects, such as Debian or the BSDs, run by volunteers; he expects that others will be doing the same in the future.

Maintainers may not want to say no to other volunteers. But, Siebenmann said, if a corporation shows up with a security issue, they can point to the maintenance terms—because corporations are not using open source as part of a cooperative venture and are not people "even if they employ people who make 'people open source' noises".

Wellnhofer's stance and Hoye's idea seem to be resonating with other maintainers who have strong feelings about corporate open-source behavior. Whether open-source maintainers adopt MAINTENANCE-TERMS.md files as a common practice remains to be seen. The increasing frequency of conversations about funding open source, and whether corporations are doing their share, does suggest that something needs to change soon if open source is to be sustainable and not just a sucker's game for maintainers.




"There simply aren't any candidates"

Posted Jun 25, 2025 17:04 UTC (Wed) by pizza (subscriber, #46) [Link] (14 responses)

And there won't ever be, because why work for free when everyone _but_ the project maintainers are making money off of it?

(As an aside, I think this story would have unfolded quite differently had libxml2 been copyleft-licensed...)

"There simply aren't any candidates"

Posted Jun 25, 2025 17:07 UTC (Wed) by rahulsundaram (subscriber, #21946) [Link] (12 responses)

> (As an aside, I think this story would have unfolded quite differently had libxml2 been copyleft-licensed...)

Are you thinking adoption by Google, Microsoft etc would have never happened or something else?

"There simply aren't any candidates"

Posted Jun 25, 2025 17:45 UTC (Wed) by HenrikH (subscriber, #31152) [Link] (11 responses)

Yes, none of Apple, Google, and Microsoft would have touched this with a 10ft pole if it had been GPL. But I think pizza was thinking more about the fact that those 3 might very well be fixing their internal forks of libxml2 in secret for all we know.

"There simply aren't any candidates"

Posted Jun 25, 2025 19:02 UTC (Wed) by sageofredondo (subscriber, #157944) [Link] (9 responses)

> those 3 might very well be fixing their internal forks of libxml2 in secret for all we know.

I like this argument. Even if they were to use a slightly weaker copyleft that allowed proprietary linking without releasing all the source code, like MPL 2.0, they would have had to publish changes and that means they need to cooperate with the upstream project for security disclosure.

"There simply aren't any candidates"

Posted Jun 26, 2025 0:37 UTC (Thu) by comex (subscriber, #71521) [Link] (8 responses)

I’m skeptical of this argument.

If a company is responsible and cares about avoiding security risk for other users of the upstream library, then they’ll participate in coordinated disclosure, regardless of license.

If the company is irresponsible and does not care, then they can unilaterally ship a fixed version of the library, with or without source code. This does create the risk that attackers will reverse engineer the bug from the fix, and use it to attack upstream users (or users of the company’s fork who haven’t updated).

But first of all, that risk exists regardless of whether the fixed version is distributed with source code or only as a binary. It’s a common practice to use binary reverse-engineering tools to identify patched vulnerabilities. This is more work than just comparing the source code, but that doesn’t mean it doesn’t happen. Companies know this and keep it in mind when deciding when to release security fixes (source: personal experience). Therefore, whether the library is copyleft shouldn’t have much impact on the company’s decisions.

If the fixed library isn’t distributed at all but only used on the company’s own servers, then that risk doesn’t exist as such, but most copyleft licenses don’t trigger in that situation anyway.

Second of all, to the extent that shipping a fix to copylefted code does increase risk for upstream users, why would that stop the company from doing so? We just assumed that the company was irresponsible and didn’t care about others. At most, the increased risk for upstream users would help people shame the company into behaving differently, or perhaps even threaten legal consequences down the line. But that seems like a very indirect approach.

(Shipping a fix also increases risk for the company’s own users who haven’t updated, but that happens regardless of when the fix is released, and the fix has to be released eventually. If anything, these users’ interests are best served by releasing the fix as soon as possible and not waiting for coordinated disclosure.)

"There simply aren't any candidates"

Posted Jun 26, 2025 0:55 UTC (Thu) by sageofredondo (subscriber, #157944) [Link] (7 responses)

> If a company is responsible and cares about avoiding security risk for other users of the upstream library, then they’ll participate in coordinated disclosure, regardless of license.

This entire article comes down to large tech companies 'not' caring about this. If they are legally required to (i.e., they can be sued over it), then they will care more about sharing their fixes.

> Therefore, whether the library is copyleft shouldn’t have much impact on the company’s decisions.

Company lawyers very much would disagree. They have very strong opinions on how OSS licenses affect company decisions. :-)

Much of this wall of text you post is a red herring about sharing source code fixes over speculation that large tech companies have forked code fixes. This is a solved issue; many companies publish security fixes for the kernel and other software all the time. Your arguments almost read like a random anti-GPL commenter from the 90s.

"There simply aren't any candidates"

Posted Jun 26, 2025 6:51 UTC (Thu) by comex (subscriber, #71521) [Link] (6 responses)

It sounds like I misinterpreted your last message. I assumed you were arguing that a copyleft license would make companies want to cooperate with upstream out of security or reputational concerns.

Instead you are claiming that it would directly force them to do so?

No common copyleft license requires companies to contribute fixes upstream at all, let alone participate in coordinated disclosure. (In fact, license requirements to contribute fixes upstream have traditionally been considered to render a license non-free. [1])

Copyleft licenses only require companies to publish source code (or sometimes just offer to send source code) for the specific version of the code they're distributing binaries of. Which is very different from publishing something that would actually be useful to the upstream project.

I'm not speaking hypothetically. Here is Apple’s distribution of libxml2:

https://github.com/apple-oss-distributions/libxml2

Yes, Apple releases source for their patched version even though it’s not copyleft. But that’s the only merit here. In every other way, this release is the kind of ‘code thrown across the wall’ that's typical of what copyleft licenses extract from companies without strong open source cultures.

First notice that the Git history is unrelated to the upstream Git repository. This is based on an older practice of just releasing tarballs for each version. Now Apple has made this into a Git repo, but the repo is just one commit per release.

Also notice that the original libxml2 code is actually in a subdirectory. The content outside the subdirectory mainly consists of an Xcode project: Apple has replaced the entire build system with an Xcode-based one.

Anyway, we need to identify the corresponding upstream version. In my experience, this can sometimes be nontrivial. But in this case, there is a plist file indicating that the base tarball is:

https://gitlab.gnome.org/GNOME/libxml2/-/archive/v2.9.13/...

(The same plist also has a list of “OpenSourceModifications”, but they’re just links to Apple’s internal bug tracker without any accompanying explanation.)

Next we can diff that tarball against Apple’s release. Here is the resulting *448KB* diff:

https://gist.github.com/comex/6f300555ef8ddf27bcd18aba9a0...

It contains tons of changes mashed together. Some of them actually have explanatory comments, including some fun ones:

// libxml2 v2.9 changed the HTML parser's handling of whitespace in a way that broke
// H&R Block at Home 2010. Detect H&R Block at Home 2010 and mimic the old parser behavior.

But most of the changes have no explanation. The internal commit messages probably did, but we can't see those.

Many of the changes appear to be backported from newer versions of libxml2. (The base version 2.9.13 is three years old, and there have been a lot of changes since then.) But other changes in the diff are original.

Of the original changes, many are trying to better adapt libxml2 to Apple platforms, such as by suppressing Clang warnings, adopting the os_log API, and changing link arguments. These improvements might theoretically be worth carrying upstream in some form, but mostly not in their existing form, which is a pile of `#ifdef __APPLE__` blocks in otherwise platform-independent code. At least they bothered to include `#ifdef __APPLE__` rather than just inserting changes that assume an Apple OS. But they likely didn't actually try to compile it on any other OS, and I bet it wouldn't compile.
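
As a hypothetical illustration of that pattern (this is not an excerpt from Apple's diff), platform-specific behavior such as routing diagnostics through os_log ends up wedged into otherwise portable code behind #ifdef __APPLE__, rather than behind the library's own portability layer:

    #include <stdio.h>
    #ifdef __APPLE__
    #include <os/log.h>
    #endif

    /* Hypothetical helper, not taken from Apple's patches. */
    static void report_error(const char *msg)
    {
    #ifdef __APPLE__
        /* Send diagnostics to Apple's unified logging system. */
        os_log_error(OS_LOG_DEFAULT, "libxml2: %{public}s", msg);
    #else
        fprintf(stderr, "libxml2: %s\n", msg);
    #endif
    }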

More to the point, many of the changes look potentially security-related! Lots of integer-overflow-related stuff. But these appear to be mostly backports. Are any of them not backports? (It looks like most of the relevant code has been refactored upstream, so the changes aren't one-to-one.) If they're not backports, are they real bugfixes or just gratuitous hardening? If they're real bugfixes, are they applicable to the latest upstream version? And how severe are the bugs?

Good luck picking through all the changes and figuring that out for each one.

It's not impossible. An attacker might be motivated enough to do it. Or a highly motivated upstream. But not most upstreams. And definitely not an upstream that's already struggling to deal with the security issues that *are* being properly reported to it.

Thus, at least in this case, source releases do nothing to help upstream security. Perhaps some hypothetical copyleft license could require downstreams to handle security 'properly'. But not any of the current ones.

[1] https://www.gnu.org/licenses/license-list.html#eCos11

"There simply aren't any candidates"

Posted Jun 26, 2025 16:49 UTC (Thu) by clump (subscriber, #27801) [Link]

Fascinating. Thanks for sharing your thoughts.

"There simply aren't any candidates"

Posted Jun 27, 2025 19:42 UTC (Fri) by raven667 (subscriber, #5198) [Link]

Thanks for taking a look at it and finding evidence for your thoughts. Even though FOSS licensing is no guarantee of a collaborative upstream-first process, having the code changes available at all, so that we can have concrete, evidence-based discussions about them, is a unique benefit that in-house private development lacks.

"There simply aren't any candidates"

Posted Jun 28, 2025 2:45 UTC (Sat) by sageofredondo (subscriber, #157944) [Link] (1 responses)

> It sounds like I misinterpreted your last message. I assumed you were arguing that a copyleft license would make companies want to cooperate with upstream out of security or reputational concerns.

A little, see below.

> Instead you are claiming that it would directly force them to do so?

Did not say that at all.

I was responding to your poor arguments that a license does not have much impact on the company's behavior; it very much does. If Microsoft knowingly links GPL source code into the Windows kernel, that code must be released under the GPL. Proprietary software companies go out of their way to avoid GPL software for a reason. This is what I meant with the lawyers.

> No common copyleft license requires companies to contribute fixes upstream at all, let alone participate in coordinated disclosure.

Yes, but once again, a copyleft license requires companies to share changes if they distribute a binary. Companies cooperate over CVE releases. Therefore, if they want to reduce the rebasing burden of their forks, they will coordinate with upstream releases.

> (In fact, license requirements to contribute fixes upstream have traditionally been considered to render a license non-free. [1])

> [1] https://www.gnu.org/licenses/license-list.html#eCos11

Did you add the wrong link? I am not familiar with eCos, or with why this is an example of required upstream fixes making a license non-free. https://www.gnu.org/licenses/license-list.html#eCos11

> Thus, at least in this case, source releases do nothing to help upstream security. Perhaps some hypothetical copyleft license could require downstreams to handle security 'properly'. But not any of the current ones.

Your source digging does prove your argument that a company can simply dump source code to comply with copyleft requirements and not contribute back in a helpful manner as far as security is concerned.

But why does Apple do this? They are not required to by the license. If they rebase, then they have to deal with conflicts when upstream's fixes change things slightly differently than their own. It feels like an organizational decision with minimal compliance by the people doing the work. This is a waste of developer time, but Apple can afford it with their margins. Most companies do not have their margins, and once their source-code changes are 'required' to be shared, they have a monetary incentive to contribute back to reduce their rebase burden.

Not responding to most of your post since you jumped on an assumption and wrote another wall of text.

"There simply aren't any candidates"

Posted Jun 30, 2025 9:06 UTC (Mon) by NYKevin (subscriber, #129325) [Link]

> Did you add the wrong link? I am not familiar with eCos, or with why this is an example of required upstream fixes making a license non-free. https://www.gnu.org/licenses/license-list.html#eCos11

They did not add the wrong link; you failed to read it:

> This was the old license of eCos. It is not a free software license, because it requires sending every published modified version to a specific initial developer. [...]

"There simply aren't any candidates"

Posted Jul 6, 2025 12:31 UTC (Sun) by poruid (guest, #15924) [Link] (1 responses)

Seems the EU Cyber Resilience Act makes not up-streaming security fixes a violation, regardless of any licence terms.

"There simply aren't any candidates"

Posted Jul 6, 2025 13:47 UTC (Sun) by kleptog (subscriber, #1183) [Link]

Indeed, CRA [1] Article 13.6:

> 6. Manufacturers shall, upon identifying a vulnerability in a component, including in an open source-component, which is integrated in the product with digital elements report the vulnerability to the person or entity manufacturing or maintaining the component, and address and remediate the vulnerability in accordance with the vulnerability handling requirements set out in Part II of Annex I. Where manufacturers have developed a software or hardware modification to address the vulnerability in that component, they shall share the relevant code or documentation with the person or entity manufacturing or maintaining the component, where appropriate in a machine-readable format.

Obviously it only applies to manufacturers that want to sell digital products in the EU market, but it's something. It would definitely include Apple.

[1] https://eur-lex.europa.eu/eli/reg/2024/2847/oj/eng

"There simply aren't any candidates"

Posted Jun 26, 2025 0:20 UTC (Thu) by mbp (subscriber, #2737) [Link]

Big tech companies use plenty of GPL software in builds that they don't distribute to third parties. No redistribution, no obligation to distribute the source.

But, anyhow, I think licensing is orthogonal to the issues of maintainer burnout, security researcher incentives, etc.

"There simply aren't any candidates"

Posted Jun 26, 2025 15:26 UTC (Thu) by ebassi (subscriber, #54855) [Link]

> (As an aside, I think this story would have unfolded quite differently had libxml2 been copyleft-licensed...)

Funnily enough, libxml2 used to be LGPL 2.0 or later, and then got relicensed to MIT.

What license doesn't already say that?

Posted Jun 25, 2025 18:19 UTC (Wed) by DimeCadmium (subscriber, #157243) [Link] (3 responses)

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

Not good enough

Posted Jun 25, 2025 21:51 UTC (Wed) by skybrian (guest, #365) [Link] (2 responses)

The license doesn’t tell us whether the software should be used or not. There’s lots of well-maintained software that has no warranty.

Ideally, the README should start with a notice that the library is deprecated and should no longer be used. Then the maintainers can move on in peace.

In practice, though, the “Security” section in the README says “stay away” clearly enough:

> This is open-source software written by hobbyists, maintained by a single volunteer, badly tested, written in a memory-unsafe language and full of security bugs. It is foolish to use this software to process untrusted data. As such, we treat security issues like any other bug. Each security report we receive will be made public immediately and won't be prioritized.

https://gitlab.gnome.org/GNOME/libxml2

Not good enough

Posted Jun 25, 2025 22:41 UTC (Wed) by marcusb (guest, #16598) [Link] (1 responses)

Incorrect. It is perfectly good enough. The license literally states what the terms for providing the software are, and what users should *depend on* from maintainers.

> THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,

Not acceptable to you? Buy a bespoke support contract, or accept whatever free support you *happen* to receive.

This really is not that difficult.

Not good enough

Posted Jun 26, 2025 10:55 UTC (Thu) by dmv (subscriber, #168800) [Link]

This is the correct answer. Those terms are included in order to disclaim the traditional implied warranties of merchantability and fitness for a particular purpose.

Understandable

Posted Jun 25, 2025 20:33 UTC (Wed) by wtarreau (subscriber, #51152) [Link] (18 responses)

I understand and generally agree with this maintainer's point regarding getting rid of embargoes. *Most* of the time they're requested by the reporter to have enough time to prepare self-promotion material. On the kernel security list we've seen a few cases where reporters said "please don't publish this yet, we're submitting a paper for next year's conference, hoping it will be accepted" (needless to say that it didn't go as planned).

The only legitimate ones to ask for an embargo are the downstream consumers (distros, etc.) who need time. But even then it's common to fall into the trap where everyone needs to check with their neighbor first and manager second, and at the end they ask you for 15 days during which basically nothing will happen. The worst I've seen was on vendor-sec around 20 years ago; there were so many people that it looked like a competition, and it was impossible to have a reasonable embargo length (30 days were not that uncommon in its last years).

The problem is that there are still some people who believe that exposure starts at the publication, which is wrong. Exposure already exists while people are talking, and embargoes only extend it further and place users even more at risk. Many embargoes have been interrupted due to noticed exploitation in the wild.

Another problem is that when you fix a bug, often you notice other ramifications around that place and you want to fix them. But you can't until the first one is published, and you can't always reorder stuff due to dependencies. So embargoes also prevent you from correctly fixing bugs. And what's particularly bad is that it's extremely common to improperly fix a vulnerability (regressions are super frequent because you focus on the bug and not necessarily on the rare odd cases that you had forgotten about). But when you deal with embargoes and a lot of noise is made at release date, users hear the noise, upgrade and think they're safe, then they listen much less. But often another version is needed next in order to fix the regressions or to completely fix the bug, based on feedback. What's the point of waiting in the first place if it's to deliver a half-assed fix?

These days I almost never accept embargoes either on my projects. I consider that the standard rule is no embargo, and that when it's really critical (i.e. about once a year), anyway, each time it's different and each time you find a good reason for breaking your own rules, thus in this case you can make an exception and accept an embargo of a suitable length (including waiting for a distro maintainer to be back at work etc). So I prefer to say "no to embargoes unless *I* consider the case special enough to derogate from that rule". Then this is negotiated with those I'm relying on and we can collectively act for the best of our users. For now this has simplified our task (maintainers and distro packagers), significantly reduced the overhead caused by this ridiculous theater and resulted in delivering important fixes very quickly to exposed users.

Note that I understand that for a lib it can be more complicated because sometimes it can require some rebuilds of other components before reaching users. That's not a reason for the reporter to decide on the embargo though.

When it comes to CVEs that reporters request to enhance their resume ("Curriculum Vitae Enhancer"), I simply tell them that I'm not going through that pain anymore but am not opposed to them doing it. Usually they do it the first time, and the second time they just report a bug without asking for a CVE because they realize how annoying it is for very little benefit. The rare cases where we do are precisely when criticality requires an embargo, and in this case, usually distros like Debian or SuSE are super fast to allocate one and pre-fill it so that we can go on without having to deal with that.

Overall fixing bugs including security-related ones is a team's effort. And it works much better with good communication and coordination than with settling on embargoes and paper work. By involving interested parties early, we can each adapt to our respective capabilities and availability and be super efficient without putting too much burden on anyone in the chain. For this reason our distro maintainers are very reasonable as well and always have the last word if they ask for something to ease their task (CVE or embargo typically) because that's extremely rare and we can more easily respect their constraints that we know are real in this case.

Understandable

Posted Jun 26, 2025 0:17 UTC (Thu) by mbp (subscriber, #2737) [Link] (2 responses)

Well said

(And I wish LWN had an upvote or +1 button but hey, it's also nice to be back in the 90s before social media garbage.)

Understandable

Posted Jun 26, 2025 3:27 UTC (Thu) by wtarreau (subscriber, #51152) [Link] (1 responses)

> And I wish LWN had an upvote or +1 button but hey, it's also nice to be back in the 90s before social media garbage.

Agreed (2nd part of the sentence). I've seen some other places become much friendlier *after* they dropped the +1 button, which tends to excite participants and encourage them to respond again. But we're getting off topic now :-)

Understandable

Posted Jun 26, 2025 22:18 UTC (Thu) by sramkrishna (subscriber, #72628) [Link]

Actually I felt that Phoronix forums got a lot better after adding the +1. With the people who were being bigots, you could see that most of the readers didn't care for their comments.

Understandable

Posted Jun 26, 2025 7:02 UTC (Thu) by chris_se (subscriber, #99706) [Link] (2 responses)

> I understand and generally agree with this maintainer's point regarding getting rid of embargoes. *Most* of the time they're requested by the reporter to have enough time to prepare self-promotion material. On the kernel security list we've seen a few cases where reporters said "please don't publish this yet, we're submitting a paper for next year's conference, hoping it will be accepted" (needless to say that it didn't go as planned).

Also note that the embargo stuff is kinda backwards now: initially it was always the maintainers of the software (often proprietary) who requested embargoes and didn't want security researchers to publish, in order for them to be able to fix it first. And I remember that this was even a hotly debated topic not too long ago (keywords "responsible disclosure" vs. "full disclosure"). The fact that embargoes are now requested by the security reporters is a perversion of the entire process.

I don't begrudge security researchers wanting to earn money with their job, it just shouldn't be at the expense of the maintainers and users of the software they are analyzing.

> These days I almost never accept embargoes either on my projects. I consider that the standard rule is no embargo, and that when it's really critical (i.e. about once a year), anyway, each time it's different and each time you find a good reason for breaking your own rules, thus in this case you can make an exception and accept an embargo of a suitable length (including waiting for a distro maintainer to be back at work etc). So I prefer to say "no to embargoes unless *I* consider the case special enough to derogate from that rule". Then this is negotiated with those I'm relying on and we can collectively act for the best of our users. For now this has simplified our task (maintainers and distro packagers), significantly reduced the overhead caused by this ridiculous theater and resulted in delivering important fixes very quickly to exposed users.

Fully agree with that sentiment. And once security fixes are out, there's always the time between the fixes being out and them being installed, so even with an embargo there will always be a window in which some systems will be potentially subject to exploitation.

Understandable

Posted Jun 26, 2025 17:13 UTC (Thu) by wtarreau (subscriber, #51152) [Link] (1 responses)

> Also note that the embargo stuff is kinda backwards now: initially it was always the maintainers of the software (often proprietary) who requested embargoes and didn't want security researchers to publish, in order for them to be able to fix it first. And I remember that this was even a hotly debated topic not too long ago (keywords "responsible disclosure" vs. "full disclosure"). The fact that embargoes are now requested by the security reporters is a perversion of the entire process.

That's indeed a very good observation which just brought back old memories of when that was still the case! This clearly illustrates the perversion of this recent situation! I think it has shifted very slowly without anyone noticing...

> I don't begrudge security researchers wanting to earn money with their job, it just shouldn't be at the expense of the maintainers and users of the software they are analyzing.

Exactly!

Understandable

Posted Jun 27, 2025 12:25 UTC (Fri) by corsac (subscriber, #49696) [Link]

I had (as a Debian security team member) researchers asking for long embargos because they had an article submitted/accepted to a security conference later that year.

I respectfully declined to prevent all publication before the conference. It doesn't really make sense and is not really needed in the general case. I can sympathize a bit with the "grand reveal" effect where you drop a nice vulnerability in front of a large audience at a security conference (I might have done a bit of that myself in the past), but in reality you don't lose anything by presenting the *work* you did to discover it, the thought process, and maybe the work you did with upstream and downstreams and other partners to actually fix the vulnerability (some researchers do that).

So yeah, at least for vulnerabilities discovered after some *real work* by security researchers, I think it's just fine to politely decline an embargo, fix the vulnerability publicly and give credit right now. And the researchers can refer to the advisory when doing their talk.

If the success of that talk only relies on the fact the vulnerability will be disclosed live, then it's likely not that interesting to present it at all.

Embargoes

Posted Jun 26, 2025 11:13 UTC (Thu) by rwmj (subscriber, #5474) [Link] (9 responses)

I think you're right that embargoes must only be used in extremely critical situations. I'm thinking the bar is as high as the xz / ssh thing last year, where a critical service was backdoored, and distros did (genuinely) need a few days to update packages and make sure everything was ready on the mirrors, so that `dnf update` would work as soon as the problem was announced. Anything less than that, well, it's open source code isn't it, the clue is in the name.

Embargoes

Posted Jun 29, 2025 9:43 UTC (Sun) by anton (subscriber, #25547) [Link] (8 responses)

My understanding of the xz backdoor is that it was only in bleeding-edge rolling-release versions of the distributions at the time, and that the backdoor cannot be exploited by arbitrary attackers (an attacker needs a key from the backdoor-creator), although the latter property probably was not obvious in the early days of the discovery.

In any case, it's not clear that an embargo was appropriate in this case (and was there an embargo? I don't remember one). Maybe this is generalizable: In case of an intentional backdoor, an embargo is less useful than for an unintentional vulnerability, because the backdoor is already known to the attacker and is probably used by the attacker; the sooner the backdoor is closed on an individual system, the better, even if the closing happens in an uncoordinated fashion.

Embargoes

Posted Jun 29, 2025 10:51 UTC (Sun) by rwmj (subscriber, #5474) [Link] (7 responses)

Yes there was an embargo, although it was broken early because one of the distros unintentionally published a commit referencing the backdoor. We had to rush to get everything ready over a long Saturday night (IIRC, could have been a Sunday).

As soon as the backdoor was known in public it was quite obvious that people would want to update immediately. At that time (and in fact, even now) the full scope of the backdoor was not fully understood, and even if it was, "but it's only exploitable by the attacker!11!!" is hardly a reasonable position to take.

Embargoes

Posted Jun 29, 2025 13:38 UTC (Sun) by anton (subscriber, #25547) [Link] (6 responses)

So the embargo was there to allow the distribution people to have an easier time, at the cost of exposing the users of the faster distributions to the backdoor for longer?

Concerning "reasonable position", in what way do you consider it unreasonable?

"[...] !11!!"
Not sure where that is coming from. Broken browser?

Embargoes

Posted Jul 1, 2025 14:45 UTC (Tue) by wtarreau (subscriber, #51152) [Link] (3 responses)

> So the embargo was there to allow the distribution people have an easier time at the cost of exposing the users of the faster distributions to the backdoor for longer?

Sadly, this has always been how embargoes work. If you want "reasonable" ones, the best way to act is to agree on the shortest that is accepted by at least one distro and let the other ones figure out how to bypass the heavy internal paperwork that slows them down to finally get their packages in place in time. Hint: curiously it always works, because everyone can deal with emergencies. Nowadays critical issues seem to be handled as "business as usual" and I remember seeing cases where distros were asking for 14 days for an RCE because you know, the process chain is long before packages arrive... But when you remove 2 managers and 4 weekly meetings from the process, it suddenly becomes possible to build, run the packages through the CI and have them ready for download in a few hours to days. So yes, it's important to pressure downstream to be reasonable by aligning with the fast-acting ones.

Embargoes

Posted Jul 4, 2025 17:36 UTC (Fri) by anton (subscriber, #25547) [Link] (2 responses)

> Sadly, this has always been how embargoes work.
For a vulnerability (i.e., not a backdoor), the idea is that the black hats do not know the vulnerability yet, and they will learn about it when somebody publishes a bugfix; so the embargo synchronizes the publication of the bugfix, and hopefully most users have upgraded before the black hats can exploit it.

The situation is different for a back door: the black hats already know about it. So the embargo only means that some (or maybe all) affected users are exposed to the back door for longer.

Embargoes

Posted Jul 4, 2025 17:52 UTC (Fri) by mb (subscriber, #50428) [Link] (1 responses)

No, not really.
If detailed information about a back door is published without a fix, then *everybody* can start to exploit it.

Embargoes

Posted Jul 5, 2025 16:10 UTC (Sat) by anton (subscriber, #25547) [Link]

No. Installing a backdoor requires a lot of effort, and the ones installing the backdoor have many incentives to secure the access to the backdoor: In particular, they don't want random attackers to use the backdoor for their purposes which may draw attention to the back door or may prevent access directly (e.g., if the random attackers encrypt the target system).

So no, even with information about the back door being public knowledge, only the back door installers can exploit it. Case in point: From what I have read, no security researcher has managed to enter through the xz backdoor yet.

Embargoes

Posted Jul 2, 2025 5:43 UTC (Wed) by donald.buczek (subscriber, #112892) [Link] (1 responses)

Nice link

Posted Jul 4, 2025 17:12 UTC (Fri) by cbushey (guest, #142134) [Link]

It's always good to see that my vpn is doing its job. Thank you. Sorry about the sidetracking. Wish there was a +1 for this sort of thing. (only joking) So what is it? (red dwarf reference).

Understandable

Posted Jun 27, 2025 4:25 UTC (Fri) by DemiMarie (subscriber, #164188) [Link] (1 responses)

One case where I think an embargo makes sense is where a bug is difficult to discover and either easy to exploit or very tricky to fix. Samba had a vulnerability whose fix required a complete rewrite of the fileserver, which took over a year. That’s an extreme case, though. Kaminsky’s DNS cache poisoning also needed an embargo.

Understandable

Posted Jun 27, 2025 15:41 UTC (Fri) by wtarreau (subscriber, #51152) [Link]

But these ones are requested by those fixing the issue, not by the reporters who are registering a domain name and having a new logo drawn for the vuln.

bad incentives for security work

Posted Jun 25, 2025 23:38 UTC (Wed) by roc (subscriber, #30627) [Link] (7 responses)

Part of the problem here is the security-research culture that celebrates *finding* bugs but does not celebrate *fixing* them.

bad incentives for security work

Posted Jun 25, 2025 23:44 UTC (Wed) by pizza (subscriber, #46) [Link] (5 responses)

> Part of the problem here is the security-research culture that celebrates *finding* bugs but does not celebrate *fixing* them.

The sad fact of the matter is that *unfixed* bugs are considerably more valuable than fixed ones.

bad incentives for security work

Posted Jun 26, 2025 3:38 UTC (Thu) by wtarreau (subscriber, #51152) [Link] (4 responses)

> The sad fact of the matter is that *unfixed* bugs are considerably more valuable than fixed ones.

In part but not only. I've met many people telling me their stories of how they got root here and there. It's certainly fun and interesting, but it looks so amazing to most people that they quickly feel like super-heroes or magicians who can do stuff nobody else can do, so very quickly there's some pride in finding bugs only.

And I agree that the heroic part is not finding the bug but fixing it without losing functionality. On the kernel security list we strongly encourage the reporters to provide the fix themselves so as to promote their work, trying to make them understand where the value is (and because once you understand a bug well, you're very close to fixing it). Most often it works well. Some come back later with new bugs and the accompanying proposed patch.

bad incentives for security work

Posted Jun 26, 2025 7:17 UTC (Thu) by tjasper (subscriber, #4310) [Link] (3 responses)

Perhaps, to extend this sentiment, a response to an embargo request from a security researcher is to agree, only if said researcher makes the effort and provides a fix at the same time?

bad incentives for security work

Posted Jun 26, 2025 9:58 UTC (Thu) by Wol (subscriber, #4433) [Link]

Not necessarily, but as an absolute minimum it should come with a decent bug report, that explains WHY it's a security problem, and also a strongly plausible explanation as to why it deserves an embargo.

If the researcher can't be arsed to describe WHY it's a problem, then the assumption should be it isn't.

Cheers,
Wol

Requirements to ask for an embargo

Posted Jun 26, 2025 10:53 UTC (Thu) by farnz (subscriber, #17727) [Link]

Not necessarily a fix, but an embargo request should come with enumerated benefits for the upstream maintainer - obviously, "we will work with you to provide a fix that you're happy to apply" is a benefit, but the judgement about whether the benefits on offer are big enough should be left in the volunteer maintainer's hands.

Fundamentally, this ends up being another case of "you shouldn't expect volunteers to co-operate with you for your benefit out of the goodness of their hearts". Maintainers should be free to co-operate if they want to, but also free to refuse to co-operate if that's the way they feel about you today, and if you want them to co-operate, you should be offering them something that makes them want to co-operate (be that money, assistance, public praise, anything else that motivates them).

bad incentives for security work

Posted Jun 26, 2025 17:15 UTC (Thu) by wtarreau (subscriber, #51152) [Link]

> Perhaps, to extend this sentiment, a response to an embargo request from a security researcher is to agree, only if said researcher makes the effort and provides a fix at the same time?

Unfortunately that doesn't work, because I have an example of a currently pending issue on a project where the reporter asked for an embargo and came with the patch. Stupid reasons again.

The norms make sense, but don’t fit the specifics of libxml

Posted Jun 26, 2025 2:56 UTC (Thu) by notriddle (subscriber, #130608) [Link]

I can think of two major causes for this:

- independent security researchers for proprietary and open source don’t have a separate culture; if most of your coworkers aren’t expected to fix the vulnerabilities they find, you won’t want to do it, either, even if they are technically prevented from doing so and you aren’t

- a lot of people want whoever wrote the vulnerability to fix it; same org != same person, but that’s not legible from the outside

But, in either case, norms don’t always match up with specifics. And you can’t just draw the line at “FOSS,” either, since SQLite doesn’t take external contributions, and Chromium has plenty of money to fix their own bugs.

Obligatory XKCD regarding the maintenance of free software

Posted Jun 26, 2025 7:04 UTC (Thu) by chris_se (subscriber, #99706) [Link]

Sounds like Google at least opened the chequebook

Posted Jun 26, 2025 7:44 UTC (Thu) by rhowe (subscriber, #102862) [Link] (1 responses)

Whilst the amount Google contributed financially was small beans compared to the value they will have had out of libxml2, they did at least make a contribution. They get a small nod from me for that.

I'm with the libxml2 authors here in general, though. These large business users are not managing their risk profile well if they are incorporating code without checking it over and ensuring it's fit for the purposes they are putting it to.

Sounds like Google at least opened the chequebook

Posted Jun 26, 2025 17:06 UTC (Thu) by clump (subscriber, #27801) [Link]

I don't know the story behind Google's contribution; however, the amount involved seems like the maximum some enlightened team could contribute. The Google behemoth could of course do a lot more, but it still reads like some good people did what they could.

Yet again, a significant root cause of issues is C here

Posted Jun 26, 2025 8:18 UTC (Thu) by parametricpoly (subscriber, #143903) [Link] (3 responses)

Parsers should be written in memory-safe, preferably total functional languages (not Turing-complete). The comment by the author

> As you may have noticed, most of our fuzzers inject malloc failures to cover code paths handling such failures, see #344 (closed). In the past, I have fixed ~100 bugs related to handling of malloc failures. I do not consider these issues security-critical,

tells a lot. It wasn't mentioned here, but I bet the performance is also abysmal; see examples such as https://gitlab.gnome.org/GNOME/libxml2/-/issues/212. It's simply the wrong tool for the job.

Yet again, a significant root cause of issues is C here

Posted Jun 26, 2025 13:58 UTC (Thu) by pizza (subscriber, #46) [Link] (2 responses)

> Parsers should be written in memory safe, preferably total functional languages (not turing complete).

That's all fine and dandy, except for the minor problem that no such languages existed [1] over two decades ago when libxml2 was first written.

Then there's the other problem where one's internal data structures are usually pretty closely tied to the parser's API (and data structures), which makes it quite hard to retrofit existing code to a different parser.

Then there's the third problem where XML is a particularly sadistic, infinitely-recursive freeform beast. XML parsers have to be malleable and adaptable to handle whatever arbitrary input that you are looking to consume. There is no getting around that inherent complexity, and it's better to have as much as possible handled within the parser itself so application writers can minimize the number of footguns they are juggling.

Yes, these goals are fundamentally in conflict. Welcome to the wonderful world of engineering tradeoffs and technical debt.

[1] At least not with a stable C-compatible ABI. Even Rust is only a decade old (1.0 was released in May 2015), and WUFFS (v0.1 released in 2019) still considers XML a "long-term" roadmap item.

Yet again, a significant root cause of issues is C here

Posted Jun 26, 2025 19:41 UTC (Thu) by wahern (subscriber, #37304) [Link] (1 responses)

> WUFFS (v0.1 released in 2019) still considers XML a "long-term" roadmap item.

While WUFFS includes a JSON decoder, it's just a JSON tokenizer, not a parser. (It includes an example JSON parser, but only the tokenizer is using WUFFS.) I would assume the roadmap for XML is just that, an XML tokenizer, unless they're contemplating extending WUFFS's capability set in tandem with an XML parser. But both JSON and XML are designed to be trivial to tokenize[1], and I would be surprised if there were any security CVEs in major implementations purely rooted in tokenization. In my recollection, bugs in libxml2 and libxslt (and libexpat and others, for that matter) have been in the higher levels of the implementation stack.

OTOH, I guess WUFFS does enforce a streaming approach to tokenization and, by extension, parsing. That's generally a good idea, anyhow, but I guess it does serve a channeling function for those inclined toward approaches that would unnecessarily rely on too much ad hoc dynamic allocation and pointer chasing.

[1] At least ignoring interactions with DTDs, like the ability to define new named entities.

Yet again, a significant root cause of issues is C here

Posted Jun 27, 2025 9:07 UTC (Fri) by chris_se (subscriber, #99706) [Link]

> While WUFFS includes a JSON decoder, it's just a JSON tokenizer, not a parser. (It includes an example JSON parser, but only the tokenizer is using WUFFS.)

Just took a look at the JSON examples in WUFFS after you mentioned this, and yeah...

For JSON, the difference between a pure tokenizer and a SAX-like parser is small enough that it doesn't really matter, which is why that's fine. But I don't think this still holds for XML, especially if you include all the features libxml2 supports.

Plus, the main appeal of libxml2 is the support for DOM and other more advanced features, not just having a SAX parser (those are a dime a dozen), so even a SAX parser would maybe be at most 10% of libxml2...

> I would be surprised if there were any security CVEs in major implementations purely rooted in tokenization. In my recollection, bugs in libxml2 and libxslt (and libexpat and others, for that matter) have been in the higher levels of the implementation stack.

Yes, the pure tokenization of XML is probably the easiest part of parsing XML, so I don't expect that any mature XML parser will have any bugs remaining in the tokenization logic.
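
For comparison, here is roughly what "just having a SAX parser" amounts to, as a minimal sketch using libxml2's older SAX1-style entry point (xmlSAXUserParseMemory, which is deprecated in recent releases); DOM construction, XPath, validation, and so on all sit above this layer.

    #include <stdio.h>
    #include <string.h>
    #include <libxml/parser.h>

    /* A SAX-style parse: the library invokes callbacks as markup is seen,
     * and no document tree is built unless the application builds one. */
    static void on_start_element(void *ctx, const xmlChar *name,
                                 const xmlChar **attrs)
    {
        (void)ctx;
        (void)attrs;
        printf("start: %s\n", (const char *)name);
    }

    int main(void)
    {
        xmlSAXHandler sax;
        memset(&sax, 0, sizeof(sax));
        sax.startElement = on_start_element;

        const char *buf = "<root><a/><b/></root>";
        return xmlSAXUserParseMemory(&sax, NULL, buf, (int)strlen(buf)) != 0;
    }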

Liability and Regulation

Posted Jun 26, 2025 8:23 UTC (Thu) by lmb (subscriber, #39048) [Link] (6 responses)

I believe that regulatory acts such as the CRA, PLD et al can actually help this, because then, a "Manufacturer" (as opposed to the upstream project) will, indeed, be on the hook for making sure these get fixed, and can't just push them to the upstream maintainers.

Will some try? Sure, and I agree maintainers need to be given more safety in saying "No" comfortably, and support in doing so. And will some manufacturers only fix it for their own downstreams without contributing back? Sure, some, but we all understand the trade-offs of maintaining patch sets.

But the security researchers' reports are public in the upstream tracker. A manufacturer would have a very hard time arguing they'd not been aware of it (that sounds like malpractice and/or gross negligence to me, but then, I'm not a lawyer).

So the Open Source Steward can say no to investigating and fixing it, but the manufacturers *can't* without likely becoming liable. And at that point, they're all better off sending the patch upstream as well.

It also strengthens the position of business models around Open Source distributors, since quite a few manufacturers will need to off-load that effort (and liability) to a third party (unless they want to build up those capabilities in-house, and if they do, more power to them, awesome).

I agree the situation is somewhat dire (I still see some of the security trackers, and the amount of "hey, we found this and will tell you if you pay us a bounty" is infuriating, and doesn't seem to have improved now that some of these reports are AI-generated slop ...), but I do think that "regulatory capture" isn't the right term. The regulations, to me, seem to strengthen FLOSS maintainers and force more people in the system to take responsibility, and not just take.

Liability and Regulation

Posted Jun 26, 2025 12:51 UTC (Thu) by hailfinger (guest, #76962) [Link]

> I believe that regulatory acts such as the CRA, PLD et al can actually help this, because then, a "Manufacturer" (as opposed to the upstream project) will, indeed, be on the hook for making sure these get fixed, and can't just push them to the upstream maintainers. [...]
> So the Open Source Steward can say no to investigating and fixing it, but the manufacturers *can't* without likely becoming liable. And at that point, they're all better off sending the patch upstream as well.

Not only that, manufacturers are legally obliged by the CRA to send the patch upstream. That's one of the reasons to love the CRA.

Quoting the CRA, Chapter II, Article 13(6):

"Manufacturers shall, upon identifying a vulnerability in a component, including in an open source-component, which is integrated in the product with digital elements report the vulnerability to the person or entity manufacturing or maintaining the component, and address and remediate the vulnerability in accordance with the vulnerability handling requirements set out in Part II of Annex I. Where manufacturers have developed a software or hardware modification to address the vulnerability in that component, they shall share the relevant code or documentation with the person or entity manufacturing or maintaining the component, where appropriate in a machine-readable format."

Liability and Regulation

Posted Jun 26, 2025 14:09 UTC (Thu) by pizza (subscriber, #46) [Link] (4 responses)

> But the security researchers's reports are public in the upstream tracker. A manufacturer would have a very hard time arguing they'd not been aware of it (that sounds like malpractice and/or gross negligence to me, but then, I'm not a lawyer).

That depends; legally, there's a pretty big difference between "specific knowledge" (you were explicitly informed about that specific issue) and "general awareness" (the existence of upstream bugs that may or may not have security implications).

This is why most liability regimes kick in only if it can be demonstrated that you had specific knowledge and failed to act on it.
(You can also be found liable if you deliberately take steps to avoid gaining said knowledge)

...Meanwhile, in this new CRA world, I expect there will be services spinning up that take your SW BoM and perform upstream/CVE/etc. monitoring for you.

Liability and Regulation

Posted Jun 26, 2025 15:30 UTC (Thu) by Wol (subscriber, #4433) [Link]

> This is why most liability regimes kick in only if it can be demonstrated that you had specific knowledge and failed to act on it.
> (You can also be found liable if you deliberately take steps to avoid gaining said knowledge)

"Strict" and "vicarious" liability.

We had a case many years ago - a toddler dropped an ice cream, the mum went to clean it up, and a jobsworth stepped in and issued her with a fine for littering. While he was doing that, a seagull nabbed the ice cream.

My immediate reaction was she should have appealed:

Q: Was liability strict or vicarious?

If A: is strict, then she didn't drop it, so no liability.
If A: is vicarious, then the jobsworth has greater liability, because he deliberately prevented her from cleaning it up.

(This was Brighton, I think, which has a major problem with seagulls attacking tourists for food - and really, literally attacking.)

Cheers,
Wol

Liability and Regulation

Posted Jun 28, 2025 21:26 UTC (Sat) by SLi (subscriber, #53131) [Link] (2 responses)

Ah, this makes me wait with some pleasure for the moment when corporate lawyers start arguing for policies that forbid engineers from accessing external bug trackers.

Liability and Regulation

Posted Jun 28, 2025 22:43 UTC (Sat) by Wol (subscriber, #4433) [Link]

Then the company gets done for "wilful ignorance".

If it's company POLICY not to look for known bugs, then that's a pretty basic case of negligence.

Cheers,
Wol

Liability and Regulation

Posted Jun 29, 2025 9:30 UTC (Sun) by farnz (subscriber, #17727) [Link]

That falls under "(You can also be found liable if you deliberately take steps to avoid gaining said knowledge)". If policy is to not check external bug trackers, then policy is deliberately taking steps to avoid gaining said knowledge, and you lose.

It's quite likely that CRA liability will extend to any bug that upstream has published in their issue tracker, even if you don't check that issue tracker; the reasoning would be that you chose to take that dependency, so you are expected to track whether that's a bad decision or not.

Corporate entitlement and evil behaviour

Posted Jun 26, 2025 9:56 UTC (Thu) by paulj (subscriber, #341) [Link]

> They can say no to corporate requests; doing so, however, means weighing that "the cost of doing so may negatively impact the project's ability to meet its end goal".

It can be worse than that, much worse. There are evil, shitty corporate people out there, with massive senses of entitlement, who will take "no" as a challenge to their superiority, and then work to destroy your project - and perhaps even try to destroy your ability to work on your own project.

There are some serious sociopaths out there in the tech world, particularly amongst those who've spent their careers in very cut-throat US big tech (Cisco, e.g., is notorious for fostering a ruthless internal corporate culture - and former employees can bring that elsewhere), and particularly if you encounter people further up the corporate ladder.

The problem is not limited to community free software

Posted Jun 26, 2025 12:39 UTC (Thu) by nim-nim (subscriber, #34454) [Link]

> these companies make billions of profits and refuse to pay back their technical debt, either by switching to better solutions, developing their own or by trying to improve libxml2

That’s true, but it isn't caused by libxml2 or its licensing; it's a pervasive attitude in the industry, where code (both public / free software and private / closed) is left to rot security-wise as long as no one notices (the US Air Force gave everyone an object lesson in Afghanistan by flying uber-expensive drones with encryption so broken that insurgents could easily tap into the video feeds). And then some people troll community maintainers by pretending the situation is better on the closed-software side and that maintainers should align with very strong goals that closed-software people readily ignore.

The EU CRA has been a long time in coming.

However, its effects won't show up until some bigcorp loses a big sum in a CRA-motivated trial, as in the automotive industry, where manufacturers lose big sums each time they cheat on engine quality or emissions performance and are forced by judges to perform general recalls (note that even in that case the lesson does not stick for long).

Such is our economic system. Money talks.

It's not just your google, apple, etc

Posted Jun 26, 2025 22:24 UTC (Thu) by sramkrishna (subscriber, #72628) [Link]

Smart TV manufacturers use libxml2. Pick up a smart TV, look through the software-license notices, and you'll see libxml2 listed among them.

As a GNOME Foundation member and former director, it kills me that nobody wants to donate to the foundation for such a critical piece of infrastructure code. Never mind other pieces like dbus.

I get that companies love being able to use maintained code without paying for it, but there is always a cost one way or another.

Embargo handling through distributions

Posted Jun 27, 2025 20:26 UTC (Fri) by fw (subscriber, #26023) [Link]

For glibc, we didn't want to bother with setting up the infrastructure for private security bugs, either. Most flaws do not need an embargo, and public discussion allows us to move towards a fix more quickly. For the rare exceptions, we found some distribution security teams to handle the embargoes for us (including distros list notification).

Things have since evolved a bit for glibc, but binutils still follows this model: https://sourceware.org/git/?p=binutils-gdb.git;a=blob_plain;...

If you can find two or more downstream distributions you trust, this looks like a reasonable compromise to me. It does not solve all the other maintenance problems, of course, but I suppose every little bit helps.

oh my

Posted Jul 2, 2025 8:05 UTC (Wed) by rgb (subscriber, #57129) [Link]

If I am an unpaid volunteer, nobody tells me what to work on, period. If someone is not happy with the outcome, it is 100% on them to find a solution, whatever that may be: fixing it themselves, funding the development, or looking for another product. And that is literally in the fing licence text! People who don't understand this really need their heads screwed on the right way.


Copyright © 2025, Eklektix, Inc.
This article may be redistributed under the terms of the Creative Commons CC BY-SA 4.0 license
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds