Open source security in spite of AI
The curl project has found AI-powered tools to be a mixed bag when it comes to security reports. At FOSDEM 2026, curl creator and lead developer Daniel Stenberg used his keynote session to discuss his experience receiving a slew of low-quality reports and, at the same time, realizing that large language model (LLM) tools can sometimes find flaws that other tools have missed.
FOSDEM is famously jam-packed with things to do and talks to
attend; there are dozens of devrooms for different
topics, as well as the main-stage
keynotes and sessions. Stenberg's keynote was at 17:00
on Sunday, one of the last events on FOSDEM's schedule; no doubt
the organizers selected his talk as the most likely to lure a
large audience into the main room for the closing
session that would follow it. The ploy worked; the room was
effectively standing-room only. He opened his session by saying "it's
this, and then we can all go home. You look a little tired; it feels
like I've talked to almost all of you already".
Stenberg said that many in the audience had already followed his
struggles with AI; he has been active in blogging about and commenting
on AI via social media for some time. He acknowledged that it
would upset some of the audience that he was saying "AI" rather
than being specific with terms like "LLM" or "machine learning" but,
"in my talk, I don't care. I'm using the marketing
language. It's all 'AI'. When people throw something at me, they say
they used AI to do it."
The struggle is real
Instead of naming the specific technologies, he wanted to discuss
the effects of AI. Stenberg said that AI freeloads on open source, and
scrapes the web to death. It overloads maintainers and takes all the
money to boot: "No one can do anything that is not AI because no one
will pay you anything at all. And, you know, try to buy a computer
with memory now." AI boosters will tell everyone that it is good
technology and that it will get better. Maybe it will, "but I'm old
and allowed to complain".
Since its first release, curl has grown from 100 lines of code to
about 180,000 lines; more than 3,500 people are mentioned in its
THANKS file for their contributions. Curl is used in "a few things",
he deadpanned, and displayed a collage slide with some of the many
devices, vehicles, toys, phones, tablets, gaming consoles, online
services, and operating systems that curl is used in. It is basically
everywhere; he estimated that it is used in up to 30 billion
instances. With that being the case, "we take security seriously". It
could have a bad impact if curl had a "terrible security thing
somewhere".
When everything runs your code, Stenberg said, "you're a little bit
sensitive to the security problem". Security reports generally take
top priority. At the same time, open-source maintainers are usually
overworked and underfunded. The median number of maintainers for most
projects is one, "many are a spare-time or hobby thing we do on the
side or partially paid; 'underfunded' is sort of the middle name of
every open-source project". There are always things to do, and many
maintainers struggle with burnout.
Before AI, there was friction in creating a security report. People
invested a lot of time and effort in finding something to report, and
maintainers would then spend time assessing the report on their end.
And then, along comes AI. It is super-easy to ask an LLM to find a
problem, and since there's really no cost to try AI tools, it's
basically effortless for people to ask the tools to find a security
problem in an open-source project. "Ask it to make it sound really
horrible, and it will do that. And then you just send that report
away." Many people genuinely think that if they ask ChatGPT to find a
security problem, it will find one, and that they had better report
it.
Stenberg said that people ask him how he knows when a submission is
AI. First, it's too polite. "No human ever started [a report] with 'I
apologize, but I found a problem.' No way." People who have been
working in open source for a long time know that reports come from
people who are a bit upset and angry. Another tell is that AI reports
are "all perfect English", and often use title case in the submission
title rather than the sentence case that humans generally use.
(Stenberg has curated a list of examples where this is indeed
apparent.)
Of course "every paragraph needs three bullet points in a list" and
the reports are simply too long. Back in the day, it was necessary to
try to get reporters to include more information. And, when asking the
reporter a question, what happens? "Absolutely right, I'm sorry. My
mistake. I misunderstood. And blah blah blah." What has happened is
that maintainers end up communicating with a proxy for a bot: "That
never ends well."
HTTP/3 "exploit"
To illustrate his point, he talked about one of his favorite examples
of a slop report. This report came through HackerOne in May 2025; the
reporter claimed to have discovered a "novel exploit leveraging stream
dependency cycles in the HTTP/3 protocol stack" in curl 8.13.0. It
looked legitimate, Stenberg said, and included a proof of concept,
environment setup, GDB output, and more.
The report looked credible, but it was not. The function mentioned in
the report did not exist in curl, and even the GDB session had been
faked. "I think I was a little bit inexperienced back then, so I
actually wasted far too much time" on the report.
This was still early in the AI-slop-reporting era. Now, Stenberg said,
he calls this method of sending AI-generated security reports "terror
reporting". In the past, he estimated, one out of six reports turned
out to be a real security flaw. Now it is more like one out of 20 or
30. It is a total waste of the curl team's time and energy. The curl
project is not alone in being besieged by AI slop reports, of course.
Stenberg said that once he started talking loudly about the AI slop,
he heard from many other projects that had the same problem.
He theorized that, in curl's case, people were doing it for the money.
Historically, curl had offered a bug bounty that would award $500 for
low-severity flaws and up to $10,000 for finding a vulnerability of
critical severity. "That's sort of the pipe dream; that's why every
report is labeled critical."
The problem is humans
Stenberg listed some of the things he had tried in order to ensure
that security reports were researched by a human. He added a
submission form that required the reporter to declare whether they had
used AI. That worked for three or four reports, and then people
stopped admitting to it. He tried banning reporters who used AI; that
does not work well when a banned reporter can simply create a new
account the next day. He tried public shaming, which worked to some
degree, but not enough to end the reports. Ultimately, curl ended its
bug-bounty program in January 2026 because the volume of slop was too
great.
The problem is not really AI, though, it's humans. "AI makes it easy
to submit reports and if marketing says this works, they're going to
continue to do this as long as it's very easy and low effort". He
hoped that ending the bounty would reduce the number of slop reports,
but "we'll see if this actually turns out to be true". On February 4,
he posted to Mastodon that the early data indicated "turning off the
bug-bounty may not make much difference".
Despite the bad experiences, Stenberg is still open to the use of AI,
because it is simply a tool. If a person asks it to find a security
problem and does not verify the result, "you get really stupid
things". But if a person is clever and uses a good tool, "you can do
really good stuff. So we work with several AI-powered analyzing tools
now."
The good
Even though AI is bad in one way, it is awesome in another way,
Stenberg said. In working with C over the years, he has thrown
everything at the code to find bugs: picky compiler options, code
analyzers, fuzzing, and even security audits. And, of course, users
report bugs when they find them. But, using AI tooling, he has found
more than 100 bugs that had been missed by other methods. Even though
the tools find flaws "in what sometimes feels like magical ways",
there is a need for a clever human in the loop to decide if the
discoveries are real, valid, and important.
He said that AI tools found things that humans did not. For example,
the tools might detect that a code change and the comment related to
the code disagree. "That might sound like a subtle thing, but it's an
awesome thing. [...] If that documentation is wrong, the users of that
function in your code is possibly wrong." It is perfect for detecting
edge cases or spotting when code and a specification disagree. A human
can find that, but humans get bored. People are really bad at code
review, but the machines don't get bored and they don't get tired.
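
As a minimal, hypothetical sketch of the kind of stale-comment bug
such a tool can flag (illustrative code, not from curl):

    #include <stdio.h>
    #include <string.h>

    /* Returns the number of bytes copied, or 0 on error. */
    static int copy_data(char *dst, const char *src, size_t len)
    {
        if (!dst || !src)
            return -1;  /* a later change: errors now return -1, not 0 */
        memcpy(dst, src, len);
        return (int)len;
    }

    int main(void)
    {
        char buf[16];
        int rc = copy_data(buf, NULL, sizeof(buf));
        /* A caller trusting the stale comment checks for 0 and
         * silently treats the -1 error return as success. */
        if (rc == 0)
            fprintf(stderr, "copy failed\n");
        else
            printf("copied %d bytes\n", rc);
        return 0;
    }

The comment describes only the function's history; a reviewer that
compares documentation against actual behavior can catch the mismatch
that a syntax-level analyzer would not.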
Stenberg continued: "And also it's really good at, for example,
analyzing other libraries. So you do function calls from your code
into a third-party library and it can tell me about assumptions I make
on the data it returns, which also is nothing a normal code analyzer
can do, because a normal code analyzer only analyzes your code, not
the other code or the interactions between them. So really fascinating
tools. It really opens up a new way to improve code and make things
stable and better."
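
A hypothetical illustration of such a cross-library assumption (the
library function and its contract are invented for the example):

    #include <stdio.h>
    #include <string.h>

    /* Stand-in for a third-party library function; its documented
     * contract (invisible to an analyzer that only sees the caller)
     * is that it returns NULL when the key is unknown. */
    static const char *thirdparty_lookup(const char *key)
    {
        return strcmp(key, "known") == 0 ? "value" : NULL;
    }

    int main(void)
    {
        /* The caller silently assumes the lookup always succeeds;
         * strlen(NULL) is undefined behavior. A tool that has also
         * read the library's code or documentation can flag this
         * hidden assumption. */
        const char *v = thirdparty_lookup("unknown");
        printf("%zu bytes\n", strlen(v));
        return 0;
    }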
What does not interest Stenberg is using AI to write code; even when
the machines find a bug and propose a patch to fix it, he said, the
patches are never good. "A human fixes code way better than the AIs
do."
He added that he could not discuss AI without mentioning scraper-bot
overload. The curl project has a content-delivery network (CDN)
sponsor, so the 75TB a month of traffic that is largely bot-generated
does not harm curl as a project. But other projects are not so lucky.
"Certainly this causes a lot of problems for a lot of projects."
In the end, "it all depends on what you do with it", Stenberg said. A
fair share of users have always been annoying, but now they have tools
that help them produce junk in new ways. "AI will continue to augment
everything we do in different directions [...] at least until we start
paying for what it actually costs. And then we'll see what happens."
Questions
The audience may have been tired from a long FOSDEM weekend, but not
too tired for questions. The first audience member wanted to know
whether there were legal concerns with accepting AI-generated code.
Stenberg said that there has always been uncertainty about contributed
code: it may have been written by the contributor, generated by AI, or
copied from Stack Overflow or some other source. "I think the risk is
roughly the same."
Another attendee said that their project had never had a bug bounty,
but had experienced a 600% increase in "these wonderful security
reports". They wanted advice on how to handle the situation, since
they had no bug bounty to turn off. Stenberg said that there were many
options but, unfortunately, every way to limit slop also makes it
harder to submit legitimate reports.
While Stenberg's session largely dealt with the negative impacts of AI tools on curl as a project, and for open-source maintainers more generally, his outlook was not pessimistic. It will be interesting to see how the end of the bug bounty plays out for curl, and whether the situation improves as maintainers speak out about the problems they're facing. The video of the session is available on the talk's page on the FOSDEM 2026 web site.
[Thanks to the Linux Foundation, LWN's travel sponsor, for funding my travel to Brussels to attend FOSDEM.]