
Stenberg: The end of the curl bug-bounty program

Curl creator Daniel Stenberg has written a blog post explaining why the project is ending its bug-bounty program, which started in April 2019:

The never-ending slop submissions take a serious mental toll to manage and sometimes also a long time to debunk. Time and energy that is completely wasted while also hampering our will to live.

I have also started to get the feeling that a lot of the security reporters submit reports with a bad faith attitude. These "helpers" try too hard to twist whatever they find into something horribly bad and a critical vulnerability, but they rarely actively contribute to actually improve curl. They can go to extreme efforts to argue and insist on their specific current finding, but not to write a fix or work with the team on improving curl long-term etc. I don't think we need more of that.

There are these three bad trends combined that makes us take this step: the mind-numbing AI slop, humans doing worse than ever and the apparent will to poke holes rather than to help.

Stenberg writes that he still expects "the best and our most valued security reporters" to continue informing the project when security vulnerabilities are discovered. The program will officially end on January 31, 2026.




The list

Posted Jan 26, 2026 17:52 UTC (Mon) by tux3 (subscriber, #101245) [Link] (28 responses)

In case there's any doubt about the quality of these AI slop reports, the previous blog post contained this list: https://gist.github.com/bagder/07f7581f6e3d78ef37dfbfc81f...
With the first entry being from late 2023, this has been going on for a little over two years.

I used to read the kind of cheap fiction that doesn't quite pass for literature, but that likes to play with the reader's emotions. A writer doesn't need much subtlety to make you feel a certain way about a character. Like a cheap restaurant masking bad ingredients with too much flavoring.

And yet with only a few of those reports, I'm feeling things that I haven't felt since then. It's truly remarkable.

The list

Posted Jan 26, 2026 19:00 UTC (Mon) by fest3er (guest, #60379) [Link] (22 responses)

WRT the plea to change strcpy to strncpy, I probably would've closed the bug after the second time he/she/it insisted on changing to strncpy. Remember the old saw: "Never argue with a fool; few can tell which is which."

And I would've started discounting that user and his IP address(es). After, say, 10 false reports, I would've banned the user and blocked the user's IP address(es). [Last year, when the unprincipled 'bots were raging rampantly, I managed to block most of them (up to around 100,000) and limited many of those that got through to 2400 baud.]

As to the original post, I don't trust anything an AI has generated, primarily because the people who create the AIs don't understand how a biological neural network works, or how a neural network learns. Their only goal is to separate as much cash as they can from the less-knowledgeable as fast as they can. Current AI is a fad, and the bubble will burst soon enough.

If someone cannot present a 1000-line shell script that performs a reasonably complex task, or a 100-line shell script that performs the same CPU-intensive operation on as many files as there are CPUs/cores, or cannot find 25 or more subtle bugs—of 50 or so—in 10 000 lines of related code, that person has little business reporting any bugs, and almost no business defending their reports.
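
For what it's worth, the per-core fan-out described above looks roughly like this, sketched in Python rather than shell, with a sha256 digest standing in for whatever CPU-intensive operation you have in mind:

    import hashlib
    import os
    import sys
    from multiprocessing import Pool

    def checksum(path):
        # CPU-bound stand-in operation: hash the whole file.
        with open(path, "rb") as f:
            return path, hashlib.sha256(f.read()).hexdigest()

    if __name__ == "__main__":
        # One worker per CPU/core; files are handed out as workers become free.
        with Pool(os.cpu_count()) as pool:
            for path, digest in pool.imap_unordered(checksum, sys.argv[1:]):
                print(digest, path)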

Back in the days of physical manufacturing, almost anyone could be an inspector. "Say, this doesn't look right." "Hmmm, this is an unacceptable flaw." Inspectors didn't need to be able to fix the problems, only to recognize them. Back in the day, 25% of what one Western Electric plant produced was dumped in the trash bin. In other words, WE paid 25% of its manufacturing employees to produce junk when they should've been producing quality product. Inspectors are needed, but so are employees who can produce quality product.

Software, OTOH, exists in the intangible universe. One has to be experienced and good to find subtle bugs. Better software engineers will find bugs en passant while looking for something else. "Now, where is the code that handles … wait. That's an off-by-one bug! And below is a strcpy with neither a bounds check nor a comment saying the check isn't needed! And…. Now where was I?" In short, one must prove she is a competent software engineer and programmer before explicit software bug reports are accepted.

[I'd best stop before this turns into a tome.]

The list

Posted Jan 27, 2026 7:40 UTC (Tue) by marcH (subscriber, #57642) [Link] (21 responses)

> Current AI is a fad, and the bubble will burst soon enough.

1. Everything else has been completely eclipsed these days, but AI is not just LLMs/chatbots. There are many other use cases where AI works really well.

> I don't trust anything an AI has generated

2. Back to LLMs, we tend to underestimate the number of "bullshit" jobs and of incompetent people who make a lot of mistakes too. Not the same type of mistakes, granted - but people are also much more expensive than AI.

3. Even competent people have to perform "bullshit" tasks sometimes.

There is likely a bubble that is going to burst, but there are many things that will stay.

The list

Posted Jan 27, 2026 8:00 UTC (Tue) by chris_se (subscriber, #99706) [Link] (20 responses)

> 2. Back to LLMs, we tend to underestimate the number of "bullshit" jobs and of incompetent people who make a lot of mistakes too. Not the same type of mistakes, granted - but people are also much more expensive than AI.

Are people more expensive than generative AI though (in general)? A few factors to consider: 1) The current pricing for generative AI is unsustainable; there's a _LOT_ of VC money being burnt right now. 2) If a person makes a mistake, they can either learn from it and not make it again, or, if they can't learn from it, they will be replaced in their job at some point. The latter is not really possible with generative AI in the same manner. 3) There are also externalized costs to generative AI that humanity as a whole still has to pay. (Take for example the increase in RAM pricing right now, but also harder-to-quantify costs such as how education is currently being ruined for an entire generation.) If you were to factor in all of these costs, I personally am _very_ skeptical that generative AI is actually cheaper than people.

The list

Posted Jan 27, 2026 8:48 UTC (Tue) by marcH (subscriber, #57642) [Link] (19 responses)

- Chat bots can learn too. Not as well as humans but still a bit

- Hardware costs keep going down, human costs don't.

- People in power don't care about the environment - or about externalities in general.

The list

Posted Jan 27, 2026 9:28 UTC (Tue) by Wol (subscriber, #4433) [Link] (3 responses)

> - Chat bots can learn too. Not as well as humans but still a bit

So why, when I'm talking to my bank's chatbot, do I get stuck in an infinite loop? Surely after the chatbot has asked "has this helped" for the umpteenth time and got "no" every time, it should have learnt that it needs to call for help!

I know I know - the guy who wrote the chatbot thought that chatbots know everything!

Cheers,
Wol

The list

Posted Jan 27, 2026 11:36 UTC (Tue) by pizza (subscriber, #46) [Link] (2 responses)

> I know I know - the guy who wrote the chatbot thought that chatbots know everything!

No, you presume the goal of the chatbot is to help *you*, when it is really there so the bank can save on labor costs.

The list

Posted Jan 27, 2026 12:49 UTC (Tue) by Wol (subscriber, #4433) [Link] (1 responses)

No, I don't presume the role of the chatbot is to help me - I know damn well its job is to save money (and hopefully piss customers off enough so they don't come back).

Problem is, it also loses customers. Whether it's me or the AI, I have had almost NO interactions with that junk that have been worth having.

Cheers,
Wol

The list

Posted Jan 27, 2026 16:27 UTC (Tue) by geert (subscriber, #98403) [Link]

They all have bad chatbots, so there is no net customer loss.
They do all, however, incur the extra cost of setting up new customers who left their competitors.

The list

Posted Jan 27, 2026 9:38 UTC (Tue) by taladar (subscriber, #68407) [Link] (8 responses)

Chat bots literally cannot learn. They only have the data in their training set, which is often months out of date, and whatever is in their context window. Just try e.g. writing some piece of software with a 0.x library or framework that is still seeing changes and has had a couple of releases since the LLM of your choice was trained on it. It will constantly insist on reverting your code to the old version of the API, sometimes even when you only ask it to move existing code to another file.

The list

Posted Jan 27, 2026 10:02 UTC (Tue) by marcH (subscriber, #57642) [Link] (7 responses)

> Chat bots literally cannot learn. They only have the data in their training set, which is often months out of date, and whatever is in their context window.

After googling for literally 10 seconds: https://tensorwave.com/blog/how-to-train-an-llm-on-your-o...

(I knew this already from people who have actually done it themselves.)

As an impressive coincidence, I just replied to you less than 5 min ago here: https://lwn.net/Articles/1056071/, where you also expressed strong opinions about things you did not seem to know much about.

Chat bots don't learn

Posted Jan 27, 2026 10:21 UTC (Tue) by donaldh (subscriber, #151569) [Link] (6 responses)

He's right though, chat bots don't learn. Chat bots are pure token generation with static LLM weights and whatever is in the context window, just as taladar said. All training happens before models get deployed into production. Sure you can capture transcripts of chat bot sessions and use all that data for a new training run to produce a new model. Then you can deploy the new model but the chat bot doesn't learn.

Chat bots don't learn

Posted Jan 28, 2026 3:38 UTC (Wed) by marcH (subscriber, #57642) [Link] (5 responses)

> Sure you can capture transcripts of chat bot sessions and use all that data for a new training run to produce a new model.

There is a lot more than this:

https://cloud.google.com/blog/products/ai-machine-learnin...
https://stackoverflow.blog/2024/12/05/four-approaches-to-...

> He's right though, chat bots don't learn.

Strong yet apparently not very informed opinion.

Chat bots don't learn

Posted Jan 28, 2026 9:01 UTC (Wed) by taladar (subscriber, #68407) [Link]

Those might be the theoretical state of the art, but none of the current major LLMs actually use them, not even for very simple things like a RAG system to retrieve, e.g., library documentation for the version of the library in use when trying to write code against that library.

Chat bots don't learn

Posted Jan 28, 2026 9:41 UTC (Wed) by donaldh (subscriber, #151569) [Link] (3 responses)

For a chat bot to learn, the LLM model weights would need to change. But the LLM model weights are immutable in an inference runtime, so they cannot and do not change. Any training mechanism such as fine-tuning, LoRA or reinforcement learning is akin to manufacturing a new LLM, which is then deployed read-only for chat bot use.

When an LLM is deployed in an inference runtime, the only thing that augments the LLM model state is the context window. Things that feed the context window are the prompt and context augmentation approaches such as RAG and MCP.

I'm happy to stick with my assertion that chat bots don't learn, because saying that they learn implies that chat bots gradually increase in capability, when the reality is more like manufacturing a new chat bot.
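
As a toy sketch of that distinction (invented names, not any real inference runtime): the weights never change, and everything "learnt" lives and dies with the session's context window.

    # Frozen "weights": fixed when the model is built, never mutated at inference.
    WEIGHTS = {"greeting": "Hello!", "fallback": "I don't know."}

    def generate(context, prompt):
        # The only mutable state is the context window.
        context.append(prompt)
        if any("hello" in turn.lower() for turn in context):
            return WEIGHTS["greeting"]
        return WEIGHTS["fallback"]

    session = []                              # the context window
    print(generate(session, "hello there"))   # -> Hello!
    print(generate(session, "remember me?"))  # -> Hello! (context retained)

    session = []                              # new session: nothing was learnt
    print(generate(session, "remember me?"))  # -> I don't know.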

Chat bots don't learn

Posted Jan 28, 2026 20:13 UTC (Wed) by Cyberax (✭ supporter ✭, #52523) [Link] (2 responses)

But it's already doable. There are architectures where a separate LLM looks at transcripts of conversations, extracts new knowledge from them, and runs a fine-tuning job on the weights. Kinda like the human brain integrates new knowledge during sleep, if you think about it.

Chat bots don't learn

Posted Jan 29, 2026 8:53 UTC (Thu) by taladar (subscriber, #68407) [Link] (1 responses)

If by possible you mean someone has theorized that it should be doable, sure. If by possible you mean any of the AI coding tools out there actually use it, then no, it is not possible.

Chat bots don't learn

Posted Jan 29, 2026 21:21 UTC (Thu) by Cyberax (✭ supporter ✭, #52523) [Link]

It's "possible" in the sense that I know a company that does exactly that in their AI architecture. They're working on a specialized management system.

You can also do that with your coding tools, if you want. OpenAI and Gemini both have fine-tuning APIs, so you can use them to "bake" your project assumptions into your model.
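
As a sketch of what that looks like with the OpenAI Python SDK (the training file and base model here are placeholders; check the current fine-tuning documentation for supported models):

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Upload JSONL chat-format examples encoding your project's assumptions,
    # then start a fine-tuning job against a base model.
    training = client.files.create(
        file=open("project_examples.jsonl", "rb"),  # placeholder file name
        purpose="fine-tune",
    )
    job = client.fine_tuning.jobs.create(
        training_file=training.id,
        model="gpt-4o-mini-2024-07-18",  # placeholder base model
    )
    print(job.id)  # poll client.fine_tuning.jobs.retrieve(job.id) for status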

Why has nobody done that? Well, have you _seen_ the quality of the current AI coding tools? It's vibe-coded slop pushed out of the door without even minimal testing. We're still in a "land rush" mode.

Chatbots learning

Posted Jan 27, 2026 10:41 UTC (Tue) by farnz (subscriber, #17727) [Link] (5 responses)

You have to be very careful with statements like "chat bots can learn", because people can interpret that as meaning something it doesn't.

In the current state of the art, the chat bot is either in training or inference mode; in training mode, it learns but cannot produce output, while in inference mode, it produces output but cannot learn long-term. They can, however, learn short-term - they have a context window, and as long as something's in that context window, they learn from it - but that's short-term, and a new session, or the information leaving the context window, will cause it to "forget" what it learnt.

In turn, this means that if you train a bot to do your job, and it's doing it to 90% of your ability, it will never pick up that last 10%; if you trained a human to do your job, and they were doing it to 90% of your ability, they might well learn the last 10% of what you could do over time.

Chatbots learning

Posted Jan 28, 2026 3:41 UTC (Wed) by marcH (subscriber, #57642) [Link] (3 responses)

> You have to be very careful with statements like ...

> In the current state of the art, the chat bot is either in training or inference mode.

Again, googling for 10 seconds shows that things are much more varied than that; see the examples above. Where did your careful "state of the art" come from?

Chatbots learning

Posted Jan 28, 2026 10:02 UTC (Wed) by farnz (subscriber, #17727) [Link] (2 responses)

From reading the examples you gave, when they first entered my Google Discover feed.

There are two approaches to learning in the examples you gave:

  1. Pre-populate the context window. This can be via a system prompt, or by saved sessions - but the bot has not learnt for the long term, since if the "education" leaves the context window, it forgets it.
  2. Put the bot into training mode, and give it further training - fine-tuning is an example of that. In this case, you're not generating fresh output - you're training the bot.

They also describe RAG, which is a way for a bot to feed itself further context by using the output as search terms - where the retrieved documents are then added to the context window.
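
A minimal sketch of that RAG loop, with a naive word-overlap retriever and a stub standing in for the real model call:

    def retrieve(query, corpus, k=2):
        # Naive retrieval: rank documents by how many words they share with
        # the query (real systems use embeddings and a vector store).
        words = set(query.lower().split())
        ranked = sorted(corpus.items(),
                        key=lambda kv: -len(words & set(kv[1].lower().split())))
        return [text for _, text in ranked[:k]]

    def llm_complete(prompt):
        # Stub for the real model call.
        return "[reply generated from %d chars of context]" % len(prompt)

    def answer(query, corpus):
        # Retrieved documents are added to the context window for this call
        # only; the model's weights are untouched, so nothing is "learnt".
        docs = retrieve(query, corpus)
        return llm_complete("\n".join(docs) + "\n\nQuestion: " + query)

    corpus = {"a": "curl supports HTTP FTP and more",
              "b": "the context window is short-term memory"}
    print(answer("what protocols does curl support", corpus))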

None of the links you've given show any sign of learning while doing inference - they're all showing techniques for doing one of the two options I've outlined without starting from scratch.

And if the "learning while doing" problem was cracked, I'd expect to hear about it - the frontier labs (Anthropic, OpenAI etc) are actively researching ways for a model to change its stored weights to match a given context window, so that it can do long-term learning. But that's still not something that's in the latest papers from the labs - if they've cracked it, they're not yet ready to publish or shout about it.

Chatbots learning

Posted Jan 29, 2026 16:58 UTC (Thu) by nix (subscriber, #2304) [Link] (1 responses)

And growing the context window has quadratic costs in the length of the window. It cannot and will not go up indefinitely, which means there's a strictly limited amount you can tell it -- and they hardly ever let you know when you've exhausted the window, either.
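
Back of the envelope: naive self-attention scores every pair of tokens in the window, so doubling the window quadruples the work.

    # Pairwise attention scores grow with the square of the window length.
    for n in (8_000, 16_000, 32_000, 64_000):
        print("%7d tokens -> %17s pairwise scores" % (n, format(n * n, ",")))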

Context window

Posted Jan 29, 2026 17:44 UTC (Thu) by farnz (subscriber, #17727) [Link]

Note, too, that current frontier models have sophisticated eviction behaviour when it comes to the context window - they can evict "less important" things in preference to "more important", at least in the view of the model. That means that even if you were told when you've exhausted the context window, you don't know what's in it - the model could be retaining older context in preference to newer context.

Ultimately, the direction of travel is teaching models how to tune their own weights online, based on what they're seeing. But this is active research right now - currently using a second model to decide what is long-term memory, and what isn't.

Chatbots learning

Posted Jan 28, 2026 7:15 UTC (Wed) by Wol (subscriber, #4433) [Link]

> In the current state of the art, the chat bot is either in training or inference mode; in training mode, it learns but cannot produce output, while in inference mode, it produces output but cannot learn long-term

There's nothing new under the sun ...

Wasn't Prolog doing something similar back in the '80s? Or even the '70s? I'm not sure whether the systems I heard about could extend their databases themselves, or needed a human to do it for them.

Cheers,
Wol

The list

Posted Jan 26, 2026 19:25 UTC (Mon) by adobriyan (subscriber, #30858) [Link]

I would have kept exactly one (correct) strncpy instance to act as a honeypot for the clueless.

The list

Posted Jan 27, 2026 7:32 UTC (Tue) by error27 (subscriber, #8346) [Link] (3 responses)

It's absolutely infuriating when you ask someone a question and they just feed your question into an AI and copy and paste the answer. This is happening more and more.

I live in Africa, and the internet came to Africa late. I got to experience the growth of the internet in America in the 1990s and then again in Africa. It was funny to see attitudes towards spam evolve from "Don't be mean, he's just trying to sell his stuff. Everyone needs to make a living." to just auto-banning. Eventually something similar will happen with AI.

The list

Posted Jan 27, 2026 8:44 UTC (Tue) by marcH (subscriber, #57642) [Link] (2 responses)

The "global village" idea where everyone can contact everyone else directly was the most stupid idea ever. Social media made it real. That went well.

One of the funniest things is parents scared of letting their kids walk to school because of "stranger danger" while letting the same kids onto the internet, within reach of 8 billion strangers.

Part of being a "tribal" species is the inability to comprehend large numbers, probabilities and risks. And maybe just maths in general.

The list

Posted Jan 27, 2026 8:48 UTC (Tue) by error27 (subscriber, #8346) [Link] (1 responses)

I love the global village. I loved having conversations with Alan Cox on slashdot back in the day.

The list

Posted Jan 27, 2026 8:51 UTC (Tue) by marcH (subscriber, #57642) [Link]

You loved it before everyone was on it.

Chatting across the world is great, don't get me wrong. I love that too. It's letting any random person out of 8 billion enter the chat room that rarely ever works.

Cobra Effect

Posted Jan 27, 2026 13:46 UTC (Tue) by yoshi314 (guest, #36190) [Link] (1 responses)

If people get rewarded for finding bugs, they will look for low-hanging fruit and then insist that every bug is of maximum severity. They will basically optimize for maximum reward with minimum effort.

We've all been there, done that.

Cobra Effect

Posted Jan 27, 2026 14:54 UTC (Tue) by chris_se (subscriber, #99706) [Link]

According to Daniel's talk at last year's FrOSCon, <https://media.ccc.de/v/froscon2025-3407-ai_slop_attacks_o...>, the main problem is that LLMs have exacerbated the problem to such a degree that it has become a huge drain on the project. It's worth listening to that talk to see what has changed since 2022.

CTF may be a better approach

Posted Jan 28, 2026 15:43 UTC (Wed) by bjackman (subscriber, #109548) [Link]

kCTF [0] is essentially a bug-bounty program, but there's not really any space for pure slop, because of two rules:

1. You don't get paid unless you show us that you can exploit the vuln, by exploiting it on a system we control. (We have special systems for this, obviously.)

2. You don't get paid unless the bug is fixed upstream and the author of the fix explicitly credits you. One of the easiest ways to make them do that is to author the fix yourself.

There is still a "low-hanging fruit" issue: there is a "metagame" where we need to regularly tweak the parameters of the system, otherwise researchers just find one deep seam of broken code and pwn it over and over again. It's useful to know about the deep seam, but it's not that useful to know about every individual bug in nf_tables that can be exploited from a netns.

Maybe the reason this works for us is that, while Linux is super generic, we have a pretty specific use case in mind, so we can set up a practical simulation of it. The curl maintainers might not have such a narrow focus.

[0] https://google.github.io/security-research/kernelctf/rule...

