The hidden vulnerabilities of open source (FastCode)
Open source maintainers, already overwhelmed by legitimate contributions, have no realistic way to counter this threat. How do you verify that a helpful contributor with months of solid commits isn't an LLM-generated persona? How do you distinguish between genuine community feedback and AI-created pressure campaigns? The same tools that make these attacks possible are largely inaccessible to volunteer maintainers, who lack the resources, skills, or time to deploy defensive processes and systems.

The detection problem becomes exponentially harder when LLMs can generate code that passes all existing security reviews, contribution histories that look perfectly normal, and social interactions that feel authentically human. Traditional code analysis tools will struggle against LLM-generated backdoors designed specifically to evade detection. Meanwhile, the human intuition that spots social engineering attacks becomes useless when the "humans" are actually sophisticated language models.
silly premise
Posted Sep 2, 2025 15:13 UTC (Tue)
by HenrikH (subscriber, #31152)
[Link] (3 responses)
Posted Sep 2, 2025 15:54 UTC (Tue)
by wtarreau (subscriber, #51152)
[Link]
Right now you almost never get a response to any context question for these reports.
And when chatbots are smart enough to discuss the review in real time and make it look legit, they'll also be smart enough to run the review as well. Bots will talk to bots, and this will go who-knows-where.
On the other hand, I wouldn't bet much on the long life of closed source, where code is already being generated in part by chatbots but there's nobody to control it. The only ones who see it are those doing it for a living, who are incentivized to disassemble code, run bindiffs, etc., where the problems remain visible.
Posted Sep 2, 2025 16:02 UTC (Tue)
by jafd (subscriber, #129642)
[Link] (1 responses)
Note that in the specific XZ case, the bit where it all went downhill was not repository access itself, but the ability to roll out releases and put things in the tarball that had never even been in the git repository; those who built from git didn't contract the vulnerability. One way to interpret this is that the commit history makes nefarious things much shallower than delving into release tarballs, which usually also contain generated code.
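A cheap check this observation suggests (my own sketch, not something jafd proposed): compare the release tarball's file list against the files tracked in git, and flag anything extra. The helper below is hypothetical; in practice you would feed it the output of `tar -tf` and `git ls-tree -r --name-only`.

```python
def tarball_extras(tarball_files, git_files):
    """Return files present in the release tarball but absent from git.

    Legitimately generated files (configure, Makefile.in, ...) will show
    up too, so a reviewer still has to triage the list -- but a smuggled
    m4 file would at least be visible rather than hidden in an opaque
    tarball.
    """
    return sorted(set(tarball_files) - set(git_files))

# Toy illustration loosely modelled on the xz incident: the malicious
# build-to-host.m4 shipped only in the tarball, never in the repository.
git_files = ["configure.ac", "src/liblzma/check/crc64_fast.c", "m4/ax_pthread.m4"]
tarball_files = git_files + ["configure", "m4/build-to-host.m4"]

print(tarball_extras(tarball_files, git_files))
# ['configure', 'm4/build-to-host.m4']
```

The triage burden is the catch: projects that ship lots of autotools output produce a long "extras" list, which is why distributions increasingly prefer building from the git tag directly.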
Posted Sep 10, 2025 16:33 UTC (Wed)
by zwol (guest, #126152)
[Link]
It's true that a key piece of the XZ attack payload was only in distribution tarballs, but most of it was checked in, concealed as test data. Given the same social conditions -- an overworked lone maintainer delighted to have help from anyone -- I can easily imagine the same kind of attack being just as successful (or more!) with all of it checked in.

"AI" alarmism
Posted Sep 2, 2025 19:30 UTC (Tue)
by valderman (subscriber, #56479)
[Link] (1 responses)
Judging by the code I've seen produced by some LLM enthusiasts I've worked with, I'm similarly skeptical about an LLM being able to generate even a single "solid" (or even "perfectly normal") commit, let alone several months worth of them.
Posted Sep 3, 2025 15:02 UTC (Wed)
by chris_se (subscriber, #99706)
[Link]
Very much this. It's hard to accurately predict the future, especially 10 or more years out, but looking at the current state of LLMs for code generation, they are _very_ far from being able to successfully pull off such an attack.
If I wanted to use LLMs for supply-chain attacks right now, the much better approach would be to use them to semi-automatically spam plausible-looking but low-quality contributions that tie up other maintainers' resources in rejecting them. A sophisticated human could then weasel their way into becoming co-maintainer of the project (especially by helping triage and reject these LLM-based spam contributions); once they are in, they ramp up the LLM spam even more to distract the other maintainers, and then insert the malicious code (with proper human-made obfuscation) into the project.
Posted Sep 3, 2025 5:42 UTC (Wed)
by oldtomas (guest, #72579)
[Link]
The only "legit" application I've seen for these generative LLMs is generating bullshit [1], i.e. stuff where the "author" doesn't give a rat's ass whether it's true or false -- it just has to "sound" true [2]. The ad industry. Corporate speak. Simulated truth. The old contract -- that when we engage in a discussion, I care about my "truth" as you care about yours, and that we are both willing to adjust our positions -- went up the hot exhaust of a big data center.
We had that before, mind you, but it was pretty "artisanal", now it's there, on an industrial scale. Now you can truly flood the zone. Needless to say, this has a destructive effect on society. At the price of a couple o'nuclear reactors, what's not to like?
[1] in the Harry Frankfurt sense, as described by Townsend Hicks, Humphries and Slater in https://link.springer.com/article/10.1007/s10676-024-09775-5
[2] Of course they try not to run afoul of current law; that's why they need so much human labour (*lots* of it, preferably somewhere with low wages) to put in guardrails. Outlier (funded by Peter Thiel) is one of those. Luckily, there are many moderation workers available now, because "social" networks don't need them anymore. Eventually we will also fix that pesky law thing. Nice research (alas, in German): https://taz.de/Ausbeutung-im-Tech-Sektor/!6102646/

Sensationalist
Posted Sep 3, 2025 6:49 UTC (Wed)
by Vorpal (guest, #136011)
[Link] (3 responses)
On the other hand, when it comes to larger "agentic" edits, I'm less positive. The only thing I have found it to do reliably is simple refactoring ("extract each widget init code into a separate function", "convert this chain of if-else into a switch statement", ...). But it is very slow at doing it, and I still need to carefully review what it did. So I don't actually save any time (quite the opposite). I save some key presses and mouse clicks, yes (which with RSI is still attractive), but overall it is slower.
So, to tie it back to this sensationalist quote: using AI will not make it easier to do xz attacks. It will make it easier to create spam PRs to take up the time of reviewers, though. So at most you can mount a DoS attack with it. As it currently stands, at least. (And I don't think the singularity is around the corner.)
Posted Sep 3, 2025 8:47 UTC (Wed)
by Wol (subscriber, #4433)
[Link] (2 responses)
Sorry to do an ad on LWN, but I use Logitech ergonomic kit. Take a look at the K860 keyboard if you haven't already; you can get a cheap Perixx, and the Rolls-Royce used to be Maltron. And the Ergo M575 trackball. I have shoulder problems, and while these take some getting used to, they're brilliant. Typing now, my left wrist is hardly moving at all, and is straight, not twisted! (As a guitarist, I'm a six-fingered typist - proper four fingers on the left, hunt-and-peck with two on the right :-)
Cheers,
Wol
Posted Sep 4, 2025 18:17 UTC (Thu)
by vonbrand (subscriber, #4458)
[Link]
For RSI there are several wrist supports, some just a velcro-fastened strip with a hole to put the thumb through. I had a very mild case once, was advised to try one, and it works like magic. I was extremely skeptical.
Posted Sep 5, 2025 12:28 UTC (Fri)
by nix (subscriber, #2304)
[Link]
Six degrees of separation
Posted Sep 4, 2025 23:33 UTC (Thu)
by johnfrombluff (guest, #90350)
[Link] (1 responses)
However, the sensationalist aspect is predicated on the current ability to spot LLM-generated code or email interactions. As such, it's worth pondering that this ability may soon erode as LLMs advance and the ability of bad actors to leverage them increases. The only response I can see is a web of trust, similar to OpenPGP key-signing parties.
Would that be feasible in day-to-day practice? (I am a hobbyist coder, not a professional.) Could a maintainer do their job while only accepting commit requests from parties in that web of trust? Famously, everyone in the world is supposedly reachable via someone-who-knows-someone-who-knows-someone, etc.
Workable?
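To make the question concrete, here is a minimal sketch (my assumptions: a trust graph as a dict of who-signed-whom, and a hop limit like GnuPG's max-cert-depth) of bounded transitive trust. The six-degrees property is exactly the problem: with no depth limit, nearly everyone ends up "trusted", so a workable web of trust needs a small cap.

```python
from collections import deque

def trusted_within(web, root, max_depth):
    """Breadth-first search over a signature graph: who is reachable
    from `root` in at most `max_depth` hops? Comparable in spirit to
    GnuPG's max-cert-depth option."""
    seen = {root: 0}  # person -> distance from root
    queue = deque([root])
    while queue:
        person = queue.popleft()
        if seen[person] == max_depth:
            continue  # don't extend trust past the cap
        for signed in web.get(person, ()):
            if signed not in seen:
                seen[signed] = seen[person] + 1
                queue.append(signed)
    return set(seen) - {root}

# Hypothetical graph: the maintainer signed alice and bob; alice signed
# carol; carol signed dave.
web = {
    "maintainer": ["alice", "bob"],
    "alice": ["carol"],
    "carol": ["dave"],
}
print(sorted(trusted_within(web, "maintainer", 2)))   # alice, bob, carol
print(sorted(trusted_within(web, "maintainer", 99)))  # ...plus dave
```

With a cap of 2, dave (three hops out) is excluded; with no effective cap, everyone is in, which illustrates why "everyone is reachable" cuts against, not for, naive transitive trust.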
Posted Sep 5, 2025 1:35 UTC (Fri)
by neilbrown (subscriber, #359)
[Link]
Trust of people is important in software development, but it mostly relates to the social aspects. Code must be analyzed and tested on the assumption that it is buggy no matter who wrote it.
I trust various people, and I trust different things about each. I trust this person's opinion on food, that person's opinion on music, the other person's judgement of character. In at most one of those cases is there any possibility of transitivity, and it is very limited.
