
"AI" alarmism

"AI" alarmism

Posted Sep 3, 2025 15:02 UTC (Wed) by chris_se (subscriber, #99706)
In reply to: "AI" alarmism by valderman
Parent article: The hidden vulnerabilities of open source (FastCode)

> Judging by the code I've seen produced by some LLM enthusiasts I've worked with, I'm similarly skeptical about an LLM being able to generate even a single "solid" (or even "perfectly normal") commit, let alone several months worth of them.

Very much this. It's hard to accurately predict the future, especially 10 or more years out, but judging by the current state of LLMs for code generation, they are _very_ far away from being able to successfully pull off such an attack.

If I wanted to use LLMs for supply-chain attacks right now, a much better approach would be to use them to semi-automatically spam projects with low-quality but not easily detectable contributions that tie up other maintainers' time in rejecting them. A sophisticated human could then weasel their way into becoming a co-maintainer of the project (especially by helping to triage and reject this LLM-generated spam). Once they are in, they ramp up the LLM spam even further to distract the other maintainers, and then insert the malicious code (with proper human-made obfuscation) into the project.



