Human authorship?
Posted Jul 1, 2025 18:37 UTC (Tue)
by Cyberax (✭ supporter ✭, #52523)
In reply to: Human authorship? by paulj
Parent article: Supporting kernel development with large language models
I can see that in the future, source code repos will have the LLM-generated source code along with the list of prompts that produced it. Lawyers will then argue about where exactly copyright protection stops, e.g. whether a prompt like "a website with the list of issues extracted from Bugzilla" is creative enough, or whether it's just a statement of requirements.
Posted Jul 2, 2025 13:43 UTC (Wed)
by kleptog (subscriber, #1183)
[Link] (2 responses)
If the prompt is copyrightable, then the output is too. An LLM is just a tool. Photos don't lose copyright when you feed them through a tool, so why would an LLM be any different? You'd have to argue that an LLM is a fundamentally different kind of tool from anything else you use to process text, which I don't think is a supportable position.
Posted Jul 2, 2025 14:24 UTC (Wed)
by jani (subscriber, #74547)
[Link]
It's just not that clear-cut: https://www.skadden.com/insights/publications/2025/02/cop...
Posted Jul 2, 2025 18:55 UTC (Wed)
by Cyberax (✭ supporter ✭, #52523)
[Link]
But suppose we have this case: you build a web service to track sleep times using an LLM, and then I build a service to track blood-sugar data using an LLM.
The source code for them ends up 95% identical, simply because there are only so many ways to generate a simple CRUD app and we both used the same LLM version. Had you looked at these two code bases 15 years ago, it would have been a clear-cut case of copyright infringement.
But clearly that can't be the case anymore. Mere similarity of the code can't be used as an argument when LLMs are in play.
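To make the point concrete, here is a minimal sketch (all names invented for illustration): two toy "LLM-generated" CRUD services whose code differs only in domain vocabulary, with `difflib.SequenceMatcher` measuring how similar the boilerplate ends up.

```python
# Hypothetical sketch: two CRUD services that differ only in domain terms.
# difflib's ratio() shows the textual overlap of the shared boilerplate.
import difflib

sleep_service = """
class SleepRecord:
    def __init__(self, id, start, end):
        self.id = id
        self.start = start
        self.end = end

class SleepStore:
    def __init__(self):
        self.items = {}
    def create(self, item):
        self.items[item.id] = item
    def read(self, id):
        return self.items.get(id)
    def update(self, id, item):
        self.items[id] = item
    def delete(self, id):
        self.items.pop(id, None)
"""

# The "blood-sugar" service is the same skeleton with the nouns swapped.
glucose_service = (sleep_service
                   .replace("Sleep", "Glucose")
                   .replace("start", "reading")
                   .replace("end", "units"))

ratio = difflib.SequenceMatcher(None, sleep_service, glucose_service).ratio()
print(f"similarity: {ratio:.0%}")
```

The similarity lands well above what a 2010-era plagiarism check would flag, even though the two services were "written" independently.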