Parts of Debian dismiss AI-contributions policy
Posted May 13, 2024 11:00 UTC (Mon) by mb (subscriber, #50428)
In reply to: Parts of Debian dismiss AI-contributions policy by farnz
Parent article: Debian dismisses AI-contributions policy
Ok. I'm fine with that explanation.
AI programs and non-AI programs are both equally capable of transforming Copyrighted work into Public Domain. That makes sense.
>you have not given a source for why you believe that an AI will be treated differently
This discussion is the current source.
But the claim that AI is some kind of magic Copyright remover comes up over and over again. And that simply doesn't make sense, if it's not equally true for conventional algorithms.
You now explained that it's true for both. Which makes sense.
But that makes me come to the conclusion that Copyright actually is completely useless these days.
I'm fine with that, though. I publish mostly under permissive licenses these days, because I don't really care anymore what people do with the code. I would publish into the Public Domain, if I could.
Posted May 13, 2024 11:04 UTC (Mon)
by farnz (subscriber, #17727)
[Link] (41 responses)
I've not seen anyone other than you in the current discussion claiming that AI is a magic copyright remover; the closest I can see are some misunderstandings of what someone else was saying, and the (correct) statement that in the EU, copyright does not restrict you from feeding a work into an algorithm (but the EU is silent on what copyright implications there are to the output of that algorithm).
So, given that you brought it into discussion, I'd like to know where you got the idea that AI is a magic copyright remover from, so that I can consider debunking it at the source, not when it gets relayed via you.
Posted May 13, 2024 11:14 UTC (Mon)
by mb (subscriber, #50428)
[Link] (23 responses)
Come on. Look harder.
Posted May 13, 2024 13:26 UTC (Mon)
by farnz (subscriber, #17727)
[Link] (22 responses)
I've read every comment on this article, and you are the only person who claims that AI is a "magic copyright removal" tool. Nobody else makes that claim that I can see - a link to a comment where someone other than you makes that claim would be of interest.
The closest I can see is this comment, which could be summarized down to "it's already hard to enforce open source licensing in cases where literal copying can be proven, and making it harder to show infringement is going to make it harder to enforce copyright".
Posted May 13, 2024 13:57 UTC (Mon)
by mb (subscriber, #50428)
[Link] (21 responses)
Come on. It's even written in the article itself. Did you read it?
Posted May 13, 2024 14:21 UTC (Mon)
by farnz (subscriber, #17727)
[Link] (20 responses)
I did, and I do not see any claim that AIs are "magic copyright removal tools"; I see claims that AIs can be used to hide infringement, but not that their output cannot be a derived work of their inputs.
Indeed, I see the opposite - people being concerned that someone will use an AI to create something that later causes problems for Debian, since it infringes copyrights that then get enforced.
Posted May 13, 2024 15:05 UTC (Mon)
by mb (subscriber, #50428)
[Link] (19 responses)
> He specified "commercial AI" because "these systems are copyright laundering machines"
> that abuse free software
[1] I'm not going to search the internet for you.
Posted May 13, 2024 15:09 UTC (Mon)
by farnz (subscriber, #17727)
[Link] (18 responses)
That's not a claim that AIs are magical copyright removing tools; that's a claim that AIs hide infringement - in English, if something is a "laundering machine", it cleans away the origin of something, but doesn't remove the fact that it was originally dirty.
Again, I ask you to point to where you're getting your claim from. This is now the third time I've asked you to identify the source; so far I've been pointed to things that don't support the idea that AIs are "magical copyright removal machines", and I've had you insult my reading ability because I dared to question you.
Posted May 13, 2024 15:13 UTC (Mon)
by mb (subscriber, #50428)
[Link] (17 responses)
Posted May 13, 2024 15:19 UTC (Mon)
by farnz (subscriber, #17727)
[Link] (16 responses)
The whole reason I'm asking is that "AI as magic copyright removal machine" is a falsehood, and you've clearly picked that up from somewhere; rather than trying to correct people who pick it up from the same source, I'd like to go back to the source and correct it there.
Now, if it's a simple misunderstanding of what the article says, there's not a lot that can be done there - English is a horrorshow of a language at the best of times - but if you've picked it up from something someone else has actually claimed, that claim can be corrected at source.
Posted May 13, 2024 15:32 UTC (Mon)
by mb (subscriber, #50428)
[Link] (15 responses)
No.
And I really don't understand why you insist so hard that there is anything wrong with that. Because there isn't.
I understand that you don't like these words, for whatever reason, but that's not really my problem to solve.
Thanks a lot for trying to correct me. I learnt a lot in this discussion. But I will continue to use these words, because in my opinion these words describe very well what actually happens.
Just like I call my washing machine the "magic dirt removal machine", I call AI source code transformers "magic copyright removal machines". It's an abbreviation to point out the fact that something that had been there in the input is no longer there in the output after a processing step.
Posted May 13, 2024 15:40 UTC (Mon)
by bluca (subscriber, #118303)
[Link] (13 responses)
The point is, that's not a fact at all, as it has already been explained many times.
Posted May 13, 2024 15:47 UTC (Mon)
by mb (subscriber, #50428)
[Link] (12 responses)
You guys have to decide on something. Both can't be true. There is nothing in-between. There is no such thing as "half-GPLed".
If not, then it has been removed (laundered).
This is not Schrödinger's LLM.
Posted May 13, 2024 16:32 UTC (Mon)
by bluca (subscriber, #118303)
[Link] (10 responses)
Now, again as it has already been explained, whether the output of a prompt is copyrightable, and whether it's a derived work of existing copyrighted material, is an entirely separate question that depends on many things, but crucially, not on which tool happened to have been used to write it out.
Posted May 13, 2024 16:53 UTC (Mon)
by mb (subscriber, #50428)
[Link] (5 responses)
Ok. Got it now. So
>the fact that something that had been there in the input is no longer there in the output after a processing step.
is true after all.

> but under the copyright exception granted by the law, which trumps any license you might attach to it.

The input was copyright protected and the special exception made it non-copyright-protected because of reasons.
And for whatever strange reason that only applies to AI algorithms, because the EU says so.
Posted May 13, 2024 17:15 UTC (Mon)
by farnz (subscriber, #17727)
[Link]
> >the fact that something that had been there in the input is no longer there in the output after a processing step.
> is true after all.
> The input was copyright protected and the special exception made it non-copyright-protected because of reasons.
> And for whatever strange reason that only applies to AI algorithms, because the EU says so.

No, this is also false.
Copyright law says that there are certain actions I am capable of taking, such as making a literal copy, or a "derived work" (a non-literal copy), which the law prohibits unless you have permission from the copyright holder. There are other actions that copyright allows, such as reading your text, or (in the EU) feeding that text as input to an algorithm; they may be banned by other laws, but copyright law says that these actions are completely legal.
The GPL says that the copyright holder gives you permission to do certain acts that copyright law prohibits as long as you comply with certain terms. If I fail to comply with those terms, then the GPL does not give me permission, and I now have a copyright issue to face up to.
The law says nothing about the copyright protection on the output of the LLM; it is entirely plausible that an LLM will output something that's a derived work of the input as far as copyright law is concerned, and if that's the case, then the output of the LLM infringes. Determining if the output infringes on a given input is done by a comparison process between the input and the output - and this applies regardless of what the algorithm that generated the output is.
Further, this continues to apply even if the LLM itself is not a derived work of the input data; it might be fine to send you the LLM, but not to send you the result of giving the LLM certain prompts as input, since the result of those prompts is derived from some or all of the input in such a way that you can't get permission to distribute the resulting work.
Posted May 13, 2024 17:15 UTC (Mon)
by bluca (subscriber, #118303)
[Link] (2 responses)
No, because what you are stubbornly refusing to understand, despite it having been explained a lot of times, is:
> Now, again as it has already been explained, whether the output of a prompt is copyrightable, and whether it's a derived work of existing copyrighted material, is an entirely separate question that depends on many things, but crucially, not on which tool happened to have been used to write it out.
This is a legal matter, not a programming one. The same paradigms used to understand software cannot be used to try and understand legal issues.
Posted May 13, 2024 17:33 UTC (Mon)
by mb (subscriber, #50428)
[Link] (1 responses)
Yes. That is the main problem. It does not have to make logical sense for it to be "correct" under law.
> stubbornly
I am just applying logical reasoning. The logical chain obviously is not correctly implemented. Which is often the case in law, of course. Just like the logical reasoning chain breaks if the information goes through a human brain. And that's Ok.
I am just saying that people claiming things here like "it's *obvious* that LLMs are like this and that w.r.t. Copyright" are plain wrong. Nothing is obvious in this context. It's partly counter-logical and defined with contradicting assumptions.
But that's Ok, as long as a majority agrees that it's fine.
But that doesn't mean I personally have to agree. Copyright is a train wreck and it's only getting worse and worse.
Posted May 14, 2024 5:11 UTC (Tue)
by NYKevin (subscriber, #129325)
[Link]
Unfortunately, while that article is very well-written and generally illuminates the right way to think about verbatim copying, it can be unintentionally misleading when we're talking about derivative works. The "colors" involved in verbatim copying are relatively straightforward - either X is a copy of Y, or it is not, and this is purely a matter of how you created X. But when we get to derivative works, there are really two* separate components that need to be considered:
- Access (a "color" of the bits, describing whether the defendant could have looked at the alleged original).
- Similarity (a function of the bits, and not a color).
The problem is, if you've been following copyright law for some time, you might be used to working in exclusively one mode of analysis at a time (i.e. either the "bits have color" mode or the "bits are colorless" mode). But access is a colored property, and similarity is a colorless property. You need to be prepared to combine both modalities, or at least to perform each of them sequentially, in order to reason correctly about derivative works. You cannot insist that "it must be one or the other," because as a matter of law, it's both.
* Technically, there is also the third component of originality, but that only matters if you want to copyright the derivative work, which is an entirely different discussion altogether. That one is also a "color" which depends on how much human creativity has gone into the work.
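To make the two modalities concrete, here is a minimal toy sketch (an editorial illustration only: the names and the crude similarity check are invented, and nothing here reflects an actual legal test):

    # Toy model of the two-component derivative-work analysis above.
    # "Access" is a colored property: provenance, not computable from the bits.
    # "Similarity" is colorless: computed from the bits alone.
    from dataclasses import dataclass

    @dataclass
    class Work:
        text: str
        author_had_access: bool  # provenance metadata, a "color" of the bits

    def substantially_similar(a: str, b: str) -> bool:
        # Crude stand-in for a real comparison such as the
        # abstraction-filtration-comparison test: any shared
        # 40-character span counts as "similar".
        return any(a[i:i + 40] in b for i in range(max(len(a) - 39, 1)))

    def may_be_infringing_derivative(candidate: Work, original: Work) -> bool:
        # The two modalities combine: the colored fact of access AND the
        # colorless function of similarity. Neither alone decides the question.
        return candidate.author_had_access and substantially_similar(
            original.text, candidate.text)

Note how neither check can be dropped: similarity without access is independent creation, and access without similarity is influence, not copying.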
Posted May 13, 2024 22:18 UTC (Mon)
by mirabilos (subscriber, #84359)
[Link]
No, wrong.
They’re copyright-protected, but *analysing* copyright-protected works for text and data mining is an action permitted without the permission of the rights holders.
See my other post in this subthread. This limitation of copyright protection does not extend to doing *anything* with the output of such models.
Posted May 13, 2024 22:00 UTC (Mon)
by mirabilos (subscriber, #84359)
[Link] (3 responses)
Text and data mining are opt-out, and the opt-out must be machine-readable. But this limitation of copyright only applies to doing automated analysēs of works to obtain information about patterns, trends and correlations.
(I grant that creating an LLM model itself probably falls under this clause.)
But not only must the copies of works made for text and data mining be deleted as soon as they are no longer necessary for this purpose (which these models clearly don't do, given how it's possible to obtain "training data" by the millions); the exception also does not allow you to reproduce the output of such models.
Text and data mining is, after all, only permissible to obtain “information about especially patterns, trends and correlations”, not to produce outputs as genAI does.
Therefore, the limitation of copyright does NOT apply to LLM output, and therefore the normal copyright rules (i.e. mechanically combined of its inputs, whose licences hold true) apply.
Posted May 13, 2024 22:44 UTC (Mon)
by bluca (subscriber, #118303)
[Link] (2 responses)
It doesn't say that it must be deleted, it says:
> Reproductions and extractions made pursuant to paragraph 1 may be retained for as long as is necessary for the purposes of text and data mining.
Not quite the same thing. I don't know whether it's true that verbatim copies of training data are actually stored as you imply, as I am not a ML expert - it would seem strange and pointless, but I don't really know. But even assuming that was true, if that's required to make the LLM work, then the regulation clearly allows for it.
> it also does not allow you to reproduce the output of such models.
Every LLM producer treats such instances as bugs to be fixed. And they are really hard to reproduce, judging from how contrived and tortured the sequence of prompts needs to be to make that actually happen. The NYT had to basically copy and paste portions of their own articles into the prompt to make ChatGPT spit them back, as shown in their litigation vs OpenAI.
> Therefore, the limitation of copyright does NOT apply to LLM output, and therefore the normal copyright rules (i.e. mechanically combined of its inputs, whose licences hold true) apply.
And yet, the NYT decided to sue in the US, where the law is murky and based on fair use case-by-case decisions, rather than in the EU where they have an office and it would have been a slam dunk, according to you. Could it be that you are wrong? It's very easy to test it, why don't you sue any of the companies that publishes an LLM and see what happens?
Posted May 13, 2024 23:16 UTC (Mon)
by mirabilos (subscriber, #84359)
[Link] (1 responses)
Doesn’t change the fact that it is possible, and in sufficient amount to consider that models contain sufficient amounts from their input works for the output to be a mechanically produced derivative of them.
Posted May 13, 2024 23:57 UTC (Mon)
by bluca (subscriber, #118303)
[Link]
They are treated as bugs because they are bugs, despite the contrived and absurd ways that are necessary to reproduce them. Doesn't really prove anything.
> Doesn’t change the fact that it is possible, and in sufficient amount to consider that models contain sufficient amounts from their input works for the output to be a mechanically produced derivative of them.
Illiterate FUD. Go to court and prove that, if you really believe that.
Posted May 13, 2024 16:58 UTC (Mon)
by farnz (subscriber, #17727)
[Link]
The output from the LLM is almost certainly GPLed in as far as the output from the LLM is (per copyright law) a derived work of the GPLed input. The complexity is that not all LLM outputs will be a derived work as far as copyright law is concerned, and where they are not derived works, there is no copyright, hence there is nothing to be GPLed.
And that's the key issue - the algorithm between "read a work as input" and "write a work as output" is completely and utterly irrelevant to the question of "does the output infringe on the copyright applicable to the input?". That depends on whether the output is, using something like an abstraction-filtration-comparison test, substantially the same as the input, or not.
For example, I can copy

    if (ret) {
        if (ret == 1)
            ret = 0;
        goto cleanup;
    }

directly from the kernel source code into another program, and that has no GPL implications at all, even though it's a literal copy-and-paste of 5 lines of kernel code that I received under the GPL. However, if I copy a different 5 lines of kernel code, I am plausibly creating a derived work, because I'm copying something relatively expressive.
This is why both can be true; as a matter of law, not all copying is copyright infringement, and thus not all copying has GPL implications when the input was GPLed code.
Posted May 13, 2024 15:42 UTC (Mon)
by Wol (subscriber, #4433)
[Link]
So to describe an AI as a "magical copyright laundering machine" is to admit / claim that it's involved in illegal activity.
Cheers,
Wol
Posted May 13, 2024 12:26 UTC (Mon)
by anselm (subscriber, #2796)
[Link] (16 responses)
> the EU is silent on what copyright implications there are to the output of that algorithm

Elsewhere in copyright law, the general premise is that copyright is only available for works which are the “personal mental creation” of a human being. Speciesism aside, something that comes out of an LLM is obviously not the personal mental creation of anyone, and that seems to take care of that, even without the EU pronouncing on it in the context of training AI models.
Posted May 13, 2024 13:26 UTC (Mon)
by kleptog (subscriber, #1183)
[Link] (4 responses)
LLMs are prompted, they don't produce output out of thin air. Therefore the output is the creation of the human that triggered the prompt. Now whether that person was pressing buttons on a device that sent network packets to a server that processed all those keystrokes into a block of text to be sent to an LLM in the cloud is irrelevant. Somewhere along the way a human decided to invoke the LLM and controlled which input to send to it and what to do with the output. That human being is responsible for respecting copyright. Whether the output is copyrightable depends mostly on how original the prompt is.
The idea that LLM output cannot be copyrighted is silly. That would be like claiming that documents produced by a human typing into LibreOffice cannot be "the personal mental creation of anyone". LLMs, like LibreOffice, are tools, nothing more. There's a human at the keyboard who is responsible. Sure, most of the output of an LLM isn't going to be original enough to be copyrightable, but that's quite different from saying *all* output from LLMs is not copyrightable.
As with legal things in general, it depends.
Posted May 13, 2024 13:54 UTC (Mon)
by mb (subscriber, #50428)
[Link] (1 responses)
> Therefore the output is the creation of the human that triggered the prompt.

Ok, so if I enter wget into my shell prompt to download some copyrighted music, it makes me the creator?
Posted May 13, 2024 14:18 UTC (Mon)
by farnz (subscriber, #17727)
[Link]
You are the creator of that copy, and in as far as there is anything copyrightable in creating that copy, you own that copyright.
However, that copy is (in most cases) either an exact copy of an existing work, or a derived work of an existing work; if it's an exact copy, then there is nothing copyrightable in the creation of the copy, so you own nothing.
If it's a derived work, then you own copyright in the final work thanks to the creative expression you put in to create the copy, but doing things with that work infringes the copyright in the original work unless you have appropriate permission from the copyright holder on the original work, or a suitable exception in copyright law.
Posted May 13, 2024 21:51 UTC (Mon)
by mirabilos (subscriber, #84359)
[Link] (1 responses)
> LLMs are prompted, they don't produce output out of thin air. Therefore the output is the creation of the human that triggered the prompt.
This is ridiculous. The “prompt” is merely a tiny parametrisation of a query that extracts from the huge database of (copyrighted) works.
Do read the links I listed in https://lwn.net/Comments/973578/
> The idea that LLM output cannot be copyrighted is silly.
😹😹😹😹😹😹😹
You’re silly.
This is literally enshrined into copyright law. For example:
> Werke im Sinne dieses Gesetzes sind nur persönliche geistige Schöpfungen.
“Works as defined by this [copyright] law are only personal intellectual creations that pass threshold of originality.” (UrhG §2(2))
Wikipedia explains the “personal” part of this following general jurisprudence:
> Persönliches Schaffen: setzt „ein Handlungsergebnis, das durch den gestaltenden, formprägenden Einfluß eines Menschen geschaffen wurde“ voraus. Maschinelle Produktionen oder von Tieren erzeugte Gegenstände und Darbietungen erfüllen dieses Kriterium nicht. Der Schaffungsprozeß ist Realakt und bedarf nicht der Geschäftsfähigkeit des Schaffenden.
“demands the result of an act from the creative, form-shaping influence of a human: mechanical production, or things or acts produced by animals, do not fulfill this criterion (but legal competence is not necessary).” (<https://de.wikipedia.org/wiki/Urheberrecht_(Deutschland)#Schutzgegenstand_des_Urheberrechts:_Das_Werk>)
So, yes, LLM output cannot be copyrighted (as a new work/edition) in ipso.
And to create an adaption of LLM output, the human doing so must not only invest significant *creativity* (not just effort / sweat of brow!) to pass threshold of originality, but they also must have the permission of the copyright (exploitation rights, to be precise) holders of the original works to do so (and, in droit d’auteur, may not deface, so the authors even if not holders of exploitation rights also have something to say).
This has gone on for a while
Posted May 13, 2024 22:24 UTC (Mon)
by corbet (editor, #1)
[Link]

While this discussion can be seen as on-topic for LWN, I would also point out that we are not copyright lawyers, and that there may not be a lot of value in continuing to go around in circles here. Perhaps it's time to wind it down?
Posted May 13, 2024 13:40 UTC (Mon)
by farnz (subscriber, #17727)
[Link] (1 responses)
That's looking at the other end of it - the question here is not whether an LLM's output can be copyrighted, but whether an LLM's output can infringe someone else's copyright. And the general stance elsewhere in copyright law is that the tooling used is irrelevant to whether or not a given tool output infringed copyright on that tool's inputs. It might, it might not, but that depends on the details of the inputs and outputs (and importantly, not on the tool in question).
Posted May 13, 2024 14:55 UTC (Mon)
by Wol (subscriber, #4433)
[Link]
The output of an LLM cannot be copyrightED. That is, there is no original creative contribution BY THE LLM worthy of copyright.
But the output of an LLM can be COPYRIGHT. No "ed" on the end of it. The mere fact of feeding stuff through an LLM does not automatically cancel any pre-existing copyright.
Again, we get back to the human analogy. There is no restriction on humans CONSUMING copyrighted works. European law explicitly extends that to LLMs CONSUMING copyrighted works.
And just as a human can regurgitate a copyrighted work in its entirety (Mozart is famous for doing this), so can an LLM. And both of these are blatant infringements if you don't have permission - although copyright was in its infancy when Mozart did it so I have no idea of the reality on the ground back then ...
Cheers,
Wol
Posted May 13, 2024 13:52 UTC (Mon)
by mb (subscriber, #50428)
[Link] (8 responses)
Well, that is not obvious at all.
Because the inputs were mental creations.

At which point did the data lose the "mental creation" status traveling through the algorithm?
Will processing the input with 'sed' also remove it, because the output is completely processed by a program, not a human being?
What level of processing do we need for the "mental creation" status to be lost? How many chained 'sed's do we need?
Posted May 13, 2024 21:39 UTC (Mon)
by mirabilos (subscriber, #84359)
[Link] (7 responses)
Even “mechanical” transformation by humans does not create a work (as defined by UrhG, i.e. copyright). It has to have some creativity.
Until then, it’s a transformation of the original work(s) and therefore bound to the (sum of their) terms and conditions on the original work.
If you have a copyrighted thing, you can print it out, scan it, compress it as JPEG, store it into a database… it’s still just a transformation of the original work, and you can retrieve a sufficiently substantial part of the original work from it.
The article where someone reimplemented a (slightly older version of) ChatGPT in a 498-line PostgreSQL query showed exactly, and in an easily understandable way, how this is just lossy compression/decompression: https://explainextended.com/2023/12/31/happy-new-year-15/
There are now feasible attacks obtaining "training data" from prod models at large scale, e.g.: https://not-just-memorization.github.io/extracting-training-data-from-chatgpt.html
This is sufficient to prove that these “models” are just databases with lossily compressed, but easily enough accessible, copies of the original, possibly (probably!) copyrighted, works.
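As a toy illustration of that "lossy database" view (an editorial sketch, far simpler than a real LLM, and not anything mirabilos posted): even a word-level n-gram model, about the simplest possible "language model", stores its training text in a form that a distinctive prompt retrieves nearly verbatim:

    # Toy n-gram "language model": training just files continuations away,
    # and a sufficiently distinctive prompt plays the training text back.
    import random
    from collections import defaultdict

    def train(words, n=3):
        model = defaultdict(list)
        for i in range(len(words) - n):
            model[tuple(words[i:i + n])].append(words[i + n])
        return model

    def generate(model, seed, n=3, max_words=50):
        out = list(seed)
        for _ in range(max_words):
            continuations = model.get(tuple(out[-n:]))
            if not continuations:
                break
            out.append(random.choice(continuations))
        return " ".join(out)

    text = "call me ishmael some years ago never mind how long precisely".split()
    model = train(text)
    # Every trigram in this training text occurs exactly once, so each step
    # has a single continuation and "generation" is verbatim recall:
    print(generate(model, seed=["call", "me", "ishmael"]))

A production LLM mixes and compresses its inputs far more aggressively than this, which is precisely what the extraction attacks linked above probe; whether that still amounts to a "database of copies" in the legal sense is the question disputed in this subthread.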
Another thing I would like to point out is the relative weight. For a work which I offer to the public under a permissive licence, attribution is basically the only remuneration I can ever get. This means that failure to attribute has a much higher weight than for differently licenced or unlicenced stuff.
Posted May 13, 2024 21:55 UTC (Mon)
by bluca (subscriber, #118303)
[Link] (6 responses)
While the AI bandwagon greatly exaggerates the capability of LLMs, let's not fall into the opposite trap. ChatGPT et al. are toys; real applications like Copilot are very much not "just databases". A database is not going to provide you with autocomplete based on the current, local context open in your IDE. A database is not going to provide an accurate summary of the meeting that just finished, with action items and all that.
Posted May 13, 2024 22:20 UTC (Mon)
by mirabilos (subscriber, #84359)
[Link] (5 responses)
Posted May 13, 2024 22:44 UTC (Mon)
by bluca (subscriber, #118303)
[Link] (4 responses)
Posted May 13, 2024 23:14 UTC (Mon)
by mirabilos (subscriber, #84359)
[Link] (3 responses)
Consider a database in which things are stored lossily compressed and interleaved (yet still retrievable).
Posted May 13, 2024 23:58 UTC (Mon)
by bluca (subscriber, #118303)
[Link] (2 responses)
Posted May 14, 2024 0:28 UTC (Tue)
by mirabilos (subscriber, #84359)
[Link] (1 responses)
I don’t have the nerve to even try and communicate with systemd apologists who don’t even do the most basic research themselves WHEN POINTED TO IT M̲U̲L̲T̲I̲P̲L̲E̲ ̲T̲I̲M̲E̲S̲.
Second try
Posted May 14, 2024 1:26 UTC (Tue)
by corbet (editor, #1)
[Link]

OK, I'll state it more clearly: it's time to bring this thread to a halt, it's not getting anywhere. That is: all participants should stop, not just the one I'm responding to here.
Thank you.
Posted May 14, 2024 2:55 UTC (Tue)
by rgmoore (✭ supporter ✭, #75)
[Link] (1 responses)
> But the claim that AI is some kind of magic Copyright remover comes up over and over again.

A lot of people saying it doesn't mean it's true. I think the "magic copyright eraser" argument comes from misapplying the combination of the pro- and anti-AI arguments in a way that isn't supported by the law. The strong anti-AI position is that AI is inherently a derivative work of every piece of training material, and that therefore all the output of the AI is likewise a derivative work. The strong pro-AI position is that creating an AI is inherently transformative, so an AI is not a derivative work (or at least not an infringing use) of the material used to train it. The mistake is applying the anti-AI "everything is a derivative work" logic to the pro-AI position that the AI is not a derivative work and concluding that none of the output of the AI would be infringing.
This sounds reasonable but is absolutely wrong. A human being is not a derivative work of everything used to train them, but humans are still capable of copyright infringement. What matters is whether our output meets the established criteria for infringement, e.g. the abstraction-filtration-comparison test farnz mentions above. The same thing would apply to the output of an AI. Even if the AI itself is not infringing, its output can be.
Basically, the courts won't accept "but I got it from an AI" as an argument against copyright infringement. If anything, saying you got it from an AI would probably hurt you. You can try to defend yourself against charges of infringement by showing you never saw the original and thus must have created it independently. That's always challenging, but it will be much harder with an AI, given just how much material they've trained on. The chances are very good the AI has seen whatever you're accused of infringing, so the independent creation defense is no good.
Posted May 14, 2024 9:16 UTC (Tue)
by farnz (subscriber, #17727)
[Link]
Note, as emphasis on your point, that the independent creation defence requires the defendant to show that the independent creator did not have access to the work they are alleged to have copied. The assumption is that you had access, up until you show you didn't.