LWN: Comments on "AI from a legal perspective" https://lwn.net/Articles/945504/ This is a special feed containing comments posted to the individual LWN article titled "AI from a legal perspective". en-us Sat, 04 Oct 2025 13:23:11 +0000 Sat, 04 Oct 2025 13:23:11 +0000 https://www.rssboard.org/rss-specification lwn@lwn.net AI from a legal perspective https://lwn.net/Articles/963096/ https://lwn.net/Articles/963096/ nix <div class="FormattedComment"> <span class="QuotedText">&gt; I believe we recently discovered plants make use of certain bits of quantum physics, which helps with efficiency of photosynthesis.</span><br> <p> If you mean thirty or forty years ago, then yes, recently. (For that matter, so do we: the electrons in the electron-transport chain in every mitochondrial membrane complex in every cell in our bodies quantum-tunnel along the chain. Without that endless quantum tunnelling dance constantly pumping protons to recharge our ATP, we'd all be dead in a minute or two. Of course the same is true of plants: they just also have chloroplasts doing similar things.)<br> <p> </div> Wed, 21 Feb 2024 21:40:47 +0000 AI from a legal perspective https://lwn.net/Articles/946624/ https://lwn.net/Articles/946624/ kleptog <div class="FormattedComment"> <span class="QuotedText">&gt; But if training a (lossy) compression algorithm, distributing a compressed representation of a copyrighted work is still considered distribution for the sake copyright... Likewise for object files.</span><br> <p> AIUI the main test is if the "compression" still produces something that affects the market of the original work. Even a really highly compressed/minified video would be recognisable as the original work. Just changing the format obviously produces something that substitutes for the original work. 
Object files can substitute for the original work because you can build a working binary from them just like you could from the original source.<br> <p> A parody of a work doesn't provide an alternative to the original work.<br> <p> No-one is using LLMs as an alternative to buying a specific book. And given the model size is like 0.1% of the sample data size, you'd almost figure that any resemblance to a real world text is purely coincidental.<br> </div> Thu, 05 Oct 2023 13:47:58 +0000 AI from a legal perspective https://lwn.net/Articles/946600/ https://lwn.net/Articles/946600/ pbonzini <div class="FormattedComment"> <span class="QuotedText">&gt; For LLMs, people are taking a lot of data that isn't theirs and running it through an algorithm, it's not clear this meets any standard for copyrightability, not even under database rights. </span><br> <p> But if training a (lossy) compression algorithm, distributing a compressed representation of a copyrighted work is still considered distribution for the sake of copyright... Likewise for object files. Whether it's xz or gcc doing the translation, the output is still copyrighted and a derivative of the uncompressed work or the source code, respectively. What a mess.<br> <p> Thanks for the discussion. I am even more aware now of how many things I don't know in this area!<br> </div> Thu, 05 Oct 2023 09:04:19 +0000 AI from a legal perspective https://lwn.net/Articles/946443/ https://lwn.net/Articles/946443/ kleptog <div class="FormattedComment"> <span class="QuotedText">&gt; If so, what IP law is being relied on to require a license to use the LLMs? Most of them use "fauxpen source" terms that forbid commercial use or allow it only until a certain number of users. If LLMs have no copyright protection, that would not be valid.</span><br> <p> Good question. It's not obvious to me that LLMs are copyrightable in the normal sense, at least not everywhere.
In the late '90s there were lawsuits about whether the phone book could be copyrightable given it was just a collection of facts. The status of OSM was similarly murky; it was only the creation of database rights that clarified the situation here. For LLMs, people are taking a lot of data that isn't theirs and running it through an algorithm, it's not clear this meets any standard for copyrightability, not even under database rights. The whole "sweat of the brow" idea is something in America, but not elsewhere AFAICT. And even then it's the computers doing the sweating.<br> <p> Just placing a copyright notice on something doesn't make it copyrightable. It's not even obvious to me we would want them to be copyrightable. It doesn't seem necessary for the "progress of the arts". And in any case, if we wanted something like that, I think something more akin to patents would be better: full disclosure about how it was made in return for protection of the result for a limited period.<br> <p> For businesses like OpenAI I think they handle it under standard contract law and treat it as a trade secret. Basically, they provide the raw models under various conditions and restrictions on usage and who they can be passed on to, with various penalty clauses. Copyright protection is only needed if you want to publish something to the public while retaining some control. Since most businesses wanting the models are using them to provide services based on them, this is probably sufficient.<br> </div> Thu, 05 Oct 2023 08:10:41 +0000 AI from a legal perspective https://lwn.net/Articles/946593/ https://lwn.net/Articles/946593/ LtWorf <div class="FormattedComment"> The .mkv file of the latest Marvel film is just a number.
But the police might have something to say if I decide to share this number on the internet for my fellow mathematics enthusiasts…<br> </div> Thu, 05 Oct 2023 07:08:08 +0000 AI from a legal perspective https://lwn.net/Articles/946418/ https://lwn.net/Articles/946418/ pbonzini <div class="FormattedComment"> <span class="QuotedText">&gt; In particular, a derivative work needs to be a copyrightable thing itself, and I don't think anyone is arguing that there's sufficient creativity in LLMs to warrant copyright protection.</span><br> <p> If so, what IP law is being relied on to require a license to use the LLMs? Most of them use "fauxpen source" terms that forbid commercial use or allow it only until a certain number of users. If LLMs have no copyright protection, that would not be valid.<br> <p> Now it's a different story if you do the training yourself, because in that case there's no distribution of the weights. There are many similarities with copyleft and the SaaS loopholes, and it's a bit disappointing that the copyright arguments were dismissed in the talk. They're extremely complex in reality (see also the "compression algorithms" argument mentioned elsewhere) and they alone can keep lawyers employed for a long time...<br> </div> Wed, 04 Oct 2023 04:55:14 +0000 AI from a legal perspective https://lwn.net/Articles/946405/ https://lwn.net/Articles/946405/ kleptog <div class="FormattedComment"> I guess for a definitive answer you'd need to nail down what is meant by "being in a training set". Suppose you had documents which discussed the poem in depth and in doing so cited each line before discussing it. Does that count as the poem "being in the training set"?<br> <p> When I meant that you could prove something was in the training set, I was considering being able to point to any particular document and prove it for that. That is definitely not possible. At a higher level, you could probably make a case for poems like this. 
But even then, that doesn't say anything about whether that's allowed or not. The EU Copyright Directive allows authors to opt out, but it's difficult to see how that could work in practice. You can put a robots.txt file on your site, but that doesn't stop some web crawler mirroring the site and then that getting included.<br> <p> In your examples, those poems almost certainly were in the training set in some form, most likely via Wikipedia or some lyrics site. That doesn't make the model a derivative work though. In particular, a derivative work needs to be a copyrightable thing itself, and I don't think anyone is arguing that there's sufficient creativity in LLMs to warrant copyright protection.<br> </div> Tue, 03 Oct 2023 22:26:02 +0000 AI from a legal perspective https://lwn.net/Articles/946388/ https://lwn.net/Articles/946388/ pbonzini <div class="FormattedComment"> This is why I used extremely famous (*), short but nontrivial poems in my experiments with Italian literature. A poem's choice of words is polished and unconventional enough that there's no other explanation than "the text was present (many times, enough to reinforce the learning) in the training corpus".<br> <p> (*) the non copyrighted one is probably *the* best-known poem in Italian literature. The copyrighted one must be in the top 5 of 20th century poems, and probably the most famous from that author.<br> </div> Tue, 03 Oct 2023 19:40:47 +0000 AI from a legal perspective https://lwn.net/Articles/946353/ https://lwn.net/Articles/946353/ sfeam <div class="FormattedComment"> The observation that a chatbot can be persuaded to emit text matching all or part of a previous work does not by itself prove that work was part of the training set. <br> </div> Tue, 03 Oct 2023 16:09:56 +0000 AI from a legal perspective https://lwn.net/Articles/946316/ https://lwn.net/Articles/946316/ james Shouldn't that be "there is no way to prove something was <b>not</b> part of the training set"?
We've seen cases where people can prove something was part of the training set because the model reproduced it (or a significant chunk thereof). Tue, 03 Oct 2023 13:33:56 +0000 AI from a legal perspective https://lwn.net/Articles/946283/ https://lwn.net/Articles/946283/ kleptog <div class="FormattedComment"> <span class="QuotedText">&gt; Overall, I'm not sure he understood the technical basis of these suits very well, particularly based on the odd way he described the tech aspects. And obviously he failed to understand that a lossy representation of the input is encoded in the model, which is a rather large oversight.</span><br> <p> Maybe because from a legal perspective the tech aspects are not that important. Whether something is a derivative work is not related to tech aspects either, but whether we as a society think it's something the original author should have a say about.<br> <p> And in the end this is not going to be decided on tech aspects, but on whether we as a society agree that allowing authors to prevent this is good for society as a whole. The EU Copyright Directive has already explicitly answered this to a great extent. For research &amp; training purposes anything publicly accessible is fair game. For all other purposes authors can opt-out.<br> <p> The only real question is if the people doing the model training had legitimate access to the copyrighted works. If they download loads of e-books from the Pirate Bay, then that's a definite no-go. On the other hand, there is no way to prove if something was part of the training set.<br> </div> Tue, 03 Oct 2023 12:59:10 +0000 AI from a legal perspective https://lwn.net/Articles/946098/ https://lwn.net/Articles/946098/ ssmith32 <div class="FormattedComment"> I think with music, the fragment of a song that needs to be duplicated is small enough to make this a possibility. Also, as he mentioned, code gets duplicated more easily.<br> <p> Also, he's paraphrasing, and I would dare say he's doing it badly. 
I believe some of the legal arguments don't say "it must be in there"; they say "it's a compression algorithm". Autoencoders do, in fact, create compression functions. Now, they are _lossy_ compression algorithms, but they are indeed compression algorithms. Which isn't as hand-wavy as "it must be in there, somewhere". It is, very much, in there. The question is how much loss occurred, and does that matter to the courts.<br> <p> Overall, I'm not sure he understood the technical basis of these suits very well, particularly based on the odd way he described the tech aspects. And obviously he failed to understand that a lossy representation of the input is encoded in the model, which is a rather large oversight.<br> </div> Tue, 03 Oct 2023 00:11:55 +0000 AI from a legal perspective https://lwn.net/Articles/946126/ https://lwn.net/Articles/946126/ pbonzini <div class="FormattedComment"> But you can't reproduce and distribute a brain. You can reproduce and distribute weights, and those weights *absolutely* wouldn't be able to produce the same output if it weren't for the inclusion of certain works in the training.<br> <p> That's why IMO there is a case to be made for the weights being a derivative of at least a subset of the works included in the training, some of which are copyrighted.<br> </div> Mon, 02 Oct 2023 13:02:05 +0000 AI from a legal perspective https://lwn.net/Articles/946119/ https://lwn.net/Articles/946119/ farnz <p>A human's memory also provides the means to reproduce a sequence of ~100 words, complete with LLM-style "hallucinations" instead of perfectly accurate recall, and yet there's nothing about the human that makes their brain's weights a derivative of the texts it's read. <p>What can be a derivative, however, is what the human does with that memory - if I deliberately regurgitate the text of Seamus Heaney's "In the Attic" as part of an advertising campaign, I'm infringing that copyright.
If I use it to inform my own, lesser, poetry, and to improve my style, I'm not. LLMs could easily be held to be in the same position - the LLM itself does not infringe (since it has no literal copy, just like my brain), but its output can infringe, and if you use the output of an LLM, you're in the same position as you would be if you asked me to give you a poem. In particular, you can get in serious trouble since I could change the words of Seamus Heaney's poetry slightly to make it hard to catch the plagiarism, but you'd still be infringing. Mon, 02 Oct 2023 12:31:55 +0000 AI from a legal perspective https://lwn.net/Articles/946113/ https://lwn.net/Articles/946113/ pbonzini <div class="FormattedComment"> A dictionary doesn't provide the means to reproduce (even if only statistically) a sequence of ~100 words.<br> <p> It would be interesting to know the probabilities. If the probability of the correct choice is orders of magnitude above the next, it's pretty hard to say it's just statistical completion as opposed to recollection of a complete text. The LLM wouldn't be able to recall the text if the poet hadn't written it in the first place, and therefore the weights are a derivative of the poem.<br> </div> Mon, 02 Oct 2023 11:26:46 +0000 AI from a legal perspective https://lwn.net/Articles/946109/ https://lwn.net/Articles/946109/ ras <div class="FormattedComment"> The figure I see around the 'net for brain power consumption is 12 watts.<br> <p> From memory, Tesla's HW3 neural engine they use for "full self driving" is 35 watts, but each car has two for redundancy (not speed) so 70 watts in total.<br> <p> 35 watts isn't that far from 12, and it's processing the input from 8 cameras.<br> </div> Mon, 02 Oct 2023 10:36:08 +0000 AI from a legal perspective https://lwn.net/Articles/946091/ https://lwn.net/Articles/946091/ Ninjasaid13 <div class="FormattedComment"> By that same logic, a dictionary must be infringing on every book then.
Since it contains the words that were used in the copyrighted works.<br> <p> It only contains the statistical likelihood of which words come next in response to the prompt. That's why it hallucinates and can't reproduce an entire book.<br> </div> Mon, 02 Oct 2023 02:39:01 +0000 AI from a legal perspective https://lwn.net/Articles/945910/ https://lwn.net/Articles/945910/ farnz <blockquote> Just like you could probably find many people who with a short prompt could reproduce that poem (and many others) perfectly. Does that mean they're infringing copyright just by reciting the poem when asked? </blockquote> <p>Quite possibly, yes. They'd be performing the poem without permission, which can be considered (depending on circumstances of the recital) a creation of a derivative work, and thus need a licence from the copyright holder. I would expect the same to apply to ChatGPT and similar; its output may or may not be "free and clear" of copyright infringement, just as would be true of a human. Fri, 29 Sep 2023 10:35:20 +0000 AI from a legal perspective https://lwn.net/Articles/945904/ https://lwn.net/Articles/945904/ pbonzini <div class="FormattedComment"> The argument is that the weights are derivatives of the poem. It's not about the prompt and not about reproducing the poem, it's about the poem being stored in the weights so that distributing the weights infringes copyright.<br> <p> The prompt is only needed to show that indeed the poem is stored in the weights. This is true even if you get ChatGPT to recite it one line at a time, but in the second case it produces it all at once.<br> </div> Fri, 29 Sep 2023 05:25:11 +0000 AI from a legal perspective https://lwn.net/Articles/945885/ https://lwn.net/Articles/945885/ kleptog <div class="FormattedComment"> What exactly is the argument here? That there exists an input to ChatGPT such that the output contains a copyrighted poem?
There exists an input of 560 characters that when sent to cat reproduces the poem exactly. Only 363 bytes are needed if I use gunzip as the program. Does that mean cat and gunzip are copyright-infringing programs?<br> <p> No, this merely means that these programs are tools and their status w.r.t. copyright is neutral. It's up to the user to use them responsibly. Unless you want to argue ChatGPT is special in some way?<br> <p> Just like you could probably find many people who with a short prompt could reproduce that poem (and many others) perfectly. Does that mean they're infringing copyright just by reciting the poem when asked?<br> </div> Thu, 28 Sep 2023 21:11:57 +0000 AI from a legal perspective https://lwn.net/Articles/945884/ https://lwn.net/Articles/945884/ opsec <div class="FormattedComment"> No, the monkey did not receive copyright.<br> </div> Thu, 28 Sep 2023 20:38:11 +0000 AI from a legal perspective https://lwn.net/Articles/945764/ https://lwn.net/Articles/945764/ farnz <p>I would say that by default, it's the user who prompted it, but contract terms between the user, the builder/trainer, and the owner of the hardware the model runs on can change this. I would therefore also say that if the model is used to infringe copyright, the user is liable for that infringement if it reaches commercially significant levels. Thu, 28 Sep 2023 13:03:59 +0000 AI from a legal perspective https://lwn.net/Articles/945763/ https://lwn.net/Articles/945763/ somlo <div class="FormattedComment"> Agreed, with a small nit to pick:<br> <p> <span class="QuotedText">&gt; and may well be a sufficiently creative and expressive work to meet the bar for copyright protection</span><br> <p> Who gets to own the copyright for such creative and expressive AI-generated output?
The user who "prompted" it, the builder / trainer of the AI, someone/something else?<br> </div> Thu, 28 Sep 2023 12:35:03 +0000 AI from a legal perspective https://lwn.net/Articles/945762/ https://lwn.net/Articles/945762/ amacater <div class="FormattedComment"> If we have a production line of Turing machines, each with infinite tape - at what point do they become obsolete such that the line can be switched off? How do we determine this? :)<br> </div> Thu, 28 Sep 2023 12:07:13 +0000 AI from a legal perspective https://lwn.net/Articles/945761/ https://lwn.net/Articles/945761/ Wol <div class="FormattedComment"> <span class="QuotedText">&gt; Indeed, when it comes to processing video or audio, or walking/talking humans do considerably better.</span><br> <p> Not true at all. BUT.<br> <p> Like I said, AI is charging down the wrong rabbit hole. I think this article dated from the 1980s, but somebody designed and built a robot crab, that was quite happy scuttling about in the surf zone.<br> <p> What he did NOT do was try and control everything from a big central CPU (given that all he had was something like a 6502 or Z80, that's no surprise!) I can't remember much about it (not surprising, that long ago), but I think it was like each leg had its own controller, another controller changed the angle the body presented to the water, and they all interacted together.<br> <p> It's like self-driving cars. Everyone who has no experience says you should run it from a control room. Everyone with experience knows that that's a disaster waiting to happen. 
The problem is that the politicians and accountants are the people who control what actually happens ...<br> <p> Cheers,<br> Wol<br> </div> Thu, 28 Sep 2023 12:07:10 +0000 AI from a legal perspective https://lwn.net/Articles/945759/ https://lwn.net/Articles/945759/ ms <div class="FormattedComment"> Thank you; that makes an enormous amount of sense to me.<br> </div> Thu, 28 Sep 2023 11:32:51 +0000 AI from a legal perspective https://lwn.net/Articles/945756/ https://lwn.net/Articles/945756/ pbonzini The commentary (which is crap anyway) is only there to trick the AI into spitting them out because otherwise it complains that it is protected by copyright. <p>So here is the next step. I convinced the AI that (correctly) there was no copyright protection on a text. It then <a href="https://chat.openai.com/share/f6dbfb78-7c55-4d89-a92e-f4da2342512d">will have no problems reproducing that text</a> and it becomes much better at not hallucinating. <p>And the copyright unlock will then extend to other texts as well! In the above linked chat, it will also reproduce the text of the poet who died in 1981. It includes a commentary that is so wrong that you can't even argue it is fair use (the poem talks about a dead woman, ChatGPT thinks the poet is walking along the sea with her), but you _can_ probably argue that the weights contain a copyrighted text and are a derivative of it. <p>Of course it's not always going to work. It completely hallucinated the first copyrighted poem I tried; it only got one and a half sentences right from Fredric Brown's short story "Sentry". Maybe poems are easier because they use "weird" language with fewer possible continuations. But still, the argument is not completely without value. Thu, 28 Sep 2023 11:28:40 +0000 AI from a legal perspective https://lwn.net/Articles/945757/ https://lwn.net/Articles/945757/ kleptog <div class="FormattedComment"> Indeed, when it comes to processing video or audio, or walking/talking, humans do considerably better.
I imagine any eventual general AI will have multiple models doing different specialised tasks. It feels like the id/superego distinction: we've now built an id which can do all sorts of stuff automatically but doesn't really think ahead or consider its actions. The next step would be to couple this with some kind of "superego" which is much slower but monitors the id, trains for new situations and corrects the output when necessary.<br> <p> I don't think the current model of LLM training is really suitable for training this "I have a model of the real world and if I say/do X then Y may be the possible result and that is good/bad". This pretty much requires actual interaction with a (virtual) world with actual consequences for bad actions. Even in humans this is a 24/7 operation spanning decades with intensive training.<br> <p> Though I guess bots built for interactive games must do something in this direction.<br> </div> Thu, 28 Sep 2023 11:18:35 +0000 AI from a legal perspective https://lwn.net/Articles/945754/ https://lwn.net/Articles/945754/ kleptog <div class="FormattedComment"> <span class="QuotedText">&gt; Let’s assume that the models themselves are not infringing, but the output of the models can. And the users of those models are given no indication that would help them identify if the output is infringing or not.</span><br> <p> This isn't any different from the internet now. If you download a file from somewhere, it is up to you to "know" whether it's copyrighted or not. Much of the law around this is basically trying to nail people on the basis of "they should have known". For example, if you're downloading a file with the name of a well-known movie via a torrent, you don't get to claim "I didn't know". The "my grandson installed this app for me so I could watch movies but I didn't know" defense has surely been tried.<br> <p> <span class="QuotedText">&gt; What can a judge do ? Give model sellers a free pass ?
And then have the force of the law fall on users, who assume that since sellers are given a free pass, that means their own use of model outputs is legal too </span><br> <p> Well, I have not yet seen any evidence that ML models will spontaneously reproduce copyrighted works without explicit prompting, so I don't see this being a problem in practice. And even with prompting I expect them to do as well as any human: the titles of the chapters, names of characters and the general plot, but not the text verbatim. Otherwise we'll have invented the ultimate text compression algorithm, and that's worth more than any ML model. And I'm sure that the ML model can helpfully list any closely related works so you can check for yourself.<br> </div> Thu, 28 Sep 2023 10:53:30 +0000 AI from a legal perspective https://lwn.net/Articles/945755/ https://lwn.net/Articles/945755/ fenncruz <div class="FormattedComment"> On the other hand, my processing power is definitely &lt; 1 FLOPS, and requires additional auxiliary storage units and I/O (pen &amp; paper) when doing complicated maths. <br> <p> It just depends on what task you are doing as to which is better.<br> </div> Thu, 28 Sep 2023 10:40:34 +0000 AI from a legal perspective https://lwn.net/Articles/945752/ https://lwn.net/Articles/945752/ farnz <p>The idea that "The weights are simply a pile of numbers; it is not creative, it is not expressive. They are just the result of a mechanical process." does not indicate that the law considers the machine's output as incapable of being creative, expressive, or emotional. <p>Instead, what they're saying is that the machine itself, being just a bunch of weights, is not protected by copyright law. In human terms, this is the law saying that copyright protection does not apply to your brain and body, even though the works you produce can be protected by copyright.
<p>Personally, I think the ideal legal outcome is that an AI gets treated the same as a "human in a box"; if, instead of an AI, you had a human being that took in the inputs you give the AI, and gave you the outputs, what would the legal position be? This means that the AI itself is not infringing on the training set, or on the inputs, but that the output of the AI may well infringe copyright, and may well be a sufficiently creative and expressive work to meet the bar for copyright protection. Thu, 28 Sep 2023 10:37:53 +0000 AI from a legal perspective https://lwn.net/Articles/945750/ https://lwn.net/Articles/945750/ ms <div class="FormattedComment"> Well, sure, human brains are pretty astonishing from an efficiency and also pure power pov. I believe we recently discovered plants make use of certain bits of quantum physics, which helps with efficiency of photosynthesis. I wouldn't be at all surprised if we eventually figure out that the evolution of brains also discovered similar possibilities and advantages and so forth. But fundamentally, can a brain solve problems that a Turing Machine cannot? Anyway, I enjoy these sorts of thought experiments, but I'm aware this is getting somewhat off-topic now.<br> </div> Thu, 28 Sep 2023 09:41:12 +0000 AI from a legal perspective https://lwn.net/Articles/945749/ https://lwn.net/Articles/945749/ gfernandes <div class="FormattedComment"> I also say that the 3-10 W power use of the human brain, powered by sugars processed out of food, is way, WAY, above the capability of any AI.<br> <p> So I'd say that the superiority of the human brain is well deserved.
<br> </div> Thu, 28 Sep 2023 08:07:24 +0000 AI from a legal perspective https://lwn.net/Articles/945748/ https://lwn.net/Articles/945748/ Wol <div class="FormattedComment"> <span class="QuotedText">&gt; My point, which perhaps I didn't make very well, was that if the law considers the concepts of "creativity", "expressivity", "emotion" etc as being incapable of being achieved by machine, then it *is* elevating human brains to something beyond the capability of a machine.</span><br> <p> Well, based on current evidence, I would say it is clear that "creativity", "expressivity" and "emotion" *are* incapable of being achieved by machine with our current level of ability / understanding.<br> <p> And without a paradigm shift in our understanding of AI, that's not going to change. At present we're so busy chasing down the wrong rabbit hole, that nobody with the necessary power to change it can see it's the wrong rabbit hole.<br> <p> Cheers,<br> Wol<br> </div> Thu, 28 Sep 2023 08:03:02 +0000 AI from a legal perspective https://lwn.net/Articles/945747/ https://lwn.net/Articles/945747/ gfernandes <div class="FormattedComment"> Was the monkey (or was it an ape) awarded copyright for taking a selfie?<br> <p> If not, then why is AI any different? <br> </div> Thu, 28 Sep 2023 07:56:24 +0000 AI from a legal perspective https://lwn.net/Articles/945746/ https://lwn.net/Articles/945746/ gfernandes <div class="FormattedComment"> The conversation shows commentary on specific lines, repeating those lines as reference for the commentary.<br> <p> This is fair use.<br> <p> A human might memorise, or refer to, a literary work like that poem and produce a commentary on it. This would not be "copying", or "distributing" the original work as is.<br> <p> So I don't really see how the argument is interesting. 
<br> </div> Thu, 28 Sep 2023 07:53:15 +0000 AI from a legal perspective https://lwn.net/Articles/945745/ https://lwn.net/Articles/945745/ ms <div class="FormattedComment"> <span class="QuotedText">&gt; ...is actually comparable to the law's approach to human learning...</span><br> <p> Very much agree.<br> <p> <span class="QuotedText">&gt; The law doesn't need to say anything about brains.</span><br> <p> My point, which perhaps I didn't make very well, was that if the law considers the concepts of "creativity", "expressivity", "emotion" etc as being incapable of being achieved by machine, then it *is* elevating human brains to something beyond the capability of a machine.<br> </div> Thu, 28 Sep 2023 07:16:25 +0000 AI from a legal perspective https://lwn.net/Articles/945742/ https://lwn.net/Articles/945742/ alonz The law doesn't need to say anything about brains. Furthermore, the idea that a model's "learning" (i.e. the weights) is itself <em>not</em> derived from the copyrighted material it has observed is actually comparable to the law's approach to human learning: you are allowed to learn anything you wish, you're just forbidden from using this to create an infringing work. Thu, 28 Sep 2023 04:45:58 +0000 AI from a legal perspective https://lwn.net/Articles/945735/ https://lwn.net/Articles/945735/ Wol <div class="FormattedComment"> <span class="QuotedText">&gt; &gt; The weights are "simply a pile of numbers; it is not creative, it is not expressive". They are just the result of a mechanical process, he said.</span><br> <p> <span class="QuotedText">&gt; I am more than a little sceptical that any animal brain is meaningfully different. More specialisation, and more checks and balances, sure. But I'm quite convinced it's no less a machine.</span><br> <p> Except you are missing something VERY important. You're right that we humans tend to elevate our brains above anything else, but we also tend to demote other brains well below their ability. 
Which seriously skews our ability to be objective. And an AI is objectively very different from any high-functioning brain. An AI has no concept of truth, an AI has no concept of danger, etc etc.<br> <p> I didn't know this 'til recently, but human brains have a dedicated face-recognition unit. That is dedicated not only to recognising that something is a face, but also whose face it is. Mine (like many others) is mildly defective - I have difficulty recognising people by their face. By their voice, on the other hand ...<br> <p> Any MEDIC who studies the brain will tell you that MOST of it consists of Special Purpose Processing Units. The General Purpose part of our brain simply deals with signals like "that is a face. I recognise it". This is quite obvious when you look at how the ear or the eye function. You can quite clearly see the brain activity as eg the first-line visual system decides what it's looking at and then diverts anything it recognises to those special purpose units.<br> <p> Which is why I gather Tesla now have special purpose people-recognition units in their guidance systems. The trouble is, they almost certainly don't know what other special purpose units they need, and they probably don't want to implement special purpose because they believe the propaganda that a general purpose unit can do it just as well. Given (as I say) the brain is mostly special purpose units, I'd be amazed if that's true.<br> <p> Cheers,<br> Wol<br> </div> Wed, 27 Sep 2023 22:17:21 +0000 AI from a legal perspective https://lwn.net/Articles/945732/ https://lwn.net/Articles/945732/ rgmoore <blockquote>The US Congress is not interested in any topic unless it enables them to shout at one another and score political points.</blockquote> <p>I realize this is getting a bit far afield, but this just isn't true. Congress manages to get a surprising amount done on uncontroversial topics; the shouting just gets all the media attention because it's better entertainment.
Copyright is an area that doesn't attract that kind of heat, so Congress might actually be able to pass some legislation there. It might be better to discuss it internationally first, so the US rules wind up being similar to the rules elsewhere in the world. We've done that for other areas of copyright, and AI should be no exception. Wed, 27 Sep 2023 19:16:07 +0000